[jira] [Commented] (LUCENE-8374) Reduce reads for sparse DocValues

2018-11-05 Thread Toke Eskildsen (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8374?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16676295#comment-16676295
 ] 

Toke Eskildsen commented on LUCENE-8374:


Thank you, [~dsmiley] & [~thetaphi]. The jump-tables are field-oriented, so the 
amount of output is currently {{#segments * #DocValue_fields * #reopens = 
verbose}}. Much too fine-grained, judging from what Uwe describes. I'll remove it all.

Same goes for the options for enabling & disabling the caches. Should it be 
relevant at a later point, that part is quite easy to re-introduce.

> Reduce reads for sparse DocValues
> -
>
> Key: LUCENE-8374
> URL: https://issues.apache.org/jira/browse/LUCENE-8374
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: core/codecs
>Affects Versions: 7.5, master (8.0)
>Reporter: Toke Eskildsen
>Priority: Major
>  Labels: performance
> Fix For: 7.6
>
> Attachments: LUCENE-8374.patch, LUCENE-8374.patch, LUCENE-8374.patch, 
> LUCENE-8374.patch, LUCENE-8374.patch, LUCENE-8374_branch_7_3.patch, 
> LUCENE-8374_branch_7_3.patch.20181005, LUCENE-8374_branch_7_4.patch, 
> LUCENE-8374_branch_7_5.patch, entire_index_logs.txt, 
> image-2018-10-24-07-30-06-663.png, image-2018-10-24-07-30-56-962.png, 
> single_vehicle_logs.txt, 
> start-2018-10-24-1_snapshot___Users_tim_Snapshots__-_YourKit_Java_Profiler_2017_02-b75_-_64-bit.png,
>  
> start-2018-10-24_snapshot___Users_tim_Snapshots__-_YourKit_Java_Profiler_2017_02-b75_-_64-bit.png
>
>
> The {{Lucene70DocValuesProducer}} has the internal classes 
> {{SparseNumericDocValues}} and {{BaseSortedSetDocValues}} (sparse code path), 
> which in turn use {{IndexedDISI}} to handle the docID -> value-ordinal lookup. 
> The value-ordinal is the index of the docID, assuming an abstract, tightly 
> packed, monotonically increasing list of docIDs: if the docIDs with 
> corresponding values are {{[0, 4, 1432]}}, their value-ordinals will be {{[0, 
> 1, 2]}}.
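The docID -> value-ordinal mapping described above can be sketched as a binary search over the sorted list of docIDs that carry values. This is illustrative only: the class and method names are hypothetical, and the real IndexedDISI resolves ordinals by streaming over on-disk blocks rather than searching an in-memory array.

```java
import java.util.Arrays;

public class OrdinalSketch {
    // The value-ordinal of a docID is its position in the sorted list of
    // docIDs that have a value; docIDs without a value have no ordinal.
    static int ordinal(int[] docsWithValue, int docID) {
        int idx = Arrays.binarySearch(docsWithValue, docID);
        return idx >= 0 ? idx : -1; // -1 means "no value for this docID"
    }

    public static void main(String[] args) {
        int[] docs = {0, 4, 1432};
        if (ordinal(docs, 0) != 0) throw new AssertionError();
        if (ordinal(docs, 4) != 1) throw new AssertionError();
        if (ordinal(docs, 1432) != 2) throw new AssertionError();
        if (ordinal(docs, 7) != -1) throw new AssertionError();
        System.out.println("ok");
    }
}
```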
> h2. Outer blocks
> The lookup structure of {{IndexedDISI}} consists of blocks of 2^16 values 
> (65536), where each block can be either {{ALL}}, {{DENSE}} (2^12 to 2^16 
> values) or {{SPARSE}} (< 2^12 values ~= 6%). Consequently, blocks vary quite 
> a lot in size and ordinal-resolving strategy.
> When a sparse Numeric DocValue is needed, the code first locates the block 
> containing the wanted docID flag. It does so by iterating blocks one-by-one 
> until it reaches the needed one, where each iteration requires a lookup in 
> the underlying {{IndexSlice}}. For a common memory-mapped index, this 
> translates to either a cached request or a read operation. If a segment has 
> 6M documents, the worst case is 91 lookups. In our web archive, our segments 
> have ~300M values: a worst case of 4577 lookups!
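The worst-case numbers above follow directly from the 2^16 block size: linear iteration costs one slice lookup per complete block that precedes the target docID. A quick back-of-the-envelope check (hypothetical helper, not Lucene code):

```java
public class BlockSkipCost {
    // Linear block iteration: worst case is one IndexSlice lookup per
    // complete 2^16-doc block preceding the target docID.
    static long worstCaseBlockLookups(long maxDoc) {
        return maxDoc >>> 16; // floor(maxDoc / 65536)
    }

    public static void main(String[] args) {
        // Matches the figures in the issue description.
        if (worstCaseBlockLookups(6_000_000L) != 91) throw new AssertionError();
        if (worstCaseBlockLookups(300_000_000L) != 4577) throw new AssertionError();
        System.out.println("ok");
    }
}
```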
> One obvious solution is to use a lookup-table for blocks: A long[]-array with 
> an entry for each block. For 6M documents, that is < 1KB and would allow for 
> direct jumping (a single lookup) in all instances. Unfortunately this 
> lookup-table cannot be generated upfront when the writing of values is purely 
> streaming. It can be appended to the end of the stream before it is closed, 
> but without knowing the position of the lookup-table the reader cannot seek 
> to it.
> One strategy for creating such a lookup-table would be to generate it during 
> reads and cache it for next lookup. This does not fit directly into how 
> {{IndexedDISI}} currently works (it is created anew for each invocation), but 
> could probably be added with a little work. An advantage to this is that this 
> does not change the underlying format and thus could be used with existing 
> indexes.
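The proposed block jump-table can be sketched as below. The layout and names are illustrative assumptions, not the actual patch: one long per block holding the block's offset in the slice, so a single array read replaces the linear block scan.

```java
public class BlockJumpTable {
    // One entry per 2^16-doc block: the offset of that block in the slice.
    private final long[] blockOffset;

    BlockJumpTable(long[] blockOffset) {
        this.blockOffset = blockOffset;
    }

    // Direct jump: a single array read replaces iterating earlier blocks.
    long offsetOf(int docID) {
        return blockOffset[docID >>> 16];
    }

    public static void main(String[] args) {
        // 6M documents -> 92 blocks -> 92 * 8 bytes = 736 bytes, i.e. < 1 KB.
        int blocks = (6_000_000 + (1 << 16) - 1) >>> 16;
        if (blocks != 92) throw new AssertionError();
        if (blocks * 8L >= 1024) throw new AssertionError();

        long[] offsets = new long[blocks];
        for (int i = 0; i < blocks; i++) offsets[i] = i * 1000L; // dummy offsets
        BlockJumpTable table = new BlockJumpTable(offsets);
        if (table.offsetOf(70_000) != 1000L) throw new AssertionError(); // block 1
        System.out.println("ok");
    }
}
```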
> h2. The lookup structure inside each block
> If {{ALL}} of the 2^16 values are defined, the structure is empty and the 
> ordinal is simply the requested docID with some modulo and multiply math. 
> Nothing to improve there.
> If the block is {{DENSE}} (2^12 to 2^16 values are defined), a bitmap is used 
> and the number of set bits up to the wanted index (the docID modulo the block 
> origin) is counted. That bitmap is a long[1024], meaning that the worst case 
> is to look up and count all set bits for 1024 longs!
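The DENSE in-block counting can be sketched as follows (illustrative; the real code streams the longs from the IndexSlice rather than holding the bitmap in a heap array):

```java
public class DenseRankSketch {
    // Ordinal within a DENSE block = number of set bits strictly below the
    // wanted in-block index. Worst case scans all 1024 longs of the bitmap.
    static int inBlockOrdinal(long[] bits, int indexInBlock) {
        int word = indexInBlock >>> 6;
        int rank = 0;
        for (int i = 0; i < word; i++) {
            rank += Long.bitCount(bits[i]); // whole words below the index
        }
        // Set bits below the index inside its own word.
        rank += Long.bitCount(bits[word] & ((1L << (indexInBlock & 63)) - 1));
        return rank;
    }

    public static void main(String[] args) {
        long[] bits = new long[1024];          // 2^16 bits, all clear
        bits[0] |= 1L | (1L << 4);             // in-block docs 0 and 4 have values
        bits[1432 >>> 6] |= 1L << (1432 & 63); // in-block doc 1432 has a value
        if (inBlockOrdinal(bits, 4) != 1) throw new AssertionError();
        if (inBlockOrdinal(bits, 1432) != 2) throw new AssertionError();
        System.out.println("ok");
    }
}
```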
> One known solution to this is to use a [rank 
> structure|https://en.wikipedia.org/wiki/Succinct_data_structure]. I 
> [implemented 
> it|https://github.com/tokee/lucene-solr/blob/solr5894/solr/core/src/java/org/apache/solr/search/sparse/count/plane/RankCache.java]
>  for a related project, and with that, the rank-overhead for a {{DENSE}} 
> block would be long[32] and would ensure a maximum of 9 lookups. It is not 
> trivial to build the rank-structure, and caching it (assuming all blocks are 
> dense) for 6M documents would require 22 KB (3.17% overhead). It would be far 
> better to generate the rank-structure at index 

[JENKINS] Lucene-Solr-BadApples-7.x-Linux (64bit/jdk-9.0.4) - Build # 115 - Unstable!

2018-11-05 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-BadApples-7.x-Linux/115/
Java: 64bit/jdk-9.0.4 -XX:-UseCompressedOops -XX:+UseG1GC

10 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.solr.cloud.TestStressCloudBlindAtomicUpdates

Error Message:


Stack Trace:
java.lang.NullPointerException
at __randomizedtesting.SeedInfo.seed([DA36FB4379B416A6]:0)
at 
org.apache.solr.cloud.TestStressCloudBlindAtomicUpdates.createMiniSolrCloudCluster(TestStressCloudBlindAtomicUpdates.java:138)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:564)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1742)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:875)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:891)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.base/java.lang.Thread.run(Thread.java:844)


FAILED:  
junit.framework.TestSuite.org.apache.solr.cloud.TestStressCloudBlindAtomicUpdates

Error Message:


Stack Trace:
java.lang.NullPointerException
at __randomizedtesting.SeedInfo.seed([DA36FB4379B416A6]:0)
at 
org.apache.solr.cloud.TestStressCloudBlindAtomicUpdates.afterClass(TestStressCloudBlindAtomicUpdates.java:158)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:564)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1742)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:898)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[JENKINS] Lucene-Solr-repro - Build # 1873 - Unstable

2018-11-05 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-repro/1873/

[...truncated 28 lines...]
[repro] Jenkins log URL: 
https://builds.apache.org/job/Lucene-Solr-BadApples-Tests-master/203/consoleText

[repro] Revision: be65b95e80fdddea109a9d850506d6c524911ecb

[repro] Repro line:  ant test  -Dtestcase=ScheduledMaintenanceTriggerTest 
-Dtests.method=testInactiveShardCleanup -Dtests.seed=E45A11037FE4A8C5 
-Dtests.multiplier=2 -Dtests.slow=true -Dtests.badapples=true 
-Dtests.locale=sv-SE -Dtests.timezone=Asia/Kashgar -Dtests.asserts=true 
-Dtests.file.encoding=US-ASCII

[repro] Repro line:  ant test  -Dtestcase=CloudSolrClientTest 
-Dtests.method=testRouting -Dtests.seed=B3BF8102CAF2065 -Dtests.multiplier=2 
-Dtests.slow=true -Dtests.badapples=true -Dtests.locale=ru-RU 
-Dtests.timezone=Indian/Cocos -Dtests.asserts=true 
-Dtests.file.encoding=US-ASCII

[repro] git rev-parse --abbrev-ref HEAD
[repro] git rev-parse HEAD
[repro] Initial local git branch/revision: 
be65b95e80fdddea109a9d850506d6c524911ecb
[repro] git fetch
[repro] git checkout be65b95e80fdddea109a9d850506d6c524911ecb

[...truncated 1 lines...]
[repro] git merge --ff-only

[...truncated 1 lines...]
[repro] ant clean

[...truncated 6 lines...]
[repro] Test suites by module:
[repro]    solr/solrj
[repro]       CloudSolrClientTest
[repro]    solr/core
[repro]       ScheduledMaintenanceTriggerTest
[repro] ant compile-test

[...truncated 2703 lines...]
[repro] ant test-nocompile -Dtests.dups=5 -Dtests.maxfailures=5 
-Dtests.class="*.CloudSolrClientTest" -Dtests.showOutput=onerror  
-Dtests.seed=B3BF8102CAF2065 -Dtests.multiplier=2 -Dtests.slow=true 
-Dtests.badapples=true -Dtests.locale=ru-RU -Dtests.timezone=Indian/Cocos 
-Dtests.asserts=true -Dtests.file.encoding=US-ASCII

[...truncated 711 lines...]
[repro] Setting last failure code to 256

[repro] ant compile-test

[...truncated 1352 lines...]
[repro] ant test-nocompile -Dtests.dups=5 -Dtests.maxfailures=5 
-Dtests.class="*.ScheduledMaintenanceTriggerTest" -Dtests.showOutput=onerror  
-Dtests.seed=E45A11037FE4A8C5 -Dtests.multiplier=2 -Dtests.slow=true 
-Dtests.badapples=true -Dtests.locale=sv-SE -Dtests.timezone=Asia/Kashgar 
-Dtests.asserts=true -Dtests.file.encoding=US-ASCII

[...truncated 7351 lines...]
[repro] Setting last failure code to 256

[repro] Failures:
[repro]   2/5 failed: org.apache.solr.client.solrj.impl.CloudSolrClientTest
[repro]   5/5 failed: 
org.apache.solr.cloud.autoscaling.ScheduledMaintenanceTriggerTest

[repro] Re-testing 100% failures at the tip of master
[repro] git fetch
[repro] git checkout master

[...truncated 4 lines...]
[repro] git merge --ff-only

[...truncated 1 lines...]
[repro] ant clean

[...truncated 8 lines...]
[repro] Test suites by module:
[repro]    solr/core
[repro]       ScheduledMaintenanceTriggerTest
[repro] ant compile-test

[...truncated 3300 lines...]
[repro] ant test-nocompile -Dtests.dups=5 -Dtests.maxfailures=5 
-Dtests.class="*.ScheduledMaintenanceTriggerTest" -Dtests.showOutput=onerror  
-Dtests.seed=E45A11037FE4A8C5 -Dtests.multiplier=2 -Dtests.slow=true 
-Dtests.badapples=true -Dtests.locale=sv-SE -Dtests.timezone=Asia/Kashgar 
-Dtests.asserts=true -Dtests.file.encoding=US-ASCII

[...truncated 1147 lines...]
[repro] Setting last failure code to 256

[repro] Failures at the tip of master:
[repro]   2/5 failed: 
org.apache.solr.cloud.autoscaling.ScheduledMaintenanceTriggerTest
[repro] git checkout be65b95e80fdddea109a9d850506d6c524911ecb

[...truncated 8 lines...]
[repro] Exiting with code 256

[...truncated 6 lines...]

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

[jira] [Commented] (SOLR-12963) change default for 'uninvertible' to 'false' (dependent on new schema 'version')

2018-11-05 Thread Tim Underwood (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12963?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16676193#comment-16676193
 ] 

Tim Underwood commented on SOLR-12963:
--

{quote} * should we explicitly "fail" if a user requested {{method}} doesn't 
match the field props?{quote}
Yes please.  I find it very confusing/frustrating when I explicitly request a 
faceting method that then does not get used for whatever reason.  The JSON 
facets code ({{FacetField.createFacetProcessor}}) seems especially picky about 
how it chooses its faceting method.  I think a Point field with docValues=true 
can only use FacetFieldProcessorByHashDV.  However, Solr will let me request 
any faceting method I want and silently ignore it.  (Shameless plug for 
SOLR-12880, which will at least tell you which facet processor was used.)

 

> change default for 'uninvertible' to 'false' (dependent on new schema 
> 'version')
> 
>
> Key: SOLR-12963
> URL: https://issues.apache.org/jira/browse/SOLR-12963
> Project: Solr
>  Issue Type: Sub-task
>Reporter: Hoss Man
>Priority: Major
> Attachments: SOLR-12963.patch
>
>
> We should consider changing the default behavior of the {{uninvertible}} 
> field option to be dependent on the schema {{version}} property, such that 
> moving forward the fields/fieldtypes will default to {{uninvertible == 
> false}} unless an explicit {{uninvertible=true}} is specified by the user.
> There are a lot of considerations regarding the existing behavior of 
> functionality (like faceting) when the (effective) value of {{uninvertible}} 
> is false, before we move forward with changing this in a way that could 
> surprise/confuse new users or existing users w/ long-held expectations that 
> certain behavior would just "work" w/o understanding that it was because of 
> FieldCache/uninversion.
> See parent issue for more background/discussion.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)




[JENKINS] Lucene-Solr-Tests-master - Build # 2933 - Still Unstable

2018-11-05 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-master/2933/

1 tests failed.
FAILED:  
org.apache.solr.client.solrj.impl.CloudSolrClientTest.testCollectionDoesntExist

Error Message:
Tried fetching cluster state using the node names we knew of, i.e. 
[127.0.0.1:41761_solr, 127.0.0.1:39723_solr, 127.0.0.1:36922_solr]. However, 
succeeded in obtaining the cluster state from none of them.If you think your 
Solr cluster is up and is accessible, you could try re-creating a new 
CloudSolrClient using working solrUrl(s) or zkHost(s).

Stack Trace:
java.lang.RuntimeException: Tried fetching cluster state using the node names 
we knew of, i.e. [127.0.0.1:41761_solr, 127.0.0.1:39723_solr, 
127.0.0.1:36922_solr]. However, succeeded in obtaining the cluster state from 
none of them.If you think your Solr cluster is up and is accessible, you could 
try re-creating a new CloudSolrClient using working solrUrl(s) or zkHost(s).
at 
__randomizedtesting.SeedInfo.seed([7716075A0B56DA99:15B519F409D4BE89]:0)
at 
org.apache.solr.client.solrj.impl.HttpClusterStateProvider.getState(HttpClusterStateProvider.java:108)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.resolveAliases(CloudSolrClient.java:1115)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:844)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:817)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:194)
at org.apache.solr.client.solrj.SolrClient.add(SolrClient.java:177)
at org.apache.solr.client.solrj.SolrClient.add(SolrClient.java:138)
at 
org.apache.solr.client.solrj.impl.CloudSolrClientTest.testCollectionDoesntExist(CloudSolrClientTest.java:779)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1742)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:935)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:971)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:985)
at 
org.junit.rules.ExpectedException$ExpectedExceptionStatement.evaluate(ExpectedException.java:110)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:944)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:830)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:891)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 

[JENKINS] Lucene-Solr-7.x-Linux (64bit/jdk-10.0.1) - Build # 3043 - Still Unstable!

2018-11-05 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Linux/3043/
Java: 64bit/jdk-10.0.1 -XX:+UseCompressedOops -XX:+UseSerialGC

4 tests failed.
FAILED:  
org.apache.solr.client.solrj.impl.CloudSolrClientTest.testCollectionDoesntExist

Error Message:
Tried fetching cluster state using the node names we knew of, i.e. 
[127.0.0.1:37581_solr, 127.0.0.1:44317_solr, 127.0.0.1:32915_solr]. However, 
succeeded in obtaining the cluster state from none of them.If you think your 
Solr cluster is up and is accessible, you could try re-creating a new 
CloudSolrClient using working solrUrl(s) or zkHost(s).

Stack Trace:
java.lang.RuntimeException: Tried fetching cluster state using the node names 
we knew of, i.e. [127.0.0.1:37581_solr, 127.0.0.1:44317_solr, 
127.0.0.1:32915_solr]. However, succeeded in obtaining the cluster state from 
none of them.If you think your Solr cluster is up and is accessible, you could 
try re-creating a new CloudSolrClient using working solrUrl(s) or zkHost(s).
at 
__randomizedtesting.SeedInfo.seed([949992ED4AFA660A:F63A8C434878021A]:0)
at 
org.apache.solr.client.solrj.impl.HttpClusterStateProvider.getState(HttpClusterStateProvider.java:108)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.resolveAliases(CloudSolrClient.java:1115)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:844)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:817)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:194)
at org.apache.solr.client.solrj.SolrClient.add(SolrClient.java:177)
at org.apache.solr.client.solrj.SolrClient.add(SolrClient.java:138)
at 
org.apache.solr.client.solrj.impl.CloudSolrClientTest.testCollectionDoesntExist(CloudSolrClientTest.java:779)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:564)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1742)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:935)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:971)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:985)
at 
org.junit.rules.ExpectedException$ExpectedExceptionStatement.evaluate(ExpectedException.java:110)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:944)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:830)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:891)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 

[jira] [Commented] (LUCENE-8374) Reduce reads for sparse DocValues

2018-11-05 Thread Tim Underwood (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8374?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16676154#comment-16676154
 ] 

Tim Underwood commented on LUCENE-8374:
---

[~toke] Here is a delayed follow up on your FieldCacheImpl#Cache.get 
observations:

At first I was a little confused why the FieldCache was showing up at all since 
I have docValues enabled on almost everything in order to avoid the 
uninverting.  However looking at the Solr cache stats page shows the __root__ 
field showing up in the field cache.  That makes sense since I don't have 
docValues=true specified for it and also since I'm requesting the 
"uniqueBlock(__root__)" count for each of my facet fields (since I only care 
how many parent documents match and not how many children).

Anyway, as to why it shows up in the CPU sampling as taking so much time, my 
best guess is that it has something to do with the synchronized blocks in 
FieldCacheImpl#Cache.get.  As an experiment (which ignores the weak keys) I 
swapped out the WeakHashMap<> (which uses nested HashMaps) for a 
ConcurrentHashMap with nested ConcurrentHashMaps, in order to allow me to get 
rid of the synchronized blocks.  After doing that, FieldCacheImpl#Cache.get 
disappeared from the CPU sampling.  There may have been a minor performance 
increase, but it certainly wasn't close to the 68,485ms that showed up in the 
original profiling.  So it might have just been an artifact of the interaction 
between the CPU sampling and the synchronized blocks.
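The experiment described above can be sketched like this. This is a hypothetical reduction, not Solr's FieldCacheImpl: it deliberately ignores the weak-key semantics the real cache relies on for eviction when readers are closed, which is exactly the caveat noted in the comment.

```java
import java.util.concurrent.ConcurrentHashMap;

public class UnsyncCacheSketch {
    // Outer map keyed per reader, inner map keyed per field; both lock-free,
    // replacing a synchronized WeakHashMap<reader, Map<field, value>>.
    // NOTE: unlike WeakHashMap, nothing here is ever garbage-collected away.
    private final ConcurrentHashMap<Object, ConcurrentHashMap<String, Object>> cache =
        new ConcurrentHashMap<>();

    Object get(Object readerKey, String fieldKey) {
        return cache
            .computeIfAbsent(readerKey, k -> new ConcurrentHashMap<>())
            .computeIfAbsent(fieldKey, this::load);
    }

    // Stand-in for the expensive uninversion work the real cache guards.
    Object load(String fieldKey) {
        return new StringBuilder("value-for-").append(fieldKey).toString();
    }

    public static void main(String[] args) {
        UnsyncCacheSketch c = new UnsyncCacheSketch();
        Object a = c.get("reader1", "__root__");
        Object b = c.get("reader1", "__root__");
        if (a != b) throw new AssertionError("expected the cached instance");
        System.out.println("ok");
    }
}
```

A production version would need weak reader keys (e.g. a purge hook on reader close, or a weak-keyed concurrent map) to avoid leaking entries.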

Perhaps I'll go back and play with it some more and try swapping in Guava's 
Cache in order to make the weak keys work properly.  Or maybe I'll try enabling 
docValues on my __root__ field to see what that does.

 

 


[GitHub] lucene-solr pull request #493: SOLR-12964: Make use of DocValuesIterator.adv...

2018-11-05 Thread tpunder
GitHub user tpunder opened a pull request:

https://github.com/apache/lucene-solr/pull/493

SOLR-12964: Make use of DocValuesIterator.advanceExact() instead of the 
advance()/docID() pattern



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/tpunder/lucene-solr SOLR-12964

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/lucene-solr/pull/493.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #493


commit 8bb088305591f45f3b9da6ffe9c0a5de8d0fe8ba
Author: Tim Underwood 
Date:   2018-11-05T16:57:32Z

SOLR-12964: Make use of DocValuesIterator.advanceExact() instead of the 
advance()/docID() pattern




---




[jira] [Created] (SOLR-12964) Use advanceExact instead of advance in a few remaining json facet use cases

2018-11-05 Thread Tim Underwood (JIRA)
Tim Underwood created SOLR-12964:


 Summary: Use advanceExact instead of advance in a few remaining 
json facet use cases
 Key: SOLR-12964
 URL: https://issues.apache.org/jira/browse/SOLR-12964
 Project: Solr
  Issue Type: Improvement
  Security Level: Public (Default Security Level. Issues are Public)
  Components: Facet Module
Affects Versions: 7.5
Reporter: Tim Underwood


This updates two places in the JSON Facets code that use the advance()/docID() 
pattern, switching them to the simpler advanceExact().  Most other usages in 
the faceting code already make use of advanceExact().

The only remaining usage of advance() in org.apache.solr.search.facet is in:
 * UniqueAgg.BaseNumericAcc.collect
 * HLLAgg.BaseNumericAcc.collect

The code for both of those looks very similar and probably also makes sense to 
update, but it would require changing the return type of the protected 
docIdSetIterator() method to DocValuesIterator in order to be able to 
call the advanceExact() method.
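For illustration, here is a hypothetical, self-contained sketch of the two idioms (the iterator below is an invented stand-in, not Lucene's actual DocValuesIterator): the old pattern needs an advance() call plus a docID() comparison, while advanceExact() answers "does this doc have a value?" in one call.

```java
import java.util.Arrays;

public class AdvanceExactSketch {
    /** Minimal stand-in for a doc-values iterator over a sorted set of docIDs. */
    static class SimpleDocValuesIterator {
        private final int[] docsWithValue;   // sorted docIDs that carry a value
        private int current = -1;

        SimpleDocValuesIterator(int... docsWithValue) { this.docsWithValue = docsWithValue; }

        /** Lucene-style advance(): moves to the first docID >= target. */
        int advance(int target) {
            for (int d : docsWithValue) if (d >= target) return current = d;
            return current = Integer.MAX_VALUE;  // NO_MORE_DOCS
        }

        int docID() { return current; }

        /** advanceExact(): positions on target, returns whether it has a value. */
        boolean advanceExact(int target) {
            current = target;
            return Arrays.binarySearch(docsWithValue, target) >= 0;
        }
    }

    // The advance()/docID() pattern being replaced: two calls per check.
    static boolean hasValueOld(SimpleDocValuesIterator it, int doc) {
        if (it.docID() < doc) it.advance(doc);
        return it.docID() == doc;
    }

    // The simpler advanceExact() pattern: one call per check.
    static boolean hasValueNew(SimpleDocValuesIterator it, int doc) {
        return it.advanceExact(doc);
    }

    public static void main(String[] args) {
        // docs 0, 4 and 1432 have values; doc 7 does not
        System.out.println(hasValueOld(new SimpleDocValuesIterator(0, 4, 1432), 4));  // true
        System.out.println(hasValueOld(new SimpleDocValuesIterator(0, 4, 1432), 7));  // false
        System.out.println(hasValueNew(new SimpleDocValuesIterator(0, 4, 1432), 4));  // true
        System.out.println(hasValueNew(new SimpleDocValuesIterator(0, 4, 1432), 7));  // false
    }
}
```

The actual patch applies the same substitution to Lucene's real DocValuesIterator, which exposes the advanceExact(int) contract directly.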



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)




[jira] [Commented] (SOLR-8273) deprecate implicitly uninverted fields, force people to either use docValues, or be explicit that they want query time uninversion

2018-11-05 Thread David Smiley (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-8273?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16676113#comment-16676113
 ] 

David Smiley commented on SOLR-8273:


{quote}they have docValues="true" enabled on every fieldtype in their 
schema(s), not because they need/want to use them, but because it's the only 
way to ensure that a stray/mistaken request to sort/facet on one of these 
fields won't cause the heap usage to blow up building FieldCache
{quote}
Wow, that's crazy; I've never seen that. I'm shaking my head at this.  My choice 
would be to write a DocValuesFormat that is either "cranky" or that always 
returns some nominal value.

> deprecate implicitly uninverted fields, force people to either use docValues, 
> or be explicit that they want query time uninversion
> --
>
> Key: SOLR-8273
> URL: https://issues.apache.org/jira/browse/SOLR-8273
> Project: Solr
>  Issue Type: Improvement
>  Components: Schema and Analysis
>Reporter: Hoss Man
>Priority: Major
>
> once upon a time, there was nothing we could do to *stop* people from using 
> the FieldCache - even if they didn't realize they were using it.
> Then DocValues was added - and now people have a choice: they can set 
> {{docValues=true}} on a field/fieldtype and know that when they do 
> functions/sorting/faceting on that field, it won't require a big hunk of ram 
> and a big stall every time a reader is reopened.  But it's easy to overlook 
> when clients might be doing something that required the FieldCache w/o 
> realizing it -- and there is no way to stop them, because Solr automatically 
> uses UninvertingReader under the covers and automatically allows every field 
> to be uninverted in this way.
> we should change that.
> 
> Straw man proposal...
> * introduce a new boolean fieldType/field property {{uninvertable}}
> * all existing FieldType classes should default to {{uninvertable==false}}
> * a field or fieldType that contains {{indexed="false" uninvertable="true"}} 
> should be an error.
> * the Schema {{version}} value should be incremented, such that any Schema 
> with an older version is treated as if every field with {{docValues==false}} 
> has an implicit {{uninvertable="true"}} on it.
> * the Map passed to UninvertedReader should now only list items that have an 
> effective value of {{uninvertable==true}}
> * sample schemas should be updated to use docValues on any field where the 
> examples using those schemas suggest using those fields in that way (ie: 
> sorting, faceting, etc...)






[jira] [Commented] (SOLR-8273) deprecate implicitly uninverted fields, force people to either use docValues, or be explicit that they want query time uninversion

2018-11-05 Thread David Smiley (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-8273?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16676103#comment-16676103
 ] 

David Smiley commented on SOLR-8273:


+1 woohoo!  Looking forward to this.

bq. while deferring on the discussion to change the default value to true

+1 sure -- "progress not perfection" and all that

> deprecate implicitly uninverted fields, force people to either use docValues, 
> or be explicit that they want query time uninversion
> --
>
> Key: SOLR-8273
> URL: https://issues.apache.org/jira/browse/SOLR-8273
> Project: Solr
>  Issue Type: Improvement
>  Components: Schema and Analysis
>Reporter: Hoss Man
>Priority: Major
>
> once upon a time, there was nothing we could do to *stop* people from using 
> the FieldCache - even if they didn't realize they were using it.
> Then DocValues was added - and now people have a choice: they can set 
> {{docValues=true}} on a field/fieldtype and know that when they do 
> functions/sorting/faceting on that field, it won't require a big hunk of ram 
> and a big stall every time a reader is reopened.  But it's easy to overlook 
> when clients might be doing something that required the FieldCache w/o 
> realizing it -- and there is no way to stop them, because Solr automatically 
> uses UninvertingReader under the covers and automatically allows every field 
> to be uninverted in this way.
> we should change that.
> 
> Straw man proposal...
> * introduce a new boolean fieldType/field property {{uninvertable}}
> * all existing FieldType classes should default to {{uninvertable==false}}
> * a field or fieldType that contains {{indexed="false" uninvertable="true"}} 
> should be an error.
> * the Schema {{version}} value should be incremented, such that any Schema 
> with an older version is treated as if every field with {{docValues==false}} 
> has an implicit {{uninvertable="true"}} on it.
> * the Map passed to UninvertedReader should now only list items that have an 
> effective value of {{uninvertable==true}}
> * sample schemas should be updated to use docValues on any field where the 
> examples using those schemas suggest using those fields in that way (ie: 
> sorting, faceting, etc...)






[jira] [Commented] (SOLR-12959) Deprecate solrj SimpleOrderedMap

2018-11-05 Thread David Smiley (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12959?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16676096#comment-16676096
 ] 

David Smiley commented on SOLR-12959:
-

I concur with Noble & Gus's opinion that handling duplicate keys where you want 
to convey a map is more likely indicative of a bug that ought to be loudly 
reported to the user than it is a feature.  Well said, guys.  But perhaps a 
realistic example could present itself to the contrary.  One way to help find 
it may be to temporarily override add() to assert the key doesn't already exist, 
then see which tests fail.

I guess my satisfaction point is merely this: in 8.0, SimpleOrderedMap is 
nowhere to be found.  Either remove it, rename it, or do it some other way (e.g. 
a boolean on NamedList).  We can do better, folks.
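A throwaway sketch of that debugging idea, with invented names (this is not actual SolrJ code): an add() that fails loudly on a duplicate key, so a test run surfaces the call sites that actually rely on duplicates.

```java
import java.util.ArrayList;
import java.util.List;

public class DuplicateCheckingPairList {
    private final List<String> names = new ArrayList<>();
    private final List<Object> values = new ArrayList<>();

    /** Like a NamedList add(), but throws instead of silently keeping a duplicate key. */
    public void add(String name, Object value) {
        if (names.contains(name)) {
            throw new AssertionError("duplicate key: " + name);
        }
        names.add(name);
        values.add(value);
    }

    public int size() { return names.size(); }

    public static void main(String[] args) {
        DuplicateCheckingPairList list = new DuplicateCheckingPairList();
        list.add("count", 10);
        try {
            list.add("count", 11);       // would have been a silent duplicate
        } catch (AssertionError expected) {
            System.out.println("caught: " + expected.getMessage());
        }
    }
}
```

Wiring such a check into the real add() temporarily, then running the test suite, would show which responses depend on duplicate keys surviving.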

> Deprecate solrj SimpleOrderedMap
> 
>
> Key: SOLR-12959
> URL: https://issues.apache.org/jira/browse/SOLR-12959
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrJ
>Reporter: Noble Paul
>Priority: Minor
>
> There is no difference between a NamedList and a SimpleOrderedMap. It 
> doesn't help to have both of them when they are doing exactly the same things.






[JENKINS] Lucene-Solr-BadApples-master-Linux (64bit/jdk-11) - Build # 116 - Unstable!

2018-11-05 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-BadApples-master-Linux/116/
Java: 64bit/jdk-11 -XX:+UseCompressedOops -XX:+UseSerialGC

30 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.solr.update.processor.TimeRoutedAliasUpdateProcessorTest

Error Message:
2 threads leaked from SUITE scope at 
org.apache.solr.update.processor.TimeRoutedAliasUpdateProcessorTest: 1) 
Thread[id=1780, name=test-613-thread-1, state=WAITING, 
group=TGRP-TimeRoutedAliasUpdateProcessorTest] at 
java.base@11/jdk.internal.misc.Unsafe.park(Native Method) at 
java.base@11/java.util.concurrent.locks.LockSupport.park(LockSupport.java:194)  
   at 
java.base@11/java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2081)
 at 
java.base@11/java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:433)
 at 
java.base@11/java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1054)
 at 
java.base@11/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1114)
 at 
java.base@11/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
 at java.base@11/java.lang.Thread.run(Thread.java:834)2) 
Thread[id=1781, name=test-613-thread-2, state=WAITING, 
group=TGRP-TimeRoutedAliasUpdateProcessorTest] at 
java.base@11/jdk.internal.misc.Unsafe.park(Native Method) at 
java.base@11/java.util.concurrent.locks.LockSupport.park(LockSupport.java:194)  
   at 
java.base@11/java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2081)
 at 
java.base@11/java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:433)
 at 
java.base@11/java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1054)
 at 
java.base@11/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1114)
 at 
java.base@11/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
 at java.base@11/java.lang.Thread.run(Thread.java:834)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: 2 threads leaked from SUITE 
scope at org.apache.solr.update.processor.TimeRoutedAliasUpdateProcessorTest: 
   1) Thread[id=1780, name=test-613-thread-1, state=WAITING, 
group=TGRP-TimeRoutedAliasUpdateProcessorTest]
at java.base@11/jdk.internal.misc.Unsafe.park(Native Method)
at 
java.base@11/java.util.concurrent.locks.LockSupport.park(LockSupport.java:194)
at 
java.base@11/java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2081)
at 
java.base@11/java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:433)
at 
java.base@11/java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1054)
at 
java.base@11/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1114)
at 
java.base@11/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
at java.base@11/java.lang.Thread.run(Thread.java:834)
   2) Thread[id=1781, name=test-613-thread-2, state=WAITING, 
group=TGRP-TimeRoutedAliasUpdateProcessorTest]
at java.base@11/jdk.internal.misc.Unsafe.park(Native Method)
at 
java.base@11/java.util.concurrent.locks.LockSupport.park(LockSupport.java:194)
at 
java.base@11/java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2081)
at 
java.base@11/java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:433)
at 
java.base@11/java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1054)
at 
java.base@11/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1114)
at 
java.base@11/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
at java.base@11/java.lang.Thread.run(Thread.java:834)
at __randomizedtesting.SeedInfo.seed([D2BF5D911A976C6E]:0)


FAILED:  
junit.framework.TestSuite.org.apache.solr.update.processor.TimeRoutedAliasUpdateProcessorTest

Error Message:
There are still zombie threads that couldn't be terminated:1) 
Thread[id=1780, name=test-613-thread-1, state=WAITING, 
group=TGRP-TimeRoutedAliasUpdateProcessorTest] at 
java.base@11/jdk.internal.misc.Unsafe.park(Native Method) at 
java.base@11/java.util.concurrent.locks.LockSupport.park(LockSupport.java:194)  
   at 
java.base@11/java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2081)
 at 
java.base@11/java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:433)
 at 
java.base@11/java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1054)
 at 

[jira] [Comment Edited] (SOLR-12959) Deprecate solrj SimpleOrderedMap

2018-11-05 Thread Noble Paul (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12959?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16676078#comment-16676078
 ] 

Noble Paul edited comment on SOLR-12959 at 11/6/18 3:13 AM:


bq. [~ab] says the current classes were meant to save memory (something worth 
testing since Java has evolved a great deal since they were created)

Yes, you are right. Let's try to understand the requirement.

From the serialization point of view, it's just a {{Map}} and nothing else.

We can have an interface called {{SolrMap}} which is always serialized into 
a {{Map}}-like structure. We can have memory-efficient/inefficient 
implementations of the same interface.
The problem today is we have tied the wire format to a solid Java class 
which actually means nothing to the users (or even developers).

bq. The handling of duplicate keys is a feature..
It's not a feature. It's a price we pay for the memory efficiency of using a 
{{NamedList}}-like class.

bq. So it would be necessary to verify performance and find widespread agreement 
that that back-compatibility break is feasible and worthwhile
The performance impact is well known. We should keep sensible interfaces 
and multiple implementations depending on how much of a price we want to pay.

JSON only understands a {{Map}}-like object. It does not matter for other 
serialization formats anyway.


was (Author: noble.paul):
bq.[~ab] says the current classes were meant to save memory (something worth 
testing since java has evolved a great deal since they were created

Yes. you are right. Let's try to understand the requirement

>From the serializatiopn point of view, it's just a {{Map}} and nothing else

We can have an interface called {{SolrMap}} which is always be serialized into 
a {{Map}} like structure. We can have memory efficient/inefficient 
implemenatations of the same class. 
The problem today is we have tied the wire format () with a Solid java class 
which actually means nothing to the users (or even developers)
 
bq.The handling of duplicate keys is a feature..
it's not a feature. It's a price we pay for the memory efficiency of using a 
{{NamedList}} like class. 

bq.So it would be necessary to verify performance and find widespread agreement 
that that back compatibility break is feasible and worthwhile 
The performance impact is is well known . We should keep sensible interfaces 
and multiple implementations depending on how much price we want to pay.

JSON only understands a {{Map}} like Object. it does not matter for other 
serialization formats anyway

> Deprecate solrj SimpleOrderedMap
> 
>
> Key: SOLR-12959
> URL: https://issues.apache.org/jira/browse/SOLR-12959
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrJ
>Reporter: Noble Paul
>Priority: Minor
>
> There is no difference between a NamedList and a SimpleOrderedMap. It 
> doesn't help to have both of them when they are doing exactly the same things.






[jira] [Commented] (SOLR-12959) Deprecate solrj SimpleOrderedMap

2018-11-05 Thread Noble Paul (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12959?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16676078#comment-16676078
 ] 

Noble Paul commented on SOLR-12959:
---

bq. [~ab] says the current classes were meant to save memory (something worth 
testing since Java has evolved a great deal since they were created)

Yes, you are right. Let's try to understand the requirement.

From the serialization point of view, it's just a {{Map}} and nothing else.

We can have an interface called {{SolrMap}} which is always serialized into 
a {{Map}}-like structure. We can have memory-efficient/inefficient 
implementations of the same interface.
The problem today is we have tied the wire format to a solid Java class 
which actually means nothing to the users (or even developers).

bq. The handling of duplicate keys is a feature..
It's not a feature. It's a price we pay for the memory efficiency of using a 
{{NamedList}}-like class.

bq. So it would be necessary to verify performance and find widespread agreement 
that that back-compatibility break is feasible and worthwhile
The performance impact is well known. We should keep sensible interfaces 
and multiple implementations depending on how much of a price we want to pay.

JSON only understands a {{Map}}-like object. It does not matter for other 
serialization formats anyway.
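A small sketch of the trade-off under discussion, using plain JDK types rather than SolrJ classes: a NamedList-style pair list preserves duplicate keys in order, while a LinkedHashMap keeps insertion order but silently overwrites duplicates.

```java
import java.util.AbstractMap;
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class DupKeySketch {
    public static void main(String[] args) {
        // NamedList-style: an ordered list of (name, value) pairs; duplicates survive.
        List<Map.Entry<String, Object>> namedList = new ArrayList<>();
        namedList.add(new AbstractMap.SimpleEntry<>("facet", 1));
        namedList.add(new AbstractMap.SimpleEntry<>("facet", 2)); // duplicate key kept

        // Map-style: last put wins, the earlier value is silently dropped.
        Map<String, Object> map = new LinkedHashMap<>();
        map.put("facet", 1);
        map.put("facet", 2);

        System.out.println(namedList.size()); // 2 -- both entries retained
        System.out.println(map.size());       // 1 -- duplicate overwritten
    }
}
```

Whether the overwrite or the retention is the right behavior is exactly the question of whether duplicate keys are a feature or a bug.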

> Deprecate solrj SimpleOrderedMap
> 
>
> Key: SOLR-12959
> URL: https://issues.apache.org/jira/browse/SOLR-12959
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrJ
>Reporter: Noble Paul
>Priority: Minor
>
> There is no difference between a NamedList and a SimpleOrderedMap. It 
> doesn't help to have both of them when they are doing exactly the same things.






[jira] [Comment Edited] (SOLR-12959) Deprecate solrj SimpleOrderedMap

2018-11-05 Thread Gus Heck (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12959?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16676061#comment-16676061
 ] 

Gus Heck edited comment on SOLR-12959 at 11/6/18 2:47 AM:
--

[~noble.paul] I agree that it would be very nice to be able to use a standard 
collection class instead of NamedList from a programmer convenience standpoint. 
I've had this thought many times too, but expected such a road to be bumpy and 
controversial and so I didn't start down it :).

It seems from the above that 2 things stand in the way (identified thus far):
 # [~ab] says the current classes were meant to save memory (something worth 
testing since java has evolved a great deal since they were created, named 
lists are mentioned in SOLR-17 so they clearly predate our bug tracking system)
 # The handling of duplicate keys is a feature. Personally, if I were the user 
I'd want the above example [~hossman] gave with accidentally duplicated names 
to throw an error, not give me back something that *I would* probably 
subsequently throw away when I stuffed the results into a map or tried to build 
a JavaScript object out of it...

So it would be necessary to verify performance and find widespread agreement 
that that back-compatibility break is feasible and worthwhile.

Also note that the code you quote is using SimpleOrderedMap to override and 
ignore the json.nl setting.


was (Author: gus_heck):
[~noble.paul] I agree that it would be very nice to be able to use a standard 
collection class instead of NamedList from a programmer convenience standpoint. 
I've had this thought many times too, but expected such a road to be bumpy and 
controversial and so I didn't start down it :).

It seems from the above so far that 2 things stand in the way (identified thus 
far)
 # [~ab] says the current classes were meant to save memory (something worth 
testing since java has evolved a great deal since they were created, named 
lists are mentioned in SOLR-17 so they clearly predate our bug tracking system)
 ## The handling of duplicate keys is a feature. Personally, if I were the user 
I'd want the above example [~hossman] gave with accidentally duplicated names 
to throw an error, not give me back something that *I would* probably 
subsequently throw away when I stuffed the results into a map or tried to build 
a JavaScript object out of it...

So it would be necessary to verify performance and find widespread agreement 
that that back compatibility break is feasible and worthwhile 

Also note that the code you quote is using SimpleOrderedMap to override and 
ignore the json.nl setting.

> Deprecate solrj SimpleOrderedMap
> 
>
> Key: SOLR-12959
> URL: https://issues.apache.org/jira/browse/SOLR-12959
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrJ
>Reporter: Noble Paul
>Priority: Minor
>
> There is no difference between a NamedList and a SimpleOrderedMap. It 
> doesn't help to have both of them when they are doing exactly the same things.






[jira] [Comment Edited] (SOLR-12959) Deprecate solrj SimpleOrderedMap

2018-11-05 Thread Gus Heck (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12959?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16676061#comment-16676061
 ] 

Gus Heck edited comment on SOLR-12959 at 11/6/18 2:47 AM:
--

[~noble.paul] I agree that it would be very nice to be able to use a standard 
collection class instead of NamedList from a programmer convenience standpoint. 
I've had this thought many times too, but expected such a road to be bumpy and 
controversial and so I didn't start down it :).

It seems from the above that 2 things stand in the way (identified thus far):
 # [~ab] says the current classes were meant to save memory (something worth 
testing since java has evolved a great deal since they were created, named 
lists are mentioned in SOLR-17 so they clearly predate our bug tracking system)
 ## The handling of duplicate keys is a feature. Personally, if I were the user 
I'd want the above example [~hossman] gave with accidentally duplicated names 
to throw an error, not give me back something that *I would* probably 
subsequently throw away when I stuffed the results into a map or tried to build 
a JavaScript object out of it...

So it would be necessary to verify performance and find widespread agreement 
that that back-compatibility break is feasible and worthwhile.

Also note that the code you quote is using SimpleOrderedMap to override and 
ignore the json.nl setting.


was (Author: gus_heck):
[~noble.paul] I agree that it would be very nice to be able to use a standard 
collection class instead of NamedList from a programmer convenience standpoint. 
I've had this thought many times too, but expected such a road to be bumpy and 
controversial and so I didn't start down it :).

It seems from the above so far that 2 things stand in the way (identified thus 
far)
 # [~ab] says the current classes were meant to save memory (something worth 
testing since java has evolved a great deal since they were created, named 
lists are mentioned in SOLR-17 so they clearly predate our bug tracking system)
 # The handling of duplicate keys is a feature. Personally, if I were the user 
I'd want the above example hos gave with accidentally duplicated names to throw 
an error, not give me back something that *I would* probably subsequently throw 
away when I stuffed the results into a map or tried to build a JavaScript 
object out of it...

So it would be necessary to verify performance and find widespread agreement 
that that back compatibility break is feasible and worthwhile 

Also note that the code you quote is using SimpleOrderedMap to override and 
ignore the json.nl setting.

> Deprecate solrj SimpleOrderedMap
> 
>
> Key: SOLR-12959
> URL: https://issues.apache.org/jira/browse/SOLR-12959
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrJ
>Reporter: Noble Paul
>Priority: Minor
>
> There is no difference between a NamedList and a SimpleOrderedMap. It 
> doesn't help to have both of them when they are doing exactly the same things.






[jira] [Commented] (SOLR-12959) Deprecate solrj SimpleOrderedMap

2018-11-05 Thread Gus Heck (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12959?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16676061#comment-16676061
 ] 

Gus Heck commented on SOLR-12959:
-

[~noble.paul] I agree that it would be very nice to be able to use a standard 
collection class instead of NamedList from a programmer convenience standpoint. 
I've had this thought many times too, but expected such a road to be bumpy and 
controversial and so I didn't start down it :).

It seems from the above that 2 things stand in the way (identified thus far):
 # [~ab] says the current classes were meant to save memory (something worth 
testing since java has evolved a great deal since they were created, named 
lists are mentioned in SOLR-17 so they clearly predate our bug tracking system)
 # The handling of duplicate keys is a feature. Personally, if I were the user 
I'd want the above example [~hossman] gave with accidentally duplicated names to throw 
an error, not give me back something that *I would* probably subsequently throw 
away when I stuffed the results into a map or tried to build a JavaScript 
object out of it...

So it would be necessary to verify performance and find widespread agreement 
that that back-compatibility break is feasible and worthwhile.

Also note that the code you quote is using SimpleOrderedMap to override and 
ignore the json.nl setting.

> Deprecate solrj SimpleOrderedMap
> 
>
> Key: SOLR-12959
> URL: https://issues.apache.org/jira/browse/SOLR-12959
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrJ
>Reporter: Noble Paul
>Priority: Minor
>
> There is no difference between a NamedList and a SimpleOrderedMap. It 
> doesn't help to have both of them when they are doing exactly the same things.






[jira] [Comment Edited] (SOLR-12959) Deprecate solrj SimpleOrderedMap

2018-11-05 Thread Noble Paul (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12959?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16676003#comment-16676003
 ] 

Noble Paul edited comment on SOLR-12959 at 11/6/18 2:20 AM:


* In code, {{SimpleOrderedMap}} is exactly the same as {{NamedList}}
 * In javabin or XML or any other format, {{SimpleOrderedMap}} is exactly the 
same
 * In a {{JSON}} response, {{NamedList}} and {{SimpleOrderedMap}} are by 
default serialized exactly the same way
 * When we choose the {{namedListStyle}} to be something else, 
{{SimpleOrderedMap}} is serialized as {{JSON_NL_MAP}}.

The following is the code for serializing {{NamedList}} / {{SimpleOrderedMap}}:

{code:java}
default void writeNamedList(String name, NamedList val) throws IOException {
  String namedListStyle = getNamedListStyle();
  if (val instanceof SimpleOrderedMap) {
    writeNamedListAsMapWithDups(name, val);
  } else if (namedListStyle == JSON_NL_FLAT) {
    writeNamedListAsFlat(name, val);
  } else if (namedListStyle == JSON_NL_MAP) {
    writeNamedListAsMapWithDups(name, val);
  } else if (namedListStyle == JSON_NL_ARROFARR) {
    writeNamedListAsArrArr(name, val);
  } else if (namedListStyle == JSON_NL_ARROFMAP) {
    writeNamedListAsArrMap(name, val);
  } else if (namedListStyle == JSON_NL_ARROFNTV) {
    throw new UnsupportedOperationException(namedListStyle
        + " namedListStyle must only be used with ArrayOfNameTypeValueJSONWriter");
  }
}
{code}
So, again, what's the feature that we want to preserve here? If we replace 
{{SimpleOrderedMap}} with a {{LinkedHashMap}}, we get exactly the same 
behavior as we get today.

The problem with {{SimpleOrderedMap}} is that it doesn't guarantee that the 
keys are unique.


was (Author: noble.paul):
* In code {{SimpleOrderedMap}} is exactly same as {{NamedList}}
 * In javabin or XML or any other format , {{SimpleOrderedMap}} is exactly the 
same
 * In {{JSON}} response, the default behavior for both {{NamedList}} and 
{{SimpleOrderedMap}} are serialized exactly same way
 * When we choose the {{namedListStyle}} to be something else , 
{{SimpleOrderedMap}} is serialized as {{JSON_NL_MAP}} .

The following is the code for serializing {{NaMedList}} / {{SimpleOrderedMap}}

 
{code:java}
default void writeNamedList(String name, NamedList val) throws IOException {
String namedListStyle = getNamedListStyle();
if (val instanceof SimpleOrderedMap) {
  writeNamedListAsMapWithDups(name, val);
} else if (namedListStyle == JSON_NL_FLAT) {
  writeNamedListAsFlat(name, val);
} else if (namedListStyle == JSON_NL_MAP) {
  writeNamedListAsMapWithDups(name, val);
} else if (namedListStyle == JSON_NL_ARROFARR) {
  writeNamedListAsArrArr(name, val);
} else if (namedListStyle == JSON_NL_ARROFMAP) {
  writeNamedListAsArrMap(name, val);
} else if (namedListStyle == JSON_NL_ARROFNTV) {
  throw new UnsupportedOperationException(namedListStyle
  + " namedListStyle must only be used with 
ArrayOfNameTypeValueJSONWriter");
}
  }
{code}
So , again, what's the feature that we want to preserve here? If we replace 
{{SimpleOrderedMap}} with a {{LinkedHashMap}} , we get exactly the same 
behavior as we get today.

The problem with {{SimpleOrderedMap}} is that it doesn't guarantee that the 
keys are unique and

> Deprecate solrj SimpleOrderedMap
> 
>
> Key: SOLR-12959
> URL: https://issues.apache.org/jira/browse/SOLR-12959
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrJ
>Reporter: Noble Paul
>Priority: Minor
>
> There is no difference between a NamedList and a SimpleOrderedMap. It 
> doesn't help to have both of them when they are doing exactly the same things.






[jira] [Comment Edited] (SOLR-12959) Deprecate solrj SimpleOrderedMap

2018-11-05 Thread Noble Paul (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12959?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16676003#comment-16676003
 ] 

Noble Paul edited comment on SOLR-12959 at 11/6/18 2:20 AM:


* In code {{SimpleOrderedMap}} is exactly same as {{NamedList}}
 * In javabin or XML or any other format , {{SimpleOrderedMap}} is exactly the 
same
 * In {{JSON}} response, the default behavior for both {{NamedList}} and 
{{SimpleOrderedMap}} are serialized exactly same way
 * When we choose the {{namedListStyle}} to be something else , 
{{SimpleOrderedMap}} is serialized as {{JSON_NL_MAP}} .

The following is the code for serializing {{NamedList}} / {{SimpleOrderedMap}}

 
{code:java}
default void writeNamedList(String name, NamedList val) throws IOException {
String namedListStyle = getNamedListStyle();
if (val instanceof SimpleOrderedMap) {
  writeNamedListAsMapWithDups(name, val);
} else if (namedListStyle == JSON_NL_FLAT) {
  writeNamedListAsFlat(name, val);
} else if (namedListStyle == JSON_NL_MAP) {
  writeNamedListAsMapWithDups(name, val);
} else if (namedListStyle == JSON_NL_ARROFARR) {
  writeNamedListAsArrArr(name, val);
} else if (namedListStyle == JSON_NL_ARROFMAP) {
  writeNamedListAsArrMap(name, val);
} else if (namedListStyle == JSON_NL_ARROFNTV) {
  throw new UnsupportedOperationException(namedListStyle
      + " namedListStyle must only be used with ArrayOfNameTypeValueJSONWriter");
}
  }
{code}
So, again, what's the feature that we want to preserve here? If we replace 
{{SimpleOrderedMap}} with a {{LinkedHashMap}}, we get exactly the same 
behavior as we get today.

The problem with {{SimpleOrderedMap}} is that it doesn't guarantee that the 
keys are unique and
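
The uniqueness point can be illustrated with a minimal, self-contained sketch 
(plain Java with invented stand-in structures, not Solr's actual {{NamedList}} 
class): an ordered list of name/value pairs preserves duplicate keys, while a 
{{LinkedHashMap}} keeps insertion order but silently overwrites the earlier 
value.

{code:java}
import java.util.AbstractMap.SimpleEntry;
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class DupKeysDemo {
    public static void main(String[] args) {
        // NamedList-style: an ordered list of (name, value) pairs; duplicates allowed
        List<Map.Entry<String, Integer>> pairs = new ArrayList<>();
        pairs.add(new SimpleEntry<>("q", 1));
        pairs.add(new SimpleEntry<>("q", 2)); // the duplicate key is preserved

        // Map-style: insertion order is kept, but the second put overwrites the first
        Map<String, Integer> map = new LinkedHashMap<>();
        map.put("q", 1);
        map.put("q", 2);

        System.out.println(pairs.size()); // 2
        System.out.println(map.size());   // 1
    }
}
{code}
So a straight swap to {{LinkedHashMap}} is only behavior-preserving for 
responses that never carry duplicate names.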






[JENKINS-EA] Lucene-Solr-master-Linux (64bit/jdk-12-ea+12) - Build # 23160 - Unstable!

2018-11-05 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/23160/
Java: 64bit/jdk-12-ea+12 -XX:-UseCompressedOops -XX:+UseConcMarkSweepGC

18 tests failed.
FAILED:  
org.apache.solr.cloud.TestMiniSolrCloudClusterSSL.testSslAndNoClientAuth

Error Message:
Could not find collection:second_collection

Stack Trace:
java.lang.AssertionError: Could not find collection:second_collection
at 
__randomizedtesting.SeedInfo.seed([B38CAAF334D94B51:4F367EC7CCF9FA9B]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at org.junit.Assert.assertNotNull(Assert.java:526)
at 
org.apache.solr.cloud.AbstractDistribZkTestBase.waitForRecoveriesToFinish(AbstractDistribZkTestBase.java:155)
at 
org.apache.solr.cloud.TestMiniSolrCloudClusterSSL.checkCreateCollection(TestMiniSolrCloudClusterSSL.java:263)
at 
org.apache.solr.cloud.TestMiniSolrCloudClusterSSL.checkClusterWithCollectionCreations(TestMiniSolrCloudClusterSSL.java:249)
at 
org.apache.solr.cloud.TestMiniSolrCloudClusterSSL.checkClusterWithNodeReplacement(TestMiniSolrCloudClusterSSL.java:157)
at 
org.apache.solr.cloud.TestMiniSolrCloudClusterSSL.testSslAndNoClientAuth(TestMiniSolrCloudClusterSSL.java:121)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:566)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1742)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:935)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:971)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:985)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:944)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:830)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:891)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 

[jira] [Commented] (SOLR-12959) Deprecate solrj SimpleOrderedMap

2018-11-05 Thread Gus Heck (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12959?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16676005#comment-16676005
 ] 

Gus Heck commented on SOLR-12959:
-

Ah, just noticed the set of constants in JsonTextWriter... those plus a 
USER_SPECIFIED value (the default) could be used for the enum, and the value in 
the named list would override the value in json.nl, creating the same effect and 
giving us the option of using any of those formats where we want (without 
creating subclasses).




[jira] [Comment Edited] (SOLR-12959) Deprecate solrj SimpleOrderedMap

2018-11-05 Thread Noble Paul (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12959?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16676003#comment-16676003
 ] 

Noble Paul edited comment on SOLR-12959 at 11/6/18 2:08 AM:


* In code, {{SimpleOrderedMap}} is exactly the same as {{NamedList}}
 * In javabin, XML, or any other format, {{SimpleOrderedMap}} is exactly the 
same
 * In a {{JSON}} response, the default behavior is that both {{NamedList}} and 
{{SimpleOrderedMap}} are serialized in exactly the same way
 * When we choose the {{namedListStyle}} to be something else, 
{{SimpleOrderedMap}} is serialized as {{JSON_NL_MAP}}.

The following is the code for serializing {{NamedList}} / {{SimpleOrderedMap}}

 
{code:java}
default void writeNamedList(String name, NamedList val) throws IOException {
String namedListStyle = getNamedListStyle();
if (val instanceof SimpleOrderedMap) {
  writeNamedListAsMapWithDups(name, val);
} else if (namedListStyle == JSON_NL_FLAT) {
  writeNamedListAsFlat(name, val);
} else if (namedListStyle == JSON_NL_MAP) {
  writeNamedListAsMapWithDups(name, val);
} else if (namedListStyle == JSON_NL_ARROFARR) {
  writeNamedListAsArrArr(name, val);
} else if (namedListStyle == JSON_NL_ARROFMAP) {
  writeNamedListAsArrMap(name, val);
} else if (namedListStyle == JSON_NL_ARROFNTV) {
  throw new UnsupportedOperationException(namedListStyle
      + " namedListStyle must only be used with ArrayOfNameTypeValueJSONWriter");
}
  }
{code}

So, again, what's the feature that we want to preserve here? If we replace 
{{SimpleOrderedMap}} with a {{LinkedHashMap}}, we get exactly the same 
behavior as we get today.






[jira] [Commented] (SOLR-12959) Deprecate solrj SimpleOrderedMap

2018-11-05 Thread Noble Paul (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12959?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16676003#comment-16676003
 ] 

Noble Paul commented on SOLR-12959:
---

* In code, {{SimpleOrderedMap}} is exactly the same as {{NamedList}}
 * In javabin, XML, or any other format, {{SimpleOrderedMap}} is exactly the 
same
 * In a {{JSON}} response, the default behavior is that both {{NamedList}} and 
{{SimpleOrderedMap}} are serialized in exactly the same way
 * When we choose the {{namedListStyle}} to be something else, 
{{SimpleOrderedMap}} is serialized as {{JSON_NL_MAP}}.

The following is the code for serializing {{NamedList}} / {{SimpleOrderedMap}}

 
{code:java}
default void writeNamedList(String name, NamedList val) throws IOException {
String namedListStyle = getNamedListStyle();
if (val instanceof SimpleOrderedMap) {
  writeNamedListAsMapWithDups(name, val);
} else if (namedListStyle == JSON_NL_FLAT) {
  writeNamedListAsFlat(name, val);
} else if (namedListStyle == JSON_NL_MAP) {
  writeNamedListAsMapWithDups(name, val);
} else if (namedListStyle == JSON_NL_ARROFARR) {
  writeNamedListAsArrArr(name, val);
} else if (namedListStyle == JSON_NL_ARROFMAP) {
  writeNamedListAsArrMap(name, val);
} else if (namedListStyle == JSON_NL_ARROFNTV) {
  throw new UnsupportedOperationException(namedListStyle
  + " namedListStyle must only be used with 
ArrayOfNameTypeValueJSONWriter");
}
  }
{code}

So, again, what's the feature that we want to preserve here? If we replace 
{{SimpleOrderedMap}} with a {{LinkedHashMap}}, we get exactly the same 
behavior as we get today.




[jira] [Commented] (SOLR-12959) Deprecate solrj SimpleOrderedMap

2018-11-05 Thread Gus Heck (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12959?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16675999#comment-16675999
 ] 

Gus Heck commented on SOLR-12959:
-

This is interesting and enlightening :) Some thoughts:

1) I don't get the phrase  "order is _secondary_ in importance." When I read 
that in the javadoc I'm not sure (based on that phrase) if order is required or 
if it can be sacrificed in some refactoring later. Looking around will answer 
the question, but the phrase muddies the water that would otherwise be clear 
(imho). Either order is required or it's not...

2) It sounds like the subclass is solely a rendering hint and intended to be 
otherwise identical. I think if we leave things as is, the javadoc should say 
THAT explicitly (assuming I got it right).

3) It sounds like this *could* be a boolean flag on NamedList instead of a 
subclass? Or, if we want to leave other implementations open, an enum? Maybe 
NamedList.Render with values ARRAY_1D and ARRAY_TUPLES (and perhaps at some 
point ARRAY_2D or something else...) I'd also advocate staying away from the 
word "map", since the allowance of duplicate keys is extremely surprising for 
anything called a map:
{code:java}
"features":[
"adapter",2,
"car",2,
"power",2,
"white",2]
 {code}
{code:java}
 "features":
  [
{"adapter":2},
{"car":2},
{"power":2},
{"white":2}]
{code}
I searched for cases where we check "instanceof SimpleOrderedMap" and they all 
occur where we know we have a NamedList, so a boolean or enum attribute should 
work.

4) Finally, a question:  besides representing parameters from the GET request 
line, when do we handle/expect/use duplicate keys? Examples are not popping to 
mind...
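
As a concrete sketch of the enum idea in (3) — purely hypothetical; the 
{{Render}} name, values, and classes below are invented for illustration and 
are not an existing Solr/SolrJ API — the rendering hint could live on the list 
itself, and a writer could branch on it instead of on 
{{instanceof SimpleOrderedMap}}:

{code:java}
import java.util.AbstractMap.SimpleEntry;
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

public class RenderHintDemo {
    // Hypothetical hint; USER_SPECIFIED would defer to the json.nl request param
    enum Render { USER_SPECIFIED, ARRAY_1D, ARRAY_TUPLES }

    // Stand-in for NamedList: ordered name/value pairs plus the rendering hint
    static class NamedList<T> {
        final List<Map.Entry<String, T>> pairs = new ArrayList<>();
        Render render = Render.USER_SPECIFIED;
        void add(String name, T val) { pairs.add(new SimpleEntry<>(name, val)); }
    }

    // A writer keyed off the hint rather than the subclass type
    static String write(NamedList<Integer> nl) {
        StringBuilder sb = new StringBuilder("[");
        for (Map.Entry<String, Integer> e : nl.pairs) {
            if (sb.length() > 1) sb.append(",");
            if (nl.render == Render.ARRAY_TUPLES) {
                // one {"name":value} object per entry -- duplicates survive
                sb.append("{\"").append(e.getKey()).append("\":").append(e.getValue()).append("}");
            } else {
                // flat alternating name,value rendering
                sb.append("\"").append(e.getKey()).append("\",").append(e.getValue());
            }
        }
        return sb.append("]").toString();
    }

    public static void main(String[] args) {
        NamedList<Integer> nl = new NamedList<>();
        nl.add("adapter", 2);
        nl.add("car", 2);
        nl.render = Render.ARRAY_1D;
        System.out.println(write(nl));
        nl.render = Render.ARRAY_TUPLES;
        System.out.println(write(nl));
    }
}
{code}
This would make the flat vs. tuple renderings shown above a per-instance choice 
without any subclassing.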




[JENKINS] Lucene-Solr-master-MacOSX (64bit/jdk-9) - Build # 4914 - Still Unstable!

2018-11-05 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-MacOSX/4914/
Java: 64bit/jdk-9 -XX:-UseCompressedOops -XX:+UseParallelGC

6 tests failed.
FAILED:  
org.apache.solr.cloud.autoscaling.sim.TestSimPolicyCloud.testCreateCollectionAddReplica

Error Message:
Timeout waiting for collection to become active Live Nodes: 
[127.0.0.1:10001_solr, 127.0.0.1:10004_solr, 127.0.0.1:1_solr, 
127.0.0.1:10002_solr, 127.0.0.1:10003_solr] Last available state: 
DocCollection(testCreateCollectionAddReplica//clusterstate.json/49)={   
"replicationFactor":"1",   "pullReplicas":"0",   
"router":{"name":"compositeId"},   "maxShardsPerNode":"1",   
"autoAddReplicas":"false",   "nrtReplicas":"1",   "tlogReplicas":"0",   
"autoCreated":"true",   "policy":"c1",   "shards":{"shard1":{   
"replicas":{"core_node1":{   
"core":"testCreateCollectionAddReplica_shard1_replica_n1",   
"SEARCHER.searcher.maxDoc":0,   "SEARCHER.searcher.deletedDocs":0,  
 "INDEX.sizeInBytes":10240,   "node_name":"127.0.0.1:10003_solr",   
"state":"active",   "type":"NRT",   
"INDEX.sizeInGB":9.5367431640625E-6,   "SEARCHER.searcher.numDocs":0}}, 
  "range":"8000-7fff",   "state":"active"}}}

Stack Trace:
java.lang.AssertionError: Timeout waiting for collection to become active
Live Nodes: [127.0.0.1:10001_solr, 127.0.0.1:10004_solr, 127.0.0.1:1_solr, 
127.0.0.1:10002_solr, 127.0.0.1:10003_solr]
Last available state: 
DocCollection(testCreateCollectionAddReplica//clusterstate.json/49)={
  "replicationFactor":"1",
  "pullReplicas":"0",
  "router":{"name":"compositeId"},
  "maxShardsPerNode":"1",
  "autoAddReplicas":"false",
  "nrtReplicas":"1",
  "tlogReplicas":"0",
  "autoCreated":"true",
  "policy":"c1",
  "shards":{"shard1":{
  "replicas":{"core_node1":{
  "core":"testCreateCollectionAddReplica_shard1_replica_n1",
  "SEARCHER.searcher.maxDoc":0,
  "SEARCHER.searcher.deletedDocs":0,
  "INDEX.sizeInBytes":10240,
  "node_name":"127.0.0.1:10003_solr",
  "state":"active",
  "type":"NRT",
  "INDEX.sizeInGB":9.5367431640625E-6,
  "SEARCHER.searcher.numDocs":0}},
  "range":"8000-7fff",
  "state":"active"}}}
at 
__randomizedtesting.SeedInfo.seed([8380A4AA77D96278:3A0C184669A8ADE]:0)
at 
org.apache.solr.cloud.CloudTestUtils.waitForState(CloudTestUtils.java:70)
at 
org.apache.solr.cloud.autoscaling.sim.TestSimPolicyCloud.testCreateCollectionAddReplica(TestSimPolicyCloud.java:123)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:564)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1742)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:935)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:971)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:985)
at 
org.junit.rules.ExpectedException$ExpectedExceptionStatement.evaluate(ExpectedException.java:110)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:944)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:830)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:891)
  

[GitHub] lucene-solr pull request #492: Answer to TODO: Replace Manual Encoding with ...

2018-11-05 Thread MarcusSorealheis
GitHub user MarcusSorealheis opened a pull request:

https://github.com/apache/lucene-solr/pull/492

Answer to TODO: Replace Manual Encoding with JSON Module

This commit adds the python `json` module to replace manual json encoding.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/MarcusSorealheis/lucene-solr 
enhancement/example_todo_json_module

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/lucene-solr/pull/492.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #492


commit bf4d831c792946dee8a586c8659f1666d5ead208
Author: Marcus Eagan 
Date:   2018-11-06T01:27:56Z

added json module to replace manual json encoding.




---




Re: Welcome Tim Allison as a Lucene/Solr committer

2018-11-05 Thread Shalin Shekhar Mangar
Congratulations and welcome Tim!

On Fri, Nov 2, 2018 at 9:50 PM Erick Erickson 
wrote:

> Hi all,
>
> Please join me in welcoming Tim Allison as the latest Lucene/Solr
> committer!
>
> Congratulations and Welcome, Tim!
>
> It's traditional for you to introduce yourself with a brief bio.
>
> Erick
>
> -
> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> For additional commands, e-mail: dev-h...@lucene.apache.org
>
>

-- 
Regards,
Shalin Shekhar Mangar.


[jira] [Commented] (SOLR-12963) change default for 'uninvertible' to 'false' (dependent on new schema 'version')

2018-11-05 Thread Hoss Man (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12963?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16675941#comment-16675941
 ] 

Hoss Man commented on SOLR-12963:
-

marking this issue dependent on SOLR-12962 since obviously it makes no sense to 
change the default of an option unless/until the option is added.

> change default for 'uninvertible' to 'false' (dependent on new schema 
> 'version')
> 
>
> Key: SOLR-12963
> URL: https://issues.apache.org/jira/browse/SOLR-12963
> Project: Solr
>  Issue Type: Sub-task
>Reporter: Hoss Man
>Priority: Major
> Attachments: SOLR-12963.patch
>
>
> We should consider changing the default behavior of the {{uninvertible}} 
> field option to be dependent on the schema {{version}} property, such that 
> moving forward the fields/fieldtypes will default to {{uninvertible == 
> false}} unless an explicit {{uninvertible=true}} is specified by the user.
> There are a lot of considerations regarding the existing behavior of 
> functionality (like faceting) when the (effective) value of {{uninvertible}} 
> is false, before we move forward with changing this in a way that could 
> surprise/confuse new users or existing users w/ long-held expectations that 
> certain behavior would just "work" w/o understanding that was because of 
> FieldCache/uninversion.
> See parent issue for more background/discussion.






[jira] [Created] (SOLR-12963) change default for 'uninvertible' to 'false' (dependent on new schema 'version')

2018-11-05 Thread Hoss Man (JIRA)
Hoss Man created SOLR-12963:
---

 Summary: change default for 'uninvertible' to 'false' (dependent 
on new schema 'version')
 Key: SOLR-12963
 URL: https://issues.apache.org/jira/browse/SOLR-12963
 Project: Solr
  Issue Type: Sub-task
Reporter: Hoss Man


We should consider changing the default behavior of the {{uninvertible}} field 
option to be dependent on the schema {{version}} property, such that moving 
forward the fields/fieldtypes will default to {{uninvertible == false}} unless 
an explicit {{uninvertible=true}} is specified by the user.

There are a lot of considerations regarding the existing behavior of 
functionality (like faceting) when the (effective) value of {{uninvertible}} is 
false, before we move forward with changing this in a way that could 
surprise/confuse new users or existing users w/ long-held expectations that 
certain behavior would just "work" w/o understanding that was because of 
FieldCache/uninversion.

See parent issue for more background/discussion.







[jira] [Created] (SOLR-12962) add an 'uninvertible' field(type) option that defaults to "true"

2018-11-05 Thread Hoss Man (JIRA)
Hoss Man created SOLR-12962:
---

 Summary: add an 'uninvertible' field(type) option that defaults to 
"true"
 Key: SOLR-12962
 URL: https://issues.apache.org/jira/browse/SOLR-12962
 Project: Solr
  Issue Type: Sub-task
Reporter: Hoss Man


field & fieldtype declarations should support an {{uninvertible}} option (which 
defaults to "true") for backcompat that dictates wether or not Uninversion can 
be performed on fields.

See parent issue for more background/discussion.







[jira] [Commented] (SOLR-8273) deprecate implicitly uninverted fields, force people to either use docValues, or be explicit that they want query time uninversion

2018-11-05 Thread Hoss Man (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-8273?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16675931#comment-16675931
 ] 

Hoss Man commented on SOLR-8273:


At the Activate Conference last month I talked with some folks who have some 
very big Solr installations who mentioned that they have {{docValues="true"}} 
enabled on every fieldtype in their schema(s), not because they need/want to 
use them, but because it's the only way to ensure that a stray/mistaken request 
to sort/facet on one of these fields won't cause the heap usage to blow up 
building FieldCache – they wind up paying a huge indexing & disk usage cost for 
these docValues that they explicitly don't want!

That got me rethinking this issue, and how easy I remembered thinking it would 
be to add an {{uninvertible=false}} option for fieldTypes, and wanting to 
sanity check how hard the impl would actually be. I tried it out and the answer 
is "very easy" ... to the point that I'm incredibly embarrassed at the fact 
that we haven't done so yet.

I think we should *definitely* add {{uninvertible=false}} as an option in the 
soonest possible release...

... _however_ ...

... the more i look at it and how existing code deals with 
docValues/FieldCache, the less convinced I am that we should "rush" changing 
the default to {{uninvertible=false}} (when schema {{version > 1.6}} ). The key 
reasons for my hesitation have to do with the existing behavior of faceting 
(both SimpleFacets and JSON Facets) when dealing with fields that are 
{{docValues="false" indexed="false"}} – both the default behavior as well as 
what happens if you try to force an explicit facet algorithm (ie: 
{{facet.method=XXX}} and {{method: XXX}} ) on a field that is only indexed or 
only docValues, or neither ... the short version is we don't ever return an 
explicit error message if we can't facet on a field (in the method requested) 
we just return an empty list of buckets.

That existing behavior makes me very leery of changing the default FieldCache 
behavior – even dependent on a new {{version="1.7"}} for schemas – just because 
of how confusing it might be for new users, or existing users who create new 
collections using the new {{_default}} schema (not to mention users who might 
be reading old tutorials/docs/blogs/etc...).

I feel like _before_ we consider changing the default behavior, we should 
probably have a much more in depth conversation as a community about if/how we 
want to change the automatic facet method selection for fields based on if/when 
they are uninvertible, and if/how we want to "fail loudly" when an explicit 
method is provided by the user. ... *BUT* ... I still think we should ASAP 
provide the _option_ for users who *know* they don't want FieldCaches to be 
created to be able to say that – and give these users/fields facet behavior 
consistent with what would happen if they were {{indexed="false"}}

With that in mind, I'm going to create 2 sub-tasks for this jira, and attach 
the patch(es) with my work in progress so far (and associated "TODO" lists) for 
consideration.

I'm interested in feedback – not just on the patches (ideally as comments in 
the sub-task issues themselves), but also (here) if anyone has any specific 
concerns on the idea of splitting up my previous proposal such that: we can 
make this {{uninvertible=false}} option available ASAP (ideally in the next 
7.x release), while deferring the discussion on changing the default value to 
{{false}}

?
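
To make the FieldCache cost discussed above concrete, here is a minimal 
conceptual sketch — invented for illustration only; real Lucene does this via 
UninvertingReader/FieldCache, not this code — of what "uninverting" a field 
means: building a docID→value array, sized to the entire segment, out of 
term→docID postings. That per-segment array, rebuilt on reopen, is the heap 
usage a stray sort/facet request can trigger.

{code:java}
import java.util.HashMap;
import java.util.Map;

public class UninvertSketch {
    // Inverted index: term -> docIDs that contain it.
    // Uninversion flips it to docID -> term, allocating one slot per doc
    // in the segment -- even for docs that have no value for the field.
    static String[] uninvert(Map<String, int[]> invertedIndex, int maxDoc) {
        String[] docToTerm = new String[maxDoc];
        for (Map.Entry<String, int[]> e : invertedIndex.entrySet()) {
            for (int doc : e.getValue()) {
                docToTerm[doc] = e.getKey();
            }
        }
        return docToTerm;
    }

    public static void main(String[] args) {
        Map<String, int[]> idx = new HashMap<>();
        idx.put("red", new int[]{0, 2});
        idx.put("blue", new int[]{1});
        String[] fwd = uninvert(idx, 4); // doc 3 has no value
        System.out.println(fwd[0] + " " + fwd[1] + " " + fwd[2] + " " + fwd[3]);
    }
}
{code}
With {{docValues}}, the equivalent forward mapping is written at index time and 
read off disk, which is exactly the trade the users above are making to avoid 
this on-heap rebuild.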

> deprecate implicitly uninverted fields, force people to either use docValues, 
> or be explicit that they want query time uninversion
> --
>
> Key: SOLR-8273
> URL: https://issues.apache.org/jira/browse/SOLR-8273
> Project: Solr
>  Issue Type: Improvement
>  Components: Schema and Analysis
>Reporter: Hoss Man
>Priority: Major
>
> once upon a time, there was nothing we could do to *stop* people from using 
> the FieldCache - even if they didn't realize they were using it.
> Then DocValues was added - and now people have a choice: they can set 
> {{docValues=true}} on a field/fieldtype and know that when they do 
> functions/sorting/faceting on that field, it won't require a big hunk of ram 
> and a big stall every time a reader was reopened.  But it's easy to overlook 
> when clients might be doing something that required the FieldCache w/o 
> realizing it -- and there is no way to stop them, because Solr automatically 
> uses UninvertingReader under the covers and automatically allows every field 
> to be uninverted in this way.
> we should change that.
> 
> Straw man proposal...
> * introduce a new boolean fieldType/field property {{uninvertable}}
> * all existing FieldType classes should default to 

Re: Welcome Tim Allison as a Lucene/Solr committer

2018-11-05 Thread Koji Sekiguchi

Welcome Tim!

Koji


On 2018/11/03 1:20, Erick Erickson wrote:

Hi all,

Please join me in welcoming Tim Allison as the latest Lucene/Solr committer!

Congratulations and Welcome, Tim!

It's traditional for you to introduce yourself with a brief bio.

Erick

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org




-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-7.x-Linux (32bit/jdk1.8.0_172) - Build # 3042 - Unstable!

2018-11-05 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Linux/3042/
Java: 32bit/jdk1.8.0_172 -client -XX:+UseConcMarkSweepGC

5 tests failed.
FAILED:  
org.apache.solr.client.solrj.impl.CloudSolrClientTest.testCollectionDoesntExist

Error Message:
Tried fetching cluster state using the node names we knew of, i.e. 
[127.0.0.1:36721_solr, 127.0.0.1:40391_solr, 127.0.0.1:45983_solr]. However, 
succeeded in obtaining the cluster state from none of them.If you think your 
Solr cluster is up and is accessible, you could try re-creating a new 
CloudSolrClient using working solrUrl(s) or zkHost(s).

Stack Trace:
java.lang.RuntimeException: Tried fetching cluster state using the node names 
we knew of, i.e. [127.0.0.1:36721_solr, 127.0.0.1:40391_solr, 
127.0.0.1:45983_solr]. However, succeeded in obtaining the cluster state from 
none of them.If you think your Solr cluster is up and is accessible, you could 
try re-creating a new CloudSolrClient using working solrUrl(s) or zkHost(s).
at 
__randomizedtesting.SeedInfo.seed([5D01170EB7658BC2:3FA209A0B5E7EFD2]:0)
at 
org.apache.solr.client.solrj.impl.HttpClusterStateProvider.getState(HttpClusterStateProvider.java:108)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.resolveAliases(CloudSolrClient.java:1115)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:844)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:817)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:194)
at org.apache.solr.client.solrj.SolrClient.add(SolrClient.java:177)
at org.apache.solr.client.solrj.SolrClient.add(SolrClient.java:138)
at 
org.apache.solr.client.solrj.impl.CloudSolrClientTest.testCollectionDoesntExist(CloudSolrClientTest.java:779)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1742)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:935)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:971)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:985)
at 
org.junit.rules.ExpectedException$ExpectedExceptionStatement.evaluate(ExpectedException.java:110)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:944)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:830)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:891)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)

Re: Welcome Gus Heck as Lucene/Solr committer

2018-11-05 Thread Christian Moen
Congrats, Gus!

On Tue, Nov 6, 2018 at 9:11 AM Otis Gospodnetić 
wrote:

> Another welcome, Gus!
>
> Otis
> --
> Monitoring - Log Management - Alerting - Anomaly Detection
> Solr & Elasticsearch Consulting Support Training - http://sematext.com/
>
>
>
> On Thu, Nov 1, 2018 at 8:22 AM David Smiley 
> wrote:
>
>> Hi all,
>>
>> Please join me in welcoming Gus Heck as the latest Lucene/Solr committer!
>>
>> Congratulations and Welcome, Gus!
>>
>> Gus, it's traditional for you to introduce yourself with a brief bio.
>>
>> ~ David
>> --
>> Lucene/Solr Search Committer, Consultant, Developer, Author, Speaker
>> LinkedIn: http://linkedin.com/in/davidwsmiley | Book:
>> http://www.solrenterprisesearchserver.com
>>
>


[JENKINS] Lucene-Solr-repro - Build # 1870 - Unstable

2018-11-05 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-repro/1870/

[...truncated 36 lines...]
[repro] Jenkins log URL: 
https://builds.apache.org/job/Lucene-Solr-Tests-7.x/1008/consoleText

[repro] Revision: 9a53617e17649e8e0cb3cfc7a76348ba396871d3

[repro] Repro line:  ant test  -Dtestcase=DeleteReplicaTest 
-Dtests.method=raceConditionOnDeleteAndRegisterReplica 
-Dtests.seed=D77CBD56BDCCFD0B -Dtests.multiplier=2 -Dtests.slow=true 
-Dtests.locale=fr-CH -Dtests.timezone=America/Rankin_Inlet -Dtests.asserts=true 
-Dtests.file.encoding=ISO-8859-1

[repro] Repro line:  ant test  -Dtestcase=DeleteReplicaTest 
-Dtests.method=deleteLiveReplicaTest -Dtests.seed=D77CBD56BDCCFD0B 
-Dtests.multiplier=2 -Dtests.slow=true -Dtests.locale=fr-CH 
-Dtests.timezone=America/Rankin_Inlet -Dtests.asserts=true 
-Dtests.file.encoding=ISO-8859-1

[repro] Repro line:  ant test  -Dtestcase=DeleteReplicaTest 
-Dtests.method=deleteReplicaByCountForAllShards -Dtests.seed=D77CBD56BDCCFD0B 
-Dtests.multiplier=2 -Dtests.slow=true -Dtests.locale=fr-CH 
-Dtests.timezone=America/Rankin_Inlet -Dtests.asserts=true 
-Dtests.file.encoding=ISO-8859-1

[repro] Repro line:  ant test  -Dtestcase=TestHdfsCloudBackupRestore 
-Dtests.method=test -Dtests.seed=D77CBD56BDCCFD0B -Dtests.multiplier=2 
-Dtests.slow=true -Dtests.locale=mt -Dtests.timezone=America/Miquelon 
-Dtests.asserts=true -Dtests.file.encoding=ISO-8859-1

[repro] Repro line:  ant test  -Dtestcase=CloudSolrClientTest 
-Dtests.method=testCollectionDoesntExist -Dtests.seed=ADA50EADDADCAFF2 
-Dtests.multiplier=2 -Dtests.slow=true -Dtests.locale=ar-OM 
-Dtests.timezone=Africa/Casablanca -Dtests.asserts=true 
-Dtests.file.encoding=UTF-8

[repro] git rev-parse --abbrev-ref HEAD
[repro] git rev-parse HEAD
[repro] Initial local git branch/revision: 
be65b95e80fdddea109a9d850506d6c524911ecb
[repro] git fetch
[repro] git checkout 9a53617e17649e8e0cb3cfc7a76348ba396871d3

[...truncated 2 lines...]
[repro] git merge --ff-only

[...truncated 1 lines...]
[repro] ant clean

[...truncated 6 lines...]
[repro] Test suites by module:
[repro]solr/core
[repro]   TestHdfsCloudBackupRestore
[repro]   DeleteReplicaTest
[repro]solr/solrj
[repro]   CloudSolrClientTest
[repro] ant compile-test

[...truncated 3580 lines...]
[repro] ant test-nocompile -Dtests.dups=5 -Dtests.maxfailures=10 
-Dtests.class="*.TestHdfsCloudBackupRestore|*.DeleteReplicaTest" 
-Dtests.showOutput=onerror  -Dtests.seed=D77CBD56BDCCFD0B -Dtests.multiplier=2 
-Dtests.slow=true -Dtests.locale=mt -Dtests.timezone=America/Miquelon 
-Dtests.asserts=true -Dtests.file.encoding=ISO-8859-1

[...truncated 121 lines...]
[repro] ant compile-test

[...truncated 454 lines...]
[repro] ant test-nocompile -Dtests.dups=5 -Dtests.maxfailures=5 
-Dtests.class="*.CloudSolrClientTest" -Dtests.showOutput=onerror  
-Dtests.seed=ADA50EADDADCAFF2 -Dtests.multiplier=2 -Dtests.slow=true 
-Dtests.locale=ar-OM -Dtests.timezone=Africa/Casablanca -Dtests.asserts=true 
-Dtests.file.encoding=UTF-8

[...truncated 2115 lines...]
[repro] Setting last failure code to 256

[repro] Failures:
[repro]   0/5 failed: org.apache.solr.cloud.DeleteReplicaTest
[repro]   0/5 failed: 
org.apache.solr.cloud.api.collections.TestHdfsCloudBackupRestore
[repro]   5/5 failed: org.apache.solr.client.solrj.impl.CloudSolrClientTest

[repro] Re-testing 100% failures at the tip of branch_7x
[repro] git fetch
[repro] git checkout branch_7x

[...truncated 4 lines...]
[repro] git merge --ff-only

[...truncated 103 lines...]
[repro] ant clean

[...truncated 8 lines...]
[repro] Test suites by module:
[repro]solr/solrj
[repro]   CloudSolrClientTest
[repro] ant compile-test

[...truncated 2716 lines...]
[repro] ant test-nocompile -Dtests.dups=5 -Dtests.maxfailures=5 
-Dtests.class="*.CloudSolrClientTest" -Dtests.showOutput=onerror  
-Dtests.seed=ADA50EADDADCAFF2 -Dtests.multiplier=2 -Dtests.slow=true 
-Dtests.locale=ar-OM -Dtests.timezone=Africa/Casablanca -Dtests.asserts=true 
-Dtests.file.encoding=UTF-8

[...truncated 2121 lines...]
[repro] Setting last failure code to 256

[repro] Failures at the tip of branch_7x:
[repro]   5/5 failed: org.apache.solr.client.solrj.impl.CloudSolrClientTest

[repro] Re-testing 100% failures at the tip of branch_7x without a seed
[repro] ant clean

[...truncated 8 lines...]
[repro] Test suites by module:
[repro]solr/solrj
[repro]   CloudSolrClientTest
[repro] ant compile-test

[...truncated 2716 lines...]
[repro] ant test-nocompile -Dtests.dups=5 -Dtests.maxfailures=5 
-Dtests.class="*.CloudSolrClientTest" -Dtests.showOutput=onerror  
-Dtests.multiplier=2 -Dtests.slow=true -Dtests.locale=ar-OM 
-Dtests.timezone=Africa/Casablanca -Dtests.asserts=true 
-Dtests.file.encoding=UTF-8

[...truncated 1889 lines...]
[repro] Setting last failure code to 256

[repro] Failures at the tip of branch_7x without a seed:
[repro]   5/5 failed: org.apache.solr.client.solrj.impl.CloudSolrClientTest
[repro] git checkout be65b95e80fdddea109a9d850506d6c524911ecb


Re: Welcome Tim Allison as a Lucene/Solr committer

2018-11-05 Thread Otis Gospodnetić
Welcome, Tim!

Otis
--
Monitoring - Log Management - Alerting - Anomaly Detection
Solr & Elasticsearch Consulting Support Training - http://sematext.com/



On Fri, Nov 2, 2018 at 12:20 PM Erick Erickson 
wrote:

> Hi all,
>
> Please join me in welcoming Tim Allison as the latest Lucene/Solr
> committer!
>
> Congratulations and Welcome, Tim!
>
> It's traditional for you to introduce yourself with a brief bio.
>
> Erick
>
> -
> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> For additional commands, e-mail: dev-h...@lucene.apache.org
>
>


Re: Welcome Gus Heck as Lucene/Solr committer

2018-11-05 Thread Otis Gospodnetić
Another welcome, Gus!

Otis
--
Monitoring - Log Management - Alerting - Anomaly Detection
Solr & Elasticsearch Consulting Support Training - http://sematext.com/



On Thu, Nov 1, 2018 at 8:22 AM David Smiley 
wrote:

> Hi all,
>
> Please join me in welcoming Gus Heck as the latest Lucene/Solr committer!
>
> Congratulations and Welcome, Gus!
>
> Gus, it's traditional for you to introduce yourself with a brief bio.
>
> ~ David
> --
> Lucene/Solr Search Committer, Consultant, Developer, Author, Speaker
> LinkedIn: http://linkedin.com/in/davidwsmiley | Book:
> http://www.solrenterprisesearchserver.com
>


[JENKINS] Lucene-Solr-BadApples-Tests-master - Build # 203 - Still Unstable

2018-11-05 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-BadApples-Tests-master/203/

2 tests failed.
FAILED:  
org.apache.solr.cloud.autoscaling.ScheduledMaintenanceTriggerTest.testInactiveShardCleanup

Error Message:
missing cleanup event: [CapturedEvent{timestamp=23794640944133397, 
stage=STARTED, actionName='null', event={   
"id":"54891990d7e638Te3u72ksis0w5dymor7uerrp5u",   
"source":".scheduled_maintenance",   "eventTime":23794640940951096,   
"eventType":"SCHEDULED",   "properties":{ "actualEventTime":1541457240002,  
   "_enqueue_time_":23794640942975135}}, context={}, config={   
"trigger":".scheduled_maintenance",   "stage":[ "STARTED", "ABORTED",   
  "SUCCEEDED", "FAILED"],   "beforeAction":"inactive_shard_plan",   
"afterAction":"inactive_shard_plan",   
"class":"org.apache.solr.cloud.autoscaling.ScheduledMaintenanceTriggerTest$CapturingTriggerListener"},
 message='null'}, CapturedEvent{timestamp=23794640962219697, 
stage=BEFORE_ACTION, actionName='inactive_shard_plan', event={   
"id":"54891990d7e638Te3u72ksis0w5dymor7uerrp5u",   
"source":".scheduled_maintenance",   "eventTime":23794640940951096,   
"eventType":"SCHEDULED",   "properties":{ "actualEventTime":1541457240002,  
   "_enqueue_time_":23794640942975135}}, 
context={properties.BEFORE_ACTION=[inactive_shard_plan, execute_plan, test], 
source=.scheduled_maintenance}, config={   "trigger":".scheduled_maintenance",  
 "stage":[ "STARTED", "ABORTED", "SUCCEEDED", "FAILED"],   
"beforeAction":"inactive_shard_plan",   "afterAction":"inactive_shard_plan",   
"class":"org.apache.solr.cloud.autoscaling.ScheduledMaintenanceTriggerTest$CapturingTriggerListener"},
 message='null'}, CapturedEvent{timestamp=23794640987904710, 
stage=AFTER_ACTION, actionName='inactive_shard_plan', event={   
"id":"54891990d7e638Te3u72ksis0w5dymor7uerrp5u",   
"source":".scheduled_maintenance",   "eventTime":23794640940951096,   
"eventType":"SCHEDULED",   "properties":{ "actualEventTime":1541457240002,  
   "_enqueue_time_":23794640942975135}}, 
context={properties.BEFORE_ACTION=[inactive_shard_plan, execute_plan, test], 
source=.scheduled_maintenance, 
properties.inactive_shard_plan={staleLocks={ScheduledMaintenanceTriggerTest_collection1/staleShard-splitting={stateTimestamp=1541284439964380096,
 currentTimeNs=1541457240049575494, deltaSec=172800, ttlSec=20}}}, 
properties.AFTER_ACTION=[inactive_shard_plan, execute_plan, test]}, config={   
"trigger":".scheduled_maintenance",   "stage":[ "STARTED", "ABORTED",   
  "SUCCEEDED", "FAILED"],   "beforeAction":"inactive_shard_plan",   
"afterAction":"inactive_shard_plan",   
"class":"org.apache.solr.cloud.autoscaling.ScheduledMaintenanceTriggerTest$CapturingTriggerListener"},
 message='null'}, CapturedEvent{timestamp=23794640997247031, stage=SUCCEEDED, 
actionName='null', event={   "id":"54891990d7e638Te3u72ksis0w5dymor7uerrp5u",   
"source":".scheduled_maintenance",   "eventTime":23794640940951096,   
"eventType":"SCHEDULED",   "properties":{ "actualEventTime":1541457240002,  
   "_enqueue_time_":23794640942975135}}, context={}, config={   
"trigger":".scheduled_maintenance",   "stage":[ "STARTED", "ABORTED",   
  "SUCCEEDED", "FAILED"],   "beforeAction":"inactive_shard_plan",   
"afterAction":"inactive_shard_plan",   
"class":"org.apache.solr.cloud.autoscaling.ScheduledMaintenanceTriggerTest$CapturingTriggerListener"},
 message='null'}, CapturedEvent{timestamp=23794646066790241, stage=STARTED, 
actionName='null', event={   "id":"54891ac12a6f33Te3u72ksis0w5dymor7uerrp60",   
"source":".scheduled_maintenance",   "eventTime":23794646046633779,   
"eventType":"SCHEDULED",   "properties":{ "actualEventTime":1541457245108,  
   "_enqueue_time_":23794646046777671}}, context={}, config={   
"trigger":".scheduled_maintenance",   "stage":[ "STARTED", "ABORTED",   
  "SUCCEEDED", "FAILED"],   "beforeAction":"inactive_shard_plan",   
"afterAction":"inactive_shard_plan",   
"class":"org.apache.solr.cloud.autoscaling.ScheduledMaintenanceTriggerTest$CapturingTriggerListener"},
 message='null'}, CapturedEvent{timestamp=23794646079220872, 
stage=BEFORE_ACTION, actionName='inactive_shard_plan', event={   
"id":"54891ac12a6f33Te3u72ksis0w5dymor7uerrp60",   
"source":".scheduled_maintenance",   "eventTime":23794646046633779,   
"eventType":"SCHEDULED",   "properties":{ "actualEventTime":1541457245108,  
   "_enqueue_time_":23794646046777671}}, 
context={properties.BEFORE_ACTION=[inactive_shard_plan, execute_plan, test], 
source=.scheduled_maintenance}, config={   "trigger":".scheduled_maintenance",  
 "stage":[ "STARTED", "ABORTED", "SUCCEEDED", "FAILED"],   
"beforeAction":"inactive_shard_plan",   "afterAction":"inactive_shard_plan",   
"class":"org.apache.solr.cloud.autoscaling.ScheduledMaintenanceTriggerTest$CapturingTriggerListener"},
 message='null'}, CapturedEvent{timestamp=23794646082102414, 
stage=AFTER_ACTION, 

[JENKINS] Lucene-Solr-7.x-Windows (64bit/jdk-10.0.1) - Build # 873 - Still Unstable!

2018-11-05 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Windows/873/
Java: 64bit/jdk-10.0.1 -XX:-UseCompressedOops -XX:+UseConcMarkSweepGC

14 tests failed.
FAILED:  
org.apache.solr.cloud.autoscaling.TriggerIntegrationTest.testTriggerThrottling

Error Message:
Both triggers should have fired by now

Stack Trace:
java.lang.AssertionError: Both triggers should have fired by now
at 
__randomizedtesting.SeedInfo.seed([4740A97878135A13:BC62015DAAB9B981]:0)
at org.junit.Assert.fail(Assert.java:93)
at 
org.apache.solr.cloud.autoscaling.TriggerIntegrationTest.testTriggerThrottling(TriggerIntegrationTest.java:270)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:564)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1742)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:935)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:971)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:985)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:944)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:830)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:891)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.base/java.lang.Thread.run(Thread.java:844)


FAILED:  
org.apache.solr.cloud.autoscaling.TriggerIntegrationTest.testTriggerThrottling

Error Message:
Both triggers should have fired by now

Stack Trace:
java.lang.AssertionError: Both triggers should have fired by now
at 
__randomizedtesting.SeedInfo.seed([4740A97878135A13:BC62015DAAB9B981]:0)

[jira] [Commented] (SOLR-12959) Deprecate solrj SimpleOrderedMap

2018-11-05 Thread Noble Paul (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12959?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16675885#comment-16675885
 ] 

Noble Paul commented on SOLR-12959:
---

The order is never changed; it's always the same order as the NamedList. When 
you read it at the other end, using any of the ordered maps is good 
enough.

> Deprecate solrj SimpleOrderedMap
> 
>
> Key: SOLR-12959
> URL: https://issues.apache.org/jira/browse/SOLR-12959
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrJ
>Reporter: Noble Paul
>Priority: Minor
>
> There is no difference between a NamedList and a SimpleOrderedMap. It 
> doesn't help to have both of them when they are doing exactly the same things



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12959) Deprecate solrj SimpleOrderedMap

2018-11-05 Thread Hoss Man (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12959?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16675876#comment-16675876
 ] 

Hoss Man commented on SOLR-12959:
-

{quote}How many people use the other formats of JSON representation other than 
the simple object representation?
{quote}
anybody who cares about the order of facet counts is going to care if you force 
simple maps




[jira] [Commented] (SOLR-12959) Deprecate solrj SimpleOrderedMap

2018-11-05 Thread Hoss Man (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12959?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16675873#comment-16675873
 ] 

Hoss Man commented on SOLR-12959:
-

{quote}Neither echoParams (a SolrParams) nor the stored fields (a SolrDocument) 
are held as NamedLists and thus don't apply in your example.
{quote}
To be clear: echoParams _absolutely_ applies in my example ... it's *BECAUSE* 
the params are put in the response as a SimpleOrderedMap (which, as a reminder, 
is a _subclass_ of NamedList) that they are *always* returned to the json 
client as "simple (ordered) map" regardless of the value of {{json.nl}}

(but yes, SolrDocuments are an analogous structure in the response, not 
actually implemented as SimpleOrderedMaps ... sorry if that was misleading)
{quote}I wonder... in places where we are using SimpleOrderedMap in a response, 
and thus the "access by key" is most significant... (i.e. it's map-ness is most 
significant)... maybe we should just switch over to say LinkedHashMap? [...] 
the demands of the two seem to me to compete with each other: ease of access by 
key & repeated keys are kinda incompatible ...
{quote}
Not really ... the _allowance_ of duplicated keys goes back to the fact that 
NamedLists (which, reminder: SimpleOrderedMap is a subclass of) are 
fundamentally "lists of things which may have names" – and the names frequently 
come from users. We allow them to be duplicated because that's what the user 
asked for, and (it was decided long ago that) when it is in fact very easy to 
give the user what they asked for, that's better than silently/accidentally 
throwing away data by *only* using a (Linked)Map, or throwing an explicit 
error. 

For example: a user who accidentally uses the same key in two diff 
{{facet.field}} params...

{noformat}
$ curl 
'http://localhost:8983/solr/techproducts/select?q=ipod=0=true=%7B!key=xxx%7Dfeatures=%7B!key=xxx%7Dmanu_id_s=1=true=xml'

[XML response mangled by the archive: the facet_fields section held two 
sibling lst entries both named "xxx" – one per facet.field – each with a 
count of 2, showing that both duplicated keys come back, in order]
{noformat}

Should the faceting code hard fail on this (or silently drop one of them) 
because it *MIGHT* cause a problem/confusion in serialization to JSON ... even 
though the user may be using a format like XML where they don't actually care 
about the "key" and plan on consuming them in order?
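
To make the duplicate-key trade-off concrete, here is a minimal stand-alone sketch using plain JDK collections (no Solr on the classpath – the class, method names, and entry values are all made up for illustration of the NamedList-vs-LinkedHashMap behavior):

```java
import java.util.AbstractMap.SimpleEntry;
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class DuplicateKeyDemo {

    // NamedList-style container: an ordered list of (name, value) pairs,
    // so a duplicated name from the user simply yields two entries.
    static List<Map.Entry<String, Integer>> namedListStyle() {
        List<Map.Entry<String, Integer>> entries = new ArrayList<>();
        entries.add(new SimpleEntry<>("xxx", 2)); // {!key=xxx}features
        entries.add(new SimpleEntry<>("xxx", 2)); // {!key=xxx}manu_id_s
        return entries;
    }

    // LinkedHashMap keeps insertion order, but the second put("xxx", ...)
    // silently overwrites the first -- one facet result is lost.
    static Map<String, Integer> linkedHashMapStyle() {
        Map<String, Integer> map = new LinkedHashMap<>();
        map.put("xxx", 2);
        map.put("xxx", 2);
        return map;
    }

    public static void main(String[] args) {
        System.out.println(namedListStyle().size());     // 2 -> both facets kept
        System.out.println(linkedHashMapStyle().size()); // 1 -> one facet dropped
    }
}
```

That is the whole argument in miniature: the list-of-pairs shape preserves what the user asked for; any Map-based shape has to either drop data or throw.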





[jira] [Commented] (SOLR-12959) Deprecate solrj SimpleOrderedMap

2018-11-05 Thread Noble Paul (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12959?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16675872#comment-16675872
 ] 

Noble Paul commented on SOLR-12959:
---

Let's get this clear: only JSON makes a distinction between these two.

Javabin writes the response in exactly the same format irrespective of which 
one is used.

How many people use JSON representations other than the simple object 
representation?

I would say we should even get rid of these complex JSON representations.




[jira] [Commented] (SOLR-12959) Deprecate solrj SimpleOrderedMap

2018-11-05 Thread David Smiley (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12959?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16675863#comment-16675863
 ] 

David Smiley commented on SOLR-12959:
-

Neither echoParams (a SolrParams) nor the stored fields (a SolrDocument) are 
held as NamedLists and thus don't apply in your example. Nonetheless I get your 
drift.

I wonder... in places where we are using SimpleOrderedMap in a response, and 
thus the "access by key" is most significant... (i.e. its map-ness is most 
significant)... maybe we should just switch over to, say, LinkedHashMap? Keys 
cannot repeat in a LinkedHashMap, but the demands of the two seem to me to 
compete with each other: ease of access by key & repeated keys are kinda 
incompatible – better off using a list of values. I should spot-check some 
SimpleOrderedMap usages in Solr to see how easily they might be redone as 
LinkedHashMap.

As an aside, we're missing a "maparr" json.nl value that could represent values 
inside arrays.  But I suppose that could not be done in a streaming manner.
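
For anyone following along, the existing {{json.nl}} values already cover most of the spectrum being discussed here. Given a NamedList like {{[("a",1), ("a",2)]}}, the documented styles render roughly as follows (a sketch – exact whitespace will differ):

{noformat}
json.nl=flat    ["a",1,"a",2]
json.nl=map     {"a":1,"a":2}      <- duplicate keys collide in most JSON clients
json.nl=arrarr  [["a",1],["a",2]]
json.nl=arrmap  [{"a":1},{"a":2}]
{noformat}

The hypothetical "maparr" style would have to buffer the whole list to group values by key, which is why it resists streaming.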




Re: Call for help: moving from ant build to gradle

2018-11-05 Thread Yago Riveiro
Yago Riveiro smiled at you
Spark by Readdle


[JENKINS] Lucene-Solr-Tests-master - Build # 2932 - Still Unstable

2018-11-05 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-master/2932/

1 tests failed.
FAILED:  org.apache.solr.client.solrj.impl.CloudSolrClientTest.testRouting

Error Message:
Error from server at 
https://127.0.0.1:37124/solr/collection1_shard2_replica_n2: Expected mime type 
application/octet-stream but got text/html.Error 404 
Can not find: /solr/collection1_shard2_replica_n2/update  
HTTP ERROR 404 Problem accessing 
/solr/collection1_shard2_replica_n2/update. Reason: Can not find: 
/solr/collection1_shard2_replica_n2/updatehttp://eclipse.org/jetty;>Powered by Jetty:// 9.4.11.v20180605  
  

Stack Trace:
org.apache.solr.client.solrj.impl.CloudSolrClient$RouteException: Error from 
server at https://127.0.0.1:37124/solr/collection1_shard2_replica_n2: Expected 
mime type application/octet-stream but got text/html. 


Error 404 Can not find: 
/solr/collection1_shard2_replica_n2/update

HTTP ERROR 404
Problem accessing /solr/collection1_shard2_replica_n2/update. Reason:
Can not find: 
/solr/collection1_shard2_replica_n2/update (Powered by Jetty 9.4.11.v20180605)




at 
__randomizedtesting.SeedInfo.seed([EFDFC7C41472BA78:2D68FBAC17324A00]:0)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.directUpdate(CloudSolrClient.java:551)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1016)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:884)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:949)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:949)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:949)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:949)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:949)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:817)
at 
org.apache.solr.client.solrj.impl.CloudSolrClientTest.testRouting(CloudSolrClientTest.java:238)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1742)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:935)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:971)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:985)
at 
org.junit.rules.ExpectedException$ExpectedExceptionStatement.evaluate(ExpectedException.java:110)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:944)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:830)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:891)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)

[jira] [Updated] (SOLR-12795) Introduce 'rows' and 'offset' parameter in FacetStream

2018-11-05 Thread Joel Bernstein (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-12795?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-12795:
--
Attachment: SOLR-12795.patch

> Introduce 'rows' and 'offset' parameter in FacetStream
> --
>
> Key: SOLR-12795
> URL: https://issues.apache.org/jira/browse/SOLR-12795
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: streaming expressions
>Reporter: Amrit Sarkar
>Assignee: Joel Bernstein
>Priority: Major
> Attachments: SOLR-12795.patch, SOLR-12795.patch, SOLR-12795.patch, 
> SOLR-12795.patch, SOLR-12795.patch, SOLR-12795.patch, SOLR-12795.patch, 
> SOLR-12795.patch, SOLR-12795.patch, SOLR-12795.patch, SOLR-12795.patch
>
>
> Today facetStream takes a "bucketSizeLimit" parameter. Here is what the doc 
> says about this parameter -  The number of buckets to include. This value is 
> applied to each dimension.
> Now let's say we create a facet stream with 3 nested facets. For example 
> "year_i,month_i,day_i" and provide 10 as the bucketSizeLimit. 
> FacetStream would return 10 results to us for this facet expression while the 
> total number of unique values is 1000 (10*10*10).
> The API should have a separate parameter "limit" which limits the number of 
> tuples (say 500) while bucketSizeLimit should be used to specify the size of 
> each bucket in the JSON Facet API.
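The bucket arithmetic in the description above can be made concrete with a quick sketch (illustrative Python only; it assumes each dimension actually has at least 10 distinct values):

```python
# bucketSizeLimit is applied per dimension, so with 3 nested facet
# dimensions (year_i, month_i, day_i) and a limit of 10 each, the
# stream can emit up to 10 * 10 * 10 tuples.
bucket_size_limit = 10
dimensions = 3
max_tuples = bucket_size_limit ** dimensions
assert max_tuples == 1000

# A separate "limit" parameter (say 500) would cap the number of
# emitted tuples without shrinking the per-dimension bucket size.
proposed_limit = 500
emitted = min(max_tuples, proposed_limit)
assert emitted == 500
```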



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: Call for help: moving from ant build to gradle

2018-11-05 Thread Erick Erickson
Edward:

Of course. You may have to coordinate how to get your contributions
added to the patch is all.

I'd coordinate with Dat first though just for efficiency's sake. Just
work with the branch/gradle version of the code from the main Git
repo.
On Mon, Nov 5, 2018 at 2:22 PM Edward Ribeiro  wrote:
>
> Is this work open to contribution of non committers?
>
> Edward
>
>
> On Mon, Nov 5, 2018, 15:01 Gus Heck wrote:
>> I'm quite fond of gradle, and even wrote a very simple plugin for uploading 
>> and downloading solr configs to zookeeper from gradle. +1 to use gradle.
>>
>> I'll definitely check it out and give it a whirl, maybe I'll help some if I 
>> can.
>>
>> On Sun, Nov 4, 2018 at 2:13 PM Đạt Cao Mạnh  wrote:
>>>
>>> Hi guys,
>>>
>>> Recently, I had a chance to work on modifying the various build.xml files of our 
>>> project. To be honest, that was a painful experience, especially the number 
>>> of steps for adding a new module to our project. We have reached the limits 
>>> of Ant, and moving to Gradle seems like a good option since it has been 
>>> widely used in many projects. There are several benefits of the move 
>>> that I would like to mention:
>>> * Gradle's caching of task results makes running tasks much faster, 
>>> e.g. rerunning the forbiddenApi check in Gradle takes only 5 seconds (compared 
>>> to more than a minute with Ant).
>>> * Adding modules is much easier now.
>>> * Adding dependencies is a pleasure now since we don't have to run ant 
>>> clean-idea and ant idea all over again.
>>> * Natively supported by different IDEs.
>>>
>>> On my very boring long flight from Montreal back to Vietnam, I tried to 
>>> convert the Lucene/Solr Ant to Gradle, I finally achieved something here by 
>>> being able to import project and run tests natively from IntelliJ IDEA 
>>> (branch jira/gradle).
>>>
>>> I'm converting the ant precommit for Lucene to Gradle. But there are a lot of 
>>> things that need to be done here, and my limited understanding of our Ant 
>>> build and of Gradle may make the work take a long time to finish.
>>>
>>> Therefore, I really need help from the community to finish the work and we 
>>> will be able to move to a totally new, modern, powerful build tool.
>>>
>>> Thanks!
>>>
>>
>>
>> --
>> http://www.the111shift.com

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: Call for help: moving from ant build to gradle

2018-11-05 Thread Edward Ribeiro
Is this work open to contribution of non committers?

Edward


On Mon, Nov 5, 2018, 15:01 Gus Heck wrote:
> I'm quite fond of gradle, and even wrote a very simple plugin for
> uploading and downloading solr configs to zookeeper from gradle. +1 to use
> gradle.
>
> I'll definitely check it out and give it a whirl, maybe I'll help some if
> I can.
>
> On Sun, Nov 4, 2018 at 2:13 PM Đạt Cao Mạnh 
> wrote:
>
>> Hi guys,
>>
>> Recently, I had a chance to work on modifying the various build.xml files of
>> our project. To be honest, that was a painful experience, especially the
>> number of steps for adding a new module to our project. We have reached the
>> limits of Ant, and moving to Gradle seems like a good option since it
>> has been widely used in many projects. There are several benefits of the
>> move that I would like to mention:
>> * Gradle's caching of task results makes running tasks much
>> faster, e.g. rerunning the forbiddenApi check in Gradle takes only 5 seconds
>> (compared to more than a minute with Ant).
>> * Adding modules is much easier now.
>> * Adding dependencies is a pleasure now since we don't have to run ant
>> clean-idea and ant idea all over again.
>> * Natively supported by different IDEs.
>>
>> On my very boring long flight from Montreal back to Vietnam, I tried to
>> convert the Lucene/Solr Ant to Gradle, I finally achieved something here by
>> being able to import project and run tests natively from IntelliJ IDEA
>> (branch jira/gradle).
>>
>> I'm converting the ant precommit for Lucene to Gradle. But there are a lot of
>> things that need to be done here, and my limited understanding of our Ant
>> build and of Gradle may make the work take a long time to finish.
>>
>> Therefore, I really need help from the community to finish the work and
>> we will be able to move to a totally new, modern, powerful build tool.
>>
>> Thanks!
>>
>>
>
> --
> http://www.the111shift.com
>


Re: Welcome Tim Allison as a Lucene/Solr committer

2018-11-05 Thread Karl Wright
Welcome!
Karl

On Mon, Nov 5, 2018 at 1:39 PM Christine Poerschke (BLOOMBERG/ LONDON) <
cpoersc...@bloomberg.net> wrote:

> Welcome Tim!
>
> From: dev@lucene.apache.org At: 11/02/18 16:20:52
> To: dev@lucene.apache.org
> Subject: Welcome Tim Allison as a Lucene/Solr committer
>
> Hi all,
>
>
> Please join me in welcoming Tim Allison as the latest Lucene/Solr committer!
>
> Congratulations and Welcome, Tim!
>
> It's traditional for you to introduce yourself with a brief bio.
>
> Erick
>
> -
> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> For additional commands, e-mail: dev-h...@lucene.apache.org
>
>
>


[jira] [Commented] (SOLR-12959) Deprecate solrj SimpleOrderedMap

2018-11-05 Thread Hoss Man (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12959?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16675815#comment-16675815
 ] 

Hoss Man commented on SOLR-12959:
-

{quote}I imagine the reach will be broad – yet I hope no big deal 
(inconsequential). Users at least have "json.nl" at their disposal.
{quote}
Again {{json.nl}} is only useful *because* we have both of these impls ... to 
eliminate one impl and say "use json.nl to control the output" would make our 
result structures either 2x as complex to consume *OR* (in the flip case) useless 
for preserving order in most JSON client libraries.

 

> Deprecate solrj SimpleOrderedMap
> 
>
> Key: SOLR-12959
> URL: https://issues.apache.org/jira/browse/SOLR-12959
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrJ
>Reporter: Noble Paul
>Priority: Minor
>
> There is no difference between a NamedList and a SimpleOrderedMap. It 
> doesn't help to have both of them when they are doing exactly the same things



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12959) Deprecate solrj SimpleOrderedMap

2018-11-05 Thread Hoss Man (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12959?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16675811#comment-16675811
 ] 

Hoss Man commented on SOLR-12959:
-

{quote} I thought "json.nl" is what toggles these two representations: 
{quote}
As documented in the ref guide: {{json.nl}} is how users indicate how they 
would like Solr to deal with NamedLists *"where order is more important than 
access by name."* ... SimpleOrderedMap instances are returned by Solr in use 
cases where order is *NOT* more important than access by name.

These are not competing/duplicated classes ... it is not a "mistake" that we 
have & use both in diff places in the code (although it has been argued in the 
past that it's a mistake we have/use _either_ instead of requiring more 
type-safe objects).

NamedList exists as a way to store & return an *ordered* list of items which 
can have names (where the names are not required to be unique). SimpleOrderedMap 
was added as a subclass later as a way to indicate, when building up response 
structures, that while there is an order to the elements, that order is 
_secondary_ in importance to the key=>value mapping. (In situations where there 
is no ordering at all, a plain Map should absolutely be used.)

This allows us to have general-purpose response *structures* that can be 
agnostic to when/where the serialization happens – and the chosen 
serialization impl can preserve order whenever possible/convenient based on the 
format (ie: xml/javabin regardless of NamedList impl). But when dealing with 
some serialization formats / client libraries (ie: json/javascript), where it 
would be simpler/desirable in many cases to ignore the inherent ordering of 
the pairs (ie: stored fields in a document), we can do so while still having an 
option ("json.nl") for controlling/allowing a more verbose syntax when it's 
fundamental to the nature of the data (ie: facet term=>value mappings).

This difference is fundamental to *why* and how clients consuming JSON can 
get a simple "Map" style representation (where most JSON parsing libraries will 
throw away the ordering) of the overall response, of individual documents, 
or of the echoParams output – while still being able to retrieve a strictly 
ordered set of results for things like facet terms (where the representation 
can be varied by modifying {{json.nl}} *w/o affecting other ordered lists like 
documents, echoParams, etc.*).

Compare/contrast...
{noformat}
$ curl 
"http://localhost:8983/solr/techproducts/select?q=ipod&fl=id,name&facet=true&facet.field=features&facet.limit=4"
{
  "responseHeader":{
"status":0,
"QTime":2,
"params":{
  "q":"ipod",
  "facet.limit":"4",
  "facet.field":"features",
  "fl":"id,name",
  "facet":"true"}},
  "response":{"numFound":3,"start":0,"docs":[
  {
"id":"IW-02",
"name":"iPod & iPod Mini USB 2.0 Cable"},
  {
"id":"F8V7067-APL-KIT",
"name":"Belkin Mobile Power Cord for iPod w/ Dock"},
  {
"id":"MA147LL/A",
"name":"Apple 60 GB iPod with Video Playback Black"}]
  },
  "facet_counts":{
"facet_queries":{},
"facet_fields":{
  "features":[
"adapter",2,
"car",2,
"power",2,
"white",2]},
"facet_ranges":{},
"facet_intervals":{},
"facet_heatmaps":{}}}
{noformat}
{noformat}
$ curl 
"http://localhost:8983/solr/techproducts/select?q=ipod&fl=id,name&facet=true&facet.field=features&facet.limit=4&json.nl=arrmap"
{
  "responseHeader":{
"status":0,
"QTime":1,
"params":{
  "q":"ipod",
  "facet.limit":"4",
  "facet.field":"features",
  "json.nl":"arrmap",
  "fl":"id,name",
  "facet":"true"}},
  "response":{"numFound":3,"start":0,"docs":[
  {
"id":"IW-02",
"name":"iPod & iPod Mini USB 2.0 Cable"},
  {
"id":"F8V7067-APL-KIT",
"name":"Belkin Mobile Power Cord for iPod w/ Dock"},
  {
"id":"MA147LL/A",
"name":"Apple 60 GB iPod with Video Playback Black"}]
  },
  "facet_counts":{
"facet_queries":{},
"facet_fields":{
  "features":
  [
{"adapter":2},
{"car":2},
{"power":2},
{"white":2}]},
"facet_ranges":{},
"facet_intervals":{},
"facet_heatmaps":{}}}
{noformat}
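The practical difference for a JSON client can be sketched in a few lines of Python (illustrative only; the two literals below mirror the {{facet_fields}} shapes in the responses above, not live Solr output):

```python
import json

# Default json.nl style: facet terms arrive as a flat
# [name, count, name, count, ...] array, so order is preserved
# by every JSON parser.
flat = json.loads('{"features": ["adapter", 2, "car", 2, "power", 2, "white", 2]}')
pairs = list(zip(flat["features"][::2], flat["features"][1::2]))
assert pairs == [("adapter", 2), ("car", 2), ("power", 2), ("white", 2)]

# json.nl=arrmap style: an array of single-entry objects -- still
# ordered, but each entry reads like a tiny map.
arrmap = json.loads(
    '{"features": [{"adapter": 2}, {"car": 2}, {"power": 2}, {"white": 2}]}')
ordered = [(k, v) for entry in arrmap["features"] for k, v in entry.items()]
assert ordered == pairs

# By contrast, if the same data were emitted as one plain JSON object
# ("map" style), many client libraries would be free to discard the order.
```

Both styles preserve ordering at the cost of a more verbose structure, which is exactly the trade-off the comment describes.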

> Deprecate solrj SimpleOrderedMap
> 
>
> Key: SOLR-12959
> URL: https://issues.apache.org/jira/browse/SOLR-12959
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrJ
>Reporter: Noble Paul
>Priority: Minor
>
> There is no difference between a NamedList and a SimpleOrderedMap. It 
> doesn't help to have both of them when they are doing exactly the same things



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To 

[JENKINS] Lucene-Solr-master-Windows (32bit/jdk1.8.0_172) - Build # 7607 - Still Unstable!

2018-11-05 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Windows/7607/
Java: 32bit/jdk1.8.0_172 -client -XX:+UseSerialGC

9 tests failed.
FAILED:  
org.apache.solr.cloud.DeleteReplicaTest.raceConditionOnDeleteAndRegisterReplica

Error Message:
Expected new active leader null Live Nodes: [127.0.0.1:54884_solr, 
127.0.0.1:54885_solr, 127.0.0.1:54890_solr] Last available state: 
DocCollection(raceDeleteReplica_true//collections/raceDeleteReplica_true/state.json/13)={
   "pullReplicas":"0",   "replicationFactor":"2",   "shards":{"shard1":{   
"range":"8000-7fff",   "state":"active",   "replicas":{ 
"core_node4":{   "core":"raceDeleteReplica_true_shard1_replica_n2", 
"base_url":"http://127.0.0.1:54899/solr",   
"node_name":"127.0.0.1:54899_solr",   "state":"down",   
"type":"NRT",   "leader":"true"}, "core_node6":{   
"core":"raceDeleteReplica_true_shard1_replica_n5",   
"base_url":"http://127.0.0.1:54899/solr",   
"node_name":"127.0.0.1:54899_solr",   "state":"down",   
"type":"NRT"}, "core_node3":{   
"core":"raceDeleteReplica_true_shard1_replica_n1",   
"base_url":"http://127.0.0.1:54885/solr",   
"node_name":"127.0.0.1:54885_solr",   "state":"down",   
"type":"NRT",   "router":{"name":"compositeId"},   "maxShardsPerNode":"1",  
 "autoAddReplicas":"false",   "nrtReplicas":"2",   "tlogReplicas":"0"}

Stack Trace:
java.lang.AssertionError: Expected new active leader
null
Live Nodes: [127.0.0.1:54884_solr, 127.0.0.1:54885_solr, 127.0.0.1:54890_solr]
Last available state: 
DocCollection(raceDeleteReplica_true//collections/raceDeleteReplica_true/state.json/13)={
  "pullReplicas":"0",
  "replicationFactor":"2",
  "shards":{"shard1":{
  "range":"8000-7fff",
  "state":"active",
  "replicas":{
"core_node4":{
  "core":"raceDeleteReplica_true_shard1_replica_n2",
  "base_url":"http://127.0.0.1:54899/solr",
  "node_name":"127.0.0.1:54899_solr",
  "state":"down",
  "type":"NRT",
  "leader":"true"},
"core_node6":{
  "core":"raceDeleteReplica_true_shard1_replica_n5",
  "base_url":"http://127.0.0.1:54899/solr",
  "node_name":"127.0.0.1:54899_solr",
  "state":"down",
  "type":"NRT"},
"core_node3":{
  "core":"raceDeleteReplica_true_shard1_replica_n1",
  "base_url":"http://127.0.0.1:54885/solr",
  "node_name":"127.0.0.1:54885_solr",
  "state":"down",
  "type":"NRT",
  "router":{"name":"compositeId"},
  "maxShardsPerNode":"1",
  "autoAddReplicas":"false",
  "nrtReplicas":"2",
  "tlogReplicas":"0"}
at 
__randomizedtesting.SeedInfo.seed([5D2A6F2AB33C0BFE:373C0EFADBCE4134]:0)
at org.junit.Assert.fail(Assert.java:93)
at 
org.apache.solr.cloud.SolrCloudTestCase.waitForState(SolrCloudTestCase.java:280)
at 
org.apache.solr.cloud.DeleteReplicaTest.raceConditionOnDeleteAndRegisterReplica(DeleteReplicaTest.java:328)
at 
org.apache.solr.cloud.DeleteReplicaTest.raceConditionOnDeleteAndRegisterReplica(DeleteReplicaTest.java:223)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1742)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:935)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:971)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:985)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 

[jira] [Comment Edited] (SOLR-12947) SolrJ Helper for JSON Request API

2018-11-05 Thread Jason Gerlowski (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12947?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16674493#comment-16674493
 ] 

Jason Gerlowski edited comment on SOLR-12947 at 11/5/18 9:10 PM:
-

Thanks for the feedback guys.

Thanks for the pointer Teny.  The stuff I have planned for faceting looks very 
similar to what you've got there.  The main point of divergence is that the 
patch I have attached subclasses QueryRequest (rather than the higher-level 
SolrRequest as "facet-helper" does).  I made this choice when I was writing it 
last week because I wanted the response objects to still have all the getters 
that QueryResponse has.  A JsonQueryRequest can pass arbitrary params under the 
"params" JSON property.  If users take advantage of this, then they would 
probably like the response from a JsonQueryRequest to still have all the 
getters that they're used to for parsing out other common response values 
(highlighting info, regular faceting info, etc.).  But there might be other 
ways around this.  I'll think about it and maybe we could use your stuff more 
directly.  In any case, thanks for the pointer as it helped me organize my 
thoughts.

And agreed Jan.  The JsonQueryRequest right now takes in a Map<> to represent 
more complex facets, which still leaves the user in charge of knowing the 
syntax/structure of facets.  Builders or other objects that abstract users 
further away from building the Map themselves would definitely help here.


was (Author: gerlowskija):
Thanks for the feedback guys.

Thanks for the pointer Teny.  The stuff I have planned for faceting looks very 
similar to what you've got there.  The main point of divergence is that the 
patch I have attached subclasses QueryRequest (rather than the higher-level 
SolrRequest as "facet-helper" does).  I made this choice when I was writing it 
last week because I wanted the response objects to still have all the getters 
that QueryResponse has.  A JsonQueryRequest can pass arbitrary params under the 
"params" JSON property.  If users take advantage of this, then they would 
probably like the response from a JsonQueryRequest to still have all the 
getters that they're used to for parsing out other common response values 
(highlighting info, regular faceting info, etc.).  But there might be other 
ways around this.  I'll think about it and maybe we could use your stuff more 
directly.  In any case, thanks for the pointer as it helped me organize my 
thoughts.

And agreed Jan.  The JsonQueryRequest right now takes in a Map<> to represent 
more complex queries, which still leaves the user in charge of knowing the 
syntax/structure of facets.  Builders or other objects that abstract users 
further away from building the Map themselves would definitely help here.
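As context for the discussion above, the request bodies in question are just nested maps that get serialized to JSON; a minimal Python sketch of such a body (the field values are illustrative; the body layout follows the JSON Request API, which is what a JsonQueryRequest helper would assemble for the user):

```python
import json

# Build a JSON Request API body as plain nested dicts.
body = {
    "query": "name:ipod",
    "limit": 5,
    "filter": ["inStock:true"],
    # Arbitrary classic parameters can ride along under "params",
    # which is why the response still needs its usual getters.
    "params": {"debug": "timing"},
}

# This payload would be POSTed to /solr/<collection>/select
# with Content-Type: application/json.
payload = json.dumps(body)
assert json.loads(payload)["query"] == "name:ipod"
```

Builder objects would replace the hand-built dicts (especially for facets) so users no longer need to know the raw syntax.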

> SolrJ Helper for JSON Request API
> -
>
> Key: SOLR-12947
> URL: https://issues.apache.org/jira/browse/SOLR-12947
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: clients - java, SolrJ
>Affects Versions: 7.5
>Reporter: Jason Gerlowski
>Assignee: Jason Gerlowski
>Priority: Minor
> Attachments: SOLR-12947.patch, SOLR-12947.patch
>
>
> The JSON request API is becoming increasingly popular for querying and for 
> accessing the JSON faceting functionality. The query DSL is simple and easy 
> to understand, but crafting requests programmatically is tough in SolrJ. 
> Currently, SolrJ users must hardcode in the JSON body they want their request 
> to convey.  Nothing helps them build the JSON request they're going for, 
> making use of these APIs manual and painful.
> We should see what we can do to alleviate this.  I'd like to tackle this work 
> in two pieces.  This (the first piece) would introduce classes that make it 
> easier to craft non-faceting requests that use the JSON Request API.  
> Improving JSON Faceting support is a bit more involved (it likely requires 
> improvements to the Response as well as the Request objects), so I'll aim to 
> tackle that in a separate JIRA to keep things moving.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-12947) SolrJ Helper for JSON Request API

2018-11-05 Thread Jason Gerlowski (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12947?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16675442#comment-16675442
 ] 

Jason Gerlowski edited comment on SOLR-12947 at 11/5/18 9:10 PM:
-

Updated patch moves classes into {{org.apache.solr.client.solrj.request.json}} 
and makes a few other minor tweaks.  Will commit soon if there's no objections.

Still on-the-way is faceting support and facet-builder types, but I think those 
are big enough for separate JIRAs, or at least commits.


was (Author: gerlowskija):
Updated patch moves classes into {{org.apache.solr.client.solrj.request.json}} 
and makes a few other minor tweaks.  Will commit soon if there's no objections.

Still on-the-way is faceting support and query-builder types, but I think those 
are big enough for separate JIRAs, or at least commits.

> SolrJ Helper for JSON Request API
> -
>
> Key: SOLR-12947
> URL: https://issues.apache.org/jira/browse/SOLR-12947
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: clients - java, SolrJ
>Affects Versions: 7.5
>Reporter: Jason Gerlowski
>Assignee: Jason Gerlowski
>Priority: Minor
> Attachments: SOLR-12947.patch, SOLR-12947.patch
>
>
> The JSON request API is becoming increasingly popular for querying and for 
> accessing the JSON faceting functionality. The query DSL is simple and easy 
> to understand, but crafting requests programmatically is tough in SolrJ. 
> Currently, SolrJ users must hardcode in the JSON body they want their request 
> to convey.  Nothing helps them build the JSON request they're going for, 
> making use of these APIs manual and painful.
> We should see what we can do to alleviate this.  I'd like to tackle this work 
> in two pieces.  This (the first piece) would introduce classes that make it 
> easier to craft non-faceting requests that use the JSON Request API.  
> Improving JSON Faceting support is a bit more involved (it likely requires 
> improvements to the Response as well as the Request objects), so I'll aim to 
> tackle that in a separate JIRA to keep things moving.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12959) Deprecate solrj SimpleOrderedMap

2018-11-05 Thread David Smiley (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12959?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16675770#comment-16675770
 ] 

David Smiley commented on SOLR-12959:
-

bq. Right, but there's also the default behavior that would be different with 
this change.

Sure; 8.0 is fine.  

I think the first step is only to make the output agnostic of the NamedList 
subclass impl and then see what tests break and then better understand the 
scope of the change (reporting back here first for consideration before 
proceeding with removal).  SimpleOrderedMap is used all over the place so I 
imagine the reach will be broad -- yet I hope no big deal (inconsequential). 
Users at least have "json.nl" at their disposal.

> Deprecate solrj SimpleOrderedMap
> 
>
> Key: SOLR-12959
> URL: https://issues.apache.org/jira/browse/SOLR-12959
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrJ
>Reporter: Noble Paul
>Priority: Minor
>
> There is no difference between a NamedList and a SimpleOrderedMap. It 
> doesn't help to have both of them when they are doing exactly the same things



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-12698) SolrFeature: no-fq optimisation

2018-11-05 Thread Christine Poerschke (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12698?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16675743#comment-16675743
 ] 

Christine Poerschke edited comment on SOLR-12698 at 11/5/18 8:58 PM:
-

Started to look at the initial SOLR-12698.patch in conjunction with the 
existing code; this is how far I got today:
* 
[SolrFeatureWeight.scorer|https://github.com/apache/lucene-solr/blob/releases/lucene-solr/7.5.0/solr/contrib/ltr/src/java/org/apache/solr/ltr/feature/SolrFeature.java#L218-L234]
 is the method being changed.
* existing code:
** 
https://github.com/apache/lucene-solr/blob/releases/lucene-solr/7.5.0/solr/contrib/ltr/src/java/org/apache/solr/ltr/feature/SolrFeature.java#L175-L184
 illustrates when/why {{SolrFeatureWeight.solrQueryWeight}} might be null.
** If {{SolrFeatureWeight.queryAndFilters}} is empty (because there was no 
{{fq}} and {{q}} (if present) resulted in a null {{solrQueryWeight}}) then 
{{getDocIdSetIteratorFromQueries}} will return null and 
{{SolrFeatureWeight.scorer}} will return null.
** If {{SolrFeatureWeight.queryAndFilters}} contains one element (because there 
was no {{fq}} but there was a {{q}} which resulted in a non-null 
{{solrQueryWeight}}) then ... \[to be continued\]


was (Author: cpoerschke):
Started to look at the initial SOLR-12698.patch in conjunction with the 
existing code; this is how far i got so far for today:
* 
[SolrFeatureWeight.scorer|https://github.com/apache/lucene-solr/blob/releases/lucene-solr/7.5.0/solr/contrib/ltr/src/java/org/apache/solr/ltr/feature/SolrFeature.java#L218-L234]
 is the method being changed.
* existing code:

** 
https://github.com/apache/lucene-solr/blob/releases/lucene-solr/7.5.0/solr/contrib/ltr/src/java/org/apache/solr/ltr/feature/SolrFeature.java#L175-L184
 illustrates when/why {{SolrFeatureWeight.solrQueryWeight}} might be null.

** If {{SolrFeatureWeight.queryAndFilters}} is empty (because there was no 
{{fq}} and {{q}} (if present) resulted in a null {{solrQueryWeight}}) then 
{{getDocIdSetIteratorFromQueries}} will return null and 
{{SolrFeatureWeight.scorer}} will return null.

** If {{SolrFeatureWeight.queryAndFilters}} contains one element (because there 
was no {{fq}} but there was a {{q}} which resulted in a non-null 
{{solrQueryWeight}}) then ... \[to be continued\]

> SolrFeature: no-fq optimisation
> ---
>
> Key: SOLR-12698
> URL: https://issues.apache.org/jira/browse/SOLR-12698
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: contrib - LTR
>Reporter: Stanislav Livotov
>Priority: Major
> Attachments: SOLR-12698.patch
>
>
> [~slivotov] wrote in SOLR-12688:
> bq. ... SolrFeature was not optimally implemented for the case when no fq 
> parameter was passed. I'm not absolutely sure what was the intention to 
> introduce both q(which is supposed to be a function query) and fq parameter 
> for the same SolrFeature at all(Is there a case when they will be used 
> together ? ), so I decided not to change behavior but just optimize described 
> case ...
> (Please see SOLR-12688 description for overall context and analysis results.)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-12699) make LTRScoringModel immutable (to allow hashCode caching)

2018-11-05 Thread Christine Poerschke (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-12699?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Christine Poerschke resolved SOLR-12699.

   Resolution: Fixed
 Assignee: Christine Poerschke
Fix Version/s: master (8.0)
   7.6

Thanks [~slivotov] and [~eribeiro]!

> make LTRScoringModel immutable (to allow hashCode caching)
> --
>
> Key: SOLR-12699
> URL: https://issues.apache.org/jira/browse/SOLR-12699
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: contrib - LTR
>Reporter: Stanislav Livotov
>Assignee: Christine Poerschke
>Priority: Major
> Fix For: 7.6, master (8.0)
>
> Attachments: SOLR-12699.patch, SOLR-12699.patch, SOLR-12699.patch, 
> SOLR-12699.patch
>
>
> [~slivotov] wrote in SOLR-12688:
> bq. ... LTRScoringModel was a mutable object. It was leading to the 
> calculation of hashcode on each query, which in turn can consume a lot of 
> time ... So I decided to make LTRScoringModel immutable and cache hashCode 
> calculation. ...
> (Please see SOLR-12688 description for overall context and analysis results.)






[jira] [Commented] (SOLR-12699) make LTRScoringModel immutable (to allow hashCode caching)

2018-11-05 Thread Christine Poerschke (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12699?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16675750#comment-16675750
 ] 

Christine Poerschke commented on SOLR-12699:


bq. ... I took a look at your patch and I don't have any concerns from my side. 
...

Thanks for taking a look, and for the indirect ping on this ticket! The two 
commits above are for master and branch_7x respectively, the latter for the 
upcoming 7.6 release.

> make LTRScoringModel immutable (to allow hashCode caching)
> --
>
> Key: SOLR-12699
> URL: https://issues.apache.org/jira/browse/SOLR-12699
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: contrib - LTR
>Reporter: Stanislav Livotov
>Priority: Major
> Fix For: 7.6, master (8.0)
>
> Attachments: SOLR-12699.patch, SOLR-12699.patch, SOLR-12699.patch, 
> SOLR-12699.patch
>
>
> [~slivotov] wrote in SOLR-12688:
> bq. ... LTRScoringModel was a mutable object. It was leading to the 
> calculation of hashcode on each query, which in turn can consume a lot of 
> time ... So I decided to make LTRScoringModel immutable and cache hashCode 
> calculation. ...
> (Please see SOLR-12688 description for overall context and analysis results.)






[JENKINS] Lucene-Solr-repro - Build # 1868 - Unstable

2018-11-05 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-repro/1868/

[...truncated 28 lines...]
[repro] Jenkins log URL: 
https://builds.apache.org/job/Lucene-Solr-BadApples-Tests-7.x/208/consoleText

[repro] Revision: 5ad78734384104d7e26d51917d04936b849a692d

[repro] Repro line:  ant test  -Dtestcase=TestSolrCloudWithKerberosAlt 
-Dtests.method=testBasics -Dtests.seed=F5ED2FA789C57E6A -Dtests.multiplier=2 
-Dtests.slow=true -Dtests.badapples=true -Dtests.locale=lv 
-Dtests.timezone=Etc/GMT-6 -Dtests.asserts=true -Dtests.file.encoding=US-ASCII

[repro] Repro line:  ant test  -Dtestcase=NodeLostTriggerTest 
-Dtests.method=testTrigger -Dtests.seed=F5ED2FA789C57E6A -Dtests.multiplier=2 
-Dtests.slow=true -Dtests.badapples=true -Dtests.locale=en-MT 
-Dtests.timezone=Asia/Singapore -Dtests.asserts=true 
-Dtests.file.encoding=US-ASCII

[repro] Repro line:  ant test  -Dtestcase=ScheduledMaintenanceTriggerTest 
-Dtests.method=testInactiveShardCleanup -Dtests.seed=F5ED2FA789C57E6A 
-Dtests.multiplier=2 -Dtests.slow=true -Dtests.badapples=true 
-Dtests.locale=es-EC -Dtests.timezone=America/Yakutat -Dtests.asserts=true 
-Dtests.file.encoding=US-ASCII

[repro] Repro line:  ant test  -Dtestcase=CloudSolrClientTest 
-Dtests.method=testCollectionDoesntExist -Dtests.seed=66A19153D7FA998E 
-Dtests.multiplier=2 -Dtests.slow=true -Dtests.badapples=true 
-Dtests.locale=ar-LY -Dtests.timezone=Pacific/Tarawa -Dtests.asserts=true 
-Dtests.file.encoding=UTF-8

[repro] git rev-parse --abbrev-ref HEAD
[repro] git rev-parse HEAD
[repro] Initial local git branch/revision: 
be65b95e80fdddea109a9d850506d6c524911ecb
[repro] git fetch
[repro] git checkout 5ad78734384104d7e26d51917d04936b849a692d

[...truncated 2 lines...]
[repro] git merge --ff-only

[...truncated 1 lines...]
[repro] ant clean

[...truncated 6 lines...]
[repro] Test suites by module:
[repro]solr/core
[repro]   TestSolrCloudWithKerberosAlt
[repro]   NodeLostTriggerTest
[repro]   ScheduledMaintenanceTriggerTest
[repro]solr/solrj
[repro]   CloudSolrClientTest
[repro] ant compile-test

[...truncated 3580 lines...]
[repro] ant test-nocompile -Dtests.dups=5 -Dtests.maxfailures=15 
-Dtests.class="*.TestSolrCloudWithKerberosAlt|*.NodeLostTriggerTest|*.ScheduledMaintenanceTriggerTest"
 -Dtests.showOutput=onerror  -Dtests.seed=F5ED2FA789C57E6A -Dtests.multiplier=2 
-Dtests.slow=true -Dtests.badapples=true -Dtests.locale=lv 
-Dtests.timezone=Etc/GMT-6 -Dtests.asserts=true -Dtests.file.encoding=US-ASCII

[...truncated 7359 lines...]
[repro] Setting last failure code to 256

[repro] ant compile-test

[...truncated 454 lines...]
[repro] ant test-nocompile -Dtests.dups=5 -Dtests.maxfailures=5 
-Dtests.class="*.CloudSolrClientTest" -Dtests.showOutput=onerror  
-Dtests.seed=66A19153D7FA998E -Dtests.multiplier=2 -Dtests.slow=true 
-Dtests.badapples=true -Dtests.locale=ar-LY -Dtests.timezone=Pacific/Tarawa 
-Dtests.asserts=true -Dtests.file.encoding=UTF-8

[...truncated 2035 lines...]
[repro] Setting last failure code to 256

[repro] Failures:
[repro]   0/5 failed: org.apache.solr.cloud.TestSolrCloudWithKerberosAlt
[repro]   0/5 failed: org.apache.solr.cloud.autoscaling.NodeLostTriggerTest
[repro]   5/5 failed: org.apache.solr.client.solrj.impl.CloudSolrClientTest
[repro]   5/5 failed: 
org.apache.solr.cloud.autoscaling.ScheduledMaintenanceTriggerTest

[repro] Re-testing 100% failures at the tip of branch_7x
[repro] git fetch
[repro] git checkout branch_7x

[...truncated 4 lines...]
[repro] git merge --ff-only

[...truncated 1 lines...]
[repro] ant clean

[...truncated 8 lines...]
[repro] Test suites by module:
[repro]solr/core
[repro]   ScheduledMaintenanceTriggerTest
[repro]solr/solrj
[repro]   CloudSolrClientTest
[repro] ant compile-test

[...truncated 3318 lines...]
[repro] ant test-nocompile -Dtests.dups=5 -Dtests.maxfailures=5 
-Dtests.class="*.ScheduledMaintenanceTriggerTest" -Dtests.showOutput=onerror  
-Dtests.seed=F5ED2FA789C57E6A -Dtests.multiplier=2 -Dtests.slow=true 
-Dtests.badapples=true -Dtests.locale=es-EC -Dtests.timezone=America/Yakutat 
-Dtests.asserts=true -Dtests.file.encoding=US-ASCII

[...truncated 1132 lines...]
[repro] Setting last failure code to 256

[repro] ant compile-test

[...truncated 447 lines...]
[repro] ant test-nocompile -Dtests.dups=5 -Dtests.maxfailures=5 
-Dtests.class="*.CloudSolrClientTest" -Dtests.showOutput=onerror  
-Dtests.seed=66A19153D7FA998E -Dtests.multiplier=2 -Dtests.slow=true 
-Dtests.badapples=true -Dtests.locale=ar-LY -Dtests.timezone=Pacific/Tarawa 
-Dtests.asserts=true -Dtests.file.encoding=UTF-8

[...truncated 142 lines...]
[repro] Failures at the tip of branch_7x:
[repro]   0/5 failed: org.apache.solr.client.solrj.impl.CloudSolrClientTest
[repro]   2/5 failed: 
org.apache.solr.cloud.autoscaling.ScheduledMaintenanceTriggerTest
[repro] git checkout be65b95e80fdddea109a9d850506d6c524911ecb

[...truncated 8 lines...]
[repro] Exiting with code 256

[...truncated 6 lines...]


[jira] [Commented] (SOLR-12909) Fix all tests in org.apache.solr.update and begin a defense of them.

2018-11-05 Thread Mark Miller (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12909?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16675748#comment-16675748
 ] 

Mark Miller commented on SOLR-12909:


I'm going to start a more thorough test reckoning and defense by package. I'll 
start with update.

This will include looking at what's ignored / AwaitsFix and possibly addressing 
or spinning out a unique JIRA for the problem. Also, @Nightly tests need to be 
addressed.

> Fix all tests in org.apache.solr.update and begin a defense of them.
> 
>
> Key: SOLR-12909
> URL: https://issues.apache.org/jira/browse/SOLR-12909
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Tests
>Reporter: Mark Miller
>Assignee: Mark Miller
>Priority: Major
>







[jira] [Commented] (SOLR-12698) SolrFeature: no-fq optimisation

2018-11-05 Thread Christine Poerschke (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12698?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16675743#comment-16675743
 ] 

Christine Poerschke commented on SOLR-12698:


Started to look at the initial SOLR-12698.patch in conjunction with the 
existing code; this is how far I got today:
* 
[SolrFeatureWeight.scorer|https://github.com/apache/lucene-solr/blob/releases/lucene-solr/7.5.0/solr/contrib/ltr/src/java/org/apache/solr/ltr/feature/SolrFeature.java#L218-L234]
 is the method being changed.
* existing code:

** 
https://github.com/apache/lucene-solr/blob/releases/lucene-solr/7.5.0/solr/contrib/ltr/src/java/org/apache/solr/ltr/feature/SolrFeature.java#L175-L184
 illustrates when/why {{SolrFeatureWeight.solrQueryWeight}} might be null.

** If {{SolrFeatureWeight.queryAndFilters}} is empty (because there was no 
{{fq}} and {{q}} (if present) resulted in a null {{solrQueryWeight}}) then 
{{getDocIdSetIteratorFromQueries}} will return null and 
{{SolrFeatureWeight.scorer}} will return null.

** If {{SolrFeatureWeight.queryAndFilters}} contains one element (because there 
was no {{fq}} but there was a {{q}} which resulted in a non-null 
{{solrQueryWeight}}) then ... \[to be continued\]
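
The null-propagation in the bullets above can be sketched like so (stand-in 
types and simplified logic, not the actual 7.5.0 method bodies):

```java
import java.util.Collections;
import java.util.List;

public class ScorerNullSketch {

  // Stand-in for getDocIdSetIteratorFromQueries: no queries -> no iterator.
  static Object getDocIdSetIteratorFromQueries(List<String> queryAndFilters) {
    return queryAndFilters.isEmpty() ? null : new Object();
  }

  // Stand-in for SolrFeatureWeight.scorer: a null iterator means no scorer.
  static Object scorer(List<String> queryAndFilters) {
    Object disi = getDocIdSetIteratorFromQueries(queryAndFilters);
    return (disi == null) ? null : disi;
  }

  public static void main(String[] args) {
    // No fq and a null solrQueryWeight -> empty list -> null scorer.
    System.out.println(scorer(Collections.emptyList()) == null);
    // A non-null solrQueryWeight contributes one element -> non-null scorer.
    System.out.println(scorer(List.of("q")) != null);
  }
}
```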

> SolrFeature: no-fq optimisation
> ---
>
> Key: SOLR-12698
> URL: https://issues.apache.org/jira/browse/SOLR-12698
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: contrib - LTR
>Reporter: Stanislav Livotov
>Priority: Major
> Attachments: SOLR-12698.patch
>
>
> [~slivotov] wrote in SOLR-12688:
> bq. ... SolrFeature was not optimally implemented for the case when no fq 
> parameter was passed. I'm not absolutely sure what was the intention to 
> introduce both q(which is supposed to be a function query) and fq parameter 
> for the same SolrFeature at all(Is there a case when they will be used 
> together ? ), so I decided not to change behavior but just optimize described 
> case ...
> (Please see SOLR-12688 description for overall context and analysis results.)






[jira] [Commented] (SOLR-12698) SolrFeature: no-fq optimisation

2018-11-05 Thread Christine Poerschke (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12698?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16675742#comment-16675742
 ] 

Christine Poerschke commented on SOLR-12698:


bq. ... the intention to introduce both q(which is supposed to be a function 
query) and fq parameter for the same SolrFeature at all(Is there a case when 
they will be used together ? ), ...

A fair question. We could probably document this better, somehow. The 
[SolrFeature 
javadocs|http://lucene.apache.org/solr/7_5_0/solr-ltr/org/apache/solr/ltr/feature/SolrFeature.html]
 currently only say "... The value of the feature will be the score of the 
given query for the current document. ..." and although the example 
configurations have both {{fq}} and {{q}} (individually) it's unstated when one 
might wish to use which or both. In essence (as far as I recall), like with 
the [fq query 
parameter|https://lucene.apache.org/solr/guide/7_5/common-query-parameters.html#fq-filter-query-parameter]
 itself the {{fq}} in {{SolrFeature}} can be used to restrict which documents 
score non-zero but without influencing the document score itself.
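
To illustrate the distinction, a hypothetical feature definition (invented 
feature, field, and value names, following the JSON format shown in the 
SolrFeature javadocs) might look like:

{code}
[ {
  "name"  : "bookScore",
  "class" : "org.apache.solr.ltr.feature.SolrFeature",
  "params": {
    "q" : "{!func}popularity",
    "fq": [ "{!terms f=category}book" ]
  }
} ]
{code}

Here the {{fq}} restricts which documents receive a non-zero feature value, 
while the {{q}} function query supplies the value itself.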

> SolrFeature: no-fq optimisation
> ---
>
> Key: SOLR-12698
> URL: https://issues.apache.org/jira/browse/SOLR-12698
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: contrib - LTR
>Reporter: Stanislav Livotov
>Priority: Major
> Attachments: SOLR-12698.patch
>
>
> [~slivotov] wrote in SOLR-12688:
> bq. ... SolrFeature was not optimally implemented for the case when no fq 
> parameter was passed. I'm not absolutely sure what was the intention to 
> introduce both q(which is supposed to be a function query) and fq parameter 
> for the same SolrFeature at all(Is there a case when they will be used 
> together ? ), so I decided not to change behavior but just optimize described 
> case ...
> (Please see SOLR-12688 description for overall context and analysis results.)






[jira] [Commented] (SOLR-12959) Deprecate solrj SimpleOrderedMap

2018-11-05 Thread Andrzej Bialecki (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12959?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16675739#comment-16675739
 ] 

Andrzej Bialecki  commented on SOLR-12959:
--

bq. The json.nl influences how named lists are output

Right, but there's also the default behavior that would be different with this 
change.

bq. I've always wondered why they didn't just implement Map
AFAIK they were originally meant as a memory optimization compared to the Java 
collections, and strictly speaking they should implement a MultiMap, but that 
was rejected because it's not a core Java API. Duplicate keys are allowed in 
order to represent multi-valued mappings more conveniently, without actually 
introducing a proper MultiMap and without requiring users to deal with value 
arrays (or lists).
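
A minimal sketch of that idea, an ordered sequence of (name, value) pairs in 
which duplicate names are allowed, could look like this (an illustration only, 
not the actual org.apache.solr.common.util.NamedList API):

```java
import java.util.AbstractMap.SimpleEntry;
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

public class OrderedPairs {
  // Backing list of entries; a plain java.util.Map could not hold duplicates.
  private final List<Map.Entry<String, Object>> pairs = new ArrayList<>();

  void add(String name, Object value) {
    pairs.add(new SimpleEntry<>(name, value)); // duplicate names permitted
  }

  List<Object> getAll(String name) { // multi-valued lookup without a MultiMap
    List<Object> out = new ArrayList<>();
    for (Map.Entry<String, Object> e : pairs) {
      if (e.getKey().equals(name)) out.add(e.getValue());
    }
    return out;
  }

  public static void main(String[] args) {
    OrderedPairs nl = new OrderedPairs();
    nl.add("facet", 1);
    nl.add("facet", 2); // same name twice, insertion order preserved
    System.out.println(nl.getAll("facet"));
  }
}
```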

> Deprecate solrj SimpleOrderedMap
> 
>
> Key: SOLR-12959
> URL: https://issues.apache.org/jira/browse/SOLR-12959
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrJ
>Reporter: Noble Paul
>Priority: Minor
>
> There is no difference between a NamedList and a SimpleOrderedMap. It
> doesn't help to have both of them when they are doing exactly the same things.






[JENKINS] Lucene-Solr-SmokeRelease-master - Build # 1174 - Still Failing

2018-11-05 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-SmokeRelease-master/1174/

No tests ran.

Build Log:
[...truncated 23410 lines...]
[asciidoctor:convert] asciidoctor: ERROR: about-this-guide.adoc: line 1: 
invalid part, must have at least one section (e.g., chapter, appendix, etc.)
[asciidoctor:convert] asciidoctor: ERROR: solr-glossary.adoc: line 1: invalid 
part, must have at least one section (e.g., chapter, appendix, etc.)
 [java] Processed 2436 links (1988 relative) to 3200 anchors in 248 files
 [echo] Validated Links & Anchors via: 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/solr/build/solr-ref-guide/bare-bones-html/

-dist-changes:
 [copy] Copying 4 files to 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/solr/package/changes

package:

-unpack-solr-tgz:

-ensure-solr-tgz-exists:
[mkdir] Created dir: 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/solr/build/solr.tgz.unpacked
[untar] Expanding: 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/solr/package/solr-8.0.0.tgz
 into 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/solr/build/solr.tgz.unpacked

generate-maven-artifacts:

resolve:

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/top-level-ivy-settings.xml

resolve:

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:


[jira] [Commented] (SOLR-12961) Ref Guide: Add keyword metadata to pages

2018-11-05 Thread Cassandra Targett (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12961?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16675723#comment-16675723
 ] 

Cassandra Targett commented on SOLR-12961:
--

There are two ways to approach this, and we can do either or both of them.

# Populate the keywords with alternate terms or key phrases that give clues to 
what the page is about. These would not be pre-determined, but instead decided 
by page authors as they see fit.
# Populate the keywords with a predetermined list of topics the subject of the 
page fits into and/or relates to. This requires coming up with a reasonably 
complete list of topics and agreeing on their form. It also requires more 
precommit-style validation to ensure the entries used by authors fit the 
accepted form of terms.

(We could also mix them both and just let everyone do what they want...but if 
we want to use them for facet buckets, a free-for-all would probably result in 
GIGO metadata.)

They both have their place and purpose. The first helps users identify a page 
based on words they use which may not be in the title of the page. The second 
helps users find all the pages about the same topic more easily.

I'm a librarian at heart and by training, so of course I like both. The second 
option is more work, so I'll start out with the first option and see what I can 
come up with.

> Ref Guide: Add keyword metadata to pages
> 
>
> Key: SOLR-12961
> URL: https://issues.apache.org/jira/browse/SOLR-12961
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: documentation
>Reporter: Cassandra Targett
>Assignee: Cassandra Targett
>Priority: Major
> Fix For: 7.6, master (8.0)
>
>
> As a continuation of improvements in SOLR-12746, another thing we should do 
> is add keyword metadata to the HTML pages. Currently our pages have this in 
> the {{head}} section:
> {code}
> 
> {code}
> We have the structure in place for it in the page templates, we just need to 
> populate with some keywords.
> The idea behind doing this is that these terms could be a source for facet 
> buckets when we get a Ref Guide search going via SOLR-10299.






[jira] [Commented] (SOLR-12699) make LTRScoringModel immutable (to allow hashCode caching)

2018-11-05 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12699?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16675704#comment-16675704
 ] 

ASF subversion and git services commented on SOLR-12699:


Commit 7ddaff6f838df7054318856de486065ea88f7715 in lucene-solr's branch 
refs/heads/branch_7x from [~cpoerschke]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=7ddaff6 ]

SOLR-12699: Make contrib/ltr LTRScoringModel immutable and cache its hashCode.
(Stanislav Livotov, Edward Ribeiro, Christine Poerschke)
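
The pattern the commit describes, computing the hash once at construction time 
of an immutable object, can be sketched as follows (illustrative field names, 
not the actual LTRScoringModel fields):

```java
import java.util.List;
import java.util.Objects;

public final class CachedHashModel {
  private final String name;
  private final List<String> features;
  private final int cachedHashCode; // computed once, in the constructor

  public CachedHashModel(String name, List<String> features) {
    this.name = name;
    this.features = List.copyOf(features); // defensive copy keeps us immutable
    this.cachedHashCode = Objects.hash(name, this.features);
  }

  @Override
  public int hashCode() {
    return cachedHashCode; // O(1) per query instead of re-hashing every field
  }

  @Override
  public boolean equals(Object o) {
    if (this == o) return true;
    if (!(o instanceof CachedHashModel)) return false;
    CachedHashModel other = (CachedHashModel) o;
    return cachedHashCode == other.cachedHashCode
        && name.equals(other.name)
        && features.equals(other.features);
  }

  public static void main(String[] args) {
    CachedHashModel a = new CachedHashModel("m", List.of("f1", "f2"));
    CachedHashModel b = new CachedHashModel("m", List.of("f1", "f2"));
    System.out.println(a.equals(b) && a.hashCode() == b.hashCode());
  }
}
```

Caching the hash this way is only safe because every field is final and 
defensively copied; with a mutable object the cached value could go stale.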


> make LTRScoringModel immutable (to allow hashCode caching)
> --
>
> Key: SOLR-12699
> URL: https://issues.apache.org/jira/browse/SOLR-12699
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: contrib - LTR
>Reporter: Stanislav Livotov
>Priority: Major
> Attachments: SOLR-12699.patch, SOLR-12699.patch, SOLR-12699.patch, 
> SOLR-12699.patch
>
>
> [~slivotov] wrote in SOLR-12688:
> bq. ... LTRScoringModel was a mutable object. It was leading to the 
> calculation of hashcode on each query, which in turn can consume a lot of 
> time ... So I decided to make LTRScoringModel immutable and cache hashCode 
> calculation. ...
> (Please see SOLR-12688 description for overall context and analysis results.)






[JENKINS] Lucene-Solr-Tests-7.x - Build # 1008 - Still Unstable

2018-11-05 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-7.x/1008/

5 tests failed.
FAILED:  
org.apache.solr.cloud.DeleteReplicaTest.raceConditionOnDeleteAndRegisterReplica

Error Message:
Expected new active leader null Live Nodes: [127.0.0.1:35447_solr, 
127.0.0.1:40833_solr, 127.0.0.1:42673_solr] Last available state: 
DocCollection(raceDeleteReplica_false//collections/raceDeleteReplica_false/state.json/14)={
   "pullReplicas":"0",   "replicationFactor":"2",   "shards":{"shard1":{   
"range":"8000-7fff",   "state":"active",   "replicas":{ 
"core_node3":{   "core":"raceDeleteReplica_false_shard1_replica_n1",
   "base_url":"https://127.0.0.1:42727/solr;,   
"node_name":"127.0.0.1:42727_solr",   "state":"down",   
"type":"NRT",   "force_set_state":"false",   "leader":"true"},  
   "core_node6":{   
"core":"raceDeleteReplica_false_shard1_replica_n5",   
"base_url":"https://127.0.0.1:42727/solr;,   
"node_name":"127.0.0.1:42727_solr",   "state":"down",   
"type":"NRT",   "force_set_state":"false",   
"router":{"name":"compositeId"},   "maxShardsPerNode":"1",   
"autoAddReplicas":"false",   "nrtReplicas":"2",   "tlogReplicas":"0"}

Stack Trace:
java.lang.AssertionError: Expected new active leader
null
Live Nodes: [127.0.0.1:35447_solr, 127.0.0.1:40833_solr, 127.0.0.1:42673_solr]
Last available state: 
DocCollection(raceDeleteReplica_false//collections/raceDeleteReplica_false/state.json/14)={
  "pullReplicas":"0",
  "replicationFactor":"2",
  "shards":{"shard1":{
  "range":"8000-7fff",
  "state":"active",
  "replicas":{
"core_node3":{
  "core":"raceDeleteReplica_false_shard1_replica_n1",
  "base_url":"https://127.0.0.1:42727/solr;,
  "node_name":"127.0.0.1:42727_solr",
  "state":"down",
  "type":"NRT",
  "force_set_state":"false",
  "leader":"true"},
"core_node6":{
  "core":"raceDeleteReplica_false_shard1_replica_n5",
  "base_url":"https://127.0.0.1:42727/solr;,
  "node_name":"127.0.0.1:42727_solr",
  "state":"down",
  "type":"NRT",
  "force_set_state":"false",
  "router":{"name":"compositeId"},
  "maxShardsPerNode":"1",
  "autoAddReplicas":"false",
  "nrtReplicas":"2",
  "tlogReplicas":"0"}
at 
__randomizedtesting.SeedInfo.seed([D77CBD56BDCCFD0B:BD6ADC86D53EB7C1]:0)
at org.junit.Assert.fail(Assert.java:93)
at 
org.apache.solr.cloud.SolrCloudTestCase.waitForState(SolrCloudTestCase.java:280)
at 
org.apache.solr.cloud.DeleteReplicaTest.raceConditionOnDeleteAndRegisterReplica(DeleteReplicaTest.java:334)
at 
org.apache.solr.cloud.DeleteReplicaTest.raceConditionOnDeleteAndRegisterReplica(DeleteReplicaTest.java:230)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1742)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:935)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:971)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:985)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:944)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:830)
at 

[jira] [Updated] (SOLR-12795) Introduce 'offset' and 'rows' parameter in FacetStream.

2018-11-05 Thread Joel Bernstein (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-12795?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-12795:
--
Attachment: SOLR-12795.patch

> Introduce 'offset' and 'rows' parameter in FacetStream.
> ---
>
> Key: SOLR-12795
> URL: https://issues.apache.org/jira/browse/SOLR-12795
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: streaming expressions
>Reporter: Amrit Sarkar
>Assignee: Joel Bernstein
>Priority: Major
> Attachments: SOLR-12795.patch, SOLR-12795.patch, SOLR-12795.patch, 
> SOLR-12795.patch, SOLR-12795.patch, SOLR-12795.patch, SOLR-12795.patch, 
> SOLR-12795.patch, SOLR-12795.patch, SOLR-12795.patch
>
>
> Today facetStream takes a "bucketSizeLimit" parameter. Here is what the doc 
> says about this parameter -  The number of buckets to include. This value is 
> applied to each dimension.
> Now let's say we create a facet stream with 3 nested facets. For example 
> "year_i,month_i,day_i" and provide 10 as the bucketSizeLimit. 
> FacetStream would return 10 results to us for this facet expression while the 
> total number of unique values is 1000 (10*10*10).
> The API should have a separate parameter "limit" which limits the number of 
> tuples (say 500) while bucketSizeLimit should be used to specify the size of 
> each bucket in the JSON Facet API.
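
For illustration, a facet() expression combining the existing and proposed 
parameters might look like this (hypothetical collection and field names; the 
rows and offset parameters reflect the patch's proposal, not necessarily the 
final syntax):

{code}
facet(collection1,
      q="*:*",
      buckets="year_i,month_i,day_i",
      bucketSorts="count(*) desc",
      bucketSizeLimit=10,
      rows=500,
      offset=0)
{code}

Here bucketSizeLimit would keep capping the per-dimension bucket count, while 
rows (together with offset) would cap and page the tuples actually returned.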






[jira] [Updated] (SOLR-12795) Introduce 'rows' and 'offset' parameter in FacetStream

2018-11-05 Thread Joel Bernstein (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-12795?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-12795:
--
Summary: Introduce 'rows' and 'offset' parameter in FacetStream  (was: 
Introduce 'offset' and 'rows' parameter in FacetStream.)

> Introduce 'rows' and 'offset' parameter in FacetStream
> --
>
> Key: SOLR-12795
> URL: https://issues.apache.org/jira/browse/SOLR-12795
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: streaming expressions
>Reporter: Amrit Sarkar
>Assignee: Joel Bernstein
>Priority: Major
> Attachments: SOLR-12795.patch, SOLR-12795.patch, SOLR-12795.patch, 
> SOLR-12795.patch, SOLR-12795.patch, SOLR-12795.patch, SOLR-12795.patch, 
> SOLR-12795.patch, SOLR-12795.patch, SOLR-12795.patch
>
>
> Today facetStream takes a "bucketSizeLimit" parameter. Here is what the doc 
> says about this parameter -  The number of buckets to include. This value is 
> applied to each dimension.
> Now let's say we create a facet stream with 3 nested facets. For example 
> "year_i,month_i,day_i" and provide 10 as the bucketSizeLimit. 
> FacetStream would return 10 results to us for this facet expression while the 
> total number of unique values is 1000 (10*10*10).
> The API should have a separate parameter "limit" which limits the number of 
> tuples (say 500) while bucketSizeLimit should be used to specify the size of 
> each bucket in the JSON Facet API.






[jira] [Commented] (SOLR-12959) Deprecate solrj SimpleOrderedMap

2018-11-05 Thread Gus Heck (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12959?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16675672#comment-16675672
 ] 

Gus Heck commented on SOLR-12959:
-

I've always wondered... are these two classes used merely for historical 
reasons, perhaps because LinkedHashMap wasn't appreciated/known at the 
very start of things? Or is it that we replaced the collections classes to get 
big performance gains from these custom structures and serializations? 

I've always wondered why they didn't just implement Map, and why duplicate keys 
would be allowed (apparently according to the javadocs)...

 

 

> Deprecate solrj SimpleOrderedMap
> 
>
> Key: SOLR-12959
> URL: https://issues.apache.org/jira/browse/SOLR-12959
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrJ
>Reporter: Noble Paul
>Priority: Minor
>
> There is no difference between a NamedList and a SimpleOrderedMap. It 
> doesn't help to have both of them when they are doing exactly the same things.






[jira] [Updated] (SOLR-12795) Introduce 'offset' and 'rows' parameter in FacetStream.

2018-11-05 Thread Joel Bernstein (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-12795?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-12795:
--
Attachment: SOLR-12795.patch

> Introduce 'offset' and 'rows' parameter in FacetStream.
> ---
>
> Key: SOLR-12795
> URL: https://issues.apache.org/jira/browse/SOLR-12795
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: streaming expressions
>Reporter: Amrit Sarkar
>Assignee: Joel Bernstein
>Priority: Major
> Attachments: SOLR-12795.patch, SOLR-12795.patch, SOLR-12795.patch, 
> SOLR-12795.patch, SOLR-12795.patch, SOLR-12795.patch, SOLR-12795.patch, 
> SOLR-12795.patch, SOLR-12795.patch
>
>
> Today facetStream takes a "bucketSizeLimit" parameter. Here is what the doc 
> says about this parameter -  The number of buckets to include. This value is 
> applied to each dimension.
> Now let's say we create a facet stream with 3 nested facets. For example 
> "year_i,month_i,day_i" and provide 10 as the bucketSizeLimit. 
> FacetStream would return 10 results to us for this facet expression while the 
> total number of unique values is 1000 (10*10*10).
> The API should have a separate parameter "limit" which limits the number of 
> tuples (say 500) while bucketSizeLimit should be used to specify the size of 
> each bucket in the JSON Facet API.






[jira] [Commented] (SOLR-12699) make LTRScoringModel immutable (to allow hashCode caching)

2018-11-05 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12699?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16675666#comment-16675666
 ] 

ASF subversion and git services commented on SOLR-12699:


Commit be65b95e80fdddea109a9d850506d6c524911ecb in lucene-solr's branch 
refs/heads/master from [~cpoerschke]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=be65b95 ]

SOLR-12699: Make contrib/ltr LTRScoringModel immutable and cache its hashCode.
(Stanislav Livotov, Edward Ribeiro, Christine Poerschke)


> make LTRScoringModel immutable (to allow hashCode caching)
> --
>
> Key: SOLR-12699
> URL: https://issues.apache.org/jira/browse/SOLR-12699
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: contrib - LTR
>Reporter: Stanislav Livotov
>Priority: Major
> Attachments: SOLR-12699.patch, SOLR-12699.patch, SOLR-12699.patch, 
> SOLR-12699.patch
>
>
> [~slivotov] wrote in SOLR-12688:
> bq. ... LTRScoringModel was a mutable object. It was leading to the 
> calculation of hashcode on each query, which in turn can consume a lot of 
> time ... So I decided to make LTRScoringModel immutable and cache hashCode 
> calculation. ...
> (Please see SOLR-12688 description for overall context and analysis results.)
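
The technique being applied here can be sketched outside Solr in a few lines — a toy immutable model object that computes its hash once at construction time (an illustration of the pattern only, not the actual LTRScoringModel code):

```python
class ScoringModel:
    """Toy immutable model: all state is fixed in __init__, hash computed once."""

    def __init__(self, name, features, params):
        self._name = name
        self._features = tuple(features)              # defensive immutable copy
        self._params = tuple(sorted(params.items()))  # normalize dict ordering
        # Precompute the hash a single time; this is safe only because no
        # field ever changes after construction.
        self._hash = hash((self._name, self._features, self._params))

    def __hash__(self):
        return self._hash  # O(1) on every query instead of recomputing

    def __eq__(self, other):
        return (isinstance(other, ScoringModel)
                and (self._name, self._features, self._params)
                == (other._name, other._features, other._params))
```

Two models built from equal inputs hash and compare equal, so repeated hash-based lookups per query become constant-time.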






[jira] [Commented] (SOLR-12259) Robustly upgrade indexes

2018-11-05 Thread Christine Poerschke (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12259?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16675651#comment-16675651
 ] 

Christine Poerschke commented on SOLR-12259:


{quote}... WDYT of co-opting UninvertDocValuesMergePolicyTest? ... all I'd need 
to do is replace the optimize step at line 114 with a call to my new ...
{quote}
Since {{UninvertDocValuesMergePolicy}} and {{RewriteWithDocValuesMergePolicy}} 
do similar things, them sharing test code doesn't seem unreasonable to me -- 
perhaps UninvertDocValuesMergePolicyTest could be renamed somehow 
(RewritingMergePoliciesTest?) to reflect the extended dual coverage?
{quote}... I could just randomize the optimize and rewriteWithPolicy 
approaches. ...
{quote}
+1

> Robustly upgrade indexes
> 
>
> Key: SOLR-12259
> URL: https://issues.apache.org/jira/browse/SOLR-12259
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Erick Erickson
>Assignee: Erick Erickson
>Priority: Major
> Attachments: SOLR-12259.patch
>
>
> The general problem statement is that the current upgrade path is trappy and 
> cumbersome.  It would be a great help "in the field" to make the upgrade 
> process less painful.
> Additionally one of the most common things users want to do is enable 
> docValues, but currently they often have to re-index.
> Issues:
> 1> if I upgrade from 5x to 6x and then 7x, there's no guarantee that when I go 
> to 7x all the segments have been rewritten in 6x format. Say I have a segment 
> at max size that has no deletions. It'll never be rewritten until it has 
> deleted docs. And perhaps 50% deleted docs currently.
> 2> IndexUpgraderTool explicitly does a forcemerge to 1 segment, which is bad.
> 3> in a large distributed system, running IndexUpgraderTool on all the nodes 
> is cumbersome even if <2> is acceptable.
> 4> Users who realize specifying docValues on a field would be A Good Thing 
> have to re-index. We have UninvertDocValuesMergePolicyFactory. Wouldn't it be 
> nice to be able to have this done all at once without forceMerging to one 
> segment.
> Proposal:
> Somehow avoid the above. Currently LUCENE-7976 is a start in that direction. 
> It will make TMP respect max segments size so can avoid forceMerges that 
> result in one segment. What it does _not_ do is rewrite segments with zero 
> (or a small percentage) deleted documents.
> So it  doesn't seem like a huge stretch to be able to specify to TMP the 
> option to rewrite segments that have no deleted documents. Perhaps a new 
> parameter to optimize?
> This would likely require another change to TMP or whatever.
> So upgrading to a new solr would look like
> 1> install the new Solr
> 2> execute 
> "http://node:port/solr/collection_or_core/update?optimize=true=true;
> What's not clear to me is whether we'd require 
> UninvertDocValuesMergePolicyFactory to be specified and wrap TMP or not.
> Anyway, let's discuss. I'll create yet another LUCENE JIRA for TMP to rewrite 
> all segments that I'll link.
> I'll also link several other JIRAs in here, they're coalescing.
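
For readers unfamiliar with the existing per-core approach mentioned in <4>, it is configured along these lines in solrconfig.xml — a sketch following the ref guide's wrapper-factory pattern; adjust class names and options to your setup:

{code:xml}
<mergePolicyFactory class="org.apache.solr.index.UninvertDocValuesMergePolicyFactory">
  <str name="wrapped.prefix">inner</str>
  <str name="inner.class">org.apache.solr.index.TieredMergePolicyFactory</str>
</mergePolicyFactory>
{code}

The wrapped factory (TieredMergePolicyFactory here) still decides which segments merge; the wrapper uninverts indexed fields into docValues as those merges rewrite segments.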






[jira] [Commented] (SOLR-12959) Deprecate solrj SimpleOrderedMap

2018-11-05 Thread David Smiley (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12959?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16675653#comment-16675653
 ] 

David Smiley commented on SOLR-12959:
-

I mistook the class-level javadocs to be obsolete, and I've not understood them 
well (embarrassing perhaps). Even if its docs are correct, I still find the 
class odd.  One thing -- I thought "json.nl" is what toggles these two 
representations:  
{noformat}
{"foo":10,"bar":20}
vs
["foo",10,"bar",20]
{noformat}
Yet the docs seem to suggest it's SimpleOrderedMap vs (plain) NamedList that 
will as well?  IMO: Yuck.  that latter format is unfortunate as it doesn't 
semantically represent the structure; it should merely be an _option_ that the 
user can toggle with json.nl if they so choose.  Perhaps we should shy away 
from even having repeated keys in the first place, favoring array values 
instead.
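
As an illustration of the two wire shapes (not Solr's actual response-writer code), serializing a list of key/value pairs either as a JSON object or as the flat alternating array shows why the flat form exists at all — it preserves order and duplicate keys that the object form silently loses:

```python
import json

def as_map(pairs):
    # Object form: duplicate keys collapse (later entries win), so it is lossy.
    return json.dumps(dict(pairs))

def as_flat(pairs):
    # Flat alternating-array form: keeps order and duplicate keys intact.
    flat = []
    for key, value in pairs:
        flat.extend([key, value])
    return json.dumps(flat)
```

For `[("foo", 10), ("bar", 20)]` these produce `{"foo": 10, "bar": 20}` and `["foo", 10, "bar", 20]` respectively; only the flat form can round-trip a list containing "foo" twice.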

Second is the name... as a subclass of NamedList I think it could certainly be 
better.  "Simple"ness isn't interesting (is there a complex variant?).  
OrderedMap... maybe not a great name given "Map" has loaded assumptions in the 
JDK (i.e. no repeated key).  If we need a subclass of NamedList then it 
probably ought to have "NamedList" as part of its name.

I think we can _do something_ to improve things here.  Feel free to recommend 
something, Hoss or AB.

> Deprecate solrj SimpleOrderedMap
> 
>
> Key: SOLR-12959
> URL: https://issues.apache.org/jira/browse/SOLR-12959
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrJ
>Reporter: Noble Paul
>Priority: Minor
>
> There is no difference between a NamedList and a SimpleOrderedMap. It 
> doesn't help to have both of them when they are doing exactly the same things.






[jira] [Resolved] (SOLR-12746) Ref Guide HTML output should adhere to more standard HTML5

2018-11-05 Thread Cassandra Targett (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-12746?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Cassandra Targett resolved SOLR-12746.
--
Resolution: Done

Four consecutive Jenkins build runs were as expected, so I backported the 
change to 7x.

> Ref Guide HTML output should adhere to more standard HTML5
> --
>
> Key: SOLR-12746
> URL: https://issues.apache.org/jira/browse/SOLR-12746
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: documentation
>Reporter: Cassandra Targett
>Assignee: Cassandra Targett
>Priority: Major
> Fix For: 7.6, master (8.0)
>
>
> The default HTML produced by Jekyll/Asciidoctor adds a lot of extra {{}} 
> tags to the content which break up our content into very small chunks. This 
> is acceptable to a casual website reader as far as it goes, but any Reader 
> view in a browser or another type of content extraction system that uses a 
> similar "readability" scoring algorithm is going to either miss a lot of 
> content or fail to display the page entirely.
> To see what I mean, take a page like 
> https://lucene.apache.org/solr/guide/7_4/language-analysis.html and enable 
> Reader View in your browser (I used Firefox; Steve Rowe told me offline 
> Safari would not even offer the option on the page for him). You will notice 
> a lot of missing content. It's almost like someone selected sentences at 
> random.
> Asciidoctor has a long-standing issue to provide a better more 
> semantic-oriented HTML5 output, but it has not been resolved yet: 
> https://github.com/asciidoctor/asciidoctor/issues/242
> Asciidoctor does provide a way to override the default output templates by 
> providing your own in Slim, HAML, ERB or any other template language 
> supported by Tilt (none of which I know yet). There are some samples 
> available via the Asciidoctor project which we can borrow, but it's otherwise 
> unknown as of yet what parts of the output are causing the worst of the 
> problems. This issue is to explore how to fix it to improve this part of the 
> HTML reading experience.






[jira] [Commented] (SOLR-12746) Ref Guide HTML output should adhere to more standard HTML5

2018-11-05 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12746?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16675636#comment-16675636
 ] 

ASF subversion and git services commented on SOLR-12746:


Commit 2633e0e0cfc2278aa08ae1f066e02b681e7b7fee in lucene-solr's branch 
refs/heads/branch_7x from [~ctargett]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=2633e0e ]

SOLR-12746: Simplify the Ref Guide HTML structure and use semantic HTML tags 
where possible. Adds new template files for Asciidoctor HTML conversion.


> Ref Guide HTML output should adhere to more standard HTML5
> --
>
> Key: SOLR-12746
> URL: https://issues.apache.org/jira/browse/SOLR-12746
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: documentation
>Reporter: Cassandra Targett
>Assignee: Cassandra Targett
>Priority: Major
> Fix For: 7.6, master (8.0)
>
>
> The default HTML produced by Jekyll/Asciidoctor adds a lot of extra {{}} 
> tags to the content which break up our content into very small chunks. This 
> is acceptable to a casual website reader as far as it goes, but any Reader 
> view in a browser or another type of content extraction system that uses a 
> similar "readability" scoring algorithm is going to either miss a lot of 
> content or fail to display the page entirely.
> To see what I mean, take a page like 
> https://lucene.apache.org/solr/guide/7_4/language-analysis.html and enable 
> Reader View in your browser (I used Firefox; Steve Rowe told me offline 
> Safari would not even offer the option on the page for him). You will notice 
> a lot of missing content. It's almost like someone selected sentences at 
> random.
> Asciidoctor has a long-standing issue to provide a better more 
> semantic-oriented HTML5 output, but it has not been resolved yet: 
> https://github.com/asciidoctor/asciidoctor/issues/242
> Asciidoctor does provide a way to override the default output templates by 
> providing your own in Slim, HAML, ERB or any other template language 
> supported by Tilt (none of which I know yet). There are some samples 
> available via the Asciidoctor project which we can borrow, but it's otherwise 
> unknown as of yet what parts of the output are causing the worst of the 
> problems. This issue is to explore how to fix it to improve this part of the 
> HTML reading experience.






[jira] [Commented] (SOLR-12955) Refactor DistributedUpdateProcessor

2018-11-05 Thread Bar Rotstein (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12955?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16675635#comment-16675635
 ] 

Bar Rotstein commented on SOLR-12955:
-

{quote}What happens if somebody upgrades a Solr where 
DistributedUpdateProcessor is being used in their config{quote}
No change in config needed in that case.

> Refactor DistributedUpdateProcessor
> ---
>
> Key: SOLR-12955
> URL: https://issues.apache.org/jira/browse/SOLR-12955
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Bar Rotstein
>Priority: Major
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Lately, as I was skimming through Solr's code base, I noticed that 
> DistributedUpdateProcessor has a lot of nested if-else statements, which 
> hampers code readability.






[jira] [Commented] (SOLR-12959) Deprecate SimpleOrderedMap

2018-11-05 Thread Christine Poerschke (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12959?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16675629#comment-16675629
 ] 

Christine Poerschke commented on SOLR-12959:


bq. ... JSON responses from Solr.

The {{json.nl}} influences how named lists are output: 
https://lucene.apache.org/solr/guide/7_5/response-writers.html#json-nl

bq. ... class level javadocs.

Locating them - 
http://lucene.apache.org/solr/7_5_0/solr-solrj/org/apache/solr/common/util/SimpleOrderedMap.html
 - reminded me of this being a {{solrj}} rather than a {{solr}} class. For 
{{solr}} classes we regularly use the {{luceneMatchVersion}} to put things onto 
the deprecate-with-warning-and-then-remove pathway, is something similar (but 
different) possible for {{solrj}} classes?

> Deprecate SimpleOrderedMap
> --
>
> Key: SOLR-12959
> URL: https://issues.apache.org/jira/browse/SOLR-12959
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrJ
>Reporter: Noble Paul
>Priority: Minor
>
> There is no difference between a NamedList and a SimpleOrderedMap. It 
> doesn't help to have both of them when they are doing exactly the same things.






[jira] [Updated] (SOLR-12959) Deprecate solrj SimpleOrderedMap

2018-11-05 Thread Christine Poerschke (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-12959?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Christine Poerschke updated SOLR-12959:
---
Component/s: SolrJ

> Deprecate solrj SimpleOrderedMap
> 
>
> Key: SOLR-12959
> URL: https://issues.apache.org/jira/browse/SOLR-12959
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrJ
>Reporter: Noble Paul
>Priority: Minor
>
> There is no difference between a NamedList and a SimpleOrderedMap. It 
> doesn't help to have both of them when they are doing exactly the same things.






[jira] [Updated] (SOLR-12959) Deprecate solrj SimpleOrderedMap

2018-11-05 Thread Christine Poerschke (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-12959?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Christine Poerschke updated SOLR-12959:
---
Summary: Deprecate solrj SimpleOrderedMap  (was: Deprecate SimpleOrderedMap)

> Deprecate solrj SimpleOrderedMap
> 
>
> Key: SOLR-12959
> URL: https://issues.apache.org/jira/browse/SOLR-12959
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrJ
>Reporter: Noble Paul
>Priority: Minor
>
> There is no difference between a NamedList and a SimpleOrderedMap. It 
> doesn't help to have both of them when they are doing exactly the same things.






[jira] [Created] (SOLR-12961) Ref Guide: Add keyword metadata to pages

2018-11-05 Thread Cassandra Targett (JIRA)
Cassandra Targett created SOLR-12961:


 Summary: Ref Guide: Add keyword metadata to pages
 Key: SOLR-12961
 URL: https://issues.apache.org/jira/browse/SOLR-12961
 Project: Solr
  Issue Type: Improvement
  Security Level: Public (Default Security Level. Issues are Public)
  Components: documentation
Reporter: Cassandra Targett
Assignee: Cassandra Targett
 Fix For: 7.6, master (8.0)


As a continuation of improvements in SOLR-12746, another thing we should do is 
add keyword metadata to the HTML pages. Currently our pages have this in the 
{{}} section:

{code}

{code}

We have the structure in place for it in the page templates; we just need to 
populate it with some keywords.

The idea behind doing this is that these terms could be a source for facet 
buckets when we get a Ref Guide search going via SOLR-10299.
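
Concretely, the populated metadata would be an ordinary keywords meta tag in each page's head — a hypothetical example, since the per-page keyword values are still to be decided:

{code:html}
<meta name="keywords" content="faceting, json.facet, facet.field">
{code}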






[JENKINS] Lucene-Solr-Tests-master - Build # 2931 - Still Unstable

2018-11-05 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-master/2931/

4 tests failed.
FAILED:  org.apache.lucene.index.TestIndexFileDeleter.testExcInDecRef

Error Message:
ReaderPool is already closed

Stack Trace:
org.apache.lucene.store.AlreadyClosedException: ReaderPool is already closed
at __randomizedtesting.SeedInfo.seed([F83E85243C8B9F05:11A3F2164A4278F8]:0)
at org.apache.lucene.index.ReaderPool.get(ReaderPool.java:367)
at org.apache.lucene.index.IndexWriter.writeReaderPool(IndexWriter.java:3336)
at org.apache.lucene.index.IndexWriter.getReader(IndexWriter.java:519)
at org.apache.lucene.index.RandomIndexWriter.getReader(RandomIndexWriter.java:398)
at org.apache.lucene.index.RandomIndexWriter.getReader(RandomIndexWriter.java:332)
at org.apache.lucene.index.TestIndexFileDeleter.testExcInDecRef(TestIndexFileDeleter.java:465)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1742)
at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:935)
at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:971)
at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:985)
at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:944)
at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:830)
at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:880)
at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:891)
at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:748)


FAILED:  
org.apache.solr.cloud.DeleteReplicaTest.raceConditionOnDeleteAndRegisterReplica

Error Message:
Expected new active leader null Live Nodes: [127.0.0.1:32778_solr, 
127.0.0.1:33018_solr, 127.0.0.1:42179_solr] Last available state: 
DocCollection(raceDeleteReplica_false//collections/raceDeleteReplica_false/state.json/13)={
   "pullReplicas":"0",   "replicationFactor":"2",  

[jira] [Updated] (SOLR-12795) Introduce 'offset' and 'rows' parameter in FacetStream.

2018-11-05 Thread Joel Bernstein (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-12795?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-12795:
--
Attachment: SOLR-12795.patch

> Introduce 'offset' and 'rows' parameter in FacetStream.
> ---
>
> Key: SOLR-12795
> URL: https://issues.apache.org/jira/browse/SOLR-12795
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: streaming expressions
>Reporter: Amrit Sarkar
>Assignee: Joel Bernstein
>Priority: Major
> Attachments: SOLR-12795.patch, SOLR-12795.patch, SOLR-12795.patch, 
> SOLR-12795.patch, SOLR-12795.patch, SOLR-12795.patch, SOLR-12795.patch, 
> SOLR-12795.patch
>
>
> Today facetStream takes a "bucketSizeLimit" parameter. Here is what the doc 
> says about this parameter -  The number of buckets to include. This value is 
> applied to each dimension.
> Now let's say we create a facet stream with 3 nested facets. For example 
> "year_i,month_i,day_i" and provide 10 as the bucketSizeLimit. 
> FacetStream would return 10 results to us for this facet expression while the 
> total number of unique values is 1000 (10*10*10).
> The API should have a separate parameter "limit" which limits the number of 
> tuples (say 500) while bucketSizeLimit should be used to specify the size of 
> each bucket in the JSON Facet API.






[jira] [Commented] (SOLR-12959) Deprecate SimpleOrderedMap

2018-11-05 Thread Hoss Man (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12959?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16675581#comment-16675581
 ] 

Hoss Man commented on SOLR-12959:
-

bq. Simple removal is not sufficient, we should verify that it doesn't break 
back-compat - currently SimpleOrderedMap is serialized in a different way from 
NamedList, ...

That is 100% the entire point of why SimpleOrderedMap exists, as explained in 
its class-level javadocs.

Deprecating/removing it w/o some sort of replacement for distinguishing 
when/how/if response writers need to care about the order of keys in a 
NamedList is a non-starter.

> Deprecate SimpleOrderedMap
> --
>
> Key: SOLR-12959
> URL: https://issues.apache.org/jira/browse/SOLR-12959
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Noble Paul
>Priority: Minor
>
> There is no difference between a NamedList and a SimpleOrderedMap. It 
> doesn't help to have both of them when they are doing exactly the same things.






[jira] [Comment Edited] (SOLR-12955) Refactor DistributedUpdateProcessor

2018-11-05 Thread Bar Rotstein (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12955?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16675570#comment-16675570
 ] 

Bar Rotstein edited comment on SOLR-12955 at 11/5/18 6:40 PM:
--

{quote}I believe there will be no change to configs and back-compat. People 
configure a factory, and that factory can in turn instantiate one thing or 
another depending on ZK.{quote}
Would we want CdcrUpdateProcessorFactory to throw an exception if the cluster 
is not Zookeeper enabled?
OR perhaps just return an instance of DistributedUpdateProcessor?


was (Author: brot):
{quote}I believe there will be no change to configs and back-compat. People 
configure a factory, and that factory can in turn instantiate one thing or 
another depending on ZK.{quote}
Would we want CdcrUpdateProcessorFactory to throw an exception if the cluster 
is not Zookeeper enabled?

> Refactor DistributedUpdateProcessor
> ---
>
> Key: SOLR-12955
> URL: https://issues.apache.org/jira/browse/SOLR-12955
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Bar Rotstein
>Priority: Major
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Lately, as I was skimming through Solr's code base, I noticed that 
> DistributedUpdateProcessor has a lot of nested if-else statements, which 
> hampers code readability.






[jira] [Commented] (SOLR-12955) Refactor DistributedUpdateProcessor

2018-11-05 Thread Bar Rotstein (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12955?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16675570#comment-16675570
 ] 

Bar Rotstein commented on SOLR-12955:
-

{quote}I believe there will be no change to configs and back-compat. People 
configure a factory, and that factory can in turn instantiate one thing or 
another depending on ZK.{quote}
Would we want CdcrUpdateProcessorFactory to throw an exception if the cluster 
is not ZooKeeper-enabled?

> Refactor DistributedUpdateProcessor
> ---
>
> Key: SOLR-12955
> URL: https://issues.apache.org/jira/browse/SOLR-12955
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Bar Rotstein
>Priority: Major
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Lately, as I was skimming through Solr's code base, I noticed that 
> DistributedUpdateProcessor has a lot of nested if-else statements, which 
> hampers code readability.






Re: Welcome Tim Allison as a Lucene/Solr committer

2018-11-05 Thread Christine Poerschke (BLOOMBERG/ LONDON)
Welcome Tim!

From: dev@lucene.apache.org At: 11/02/18 16:20:52 To: dev@lucene.apache.org
Subject: Welcome Tim Allison as a Lucene/Solr committer

Hi all,

Please join me in welcoming Tim Allison as the latest Lucene/Solr committer!

Congratulations and Welcome, Tim!

It's traditional for you to introduce yourself with a brief bio.

Erick





Re: Welcome Gus Heck as Lucene/Solr committer

2018-11-05 Thread Christine Poerschke (BLOOMBERG/ LONDON)
Welcome Gus!

From: dev@lucene.apache.org At: 11/01/18 12:22:35 To: dev@lucene.apache.org
Subject: Welcome Gus Heck as Lucene/Solr committer

Hi all,

Please join me in welcoming Gus Heck as the latest Lucene/Solr committer! 

Congratulations and Welcome, Gus!

Gus, it's traditional for you to introduce yourself with a brief bio.

~ David
-- 
Lucene/Solr Search Committer, Consultant, Developer, Author, Speaker
LinkedIn: http://linkedin.com/in/davidwsmiley | Book: http://www.solrenterprisesearchserver.com



[jira] [Commented] (SOLR-12955) Refactor DistributedUpdateProcessor

2018-11-05 Thread David Smiley (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12955?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16675561#comment-16675561
 ] 

David Smiley commented on SOLR-12955:
-

I believe there will be no change to configs and back-compat.  People configure 
a factory, and that factory can in turn instantiate one thing or another 
depending on ZK.

> Refactor DistributedUpdateProcessor
> ---
>
> Key: SOLR-12955
> URL: https://issues.apache.org/jira/browse/SOLR-12955
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Bar Rotstein
>Priority: Major
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Lately, as I was skimming through Solr's code base, I noticed that 
> DistributedUpdateProcessor has a lot of nested if-else statements, which 
> hampers code readability.






[jira] [Commented] (SOLR-12795) Introduce 'offset' and 'rows' parameter in FacetStream.

2018-11-05 Thread Joel Bernstein (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12795?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16675560#comment-16675560
 ] 

Joel Bernstein commented on SOLR-12795:
---

All existing tests pass with the latest patch, so back-compatibility seems to 
be in place. I'll begin working on tests for the new functionality.

> Introduce 'offset' and 'rows' parameter in FacetStream.
> ---
>
> Key: SOLR-12795
> URL: https://issues.apache.org/jira/browse/SOLR-12795
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: streaming expressions
>Reporter: Amrit Sarkar
>Assignee: Joel Bernstein
>Priority: Major
> Attachments: SOLR-12795.patch, SOLR-12795.patch, SOLR-12795.patch, 
> SOLR-12795.patch, SOLR-12795.patch, SOLR-12795.patch, SOLR-12795.patch
>
>
> Today facetStream takes a "bucketSizeLimit" parameter. Here is what the doc 
> says about this parameter -  The number of buckets to include. This value is 
> applied to each dimension.
> Now let's say we create a facet stream with 3 nested facets. For example 
> "year_i,month_i,day_i" and provide 10 as the bucketSizeLimit. 
> FacetStream would return 10 results to us for this facet expression, while the 
> total number of unique values is 1000 (10 * 10 * 10).
> The API should have a separate parameter "limit" which limits the number of 
> tuples (say 500) while bucketSizeLimit should be used to specify the size of 
> each bucket in the JSON Facet API.
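The arithmetic behind the description above can be made concrete with a tiny sketch. This is an illustration only, unrelated to the actual FacetStream implementation: with nested facet dimensions, the worst-case tuple count is the product of the per-dimension bucket limits, which is why a separate overall limit is useful.

```java
// Why a per-dimension bucketSizeLimit is not an overall row limit: the
// worst-case number of tuples grows as bucketSizeLimit ^ dimensions.
public class BucketMath {
    static long maxTuples(int bucketSizeLimit, int dimensions) {
        long total = 1;
        for (int i = 0; i < dimensions; i++) {
            total *= bucketSizeLimit;
        }
        return total;
    }

    public static void main(String[] args) {
        // Three nested facets (year_i, month_i, day_i) with bucketSizeLimit=10.
        System.out.println(maxTuples(10, 3)); // prints 1000
    }
}
```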






[jira] [Updated] (SOLR-12795) Introduce 'offset' and 'rows' parameter in FacetStream.

2018-11-05 Thread Joel Bernstein (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-12795?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-12795:
--
Attachment: SOLR-12795.patch

> Introduce 'offset' and 'rows' parameter in FacetStream.
> ---
>
> Key: SOLR-12795
> URL: https://issues.apache.org/jira/browse/SOLR-12795
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: streaming expressions
>Reporter: Amrit Sarkar
>Assignee: Joel Bernstein
>Priority: Major
> Attachments: SOLR-12795.patch, SOLR-12795.patch, SOLR-12795.patch, 
> SOLR-12795.patch, SOLR-12795.patch, SOLR-12795.patch, SOLR-12795.patch
>
>
> Today facetStream takes a "bucketSizeLimit" parameter. Here is what the doc 
> says about this parameter -  The number of buckets to include. This value is 
> applied to each dimension.
> Now let's say we create a facet stream with 3 nested facets. For example 
> "year_i,month_i,day_i" and provide 10 as the bucketSizeLimit. 
> FacetStream would return 10 results to us for this facet expression, while the 
> total number of unique values is 1000 (10 * 10 * 10).
> The API should have a separate parameter "limit" which limits the number of 
> tuples (say 500) while bucketSizeLimit should be used to specify the size of 
> each bucket in the JSON Facet API.






[jira] [Commented] (SOLR-12955) Refactor DistributedUpdateProcessor

2018-11-05 Thread Shawn Heisey (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12955?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16675554#comment-16675554
 ] 

Shawn Heisey commented on SOLR-12955:
-

My only real concern is how existing configs are handled.

What happens if somebody upgrades a Solr installation where 
DistributedUpdateProcessor is being used in their config?  Will it continue to 
work as before and just log a deprecation warning? If they upgrade to a new 
major version and don't change to the appropriate new class, would their 
core(s) fail to start?  Those two courses of action would IMHO be the best way 
to go.
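That two-stage upgrade behavior could look roughly like the following. This is a hedged sketch: the class name, the version check, and the messages are all invented for illustration; nothing here reflects how Solr actually resolves configured classes.

```java
// Sketch of the proposed upgrade path: within the current major version, a
// config that still names the old class works but logs a deprecation
// warning; from the next major version on, loading it fails fast so the
// misconfiguration is caught at core startup rather than silently ignored.
public class DeprecationPolicy {
    static final String OLD_CLASS = "DistributedUpdateProcessor";

    /** Returns the class name to load, or throws if it has been removed. */
    static String resolve(String configuredClass, int solrMajorVersion) {
        if (!OLD_CLASS.equals(configuredClass)) {
            return configuredClass;
        }
        if (solrMajorVersion >= 9) {
            // Hypothetical removal version -- fail fast after the major bump.
            throw new IllegalArgumentException(
                OLD_CLASS + " was removed; update your config to the new class");
        }
        // Same major version: keep working, but warn loudly.
        System.err.println("WARN: " + OLD_CLASS
            + " is deprecated and will be removed in the next major release");
        return configuredClass;
    }

    public static void main(String[] args) {
        System.out.println(resolve(OLD_CLASS, 8));
    }
}
```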


> Refactor DistributedUpdateProcessor
> ---
>
> Key: SOLR-12955
> URL: https://issues.apache.org/jira/browse/SOLR-12955
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Bar Rotstein
>Priority: Major
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Lately, as I was skimming through Solr's code base, I noticed that 
> DistributedUpdateProcessor has a lot of nested if-else statements, which 
> hampers code readability.






[JENKINS] Lucene-Solr-master-Linux (64bit/jdk-11) - Build # 23158 - Unstable!

2018-11-05 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/23158/
Java: 64bit/jdk-11 -XX:+UseCompressedOops -XX:+UseParallelGC

25 tests failed.
FAILED:  junit.framework.TestSuite.org.apache.solr.cloud.TestCloudRecovery

Error Message:
Could not find collection:collection1

Stack Trace:
java.lang.AssertionError: Could not find collection:collection1
at __randomizedtesting.SeedInfo.seed([FCE8581784C116C5]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at org.junit.Assert.assertNotNull(Assert.java:526)
at 
org.apache.solr.cloud.AbstractDistribZkTestBase.waitForRecoveriesToFinish(AbstractDistribZkTestBase.java:155)
at 
org.apache.solr.cloud.TestCloudRecovery.setupCluster(TestCloudRecovery.java:70)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:566)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1742)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:875)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:891)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.base/java.lang.Thread.run(Thread.java:834)



[jira] [Commented] (SOLR-12955) Refactor DistributedUpdateProcessor

2018-11-05 Thread Bar Rotstein (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12955?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16675480#comment-16675480
 ] 

Bar Rotstein commented on SOLR-12955:
-

{quote}as CDCR is a strict SolrCloud feature{quote}

Oh, I was not aware of that!
This will make it a lot easier for me to implement, thanks for the heads-up.
I will not change the CDCR logic, as I would not want to cause conflicts with 
your changes.
Hopefully we'll both be able to get them ready for Solr 8.0 :)

> Refactor DistributedUpdateProcessor
> ---
>
> Key: SOLR-12955
> URL: https://issues.apache.org/jira/browse/SOLR-12955
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Bar Rotstein
>Priority: Major
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Lately, as I was skimming through Solr's code base, I noticed that 
> DistributedUpdateProcessor has a lot of nested if-else statements, which 
> hampers code readability.






Re: Call for help: moving from ant build to gradle

2018-11-05 Thread Gus Heck
I'm quite fond of Gradle, and even wrote a very simple plugin for uploading
and downloading Solr configs to ZooKeeper from Gradle. +1 to using Gradle.

I'll definitely check it out and give it a whirl; maybe I'll help some if I
can.

On Sun, Nov 4, 2018 at 2:13 PM Đạt Cao Mạnh  wrote:

> Hi guys,
>
> Recently, I had a chance to work on modifying several build.xml files in
> our project. To be honest, that was a painful experience, especially the
> number of steps required to add a new module. We have reached the limits
> of Ant, and moving to Gradle seems like a good option since it has been
> widely adopted by many projects. There are several benefits of the move
> that I would like to mention:
> * Gradle's result caching makes rerunning tasks much faster, e.g.
> rerunning the forbiddenApi check takes only 5 seconds (compared to more
> than a minute with Ant).
> * Adding modules is much easier now.
> * Adding dependencies is a pleasure now since we don't have to run ant
> clean-idea and ant idea all over again.
> * Natively supported by different IDEs.
>
> On my very boring long flight from Montreal back to Vietnam, I tried to
> convert the Lucene/Solr Ant to Gradle, I finally achieved something here by
> being able to import project and run tests natively from IntelliJ IDEA
> (branch jira/gradle).
>
> I'm converting the Lucene ant precommit to Gradle. But there is a lot
> that still needs to be done here, and my limited understanding of our Ant
> build and of Gradle may make the work take a long time to finish.
>
> Therefore, I really need help from the community to finish the work and we
> will be able to move to a totally new, modern, powerful build tool.
>
> Thanks!
>
>

-- 
http://www.the111shift.com


[jira] [Updated] (SOLR-12795) Introduce 'offset' and 'rows' parameter in FacetStream.

2018-11-05 Thread Joel Bernstein (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-12795?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-12795:
--
Attachment: SOLR-12795.patch

> Introduce 'offset' and 'rows' parameter in FacetStream.
> ---
>
> Key: SOLR-12795
> URL: https://issues.apache.org/jira/browse/SOLR-12795
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: streaming expressions
>Reporter: Amrit Sarkar
>Assignee: Joel Bernstein
>Priority: Major
> Attachments: SOLR-12795.patch, SOLR-12795.patch, SOLR-12795.patch, 
> SOLR-12795.patch, SOLR-12795.patch, SOLR-12795.patch
>
>
> Today facetStream takes a "bucketSizeLimit" parameter. Here is what the doc 
> says about this parameter -  The number of buckets to include. This value is 
> applied to each dimension.
> Now let's say we create a facet stream with 3 nested facets. For example 
> "year_i,month_i,day_i" and provide 10 as the bucketSizeLimit. 
> FacetStream would return 10 results to us for this facet expression, while the 
> total number of unique values is 1000 (10 * 10 * 10).
> The API should have a separate parameter "limit" which limits the number of 
> tuples (say 500) while bucketSizeLimit should be used to specify the size of 
> each bucket in the JSON Facet API.






[JENKINS] Lucene-Solr-repro - Build # 1865 - Unstable

2018-11-05 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-repro/1865/

[...truncated 28 lines...]
[repro] Jenkins log URL: 
https://builds.apache.org/job/Lucene-Solr-NightlyTests-master/1689/consoleText

[repro] Revision: 45b772f4a995c618b48ff05c6129c5683df92f88

[repro] Ant options: -Dtests.multiplier=2 
-Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/test-data/enwiki.random.lines.txt
[repro] Repro line:  ant test  -Dtestcase=FullSolrCloudDistribCmdsTest 
-Dtests.method=test -Dtests.seed=436B9276AB629A12 -Dtests.multiplier=2 
-Dtests.nightly=true -Dtests.slow=true 
-Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/test-data/enwiki.random.lines.txt
 -Dtests.locale=id-ID -Dtests.timezone=America/Porto_Velho -Dtests.asserts=true 
-Dtests.file.encoding=ISO-8859-1

[repro] Repro line:  ant test  -Dtestcase=CloudSolrClientTest 
-Dtests.method=testRouting -Dtests.seed=5630E8C5E6A4B819 -Dtests.multiplier=2 
-Dtests.nightly=true -Dtests.slow=true 
-Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/test-data/enwiki.random.lines.txt
 -Dtests.locale=es -Dtests.timezone=Asia/Qyzylorda -Dtests.asserts=true 
-Dtests.file.encoding=US-ASCII

[repro] git rev-parse --abbrev-ref HEAD
[repro] git rev-parse HEAD
[repro] Initial local git branch/revision: 
45b772f4a995c618b48ff05c6129c5683df92f88
[repro] git fetch
[repro] git checkout 45b772f4a995c618b48ff05c6129c5683df92f88

[...truncated 1 lines...]
[repro] git merge --ff-only

[...truncated 1 lines...]
[repro] ant clean

[...truncated 6 lines...]
[repro] Test suites by module:
[repro]solr/core
[repro]   FullSolrCloudDistribCmdsTest
[repro]solr/solrj
[repro]   CloudSolrClientTest
[repro] ant compile-test

[...truncated 3567 lines...]
[repro] ant test-nocompile -Dtests.dups=5 -Dtests.maxfailures=5 
-Dtests.class="*.FullSolrCloudDistribCmdsTest" -Dtests.showOutput=onerror 
-Dtests.multiplier=2 
-Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/test-data/enwiki.random.lines.txt
 -Dtests.seed=436B9276AB629A12 -Dtests.multiplier=2 -Dtests.nightly=true 
-Dtests.slow=true 
-Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/test-data/enwiki.random.lines.txt
 -Dtests.locale=id-ID -Dtests.timezone=America/Porto_Velho -Dtests.asserts=true 
-Dtests.file.encoding=ISO-8859-1

[...truncated 46416 lines...]
   [junit4] ERROR: JVM J0 ended with an exception, command line: 
/usr/local/asfpackages/java/jdk1.8.0_191/jre/bin/java -ea -esa 
-Dtests.prefix=tests -Dtests.seed=436B9276AB629A12 -Xmx512M -Dtests.iters= 
-Dtests.verbose=false -Dtests.infostream=false -Dtests.codec=random 
-Dtests.postingsformat=random -Dtests.docvaluesformat=random 
-Dtests.locale=id-ID -Dtests.timezone=America/Porto_Velho 
-Dtests.directory=random 
-Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/test-data/enwiki.random.lines.txt
 -Dtests.luceneMatchVersion=8.0.0 -Dtests.cleanthreads=perClass 
-Djava.util.logging.config.file=/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-repro/lucene/tools/junit4/logging.properties
 -Dtests.nightly=true -Dtests.weekly=false -Dtests.monster=false 
-Dtests.slow=true -Dtests.asserts=true -Dtests.multiplier=2 -DtempDir=./temp 
-Djava.io.tmpdir=./temp 
-Dcommon.dir=/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-repro/lucene 
-Dclover.db.dir=/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-repro/lucene/build/clover/db
 
-Djava.security.policy=/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-repro/lucene/tools/junit4/solr-tests.policy
 -Dtests.LUCENE_VERSION=8.0.0 -Djetty.testMode=1 -Djetty.insecurerandom=1 
-Dsolr.directoryFactory=org.apache.solr.core.MockDirectoryFactory 
-Djava.awt.headless=true -Djdk.map.althashing.threshold=0 
-Dtests.src.home=/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-repro/solr/core
 -Djava.security.egd=file:/dev/./urandom 
-Djunit4.childvm.cwd=/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-repro/solr/build/solr-core/test/J0
 
-Djunit4.tempDir=/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-repro/solr/build/solr-core/test/temp
 -Djunit4.childvm.id=0 -Djunit4.childvm.count=3 -Dtests.leaveTemporary=false 
-Dtests.filterstacks=true -Dtests.maxfailures=5 
-Djava.security.manager=org.apache.lucene.util.TestSecurityManager 
-Dfile.encoding=ISO-8859-1 -classpath 

[JENKINS] Lucene-Solr-BadApples-Tests-7.x - Build # 208 - Still Unstable

2018-11-05 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-BadApples-Tests-7.x/208/

4 tests failed.
FAILED:  org.apache.solr.cloud.TestSolrCloudWithKerberosAlt.testBasics

Error Message:
Could not load collection from ZK: testkerberoscollection

Stack Trace:
org.apache.solr.common.SolrException: Could not load collection from ZK: 
testkerberoscollection
at 
__randomizedtesting.SeedInfo.seed([F5ED2FA789C57E6A:C835818BB12B201A]:0)
at 
org.apache.solr.common.cloud.ZkStateReader.getCollectionLive(ZkStateReader.java:1321)
at 
org.apache.solr.common.cloud.ZkStateReader$LazyCollectionRef.get(ZkStateReader.java:737)
at 
org.apache.solr.common.cloud.ClusterState.getCollectionOrNull(ClusterState.java:148)
at 
org.apache.solr.common.cloud.ClusterState.getCollectionOrNull(ClusterState.java:131)
at 
org.apache.solr.cloud.AbstractDistribZkTestBase.waitForRecoveriesToFinish(AbstractDistribZkTestBase.java:154)
at 
org.apache.solr.cloud.TestSolrCloudWithKerberosAlt.testCollectionCreateSearchDelete(TestSolrCloudWithKerberosAlt.java:137)
at 
org.apache.solr.cloud.TestSolrCloudWithKerberosAlt.testBasics(TestSolrCloudWithKerberosAlt.java:127)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1742)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:935)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:971)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:985)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:944)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:830)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:891)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
