[GitHub] lucene-solr pull request #428: SOLR-12586: deprecate joda-time and use java....

2018-08-03 Thread dsmiley
Github user dsmiley commented on a diff in the pull request:

https://github.com/apache/lucene-solr/pull/428#discussion_r207697969
  
--- Diff: 
solr/core/src/test/org/apache/solr/update/processor/ParsingFieldUpdateProcessorsTest.java
 ---
@@ -912,4 +904,18 @@ public void testCascadingParsers() throws Exception {
 }
 assertTrue(mixedDates.isEmpty());
   }
+
+  private Date temporalToDate(TemporalAccessor in, ZoneId timeZoneId) {
+    if (in instanceof OffsetDateTime) {
+      return Date.from(((OffsetDateTime) in).toInstant());
+    } else if (in instanceof ZonedDateTime) {
+      return Date.from(((ZonedDateTime) in).withZoneSameInstant(timeZoneId).toInstant());
--- End diff --

This line should be `return Date.from(((ZonedDateTime) in).toInstant());` 
-- needn't use timeZoneId  (tests pass).  It's functionally equivalent but 
avoids a needless intermediate step.
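
For example, a standalone check (not from the patch; the instant and zone here are arbitrary) showing the two conversions land on the same value:

```java
import java.time.Instant;
import java.time.ZoneId;
import java.time.ZonedDateTime;
import java.util.Date;

public class ZonedToInstantEquivalence {
  public static void main(String[] args) {
    // Arbitrary instant and zones, just for the demo.
    ZonedDateTime zdt = Instant.parse("2010-11-12T13:14:15.168Z").atZone(ZoneId.of("Europe/Athens"));
    ZoneId timeZoneId = ZoneId.of("America/New_York");

    // withZoneSameInstant() only changes how the date-time is expressed;
    // the underlying instant is identical, so the resulting Dates are equal.
    Date direct = Date.from(zdt.toInstant());
    Date rezoned = Date.from(zdt.withZoneSameInstant(timeZoneId).toInstant());
    System.out.println(direct.equals(rezoned)); // true
  }
}
```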


---




[GitHub] lucene-solr pull request #428: SOLR-12586: deprecate joda-time and use java....

2018-08-03 Thread dsmiley
Github user dsmiley commented on a diff in the pull request:

https://github.com/apache/lucene-solr/pull/428#discussion_r207700607
  
--- Diff: 
solr/core/src/java/org/apache/solr/update/processor/ParseDateFieldUpdateProcessorFactory.java
 ---
@@ -172,4 +181,34 @@ public void init(NamedList args) {
   return (null == type) || type instanceof DateValueFieldType;
 };
   }
+
+  public static void validateFormatter(DateTimeFormatter formatter) {
+    // check it's valid via round-trip
+    try {
+      parseInstant(formatter, formatter.format(Instant.now()));
+    } catch (Exception e) {
+      throw new SolrException(SolrException.ErrorCode.SERVER_ERROR,
+          "Bad or unsupported pattern: " + formatter.toFormat().toString(), e);
+    }
+  }
+
+  private static Instant parseInstant(DateTimeFormatter formatter, String dateStr) {
+    final TemporalAccessor temporalAccessor = formatter.parse(dateStr);
+    // parsed successfully.  But is it a full instant or just to the day?
+    if (temporalAccessor.isSupported(ChronoField.INSTANT_SECONDS)) { // has time
+      // has offset time
+      if (temporalAccessor.isSupported(ChronoField.OFFSET_SECONDS)) {
--- End diff --

This part, checking OFFSET_SECONDS, ought not to be present here, I think; 
INSTANT_SECONDS should be self-sufficient.  See 
java.time.Parsed.resolveInstant(), which gives clues as to what's going on.  The 
override "zone" we set earlier on the formatter is taking precedence over the 
OFFSET_SECONDS, which is what led you to try this as a work-around.  I wonder if 
this is a JDK bug; IMO it ought to be flipped.

I think we may need to be smarter about when to set the default zone on the 
formatter and when not to.  If the format specifies a zone or has a 'Z' 
literal, then I think we don't; otherwise we set it.  Or we don't set it on the 
formatter at all and instead incorporate it here in parseInstant(), which may be 
safer.  I'll propose something more concrete after I get some sleep; I have some WIP code.
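
Roughly the shape I have in mind -- an untested sketch, not the WIP code; the extra 
`defaultZone` parameter is just to illustrate handling the default zone here rather 
than on the formatter:

```java
import java.time.Instant;
import java.time.LocalDate;
import java.time.LocalDateTime;
import java.time.LocalTime;
import java.time.ZoneId;
import java.time.format.DateTimeFormatter;
import java.time.temporal.ChronoField;
import java.time.temporal.TemporalAccessor;
import java.time.temporal.TemporalQueries;

class ParseInstantSketch {
  static Instant parseInstant(DateTimeFormatter formatter, String dateStr, ZoneId defaultZone) {
    TemporalAccessor parsed = formatter.parse(dateStr);
    // If the text carried an offset or zone, resolution yields INSTANT_SECONDS
    // and we can convert directly -- no separate OFFSET_SECONDS branch.
    if (parsed.isSupported(ChronoField.INSTANT_SECONDS)) {
      return Instant.from(parsed);
    }
    // Otherwise interpret whatever local fields we got in the default zone.
    // (Assumes the pattern resolves at least a date; time defaults to midnight.)
    LocalDate date = parsed.query(TemporalQueries.localDate());
    LocalTime time = parsed.query(TemporalQueries.localTime());
    return LocalDateTime.of(date, time != null ? time : LocalTime.MIDNIGHT)
        .atZone(defaultZone).toInstant();
  }
}
```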


---




[GitHub] lucene-solr pull request #428: SOLR-12586: deprecate joda-time and use java....

2018-08-03 Thread dsmiley
Github user dsmiley commented on a diff in the pull request:

https://github.com/apache/lucene-solr/pull/428#discussion_r207699324
  
--- Diff: 
solr/core/src/test/org/apache/solr/update/processor/ParsingFieldUpdateProcessorsTest.java
 ---
@@ -209,9 +200,8 @@ public void testParseDateFormats() throws Exception {
 IndexSchema schema = h.getCore().getLatestSchema();
 assertNotNull(schema.getFieldOrNull("dateUTC_dt")); // should match "*_dt" dynamic field
 
-String dateTimePattern = "yyyy-MM-dd'T'HH:mm:ss.SSSZ";
-DateTimeFormatter dateTimeFormatterUTC = DateTimeFormat.forPattern(dateTimePattern);
-DateTime dateTimeUTC = dateTimeFormatterUTC.parseDateTime(formatExamples[1]);
+String dateTimePattern = "yyyy-MM-dd'T'HH:mm:ss.SSSZ";
+Instant UTCInstant = Instant.parse(formatExamples[1]);
--- End diff --

IMO `UTCInstant` suggests that other Instants might somehow not be UTC, yet this 
one is.  An Instant is an unambiguous point in time that has no relation to time 
zones.  Someone might _express_ / format an instant in a zone, but the instant 
itself is neutral to that notion.  Perhaps "theInstant" or "expectInstant" is 
better, as it's the "expected" side of the assert we test in the loop below.
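
For example (standalone snippet, purely to illustrate the point):

```java
import java.time.Instant;
import java.time.ZoneId;

public class InstantIsZoneNeutral {
  public static void main(String[] args) {
    Instant theInstant = Instant.parse("2010-11-12T13:14:15.168Z");

    // The same point on the timeline, merely expressed in two zones;
    // the Instant itself carries no zone at all.
    System.out.println(theInstant.atZone(ZoneId.of("UTC")));               // 2010-11-12T13:14:15.168Z[UTC]
    System.out.println(theInstant.atZone(ZoneId.of("America/New_York")));  // 2010-11-12T08:14:15.168-05:00[America/New_York]
    System.out.println(theInstant.atZone(ZoneId.of("America/New_York")).toInstant().equals(theInstant)); // true
  }
}
```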


---




[JENKINS] Lucene-Solr-BadApples-master-Linux (64bit/jdk-9.0.4) - Build # 73 - Still Unstable!

2018-08-03 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-BadApples-master-Linux/73/
Java: 64bit/jdk-9.0.4 -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC

2 tests failed.
FAILED:  org.apache.solr.cloud.SSLMigrationTest.test

Error Message:
Replica didn't have the proper urlScheme in the ClusterState

Stack Trace:
java.lang.AssertionError: Replica didn't have the proper urlScheme in the 
ClusterState
at 
__randomizedtesting.SeedInfo.seed([73DC7460569C0348:FB884BBAF8606EB0]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at 
org.apache.solr.cloud.SSLMigrationTest.assertReplicaInformation(SSLMigrationTest.java:104)
at 
org.apache.solr.cloud.SSLMigrationTest.testMigrateSSL(SSLMigrationTest.java:97)
at org.apache.solr.cloud.SSLMigrationTest.test(SSLMigrationTest.java:61)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:564)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:1008)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:983)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 

[jira] [Commented] (SOLR-12616) Track down performance slowdowns with ExportWriter

2018-08-03 Thread Varun Thacker (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12616?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16569031#comment-16569031
 ] 

Varun Thacker commented on SOLR-12616:
--

Patch which tests {{SingleValueSortDoc}} vs {{SortDoc}}.

Indexed 25M docs onto a 1 shard x 1 replica collection.

query - {{/export?q=*:*&fl=id&sort=id desc}}

With {{-Dtest.export.writer.optimized=true}}: 7m13, 7m23

Without {{-Dtest.export.writer.optimized=true}}: 10m27, 10m31

I haven't yet looked into what the difference between SortDoc and 
SingleValueSortDoc is that causes such a speed difference.

> Track down performance slowdowns with ExportWriter
> --
>
> Key: SOLR-12616
> URL: https://issues.apache.org/jira/browse/SOLR-12616
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Varun Thacker
>Priority: Major
> Attachments: SOLR-12616.patch
>
>
> Just to be clear for users glancing through this Jira: the performance 
> slowdown is currently on an unreleased version of Solr, so no released versions 
> are affected by this.
> While doing some benchmarking for SOLR-12572, I compared the export writer's 
> performance against Solr 7.4 and there seem to be some slowdowns that have 
> been introduced. Most likely this is because of SOLR-11598.
> In a 1 shard x 1 replica collection with 25M docs, we issue the following 
> query 
> {code:java}
> /export?q=*:*&sort=id desc&fl=id{code}
> Solr 7.4 took 8:10 , 8:20 and 8:22 in the 3 runs that I did
> Master took 10:46
> Amrit's done some more benchmarking so he can fill in with some more numbers 
> here. 
>  






[jira] [Updated] (SOLR-12616) Track down performance slowdowns with ExportWriter

2018-08-03 Thread Varun Thacker (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-12616?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Thacker updated SOLR-12616:
-
Attachment: SOLR-12616.patch

> Track down performance slowdowns with ExportWriter
> --
>
> Key: SOLR-12616
> URL: https://issues.apache.org/jira/browse/SOLR-12616
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Varun Thacker
>Priority: Major
> Attachments: SOLR-12616.patch
>
>
> Just to be clear for users glancing through this Jira: the performance 
> slowdown is currently on an unreleased version of Solr, so no released versions 
> are affected by this.
> While doing some benchmarking for SOLR-12572, I compared the export writer's 
> performance against Solr 7.4 and there seem to be some slowdowns that have 
> been introduced. Most likely this is because of SOLR-11598.
> In a 1 shard x 1 replica collection with 25M docs, we issue the following 
> query 
> {code:java}
> /export?q=*:*&sort=id desc&fl=id{code}
> Solr 7.4 took 8:10 , 8:20 and 8:22 in the 3 runs that I did
> Master took 10:46
> Amrit's done some more benchmarking so he can fill in with some more numbers 
> here. 
>  






[JENKINS] Lucene-Solr-master-Linux (64bit/jdk-10.0.1) - Build # 22593 - Unstable!

2018-08-03 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/22593/
Java: 64bit/jdk-10.0.1 -XX:-UseCompressedOops -XX:+UseParallelGC

45 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.solr.client.solrj.io.stream.JDBCStreamTest

Error Message:
Error from server at https://127.0.0.1:40055/solr: create the collection time 
out:180s

Stack Trace:
org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error 
from server at https://127.0.0.1:40055/solr: create the collection time out:180s
at __randomizedtesting.SeedInfo.seed([CB6DCFDCD99459A]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:643)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:255)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:244)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.doRequest(LBHttpSolrClient.java:483)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:413)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1106)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:886)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:819)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:194)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:211)
at 
org.apache.solr.client.solrj.io.stream.JDBCStreamTest.setupCluster(JDBCStreamTest.java:76)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:564)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:874)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.base/java.lang.Thread.run(Thread.java:844)


FAILED:  
junit.framework.TestSuite.org.apache.solr.client.solrj.io.stream.JDBCStreamTest

Error Message:
Captured an uncaught exception in thread: Thread[id=438, 
name=OverseerAutoScalingTriggerThread-72100355675783176-127.0.0.1:34861_solr-n_00,
 state=RUNNABLE, group=Overseer autoscaling triggers]

Stack Trace:
com.carrotsearch.randomizedtesting.UncaughtExceptionError: Captured an uncaught 
exception in thread: Thread[id=438, 
name=OverseerAutoScalingTriggerThread-72100355675783176-127.0.0.1:34861_solr-n_00,
 state=RUNNABLE, group=Overseer autoscaling triggers]
Caused by: java.lang.NoClassDefFoundError: Could not initialize class 
org.apache.solr.client.solrj.cloud.autoscaling.VariableBase
at __randomizedtesting.SeedInfo.seed([CB6DCFDCD99459A]:0)
at 
org.apache.solr.client.solrj.cloud.autoscaling.Policy.(Policy.java:127)
at 

[JENKINS] Lucene-Solr-BadApples-Tests-master - Build # 115 - Still Unstable

2018-08-03 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-BadApples-Tests-master/115/

3 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.solr.prometheus.collector.SolrCollectorTest

Error Message:
Error from server at https://127.0.0.1:40195/solr: KeeperErrorCode = Session 
expired for /configs/collection1_config

Stack Trace:
org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error 
from server at https://127.0.0.1:40195/solr: KeeperErrorCode = Session expired 
for /configs/collection1_config
at __randomizedtesting.SeedInfo.seed([636CE60D1FF01E1D]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:643)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:255)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:244)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.doRequest(LBHttpSolrClient.java:483)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:413)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1106)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:886)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:819)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:194)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:211)
at 
org.apache.solr.prometheus.exporter.SolrExporterTestBase.setupCluster(SolrExporterTestBase.java:48)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:874)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:748)


FAILED:  
org.apache.solr.cloud.DeleteReplicaTest.raceConditionOnDeleteAndRegisterReplica

Error Message:
Expected 1x2 collections null Live Nodes: [127.0.0.1:35840_solr, 
127.0.0.1:38873_solr, 127.0.0.1:39983_solr, 127.0.0.1:43883_solr] Last 
available state: null

Stack Trace:
java.lang.AssertionError: Expected 1x2 collections
null
Live Nodes: [127.0.0.1:35840_solr, 127.0.0.1:38873_solr, 127.0.0.1:39983_solr, 
127.0.0.1:43883_solr]
Last available state: null
at 
__randomizedtesting.SeedInfo.seed([5FF6ACC14610FDB8:35E0CD112EE2B772]:0)
at org.junit.Assert.fail(Assert.java:93)
at 
org.apache.solr.cloud.SolrCloudTestCase.waitForState(SolrCloudTestCase.java:278)
at 
org.apache.solr.cloud.DeleteReplicaTest.raceConditionOnDeleteAndRegisterReplica(DeleteReplicaTest.java:237)
at 

[JENKINS] Lucene-Solr-repro - Build # 1137 - Still Unstable

2018-08-03 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-repro/1137/

[...truncated 28 lines...]
[repro] Jenkins log URL: 
https://builds.apache.org/job/Lucene-Solr-BadApples-Tests-7.x/120/consoleText

[repro] Revision: 61db4ab8acc33c0cb8a649629a5e67405bea

[repro] Repro line:  ant test  -Dtestcase=IndexSizeTriggerTest 
-Dtests.method=testSplitIntegration -Dtests.seed=B37F958BF72B6815 
-Dtests.multiplier=2 -Dtests.slow=true -Dtests.badapples=true 
-Dtests.locale=sr-Latn-BA -Dtests.timezone=Etc/GMT0 -Dtests.asserts=true 
-Dtests.file.encoding=ISO-8859-1

[repro] Repro line:  ant test  -Dtestcase=IndexSizeTriggerTest 
-Dtests.method=testMixedBounds -Dtests.seed=B37F958BF72B6815 
-Dtests.multiplier=2 -Dtests.slow=true -Dtests.badapples=true 
-Dtests.locale=sr-Latn-BA -Dtests.timezone=Etc/GMT0 -Dtests.asserts=true 
-Dtests.file.encoding=ISO-8859-1

[repro] git rev-parse --abbrev-ref HEAD
[repro] git rev-parse HEAD
[repro] Initial local git branch/revision: 
f8db5d0afd34ebea4ae414a2eb148f926830be34
[repro] git fetch
[repro] git checkout 61db4ab8acc33c0cb8a649629a5e67405bea

[...truncated 2 lines...]
[repro] git merge --ff-only

[...truncated 1 lines...]
[repro] ant clean

[...truncated 6 lines...]
[repro] Test suites by module:
[repro]solr/core
[repro]   IndexSizeTriggerTest
[repro] ant compile-test

[...truncated 3334 lines...]
[repro] ant test-nocompile -Dtests.dups=5 -Dtests.maxfailures=5 
-Dtests.class="*.IndexSizeTriggerTest" -Dtests.showOutput=onerror  
-Dtests.seed=B37F958BF72B6815 -Dtests.multiplier=2 -Dtests.slow=true 
-Dtests.badapples=true -Dtests.locale=sr-Latn-BA -Dtests.timezone=Etc/GMT0 
-Dtests.asserts=true -Dtests.file.encoding=ISO-8859-1

[...truncated 6428 lines...]
[repro] Setting last failure code to 256

[repro] Failures:
[repro]   3/5 failed: org.apache.solr.cloud.autoscaling.IndexSizeTriggerTest
[repro] git checkout f8db5d0afd34ebea4ae414a2eb148f926830be34

[...truncated 2 lines...]
[repro] Exiting with code 256

[...truncated 5 lines...]


[JENKINS] Lucene-Solr-7.x-Linux (64bit/jdk-10.0.1) - Build # 2479 - Unstable!

2018-08-03 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Linux/2479/
Java: 64bit/jdk-10.0.1 -XX:-UseCompressedOops -XX:+UseParallelGC

1 tests failed.
FAILED:  org.apache.solr.cloud.TestWithCollection.testDeleteWithCollection

Error Message:
Error from server at http://127.0.0.1:39561/solr: Could not find collection : 
testDeleteWithCollection_abc

Stack Trace:
org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error 
from server at http://127.0.0.1:39561/solr: Could not find collection : 
testDeleteWithCollection_abc
at 
__randomizedtesting.SeedInfo.seed([704E65C5677D452B:B9792283EE79795]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:643)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:255)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:244)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.doRequest(LBHttpSolrClient.java:483)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:413)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1106)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:886)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:819)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:194)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:211)
at 
org.apache.solr.cloud.TestWithCollection.testDeleteWithCollection(TestWithCollection.java:197)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:564)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[GitHub] lucene-solr pull request #428: SOLR-12586: deprecate joda-time and use java....

2018-08-03 Thread barrotsteindev
Github user barrotsteindev commented on a diff in the pull request:

https://github.com/apache/lucene-solr/pull/428#discussion_r207692433
  
--- Diff: 
solr/core/src/test/org/apache/solr/update/processor/ParsingFieldUpdateProcessorsTest.java
 ---
@@ -50,10 +55,9 @@ public void testParseDateRoundTrip() throws Exception {
 String dateString = "2010-11-12T13:14:15.168Z";
 SolrInputDocument d = processAdd("parse-date", doc(f("id", "9"), f("date_dt", dateString)));
 assertNotNull(d);
-DateTimeFormatter dateTimeFormatter = ISODateTimeFormat.dateTime();
-DateTime dateTime = dateTimeFormatter.parseDateTime(dateString);
+ZonedDateTime localDateTime = ZonedDateTime.parse(dateString, DateTimeFormatter.ISO_DATE_TIME);
 assertTrue(d.getFieldValue("date_dt") instanceof Date);
-assertEquals(dateTime.getMillis(), ((Date) d.getFieldValue("date_dt")).getTime());
+assertEquals(localDateTime.withZoneSameInstant(ZoneOffset.UTC).toInstant().toEpochMilli(),
+    ((Date) d.getFieldValue("date_dt")).getTime());
--- End diff --

You are right. I have pushed a commit to address this issue. I will try and 
look into other areas that could be improved/simplified.


---




[jira] [Commented] (SOLR-11792) tvrh component doesn't work if unique key has stored="false"

2018-08-03 Thread Erick Erickson (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-11792?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16568943#comment-16568943
 ] 

Erick Erickson commented on SOLR-11792:
---

It was right there in front of me when I looked at 11770 so I fixed it there.

> tvrh component doesn't work if unique key has stored="false"
> 
>
> Key: SOLR-11792
> URL: https://issues.apache.org/jira/browse/SOLR-11792
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SearchComponents - other
>Affects Versions: 6.6.2
>Reporter: Nikolay Martynov
>Assignee: Erick Erickson
>Priority: Major
>
> If I create index with unique key defined like
> {code}
>  docValues="true"/>
> {code}
> then searches seem to be working, but {{tvrh}} doesn't return any vectors for 
> fields that have one stored.
> Upon a cursory look at the code it looks like {{tvrh}} component requires 
> unique key to be specifically stored.
> Ideally {{tvrh}} should work fine with docValues. And at the very least this 
> gotcha should be documented, probably here: 
> https://lucene.apache.org/solr/guide/6_6/field-properties-by-use-case.html






[jira] [Commented] (SOLR-11770) NPE in tvrh if no field is specified and document doesn't contain any fields with term vectors

2018-08-03 Thread Erick Erickson (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-11770?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16568939#comment-16568939
 ] 

Erick Erickson commented on SOLR-11770:
---

Here's a patch. The fix for TVRH isn't very interesting, but there are a couple 
of things I'd like to draw to your attention, [~dsmiley] and [~caomanhdat2] in 
particular.

1> I had to make several methods in RetrieveFieldOptimizer public to use them 
outside the package RFO is defined in. I consider this a stopgap until we 
tackle SOLR-12625.

2> Along the way I randomized the stored/docValues fields for "schema.xml" for 
my TVRH test. The defaults are as they are now so it "shouldn't change 
anything". HOWEVER:

2a> I screwed it up on the first attempt and set both to "true" by default, and 
then QueryElevationComponentTest started failing. The fix (which seems safe, but 
isn't really complete) is in BaseEditorialTransformer. Anyone interested, please 
take a look. All tests pass.

2b> It's scary that turning on docValues=true stored=true causes this kind of 
unintended consequence; how many others are lurking around? I propose we 
randomize, in the test framework, the two environment variables you'll see in 
schema.xml in this patch, and flush out any more. This presupposes that the 
three valid combinations should all be supported (stored=false, docValues=false 
doesn't make sense).
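
To make 2b concrete, the idea is roughly this (standalone sketch; the patch would 
use the test framework's random() in a @BeforeClass, and the property names below 
are placeholders for whatever ${...} substitutions schema.xml actually declares):

{code:java}
import java.util.Random;

public class RandomizeIdFieldProps {
  public static void main(String[] args) {
    Random random = new Random();
    boolean stored = random.nextBoolean();
    // stored=false,docValues=false is the one combination that makes no sense, so avoid it.
    boolean docValues = stored ? random.nextBoolean() : true;
    // Placeholder property names; schema.xml would reference them via ${...} substitution.
    System.setProperty("solr.tests.id.stored", Boolean.toString(stored));
    System.setProperty("solr.tests.id.docValues", Boolean.toString(docValues));
  }
}
{code}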

I'll probably commit this over the weekend; the interesting work will be in 
SOLR-12625.

> NPE in tvrh if no field is specified and document doesn't contain any fields 
> with term vectors
> --
>
> Key: SOLR-11770
> URL: https://issues.apache.org/jira/browse/SOLR-11770
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SearchComponents - other
>Affects Versions: 6.6.2
>Reporter: Nikolay Martynov
>Assignee: Erick Erickson
>Priority: Major
> Attachments: SOLR-11770.patch
>
>
> It looks like if {{tvrh}} request doesn't contain {{fl}} parameter and 
> document doesn't have any fields with term vectors then Solr returns NPE.
> Request: 
> {{tvrh?shards.qt=/tvrh=field%3Avalue=json=id%3A123=true}}.
> On our 'old' schema we had some fields with {{termVectors}} and even more 
> fields with position data. In our new schema we tried to remove unused data 
> so we dropped a lot of position data and some term vectors.
> Our documents are 'sparsely' populated - not all documents contain all fields.
> Above request was returning fine for our 'old' schema and returns 500 for our 
> 'new' schema - on exactly same Solr (6.6.2).
> Stack trace:
> {code}
> 2017-12-18 01:15:00.958 ERROR (qtp255041198-46697) [c:test s:shard3 
> r:core_node11 x:test_shard3_replica1] o.a.s.h.RequestHandlerBase 
> java.lang.NullPointerException
>at 
> org.apache.solr.handler.component.TermVectorComponent.process(TermVectorComponent.java:324)
>at 
> org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:296)
>at 
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:173)
>at org.apache.solr.core.SolrCore.execute(SolrCore.java:2482)
>at org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:723)
>at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:529)
>at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:361)
>at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:305)
>at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1691)
>at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:582)
>at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
>at 
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)
>at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:226)
>at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1180)
>at 
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:512)
>at 
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
>at 
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1112)
>at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
>at 
> org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:213)
>at 
> org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:119)
>at 
> 

[jira] [Commented] (LUCENE-8060) Enable top-docs collection optimizations by default

2018-08-03 Thread Steve Rowe (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8060?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16568940#comment-16568940
 ] 

Steve Rowe commented on LUCENE-8060:


Another separate reproducing failure, of 
{{TestIndexWriterReader.testDuringAddIndexes()}}, from 
[https://builds.apache.org/job/Lucene-Solr-BadApples-NightlyTests-master/22/]; 
{{git bisect}} says the first bad commit is also {{99dbe93}} on this issue:

{noformat}
   [junit4]   2> NOTE: reproduce with: ant test  
-Dtestcase=TestIndexWriterReader -Dtests.method=testDuringAddIndexes 
-Dtests.seed=195B60C59D5CF096 -Dtests.multiplier=2 -Dtests.nightly=true 
-Dtests.slow=true -Dtests.badapples=true 
-Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-BadApples-NightlyTests-master/test-data/enwiki.random.lines.txt
 -Dtests.locale=de-AT -Dtests.timezone=Asia/Yangon -Dtests.asserts=true 
-Dtests.file.encoding=UTF-8
   [junit4] FAILURE 7.95s J2 | TestIndexWriterReader.testDuringAddIndexes <<<
   [junit4]> Throwable #1: java.lang.AssertionError
   [junit4]>at 
__randomizedtesting.SeedInfo.seed([195B60C59D5CF096:EB820ED6FAF5E600]:0)
   [junit4]>at 
org.apache.lucene.index.TestIndexWriterReader.testDuringAddIndexes(TestIndexWriterReader.java:772)
   [junit4]>at java.lang.Thread.run(Thread.java:748)
   [junit4]   1> TEST: now get reader
   [junit4]   2> NOTE: leaving temporary files on disk at: 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-BadApples-NightlyTests-master/checkout/lucene/build/core/test/J2/temp/lucene.index.TestIndexWriterReader_195B60C59D5CF096-001
   [junit4]   2> NOTE: test params are: codec=Asserting(Lucene70): 
{field1=FSTOrd50, indexname=PostingsFormat(name=LuceneFixedGap), 
field=PostingsFormat(name=LuceneVarGapFixedInterval), 
foo=PostingsFormat(name=LuceneVarGapFixedInterval), id=FSTOrd50, 
field3=PostingsFormat(name=LuceneFixedGap), 
field2=PostingsFormat(name=LuceneVarGapFixedInterval), field5=FSTOrd50, 
field4=TestBloomFilteredLucenePostings(BloomFilteringPostingsFormat(Lucene50(blocksize=128)))},
 docValues:{}, maxPointsInLeafNode=258, maxMBSortInHeap=7.974682291420739, 
sim=Asserting(org.apache.lucene.search.similarities.AssertingSimilarity@7f24c04),
 locale=de-AT, timezone=Asia/Yangon
   [junit4]   2> NOTE: Linux 4.4.0-112-generic amd64/Oracle Corporation 
1.8.0_172 (64-bit)/cpus=4,threads=1,free=176866848,total=520617984
{noformat}

> Enable top-docs collection optimizations by default
> ---
>
> Key: LUCENE-8060
> URL: https://issues.apache.org/jira/browse/LUCENE-8060
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Adrien Grand
>Priority: Minor
> Fix For: master (8.0)
>
> Attachments: LUCENE-8060.patch
>
>
> We are getting optimizations when hit counts are not required (sorted 
> indexes, MAXSCORE, short-circuiting of phrase queries) but our users won't 
> benefit from them unless we disable exact hit counts by default or we require 
> them to tell us whether hit counts are required.
> I think making hit counts approximate by default is going to be a bit trappy, 
> so I'm rather leaning towards requiring users to tell us explicitly whether 
> they need total hit counts. I can think of two ways to do that: either by 
> passing a boolean to the IndexSearcher constructor or by adding a boolean to 
> all methods that produce TopDocs instances. I like the latter better but I'm 
> open to discussion or other ideas?






[GitHub] lucene-solr pull request #428: SOLR-12586: deprecate joda-time and use java....

2018-08-03 Thread barrotsteindev
Github user barrotsteindev commented on a diff in the pull request:

https://github.com/apache/lucene-solr/pull/428#discussion_r207691834
  
--- Diff: solr/licenses/joda-time-NOTICE.txt ---
@@ -1,5 +0,0 @@

-=
--- End diff --

Sure thing.


---




[jira] [Created] (SOLR-12625) Combine SolrDocumentFetcher and RetrieveFieldsOptimizer

2018-08-03 Thread Erick Erickson (JIRA)
Erick Erickson created SOLR-12625:
-

 Summary: Combine SolrDocumentFetcher and RetrieveFieldsOptimizer
 Key: SOLR-12625
 URL: https://issues.apache.org/jira/browse/SOLR-12625
 Project: Solr
  Issue Type: Improvement
  Security Level: Public (Default Security Level. Issues are Public)
Reporter: Erick Erickson
Assignee: Erick Erickson


We have SolrDocumentFetcher and RetrieveFieldsOptimizer. The
relationship between the two is unclear at first glance. Using
SolrDocumentFetcher by itself is (or can be) inefficient.

WDYT about combining the two? Is there a good reason you would want to
use SolrDocumentFetcher _instead_ of RetrieveFieldsOptimizer?

Ideally I'd want to be able to write code like:

solrDocumentFetcher.fillDocValuesMostEfficiently

That would create an optimizer and "do the right thing".

Assigning to myself to keep track, but if anyone feels motivated feel free to 
take it over.
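
Purely to illustrate the shape I'm imagining (nothing below exists yet; the names 
are made up):

{code:java}
import java.io.IOException;
import java.util.Map;
import java.util.Set;

// Hypothetical sketch only, not an existing Solr API: a single entry point that
// builds and uses the field-retrieval optimizer internally so callers can't skip it.
public interface DocumentValueFetcher {
  /** Fetch the requested fields for a doc, choosing stored vs. docValues per field. */
  Map<String, Object> fetchMostEfficiently(int docId, Set<String> fieldNames) throws IOException;
}
{code}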






[jira] [Updated] (SOLR-11770) NPE in tvrh if no field is specified and document doesn't contain any fields with term vectors

2018-08-03 Thread Erick Erickson (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-11770?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erick Erickson updated SOLR-11770:
--
Attachment: SOLR-11770.patch

> NPE in tvrh if no field is specified and document doesn't contain any fields 
> with term vectors
> --
>
> Key: SOLR-11770
> URL: https://issues.apache.org/jira/browse/SOLR-11770
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SearchComponents - other
>Affects Versions: 6.6.2
>Reporter: Nikolay Martynov
>Assignee: Erick Erickson
>Priority: Major
> Attachments: SOLR-11770.patch
>
>
> It looks like if {{tvrh}} request doesn't contain {{fl}} parameter and 
> document doesn't have any fields with term vectors then Solr returns NPE.
> Request: 
> {{tvrh?shards.qt=/tvrh=field%3Avalue=json=id%3A123=true}}.
> On our 'old' schema we had some fields with {{termVectors}} and even more 
> fields with position data. In our new schema we tried to remove unused data 
> so we dropped a lot of position data and some term vectors.
> Our documents are 'sparsely' populated - not all documents contain all fields.
> Above request was returning fine for our 'old' schema and returns 500 for our 
> 'new' schema - on exactly same Solr (6.6.2).
> Stack trace:
> {code}
> 2017-12-18 01:15:00.958 ERROR (qtp255041198-46697) [c:test s:shard3 
> r:core_node11 x:test_shard3_replica1] o.a.s.h.RequestHandlerBase 
> java.lang.NullPointerException
>at 
> org.apache.solr.handler.component.TermVectorComponent.process(TermVectorComponent.java:324)
>at 
> org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:296)
>at 
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:173)
>at org.apache.solr.core.SolrCore.execute(SolrCore.java:2482)
>at org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:723)
>at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:529)
>at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:361)
>at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:305)
>at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1691)
>at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:582)
>at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
>at 
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)
>at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:226)
>at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1180)
>at 
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:512)
>at 
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
>at 
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1112)
>at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
>at 
> org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:213)
>at 
> org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:119)
>at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134)
>at 
> org.eclipse.jetty.rewrite.handler.RewriteHandler.handle(RewriteHandler.java:335)
>at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134)
>at org.eclipse.jetty.server.Server.handle(Server.java:534)
>at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:320)
>at 
> org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:251)
>at 
> org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:273)
>at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:95)
>at 
> org.eclipse.jetty.io.SelectChannelEndPoint$2.run(SelectChannelEndPoint.java:93)
>at 
> org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.executeProduceConsume(ExecuteProduceConsume.java:303)
>at 
> org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.produceConsume(ExecuteProduceConsume.java:148)
>at 
> org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.run(ExecuteProduceConsume.java:136)
>at 
> org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:671)
>at 
> org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:589)
>

[jira] [Commented] (LUCENE-8060) Enable top-docs collection optimizations by default

2018-08-03 Thread Steve Rowe (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8060?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16568929#comment-16568929
 ] 

Steve Rowe commented on LUCENE-8060:


Seed for a reproducing {{TestApproximationSearchEquivalence.testReqOpt()}} 
failure, from 
[https://builds.apache.org/job/Lucene-Solr-BadApples-NightlyTests-master/22/]; 
{{git bisect}} says the first bad commit is {{99dbe93}} on this issue:

{noformat}
Checking out Revision 6afd3d11929a75e3b3310638b32f4ed55da3ea6e 
(refs/remotes/origin/master)
[...]
   [junit4]   2> NOTE: reproduce with: ant test  
-Dtestcase=TestApproximationSearchEquivalence -Dtests.method=testReqOpt 
-Dtests.seed=195B60C59D5CF096 -Dtests.multiplier=2 -Dtests.nightly=true 
-Dtests.slow=true -Dtests.badapples=true 
-Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-BadApples-NightlyTests-master/test-data/enwiki.random.lines.txt
 -Dtests.locale=th-TH -Dtests.timezone=Pacific/Tahiti -Dtests.asserts=true 
-Dtests.file.encoding=UTF-8
   [junit4] FAILURE 2.01s J1 | TestApproximationSearchEquivalence.testReqOpt <<<
   [junit4]> Throwable #1: java.lang.AssertionError: target = 4043 < docID 
= 4044
   [junit4]>at 
__randomizedtesting.SeedInfo.seed([195B60C59D5CF096:7DBAE508B63B8854]:0)
   [junit4]>at 
org.apache.lucene.search.AssertingScorer.advanceShallow(AssertingScorer.java:81)
   [junit4]>at 
org.apache.lucene.search.ReqOptSumScorer.advanceShallow(ReqOptSumScorer.java:215)
   [junit4]>at 
org.apache.lucene.search.AssertingScorer.advanceShallow(AssertingScorer.java:82)
   [junit4]>at 
org.apache.lucene.search.BlockMaxConjunctionScorer.advanceShallow(BlockMaxConjunctionScorer.java:228)
   [junit4]>at 
org.apache.lucene.search.BlockMaxConjunctionScorer$1.moveToNextBlock(BlockMaxConjunctionScorer.java:89)
   [junit4]>at 
org.apache.lucene.search.BlockMaxConjunctionScorer$1.advanceTarget(BlockMaxConjunctionScorer.java:110)
   [junit4]>at 
org.apache.lucene.search.BlockMaxConjunctionScorer$1.doNext(BlockMaxConjunctionScorer.java:181)
   [junit4]>at 
org.apache.lucene.search.BlockMaxConjunctionScorer$1.advance(BlockMaxConjunctionScorer.java:137)
   [junit4]>at 
org.apache.lucene.search.BlockMaxConjunctionScorer$1.nextDoc(BlockMaxConjunctionScorer.java:132)
   [junit4]>at 
org.apache.lucene.search.Weight$DefaultBulkScorer.scoreAll(Weight.java:261)
   [junit4]>at 
org.apache.lucene.search.Weight$DefaultBulkScorer.score(Weight.java:214)
   [junit4]>at 
org.apache.lucene.search.BulkScorer.score(BulkScorer.java:39)
   [junit4]>at 
org.apache.lucene.search.AssertingBulkScorer.score(AssertingBulkScorer.java:71)
   [junit4]>at 
org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:652)
   [junit4]>at 
org.apache.lucene.search.AssertingIndexSearcher.search(AssertingIndexSearcher.java:72)
   [junit4]>at 
org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:443)
   [junit4]>at 
org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:567)
   [junit4]>at 
org.apache.lucene.search.IndexSearcher.searchAfter(IndexSearcher.java:419)
   [junit4]>at 
org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:430)
   [junit4]>at 
org.apache.lucene.search.SearchEquivalenceTestBase.assertSameScores(SearchEquivalenceTestBase.java:251)
   [junit4]>at 
org.apache.lucene.search.SearchEquivalenceTestBase.assertSameScores(SearchEquivalenceTestBase.java:228)
   [junit4]>at 
org.apache.lucene.search.TestApproximationSearchEquivalence.testReqOpt(TestApproximationSearchEquivalence.java:296)
   [junit4]>at java.lang.Thread.run(Thread.java:748)
   [junit4]   2> NOTE: test params are: codec=CheapBastard, 
sim=Asserting(org.apache.lucene.search.similarities.AssertingSimilarity@150ed858),
 locale=th-TH, timezone=Pacific/Tahiti
   [junit4]   2> NOTE: Linux 4.4.0-112-generic amd64/Oracle Corporation 
1.8.0_172 (64-bit)/cpus=4,threads=1,free=389119792,total=526385152
{noformat}

> Enable top-docs collection optimizations by default
> ---
>
> Key: LUCENE-8060
> URL: https://issues.apache.org/jira/browse/LUCENE-8060
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Adrien Grand
>Priority: Minor
> Fix For: master (8.0)
>
> Attachments: LUCENE-8060.patch
>
>
> We are getting optimizations when hit counts are not required (sorted 
> indexes, MAXSCORE, short-circuiting of phrase queries) but our users won't 
> benefit from them unless we disable exact hit counts by default or we require 
> them to tell us whether hit counts are required.
> I think making hit counts 

[jira] [Updated] (SOLR-12615) HashQParserPlugin will throw an NPE for string hash key and documents have empty value

2018-08-03 Thread Varun Thacker (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-12615?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Thacker updated SOLR-12615:
-
Summary: HashQParserPlugin will throw an NPE for string hash key and 
documents have empty value  (was: Search stream throws an NPE with 
partitionKeys and empty values)

> HashQParserPlugin will throw an NPE for string hash key and documents have 
> empty value
> --
>
> Key: SOLR-12615
> URL: https://issues.apache.org/jira/browse/SOLR-12615
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Varun Thacker
>Priority: Major
> Fix For: master (8.0), 7.5
>
> Attachments: SOLR-12615.patch
>
>
> If I index documents where the partitionKeys field is missing from a few 
> docs, then the stream expression throws an NPE.
> docs:
> {code:java}
> [
> {"id" : "1", "search_term_s" : "query1"},
> {"id" : "2", "search_term_s" : "query1"},
> {"id" : "3"},
> {"id" : "4"}
> ]{code}
>  
> query
> {code:java}
> search(test_empty, q="*:*", fl="search_term_s,id" , sort="search_term_s 
> desc", qt="/export", partitionKeys="search_term_s"){code}
>  
> logs
> {code:java}
> INFO - 2018-08-03 01:44:36.156; [c:test_empty s:shard1 r:core_node2 
> x:test_empty_shard1_replica_n1] org.apache.solr.core.SolrCore.Request; 
> [test_empty_shard1_replica_n1] webapp=/solr path=/stream 
> params={expr=search(test_empty,+q%3D"*:*",+fl%3D"search_term_s,id"+,+sort%3D"search_term_s+desc",+qt%3D"/export",+partitionKeys%3D"search_term_s")&_=1533260573672}
>  status=0 QTime=11
> INFO - 2018-08-03 01:44:36.160; [ ] 
> org.apache.solr.common.cloud.ConnectionManager; zkClient has connected
> INFO - 2018-08-03 01:44:36.162; [c:test_empty s:shard1 r:core_node2 
> x:test_empty_shard1_replica_n1] org.apache.solr.common.cloud.ZkStateReader; 
> Updated live nodes from ZooKeeper... (0) -> (1)
> INFO - 2018-08-03 01:44:36.164; [c:test_empty s:shard1 r:core_node2 
> x:test_empty_shard1_replica_n1] 
> org.apache.solr.client.solrj.impl.ZkClientClusterStateProvider; Cluster at 
> localhost:9888 ready
> INFO - 2018-08-03 01:44:36.207; [c:test_empty s:shard1 r:core_node2 
> x:test_empty_shard1_replica_n1] org.apache.solr.core.SolrCore.Request; 
> [test_empty_shard1_replica_n1] webapp=/solr path=/export 
> params={q=*:*=false=off=search_term_s,id=search_term_s+desc=search_term_s={!hash+workers%3D1+worker%3D0}=json=2.2}
>  status=500 QTime=36
> ERROR - 2018-08-03 01:44:36.209; [c:test_empty s:shard1 r:core_node2 
> x:test_empty_shard1_replica_n1] org.apache.solr.servlet.HttpSolrCall; 
> null:java.io.IOException: java.lang.RuntimeException: 
> java.lang.NullPointerException
> at 
> org.apache.solr.search.HashQParserPlugin$HashQuery.createWeight(HashQParserPlugin.java:130)
> at org.apache.lucene.search.IndexSearcher.createWeight(IndexSearcher.java:743)
> at org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:463)
> at org.apache.solr.search.DocSetUtil.createDocSetGeneric(DocSetUtil.java:151)
> at org.apache.solr.search.DocSetUtil.createDocSet(DocSetUtil.java:140)
> at 
> org.apache.solr.search.SolrIndexSearcher.getDocSetNC(SolrIndexSearcher.java:1196)
> at 
> org.apache.solr.search.SolrIndexSearcher.getPositiveDocSet(SolrIndexSearcher.java:836)
> at 
> org.apache.solr.search.SolrIndexSearcher.getProcessedFilter(SolrIndexSearcher.java:1044)
> at 
> org.apache.solr.search.SolrIndexSearcher.getDocListNC(SolrIndexSearcher.java:1563)
> at 
> org.apache.solr.search.SolrIndexSearcher.getDocListC(SolrIndexSearcher.java:1439)
> at org.apache.solr.search.SolrIndexSearcher.search(SolrIndexSearcher.java:586)
> at 
> org.apache.solr.handler.component.QueryComponent.doProcessUngroupedSearch(QueryComponent.java:1435)
> at 
> org.apache.solr.handler.component.QueryComponent.process(QueryComponent.java:375)
> at 
> org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:298)
> at 
> org.apache.solr.handler.ExportHandler.handleRequestBody(ExportHandler.java:37)
> at 
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:199)
> at org.apache.solr.core.SolrCore.execute(SolrCore.java:2539)
> at org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:709)
> at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:515)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:377)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:323)
> at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1634)
> at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:533)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:146)
> at 

[jira] [Updated] (SOLR-12615) Search stream throws an NPE with partitionKeys and empty values

2018-08-03 Thread Varun Thacker (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-12615?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Thacker updated SOLR-12615:
-
Affects Version/s: (was: 7.4)

> Search stream throws an NPE with partitionKeys and empty values
> ---
>
> Key: SOLR-12615
> URL: https://issues.apache.org/jira/browse/SOLR-12615
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Varun Thacker
>Priority: Major
> Fix For: master (8.0), 7.5
>
> Attachments: SOLR-12615.patch
>
>
> If I index documents where the partitionKeys field is missing from a few 
> docs, then the stream expression throws an NPE
> docs:
> {code:java}
> [
> {"id" : "1", "search_term_s" : "query1"},
> {"id" : "2", "search_term_s" : "query1"},
> {"id" : "3"},
> {"id" : "4"}
> ]{code}
>  
> query
> {code:java}
> search(test_empty, q="*:*", fl="search_term_s,id" , sort="search_term_s 
> desc", qt="/export", partitionKeys="search_term_s"){code}
>  
> logs
> {code:java}
> INFO - 2018-08-03 01:44:36.156; [c:test_empty s:shard1 r:core_node2 
> x:test_empty_shard1_replica_n1] org.apache.solr.core.SolrCore.Request; 
> [test_empty_shard1_replica_n1] webapp=/solr path=/stream 
> params={expr=search(test_empty,+q%3D"*:*",+fl%3D"search_term_s,id"+,+sort%3D"search_term_s+desc",+qt%3D"/export",+partitionKeys%3D"search_term_s")&_=1533260573672}
>  status=0 QTime=11
> INFO - 2018-08-03 01:44:36.160; [ ] 
> org.apache.solr.common.cloud.ConnectionManager; zkClient has connected
> INFO - 2018-08-03 01:44:36.162; [c:test_empty s:shard1 r:core_node2 
> x:test_empty_shard1_replica_n1] org.apache.solr.common.cloud.ZkStateReader; 
> Updated live nodes from ZooKeeper... (0) -> (1)
> INFO - 2018-08-03 01:44:36.164; [c:test_empty s:shard1 r:core_node2 
> x:test_empty_shard1_replica_n1] 
> org.apache.solr.client.solrj.impl.ZkClientClusterStateProvider; Cluster at 
> localhost:9888 ready
> INFO - 2018-08-03 01:44:36.207; [c:test_empty s:shard1 r:core_node2 
> x:test_empty_shard1_replica_n1] org.apache.solr.core.SolrCore.Request; 
> [test_empty_shard1_replica_n1] webapp=/solr path=/export 
> params={q=*:*=false=off=search_term_s,id=search_term_s+desc=search_term_s={!hash+workers%3D1+worker%3D0}=json=2.2}
>  status=500 QTime=36
> ERROR - 2018-08-03 01:44:36.209; [c:test_empty s:shard1 r:core_node2 
> x:test_empty_shard1_replica_n1] org.apache.solr.servlet.HttpSolrCall; 
> null:java.io.IOException: java.lang.RuntimeException: 
> java.lang.NullPointerException
> at 
> org.apache.solr.search.HashQParserPlugin$HashQuery.createWeight(HashQParserPlugin.java:130)
> at org.apache.lucene.search.IndexSearcher.createWeight(IndexSearcher.java:743)
> at org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:463)
> at org.apache.solr.search.DocSetUtil.createDocSetGeneric(DocSetUtil.java:151)
> at org.apache.solr.search.DocSetUtil.createDocSet(DocSetUtil.java:140)
> at 
> org.apache.solr.search.SolrIndexSearcher.getDocSetNC(SolrIndexSearcher.java:1196)
> at 
> org.apache.solr.search.SolrIndexSearcher.getPositiveDocSet(SolrIndexSearcher.java:836)
> at 
> org.apache.solr.search.SolrIndexSearcher.getProcessedFilter(SolrIndexSearcher.java:1044)
> at 
> org.apache.solr.search.SolrIndexSearcher.getDocListNC(SolrIndexSearcher.java:1563)
> at 
> org.apache.solr.search.SolrIndexSearcher.getDocListC(SolrIndexSearcher.java:1439)
> at org.apache.solr.search.SolrIndexSearcher.search(SolrIndexSearcher.java:586)
> at 
> org.apache.solr.handler.component.QueryComponent.doProcessUngroupedSearch(QueryComponent.java:1435)
> at 
> org.apache.solr.handler.component.QueryComponent.process(QueryComponent.java:375)
> at 
> org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:298)
> at 
> org.apache.solr.handler.ExportHandler.handleRequestBody(ExportHandler.java:37)
> at 
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:199)
> at org.apache.solr.core.SolrCore.execute(SolrCore.java:2539)
> at org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:709)
> at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:515)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:377)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:323)
> at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1634)
> at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:533)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:146)
> at org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)
> at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132)
> at 
> 

[jira] [Commented] (SOLR-12615) Search stream throws an NPE with partitionKeys and empty values

2018-08-03 Thread Varun Thacker (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12615?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16568889#comment-16568889
 ] 

Varun Thacker commented on SOLR-12615:
--

Created SOLR-12624 for this nocommit in the patch
{code:java}
int workers = localParams.getInt("workers", 0);
int worker = localParams.getInt("worker", 0);
//nocommit : if workers and worker both are 0 it will give an 
java.lang.ArithmeticException: / by zero
//nocommit : if workers and worker both are 0 it will give an 
java.lang.ArithmeticException: / by zero{code}
[~joel.bernstein] does the patch look good to you otherwise? 

> Search stream throws an NPE with partitionKeys and empty values
> ---
>
> Key: SOLR-12615
> URL: https://issues.apache.org/jira/browse/SOLR-12615
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Varun Thacker
>Priority: Major
> Fix For: master (8.0), 7.5
>
> Attachments: SOLR-12615.patch
>
>
> If I index documents where the partitionKeys field is missing from a few 
> docs, then the stream expression throws an NPE
> docs:
> {code:java}
> [
> {"id" : "1", "search_term_s" : "query1"},
> {"id" : "2", "search_term_s" : "query1"},
> {"id" : "3"},
> {"id" : "4"}
> ]{code}
>  
> query
> {code:java}
> search(test_empty, q="*:*", fl="search_term_s,id" , sort="search_term_s 
> desc", qt="/export", partitionKeys="search_term_s"){code}
>  
> logs
> {code:java}
> INFO - 2018-08-03 01:44:36.156; [c:test_empty s:shard1 r:core_node2 
> x:test_empty_shard1_replica_n1] org.apache.solr.core.SolrCore.Request; 
> [test_empty_shard1_replica_n1] webapp=/solr path=/stream 
> params={expr=search(test_empty,+q%3D"*:*",+fl%3D"search_term_s,id"+,+sort%3D"search_term_s+desc",+qt%3D"/export",+partitionKeys%3D"search_term_s")&_=1533260573672}
>  status=0 QTime=11
> INFO - 2018-08-03 01:44:36.160; [ ] 
> org.apache.solr.common.cloud.ConnectionManager; zkClient has connected
> INFO - 2018-08-03 01:44:36.162; [c:test_empty s:shard1 r:core_node2 
> x:test_empty_shard1_replica_n1] org.apache.solr.common.cloud.ZkStateReader; 
> Updated live nodes from ZooKeeper... (0) -> (1)
> INFO - 2018-08-03 01:44:36.164; [c:test_empty s:shard1 r:core_node2 
> x:test_empty_shard1_replica_n1] 
> org.apache.solr.client.solrj.impl.ZkClientClusterStateProvider; Cluster at 
> localhost:9888 ready
> INFO - 2018-08-03 01:44:36.207; [c:test_empty s:shard1 r:core_node2 
> x:test_empty_shard1_replica_n1] org.apache.solr.core.SolrCore.Request; 
> [test_empty_shard1_replica_n1] webapp=/solr path=/export 
> params={q=*:*=false=off=search_term_s,id=search_term_s+desc=search_term_s={!hash+workers%3D1+worker%3D0}=json=2.2}
>  status=500 QTime=36
> ERROR - 2018-08-03 01:44:36.209; [c:test_empty s:shard1 r:core_node2 
> x:test_empty_shard1_replica_n1] org.apache.solr.servlet.HttpSolrCall; 
> null:java.io.IOException: java.lang.RuntimeException: 
> java.lang.NullPointerException
> at 
> org.apache.solr.search.HashQParserPlugin$HashQuery.createWeight(HashQParserPlugin.java:130)
> at org.apache.lucene.search.IndexSearcher.createWeight(IndexSearcher.java:743)
> at org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:463)
> at org.apache.solr.search.DocSetUtil.createDocSetGeneric(DocSetUtil.java:151)
> at org.apache.solr.search.DocSetUtil.createDocSet(DocSetUtil.java:140)
> at 
> org.apache.solr.search.SolrIndexSearcher.getDocSetNC(SolrIndexSearcher.java:1196)
> at 
> org.apache.solr.search.SolrIndexSearcher.getPositiveDocSet(SolrIndexSearcher.java:836)
> at 
> org.apache.solr.search.SolrIndexSearcher.getProcessedFilter(SolrIndexSearcher.java:1044)
> at 
> org.apache.solr.search.SolrIndexSearcher.getDocListNC(SolrIndexSearcher.java:1563)
> at 
> org.apache.solr.search.SolrIndexSearcher.getDocListC(SolrIndexSearcher.java:1439)
> at org.apache.solr.search.SolrIndexSearcher.search(SolrIndexSearcher.java:586)
> at 
> org.apache.solr.handler.component.QueryComponent.doProcessUngroupedSearch(QueryComponent.java:1435)
> at 
> org.apache.solr.handler.component.QueryComponent.process(QueryComponent.java:375)
> at 
> org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:298)
> at 
> org.apache.solr.handler.ExportHandler.handleRequestBody(ExportHandler.java:37)
> at 
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:199)
> at org.apache.solr.core.SolrCore.execute(SolrCore.java:2539)
> at org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:709)
> at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:515)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:377)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:323)
> at 
> 

[jira] [Updated] (SOLR-12615) Search stream throws an NPE with partitionKeys and empty values

2018-08-03 Thread Varun Thacker (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-12615?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Thacker updated SOLR-12615:
-
Fix Version/s: 7.5
   master (8.0)

> Search stream throws an NPE with partitionKeys and empty values
> ---
>
> Key: SOLR-12615
> URL: https://issues.apache.org/jira/browse/SOLR-12615
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Varun Thacker
>Priority: Major
> Fix For: master (8.0), 7.5
>
> Attachments: SOLR-12615.patch
>
>
> If I index documents where the partitionKeys field is missing from a few 
> docs, then the stream expression throws an NPE
> docs:
> {code:java}
> [
> {"id" : "1", "search_term_s" : "query1"},
> {"id" : "2", "search_term_s" : "query1"},
> {"id" : "3"},
> {"id" : "4"}
> ]{code}
>  
> query
> {code:java}
> search(test_empty, q="*:*", fl="search_term_s,id" , sort="search_term_s 
> desc", qt="/export", partitionKeys="search_term_s"){code}
>  
> logs
> {code:java}
> INFO - 2018-08-03 01:44:36.156; [c:test_empty s:shard1 r:core_node2 
> x:test_empty_shard1_replica_n1] org.apache.solr.core.SolrCore.Request; 
> [test_empty_shard1_replica_n1] webapp=/solr path=/stream 
> params={expr=search(test_empty,+q%3D"*:*",+fl%3D"search_term_s,id"+,+sort%3D"search_term_s+desc",+qt%3D"/export",+partitionKeys%3D"search_term_s")&_=1533260573672}
>  status=0 QTime=11
> INFO - 2018-08-03 01:44:36.160; [ ] 
> org.apache.solr.common.cloud.ConnectionManager; zkClient has connected
> INFO - 2018-08-03 01:44:36.162; [c:test_empty s:shard1 r:core_node2 
> x:test_empty_shard1_replica_n1] org.apache.solr.common.cloud.ZkStateReader; 
> Updated live nodes from ZooKeeper... (0) -> (1)
> INFO - 2018-08-03 01:44:36.164; [c:test_empty s:shard1 r:core_node2 
> x:test_empty_shard1_replica_n1] 
> org.apache.solr.client.solrj.impl.ZkClientClusterStateProvider; Cluster at 
> localhost:9888 ready
> INFO - 2018-08-03 01:44:36.207; [c:test_empty s:shard1 r:core_node2 
> x:test_empty_shard1_replica_n1] org.apache.solr.core.SolrCore.Request; 
> [test_empty_shard1_replica_n1] webapp=/solr path=/export 
> params={q=*:*=false=off=search_term_s,id=search_term_s+desc=search_term_s={!hash+workers%3D1+worker%3D0}=json=2.2}
>  status=500 QTime=36
> ERROR - 2018-08-03 01:44:36.209; [c:test_empty s:shard1 r:core_node2 
> x:test_empty_shard1_replica_n1] org.apache.solr.servlet.HttpSolrCall; 
> null:java.io.IOException: java.lang.RuntimeException: 
> java.lang.NullPointerException
> at 
> org.apache.solr.search.HashQParserPlugin$HashQuery.createWeight(HashQParserPlugin.java:130)
> at org.apache.lucene.search.IndexSearcher.createWeight(IndexSearcher.java:743)
> at org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:463)
> at org.apache.solr.search.DocSetUtil.createDocSetGeneric(DocSetUtil.java:151)
> at org.apache.solr.search.DocSetUtil.createDocSet(DocSetUtil.java:140)
> at 
> org.apache.solr.search.SolrIndexSearcher.getDocSetNC(SolrIndexSearcher.java:1196)
> at 
> org.apache.solr.search.SolrIndexSearcher.getPositiveDocSet(SolrIndexSearcher.java:836)
> at 
> org.apache.solr.search.SolrIndexSearcher.getProcessedFilter(SolrIndexSearcher.java:1044)
> at 
> org.apache.solr.search.SolrIndexSearcher.getDocListNC(SolrIndexSearcher.java:1563)
> at 
> org.apache.solr.search.SolrIndexSearcher.getDocListC(SolrIndexSearcher.java:1439)
> at org.apache.solr.search.SolrIndexSearcher.search(SolrIndexSearcher.java:586)
> at 
> org.apache.solr.handler.component.QueryComponent.doProcessUngroupedSearch(QueryComponent.java:1435)
> at 
> org.apache.solr.handler.component.QueryComponent.process(QueryComponent.java:375)
> at 
> org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:298)
> at 
> org.apache.solr.handler.ExportHandler.handleRequestBody(ExportHandler.java:37)
> at 
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:199)
> at org.apache.solr.core.SolrCore.execute(SolrCore.java:2539)
> at org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:709)
> at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:515)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:377)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:323)
> at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1634)
> at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:533)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:146)
> at org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)
> at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132)
> at 
> 

[jira] [Created] (SOLR-12624) Better validation for HashQParserPlugin

2018-08-03 Thread Varun Thacker (JIRA)
Varun Thacker created SOLR-12624:


 Summary: Better validation for HashQParserPlugin
 Key: SOLR-12624
 URL: https://issues.apache.org/jira/browse/SOLR-12624
 Project: Solr
  Issue Type: Improvement
  Security Level: Public (Default Security Level. Issues are Public)
Reporter: Varun Thacker


The syntax for the HashQParserPlugin is
{code:java}
fq={!hash workers=11 worker=4 keys=field1,field2}{code}
Today we don't make workers / worker mandatory. This means that if a user doesn't 
specify them, worker and workers default to 0 and you will get a 
java.lang.ArithmeticException: / by zero error.
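A minimal sketch of the kind of up-front check this issue is asking for, assuming the params are read off SolrParams as in the snippet above; the exact placement and error wording are illustrative, not the actual fix:
{code:java}
// Hypothetical validation sketch for HashQParserPlugin (not the committed change):
// fail fast with a 400 instead of letting later modulo arithmetic throw
// java.lang.ArithmeticException: / by zero when workers/worker are left unset.
int workers = localParams.getInt("workers", -1);
int worker = localParams.getInt("worker", -1);
if (workers < 1) {
  throw new org.apache.solr.common.SolrException(
      org.apache.solr.common.SolrException.ErrorCode.BAD_REQUEST,
      "'workers' must be a positive integer for the hash query parser");
}
if (worker < 0 || worker >= workers) {
  throw new org.apache.solr.common.SolrException(
      org.apache.solr.common.SolrException.ErrorCode.BAD_REQUEST,
      "'worker' must be in the range [0, workers) for the hash query parser");
}
{code}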



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12615) Search stream throws an NPE with partitionKeys and empty values

2018-08-03 Thread Varun Thacker (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12615?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16568849#comment-16568849
 ] 

Varun Thacker commented on SOLR-12615:
--

Patch demonstrating the problem and a solution for it.

Still a couple of nocommits which need to be addressed.

The problem only manifests if the partitionKeys field is a string field; 
numeric fields don't have this issue.
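To make the failure mode concrete, here is a hedged, self-contained sketch of the sort of null guard a fix needs around the string (docValues) partition key; the helper name and the constant fallback bucket are assumptions for illustration, not the attached patch:
{code:java}
import org.apache.lucene.util.BytesRef;
import org.apache.solr.common.util.Hash;

public class PartitionKeyHashSketch {
  // Hash the partition-key bytes when a value exists; send docs with a missing or
  // empty value to a constant bucket instead of dereferencing a null BytesRef,
  // which is the NPE seen in HashQParserPlugin$HashQuery above.
  static int hashOrDefault(BytesRef value) {
    if (value == null || value.length == 0) {
      return 0;
    }
    return Hash.murmurhash3_x86_32(value.bytes, value.offset, value.length, 0);
  }
}
{code}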

> Search stream throws an NPE with partitionKeys and empty values
> ---
>
> Key: SOLR-12615
> URL: https://issues.apache.org/jira/browse/SOLR-12615
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 7.4
>Reporter: Varun Thacker
>Priority: Major
> Attachments: SOLR-12615.patch
>
>
> If I index documents where the partitionKeys field is missing from a few 
> docs, then the stream expression throws an NPE
> docs:
> {code:java}
> [
> {"id" : "1", "search_term_s" : "query1"},
> {"id" : "2", "search_term_s" : "query1"},
> {"id" : "3"},
> {"id" : "4"}
> ]{code}
>  
> query
> {code:java}
> search(test_empty, q="*:*", fl="search_term_s,id" , sort="search_term_s 
> desc", qt="/export", partitionKeys="search_term_s"){code}
>  
> logs
> {code:java}
> INFO - 2018-08-03 01:44:36.156; [c:test_empty s:shard1 r:core_node2 
> x:test_empty_shard1_replica_n1] org.apache.solr.core.SolrCore.Request; 
> [test_empty_shard1_replica_n1] webapp=/solr path=/stream 
> params={expr=search(test_empty,+q%3D"*:*",+fl%3D"search_term_s,id"+,+sort%3D"search_term_s+desc",+qt%3D"/export",+partitionKeys%3D"search_term_s")&_=1533260573672}
>  status=0 QTime=11
> INFO - 2018-08-03 01:44:36.160; [ ] 
> org.apache.solr.common.cloud.ConnectionManager; zkClient has connected
> INFO - 2018-08-03 01:44:36.162; [c:test_empty s:shard1 r:core_node2 
> x:test_empty_shard1_replica_n1] org.apache.solr.common.cloud.ZkStateReader; 
> Updated live nodes from ZooKeeper... (0) -> (1)
> INFO - 2018-08-03 01:44:36.164; [c:test_empty s:shard1 r:core_node2 
> x:test_empty_shard1_replica_n1] 
> org.apache.solr.client.solrj.impl.ZkClientClusterStateProvider; Cluster at 
> localhost:9888 ready
> INFO - 2018-08-03 01:44:36.207; [c:test_empty s:shard1 r:core_node2 
> x:test_empty_shard1_replica_n1] org.apache.solr.core.SolrCore.Request; 
> [test_empty_shard1_replica_n1] webapp=/solr path=/export 
> params={q=*:*=false=off=search_term_s,id=search_term_s+desc=search_term_s={!hash+workers%3D1+worker%3D0}=json=2.2}
>  status=500 QTime=36
> ERROR - 2018-08-03 01:44:36.209; [c:test_empty s:shard1 r:core_node2 
> x:test_empty_shard1_replica_n1] org.apache.solr.servlet.HttpSolrCall; 
> null:java.io.IOException: java.lang.RuntimeException: 
> java.lang.NullPointerException
> at 
> org.apache.solr.search.HashQParserPlugin$HashQuery.createWeight(HashQParserPlugin.java:130)
> at org.apache.lucene.search.IndexSearcher.createWeight(IndexSearcher.java:743)
> at org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:463)
> at org.apache.solr.search.DocSetUtil.createDocSetGeneric(DocSetUtil.java:151)
> at org.apache.solr.search.DocSetUtil.createDocSet(DocSetUtil.java:140)
> at 
> org.apache.solr.search.SolrIndexSearcher.getDocSetNC(SolrIndexSearcher.java:1196)
> at 
> org.apache.solr.search.SolrIndexSearcher.getPositiveDocSet(SolrIndexSearcher.java:836)
> at 
> org.apache.solr.search.SolrIndexSearcher.getProcessedFilter(SolrIndexSearcher.java:1044)
> at 
> org.apache.solr.search.SolrIndexSearcher.getDocListNC(SolrIndexSearcher.java:1563)
> at 
> org.apache.solr.search.SolrIndexSearcher.getDocListC(SolrIndexSearcher.java:1439)
> at org.apache.solr.search.SolrIndexSearcher.search(SolrIndexSearcher.java:586)
> at 
> org.apache.solr.handler.component.QueryComponent.doProcessUngroupedSearch(QueryComponent.java:1435)
> at 
> org.apache.solr.handler.component.QueryComponent.process(QueryComponent.java:375)
> at 
> org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:298)
> at 
> org.apache.solr.handler.ExportHandler.handleRequestBody(ExportHandler.java:37)
> at 
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:199)
> at org.apache.solr.core.SolrCore.execute(SolrCore.java:2539)
> at org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:709)
> at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:515)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:377)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:323)
> at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1634)
> at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:533)
> at 
> 

[jira] [Updated] (SOLR-12615) Search stream throws an NPE with partitionKeys and empty values

2018-08-03 Thread Varun Thacker (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-12615?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Thacker updated SOLR-12615:
-
Attachment: SOLR-12615.patch

> Search stream throws an NPE with partitionKeys and empty values
> ---
>
> Key: SOLR-12615
> URL: https://issues.apache.org/jira/browse/SOLR-12615
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 7.4
>Reporter: Varun Thacker
>Priority: Major
> Attachments: SOLR-12615.patch
>
>
> If I index documents where the partitionKeys field is missing from a few 
> docs, then the stream expression throws an NPE
> docs:
> {code:java}
> [
> {"id" : "1", "search_term_s" : "query1"},
> {"id" : "2", "search_term_s" : "query1"},
> {"id" : "3"},
> {"id" : "4"}
> ]{code}
>  
> query
> {code:java}
> search(test_empty, q="*:*", fl="search_term_s,id" , sort="search_term_s 
> desc", qt="/export", partitionKeys="search_term_s"){code}
>  
> logs
> {code:java}
> INFO - 2018-08-03 01:44:36.156; [c:test_empty s:shard1 r:core_node2 
> x:test_empty_shard1_replica_n1] org.apache.solr.core.SolrCore.Request; 
> [test_empty_shard1_replica_n1] webapp=/solr path=/stream 
> params={expr=search(test_empty,+q%3D"*:*",+fl%3D"search_term_s,id"+,+sort%3D"search_term_s+desc",+qt%3D"/export",+partitionKeys%3D"search_term_s")&_=1533260573672}
>  status=0 QTime=11
> INFO - 2018-08-03 01:44:36.160; [ ] 
> org.apache.solr.common.cloud.ConnectionManager; zkClient has connected
> INFO - 2018-08-03 01:44:36.162; [c:test_empty s:shard1 r:core_node2 
> x:test_empty_shard1_replica_n1] org.apache.solr.common.cloud.ZkStateReader; 
> Updated live nodes from ZooKeeper... (0) -> (1)
> INFO - 2018-08-03 01:44:36.164; [c:test_empty s:shard1 r:core_node2 
> x:test_empty_shard1_replica_n1] 
> org.apache.solr.client.solrj.impl.ZkClientClusterStateProvider; Cluster at 
> localhost:9888 ready
> INFO - 2018-08-03 01:44:36.207; [c:test_empty s:shard1 r:core_node2 
> x:test_empty_shard1_replica_n1] org.apache.solr.core.SolrCore.Request; 
> [test_empty_shard1_replica_n1] webapp=/solr path=/export 
> params={q=*:*=false=off=search_term_s,id=search_term_s+desc=search_term_s={!hash+workers%3D1+worker%3D0}=json=2.2}
>  status=500 QTime=36
> ERROR - 2018-08-03 01:44:36.209; [c:test_empty s:shard1 r:core_node2 
> x:test_empty_shard1_replica_n1] org.apache.solr.servlet.HttpSolrCall; 
> null:java.io.IOException: java.lang.RuntimeException: 
> java.lang.NullPointerException
> at 
> org.apache.solr.search.HashQParserPlugin$HashQuery.createWeight(HashQParserPlugin.java:130)
> at org.apache.lucene.search.IndexSearcher.createWeight(IndexSearcher.java:743)
> at org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:463)
> at org.apache.solr.search.DocSetUtil.createDocSetGeneric(DocSetUtil.java:151)
> at org.apache.solr.search.DocSetUtil.createDocSet(DocSetUtil.java:140)
> at 
> org.apache.solr.search.SolrIndexSearcher.getDocSetNC(SolrIndexSearcher.java:1196)
> at 
> org.apache.solr.search.SolrIndexSearcher.getPositiveDocSet(SolrIndexSearcher.java:836)
> at 
> org.apache.solr.search.SolrIndexSearcher.getProcessedFilter(SolrIndexSearcher.java:1044)
> at 
> org.apache.solr.search.SolrIndexSearcher.getDocListNC(SolrIndexSearcher.java:1563)
> at 
> org.apache.solr.search.SolrIndexSearcher.getDocListC(SolrIndexSearcher.java:1439)
> at org.apache.solr.search.SolrIndexSearcher.search(SolrIndexSearcher.java:586)
> at 
> org.apache.solr.handler.component.QueryComponent.doProcessUngroupedSearch(QueryComponent.java:1435)
> at 
> org.apache.solr.handler.component.QueryComponent.process(QueryComponent.java:375)
> at 
> org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:298)
> at 
> org.apache.solr.handler.ExportHandler.handleRequestBody(ExportHandler.java:37)
> at 
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:199)
> at org.apache.solr.core.SolrCore.execute(SolrCore.java:2539)
> at org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:709)
> at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:515)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:377)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:323)
> at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1634)
> at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:533)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:146)
> at org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)
> at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132)
> at 
> 

[JENKINS] Lucene-Solr-BadApples-NightlyTests-master - Build # 22 - Still Failing

2018-08-03 Thread Apache Jenkins Server
Build: 
https://builds.apache.org/job/Lucene-Solr-BadApples-NightlyTests-master/22/

17 tests failed.
FAILED:  org.apache.lucene.index.TestIndexWriterReader.testDuringAddIndexes

Error Message:


Stack Trace:
java.lang.AssertionError
at 
__randomizedtesting.SeedInfo.seed([195B60C59D5CF096:EB820ED6FAF5E600]:0)
at org.junit.Assert.fail(Assert.java:92)
at org.junit.Assert.assertTrue(Assert.java:43)
at org.junit.Assert.assertTrue(Assert.java:54)
at 
org.apache.lucene.index.TestIndexWriterReader.testDuringAddIndexes(TestIndexWriterReader.java:772)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:748)


FAILED:  org.apache.lucene.search.TestApproximationSearchEquivalence.testReqOpt

Error Message:
target = 4043 < docID = 4044

Stack Trace:
java.lang.AssertionError: target = 4043 < docID = 4044
at 
__randomizedtesting.SeedInfo.seed([195B60C59D5CF096:7DBAE508B63B8854]:0)
at 
org.apache.lucene.search.AssertingScorer.advanceShallow(AssertingScorer.java:81)
at 
org.apache.lucene.search.ReqOptSumScorer.advanceShallow(ReqOptSumScorer.java:215)
at 
org.apache.lucene.search.AssertingScorer.advanceShallow(AssertingScorer.java:82)
at 
org.apache.lucene.search.BlockMaxConjunctionScorer.advanceShallow(BlockMaxConjunctionScorer.java:228)
at 

[jira] [Created] (SOLR-12623) Investigate usage of the SuppressSSL annotation

2018-08-03 Thread Varun Thacker (JIRA)
Varun Thacker created SOLR-12623:


 Summary: Investigate usage of the SuppressSSL annotation
 Key: SOLR-12623
 URL: https://issues.apache.org/jira/browse/SOLR-12623
 Project: Solr
  Issue Type: Improvement
  Security Level: Public (Default Security Level. Issues are Public)
  Components: Tests
Reporter: Varun Thacker


We have quite a few tests that have an annotation like this
{code:java}
@SuppressSSL // Currently unknown why SSL does not work with this test{code}
We should investigate why some of our tests do not work with SSL enabled. My 
guess is that some of the tests just copied that annotation from existing test 
cases.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-12614) Make "Nodes" tab the default in AdminUI Cloud view

2018-08-03 Thread JIRA


 [ 
https://issues.apache.org/jira/browse/SOLR-12614?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jan Høydahl resolved SOLR-12614.

Resolution: Fixed

> Make "Nodes" tab the default in AdminUI Cloud view
> --
>
> Key: SOLR-12614
> URL: https://issues.apache.org/jira/browse/SOLR-12614
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Admin UI
>Reporter: Jan Høydahl
>Assignee: Jan Høydahl
>Priority: Major
> Fix For: master (8.0)
>
> Attachments: SOLR-12614.patch
>
>
> The Nodes tab was added in version 7.5, but "Graph" was still the default 
> tab. Make the new Nodes tab the default one when opening the "Cloud" view.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Remove SuppressCodecs Lucene3x Lucene4x?

2018-08-03 Thread Varun Thacker
Quite a few tests have this annotation :

@LuceneTestCase.SuppressCodecs({"Lucene3x",
"Lucene40","Lucene41","Lucene42","Lucene45"})

I don't think the following codecs exist anymore in the codebase, so should we
just remove the annotation from tests that try to suppress these
codecs?

   - Direct
   - Lucene3x
   - Lucene40
   - Lucene41
   - Lucene42
   - Lucene45
   - Appending


[jira] [Commented] (SOLR-12614) Make "Nodes" tab the default in AdminUI Cloud view

2018-08-03 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12614?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16568794#comment-16568794
 ] 

ASF subversion and git services commented on SOLR-12614:


Commit f8db5d0afd34ebea4ae414a2eb148f926830be34 in lucene-solr's branch 
refs/heads/master from [~janhoy]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=f8db5d0 ]

SOLR-12614: Make "Nodes" view the default in AdminUI "Cloud" tab


> Make "Nodes" tab the default in AdminUI Cloud view
> --
>
> Key: SOLR-12614
> URL: https://issues.apache.org/jira/browse/SOLR-12614
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Admin UI
>Reporter: Jan Høydahl
>Assignee: Jan Høydahl
>Priority: Major
> Fix For: master (8.0)
>
> Attachments: SOLR-12614.patch
>
>
> The Nodes tab was added in version 7.5, but "Graph" was still the default 
> tab. Make the new Nodes tab the default one when opening the "Cloud" view.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8446) UnifiedHighlighter DefaultPassageFormatter should merge overlapping offsets

2018-08-03 Thread David Smiley (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8446?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16568754#comment-16568754
 ] 

David Smiley commented on LUCENE-8446:
--

I pulled these two changes out of LUCENE-8286 so they don't distract and in the 
interest of making some piecemeal progress on that issue.

Admittedly there is no test in this patch, but existing tests do not break, and 
the current behavior for this edge case was not tested (i.e. how to handle this 
scenario wasn't necessarily deliberate).  Tests in LUCENE-8286 will depend on 
this behavior so it'll be tested that way.

{noformat}
Old :This is the title field.
New :This is the title field.
{noformat}
(example taken from test in LUCENE-8286)
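For readers following along, a hedged, self-contained sketch of the merge step described above; the class and method names are invented for illustration and this is not the patch itself:
{code:java}
import java.util.ArrayList;
import java.util.List;

public class OverlapMergeSketch {
  // Merge sorted [start, end) match offsets so that an overlapping match extends
  // the currently open highlight instead of closing a tag and immediately
  // reopening it.
  static List<int[]> mergeOverlapping(List<int[]> matches) {
    List<int[]> merged = new ArrayList<>();
    for (int[] m : matches) { // matches sorted by start offset, then end offset
      int[] last = merged.isEmpty() ? null : merged.get(merged.size() - 1);
      if (last != null && m[0] <= last[1]) {
        last[1] = Math.max(last[1], m[1]); // extend the open range
      } else {
        merged.add(new int[] {m[0], m[1]});
      }
    }
    return merged;
  }
}
{code}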

> UnifiedHighlighter DefaultPassageFormatter should merge overlapping offsets
> ---
>
> Key: LUCENE-8446
> URL: https://issues.apache.org/jira/browse/LUCENE-8446
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: modules/highlighter
>Reporter: David Smiley
>Assignee: David Smiley
>Priority: Minor
> Attachments: LUCENE-8446.patch
>
>
> The UnifiedHighlighter's DefaultPassageFormatter (mostly unchanged from the 
> old PostingsHighlighter) will format overlapping matches by closing a tag and 
> immediately opening a tag.  I think this is a bit ugly structurally and it 
> ought to continue the tag as if the matches were merged.  This is extremely 
> rare in practice today since a match is always a word, and thus we'd only see 
> this behavior if multiple words at the same position of different offsets are 
> highlighted.  The advent of matches representing phrases will increase the 
> probability of this, and indeed was discovered while working on LUCENE-8286.  
> Additionally, and related, OffsetsEnums should internally be ordered by the 
> end offset if the start offset is the same.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8207) Add "Nodes" view to the Admin UI "Cloud" tab

2018-08-03 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-8207?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16568751#comment-16568751
 ] 

ASF subversion and git services commented on SOLR-8207:
---

Commit f97a28017e952472e482951a3274e70a344cbf39 in lucene-solr's branch 
refs/heads/branch_7x from [~janhoy]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=f97a280 ]

SOLR-8207: Add "Nodes" view to the Admin UI "Cloud" tab, listing nodes and key 
metrics

(cherry picked from commit 17a02c1)


> Add "Nodes" view to the Admin UI "Cloud" tab
> 
>
> Key: SOLR-8207
> URL: https://issues.apache.org/jira/browse/SOLR-8207
> Project: Solr
>  Issue Type: New Feature
>  Components: Admin UI
>Reporter: Upayavira
>Assignee: Jan Høydahl
>Priority: Major
> Fix For: master (8.0), 7.5
>
> Attachments: SOLR-8207.patch, node-compact.png, node-details.png, 
> node-hostcolumn.png, node-toggle-row-numdocs.png, nodes-tab-real.png, 
> nodes-tab.png
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> The various sub-tabs of the "Cloud tab" were designed before anyone was 
> making real use of SolrCloud, and when we didn't really know the use-cases we 
> would need to support. I would argue that, whilst they are pretty (and 
> clever) they aren't really fit for purpose (with the exception of tree view).
> Issues:
> * Radial view doesn't scale beyond a small number of nodes/collections
> * Paging on the graph view is based on collections - so a collection with 
> many replicas won't be subject to pagination
> * The Dump feature is kinda redundant and should be removed
> * There is now a major overlap in functionality with the new Collections tab
> What I'd propose is that we:
>  * promote the tree tab to top level
>  * remove the graph views and the dump tab
>  * add a new Nodes tab
> This nodes tab would complement the collections tab - showing nodes, and 
> their associated replicas/collections. From this view, it would be possible 
> to add/remove replicas and to see the status of nodes. It would also be 
> possible to filter nodes by status: "show me only up nodes", "show me nodes 
> that are in trouble", "show me nodes that have leaders on them", etc.
> Presumably, if we have APIs to support it, we might have a "decommission 
> node" option, that would ensure that no replicas on this node are leaders, 
> and then remove all replicas from the node, ready for it to be removed from 
> the cluster.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-8207) Add "Nodes" view to the Admin UI "Cloud" tab

2018-08-03 Thread JIRA


 [ 
https://issues.apache.org/jira/browse/SOLR-8207?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jan Høydahl updated SOLR-8207:
--
Summary: Add "Nodes" view to the Admin UI "Cloud" tab  (was: Modernise 
cloud tab on Admin UI)

> Add "Nodes" view to the Admin UI "Cloud" tab
> 
>
> Key: SOLR-8207
> URL: https://issues.apache.org/jira/browse/SOLR-8207
> Project: Solr
>  Issue Type: New Feature
>  Components: Admin UI
>Reporter: Upayavira
>Assignee: Jan Høydahl
>Priority: Major
> Fix For: master (8.0), 7.5
>
> Attachments: SOLR-8207.patch, node-compact.png, node-details.png, 
> node-hostcolumn.png, node-toggle-row-numdocs.png, nodes-tab-real.png, 
> nodes-tab.png
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> The various sub-tabs of the "Cloud tab" were designed before anyone was 
> making real use of SolrCloud, and when we didn't really know the use-cases we 
> would need to support. I would argue that, whilst they are pretty (and 
> clever) they aren't really fit for purpose (with the exception of tree view).
> Issues:
> * Radial view doesn't scale beyond a small number of nodes/collections
> * Paging on the graph view is based on collections - so a collection with 
> many replicas won't be subject to pagination
> * The Dump feature is kinda redundant and should be removed
> * There is now a major overlap in functionality with the new Collections tab
> What I'd propose is that we:
>  * promote the tree tab to top level
>  * remove the graph views and the dump tab
>  * add a new Nodes tab
> This nodes tab would complement the collections tab - showing nodes, and 
> their associated replicas/collections. From this view, it would be possible 
> to add/remove replicas and to see the status of nodes. It would also be 
> possible to filter nodes by status: "show me only up nodes", "show me nodes 
> that are in trouble", "show me nodes that have leaders on them", etc.
> Presumably, if we have APIs to support it, we might have a "decommission 
> node" option, that would ensure that no replicas on this node are leaders, 
> and then remove all replicas from the node, ready for it to be removed from 
> the cluster.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-8446) UnifiedHighlighter DefaultPassageFormatter should merge overlapping offsets

2018-08-03 Thread David Smiley (JIRA)


 [ 
https://issues.apache.org/jira/browse/LUCENE-8446?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Smiley updated LUCENE-8446:
-
Attachment: LUCENE-8446.patch

> UnifiedHighlighter DefaultPassageFormatter should merge overlapping offsets
> ---
>
> Key: LUCENE-8446
> URL: https://issues.apache.org/jira/browse/LUCENE-8446
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: modules/highlighter
>Reporter: David Smiley
>Assignee: David Smiley
>Priority: Minor
> Attachments: LUCENE-8446.patch
>
>
> The UnifiedHighlighter's DefaultPassageFormatter (mostly unchanged from the 
> old PostingsHighlighter) will format overlapping matches by closing a tag and 
> immediately opening a tag.  I think this is a bit ugly structurally and it 
> ought to continue the tag as if the matches were merged.  This is extremely 
> rare in practice today since a match is always a word, and thus we'd only see 
> this behavior if multiple words at the same position of different offsets are 
> highlighted.  The advent of matches representing phrases will increase the 
> probability of this, and indeed was discovered while working on LUCENE-8286.  
> Additionally, and related, OffsetsEnums should internally be ordered by the 
> end offset if the start offset is the same.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (LUCENE-8446) UnifiedHighlighter DefaultPassageFormatter should merge overlapping offsets

2018-08-03 Thread David Smiley (JIRA)
David Smiley created LUCENE-8446:


 Summary: UnifiedHighlighter DefaultPassageFormatter should merge 
overlapping offsets
 Key: LUCENE-8446
 URL: https://issues.apache.org/jira/browse/LUCENE-8446
 Project: Lucene - Core
  Issue Type: Improvement
  Components: modules/highlighter
Reporter: David Smiley
Assignee: David Smiley


The UnifiedHighlighter's DefaultPassageFormatter (mostly unchanged from the old 
PostingsHighlighter) will format overlapping matches by closing a tag and 
immediately opening a tag.  I think this is a bit ugly structurally and it 
ought to continue the tag as if the matches were merged.  This is extremely 
rare in practice today since a match is always a word, and thus we'd only see 
this behavior if multiple words at the same position of different offsets are 
highlighted.  The advent of matches representing phrases will increase the 
probability of this, and indeed was discovered while working on LUCENE-8286.  
Additionally, and related, OffsetsEnums should internally be ordered by the end 
offset if the start offset is the same.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12617) Remove Commons BeanUtils as a dependency

2018-08-03 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12617?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16568734#comment-16568734
 ] 

ASF subversion and git services commented on SOLR-12617:


Commit 79feed97088c736ecd546f8b59c8425c659579af in lucene-solr's branch 
refs/heads/branch_7x from [~varunthacker]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=79feed9 ]

SOLR-12617: remove beanutils license and notice files

(cherry picked from commit 0b59b0e)


> Remove Commons BeanUtils as a dependency
> 
>
> Key: SOLR-12617
> URL: https://issues.apache.org/jira/browse/SOLR-12617
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Varun Thacker
>Priority: Major
> Fix For: master (8.0), 7.5
>
> Attachments: SOLR-12617.patch
>
>
> The BeanUtils library is a dependency in the velocity contrib module.
> It is a compile time dependency but the velocity code that Solr uses doesn't 
> leverage any of this.
> After removing the dependency Solr compiles just fine and the browse handler 
> also loads up correctly. 
> While chatting to [~ehatcher] offline he confirmed that the tests also pass 
> without this dependency.
> The main motivation behind this is a long standing CVE against bean-utils 
> 1.8.3 ( 
> [https://nvd.nist.gov/vuln/detail/CVE-2014-0114#vulnCurrentDescriptionTitle] 
> ) which to my knowledge cannot be leveraged from how we use it in Solr. But 
> security scans still pick it up so if it's not being used we should simply 
> remove it.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12617) Remove Commons BeanUtils as a dependency

2018-08-03 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12617?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16568733#comment-16568733
 ] 

ASF subversion and git services commented on SOLR-12617:


Commit 0b59b0ed1da4919a7ccd87dd2cbac1148ea64ff9 in lucene-solr's branch 
refs/heads/master from [~varunthacker]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=0b59b0e ]

SOLR-12617: remove beanutils license and notice files


> Remove Commons BeanUtils as a dependency
> 
>
> Key: SOLR-12617
> URL: https://issues.apache.org/jira/browse/SOLR-12617
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Varun Thacker
>Priority: Major
> Fix For: master (8.0), 7.5
>
> Attachments: SOLR-12617.patch
>
>
> The BeanUtils library is a dependency in the velocity contrib module.
> It is a compile time dependency but the velocity code that Solr uses doesn't 
> leverage any of this.
> After removing the dependency Solr compiles just fine and the browse handler 
> also loads up correctly. 
> While chatting to [~ehatcher] offline he confirmed that the tests also pass 
> without this dependency.
> The main motivation behind this is a long standing CVE against bean-utils 
> 1.8.3 ( 
> [https://nvd.nist.gov/vuln/detail/CVE-2014-0114#vulnCurrentDescriptionTitle] 
> ) which to my knowledge cannot be leveraged from how we use it in Solr. But 
> security scans still pick it up so if it's not being used we should simply 
> remove it.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12617) Remove Commons BeanUtils as a dependency

2018-08-03 Thread Varun Thacker (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12617?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16568726#comment-16568726
 ] 

Varun Thacker commented on SOLR-12617:
--

It should. I'll commit a fix right away. I thought {{ant jar-checksums}} would 
have removed the necessary files. Maybe it was the wrong ant target. 

> Remove Commons BeanUtils as a dependency
> 
>
> Key: SOLR-12617
> URL: https://issues.apache.org/jira/browse/SOLR-12617
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Varun Thacker
>Priority: Major
> Fix For: master (8.0), 7.5
>
> Attachments: SOLR-12617.patch
>
>
> The BeanUtils library is a dependency in the velocity contrib module.
> It is a compile time dependency but the velocity code that Solr uses doesn't 
> leverage any of this.
> After removing the dependency Solr compiles just fine and the browse handler 
> also loads up correctly. 
> While chatting to [~ehatcher] offline he confirmed that the tests also pass 
> without this dependency.
> The main motivation behind this is a long standing CVE against bean-utils 
> 1.8.3 ( 
> [https://nvd.nist.gov/vuln/detail/CVE-2014-0114#vulnCurrentDescriptionTitle] 
> ) which to my knowledge cannot be leveraged from how we use it in Solr. But 
> security scans still pick it up so if it's not being used we should simply 
> remove it.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: Solr issues in status "IN PROGRESS"

2018-08-03 Thread Jan Høydahl
Thanks Steve!

--
Jan Høydahl, search solution architect
Cominvent AS - www.cominvent.com

> 3. aug. 2018 kl. 20:46 skrev Steve Rowe :
> 
> At Infra’s request, I created a JIRA asking for them to transition the 
> affected JIRAs to “Open” status: 
> https://issues.apache.org/jira/browse/INFRA-16874
> 
> --
> Steve
> www.lucidworks.com
> 
>> On Aug 3, 2018, at 2:33 PM, Steve Rowe  wrote:
>> 
>> As I wrote on the thread “[DISCUSS] Request for review of proposed 
>> LUCENE/SOLR JIRA workflow changes” 
>> https://lists.apache.org/thread.html/4a240fb5fe4b0d2c851eba329e0b17d10e686913923f2d1579fc3dd2@%3Cdev.lucene.apache.org%3E
>>  :
>> 
>>> 4. The “Start Progress”/“Stop Progress”/“In Progress” aspects of the 
>>> workflow have been removed, because if they remain, JIRA creates a 
>>> “Workflow” menu and puts the “Attach Patch” button under it, which kind of 
>>> defeats its purpose: an obvious way to submit contributions. I asked Gavin 
>>> to remove the “Progress” related aspects of the workflow because I don’t 
>>> think they’re being used except on a limited ad-hoc basis, not part of a 
>>> conventional workflow.
>> 
>> So when this happened, those issues in “In Progress” status were apparently 
>> not transitioned properly.  I’ll go ask about it on hipchat.
>> 
>> --
>> Steve
>> www.lucidworks.com
>> 
>>> On Aug 3, 2018, at 2:13 PM, Erick Erickson  wrote:
>>> 
>>> I think I had one of these, since it was only a single JIRA I deleted
>>> the old one and created a new one. Which was kinda awful but since
>>> there was only one
>>> 
>>> Mostly saying that Jan isn't the only one who had something like this 
>>> happen.
>>> 
>>> Erick
>>> 
>>> On Fri, Aug 3, 2018 at 9:08 AM, Cassandra Targett  
>>> wrote:
 I agree that the problem is the workflow. Only the Jira system admins
 (Infra) can modify a workflow, so I think an issue for them to transition
 the issues and remove that possible status is a good idea. Or, if we keep
 the In Progress state, we need to make sure we can transition issues in 
 and
 out of it.
 
 On Fri, Aug 3, 2018 at 10:07 AM Jan Høydahl  wrote:
> 
> The error is "NOTE: You do not have permission to perform a bulk
> transition on the selected 7 issues."
> A little googl'ing shows that this is because I do not have permission to
> transition one of them either.
> And that is probably because our workflow is broken - it contains a
> dangling "IN PROGRESS" state, but
> it is not connected with any arrows to other states, so it is impossible
> to get out of it.
> And I cannot change (temporarily) the workflow either, it asks me to
> contact Jira admin.
> 
> This makes me think whether we perhaps need to ask Infra to fix our
> workflow and/or transition all IN PROGRESS issues for us.
> 
> --
> Jan Høydahl, search solution architect
> Cominvent AS - www.cominvent.com
> 
> 3. aug. 2018 kl. 15:42 skrev Cassandra Targett :
> 
> Are you sure you don't have the permissions? The user "janhoy" has
> Administrator-level permissions for the SOLR project, and the 
> Administrator
> role has the permissions to transition issues, so you should be able to.
> 
> If you can't make that change it's possible you're trying to transition to
> a status that is not allowed according to the workflow.
> 
> Cassandra
> 
> On Fri, Aug 3, 2018 at 6:39 AM Jan Høydahl  wrote:
>> 
>> Hi,
>> 
>> I have some issues in status "IN PROGRESS" which are impossible to
>> transistion to any other state after the recent changes. The resolve 
>> button
>> is not there and not in menus either. So I believe these are trapped in
>> no-mans-land… There are a total of 64 issues with this status.
>> 
>> I guess it makes sense to bulk transition all of these to state OPEN. But
>> I don't seem to have the JIRA karma to bulk transition issues either. Can
>> some other committer make an attempt?
>> 
>> --
>> Jan Høydahl, search solution architect
>> Cominvent AS - www.cominvent.com
>> 
>> 
>> -
>> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
>> For additional commands, e-mail: dev-h...@lucene.apache.org
>> 
> 
 
>>> 
>>> -
>>> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
>>> For additional commands, e-mail: dev-h...@lucene.apache.org
>>> 
>> 
> 
> 
> -
> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> For additional commands, e-mail: dev-h...@lucene.apache.org
> 



[jira] [Commented] (SOLR-12617) Remove Commons BeanUtils as a dependency

2018-08-03 Thread David Smiley (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12617?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16568723#comment-16568723
 ] 

David Smiley commented on SOLR-12617:
-

Shouldn't the license & notice file be removed too?

> Remove Commons BeanUtils as a dependency
> 
>
> Key: SOLR-12617
> URL: https://issues.apache.org/jira/browse/SOLR-12617
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Varun Thacker
>Priority: Major
> Fix For: master (8.0), 7.5
>
> Attachments: SOLR-12617.patch
>
>
> The BeanUtils library is a dependency in the velocity contrib module.
> It is a compile time dependency but the velocity code that Solr uses doesn't 
> leverage any of this.
> After removing the dependency Solr compiles just fine and the browse handler 
> also loads up correctly. 
> While chatting to [~ehatcher] offline he confirmed that the tests also pass 
> without this dependency.
> The main motivation behind this is a long standing CVE against bean-utils 
> 1.8.3 ( 
> [https://nvd.nist.gov/vuln/detail/CVE-2014-0114#vulnCurrentDescriptionTitle] 
> ) which to my knowledge cannot be leveraged from how we use it in Solr. But 
> security scans still pick it up so if it's not being used we should simply 
> remove it.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-8286) UnifiedHighlighter should support the new Weight.matches API for better match accuracy

2018-08-03 Thread David Smiley (JIRA)


 [ 
https://issues.apache.org/jira/browse/LUCENE-8286?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Smiley updated LUCENE-8286:
-
Attachment: (was: LUCENE-8286.patch)

> UnifiedHighlighter should support the new Weight.matches API for better match 
> accuracy
> --
>
> Key: LUCENE-8286
> URL: https://issues.apache.org/jira/browse/LUCENE-8286
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: modules/highlighter
>Reporter: David Smiley
>Priority: Major
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> The new Weight.matches() API should allow the UnifiedHighlighter to more 
> accurately highlight some BooleanQuery patterns correctly -- see LUCENE-7903.
> In addition, this API should make the job of highlighting easier, reducing 
> the LOC and related complexities, especially the UH's PhraseHelper.  Note: 
> reducing/removing PhraseHelper is not a near-term goal since Weight.matches 
> is experimental and incomplete, and perhaps we'll discover some gaps in 
> flexibility/functionality.
> This issue should introduce a new UnifiedHighlighter.HighlightFlag enum 
> option for this method of highlighting.   Perhaps call it {{WEIGHT_MATCHES}}? 
>  Longer term it could go away and it'll be implied if you specify enum values 
> for PHRASES & MULTI_TERM_QUERY?



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-12622) Show a ref-guide example to configure SolrSlf4jReporter

2018-08-03 Thread Varun Thacker (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-12622?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Thacker updated SOLR-12622:
-
Attachment: SOLR-12622.patch

> Show a ref-guide example to configure SolrSlf4jReporter
> ---
>
> Key: SOLR-12622
> URL: https://issues.apache.org/jira/browse/SOLR-12622
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Varun Thacker
>Priority: Major
> Attachments: SOLR-12622.patch, SOLR-12622.patch
>
>
> To configure SolrSlf4jReporter we must configure a logger in the log4j2.xml 
> file and then the solr.xml file must be able to reference it. 
> It won't be super obvious to users how to do this, so we should show an 
> example in the ref guide.
> With log4j.xml we could do it like 
> [https://github.com/vthacker/solr-solutions/blob/master/docs/solr_metrics_logger.md]
>  but now we've moved over to log4j2.xml.
> I have a working example for log4j2.xml and I'll post a patch in the next few 
> hours.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-12622) Show a ref-guide example to configure SolrSlf4jReporter

2018-08-03 Thread Varun Thacker (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-12622?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Thacker updated SOLR-12622:
-
Attachment: SOLR-12622.patch

> Show a ref-guide example to configure SolrSlf4jReporter
> ---
>
> Key: SOLR-12622
> URL: https://issues.apache.org/jira/browse/SOLR-12622
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Varun Thacker
>Priority: Major
> Attachments: SOLR-12622.patch
>
>
> To configure SolrSlf4jReporter we must configure a logger in the log4j2.xml 
> file and then the solr.xml file must be able to reference it. 
> It won't be super obvious to users how to do this, so we should show an 
> example in the ref guide.
> With log4j.xml we could do it like 
> [https://github.com/vthacker/solr-solutions/blob/master/docs/solr_metrics_logger.md]
>  but now we've moved over to log4j2.xml.
> I have a working example for log4j2.xml and I'll post a patch in the next few 
> hours.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (LUCENE-8415) Clean up Directory contracts (write-once, no reads-before-write-completed)

2018-08-03 Thread Dawid Weiss (JIRA)


 [ 
https://issues.apache.org/jira/browse/LUCENE-8415?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dawid Weiss resolved LUCENE-8415.
-
   Resolution: Fixed
Fix Version/s: 7.5

> Clean up Directory contracts (write-once, no reads-before-write-completed)
> --
>
> Key: LUCENE-8415
> URL: https://issues.apache.org/jira/browse/LUCENE-8415
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Dawid Weiss
>Assignee: Dawid Weiss
>Priority: Minor
> Fix For: 7.5
>
> Attachments: LUCENE-8415.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Created a PR here for early review.
> https://github.com/apache/lucene-solr/pull/424
> I changed:
> * the wording in Directory documentation to be a bit more formalized about 
> what rules a Directory should obey (and users expect).
> * modified the test framework to verify the above in mock classes.
> Currently a number of Directory implementations fail the 
> {{testReadFileOpenForWrites}} test that I added, so I'll keep working on that.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-12622) Show a ref-guide example to configure SolrSlf4jReporter

2018-08-03 Thread Varun Thacker (JIRA)
Varun Thacker created SOLR-12622:


 Summary: Show a ref-guide example to configure SolrSlf4jReporter
 Key: SOLR-12622
 URL: https://issues.apache.org/jira/browse/SOLR-12622
 Project: Solr
  Issue Type: Improvement
  Security Level: Public (Default Security Level. Issues are Public)
Reporter: Varun Thacker


To configure SolrSlf4jReporter we must configure a logger in the log4j2.xml 
file and then the solr.xml file must be able to reference it. 

It won't be super obvious to users how to do this, so we should show an 
example in the ref guide.

With log4j.xml we could do it like 
[https://github.com/vthacker/solr-solutions/blob/master/docs/solr_metrics_logger.md]
 but now we've moved over to log4j2.xml.

I have a working example for log4j2.xml and I'll post a patch in the next few 
hours.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-BadApples-Tests-7.x - Build # 120 - Unstable

2018-08-03 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-BadApples-Tests-7.x/120/

2 tests failed.
FAILED:  
org.apache.solr.cloud.autoscaling.IndexSizeTriggerTest.testSplitIntegration

Error Message:
last state: 
DocCollection(testSplitIntegration_collection//clusterstate.json/41)={   
"replicationFactor":"1",   "pullReplicas":"0",   
"router":{"name":"compositeId"},   "maxShardsPerNode":"2",   
"autoAddReplicas":"false",   "nrtReplicas":"2",   "tlogReplicas":"0",   
"autoCreated":"true",   "shards":{ "shard2":{   "replicas":{ 
"core_node3":{   
"core":"testSplitIntegration_collection_shard2_replica_n3",   
"SEARCHER.searcher.maxDoc":11,   "SEARCHER.searcher.deletedDocs":0, 
  "INDEX.sizeInBytes":15740,   "node_name":"127.0.0.1:10001_solr",  
 "state":"active",   "type":"NRT",   
"INDEX.sizeInGB":1.4659017324447632E-5,   
"SEARCHER.searcher.numDocs":11}, "core_node4":{   
"core":"testSplitIntegration_collection_shard2_replica_n4",   
"leader":"true",   "SEARCHER.searcher.maxDoc":11,   
"SEARCHER.searcher.deletedDocs":0,   "INDEX.sizeInBytes":15740, 
  "node_name":"127.0.0.1:1_solr",   "state":"active",   
"type":"NRT",   "INDEX.sizeInGB":1.4659017324447632E-5,   
"SEARCHER.searcher.numDocs":11}},   "range":"0-7fff",   
"state":"active"}, "shard1":{   "stateTimestamp":"1533322588908072500", 
  "replicas":{ "core_node1":{   
"core":"testSplitIntegration_collection_shard1_replica_n1",   
"leader":"true",   "SEARCHER.searcher.maxDoc":14,   
"SEARCHER.searcher.deletedDocs":0,   "INDEX.sizeInBytes":17240, 
  "node_name":"127.0.0.1:10001_solr",   "state":"active",   
"type":"NRT",   "INDEX.sizeInGB":1.605600118637085E-5,   
"SEARCHER.searcher.numDocs":14}, "core_node2":{   
"core":"testSplitIntegration_collection_shard1_replica_n2",   
"SEARCHER.searcher.maxDoc":14,   "SEARCHER.searcher.deletedDocs":0, 
  "INDEX.sizeInBytes":17240,   "node_name":"127.0.0.1:1_solr",  
 "state":"active",   "type":"NRT",   
"INDEX.sizeInGB":1.605600118637085E-5,   
"SEARCHER.searcher.numDocs":14}},   "range":"8000-",   
"state":"inactive"}, "shard1_1":{   "parent":"shard1",   
"stateTimestamp":"1533322588935753800",   "range":"c000-",  
 "state":"active",   "replicas":{ "core_node10":{   
"leader":"true",   
"core":"testSplitIntegration_collection_shard1_1_replica1",   
"SEARCHER.searcher.maxDoc":7,   "SEARCHER.searcher.deletedDocs":0,  
 "INDEX.sizeInBytes":13740,   "node_name":"127.0.0.1:1_solr",   
"base_url":"http://127.0.0.1:1/solr;,   "state":"active",   
"type":"NRT",   "INDEX.sizeInGB":1.2796372175216675E-5, 
  "SEARCHER.searcher.numDocs":7}, "core_node9":{   
"core":"testSplitIntegration_collection_shard1_1_replica0",   
"SEARCHER.searcher.maxDoc":7,   "SEARCHER.searcher.deletedDocs":0,  
 "INDEX.sizeInBytes":13740,   "node_name":"127.0.0.1:10001_solr",   
"base_url":"http://127.0.0.1:10001/solr;,   "state":"active",   
"type":"NRT",   "INDEX.sizeInGB":1.2796372175216675E-5, 
  "SEARCHER.searcher.numDocs":7}}}, "shard1_0":{   "parent":"shard1",   
"stateTimestamp":"1533322588935478050",   "range":"8000-bfff",  
 "state":"active",   "replicas":{ "core_node7":{   
"leader":"true",   
"core":"testSplitIntegration_collection_shard1_0_replica0",   
"SEARCHER.searcher.maxDoc":7,   "SEARCHER.searcher.deletedDocs":0,  
 "INDEX.sizeInBytes":23980,   "node_name":"127.0.0.1:10001_solr",   
"base_url":"http://127.0.0.1:10001/solr;,   "state":"active",   
"type":"NRT",   "INDEX.sizeInGB":2.2333115339279175E-5, 
  "SEARCHER.searcher.numDocs":7}, "core_node8":{   
"core":"testSplitIntegration_collection_shard1_0_replica1",   
"SEARCHER.searcher.maxDoc":7,   "SEARCHER.searcher.deletedDocs":0,  
 "INDEX.sizeInBytes":23980,   "node_name":"127.0.0.1:1_solr",   
"base_url":"http://127.0.0.1:1/solr;,   "state":"active",   
"type":"NRT",   "INDEX.sizeInGB":2.2333115339279175E-5, 
  "SEARCHER.searcher.numDocs":7}

Stack Trace:
java.util.concurrent.TimeoutException: last state: 
DocCollection(testSplitIntegration_collection//clusterstate.json/41)={
  "replicationFactor":"1",
  "pullReplicas":"0",
  "router":{"name":"compositeId"},
  "maxShardsPerNode":"2",
  "autoAddReplicas":"false",
  "nrtReplicas":"2",
  

[GitHub] lucene-solr pull request #428: SOLR-12586: deprecate joda-time and use java....

2018-08-03 Thread dsmiley
Github user dsmiley commented on a diff in the pull request:

https://github.com/apache/lucene-solr/pull/428#discussion_r207645614
  
--- Diff: 
solr/core/src/test/org/apache/solr/update/processor/ParsingFieldUpdateProcessorsTest.java
 ---
@@ -116,12 +118,12 @@ public void 
testParseDateNonUTCdefaultTimeZoneRoundTrip() throws Exception {
 ("parse-date-non-UTC-defaultTimeZone", doc(f("id", "99"), 
f("dateUTC_dt", dateStringUTC), 
f("dateNoTimeZone_dt", 
dateStringNoTimeZone)));
 assertNotNull(d);
-String pattern = "-MM-dd'T'HH:mm:ss.SSSZ";
-DateTimeFormatter dateTimeFormatterUTC = 
DateTimeFormat.forPattern(pattern);
-DateTime dateTimeUTC = 
dateTimeFormatterUTC.parseDateTime(dateStringUTC);
+String pattern = "-MM-dd'T'HH:mm:ss.SSSZ";
+DateTimeFormatter UTCForamatter = DateTimeFormatter.ofPattern(pattern, 
Locale.ROOT).withZone(ZoneId.of("America/New_York"));
+OffsetDateTime dateTimeOffsetUTC = OffsetDateTime.parse(dateStringUTC, 
UTCForamatter);
 assertTrue(d.getFieldValue("dateUTC_dt") instanceof Date);
 assertTrue(d.getFieldValue("dateNoTimeZone_dt") instanceof Date);
-assertEquals(dateTimeUTC.getMillis(), ((Date) 
d.getFieldValue("dateUTC_dt")).getTime());
+assertEquals(dateTimeOffsetUTC.toInstant().toEpochMilli(), ((Date) 
d.getFieldValue("dateUTC_dt")).getTime());
--- End diff --

I stared at this test for a while and came to the conclusion that the 
UTCFormatter-related stuff is unnecessary (those 3 lines above), and so is this 
assertEquals line.  You can remove them.  Those several lines were internal, 
confusing gyrations that didn't really test anything other than demonstrate how 
one can use java.time (formerly joda).  The real meat of this test IMO is the 
assertQ with the string constants, which is good.


---

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[GitHub] lucene-solr pull request #428: SOLR-12586: deprecate joda-time and use java....

2018-08-03 Thread dsmiley
Github user dsmiley commented on a diff in the pull request:

https://github.com/apache/lucene-solr/pull/428#discussion_r207624001
  
--- Diff: 
solr/core/src/java/org/apache/solr/update/processor/ParseDateFieldUpdateProcessorFactory.java
 ---
@@ -172,4 +179,21 @@ public void init(NamedList args) {
   return (null == type) || type instanceof DateValueFieldType;
 };
   }
+
+  private static Instant parseInstant(DateTimeFormatter formatter, String 
dateStr) {
+final TemporalAccessor temporalAccessor = formatter.parse(dateStr);
+// parsed successfully.  But is it a full instant or just to the day?
+if(temporalAccessor.isSupported(ChronoField.OFFSET_SECONDS)) {
--- End diff --

Firstly, I think INSTANT_SECONDS should be the very first check (presumed fast 
path).  

Secondly, I'm doubtful that OFFSET_SECONDS is the correct thing to check 
on.  I guess I'd have to use a debugger to have any confidence.  Hmm; looking 
at `java.time.OffsetDateTime#from` is interesting.  Maybe grab 
`temporal.query(TemporalQueries.localDate())` (and insist we find it, otherwise 
the pattern is impossibly vague?), and then attempt to get the time optionally 
(otherwise assume start of day) and the timezone optionally (default to the 
configured/default zone).  Maybe that is right.
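
To make the idea concrete, here is a rough sketch of that parsing order, 
assuming the formatter and default zone are passed in by the caller; the names 
and the exception type are illustrative, not the final patch:

```java
import java.time.Instant;
import java.time.LocalDate;
import java.time.LocalTime;
import java.time.ZoneId;
import java.time.ZoneOffset;
import java.time.format.DateTimeFormatter;
import java.time.temporal.ChronoField;
import java.time.temporal.TemporalAccessor;
import java.time.temporal.TemporalQueries;

class ParseInstantSketch {
  static Instant parseInstant(DateTimeFormatter formatter, String dateStr, ZoneId defaultZone) {
    TemporalAccessor parsed = formatter.parse(dateStr);
    // Presumed fast path: the pattern resolved to a full instant.
    if (parsed.isSupported(ChronoField.INSTANT_SECONDS)) {
      long nanos = parsed.isSupported(ChronoField.NANO_OF_SECOND)
          ? parsed.getLong(ChronoField.NANO_OF_SECOND) : 0L;
      return Instant.ofEpochSecond(parsed.getLong(ChronoField.INSTANT_SECONDS), nanos);
    }
    // Insist on at least a date; a pattern without one is impossibly vague.
    LocalDate date = parsed.query(TemporalQueries.localDate());
    if (date == null) {
      throw new IllegalArgumentException("Pattern too vague to produce an instant: " + dateStr);
    }
    // Time is optional: assume start of day if absent.
    LocalTime time = parsed.query(TemporalQueries.localTime());
    if (time == null) {
      time = LocalTime.MIDNIGHT;
    }
    // Zone is optional: fall back to the configured/default zone.
    ZoneId zone = parsed.query(TemporalQueries.zone());
    if (zone == null) {
      zone = (defaultZone != null) ? defaultZone : ZoneOffset.UTC;
    }
    return date.atTime(time).atZone(zone).toInstant();
  }
}
```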


---

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[GitHub] lucene-solr pull request #428: SOLR-12586: deprecate joda-time and use java....

2018-08-03 Thread dsmiley
Github user dsmiley commented on a diff in the pull request:

https://github.com/apache/lucene-solr/pull/428#discussion_r207641129
  
--- Diff: 
solr/core/src/test/org/apache/solr/update/processor/ParsingFieldUpdateProcessorsTest.java
 ---
@@ -50,10 +55,9 @@ public void testParseDateRoundTrip() throws Exception {
 String dateString = "2010-11-12T13:14:15.168Z";
 SolrInputDocument d = processAdd("parse-date", doc(f("id", "9"), 
f("date_dt", dateString)));
 assertNotNull(d);
-DateTimeFormatter dateTimeFormatter = ISODateTimeFormat.dateTime();
-DateTime dateTime = dateTimeFormatter.parseDateTime(dateString);
+ZonedDateTime localDateTime = ZonedDateTime.parse(dateString, 
DateTimeFormatter.ISO_DATE_TIME);
 assertTrue(d.getFieldValue("date_dt") instanceof Date);
-assertEquals(dateTime.getMillis(), ((Date) 
d.getFieldValue("date_dt")).getTime());
+
assertEquals(localDateTime.withZoneSameInstant(ZoneOffset.UTC).toInstant().toEpochMilli(),
 ((Date) d.getFieldValue("date_dt")).getTime());
--- End diff --

Simply change the left arg to `Instant.parse(dateString)` (and call 
toInstant() on the right arg instead of getTime()), and remove the couple of 
lines above that are no longer needed :-)

Part of the work involved, I think, isn't always doing a direct Joda -> 
java.time API translation; it's also recognizing when the code in question is 
needlessly verbose and making it better.  I try to take that philosophy with 
any code maintenance.  There are a lot of cases in this test suite where you 
can apply the above technique to remove needless time API machinery.
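
For illustration, the simplified assertion would take roughly this shape (a 
sketch, not the committed change; the plain AssertionError stands in for 
JUnit's assertEquals, and the field value stands in for the one already in the 
test):

```java
import java.time.Instant;
import java.util.Date;
import java.util.Objects;

class SimplifiedDateAssertionSketch {
  static void assertFieldMatches(Object fieldValue, String dateString) {
    Instant expected = Instant.parse(dateString);      // left arg: no formatter machinery needed
    Instant actual = ((Date) fieldValue).toInstant();  // right arg: toInstant() instead of getTime()
    if (!Objects.equals(expected, actual)) {
      throw new AssertionError("expected " + expected + " but was " + actual);
    }
  }

  public static void main(String[] args) {
    assertFieldMatches(Date.from(Instant.parse("2010-11-12T13:14:15.168Z")),
        "2010-11-12T13:14:15.168Z");
  }
}
```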


---

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[GitHub] lucene-solr pull request #428: SOLR-12586: deprecate joda-time and use java....

2018-08-03 Thread dsmiley
Github user dsmiley commented on a diff in the pull request:

https://github.com/apache/lucene-solr/pull/428#discussion_r207625264
  
--- Diff: 
solr/core/src/test/org/apache/solr/update/processor/AddSchemaFieldsUpdateProcessorFactoryTest.java
 ---
@@ -63,8 +63,8 @@ public void testSingleField() throws Exception {
 final String fieldName = "newfield1";
 assertNull(schema.getFieldOrNull(fieldName));
 String dateString = "2010-11-12T13:14:15.168Z";
--- End diff --

This test doesn't even care about formatting; it just wants some Date value. 
 So just use `new Date()`  :-)
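
Something like this is all that's needed (a minimal sketch under that 
assumption; variable names are just illustrative):

```java
import java.util.Date;

class AnyDateWillDo {
  public static void main(String[] args) {
    Date date = new Date();                           // any Date works; the exact format is irrelevant
    String dateString = date.toInstant().toString();  // ISO-8601 form, if the test still needs a string
    System.out.println(dateString);
  }
}
```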


---

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[GitHub] lucene-solr pull request #428: SOLR-12586: deprecate joda-time and use java....

2018-08-03 Thread dsmiley
Github user dsmiley commented on a diff in the pull request:

https://github.com/apache/lucene-solr/pull/428#discussion_r207621885
  
--- Diff: solr/core/src/java/org/apache/solr/core/ConfigSetService.java ---
@@ -212,12 +213,12 @@ public SchemaCaching(SolrResourceLoader loader, Path 
configSetBase) {
   super(loader, configSetBase);
 }
 
-public static final DateTimeFormatter cacheKeyFormatter = 
DateTimeFormat.forPattern("MMddHHmmss");
+public static final DateTimeFormatter cacheKeyFormatter = 
DateTimeFormatter.ofPattern("MMddHHmmss", Locale.ROOT);
 
 public static String cacheName(Path schemaFile) throws IOException {
   long lastModified = Files.getLastModifiedTime(schemaFile).toMillis();
   return String.format(Locale.ROOT, "%s:%s",
-schemaFile.toString(), 
cacheKeyFormatter.print(lastModified));
+schemaFile.toString(), 
Instant.ofEpochMilli(lastModified).atZone(ZoneOffset.UTC).format(cacheKeyFormatter));
--- End diff --

I think the 2nd arg should be simplified to this: 
`Instant.ofEpochMilli(lastModified).toString()`
This removes the need for a cacheKeyFormatter.  My only concern before 
mentioning this originally was that I didn't know what happens with this 
formatted info... did it need to get reparsed somewhere or stored in ZooKeeper?  
Neither of those things; it's just a portion of the cache key.  So let's keep 
this super simple.
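
In other words, roughly this (a sketch of the simplification, assuming the key 
is only ever used as an opaque cache key and never re-parsed; the method shape 
mirrors the diff above, not necessarily the final code):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.time.Instant;
import java.util.Locale;

class CacheKeySketch {
  static String cacheName(Path schemaFile) throws IOException {
    long lastModified = Files.getLastModifiedTime(schemaFile).toMillis();
    // Instant.toString() is ISO-8601 in UTC, so no DateTimeFormatter is needed.
    return String.format(Locale.ROOT, "%s:%s",
        schemaFile.toString(), Instant.ofEpochMilli(lastModified));
  }
}
```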



---

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[GitHub] lucene-solr pull request #428: SOLR-12586: deprecate joda-time and use java....

2018-08-03 Thread dsmiley
Github user dsmiley commented on a diff in the pull request:

https://github.com/apache/lucene-solr/pull/428#discussion_r207622324
  
--- Diff: 
solr/core/src/java/org/apache/solr/update/processor/ParseDateFieldUpdateProcessorFactory.java
 ---
@@ -144,15 +149,17 @@ public void init(NamedList args) {
 }
 
 Object defaultTimeZoneParam = args.remove(DEFAULT_TIME_ZONE_PARAM);
-DateTimeZone defaultTimeZone = DateTimeZone.UTC;
+ZoneId defaultTimeZone = ZoneOffset.UTC;
 if (null != defaultTimeZoneParam) {
-  defaultTimeZone = 
DateTimeZone.forID(defaultTimeZoneParam.toString());
+  defaultTimeZone = ZoneId.of(defaultTimeZoneParam.toString());
 }
 
 Collection formatsParam = args.removeConfigArgs(FORMATS_PARAM);
 if (null != formatsParam) {
   for (String value : formatsParam) {
-formats.put(value, 
DateTimeFormat.forPattern(value).withZone(defaultTimeZone).withLocale(locale));
+DateTimeFormatter formatter = new 
DateTimeFormatterBuilder().parseCaseInsensitive()
+
.appendPattern(value).toFormatter(locale).withZone(defaultTimeZone);
+formats.put(value, formatter);
--- End diff --

Please add a round-trip check so we can see if parseInstant will handle the 
format.  I think I had this in my other patch; you can see it there.  This way, 
if someone puts in an invalid pattern (e.g. something silly like only the 
month), they get notified up front that the format is invalid.
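
Roughly like this (a hedged sketch, assuming the formatter already carries its 
zone override as in the init code above; the placeholder parseInstant here is 
stricter than the processor's real helper would be, and the exception type is 
illustrative):

```java
import java.time.Instant;
import java.time.format.DateTimeFormatter;

class FormatRoundTripSketch {
  static void validateRoundTrip(DateTimeFormatter formatter) {
    try {
      // Format "now" with the configured pattern, then make sure it parses
      // back into a full Instant; a too-vague pattern (e.g. month only) fails here.
      parseInstant(formatter, formatter.format(Instant.now()));
    } catch (Exception e) {
      throw new IllegalArgumentException("Bad or unsupported date pattern", e);
    }
  }

  // Placeholder standing in for the processor's own parsing helper.
  private static Instant parseInstant(DateTimeFormatter formatter, String dateStr) {
    return Instant.from(formatter.parse(dateStr));
  }
}
```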


---

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[GitHub] lucene-solr pull request #428: SOLR-12586: deprecate joda-time and use java....

2018-08-03 Thread dsmiley
Github user dsmiley commented on a diff in the pull request:

https://github.com/apache/lucene-solr/pull/428#discussion_r207620961
  
--- Diff: solr/licenses/joda-time-NOTICE.txt ---
@@ -1,5 +0,0 @@

-=
--- End diff --

Good, but there are two other files in this directory starting with joda 
that should be removed as well.


---

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-12402) factor out a SolrDefaultStreamFactory class

2018-08-03 Thread Christine Poerschke (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-12402?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Christine Poerschke resolved SOLR-12402.

   Resolution: Fixed
Fix Version/s: 7.5
   master (8.0)

> factor out a SolrDefaultStreamFactory class
> ---
>
> Key: SOLR-12402
> URL: https://issues.apache.org/jira/browse/SOLR-12402
> Project: Solr
>  Issue Type: Task
>  Components: streaming expressions
>Reporter: Christine Poerschke
>Assignee: Christine Poerschke
>Priority: Minor
> Fix For: master (8.0), 7.5
>
> Attachments: SOLR-12402.patch
>
>
> Two motivations behind the proposed factoring out:
> * discoverability of solr/solrj Lang vs. solr/core Lucene/Solr functions
> * support for custom classes that require access to a SolrResourceLoader



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS-EA] Lucene-Solr-7.x-Linux (64bit/jdk-11-ea+24) - Build # 2477 - Unstable!

2018-08-03 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Linux/2477/
Java: 64bit/jdk-11-ea+24 -XX:+UseCompressedOops -XX:+UseSerialGC

4 tests failed.
FAILED:  org.apache.solr.cloud.cdcr.CdcrBidirectionalTest.testBiDir

Error Message:
Collection not found: cdcr-cluster2

Stack Trace:
org.apache.solr.common.SolrException: Collection not found: cdcr-cluster2
at 
__randomizedtesting.SeedInfo.seed([1145E4A05D394EA2:549E1442451702E0]:0)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:853)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:819)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:194)
at org.apache.solr.client.solrj.SolrClient.commit(SolrClient.java:484)
at org.apache.solr.client.solrj.SolrClient.commit(SolrClient.java:463)
at 
org.apache.solr.cloud.cdcr.CdcrTestsUtil.waitForClusterToSync(CdcrTestsUtil.java:108)
at 
org.apache.solr.cloud.cdcr.CdcrTestsUtil.waitForClusterToSync(CdcrTestsUtil.java:101)
at 
org.apache.solr.cloud.cdcr.CdcrBidirectionalTest.testBiDir(CdcrBidirectionalTest.java:100)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:566)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 

[jira] [Commented] (SOLR-12620) Remove the Cloud -> Graph (Radial) view

2018-08-03 Thread Christine Poerschke (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12620?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16568617#comment-16568617
 ] 

Christine Poerschke commented on SOLR-12620:


cross-referencing links:
* mailing list threads: https://markmail.org/thread/xeb2otvdh7ei2em3 and 
https://markmail.org/thread/lx4a7waftjp2ino2
* from SOLR-5405 ticket:
bq. Potentially controversial idea, could we *remove the radial cloud graph* ...


> Remove the Cloud -> Graph (Radial) view
> ---
>
> Key: SOLR-12620
> URL: https://issues.apache.org/jira/browse/SOLR-12620
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Admin UI
>Reporter: Jan Høydahl
>Assignee: Jan Høydahl
>Priority: Major
> Fix For: master (8.0)
>
> Attachments: SOLR-12620.patch
>
>
> Spinoff from SOLR-8207
> The radial view does not scale well and should be removed in 8.0



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-5405) Cloud graph view not usable by color-blind users - request small tweak

2018-08-03 Thread Christine Poerschke (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-5405?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16568616#comment-16568616
 ] 

Christine Poerschke commented on SOLR-5405:
---

bq. Potentially controversial idea, could we *remove the radial cloud graph* ...

cross-referencing links:
* mailing list threads: https://markmail.org/thread/xeb2otvdh7ei2em3 and 
https://markmail.org/thread/lx4a7waftjp2ino2
* recent ticket: SOLR-12620

> Cloud graph view not usable by color-blind users - request small tweak
> --
>
> Key: SOLR-5405
> URL: https://issues.apache.org/jira/browse/SOLR-5405
> Project: Solr
>  Issue Type: Improvement
>  Components: Admin UI
>Affects Versions: 4.5
>Reporter: Nathan Neulinger
>Assignee: Stefan Matheis (steffkes)
>Priority: Major
>  Labels: accessibility
> Attachments: 
> SOLR-5405-circle-triangleUp-triangleDown-diamond-square-dashedSquare.png, 
> SOLR-5405-graph.png, SOLR-5405-radial.png, SOLR-5405.patch, SOLR-5405.patch
>
>
> Currently, the cloud view status is impossible to see easily on the graph 
> screen if you are color-blind. (One of my coworkers is.)
> Would it be possible to put " (X)" after the IP of the node, where X is 
> [LARDFG] for the states?



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: Solr issues in status "IN PROGRESS"

2018-08-03 Thread Steve Rowe
At Infra’s request, I created a JIRA asking for them to transition the affected 
JIRAs to “Open” status: https://issues.apache.org/jira/browse/INFRA-16874

--
Steve
www.lucidworks.com

> On Aug 3, 2018, at 2:33 PM, Steve Rowe  wrote:
> 
> As I wrote on the thread “[DISCUSS] Request for review of proposed 
> LUCENE/SOLR JIRA workflow changes” 
> https://lists.apache.org/thread.html/4a240fb5fe4b0d2c851eba329e0b17d10e686913923f2d1579fc3dd2@%3Cdev.lucene.apache.org%3E
>  :
> 
>> 4. The “Start Progress”/“Stop Progress”/“In Progress” aspects of the 
>> workflow have been removed, because if they remain, JIRA creates a 
>> “Workflow” menu and puts the “Attach Patch” button under it, which kind of 
>> defeats its purpose: an obvious way to submit contributions. I asked Gavin 
>> to remove the “Progress” related aspects of the workflow because I don’t 
>> think they’re being used except on a limited ad-hoc basis, not part of a 
>> conventional workflow.
> 
> So when this happened, those issues in “In Progress” status were apparently 
> not transitioned properly.  I’ll go ask about it on hipchat.
> 
> --
> Steve
> www.lucidworks.com
> 
>> On Aug 3, 2018, at 2:13 PM, Erick Erickson  wrote:
>> 
>> I think I had one of these; since it was only a single JIRA, I deleted
>> the old one and created a new one. Which was kinda awful, but since
>> there was only one...
>> 
>> Mostly saying that Jan isn't the only one who had something like this happen.
>> 
>> Erick
>> 
>> On Fri, Aug 3, 2018 at 9:08 AM, Cassandra Targett  
>> wrote:
>>> I agree that the problem is the workflow. Only the Jira system admins
>>> (Infra) can modify a workflow, so I think an issue for them to transition
>>> the issues and remove that possible status is a good idea. Or, if we keep
>>> the In Progress state, we need to make sure we can transition issues in and
>>> out of it.
>>> 
>>> On Fri, Aug 3, 2018 at 10:07 AM Jan Høydahl  wrote:
 
 The error is "NOTE: You do not have permission to perform a bulk
 transition on the selected 7 issues."
 A little googling shows that this is because I do not have permission to
 transition one of them either.
 And that is probably because our workflow is broken - it contains a
 dangling "IN PROGRESS" state, but
 it is not connected with any arrows to other states, so it is impossible
 to get out of it.
 And I cannot change (temporarily) the workflow either; it asks me to
 contact the Jira admin.
 
 This makes me wonder whether we perhaps need to ask Infra to fix our
 workflow and/or transition all IN PROGRESS issues for us.
 
 --
 Jan Høydahl, search solution architect
 Cominvent AS - www.cominvent.com
 
 3. aug. 2018 kl. 15:42 skrev Cassandra Targett :
 
 Are you sure you don't have the permissions? The user "janhoy" has
 Administrator-level permissions for the SOLR project, and the Administrator
 role has the permissions to transition issues, so you should be able to.
 
 If you can't make that change it's possible you're trying to transition to
 a status that is not allowed according to the workflow.
 
 Cassandra
 
 On Fri, Aug 3, 2018 at 6:39 AM Jan Høydahl  wrote:
> 
> Hi,
> 
> I have some issues in status "IN PROGRESS" which are impossible to
> transition to any other state after the recent changes. The resolve button
> is not there and not in menus either. So I believe these are trapped in
> no-man's-land… There are a total of 64 issues with this status.
> 
> I guess it makes sense to bulk transition all of these to state OPEN. But
> I don't seem to have the JIRA karma to bulk transition issues either. Can
> some other committer make an attempt?
> 
> --
> Jan Høydahl, search solution architect
> Cominvent AS - www.cominvent.com
> 
> 
> -
> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> For additional commands, e-mail: dev-h...@lucene.apache.org
> 
 
>>> 
>> 
>> -
>> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
>> For additional commands, e-mail: dev-h...@lucene.apache.org
>> 
> 


-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: Solr issues in status "IN PROGRESS"

2018-08-03 Thread Steve Rowe
As I wrote on the thread “[DISCUSS] Request for review of proposed LUCENE/SOLR 
JIRA workflow changes” 
https://lists.apache.org/thread.html/4a240fb5fe4b0d2c851eba329e0b17d10e686913923f2d1579fc3dd2@%3Cdev.lucene.apache.org%3E
 :

> 4. The “Start Progress”/“Stop Progress”/“In Progress” aspects of the workflow 
> have been removed, because if they remain, JIRA creates a “Workflow” menu and 
> puts the “Attach Patch” button under it, which kind of defeats its purpose: 
> an obvious way to submit contributions. I asked Gavin to remove the 
> “Progress” related aspects of the workflow because I don’t think they’re 
> being used except on a limited ad-hoc basis, not part of a conventional 
> workflow.

So when this happened, those issues in “In Progress” status were apparently not 
transitioned properly.  I’ll go ask about it on hipchat.

--
Steve
www.lucidworks.com

> On Aug 3, 2018, at 2:13 PM, Erick Erickson  wrote:
> 
> I think I had one of these; since it was only a single JIRA, I deleted
> the old one and created a new one. Which was kinda awful, but since
> there was only one...
> 
> Mostly saying that Jan isn't the only one who had something like this happen.
> 
> Erick
> 
> On Fri, Aug 3, 2018 at 9:08 AM, Cassandra Targett  
> wrote:
>> I agree that the problem is the workflow. Only the Jira system admins
>> (Infra) can modify a workflow, so I think an issue for them to transition
>> the issues and remove that possible status is a good idea. Or, if we keep
>> the In Progress state, we need to make sure we can transition issues in and
>> out of it.
>> 
>> On Fri, Aug 3, 2018 at 10:07 AM Jan Høydahl  wrote:
>>> 
>>> The error is "NOTE: You do not have permission to perform a bulk
>>> transition on the selected 7 issues."
>>> A little googling shows that this is because I do not have permission to
>>> transition one of them either.
>>> And that is probably because our workflow is broken - it contains a
>>> dangling "IN PROGRESS" state, but
>>> it is not connected with any arrows to other states, so it is impossible
>>> to get out of it.
>>> And I cannot change (temporarily) the workflow either; it asks me to
>>> contact the Jira admin.
>>> 
>>> This makes me wonder whether we perhaps need to ask Infra to fix our
>>> workflow and/or transition all IN PROGRESS issues for us.
>>> 
>>> --
>>> Jan Høydahl, search solution architect
>>> Cominvent AS - www.cominvent.com
>>> 
>>> 3. aug. 2018 kl. 15:42 skrev Cassandra Targett :
>>> 
>>> Are you sure you don't have the permissions? The user "janhoy" has
>>> Administrator-level permissions for the SOLR project, and the Administrator
>>> role has the permissions to transition issues, so you should be able to.
>>> 
>>> If you can't make that change it's possible you're trying to transition to
>>> a status that is not allowed according to the workflow.
>>> 
>>> Cassandra
>>> 
>>> On Fri, Aug 3, 2018 at 6:39 AM Jan Høydahl  wrote:
 
 Hi,
 
 I have some issues in status "IN PROGRESS" which are impossible to
 transition to any other state after the recent changes. The resolve button
 is not there and not in menus either. So I believe these are trapped in
 no-man's-land… There are a total of 64 issues with this status.
 
 I guess it makes sense to bulk transition all of these to state OPEN. But
 I don't seem to have the JIRA karma to bulk transition issues either. Can
 some other committer make an attempt?
 
 --
 Jan Høydahl, search solution architect
 Cominvent AS - www.cominvent.com
 
 
 -
 To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
 For additional commands, e-mail: dev-h...@lucene.apache.org
 
>>> 
>> 
> 
> -
> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> For additional commands, e-mail: dev-h...@lucene.apache.org
> 


-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-12602) Reproducible failure in StreamExpressionTest

2018-08-03 Thread Erick Erickson (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-12602?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erick Erickson resolved SOLR-12602.
---
Resolution: Duplicate

> Reproducible failure in StreamExpressionTest
> 
>
> Key: SOLR-12602
> URL: https://issues.apache.org/jira/browse/SOLR-12602
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrJ, streaming expressions
>Affects Versions: master (8.0)
>Reporter: Erick Erickson
>Priority: Major
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> ant test  -Dtestcase=StreamExpressionTest 
> -Dtests.method=testSignificantTermsStream -Dtests.seed=AD03B92D45B6F7BD 
> -Dtests.multiplier=2 -Dtests.slow=true -Dtests.locale=es-EC 
> -Dtests.timezone=NST -Dtests.asserts=true -Dtests.file.encoding=US-ASCII



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: Solr issues in status "IN PROGRESS"

2018-08-03 Thread Erick Erickson
I think I had one of these; since it was only a single JIRA, I deleted
the old one and created a new one. Which was kinda awful, but since
there was only one...

Mostly saying that Jan isn't the only one who had something like this happen.

Erick

On Fri, Aug 3, 2018 at 9:08 AM, Cassandra Targett  wrote:
> I agree that the problem is the workflow. Only the Jira system admins
> (Infra) can modify a workflow, so I think an issue for them to transition
> the issues and remove that possible status is a good idea. Or, if we keep
> the In Progress state, we need to make sure we can transition issues in and
> out of it.
>
> On Fri, Aug 3, 2018 at 10:07 AM Jan Høydahl  wrote:
>>
>> The error is "NOTE: You do not have permission to perform a bulk
>> transition on the selected 7 issues."
>> A little googling shows that this is because I do not have permission to
>> transition one of them either.
>> And that is probably because our workflow is broken - it contains a
>> dangling "IN PROGRESS" state, but
>> it is not connected with any arrows to other states, so it is impossible
>> to get out of it.
>> And I cannot change (temporarily) the workflow either; it asks me to
>> contact the Jira admin.
>>
>> This makes me wonder whether we perhaps need to ask Infra to fix our
>> workflow and/or transition all IN PROGRESS issues for us.
>>
>> --
>> Jan Høydahl, search solution architect
>> Cominvent AS - www.cominvent.com
>>
>> 3. aug. 2018 kl. 15:42 skrev Cassandra Targett :
>>
>> Are you sure you don't have the permissions? The user "janhoy" has
>> Administrator-level permissions for the SOLR project, and the Administrator
>> role has the permissions to transition issues, so you should be able to.
>>
>> If you can't make that change it's possible you're trying to transition to
>> a status that is not allowed according to the workflow.
>>
>> Cassandra
>>
>> On Fri, Aug 3, 2018 at 6:39 AM Jan Høydahl  wrote:
>>>
>>> Hi,
>>>
>>> I have some issues in status "IN PROGRESS" which are impossible to
>>> transition to any other state after the recent changes. The resolve button
>>> is not there and not in menus either. So I believe these are trapped in
>>> no-man's-land… There are a total of 64 issues with this status.
>>>
>>> I guess it makes sense to bulk transition all of these to state OPEN. But
>>> I don't seem to have the JIRA karma to bulk transition issues either. Can
>>> some other committer make an attempt?
>>>
>>> --
>>> Jan Høydahl, search solution architect
>>> Cominvent AS - www.cominvent.com
>>>
>>>
>>> -
>>> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
>>> For additional commands, e-mail: dev-h...@lucene.apache.org
>>>
>>
>

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (LUCENE-8445) RandomGeoPolygonTest.testCompareBigPolygons() failure

2018-08-03 Thread Steve Rowe (JIRA)
Steve Rowe created LUCENE-8445:
--

 Summary: RandomGeoPolygonTest.testCompareBigPolygons() failure
 Key: LUCENE-8445
 URL: https://issues.apache.org/jira/browse/LUCENE-8445
 Project: Lucene - Core
  Issue Type: Bug
  Components: modules/spatial3d
Reporter: Steve Rowe


Failure from [https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/22590/], 
reproduces for me on Java8:

{noformat}
Checking out Revision 2a41cbd192451f6e69ae2e6cccb7b2e26af2 
(refs/remotes/origin/master)
[...]
   [junit4] Suite: org.apache.lucene.spatial3d.geom.RandomGeoPolygonTest
   [junit4]   2> NOTE: reproduce with: ant test  
-Dtestcase=RandomGeoPolygonTest -Dtests.method=testCompareBigPolygons 
-Dtests.seed=5444688174504C79 -Dtests.multiplier=3 -Dtests.slow=true 
-Dtests.locale=pt-LU -Dtests.timezone=Pacific/Pago_Pago -Dtests.asserts=true 
-Dtests.file.encoding=US-ASCII
   [junit4] FAILURE 0.23s J1 | RandomGeoPolygonTest.testCompareBigPolygons 
{seed=[5444688174504C79:CC6BBA71B5FC82A6]} <<<
   [junit4]> Throwable #1: java.lang.AssertionError: 
   [junit4]> Standard polygon: GeoCompositePolygon: {[GeoConvexPolygon: 
{planetmodel=PlanetModel.SPHERE, points=[[lat=1.0468214627857893E-8, 
lon=8.413079957136915E-7([X=0.6461, Y=8.413079957135923E-7, 
Z=1.0468214627857893E-8])], [lat=-0.3036468642757333, 
lon=0.5616500855257733([X=0.80765773219, Y=0.508219108660839, 
Z=-0.29900221630132817])], [lat=-0.17226782498440368, 
lon=0.8641157866087514([X=0.6397020656700857, Y=0.7492646151846353, 
Z=-0.1714170458549729])], [lat=0.591763073597, 
lon=1.0258877306398073([X=0.43020057589183536, Y=0.7097594028504629, 
Z=0.5578252903622132])], [lat=0.16341821264361944, 
lon=0.04608724380526752([X=0.9856292512291138, Y=0.04545712432110151, 
Z=0.16269182207472105])]], internalEdges={4}}, GeoConvexPolygon: 
{planetmodel=PlanetModel.SPHERE, points=[[lat=1.0468214627857893E-8, 
lon=8.413079957136915E-7([X=0.6461, Y=8.413079957135923E-7, 
Z=1.0468214627857893E-8])], [lat=0.16341821264361944, 
lon=0.04608724380526752([X=0.9856292512291138, Y=0.04545712432110151, 
Z=0.16269182207472105])], [lat=1.5452567609928165E-12, 
lon=5.5280224842135794E-12([X=1.0, Y=5.5280224842135794E-12, 
Z=1.5452567609928165E-12])]], internalEdges={0, 2}}, GeoConvexPolygon: 
{planetmodel=PlanetModel.SPHERE, points=[[lat=1.0468214627857893E-8, 
lon=8.413079957136915E-7([X=0.6461, Y=8.413079957135923E-7, 
Z=1.0468214627857893E-8])], [lat=1.5452567609928165E-12, 
lon=5.5280224842135794E-12([X=1.0, Y=5.5280224842135794E-12, 
Z=1.5452567609928165E-12])], [lat=-1.0E-323, lon=0.0([X=1.0, Y=0.0, 
Z=-1.0E-323])]], internalEdges={0}}]}
   [junit4]> Large polygon: GeoComplexPolygon: 
{planetmodel=PlanetModel.SPHERE, number of shapes=1, address=e0a76761, 
testPoint=[lat=0.04032281608974351, 
lon=0.33067345007473165([X=0.945055084899262, Y=0.3244161494642355, 
Z=0.040311889968686655])], testPointInSet=true, shapes={ 
{[lat=1.0468214627857893E-8, lon=8.413079957136915E-7([X=0.6461, 
Y=8.413079957135923E-7, Z=1.0468214627857893E-8])], [lat=-0.3036468642757333, 
lon=0.5616500855257733([X=0.80765773219, Y=0.508219108660839, 
Z=-0.29900221630132817])], [lat=-0.17226782498440368, 
lon=0.8641157866087514([X=0.6397020656700857, Y=0.7492646151846353, 
Z=-0.1714170458549729])], [lat=0.591763073597, 
lon=1.0258877306398073([X=0.43020057589183536, Y=0.7097594028504629, 
Z=0.5578252903622132])], [lat=0.16341821264361944, 
lon=0.04608724380526752([X=0.9856292512291138, Y=0.04545712432110151, 
Z=0.16269182207472105])], [lat=1.5452567609928165E-12, 
lon=5.5280224842135794E-12([X=1.0, Y=5.5280224842135794E-12, 
Z=1.5452567609928165E-12])], [lat=-1.0E-323, lon=0.0([X=1.0, Y=0.0, 
Z=-1.0E-323])]}}
   [junit4]> Point: [lat=-8.763997112262326E-13, 
lon=3.14159265358979([X=-1.0, Y=3.2310891488651735E-15, 
Z=-8.763997112262326E-13])]
   [junit4]> WKT: POLYGON((32.18017946378854 
-17.397683785381247,49.51018758330871 -9.870219317504647,58.77903721991479 
33.90553510354402,2.640604559432277 9.363173880050821,3.1673235739886286E-10 
8.853669066894417E-11,0.0 -5.7E-322,4.820339742500488E-5 
5.99784517213369E-7,32.18017946378854 -17.397683785381247))
   [junit4]> WKT: POINT(179.83 -5.021400461974724E-11)
   [junit4]> normal polygon: true
   [junit4]> large polygon: false
   [junit4]>at 
__randomizedtesting.SeedInfo.seed([5444688174504C79:CC6BBA71B5FC82A6]:0)
   [junit4]>at 
org.apache.lucene.spatial3d.geom.RandomGeoPolygonTest.checkPoint(RandomGeoPolygonTest.java:228)
   [junit4]>at 
org.apache.lucene.spatial3d.geom.RandomGeoPolygonTest.testComparePolygons(RandomGeoPolygonTest.java:203)
   [junit4]>at 
org.apache.lucene.spatial3d.geom.RandomGeoPolygonTest.testCompareBigPolygons(RandomGeoPolygonTest.java:98)
   [junit4]>at 

[JENKINS] Lucene-Solr-repro - Build # 1136 - Unstable

2018-08-03 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-repro/1136/

[...truncated 36 lines...]
[repro] Jenkins log URL: 
https://builds.apache.org/job/Lucene-Solr-Tests-master/2652/consoleText

[repro] Revision: 6afd3d11929a75e3b3310638b32f4ed55da3ea6e

[repro] Repro line:  ant test  -Dtestcase=TestStressCloudBlindAtomicUpdates 
-Dtests.seed=4E3987ED6D2F7DC0 -Dtests.multiplier=2 -Dtests.slow=true 
-Dtests.locale=ga -Dtests.timezone=SystemV/YST9 -Dtests.asserts=true 
-Dtests.file.encoding=US-ASCII

[repro] git rev-parse --abbrev-ref HEAD
[repro] git rev-parse HEAD
[repro] Initial local git branch/revision: 
e3cdb395a4009f118900397c8a2086620b436455
[repro] git fetch
[repro] git checkout 6afd3d11929a75e3b3310638b32f4ed55da3ea6e

[...truncated 2 lines...]
[repro] git merge --ff-only

[...truncated 1 lines...]
[repro] ant clean

[...truncated 6 lines...]
[repro] Test suites by module:
[repro]solr/core
[repro]   TestStressCloudBlindAtomicUpdates
[repro] ant compile-test

[...truncated 3317 lines...]
[repro] ant test-nocompile -Dtests.dups=5 -Dtests.maxfailures=5 
-Dtests.class="*.TestStressCloudBlindAtomicUpdates" -Dtests.showOutput=onerror  
-Dtests.seed=4E3987ED6D2F7DC0 -Dtests.multiplier=2 -Dtests.slow=true 
-Dtests.locale=ga -Dtests.timezone=SystemV/YST9 -Dtests.asserts=true 
-Dtests.file.encoding=US-ASCII

[...truncated 1167 lines...]
[repro] Setting last failure code to 256

[repro] Failures:
[repro]   2/5 failed: org.apache.solr.cloud.TestStressCloudBlindAtomicUpdates
[repro] git checkout e3cdb395a4009f118900397c8a2086620b436455

[...truncated 2 lines...]
[repro] Exiting with code 256

[...truncated 6 lines...]

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

[JENKINS] Lucene-Solr-master-Linux (64bit/jdk-9.0.4) - Build # 22590 - Unstable!

2018-08-03 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/22590/
Java: 64bit/jdk-9.0.4 -XX:-UseCompressedOops -XX:+UseG1GC

4 tests failed.
FAILED:  
org.apache.lucene.spatial3d.geom.RandomGeoPolygonTest.testCompareBigPolygons 
{seed=[5444688174504C79:CC6BBA71B5FC82A6]}

Error Message:
 Standard polygon: GeoCompositePolygon: {[GeoConvexPolygon: 
{planetmodel=PlanetModel.SPHERE, points=[[lat=1.0468214627857893E-8, 
lon=8.413079957136915E-7([X=0.6461, Y=8.413079957135923E-7, 
Z=1.0468214627857893E-8])], [lat=-0.3036468642757333, 
lon=0.5616500855257733([X=0.80765773219, Y=0.508219108660839, 
Z=-0.29900221630132817])], [lat=-0.17226782498440368, 
lon=0.8641157866087514([X=0.6397020656700857, Y=0.7492646151846353, 
Z=-0.1714170458549729])], [lat=0.591763073597, 
lon=1.0258877306398073([X=0.43020057589183536, Y=0.7097594028504629, 
Z=0.5578252903622132])], [lat=0.16341821264361944, 
lon=0.04608724380526752([X=0.9856292512291138, Y=0.04545712432110151, 
Z=0.16269182207472105])]], internalEdges={4}}, GeoConvexPolygon: 
{planetmodel=PlanetModel.SPHERE, points=[[lat=1.0468214627857893E-8, 
lon=8.413079957136915E-7([X=0.6461, Y=8.413079957135923E-7, 
Z=1.0468214627857893E-8])], [lat=0.16341821264361944, 
lon=0.04608724380526752([X=0.9856292512291138, Y=0.04545712432110151, 
Z=0.16269182207472105])], [lat=1.5452567609928165E-12, 
lon=5.5280224842135794E-12([X=1.0, Y=5.5280224842135794E-12, 
Z=1.5452567609928165E-12])]], internalEdges={0, 2}}, GeoConvexPolygon: 
{planetmodel=PlanetModel.SPHERE, points=[[lat=1.0468214627857893E-8, 
lon=8.413079957136915E-7([X=0.6461, Y=8.413079957135923E-7, 
Z=1.0468214627857893E-8])], [lat=1.5452567609928165E-12, 
lon=5.5280224842135794E-12([X=1.0, Y=5.5280224842135794E-12, 
Z=1.5452567609928165E-12])], [lat=-1.0E-323, lon=0.0([X=1.0, Y=0.0, 
Z=-1.0E-323])]], internalEdges={0}}]}  Large polygon: GeoComplexPolygon: 
{planetmodel=PlanetModel.SPHERE, number of shapes=1, address=e0a76761, 
testPoint=[lat=0.04032281608974351, 
lon=0.33067345007473165([X=0.945055084899262, Y=0.3244161494642355, 
Z=0.040311889968686655])], testPointInSet=true, shapes={ 
{[lat=1.0468214627857893E-8, lon=8.413079957136915E-7([X=0.6461, 
Y=8.413079957135923E-7, Z=1.0468214627857893E-8])], [lat=-0.3036468642757333, 
lon=0.5616500855257733([X=0.80765773219, Y=0.508219108660839, 
Z=-0.29900221630132817])], [lat=-0.17226782498440368, 
lon=0.8641157866087514([X=0.6397020656700857, Y=0.7492646151846353, 
Z=-0.1714170458549729])], [lat=0.591763073597, 
lon=1.0258877306398073([X=0.43020057589183536, Y=0.7097594028504629, 
Z=0.5578252903622132])], [lat=0.16341821264361944, 
lon=0.04608724380526752([X=0.9856292512291138, Y=0.04545712432110151, 
Z=0.16269182207472105])], [lat=1.5452567609928165E-12, 
lon=5.5280224842135794E-12([X=1.0, Y=5.5280224842135794E-12, 
Z=1.5452567609928165E-12])], [lat=-1.0E-323, lon=0.0([X=1.0, Y=0.0, 
Z=-1.0E-323])]}}  Point: [lat=-8.763997112262326E-13, 
lon=3.14159265358979([X=-1.0, Y=3.2310891488651735E-15, 
Z=-8.763997112262326E-13])]  WKT: POLYGON((32.18017946378854 
-17.397683785381247,49.51018758330871 -9.870219317504647,58.77903721991479 
33.90553510354402,2.640604559432277 9.363173880050821,3.1673235739886286E-10 
8.853669066894417E-11,0.0 -5.7E-322,4.820339742500488E-5 
5.99784517213369E-7,32.18017946378854 -17.397683785381247))  WKT: 
POINT(179.83 -5.021400461974724E-11) normal polygon: true large 
polygon: false 

Stack Trace:
java.lang.AssertionError: 
Standard polygon: GeoCompositePolygon: {[GeoConvexPolygon: 
{planetmodel=PlanetModel.SPHERE, points=[[lat=1.0468214627857893E-8, 
lon=8.413079957136915E-7([X=0.6461, Y=8.413079957135923E-7, 
Z=1.0468214627857893E-8])], [lat=-0.3036468642757333, 
lon=0.5616500855257733([X=0.80765773219, Y=0.508219108660839, 
Z=-0.29900221630132817])], [lat=-0.17226782498440368, 
lon=0.8641157866087514([X=0.6397020656700857, Y=0.7492646151846353, 
Z=-0.1714170458549729])], [lat=0.591763073597, 
lon=1.0258877306398073([X=0.43020057589183536, Y=0.7097594028504629, 
Z=0.5578252903622132])], [lat=0.16341821264361944, 
lon=0.04608724380526752([X=0.9856292512291138, Y=0.04545712432110151, 
Z=0.16269182207472105])]], internalEdges={4}}, GeoConvexPolygon: 
{planetmodel=PlanetModel.SPHERE, points=[[lat=1.0468214627857893E-8, 
lon=8.413079957136915E-7([X=0.6461, Y=8.413079957135923E-7, 
Z=1.0468214627857893E-8])], [lat=0.16341821264361944, 
lon=0.04608724380526752([X=0.9856292512291138, Y=0.04545712432110151, 
Z=0.16269182207472105])], [lat=1.5452567609928165E-12, 
lon=5.5280224842135794E-12([X=1.0, Y=5.5280224842135794E-12, 
Z=1.5452567609928165E-12])]], internalEdges={0, 2}}, GeoConvexPolygon: 
{planetmodel=PlanetModel.SPHERE, points=[[lat=1.0468214627857893E-8, 
lon=8.413079957136915E-7([X=0.6461, Y=8.413079957135923E-7, 
Z=1.0468214627857893E-8])], [lat=1.5452567609928165E-12, 
lon=5.5280224842135794E-12([X=1.0, 

[jira] [Commented] (SOLR-7767) Zookeeper Ensemble Admin UI

2018-08-03 Thread Upayavira (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-7767?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16568522#comment-16568522
 ] 

Upayavira commented on SOLR-7767:
-

The way zookeeper is handled in the backend of the admin UI is a mess. A simple 
API that returns data as the UI expects it would be a great improvement.

> Zookeeper Ensemble Admin UI
> ---
>
> Key: SOLR-7767
> URL: https://issues.apache.org/jira/browse/SOLR-7767
> Project: Solr
>  Issue Type: New Feature
>  Components: Admin UI, SolrCloud
>Reporter: Aniket Khare
>Assignee: Jan Høydahl
>Priority: Major
> Fix For: master (8.0), 7.5
>
> Attachments: zk-status-error.png, zk-status-tab.png
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> For SolrCloud mode, can we have the functionality to display the live nodes 
> from the ZooKeeper ensemble, so that users can easily tell if any 
> ZooKeeper instance is down or has any other issue?



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-12617) Remove Commons BeanUtils as a dependency

2018-08-03 Thread Varun Thacker (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-12617?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Thacker resolved SOLR-12617.
--
Resolution: Fixed

> Remove Commons BeanUtils as a dependency
> 
>
> Key: SOLR-12617
> URL: https://issues.apache.org/jira/browse/SOLR-12617
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Varun Thacker
>Priority: Major
> Fix For: master (8.0), 7.5
>
> Attachments: SOLR-12617.patch
>
>
> The BeanUtils library is a dependency in the velocity contrib module.
> It is a compile time dependency but the velocity code that Solr uses doesn't 
> leverage any of this.
> After removing the dependency Solr compiles just fine and the browse handler 
> also loads up correctly. 
> While chatting to [~ehatcher] offline he confirmed that the tests also pass 
> without this dependency.
> The main motivation behind this is a long standing CVE against bean-utils 
> 1.8.3 ( 
> [https://nvd.nist.gov/vuln/detail/CVE-2014-0114#vulnCurrentDescriptionTitle] 
> ) which to my knowledge cannot be leveraged from how we use it in Solr. But 
> security scans still pick it up, so if it's not being used we should simply 
> remove it.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12617) Remove Commons BeanUtils as a dependency

2018-08-03 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12617?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16568502#comment-16568502
 ] 

ASF subversion and git services commented on SOLR-12617:


Commit 61db4ab8acc33c0cb8a649629a5e67405bea in lucene-solr's branch 
refs/heads/branch_7x from [~varunthacker]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=61db4ab ]

SOLR-12617: Remove Commons BeanUtils as a dependency

(cherry picked from commit e3cdb39)


> Remove Commons BeanUtils as a dependency
> 
>
> Key: SOLR-12617
> URL: https://issues.apache.org/jira/browse/SOLR-12617
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Varun Thacker
>Priority: Major
> Fix For: master (8.0), 7.5
>
> Attachments: SOLR-12617.patch
>
>
> The BeanUtils library is a dependency in the velocity contrib module.
> It is a compile time dependency but the velocity code that Solr uses doesn't 
> leverage any of this.
> After removing the dependency Solr compiles just fine and the browse handler 
> also loads up correctly. 
> While chatting to [~ehatcher] offline he confirmed that the tests also pass 
> without this dependency.
> The main motivation behind this is a long standing CVE against bean-utils 
> 1.8.3 ( 
> [https://nvd.nist.gov/vuln/detail/CVE-2014-0114#vulnCurrentDescriptionTitle] 
> ) which to my knowledge cannot be leveraged from how we use it in Solr. But 
> security scans still pick it up, so if it's not being used we should simply 
> remove it.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-8444) Geo3D Test Failure: Test Point is Contained by shape but outside the XYZBounds

2018-08-03 Thread Nicholas Knize (JIRA)


 [ 
https://issues.apache.org/jira/browse/LUCENE-8444?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nicholas Knize updated LUCENE-8444:
---
Description: 
Reproduces for me on branch_7x.  /cc [~daddywri]  [~ivera]
{code:java}
reproduce with: ant test  -Dtestcase=TestGeo3DPoint 
-Dtests.method=testGeo3DRelations -Dtests.seed=252B55C41A78F987 
-Dtests.slow=true -Dtests.badapples=true -Dtests.locale=th 
-Dtests.timezone=America/Virgin -Dtests.asserts=true -Dtests.file.encoding=UTF-8
{code}
{code:java}
[junit4] Suite: org.apache.lucene.spatial3d.TestGeo3DPoint
   [junit4]   1> doc=639 is contained by shape but is outside the returned 
XYZBounds
   [junit4]   1>   unquantized=[lat=-1.077431832267001, 
lon=3.141592653589793([X=-0.47288721079787505, Y=5.791198090613375E-17, 
Z=-0.8794340737031547])]
   [junit4]   1>   quantized=[X=-0.47288721059145067, 
Y=2.3309121299774915E-10, Z=-0.8794340734858216]
   [junit4]   1> doc=1079 is contained by shape but is outside the returned 
XYZBounds
   [junit4]   1>   unquantized=[lat=-1.074298280522397, 
lon=-3.141592653589793([X=-0.4756448135017662, Y=-5.824968983859777E-17, 
Z=-0.8779556514050441])]
   [junit4]   1>   quantized=[X=-0.4756448134355703, 
Y=-2.3309121299774915E-10, Z=-0.8779556514433299]
   [junit4]   1>   shape=GeoComplexPolygon: {planetmodel=PlanetModel.WGS84, 
number of shapes=1, address=5b34ab34, testPoint=[lat=-0.9074319066955279, 
lon=2.1047077826887393E-11([X=0.6151745825332513, Y=1.2947627315700302E-11, 
Z=-0.7871615107396388])], testPointInSet=true, shapes={ 
{[lat=0.12234154783984401, lon=2.9773900430735544E-11([X=0.9935862314832985, 
Y=2.9582937525533484E-11, Z=0.12216699617265761])], [lat=-1.1812619187738946, 
lon=0.0([X=0.3790909950565304, Y=0.0, Z=-0.9234617794363308])], 
[lat=-1.5378336326638269, lon=-2.177768668411E-97([X=0.03288309726634029, 
Y=-7.161177895900688E-99, Z=-0.9972239126272725])]}}
   [junit4]   1>   bounds=XYZBounds: [xmin=0.03288309626634029 
xmax=1.0011188549924792 ymin=-1.0E-9 ymax=1.029686850221785E-9 
zmin=-0.9972239136272725 zmax=0.12216699717265761]
   [junit4]   2> NOTE: reproduce with: ant test  -Dtestcase=TestGeo3DPoint 
-Dtests.method=testGeo3DRelations -Dtests.seed=252B55C41A78F987 
-Dtests.slow=true -Dtests.badapples=true -Dtests.locale=th 
-Dtests.timezone=America/Virgin -Dtests.asserts=true -Dtests.file.encoding=UTF-8
   [junit4] FAILURE 0.16s | TestGeo3DPoint.testGeo3DRelations <<<
   [junit4]> Throwable #1: java.lang.AssertionError: invalid bounds for 
shape=GeoComplexPolygon: {planetmodel=PlanetModel.WGS84, number of shapes=1, 
address=5b34ab34, testPoint=[lat=-0.9074319066955279, 
lon=2.1047077826887393E-11([X=0.6151745825332513, Y=1.2947627315700302E-11, 
Z=-0.7871615107396388])], testPointInSet=true, shapes={ 
{[lat=0.12234154783984401, lon=2.9773900430735544E-11([X=0.9935862314832985, 
Y=2.9582937525533484E-11, Z=0.12216699617265761])], [lat=-1.1812619187738946, 
lon=0.0([X=0.3790909950565304, Y=0.0, Z=-0.9234617794363308])], 
[lat=-1.5378336326638269, lon=-2.177768668411E-97([X=0.03288309726634029, 
Y=-7.161177895900688E-99, Z=-0.9972239126272725])]}}
   [junit4]>at 
__randomizedtesting.SeedInfo.seed([252B55C41A78F987:955428509535571B]:0)
   [junit4]>at 
org.apache.lucene.spatial3d.TestGeo3DPoint.testGeo3DRelations(TestGeo3DPoint.java:259)
   [junit4]>at java.lang.Thread.run(Thread.java:748)
   [junit4]   2> NOTE: test params are: codec=Asserting(Lucene70), 
sim=RandomSimilarity(queryNorm=false): {}, locale=th, timezone=America/Virgin
   [junit4]   2> NOTE: Linux 4.15.0-29-generic amd64/Oracle Corporation 
1.8.0_161 (64-bit)/cpus=4,threads=1,free=298939008,total=313524224
   [junit4]   2> NOTE: All tests run in this JVM: [TestGeo3DPoint]
   [junit4] Completed [1/1 (1!)] in 0.62s, 1 test, 1 failure <<< FAILURES!

{code}

  was:
Reproduces for me.  /cc [~daddywri]  [~ivera]
{code:java}
reproduce with: ant test  -Dtestcase=TestGeo3DPoint 
-Dtests.method=testGeo3DRelations -Dtests.seed=252B55C41A78F987 
-Dtests.slow=true -Dtests.badapples=true -Dtests.locale=th 
-Dtests.timezone=America/Virgin -Dtests.asserts=true -Dtests.file.encoding=UTF-8
{code}
{code:java}
[junit4] Suite: org.apache.lucene.spatial3d.TestGeo3DPoint
   [junit4]   1> doc=639 is contained by shape but is outside the returned 
XYZBounds
   [junit4]   1>   unquantized=[lat=-1.077431832267001, 
lon=3.141592653589793([X=-0.47288721079787505, Y=5.791198090613375E-17, 
Z=-0.8794340737031547])]
   [junit4]   1>   quantized=[X=-0.47288721059145067, 
Y=2.3309121299774915E-10, Z=-0.8794340734858216]
   [junit4]   1> doc=1079 is contained by shape but is outside the returned 
XYZBounds
   [junit4]   1>   unquantized=[lat=-1.074298280522397, 
lon=-3.141592653589793([X=-0.4756448135017662, Y=-5.824968983859777E-17, 
Z=-0.8779556514050441])]
   [junit4]   1>   quantized=[X=-0.4756448134355703, 

[jira] [Created] (LUCENE-8444) Geo3D Test Failure: Test Point is Contained by shape but outside the XYZBounds

2018-08-03 Thread Nicholas Knize (JIRA)
Nicholas Knize created LUCENE-8444:
--

 Summary: Geo3D Test Failure: Test Point is Contained by shape but 
outside the XYZBounds
 Key: LUCENE-8444
 URL: https://issues.apache.org/jira/browse/LUCENE-8444
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Nicholas Knize


Reproduces for me.  /cc [~daddywri]  [~ivera]
{code:java}
reproduce with: ant test  -Dtestcase=TestGeo3DPoint 
-Dtests.method=testGeo3DRelations -Dtests.seed=252B55C41A78F987 
-Dtests.slow=true -Dtests.badapples=true -Dtests.locale=th 
-Dtests.timezone=America/Virgin -Dtests.asserts=true -Dtests.file.encoding=UTF-8
{code}
{code:java}
[junit4] Suite: org.apache.lucene.spatial3d.TestGeo3DPoint
   [junit4]   1> doc=639 is contained by shape but is outside the returned 
XYZBounds
   [junit4]   1>   unquantized=[lat=-1.077431832267001, 
lon=3.141592653589793([X=-0.47288721079787505, Y=5.791198090613375E-17, 
Z=-0.8794340737031547])]
   [junit4]   1>   quantized=[X=-0.47288721059145067, 
Y=2.3309121299774915E-10, Z=-0.8794340734858216]
   [junit4]   1> doc=1079 is contained by shape but is outside the returned 
XYZBounds
   [junit4]   1>   unquantized=[lat=-1.074298280522397, 
lon=-3.141592653589793([X=-0.4756448135017662, Y=-5.824968983859777E-17, 
Z=-0.8779556514050441])]
   [junit4]   1>   quantized=[X=-0.4756448134355703, 
Y=-2.3309121299774915E-10, Z=-0.8779556514433299]
   [junit4]   1>   shape=GeoComplexPolygon: {planetmodel=PlanetModel.WGS84, 
number of shapes=1, address=5b34ab34, testPoint=[lat=-0.9074319066955279, 
lon=2.1047077826887393E-11([X=0.6151745825332513, Y=1.2947627315700302E-11, 
Z=-0.7871615107396388])], testPointInSet=true, shapes={ 
{[lat=0.12234154783984401, lon=2.9773900430735544E-11([X=0.9935862314832985, 
Y=2.9582937525533484E-11, Z=0.12216699617265761])], [lat=-1.1812619187738946, 
lon=0.0([X=0.3790909950565304, Y=0.0, Z=-0.9234617794363308])], 
[lat=-1.5378336326638269, lon=-2.177768668411E-97([X=0.03288309726634029, 
Y=-7.161177895900688E-99, Z=-0.9972239126272725])]}}
   [junit4]   1>   bounds=XYZBounds: [xmin=0.03288309626634029 
xmax=1.0011188549924792 ymin=-1.0E-9 ymax=1.029686850221785E-9 
zmin=-0.9972239136272725 zmax=0.12216699717265761]
   [junit4]   2> NOTE: reproduce with: ant test  -Dtestcase=TestGeo3DPoint 
-Dtests.method=testGeo3DRelations -Dtests.seed=252B55C41A78F987 
-Dtests.slow=true -Dtests.badapples=true -Dtests.locale=th 
-Dtests.timezone=America/Virgin -Dtests.asserts=true -Dtests.file.encoding=UTF-8
   [junit4] FAILURE 0.16s | TestGeo3DPoint.testGeo3DRelations <<<
   [junit4]> Throwable #1: java.lang.AssertionError: invalid bounds for 
shape=GeoComplexPolygon: {planetmodel=PlanetModel.WGS84, number of shapes=1, 
address=5b34ab34, testPoint=[lat=-0.9074319066955279, 
lon=2.1047077826887393E-11([X=0.6151745825332513, Y=1.2947627315700302E-11, 
Z=-0.7871615107396388])], testPointInSet=true, shapes={ 
{[lat=0.12234154783984401, lon=2.9773900430735544E-11([X=0.9935862314832985, 
Y=2.9582937525533484E-11, Z=0.12216699617265761])], [lat=-1.1812619187738946, 
lon=0.0([X=0.3790909950565304, Y=0.0, Z=-0.9234617794363308])], 
[lat=-1.5378336326638269, lon=-2.177768668411E-97([X=0.03288309726634029, 
Y=-7.161177895900688E-99, Z=-0.9972239126272725])]}}
   [junit4]>at 
__randomizedtesting.SeedInfo.seed([252B55C41A78F987:955428509535571B]:0)
   [junit4]>at 
org.apache.lucene.spatial3d.TestGeo3DPoint.testGeo3DRelations(TestGeo3DPoint.java:259)
   [junit4]>at java.lang.Thread.run(Thread.java:748)
   [junit4]   2> NOTE: test params are: codec=Asserting(Lucene70), 
sim=RandomSimilarity(queryNorm=false): {}, locale=th, timezone=America/Virgin
   [junit4]   2> NOTE: Linux 4.15.0-29-generic amd64/Oracle Corporation 
1.8.0_161 (64-bit)/cpus=4,threads=1,free=298939008,total=313524224
   [junit4]   2> NOTE: All tests run in this JVM: [TestGeo3DPoint]
   [junit4] Completed [1/1 (1!)] in 0.62s, 1 test, 1 failure <<< FAILURES!

{code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12617) Remove Commons BeanUtils as a dependency

2018-08-03 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12617?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16568482#comment-16568482
 ] 

ASF subversion and git services commented on SOLR-12617:


Commit e3cdb395a4009f118900397c8a2086620b436455 in lucene-solr's branch 
refs/heads/master from [~varunthacker]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=e3cdb39 ]

SOLR-12617: Remove Commons BeanUtils as a dependency


> Remove Commons BeanUtils as a dependency
> 
>
> Key: SOLR-12617
> URL: https://issues.apache.org/jira/browse/SOLR-12617
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Varun Thacker
>Priority: Major
> Fix For: master (8.0), 7.5
>
> Attachments: SOLR-12617.patch
>
>
> The BeanUtils library is a dependency in the velocity contrib module.
> It is a compile time dependency but the velocity code that Solr uses doesn't 
> leverage any of this.
> After removing the dependency Solr compiles just fine and the browse handler 
> also loads up correctly. 
> While chatting to [~ehatcher] offline he confirmed that the tests also pass 
> without this dependency.
> The main motivation behind this is a long-standing CVE against bean-utils 
> 1.8.3 ( 
> [https://nvd.nist.gov/vuln/detail/CVE-2014-0114#vulnCurrentDescriptionTitle] 
> ) which to my knowledge cannot be leveraged given how we use it in Solr. But 
> security scans still pick it up so if it's not being used we should simply 
> remove it.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-repro - Build # 1134 - Unstable

2018-08-03 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-repro/1134/

[...truncated 33 lines...]
[repro] Jenkins log URL: 
https://builds.apache.org/job/Lucene-Solr-Tests-7.x/729/consoleText

[repro] Revision: 2b121e7f2267d185455b4f6bf4aa9fa6bf9266f9

[repro] Repro line:  ant test  -Dtestcase=TestTriggerIntegration 
-Dtests.method=testNodeLostTriggerRestoreState -Dtests.seed=8C54490F02AFFD95 
-Dtests.multiplier=2 -Dtests.slow=true -Dtests.locale=fr-CH 
-Dtests.timezone=America/Jamaica -Dtests.asserts=true 
-Dtests.file.encoding=ISO-8859-1

[repro] git rev-parse --abbrev-ref HEAD
[repro] git rev-parse HEAD
[repro] Initial local git branch/revision: 
2a41cbd192451f6e69ae2e6cccb7b2e26af2
[repro] git fetch
[repro] git checkout 2b121e7f2267d185455b4f6bf4aa9fa6bf9266f9

[...truncated 2 lines...]
[repro] git merge --ff-only

[...truncated 1 lines...]
[repro] ant clean

[...truncated 6 lines...]
[repro] Test suites by module:
[repro]solr/core
[repro]   TestTriggerIntegration
[repro] ant compile-test

[...truncated 3334 lines...]
[repro] ant test-nocompile -Dtests.dups=5 -Dtests.maxfailures=5 
-Dtests.class="*.TestTriggerIntegration" -Dtests.showOutput=onerror  
-Dtests.seed=8C54490F02AFFD95 -Dtests.multiplier=2 -Dtests.slow=true 
-Dtests.locale=fr-CH -Dtests.timezone=America/Jamaica -Dtests.asserts=true 
-Dtests.file.encoding=ISO-8859-1

[...truncated 4482 lines...]
[repro] Setting last failure code to 256

[repro] Failures:
[repro]   2/5 failed: 
org.apache.solr.cloud.autoscaling.sim.TestTriggerIntegration
[repro] git checkout 2a41cbd192451f6e69ae2e6cccb7b2e26af2

[...truncated 2 lines...]
[repro] Exiting with code 256

[...truncated 6 lines...]

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

Re: Solr issues in status "IN PROGRESS"

2018-08-03 Thread Cassandra Targett
I agree that the problem is the workflow. Only the Jira system admins
(Infra) can modify a workflow, so I think an issue for them to transition
the issues and remove that possible status is a good idea. Or, if we keep
the In Progress state, we need to make sure we can transition issues in
and out of it.

On Fri, Aug 3, 2018 at 10:07 AM Jan Høydahl  wrote:

> The error is "NOTE: You do not have permission to perform a bulk
> transition on the selected 7 issues."
> A little googl'ing shows that this is because I do not have permission to
> transition one of them either.
> And that is probably because our workflow is broken - it contains a
> dangling "IN PROGRESS" state, but
> it is not connected with any arrows to other states, so it is impossible
> to get out of it.
> And I cannot change (temporarily) the workflow either, it asks me to
> contact Jira admin.
>
> This makes me think whether we perhaps need to ask Infra to fix our
> workflow and/or transition all IN PROGRESS issues for us.
>
> --
> Jan Høydahl, search solution architect
> Cominvent AS - www.cominvent.com
>
> 3. aug. 2018 kl. 15:42 skrev Cassandra Targett :
>
> Are you sure you don't have the permissions? The user "janhoy" has
> Administrator-level permissions for the SOLR project, and the Administrator
> role has the permissions to transition issues, so you should be able to.
>
> If you can't make that change it's possible you're trying to transition to
> a status that is not allowed according to the workflow.
>
> Cassandra
>
> On Fri, Aug 3, 2018 at 6:39 AM Jan Høydahl  wrote:
>
>> Hi,
>>
>> I have some issues in status "IN PROGRESS" which are impossible to
>> transition to any other state after the recent changes. The resolve button
>> is not there and not in menus either. So I believe these are trapped in
>> no-mans-land… There are a total of 64 issues with this status.
>>
>> I guess it makes sense to bulk transition all of these to state OPEN. But
>> I don't seem to have the JIRA karma to bulk transition issues either. Can
>> some other committer make an attempt?
>>
>> --
>> Jan Høydahl, search solution architect
>> Cominvent AS - www.cominvent.com
>>
>>
>> -
>> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
>> For additional commands, e-mail: dev-h...@lucene.apache.org
>>
>>
>


[GitHub] lucene-solr pull request #428: SOLR-12586: deprecate joda-time and use java....

2018-08-03 Thread barrotsteindev
Github user barrotsteindev commented on a diff in the pull request:

https://github.com/apache/lucene-solr/pull/428#discussion_r207591947
  
--- Diff: 
solr/core/src/java/org/apache/solr/update/processor/ParseDateFieldUpdateProcessorFactory.java
 ---
@@ -115,9 +123,10 @@ protected Object mutateValue(Object srcVal) {
   for (Map.Entry format : 
formats.entrySet()) {
 DateTimeFormatter parser = format.getValue();
 try {
-  DateTime dateTime = parser.parseDateTime(srcStringVal);
-  return dateTime.withZone(DateTimeZone.UTC).toDate();
-} catch (IllegalArgumentException e) {
+  TemporalAccessor parsedTemporalDate = 
parser.parseBest(srcStringVal, OffsetDateTime::from,
--- End diff --

Alright.
Hopefully this one is close to being merged. Any other problems you foresee 
here?
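
A rough, self-contained sketch of the parseBest idiom this diff moves to; the 
pattern string, the sample input, and the UTC fallback below are assumptions 
made for the sketch, not ParseDateFieldUpdateProcessorFactory's actual 
configuration:

```java
import java.time.Instant;
import java.time.LocalDate;
import java.time.LocalDateTime;
import java.time.OffsetDateTime;
import java.time.ZoneOffset;
import java.time.format.DateTimeFormatter;
import java.time.temporal.TemporalAccessor;
import java.util.Date;
import java.util.Locale;

public class ParseBestSketch {
  public static void main(String[] args) {
    // Hypothetical pattern with optional time and offset sections.
    DateTimeFormatter parser =
        DateTimeFormatter.ofPattern("uuuu-MM-dd['T'HH:mm:ss[XXX]]", Locale.ROOT);

    // parseBest tries the queries in order and returns the first complete match:
    // an input with an offset resolves to OffsetDateTime, a date-only input
    // falls through to LocalDate.
    TemporalAccessor parsed = parser.parseBest("2018-08-03T10:15:30+02:00",
        OffsetDateTime::from, LocalDateTime::from, LocalDate::from);

    final Instant instant;
    if (parsed instanceof OffsetDateTime) {
      instant = ((OffsetDateTime) parsed).toInstant();
    } else if (parsed instanceof LocalDateTime) {
      instant = ((LocalDateTime) parsed).toInstant(ZoneOffset.UTC); // assume UTC when no offset given
    } else {
      instant = ((LocalDate) parsed).atStartOfDay(ZoneOffset.UTC).toInstant();
    }

    // The update processor ultimately hands Solr a java.util.Date.
    System.out.println(Date.from(instant));
  }
}
```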


---

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[GitHub] lucene-solr pull request #428: SOLR-12586: deprecate joda-time and use java....

2018-08-03 Thread dsmiley
Github user dsmiley commented on a diff in the pull request:

https://github.com/apache/lucene-solr/pull/428#discussion_r207590707
  
--- Diff: 
solr/core/src/java/org/apache/solr/update/processor/ParseDateFieldUpdateProcessorFactory.java
 ---
@@ -115,9 +123,10 @@ protected Object mutateValue(Object srcVal) {
   for (Map.Entry format : 
formats.entrySet()) {
 DateTimeFormatter parser = format.getValue();
 try {
-  DateTime dateTime = parser.parseDateTime(srcStringVal);
-  return dateTime.withZone(DateTimeZone.UTC).toDate();
-} catch (IllegalArgumentException e) {
+  TemporalAccessor parsedTemporalDate = 
parser.parseBest(srcStringVal, OffsetDateTime::from,
--- End diff --

SOLR-12591 is next (docs + tests), then removal in SOLR-12593


---

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (LUCENE-8443) TestLatLonPointShapeQueries.testRandomTiny failing

2018-08-03 Thread Nicholas Knize (JIRA)


 [ 
https://issues.apache.org/jira/browse/LUCENE-8443?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nicholas Knize resolved LUCENE-8443.

Resolution: Fixed

> TestLatLonPointShapeQueries.testRandomTiny failing
> --
>
> Key: LUCENE-8443
> URL: https://issues.apache.org/jira/browse/LUCENE-8443
> Project: Lucene - Core
>  Issue Type: Test
>Reporter: Jim Ferenczi
>Assignee: Nicholas Knize
>Priority: Minor
>
> This test fails with various seeds.
> Here's one:
> {noformat}
>  ant test  \
>   -Dtestcase=TestLatLonPointShapeQueries \
>   -Dtests.method=testRandomTiny \
>   -Dtests.seed=E784AAB1723B2B7D \
>   -Dtests.slow=true \
>   -Dtests.badapples=true \
>   -Dtests.locale=th \
>   -Dtests.timezone=America/Paramaribo \
>   -Dtests.asserts=true \
>   -Dtests.file.encoding=US-ASCII
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8443) TestLatLonPointShapeQueries.testRandomTiny failing

2018-08-03 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8443?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16568387#comment-16568387
 ] 

ASF subversion and git services commented on LUCENE-8443:
-

Commit cfd4154b1a239827b9f6536eeaa54c9813edcf32 in lucene-solr's branch 
refs/heads/branch_7x from [~nknize]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=cfd4154 ]

LUCENE-8443: Fix InverseIntersectVisitor logic for LatLonShape queries, add 
adversarial test for same shape many times


> TestLatLonPointShapeQueries.testRandomTiny failing
> --
>
> Key: LUCENE-8443
> URL: https://issues.apache.org/jira/browse/LUCENE-8443
> Project: Lucene - Core
>  Issue Type: Test
>Reporter: Jim Ferenczi
>Assignee: Nicholas Knize
>Priority: Minor
>
> This test fails with various seeds.
> Here's one:
> {noformat}
>  ant test  \
>   -Dtestcase=TestLatLonPointShapeQueries \
>   -Dtests.method=testRandomTiny \
>   -Dtests.seed=E784AAB1723B2B7D \
>   -Dtests.slow=true \
>   -Dtests.badapples=true \
>   -Dtests.locale=th \
>   -Dtests.timezone=America/Paramaribo \
>   -Dtests.asserts=true \
>   -Dtests.file.encoding=US-ASCII
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Assigned] (SOLR-12485) Xml loader should save the relationship of children

2018-08-03 Thread David Smiley (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-12485?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Smiley reassigned SOLR-12485:
---

Assignee: David Smiley

> Xml loader should save the relationship of children
> ---
>
> Key: SOLR-12485
> URL: https://issues.apache.org/jira/browse/SOLR-12485
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: mosh
>Assignee: David Smiley
>Priority: Major
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> Once SolrInputDocument supports labeled child documents, XmlLoader should add 
> the child document to the map while saving its key name, to maintain the 
> child's relationship to its parent.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[GitHub] lucene-solr pull request #430: SOLR-12485: support labelled children in xml ...

2018-08-03 Thread dsmiley
Github user dsmiley commented on a diff in the pull request:

https://github.com/apache/lucene-solr/pull/430#discussion_r207588377
  
--- Diff: 
solr/solrj/src/java/org/apache/solr/client/solrj/util/ClientUtils.java ---
@@ -72,7 +72,9 @@ public static void writeXML( SolrInputDocument doc, 
Writer writer ) throws IOExc
   for( Object v : field ) {
 String update = null;
 
-if (v instanceof Map) {
+if(v instanceof SolrInputDocument) {
--- End diff --

Good catch!


---

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[GitHub] lucene-solr pull request #430: SOLR-12485: support labelled children in xml ...

2018-08-03 Thread dsmiley
Github user dsmiley commented on a diff in the pull request:

https://github.com/apache/lucene-solr/pull/430#discussion_r207587702
  
--- Diff: solr/core/src/test/org/apache/solr/update/AddBlockUpdateTest.java 
---
@@ -501,6 +501,73 @@ public void testXML() throws IOException, 
XMLStreamException {

   }
 
+  @Test
+  public void testXMLLabeledChildren() throws IOException, 
XMLStreamException {
--- End diff --

This is a copy-paste of the previous test, but as such doesn't test that 
these labels are retained in any way.  In this test, I think you just need to 
use the XMLLoader to get a SolrInputDocument, and at that point you can test 
that there is a field "test" with these two child documents as values.

I do like that in this test you've displayed the syntax.  I was expecting 
something like 
```

p1

  
c1
I like this
  


```
What do you think of that syntax?  The intention is to make it super clear 
that the child document is attached as a field to the parent document, which is 
also reflected in the internal API, and so it's consistent.  I haven't looked 
at the details of XMLLoader yet to judge how approachable this is.  Putting a 
"name" attribute on doc isn't bad but makes the "field" relationship un-obvious.


---

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-12621) Admin UI Returns 404 error after jdk point version upgrade and service restart

2018-08-03 Thread Rome Sheehan (JIRA)
Rome Sheehan created SOLR-12621:
---

 Summary: Admin UI Returns 404 error after jdk point version 
upgrade and service restart 
 Key: SOLR-12621
 URL: https://issues.apache.org/jira/browse/SOLR-12621
 Project: Solr
  Issue Type: Bug
  Security Level: Public (Default Security Level. Issues are Public)
  Components: Server
Affects Versions: 6.6
Reporter: Rome Sheehan


After a recent RHEL O/S upgrade, the admin UI was returning a 404 error even 
though the Solr service was up and running.  Further investigation suggested 
that the Java version used to start the service was to blame.  Since we were 
seeing the issue only on QA and PROD had not yet been patched, the Linux admin 
restarted the service on the affected server using the pre-upgrade version of 
the JDK, which solved the issue.

We're wondering why this would fail on a minor point version upgrade.

More details:

Pre-upgrade - RHEL 7.4 , openjdk v 1.8.0_144

Post-upgrade - RHEL 7.5, openjdk v 1.8.0_171-8 (the service started, but admin 
UI not working)

Solr version 6.6.0 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8443) TestLatLonPointShapeQueries.testRandomTiny failing

2018-08-03 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8443?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16568371#comment-16568371
 ] 

ASF subversion and git services commented on LUCENE-8443:
-

Commit 2a41cbd192451f6e69ae2e6cccb7b2e26af2 in lucene-solr's branch 
refs/heads/master from [~nknize]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=2a41cbd ]

LUCENE-8443: Fix InverseIntersectVisitor logic for LatLonShape queries, add 
adversarial test for same shape many times


> TestLatLonPointShapeQueries.testRandomTiny failing
> --
>
> Key: LUCENE-8443
> URL: https://issues.apache.org/jira/browse/LUCENE-8443
> Project: Lucene - Core
>  Issue Type: Test
>Reporter: Jim Ferenczi
>Assignee: Nicholas Knize
>Priority: Minor
>
> This test fails with various seeds.
> Here's one:
> {noformat}
>  ant test  \
>   -Dtestcase=TestLatLonPointShapeQueries \
>   -Dtests.method=testRandomTiny \
>   -Dtests.seed=E784AAB1723B2B7D \
>   -Dtests.slow=true \
>   -Dtests.badapples=true \
>   -Dtests.locale=th \
>   -Dtests.timezone=America/Paramaribo \
>   -Dtests.asserts=true \
>   -Dtests.file.encoding=US-ASCII
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[GitHub] lucene-solr pull request #428: SOLR-12586: deprecate joda-time and use java....

2018-08-03 Thread barrotsteindev
Github user barrotsteindev commented on a diff in the pull request:

https://github.com/apache/lucene-solr/pull/428#discussion_r207585634
  
--- Diff: 
solr/core/src/java/org/apache/solr/update/processor/ParseDateFieldUpdateProcessorFactory.java
 ---
@@ -115,9 +123,10 @@ protected Object mutateValue(Object srcVal) {
   for (Map.Entry format : 
formats.entrySet()) {
 DateTimeFormatter parser = format.getValue();
 try {
-  DateTime dateTime = parser.parseDateTime(srcStringVal);
-  return dateTime.withZone(DateTimeZone.UTC).toDate();
-} catch (IllegalArgumentException e) {
+  TemporalAccessor parsedTemporalDate = 
parser.parseBest(srcStringVal, OffsetDateTime::from,
--- End diff --

Ok, would this be part of another ticket?


---

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8291) Possible security issue when parsing XML documents containing external entity references

2018-08-03 Thread Andrejs Aleksejevs (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8291?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16568354#comment-16568354
 ] 

Andrejs Aleksejevs commented on LUCENE-8291:


Hi, [~thetaphi] thanks for the comment. Will try to use it.

> Possible security issue when parsing XML documents containing external entity 
> references
> 
>
> Key: LUCENE-8291
> URL: https://issues.apache.org/jira/browse/LUCENE-8291
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: modules/queryparser
>Affects Versions: 7.2.1
>Reporter: Hendrik Saly
>Assignee: Uwe Schindler
>Priority: Major
> Fix For: 7.4, master (8.0)
>
> Attachments: LUCENE-8291-2.patch, LUCENE-8291.patch
>
>
> It appears that in QueryTemplateManager.java lines 149 and 198 and in 
> DOMUtils.java line 204 XML is parsed without disabling external entity 
> references (XXE). This is described in 
> [http://cwe.mitre.org/data/definitions/611.html] and possible mitigations are 
> listed here: 
> [https://www.owasp.org/index.php/XML_External_Entity_(XXE)_Prevention_Cheat_Sheet]
> All recent versions of lucene are affected.
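
As a point of reference, a generic sketch of the kind of hardening the OWASP 
cheat sheet recommends for a JAXP DocumentBuilderFactory; this is not the 
patch applied here, only an illustration of disabling DTDs and external 
entities:

{code:java}
import javax.xml.XMLConstants;
import javax.xml.parsers.DocumentBuilder;
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.parsers.ParserConfigurationException;

public class HardenedXmlParser {
  public static DocumentBuilder newHardenedBuilder() throws ParserConfigurationException {
    DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance();
    // Disallow DOCTYPE declarations entirely; this blocks most XXE vectors.
    dbf.setFeature("http://apache.org/xml/features/disallow-doctype-decl", true);
    // Also disable external general and parameter entities.
    dbf.setFeature("http://xml.org/sax/features/external-general-entities", false);
    dbf.setFeature("http://xml.org/sax/features/external-parameter-entities", false);
    dbf.setXIncludeAware(false);
    dbf.setExpandEntityReferences(false);
    dbf.setFeature(XMLConstants.FEATURE_SECURE_PROCESSING, true);
    return dbf.newDocumentBuilder();
  }
}
{code}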



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[GitHub] lucene-solr pull request #428: SOLR-12586: deprecate joda-time and use java....

2018-08-03 Thread dsmiley
Github user dsmiley commented on a diff in the pull request:

https://github.com/apache/lucene-solr/pull/428#discussion_r207577229
  
--- Diff: 
solr/core/src/java/org/apache/solr/update/processor/ParseDateFieldUpdateProcessorFactory.java
 ---
@@ -115,9 +123,10 @@ protected Object mutateValue(Object srcVal) {
   for (Map.Entry format : 
formats.entrySet()) {
 DateTimeFormatter parser = format.getValue();
 try {
-  DateTime dateTime = parser.parseDateTime(srcStringVal);
-  return dateTime.withZone(DateTimeZone.UTC).toDate();
-} catch (IllegalArgumentException e) {
+  TemporalAccessor parsedTemporalDate = 
parser.parseBest(srcStringVal, OffsetDateTime::from,
--- End diff --

I determined the functionality in the "extraction" module should simply go 
away in lieu of this URP, so there will be no duplication.


---

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: Solr issues in status "IN PROGRESS"

2018-08-03 Thread Jan Høydahl
The error is "NOTE: You do not have permission to perform a bulk transition on 
the selected 7 issues."
A little googl'ing shows that this is because I do not have permission to 
transition one of them either.
And that is probably because our workflow is broken - it contains a dangling 
"IN PROGRESS" state, but
it is not connected with any arrows to other states, so it is impossible to get 
out of it.
And I cannot change (temporarily) the workflow either, it asks me to contact 
Jira admin.

This makes me think whether we perhaps need to ask Infra to fix our workflow 
and/or transition all IN PROGRESS issues for us.

--
Jan Høydahl, search solution architect
Cominvent AS - www.cominvent.com

> 3. aug. 2018 kl. 15:42 skrev Cassandra Targett :
> 
> Are you sure you don't have the permissions? The user "janhoy" has 
> Administrator-level permissions for the SOLR project, and the Administrator 
> role has the permissions to transition issues, so you should be able to.
> 
> If you can't make that change it's possible you're trying to transition to a 
> status that is not allowed according to the workflow.
> 
> Cassandra
> 
> On Fri, Aug 3, 2018 at 6:39 AM Jan Høydahl  > wrote:
> Hi,
> 
> I have some issues in status "IN PROGRESS" which are impossible to 
> transition to any other state after the recent changes. The resolve button 
> is not there and not in menus either. So I believe these are trapped in 
> no-mans-land… There are a total of 64 issues with this status.
> 
> I guess it makes sense to bulk transition all of these to state OPEN. But I 
> don't seem to have the JIRA karma to bulk transition issues either. Can some 
> other committer make an attempt?
> 
> --
> Jan Høydahl, search solution architect
> Cominvent AS - www.cominvent.com 
> 
> 
> -
> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org 
> 
> For additional commands, e-mail: dev-h...@lucene.apache.org 
> 
> 



[jira] [Commented] (SOLR-7767) Zookeeper Ensemble Admin UI

2018-08-03 Thread JIRA


[ 
https://issues.apache.org/jira/browse/SOLR-7767?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16568301#comment-16568301
 ] 

Jan Høydahl commented on SOLR-7767:
---

I'm approaching this again. Have pushed this to the PR
 * CHANGES entry
 * [RefGuide 
description|https://github.com/cominvent/lucene-solr/blob/solr7767-zk-admin/solr/solr-ref-guide/src/cloud-screens.adoc]
 with screenshot (same as in this issue)
 * Added test {{ZookeeperInfoHandlerTest}}

I think in the first version we'll simply list all the info in the table; we 
could add fancy collapsing in later iterations if needed. The same goes for a 
refresh button: this UI is very simple, little data is fetched, and you can 
always reload in the browser.

One improvement could be to register this monitoring on its own endpoint 
{{/admin/zookeeper/status}}, i.e. its own requestHandler, rather than 
overloading the ZookeeperInfoHandler. What do you think?

> Zookeeper Ensemble Admin UI
> ---
>
> Key: SOLR-7767
> URL: https://issues.apache.org/jira/browse/SOLR-7767
> Project: Solr
>  Issue Type: New Feature
>  Components: Admin UI, SolrCloud
>Reporter: Aniket Khare
>Assignee: Jan Høydahl
>Priority: Major
> Fix For: master (8.0), 7.5
>
> Attachments: zk-status-error.png, zk-status-tab.png
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> For SolrCloud mode, can we have the functionality to display the live nodes 
> from the ZooKeeper ensemble, so that users can easily tell if any ZooKeeper 
> instance is down or has any other issue.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: Solr issues in status "IN PROGRESS"

2018-08-03 Thread Cassandra Targett
Are you sure you don't have the permissions? The user "janhoy" has
Administrator-level permissions for the SOLR project, and the Administrator
role has the permissions to transition issues, so you should be able to.

If you can't make that change it's possible you're trying to transition to
a status that is not allowed according to the workflow.

Cassandra

On Fri, Aug 3, 2018 at 6:39 AM Jan Høydahl  wrote:

> Hi,
>
> I have some issues in status "IN PROGRESS" which are impossible to
> transition to any other state after the recent changes. The resolve button
> is not there and not in menus either. So I believe these are trapped in
> no-mans-land… There are a total of 64 issues with this status.
>
> I guess it makes sense to bulk transition all of these to state OPEN. But
> I don't seem to have the JIRA karma to bulk transition issues either. Can
> some other committer make an attempt?
>
> --
> Jan Høydahl, search solution architect
> Cominvent AS - www.cominvent.com
>
>
> -
> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> For additional commands, e-mail: dev-h...@lucene.apache.org
>
>


[JENKINS] Lucene-Solr-7.x-Linux (64bit/jdk1.8.0_172) - Build # 2475 - Unstable!

2018-08-03 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Linux/2475/
Java: 64bit/jdk1.8.0_172 -XX:-UseCompressedOops -XX:+UseParallelGC

1 tests failed.
FAILED:  org.apache.solr.cloud.TestWithCollection.testMoveReplicaWithCollection

Error Message:
Expected moving a replica of 'withCollection': 
testMoveReplicaWithCollection_abc to fail

Stack Trace:
java.lang.AssertionError: Expected moving a replica of 'withCollection': 
testMoveReplicaWithCollection_abc to fail
at 
__randomizedtesting.SeedInfo.seed([63194DEC6D9081F2:2C0E744FFEE24884]:0)
at org.junit.Assert.fail(Assert.java:93)
at 
org.apache.solr.cloud.TestWithCollection.testMoveReplicaWithCollection(TestWithCollection.java:389)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:748)




Build Log:
[...truncated 14055 lines...]
   [junit4] Suite: org.apache.solr.cloud.TestWithCollection
   [junit4]   2> Creating dataDir: 

[JENKINS] Lucene-Solr-master-Linux (32bit/jdk1.8.0_172) - Build # 22588 - Unstable!

2018-08-03 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/22588/
Java: 32bit/jdk1.8.0_172 -server -XX:+UseG1GC

1 tests failed.
FAILED:  org.apache.solr.cloud.DeleteReplicaTest.deleteReplicaFromClusterState

Error Message:
Collection not found: deleteFromClusterState_false

Stack Trace:
org.apache.solr.common.SolrException: Collection not found: 
deleteFromClusterState_false
at 
__randomizedtesting.SeedInfo.seed([F281F751CC3B9089:1C185C3CF305EB3E]:0)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:853)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:819)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:194)
at org.apache.solr.client.solrj.SolrClient.add(SolrClient.java:173)
at org.apache.solr.client.solrj.SolrClient.add(SolrClient.java:138)
at 
org.apache.solr.cloud.DeleteReplicaTest.deleteReplicaFromClusterState(DeleteReplicaTest.java:188)
at 
org.apache.solr.cloud.DeleteReplicaTest.deleteReplicaFromClusterState(DeleteReplicaTest.java:179)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 

[jira] [Commented] (SOLR-8207) Modernise cloud tab on Admin UI

2018-08-03 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-8207?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16568129#comment-16568129
 ] 

ASF subversion and git services commented on SOLR-8207:
---

Commit 17a02c1089b80ee358a5dc6692cb443d9b4c9b01 in lucene-solr's branch 
refs/heads/master from [~janhoy]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=17a02c1 ]

SOLR-8207: Add "Nodes" view to the Admin UI "Cloud" tab, listing nodes and key 
metrics


> Modernise cloud tab on Admin UI
> ---
>
> Key: SOLR-8207
> URL: https://issues.apache.org/jira/browse/SOLR-8207
> Project: Solr
>  Issue Type: New Feature
>  Components: Admin UI
>Reporter: Upayavira
>Assignee: Jan Høydahl
>Priority: Major
> Fix For: master (8.0), 7.5
>
> Attachments: SOLR-8207.patch, node-compact.png, node-details.png, 
> node-hostcolumn.png, node-toggle-row-numdocs.png, nodes-tab-real.png, 
> nodes-tab.png
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> The various sub-tabs of the "Cloud tab" were designed before anyone was 
> making real use of SolrCloud, and when we didn't really know the use-cases we 
> would need to support. I would argue that, whilst they are pretty (and 
> clever) they aren't really fit for purpose (with the exception of tree view).
> Issues:
> * Radial view doesn't scale beyond a small number of nodes/collections
> * Paging on the graph view is based on collections - so a collection with 
> many replicas won't be subject to pagination
> * The Dump feature is kinda redundant and should be removed
> * There is now a major overlap in functionality with the new Collections tab
> What I'd propose is that we:
>  * promote the tree tab to top level
>  * remove the graph views and the dump tab
>  * add a new Nodes tab
> This nodes tab would complement the collections tab - showing nodes, and 
> their associated replicas/collections. From this view, it would be possible 
> to add/remove replicas and to see the status of nodes. It would also be 
> possible to filter nodes by status: "show me only up nodes", "show me nodes 
> that are in trouble", "show me nodes that have leaders on them", etc.
> Presumably, if we have APIs to support it, we might have a "decommission 
> node" option, that would ensure that no replicas on this node are leaders, 
> and then remove all replicas from the node, ready for it to be removed from 
> the cluster.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Solr issues in status "IN PROGRESS"

2018-08-03 Thread Jan Høydahl
Hi,

I have some issues in status "IN PROGRESS" which are impossible to transition 
to any other state after the recent changes. The resolve button is not there 
and not in menus either. So I believe these are trapped in no-mans-land… There 
are a total of 64 issues with this status.

I guess it makes sense to bulk transition all of these to state OPEN. But I 
don't seem to have the JIRA karma to bulk transition issues either. Can some 
other committer make an attempt?

--
Jan Høydahl, search solution architect
Cominvent AS - www.cominvent.com


-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-8207) Modernise cloud tab on Admin UI

2018-08-03 Thread JIRA


 [ 
https://issues.apache.org/jira/browse/SOLR-8207?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jan Høydahl updated SOLR-8207:
--
Affects Version/s: (was: 5.3)
   Issue Type: New Feature  (was: Improvement)

> Modernise cloud tab on Admin UI
> ---
>
> Key: SOLR-8207
> URL: https://issues.apache.org/jira/browse/SOLR-8207
> Project: Solr
>  Issue Type: New Feature
>  Components: Admin UI
>Reporter: Upayavira
>Assignee: Jan Høydahl
>Priority: Major
> Fix For: master (8.0), 7.5
>
> Attachments: SOLR-8207.patch, node-compact.png, node-details.png, 
> node-hostcolumn.png, node-toggle-row-numdocs.png, nodes-tab-real.png, 
> nodes-tab.png
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> The various sub-tabs of the "Cloud tab" were designed before anyone was 
> making real use of SolrCloud, and when we didn't really know the use-cases we 
> would need to support. I would argue that, whilst they are pretty (and 
> clever) they aren't really fit for purpose (with the exception of tree view).
> Issues:
> * Radial view doesn't scale beyond a small number of nodes/collections
> * Paging on the graph view is based on collections - so a collection with 
> many replicas won't be subject to pagination
> * The Dump feature is kinda redundant and should be removed
> * There is now a major overlap in functionality with the new Collections tab
> What I'd propose is that we:
>  * promote the tree tab to top level
>  * remove the graph views and the dump tab
>  * add a new Nodes tab
> This nodes tab would complement the collections tab - showing nodes, and 
> their associated replicas/collections. From this view, it would be possible 
> to add/remove replicas and to see the status of nodes. It would also be 
> possible to filter nodes by status: "show me only up nodes", "show me nodes 
> that are in trouble", "show me nodes that have leaders on them", etc.
> Presumably, if we have APIs to support it, we might have a "decommission 
> node" option, that would ensure that no replicas on this node are leaders, 
> and then remove all replicas from the node, ready for it to be removed from 
> the cluster.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-8207) Modernise cloud tab on Admin UI

2018-08-03 Thread JIRA


 [ 
https://issues.apache.org/jira/browse/SOLR-8207?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jan Høydahl updated SOLR-8207:
--
Attachment: SOLR-8207.patch

> Modernise cloud tab on Admin UI
> ---
>
> Key: SOLR-8207
> URL: https://issues.apache.org/jira/browse/SOLR-8207
> Project: Solr
>  Issue Type: Improvement
>  Components: Admin UI
>Affects Versions: 5.3
>Reporter: Upayavira
>Assignee: Jan Høydahl
>Priority: Major
> Fix For: master (8.0), 7.5
>
> Attachments: SOLR-8207.patch, node-compact.png, node-details.png, 
> node-hostcolumn.png, node-toggle-row-numdocs.png, nodes-tab-real.png, 
> nodes-tab.png
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> The various sub-tabs of the "Cloud tab" were designed before anyone was 
> making real use of SolrCloud, and when we didn't really know the use-cases we 
> would need to support. I would argue that, whilst they are pretty (and 
> clever) they aren't really fit for purpose (with the exception of tree view).
> Issues:
> * Radial view doesn't scale beyond a small number of nodes/collections
> * Paging on the graph view is based on collections - so a collection with 
> many replicas won't be subject to pagination
> * The Dump feature is kinda redundant and should be removed
> * There is now a major overlap in functionality with the new Collections tab
> What I'd propose is that we:
>  * promote the tree tab to top level
>  * remove the graph views and the dump tab
>  * add a new Nodes tab
> This nodes tab would complement the collections tab - showing nodes, and 
> their associated replicas/collections. From this view, it would be possible 
> to add/remove replicas and to see the status of nodes. It would also be 
> possible to filter nodes by status: "show me only up nodes", "show me nodes 
> that are in trouble", "show me nodes that have leaders on them", etc.
> Presumably, if we have APIs to support it, we might have a "decommission 
> node" option, that would ensure that no replicas on this node are leaders, 
> and then remove all replicas from the node, ready for it to be removed from 
> the cluster.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8207) Modernise cloud tab on Admin UI

2018-08-03 Thread JIRA


[ 
https://issues.apache.org/jira/browse/SOLR-8207?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16568107#comment-16568107
 ] 

Jan Høydahl commented on SOLR-8207:
---

New push:
 * Do not remove Graph (radial) as part of this patch (See SOLR-12620 for 
removal in 8.0)
 * Reformat refguide page with sub-headings, making it more readable
 * Correct refguide wording which still said that "Nodes" is the default view

Precommit passes. Will commit to master branch today.

> Modernise cloud tab on Admin UI
> ---
>
> Key: SOLR-8207
> URL: https://issues.apache.org/jira/browse/SOLR-8207
> Project: Solr
>  Issue Type: Improvement
>  Components: Admin UI
>Affects Versions: 5.3
>Reporter: Upayavira
>Assignee: Jan Høydahl
>Priority: Major
> Fix For: master (8.0), 7.5
>
> Attachments: node-compact.png, node-details.png, node-hostcolumn.png, 
> node-toggle-row-numdocs.png, nodes-tab-real.png, nodes-tab.png
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> The various sub-tabs of the "Cloud tab" were designed before anyone was 
> making real use of SolrCloud, and when we didn't really know the use-cases we 
> would need to support. I would argue that, whilst they are pretty (and 
> clever) they aren't really fit for purpose (with the exception of tree view).
> Issues:
> * Radial view doesn't scale beyond a small number of nodes/collections
> * Paging on the graph view is based on collections - so a collection with 
> many replicas won't be subject to pagination
> * The Dump feature is kinda redundant and should be removed
> * There is now a major overlap in functionality with the new Collections tab
> What I'd propose is that we:
>  * promote the tree tab to top level
>  * remove the graph views and the dump tab
>  * add a new Nodes tab
> This nodes tab would complement the collections tab - showing nodes, and 
> their associated replicas/collections. From this view, it would be possible 
> to add/remove replicas and to see the status of nodes. It would also be 
> possible to filter nodes by status: "show me only up nodes", "show me nodes 
> that are in trouble", "show me nodes that have leaders on them", etc.
> Presumably, if we have APIs to support it, we might have a "decommission 
> node" option, that would ensure that no replicas on this node are leaders, 
> and then remove all replicas from the node, ready for it to be removed from 
> the cluster.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-12620) Remove the Cloud -> Graph (Radial) view

2018-08-03 Thread JIRA


 [ 
https://issues.apache.org/jira/browse/SOLR-12620?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jan Høydahl updated SOLR-12620:
---
Attachment: SOLR-12620.patch

> Remove the Cloud -> Graph (Radial) view
> ---
>
> Key: SOLR-12620
> URL: https://issues.apache.org/jira/browse/SOLR-12620
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Admin UI
>Reporter: Jan Høydahl
>Assignee: Jan Høydahl
>Priority: Major
> Fix For: master (8.0)
>
> Attachments: SOLR-12620.patch
>
>
> Spinoff from SOLR-8207
> The radial view does not scale well and should be removed in 8.0



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-12620) Remove the Cloud -> Graph (Radial) view

2018-08-03 Thread JIRA
Jan Høydahl created SOLR-12620:
--

 Summary: Remove the Cloud -> Graph (Radial) view
 Key: SOLR-12620
 URL: https://issues.apache.org/jira/browse/SOLR-12620
 Project: Solr
  Issue Type: Improvement
  Security Level: Public (Default Security Level. Issues are Public)
  Components: Admin UI
Reporter: Jan Høydahl
Assignee: Jan Høydahl
 Fix For: master (8.0)


Spinoff from SOLR-8207

The radial view does not scale well and should be removed in 8.0



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Assigned] (LUCENE-8443) TestLatLonPointShapeQueries.testRandomTiny failing

2018-08-03 Thread Jim Ferenczi (JIRA)


 [ 
https://issues.apache.org/jira/browse/LUCENE-8443?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jim Ferenczi reassigned LUCENE-8443:


Assignee: Nicholas Knize

> TestLatLonPointShapeQueries.testRandomTiny failing
> --
>
> Key: LUCENE-8443
> URL: https://issues.apache.org/jira/browse/LUCENE-8443
> Project: Lucene - Core
>  Issue Type: Test
>Reporter: Jim Ferenczi
>Assignee: Nicholas Knize
>Priority: Minor
>
> This test fails with various seeds.
> Here's one:
> {noformat}
>  ant test  \
>   -Dtestcase=TestLatLonPointShapeQueries \
>   -Dtests.method=testRandomTiny \
>   -Dtests.seed=E784AAB1723B2B7D \
>   -Dtests.slow=true \
>   -Dtests.badapples=true \
>   -Dtests.locale=th \
>   -Dtests.timezone=America/Paramaribo \
>   -Dtests.asserts=true \
>   -Dtests.file.encoding=US-ASCII
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8443) TestLatLonPointShapeQueries.testRandomTiny failing

2018-08-03 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8443?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16568084#comment-16568084
 ] 

ASF subversion and git services commented on LUCENE-8443:
-

Commit cb15cd5a3ec6f7a003c9316a4295d11ec0a87a89 in lucene-solr's branch 
refs/heads/branch_7x from [~jim.ferenczi]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=cb15cd5 ]

LUCENE-8443: Mute failing test temporarily
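
The diff itself is not quoted in this notification; for reference, muting a failing test in Lucene is normally done with the test framework's AwaitsFix annotation. A hypothetical sketch follows (the class and method below are placeholders, not the actual change in this commit):

{noformat}
// Hypothetical illustration only -- not the actual commit diff.
// LuceneTestCase.AwaitsFix tells the test runner to skip the annotated test
// until the linked issue is resolved, which is how tests are usually "muted".
import org.apache.lucene.util.LuceneTestCase;
import org.apache.lucene.util.LuceneTestCase.AwaitsFix;

public class TestMuteExample extends LuceneTestCase {

  @AwaitsFix(bugUrl = "https://issues.apache.org/jira/browse/LUCENE-8443")
  public void testRandomTiny() throws Exception {
    // the test body stays unchanged; the annotation causes the runner to skip it
  }
}
{noformat}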


> TestLatLonPointShapeQueries.testRandomTiny failing
> --
>
> Key: LUCENE-8443
> URL: https://issues.apache.org/jira/browse/LUCENE-8443
> Project: Lucene - Core
>  Issue Type: Test
>Reporter: Jim Ferenczi
>Priority: Minor
>
> This test fails with various seeds.
> Here's one:
> {noformat}
>  ant test  \
>   -Dtestcase=TestLatLonPointShapeQueries \
>   -Dtests.method=testRandomTiny \
>   -Dtests.seed=E784AAB1723B2B7D \
>   -Dtests.slow=true \
>   -Dtests.badapples=true \
>   -Dtests.locale=th \
>   -Dtests.timezone=America/Paramaribo \
>   -Dtests.asserts=true \
>   -Dtests.file.encoding=US-ASCII
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8443) TestLatLonPointShapeQueries.testRandomTiny failing

2018-08-03 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8443?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16568082#comment-16568082
 ] 

ASF subversion and git services commented on LUCENE-8443:
-

Commit 1af7686cb6a8b56db508c9870f35e48fe5e1b281 in lucene-solr's branch 
refs/heads/master from [~jim.ferenczi]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=1af7686 ]

LUCENE-8443: Mute failing test temporarily


> TestLatLonPointShapeQueries.testRandomTiny failing
> --
>
> Key: LUCENE-8443
> URL: https://issues.apache.org/jira/browse/LUCENE-8443
> Project: Lucene - Core
>  Issue Type: Test
>Reporter: Jim Ferenczi
>Priority: Minor
>
> This test fails with various seeds.
> Here's one:
> {noformat}
>  ant test  \
>   -Dtestcase=TestLatLonPointShapeQueries \
>   -Dtests.method=testRandomTiny \
>   -Dtests.seed=E784AAB1723B2B7D \
>   -Dtests.slow=true \
>   -Dtests.badapples=true \
>   -Dtests.locale=th \
>   -Dtests.timezone=America/Paramaribo \
>   -Dtests.asserts=true \
>   -Dtests.file.encoding=US-ASCII
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[GitHub] lucene-solr pull request #431: SOLR-12602: fix SignificantTermsStream and St...

2018-08-03 Thread barrotsteindev
Github user barrotsteindev closed the pull request at:

https://github.com/apache/lucene-solr/pull/431


---

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org


