[jira] [Commented] (SOLR-11879) avoid creating a new Exception object for EOFException in FastinputStream

2018-01-29 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11879?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16344617#comment-16344617
 ] 

ASF subversion and git services commented on SOLR-11879:


Commit 1ef988a26378137b1e1f022985dacee1f557f4fc in lucene-solr's branch 
refs/heads/branch_7x from [~noble.paul]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=1ef988a ]

SOLR-11879: moved the peek() call inside the for loop


> avoid creating a new Exception object for EOFException in FastinputStream
> -
>
> Key: SOLR-11879
> URL: https://issues.apache.org/jira/browse/SOLR-11879
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
> Environment: FastI
>Reporter: Noble Paul
>Assignee: Noble Paul
>Priority: Trivial
> Attachments: SOLR-11879.patch, SOLR-11879.patch, SOLR-11879.patch, 
> Screen Shot 2018-01-24 at 7.26.16 PM.png
>
>
> FastInputStream creates and throws a new EOFException every time the end of 
> the stream is encountered. This is wasteful, as we never use the stack trace 
> anywhere.
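The optimization under discussion is a common JVM pattern; a minimal, hypothetical sketch (illustrative class and method names, not the actual FastInputStream code) looks like this:

```java
import java.io.EOFException;

// Sketch of the SOLR-11879 idea: since callers never inspect the stack
// trace, throw one preallocated EOFException instead of allocating a new
// one on every end-of-stream. Overriding fillInStackTrace() skips the
// expensive stack walk at construction time and leaves the trace empty,
// which also makes the shared instance safe to rethrow from any call site.
class EofSentinel {
    static final EOFException EOF = new EOFException("end of stream") {
        @Override
        public synchronized Throwable fillInStackTrace() {
            return this; // no stack capture
        }
    };

    // Hypothetical read method: no allocation on the end-of-stream path.
    static int readByte(byte[] buf, int pos) throws EOFException {
        if (pos >= buf.length) throw EOF;
        return buf[pos] & 0xFF;
    }
}
```

The trade-off is that the shared instance carries no meaningful stack trace, which is acceptable here precisely because the issue notes the trace is never used.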



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-11879) avoid creating a new Exception object for EOFException in FastinputStream

2018-01-29 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11879?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16344612#comment-16344612
 ] 

ASF subversion and git services commented on SOLR-11879:


Commit e2a5d46b9cdc5686051f4de34cca176b50c11fb1 in lucene-solr's branch 
refs/heads/master from [~noble.paul]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=e2a5d46 ]

SOLR-11879: moved the peek() call inside the for loop








[JENKINS] Lucene-Solr-7.x-MacOSX (64bit/jdk-9) - Build # 428 - Still Unstable!

2018-01-29 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-MacOSX/428/
Java: 64bit/jdk-9 -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC

4 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.solr.cloud.TestStressCloudBlindAtomicUpdates

Error Message:
1 thread leaked from SUITE scope at 
org.apache.solr.cloud.TestStressCloudBlindAtomicUpdates: 1) 
Thread[id=18662, name=qtp936520764-18662, state=TIMED_WAITING, 
group=TGRP-TestStressCloudBlindAtomicUpdates] at 
java.base@9/jdk.internal.misc.Unsafe.park(Native Method) at 
java.base@9/java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:234)
 at 
java.base@9/java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2192)
 at 
app//org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.reservedWait(ReservedThreadExecutor.java:308)
 at 
app//org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.run(ReservedThreadExecutor.java:373)
 at 
app//org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:708)
 at 
app//org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:626)
 at java.base@9/java.lang.Thread.run(Thread.java:844)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: 1 thread leaked from SUITE 
scope at org.apache.solr.cloud.TestStressCloudBlindAtomicUpdates: 
   1) Thread[id=18662, name=qtp936520764-18662, state=TIMED_WAITING, 
group=TGRP-TestStressCloudBlindAtomicUpdates]
at java.base@9/jdk.internal.misc.Unsafe.park(Native Method)
at 
java.base@9/java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:234)
at 
java.base@9/java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2192)
at 
app//org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.reservedWait(ReservedThreadExecutor.java:308)
at 
app//org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.run(ReservedThreadExecutor.java:373)
at 
app//org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:708)
at 
app//org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:626)
at java.base@9/java.lang.Thread.run(Thread.java:844)
at __randomizedtesting.SeedInfo.seed([F983D320774BB558]:0)


FAILED:  
junit.framework.TestSuite.org.apache.solr.cloud.TestStressCloudBlindAtomicUpdates

Error Message:
There are still zombie threads that couldn't be terminated:1) 
Thread[id=18662, name=qtp936520764-18662, state=TIMED_WAITING, 
group=TGRP-TestStressCloudBlindAtomicUpdates] at 
java.base@9/jdk.internal.misc.Unsafe.park(Native Method) at 
java.base@9/java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:234)
 at 
java.base@9/java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2192)
 at 
app//org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.reservedWait(ReservedThreadExecutor.java:308)
 at 
app//org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.run(ReservedThreadExecutor.java:373)
 at 
app//org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:708)
 at 
app//org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:626)
 at java.base@9/java.lang.Thread.run(Thread.java:844)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: There are still zombie 
threads that couldn't be terminated:
   1) Thread[id=18662, name=qtp936520764-18662, state=TIMED_WAITING, 
group=TGRP-TestStressCloudBlindAtomicUpdates]
at java.base@9/jdk.internal.misc.Unsafe.park(Native Method)
at 
java.base@9/java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:234)
at 
java.base@9/java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2192)
at 
app//org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.reservedWait(ReservedThreadExecutor.java:308)
at 
app//org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.run(ReservedThreadExecutor.java:373)
at 
app//org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:708)
at 
app//org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:626)
at java.base@9/java.lang.Thread.run(Thread.java:844)
at __randomizedtesting.SeedInfo.seed([F983D320774BB558]:0)


FAILED:  
org.apache.solr.cloud.autoscaling.TriggerIntegrationTest.testSetProperties

Error Message:
Expected 8 triggers but found: [x0, x1, x2, x3, x4, x5, x6] expected:<8> but 
was:<7>

Stack Trace:
java.lang.AssertionError: Expected 8 triggers but found: [x0, x1, x2, x3, x4, 
x5, x6] expected:<8> but was:<7>
at 

[jira] [Updated] (SOLR-11879) avoid creating a new Exception object for EOFException in FastinputStream

2018-01-29 Thread Noble Paul (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11879?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Noble Paul updated SOLR-11879:
--
Attachment: SOLR-11879.patch







[JENKINS] Lucene-Solr-master-Linux (64bit/jdk-9.0.1) - Build # 21368 - Still Unstable!

2018-01-29 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/21368/
Java: 64bit/jdk-9.0.1 -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC

2 tests failed.
FAILED:  
org.apache.solr.cloud.autoscaling.ComputePlanActionTest.testNodeWithMultipleReplicasLost

Error Message:
The operations computed by ComputePlanAction should not be null 
SolrClientNodeStateProvider.DEBUG{AFTER_ACTION=[compute_plan, null], 
BEFORE_ACTION=[compute_plan, null]}

Stack Trace:
java.lang.AssertionError: The operations computed by ComputePlanAction should 
not be null SolrClientNodeStateProvider.DEBUG{AFTER_ACTION=[compute_plan, 
null], BEFORE_ACTION=[compute_plan, null]}
at 
__randomizedtesting.SeedInfo.seed([CC46C76535920181:FC8626E7BDE0E0DD]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at org.junit.Assert.assertNotNull(Assert.java:526)
at 
org.apache.solr.cloud.autoscaling.ComputePlanActionTest.testNodeWithMultipleReplicasLost(ComputePlanActionTest.java:291)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:564)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[JENKINS] Lucene-Solr-master-MacOSX (64bit/jdk-9) - Build # 4415 - Still Unstable!

2018-01-29 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-MacOSX/4415/
Java: 64bit/jdk-9 -XX:-UseCompressedOops -XX:+UseSerialGC

3 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.solr.analytics.OverallAnalyticsTest

Error Message:
1 thread leaked from SUITE scope at 
org.apache.solr.analytics.OverallAnalyticsTest: 1) Thread[id=66, 
name=qtp1315797791-66, state=TIMED_WAITING, group=TGRP-OverallAnalyticsTest]
 at java.base@9/jdk.internal.misc.Unsafe.park(Native Method) at 
java.base@9/java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:234)
 at 
java.base@9/java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2192)
 at 
app//org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.reservedWait(ReservedThreadExecutor.java:308)
 at 
app//org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.run(ReservedThreadExecutor.java:373)
 at 
app//org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:708)
 at 
app//org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:626)
 at java.base@9/java.lang.Thread.run(Thread.java:844)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: 1 thread leaked from SUITE 
scope at org.apache.solr.analytics.OverallAnalyticsTest: 
   1) Thread[id=66, name=qtp1315797791-66, state=TIMED_WAITING, 
group=TGRP-OverallAnalyticsTest]
at java.base@9/jdk.internal.misc.Unsafe.park(Native Method)
at 
java.base@9/java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:234)
at 
java.base@9/java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2192)
at 
app//org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.reservedWait(ReservedThreadExecutor.java:308)
at 
app//org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.run(ReservedThreadExecutor.java:373)
at 
app//org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:708)
at 
app//org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:626)
at java.base@9/java.lang.Thread.run(Thread.java:844)
at __randomizedtesting.SeedInfo.seed([52209ED3420A94E7]:0)


FAILED:  
junit.framework.TestSuite.org.apache.solr.analytics.OverallAnalyticsTest

Error Message:
There are still zombie threads that couldn't be terminated:1) Thread[id=66, 
name=qtp1315797791-66, state=TIMED_WAITING, group=TGRP-OverallAnalyticsTest]
 at java.base@9/jdk.internal.misc.Unsafe.park(Native Method) at 
java.base@9/java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:234)
 at 
java.base@9/java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2192)
 at 
app//org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.reservedWait(ReservedThreadExecutor.java:308)
 at 
app//org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.run(ReservedThreadExecutor.java:373)
 at 
app//org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:708)
 at 
app//org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:626)
 at java.base@9/java.lang.Thread.run(Thread.java:844)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: There are still zombie 
threads that couldn't be terminated:
   1) Thread[id=66, name=qtp1315797791-66, state=TIMED_WAITING, 
group=TGRP-OverallAnalyticsTest]
at java.base@9/jdk.internal.misc.Unsafe.park(Native Method)
at 
java.base@9/java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:234)
at 
java.base@9/java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2192)
at 
app//org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.reservedWait(ReservedThreadExecutor.java:308)
at 
app//org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.run(ReservedThreadExecutor.java:373)
at 
app//org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:708)
at 
app//org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:626)
at java.base@9/java.lang.Thread.run(Thread.java:844)
at __randomizedtesting.SeedInfo.seed([52209ED3420A94E7]:0)


FAILED:  
org.apache.solr.cloud.autoscaling.sim.TestTriggerIntegration.testSearchRate

Error Message:
{srt=[CapturedEvent{timestamp=1044782017928525, stage=AFTER_ACTION, 
actionName='compute', event={   "id":"3b638852f4573T6lejbeoqu6skdmmwimmhol185", 
  "source":"search_rate_trigger",   "eventTime":1044778799023475,   
"eventType":"SEARCHRATE",   "properties":{ "node":{   
"127.0.0.1:10016_solr":250.0,   

[jira] [Commented] (SOLR-11916) new SortableTextField using docValues built from the original string input

2018-01-29 Thread Hoss Man (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11916?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16344527#comment-16344527
 ] 

Hoss Man commented on SOLR-11916:
-

re: useDocValuesAsStored -- here's my straw man proposal after sleeping on it a 
bit...

* SortableTextField.init should override the schemaVersion-based implicit 
default in FieldType.init
** this means that, by default, no fieldType/field using SortableTextField 
will default to useDocValuesAsStored=true
* SortableTextField.createFields should be aware of the effective value of 
SchemaField.useDocValuesAsStored and, if it's true, fail (_at index time_) if 
any field value being added is longer than the (effective) 
maxCharsForDocValues
** this error message should be very clear about what's happening, mentioning 
both maxCharsForDocValues and useDocValuesAsStored.

Net result: 
* clients that try to add huge values to fields with maxCharsForDocValues=small 
may get two different behaviors depending on the field's useDocValuesAsStored:
** if useDocValuesAsStored==false:
*** docvalues are truncated
** if useDocValuesAsStored==true:
*** request fails because solr can't "fit" the huge value into the "small" 
limit that's been configured
* ie: "the schema told us doc values should be limited to 'small' and to use 
doc values as if they were stored fields, and we can't meet those two 
expectations for your 'huge' field value, so we're rejecting it"


...i'm pretty sure this is all doable (whether useDocValuesAsStored is 
specified on the fieldType or the field) and i'll test it out soon.
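A toy sketch of the policy being proposed (hypothetical names and method shape; only maxCharsForDocValues and useDocValuesAsStored come from the discussion above):

```java
// Illustrates the proposed index-time behavior: truncate docValues when
// useDocValuesAsStored is false, but reject the document outright when the
// value is too long AND useDocValuesAsStored is true, since a truncated
// value could not honestly be returned as if it were the stored value.
class DocValuesPolicy {
    static String docValuesFor(String value, int maxCharsForDocValues,
                               boolean useDocValuesAsStored) {
        if (value.length() <= maxCharsForDocValues) {
            return value; // fits: index as-is
        }
        if (useDocValuesAsStored) {
            // Error message names both settings, as proposed.
            throw new IllegalArgumentException(
                "field value length " + value.length()
                    + " exceeds maxCharsForDocValues=" + maxCharsForDocValues
                    + " but useDocValuesAsStored=true forbids truncation");
        }
        return value.substring(0, maxCharsForDocValues); // silent truncation
    }
}
```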

> new SortableTextField using docValues built from the original string input
> --
>
> Key: SOLR-11916
> URL: https://issues.apache.org/jira/browse/SOLR-11916
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Hoss Man
>Assignee: Hoss Man
>Priority: Major
> Attachments: SOLR-11916.patch
>
>
> I propose adding a new SortableTextField subclass that would functionally 
> work the same as TextField except:
>  * {{docValues="true|false"}} could be configured, with the default being 
> "true"
>  * The docValues would contain the original input values (just like StrField) 
> for sorting (or faceting)
>  ** By default, to protect users from excessively large docValues, only the 
> first 1024 characters of each field value would be used – but this could be 
> overridden with configuration.
> 
> Consider the following sample configuration:
> {code:java}
> <field name="title" type="title_sort_text" indexed="true" docValues="true" 
> stored="true" multiValued="false"/>
> 
> <fieldType name="title_sort_text" class="solr.SortableTextField">
>   <analyzer type="index">
>...
>   </analyzer>
>   <analyzer type="query">
>...
>   </analyzer>
> </fieldType>
> {code}
> Given a document with a title of "Solr In Action", users could:
>  * Search for individual (indexed) terms in the "title" field: 
> {{q=title:solr}}
>  * Sort documents by title ( {{sort=title asc}} ) such that this document's 
> sort value would be "Solr In Action"
> If another document had a "title" value that was longer than 1024 chars, then 
> the docValues would be built using only the first 1024 characters of the 
> value (unless the user modified the configuration)
> This would be functionally equivalent to the following existing configuration 
> - including the on disk index segments - except that the on disk DocValues 
> would refer directly to the "title" field, reducing the total number of 
> "field infos" in the index (which has a small impact on segment housekeeping 
> and merge times) and end users would not need to sort on an alternate 
> "title_string" field name - the original "title" field name would always be 
> used directly.
> {code:java}
> <field name="title" type="text_general" indexed="true" docValues="true" 
> stored="true" multiValued="false"/>
> <field name="title_string" type="string" indexed="false" docValues="true" 
> stored="false" multiValued="false"/>
> <copyField source="title" dest="title_string" maxChars="1024"/>
> {code}






[jira] [Assigned] (SOLR-11685) CollectionsAPIDistributedZkTest.testCollectionsAPI fails regularly with leader mismatch

2018-01-29 Thread Varun Thacker (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11685?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Thacker reassigned SOLR-11685:


Assignee: Varun Thacker

> CollectionsAPIDistributedZkTest.testCollectionsAPI fails regularly with 
> leader mismatch
> ---
>
> Key: SOLR-11685
> URL: https://issues.apache.org/jira/browse/SOLR-11685
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Varun Thacker
>Assignee: Varun Thacker
>Priority: Major
> Attachments: jenkins_7x_257.log, jenkins_master_7045.log, 
> solr_master_7574.log, solr_master_8983.log
>
>
> I've been noticing lots of failures on Jenkins where a document add gets 
> rejected because of a leader conflict, with an error like 
> {code}
> ClusterState says we are the leader 
> (https://127.0.0.1:38715/solr/awhollynewcollection_0_shard2_replica_n2), but 
> locally we don't think so. Request came from null
> {code}
> Scanning Jenkins logs I see that these failures have increased since Sept 
> 28th and have been occurring daily.






[jira] [Assigned] (SOLR-11481) Ref guide page explaining nuances of the recovery process

2018-01-29 Thread Varun Thacker (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11481?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Thacker reassigned SOLR-11481:


Assignee: Varun Thacker

> Ref guide page explaining nuances of the recovery process
> -
>
> Key: SOLR-11481
> URL: https://issues.apache.org/jira/browse/SOLR-11481
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: documentation
>Reporter: Varun Thacker
>Assignee: Varun Thacker
>Priority: Minor
>
> The Solr recovery process involves PeerSync, which has configuration 
> parameters to control the number of records it should keep.
> If this fails, we fall back to a full index replication, which can 
> optionally be throttled.
> I think it's worth explaining to users what these configuration parameters 
> are and how a node actually recovers. 
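For readers following along, the knobs being alluded to can be sketched as follows. This is a hedged illustration, not verbatim from any shipped config; parameter placement and values should be checked against the ref guide:

```xml
<!-- Hypothetical solrconfig.xml fragment; values are illustrative. -->
<updateLog>
  <str name="dir">${solr.ulog.dir:}</str>
  <!-- PeerSync can only catch a replica up if it is at most this many
       updates behind; otherwise recovery falls back to replication. -->
  <int name="numRecordsToKeep">100</int>
</updateLog>

<requestHandler name="/replication" class="solr.ReplicationHandler">
  <!-- Cap the transfer rate of full-index replication during recovery. -->
  <str name="maxWriteMBPerSec">16</str>
</requestHandler>
```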






[JENKINS] Lucene-Solr-7.x-Linux (64bit/jdk1.8.0_144) - Build # 1266 - Still Unstable!

2018-01-29 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Linux/1266/
Java: 64bit/jdk1.8.0_144 -XX:+UseCompressedOops -XX:+UseG1GC

5 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.solr.ltr.feature.TestUserTermScoreWithQ

Error Message:
1 thread leaked from SUITE scope at 
org.apache.solr.ltr.feature.TestUserTermScoreWithQ: 1) Thread[id=60, 
name=qtp758744801-60, state=TIMED_WAITING, group=TGRP-TestUserTermScoreWithQ]   
  at sun.misc.Unsafe.park(Native Method) at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2163)
 at 
org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.reservedWait(ReservedThreadExecutor.java:308)
 at 
org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.run(ReservedThreadExecutor.java:373)
 at 
org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:708)
 at 
org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:626) 
at java.lang.Thread.run(Thread.java:748)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: 1 thread leaked from SUITE 
scope at org.apache.solr.ltr.feature.TestUserTermScoreWithQ: 
   1) Thread[id=60, name=qtp758744801-60, state=TIMED_WAITING, 
group=TGRP-TestUserTermScoreWithQ]
at sun.misc.Unsafe.park(Native Method)
at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2163)
at 
org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.reservedWait(ReservedThreadExecutor.java:308)
at 
org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.run(ReservedThreadExecutor.java:373)
at 
org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:708)
at 
org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:626)
at java.lang.Thread.run(Thread.java:748)
at __randomizedtesting.SeedInfo.seed([D8AF62872B5AEB0F]:0)


FAILED:  
junit.framework.TestSuite.org.apache.solr.ltr.feature.TestUserTermScoreWithQ

Error Message:
There are still zombie threads that couldn't be terminated:1) Thread[id=60, 
name=qtp758744801-60, state=TIMED_WAITING, group=TGRP-TestUserTermScoreWithQ]   
  at sun.misc.Unsafe.park(Native Method) at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2163)
 at 
org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.reservedWait(ReservedThreadExecutor.java:308)
 at 
org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.run(ReservedThreadExecutor.java:373)
 at 
org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:708)
 at 
org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:626) 
at java.lang.Thread.run(Thread.java:748)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: There are still zombie 
threads that couldn't be terminated:
   1) Thread[id=60, name=qtp758744801-60, state=TIMED_WAITING, 
group=TGRP-TestUserTermScoreWithQ]
at sun.misc.Unsafe.park(Native Method)
at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2163)
at 
org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.reservedWait(ReservedThreadExecutor.java:308)
at 
org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.run(ReservedThreadExecutor.java:373)
at 
org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:708)
at 
org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:626)
at java.lang.Thread.run(Thread.java:748)
at __randomizedtesting.SeedInfo.seed([D8AF62872B5AEB0F]:0)


FAILED:  junit.framework.TestSuite.org.apache.solr.schema.TestManagedSchemaAPI

Error Message:
ObjectTracker found 5 object(s) that were not released!!! 
[MockDirectoryWrapper, MDCAwareThreadPoolExecutor, MockDirectoryWrapper, 
TransactionLog, MockDirectoryWrapper] 
org.apache.solr.common.util.ObjectReleaseTracker$ObjectTrackerException: 
org.apache.lucene.store.MockDirectoryWrapper  at 
org.apache.solr.common.util.ObjectReleaseTracker.track(ObjectReleaseTracker.java:42)
  at 
org.apache.solr.core.CachingDirectoryFactory.get(CachingDirectoryFactory.java:348)
  at 
org.apache.solr.core.SolrCore.initSnapshotMetaDataManager(SolrCore.java:498)  
at org.apache.solr.core.SolrCore.(SolrCore.java:948)  at 

[JENKINS] Lucene-Solr-7.x-Windows (32bit/jdk1.8.0_144) - Build # 430 - Still Unstable!

2018-01-29 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Windows/430/
Java: 32bit/jdk1.8.0_144 -client -XX:+UseG1GC

4 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.solr.schema.ManagedSchemaRoundRobinCloudTest

Error Message:
ObjectTracker found 4 object(s) that were not released!!! 
[MDCAwareThreadPoolExecutor, MockDirectoryWrapper, MockDirectoryWrapper, 
MockDirectoryWrapper] 
org.apache.solr.common.util.ObjectReleaseTracker$ObjectTrackerException: 
org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor  at 
org.apache.solr.common.util.ObjectReleaseTracker.track(ObjectReleaseTracker.java:42)
  at org.apache.solr.core.SolrCore.(SolrCore.java:892)  at 
org.apache.solr.core.SolrCore.reload(SolrCore.java:656)  at 
org.apache.solr.core.CoreContainer.reload(CoreContainer.java:1287)  at 
org.apache.solr.core.SolrCore.lambda$getConfListener$20(SolrCore.java:2969)  at 
org.apache.solr.cloud.ZkController.lambda$fireEventListeners$5(ZkController.java:2610)
  at java.lang.Thread.run(Thread.java:748)  
org.apache.solr.common.util.ObjectReleaseTracker$ObjectTrackerException: 
org.apache.lucene.store.MockDirectoryWrapper  at 
org.apache.solr.common.util.ObjectReleaseTracker.track(ObjectReleaseTracker.java:42)
  at 
org.apache.solr.core.CachingDirectoryFactory.get(CachingDirectoryFactory.java:348)
  at 
org.apache.solr.core.SolrCore.initSnapshotMetaDataManager(SolrCore.java:498)  
at org.apache.solr.core.SolrCore.(SolrCore.java:948)  at 
org.apache.solr.core.SolrCore.(SolrCore.java:863)  at 
org.apache.solr.core.CoreContainer.createFromDescriptor(CoreContainer.java:1039)
  at org.apache.solr.core.CoreContainer.create(CoreContainer.java:950)  at 
org.apache.solr.handler.admin.CoreAdminOperation.lambda$static$0(CoreAdminOperation.java:91)
  at 
org.apache.solr.handler.admin.CoreAdminOperation.execute(CoreAdminOperation.java:384)
  at 
org.apache.solr.handler.admin.CoreAdminHandler$CallInfo.call(CoreAdminHandler.java:389)
  at 
org.apache.solr.handler.admin.CoreAdminHandler.handleRequestBody(CoreAdminHandler.java:174)
  at 
org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:177)
  at org.apache.solr.servlet.HttpSolrCall.handleAdmin(HttpSolrCall.java:736)  
at 
org.apache.solr.servlet.HttpSolrCall.handleAdminRequest(HttpSolrCall.java:717)  
at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:498)  at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:380)
  at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:326)
  at 
org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1637)
  at 
org.apache.solr.client.solrj.embedded.JettySolrRunner$DebugFilter.doFilter(JettySolrRunner.java:139)
  at 
org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1637)
  at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:533) 
 at 
org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:188)
  at 
org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:1595)
  at 
org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:188)
  at 
org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1253)
  at 
org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:168)
  at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:473)  
at 
org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:1564)
  at 
org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:166)
  at 
org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1155)
  at 
org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)  
at 
org.eclipse.jetty.server.handler.gzip.GzipHandler.handle(GzipHandler.java:455)  
at 
org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132) 
 at org.eclipse.jetty.server.Server.handle(Server.java:530)  at 
org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:347)  at 
org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:256)  at 
org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:279)
  at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:102)  at 
org.eclipse.jetty.io.ChannelEndPoint$2.run(ChannelEndPoint.java:124)  at 
org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:247)
  at 
org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:140)
  at 
org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.run(EatWhatYouKill.java:131)
  at 
org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.run(ReservedThreadExecutor.java:382)
  at 
org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:708)
  at 
org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:626) 
 at 

[jira] [Commented] (SOLR-11661) New HDFS collection reuses unremoved data from a deleted HDFS collection with same name causes inconsistent view of documents

2018-01-29 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11661?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16344487#comment-16344487
 ] 

ASF subversion and git services commented on SOLR-11661:


Commit 3527fa5979f6d4fded58767d84ffb0988734acd2 in lucene-solr's branch 
refs/heads/branch_7x from [~caomanhdat]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=3527fa5 ]

SOLR-11661: New HDFS collection reuses unremoved data from a deleted HDFS 
collection with same name causes inconsistent view of documents


> New HDFS collection reuses unremoved data from a deleted HDFS collection with 
> same name causes inconsistent view of documents
> -
>
> Key: SOLR-11661
> URL: https://issues.apache.org/jira/browse/SOLR-11661
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Reporter: Shalin Shekhar Mangar
>Priority: Major
> Fix For: master (8.0), 7.3
>
> Attachments: 11458-2-MoveReplicaHDFSTest-log.txt, SOLR-11661.patch, 
> SOLR-11661.patch
>
>
> While testing SOLR-11458, [~ab] ran into an interesting failure which 
> resulted in different document counts between leader and replica. The test is 
> MoveReplicaHDFSTest on jira/solr-11458-2 branch.
> The failure is rare but reproducible on beasting:
> {code}
> reproduce with: ant test  -Dtestcase=MoveReplicaHDFSTest 
> -Dtests.method=testNormalFailedMove -Dtests.seed=161856CB543CD71C 
> -Dtests.slow=true -Dtests.locale=ar-SA -Dtests.timezone=US/Michigan 
> -Dtests.asserts=true -Dtests.file.encoding=ISO-8859-1
>[junit4] FAILURE 14.2s | MoveReplicaHDFSTest.testNormalFailedMove <<<
>[junit4]> Throwable #1: java.lang.AssertionError: expected:<100> but 
> was:<56>
>[junit4]>  at 
> __randomizedtesting.SeedInfo.seed([161856CB543CD71C:31134983787E4905]:0)
>[junit4]>  at 
> org.apache.solr.cloud.MoveReplicaTest.testFailedMove(MoveReplicaTest.java:305)
>[junit4]>  at 
> org.apache.solr.cloud.MoveReplicaHDFSTest.testNormalFailedMove(MoveReplicaHDFSTest.java:69)
> {code}
> The root problem here is that when the old replica is not live during 
> deletion of a collection, the corresponding HDFS data of that replica is not 
> removed; therefore, when a new collection with the same name as the deleted 
> collection is created, new replicas will reuse the old HDFS data. This leads 
> to many problems in leader election and recovery.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-11661) New HDFS collection reuses unremoved data from a deleted HDFS collection with same name causes inconsistent view of documents

2018-01-29 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11661?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16344485#comment-16344485
 ] 

ASF subversion and git services commented on SOLR-11661:


Commit c56d774eb6555baa099fec22f290a9b5640a366d in lucene-solr's branch 
refs/heads/master from [~caomanhdat]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=c56d774 ]

SOLR-11661: New HDFS collection reuses unremoved data from a deleted HDFS 
collection with same name causes inconsistent view of documents


> New HDFS collection reuses unremoved data from a deleted HDFS collection with 
> same name causes inconsistent view of documents
> -
>
> Key: SOLR-11661
> URL: https://issues.apache.org/jira/browse/SOLR-11661
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Reporter: Shalin Shekhar Mangar
>Priority: Major
> Fix For: master (8.0), 7.3
>
> Attachments: 11458-2-MoveReplicaHDFSTest-log.txt, SOLR-11661.patch, 
> SOLR-11661.patch
>
>
> While testing SOLR-11458, [~ab] ran into an interesting failure which 
> resulted in different document counts between leader and replica. The test is 
> MoveReplicaHDFSTest on jira/solr-11458-2 branch.
> The failure is rare but reproducible on beasting:
> {code}
> reproduce with: ant test  -Dtestcase=MoveReplicaHDFSTest 
> -Dtests.method=testNormalFailedMove -Dtests.seed=161856CB543CD71C 
> -Dtests.slow=true -Dtests.locale=ar-SA -Dtests.timezone=US/Michigan 
> -Dtests.asserts=true -Dtests.file.encoding=ISO-8859-1
>[junit4] FAILURE 14.2s | MoveReplicaHDFSTest.testNormalFailedMove <<<
>[junit4]> Throwable #1: java.lang.AssertionError: expected:<100> but 
> was:<56>
>[junit4]>  at 
> __randomizedtesting.SeedInfo.seed([161856CB543CD71C:31134983787E4905]:0)
>[junit4]>  at 
> org.apache.solr.cloud.MoveReplicaTest.testFailedMove(MoveReplicaTest.java:305)
>[junit4]>  at 
> org.apache.solr.cloud.MoveReplicaHDFSTest.testNormalFailedMove(MoveReplicaHDFSTest.java:69)
> {code}
> The root problem here is that when the old replica is not live during 
> deletion of a collection, the corresponding HDFS data of that replica is not 
> removed; therefore, when a new collection with the same name as the deleted 
> collection is created, new replicas will reuse the old HDFS data. This leads 
> to many problems in leader election and recovery.






[jira] [Updated] (SOLR-11661) New HDFS collection reuses unremoved data from a deleted HDFS collection with same name causes inconsistent view of documents

2018-01-29 Thread Cao Manh Dat (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11661?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Cao Manh Dat updated SOLR-11661:

Summary: New HDFS collection reuses unremoved data from a deleted HDFS 
collection with same name causes inconsistent view of documents  (was: New HDFS 
collection reuses old HDFS data from deleted HDFS collection with same name 
causes inconsistent view of documents)

> New HDFS collection reuses unremoved data from a deleted HDFS collection with 
> same name causes inconsistent view of documents
> -
>
> Key: SOLR-11661
> URL: https://issues.apache.org/jira/browse/SOLR-11661
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Reporter: Shalin Shekhar Mangar
>Priority: Major
> Fix For: master (8.0), 7.3
>
> Attachments: 11458-2-MoveReplicaHDFSTest-log.txt, SOLR-11661.patch, 
> SOLR-11661.patch
>
>
> While testing SOLR-11458, [~ab] ran into an interesting failure which 
> resulted in different document counts between leader and replica. The test is 
> MoveReplicaHDFSTest on jira/solr-11458-2 branch.
> The failure is rare but reproducible on beasting:
> {code}
> reproduce with: ant test  -Dtestcase=MoveReplicaHDFSTest 
> -Dtests.method=testNormalFailedMove -Dtests.seed=161856CB543CD71C 
> -Dtests.slow=true -Dtests.locale=ar-SA -Dtests.timezone=US/Michigan 
> -Dtests.asserts=true -Dtests.file.encoding=ISO-8859-1
>[junit4] FAILURE 14.2s | MoveReplicaHDFSTest.testNormalFailedMove <<<
>[junit4]> Throwable #1: java.lang.AssertionError: expected:<100> but 
> was:<56>
>[junit4]>  at 
> __randomizedtesting.SeedInfo.seed([161856CB543CD71C:31134983787E4905]:0)
>[junit4]>  at 
> org.apache.solr.cloud.MoveReplicaTest.testFailedMove(MoveReplicaTest.java:305)
>[junit4]>  at 
> org.apache.solr.cloud.MoveReplicaHDFSTest.testNormalFailedMove(MoveReplicaHDFSTest.java:69)
> {code}
> The root problem here is that when the old replica is not live during 
> deletion of a collection, the corresponding HDFS data of that replica is not 
> removed; therefore, when a new collection with the same name as the deleted 
> collection is created, new replicas will reuse the old HDFS data. This leads 
> to many problems in leader election and recovery.






[jira] [Updated] (SOLR-11661) New HDFS collection reuses old HDFS data from deleted HDFS collection with same name causes inconsistent view of documents

2018-01-29 Thread Cao Manh Dat (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11661?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Cao Manh Dat updated SOLR-11661:

Description: 
While testing SOLR-11458, [~ab] ran into an interesting failure which resulted 
in different document counts between leader and replica. The test is 
MoveReplicaHDFSTest on jira/solr-11458-2 branch.

The failure is rare but reproducible on beasting:
{code}
reproduce with: ant test  -Dtestcase=MoveReplicaHDFSTest 
-Dtests.method=testNormalFailedMove -Dtests.seed=161856CB543CD71C 
-Dtests.slow=true -Dtests.locale=ar-SA -Dtests.timezone=US/Michigan 
-Dtests.asserts=true -Dtests.file.encoding=ISO-8859-1
   [junit4] FAILURE 14.2s | MoveReplicaHDFSTest.testNormalFailedMove <<<
   [junit4]> Throwable #1: java.lang.AssertionError: expected:<100> but 
was:<56>
   [junit4]>at 
__randomizedtesting.SeedInfo.seed([161856CB543CD71C:31134983787E4905]:0)
   [junit4]>at 
org.apache.solr.cloud.MoveReplicaTest.testFailedMove(MoveReplicaTest.java:305)
   [junit4]>at 
org.apache.solr.cloud.MoveReplicaHDFSTest.testNormalFailedMove(MoveReplicaHDFSTest.java:69)
{code}

The root problem here is that when the old replica is not live during deletion 
of a collection, the corresponding HDFS data of that replica is not removed; 
therefore, when a new collection with the same name as the deleted collection 
is created, new replicas will reuse the old HDFS data. This leads to many 
problems in leader election and recovery.

  was:
While testing SOLR-11458, [~ab] ran into an interesting failure which resulted 
in different document counts between leader and replica. The test is 
MoveReplicaHDFSTest on jira/solr-11458-2 branch.

The failure is rare but reproducible on beasting:
{code}
reproduce with: ant test  -Dtestcase=MoveReplicaHDFSTest 
-Dtests.method=testNormalFailedMove -Dtests.seed=161856CB543CD71C 
-Dtests.slow=true -Dtests.locale=ar-SA -Dtests.timezone=US/Michigan 
-Dtests.asserts=true -Dtests.file.encoding=ISO-8859-1
   [junit4] FAILURE 14.2s | MoveReplicaHDFSTest.testNormalFailedMove <<<
   [junit4]> Throwable #1: java.lang.AssertionError: expected:<100> but 
was:<56>
   [junit4]>at 
__randomizedtesting.SeedInfo.seed([161856CB543CD71C:31134983787E4905]:0)
   [junit4]>at 
org.apache.solr.cloud.MoveReplicaTest.testFailedMove(MoveReplicaTest.java:305)
   [junit4]>at 
org.apache.solr.cloud.MoveReplicaHDFSTest.testNormalFailedMove(MoveReplicaHDFSTest.java:69)
{code}


> New HDFS collection reuses old HDFS data from deleted HDFS collection with 
> same name causes inconsistent view of documents
> --
>
> Key: SOLR-11661
> URL: https://issues.apache.org/jira/browse/SOLR-11661
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Reporter: Shalin Shekhar Mangar
>Priority: Major
> Fix For: master (8.0), 7.3
>
> Attachments: 11458-2-MoveReplicaHDFSTest-log.txt, SOLR-11661.patch, 
> SOLR-11661.patch
>
>
> While testing SOLR-11458, [~ab] ran into an interesting failure which 
> resulted in different document counts between leader and replica. The test is 
> MoveReplicaHDFSTest on jira/solr-11458-2 branch.
> The failure is rare but reproducible on beasting:
> {code}
> reproduce with: ant test  -Dtestcase=MoveReplicaHDFSTest 
> -Dtests.method=testNormalFailedMove -Dtests.seed=161856CB543CD71C 
> -Dtests.slow=true -Dtests.locale=ar-SA -Dtests.timezone=US/Michigan 
> -Dtests.asserts=true -Dtests.file.encoding=ISO-8859-1
>[junit4] FAILURE 14.2s | MoveReplicaHDFSTest.testNormalFailedMove <<<
>[junit4]> Throwable #1: java.lang.AssertionError: expected:<100> but 
> was:<56>
>[junit4]>  at 
> __randomizedtesting.SeedInfo.seed([161856CB543CD71C:31134983787E4905]:0)
>[junit4]>  at 
> org.apache.solr.cloud.MoveReplicaTest.testFailedMove(MoveReplicaTest.java:305)
>[junit4]>  at 
> org.apache.solr.cloud.MoveReplicaHDFSTest.testNormalFailedMove(MoveReplicaHDFSTest.java:69)
> {code}
> The root problem here is that when the old replica is not live during 
> deletion of a collection, the corresponding HDFS data of that replica is not 
> removed; therefore, when a new collection with the same name as the deleted 
> collection is created, new replicas will reuse the old HDFS data. This leads 
> to many problems in leader election and recovery.






[jira] [Updated] (SOLR-11661) New HDFS collection reuses old HDFS data from deleted HDFS collection with same name causes inconsistent view of documents

2018-01-29 Thread Cao Manh Dat (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11661?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Cao Manh Dat updated SOLR-11661:

Summary: New HDFS collection reuses old HDFS data from deleted HDFS 
collection with same name causes inconsistent view of documents  (was: Race 
condition between core creation thread and recovery request from leader causes 
inconsistent view of documents)

> New HDFS collection reuses old HDFS data from deleted HDFS collection with 
> same name causes inconsistent view of documents
> --
>
> Key: SOLR-11661
> URL: https://issues.apache.org/jira/browse/SOLR-11661
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Reporter: Shalin Shekhar Mangar
>Priority: Major
> Fix For: master (8.0), 7.3
>
> Attachments: 11458-2-MoveReplicaHDFSTest-log.txt, SOLR-11661.patch, 
> SOLR-11661.patch
>
>
> While testing SOLR-11458, [~ab] ran into an interesting failure which 
> resulted in different document counts between leader and replica. The test is 
> MoveReplicaHDFSTest on jira/solr-11458-2 branch.
> The failure is rare but reproducible on beasting:
> {code}
> reproduce with: ant test  -Dtestcase=MoveReplicaHDFSTest 
> -Dtests.method=testNormalFailedMove -Dtests.seed=161856CB543CD71C 
> -Dtests.slow=true -Dtests.locale=ar-SA -Dtests.timezone=US/Michigan 
> -Dtests.asserts=true -Dtests.file.encoding=ISO-8859-1
>[junit4] FAILURE 14.2s | MoveReplicaHDFSTest.testNormalFailedMove <<<
>[junit4]> Throwable #1: java.lang.AssertionError: expected:<100> but 
> was:<56>
>[junit4]>  at 
> __randomizedtesting.SeedInfo.seed([161856CB543CD71C:31134983787E4905]:0)
>[junit4]>  at 
> org.apache.solr.cloud.MoveReplicaTest.testFailedMove(MoveReplicaTest.java:305)
>[junit4]>  at 
> org.apache.solr.cloud.MoveReplicaHDFSTest.testNormalFailedMove(MoveReplicaHDFSTest.java:69)
> {code}






[jira] [Commented] (SOLR-11900) API command to delete oldest collections in a time routed alias

2018-01-29 Thread David Smiley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11900?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16344455#comment-16344455
 ] 

David Smiley commented on SOLR-11900:
-

I was chatting with [~gus_heck] and we figured the need for this isn't very 
compelling (in either form above).  Instead, if the user wants to delete old 
collections explicitly, they could do these commands themselves (update the 
alias, delete the collections).  Collection deletion could even be enhanced to 
detect that it's part of an alias and auto-remove itself, which would make this 
easier and would eliminate a race condition of the target collection list 
getting updated at the same time more collections get added (however unlikely). 
 And after SOLR-11925, the user could also temporarily adjust whatever metadata 
setting establishes the automatic collection deletion time span, assuming 
that new data is coming in to trigger the logic.

So I'll stop this now and re-use most of the code here in SOLR-11925 which 
needs most of the same stuff.

> API command to delete oldest collections in a time routed alias
> ---
>
> Key: SOLR-11900
> URL: https://issues.apache.org/jira/browse/SOLR-11900
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Reporter: David Smiley
>Assignee: David Smiley
>Priority: Major
> Fix For: 7.3
>
> Attachments: SOLR-11900.patch
>
>
> For Time Routed Aliases, we'll need an API command to delete the oldest 
> collection(s).  Perhaps the command action name is 
> DELETE_COLLECTION_OF_ROUTED_ALIAS (yes that's long).  And input is of course 
> the routed alias name, plus a mandatory "before" which is a standard time 
> input that Solr accepts that will likely include date math.  Thus if you used 
> before="NOW/DAY-90DAYS" then your guaranteed to have the last 90 days worth 
> of data.  If a collection overlaps past what "before" is computed to be then 
> it needs to stay.  The pattern might match any number of collections, perhaps 
> none.  But in all cases, the most recent collection must be retained -- the 
> time routed aliases must at all times refer to at least one collection.
> The underlying steps will be to first update the alias, and then delete the 
> collection(s).  It ought to return the collections that get deleted.
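The selection rule described above (delete only collections whose entire time range falls before the computed "before" instant, and always retain the most recent one) can be sketched as follows. This is a hypothetical illustration, not Solr's API: the class name, method signature, and string-based inputs are inventions for clarity; in Solr the alias's collection names and time ranges would come from the alias metadata, and "before" from parsing the date-math expression.

```java
import java.time.Instant;
import java.util.ArrayList;
import java.util.List;

public class RoutedAliasDeleteSketch {

    /**
     * Given a routed alias's collections ordered oldest-first, with the start
     * instant of each collection's time range, return the names safe to
     * delete for a computed "before" instant. A collection is deletable only
     * if its whole range ends on or before 'before' (its end is the next
     * collection's start); a collection overlapping past 'before' stays, and
     * the newest collection is always retained so the alias keeps at least
     * one target collection.
     */
    public static List<String> deletable(String[] names, String[] rangeStarts, String before) {
        Instant cutoff = Instant.parse(before);
        List<String> out = new ArrayList<>();
        // stop before the last element: the most recent collection must stay
        for (int i = 0; i < names.length - 1; i++) {
            Instant end = Instant.parse(rangeStarts[i + 1]); // next start bounds this range
            if (!end.isAfter(cutoff)) {
                out.add(names[i]);
            } else {
                break; // ordered oldest-first: nothing later can qualify
            }
        }
        return out;
    }

    public static void main(String[] args) {
        String[] names  = {"myalias_2017-10", "myalias_2017-11", "myalias_2017-12"};
        String[] starts = {"2017-10-01T00:00:00Z", "2017-11-01T00:00:00Z", "2017-12-01T00:00:00Z"};
        // before = Dec 1: October and November have fully aged out; December stays
        System.out.println(deletable(names, starts, "2017-12-01T00:00:00Z"));
    }
}
```

As the comment thread notes, the underlying steps would then be: update the alias to drop these names, and only afterwards delete the collections.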






[jira] [Created] (SOLR-11925) Auto delete oldest collections in a time routed alias

2018-01-29 Thread David Smiley (JIRA)
David Smiley created SOLR-11925:
---

 Summary: Auto delete oldest collections in a time routed alias
 Key: SOLR-11925
 URL: https://issues.apache.org/jira/browse/SOLR-11925
 Project: Solr
  Issue Type: Sub-task
  Security Level: Public (Default Security Level. Issues are Public)
  Components: SolrCloud
Reporter: David Smiley
Assignee: David Smiley
 Fix For: 7.3


The oldest collections in a Time Routed Alias should be automatically deleted, 
according to a new alias metadata setting that establishes how long to retain 
them.  The check can happen as new data flows in at 
TimeRoutedAliasUpdateProcessor, and thus deletion won't occur if new data isn't 
coming in.
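The trigger can be sketched as below. This is a hypothetical sketch, not the eventual Solr implementation: the retention span is modeled as a plain number of days because the issue does not specify the metadata key or its date-math format, and the aged-out test uses the time of the incoming document, so nothing runs when no data arrives.

```java
import java.time.Instant;
import java.time.temporal.ChronoUnit;

public class AutoDeleteCheckSketch {

    /**
     * True if a collection whose time range ends at 'oldestEndIso' lies
     * entirely before (nowIso - retentionDays), i.e. it has aged out and is
     * a candidate for automatic deletion.
     */
    public static boolean agedOut(String oldestEndIso, String nowIso, long retentionDays) {
        Instant cutoff = Instant.parse(nowIso).minus(retentionDays, ChronoUnit.DAYS);
        return !Instant.parse(oldestEndIso).isAfter(cutoff);
    }

    public static void main(String[] args) {
        // 'now' stands in for the route time of a newly ingested document
        String now = "2018-01-29T00:00:00Z";
        System.out.println(agedOut("2017-10-01T00:00:00Z", now, 90)); // aged out
        System.out.println(agedOut("2017-12-01T00:00:00Z", now, 90)); // still retained
    }
}
```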






[jira] [Commented] (SOLR-10786) Add DBSCAN clustering Streaming Evaluator

2018-01-29 Thread David Smiley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10786?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16344435#comment-16344435
 ] 

David Smiley commented on SOLR-10786:
-

I assume the pair of arguments, double[] a and double[] b for longitude and 
latitude respectively would both be of length 1?  Sure; it's trivial as you 
just need to call out to something like 
{{org.apache.lucene.util.SloppyMath#haversinMeters(double, double, double, 
double)}}.  Yeah I could add something like this, or maybe this comment is 
all you were looking for from me, since it amounts to a one-liner.
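The "one-liner" in question is the haversine great-circle distance. A self-contained sketch of that formula follows; note it is an illustration, not Lucene's code: `SloppyMath#haversinMeters` uses fast-math approximations internally, and the mean earth radius constant here (6371008.8 m) is an assumption that may differ slightly from Lucene's. Arguments are (lat1, lon1, lat2, lon2) in degrees; the result is in meters.

```java
public class HaversineSketch {
    // assumed mean earth radius; Lucene's internal constant may differ slightly
    static final double EARTH_MEAN_RADIUS_M = 6371008.8;

    /** Haversine great-circle distance in meters between two lat/lon points. */
    public static double haversinMeters(double lat1, double lon1, double lat2, double lon2) {
        double dLat = Math.toRadians(lat2 - lat1);
        double dLon = Math.toRadians(lon2 - lon1);
        double h = Math.pow(Math.sin(dLat / 2), 2)
                 + Math.cos(Math.toRadians(lat1)) * Math.cos(Math.toRadians(lat2))
                 * Math.pow(Math.sin(dLon / 2), 2);
        return 2 * EARTH_MEAN_RADIUS_M * Math.asin(Math.sqrt(h));
    }

    public static void main(String[] args) {
        // one degree of longitude along the equator is roughly 111.2 km
        System.out.printf("%.0f%n", haversinMeters(0, 0, 0, 1));
    }
}
```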

> Add DBSCAN clustering Streaming Evaluator
> -
>
> Key: SOLR-10786
> URL: https://issues.apache.org/jira/browse/SOLR-10786
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
>Priority: Major
> Fix For: master (8.0), 7.3
>
> Attachments: SOLR-10786.patch, SOLR-10786.patch, SOLR-10786.patch
>
>
> The DBSCAN clustering Stream Evaluator will cluster numeric vectors using the 
> DBSCAN clustering algorithm.
> Clustering implementation will be provided by Apache Commons Math.






[JENKINS] Lucene-Solr-master-Linux (64bit/jdk1.8.0_144) - Build # 21367 - Still Unstable!

2018-01-29 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/21367/
Java: 64bit/jdk1.8.0_144 -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC

2 tests failed.
FAILED:  
org.apache.solr.cloud.autoscaling.AutoAddReplicasIntegrationTest.testSimple

Error Message:
Waiting for collection testSimple1 null Live Nodes: [127.0.0.1:39829_solr, 
127.0.0.1:41877_solr] Last available state: 
DocCollection(testSimple1//collections/testSimple1/state.json/19)={   
"pullReplicas":"0",   "replicationFactor":"2",   "shards":{ "shard1":{  
 "range":"8000-",   "state":"active",   "replicas":{
 "core_node5":{   "core":"testSimple1_shard1_replica_n2",   
"base_url":"http://127.0.0.1:41877/solr",   
"node_name":"127.0.0.1:41877_solr",   "state":"active",   
"type":"NRT",   "leader":"true"}, "core_node12":{   
"core":"testSimple1_shard1_replica_n11",   
"base_url":"http://127.0.0.1:41877/solr",   
"node_name":"127.0.0.1:41877_solr",   "state":"active",   
"type":"NRT"}}}, "shard2":{   "range":"0-7fff",   
"state":"active",   "replicas":{ "core_node8":{   
"core":"testSimple1_shard2_replica_n6",   
"base_url":"http://127.0.0.1:41877/solr",   
"node_name":"127.0.0.1:41877_solr",   "state":"active",   
"type":"NRT",   "leader":"true"}, "core_node10":{   
"core":"testSimple1_shard2_replica_n9",   
"base_url":"http://127.0.0.1:43243/solr",   
"node_name":"127.0.0.1:43243_solr",   "state":"down",   
"type":"NRT",   "router":{"name":"compositeId"},   "maxShardsPerNode":"2",  
 "autoAddReplicas":"true",   "nrtReplicas":"2",   "tlogReplicas":"0"}

Stack Trace:
java.lang.AssertionError: Waiting for collection testSimple1
null
Live Nodes: [127.0.0.1:39829_solr, 127.0.0.1:41877_solr]
Last available state: 
DocCollection(testSimple1//collections/testSimple1/state.json/19)={
  "pullReplicas":"0",
  "replicationFactor":"2",
  "shards":{
"shard1":{
  "range":"8000-",
  "state":"active",
  "replicas":{
"core_node5":{
  "core":"testSimple1_shard1_replica_n2",
  "base_url":"http://127.0.0.1:41877/solr",
  "node_name":"127.0.0.1:41877_solr",
  "state":"active",
  "type":"NRT",
  "leader":"true"},
"core_node12":{
  "core":"testSimple1_shard1_replica_n11",
  "base_url":"http://127.0.0.1:41877/solr",
  "node_name":"127.0.0.1:41877_solr",
  "state":"active",
  "type":"NRT"}}},
"shard2":{
  "range":"0-7fff",
  "state":"active",
  "replicas":{
"core_node8":{
  "core":"testSimple1_shard2_replica_n6",
  "base_url":"http://127.0.0.1:41877/solr",
  "node_name":"127.0.0.1:41877_solr",
  "state":"active",
  "type":"NRT",
  "leader":"true"},
"core_node10":{
  "core":"testSimple1_shard2_replica_n9",
  "base_url":"http://127.0.0.1:43243/solr",
  "node_name":"127.0.0.1:43243_solr",
  "state":"down",
  "type":"NRT",
  "router":{"name":"compositeId"},
  "maxShardsPerNode":"2",
  "autoAddReplicas":"true",
  "nrtReplicas":"2",
  "tlogReplicas":"0"}
at 
__randomizedtesting.SeedInfo.seed([B3AAACAC1048F065:8B19885237BB24B4]:0)
at org.junit.Assert.fail(Assert.java:93)
at 
org.apache.solr.cloud.SolrCloudTestCase.waitForState(SolrCloudTestCase.java:269)
at 
org.apache.solr.cloud.autoscaling.AutoAddReplicasIntegrationTest.testSimple(AutoAddReplicasIntegrationTest.java:103)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 

[JENKINS] Lucene-Solr-master-Windows (32bit/jdk1.8.0_144) - Build # 7146 - Still Unstable!

2018-01-29 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Windows/7146/
Java: 32bit/jdk1.8.0_144 -server -XX:+UseParallelGC

5 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.lucene.index.TestCrashCausesCorruptIndex

Error Message:
Could not remove the following files (in the order of attempts):
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\build\core\test\J0\temp\lucene.index.TestCrashCausesCorruptIndex_357A5C5B0375DBCD-001\testCrashCorruptsIndexing-001:
 java.nio.file.DirectoryNotEmptyException: 
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\build\core\test\J0\temp\lucene.index.TestCrashCausesCorruptIndex_357A5C5B0375DBCD-001\testCrashCorruptsIndexing-001

C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\build\core\test\J0\temp\lucene.index.TestCrashCausesCorruptIndex_357A5C5B0375DBCD-001\testCrashCorruptsIndexing-001\_2.fnm:
 java.nio.file.AccessDeniedException: 
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\build\core\test\J0\temp\lucene.index.TestCrashCausesCorruptIndex_357A5C5B0375DBCD-001\testCrashCorruptsIndexing-001\_2.fnm

C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\build\core\test\J0\temp\lucene.index.TestCrashCausesCorruptIndex_357A5C5B0375DBCD-001:
 java.nio.file.DirectoryNotEmptyException: 
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\build\core\test\J0\temp\lucene.index.TestCrashCausesCorruptIndex_357A5C5B0375DBCD-001
 

Stack Trace:
java.io.IOException: Could not remove the following files (in the order of 
attempts):
   
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\build\core\test\J0\temp\lucene.index.TestCrashCausesCorruptIndex_357A5C5B0375DBCD-001\testCrashCorruptsIndexing-001:
 java.nio.file.DirectoryNotEmptyException: 
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\build\core\test\J0\temp\lucene.index.TestCrashCausesCorruptIndex_357A5C5B0375DBCD-001\testCrashCorruptsIndexing-001
   
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\build\core\test\J0\temp\lucene.index.TestCrashCausesCorruptIndex_357A5C5B0375DBCD-001\testCrashCorruptsIndexing-001\_2.fnm:
 java.nio.file.AccessDeniedException: 
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\build\core\test\J0\temp\lucene.index.TestCrashCausesCorruptIndex_357A5C5B0375DBCD-001\testCrashCorruptsIndexing-001\_2.fnm
   
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\build\core\test\J0\temp\lucene.index.TestCrashCausesCorruptIndex_357A5C5B0375DBCD-001:
 java.nio.file.DirectoryNotEmptyException: 
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\build\core\test\J0\temp\lucene.index.TestCrashCausesCorruptIndex_357A5C5B0375DBCD-001

at __randomizedtesting.SeedInfo.seed([357A5C5B0375DBCD]:0)
at org.apache.lucene.util.IOUtils.rm(IOUtils.java:329)
at 
org.apache.lucene.util.TestRuleTemporaryFilesCleanup.afterAlways(TestRuleTemporaryFilesCleanup.java:216)
at 
com.carrotsearch.randomizedtesting.rules.TestRuleAdapter$1.afterAlways(TestRuleAdapter.java:31)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:43)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:748)


FAILED:  
junit.framework.TestSuite.org.apache.lucene.store.TestFileSwitchDirectory

Error Message:
Could not remove the following files (in the order of attempts):
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\build\core\test\J1\temp\lucene.store.TestFileSwitchDirectory_357A5C5B0375DBCD-001\bar-009:
 java.nio.file.DirectoryNotEmptyException: 
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\build\core\test\J1\temp\lucene.store.TestFileSwitchDirectory_357A5C5B0375DBCD-001\bar-009
 

Stack Trace:
java.io.IOException: Could not remove the following files (in the order of 
attempts):
   
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\build\core\test\J1\temp\lucene.store.TestFileSwitchDirectory_357A5C5B0375DBCD-001\bar-009:
 java.nio.file.DirectoryNotEmptyException: 
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\build\core\test\J1\temp\lucene.store.TestFileSwitchDirectory_357A5C5B0375DBCD-001\bar-009

at 

[JENKINS] Lucene-Solr-SmokeRelease-master - Build # 936 - Still Failing

2018-01-29 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-SmokeRelease-master/936/

No tests ran.

Build Log:
[...truncated 28244 lines...]
prepare-release-no-sign:
[mkdir] Created dir: 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/build/smokeTestRelease/dist
 [copy] Copying 491 files to 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/build/smokeTestRelease/dist/lucene
 [copy] Copying 215 files to 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/build/smokeTestRelease/dist/solr
   [smoker] Java 1.8 JAVA_HOME=/home/jenkins/tools/java/latest1.8
   [smoker] NOTE: output encoding is UTF-8
   [smoker] 
   [smoker] Load release URL 
"file:/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/build/smokeTestRelease/dist/"...
   [smoker] 
   [smoker] Test Lucene...
   [smoker]   test basics...
   [smoker]   get KEYS
   [smoker] 0.2 MB in 0.01 sec (33.6 MB/sec)
   [smoker]   check changes HTML...
   [smoker]   download lucene-8.0.0-src.tgz...
   [smoker] 30.2 MB in 0.02 sec (1245.5 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   download lucene-8.0.0.tgz...
   [smoker] 73.0 MB in 0.06 sec (1226.0 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   download lucene-8.0.0.zip...
   [smoker] 83.4 MB in 0.07 sec (1221.6 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   unpack lucene-8.0.0.tgz...
   [smoker] verify JAR metadata/identity/no javax.* or java.* classes...
   [smoker] test demo with 1.8...
   [smoker]   got 6231 hits for query "lucene"
   [smoker] checkindex with 1.8...
   [smoker] check Lucene's javadoc JAR
   [smoker]   unpack lucene-8.0.0.zip...
   [smoker] verify JAR metadata/identity/no javax.* or java.* classes...
   [smoker] test demo with 1.8...
   [smoker]   got 6231 hits for query "lucene"
   [smoker] checkindex with 1.8...
   [smoker] check Lucene's javadoc JAR
   [smoker]   unpack lucene-8.0.0-src.tgz...
   [smoker] make sure no JARs/WARs in src dist...
   [smoker] run "ant validate"
   [smoker] run tests w/ Java 8 and testArgs='-Dtests.slow=false'...
   [smoker] test demo with 1.8...
   [smoker]   got 212 hits for query "lucene"
   [smoker] checkindex with 1.8...
   [smoker] generate javadocs w/ Java 8...
   [smoker] 
   [smoker] Crawl/parse...
   [smoker] 
   [smoker] Verify...
   [smoker]   confirm all releases have coverage in TestBackwardsCompatibility
   [smoker] find all past Lucene releases...
   [smoker] run TestBackwardsCompatibility..
   [smoker] success!
   [smoker] 
   [smoker] Test Solr...
   [smoker]   test basics...
   [smoker]   get KEYS
   [smoker] 0.2 MB in 0.00 sec (301.8 MB/sec)
   [smoker]   check changes HTML...
   [smoker]   download solr-8.0.0-src.tgz...
   [smoker] 52.5 MB in 0.05 sec (983.8 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   download solr-8.0.0.tgz...
   [smoker] 151.5 MB in 0.15 sec (1006.9 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   download solr-8.0.0.zip...
   [smoker] 152.5 MB in 0.15 sec (1022.3 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   unpack solr-8.0.0.tgz...
   [smoker] verify JAR metadata/identity/no javax.* or java.* classes...
   [smoker] unpack lucene-8.0.0.tgz...
   [smoker]   **WARNING**: skipping check of 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/build/smokeTestRelease/tmp/unpack/solr-8.0.0/contrib/dataimporthandler-extras/lib/javax.mail-1.5.1.jar:
 it has javax.* classes
   [smoker]   **WARNING**: skipping check of 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/build/smokeTestRelease/tmp/unpack/solr-8.0.0/contrib/dataimporthandler-extras/lib/activation-1.1.1.jar:
 it has javax.* classes
   [smoker] copying unpacked distribution for Java 8 ...
   [smoker] test solr example w/ Java 8...
   [smoker]   start Solr instance 
(log=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/build/smokeTestRelease/tmp/unpack/solr-8.0.0-java8/solr-example.log)...
   [smoker] No process found for Solr node running on port 8983
   [smoker]   Running techproducts example on port 8983 from 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/build/smokeTestRelease/tmp/unpack/solr-8.0.0-java8
   [smoker] *** [WARN] *** Your open file limit is currently 6.  
   [smoker]  It should be set to 65000 to avoid operational disruption. 
   [smoker]  If you no longer wish to see this warning, set SOLR_ULIMIT_CHECKS 
to false in your profile or solr.in.sh
   [smoker] *** [WARN] ***  Your Max Processes Limit is currently 10240. 
   [smoker]  It should be set to 65000 to avoid operational disruption. 
   [smoker]  If you no longer wish to see this warning, set SOLR_ULIMIT_CHECKS 
to false in your profile or 

[JENKINS-EA] Lucene-Solr-7.x-Linux (64bit/jdk-10-ea+41) - Build # 1265 - Still Unstable!

2018-01-29 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Linux/1265/
Java: 64bit/jdk-10-ea+41 -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC

1 tests failed.
FAILED:  
org.apache.solr.cloud.autoscaling.HdfsAutoAddReplicasIntegrationTest.testSimple

Error Message:
Waiting for collection testSimple2 null Live Nodes: [127.0.0.1:37879_solr, 
127.0.0.1:40225_solr] Last available state: 
DocCollection(testSimple2//collections/testSimple2/state.json/20)={   
"pullReplicas":"0",   "replicationFactor":"2",   "shards":{ "shard1":{  
 "range":"8000-",   "state":"active",   "replicas":{
 "core_node3":{   
"dataDir":"hdfs://localhost.localdomain:43933/data/testSimple2/core_node3/data/",
"base_url":"https://127.0.0.1:42015/solr",   
"node_name":"127.0.0.1:42015_solr",   "type":"NRT",   
"ulogDir":"hdfs://localhost.localdomain:43933/data/testSimple2/core_node3/data/tlog",
   "core":"testSimple2_shard1_replica_n1",   
"shared_storage":"true",   "state":"down"}, "core_node5":{  
 
"dataDir":"hdfs://localhost.localdomain:43933/data/testSimple2/core_node5/data/",
"base_url":"https://127.0.0.1:37879/solr",   
"node_name":"127.0.0.1:37879_solr",   "type":"NRT",   
"ulogDir":"hdfs://localhost.localdomain:43933/data/testSimple2/core_node5/data/tlog",
   "core":"testSimple2_shard1_replica_n2",   
"shared_storage":"true",   "state":"active",   
"leader":"true"}}}, "shard2":{   "range":"0-7fff",   
"state":"active",   "replicas":{ "core_node7":{   
"dataDir":"hdfs://localhost.localdomain:43933/data/testSimple2/core_node7/data/",
"base_url":"https://127.0.0.1:42015/solr",   
"node_name":"127.0.0.1:42015_solr",   "type":"NRT",   
"ulogDir":"hdfs://localhost.localdomain:43933/data/testSimple2/core_node7/data/tlog",
   "core":"testSimple2_shard2_replica_n4",   
"shared_storage":"true",   "state":"down"}, "core_node8":{  
 
"dataDir":"hdfs://localhost.localdomain:43933/data/testSimple2/core_node8/data/",
"base_url":"https://127.0.0.1:37879/solr",   
"node_name":"127.0.0.1:37879_solr",   "type":"NRT",   
"ulogDir":"hdfs://localhost.localdomain:43933/data/testSimple2/core_node8/data/tlog",
   "core":"testSimple2_shard2_replica_n6",   
"shared_storage":"true",   "state":"active",   
"leader":"true",   "router":{"name":"compositeId"},   
"maxShardsPerNode":"2",   "autoAddReplicas":"true",   "nrtReplicas":"2",   
"tlogReplicas":"0"}

Stack Trace:
java.lang.AssertionError: Waiting for collection testSimple2
null
Live Nodes: [127.0.0.1:37879_solr, 127.0.0.1:40225_solr]
Last available state: 
DocCollection(testSimple2//collections/testSimple2/state.json/20)={
  "pullReplicas":"0",
  "replicationFactor":"2",
  "shards":{
"shard1":{
  "range":"8000-",
  "state":"active",
  "replicas":{
"core_node3":{
  
"dataDir":"hdfs://localhost.localdomain:43933/data/testSimple2/core_node3/data/",
  "base_url":"https://127.0.0.1:42015/solr",
  "node_name":"127.0.0.1:42015_solr",
  "type":"NRT",
  
"ulogDir":"hdfs://localhost.localdomain:43933/data/testSimple2/core_node3/data/tlog",
  "core":"testSimple2_shard1_replica_n1",
  "shared_storage":"true",
  "state":"down"},
"core_node5":{
  
"dataDir":"hdfs://localhost.localdomain:43933/data/testSimple2/core_node5/data/",
  "base_url":"https://127.0.0.1:37879/solr",
  "node_name":"127.0.0.1:37879_solr",
  "type":"NRT",
  
"ulogDir":"hdfs://localhost.localdomain:43933/data/testSimple2/core_node5/data/tlog",
  "core":"testSimple2_shard1_replica_n2",
  "shared_storage":"true",
  "state":"active",
  "leader":"true"}}},
"shard2":{
  "range":"0-7fff",
  "state":"active",
  "replicas":{
"core_node7":{
  
"dataDir":"hdfs://localhost.localdomain:43933/data/testSimple2/core_node7/data/",
  "base_url":"https://127.0.0.1:42015/solr",
  "node_name":"127.0.0.1:42015_solr",
  "type":"NRT",
  
"ulogDir":"hdfs://localhost.localdomain:43933/data/testSimple2/core_node7/data/tlog",
  "core":"testSimple2_shard2_replica_n4",
  "shared_storage":"true",
  "state":"down"},
"core_node8":{
  
"dataDir":"hdfs://localhost.localdomain:43933/data/testSimple2/core_node8/data/",
  "base_url":"https://127.0.0.1:37879/solr",
  "node_name":"127.0.0.1:37879_solr",
  "type":"NRT",
  
"ulogDir":"hdfs://localhost.localdomain:43933/data/testSimple2/core_node8/data/tlog",
  "core":"testSimple2_shard2_replica_n6",
  "shared_storage":"true",
  "state":"active",
 

[jira] [Comment Edited] (SOLR-10786) Add DBSCAN clustering Streaming Evaluator

2018-01-29 Thread Joel Bernstein (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10786?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16344301#comment-16344301
 ] 

Joel Bernstein edited comment on SOLR-10786 at 1/30/18 12:49 AM:
-

[~dsmiley], do you think you could put together a lat/long distance measure that 
we could cluster lat/long coordinates with?

We would need to follow this interface to plug into Apache Commons Math DBSCAN 
clustering:

[https://commons.apache.org/proper/commons-math/javadocs/api-3.6/org/apache/commons/math3/ml/distance/DistanceMeasure.html]


was (Author: joel.bernstein):
[~dsmiley], do you think you put together a lat/long distance measure the we 
could cluster lat/long coordinates with?

 

We would need to follow this interface to plug into Apache Commons Math DBSCAN 
clustering:

https://commons.apache.org/proper/commons-math/javadocs/api-3.6/org/apache/commons/math3/ml/distance/DistanceMeasure.html
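As a rough illustration of what plugging into that interface could look like: a haversine (great-circle) distance over {{[latitude, longitude]}} points in degrees, returning kilometers, which a Commons Math {{DistanceMeasure}} implementation could delegate its {{compute(double[], double[])}} to. This is only a sketch; the class name and the choice of an idealized 6371 km Earth radius are illustrative, not from any attached patch:

```java
/**
 * Sketch: haversine great-circle distance between two
 * [latitude, longitude] points given in degrees, in kilometers.
 * A Commons Math DistanceMeasure could delegate compute() here.
 */
public class HaversineDistance {
    // Mean Earth radius; an assumption for this sketch.
    private static final double EARTH_RADIUS_KM = 6371.0;

    public static double compute(double[] a, double[] b) {
        double lat1 = Math.toRadians(a[0]), lon1 = Math.toRadians(a[1]);
        double lat2 = Math.toRadians(b[0]), lon2 = Math.toRadians(b[1]);
        double dLat = lat2 - lat1, dLon = lon2 - lon1;
        // Haversine formula: h = sin^2(dLat/2) + cos(lat1)cos(lat2)sin^2(dLon/2)
        double h = Math.sin(dLat / 2) * Math.sin(dLat / 2)
                 + Math.cos(lat1) * Math.cos(lat2)
                 * Math.sin(dLon / 2) * Math.sin(dLon / 2);
        return 2 * EARTH_RADIUS_KM * Math.asin(Math.sqrt(h));
    }

    public static void main(String[] args) {
        // One degree of latitude is roughly 111 km.
        System.out.println(compute(new double[]{0, 0}, new double[]{1, 0}));
    }
}
```

With Commons Math on the classpath, wrapping this in a class implementing {{DistanceMeasure}} would let it be passed to {{DBSCANClusterer}}'s constructor alongside eps and minPts.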

> Add DBSCAN clustering Streaming Evaluator
> -
>
> Key: SOLR-10786
> URL: https://issues.apache.org/jira/browse/SOLR-10786
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
>Priority: Major
> Fix For: master (8.0), 7.3
>
> Attachments: SOLR-10786.patch, SOLR-10786.patch, SOLR-10786.patch
>
>
> The DBSCAN clustering Stream Evaluator will cluster numeric vectors using the 
> DBSCAN clustering algorithm.
> Clustering implementation will be provided by Apache Commons Math.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-10786) Add DBSCAN clustering Streaming Evaluator

2018-01-29 Thread Joel Bernstein (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10786?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16344301#comment-16344301
 ] 

Joel Bernstein commented on SOLR-10786:
---

[~dsmiley], do you think you could put together a lat/long distance measure that 
we could cluster lat/long coordinates with?

 

We would need to follow this interface to plug into Apache Commons Math DBSCAN 
clustering:

https://commons.apache.org/proper/commons-math/javadocs/api-3.6/org/apache/commons/math3/ml/distance/DistanceMeasure.html

> Add DBSCAN clustering Streaming Evaluator
> -
>
> Key: SOLR-10786
> URL: https://issues.apache.org/jira/browse/SOLR-10786
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
>Priority: Major
> Fix For: master (8.0), 7.3
>
> Attachments: SOLR-10786.patch, SOLR-10786.patch, SOLR-10786.patch
>
>
> The DBSCAN clustering Stream Evaluator will cluster numeric vectors using the 
> DBSCAN clustering algorithm.
> Clustering implementation will be provided by Apache Commons Math.






[jira] [Commented] (SOLR-7887) Upgrade Solr to use log4j2 -- log4j 1 now officially end of life

2018-01-29 Thread Varun Thacker (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7887?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16344285#comment-16344285
 ] 

Varun Thacker commented on SOLR-7887:
-

Patch which adds licenses etc.

I am unsure what to put in disruptor-NOTICE.txt, so it's currently empty.

Pre-commit is still failing, which I'll investigate now.

If anyone has time to give it a spin on Windows to confirm everything is 
running fine, that would be great.

> Upgrade Solr to use log4j2 -- log4j 1 now officially end of life
> 
>
> Key: SOLR-7887
> URL: https://issues.apache.org/jira/browse/SOLR-7887
> Project: Solr
>  Issue Type: Task
>Affects Versions: 5.2.1
>Reporter: Shawn Heisey
>Priority: Major
> Attachments: SOLR-7887-WIP.patch, SOLR-7887.patch, SOLR-7887.patch, 
> SOLR-7887.patch, SOLR-7887.patch, SOLR-7887.patch, SOLR-7887.patch
>
>
> The logging services project has officially announced the EOL of log4j 1:
> https://blogs.apache.org/foundation/entry/apache_logging_services_project_announces
> In the official binary jetty deployment, we use log4j 1.2 as our final 
> logging destination, so the admin UI has a log watcher that actually uses 
> log4j and java.util.logging classes.  That will need to be extended to add 
> log4j2.  I think that might be the largest pain point to this upgrade.
> There is some crossover between log4j2 and slf4j.  Figuring out exactly which 
> jars need to be in the lib/ext directory will take some research.
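For orientation on the crossover mentioned in the description (purely a sketch, not from any attached patch): the usual log4j2 jar set is log4j-api, log4j-core, and the log4j-slf4j-impl binding that routes slf4j callers into log4j2, optionally plus log4j-1.2-api to bridge code still calling log4j 1 directly. A minimal log4j2.xml replacing a log4j 1 properties file might look like the following; the pattern string and appender name are illustrative:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<Configuration>
  <Appenders>
    <!-- Console appender; a RollingFile appender would replace log4j 1's file setup -->
    <Console name="STDOUT" target="SYSTEM_OUT">
      <PatternLayout pattern="%d{ISO8601} %-5p (%t) [%c{1}] %m%n"/>
    </Console>
  </Appenders>
  <Loggers>
    <Root level="info">
      <AppenderRef ref="STDOUT"/>
    </Root>
  </Loggers>
</Configuration>
```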






[jira] [Commented] (SOLR-10786) Add DBSCAN clustering Streaming Evaluator

2018-01-29 Thread Joel Bernstein (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10786?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16344284#comment-16344284
 ] 

Joel Bernstein commented on SOLR-10786:
---

That's a good question. I think DBSCAN would have been better for this. Also, 
since lat/long records are very small, I suspect it would be fast enough. But I 
think we would need a distance measure specific to this purpose. This use case 
is interesting enough that we should investigate.

> Add DBSCAN clustering Streaming Evaluator
> -
>
> Key: SOLR-10786
> URL: https://issues.apache.org/jira/browse/SOLR-10786
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
>Priority: Major
> Fix For: master (8.0), 7.3
>
> Attachments: SOLR-10786.patch, SOLR-10786.patch, SOLR-10786.patch
>
>
> The DBSCAN clustering Stream Evaluator will cluster numeric vectors using the 
> DBSCAN clustering algorithm.
> Clustering implementation will be provided by Apache Commons Math.






[jira] [Updated] (SOLR-7887) Upgrade Solr to use log4j2 -- log4j 1 now officially end of life

2018-01-29 Thread Varun Thacker (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7887?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Thacker updated SOLR-7887:

Attachment: SOLR-7887.patch

> Upgrade Solr to use log4j2 -- log4j 1 now officially end of life
> 
>
> Key: SOLR-7887
> URL: https://issues.apache.org/jira/browse/SOLR-7887
> Project: Solr
>  Issue Type: Task
>Affects Versions: 5.2.1
>Reporter: Shawn Heisey
>Priority: Major
> Attachments: SOLR-7887-WIP.patch, SOLR-7887.patch, SOLR-7887.patch, 
> SOLR-7887.patch, SOLR-7887.patch, SOLR-7887.patch, SOLR-7887.patch
>
>
> The logging services project has officially announced the EOL of log4j 1:
> https://blogs.apache.org/foundation/entry/apache_logging_services_project_announces
> In the official binary jetty deployment, we use log4j 1.2 as our final 
> logging destination, so the admin UI has a log watcher that actually uses 
> log4j and java.util.logging classes.  That will need to be extended to add 
> log4j2.  I think that might be the largest pain point to this upgrade.
> There is some crossover between log4j2 and slf4j.  Figuring out exactly which 
> jars need to be in the lib/ext directory will take some research.






[JENKINS] Lucene-Solr-NightlyTests-master - Build # 1463 - Still Failing

2018-01-29 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-master/1463/

12 tests failed.
FAILED:  org.apache.solr.TestDistributedSearch.test

Error Message:
Expected to find shardAddress in the up shard info: 
{error=org.apache.solr.client.solrj.SolrServerException: Time allowed to handle 
this request exceeded,trace=org.apache.solr.client.solrj.SolrServerException: 
Time allowed to handle this request exceeded  at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:460)
  at 
org.apache.solr.handler.component.HttpShardHandlerFactory.makeLoadBalancedRequest(HttpShardHandlerFactory.java:273)
  at 
org.apache.solr.handler.component.HttpShardHandler.lambda$submit$0(HttpShardHandler.java:175)
  at java.util.concurrent.FutureTask.run(FutureTask.java:266)  at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)  at 
java.util.concurrent.FutureTask.run(FutureTask.java:266)  at 
com.codahale.metrics.InstrumentedExecutorService$InstrumentedRunnable.run(InstrumentedExecutorService.java:176)
  at 
org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:188)
  at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) 
 at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) 
 at java.lang.Thread.run(Thread.java:748) ,time=1}

Stack Trace:
java.lang.AssertionError: Expected to find shardAddress in the up shard info: 
{error=org.apache.solr.client.solrj.SolrServerException: Time allowed to handle 
this request exceeded,trace=org.apache.solr.client.solrj.SolrServerException: 
Time allowed to handle this request exceeded
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:460)
at 
org.apache.solr.handler.component.HttpShardHandlerFactory.makeLoadBalancedRequest(HttpShardHandlerFactory.java:273)
at 
org.apache.solr.handler.component.HttpShardHandler.lambda$submit$0(HttpShardHandler.java:175)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at 
com.codahale.metrics.InstrumentedExecutorService$InstrumentedRunnable.run(InstrumentedExecutorService.java:176)
at 
org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:188)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
,time=1}
at 
__randomizedtesting.SeedInfo.seed([540DD97C956B2714:DC59E6A63B974AEC]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at 
org.apache.solr.TestDistributedSearch.comparePartialResponses(TestDistributedSearch.java:1191)
at 
org.apache.solr.TestDistributedSearch.queryPartialResults(TestDistributedSearch.java:1132)
at 
org.apache.solr.TestDistributedSearch.test(TestDistributedSearch.java:992)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsRepeatStatement.callStatement(BaseDistributedSearchTestCase.java:1019)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:968)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
   

[JENKINS] Lucene-Solr-7.x-Solaris (64bit/jdk1.8.0) - Build # 418 - Still Unstable!

2018-01-29 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Solaris/418/
Java: 64bit/jdk1.8.0 -XX:+UseCompressedOops -XX:+UseSerialGC

2 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.solr.cloud.TestSolrCloudWithSecureImpersonation

Error Message:
3 threads leaked from SUITE scope at 
org.apache.solr.cloud.TestSolrCloudWithSecureImpersonation: 1) 
Thread[id=12094, name=jetty-launcher-2329-thread-1-EventThread, state=WAITING, 
group=TGRP-TestSolrCloudWithSecureImpersonation] at 
sun.misc.Unsafe.park(Native Method) at 
java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
 at 
java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) 
at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:502)
2) Thread[id=12021, name=jetty-launcher-2329-thread-2-EventThread, 
state=TIMED_WAITING, group=TGRP-TestSolrCloudWithSecureImpersonation] 
at sun.misc.Unsafe.park(Native Method) at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedNanos(AbstractQueuedSynchronizer.java:1037)
 at 
java.util.concurrent.locks.AbstractQueuedSynchronizer.tryAcquireSharedNanos(AbstractQueuedSynchronizer.java:1328)
 at java.util.concurrent.CountDownLatch.await(CountDownLatch.java:277)  
   at 
org.apache.curator.CuratorZookeeperClient.internalBlockUntilConnectedOrTimedOut(CuratorZookeeperClient.java:323)
 at org.apache.curator.RetryLoop.callWithRetry(RetryLoop.java:105)  
   at 
org.apache.curator.framework.imps.GetDataBuilderImpl.pathInForeground(GetDataBuilderImpl.java:288)
 at 
org.apache.curator.framework.imps.GetDataBuilderImpl.forPath(GetDataBuilderImpl.java:279)
 at 
org.apache.curator.framework.imps.GetDataBuilderImpl.forPath(GetDataBuilderImpl.java:41)
 at 
org.apache.curator.framework.recipes.shared.SharedValue.readValue(SharedValue.java:244)
 at 
org.apache.curator.framework.recipes.shared.SharedValue.access$100(SharedValue.java:44)
 at 
org.apache.curator.framework.recipes.shared.SharedValue$1.process(SharedValue.java:61)
 at 
org.apache.curator.framework.imps.NamespaceWatcher.process(NamespaceWatcher.java:67)
 at 
org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:531)   
  at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506)   
 3) Thread[id=12093, 
name=jetty-launcher-2329-thread-1-SendThread(127.0.0.1:41886), 
state=TIMED_WAITING, group=TGRP-TestSolrCloudWithSecureImpersonation] 
at java.lang.Thread.sleep(Native Method) at 
org.apache.zookeeper.client.StaticHostProvider.next(StaticHostProvider.java:105)
 at 
org.apache.zookeeper.ClientCnxn$SendThread.startConnect(ClientCnxn.java:1000)   
  at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1063)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: 3 threads leaked from SUITE 
scope at org.apache.solr.cloud.TestSolrCloudWithSecureImpersonation: 
   1) Thread[id=12094, name=jetty-launcher-2329-thread-1-EventThread, 
state=WAITING, group=TGRP-TestSolrCloudWithSecureImpersonation]
at sun.misc.Unsafe.park(Native Method)
at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
at 
java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:502)
   2) Thread[id=12021, name=jetty-launcher-2329-thread-2-EventThread, 
state=TIMED_WAITING, group=TGRP-TestSolrCloudWithSecureImpersonation]
at sun.misc.Unsafe.park(Native Method)
at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedNanos(AbstractQueuedSynchronizer.java:1037)
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer.tryAcquireSharedNanos(AbstractQueuedSynchronizer.java:1328)
at java.util.concurrent.CountDownLatch.await(CountDownLatch.java:277)
at 
org.apache.curator.CuratorZookeeperClient.internalBlockUntilConnectedOrTimedOut(CuratorZookeeperClient.java:323)
at org.apache.curator.RetryLoop.callWithRetry(RetryLoop.java:105)
at 
org.apache.curator.framework.imps.GetDataBuilderImpl.pathInForeground(GetDataBuilderImpl.java:288)
at 
org.apache.curator.framework.imps.GetDataBuilderImpl.forPath(GetDataBuilderImpl.java:279)
at 
org.apache.curator.framework.imps.GetDataBuilderImpl.forPath(GetDataBuilderImpl.java:41)
at 

License File Doubts

2018-01-29 Thread Varun Thacker
Hi Everyone,

While adding license files as part of SOLR-7887, a question cropped up.

Let's take solr/licenses/derby-LICENSE-ASL.txt as an example, which has this
section towards the end:

   APPENDIX: How to apply the Apache License to your work.

  To apply the Apache License to your work, attach the following
  boilerplate notice, with the fields enclosed by brackets "[]"
  replaced with your own identifying information. (Don't include
  the brackets!)  The text should be enclosed in the appropriate
  comment syntax for the file format. We also recommend that a
  file or class name and description of purpose be included on the
  same "printed page" as the copyright notice for easier
  identification within third-party archives.

   Copyright [yyyy] [name of copyright owner]




Should we be filling out the [yyyy] and [name of copyright owner] sections
in our LICENSE files?


[JENKINS-EA] Lucene-Solr-master-Linux (64bit/jdk-10-ea+41) - Build # 21366 - Still Unstable!

2018-01-29 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/21366/
Java: 64bit/jdk-10-ea+41 -XX:-UseCompressedOops -XX:+UseG1GC

7 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.solr.handler.dataimport.TestSolrEntityProcessorEndToEnd

Error Message:
1 thread leaked from SUITE scope at 
org.apache.solr.handler.dataimport.TestSolrEntityProcessorEndToEnd: 1) 
Thread[id=257, name=qtp1878999664-257, state=TIMED_WAITING, 
group=TGRP-TestSolrEntityProcessorEndToEnd] at 
java.base@10-ea/jdk.internal.misc.Unsafe.park(Native Method) at 
java.base@10-ea/java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:234)
 at 
java.base@10-ea/java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2205)
 at 
app//org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.reservedWait(ReservedThreadExecutor.java:308)
 at 
app//org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.run(ReservedThreadExecutor.java:373)
 at 
app//org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:708)
 at 
app//org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:626)
 at java.base@10-ea/java.lang.Thread.run(Thread.java:844)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: 1 thread leaked from SUITE 
scope at org.apache.solr.handler.dataimport.TestSolrEntityProcessorEndToEnd: 
   1) Thread[id=257, name=qtp1878999664-257, state=TIMED_WAITING, 
group=TGRP-TestSolrEntityProcessorEndToEnd]
at java.base@10-ea/jdk.internal.misc.Unsafe.park(Native Method)
at 
java.base@10-ea/java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:234)
at 
java.base@10-ea/java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2205)
at 
app//org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.reservedWait(ReservedThreadExecutor.java:308)
at 
app//org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.run(ReservedThreadExecutor.java:373)
at 
app//org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:708)
at 
app//org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:626)
at java.base@10-ea/java.lang.Thread.run(Thread.java:844)
at __randomizedtesting.SeedInfo.seed([109044116134926]:0)


FAILED:  
junit.framework.TestSuite.org.apache.solr.handler.dataimport.TestSolrEntityProcessorEndToEnd

Error Message:
There are still zombie threads that couldn't be terminated:1) 
Thread[id=257, name=qtp1878999664-257, state=TIMED_WAITING, 
group=TGRP-TestSolrEntityProcessorEndToEnd] at 
java.base@10-ea/jdk.internal.misc.Unsafe.park(Native Method) at 
java.base@10-ea/java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:234)
 at 
java.base@10-ea/java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2205)
 at 
app//org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.reservedWait(ReservedThreadExecutor.java:308)
 at 
app//org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.run(ReservedThreadExecutor.java:373)
 at 
app//org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:708)
 at 
app//org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:626)
 at java.base@10-ea/java.lang.Thread.run(Thread.java:844)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: There are still zombie 
threads that couldn't be terminated:
   1) Thread[id=257, name=qtp1878999664-257, state=TIMED_WAITING, 
group=TGRP-TestSolrEntityProcessorEndToEnd]
at java.base@10-ea/jdk.internal.misc.Unsafe.park(Native Method)
at 
java.base@10-ea/java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:234)
at 
java.base@10-ea/java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2205)
at 
app//org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.reservedWait(ReservedThreadExecutor.java:308)
at 
app//org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.run(ReservedThreadExecutor.java:373)
at 
app//org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:708)
at 
app//org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:626)
at java.base@10-ea/java.lang.Thread.run(Thread.java:844)
at __randomizedtesting.SeedInfo.seed([109044116134926]:0)


FAILED:  
org.apache.solr.cloud.autoscaling.TriggerIntegrationTest.testMetricTrigger

Error Message:


Stack Trace:
java.lang.AssertionError
at 
__randomizedtesting.SeedInfo.seed([F2BDFE94D0B2F3E4:48B1C91B8F5A25AB]:0)
at 

[jira] [Comment Edited] (LUCENE-3475) ShingleFilter should handle positionIncrement of zero, e.g. synonyms

2018-01-29 Thread Mayya Sharipova (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-3475?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16344192#comment-16344192
 ] 

Mayya Sharipova edited comment on LUCENE-3475 at 1/29/18 11:33 PM:
---

[~jpountz] Hi Adrien!  I wonder what would be the best approach to handle 
positionIncrement=0?

I was thinking that in *ShingleFilter:getNextToken* we could do something like 
this:
{code:java}
if (input.incrementToken()) {
  while (posIncrAtt.getPositionIncrement() == 0) { // we may have multiple synonyms
    if (input.incrementToken()) { // go to next token
      // store synonym tokens and following tokens somewhere
      // and create a new input TokenStream from them?
    }
  }
}
{code}
I guess I am wondering if we have any other reference code that recreates a 
TokenStream from synonym tokens?
  


was (Author: mayyas):
[~jpountz] Hi Adrien!  I wonder what would be the best approach to handle 
positionIncrement=0?

I was thinking that in *ShingleFilter:getNextToken* we could do something like 
this:
{code:java}
if (input.incrementToken()) {
  while (posIncrAtt.getPositionIncrement() == 0) { // we may have multiple synonyms
    if (input.incrementToken()) { // go to next token
      // store synonym tokens and following tokens somewhere
      // and create a new input TokenStream from them?
    }
  }
}
{code}

I guess I am wondering if we have any other sample code that already does 
this, which I can reference?
 

> ShingleFilter should handle positionIncrement of zero, e.g. synonyms
> 
>
> Key: LUCENE-3475
> URL: https://issues.apache.org/jira/browse/LUCENE-3475
> Project: Lucene - Core
>  Issue Type: New Feature
>  Components: modules/analysis
>Affects Versions: 3.4
>Reporter: Cameron
>Priority: Minor
>  Labels: newdev
>
> ShingleFilter is creating shingles for a single term that has been expanded 
> by synonyms when it shouldn't. The position increment is 0.
> As an example, I have an Analyzer with a SynonymFilter followed by a 
> ShingleFilter. Assuming car and auto are synonyms, the SynonymFilter produces 
> two tokens and position 1: car, auto. The ShingleFilter is then producing 3 
> tokens, when there should only be two: car, car auto, auto. This behavior 
> seems incorrect.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-3475) ShingleFilter should handle positionIncrement of zero, e.g. synonyms

2018-01-29 Thread Mayya Sharipova (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-3475?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16344192#comment-16344192
 ] 

Mayya Sharipova commented on LUCENE-3475:
-

[~jpountz] Hi Adrien!  I wonder what would be the best approach to handle 
positionIncrement=0?

I was thinking that in *ShingleFilter:getNextToken* we could do something like 
this:
{code:java}
if (input.incrementToken()) {
  while (posIncrAtt.getPositionIncrement() == 0) { // we may have multiple synonyms
    if (input.incrementToken()) { // go to next token
      // store synonym tokens and following tokens somewhere
      // and create a new input TokenStream from them?
    }
  }
}
{code}

I guess I am wondering if we have any other sample code that already does 
this, which I can reference?
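The buffering idea in the comment above (group a token together with any zero-increment tokens that follow it before shingling) can be sketched with plain data structures. This is only an illustration of the grouping logic; the `Token` record here is hypothetical and is not Lucene's TokenStream API:

```java
import java.util.ArrayList;
import java.util.List;

public class PositionGrouper {
    /** Hypothetical stand-in for a (term, positionIncrement) pair; not Lucene's API. */
    record Token(String term, int posInc) {}

    /**
     * Groups a flat token sequence into per-position "stacks": a token with
     * posInc == 0 shares the position of the token before it, which is how
     * SynonymFilter emits synonyms.
     */
    static List<List<String>> groupByPosition(List<Token> tokens) {
        List<List<String>> positions = new ArrayList<>();
        for (Token t : tokens) {
            if (t.posInc() == 0 && !positions.isEmpty()) {
                // zero increment: same position as the previous token
                positions.get(positions.size() - 1).add(t.term());
            } else {
                List<String> stack = new ArrayList<>();
                stack.add(t.term());
                positions.add(stack);
            }
        }
        return positions;
    }

    public static void main(String[] args) {
        // "car"/"auto" are synonyms at one position, followed by "repair"
        List<Token> in = List.of(
            new Token("car", 1), new Token("auto", 0), new Token("repair", 1));
        System.out.println(groupByPosition(in)); // [[car, auto], [repair]]
    }
}
```

A shingle pass could then walk the position stacks and emit combinations across adjacent stacks, instead of treating each synonym as a new position.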
 

> ShingleFilter should handle positionIncrement of zero, e.g. synonyms
> 
>
> Key: LUCENE-3475
> URL: https://issues.apache.org/jira/browse/LUCENE-3475
> Project: Lucene - Core
>  Issue Type: New Feature
>  Components: modules/analysis
>Affects Versions: 3.4
>Reporter: Cameron
>Priority: Minor
>  Labels: newdev
>
> ShingleFilter is creating shingles for a single term that has been expanded 
> by synonyms when it shouldn't. The position increment is 0.
> As an example, I have an Analyzer with a SynonymFilter followed by a 
> ShingleFilter. Assuming car and auto are synonyms, the SynonymFilter produces 
> two tokens at position 1: car, auto. The ShingleFilter then produces 3 
> tokens, when there should only be two: car, car auto, auto. This behavior 
> seems incorrect.






[GitHub] lucene-solr pull request #314: SOLR-11459 Clear AddUpdateCommand#prevVersion...

2018-01-29 Thread werder06
GitHub user werder06 opened a pull request:

https://github.com/apache/lucene-solr/pull/314

SOLR-11459 Clear AddUpdateCommand#prevVersion to fix in-place updates…

… for non existed documents

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/werder06/lucene-solr master

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/lucene-solr/pull/314.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #314


commit 3de1a5b2fa1aa7af5afe09912d4caebe2ee9d800
Author: Andrey 
Date:   2018-01-29T22:57:16Z

SOLR-11459 Clear AddUpdateCommand#prevVersion to fix in-place updates for 
non existed documents




---




[JENKINS] Lucene-Solr-master-Solaris (64bit/jdk1.8.0) - Build # 1652 - Still Unstable!

2018-01-29 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Solaris/1652/
Java: 64bit/jdk1.8.0 -XX:+UseCompressedOops -XX:+UseParallelGC

4 tests failed.
FAILED:  junit.framework.TestSuite.org.apache.solr.cloud.DocValuesNotIndexedTest

Error Message:
Error starting up MiniSolrCloudCluster

Stack Trace:
java.lang.Exception: Error starting up MiniSolrCloudCluster
at __randomizedtesting.SeedInfo.seed([4327DD074C462EFD]:0)
at 
org.apache.solr.cloud.MiniSolrCloudCluster.checkForExceptions(MiniSolrCloudCluster.java:507)
at 
org.apache.solr.cloud.MiniSolrCloudCluster.<init>(MiniSolrCloudCluster.java:251)
at 
org.apache.solr.cloud.SolrCloudTestCase$Builder.configure(SolrCloudTestCase.java:190)
at 
org.apache.solr.cloud.DocValuesNotIndexedTest.createCluster(DocValuesNotIndexedTest.java:80)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:874)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:748)
Suppressed: java.lang.AssertionError
at 
sun.reflect.generics.reflectiveObjects.WildcardTypeImpl.getUpperBoundASTs(WildcardTypeImpl.java:86)
at 
sun.reflect.generics.reflectiveObjects.WildcardTypeImpl.getUpperBounds(WildcardTypeImpl.java:122)
at 
sun.reflect.generics.reflectiveObjects.WildcardTypeImpl.toString(WildcardTypeImpl.java:190)
at java.lang.reflect.Type.getTypeName(Type.java:46)
at 
sun.reflect.generics.reflectiveObjects.ParameterizedTypeImpl.toString(ParameterizedTypeImpl.java:234)
at java.lang.reflect.Type.getTypeName(Type.java:46)
at 
java.lang.reflect.Method.specificToGenericStringHeader(Method.java:421)
at 
java.lang.reflect.Executable.sharedToGenericString(Executable.java:163)
at java.lang.reflect.Method.toGenericString(Method.java:415)
at java.beans.MethodRef.set(MethodRef.java:46)
at 
java.beans.MethodDescriptor.setMethod(MethodDescriptor.java:117)
at java.beans.MethodDescriptor.<init>(MethodDescriptor.java:72)
at java.beans.MethodDescriptor.<init>(MethodDescriptor.java:56)
at 
java.beans.Introspector.getTargetMethodInfo(Introspector.java:1205)
at java.beans.Introspector.getBeanInfo(Introspector.java:426)
at java.beans.Introspector.getBeanInfo(Introspector.java:173)
at java.beans.Introspector.getBeanInfo(Introspector.java:260)
at java.beans.Introspector.<init>(Introspector.java:407)
at java.beans.Introspector.getBeanInfo(Introspector.java:173)
at 

[JENKINS] Lucene-Solr-Tests-7.x - Build # 341 - Still Unstable

2018-01-29 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-7.x/341/

2 tests failed.
FAILED:  
org.apache.solr.cloud.autoscaling.TriggerIntegrationTest.testMetricTrigger

Error Message:


Stack Trace:
java.lang.AssertionError
at 
__randomizedtesting.SeedInfo.seed([29181B033C1F277C:93142C8C63F7F133]:0)
at org.junit.Assert.fail(Assert.java:92)
at org.junit.Assert.assertTrue(Assert.java:43)
at org.junit.Assert.assertNull(Assert.java:551)
at org.junit.Assert.assertNull(Assert.java:562)
at 
org.apache.solr.cloud.autoscaling.TriggerIntegrationTest.testMetricTrigger(TriggerIntegrationTest.java:1575)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:748)


FAILED:  org.apache.solr.handler.admin.AutoscalingHistoryHandlerTest.testHistory

Error Message:
expected:<8> but was:<10>

Stack Trace:
java.lang.AssertionError: expected:<8> but was:<10>
at 
__randomizedtesting.SeedInfo.seed([29181B033C1F277C:44E4BFFE8657D87B]:0)

[GitHub] lucene-solr pull request #307: SOLR-11459 Clear AddUpdateCommand#prevVersion...

2018-01-29 Thread werder06
Github user werder06 closed the pull request at:

https://github.com/apache/lucene-solr/pull/307


---




[JENKINS] Lucene-Solr-7.x-Linux (64bit/jdk-9.0.1) - Build # 1264 - Unstable!

2018-01-29 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Linux/1264/
Java: 64bit/jdk-9.0.1 -XX:-UseCompressedOops -XX:+UseSerialGC

2 tests failed.
FAILED:  
org.apache.solr.cloud.autoscaling.ComputePlanActionTest.testNodeWithMultipleReplicasLost

Error Message:
The operations computed by ComputePlanAction should not be null 
SolrClientNodeStateProvider.DEBUG{AFTER_ACTION=[compute_plan, null], 
BEFORE_ACTION=[compute_plan, null]}

Stack Trace:
java.lang.AssertionError: The operations computed by ComputePlanAction should 
not be null SolrClientNodeStateProvider.DEBUG{AFTER_ACTION=[compute_plan, 
null], BEFORE_ACTION=[compute_plan, null]}
at 
__randomizedtesting.SeedInfo.seed([8FA1EBE83DCC545B:BF610A6AB5BEB507]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at org.junit.Assert.assertNotNull(Assert.java:526)
at 
org.apache.solr.cloud.autoscaling.ComputePlanActionTest.testNodeWithMultipleReplicasLost(ComputePlanActionTest.java:291)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:564)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[jira] [Commented] (SOLR-11900) API command to delete oldest collections in a time routed alias

2018-01-29 Thread David Smiley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11900?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16344109#comment-16344109
 ] 

David Smiley commented on SOLR-11900:
-

The attached patch uses the idea above, and is mostly done.  The main thing 
left is to add an alias metadata flag to control this, defaulting to false.  
Suggested: "deleteQueryDeletesCollections".  I'm not sure whether to also 
pass through the delete query as a normal query as well... there are timezone 
distinctions, since a NOW/MONTH in this code I added will use 
the TZ from the alias metadata, but the delete query against Solr will use the 
TZ parameter sent in the update request.  (P.S. I believe there is another 
issue about tlog replay not serializing the update request params.)  So that's 
not nice.  Maybe I'm stubbornly latching onto this idea and I ought to instead 
make yet another conventional SolrCloud collections API request.  
DELETEROUTEDALIASCOLLECTION?  Ugh.

It'd be interesting to see what happens if the incoming delete request flows 
into the oldest collection.  It will try to delete itself.  Does that 
work? I'm guessing it would, albeit with a timeout error.  If it doesn't, is it 
a big deal? I don't think so, since an incoming request to the alias will always 
route to the first collection ("soonest"), and that one is not deletable by 
this code.
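A rough sketch of the retention rule described in this issue (delete collections whose whole time window falls before the computed cutoff, but always keep the most recent one) might look like the following. The `Coll` pair and the pre-parsed cutoff are assumptions for illustration; the real code would first resolve Solr date math like "NOW/DAY-90DAYS" and the alias metadata:

```java
import java.time.Instant;
import java.util.ArrayList;
import java.util.List;

public class RoutedAliasPruner {
    /** Hypothetical pair: a routed collection and the start of its time window. */
    record Coll(String name, Instant windowStart) {}

    /**
     * Returns the collections eligible for deletion: those whose whole window
     * ends at or before the cutoff. A collection overlapping the cutoff stays,
     * and the most recent collection is always retained, so the alias never
     * becomes empty.
     *
     * @param colls collections sorted oldest-first; each window ends where the next begins
     */
    static List<String> deletable(List<Coll> colls, Instant before) {
        List<String> out = new ArrayList<>();
        // stop before the last collection: it must always be retained
        for (int i = 0; i < colls.size() - 1; i++) {
            Instant windowEnd = colls.get(i + 1).windowStart();
            if (!windowEnd.isAfter(before)) { // window entirely before the cutoff
                out.add(colls.get(i).name());
            } else {
                break; // sorted oldest-first, so nothing later can qualify
            }
        }
        return out;
    }
}
```

With monthly collections and a cutoff in the middle of the second month, only the first collection's window is entirely past the cutoff, so only it is returned.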

> API command to delete oldest collections in a time routed alias
> ---
>
> Key: SOLR-11900
> URL: https://issues.apache.org/jira/browse/SOLR-11900
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Reporter: David Smiley
>Assignee: David Smiley
>Priority: Major
> Fix For: 7.3
>
> Attachments: SOLR-11900.patch
>
>
> For Time Routed Aliases, we'll need an API command to delete the oldest 
> collection(s).  Perhaps the command action name is 
> DELETE_COLLECTION_OF_ROUTED_ALIAS (yes that's long).  And input is of course 
> the routed alias name, plus a mandatory "before" which is a standard time 
> input that Solr accepts that will likely include date math.  Thus if you used 
> before="NOW/DAY-90DAYS" then you're guaranteed to have the last 90 days' worth 
> of data.  If a collection overlaps past what "before" is computed to be then 
> it needs to stay.  The pattern might match any number of collections, perhaps 
> none.  But in all cases, the most recent collection must be retained -- the 
> time routed aliases must at all times refer to at least one collection.
> The underlying steps will be to first update the alias, and then delete the 
> collection(s).  It ought to return the collections that get deleted.






[jira] [Commented] (SOLR-11767) Please create SolrCloud Helm Chart or Controller for Kubernetes

2018-01-29 Thread Keith Laban (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11767?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16344104#comment-16344104
 ] 

Keith Laban commented on SOLR-11767:


Hey Rodney, building a CRD for Solr is something I've been investigating for a 
while. I've built some small POCs, which are far from production ready. For us, 
we use bare metal and local storage, and unfortunately the state of local 
storage in Kubernetes just isn't there yet, although there are some things in 
the works that [look 
exciting|https://github.com/kubernetes/features/issues/490#issuecomment-359508997].

This could potentially work if you use network storage, as that is slightly 
more solved for in the Kubernetes world; unfortunately, that won't work for us 
and isn't a path we researched. In my original POC I built a CRD controller 
that used empty-dir with the idea of deploying ephemeral sandboxes of 
SolrCloud, again not really a production solution.

> Please create SolrCloud Helm Chart or Controller for Kubernetes
> ---
>
> Key: SOLR-11767
> URL: https://issues.apache.org/jira/browse/SOLR-11767
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: AutoScaling
>Affects Versions: 7.1
> Environment: Azure AKS, On-Prem Kubernetes 1.8
>Reporter: Rodney Aaron Stainback
>Priority: Blocker
>   Original Estimate: 168h
>  Remaining Estimate: 168h
>
> Please create a highly available, auto-scaling Kubernetes Helm Chart or 
> Controller/Custom Resource for easy deployment of SolrCloud in Kubernetes in 
> any environment.  Thanks.






[jira] [Commented] (SOLR-11838) explore supporting Deeplearning4j NeuralNetwork models

2018-01-29 Thread Adam Gibson (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11838?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16344095#comment-16344095
 ] 

Adam Gibson commented on SOLR-11838:


[~cpoerschke] Yes, we would be more than glad to accommodate you here. 

That model import module is mainly for Keras import. In ND4J (our tensor 
library) we are working on supporting both the ONNX file format 
([http://onnx.ai|http://onnx.ai%28/]) and the TensorFlow format. 

I think [~jeroens] has the right idea re: DL4J. I can say that it has special 
requirements that are pretty atypical of the infrastructure Solr typically 
runs on. This is especially true with batch inference.

 

[~joel.bernstein] SOMs are far from interesting in 2018. DL4J has Variational 
AutoEncoders, which are both the cutting edge right before GANs and more 
stable. Variational AutoEncoders, with their built-in reconstruction mechanic, 
could also be used to track anomalies or rank results right in Solr. We use 
this workflow mainly for time series data, but since Solr is a ranking 
engine it makes a ton of sense here.

Re: time series/LSTMs. The vectors themselves are 3D, but are created from a 
time dimension with a specific window length. We typically create this with 
what we call a SequenceRecordReader, which can take varying file inputs and 
infer a time series length based on the input. For Solr, a timestamp field 
could be used to create the proper 3D vectors pretty easily. 

 

Re: DataVec record reader. We are happy to take pull requests. It is definitely 
of interest, thanks!
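The windowing just described (turning a flat timestamped series into 3D input) can be illustrated without ND4J. This is only a shape sketch under an assumed [timeStep][feature] input layout; DL4J's actual SequenceRecordReader works from file inputs, not in-memory arrays:

```java
import java.util.ArrayList;
import java.util.List;

public class TimeWindows {
    /**
     * Slices a [timeStep][feature] series into overlapping fixed-length
     * windows laid out as [feature][timeStep], i.e. one 2D slice per example
     * of the 3D input a recurrent model such as an LSTM consumes.
     */
    static List<double[][]> slidingWindows(double[][] series, int windowLen) {
        List<double[][]> windows = new ArrayList<>();
        int numFeatures = series[0].length;
        for (int start = 0; start + windowLen <= series.length; start++) {
            double[][] w = new double[numFeatures][windowLen];
            for (int t = 0; t < windowLen; t++) {
                for (int f = 0; f < numFeatures; f++) {
                    w[f][t] = series[start + t][f]; // transpose into [feature][time]
                }
            }
            windows.add(w);
        }
        return windows;
    }
}
```

A timestamp field would only be needed to order the rows before slicing; after that the window length alone determines the third dimension.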

> explore supporting Deeplearning4j NeuralNetwork models
> --
>
> Key: SOLR-11838
> URL: https://issues.apache.org/jira/browse/SOLR-11838
> Project: Solr
>  Issue Type: New Feature
>Reporter: Christine Poerschke
>Priority: Major
> Attachments: SOLR-11838.patch, SOLR-11838.patch
>
>
> [~yuyano] wrote in SOLR-11597:
> bq. ... If we think to apply this to more complex neural networks in the 
> future, we will need to support layers ...
> [~malcorn_redhat] wrote in SOLR-11597:
> bq. ... In my opinion, if this is a route Solr eventually wants to go, I 
> think a better strategy would be to just add a dependency on 
> [Deeplearning4j|https://deeplearning4j.org/] ...
> Creating this ticket for the idea to be explored further (if anyone is 
> interested in exploring it), complimentary to and independent of the 
> SOLR-11597 RankNet related effort.






[jira] [Updated] (SOLR-11900) API command to delete oldest collections in a time routed alias

2018-01-29 Thread David Smiley (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11900?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Smiley updated SOLR-11900:

Attachment: SOLR-11900.patch

> API command to delete oldest collections in a time routed alias
> ---
>
> Key: SOLR-11900
> URL: https://issues.apache.org/jira/browse/SOLR-11900
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Reporter: David Smiley
>Assignee: David Smiley
>Priority: Major
> Fix For: 7.3
>
> Attachments: SOLR-11900.patch
>
>
> For Time Routed Aliases, we'll need an API command to delete the oldest 
> collection(s).  Perhaps the command action name is 
> DELETE_COLLECTION_OF_ROUTED_ALIAS (yes that's long).  And input is of course 
> the routed alias name, plus a mandatory "before" which is a standard time 
> input that Solr accepts that will likely include date math.  Thus if you used 
> before="NOW/DAY-90DAYS" then you're guaranteed to have the last 90 days' worth 
> of data.  If a collection overlaps past what "before" is computed to be then 
> it needs to stay.  The pattern might match any number of collections, perhaps 
> none.  But in all cases, the most recent collection must be retained -- the 
> time routed aliases must at all times refer to at least one collection.
> The underlying steps will be to first update the alias, and then delete the 
> collection(s).  It ought to return the collections that get deleted.






[GitHub] lucene-solr pull request #313: SOLR-11924: Added a way to create collection ...

2018-01-29 Thread HoustonPutman
GitHub user HoustonPutman opened a pull request:

https://github.com/apache/lucene-solr/pull/313

SOLR-11924: Added a way to create collection set watchers in ZkStateReader.



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/HoustonPutman/lucene-solr 
collection-set-watchers

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/lucene-solr/pull/313.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #313


commit be1be01d5eb1bf0a69eda65ad60f46442c366950
Author: Houston Putman 
Date:   2018-01-29T20:29:06Z

Added a way to create collection set watchers to the ZkStateReader.




---




[jira] [Created] (SOLR-11924) Add the ability to watch collection set changes in ZkStateReader

2018-01-29 Thread Houston Putman (JIRA)
Houston Putman created SOLR-11924:
-

 Summary: Add the ability to watch collection set changes in 
ZkStateReader
 Key: SOLR-11924
 URL: https://issues.apache.org/jira/browse/SOLR-11924
 Project: Solr
  Issue Type: New Feature
  Security Level: Public (Default Security Level. Issues are Public)
  Components: SolrJ
Affects Versions: master (8.0), 7.3
Reporter: Houston Putman


Allow users to watch when the set of collections for a cluster is changed. This 
is useful if a user is trying to discover collections within a cloud.
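A dependency-free sketch of the watcher pattern this issue describes (the real 
ZkStateReader API may differ; all names here are illustrative): listeners are 
fired once with the current set on registration, and again whenever the set of 
collections actually changes.

```java
import java.util.Collections;
import java.util.HashSet;
import java.util.List;
import java.util.Set;
import java.util.concurrent.CopyOnWriteArrayList;

// Illustrative listener interface: fired with the full, immutable set of
// collection names each time that set changes.
interface CollectionSetListener {
    void onChange(Set<String> collections);
}

class CollectionSetNotifier {
    private final List<CollectionSetListener> listeners = new CopyOnWriteArrayList<>();
    private volatile Set<String> current = Collections.emptySet();

    void register(CollectionSetListener l) {
        l.onChange(current);   // fire immediately with the current state
        listeners.add(l);
    }

    // Would be called when the children of the collections znode change.
    void update(Set<String> fresh) {
        Set<String> snapshot = Collections.unmodifiableSet(new HashSet<>(fresh));
        if (!snapshot.equals(current)) {   // suppress spurious notifications
            current = snapshot;
            for (CollectionSetListener l : listeners) {
                l.onChange(snapshot);
            }
        }
    }
}
```

Firing on registration mirrors how existing ZkStateReader watchers behave, so 
callers discovering collections within a cloud do not miss the initial state.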






[JENKINS] Lucene-Solr-7.x-MacOSX (64bit/jdk-9) - Build # 427 - Still unstable!

2018-01-29 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-MacOSX/427/
Java: 64bit/jdk-9 -XX:+UseCompressedOops -XX:+UseG1GC

1 tests failed.
FAILED:  org.apache.solr.cloud.FullSolrCloudDistribCmdsTest.test

Error Message:
Could not find collection:collection2

Stack Trace:
java.lang.AssertionError: Could not find collection:collection2
at 
__randomizedtesting.SeedInfo.seed([C0FA8D280AE1991C:48AEB2F2A41DF4E4]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at org.junit.Assert.assertNotNull(Assert.java:526)
at 
org.apache.solr.cloud.AbstractDistribZkTestBase.waitForRecoveriesToFinish(AbstractDistribZkTestBase.java:155)
at 
org.apache.solr.cloud.AbstractDistribZkTestBase.waitForRecoveriesToFinish(AbstractDistribZkTestBase.java:140)
at 
org.apache.solr.cloud.AbstractDistribZkTestBase.waitForRecoveriesToFinish(AbstractDistribZkTestBase.java:135)
at 
org.apache.solr.cloud.AbstractFullDistribZkTestBase.waitForRecoveriesToFinish(AbstractFullDistribZkTestBase.java:915)
at 
org.apache.solr.cloud.FullSolrCloudDistribCmdsTest.testIndexingBatchPerRequestWithHttpSolrClient(FullSolrCloudDistribCmdsTest.java:612)
at 
org.apache.solr.cloud.FullSolrCloudDistribCmdsTest.test(FullSolrCloudDistribCmdsTest.java:152)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:564)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:993)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:968)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[jira] [Commented] (SOLR-10786) Add DBSCAN clustering Streaming Evaluator

2018-01-29 Thread dan v (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10786?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16344043#comment-16344043
 ] 

dan v commented on SOLR-10786:
--

Can one of them suit the DBSCAN use case when we want to do spatial clustering 
given lon and lat?
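For intuition, the core DBSCAN loop can be sketched in a few dozen lines (an 
illustrative toy, not Solr's or Commons Math's implementation; note that plain 
Euclidean distance on raw lon/lat is only a rough approximation, and a haversine 
distance would be more appropriate for real geo data).

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.List;

// Toy DBSCAN: points with at least minPts neighbors within eps seed clusters;
// reachable points join them; everything else is noise. (minPts here counts
// neighbors excluding the point itself, a simplification.)
class MiniDbscan {
    static List<List<double[]>> cluster(List<double[]> pts, double eps, int minPts) {
        int n = pts.size();
        int[] label = new int[n];   // 0 = unvisited, -1 = noise, >0 = cluster id
        int clusterId = 0;
        for (int i = 0; i < n; i++) {
            if (label[i] != 0) continue;
            List<Integer> seeds = neighbors(pts, i, eps);
            if (seeds.size() < minPts) { label[i] = -1; continue; }  // noise for now
            label[i] = ++clusterId;          // new cluster seeded by this core point
            Deque<Integer> queue = new ArrayDeque<>(seeds);
            while (!queue.isEmpty()) {
                int j = queue.poll();
                if (label[j] == -1) label[j] = clusterId;  // noise becomes a border point
                if (label[j] != 0) continue;               // already claimed
                label[j] = clusterId;
                List<Integer> more = neighbors(pts, j, eps);
                if (more.size() >= minPts) queue.addAll(more);  // core point: expand
            }
        }
        List<List<double[]>> out = new ArrayList<>();
        for (int c = 0; c < clusterId; c++) out.add(new ArrayList<>());
        for (int i = 0; i < n; i++) {
            if (label[i] > 0) out.get(label[i] - 1).add(pts.get(i));
        }
        return out;
    }

    /** Indices of points within eps of point i, excluding i itself. */
    static List<Integer> neighbors(List<double[]> pts, int i, double eps) {
        List<Integer> r = new ArrayList<>();
        for (int j = 0; j < pts.size(); j++) {
            double dx = pts.get(i)[0] - pts.get(j)[0];
            double dy = pts.get(i)[1] - pts.get(j)[1];
            if (i != j && Math.sqrt(dx * dx + dy * dy) <= eps) r.add(j);
        }
        return r;
    }
}
```

The naive all-pairs neighbor scan is O(n^2), which is one reason DBSCAN can feel 
painfully slow next to kmeans++ without a spatial index.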

> Add DBSCAN clustering Streaming Evaluator
> -
>
> Key: SOLR-10786
> URL: https://issues.apache.org/jira/browse/SOLR-10786
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
>Priority: Major
> Fix For: master (8.0), 7.3
>
> Attachments: SOLR-10786.patch, SOLR-10786.patch, SOLR-10786.patch
>
>
> The DBSCAN clustering Stream Evaluator will cluster numeric vectors using the 
> DBSCAN clustering algorithm.
> Clustering implementation will be provided by Apache Commons Math.






[jira] [Comment Edited] (SOLR-10786) Add DBSCAN clustering Streaming Evaluator

2018-01-29 Thread Joel Bernstein (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10786?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16344035#comment-16344035
 ] 

Joel Bernstein edited comment on SOLR-10786 at 1/29/18 9:14 PM:


Solr 7.3 has kmeans++ and fuzzyKmeans clustering. But DBSCAN clustering was 
just too slow I thought to be useful for Solr users. I will try it again 
sometime to see if it was just how I put things together, but it was painfully 
slow compared to kmeans++.

eigen and singular value decompositions are planned for Solr 7.4, so other 
clustering techniques such as PCA and LSA are on the way.

 


was (Author: joel.bernstein):
Solr 7.3 has kmeans++ and fuzzyKmeans clustering. But DBSCAN clustering just to 
slow I thought to be useful for the Solr users. I will try it again sometime to 
see if it was just how I put things together, but it was painfully slow 
compared to kmeans.

eigen and singular value decomposition are planned of for Solr 7.4, so other 
clustering techniques such as PCA and LSA are on the way.

 

> Add DBSCAN clustering Streaming Evaluator
> -
>
> Key: SOLR-10786
> URL: https://issues.apache.org/jira/browse/SOLR-10786
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
>Priority: Major
> Fix For: master (8.0), 7.3
>
> Attachments: SOLR-10786.patch, SOLR-10786.patch, SOLR-10786.patch
>
>
> The DBSCAN clustering Stream Evaluator will cluster numeric vectors using the 
> DBSCAN clustering algorithm.
> Clustering implementation will be provided by Apache Commons Math.






[jira] [Comment Edited] (SOLR-10786) Add DBSCAN clustering Streaming Evaluator

2018-01-29 Thread Joel Bernstein (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10786?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16344035#comment-16344035
 ] 

Joel Bernstein edited comment on SOLR-10786 at 1/29/18 9:14 PM:


Solr 7.3 has kmeans++ and fuzzyKmeans clustering. But DBSCAN clustering was 
just too slow, I thought, to be useful for Solr users. I will try it again 
sometime to see if it was just how I put things together, but it was painfully 
slow compared to kmeans++.

eigen and singular value decompositions are planned for Solr 7.4, so other 
clustering techniques such as PCA and LSA are on the way.

 


was (Author: joel.bernstein):
Solr 7.3 has kmeans++ and fuzzyKmeans clustering. But DBSCAN clustering was 
just too slow I thought to be useful for Solr users. I will try it again 
sometime to see if it was just how I put things together, but it was painfully 
slow compared to kmeans++.

eigen and singular value decompositions are planned for Solr 7.4, so other 
clustering techniques such as PCA and LSA are on the way.

 

> Add DBSCAN clustering Streaming Evaluator
> -
>
> Key: SOLR-10786
> URL: https://issues.apache.org/jira/browse/SOLR-10786
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
>Priority: Major
> Fix For: master (8.0), 7.3
>
> Attachments: SOLR-10786.patch, SOLR-10786.patch, SOLR-10786.patch
>
>
> The DBSCAN clustering Stream Evaluator will cluster numeric vectors using the 
> DBSCAN clustering algorithm.
> Clustering implementation will be provided by Apache Commons Math.






[jira] [Comment Edited] (SOLR-10786) Add DBSCAN clustering Streaming Evaluator

2018-01-29 Thread Joel Bernstein (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10786?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16344035#comment-16344035
 ] 

Joel Bernstein edited comment on SOLR-10786 at 1/29/18 9:13 PM:


Solr 7.3 has kmeans++ and fuzzyKmeans clustering. But DBSCAN clustering was 
just too slow, I thought, to be useful for Solr users. I will try it again 
sometime to see if it was just how I put things together, but it was painfully 
slow compared to kmeans++.

eigen and singular value decompositions are planned for Solr 7.4, so other 
clustering techniques such as PCA and LSA are on the way.

 


was (Author: joel.bernstein):
Solr 7.3 has kmeans++, fuzzyKmeans clustering. But DBSCAN clustering just to 
slow I thought to be useful for the Solr users. I will try it again sometime to 
see if it was just how I put things together, but it was painfully slow 
compared to kmeans.

> Add DBSCAN clustering Streaming Evaluator
> -
>
> Key: SOLR-10786
> URL: https://issues.apache.org/jira/browse/SOLR-10786
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
>Priority: Major
> Fix For: master (8.0), 7.3
>
> Attachments: SOLR-10786.patch, SOLR-10786.patch, SOLR-10786.patch
>
>
> The DBSCAN clustering Stream Evaluator will cluster numeric vectors using the 
> DBSCAN clustering algorithm.
> Clustering implementation will be provided by Apache Commons Math.






[jira] [Commented] (SOLR-10786) Add DBSCAN clustering Streaming Evaluator

2018-01-29 Thread Joel Bernstein (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10786?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16344035#comment-16344035
 ] 

Joel Bernstein commented on SOLR-10786:
---

Solr 7.3 has kmeans++ and fuzzyKmeans clustering. But DBSCAN clustering was 
just too slow, I thought, to be useful for Solr users. I will try it again 
sometime to see if it was just how I put things together, but it was painfully 
slow compared to kmeans++.

> Add DBSCAN clustering Streaming Evaluator
> -
>
> Key: SOLR-10786
> URL: https://issues.apache.org/jira/browse/SOLR-10786
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
>Priority: Major
> Fix For: master (8.0), 7.3
>
> Attachments: SOLR-10786.patch, SOLR-10786.patch, SOLR-10786.patch
>
>
> The DBSCAN clustering Stream Evaluator will cluster numeric vectors using the 
> DBSCAN clustering algorithm.
> Clustering implementation will be provided by Apache Commons Math.






[jira] [Created] (SOLR-11923) Add bicubicSpline Stream Evaluator

2018-01-29 Thread Joel Bernstein (JIRA)
Joel Bernstein created SOLR-11923:
-

 Summary: Add bicubicSpline Stream Evaluator
 Key: SOLR-11923
 URL: https://issues.apache.org/jira/browse/SOLR-11923
 Project: Solr
  Issue Type: New Feature
  Security Level: Public (Default Security Level. Issues are Public)
Reporter: Joel Bernstein


This ticket adds the *bicubicSpline* Stream Evaluator to support computing a 
bicubic spline over a matrix.

Implementation provided by Apache Commons Math.
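As an illustration of bicubic interpolation over a matrix, here is a minimal 
Catmull-Rom-based sketch (the actual evaluator would delegate to Apache Commons 
Math, whose spline construction differs; the class below is hypothetical).

```java
// Catmull-Rom bicubic interpolation sketch: interpolate four rows along y,
// then interpolate those four results along x.
class BicubicSketch {
    /** Catmull-Rom cubic through p1..p2 with neighbors p0, p3, at t in [0, 1]. */
    static double cubic(double p0, double p1, double p2, double p3, double t) {
        return 0.5 * ((2 * p1)
            + (-p0 + p2) * t
            + (2 * p0 - 5 * p1 + 4 * p2 - p3) * t * t
            + (-p0 + 3 * p1 - 3 * p2 + p3) * t * t * t);
    }

    /** Interpolate m at fractional (x, y); valid for 1 <= x, y < size - 2. */
    static double bicubic(double[][] m, double x, double y) {
        int xi = (int) Math.floor(x), yi = (int) Math.floor(y);
        double tx = x - xi, ty = y - yi;
        double[] col = new double[4];
        for (int i = -1; i <= 2; i++) {
            // interpolate row xi + i along the y direction
            col[i + 1] = cubic(m[xi + i][yi - 1], m[xi + i][yi],
                               m[xi + i][yi + 1], m[xi + i][yi + 2], ty);
        }
        // then interpolate the four row results along the x direction
        return cubic(col[0], col[1], col[2], col[3], tx);
    }
}
```

At the grid points themselves the interpolant reproduces the matrix values 
exactly, and it has linear precision, which makes it easy to sanity-check.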






[JENKINS-EA] Lucene-Solr-master-Linux (64bit/jdk-10-ea+41) - Build # 21365 - Unstable!

2018-01-29 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/21365/
Java: 64bit/jdk-10-ea+41 -XX:+UseCompressedOops -XX:+UseParallelGC

1 tests failed.
FAILED:  org.apache.solr.cloud.TestUtilizeNode.test

Error Message:
no replica should be present in  127.0.0.1:34113_solr

Stack Trace:
java.lang.AssertionError: no replica should be present in  127.0.0.1:34113_solr
at 
__randomizedtesting.SeedInfo.seed([1E50B15B86223A78:96048E8128DE5780]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at org.apache.solr.cloud.TestUtilizeNode.test(TestUtilizeNode.java:99)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:564)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.base/java.lang.Thread.run(Thread.java:844)




Build Log:
[...truncated 13480 lines...]
   [junit4] Suite: org.apache.solr.cloud.TestUtilizeNode
   [junit4]   2> Creating dataDir: 
/home/jenkins/workspace/Lucene-Solr-master-Linux/solr/build/solr-core/test/J1/temp/solr.cloud.TestUtilizeNode_1E50B15B86223A78-001/init-core-data-001
   [junit4]   2> 

[jira] [Commented] (SOLR-11766) Ref Guide: redesign Streaming Expression reference pages

2018-01-29 Thread Joel Bernstein (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11766?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16344006#comment-16344006
 ] 

Joel Bernstein commented on SOLR-11766:
---

I've started work on the 7.3 documentation using the existing format. I should 
be finished this week. We can think about possibly deploying a new format for 
7.4.

> Ref Guide: redesign Streaming Expression reference pages
> 
>
> Key: SOLR-11766
> URL: https://issues.apache.org/jira/browse/SOLR-11766
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: documentation, streaming expressions
>Reporter: Cassandra Targett
>Assignee: Cassandra Targett
>Priority: Major
> Attachments: Stream-collapsed-panels.png, StreamQuickRef-sample.png, 
> Streaming-expanded-panel.png
>
>
> There are a very large number of streaming expressions and they need some 
> special info design to be more easily accessible. The current way we're 
> presenting them doesn't really work. This issue is to track ideas and POC 
> patches for possible approaches.
> A couple of ideas I have, which may or may not all work together:
> # Provide a way to filter the list of commands by expression type (would need 
> to figure out the types)
> # Present the available expressions in smaller sections, similar in UX 
> concept to https://redis.io/commands. On that page, I can see 9-12 commands 
> above "the fold" on my laptop screen, as compared to today when I can see 
> only 1 expression at a time & each expression probably takes more space than 
> necessary. This idea would require figuring out where people go when they 
> click a command to get more information.
> ## One solution for where people go is to put all the commands back in one 
> massive page, but this isn't really ideal
> ## Another solution would be to have an individual .adoc file for each 
> expression and present them all individually.
> # Some of the Bootstrap.js options may help - collapsing panels or tabs, if 
> properly designed, may make it easier to see an overview of available 
> expressions and get more information if interested.
> I'll post more ideas as I come up with them.
> These ideas focus on the HTML layout of expressions - ideally we come up with 
> a solution for PDF that's better also, but we are much more limited in what 
> we can do there.






[jira] [Comment Edited] (SOLR-11892) Avoid unnecessary exceptions in FSDirectory and RAMDirectory

2018-01-29 Thread hamada (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11892?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16343976#comment-16343976
 ] 

hamada edited comment on SOLR-11892 at 1/29/18 8:42 PM:


specifically, the code of interest in RAMDirectory where an IOException is 
thrown:

@Override
public void deleteFile(String name) throws IOException {
  ensureOpen();
  RAMFile file = fileMap.remove(name);
  if (file != null) {
    file.directory = null;
    sizeInBytes.addAndGet(-file.sizeInBytes);
  } else {
    // FIXME: there are no file operations here; this method removes the fileName
    // entry from a map, it isn't per se deleting a file!
    throw new FileNotFoundException(name);
  }
}


was (Author: hamadaca):
specifically the code of interest where an IOException is thrown :

@Override
public void deleteFile(String name) throws IOException {
 ensureOpen();
 RAMFile file = fileMap.remove(name);
 if (file != null) {
 file.directory = null;
 sizeInBytes.addAndGet(-file.sizeInBytes);
 } else {

// FIXME there are no file operations here this method removes fileName entry 
from a map, it isn't per se removing deleting a file!
 throw new FileNotFoundException(name);
 }
}

> Avoid unnecessary exceptions in FSDirectory and RAMDirectory
> 
>
> Key: SOLR-11892
> URL: https://issues.apache.org/jira/browse/SOLR-11892
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Erick Erickson
>Assignee: Erick Erickson
>Priority: Minor
> Attachments: Screen Shot 2018-01-24 at 9.09.55 PM.png, Screen Shot 
> 2018-01-24 at 9.10.47 PM.png
>
>
> In privateDeleteFile, just use deleteIfExists.
> In RAMDirectory we can declare a static exception and create it once.
>  
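The two ideas in the description can be sketched as follows (illustrative only, 
not the committed patch; the map-backed deleteFromMap stands in for the real 
RAMDirectory logic): delegate deletion to Files.deleteIfExists, and reuse a 
single pre-built exception whose stack trace is suppressed since callers never 
inspect it.

```java
import java.io.FileNotFoundException;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Map;

class StaticExceptionSketch {
    // Built once; filling in the stack trace is skipped because callers only
    // use the exception as a "not found" signal, never its trace.
    private static final FileNotFoundException FILE_NOT_FOUND =
        new FileNotFoundException("file does not exist") {
            @Override public synchronized Throwable fillInStackTrace() { return this; }
        };

    /** Stand-in for the RAMDirectory case: remove a map entry or signal absence. */
    static void deleteFromMap(Map<String, byte[]> fileMap, String name)
            throws FileNotFoundException {
        if (fileMap.remove(name) == null) {
            throw FILE_NOT_FOUND;   // no allocation, no stack walk
        }
    }

    /** The FSDirectory idea: no exception at all when the file is already gone. */
    static void privateDeleteFile(Path p) throws IOException {
        Files.deleteIfExists(p);
    }
}
```

The trade-off of a shared exception is that its (empty) stack trace is 
meaningless, so this pattern only suits hot paths where the exception is purely 
a control signal.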






[jira] [Commented] (SOLR-11892) Avoid unnecessary exceptions in FSDirectory and RAMDirectory

2018-01-29 Thread hamada (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11892?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16343976#comment-16343976
 ] 

hamada commented on SOLR-11892:
---

specifically, the code of interest where an IOException is thrown:

@Override
public void deleteFile(String name) throws IOException {
  ensureOpen();
  RAMFile file = fileMap.remove(name);
  if (file != null) {
    file.directory = null;
    sizeInBytes.addAndGet(-file.sizeInBytes);
  } else {
    // FIXME: there are no file operations here; this method removes the fileName
    // entry from a map, it isn't per se deleting a file!
    throw new FileNotFoundException(name);
  }
}

> Avoid unnecessary exceptions in FSDirectory and RAMDirectory
> 
>
> Key: SOLR-11892
> URL: https://issues.apache.org/jira/browse/SOLR-11892
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Erick Erickson
>Assignee: Erick Erickson
>Priority: Minor
> Attachments: Screen Shot 2018-01-24 at 9.09.55 PM.png, Screen Shot 
> 2018-01-24 at 9.10.47 PM.png
>
>
> In privateDeleteFile, just use deleteIfExists.
> In RAMDirectory we can declare a static exception and create it once.
>  






[jira] [Comment Edited] (SOLR-11838) explore supporting Deeplearning4j NeuralNetwork models

2018-01-29 Thread Christine Poerschke (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11838?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16343959#comment-16343959
 ] 

Christine Poerschke edited comment on SOLR-11838 at 1/29/18 8:29 PM:
-

A little bit with the dependency tree question in mind, but also generally, 
just throwing this out there ...

For the use cases of using a model trained and saved elsewhere for [Learning To 
Rank|https://lucene.apache.org/solr/guide/7_2/learning-to-rank.html] or 
[ClassificationUpdateProcessorFactory|https://lucene.apache.org/solr/7_2_0//solr-core/org/apache/solr/update/processor/ClassificationUpdateProcessorFactory.html]
 purposes - where could that code comfortably live, need it necessarily be 
within Apache Solr?

[~agibsonccc] - I noticed there's a 
[deeplearning4j-modelimport|https://github.com/deeplearning4j/deeplearning4j/tree/master/deeplearning4j-modelimport]
 module, might there be scope for a {{deeplearning4j-modelexport}} or 
{{deeplearning4j-modelexport-solr}} module? (Happy to open a deeplearning4j 
issue or pull request if here is not the best place to pose that question.)

Attached an illustrative patch: Solr LTR itself would contain the generic 
AdapterModel, and what is the 
{{sandbox/model-consumer/src/main/java/please/replace/me/MultiLayerNetworkLTRScoringModel.java}}
 in the patch could in principle (with javadocs and of course tests) live 
elsewhere.
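The adapter split described above can be sketched without any DL4J or Solr 
dependencies (all names here are illustrative, not the actual Solr LTR API): 
Solr would own the generic adapter, while the network-specific scoring function 
lives outside the Solr codebase.

```java
import java.util.function.Function;

// Illustrative stand-in for an LTR scoring model contract.
interface ScoringModel {
    float score(float[] features);
}

// The generic adapter Solr would ship: it knows nothing about DL4J and simply
// delegates scoring to a pluggable function, e.g. one wrapping a
// MultiLayerNetwork's forward pass, supplied by external code.
class AdapterModel implements ScoringModel {
    private final Function<float[], Float> delegate;

    AdapterModel(Function<float[], Float> delegate) {
        this.delegate = delegate;
    }

    @Override
    public float score(float[] features) {
        return delegate.apply(features);
    }
}
```

With this split, only the module supplying the delegate needs the heavy 
deeplearning4j dependency tree, which is the concern raised above.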


was (Author: cpoerschke):
A little bit with the dependency tree question in mind, but also generally, 
just throwing this out there ...

For the use cases of using a model trained and saved elsewhere for [Learning To 
Rank|https://lucene.apache.org/solr/guide/7_2/learning-to-rank.html] or 
[ClassificationUpdateProcessorFactory|https://lucene.apache.org/solr/7_2_0//solr-core/org/apache/solr/update/processor/ClassificationUpdateProcessorFactory.html]
 purposes - where could that code comfortably live, need it necessarily be 
within Apache Solr?

[~agibsonccc] - I noticed there's a 
[deeplearning4j-modelimport|https://github.com/deeplearning4j/deeplearning4j/tree/master/deeplearning4j-modelimport]
 model, might there be scope for a {{deeplearning4j-modelexport}} or 
{{deeplearning4j-modelexport-solr}} module? (Happy to open a deeplearning4j 
issue or pull request if here is not the best place to pose that question.)

Attached illustrative patch, Solr LTR itself would contain the generic 
AdapterModel and what is the 
{{sandbox/model-consumer/src/main/java/please/replace/me/MultiLayerNetworkLTRScoringModel.java}}
 in the patch in principle could (with javadocs and of course tests) live 
elsewhere.

> explore supporting Deeplearning4j NeuralNetwork models
> --
>
> Key: SOLR-11838
> URL: https://issues.apache.org/jira/browse/SOLR-11838
> Project: Solr
>  Issue Type: New Feature
>Reporter: Christine Poerschke
>Priority: Major
> Attachments: SOLR-11838.patch, SOLR-11838.patch
>
>
> [~yuyano] wrote in SOLR-11597:
> bq. ... If we think to apply this to more complex neural networks in the 
> future, we will need to support layers ...
> [~malcorn_redhat] wrote in SOLR-11597:
> bq. ... In my opinion, if this is a route Solr eventually wants to go, I 
> think a better strategy would be to just add a dependency on 
> [Deeplearning4j|https://deeplearning4j.org/] ...
> Creating this ticket for the idea to be explored further (if anyone is 
> interested in exploring it), complementary to and independent of the 
> SOLR-11597 RankNet related effort.






[jira] [Commented] (SOLR-11838) explore supporting Deeplearning4j NeuralNetwork models

2018-01-29 Thread Christine Poerschke (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11838?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16343959#comment-16343959
 ] 

Christine Poerschke commented on SOLR-11838:


A little bit with the dependency tree question in mind, but also generally, 
just throwing this out there ...

For the use cases of using a model trained and saved elsewhere for [Learning To 
Rank|https://lucene.apache.org/solr/guide/7_2/learning-to-rank.html] or 
[ClassificationUpdateProcessorFactory|https://lucene.apache.org/solr/7_2_0//solr-core/org/apache/solr/update/processor/ClassificationUpdateProcessorFactory.html]
 purposes - where could that code comfortably live, need it necessarily be 
within Apache Solr?

[~agibsonccc] - I noticed there's a 
[deeplearning4j-modelimport|https://github.com/deeplearning4j/deeplearning4j/tree/master/deeplearning4j-modelimport]
 module, might there be scope for a {{deeplearning4j-modelexport}} or 
{{deeplearning4j-modelexport-solr}} module? (Happy to open a deeplearning4j 
issue or pull request if here is not the best place to pose that question.)

Attached an illustrative patch: Solr LTR itself would contain the generic 
AdapterModel, and what is the 
{{sandbox/model-consumer/src/main/java/please/replace/me/MultiLayerNetworkLTRScoringModel.java}}
 in the patch could in principle (with javadocs and of course tests) live 
elsewhere.

> explore supporting Deeplearning4j NeuralNetwork models
> --
>
> Key: SOLR-11838
> URL: https://issues.apache.org/jira/browse/SOLR-11838
> Project: Solr
>  Issue Type: New Feature
>Reporter: Christine Poerschke
>Priority: Major
> Attachments: SOLR-11838.patch, SOLR-11838.patch
>
>
> [~yuyano] wrote in SOLR-11597:
> bq. ... If we think to apply this to more complex neural networks in the 
> future, we will need to support layers ...
> [~malcorn_redhat] wrote in SOLR-11597:
> bq. ... In my opinion, if this is a route Solr eventually wants to go, I 
> think a better strategy would be to just add a dependency on 
> [Deeplearning4j|https://deeplearning4j.org/] ...
> Creating this ticket for the idea to be explored further (if anyone is 
> interested in exploring it), complementary to and independent of the 
> SOLR-11597 RankNet related effort.






[jira] [Updated] (SOLR-11838) explore supporting Deeplearning4j NeuralNetwork models

2018-01-29 Thread Christine Poerschke (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11838?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Christine Poerschke updated SOLR-11838:
---
Attachment: SOLR-11838.patch

> explore supporting Deeplearning4j NeuralNetwork models
> --
>
> Key: SOLR-11838
> URL: https://issues.apache.org/jira/browse/SOLR-11838
> Project: Solr
>  Issue Type: New Feature
>Reporter: Christine Poerschke
>Priority: Major
> Attachments: SOLR-11838.patch, SOLR-11838.patch
>
>
> [~yuyano] wrote in SOLR-11597:
> bq. ... If we think to apply this to more complex neural networks in the 
> future, we will need to support layers ...
> [~malcorn_redhat] wrote in SOLR-11597:
> bq. ... In my opinion, if this is a route Solr eventually wants to go, I 
> think a better strategy would be to just add a dependency on 
> [Deeplearning4j|https://deeplearning4j.org/] ...
> Creating this ticket for the idea to be explored further (if anyone is 
> interested in exploring it), complementary to and independent of the 
> SOLR-11597 RankNet related effort.






[jira] [Commented] (SOLR-11658) Upgrade ZooKeeper dependency to 3.4.11

2018-01-29 Thread Erick Erickson (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11658?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16343911#comment-16343911
 ] 

Erick Erickson commented on SOLR-11658:
---

Thanks!

 

> Upgrade ZooKeeper dependency to 3.4.11
> --
>
> Key: SOLR-11658
> URL: https://issues.apache.org/jira/browse/SOLR-11658
> Project: Solr
>  Issue Type: Task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Steve Rowe
>Priority: Minor
> Fix For: 7.3
>
> Attachments: SOLR-11658-part2.patch, SOLR-11658.patch
>
>
> ZK 3.4.11 was released yesterday: 
> http://zookeeper.apache.org/doc/r3.4.11/releasenotes.html






[JENKINS] Lucene-Solr-master-MacOSX (64bit/jdk1.8.0) - Build # 4414 - Still Unstable!

2018-01-29 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-MacOSX/4414/
Java: 64bit/jdk1.8.0 -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC

4 tests failed.
FAILED:  
org.apache.solr.cloud.autoscaling.AutoAddReplicasPlanActionTest.testSimple

Error Message:
No live SolrServers available to handle this 
request:[http://127.0.0.1:60245/solr]

Stack Trace:
org.apache.solr.client.solrj.SolrServerException: No live SolrServers available 
to handle this request:[http://127.0.0.1:60245/solr]
at 
__randomizedtesting.SeedInfo.seed([4A8EF17A1C4BEB94:723DD5843BB83F45]:0)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:462)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1104)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:884)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:817)
at org.apache.solr.client.solrj.SolrClient.request(SolrClient.java:1219)
at 
org.apache.solr.cloud.autoscaling.AutoAddReplicasPlanActionTest.testSimple(AutoAddReplicasPlanActionTest.java:110)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 

[jira] [Resolved] (SOLR-11658) Upgrade ZooKeeper dependency to 3.4.11

2018-01-29 Thread Steve Rowe (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11658?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Rowe resolved SOLR-11658.
---
Resolution: Fixed

> Upgrade ZooKeeper dependency to 3.4.11
> --
>
> Key: SOLR-11658
> URL: https://issues.apache.org/jira/browse/SOLR-11658
> Project: Solr
>  Issue Type: Task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Steve Rowe
>Priority: Minor
> Fix For: 7.3
>
> Attachments: SOLR-11658-part2.patch, SOLR-11658.patch
>
>
> ZK 3.4.11 was released yesterday: 
> http://zookeeper.apache.org/doc/r3.4.11/releasenotes.html






[jira] [Commented] (SOLR-11658) Upgrade ZooKeeper dependency to 3.4.11

2018-01-29 Thread Steve Rowe (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11658?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16343867#comment-16343867
 ] 

Steve Rowe commented on SOLR-11658:
---

FYI, there is an open issue (SOLR-10616) to automate interpolation of 
version-specific details, e.g. the ZK version, in the ref guide docs; my 
second patch would mostly not be required if that were implemented.

> Upgrade ZooKeeper dependency to 3.4.11
> --
>
> Key: SOLR-11658
> URL: https://issues.apache.org/jira/browse/SOLR-11658
> Project: Solr
>  Issue Type: Task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Steve Rowe
>Priority: Minor
> Fix For: 7.3
>
> Attachments: SOLR-11658-part2.patch, SOLR-11658.patch
>
>
> ZK 3.4.11 was released yesterday: 
> http://zookeeper.apache.org/doc/r3.4.11/releasenotes.html






[jira] [Commented] (SOLR-11658) Upgrade ZooKeeper dependency to 3.4.11

2018-01-29 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11658?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16343857#comment-16343857
 ] 

ASF subversion and git services commented on SOLR-11658:


Commit e6928d857ae6cd60b595036d0f7c01a7906e92da in lucene-solr's branch 
refs/heads/master from [~steve_rowe]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=e6928d8 ]

SOLR-11658: ZK mentions: 3.4.10->3.4.11.  Also fixed CHANGES.txt attribution.


> Upgrade ZooKeeper dependency to 3.4.11
> --
>
> Key: SOLR-11658
> URL: https://issues.apache.org/jira/browse/SOLR-11658
> Project: Solr
>  Issue Type: Task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Steve Rowe
>Priority: Minor
> Fix For: 7.3
>
> Attachments: SOLR-11658-part2.patch, SOLR-11658.patch
>
>
> ZK 3.4.11 was released yesterday: 
> http://zookeeper.apache.org/doc/r3.4.11/releasenotes.html






[jira] [Commented] (SOLR-11847) Ant target jenkins-maven-nightly should publish maven artifact snapshots to the ASF repo

2018-01-29 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11847?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16343856#comment-16343856
 ] 

ASF subversion and git services commented on SOLR-11847:


Commit c73bc6b145ff2e2b94f42a153c87523eed3172df in lucene-solr's branch 
refs/heads/master from [~steve_rowe]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=c73bc6b ]

SOLR-11847: Resume publishing maven snapshot artifacts as part of Ant target 
jenkins-maven-nightly


> Ant target jenkins-maven-nightly should publish maven artifact snapshots to 
> the ASF repo
> 
>
> Key: SOLR-11847
> URL: https://issues.apache.org/jira/browse/SOLR-11847
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Build
>Reporter: Steve Rowe
>Assignee: Steve Rowe
>Priority: Major
> Fix For: master (8.0), 7.3, 7.1.1, 7.2.2
>
> Attachments: SOLR-11847.patch
>
>
> SOLR-11181 unintentionally removed the implicit maven artifact publishing 
> from target {{validate-maven-artifacts}} (it previously depended on 
> {{generate-maven-artifacts}}, which handles publishing when the appropriate 
> sysprops are set).
> We should add {{generate-maven-artifacts}} back as a dependency of the 
> {{jenkins-maven-nightly}} target.






[jira] [Commented] (SOLR-11658) Upgrade ZooKeeper dependency to 3.4.11

2018-01-29 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11658?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16343855#comment-16343855
 ] 

ASF subversion and git services commented on SOLR-11658:


Commit 2a5a356e04d1166ba7f9df38bc1c904ca305d5be in lucene-solr's branch 
refs/heads/branch_7x from [~steve_rowe]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=2a5a356 ]

SOLR-11658: ZK mentions: 3.4.10->3.4.11.  Also fixed CHANGES.txt attribution.


> Upgrade ZooKeeper dependency to 3.4.11
> --
>
> Key: SOLR-11658
> URL: https://issues.apache.org/jira/browse/SOLR-11658
> Project: Solr
>  Issue Type: Task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Steve Rowe
>Priority: Minor
> Fix For: 7.3
>
> Attachments: SOLR-11658-part2.patch, SOLR-11658.patch
>
>
> ZK 3.4.11 was released yesterday: 
> http://zookeeper.apache.org/doc/r3.4.11/releasenotes.html






[jira] [Updated] (SOLR-11658) Upgrade ZooKeeper dependency to 3.4.11

2018-01-29 Thread Steve Rowe (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11658?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Rowe updated SOLR-11658:
--
Attachment: SOLR-11658-part2.patch

> Upgrade ZooKeeper dependency to 3.4.11
> --
>
> Key: SOLR-11658
> URL: https://issues.apache.org/jira/browse/SOLR-11658
> Project: Solr
>  Issue Type: Task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Steve Rowe
>Priority: Minor
> Fix For: 7.3
>
> Attachments: SOLR-11658-part2.patch, SOLR-11658.patch
>
>
> ZK 3.4.11 was released yesterday: 
> http://zookeeper.apache.org/doc/r3.4.11/releasenotes.html






[jira] [Reopened] (SOLR-11658) Upgrade ZooKeeper dependency to 3.4.11

2018-01-29 Thread Steve Rowe (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11658?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Rowe reopened SOLR-11658:
---

I noticed some leftover 3.4.10 references; attaching a patch I'll commit 
shortly.  The patch also adds Jason to the CHANGES.txt entry.

> Upgrade ZooKeeper dependency to 3.4.11
> --
>
> Key: SOLR-11658
> URL: https://issues.apache.org/jira/browse/SOLR-11658
> Project: Solr
>  Issue Type: Task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Steve Rowe
>Priority: Minor
> Fix For: 7.3
>
> Attachments: SOLR-11658.patch
>
>
> ZK 3.4.11 was released yesterday: 
> http://zookeeper.apache.org/doc/r3.4.11/releasenotes.html






[jira] [Commented] (SOLR-11892) Avoid unnecessary exceptions in FSDirectory and RAMDirectory

2018-01-29 Thread hamada (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11892?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16343843#comment-16343843
 ] 

hamada commented on SOLR-11892:
---

A few points about exception cost:
 * It cannot be assumed that all JVMs are optimized the same way, so the 
performance cost should not be ignored. General assumptions about cost should 
not be based on one VM's optimizations.
 * Exception creation has an associated cost; follow the path taken on 
exception creation:
{code:java}
public synchronized Throwable fillInStackTrace() {
    if (stackTrace != null ||
        backtrace != null /* Out of protocol state */ ) {
        fillInStackTrace(0);
        stackTrace = UNASSIGNED_STACK;
    }
    return this;
}

private native Throwable fillInStackTrace(int dummy);
{code}
 ** In addition to being synchronized, it has a memory cost: the exception 
object itself plus all of its StackTraceElement strings.
 * Exceptions in the critical path will affect performance and add to memory 
pressure.
 ** In this use case the exception appears to be flow control, not an 
exceptional condition.

Beyond the CPU and memory cost, the frequency of such an exception matters: 
the absence of a file name map entry is indicative of a race condition, a 
bug, or a concurrency issue.

The general rule about exceptions is that, as their name suggests, they 
should be the exception, not the norm.
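
The preallocated-exception pattern being discussed here (and in SOLR-11879) 
can be sketched as follows. This is a hypothetical illustration, not the 
actual FastInputStream or Directory code: the EOFException is created once 
with fillInStackTrace() overridden to do nothing, so each throw is 
allocation-free and skips the synchronized stack capture shown above.

```java
import java.io.EOFException;

// Illustrative sketch (not the actual Solr code) of the preallocated-exception
// pattern: one EOFException instance, created once with stack capture disabled,
// is rethrown on every end-of-stream instead of allocating a new one.
public class ReusableEofStream {

    // The anonymous subclass overrides fillInStackTrace() to skip the
    // synchronized native stack walk; the override is also invoked from
    // Throwable's constructor, so no stack trace is ever captured.
    private static final EOFException EOF = new EOFException("end of stream") {
        @Override
        public synchronized Throwable fillInStackTrace() {
            return this;
        }
    };

    private final byte[] buf;
    private int pos;

    public ReusableEofStream(byte[] buf) {
        this.buf = buf;
    }

    public int readByte() throws EOFException {
        if (pos >= buf.length) {
            throw EOF; // constant cost: no allocation, no stack capture
        }
        return buf[pos++] & 0xFF;
    }
}
```

One caveat worth noting: a shared exception carries no stack trace and no 
per-site message, so it only suits cases like this one where the trace is 
known to be unused.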

> Avoid unnecessary exceptions in FSDirectory and RAMDirectory
> 
>
> Key: SOLR-11892
> URL: https://issues.apache.org/jira/browse/SOLR-11892
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Erick Erickson
>Assignee: Erick Erickson
>Priority: Minor
> Attachments: Screen Shot 2018-01-24 at 9.09.55 PM.png, Screen Shot 
> 2018-01-24 at 9.10.47 PM.png
>
>
> In privateDeleteFile, just use deleteIfExists.
> in RamDirectory we can declare a static exception and create it once.
>  






[jira] [Comment Edited] (SOLR-11838) explore supporting Deeplearning4j NeuralNetwork models

2018-01-29 Thread Joel Bernstein (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11838?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16343620#comment-16343620
 ] 

Joel Bernstein edited comment on SOLR-11838 at 1/29/18 7:00 PM:


Just a heads up. I plan on adding the self organizing map implementations (SOM) 
from Apache Commons Math in the very near future (probably 7.4). These are 
neural network *unsupervised* clustering algorithms that I believe also are 
supported by DL4j. It sounds like people are mostly interested in the 
*supervised* deep learning models with DL4j, but the SOM's might be an 
interesting first step towards neural network classifiers.

The SOM's are part of larger clustering integration which includes kmeans, 
fuzzyKmeans, PCA, LSA and SOM.


was (Author: joel.bernstein):
Just a heads up. I plan on adding the self organizing map implementations (SOM) 
from Apache Commons Math in the very near future (probably 7.4). These are 
neural network *unsupervised* clustering algorithms that I believe also are 
supported by DL4j. It sounds like people are mostly interested in the 
*supervised* deep learning models with DL4j, but the SOM's might be an 
interesting first step towards neural network classifiers.

The SOM's are part of larger clustering integration which include kmeans, 
fuzzyKmeans, PCA, LSA and SOM.

> explore supporting Deeplearning4j NeuralNetwork models
> --
>
> Key: SOLR-11838
> URL: https://issues.apache.org/jira/browse/SOLR-11838
> Project: Solr
>  Issue Type: New Feature
>Reporter: Christine Poerschke
>Priority: Major
> Attachments: SOLR-11838.patch
>
>
> [~yuyano] wrote in SOLR-11597:
> bq. ... If we think to apply this to more complex neural networks in the 
> future, we will need to support layers ...
> [~malcorn_redhat] wrote in SOLR-11597:
> bq. ... In my opinion, if this is a route Solr eventually wants to go, I 
> think a better strategy would be to just add a dependency on 
> [Deeplearning4j|https://deeplearning4j.org/] ...
> Creating this ticket for the idea to be explored further (if anyone is 
> interested in exploring it), complementary to and independent of the 
> SOLR-11597 RankNet related effort.






[JENKINS] Lucene-Solr-Tests-master - Build # 2283 - Still Unstable

2018-01-29 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-master/2283/

3 tests failed.
FAILED:  org.apache.solr.search.TestRecovery.testBuffering

Error Message:
expected:<1> but was:<3>

Stack Trace:
java.lang.AssertionError: expected:<1> but was:<3>
at 
__randomizedtesting.SeedInfo.seed([7919C757183A1F7D:64F7697CB963BE56]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.failNotEquals(Assert.java:647)
at org.junit.Assert.assertEquals(Assert.java:128)
at org.junit.Assert.assertEquals(Assert.java:472)
at org.junit.Assert.assertEquals(Assert.java:456)
at 
org.apache.solr.search.TestRecovery.testBuffering(TestRecovery.java:494)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:748)


FAILED:  org.apache.solr.search.TestRecovery.testLogReplay

Error Message:
expected:<7> but was:<8>

Stack Trace:
java.lang.AssertionError: expected:<7> but was:<8>
at 

[jira] [Resolved] (SOLR-11873) Use time based expiration cache in all places in HdfsDirectoryFactory

2018-01-29 Thread Mark Miller (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11873?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Miller resolved SOLR-11873.

   Resolution: Fixed
Fix Version/s: 7.3
   master (8.0)

> Use time based expiration cache in all places in HdfsDirectoryFactory
> -
>
> Key: SOLR-11873
> URL: https://issues.apache.org/jira/browse/SOLR-11873
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: hdfs
>Affects Versions: 7.2
>Reporter: Mihaly Toth
>Assignee: Mark Miller
>Priority: Major
> Fix For: master (8.0), 7.3
>
>
> {{HdfsDirectoryFactory.exists()}} method already applies caching on 
> FileSystem objects. This is not done yet in the {{size()}} method.
> This function is eventually used when querying the core status. Each and 
> every query will use the same configuration and start from the first 
> configured HDFS NameNode. If that one is down, Solr will always access the 
> down node first without "learning".
> It would be nice to apply the same caching in that function too.
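
The time-based expiration cache described above could look roughly like the 
sketch below. This is a minimal generic illustration, not the actual 
HdfsDirectoryFactory implementation (which caches FileSystem objects); all 
class and method names here are hypothetical.

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Supplier;

// Minimal sketch (hypothetical, not the actual Solr code) of a time-based
// expiration cache: a value is loaded at most once per TTL window, so
// repeated status queries can reuse e.g. a FileSystem handle instead of
// re-resolving it (and re-probing a down NameNode) on every call.
public class ExpiringCache<K, V> {

    private static final class Entry<V> {
        final V value;
        final long expiresAtNanos;

        Entry(V value, long expiresAtNanos) {
            this.value = value;
            this.expiresAtNanos = expiresAtNanos;
        }
    }

    private final ConcurrentHashMap<K, Entry<V>> map = new ConcurrentHashMap<>();
    private final long ttlNanos;

    public ExpiringCache(long ttlMillis) {
        this.ttlNanos = ttlMillis * 1_000_000L;
    }

    // Returns the cached value if its entry has not expired; otherwise
    // invokes the loader once (atomically per key) and caches the result.
    public V get(K key, Supplier<V> loader) {
        final long now = System.nanoTime();
        return map.compute(key, (k, old) ->
                (old != null && now < old.expiresAtNanos)
                        ? old
                        : new Entry<>(loader.get(), now + ttlNanos)).value;
    }
}
```

In practice one could also reach for an existing implementation such as 
Guava's CacheBuilder with expireAfterWrite rather than hand-rolling this.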






[jira] [Commented] (SOLR-11879) avoid creating a new Exception object for EOFException in FastinputStream

2018-01-29 Thread Varun Thacker (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11879?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16343759#comment-16343759
 ] 

Varun Thacker commented on SOLR-11879:
--

Hi Noble,

 

We should add a CHANGES entry for this right?

> avoid creating a new Exception object for EOFException in FastinputStream
> -
>
> Key: SOLR-11879
> URL: https://issues.apache.org/jira/browse/SOLR-11879
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
> Environment: FastI
>Reporter: Noble Paul
>Assignee: Noble Paul
>Priority: Trivial
> Attachments: SOLR-11879.patch, SOLR-11879.patch, Screen Shot 
> 2018-01-24 at 7.26.16 PM.png
>
>
> FastInputStream creates and throws a new EOFException every time an end of 
> stream is encountered. This is wasteful, as we never use the stack trace 
> anywhere.






[jira] [Resolved] (SOLR-11919) V2 API support for the SystemInfoHandler

2018-01-29 Thread Varun Thacker (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11919?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Thacker resolved SOLR-11919.
--
Resolution: Not A Problem

Thanks Cassandra! I must have missed it

> V2 API support for the SystemInfoHandler 
> -
>
> Key: SOLR-11919
> URL: https://issues.apache.org/jira/browse/SOLR-11919
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: v2 API
>Reporter: Varun Thacker
>Priority: Major
>
> SystemInfoHandler does not have a V2 API.
>  
> We should have a V2 equivalent for 
> [http://localhost:8983/solr/admin/info/system]
>  






[jira] [Commented] (SOLR-11918) Document usage of SystemInfoHandler at the node level

2018-01-29 Thread Varun Thacker (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11918?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16343731#comment-16343731
 ] 

Varun Thacker commented on SOLR-11918:
--

Yeah for sure monitoring would be the better section for this.

> Document usage of SystemInfoHandler at the node level
> -
>
> Key: SOLR-11918
> URL: https://issues.apache.org/jira/browse/SOLR-11918
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: documentation
>Reporter: Varun Thacker
>Priority: Major
>
> [http://localhost:8983/solr/admin/info/system] gives us info about the node . 
> It's useful for monitoring scripts to use some information from here. 
> Currently it's not documented in the ref guide . 
>  
> Perhaps the best place would be a section under "Deployment and Operations" 
> for best practices on monitoring a cluster.
>  






[jira] [Resolved] (SOLR-11899) TestDistribStateManager.testGetSetRemoveData failure

2018-01-29 Thread Andrzej Bialecki (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11899?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrzej Bialecki  resolved SOLR-11899.
--
Resolution: Cannot Reproduce

Will reopen if it appears again.

> TestDistribStateManager.testGetSetRemoveData failure
> 
>
> Key: SOLR-11899
> URL: https://issues.apache.org/jira/browse/SOLR-11899
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Andrzej Bialecki 
>Assignee: Andrzej Bialecki 
>Priority: Major
>
> {code:java}
>   [junit4]   2> NOTE: reproduce with: ant test  
> -Dtestcase=TestDistribStateManager -Dtests.method=testGetSetRemoveData 
> -Dtests.seed=2409B3FE130DD727 -Dtests.multiplier=2 -Dtests.slow=true 
> -Dtests.locale=es-CO -Dtests.timezone=Turkey -Dtests.asserts=true 
> -Dtests.file.encoding=UTF-8
>   [junit4] FAILURE 5.41s J2 | TestDistribStateManager.testGetSetRemoveData <<<
>   [junit4]    > Throwable #1: java.lang.AssertionError: Node watch should 
> have fired!
>   [junit4]    > at 
> __randomizedtesting.SeedInfo.seed([2409B3FE130DD727:2995CAC4783112D]:0)
>   [junit4]    > at 
> org.apache.solr.cloud.autoscaling.sim.TestDistribStateManager.testGetSetRemoveData(TestDistribStateManager.java:256)
>   [junit4]    > at java.lang.Thread.run(Thread.java:748)
>   [junit4]   2> 2019666 INFO  
> (TEST-TestDistribStateManager.testMulti-seed#[2409B3FE130DD727]) [    ] 
> o.a.s.SolrTestCaseJ4 ###Starting testMulti
>   [junit4]   2> 2019666 INFO  
> (TEST-TestDistribStateManager.testMulti-seed#[2409B3FE130DD727]) [    ] 
> o.a.s.c.a.s.TestDistribStateManager Using 
> org.apache.solr.cloud.autoscaling.sim.SimDistribStateManager
> {code}






[JENKINS] Lucene-Solr-7.x-Windows (32bit/jdk1.8.0_144) - Build # 429 - Still Unstable!

2018-01-29 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Windows/429/
Java: 32bit/jdk1.8.0_144 -client -XX:+UseG1GC

5 tests failed.
FAILED:  junit.framework.TestSuite.org.apache.lucene.store.TestMultiMMap

Error Message:
Could not remove the following files (in the order of attempts):
C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\lucene\build\core\test\J0\temp\lucene.store.TestMultiMMap_7E52A381DCDF9BA9-001\testSeekZero-029:
 java.nio.file.DirectoryNotEmptyException: 
C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\lucene\build\core\test\J0\temp\lucene.store.TestMultiMMap_7E52A381DCDF9BA9-001\testSeekZero-029

C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\lucene\build\core\test\J0\temp\lucene.store.TestMultiMMap_7E52A381DCDF9BA9-001\testSeekZero-008:
 java.nio.file.DirectoryNotEmptyException: 
C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\lucene\build\core\test\J0\temp\lucene.store.TestMultiMMap_7E52A381DCDF9BA9-001\testSeekZero-008
 

Stack Trace:
java.io.IOException: Could not remove the following files (in the order of 
attempts):
   
C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\lucene\build\core\test\J0\temp\lucene.store.TestMultiMMap_7E52A381DCDF9BA9-001\testSeekZero-029:
 java.nio.file.DirectoryNotEmptyException: 
C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\lucene\build\core\test\J0\temp\lucene.store.TestMultiMMap_7E52A381DCDF9BA9-001\testSeekZero-029
   
C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\lucene\build\core\test\J0\temp\lucene.store.TestMultiMMap_7E52A381DCDF9BA9-001\testSeekZero-008:
 java.nio.file.DirectoryNotEmptyException: 
C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\lucene\build\core\test\J0\temp\lucene.store.TestMultiMMap_7E52A381DCDF9BA9-001\testSeekZero-008

at __randomizedtesting.SeedInfo.seed([7E52A381DCDF9BA9]:0)
at org.apache.lucene.util.IOUtils.rm(IOUtils.java:329)
at 
org.apache.lucene.util.TestRuleTemporaryFilesCleanup.afterAlways(TestRuleTemporaryFilesCleanup.java:216)
at 
com.carrotsearch.randomizedtesting.rules.TestRuleAdapter$1.afterAlways(TestRuleAdapter.java:31)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:43)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:748)


FAILED:  
junit.framework.TestSuite.org.apache.solr.cloud.api.collections.TestCollectionsAPIViaSolrCloudCluster

Error Message:
1 thread leaked from SUITE scope at 
org.apache.solr.cloud.api.collections.TestCollectionsAPIViaSolrCloudCluster:
 1) Thread[id=5743, name=qtp4497288-5743, state=TIMED_WAITING, 
group=TGRP-TestCollectionsAPIViaSolrCloudCluster] at 
sun.misc.Unsafe.park(Native Method) at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2163)
 at 
org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.reservedWait(ReservedThreadExecutor.java:308)
 at 
org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.run(ReservedThreadExecutor.java:373)
 at 
org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:708)
 at 
org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:626) 
at java.lang.Thread.run(Thread.java:748)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: 1 thread leaked from SUITE 
scope at 
org.apache.solr.cloud.api.collections.TestCollectionsAPIViaSolrCloudCluster: 
   1) Thread[id=5743, name=qtp4497288-5743, state=TIMED_WAITING, 
group=TGRP-TestCollectionsAPIViaSolrCloudCluster]
at sun.misc.Unsafe.park(Native Method)
at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2163)
at 
org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.reservedWait(ReservedThreadExecutor.java:308)
at 
org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.run(ReservedThreadExecutor.java:373)
at 

[jira] [Commented] (SOLR-11838) explore supporting Deeplearning4j NeuralNetwork models

2018-01-29 Thread Jeroen Steggink (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11838?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16343656#comment-16343656
 ] 

Jeroen Steggink commented on SOLR-11838:


As a start, I think applying models for LTR or classifying documents/fields 
when indexing would be most useful.

One thing we shouldn't underestimate is data structures for Neural Networks. 
Depending on the network structure a model may depend on a specific data 
structure. For example, timeseries-vectors are very different from other 
vectors. Are we doing just bag-of-words or do we keep the order of words? How 
many fields would you like as input? How many inputs can the models have 
(preferably ComputationGraphs, as they are more flexible)?

Furthermore, we should think about what is actually going to work. Having 
one-hot encoding for all terms in an index could be problematic. There is 
already a logistic regression implementation which works great for simple 
classification. If we're going to use DL4J it should add something more than 
Solr already offers.

Maybe we can think of a few specific use cases to make a prototype for?

 

I think we can make a DataVec record reader for Solr (@[~kwatters]). But I 
guess this is something we can add to DataVec itself, instead of adding this to 
Solr. An alternative could be to use Solr's Streaming API to return data in a 
format which is efficient and could be directly used by DataVec.

Another thing I'd like to mention is dependencies. Instead of relying on DL4J 
specifically, we could think about abstracting data input and output for 
machine learning and applying models in general. As a DL4J user I'm not very 
interested in running it on a Solr server. I have dedicated servers running 
DL4J models which I serve using REST APIs. The reason is that I have servers 
with GPUs and lots of RAM dedicated to this type of process. Solr on the 
other hand can be very demanding in a different way.

 

> explore supporting Deeplearning4j NeuralNetwork models
> --
>
> Key: SOLR-11838
> URL: https://issues.apache.org/jira/browse/SOLR-11838
> Project: Solr
>  Issue Type: New Feature
>Reporter: Christine Poerschke
>Priority: Major
> Attachments: SOLR-11838.patch
>
>
> [~yuyano] wrote in SOLR-11597:
> bq. ... If we think to apply this to more complex neural networks in the 
> future, we will need to support layers ...
> [~malcorn_redhat] wrote in SOLR-11597:
> bq. ... In my opinion, if this is a route Solr eventually wants to go, I 
> think a better strategy would be to just add a dependency on 
> [Deeplearning4j|https://deeplearning4j.org/] ...
> Creating this ticket for the idea to be explored further (if anyone is 
> interested in exploring it), complimentary to and independent of the 
> SOLR-11597 RankNet related effort.






[jira] [Commented] (SOLR-11873) Use time based expiration cache in all places in HdfsDirectoryFactory

2018-01-29 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11873?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16343635#comment-16343635
 ] 

ASF subversion and git services commented on SOLR-11873:


Commit e16d50b75b02f616d998bc3e0121a38c62e7daf0 in lucene-solr's branch 
refs/heads/branch_7x from markrmiller
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=e16d50b ]

SOLR-11873: Use time based expiration cache in all necessary places in 
HdfsDirectoryFactory.


> Use time based expiration cache in all places in HdfsDirectoryFactory
> -
>
> Key: SOLR-11873
> URL: https://issues.apache.org/jira/browse/SOLR-11873
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: hdfs
>Affects Versions: 7.2
>Reporter: Mihaly Toth
>Assignee: Mark Miller
>Priority: Major
>
> {{HdfsDirectoryFactory.exists()}} method already applies caching on 
> FileSystem objects. This is not done yet in the {{size()}} method.
> This function is eventually used when querying the core status. Each and 
> every query will use the same configuration and start from the first 
> configured HDFS NameNode. If that is down Solr will always access this down 
> node first without "learning".
> It would be nice to apply the same caching on that function too.
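> The time-based expiration caching described above can be sketched roughly as
> follows (a minimal illustrative sketch, not Solr's actual implementation; the
> class name, TTL value, and loader shape are all assumptions):

```python
import time

class ExpiringCache:
    """Minimal time-based expiration cache: entries older than ttl
    seconds are recreated on the next lookup instead of being reused."""

    def __init__(self, ttl_seconds, loader):
        self.ttl = ttl_seconds
        self.loader = loader     # function that creates the value for a key
        self.entries = {}        # key -> (value, created_at)

    def get(self, key):
        hit = self.entries.get(key)
        now = time.time()
        if hit is not None and now - hit[1] < self.ttl:
            return hit[0]        # still fresh: reuse the cached object
        value = self.loader(key) # expired or missing: recreate and cache
        self.entries[key] = (value, now)
        return value

# Usage: cache expensive FileSystem-like handles for 30 seconds, so
# repeated status queries do not re-resolve the same configuration.
calls = []
cache = ExpiringCache(30, lambda k: calls.append(k) or f"fs:{k}")
a = cache.get("hdfs://nn1/solr")
b = cache.get("hdfs://nn1/solr")  # served from cache; loader not called again
```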






[jira] [Commented] (SOLR-11873) Use time based expiration cache in all places in HdfsDirectoryFactory

2018-01-29 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11873?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16343633#comment-16343633
 ] 

ASF subversion and git services commented on SOLR-11873:


Commit 13773755b82850cf6aea6e20b08c5d62a6fddda0 in lucene-solr's branch 
refs/heads/master from markrmiller
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=1377375 ]

SOLR-11873: Use time based expiration cache in all necessary places in 
HdfsDirectoryFactory.


> Use time based expiration cache in all places in HdfsDirectoryFactory
> -
>
> Key: SOLR-11873
> URL: https://issues.apache.org/jira/browse/SOLR-11873
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: hdfs
>Affects Versions: 7.2
>Reporter: Mihaly Toth
>Assignee: Mark Miller
>Priority: Major
>
> {{HdfsDirectoryFactory.exists()}} method already applies caching on 
> FileSystem objects. This is not done yet in the {{size()}} method.
> This function is eventually used when querying the core status. Each and 
> every query will use the same configuration and start from the first 
> configured HDFS NameNode. If that is down Solr will always access this down 
> node first without "learning".
> It would be nice to apply the same caching on that function too.






Re: Migrating Solr from one server to another

2018-01-29 Thread Erick Erickson
Assuming you have some down time (that is, time when you're not
actively indexing), "it's just files". So:
> create a parallel local cluster. By "parallel" I mean the same number of 
> shards. I'd create it with only one replica/shard to start (i.e. every shard 
> will have only a leader).
> shut down all Solr nodes on your local cluster.
> copy the data directory from one replica of each shard in your AWS 
> instance to the corresponding replica in your local cluster. WARNING: you 
> have to copy to corresponding shards. To be absolutely sure you have the 
> right ones, look at your admin UI>>cloud>>tree>>collection>>(your 
> collection)>>state.json. Each shard has a "range" property, some hex range. 
> The source and destination replicas _must_ have the _exact_ same range.
> Bring up your local cluster and verify that it's OK.
> build out your local cluster with ADDREPLICA commands.
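
The range check in the warning above can be sketched like this (a rough
illustration only; the "range" property per shard is from state.json as
described above, while the collection/shard dict layout and function names
here are assumptions):

```python
def ranges_by_shard(state_json, collection):
    """Map shard name -> hash 'range' string from a parsed state.json."""
    shards = state_json[collection]["shards"]
    return {name: info["range"] for name, info in shards.items()}

def check_matching_ranges(src_state, dst_state, collection):
    """List shards whose hash range differs between source and destination.

    An empty result means it is safe to copy data directories
    shard-for-shard; anything else means the copy would mismatch."""
    src = ranges_by_shard(src_state, collection)
    dst = ranges_by_shard(dst_state, collection)
    return [s for s in src if dst.get(s) != src[s]]

# Tiny example: two shards whose ranges line up between clusters.
aws = {"mycoll": {"shards": {
    "shard1": {"range": "80000000-ffffffff"},
    "shard2": {"range": "0-7fffffff"}}}}
local = {"mycoll": {"shards": {
    "shard1": {"range": "80000000-ffffffff"},
    "shard2": {"range": "0-7fffffff"}}}}
```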

Best,
Erick

On Mon, Jan 29, 2018 at 4:39 AM, Aditya  wrote:
> Hi,
>
> I have a Solr instance running on AWS with close to 1000K documents. We've
> decided to stop using AWS and migrate to local clusters and hence I need to
> migrate the data from AWS to local.
>
> Can anyone help me out on how to go about the process? I came across methods
> that first migrate all the data in the collection to a single file but I'm
> not sure if that is such a good idea.
>
> It would be really great if some of you could point me to blogs/articles to
> help export solr data from AWS to local.
>
>
> Best Regards,
> Aditya




[jira] [Comment Edited] (SOLR-11838) explore supporting Deeplearning4j NeuralNetwork models

2018-01-29 Thread Joel Bernstein (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11838?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16343620#comment-16343620
 ] 

Joel Bernstein edited comment on SOLR-11838 at 1/29/18 4:50 PM:


Just a heads up. I plan on adding the self organizing map implementations (SOM) 
from Apache Commons Math in the very near future (probably 7.4). These are 
neural network *unsupervised* clustering algorithms that I believe also are 
supported by DL4j. It sounds like people are mostly interested in the 
*supervised* deep learning models with DL4j, but the SOMs might be an 
interesting first step towards neural network classifiers.

The SOMs are part of a larger clustering integration which includes kmeans, 
fuzzyKmeans, PCA, LSA and SOM.


was (Author: joel.bernstein):
Just a heads up. I plan on adding the self organizing map implementations (SOM) 
from Apache Commons Math in the very near future (probably 7.4). These are 
neural network *unsupervised* clustering algorithms that I believe also are 
supported by DL4j. It sounds like people are mostly interested in the the 
*supervised* deep learning models with DL4j, the SOM's might be interesting 
first step towards neural network classifiers.

> explore supporting Deeplearning4j NeuralNetwork models
> --
>
> Key: SOLR-11838
> URL: https://issues.apache.org/jira/browse/SOLR-11838
> Project: Solr
>  Issue Type: New Feature
>Reporter: Christine Poerschke
>Priority: Major
> Attachments: SOLR-11838.patch
>
>
> [~yuyano] wrote in SOLR-11597:
> bq. ... If we think to apply this to more complex neural networks in the 
> future, we will need to support layers ...
> [~malcorn_redhat] wrote in SOLR-11597:
> bq. ... In my opinion, if this is a route Solr eventually wants to go, I 
> think a better strategy would be to just add a dependency on 
> [Deeplearning4j|https://deeplearning4j.org/] ...
> Creating this ticket for the idea to be explored further (if anyone is 
> interested in exploring it), complimentary to and independent of the 
> SOLR-11597 RankNet related effort.






[jira] [Comment Edited] (SOLR-11838) explore supporting Deeplearning4j NeuralNetwork models

2018-01-29 Thread Joel Bernstein (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11838?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16343620#comment-16343620
 ] 

Joel Bernstein edited comment on SOLR-11838 at 1/29/18 4:50 PM:


Just a heads up. I plan on adding the self organizing map implementations (SOM) 
from Apache Commons Math in the very near future (probably 7.4). These are 
neural network *unsupervised* clustering algorithms that I believe also are 
supported by DL4j. It sounds like people are mostly interested in the 
*supervised* deep learning models with DL4j, but the SOMs might be an 
interesting first step towards neural network classifiers.

The SOMs are part of a larger clustering integration which includes kmeans, 
fuzzyKmeans, PCA, LSA and SOM.


was (Author: joel.bernstein):
Just a heads up. I plan on adding the self organizing map implementations (SOM) 
from Apache Commons Math in the very near future (probably 7.4). These are 
neural network *unsupervised* clustering algorithms that I believe also are 
supported by DL4j. It sounds like people are mostly interested in the the 
*supervised* deep learning models with DL4j, but the SOM's might be interesting 
first step towards neural network classifiers.

The SOM's are part of larger clustering integration which include kmeans, 
fuzzyKmeans, PCA, LSA and SOM.

> explore supporting Deeplearning4j NeuralNetwork models
> --
>
> Key: SOLR-11838
> URL: https://issues.apache.org/jira/browse/SOLR-11838
> Project: Solr
>  Issue Type: New Feature
>Reporter: Christine Poerschke
>Priority: Major
> Attachments: SOLR-11838.patch
>
>
> [~yuyano] wrote in SOLR-11597:
> bq. ... If we think to apply this to more complex neural networks in the 
> future, we will need to support layers ...
> [~malcorn_redhat] wrote in SOLR-11597:
> bq. ... In my opinion, if this is a route Solr eventually wants to go, I 
> think a better strategy would be to just add a dependency on 
> [Deeplearning4j|https://deeplearning4j.org/] ...
> Creating this ticket for the idea to be explored further (if anyone is 
> interested in exploring it), complimentary to and independent of the 
> SOLR-11597 RankNet related effort.






[JENKINS] Lucene-Solr-master-Windows (64bit/jdk1.8.0_144) - Build # 7145 - Still Unstable!

2018-01-29 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Windows/7145/
Java: 64bit/jdk1.8.0_144 -XX:+UseCompressedOops -XX:+UseParallelGC

12 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.lucene.search.suggest.analyzing.AnalyzingInfixSuggesterTest

Error Message:
Could not remove the following files (in the order of attempts):
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\build\suggest\test\J0\temp\lucene.search.suggest.analyzing.AnalyzingInfixSuggesterTest_8EB616F47F43A9A1-001\analyzingInfixContext-002:
 java.nio.file.DirectoryNotEmptyException: 
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\build\suggest\test\J0\temp\lucene.search.suggest.analyzing.AnalyzingInfixSuggesterTest_8EB616F47F43A9A1-001\analyzingInfixContext-002

C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\build\suggest\test\J0\temp\lucene.search.suggest.analyzing.AnalyzingInfixSuggesterTest_8EB616F47F43A9A1-001\analyzingInfixContext-002\segments_1:
 java.nio.file.AccessDeniedException: 
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\build\suggest\test\J0\temp\lucene.search.suggest.analyzing.AnalyzingInfixSuggesterTest_8EB616F47F43A9A1-001\analyzingInfixContext-002\segments_1

C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\build\suggest\test\J0\temp\lucene.search.suggest.analyzing.AnalyzingInfixSuggesterTest_8EB616F47F43A9A1-001:
 java.nio.file.DirectoryNotEmptyException: 
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\build\suggest\test\J0\temp\lucene.search.suggest.analyzing.AnalyzingInfixSuggesterTest_8EB616F47F43A9A1-001
 

Stack Trace:
java.io.IOException: Could not remove the following files (in the order of 
attempts):
   
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\build\suggest\test\J0\temp\lucene.search.suggest.analyzing.AnalyzingInfixSuggesterTest_8EB616F47F43A9A1-001\analyzingInfixContext-002:
 java.nio.file.DirectoryNotEmptyException: 
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\build\suggest\test\J0\temp\lucene.search.suggest.analyzing.AnalyzingInfixSuggesterTest_8EB616F47F43A9A1-001\analyzingInfixContext-002
   
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\build\suggest\test\J0\temp\lucene.search.suggest.analyzing.AnalyzingInfixSuggesterTest_8EB616F47F43A9A1-001\analyzingInfixContext-002\segments_1:
 java.nio.file.AccessDeniedException: 
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\build\suggest\test\J0\temp\lucene.search.suggest.analyzing.AnalyzingInfixSuggesterTest_8EB616F47F43A9A1-001\analyzingInfixContext-002\segments_1
   
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\build\suggest\test\J0\temp\lucene.search.suggest.analyzing.AnalyzingInfixSuggesterTest_8EB616F47F43A9A1-001:
 java.nio.file.DirectoryNotEmptyException: 
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\build\suggest\test\J0\temp\lucene.search.suggest.analyzing.AnalyzingInfixSuggesterTest_8EB616F47F43A9A1-001

at __randomizedtesting.SeedInfo.seed([8EB616F47F43A9A1]:0)
at org.apache.lucene.util.IOUtils.rm(IOUtils.java:329)
at 
org.apache.lucene.util.TestRuleTemporaryFilesCleanup.afterAlways(TestRuleTemporaryFilesCleanup.java:216)
at 
com.carrotsearch.randomizedtesting.rules.TestRuleAdapter$1.afterAlways(TestRuleAdapter.java:31)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:43)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:748)


FAILED:  
junit.framework.TestSuite.org.apache.solr.handler.dataimport.TestJdbcDataSource

Error Message:
Could not remove the following files (in the order of attempts):
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\solr\build\contrib\solr-dataimporthandler\test\J1\temp\solr.handler.dataimport.TestJdbcDataSource_B98D0ED672BAFD0-001\dih-properties-010:
 java.nio.file.AccessDeniedException: 
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\solr\build\contrib\solr-dataimporthandler\test\J1\temp\solr.handler.dataimport.TestJdbcDataSource_B98D0ED672BAFD0-001\dih-properties-010


[jira] [Comment Edited] (SOLR-11838) explore supporting Deeplearning4j NeuralNetwork models

2018-01-29 Thread Joel Bernstein (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11838?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16343620#comment-16343620
 ] 

Joel Bernstein edited comment on SOLR-11838 at 1/29/18 4:47 PM:


Just a heads up. I plan on adding the self organizing map implementations (SOM) 
from Apache Commons Math in the very near future (probably 7.4). These are 
neural network *unsupervised* clustering algorithms that I believe also are 
supported by DL4j. It sounds like people are mostly interested in the 
*supervised* deep learning models with DL4j, but the SOMs might be an 
interesting first step towards neural network classifiers.


was (Author: joel.bernstein):
Just a heads up. I plan on adding the self organizing map implementations (SOM) 
from Apache Commons Math in the very near future (probably 7.3). These are 
neural network *unsupervised* clustering algorithms that I believe also are 
supported by DL4j. It sounds like people are mostly interested in the the 
*supervised* deep learning models with DL4j, the SOM's might be interesting 
first step towards neural network classifiers.

> explore supporting Deeplearning4j NeuralNetwork models
> --
>
> Key: SOLR-11838
> URL: https://issues.apache.org/jira/browse/SOLR-11838
> Project: Solr
>  Issue Type: New Feature
>Reporter: Christine Poerschke
>Priority: Major
> Attachments: SOLR-11838.patch
>
>
> [~yuyano] wrote in SOLR-11597:
> bq. ... If we think to apply this to more complex neural networks in the 
> future, we will need to support layers ...
> [~malcorn_redhat] wrote in SOLR-11597:
> bq. ... In my opinion, if this is a route Solr eventually wants to go, I 
> think a better strategy would be to just add a dependency on 
> [Deeplearning4j|https://deeplearning4j.org/] ...
> Creating this ticket for the idea to be explored further (if anyone is 
> interested in exploring it), complimentary to and independent of the 
> SOLR-11597 RankNet related effort.






[jira] [Commented] (SOLR-11838) explore supporting Deeplearning4j NeuralNetwork models

2018-01-29 Thread Joel Bernstein (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11838?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16343620#comment-16343620
 ] 

Joel Bernstein commented on SOLR-11838:
---

Just a heads up. I plan on adding the self organizing map implementations (SOM) 
from Apache Commons Math in the very near future (probably 7.3). These are 
neural network *unsupervised* clustering algorithms that I believe also are 
supported by DL4j. It sounds like people are mostly interested in the 
*supervised* deep learning models with DL4j, but the SOMs might be an 
interesting first step towards neural network classifiers.

> explore supporting Deeplearning4j NeuralNetwork models
> --
>
> Key: SOLR-11838
> URL: https://issues.apache.org/jira/browse/SOLR-11838
> Project: Solr
>  Issue Type: New Feature
>Reporter: Christine Poerschke
>Priority: Major
> Attachments: SOLR-11838.patch
>
>
> [~yuyano] wrote in SOLR-11597:
> bq. ... If we think to apply this to more complex neural networks in the 
> future, we will need to support layers ...
> [~malcorn_redhat] wrote in SOLR-11597:
> bq. ... In my opinion, if this is a route Solr eventually wants to go, I 
> think a better strategy would be to just add a dependency on 
> [Deeplearning4j|https://deeplearning4j.org/] ...
> Creating this ticket for the idea to be explored further (if anyone is 
> interested in exploring it), complimentary to and independent of the 
> SOLR-11597 RankNet related effort.






[jira] [Commented] (SOLR-10651) Streaming Expressions statistical functions library

2018-01-29 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10651?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16343607#comment-16343607
 ] 

ASF subversion and git services commented on SOLR-10651:


Commit 603bb7fb14e795b3317385fe97c3bfcd4bc39725 in lucene-solr's branch 
refs/heads/branch_7x from [~joel.bernstein]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=603bb7f ]

SOLR-10651, SOLR-10784: Add new statistical and machine learning functions to 
CHANGES.txt for 7.3 release


> Streaming Expressions statistical functions library
> ---
>
> Key: SOLR-10651
> URL: https://issues.apache.org/jira/browse/SOLR-10651
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: streaming expressions
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
>Priority: Major
> Fix For: master (8.0), 7.3
>
> Attachments: SOLR_7_1_DOCS.patch
>
>
> This is a ticket for organizing the new statistical programming features of 
> Streaming Expressions. It's also a place for the community to discuss what 
> functions are needed to support statistical programming. 
> Basic Syntax:
> {code}
> let(a = timeseries(...),
> b = timeseries(...),
> c = col(a, count(*)),
> d = col(b, count(*)),
> r = regress(c, d),
> tuple(p = predict(r, 50)))
> {code}
> The expression above is doing the following:
> 1) The let expression is setting variables (a, b, c, d, r).
> 2) Variables *a* and *b* are the output of timeseries() Streaming 
> Expressions. These will be stored in memory as lists of Tuples containing the 
> time series results.
> 3) Variables *c* and *d* are set using the *col* evaluator. The col evaluator 
> extracts a column of numbers from a list of tuples. In the example *col* is 
> extracting the count\(*\) field from the two time series result sets.
> 4) Variable *r* is the output from the *regress* evaluator. The regress 
> evaluator performs a simple regression analysis on two columns of numbers.
> 5) Once the variables are set, a single Streaming Expression is run by the 
> *let* expression. In the example the *tuple* expression is run. The tuple 
> expression outputs a single Tuple with name/value pairs. Any Streaming 
> Expression can be run by the *let* expression so this can be a complex 
> program. The streaming expression run by *let* has access to all the 
> variables defined earlier.
> 6) The tuple expression in the example has one name / value pair. The name 
> *p* is set to the output of the *predict* evaluator. The predict evaluator is 
> predicting the value of a dependent variable based on the independent 
> variable 50. The regression result stored in variable *r* is used to make the 
> prediction.
> 7) The output of this expression will be a single tuple with the value of the 
> predict function in the *p* field.
> The growing list of issues linked to this ticket are the array manipulation 
> and statistical functions that will form the basis of the stats library. The 
> vast majority of these functions are backed by algorithms in Apache Commons 
> Math. Other machine learning and math libraries will follow.
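> As a rough illustration of what the *regress*/*predict* pair in the example
> computes, here is a plain ordinary-least-squares sketch in Python (not the
> actual evaluator code; the sample columns and function names are made up):

```python
def regress(xs, ys):
    """Ordinary least-squares fit of ys on xs: returns (slope, intercept)."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    slope = cov / var
    return slope, mean_y - slope * mean_x

def predict(model, x):
    """Predict the dependent variable for an independent value x."""
    slope, intercept = model
    return slope * x + intercept

# Two "columns" as col() would extract from timeseries tuples.
c = [10, 20, 30, 40]
d = [12, 22, 32, 42]
r = regress(c, d)    # fit d as a linear function of c
p = predict(r, 50)   # analogous to predict(r, 50) in the expression above
```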






[jira] [Commented] (SOLR-10784) Streaming Expressions machine learning functions library

2018-01-29 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10784?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16343608#comment-16343608
 ] 

ASF subversion and git services commented on SOLR-10784:


Commit 603bb7fb14e795b3317385fe97c3bfcd4bc39725 in lucene-solr's branch 
refs/heads/branch_7x from [~joel.bernstein]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=603bb7f ]

SOLR-10651, SOLR-10784: Add new statistical and machine learning functions to 
CHANGES.txt for 7.3 release


> Streaming Expressions machine learning functions library
> 
>
> Key: SOLR-10784
> URL: https://issues.apache.org/jira/browse/SOLR-10784
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
>Priority: Major
>
> This is an umbrella ticket for Streaming Expression's machine learning 
> function library. It will be used in much the same way that SOLR-10651 is 
> being used for the statistical function library.
> In the beginning many of the tickets will be based on machine learning 
> functions in *Apache Commons Math*, but other ML and matrix math libraries 
> will also be used.






[jira] [Commented] (SOLR-10651) Streaming Expressions statistical functions library

2018-01-29 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10651?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16343603#comment-16343603
 ] 

ASF subversion and git services commented on SOLR-10651:


Commit b4baf080e9f0d66a2841a7648a38ce131b49eeac in lucene-solr's branch 
refs/heads/master from [~joel.bernstein]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=b4baf08 ]

SOLR-10651, SOLR-10784: Add new statistical and machine learning functions to 
CHANGES.txt for 7.3 release


> Streaming Expressions statistical functions library
> ---
>
> Key: SOLR-10651
> URL: https://issues.apache.org/jira/browse/SOLR-10651
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: streaming expressions
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
>Priority: Major
> Fix For: master (8.0), 7.3
>
> Attachments: SOLR_7_1_DOCS.patch
>
>
> This is a ticket for organizing the new statistical programming features of 
> Streaming Expressions. It's also a place for the community to discuss what 
> functions are needed to support statistical programming. 
> Basic Syntax:
> {code}
> let(a = timeseries(...),
> b = timeseries(...),
> c = col(a, count(*)),
> d = col(b, count(*)),
> r = regress(c, d),
> tuple(p = predict(r, 50)))
> {code}
> The expression above is doing the following:
> 1) The let expression is setting variables (a, b, c, d, r).
> 2) Variables *a* and *b* are the output of timeseries() Streaming 
> Expressions. These will be stored in memory as lists of Tuples containing the 
> time series results.
> 3) Variables *c* and *d* are set using the *col* evaluator. The col evaluator 
> extracts a column of numbers from a list of tuples. In the example *col* is 
> extracting the count\(*\) field from the two time series result sets.
> 4) Variable *r* is the output from the *regress* evaluator. The regress 
> evaluator performs a simple regression analysis on two columns of numbers.
> 5) Once the variables are set, a single Streaming Expression is run by the 
> *let* expression. In the example the *tuple* expression is run. The tuple 
> expression outputs a single Tuple with name/value pairs. Any Streaming 
> Expression can be run by the *let* expression so this can be a complex 
> program. The streaming expression run by *let* has access to all the 
> variables defined earlier.
> 6) The tuple expression in the example has one name / value pair. The name 
> *p* is set to the output of the *predict* evaluator. The predict evaluator is 
> predicting the value of a dependent variable based on the independent 
> variable 50. The regression result stored in variable *r* is used to make the 
> prediction.
> 7) The output of this expression will be a single tuple with the value of the 
> predict function in the *p* field.
> The growing list of issues linked to this ticket comprises the array manipulation 
> and statistical functions that will form the basis of the stats library. The 
> vast majority of these functions are backed by algorithms in Apache Commons 
> Math. Other machine learning and math libraries will follow.
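Under the hood, the regress/predict pair described above amounts to simple (ordinary least-squares) linear regression followed by applying the fitted model. A minimal plain-Python sketch of the equivalent arithmetic (an illustration only, not Solr's implementation):

```python
def regress(x, y):
    """Fit y = slope * x + intercept by ordinary least squares."""
    n = len(x)
    mean_x = sum(x) / n
    mean_y = sum(y) / n
    slope = (sum((xi - mean_x) * (yi - mean_y) for xi, yi in zip(x, y))
             / sum((xi - mean_x) ** 2 for xi in x))
    intercept = mean_y - slope * mean_x
    return slope, intercept

def predict(model, x):
    """Predict the dependent variable for an independent value x."""
    slope, intercept = model
    return slope * x + intercept

# c and d stand in for the count(*) columns extracted from the two
# time series result sets (values are made up for the example).
c = [10, 20, 30, 40]
d = [11, 21, 31, 41]
r = regress(c, d)
print(predict(r, 50))  # -> 51.0
```

This mirrors the expression's flow: `regress(c, d)` produces the model tuple stored in *r*, and `predict(r, 50)` evaluates it at the independent value 50.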






[jira] [Commented] (SOLR-10784) Streaming Expressions machine learning functions library

2018-01-29 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10784?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16343604#comment-16343604
 ] 

ASF subversion and git services commented on SOLR-10784:


Commit b4baf080e9f0d66a2841a7648a38ce131b49eeac in lucene-solr's branch 
refs/heads/master from [~joel.bernstein]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=b4baf08 ]

SOLR-10651, SOLR-10784: Add new statistical and machine learning functions to 
CHANGES.txt for 7.3 release


> Streaming Expressions machine learning functions library
> 
>
> Key: SOLR-10784
> URL: https://issues.apache.org/jira/browse/SOLR-10784
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
>Priority: Major
>
> This is an umbrella ticket for Streaming Expression's machine learning 
> function library. It will be used in much the same way that SOLR-10651 is 
> being used for the statistical function library.
> In the beginning many of the tickets will be based on machine learning 
> functions in *Apache Commons Math*, but other ML and matrix math libraries 
> will also be used.






[jira] [Commented] (SOLR-11838) explore supporting Deeplearning4j NeuralNetwork models

2018-01-29 Thread Kevin Watters (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11838?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16343602#comment-16343602
 ] 

Kevin Watters commented on SOLR-11838:
--

I'm very excited to see this integration happening.  [~gus_heck] has been 
working with me on some DL4j projects, in particular training models and 
evaluating them for classification.  I think at a high level there are 3 main 
integration patterns that we could / should consider in Solr.
 # using a model at ingest time to tag / annotate a record going into the 
index.  (primary example would be something like sentiment analysis tagging.)  
This implies the model was trained and saved somewhere.
 # using a solr index (query) to generate a set of training test data so that 
DL4j can "fit" the model and train it.  (there might even be a desire for some 
join functionality so you can join together 2 datasets to create adhoc training 
datasets.)
 # (this is a bit more out there.)  indexing each node of the multi layer 
network / computation graph as a document in the index, and use a query to 
evaluate the output of the model by traversing the documents in the index to 
ultimately come up with a set of relevancy scores for the documents that 
represent the output layer of the network.

I think, perhaps, the most interesting use case is #2.  So basically, the idea 
is you want to define a network  (specify the layers, types of layers, 
activation function, etc..) and then specify a query that matches a set of 
documents in the index that would be used to train that model.  Currently DL4j 
uses "datavec" to handle all the data normalization prior to going into the 
model for training.  That exposes a DataSetIterator.  The DataSetIterator could 
be replaced with an iterator that sits on top of the export handler or even just 
a raw search result.  The general use cases here for pagination would be 
 # to iterate the full result set  (presumably multiple times as the model will 
make multiple passes over the data when training.)
 # generate a random ordering of the dataset being returned
 # excluding a random (but deterministic?) set of documents from the main query 
to provide a holdout testing dataset.

Keeping in mind that typically in network training, you have both your training 
dataset and the testing dataset.  
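The export-backed iterator idea above can be sketched without any DL4j or Solr dependency. Everything here is hypothetical (the `fetch_page` stub, document shape, and field names are invented for illustration); the point is the pagination loop and the deterministic, hash-based holdout split:

```python
import random
import zlib

def fetch_page(start, rows):
    """Stub standing in for a paginated Solr query / export-handler read.

    Serves 100 synthetic documents; in practice this would issue a real
    Solr request."""
    docs = [{"id": i, "features": [float(i), float(i) * 2.0], "label": i % 2}
            for i in range(100)]
    return docs[start:start + rows]

def training_batches(rows=25, holdout_fraction=0.2, shuffle_seed=None):
    """Iterate the full result set, excluding a deterministic holdout.

    Holdout membership is decided by a hash of the document id, so the
    same documents are excluded on every pass (epoch) over the data."""
    start = 0
    while True:
        page = fetch_page(start, rows)
        if not page:
            break
        batch = [d for d in page
                 if zlib.crc32(str(d["id"]).encode()) % 100 >= holdout_fraction * 100]
        if shuffle_seed is not None:
            # Per-page reshuffle gives a different ordering per seed.
            random.Random(shuffle_seed + start).shuffle(batch)
        yield batch
        start += rows

epoch1 = {d["id"] for batch in training_batches() for d in batch}
epoch2 = {d["id"] for batch in training_batches() for d in batch}
holdout = set(range(100)) - epoch1
print(epoch1 == epoch2, len(epoch1) + len(holdout) == 100)  # -> True True
```

A real adapter would implement DL4j's DataSetIterator interface on top of such a generator, converting each batch to ND4j arrays.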

The final outcome of this would be a computationgraph/multilayernetwork which 
can be serialized by dl4j as a json file, and the other output could/should be 
the evaluation or accuracy scores of the model  (F1, Accuracy, and confusion 
matrix.)

As per the comments about natives, yes, there are definitely platform dependent 
parts of DL4j, in particular the "nd4j" which can be gpu/cpu, but there are 
also other dependencies on javacv/javacpp.  The javacv/javacpp stuff is really 
only used for image manipulation as it's the java binding to OpenCV.  The 
dependency tree for DL4j is rather large, so I think we'll need to take 
care that we're not injecting a bunch of conflicting jar files.  
Perhaps we can start by identifying the conflicting jar versions. 

 

> explore supporting Deeplearning4j NeuralNetwork models
> --
>
> Key: SOLR-11838
> URL: https://issues.apache.org/jira/browse/SOLR-11838
> Project: Solr
>  Issue Type: New Feature
>Reporter: Christine Poerschke
>Priority: Major
> Attachments: SOLR-11838.patch
>
>
> [~yuyano] wrote in SOLR-11597:
> bq. ... If we think to apply this to more complex neural networks in the 
> future, we will need to support layers ...
> [~malcorn_redhat] wrote in SOLR-11597:
> bq. ... In my opinion, if this is a route Solr eventually wants to go, I 
> think a better strategy would be to just add a dependency on 
> [Deeplearning4j|https://deeplearning4j.org/] ...
> Creating this ticket for the idea to be explored further (if anyone is 
> interested in exploring it), complimentary to and independent of the 
> SOLR-11597 RankNet related effort.






[jira] [Created] (SOLR-11922) parallel - cartesianProduct

2018-01-29 Thread Robson Koji (JIRA)
Robson Koji created SOLR-11922:
--

 Summary: parallel - cartesianProduct
 Key: SOLR-11922
 URL: https://issues.apache.org/jira/browse/SOLR-11922
 Project: Solr
  Issue Type: Bug
  Security Level: Public (Default Security Level. Issues are Public)
  Components: Parallel SQL, streaming expressions
Affects Versions: 6.6.2
Reporter: Robson Koji


Fix CartesianProductStream so it can run inside parallel():

parallel(
   cartesianProduct(
    search(

 

When trying to run the streaming expression above, Solr raises the error below:

 


java.io.IOException: java.lang.NullPointerException
    at 
org.apache.solr.client.solrj.io.stream.ParallelStream.constructStreams(ParallelStream.java:277)
    at 
org.apache.solr.client.solrj.io.stream.CloudSolrStream.open(CloudSolrStream.java:305)
    at 
org.apache.solr.client.solrj.io.stream.ExceptionStream.open(ExceptionStream.java:51)
    at 
org.apache.solr.handler.StreamHandler$TimerStream.open(StreamHandler.java:535)
    at 
org.apache.solr.client.solrj.io.stream.TupleStream.writeMap(TupleStream.java:83)
    at org.apache.solr.response.JSONWriter.writeMap(JSONResponseWriter.java:547)
    at 
org.apache.solr.response.TextResponseWriter.writeVal(TextResponseWriter.java:193)
    at 
org.apache.solr.response.JSONWriter.writeNamedListAsMapWithDups(JSONResponseWriter.java:209)
    at 
org.apache.solr.response.JSONWriter.writeNamedList(JSONResponseWriter.java:325)
    at 
org.apache.solr.response.JSONWriter.writeResponse(JSONResponseWriter.java:120)
    at 
org.apache.solr.response.JSONResponseWriter.write(JSONResponseWriter.java:71)
    at 
org.apache.solr.response.QueryResponseWriterUtil.writeQueryResponse(QueryResponseWriterUtil.java:65)
    at org.apache.solr.servlet.HttpSolrCall.writeResponse(HttpSolrCall.java:809)
    at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:538)
    at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:361)
    at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:305)
    at 
org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1691)
    at 
org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:582)
    at 
org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
    at 
org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)
    at 
org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:226)
    at 
org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1180)
    at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:512)
    at 
org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
    at 
org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1112)
    at 
org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
    at 
org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:213)
    at 
org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:119)
    at 
org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134)
    at 
org.eclipse.jetty.rewrite.handler.RewriteHandler.handle(RewriteHandler.java:335)
    at 
org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134)
    at org.eclipse.jetty.server.Server.handle(Server.java:534)
    at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:320)
    at 
org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:251)
    at 
org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:273)
    at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:95)
    at 
org.eclipse.jetty.io.SelectChannelEndPoint$2.run(SelectChannelEndPoint.java:93)
    at 
org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.executeProduceConsume(ExecuteProduceConsume.java:303)
    at 
org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.produceConsume(ExecuteProduceConsume.java:148)
    at 
org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.run(ExecuteProduceConsume.java:136)
    at 
org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:671)
    at 
org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:589)
    at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.NullPointerException
    at 
org.apache.solr.client.solrj.io.stream.CartesianProductStream.toExpression(CartesianProductStream.java:154)
    at 
org.apache.solr.client.solrj.io.stream.CartesianProductStream.toExpression(CartesianProductStream.java:134)
    at 
org.apache.solr.client.solrj.io.stream.CartesianProductStream.toExpression(CartesianProductStream.java:44)
    at 

[jira] [Commented] (SOLR-11658) Upgrade ZooKeeper dependency to 3.4.11

2018-01-29 Thread Erick Erickson (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11658?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16343561#comment-16343561
 ] 

Erick Erickson commented on SOLR-11658:
---

OK, next time I push a fix I'll change it.

> Upgrade ZooKeeper dependency to 3.4.11
> --
>
> Key: SOLR-11658
> URL: https://issues.apache.org/jira/browse/SOLR-11658
> Project: Solr
>  Issue Type: Task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Steve Rowe
>Priority: Minor
> Fix For: 7.3
>
> Attachments: SOLR-11658.patch
>
>
> ZK 3.4.11 was released yesterday: 
> http://zookeeper.apache.org/doc/r3.4.11/releasenotes.html






[jira] [Commented] (SOLR-11658) Upgrade ZooKeeper dependency to 3.4.11

2018-01-29 Thread Steve Rowe (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11658?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16343489#comment-16343489
 ] 

Steve Rowe commented on SOLR-11658:
---

[~erickerickson], Jason wrote the patch, so he should get credit in CHANGES.txt.

> Upgrade ZooKeeper dependency to 3.4.11
> --
>
> Key: SOLR-11658
> URL: https://issues.apache.org/jira/browse/SOLR-11658
> Project: Solr
>  Issue Type: Task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Steve Rowe
>Priority: Minor
> Fix For: 7.3
>
> Attachments: SOLR-11658.patch
>
>
> ZK 3.4.11 was released yesterday: 
> http://zookeeper.apache.org/doc/r3.4.11/releasenotes.html






[JENKINS] Lucene-Solr-Tests-7.x - Build # 340 - Still Unstable

2018-01-29 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-7.x/340/

3 tests failed.
FAILED:  org.apache.solr.cloud.TestUtilizeNode.test

Error Message:
no replica should be present in  127.0.0.1:46579_solr

Stack Trace:
java.lang.AssertionError: no replica should be present in  127.0.0.1:46579_solr
at 
__randomizedtesting.SeedInfo.seed([9E217D9B085CD302:16754241A6A0BEFA]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at org.apache.solr.cloud.TestUtilizeNode.test(TestUtilizeNode.java:99)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:748)


FAILED:  
org.apache.solr.cloud.api.collections.ShardSplitTest.testSplitWithChaosMonkey

Error Message:
Could not load collection from ZK: collection1

Stack Trace:
org.apache.solr.common.SolrException: Could not load collection from ZK: 
collection1
at 
__randomizedtesting.SeedInfo.seed([9E217D9B085CD302:1506AE4A495A7886]:0)
at 

[jira] [Updated] (SOLR-11918) Document usage of SystemInfoHandler at the node level

2018-01-29 Thread Cassandra Targett (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11918?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Cassandra Targett updated SOLR-11918:
-
Component/s: documentation

> Document usage of SystemInfoHandler at the node level
> -
>
> Key: SOLR-11918
> URL: https://issues.apache.org/jira/browse/SOLR-11918
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: documentation
>Reporter: Varun Thacker
>Priority: Major
>
> [http://localhost:8983/solr/admin/info/system] gives us info about the node. 
> It's useful for monitoring scripts to use some information from here. 
> Currently it's not documented in the ref guide. 
>  
> Perhaps the best place would be a section under "Deployment and Operations" 
> for best practices on monitoring a cluster.
>  






[jira] [Updated] (SOLR-11919) V2 API support for the SystemInfoHandler

2018-01-29 Thread Cassandra Targett (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11919?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Cassandra Targett updated SOLR-11919:
-
Component/s: v2 API

> V2 API support for the SystemInfoHandler 
> -
>
> Key: SOLR-11919
> URL: https://issues.apache.org/jira/browse/SOLR-11919
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: v2 API
>Reporter: Varun Thacker
>Priority: Major
>
> SystemInfoHandler does not have a V2 API.
>  
> We should have a V2 equivalent for 
> [http://localhost:8983/solr/admin/info/system]
>  






[jira] [Commented] (SOLR-11919) V2 API support for the SystemInfoHandler

2018-01-29 Thread Cassandra Targett (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11919?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16343356#comment-16343356
 ] 

Cassandra Targett commented on SOLR-11919:
--

I believe this already exists at http://localhost:8983/api/node/system. The 
table in the [Google 
Doc|https://docs.google.com/document/d/18n9IL6y82C8gnBred6lzG0GLaT3OsZZsBvJQ2YAt72I]
 attached to SOLR-8029 shows this equivalency mapping also.

> V2 API support for the SystemInfoHandler 
> -
>
> Key: SOLR-11919
> URL: https://issues.apache.org/jira/browse/SOLR-11919
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Varun Thacker
>Priority: Major
>
> SystemInfoHandler does not have a V2 API.
>  
> We should have a V2 equivalent for 
> [http://localhost:8983/solr/admin/info/system]
>  






[jira] [Commented] (SOLR-11918) Document usage of SystemInfoHandler at the node level

2018-01-29 Thread Cassandra Targett (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11918?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16343348#comment-16343348
 ] 

Cassandra Targett commented on SOLR-11918:
--

bq. Perhaps the best place would be a section under "Deployment and Operations" 
for best practices on monitoring a cluster.

But in SOLR-11411 you advocated for adding a new section "Monitoring Solr", 
which you did. Doesn't it seem it would muddy things to have monitoring in 2 
places?

> Document usage of SystemInfoHandler at the node level
> -
>
> Key: SOLR-11918
> URL: https://issues.apache.org/jira/browse/SOLR-11918
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Varun Thacker
>Priority: Major
>
> [http://localhost:8983/solr/admin/info/system] gives us info about the node. 
> It's useful for monitoring scripts to use some information from here. 
> Currently it's not documented in the ref guide. 
>  
> Perhaps the best place would be a section under "Deployment and Operations" 
> for best practices on monitoring a cluster.
>  






[jira] [Commented] (LUCENE-8140) Checks for coplanarity when creating polygons shows numerical issues

2018-01-29 Thread Karl Wright (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8140?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16343334#comment-16343334
 ] 

Karl Wright commented on LUCENE-8140:
-

Rewriting the "planar point filter" is fine with me; never did like that code 
much.


> Checks for coplanarity when creating polygons shows numerical issues
> 
>
> Key: LUCENE-8140
> URL: https://issues.apache.org/jira/browse/LUCENE-8140
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: modules/spatial3d
>Reporter: Ignacio Vera
>Assignee: Karl Wright
>Priority: Major
> Attachments: LUCENE-8140-fix.patch, LUCENE-8140.patch
>
>
> Coplanarity checks in GeoPolygonFactory show numerical errors when the 
> distance between two points is very small compared to the distance of the 
> other two points. The situation is as follows:
> Having three points A, B, & C and the distance between A & B is very small 
> compared to the distance between A & C, then the plane AC contains all points 
> (co-planar) but the plane defined by AB does not contain C because of 
> numerical issues. This situation makes some polygons fail to build.
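The conditioning problem described above is easy to reproduce outside of Lucene. A self-contained sketch (plain Python, nothing from spatial3d): three points on a great circle are exactly coplanar with the origin, but the plane built from the two nearly-coincident points A and B is so ill-conditioned that C fails the membership check, while the well-conditioned plane through A and C accepts B. An explicit 1e-15 nudge on B stands in for coordinate rounding:

```python
import math

def unit(v):
    n = math.sqrt(sum(x * x for x in v))
    return tuple(x / n for x in v)

def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

# Orthonormal basis (u, v) of a tilted plane through the origin; n is
# the plane's true normal.
u = unit((1.0, -1.0, 0.0))
v = unit((1.0, 1.0, -2.0))
n = cross(u, v)

def point(t):
    """Unit-sphere point at angle t along the great circle in the plane."""
    return tuple(math.cos(t) * ui + math.sin(t) * vi for ui, vi in zip(u, v))

A, C = point(0.0), point(1.0)
# B is extremely close to A; the tiny off-plane nudge models rounding error.
B = tuple(b + 1e-15 * ni for b, ni in zip(point(1e-8), n))

EPS = 1e-12
off_ab = abs(dot(unit(cross(A, B)), C))  # C tested against the plane through A, B
off_ac = abs(dot(unit(cross(A, C)), B))  # B tested against the plane through A, C

print(off_ab > EPS)  # True: AB is ill-conditioned, C looks non-coplanar
print(off_ac > EPS)  # False: AC is well-conditioned, B passes the check
```

The short AB chord makes the cross product tiny, so normalizing it amplifies the rounding noise by roughly 1/|AB|; the long AC chord leaves the error near machine epsilon.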





