[jira] [Commented] (SOLR-6806) Reduce the size of the main Solr binary download

2016-03-19 Thread Shawn Heisey (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6806?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15203097#comment-15203097
 ] 

Shawn Heisey commented on SOLR-6806:


Ouch.  That represents a fairly significant drop in the size of the contrib 
folder, and a small drop in the overall size of the artifacts that a release 
manager must upload.

I actually would have suggested the one in analysis-extras as the one to keep.  
I use the ICU classes from Lucene, so it's logical for that to be the one I'd 
expect to be there.  In the end, I don't really care which one is kept, as long 
as there's general consensus.  I haven't got any clue about which of those 
contrib modules gets used more often.

We could drop a symlink into one of those locations in the .tgz archive.


> Reduce the size of the main Solr binary download
> 
>
> Key: SOLR-6806
> URL: https://issues.apache.org/jira/browse/SOLR-6806
> Project: Solr
>  Issue Type: Task
>  Components: Build
>Affects Versions: 5.0
>Reporter: Shawn Heisey
>
> There has been a lot of recent discussion about how large the Solr download 
> is, and how to reduce its size.  The last release (4.10.2) weighs in at 143MB 
> for the tar and 149MB for the zip.
> Most users do not need the full download.  They may never need contrib 
> features, or they may only need one or two, with DIH being the most likely 
> choice.  They could likely get by with a download that's less than 40 MB.
> Our primary competition has a 29MB zip download for the release that's 
> current right now, and not too long ago, that was about 20MB.  I didn't look 
> very deep, but any additional features that might be available for download 
> were not immediately apparent on their website.  I'm sure they exist, but I 
> would guess that most users never need those features, so most users never 
> even see them.
> Solr, by contrast, has everything included ... a "kitchen sink" approach. 
> Once you get past the long download time and fire up the example, you're 
> presented with configs that include features you're likely to never use.
> Although this offers maximum flexibility, I think it also serves to cause 
> confusion in a new user.
> A much better option would be to create a core download that includes only a 
> minimum set of features, probably just the war, the example servlet 
> container, and an example config that only uses the functionality present in 
> the war.  We can create additional downloads that offer additional 
> functionality and configs ... DIH would be a very small addon that would 
> likely be downloaded frequently.
> SOLR-5103 describes a plugin infrastructure which would make it very easy to 
> offer a small core download and then let the user download additional 
> functionality using scripts or the UI.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7114) analyzers-common tests fail with JDK9 EA 110 build

2016-03-19 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7114?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15203112#comment-15203112
 ] 

Robert Muir commented on LUCENE-7114:
-

The only issue is, now the onus is on me to fix this? I think this build of 
java 9 is broken, don't disable compact strings, let jenkins fail!

> analyzers-common tests fail with JDK9 EA 110 build
> --
>
> Key: LUCENE-7114
> URL: https://issues.apache.org/jira/browse/LUCENE-7114
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Robert Muir
>
> Looks like this:
> {noformat}
>[junit4] Suite: org.apache.lucene.analysis.fr.TestFrenchLightStemFilter
>[junit4]   2> NOTE: reproduce with: ant test  
> -Dtestcase=TestFrenchLightStemFilter -Dtests.method=testVocabulary 
> -Dtests.seed=4044297F9BFA5E32 -Dtests.locale=az-Cyrl-AZ -Dtests.timezone=ACT 
> -Dtests.asserts=true -Dtests.file.encoding=UTF-8
>[junit4] FAILURE 0.44s J0 | TestFrenchLightStemFilter.testVocabulary <<<
>[junit4]> Throwable #1: org.junit.ComparisonFailure: term 0 
> expected: but was:
> {noformat}
> So far i see these failing with french and portuguese. It may be a hotspot 
> issue, as these tests stem more than 10,000 words.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



SolrCloud App Unit Testing

2016-03-19 Thread Madhire, Naveen
Hi,

I am writing a Solr Application, can anyone please let me know how to Unit test 
the application?

I see we have MiniSolrCloudCluster class available in Solr, but I am confused 
about how to use that for Unit testing.

How should I create a embedded server for unit testing?



Thanks,
Naveen


The information contained in this e-mail is confidential and/or proprietary to 
Capital One and/or its affiliates and may only be used solely in performance of 
work or services for Capital One. The information transmitted herewith is 
intended only for use by the individual or entity to which it is addressed. If 
the reader of this message is not the intended recipient, you are hereby 
notified that any review, retransmission, dissemination, distribution, copying 
or other use of, or taking of any action in reliance upon this information is 
strictly prohibited. If you have received this communication in error, please 
contact the sender and delete the material from your computer.


[JENKINS] Lucene-Solr-6.x-MacOSX (64bit/jdk1.8.0) - Build # 19 - Still Failing!

2016-03-19 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-6.x-MacOSX/19/
Java: 64bit/jdk1.8.0 -XX:+UseCompressedOops -XX:+UseSerialGC

1 tests failed.
FAILED:  org.apache.solr.cloud.DistribDocExpirationUpdateProcessorTest.test

Error Message:
There are still nodes recoverying - waited for 45 seconds

Stack Trace:
java.lang.AssertionError: There are still nodes recoverying - waited for 45 
seconds
at 
__randomizedtesting.SeedInfo.seed([B4BFCF00AE46A66B:3CEBF0DA00BACB93]:0)
at org.junit.Assert.fail(Assert.java:93)
at 
org.apache.solr.cloud.AbstractDistribZkTestBase.waitForRecoveriesToFinish(AbstractDistribZkTestBase.java:173)
at 
org.apache.solr.cloud.AbstractFullDistribZkTestBase.waitForRecoveriesToFinish(AbstractFullDistribZkTestBase.java:856)
at 
org.apache.solr.cloud.DistribDocExpirationUpdateProcessorTest.test(DistribDocExpirationUpdateProcessorTest.java:73)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:996)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:971)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 

[jira] [Comment Edited] (SOLR-6806) Reduce the size of the main Solr binary download

2016-03-19 Thread Shawn Heisey (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6806?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15203106#comment-15203106
 ] 

Shawn Heisey edited comment on SOLR-6806 at 3/20/16 4:59 AM:
-

Here are the obvious things to move to their own artifacts, and their .zip 
sizes (in KB, for precision).  I made a stab at a name for the zip version of 
the archive.

contrib: 68423KB (solr-contrib-x.x.x.zip)
dist: 17960KB (solr-jars-x.x.x.zip)
docs: 11893KB (solr-docs-x.x.x.zip)
example: 4265KB (solr-examples-x.x.x.zip)

The kuromoji and hadoop jars from WEB-INF/lib could be placed in another 
artifact.  Not sure what to call it, perhaps solr-extras.

The idea with each of these supporting artifacts is that they would be 
extracted to the same location as the main artifact, so they would contain a 
similar directory structure.  Not sure whether we would omit the solr-x.x.x 
top-level directory that is in the main artifact.  Most people who have .tgz 
experience would expect it to be there, but zip users might be confused.


was (Author: elyograg):
Here are the obvious things to move to their own artifacts, and their .zip 
sizes (in KB, for precision).  I made a stab at a name for the zip version of 
the archive.

contrib: 68423KB (solr-contrib-x.x.x.zip)
dist: 17960KB (solr-jars-x.x.x.zip)
docs: 11893KB (solr-docs-x.x.x.zip)
example: 4265KB (solr-examples-x.x.x.zip)

I would always use the .tgz archives in production, but since the machine where 
I'm doing all this experimentation is Windows, this info is all about the 
zipfiles.

> Reduce the size of the main Solr binary download
> 
>
> Key: SOLR-6806
> URL: https://issues.apache.org/jira/browse/SOLR-6806
> Project: Solr
>  Issue Type: Task
>  Components: Build
>Affects Versions: 5.0
>Reporter: Shawn Heisey
>
> There has been a lot of recent discussion about how large the Solr download 
> is, and how to reduce its size.  The last release (4.10.2) weighs in at 143MB 
> for the tar and 149MB for the zip.
> Most users do not need the full download.  They may never need contrib 
> features, or they may only need one or two, with DIH being the most likely 
> choice.  They could likely get by with a download that's less than 40 MB.
> Our primary competition has a 29MB zip download for the release that's 
> current right now, and not too long ago, that was about 20MB.  I didn't look 
> very deep, but any additional features that might be available for download 
> were not immediately apparent on their website.  I'm sure they exist, but I 
> would guess that most users never need those features, so most users never 
> even see them.
> Solr, by contrast, has everything included ... a "kitchen sink" approach. 
> Once you get past the long download time and fire up the example, you're 
> presented with configs that include features you're likely to never use.
> Although this offers maximum flexibility, I think it also serves to cause 
> confusion in a new user.
> A much better option would be to create a core download that includes only a 
> minimum set of features, probably just the war, the example servlet 
> container, and an example config that only uses the functionality present in 
> the war.  We can create additional downloads that offer additional 
> functionality and configs ... DIH would be a very small addon that would 
> likely be downloaded frequently.
> SOLR-5103 describes a plugin infrastructure which would make it very easy to 
> offer a small core download and then let the user download additional 
> functionality using scripts or the UI.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-8878) Allow the DaemonStream run rate be controlled by the internal stream

2016-03-19 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8878?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-8878:
-
Attachment: SOLR-8878.patch

> Allow the DaemonStream run rate be controlled by the internal stream
> 
>
> Key: SOLR-8878
> URL: https://issues.apache.org/jira/browse/SOLR-8878
> Project: Solr
>  Issue Type: Improvement
>Reporter: Joel Bernstein
> Attachments: SOLR-8878.patch
>
>
> Currently the DaemonStream sleeps for one second and then checks the 
> runInterval param to determine if it needs to rerun the internal stream.
> This setup will work fine if the runInterval is longer then one second and if 
> it never changes. But with the TopicStream, you want a variable run rate. For 
> example if the TopicStream's latest run has returned documents, the next run 
> should be immediate. But if the TopicStream's latest run returned zero 
> documents then you'd want to sleep for a period of time before starting the 
> next run.
> This ticket allows the internal stream to control the DaemonStream run rate 
> by adding a *sleepMillis* key-pair to the EOF Tuple. After each run the 
> DaemonStream will check the EOF Tuple from the internal stream and if the 
> sleepMillis key-pair is present it will adjust it's run rate accordingly.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS-EA] Lucene-Solr-master-Linux (32bit/jdk-9-jigsaw-ea+110) - Build # 16272 - Still Failing!

2016-03-19 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/16272/
Java: 32bit/jdk-9-jigsaw-ea+110 -client -XX:+UseParallelGC -XX:-CompactStrings

1 tests failed.
FAILED:  org.apache.solr.cloud.UnloadDistributedZkTest.test

Error Message:
Captured an uncaught exception in thread: Thread[id=9344, 
name=testExecutor-4537-thread-12, state=RUNNABLE, 
group=TGRP-UnloadDistributedZkTest]

Stack Trace:
com.carrotsearch.randomizedtesting.UncaughtExceptionError: Captured an uncaught 
exception in thread: Thread[id=9344, name=testExecutor-4537-thread-12, 
state=RUNNABLE, group=TGRP-UnloadDistributedZkTest]
at 
__randomizedtesting.SeedInfo.seed([6A53BE5D2E9D5E0D:E2078187806133F5]:0)
Caused by: java.lang.RuntimeException: 
org.apache.solr.client.solrj.SolrServerException: Timeout occured while waiting 
response from server at: http://127.0.0.1:44343/dd
at __randomizedtesting.SeedInfo.seed([6A53BE5D2E9D5E0D]:0)
at 
org.apache.solr.cloud.BasicDistributedZkTest.lambda$createCores$0(BasicDistributedZkTest.java:583)
at 
org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:229)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@9-ea/ThreadPoolExecutor.java:1158)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@9-ea/ThreadPoolExecutor.java:632)
at java.lang.Thread.run(java.base@9-ea/Thread.java:804)
Caused by: org.apache.solr.client.solrj.SolrServerException: Timeout occured 
while waiting response from server at: http://127.0.0.1:44343/dd
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:588)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:241)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:230)
at org.apache.solr.client.solrj.SolrClient.request(SolrClient.java:1219)
at 
org.apache.solr.cloud.BasicDistributedZkTest.lambda$createCores$0(BasicDistributedZkTest.java:581)
... 4 more
Caused by: java.net.SocketTimeoutException: Read timed out
at java.net.SocketInputStream.socketRead0(java.base@9-ea/Native Method)
at 
java.net.SocketInputStream.socketRead(java.base@9-ea/SocketInputStream.java:116)
at 
java.net.SocketInputStream.read(java.base@9-ea/SocketInputStream.java:170)
at 
java.net.SocketInputStream.read(java.base@9-ea/SocketInputStream.java:141)
at 
org.apache.http.impl.io.AbstractSessionInputBuffer.fillBuffer(AbstractSessionInputBuffer.java:160)
at 
org.apache.http.impl.io.SocketInputBuffer.fillBuffer(SocketInputBuffer.java:84)
at 
org.apache.http.impl.io.AbstractSessionInputBuffer.readLine(AbstractSessionInputBuffer.java:273)
at 
org.apache.http.impl.conn.DefaultHttpResponseParser.parseHead(DefaultHttpResponseParser.java:140)
at 
org.apache.http.impl.conn.DefaultHttpResponseParser.parseHead(DefaultHttpResponseParser.java:57)
at 
org.apache.http.impl.io.AbstractMessageParser.parse(AbstractMessageParser.java:261)
at 
org.apache.http.impl.AbstractHttpClientConnection.receiveResponseHeader(AbstractHttpClientConnection.java:283)
at 
org.apache.http.impl.conn.DefaultClientConnection.receiveResponseHeader(DefaultClientConnection.java:251)
at 
org.apache.http.impl.conn.ManagedClientConnectionImpl.receiveResponseHeader(ManagedClientConnectionImpl.java:197)
at 
org.apache.http.protocol.HttpRequestExecutor.doReceiveResponse(HttpRequestExecutor.java:272)
at 
org.apache.http.protocol.HttpRequestExecutor.execute(HttpRequestExecutor.java:124)
at 
org.apache.http.impl.client.DefaultRequestDirector.tryExecute(DefaultRequestDirector.java:685)
at 
org.apache.http.impl.client.DefaultRequestDirector.execute(DefaultRequestDirector.java:487)
at 
org.apache.http.impl.client.AbstractHttpClient.doExecute(AbstractHttpClient.java:882)
at 
org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:82)
at 
org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:107)
at 
org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:55)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:482)
... 8 more




Build Log:
[...truncated 11489 lines...]
   [junit4] Suite: org.apache.solr.cloud.UnloadDistributedZkTest
   [junit4]   2> Creating dataDir: 
/home/jenkins/workspace/Lucene-Solr-master-Linux/solr/build/solr-core/test/J2/temp/solr.cloud.UnloadDistributedZkTest_6A53BE5D2E9D5E0D-001/init-core-data-001
   [junit4]   2> 1044222 INFO  
(SUITE-UnloadDistributedZkTest-seed#[6A53BE5D2E9D5E0D]-worker) [] 
o.a.s.BaseDistributedSearchTestCase Setting hostContext system property: /dd/
   [junit4]   2> 1044224 INFO  
(TEST-UnloadDistributedZkTest.test-seed#[6A53BE5D2E9D5E0D]) [] 

[jira] [Commented] (SOLR-8862) /live_nodes is populated too early to be very useful for clients -- CloudSolrClient (and MiniSolrCloudCluster.createCollection) need some other ephemeral zk node to know

2016-03-19 Thread David Smiley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8862?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15203069#comment-15203069
 ] 

David Smiley commented on SOLR-8862:


I hope this can get improved/resolved.  I didn't chase it down as far but I too 
had frustrations developing a test using MiniSolrCloudCluster that simply 
wanted the collection to be searchable (in SOLR-5750).

> /live_nodes is populated too early to be very useful for clients -- 
> CloudSolrClient (and MiniSolrCloudCluster.createCollection) need some other 
> ephemeral zk node to knowwhich servers are "ready"
> --
>
> Key: SOLR-8862
> URL: https://issues.apache.org/jira/browse/SOLR-8862
> Project: Solr
>  Issue Type: Bug
>Reporter: Hoss Man
>
> {{/live_nodes}} is populated surprisingly early (and multiple times) in the 
> life cycle of a sole node startup, and as a result probably shouldn't be used 
> by {{CloudSolrClient}} (or other "smart" clients) for deciding what servers 
> are fair game for requests.
> we should either fix {{/live_nodes}} to be created later in the lifecycle, or 
> add some new ZK node for this purpose.
> {panel:title=original bug report}
> I haven't been able to make sense of this yet, but what i'm seeing in a new 
> SolrCloudTestCase subclass i'm writing is that the code below, which 
> (reasonably) attempts to create a collection immediately after configuring 
> the MiniSolrCloudCluster gets a "SolrServerException: No live SolrServers 
> available to handle this request" -- in spite of the fact, that (as far as i 
> can tell at first glance) MiniSolrCloudCluster's constructor is suppose to 
> block until all the servers are live..
> {code}
> configureCluster(numServers)
>   .addConfig(configName, configDir.toPath())
>   .configure();
> Map collectionProperties = ...;
> assertNotNull(cluster.createCollection(COLLECTION_NAME, numShards, 
> repFactor,
>configName, null, null, 
> collectionProperties));
> {code}
> {panel}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-8867) frange / ValueSourceRangeFilter / FunctionValues.getRangeScorer should not match documents w/o a value

2016-03-19 Thread Yonik Seeley (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8867?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yonik Seeley updated SOLR-8867:
---
Attachment: SOLR-8867.patch

Here's an updated patch that modifies a random range test to include docs w/o a 
value in the field and also queries across negative values.

This also changes getRangeScorer() to use LeafReaderContext to be consistent 
with everything else.

All tests pass, and I plan on committing shortly.

> frange / ValueSourceRangeFilter / FunctionValues.getRangeScorer should not 
> match documents w/o a value
> --
>
> Key: SOLR-8867
> URL: https://issues.apache.org/jira/browse/SOLR-8867
> Project: Solr
>  Issue Type: Bug
>Reporter: Yonik Seeley
> Fix For: 6.0
>
> Attachments: SOLR-8867.patch, SOLR-8867.patch
>
>
> {!frange} currently can match documents w/o a value (because of a default 
> value of 0).
> This only existed historically because we didn't have info about what fields 
> had a value for numerics, and didn't have exists() on FunctionValues.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-8878) Allow the DaemonStream run rate be controlled by the internal stream

2016-03-19 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8878?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-8878:
-
Summary: Allow the DaemonStream run rate be controlled by the internal 
stream  (was: Allow the DaemonStream run rate to be controlled by the internal 
stream)

> Allow the DaemonStream run rate be controlled by the internal stream
> 
>
> Key: SOLR-8878
> URL: https://issues.apache.org/jira/browse/SOLR-8878
> Project: Solr
>  Issue Type: Improvement
>Reporter: Joel Bernstein
>
> Currently the DaemonStream sleeps for one second and then checks the 
> runInterval param to determine if it needs to rerun the internal stream.
> This setup will work fine if the runInterval is longer then one second and if 
> it never changes. But with the TopicStream, you want a variable run rate. For 
> example if the TopicStream's latest run has returned documents, the next run 
> should be immediate. But if the TopicStream's latest run returned zero 
> documents then you'd want to sleep for a period of time before starting the 
> next run.
> This ticket allows the internal stream to control the DaemonStream run rate 
> by adding a *sleepMillis* key-pair to the EOF Tuple. After each run the 
> DaemonStream will check the EOF Tuple from the internal stream and if the 
> sleepMillis key-pair is present it will adjust it's run rate accordingly.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-8878) Allow the DaemonStream run rate to be controlled by the internal stream

2016-03-19 Thread Joel Bernstein (JIRA)
Joel Bernstein created SOLR-8878:


 Summary: Allow the DaemonStream run rate to be controlled by the 
internal stream
 Key: SOLR-8878
 URL: https://issues.apache.org/jira/browse/SOLR-8878
 Project: Solr
  Issue Type: Improvement
Reporter: Joel Bernstein


Currently the DaemonStream sleeps for one second and then checks the 
runInterval param to determine if it needs to rerun the internal stream.

This setup will work fine if the runInterval is longer then one second and if 
it never changes. But with the TopicStream, you want a variable run rate. For 
example if the TopicStream's latest run has returned documents, the next run 
should be immediate. But if the TopicStream's latest run returned zero 
documents then you'd want to sleep for a period of time before starting the 
next run.

This ticket allows the internal stream to control the DaemonStream run rate by 
adding a *sleepMillis* key-pair to the EOF Tuple. After each run the 
DaemonStream will check the EOF Tuple from the internal stream and if the 
sleepMillis key-pair is present it will adjust it's run rate accordingly.





--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8742) HdfsDirectoryTest fails reliably after changes in LUCENE-6932

2016-03-19 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8742?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15197616#comment-15197616
 ] 

Mark Miller commented on SOLR-8742:
---

Also, this is using raw RAMInputStream in this case - nothing HDFS specific in 
this fail.

> HdfsDirectoryTest fails reliably after changes in LUCENE-6932
> -
>
> Key: SOLR-8742
> URL: https://issues.apache.org/jira/browse/SOLR-8742
> Project: Solr
>  Issue Type: Bug
>Reporter: Hoss Man
>
> the following seed fails reliably for me on master...
> {noformat}
>[junit4]   2> 1370568 INFO  
> (TEST-HdfsDirectoryTest.testEOF-seed#[A0D22782D87E1CE2]) [] 
> o.a.s.SolrTestCaseJ4 ###Ending testEOF
>[junit4]   2> NOTE: reproduce with: ant test  -Dtestcase=HdfsDirectoryTest 
> -Dtests.method=testEOF -Dtests.seed=A0D22782D87E1CE2 -Dtests.slow=true 
> -Dtests.locale=es-PR -Dtests.timezone=Indian/Mauritius -Dtests.asserts=true 
> -Dtests.file.encoding=ISO-8859-1
>[junit4] ERROR   0.13s J0 | HdfsDirectoryTest.testEOF <<<
>[junit4]> Throwable #1: java.lang.NullPointerException
>[junit4]>  at 
> __randomizedtesting.SeedInfo.seed([A0D22782D87E1CE2:31B9658A9A5ABA9E]:0)
>[junit4]>  at 
> org.apache.lucene.store.RAMInputStream.readByte(RAMInputStream.java:69)
>[junit4]>  at 
> org.apache.solr.store.hdfs.HdfsDirectoryTest.testEof(HdfsDirectoryTest.java:159)
>[junit4]>  at 
> org.apache.solr.store.hdfs.HdfsDirectoryTest.testEOF(HdfsDirectoryTest.java:151)
>[junit4]>  at java.lang.Thread.run(Thread.java:745)
> {noformat}
> git bisect says this is the first commit where it started failing..
> {noformat}
> ddc65d977f920013c5fca16c8ac75ae2c6895f9d is the first bad commit
> commit ddc65d977f920013c5fca16c8ac75ae2c6895f9d
> Author: Michael McCandless 
> Date:   Thu Jan 21 17:50:28 2016 +
> LUCENE-6932: RAMInputStream now throws EOFException if you seek beyond 
> the end of the file
> 
> git-svn-id: https://svn.apache.org/repos/asf/lucene/dev/trunk@1726039 
> 13f79535-47bb-0310-9956-ffa450edef68
> {noformat}
> ...which seems remarkable relevant and likely to indicate a problem that 
> needs fixed in the HdfsDirectory code (or perhaps just the test)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-8877) SolrCLI.java and corresponding test does not work with whitespace in path

2016-03-19 Thread Uwe Schindler (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8877?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Uwe Schindler updated SOLR-8877:

Description: 
The SolrCLI and the corresponding test use CommandLine.parse() of commons-exec, 
but in most cases the parameters are not correctly escaped.

CommandLine.parse() should be placed on forbidden-apis list. This is *not* a 
valid way to build a command line and execute it. The correct war is to create 
an instance of the CommandLine class and then add the arguments one-by one:

{code:java}
  org.apache.commons.exec.CommandLine startCmd = new 
org.apache.commons.exec.CommandLine(callScript);
  startCmd.addArguments(new String[] {
  "start",
  cloudModeArg,
  "-p",
  Integer.toString(port),
  "-s",
  solrHome,
  hostArg,
  zkHostArg,
  memArg,
  extraArgs,
  addlOptsArg
  });
{code}

I tried to fix it by using the approach, but the test then fails with other 
bugs on Windows. I disabled it for now if it detects whitespace in Solr's path. 
I think the reason might be that some of the above args are empty or are 
multi-args on itsself, so they get wrongly escaped.

I have no idea how to fix it, but the current way fails completely on Windows, 
where most users have a whitespace in their home directory or in the 
"C:\Program Files" folder.

  was:
The SolrCLI and the corresponding test use CommandLine.parse() of commons-exec, 
but in most cases the parameters are not correctly escaped.

CommandLine.parse() should be placed on forbidden-apis list. This is *not* a 
valid way to build a command line and execute it. The correct war is to create 
an instance of the CommandLine class and then add the arguments one-by one:

{code:java}
  org.apache.commons.exec.CommandLine startCmd = new 
org.apache.commons.exec.CommandLine(callScript);
  startCmd.addArguments(new String[] {
  "start",
  callScript,
  "-p",
  Integer.toString(port),
  "-s",
  solrHome,
  hostArg,
  zkHostArg,
  memArg,
  extraArgs,
  addlOptsArg
  });
{code}

I tried to fix it by using the approach, but the test then fails with other 
bugs on Windows. I disabled it for now if it detects whitespace in Solr's path. 
I think the reason might be that some of the above args are empty or are 
multi-args on itsself, so they get wrongly escaped.

I have no idea how to fix it, but the current way fails completely on Windows, 
where most users have a whitespace in their home directory or in the 
"C:\Program Files" folder.


> SolrCLI.java and corresponding test does not work with whitespace in path
> -
>
> Key: SOLR-8877
> URL: https://issues.apache.org/jira/browse/SOLR-8877
> Project: Solr
>  Issue Type: Bug
>  Components: scripts and tools
>Affects Versions: 5.5, 6.0
>Reporter: Uwe Schindler
> Attachments: SOLR-8877.patch
>
>
> The SolrCLI and the corresponding test use CommandLine.parse() of 
> commons-exec, but in most cases the parameters are not correctly escaped.
> CommandLine.parse() should be placed on forbidden-apis list. This is *not* a 
> valid way to build a command line and execute it. The correct war is to 
> create an instance of the CommandLine class and then add the arguments one-by 
> one:
> {code:java}
>   org.apache.commons.exec.CommandLine startCmd = new 
> org.apache.commons.exec.CommandLine(callScript);
>   startCmd.addArguments(new String[] {
>   "start",
>   cloudModeArg,
>   "-p",
>   Integer.toString(port),
>   "-s",
>   solrHome,
>   hostArg,
>   zkHostArg,
>   memArg,
>   extraArgs,
>   addlOptsArg
>   });
> {code}
> I tried to fix it by using the approach, but the test then fails with other 
> bugs on Windows. I disabled it for now if it detects whitespace in Solr's 
> path. I think the reason might be that some of the above args are empty or 
> are multi-args on itsself, so they get wrongly escaped.
> I have no idea how to fix it, but the current way fails completely on 
> Windows, where most users have a whitespace in their home directory or in the 
> "C:\Program Files" folder.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



RE: Welcome Kevin Risden as Lucene/Solr committer

2016-03-19 Thread Martin Gainty
¡Bienvenidos Kevin!
Martín 
__  
 



Date: Wed, 16 Mar 2016 14:36:32 -0700
Subject: Re: Welcome Kevin Risden as Lucene/Solr committer
From: tomasflo...@gmail.com
To: dev@lucene.apache.org

Welcome Kevin!

On Wed, Mar 16, 2016 at 1:23 PM, Kevin Risden  wrote:
Thanks for the warm welcome. Its an honor to be invited to work on

this project and with so many great people.



Bio:

I graduated from Rose-Hulman Institute of Technology in 2012. My

undergrad revolved around software development, software testing, and

robotics. In early 2013, I joined Avalon Consulting, LLC, moved down

to Austin, TX, and first started using Solr. The focus at the time was

to use Solr as an analytics engine to power charts/graphs. From 2013

on, I worked a lot on Hadoop and Solr integrations with a continued

focus on analytics. Providing training and education are two areas

that I am really passionate about. In addition to my regular work, I

have been improving the SolrJ JDBC driver to enable more analytics use

cases.

Kevin Risden





On Wed, Mar 16, 2016 at 12:55 PM, Anshum Gupta  wrote:

> Congratulations and Welcome Kevin!

>

> On Wed, Mar 16, 2016 at 10:03 AM, David Smiley 

> wrote:

>>

>> Welcome Kevin!

>>

>> (corrected misspelling of your last name in the subject)

>>

>> On Wed, Mar 16, 2016 at 1:02 PM Joel Bernstein  wrote:

>>>

>>> I'm pleased to announce that Kevin Risden has accepted the PMC's

>>> invitation to become a committer.

>>>

>>> Kevin, it's tradition that you introduce yourself with a brief bio.

>>>

>>> I believe your account has been setup and karma has been granted so that

>>> you can add yourself to the committers section of the Who We Are page on the

>>> website:

>>> .

>>>

>>> Congratulations and welcome!

>>>

>>>

>>> Joel Bernstein

>>>

>> --

>> Lucene/Solr Search Committer, Consultant, Developer, Author, Speaker

>> LinkedIn: http://linkedin.com/in/davidwsmiley | Book:

>> http://www.solrenterprisesearchserver.com

>

>

>

>

> --

> Anshum Gupta



-

To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org

For additional commands, e-mail: dev-h...@lucene.apache.org




  

[jira] [Commented] (SOLR-8765) Enforce required parameters in SolrJ Collection APIs

2016-03-19 Thread David Smiley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8765?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15200349#comment-15200349
 ] 

David Smiley commented on SOLR-8765:


I think this issue introduced a possible problem, very likely unintended as 
it's easy to overlook.  This is a new convenience method we are to use (or a 
like-kind constructor):
{code:java}
  public static Create createCollection(String collection, String config, int 
numShards, int numReplicas) {
{code}
Notice that numShards is a primitive int.  And notice that Create.numShards is 
an object Integer.  the setNumShards method is deprecated so I'll overlook that 
as I'm not to call it.  So how am I supposed to use this for the implicit 
router in which my intent is to manage the shards myself, without setting 
numShards?  Perhaps we shall have a separate convenience method & constructor 
expressly for the implicit router?

> Enforce required parameters in SolrJ Collection APIs
> 
>
> Key: SOLR-8765
> URL: https://issues.apache.org/jira/browse/SOLR-8765
> Project: Solr
>  Issue Type: Improvement
>Reporter: Alan Woodward
>Assignee: Alan Woodward
> Fix For: 6.1
>
> Attachments: SOLR-8765-splitshard.patch, SOLR-8765-splitshard.patch, 
> SOLR-8765.patch, SOLR-8765.patch
>
>
> Several Collection API commands have required parameters.  We should make 
> these constructor parameters, to enforce setting these in the API.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-7016) Solr/Lucene 5.4.1: FastVectorHighlighter still fails with StringIndexOutOfBoundsException

2016-03-19 Thread Markus Jelsma (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7016?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Markus Jelsma updated LUCENE-7016:
--
Attachment: SOLR-4137.patch

Here's a modified patch. It was never incorporated in Lucene. It applies to 5.5.

> Solr/Lucene 5.4.1: FastVectorHighlighter still fails with 
> StringIndexOutOfBoundsException
> -
>
> Key: LUCENE-7016
> URL: https://issues.apache.org/jira/browse/LUCENE-7016
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: modules/highlighter
>Affects Versions: 5.4.1
> Environment: OS X 10.10.5
>Reporter: Bjørn Hjelle
>Priority: Minor
>  Labels: fastvectorhighlighter
> Attachments: SOLR-4137.patch
>
>
> I have reported issues with highlighting of EdgeNGram fields in SOLR-7926. 
> As a workaround I now try to use an NGramField and the FastVectorHighlighter, 
> but I often hit the FastVectorHighlighter 
> StringIndexOutOfBoundsException-issue.
> Note that I use luceneMatchVersion="4.3". Without this, the whole term is 
> highlighted, not just the search-term, as I reported in SOLR-7926.
> Any help with this would be highly appreciated! (Or tips on how otherwise to 
> achieve proper highlighting of EdgeNGram and NGram-fields.)
> The issue can easily be reproduced by following these steps: 
> Download and start Solr 5.4.1, create a core:
> -
> $ wget http://apache.uib.no/lucene/solr/5.4.1/solr-5.4.1.tgz
> $ tar xvf solr-5.4.1.tgz
> $ cd solr-5.4.1
> $ bin/solr start -f 
> $ bin/solr create_core -c test -d server/solr/configsets/basic_configs
> (in a second terminal window)
> Add dynamic field and fieldtype to server/solr/test/conf/schema.xml:
> -
>  stored="true" termVectors="true" termPositions="true" termOffsets="true"/>
>   
>   
>   
>   
>maxGramSize="20" luceneMatchVersion="4.3"/>
>   
>   
>   
>   
>   
>   
> 
> Replace existing /select requestHandler in 
> server/solr/test/conf/solrconfig.xml with:
> -
> 
>
>  explicit
>  10
>  name_ngram
>  100%
>  edismax
>  
>   name_ngram
>  
>  *
>  true
>  name_ngram 
>  true
>
>   
>   
> Stop and restart Solr
> ---  
>   
> Create and index this document: 
> --  
> $ more doc.xml 
> 
>   
> 1
> Jan-Ole Pedersen
>   
> 
> $ bin/post -c test doc.xml 
> Execute search: 
> $ curl "http://localhost:8983/solr/test/select?q=jan+ol=json=true;
> {
>   "responseHeader":{
> "status":500,
> "QTime":3,
> "params":{
>   "q":"jan ol",
>   "indent":"true",
>   "wt":"json"}},
>   "response":{"numFound":1,"start":0,"docs":[
>   {
> "id":"1",
> "name_ngram":"Jan-Ole Pedersen",
> "_version_":1525256012582354944}]
>   },
>   "error":{
> "msg":"String index out of range: -6",
> "trace":"java.lang.StringIndexOutOfBoundsException: String index out of 
> range: -6\n\tat java.lang.String.substring(String.java:1954)\n\tat 
> org.apache.lucene.search.vectorhighlight.BaseFragmentsBuilder.makeFragment(BaseFragmentsBuilder.java:180)\n\tat
>  
> org.apache.lucene.search.vectorhighlight.BaseFragmentsBuilder.createFragments(BaseFragmentsBuilder.java:145)\n\tat
>  
> org.apache.lucene.search.vectorhighlight.FastVectorHighlighter.getBestFragments(FastVectorHighlighter.java:187)\n\tat
>  
> org.apache.solr.highlight.DefaultSolrHighlighter.doHighlightingByFastVectorHighlighter(DefaultSolrHighlighter.java:479)\n\tat
>  
> org.apache.solr.highlight.DefaultSolrHighlighter.doHighlighting(DefaultSolrHighlighter.java:426)\n\tat
>  
> org.apache.solr.handler.component.HighlightComponent.process(HighlightComponent.java:143)\n\tat
>  
> org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:273)\n\tat
>  
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:156)\n\tat
>  org.apache.solr.core.SolrCore.execute(SolrCore.java:2073)\n\tat 
> org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:658)\n\tat 
> org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:457)\n\tat 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:223)\n\tat
>  
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:181)\n\tat
>  
> 

[jira] [Comment Edited] (SOLR-8862) /live_nodes is populated too early to be very useful for clients -- CloudSolrClient (and MiniSolrCloudCluster.createCollection) need some other ephemeral zk node to

2016-03-19 Thread Hoss Man (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8862?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15200194#comment-15200194
 ] 

Hoss Man edited comment on SOLR-8862 at 3/17/16 10:01 PM:
--

Ok, so here's what i've found so far...

* Just adding a single line of logging to my test after {{configureCluster}} 
and before {{cluster.createCollection}} was enough to make the seed start 
passing fairly reliably.
** so clearly a finicky timing problem
* {{MiniSolrCloudCluster}}'s constructor has logic that waits for 
{{/live_nodes}} have {{numServer}} children before returning
** this was added in SOLR-7146 precisely because of problems like the one i'm 
seeing
** if there aren't the expected number of {{/live_nodes}} the first time it 
checks, then it sleeps in 1 second increments until there are.
* {{/live_nodes}} get's populated by {{ZkController.createEphemeralLiveNode}}
** -*THIS METHOD IS SUSPICIOUSLY CALLED IN TWO DIFF PLACES:*-
**# EDIT: this is actualy part of an {{OnReconnect}} handler that I 
misconstrued as something that would be called on the initial connect. -fairly 
early in the {{ZkController}} constructor-...{code}
// we have to register as live first to pick up docs in the buffer
createEphemeralLiveNode();
{code}
**# again as the very last thing in {{ZkControlle.init}}...{code}
// Do this last to signal we're up.
createEphemeralLiveNode();
{code}...this line+comment added in recently in SOLR-8696 when it replaced 
another previously existing call to {{createEphemeralLiveNode}} that was 
earlier in the init method (see 
https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;a=commitdiff;h=8ac4fdd;hp=7d32456efa4ade0130c3ed0ae677aa47b29355a9
 )
* Even if {{/live_nodes}} were only populated as the very last line in 
{{ZkController.init}}, that's far from the last thing that happens when a solr 
node starts up. Things that happen after {{ZkController}} is initialized but 
before {{CoreContainer.createAndLoad}} returns and the {{SolrDispatchFilter}} 
starts accepting requests:
** {{ZkContainer.initZooKeeper}}...
*** whatever the hell this is suppose to do...{code}
if (zkRun != null && zkServer.getServers().size() > 1 && confDir == null && 
boostrapConf == false) {
  // we are part of an ensemble and we are not uploading the config - pause to 
give the config time
  // to get up
  Thread.sleep(1);
}
{code}
*** any node that has a confDir uploads it to zk: 
{{configManager.uploadConfigDir(configPath, confName);}} (even if it's not 
bootstrapping???)
*** any node that *IS* doing bootstrap does that: 
{{ZkController.bootstrapConf(zkController.getZkClient(), cc, solrHome);}}
** {{CoreContainer.load()}}...
*** Authentication plugins are initialized
*** core * collection & configset & container handlers are initialized
*** *{{CoreDescriptor}} FOR EACH CORE DIR ON DISK ARE LOADED*
 which of course means opening transaction logs, opening indexwriters, open 
searchers, newSearcher event listeners, etc...
*** {{ZkController.checkOverseerDesignate()}} is called (no idea what that does)


Which all leads me to the following conclusions...

# when using {{MiniSolrCloudCluster}}, if you are lucky, there will be at least 
one node not yet in {{/live_nodes} when it does it's first check, and then it 
will sleep 1 second giving those nodes time to _actually_ startup & load their 
cores, and hopefully at least one of them will be completley finished by the 
time you actaully try to use a {{CloudSolrClient}} pointed at that ZK 
{{/live_nodes}} data.
# unless there is some other "i'm alive" data in ZK that 
{{MiniSolrCloudCluster}} should be consulting, it seems like it's doing the 
best it can to ensure that all the nodes are live before returning to the caller
# *This does not seem like a probably that only affects tests.*  This seems 
like a real world problem we shoudl address -- {{CloudSolrClient}} should be 
able to consult some info in ZK to know when a node is _really_ alive and ready 
for requests.
#* if there is a reason why the {{/live_nodes}} entry needs to be created as 
early as it is (ie: {{// we have to register as live first to pick up docs in 
the buffer}}) then it should only be created that one time and some other 
ephemeral node should be used
#* whatever ephemeral node is used should be created by a very explicit very 
special method call made as the very last thing in {{SolrDispatchFilter}}



was (Author: hossman):

Ok, so here's what i've found so far...

* Just adding a single line of logging to my test after {{configureCluster}} 
and before {{cluster.createCollection}} was enough to make the seed start 
passing fairly reliably.
** so clearly a finicky timing problem
* {{MiniSolrCloudCluster}}'s constructor has logic that waits for 
{{/live_nodes}} have {{numServer}} children before returning
** this was added in SOLR-7146 precisely because of problems like the one i'm 
seeing
** if there aren't the 

[JENKINS] Lucene-Solr-master-Linux (32bit/jdk1.8.0_72) - Build # 16233 - Failure!

2016-03-19 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/16233/
Java: 32bit/jdk1.8.0_72 -client -XX:+UseSerialGC

2 tests failed.
FAILED:  org.apache.solr.handler.TestReplicationHandler.doTestStressReplication

Error Message:
timed out waiting for collection1 startAt time to exceed: Thu Mar 17 07:42:23 
GMT 2016

Stack Trace:
java.lang.AssertionError: timed out waiting for collection1 startAt time to 
exceed: Thu Mar 17 07:42:23 GMT 2016
at 
__randomizedtesting.SeedInfo.seed([E7047B72BBDEF85:D5DB47712E958636]:0)
at org.junit.Assert.fail(Assert.java:93)
at 
org.apache.solr.handler.TestReplicationHandler.watchCoreStartAt(TestReplicationHandler.java:1422)
at 
org.apache.solr.handler.TestReplicationHandler.doTestStressReplication(TestReplicationHandler.java:774)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at java.lang.Thread.run(Thread.java:745)


FAILED:  org.apache.solr.handler.TestReqParamsAPI.test

Error Message:
Could not get expected value  'first' for 

[JENKINS-EA] Lucene-Solr-6.x-Linux (64bit/jdk-9-ea+109) - Build # 151 - Failure!

2016-03-19 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Linux/151/
Java: 64bit/jdk-9-ea+109 -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC

1 tests failed.
FAILED:  org.apache.solr.cloud.UnloadDistributedZkTest.test

Error Message:
Captured an uncaught exception in thread: Thread[id=6201, 
name=testExecutor-3395-thread-12, state=RUNNABLE, 
group=TGRP-UnloadDistributedZkTest]

Stack Trace:
com.carrotsearch.randomizedtesting.UncaughtExceptionError: Captured an uncaught 
exception in thread: Thread[id=6201, name=testExecutor-3395-thread-12, 
state=RUNNABLE, group=TGRP-UnloadDistributedZkTest]
Caused by: java.lang.RuntimeException: 
org.apache.solr.client.solrj.SolrServerException: Timeout occured while waiting 
response from server at: http://127.0.0.1:38149/pvgiu/i
at __randomizedtesting.SeedInfo.seed([83186880356751AC]:0)
at 
org.apache.solr.cloud.BasicDistributedZkTest.lambda$createCores$0(BasicDistributedZkTest.java:583)
at 
org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:229)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1158)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:632)
at java.lang.Thread.run(Thread.java:804)
Caused by: org.apache.solr.client.solrj.SolrServerException: Timeout occured 
while waiting response from server at: http://127.0.0.1:38149/pvgiu/i
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:588)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:241)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:230)
at org.apache.solr.client.solrj.SolrClient.request(SolrClient.java:1219)
at 
org.apache.solr.cloud.BasicDistributedZkTest.lambda$createCores$0(BasicDistributedZkTest.java:581)
... 4 more
Caused by: java.net.SocketTimeoutException: Read timed out
at java.net.SocketInputStream.socketRead0(Native Method)
at java.net.SocketInputStream.socketRead(SocketInputStream.java:116)
at java.net.SocketInputStream.read(SocketInputStream.java:170)
at java.net.SocketInputStream.read(SocketInputStream.java:141)
at 
org.apache.http.impl.io.AbstractSessionInputBuffer.fillBuffer(AbstractSessionInputBuffer.java:160)
at 
org.apache.http.impl.io.SocketInputBuffer.fillBuffer(SocketInputBuffer.java:84)
at 
org.apache.http.impl.io.AbstractSessionInputBuffer.readLine(AbstractSessionInputBuffer.java:273)
at 
org.apache.http.impl.conn.DefaultHttpResponseParser.parseHead(DefaultHttpResponseParser.java:140)
at 
org.apache.http.impl.conn.DefaultHttpResponseParser.parseHead(DefaultHttpResponseParser.java:57)
at 
org.apache.http.impl.io.AbstractMessageParser.parse(AbstractMessageParser.java:261)
at 
org.apache.http.impl.AbstractHttpClientConnection.receiveResponseHeader(AbstractHttpClientConnection.java:283)
at 
org.apache.http.impl.conn.DefaultClientConnection.receiveResponseHeader(DefaultClientConnection.java:251)
at 
org.apache.http.impl.conn.ManagedClientConnectionImpl.receiveResponseHeader(ManagedClientConnectionImpl.java:197)
at 
org.apache.http.protocol.HttpRequestExecutor.doReceiveResponse(HttpRequestExecutor.java:272)
at 
org.apache.http.protocol.HttpRequestExecutor.execute(HttpRequestExecutor.java:124)
at 
org.apache.http.impl.client.DefaultRequestDirector.tryExecute(DefaultRequestDirector.java:685)
at 
org.apache.http.impl.client.DefaultRequestDirector.execute(DefaultRequestDirector.java:487)
at 
org.apache.http.impl.client.AbstractHttpClient.doExecute(AbstractHttpClient.java:882)
at 
org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:82)
at 
org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:107)
at 
org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:55)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:482)
... 8 more




Build Log:
[...truncated 11345 lines...]
   [junit4] Suite: org.apache.solr.cloud.UnloadDistributedZkTest
   [junit4]   2> Creating dataDir: 
/home/jenkins/workspace/Lucene-Solr-6.x-Linux/solr/build/solr-core/test/J2/temp/solr.cloud.UnloadDistributedZkTest_83186880356751AC-001/init-core-data-001
   [junit4]   2> 716724 INFO  
(SUITE-UnloadDistributedZkTest-seed#[83186880356751AC]-worker) [] 
o.a.s.BaseDistributedSearchTestCase Setting hostContext system property: 
/pvgiu/i
   [junit4]   2> 716725 INFO  
(TEST-UnloadDistributedZkTest.test-seed#[83186880356751AC]) [] 
o.a.s.c.ZkTestServer STARTING ZK TEST SERVER
   [junit4]   2> 716725 INFO  (Thread-2038) [] o.a.s.c.ZkTestServer client 
port:0.0.0.0/0.0.0.0:0
   [junit4]   2> 716725 INFO  (Thread-2038) [] 

[jira] [Commented] (SOLR-8814) Support GeoJSON response format

2016-03-19 Thread Ryan McKinley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8814?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15197720#comment-15197720
 ] 

Ryan McKinley commented on SOLR-8814:
-

bq. In the test, it appears System.setProperty("enable.update.log", "false"); 
// schema12 doesn't support version is not needed since you don't use schema12

fixed -- thanks

bq. I suggest initializing the HashMap of the built-in transformers with the 
no-arg constructor (TransformerFactory.java), and same thing for the response 
writers (SolrCore.java). It's not worth it in trying in trying to optimize & 
maintain anything else. I realize you didn't introduce these but I suggest 
ending it now.

Lets open another issue if you care about this... i don't know enough to say, 
and don't want that discussion to get lost in this issue

bq. Personally I'd find it far easier to interpret the test if I was looking at 
the JSON string or toString'ed Map or whatever it is, versus the laborious 
extraction of each part of the data structure. If you disagree, leave it.

I think the tests have a good mix of this -- some are testing with strings and 
others are checking the direct element.  (Where parsing is important)

bq. GeoTransformerFactory.java doesn't compile for me; it references 
GeoJSONResponseWriter.FIELD which doesn't exist. The patch file itself seemed 
strange; seemed like a list of commits and not one patch. Maybe this is related.

sorry, my git patch was weird.  It was the 'patch' flavor, not the 'diff' flavor


> Support GeoJSON response format
> ---
>
> Key: SOLR-8814
> URL: https://issues.apache.org/jira/browse/SOLR-8814
> Project: Solr
>  Issue Type: New Feature
>  Components: Response Writers
>Reporter: Ryan McKinley
>Priority: Minor
> Fix For: master, 6.1
>
> Attachments: SOLR-8814-add-GeoJSONResponseWriter.patch, 
> SOLR-8814-add-GeoJSONResponseWriter.patch, 
> SOLR-8814-add-GeoJSONResponseWriter.patch
>
>
> With minor changes, we can modify the existing JSON writer to produce a 
> GeoJSON `FeatureCollection` for ever SolrDocumentList.  We can then pick a 
> field to use as the geometry type, and use that for the Feature#geometry
> {code}
> "response":{"type":"FeatureCollection","numFound":1,"start":0,"features":[
>   {"type":"Feature",
> "geometry":{"type":"Point","coordinates":[1,2]},
> "properties":{
>   ... the normal solr doc fields here ...}}]
>   }}
> {code}
> This will allow adding solr results directly to various mapping clients like 
> [Leaflet|http://leafletjs.com/]
> 
> This patch will work with Documents that have a spatial field the either:
> 1. Extends AbstractSpatialFieldType
> 2. has a stored value with geojson
> 2. has a stored value that can be parsed by spatial4j (WKT, etc)
> The spatial field is identified with the parameter `geojson.field`



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-8082) can't query against negative float or double values when indexed="false" docValues="true" multiValued="false"

2016-03-19 Thread Ishan Chattopadhyaya (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8082?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ishan Chattopadhyaya updated SOLR-8082:
---
Attachment: SOLR-8082.patch

Updating the patch. This contains a randomized test, which I am currently 
beasting now.
This depends on a patch for a bug I found during testing this, LUCENE-7111.

If the beasting goes fine, I think this is a fix that behaves correctly. But, 
still not sure if this is the best fix to have, since there possibly exists 
another alternative (which I'll look into after this) to write the longs in 
sortable order itself.

> can't query against negative float or double values when indexed="false" 
> docValues="true" multiValued="false"
> -
>
> Key: SOLR-8082
> URL: https://issues.apache.org/jira/browse/SOLR-8082
> Project: Solr
>  Issue Type: Bug
>Reporter: Hoss Man
> Attachments: SOLR-8082.patch, SOLR-8082.patch, SOLR-8082.patch, 
> SOLR-8082.patch, SOLR-8082.patch, SOLR-8082.patch
>
>
> Haven't dug into this yet, but something is evidently wrong in how the 
> DocValues based queries get build for single valued float or double fields 
> when negative numbers are involved.
> Steps to reproduce...
> {noformat}
> $ bin/solr -e schemaless -noprompt
> ...
> $ curl -X POST -H 'Content-type:application/json' --data-binary '{ 
> "add-field":{ "name":"f_dv_multi", "type":"tfloat", "stored":"true", 
> "indexed":"false", "docValues":"true", "multiValued":"true" }, "add-field":{ 
> "name":"f_dv_single", "type":"tfloat", "stored":"true", "indexed":"false", 
> "docValues":"true", "multiValued":"false" } }' 
> http://localhost:8983/solr/gettingstarted/schema
> {
>   "responseHeader":{
> "status":0,
> "QTime":84}}
> $ curl -X POST -H 'Content-type:application/json' --data-binary 
> '[{"id":"test", "f_dv_multi":-4.3, "f_dv_single":-4.3}]' 
> 'http://localhost:8983/solr/gettingstarted/update/json/docs?commit=true'
> {"responseHeader":{"status":0,"QTime":57}}
> $ curl 'http://localhost:8983/solr/gettingstarted/query?q=f_dv_multi:"-4.3;'
> {
>   "responseHeader":{
> "status":0,
> "QTime":5,
> "params":{
>   "q":"f_dv_multi:\"-4.3\""}},
>   "response":{"numFound":1,"start":0,"docs":[
>   {
> "id":"test",
> "f_dv_multi":[-4.3],
> "f_dv_single":-4.3,
> "_version_":1512962117004689408}]
>   }}
> $ curl 'http://localhost:8983/solr/gettingstarted/query?q=f_dv_single:"-4.3;'
> {
>   "responseHeader":{
> "status":0,
> "QTime":5,
> "params":{
>   "q":"f_dv_single:\"-4.3\""}},
>   "response":{"numFound":0,"start":0,"docs":[]
>   }}
> {noformat}
> Explicit range queries (which is how numeric "field" queries are implemented 
> under the cover) are equally problematic...
> {noformat}
> $ curl 
> 'http://localhost:8983/solr/gettingstarted/query?q=f_dv_multi:%5B-4.3+TO+-4.3%5D'
> {
>   "responseHeader":{
> "status":0,
> "QTime":0,
> "params":{
>   "q":"f_dv_multi:[-4.3 TO -4.3]"}},
>   "response":{"numFound":1,"start":0,"docs":[
>   {
> "id":"test",
> "f_dv_multi":[-4.3],
> "f_dv_single":-4.3,
> "_version_":1512962117004689408}]
>   }}
> $ curl 
> 'http://localhost:8983/solr/gettingstarted/query?q=f_dv_single:%5B-4.3+TO+-4.3%5D'
> {
>   "responseHeader":{
> "status":0,
> "QTime":0,
> "params":{
>   "q":"f_dv_single:[-4.3 TO -4.3]"}},
>   "response":{"numFound":0,"start":0,"docs":[]
>   }}
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-4509) Disable HttpClient stale check for performance.

2016-03-19 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4509?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15200999#comment-15200999
 ] 

Mark Miller commented on SOLR-4509:
---

Actually this snuck in even before SSL, it was added in 4.0.  It was quietly 
added, but internal features did not count on it or use it. SSL only used it 
for test purposes later on. Security has bear hugged it though - it's how you 
configure security now and is part of a user plugin api. Bummer given the old 
deprecated HttpClient classes involved and the extra pain to move to 
preconfiguration. We should try and minimize exposing internal client API's as 
part of our API's to users. It really locks us in.

> Disable HttpClient stale check for performance.
> ---
>
> Key: SOLR-4509
> URL: https://issues.apache.org/jira/browse/SOLR-4509
> Project: Solr
>  Issue Type: Improvement
>  Components: search
> Environment: 5 node SmartOS cluster (all nodes living in same global 
> zone - i.e. same physical machine)
>Reporter: Ryan Zezeski
>Assignee: Mark Miller
>Priority: Minor
> Fix For: 5.0, master
>
> Attachments: IsStaleTime.java, SOLR-4509-4_4_0.patch, 
> SOLR-4509.patch, SOLR-4509.patch, SOLR-4509.patch, SOLR-4509.patch, 
> SOLR-4509.patch, SOLR-4509.patch, SOLR-4509.patch, SOLR-4509.patch, 
> SOLR-4509.patch, baremetal-stale-nostale-med-latency.dat, 
> baremetal-stale-nostale-med-latency.svg, 
> baremetal-stale-nostale-throughput.dat, baremetal-stale-nostale-throughput.svg
>
>
> By disabling the Apache HTTP Client stale check I've witnessed a 2-4x 
> increase in throughput and reduction of over 100ms.  This patch was made in 
> the context of a project I'm leading, called Yokozuna, which relies on 
> distributed search.
> Here's the patch on Yokozuna: https://github.com/rzezeski/yokozuna/pull/26
> Here's a write-up I did on my findings: 
> http://www.zinascii.com/2013/solr-distributed-search-and-the-stale-check.html
> I'm happy to answer any questions or make changes to the patch to make it 
> acceptable.
> ReviewBoard: https://reviews.apache.org/r/28393/



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-8858) SolrIndexSearcher#doc() Completely Ignores Field Filters Unless Lazy Field Loading is Enabled

2016-03-19 Thread Caleb Rackliffe (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8858?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Caleb Rackliffe updated SOLR-8858:
--
 Flags: Patch
External issue URL: https://github.com/apache/lucene-solr/pull/21

I've posted a PR that fixes this in what I'm hoping is a reasonable way. I 
imagine the impact will mostly fall on custom {{StoredFieldsReader}} 
implementations.

> SolrIndexSearcher#doc() Completely Ignores Field Filters Unless Lazy Field 
> Loading is Enabled
> -
>
> Key: SOLR-8858
> URL: https://issues.apache.org/jira/browse/SOLR-8858
> Project: Solr
>  Issue Type: Bug
>Reporter: Caleb Rackliffe
>  Labels: easyfix
> Fix For: 5.5.1
>
>
> If {{enableLazyFieldLoading=false}}, a perfectly valid fields filter will be 
> ignored, and we'll create a {{DocumentStoredFieldVisitor}} without it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-7110) Add Shape Support to BKD (extend to an R*/X-Tree data structure)

2016-03-19 Thread Nicholas Knize (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7110?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nicholas Knize updated LUCENE-7110:
---
Description: 
I've been tinkering with this off and on for a while and its showing some 
promise so I'm going to open an issue to (eventually) add this feature to 
either a 6.x or (more likely) a 7.x release.

R*/X-Tree is a data structure designed to support Shapes (2D, 3D, nD) where, 
like the internal node, the key for each leaf node is the Minimum Bounding 
Range (MBR - sometimes "incorrectly" referred to as Minimum Bounding Rectangle) 
of the shape. Inserting a shape then boils down to the best way of optimizing 
the tree structure. This optimization is driven by a set of criteria for 
choosing the appropriate internal key (e.g., minimizing overlap between 
siblings, maximizing "squareness", minimizing area, maximizing space usage). 
Query is then (a bit oversimplified) a two-phase process:
* recurse each branch that overlaps with the MBR of the query shape
* compute the relation with the leaf node(s) - in higher dimensions (3+) this 
becomes an increasingly difficult computational geometry problem

The current BKD implementation is a special simplified case of an R*/X tree 
where, for Point data, it is always guaranteed there will be no overlap between 
sibling nodes (because you're using the point data as the keys). By exploiting 
this property the tree algorithms (split, merge, etc) are relatively cheap 
(hence their performance boost over postings based numerics). By modifying the 
key data, and extending the tree generation algorithms BKD logic can be 
extended to support Shape data using the MBR as the Key and modifying split and 
merge based on the criteria needed for optimizing a shape-based data structure.

The initial implementation (based on limitations of the GeoAPI) will support 2D 
shapes only. Once the GeoAPI can performantly handle 3D shapes the change is 
relatively trivial to add the third dimension to the tree generation code.

Like everything else, this feature will be created in sandbox and, once mature, 
will graduate to lucene-spatial.

  was:
I've been tinkering with this off and on for a while and its showing some 
promise so I'm going to open an issue to (eventually) add this feature to 
either a 6.x or (more likely) a 7.x release.

R*/X-Tree is a data structure designed to support Shapes (2D, 3D, nD) where, 
like the internal node, the key for each leaf node is the Minimum Bounding 
Range (MBR - sometimes "incorrectly" referred to as Minimum Bounding Rectangle) 
of the shape. Inserting a shape then boils down to the best way of optimizing 
the tree structure. This optimization is driven by a set of criteria for 
choosing the appropriate internal key (e.g., minimizing overlap between 
siblings, maximizing "squareness", minimizing area, maximizing space usage). 
Query is then (a bit oversimplified) a two-phase process:
* recurse each branch that overlaps with the MBR of the query shape
* compute the relation with the leaf node(s) - in higher dimensions (3+) this 
becomes an increasingly difficult computational geometry problem
The current BKD implementation is a special simplified case of an R*/X tree 
where, for Point data, it is always guaranteed there will be no overlap between 
sibling nodes (because you're using the point data as the keys). By exploiting 
this property the tree algorithms (split, merge, etc) are relatively cheap 
(hence their performance boost over postings based numerics). By modifying the 
key data, and extending the tree generation algorithms BKD logic can be 
extended to support Shape data using the MBR as the Key and modifying split and 
merge based on the criteria needed for optimizing a shape-based data structure.

The initial implementation (based on limitations of the GeoAPI) will support 2D 
shapes only. Once the GeoAPI can performantly handle 3D shapes the change is 
relatively trivial to add the third dimension to the tree generation code.

Like everything else, this feature will be created in sandbox and, once mature, 
will graduate to lucene-spatial.


> Add Shape Support to BKD (extend to an R*/X-Tree data structure)
> 
>
> Key: LUCENE-7110
> URL: https://issues.apache.org/jira/browse/LUCENE-7110
> Project: Lucene - Core
>  Issue Type: New Feature
>Reporter: Nicholas Knize
>
> I've been tinkering with this off and on for a while and its showing some 
> promise so I'm going to open an issue to (eventually) add this feature to 
> either a 6.x or (more likely) a 7.x release.
> R*/X-Tree is a data structure designed to support Shapes (2D, 3D, nD) where, 
> like the internal node, the key for each leaf node is the Minimum Bounding 
> Range (MBR - sometimes "incorrectly" referred to as Minimum Bounding 
> Rectangle) of 

[jira] [Commented] (SOLR-8842) security should use an API to expose the permission name instead of using HTTP params

2016-03-19 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8842?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15201170#comment-15201170
 ] 

ASF subversion and git services commented on SOLR-8842:
---

Commit faa0586b31d5644360646010ceaf530cbe227498 in lucene-solr's branch 
refs/heads/apiv2 from [~noble.paul]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=faa0586 ]

SOLR-8842: security rules made more foolproof by asking the requesthandler  
about the well known
  permission name.
  The APIs are also modified to ue 'index' as the unique 
identifier instead of name.
  Name is an optional attribute
  now and only to be used when specifying 
well-known permissions


> security should use an API to expose the permission name instead of using 
> HTTP params
> -
>
> Key: SOLR-8842
> URL: https://issues.apache.org/jira/browse/SOLR-8842
> Project: Solr
>  Issue Type: Improvement
>Reporter: Noble Paul
>Assignee: Noble Paul
>  Labels: security
> Fix For: master, 6.1
>
> Attachments: SOLR-8842.patch, SOLR-8842.patch
>
>
> Currently the well-known permissions are using the HTTP atributes, such as 
> method, uri, params etc to identify the corresponding permission name such as 
> 'read', 'update' etc. Expose this value through an API so that it can be more 
> accurate and handle various versions of the API
> RequestHandlers will be able to implement an interface to provide the name
> {code}
> interface PermissionNameProvider {
>  Name getPermissionName(SolrQueryRequest req)
> }
> {code} 
> This means many significant changes to the API
> 1) {{name}} does not mean a set of http attributes. Name is decided by the 
> requesthandler . Which means it's possible to use the same name across 
> different permissions.  
> examples
> {code}
> {
> "permissions": [
> {//this permission applies to all collections
>   "name": "read",
>   "role": "dev"
> },
> {
>  
>  // this applies to only collection x. But both means you are hitting a 
> read type API
>   "name": "read",
>   "collection": "x",
>   "role": "x_dev"
> }
>   ]
> }
> {code} 
> 2) so far we have been using the name as something unique. We use the name to 
> do an {{update-permission}} , {{delete-permission}} or even when you wish to 
> insert a permission before another permission we used to use the name. Going 
> forward it is not possible. Every permission will get an implicit index. 
> example
> {code}
> {
>   "permissions": [
> {
>   "name": "read",
>   "role": "dev",
>//this attribute is automatically assigned by the system
>   "index" : 1
> },
> {
>   "name": "read",
>   "collection": "x",
>   "role": "x_dev",
>   "index" : 2
> }
>   ]
> }
> {code}
> 3) example update commands
> {code}
> {
>   "set-permission" : {
> "index": 2,
> "name": "read",
> "collection" : "x",
> "role" :["xdev","admin"]
>   },
>   //this deletes the permission at index 2
>   "delete-permission" : 2,
>   //this will insert the command before the first item
>   "set-permission": {
> "name":"config-edit",
> "role":"admin",
> "before":1
>   }
> }
> {code}
> 4) you could construct a  permission purely with http attributes and you 
> don't need any name for that. As expected, this will be appended atthe end of 
> the list of permissions
> {code}
> {
>   "set-permission": {
>  "collection": null,
>  "path":"/admin/collections",
>  "params":{"action":[LIST, CREATE]},
>  "role": "admin"}
> }
> {code}
> Users with existing configuration will not observe any change in behavior. 
> But the commands issued to manipulate the permissions will be different .



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8082) can't query against negative float or double values when indexed="false" docValues="true" multiValued="false"

2016-03-19 Thread Yonik Seeley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8082?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15199673#comment-15199673
 ] 

Yonik Seeley commented on SOLR-8082:


bq. Do you mean that instead of using the DocValuesRangeQuery.newLongRange(), 
we write something on our own which converts the longs back to floats/doubles 
and then compares those floats/doubles?

Yeah, that's one way.
But maybe we should go ahead and fix ValueSourceRangeFilter to not match 
documents w/o a value in the field.  It's arguably a bug (and only existed 
historically because we didn't have info about what fields had a value for 
numerics, and didn't have exists()) and 6.0 is the perfect time to make the 
change.

bq. Do you think the NumericUtils.doubleToSortableLong() is a good choice for 
converting float/double to longs, instead of Double.doubleToLongBits() which is 
currently used?

Both have their advantages... while sortable longs might be convenient when 
operating in the "long" space, it would slow things down when converting back 
to a double.  


> can't query against negative float or double values when indexed="false" 
> docValues="true" multiValued="false"
> -
>
> Key: SOLR-8082
> URL: https://issues.apache.org/jira/browse/SOLR-8082
> Project: Solr
>  Issue Type: Bug
>Reporter: Hoss Man
> Attachments: SOLR-8082.patch, SOLR-8082.patch, SOLR-8082.patch, 
> SOLR-8082.patch, SOLR-8082.patch, SOLR-8082.patch
>
>
> Haven't dug into this yet, but something is evidently wrong in how the 
> DocValues based queries get build for single valued float or double fields 
> when negative numbers are involved.
> Steps to reproduce...
> {noformat}
> $ bin/solr -e schemaless -noprompt
> ...
> $ curl -X POST -H 'Content-type:application/json' --data-binary '{ 
> "add-field":{ "name":"f_dv_multi", "type":"tfloat", "stored":"true", 
> "indexed":"false", "docValues":"true", "multiValued":"true" }, "add-field":{ 
> "name":"f_dv_single", "type":"tfloat", "stored":"true", "indexed":"false", 
> "docValues":"true", "multiValued":"false" } }' 
> http://localhost:8983/solr/gettingstarted/schema
> {
>   "responseHeader":{
> "status":0,
> "QTime":84}}
> $ curl -X POST -H 'Content-type:application/json' --data-binary 
> '[{"id":"test", "f_dv_multi":-4.3, "f_dv_single":-4.3}]' 
> 'http://localhost:8983/solr/gettingstarted/update/json/docs?commit=true'
> {"responseHeader":{"status":0,"QTime":57}}
> $ curl 'http://localhost:8983/solr/gettingstarted/query?q=f_dv_multi:"-4.3;'
> {
>   "responseHeader":{
> "status":0,
> "QTime":5,
> "params":{
>   "q":"f_dv_multi:\"-4.3\""}},
>   "response":{"numFound":1,"start":0,"docs":[
>   {
> "id":"test",
> "f_dv_multi":[-4.3],
> "f_dv_single":-4.3,
> "_version_":1512962117004689408}]
>   }}
> $ curl 'http://localhost:8983/solr/gettingstarted/query?q=f_dv_single:"-4.3;'
> {
>   "responseHeader":{
> "status":0,
> "QTime":5,
> "params":{
>   "q":"f_dv_single:\"-4.3\""}},
>   "response":{"numFound":0,"start":0,"docs":[]
>   }}
> {noformat}
> Explicit range queries (which is how numeric "field" queries are implemented 
> under the cover) are equally problematic...
> {noformat}
> $ curl 
> 'http://localhost:8983/solr/gettingstarted/query?q=f_dv_multi:%5B-4.3+TO+-4.3%5D'
> {
>   "responseHeader":{
> "status":0,
> "QTime":0,
> "params":{
>   "q":"f_dv_multi:[-4.3 TO -4.3]"}},
>   "response":{"numFound":1,"start":0,"docs":[
>   {
> "id":"test",
> "f_dv_multi":[-4.3],
> "f_dv_single":-4.3,
> "_version_":1512962117004689408}]
>   }}
> $ curl 
> 'http://localhost:8983/solr/gettingstarted/query?q=f_dv_single:%5B-4.3+TO+-4.3%5D'
> {
>   "responseHeader":{
> "status":0,
> "QTime":0,
> "params":{
>   "q":"f_dv_single:[-4.3 TO -4.3]"}},
>   "response":{"numFound":0,"start":0,"docs":[]
>   }}
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Splitting Solr artifacts so the main download is smaller

2016-03-19 Thread Shawn Heisey
I'd like to see some motion on this, which probably means I need to do
it myself.  I'd like to know who I can talk to about the build/packaging
system so I can find what needs to change, and especially so I don't
break it.

There's already a jira issue -- SOLR-6806, with some related bits in
SOLR-5103.

The Solr download for 5.5.0 is 130 or 138 megabytes, depending on what
OS you're going to install it on.  For the rest of this email, let's
focus on the .zip version (138MB), since my client is Windows and I'd
like to compare apples to apples.

We have a .zip download size of 138MB, which thankfully is down in size
since we completely dropped the war file.  That *other* search engine
based on Lucene has a .zip download size of 28MB.

I started fiddling with the download archive on my Windows machine,
pulling out obvious pieces at the root of the extracted archive, and
managed to get the .zip size down to 40MB.

If I dig further and remove the lucene-analyzers-kuromoji jar (over 4MB)
and the hadoop jars (10MB), which the majority of Solr's users will
*never* need, Solr 5.5's .zip file drops to 25MB.

I'm not suggesting that we just remove these pieces.  We would need to
have a main artifact and several supporting artifacts.  The total size
would be virtually the same, so the concerns in LUCENE-5589 and
LUCENE-6247 will not get worse.  They also won't get better.

There's plenty of opportunity for bikeshedding here, but that should be
done in Jira.  For this email, I'd like to know if anyone has strong
opposition to this, and if not, who would be willing to provide guidance
for how to do it right.

Thanks,
Shawn


-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-8819) Implement DatabaseMetaDataImpl getTables() and fix getSchemas()

2016-03-19 Thread Trey Cahill (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8819?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Trey Cahill updated SOLR-8819:
--
Attachment: SOLR-8819.patch

> Implement DatabaseMetaDataImpl getTables() and fix getSchemas()
> ---
>
> Key: SOLR-8819
> URL: https://issues.apache.org/jira/browse/SOLR-8819
> Project: Solr
>  Issue Type: Sub-task
>  Components: SolrJ
>Affects Versions: master, 6.0
>Reporter: Kevin Risden
> Attachments: SOLR-8819.patch, SOLR-8819.patch, SOLR-8819.patch, 
> SOLR-8819.patch, SOLR-8819.patch, SOLR-8819.patch, SOLR-8819.patch
>
>
> DbVisualizer NPE when clicking on DB References tab. After connecting, NPE if 
> double click on "DB" under connection name then click on References tab.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8765) Enforce required parameters in SolrJ Collection APIs

2016-03-19 Thread Alan Woodward (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8765?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15199163#comment-15199163
 ] 

Alan Woodward commented on SOLR-8765:
-

I'll try and get to it tomorrow (UK time)

> Enforce required parameters in SolrJ Collection APIs
> 
>
> Key: SOLR-8765
> URL: https://issues.apache.org/jira/browse/SOLR-8765
> Project: Solr
>  Issue Type: Improvement
>Reporter: Alan Woodward
>Assignee: Alan Woodward
> Fix For: 6.1
>
> Attachments: SOLR-8765-splitshard.patch, SOLR-8765-splitshard.patch, 
> SOLR-8765.patch, SOLR-8765.patch
>
>
> Several Collection API commands have required parameters.  We should make 
> these constructor parameters, to enforce setting these in the API.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7118) Remove multidimensional arrays from PointRangeQuery

2016-03-19 Thread Nicholas Knize (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7118?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15201966#comment-15201966
 ] 

Nicholas Knize commented on LUCENE-7118:


Nice! +1 for 6.0

> Remove multidimensional arrays from PointRangeQuery
> ---
>
> Key: LUCENE-7118
> URL: https://issues.apache.org/jira/browse/LUCENE-7118
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Robert Muir
> Attachments: LUCENE-7118.patch
>
>
> This use of byte[][] has caused two bugs: LUCENE-7085 and LUCENE-7117.
> It is not necessary, and causes code duplication in most Point classes 
> because they have to have a {{pack()}} that encodes to byte[] for the indexer 
> but a {{encode()}} or similar that makes multi-D byte[][] for just this query.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-8863) zkcli: provide more granularity in config manipulation

2016-03-19 Thread Shawn Heisey (JIRA)
Shawn Heisey created SOLR-8863:
--

 Summary: zkcli: provide more granularity in config manipulation
 Key: SOLR-8863
 URL: https://issues.apache.org/jira/browse/SOLR-8863
 Project: Solr
  Issue Type: Improvement
  Components: scripts and tools, SolrCloud
Affects Versions: 5.5
Reporter: Shawn Heisey
Priority: Minor


I was thinking about what somebody has to do if they want to replace a single 
file in a specific SolrCloud configuration.  This and other operations could be 
easier with some tweaks to the zkcli program.

I'd like to have some options to do things like the following, and other 
combinations not specifically stated here:

 * Upload the file named solrconfig.xml to the 'foo' config.
 * Upload the file named solrconfig.xml to the config used by the 'bar' 
collection.
 * Download the file named stopwords.txt from the config used by the 'bar' 
collection.
 * Rename schema.xml to managed-schema in the 'foo' config.
 * Delete archaic_stopwords.txt from the config used by the 'bar' collection.

When a config is changed, it would be a good idea for the program to print out 
a list of all collections affected by the change.  I can imagine a 
"-interactive" option that asks "are you sure" after printing the affected 
collection list, and a "-dry-run" option to print out that information without 
actually doing anything.  An alternative to the interactive option -- have the 
program prompt by default and implement a "-force" option to do it without 
prompting.

I wonder whether it would be a good idea to include an option to reload all 
affected collections after a change is made.  The script uses WEB-INF/lib on 
the classpath, so SolrJ should be available.




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-NightlyTests-master - Build # 962 - Still Failing

2016-03-19 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-master/962/

1 tests failed.
FAILED:  
org.apache.solr.handler.TestReplicationHandler.doTestReplicateAfterCoreReload

Error Message:
expected:<[{indexVersion=1458142313332,generation=2,filelist=[_0.cfe, _0.cfs, 
_0.si, _1.cfe, _1.cfs, _1.si, _2.cfe, _2.cfs, _2.si, _3.cfe, _3.cfs, _3.si, 
_4.cfe, _4.cfs, _4.si, _5.cfe, _5.cfs, _5.si, segments_2]}]> but 
was:<[{indexVersion=1458142313332,generation=2,filelist=[_0.cfe, _0.cfs, _0.si, 
_1.cfe, _1.cfs, _1.si, _2.cfe, _2.cfs, _2.si, _3.cfe, _3.cfs, _3.si, _4.cfe, 
_4.cfs, _4.si, _5.cfe, _5.cfs, _5.si, segments_2]}, 
{indexVersion=1458142313332,generation=3,filelist=[_3.cfe, _3.cfs, _3.si, 
_6.cfe, _6.cfs, _6.si, segments_3]}]>

Stack Trace:
java.lang.AssertionError: 
expected:<[{indexVersion=1458142313332,generation=2,filelist=[_0.cfe, _0.cfs, 
_0.si, _1.cfe, _1.cfs, _1.si, _2.cfe, _2.cfs, _2.si, _3.cfe, _3.cfs, _3.si, 
_4.cfe, _4.cfs, _4.si, _5.cfe, _5.cfs, _5.si, segments_2]}]> but 
was:<[{indexVersion=1458142313332,generation=2,filelist=[_0.cfe, _0.cfs, _0.si, 
_1.cfe, _1.cfs, _1.si, _2.cfe, _2.cfs, _2.si, _3.cfe, _3.cfs, _3.si, _4.cfe, 
_4.cfs, _4.si, _5.cfe, _5.cfs, _5.si, segments_2]}, 
{indexVersion=1458142313332,generation=3,filelist=[_3.cfe, _3.cfs, _3.si, 
_6.cfe, _6.cfs, _6.si, segments_3]}]>
at 
__randomizedtesting.SeedInfo.seed([F3C55E34BC995956:D6124504CCD15755]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.failNotEquals(Assert.java:647)
at org.junit.Assert.assertEquals(Assert.java:128)
at org.junit.Assert.assertEquals(Assert.java:147)
at 
org.apache.solr.handler.TestReplicationHandler.doTestReplicateAfterCoreReload(TestReplicationHandler.java:1143)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[JENKINS-EA] Lucene-Solr-master-Linux (64bit/jdk-9-ea+109) - Build # 16250 - Failure!

2016-03-19 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/16250/
Java: 64bit/jdk-9-ea+109 -XX:-UseCompressedOops -XX:+UseSerialGC

1 tests failed.
FAILED:  org.apache.solr.handler.TestReqParamsAPI.test

Error Message:
Could not get expected value  'P val' for path 'response/params/y/p' full 
output: {   "responseHeader":{ "status":0, "QTime":0},   "response":{   
  "znodeVersion":2, "params":{   "x":{ "a":"A val", 
"b":"B val", "":{"v":0}},   "y":{ "c":"CY val modified",
 "b":"BY val", "i":20, "d":[   "val 1",   
"val 2"], "e":"EY val", "":{"v":1}

Stack Trace:
java.lang.AssertionError: Could not get expected value  'P val' for path 
'response/params/y/p' full output: {
  "responseHeader":{
"status":0,
"QTime":0},
  "response":{
"znodeVersion":2,
"params":{
  "x":{
"a":"A val",
"b":"B val",
"":{"v":0}},
  "y":{
"c":"CY val modified",
"b":"BY val",
"i":20,
"d":[
  "val 1",
  "val 2"],
"e":"EY val",
"":{"v":1}
at 
__randomizedtesting.SeedInfo.seed([69933EC24BE2386A:E1C70118E51E5592]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at 
org.apache.solr.core.TestSolrConfigHandler.testForResponseElement(TestSolrConfigHandler.java:458)
at 
org.apache.solr.handler.TestReqParamsAPI.testReqParams(TestReqParamsAPI.java:221)
at 
org.apache.solr.handler.TestReqParamsAPI.test(TestReqParamsAPI.java:67)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:520)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:996)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:971)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)

[jira] [Commented] (SOLR-8838) Returning non-stored docValues is incorrect for floats and doubles

2016-03-19 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8838?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15198521#comment-15198521
 ] 

ASF subversion and git services commented on SOLR-8838:
---

Commit 44f9569d32a6b84126a91e39ddc598c374adeaab in lucene-solr's branch 
refs/heads/branch_5_5 from [~steve_rowe]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=44f9569 ]

SOLR-8838: Returning non-stored docValues is incorrect for negative floats and 
doubles.


> Returning non-stored docValues is incorrect for floats and doubles
> --
>
> Key: SOLR-8838
> URL: https://issues.apache.org/jira/browse/SOLR-8838
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 5.5
>Reporter: Ishan Chattopadhyaya
>Assignee: Steve Rowe
>Priority: Blocker
> Fix For: 6.0, 5.5.1
>
> Attachments: SOLR-8838.patch, SOLR-8838.patch, SOLR-8838.patch
>
>
> In SOLR-8220, we introduced returning non-stored docValues as if they were 
> regular stored fields. The handling of doubles and floats, as introduced 
> there, was incorrect for negative values.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-8873) Enforce dataDir/instanceDir/ulogDir to be paths that contain only a controlled subset of characters

2016-03-19 Thread JIRA
Tomás Fernández Löbbe created SOLR-8873:
---

 Summary: Enforce dataDir/instanceDir/ulogDir to be paths that 
contain only a controlled subset of characters
 Key: SOLR-8873
 URL: https://issues.apache.org/jira/browse/SOLR-8873
 Project: Solr
  Issue Type: Improvement
Reporter: Tomás Fernández Löbbe


We currently support any valid path for dataDir/instanceDir/ulogDir. I think we 
should prevent special characters and restrict to a subset that is commonly 
used and tested.
My initial proposals it to allow the Java pattern: {code:java}"^[a-zA-Z0-9\\.\\ 
\\-_/\"':]+$"{code} but I'm open to suggestions. I'm not sure if there can 
be issues with HDFS paths (this pattern does pass the tests we currently have), 
or some other use case I'm not considering.
I also think our tests should use all those characters randomly. 




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8867) frange / ValueSourceRangeFilter / FunctionValues.getRangeScorer should not match documents w/o a value

2016-03-19 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8867?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15200367#comment-15200367
 ] 

ASF subversion and git services commented on SOLR-8867:
---

Commit c195395d34fb28711b99e4552602dcea729a718b in lucene-solr's branch 
refs/heads/branch_6x from [~yo...@apache.org]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=c195395 ]

SOLR-8867: fix frange/FunctionValues.getRangeScorer to not match missing 
values, getRangeScorer refactored to take LeafReaderContext


> frange / ValueSourceRangeFilter / FunctionValues.getRangeScorer should not 
> match documents w/o a value
> --
>
> Key: SOLR-8867
> URL: https://issues.apache.org/jira/browse/SOLR-8867
> Project: Solr
>  Issue Type: Bug
>Reporter: Yonik Seeley
> Fix For: 6.0
>
> Attachments: SOLR-8867.patch, SOLR-8867.patch
>
>
> {!frange} currently can match documents w/o a value (because of a default 
> value of 0).
> This only existed historically because we didn't have info about what fields 
> had a value for numerics, and didn't have exists() on FunctionValues.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: Welcome Kevin Risden as Lucene/Solr committer

2016-03-19 Thread Alan Woodward
Welcome Kevin!

Alan Woodward
www.flax.co.uk


On 17 Mar 2016, at 09:27, Jan Høydahl wrote:

> Welcome Kevin!
> 
> --
> Jan Høydahl, search solution architect
> Cominvent AS - www.cominvent.com
> 
>> 16. mar. 2016 kl. 18.03 skrev David Smiley :
>> 
>> Welcome Kevin!
>> 
>> (corrected misspelling of your last name in the subject)
>> 
>> On Wed, Mar 16, 2016 at 1:02 PM Joel Bernstein  wrote:
>> I'm pleased to announce that Kevin Risden has accepted the PMC's invitation 
>> to become a committer.
>> 
>> Kevin, it's tradition that you introduce yourself with a brief bio.
>> 
>> I believe your account has been setup and karma has been granted so that you 
>> can add yourself to the committers section of the Who We Are page on the 
>> website:
>> .
>> 
>> Congratulations and welcome!
>> 
>> 
>> Joel Bernstein
>> 
>> -- 
>> Lucene/Solr Search Committer, Consultant, Developer, Author, Speaker
>> LinkedIn: http://linkedin.com/in/davidwsmiley | Book: 
>> http://www.solrenterprisesearchserver.com
> 



[JENKINS-EA] Lucene-Solr-master-Linux (64bit/jdk-9-ea+109) - Build # 16242 - Still Failing!

2016-03-19 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/16242/
Java: 64bit/jdk-9-ea+109 -XX:-UseCompressedOops -XX:+UseParallelGC

1 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.solr.handler.TestReplicationHandler

Error Message:
ObjectTracker found 1 object(s) that were not released!!! [NRTCachingDirectory]

Stack Trace:
java.lang.AssertionError: ObjectTracker found 1 object(s) that were not 
released!!! [NRTCachingDirectory]
at __randomizedtesting.SeedInfo.seed([F7B3108D2AB1FA9C]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at org.junit.Assert.assertNull(Assert.java:551)
at org.apache.solr.SolrTestCaseJ4.afterClass(SolrTestCaseJ4.java:238)
at sun.reflect.GeneratedMethodAccessor23.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:520)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:834)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at java.lang.Thread.run(Thread.java:804)




Build Log:
[...truncated 11557 lines...]
   [junit4] Suite: org.apache.solr.handler.TestReplicationHandler
   [junit4]   2> Creating dataDir: 
/home/jenkins/workspace/Lucene-Solr-master-Linux/solr/build/solr-core/test/J0/temp/solr.handler.TestReplicationHandler_F7B3108D2AB1FA9C-001/init-core-data-001
   [junit4]   2> 950255 INFO  
(TEST-TestReplicationHandler.doTestStopPoll-seed#[F7B3108D2AB1FA9C]) [] 
o.a.s.SolrTestCaseJ4 ###Starting doTestStopPoll
   [junit4]   2> 950255 INFO  
(TEST-TestReplicationHandler.doTestStopPoll-seed#[F7B3108D2AB1FA9C]) [] 
o.a.s.SolrTestCaseJ4 Writing core.properties file to 
/home/jenkins/workspace/Lucene-Solr-master-Linux/solr/build/solr-core/test/J0/temp/solr.handler.TestReplicationHandler_F7B3108D2AB1FA9C-001/solr-instance-001/collection1
   [junit4]   2> 950257 INFO  
(TEST-TestReplicationHandler.doTestStopPoll-seed#[F7B3108D2AB1FA9C]) [] 
o.e.j.s.Server jetty-9.3.8.v20160314
   [junit4]   2> 950258 INFO  
(TEST-TestReplicationHandler.doTestStopPoll-seed#[F7B3108D2AB1FA9C]) [] 
o.e.j.s.h.ContextHandler Started 
o.e.j.s.ServletContextHandler@14de41d1{/solr,null,AVAILABLE}
   [junit4]   2> 950260 INFO  
(TEST-TestReplicationHandler.doTestStopPoll-seed#[F7B3108D2AB1FA9C]) [] 
o.e.j.s.ServerConnector Started 
ServerConnector@62d7faa8{HTTP/1.1,[http/1.1]}{127.0.0.1:41997}
   [junit4]   2> 950260 INFO  
(TEST-TestReplicationHandler.doTestStopPoll-seed#[F7B3108D2AB1FA9C]) [] 
o.e.j.s.Server Started @952336ms
   [junit4]   2> 950260 INFO  
(TEST-TestReplicationHandler.doTestStopPoll-seed#[F7B3108D2AB1FA9C]) [] 
o.a.s.c.s.e.JettySolrRunner Jetty properties: 
{solr.data.dir=/home/jenkins/workspace/Lucene-Solr-master-Linux/solr/build/solr-core/test/J0/temp/solr.handler.TestReplicationHandler_F7B3108D2AB1FA9C-001/solr-instance-001/collection1/data,
 hostContext=/solr, hostPort=41997}
   [junit4]   2> 950260 INFO  

[jira] [Commented] (SOLR-8812) ExtendedDismaxQParser (edismax) ignores Boolean OR when q.op=AND

2016-03-19 Thread Ryan Josal (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8812?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15201525#comment-15201525
 ] 

Ryan Josal commented on SOLR-8812:
--

On the topic of SOLR-2649, I just upgraded to 5.5 yesterday and SOLR-2649 broke 
one of our test cases which was "hair ties -barbie" should return hair ties but 
not barbie hair ties, and now it matches nothing.  I assume this is intended, 
but if not, maybe this ticket also addresses it?

> ExtendedDismaxQParser (edismax) ignores Boolean OR when q.op=AND
> 
>
> Key: SOLR-8812
> URL: https://issues.apache.org/jira/browse/SOLR-8812
> Project: Solr
>  Issue Type: Bug
>  Components: query parsers
>Affects Versions: 5.5
>Reporter: Ryan Steinberg
>Priority: Blocker
> Fix For: 6.0, 5.5.1
>
> Attachments: SOLR-8812.patch
>
>
> The edismax parser ignores Boolean OR in queries when q.op=AND. This behavior 
> is new to Solr 5.5.0 and an unexpected major change.
> Example:
>   "q": "id:12345 OR zz",
>   "defType": "edismax",
>   "q.op": "AND",
> where "12345" is a known document ID and "zz" is a string NOT present 
> in my data
> Version 5.5.0 produces zero results:
> "rawquerystring": "id:12345 OR zz",
> "querystring": "id:12345 OR zz",
> "parsedquery": "(+((id:12345 
> DisjunctionMaxQuery((text:zz)))~2))/no_coord",
> "parsedquery_toString": "+((id:12345 (text:zz))~2)",
> "explain": {},
> "QParser": "ExtendedDismaxQParser"
> Version 5.4.0 produces one result as expected
>   "rawquerystring": "id:12345 OR zz",
> "querystring": "id:12345 OR zz",
> "parsedquery": "(+(id:12345 
> DisjunctionMaxQuery((text:zz/no_coord",
> "parsedquery_toString": "+(id:12345 (text:zz))"
> "explain": {},
> "QParser": "ExtendedDismaxQParser"



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8742) HdfsDirectoryTest fails reliably after changes in LUCENE-6932

2016-03-19 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8742?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15197562#comment-15197562
 ] 

Mark Miller commented on SOLR-8742:
---

Does this still fail for you? Does not seem to be reproducing for me.

> HdfsDirectoryTest fails reliably after changes in LUCENE-6932
> -
>
> Key: SOLR-8742
> URL: https://issues.apache.org/jira/browse/SOLR-8742
> Project: Solr
>  Issue Type: Bug
>Reporter: Hoss Man
>
> the following seed fails reliably for me on master...
> {noformat}
>[junit4]   2> 1370568 INFO  
> (TEST-HdfsDirectoryTest.testEOF-seed#[A0D22782D87E1CE2]) [] 
> o.a.s.SolrTestCaseJ4 ###Ending testEOF
>[junit4]   2> NOTE: reproduce with: ant test  -Dtestcase=HdfsDirectoryTest 
> -Dtests.method=testEOF -Dtests.seed=A0D22782D87E1CE2 -Dtests.slow=true 
> -Dtests.locale=es-PR -Dtests.timezone=Indian/Mauritius -Dtests.asserts=true 
> -Dtests.file.encoding=ISO-8859-1
>[junit4] ERROR   0.13s J0 | HdfsDirectoryTest.testEOF <<<
>[junit4]> Throwable #1: java.lang.NullPointerException
>[junit4]>  at 
> __randomizedtesting.SeedInfo.seed([A0D22782D87E1CE2:31B9658A9A5ABA9E]:0)
>[junit4]>  at 
> org.apache.lucene.store.RAMInputStream.readByte(RAMInputStream.java:69)
>[junit4]>  at 
> org.apache.solr.store.hdfs.HdfsDirectoryTest.testEof(HdfsDirectoryTest.java:159)
>[junit4]>  at 
> org.apache.solr.store.hdfs.HdfsDirectoryTest.testEOF(HdfsDirectoryTest.java:151)
>[junit4]>  at java.lang.Thread.run(Thread.java:745)
> {noformat}
> git bisect says this is the first commit where it started failing..
> {noformat}
> ddc65d977f920013c5fca16c8ac75ae2c6895f9d is the first bad commit
> commit ddc65d977f920013c5fca16c8ac75ae2c6895f9d
> Author: Michael McCandless 
> Date:   Thu Jan 21 17:50:28 2016 +
> LUCENE-6932: RAMInputStream now throws EOFException if you seek beyond 
> the end of the file
> 
> git-svn-id: https://svn.apache.org/repos/asf/lucene/dev/trunk@1726039 
> 13f79535-47bb-0310-9956-ffa450edef68
> {noformat}
> ...which seems remarkable relevant and likely to indicate a problem that 
> needs fixed in the HdfsDirectory code (or perhaps just the test)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-8862) /live_nodes is populated too early to be very useful for clients -- CloudSolrClient (and MiniSolrCloudCluster.createCollection) need some other ephemeral zk node to

2016-03-19 Thread Noble Paul (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8862?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15200233#comment-15200233
 ] 

Noble Paul edited comment on SOLR-8862 at 3/17/16 7:46 PM:
---

bq.ZkController.checkOverseerDesignate() is called (no idea what that does)

I probably should add a comment there. If an overseer designate is down and 
comes back up, it should be pushed ahead of non designates . So it sends a 
message to overseer to put it in the front of the overseer election queue


was (Author: noble.paul):
bq.ZkController.checkOverseerDesignate() is called (no idea what that does)

I probaly should add a comment there. If an overseer designate is down and 
comes back up, it should be pushed ahead of non designates . So it sends a 
message to overseer to put it in the front of the overseer election queue

> /live_nodes is populated too early to be very useful for clients -- 
> CloudSolrClient (and MiniSolrCloudCluster.createCollection) need some other 
> ephemeral zk node to knowwhich servers are "ready"
> --
>
> Key: SOLR-8862
> URL: https://issues.apache.org/jira/browse/SOLR-8862
> Project: Solr
>  Issue Type: Bug
>Reporter: Hoss Man
>
> {{/live_nodes}} is populated surprisingly early (and multiple times) in the 
> life cycle of a sole node startup, and as a result probably shouldn't be used 
> by {{CloudSolrClient}} (or other "smart" clients) for deciding what servers 
> are fair game for requests.
> we should either fix {{/live_nodes}} to be created later in the lifecycle, or 
> add some new ZK node for this purpose.
> {panel:title=original bug report}
> I haven't been able to make sense of this yet, but what i'm seeing in a new 
> SolrCloudTestCase subclass i'm writing is that the code below, which 
> (reasonably) attempts to create a collection immediately after configuring 
> the MiniSolrCloudCluster gets a "SolrServerException: No live SolrServers 
> available to handle this request" -- in spite of the fact, that (as far as i 
> can tell at first glance) MiniSolrCloudCluster's constructor is suppose to 
> block until all the servers are live..
> {code}
> configureCluster(numServers)
>   .addConfig(configName, configDir.toPath())
>   .configure();
> Map collectionProperties = ...;
> assertNotNull(cluster.createCollection(COLLECTION_NAME, numShards, 
> repFactor,
>configName, null, null, 
> collectionProperties));
> {code}
> {panel}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-8858) SolrIndexSearcher#doc() Completely Ignores Field Filters Unless Lazy Field Loading is Enabled

2016-03-19 Thread Caleb Rackliffe (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8858?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Caleb Rackliffe updated SOLR-8858:
--
Affects Version/s: 4.10
   5.5

> SolrIndexSearcher#doc() Completely Ignores Field Filters Unless Lazy Field 
> Loading is Enabled
> -
>
> Key: SOLR-8858
> URL: https://issues.apache.org/jira/browse/SOLR-8858
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 4.6, 4.10, 5.5
>Reporter: Caleb Rackliffe
>  Labels: easyfix
> Fix For: 5.5.1
>
>
> If {{enableLazyFieldLoading=false}}, a perfectly valid fields filter will be 
> ignored, and we'll create a {{DocumentStoredFieldVisitor}} without it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8082) can't query against negative float or double values when indexed="false" docValues="true" multiValued="false"

2016-03-19 Thread Ishan Chattopadhyaya (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8082?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15201502#comment-15201502
 ] 

Ishan Chattopadhyaya commented on SOLR-8082:


Thanks Yonik, the patch looks good. The test is passing for me.
Do you think we should rely on FunctionRangeQuery for the entire number line, 
or should we just use this for the negative range? To me, both looked similar 
in terms of performance.

> can't query against negative float or double values when indexed="false" 
> docValues="true" multiValued="false"
> -
>
> Key: SOLR-8082
> URL: https://issues.apache.org/jira/browse/SOLR-8082
> Project: Solr
>  Issue Type: Bug
>Reporter: Hoss Man
>Priority: Blocker
> Fix For: 6.0
>
> Attachments: SOLR-8082.patch, SOLR-8082.patch, SOLR-8082.patch, 
> SOLR-8082.patch, SOLR-8082.patch, SOLR-8082.patch, SOLR-8082.patch
>
>
> Haven't dug into this yet, but something is evidently wrong in how the 
> DocValues based queries get build for single valued float or double fields 
> when negative numbers are involved.
> Steps to reproduce...
> {noformat}
> $ bin/solr -e schemaless -noprompt
> ...
> $ curl -X POST -H 'Content-type:application/json' --data-binary '{ 
> "add-field":{ "name":"f_dv_multi", "type":"tfloat", "stored":"true", 
> "indexed":"false", "docValues":"true", "multiValued":"true" }, "add-field":{ 
> "name":"f_dv_single", "type":"tfloat", "stored":"true", "indexed":"false", 
> "docValues":"true", "multiValued":"false" } }' 
> http://localhost:8983/solr/gettingstarted/schema
> {
>   "responseHeader":{
> "status":0,
> "QTime":84}}
> $ curl -X POST -H 'Content-type:application/json' --data-binary 
> '[{"id":"test", "f_dv_multi":-4.3, "f_dv_single":-4.3}]' 
> 'http://localhost:8983/solr/gettingstarted/update/json/docs?commit=true'
> {"responseHeader":{"status":0,"QTime":57}}
> $ curl 'http://localhost:8983/solr/gettingstarted/query?q=f_dv_multi:"-4.3;'
> {
>   "responseHeader":{
> "status":0,
> "QTime":5,
> "params":{
>   "q":"f_dv_multi:\"-4.3\""}},
>   "response":{"numFound":1,"start":0,"docs":[
>   {
> "id":"test",
> "f_dv_multi":[-4.3],
> "f_dv_single":-4.3,
> "_version_":1512962117004689408}]
>   }}
> $ curl 'http://localhost:8983/solr/gettingstarted/query?q=f_dv_single:"-4.3;'
> {
>   "responseHeader":{
> "status":0,
> "QTime":5,
> "params":{
>   "q":"f_dv_single:\"-4.3\""}},
>   "response":{"numFound":0,"start":0,"docs":[]
>   }}
> {noformat}
> Explicit range queries (which is how numeric "field" queries are implemented 
> under the cover) are equally problematic...
> {noformat}
> $ curl 
> 'http://localhost:8983/solr/gettingstarted/query?q=f_dv_multi:%5B-4.3+TO+-4.3%5D'
> {
>   "responseHeader":{
> "status":0,
> "QTime":0,
> "params":{
>   "q":"f_dv_multi:[-4.3 TO -4.3]"}},
>   "response":{"numFound":1,"start":0,"docs":[
>   {
> "id":"test",
> "f_dv_multi":[-4.3],
> "f_dv_single":-4.3,
> "_version_":1512962117004689408}]
>   }}
> $ curl 
> 'http://localhost:8983/solr/gettingstarted/query?q=f_dv_single:%5B-4.3+TO+-4.3%5D'
> {
>   "responseHeader":{
> "status":0,
> "QTime":0,
> "params":{
>   "q":"f_dv_single:[-4.3 TO -4.3]"}},
>   "response":{"numFound":0,"start":0,"docs":[]
>   }}
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-4221) Custom sharding

2016-03-19 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4221?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15198700#comment-15198700
 ] 

ASF subversion and git services commented on SOLR-4221:
---

Commit ae846bfb492fd91e30daac017c6587083e278236 in lucene-solr's branch 
refs/heads/master from [~shalinmangar]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=ae846bf ]

SOLR-8860: Remove back-compat handling of router format made in SOLR-4221 in 
4.5.0


> Custom sharding
> ---
>
> Key: SOLR-4221
> URL: https://issues.apache.org/jira/browse/SOLR-4221
> Project: Solr
>  Issue Type: New Feature
>Reporter: Yonik Seeley
>Assignee: Noble Paul
> Fix For: 4.5, master
>
> Attachments: SOLR-4221.patch, SOLR-4221.patch, SOLR-4221.patch, 
> SOLR-4221.patch, SOLR-4221.patch, SOLR-4221.patch, SOLR-4221.patch, 
> SOLR-4221.patch
>
>
> Features to let users control everything about sharding/routing.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6806) Reduce the size of the main Solr binary download

2016-03-19 Thread Alexandre Rafalovitch (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6806?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15203043#comment-15203043
 ] 

Alexandre Rafalovitch commented on SOLR-6806:
-

Another data point. We have two copies of the icu4j-54.1.jar library at 11Mb 
each (in Solr 5.5). They are at:
{quote}
./contrib/analysis-extras/lib/icu4j-54.1.jar
./contrib/extraction/lib/icu4j-54.1.jar
{quote}

We probably only need one of them; I am guessing the one in the /extraction.

> Reduce the size of the main Solr binary download
> 
>
> Key: SOLR-6806
> URL: https://issues.apache.org/jira/browse/SOLR-6806
> Project: Solr
>  Issue Type: Task
>  Components: Build
>Affects Versions: 5.0
>Reporter: Shawn Heisey
>
> There has been a lot of recent discussion about how large the Solr download 
> is, and how to reduce its size.  The last release (4.10.2) weighs in at 143MB 
> for the tar and 149MB for the zip.
> Most users do not need the full download.  They may never need contrib 
> features, or they may only need one or two, with DIH being the most likely 
> choice.  They could likely get by with a download that's less than 40 MB.
> Our primary competition has a 29MB zip download for the release that's 
> current right now, and not too long ago, that was about 20MB.  I didn't look 
> very deep, but any additional features that might be available for download 
> were not immediately apparent on their website.  I'm sure they exist, but I 
> would guess that most users never need those features, so most users never 
> even see them.
> Solr, by contrast, has everything included ... a "kitchen sink" approach. 
> Once you get past the long download time and fire up the example, you're 
> presented with configs that include features you're likely to never use.
> Although this offers maximum flexibility, I think it also serves to cause 
> confusion in a new user.
> A much better option would be to create a core download that includes only a 
> minimum set of features, probably just the war, the example servlet 
> container, and an example config that only uses the functionality present in 
> the war.  We can create additional downloads that offer additional 
> functionality and configs ... DIH would be a very small addon that would 
> likely be downloaded frequently.
> SOLR-5103 describes a plugin infrastructure which would make it very easy to 
> offer a small core download and then let the user download additional 
> functionality using scripts or the UI.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-6.x-Linux (64bit/jdk1.8.0_72) - Build # 182 - Failure!

2016-03-19 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Linux/182/
Java: 64bit/jdk1.8.0_72 -XX:-UseCompressedOops -XX:+UseG1GC

3 tests failed.
FAILED:  junit.framework.TestSuite.org.apache.solr.cloud.SaslZkACLProviderTest

Error Message:
6 threads leaked from SUITE scope at 
org.apache.solr.cloud.SaslZkACLProviderTest: 1) Thread[id=9728, 
name=changePwdReplayCache.data, state=TIMED_WAITING, 
group=TGRP-SaslZkACLProviderTest] at sun.misc.Unsafe.park(Native 
Method) at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
 at 
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093)
 at 
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809)
 at 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1067)   
  at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127) 
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) 
at java.lang.Thread.run(Thread.java:745)2) Thread[id=9732, 
name=pool-3-thread-1, state=TIMED_WAITING, group=TGRP-SaslZkACLProviderTest]
 at sun.misc.Unsafe.park(Native Method) at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
at 
java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:460)
 at 
java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:362)
 at 
java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:941) 
at 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1066)   
  at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127) 
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) 
at java.lang.Thread.run(Thread.java:745)3) Thread[id=9730, 
name=kdcReplayCache.data, state=TIMED_WAITING, 
group=TGRP-SaslZkACLProviderTest] at sun.misc.Unsafe.park(Native 
Method) at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
 at 
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093)
 at 
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809)
 at 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1067)   
  at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127) 
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) 
at java.lang.Thread.run(Thread.java:745)4) Thread[id=9727, 
name=apacheds, state=WAITING, group=TGRP-SaslZkACLProviderTest] at 
java.lang.Object.wait(Native Method) at 
java.lang.Object.wait(Object.java:502) at 
java.util.TimerThread.mainLoop(Timer.java:526) at 
java.util.TimerThread.run(Timer.java:505)5) Thread[id=9731, 
name=groupCache.data, state=TIMED_WAITING, group=TGRP-SaslZkACLProviderTest]
 at sun.misc.Unsafe.park(Native Method) at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
 at 
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093)
 at 
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809)
 at 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1067)   
  at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127) 
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) 
at java.lang.Thread.run(Thread.java:745)6) Thread[id=9729, 
name=ou=system.data, state=TIMED_WAITING, group=TGRP-SaslZkACLProviderTest] 
at sun.misc.Unsafe.park(Native Method) at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
 at 
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093)
 at 
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809)
 at 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1067)   
  at 

[jira] [Commented] (SOLR-7339) Upgrade Jetty from 9.2 to 9.3

2016-03-19 Thread Shalin Shekhar Mangar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7339?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15197500#comment-15197500
 ] 

Shalin Shekhar Mangar commented on SOLR-7339:
-

Okay I see. In that case, can you set this as resolved so it is not a blocker 
anymore?

> Upgrade Jetty from 9.2 to 9.3
> -
>
> Key: SOLR-7339
> URL: https://issues.apache.org/jira/browse/SOLR-7339
> Project: Solr
>  Issue Type: Improvement
>Reporter: Gregg Donovan
>Assignee: Mark Miller
>Priority: Blocker
> Fix For: master
>
> Attachments: SOLR-7339-revert.patch, SOLR-7339.patch, 
> SOLR-7339.patch, SOLR-7339.patch, 
> SolrExampleStreamingBinaryTest.testUpdateField-jetty92.pcapng, 
> SolrExampleStreamingBinaryTest.testUpdateField-jetty93.pcapng
>
>
> Jetty 9.3 offers support for HTTP/2. Interest in HTTP/2 or its predecessor 
> SPDY was shown in [SOLR-6699|https://issues.apache.org/jira/browse/SOLR-6699] 
> and [on the mailing list|http://markmail.org/message/jyhcmwexn65gbdsx].
> Among the HTTP/2 benefits over HTTP/1.1 relevant to Solr are:
> * multiplexing requests over a single TCP connection ("streams")
> * canceling a single request without closing the TCP connection
> * removing [head-of-line 
> blocking|https://http2.github.io/faq/#why-is-http2-multiplexed]
> * header compression
> Caveats:
> * Jetty 9.3 is at M2, not released.
> * Full Solr support for HTTP/2 would require more work than just upgrading 
> Jetty. The server configuration would need to change and a new HTTP client 
> ([Jetty's own 
> client|https://github.com/eclipse/jetty.project/tree/master/jetty-http2], 
> [Square's OkHttp|http://square.github.io/okhttp/], 
> [etc.|https://github.com/http2/http2-spec/wiki/Implementations]) would need 
> to be selected and wired up. Perhaps this is worthy of a branch?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-8626) [ANGULAR] 404 error when clicking nodes in cloud graph view

2016-03-19 Thread Trey Grainger (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8626?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Trey Grainger updated SOLR-8626:

Attachment: SOLR-8626.patch

Attached a patch which fixes this issue. The issue existed in both the flat 
graph view and the radial view. Additionally, when one was in the radial view 
and clicked on the link for a node, it would switch back to flat graph view 
when navigating to the other node, so fixed that so that it preserves the 
user's current view type on the URL when navigating between node.

> [ANGULAR] 404 error when clicking nodes in cloud graph view
> ---
>
> Key: SOLR-8626
> URL: https://issues.apache.org/jira/browse/SOLR-8626
> Project: Solr
>  Issue Type: Bug
>  Components: UI
>Reporter: Jan Høydahl
>Assignee: Upayavira
> Attachments: SOLR-8626.patch
>
>
> h3. Reproduce:
> # {{bin/solr start -c}}
> # {{bin/solr create -c mycoll}}
> # Goto http://localhost:8983/solr/#/~cloud
> # Click a collection name in the graph -> 404 error. URL: 
> {{/solr/mycoll/#/~cloud}}
> # Click a shard name in the graph -> 404 error. URL: {{/solr/shard1/#/~cloud}}
> Only verified in Trunk, but probably exists in 5.4 as well



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



<    1   2   3   4