[jira] [Updated] (SOLR-8777) Duplicate Solr process can cripple a running process

2016-06-28 Thread Shalin Shekhar Mangar (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8777?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shalin Shekhar Mangar updated SOLR-8777:

Attachment: SOLR-8777.patch

Thanks Scott! I like your solution better so this patch uses your code. I'll 
commit this shortly.

> Duplicate Solr process can cripple a running process
> 
>
> Key: SOLR-8777
> URL: https://issues.apache.org/jira/browse/SOLR-8777
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
>Affects Versions: 5.3.1
>Reporter: Shalin Shekhar Mangar
> Attachments: SOLR-8777.patch, SOLR-8777.patch
>
>
> Thanks to [~mewmewball] for catching this one.
> Accidentally starting the same Solr instance twice causes the second 
> process to die with an "Address already in use" error, but not before it 
> deletes the first instance's live_node entry, emitting "Found a previous 
> node that still exists while trying to register a new live node - 
> removing existing node to create another".
> When the second process dies, its ephemeral node is removed as well, 
> leaving /live_nodes/ empty, since the first instance's live_node entry 
> was already deleted by the second.
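
A minimal sketch (not necessarily the committed fix) of the kind of guard that avoids this, assuming the plain ZooKeeper client API: before touching an existing live_nodes entry, check which session owns the ephemeral node and refuse to delete an entry owned by another live session.

{code}
import org.apache.zookeeper.CreateMode;
import org.apache.zookeeper.ZooDefs;
import org.apache.zookeeper.ZooKeeper;
import org.apache.zookeeper.data.Stat;

class LiveNodeGuardSketch {
  // Illustrative only: fail fast instead of deleting another process's entry.
  static void registerLiveNode(ZooKeeper zk, String liveNodePath, byte[] data) throws Exception {
    Stat stat = zk.exists(liveNodePath, false);
    if (stat != null) {
      if (stat.getEphemeralOwner() == zk.getSessionId()) {
        return; // already registered in this session
      }
      // Owned by another live session, i.e. another Solr process on this node.
      throw new IllegalStateException("live node already exists: " + liveNodePath);
    }
    zk.create(liveNodePath, data, ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.EPHEMERAL);
  }
}
{code}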






[JENKINS] Lucene-Solr-6.x-Solaris (64bit/jdk1.8.0) - Build # 229 - Still Failing!

2016-06-28 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Solaris/229/
Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseG1GC

1 tests failed.
FAILED:  org.apache.solr.TestDistributedSearch.test

Error Message:
Expected to find shardAddress in the up shard info

Stack Trace:
java.lang.AssertionError: Expected to find shardAddress in the up shard info
at 
__randomizedtesting.SeedInfo.seed([38F7B94EF3F32560:B0A386945D0F4898]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at 
org.apache.solr.TestDistributedSearch.comparePartialResponses(TestDistributedSearch.java:1162)
at 
org.apache.solr.TestDistributedSearch.queryPartialResults(TestDistributedSearch.java:1103)
at 
org.apache.solr.TestDistributedSearch.test(TestDistributedSearch.java:963)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsRepeatStatement.callStatement(BaseDistributedSearchTestCase.java:1018)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:967)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 

[GitHub] lucene-solr pull request #21: SOLR-8858 SolrIndexSearcher#doc() Completely I...

2016-06-28 Thread maedhroz
Github user maedhroz closed the pull request at:

https://github.com/apache/lucene-solr/pull/21





[jira] [Commented] (SOLR-8858) SolrIndexSearcher#doc() Completely Ignores Field Filters Unless Lazy Field Loading is Enabled

2016-06-28 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8858?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15354511#comment-15354511
 ] 

ASF GitHub Bot commented on SOLR-8858:
--

Github user maedhroz closed the pull request at:

https://github.com/apache/lucene-solr/pull/21


> SolrIndexSearcher#doc() Completely Ignores Field Filters Unless Lazy Field 
> Loading is Enabled
> -
>
> Key: SOLR-8858
> URL: https://issues.apache.org/jira/browse/SOLR-8858
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 4.6, 4.10, 5.5
>Reporter: Caleb Rackliffe
>  Labels: easyfix
>
> If {{enableLazyFieldLoading=false}}, a perfectly valid fields filter will be 
> ignored, and we'll create a {{DocumentStoredFieldVisitor}} without it.
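
For reference, a minimal sketch (not the actual patch) of honoring the field filter regardless of the lazy-loading setting, using Lucene's stock {{DocumentStoredFieldVisitor}}:

{code}
import java.io.IOException;
import java.util.Set;

import org.apache.lucene.document.Document;
import org.apache.lucene.document.DocumentStoredFieldVisitor;
import org.apache.lucene.index.IndexReader;

class FieldFilterSketch {
  // Illustrative only: pass the requested field set to the visitor whether or
  // not lazy field loading is enabled, instead of silently ignoring it.
  static Document loadDoc(IndexReader reader, int docId, Set<String> fields) throws IOException {
    DocumentStoredFieldVisitor visitor = (fields != null)
        ? new DocumentStoredFieldVisitor(fields)  // only the requested stored fields
        : new DocumentStoredFieldVisitor();       // all stored fields
    reader.document(docId, visitor);
    return visitor.getDocument();
  }
}
{code}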






[jira] [Commented] (SOLR-9038) Ability to create/delete/list snapshots for a solr collection

2016-06-28 Thread David Smiley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9038?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15354450#comment-15354450
 ] 

David Smiley commented on SOLR-9038:


BTW I've been adding comments at the GH commits on [~hgadre]'s branch there.  
Overall it's looking good -- I like it. My only concern (repeating myself from 
GH) is that SolrPersistentSnapshotManager is a bolt-on to Solr's 
IndexDeletionPolicyWrapper when perhaps it should be integrated (one cohesive 
whole)?  Or keep it a bolt-on but make the code that's in IDPW a separate bolt-on 
as well?  It's debatable... another opinion would be nice.  

BTW IMO "SolrDeletionPolicy" would be a better name for 
IndexDeletionPolicyWrapper.

> Ability to create/delete/list snapshots for a solr collection
> -
>
> Key: SOLR-9038
> URL: https://issues.apache.org/jira/browse/SOLR-9038
> Project: Solr
>  Issue Type: New Feature
>  Components: SolrCloud
>Reporter: Hrishikesh Gadre
>
> Currently work is under-way to implement backup/restore API for Solr cloud 
> (SOLR-5750). SOLR-5750 is about providing an ability to "copy" index files 
> and collection metadata to a configurable location. 
> In addition to this, we should also provide a facility to create "named" 
> snapshots for Solr collection. Here by "snapshot" I mean configuring the 
> underlying Lucene IndexDeletionPolicy to not delete a specific commit point 
> (e.g. using PersistentSnapshotIndexDeletionPolicy). This should not be 
> confused with SOLR-5340 which implements core level "backup" functionality.
> The primary motivation of this feature is to decouple recording/preserving a 
> known consistent state of a collection from actually "copying" the relevant 
> files to a physically separate location. This decoupling has a number of 
> advantages:
> - We can use specialized data-copying tools for transferring Solr index 
> files. e.g. in Hadoop environment, typically 
> [distcp|https://hadoop.apache.org/docs/r1.2.1/distcp2.html] tool is used to 
> copy files from one location to another. This tool provides various options to 
> configure degree of parallelism, bandwidth usage as well as integration with 
> different types and versions of file systems (e.g. AWS S3, Azure Blob store 
> etc.)
> - This separation of concern would also help Solr to focus on the key 
> functionality (i.e. querying and indexing) while delegating the copy 
> operation to the tools built for that purpose.
> - Users can decide if/when to copy the data files as against creating a 
> snapshot. e.g. a user may want to create a snapshot of a collection before 
> making an experimental change (e.g. updating/deleting docs, schema change 
> etc.). If the experiment is successful, he can delete the snapshot (without 
> having to copy the files). If the experiment fails, he can copy the 
> files associated with the snapshot and restore.
> Note that the Apache Blur project also provides a similar feature 
> [BLUR-132|https://issues.apache.org/jira/browse/BLUR-132]
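
To make the snapshot mechanism described above concrete, here is a rough sketch at the Lucene level using {{PersistentSnapshotIndexDeletionPolicy}}; the paths and the name-to-commit bookkeeping are placeholders, not part of the proposed API:

{code}
import java.nio.file.Paths;

import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.index.IndexCommit;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.IndexWriterConfig;
import org.apache.lucene.index.KeepOnlyLastCommitDeletionPolicy;
import org.apache.lucene.index.PersistentSnapshotIndexDeletionPolicy;
import org.apache.lucene.store.Directory;
import org.apache.lucene.store.FSDirectory;

public class SnapshotSketch {
  public static void main(String[] args) throws Exception {
    Directory indexDir = FSDirectory.open(Paths.get("/path/to/index"));        // placeholder
    Directory snapshotDir = FSDirectory.open(Paths.get("/path/to/snapshots")); // placeholder

    // The policy persists which commit points are pinned, so pinned commits
    // survive restarts and are never removed by the wrapped primary policy.
    PersistentSnapshotIndexDeletionPolicy policy = new PersistentSnapshotIndexDeletionPolicy(
        new KeepOnlyLastCommitDeletionPolicy(), snapshotDir);

    IndexWriterConfig iwc = new IndexWriterConfig(new StandardAnalyzer()).setIndexDeletionPolicy(policy);
    try (IndexWriter writer = new IndexWriter(indexDir, iwc)) {
      writer.commit();
      IndexCommit commit = policy.snapshot();   // "create snapshot": pin the latest commit
      long generation = commit.getGeneration(); // record under a user-visible snapshot name
      policy.release(commit);                   // "delete snapshot": un-pin the commit
      writer.deleteUnusedFiles();               // let the un-pinned files be cleaned up
    }
  }
}
{code}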






[JENKINS] Lucene-Solr-Tests-6.x - Build # 294 - Failure

2016-06-28 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-6.x/294/

1 tests failed.
FAILED:  org.apache.solr.schema.TestManagedSchemaAPI.test

Error Message:
Error from server at http://127.0.0.1:39800/solr/testschemaapi_shard1_replica1: 
ERROR: [doc=2] unknown field 'myNewField1'

Stack Trace:
org.apache.solr.client.solrj.impl.CloudSolrClient$RouteException: Error from 
server at http://127.0.0.1:39800/solr/testschemaapi_shard1_replica1: ERROR: 
[doc=2] unknown field 'myNewField1'
at 
__randomizedtesting.SeedInfo.seed([9DA7B55C7FF5A5F8:15F38A86D109C800]:0)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.directUpdate(CloudSolrClient.java:697)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1109)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:998)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:934)
at 
org.apache.solr.schema.TestManagedSchemaAPI.testAddFieldAndDocument(TestManagedSchemaAPI.java:86)
at 
org.apache.solr.schema.TestManagedSchemaAPI.test(TestManagedSchemaAPI.java:55)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
  

[jira] [Commented] (LUCENE-7361) Terms.toStringDebug

2016-06-28 Thread David Smiley (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7361?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15354281#comment-15354281
 ] 

David Smiley commented on LUCENE-7361:
--

MemoryIndex is doing this itself.  And if not in this issue then in a separate 
issue, I think we should improve it (using the WIP code attached here as a 
straw-man) so that it simply uses the public Lucene 
Terms/TermsEnum/PostingsEnum APIs rather than needlessly using its internal 
implementation data, which is what led to a bug.  Terms is Terms... why should 
MemoryIndex be different?  The only difference I see is that a MemoryIndex 
is generally not as huge as a main index, so, subjectively, it 
can sort of get away with overriding toString() vs. some other method.  I'd actually 
rather it didn't -- leaving term & position details on another method avoids 
toString() getting enormous.  Where I'm coming from is that it'd be a 
shame if this debugging utility only existed on MemoryIndex, since the code 
doesn't really care about MemoryIndex specifics; MI has no specifics -- it's an 
index, albeit a small one.

At times I've wished to view an index I'm debugging while writing tests for it, 
since it only holds a small amount of unit-test data.  SimpleTextCodec 
is one option, but it's very inconvenient to switch to it (and to a non-RAM 
directory) vs. a hypothetical diagnostic method on Fields / Terms when I'm 
already in the debugger poking around.  Doesn't that seem useful to you too?
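
For what it's worth, such a dump needs nothing beyond the public APIs; a minimal sketch (illustrative only, not the attached WIP code) over a {{LeafReader}}:

{code}
import java.io.IOException;

import org.apache.lucene.index.Fields;
import org.apache.lucene.index.LeafReader;
import org.apache.lucene.index.PostingsEnum;
import org.apache.lucene.index.Terms;
import org.apache.lucene.index.TermsEnum;
import org.apache.lucene.search.DocIdSetIterator;
import org.apache.lucene.util.BytesRef;

class TermsDumpSketch {
  // Dump every field/term/posting via Terms/TermsEnum/PostingsEnum, so the same
  // code works for MemoryIndex or any other index.
  static String toStringDebug(LeafReader reader) throws IOException {
    StringBuilder sb = new StringBuilder();
    Fields fields = reader.fields();
    for (String field : fields) {
      Terms terms = fields.terms(field);
      if (terms == null) continue;
      sb.append(field).append(":\n");
      TermsEnum termsEnum = terms.iterator();
      for (BytesRef term = termsEnum.next(); term != null; term = termsEnum.next()) {
        sb.append("  ").append(term.utf8ToString());
        PostingsEnum postings = termsEnum.postings(null, PostingsEnum.ALL);
        while (postings.nextDoc() != DocIdSetIterator.NO_MORE_DOCS) {
          sb.append(" doc=").append(postings.docID());
          for (int i = 0; i < postings.freq(); i++) {
            sb.append(" pos=").append(postings.nextPosition());
          }
        }
        sb.append('\n');
      }
    }
    return sb.toString();
  }
}
{code}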

> Terms.toStringDebug
> ---
>
> Key: LUCENE-7361
> URL: https://issues.apache.org/jira/browse/LUCENE-7361
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: David Smiley
> Attachments: MemoryIndexToString.java
>
>
> While fixing LUCENE-7340, MemoryIndex.toString(), I thought MemoryIndex 
> shouldn't need its own debug toString() impl for its Terms when there could 
> be a generic one.  So here I propose that we create a 
> Terms.toStringDebug(Appendable result, int charLimit, String indent) or 
> some-such but probably not override toString() for obvious reasons.  Maybe 
> also have this on Fields() that simply loops and calls out to the one on 
> Terms.
> The format is debatable.






[jira] [Commented] (LUCENE-7340) MemoryIndex.toString is broken if you enable payloads

2016-06-28 Thread David Smiley (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7340?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15354196#comment-15354196
 ] 

David Smiley commented on LUCENE-7340:
--

bq. the bracketing suggests that the payload is part of the position 
information (at least that's how I would interpret it), when really its 
something separate?

Payloads are associated with the position just as much as the offsets are, and 
the offsets are enclosed in parentheses here too.

I'll commit this when I next get a chance; could be a couple days (I'm on 
vacation).

> MemoryIndex.toString is broken if you enable payloads
> -
>
> Key: LUCENE-7340
> URL: https://issues.apache.org/jira/browse/LUCENE-7340
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: modules/highlighter
>Affects Versions: 5.4.1, 6.0.1, master (7.0)
>Reporter: Daniel Collins
>Assignee: David Smiley
>Priority: Minor
> Attachments: LUCENE-7340.diff, LUCENE-7340.diff, LUCENE-7340.patch
>
>
> Noticed this as we use Luwak which creates a MemoryIndex(true, true) storing 
> both offsets and payloads (though in reality we never put any payloads in it).
> We used to use MemoryIndex.toString() for debugging and noticed it broke in 
> Lucene 5.x  and beyond.  I think LUCENE-6155 broke it when it added support 
> for payloads?
> Creating a default MemoryIndex (as all the tests currently do) works fine, as 
> does one with just offsets; it is just the payload version that is broken.
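
A minimal reproduction sketch, assuming the two-argument constructor mentioned above (offsets and payloads both enabled):

{code}
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.index.memory.MemoryIndex;

public class MemoryIndexToStringRepro {
  public static void main(String[] args) {
    // storeOffsets=true, storePayloads=true, as Luwak does.
    MemoryIndex index = new MemoryIndex(true, true);
    index.addField("field", "some test text", new StandardAnalyzer());
    // With payloads enabled the output is reportedly broken; the default
    // constructor, or offsets only, prints as expected.
    System.out.println(index.toString());
  }
}
{code}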






[jira] [Commented] (SOLR-9194) Enhance the bin/solr script to perform file operations to/from Zookeeper

2016-06-28 Thread Erick Erickson (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9194?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15353949#comment-15353949
 ] 

Erick Erickson commented on SOLR-9194:
--

Thanks a bunch! BTW, in some other testing I was doing I wanted to do things 
like remove all the collections. One can now do things like...
rm -r /collections
and on my local box
rm -rf example/cloud

and be back to a clean slate. With the configs already up there. Then use 'ls' 
to see that things are gone. Then... The 'ls' command is way more important 
than I thought; glad you prompted for it.

> Enhance the bin/solr script to perform file operations to/from Zookeeper
> 
>
> Key: SOLR-9194
> URL: https://issues.apache.org/jira/browse/SOLR-9194
> Project: Solr
>  Issue Type: Improvement
>Reporter: Erick Erickson
>Assignee: Erick Erickson
>Priority: Minor
> Attachments: SOLR-9194.patch, SOLR-9194.patch, SOLR-9194.patch, 
> SOLR-9194.patch, SOLR-9194.patch
>
>
> There are a few other files that can reasonably be pushed to Zookeeper, e.g. 
> solr.xml, security.json, clusterprops.json. Who knows? Even 
> /state.json for the brave.
> This could reduce further the need for bouncing out to zkcli.
> Assigning to myself just so I don't lose track, but I would _love_ it if 
> someone else wanted to take it...
> I'm thinking the commands would be 
> bin/solr zk -putfile -z  -p  -f 
> bin/solr zk -getfile -z  -p  -f 
> but I'm not wedded to those, all suggestions welcome.






[jira] [Updated] (SOLR-9194) Enhance the bin/solr script to perform file operations to/from Zookeeper

2016-06-28 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/SOLR-9194?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jan Høydahl updated SOLR-9194:
--
Attachment: SOLR-9194.patch

* Fixed -r param
* Tested cp, mv, rm, ls
* Fixed some error printing; it now complains explicitly about a missing -z
* Do not require zk: prefix for mv
* Print help on {{bin\solr zk}} (instead of infinite hang)

I think it's getting closer. It would be nice if someone else on Windows could 
test it thoroughly as well.

Did not test upconfig/downconfig/-upconfig/-downconfig commands

> Enhance the bin/solr script to perform file operations to/from Zookeeper
> 
>
> Key: SOLR-9194
> URL: https://issues.apache.org/jira/browse/SOLR-9194
> Project: Solr
>  Issue Type: Improvement
>Reporter: Erick Erickson
>Assignee: Erick Erickson
>Priority: Minor
> Attachments: SOLR-9194.patch, SOLR-9194.patch, SOLR-9194.patch, 
> SOLR-9194.patch, SOLR-9194.patch
>
>
> There are a few other files that can reasonably be pushed to Zookeeper, e.g. 
> solr.xml, security.json, clusterprops.json. Who knows? Even 
> /state.json for the brave.
> This could reduce further the need for bouncing out to zkcli.
> Assigning to myself just so I don't lose track, but I would _love_ it if 
> someone else wanted to take it...
> I'm thinking the commands would be 
> bin/solr zk -putfile -z  -p  -f 
> bin/solr zk -getfile -z  -p  -f 
> but I'm not wedded to those, all suggestions welcome.






[jira] [Commented] (SOLR-9185) Solr's "Lucene"/standard query parser should not split on whitespace before sending terms to analysis

2016-06-28 Thread Steve Rowe (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9185?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15353920#comment-15353920
 ] 

Steve Rowe commented on SOLR-9185:
--

This parser's comment support clashes with the approach I took to handling 
whitespace (tokenization vs. ignoring): when a run of whitespace is interrupted 
by a comment, multiple WHITESPACE_SEQ tokens are generated, and the rules 
expect all whitespace runs to be collapsed into a single WHITESPACE_SEQ token.  
Thinking about a way to address this.

> Solr's "Lucene"/standard query parser should not split on whitespace before 
> sending terms to analysis
> -
>
> Key: SOLR-9185
> URL: https://issues.apache.org/jira/browse/SOLR-9185
> Project: Solr
>  Issue Type: Bug
>Reporter: Steve Rowe
>Assignee: Steve Rowe
> Attachments: SOLR-9185.patch, SOLR-9185.patch
>
>
> Copied from LUCENE-2605:
> The queryparser parses input on whitespace, and sends each whitespace 
> separated term to its own independent token stream.
> This breaks the following at query-time, because they can't see across 
> whitespace boundaries:
> - n-gram analysis
> - shingles
> - synonyms (especially multi-word for whitespace-separated languages)
> - languages where a 'word' can contain whitespace (e.g. Vietnamese)
> It's also rather unexpected, as users think their 
> charfilters/tokenizers/tokenfilters will do the same thing at index and 
> query time, but in many cases they can't. Instead, preferably the query parser 
> would parse around only real 'operators'.






[jira] [Commented] (SOLR-8657) SolrRequestInfo logs an error if QuerySenderListener is being used

2016-06-28 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8657?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15353805#comment-15353805
 ] 

ASF subversion and git services commented on SOLR-8657:
---

Commit eaabb9dc77621cd9386a3b522f23280f52cbb5ce in lucene-solr's branch 
refs/heads/branch_6x from Tomas Fernandez Lobbe
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=eaabb9d ]

SOLR-8657: Fix SolrRequestInfo error logs if QuerySenderListener is being used


> SolrRequestInfo logs an error if QuerySenderListener is being used
> --
>
> Key: SOLR-8657
> URL: https://issues.apache.org/jira/browse/SOLR-8657
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 5.4.1
>Reporter: Pascal Chollet
>Assignee: Tomás Fernández Löbbe
> Attachments: SOLR-8657.patch, SOLR-8657.patch, Screen Shot 2016-02-10 
> at 09.43.56.png
>
>
> This is the stack trace:
> {code}
> at 
> org.apache.solr.request.SolrRequestInfo.setRequestInfo(SolrRequestInfo.java:59)
> at 
> org.apache.solr.core.QuerySenderListener.newSearcher(QuerySenderListener.java:68)
> at org.apache.solr.core.SolrCore$6.call(SolrCore.java:1859)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at 
> org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor$1.run(ExecutorUtil.java:232)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745)
> {code}
> SolrRequestInfo is being set in MDCAwareThreadPoolExecutor.execute() and 
> later in QuerySenderListener.newSearcher() in the same thread.
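
A sketch of the guard pattern involved (illustrative only; the committed fix may differ): since {{SolrRequestInfo}} is a thread-local that logs an error when it is set while a previous value is still present, the second caller can check first and only clear what it set itself:

{code}
import org.apache.solr.request.SolrQueryRequest;
import org.apache.solr.request.SolrRequestInfo;
import org.apache.solr.response.SolrQueryResponse;

class RequestInfoGuardSketch {
  // Illustrative only: avoid setting the thread-local twice on the same thread.
  static void runWithRequestInfo(SolrQueryRequest req, SolrQueryResponse rsp, Runnable body) {
    boolean created = false;
    if (SolrRequestInfo.getRequestInfo() == null) {
      SolrRequestInfo.setRequestInfo(new SolrRequestInfo(req, rsp));
      created = true;
    }
    try {
      body.run();
    } finally {
      if (created) {
        SolrRequestInfo.clearRequestInfo(); // only clear what we set ourselves
      }
    }
  }
}
{code}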






[jira] [Commented] (SOLR-9194) Enhance the bin/solr script to perform file operations to/from Zookeeper

2016-06-28 Thread JIRA

[ 
https://issues.apache.org/jira/browse/SOLR-9194?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15353783#comment-15353783
 ] 

Jan Høydahl commented on SOLR-9194:
---

I still don't understand why we're doing all this command option parsing in .sh 
and .cmd when we could pass the whole input to SolrCLI.java and put the logic 
there.

Well, I understand that currently we rely on parsing {{solr.in.sh|cmd}} for 
various Java options like memory, port etc., but that file could be a properties 
file parsed by Java, and then Java could fork a new child process calling Java 
with all the correct options, no?
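
A rough sketch of what that could look like; the file name and property keys here are made up purely for illustration:

{code}
import java.io.FileReader;
import java.util.ArrayList;
import java.util.List;
import java.util.Properties;

// Illustrative only: read the include file as Java properties and fork the real
// Solr JVM with the resolved options, instead of parsing them in .sh/.cmd.
public class SolrLauncherSketch {
  public static void main(String[] args) throws Exception {
    Properties opts = new Properties();
    try (FileReader in = new FileReader("bin/solr.in.properties")) { // hypothetical file
      opts.load(in);
    }
    List<String> cmd = new ArrayList<>();
    cmd.add("java");
    cmd.add("-Xmx" + opts.getProperty("solr.heap", "512m"));         // hypothetical keys
    cmd.add("-Djetty.port=" + opts.getProperty("solr.port", "8983"));
    cmd.add("-jar");
    cmd.add("start.jar");
    Process solr = new ProcessBuilder(cmd).inheritIO().start();
    System.exit(solr.waitFor());
  }
}
{code}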

> Enhance the bin/solr script to perform file operations to/from Zookeeper
> 
>
> Key: SOLR-9194
> URL: https://issues.apache.org/jira/browse/SOLR-9194
> Project: Solr
>  Issue Type: Improvement
>Reporter: Erick Erickson
>Assignee: Erick Erickson
>Priority: Minor
> Attachments: SOLR-9194.patch, SOLR-9194.patch, SOLR-9194.patch, 
> SOLR-9194.patch
>
>
> There are a few other files that can reasonably be pushed to Zookeeper, e.g. 
> solr.xml, security.json, clusterprops.json. Who knows? Even 
> /state.json for the brave.
> This could reduce further the need for bouncing out to zkcli.
> Assigning to myself just so I don't lose track, but I would _love_ it if 
> someone else wanted to take it...
> I'm thinking the commands would be 
> bin/solr zk -putfile -z  -p  -f 
> bin/solr zk -getfile -z  -p  -f 
> but I'm not wedded to those, all suggestions welcome.






[jira] [Updated] (SOLR-9194) Enhance the bin/solr script to perform file operations to/from Zookeeper

2016-06-28 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/SOLR-9194?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jan Høydahl updated SOLR-9194:
--
Attachment: SOLR-9194.patch

Got solr.cmd working a bit better. It is now parsing the options correctly and 
I successfully tested the cp command.

Still stuff to clean up and test!!

Also, I think the changes somehow must have messed up the {{solr start}} command. 
I got the response {{'to' is not recognized as an internal or external command}} 
when executing {{solr start -c}}.



> Enhance the bin/solr script to perform file operations to/from Zookeeper
> 
>
> Key: SOLR-9194
> URL: https://issues.apache.org/jira/browse/SOLR-9194
> Project: Solr
>  Issue Type: Improvement
>Reporter: Erick Erickson
>Assignee: Erick Erickson
>Priority: Minor
> Attachments: SOLR-9194.patch, SOLR-9194.patch, SOLR-9194.patch, 
> SOLR-9194.patch
>
>
> There are a few other files that can reasonably be pushed to Zookeeper, e.g. 
> solr.xml, security.json, clusterprops.json. Who knows? Even 
> /state.json for the brave.
> This could reduce further the need for bouncing out to zkcli.
> Assigning to myself just so I don't lose track, but I would _love_ it if 
> someone else wanted to take it...
> I'm thinking the commands would be 
> bin/solr zk -putfile -z  -p  -f 
> bin/solr zk -getfile -z  -p  -f 
> but I'm not wedded to those, all suggestions welcome.






[jira] [Commented] (SOLR-9194) Enhance the bin/solr script to perform file operations to/from Zookeeper

2016-06-28 Thread Erick Erickson (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9194?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15353773#comment-15353773
 ] 

Erick Erickson commented on SOLR-9194:
--

Thanks! One thing I wonder about is whether Windows file patterns with 
backslashes are handled appropriately.

> Enhance the bin/solr script to perform file operations to/from Zookeeper
> 
>
> Key: SOLR-9194
> URL: https://issues.apache.org/jira/browse/SOLR-9194
> Project: Solr
>  Issue Type: Improvement
>Reporter: Erick Erickson
>Assignee: Erick Erickson
>Priority: Minor
> Attachments: SOLR-9194.patch, SOLR-9194.patch, SOLR-9194.patch
>
>
> There are a few other files that can reasonably be pushed to Zookeeper, e.g. 
> solr.xml, security.json, clusterprops.json. Who knows? Even 
> /state.json for the brave.
> This could reduce further the need for bouncing out to zkcli.
> Assigning to myself just so I don't lose track, but I would _love_ it if 
> someone else wanted to take it...
> I'm thinking the commands would be 
> bin/solr zk -putfile -z  -p  -f 
> bin/solr zk -getfile -z  -p  -f 
> but I'm not wedded to those, all suggestions welcome.






[jira] [Commented] (SOLR-8858) SolrIndexSearcher#doc() Completely Ignores Field Filters Unless Lazy Field Loading is Enabled

2016-06-28 Thread Caleb Rackliffe (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8858?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15353771#comment-15353771
 ] 

Caleb Rackliffe commented on SOLR-8858:
---

After applying the patch myself on master, the {{doc}} method looks like this:

{noformat}
  @Override
  public Document doc(int i, Set<String> fields) throws IOException {
    Document d;
    if (documentCache != null) {
      d = documentCache.get(i);
      if (d != null) return d;
    }

    final DirectoryReader reader = getIndexReader();
    if (fields != null) {
      if (enableLazyFieldLoading) {
        final SetNonLazyFieldSelector visitor =
            new SetNonLazyFieldSelector(fields, reader, i);
        reader.document(i, visitor);
        d = visitor.doc;
      } else {
        // Honor the requested field set even when lazy loading is disabled.
        d = reader.document(i, fields);
      }
    } else {
      d = reader.document(i);
    }

    if (documentCache != null) {
      documentCache.put(i, d);
    }

    return d;
  }
{noformat}

Running the tests (i.e. {{ant test -Dtests.slow=false}}), I get:

{noformat}
[junit4] Tests with failures [seed: BB0B954A8C44DF29]:
   [junit4]   - 
org.apache.solr.response.transform.TestSubQueryTransformer.testTwoSubQueriesAndByNumberWithTwoFields
   [junit4]   - 
org.apache.solr.response.transform.TestSubQueryTransformer.testJustJohnJavabin
   [junit4]   - 
org.apache.solr.response.transform.TestSubQueryTransformer.testJustJohnJson
   [junit4]   - 
org.apache.solr.response.transform.TestSubQueryTransformer.testJohnOrNancySingleField
   [junit4]   - 
org.apache.solr.response.transform.TestSubQueryTransformer.testThreeLevel
   [junit4]   - org.apache.solr.cloud.DistribJoinFromCollectionTest.testScore
   [junit4]   - org.apache.solr.cloud.DistribJoinFromCollectionTest.testNoScore
   [junit4]   - org.apache.solr.cloud.TestCloudDeleteByQuery (suite)
   [junit4]
   [junit4]
   [junit4] JVM J0: 0.58 ..   415.88 =   415.29s
   [junit4] JVM J1: 0.58 ..   415.81 =   415.23s
   [junit4] JVM J2: 0.58 ..   415.88 =   415.30s
   [junit4] JVM J3: 0.58 ..   415.72 =   415.13s
   [junit4] Execution time total: 6 minutes 56 seconds
   [junit4] Tests summary: 616 suites (10 ignored), 2584 tests, 1 suite-level 
error, 4 errors, 3 failures, 279 ignored (258 assumptions)
{noformat}

I'm going to dig into these a bit and see if using the {{fields}} set broke 
some assumptions somewhere...

> SolrIndexSearcher#doc() Completely Ignores Field Filters Unless Lazy Field 
> Loading is Enabled
> -
>
> Key: SOLR-8858
> URL: https://issues.apache.org/jira/browse/SOLR-8858
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 4.6, 4.10, 5.5
>Reporter: Caleb Rackliffe
>  Labels: easyfix
>
> If {{enableLazyFieldLoading=false}}, a perfectly valid fields filter will be 
> ignored, and we'll create a {{DocumentStoredFieldVisitor}} without it.






[jira] [Commented] (SOLR-7883) MoreLikeThis is incompatible with facets

2016-06-28 Thread Erik Hatcher (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7883?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15353760#comment-15353760
 ] 

Erik Hatcher commented on SOLR-7883:


I gave incorrect query parser syntax before, so that may have been the issue.  
It should be like this: {code}q={!mlt mindf=1 mintf=1}{code}

> MoreLikeThis is incompatible with facets
> 
>
> Key: SOLR-7883
> URL: https://issues.apache.org/jira/browse/SOLR-7883
> Project: Solr
>  Issue Type: Bug
>  Components: faceting, MoreLikeThis
>Affects Versions: 5.2.1
> Environment: Arch Linux / OpenJDK 7.u85_2.6.1-1
>Reporter: Thomas Seidl
>Assignee: Shalin Shekhar Mangar
>
> When using the {{MoreLikeThis}} request handler, it doesn't seem possible to 
> also have facets. This worked in Solr 4, but seems to be broken now.
> Example:
> This works: {{?qt=mlt=id:item1=content}}
> This doesn't: {{?qt=mlt=id:item1=content=true}}
> (Yes, you don't even need to specify any facet fields/ranges/queries. The 
> {{q}} query just has to match an item.)
> While the latter will actually return the same result set as the former, the 
> HTTP status is 500 and the following error included in the response:
> {noformat}
> java.lang.NullPointerException
>   at 
> org.apache.solr.request.SimpleFacets.getHeatmapCounts(SimpleFacets.java:1753)
>   at 
> org.apache.solr.request.SimpleFacets.getFacetCounts(SimpleFacets.java:289)
>   at 
> org.apache.solr.handler.MoreLikeThisHandler.handleRequestBody(MoreLikeThisHandler.java:233)
>   at 
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:143)
>   at org.apache.solr.core.SolrCore.execute(SolrCore.java:2064)
>   at org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:654)
>   at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:450)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:227)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:196)
>   at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1652)
>   at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:585)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
>   at 
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:577)
>   at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:223)
>   at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1127)
>   at 
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:515)
>   at 
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
>   at 
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1061)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
>   at 
> org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:215)
>   at 
> org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:110)
>   at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:97)
>   at org.eclipse.jetty.server.Server.handle(Server.java:497)
>   at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:310)
>   at 
> org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:257)
>   at 
> org.eclipse.jetty.io.AbstractConnection$2.run(AbstractConnection.java:540)
>   at 
> org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:635)
>   at 
> org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:555)
>   at java.lang.Thread.run(Thread.java:745)
> {noformat}






[jira] [Comment Edited] (SOLR-7883) MoreLikeThis is incompatible with facets

2016-06-28 Thread Erik Hatcher (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7883?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15353755#comment-15353755
 ] 

Erik Hatcher edited comment on SOLR-7883 at 6/28/16 9:41 PM:
-

Facets are driven off the result set (generated from the MLT query).  Maybe 
you're just not seeing the effect of it?   Here's an example:

Solr 5.4.1 (because that's the latest one I had laying around)
{code}

$ bin/solr start -e techproducts

$ curl 
'http://localhost:8983/solr/techproducts/select?wt=ruby&indent=on&debug=query&rows=0&facet=on&facet.field=cat&facet.mincount=1&q=%7B!mlt%20mindf=1%20mintf=1%20qf=name%7DSP2514N'
{
  'responseHeader'=>{
'status'=>0,
'QTime'=>1,
'params'=>{
  'q'=>'{!mlt mindf=1 mintf=1 qf=name}SP2514N',
  'facet.field'=>'cat',
  'debug'=>'query',
  'indent'=>'on',
  'facet.mincount'=>'1',
  'rows'=>'0',
  'wt'=>'ruby',
  'facet'=>'on'}},
  'response'=>{'numFound'=>2,'start'=>0,'docs'=>[]
  },
  'facet_counts'=>{
'facet_queries'=>{},
'facet_fields'=>{
  'cat'=>[
'electronics',2,
'hard drive',1,
'music',1]},
'facet_dates'=>{},
'facet_ranges'=>{},
'facet_intervals'=>{},
'facet_heatmaps'=>{}},
  'debug'=>{
'rawquerystring'=>'{!mlt mindf=1 mintf=1 qf=name}SP2514N',
'querystring'=>'{!mlt mindf=1 mintf=1 qf=name}SP2514N',
'parsedquery'=>'(+(name:gb name:hard name:drive name:250 name:spinpoint 
name:133 name:p120 name:samsung name:sp2514n name:ata) -id:SP2514N)/no_coord',
'parsedquery_toString'=>'+(name:gb name:hard name:drive name:250 
name:spinpoint name:133 name:p120 name:samsung name:sp2514n name:ata) 
-id:SP2514N',
'QParser'=>'SimpleMLTQParser'}}

$ curl 
'http://localhost:8983/solr/techproducts/select?wt=ruby&indent=on&debug=query&rows=0&facet=on&facet.field=cat&facet.mincount=1&q=%7B!mlt%20mindf=1%20mintf=1%20qf=name%7DGB18030TEST'
{
  'responseHeader'=>{
'status'=>0,
'QTime'=>0,
'params'=>{
  'q'=>'{!mlt mindf=1 mintf=1 qf=name}GB18030TEST',
  'facet.field'=>'cat',
  'debug'=>'query',
  'indent'=>'on',
  'facet.mincount'=>'1',
  'rows'=>'0',
  'wt'=>'ruby',
  'facet'=>'on'}},
  'response'=>{'numFound'=>2,'start'=>0,'docs'=>[]
  },
  'facet_counts'=>{
'facet_queries'=>{},
'facet_fields'=>{
  'cat'=>[
'electronics',1,
'music',1,
'search',1,
'software',1]},
'facet_dates'=>{},
'facet_ranges'=>{},
'facet_intervals'=>{},
'facet_heatmaps'=>{}},
  'debug'=>{
'rawquerystring'=>'{!mlt mindf=1 mintf=1 qf=name}GB18030TEST',
'querystring'=>'{!mlt mindf=1 mintf=1 qf=name}GB18030TEST',
'parsedquery'=>'(+(name:with name:encoded name:test name:some 
name:characters name:gb18030) -id:GB18030TEST)/no_coord',
'parsedquery_toString'=>'+(name:with name:encoded name:test name:some 
name:characters name:gb18030) -id:GB18030TEST',
'QParser'=>'SimpleMLTQParser'}}
{code}


was (Author: ehatcher):
Facets are driven off the result set (generated from the MLT query).  Maybe 
you're just not seeing the effect of it?   Here's an example:

Solr 5.4.1 (because that's the latest one I had laying around)
{code}

$ bin/solr start -e techproducts

$ curl 
'http://localhost:8983/solr/techproducts/select?wt=ruby&indent=on&debug=query&rows=0&facet=on&facet.field=cat&facet.mincount=1&q=%7B!mlt%20mindf=1%20mintf=1%7DGB18030TEST'
{
  'responseHeader'=>{
'status'=>0,
'QTime'=>2,
'params'=>{
  'q'=>'{!mlt mindf=1 mintf=1}GB18030TEST',
  'facet.field'=>'cat',
  'debug'=>'query',
  'indent'=>'on',
  'facet.mincount'=>'1',
  'rows'=>'0',
  'wt'=>'ruby',
  'facet'=>'on'}},
  'response'=>{'numFound'=>1,'start'=>0,'docs'=>[]
  },
  'facet_counts'=>{
'facet_queries'=>{},
'facet_fields'=>{
  'cat'=>[
'search',1,
'software',1]},
'facet_dates'=>{},
'facet_ranges'=>{},
'facet_intervals'=>{},
'facet_heatmaps'=>{}},
  'debug'=>{
'rawquerystring'=>'{!mlt mindf=1 mintf=1}GB18030TEST',
'querystring'=>'{!mlt mindf=1 mintf=1}GB18030TEST',
'parsedquery'=>'(+(features:here name:characters features:no features:个 
features:份 features:shiny features:光 features:有 name:gb18030 features:very 
features:功 features:很 features:件 features:feature features:能 features:一 
id:GB18030TEST features:泽 features:文 features:document features:this 
features:is features:是 features:这 features:translated) 
-id:GB18030TEST)/no_coord',
'parsedquery_toString'=>'+(features:here name:characters features:no 
features:个 features:份 features:shiny features:光 features:有 name:gb18030 
features:very features:功 features:很 features:件 features:feature features:能 
features:一 id:GB18030TEST features:泽 features:文 features:document features:this 
features:is features:是 features:这 features:translated) -id:GB18030TEST',
'QParser'=>'SimpleMLTQParser'}}

$ curl 
'http://localhost:8983/solr/techproducts/select?wt=ruby&indent=on&debug=query&rows=0&facet=on&facet.field=cat&facet.mincount=1&q=%7B!mlt%20mindf=1%20mintf=1%7DSP2514N'
{
  'responseHeader'=>{
'status'=>0,

[jira] [Commented] (SOLR-7883) MoreLikeThis is incompatible with facets

2016-06-28 Thread Erik Hatcher (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7883?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15353755#comment-15353755
 ] 

Erik Hatcher commented on SOLR-7883:


Facets are driven off the result set (generated from the MLT query).  Maybe 
you're just not seeing the effect of it?   Here's an example:

Solr 5.4.1 (because that's the latest one I had laying around)
{code}

$ bin/solr start -e techproducts

$ curl 
'http://localhost:8983/solr/techproducts/select?wt=ruby&indent=on&debug=query&rows=0&facet=on&facet.field=cat&facet.mincount=1&q=%7B!mlt%20mindf=1%20mintf=1%7DGB18030TEST'
{
  'responseHeader'=>{
'status'=>0,
'QTime'=>2,
'params'=>{
  'q'=>'{!mlt mindf=1 mintf=1}GB18030TEST',
  'facet.field'=>'cat',
  'debug'=>'query',
  'indent'=>'on',
  'facet.mincount'=>'1',
  'rows'=>'0',
  'wt'=>'ruby',
  'facet'=>'on'}},
  'response'=>{'numFound'=>1,'start'=>0,'docs'=>[]
  },
  'facet_counts'=>{
'facet_queries'=>{},
'facet_fields'=>{
  'cat'=>[
'search',1,
'software',1]},
'facet_dates'=>{},
'facet_ranges'=>{},
'facet_intervals'=>{},
'facet_heatmaps'=>{}},
  'debug'=>{
'rawquerystring'=>'{!mlt mindf=1 mintf=1}GB18030TEST',
'querystring'=>'{!mlt mindf=1 mintf=1}GB18030TEST',
'parsedquery'=>'(+(features:here name:characters features:no features:个 
features:份 features:shiny features:光 features:有 name:gb18030 features:very 
features:功 features:很 features:件 features:feature features:能 features:一 
id:GB18030TEST features:泽 features:文 features:document features:this 
features:is features:是 features:这 features:translated) 
-id:GB18030TEST)/no_coord',
'parsedquery_toString'=>'+(features:here name:characters features:no 
features:个 features:份 features:shiny features:光 features:有 name:gb18030 
features:very features:功 features:很 features:件 features:feature features:能 
features:一 id:GB18030TEST features:泽 features:文 features:document features:this 
features:is features:是 features:这 features:translated) -id:GB18030TEST',
'QParser'=>'SimpleMLTQParser'}}

$ curl 
'http://localhost:8983/solr/techproducts/select?wt=ruby&indent=on&debug=query&rows=0&facet=on&facet.field=cat&facet.mincount=1&q=%7B!mlt%20mindf=1%20mintf=1%7DSP2514N'
{
  'responseHeader'=>{
'status'=>0,
'QTime'=>4,
'params'=>{
  'q'=>'{!mlt mindf=1 mintf=1}SP2514N',
  'facet.field'=>'cat',
  'debug'=>'query',
  'indent'=>'on',
  'facet.mincount'=>'1',
  'rows'=>'0',
  'wt'=>'ruby',
  'facet'=>'on'}},
  'response'=>{'numFound'=>12,'start'=>0,'docs'=>[]
  },
  'facet_counts'=>{
'facet_queries'=>{},
'facet_fields'=>{
  'cat'=>[
'electronics',11,
'memory',3,
'connector',2,
'graphics card',2,
'camera',1,
'copier',1,
'hard drive',1,
'multifunction printer',1,
'music',1,
'printer',1,
'scanner',1]},
'facet_dates'=>{},
'facet_ranges'=>{},
'facet_intervals'=>{},
'facet_heatmaps'=>{}},
  'debug'=>{
'rawquerystring'=>'{!mlt mindf=1 mintf=1}SP2514N',
'querystring'=>'{!mlt mindf=1 mintf=1}SP2514N',
'parsedquery'=>'(+(features:cache name:hard manu:co features:technology 
features:ide id:SP2514N features:silentseek features:fdb features:noiseguard 
manu:ltd features:motor name:p120 features:ultra features:dynamic 
features:fluid name:spinpoint features:8mb features:bearing name:sp2514n 
features:7200rpm name:250 cat:electronics features:133 id:samsung features:ata) 
-id:SP2514N)/no_coord',
'parsedquery_toString'=>'+(features:cache name:hard manu:co 
features:technology features:ide id:SP2514N features:silentseek features:fdb 
features:noiseguard manu:ltd features:motor name:p120 features:ultra 
features:dynamic features:fluid name:spinpoint features:8mb features:bearing 
name:sp2514n features:7200rpm name:250 cat:electronics features:133 id:samsung 
features:ata) -id:SP2514N',
'QParser'=>'SimpleMLTQParser'}}

{code}

> MoreLikeThis is incompatible with facets
> 
>
> Key: SOLR-7883
> URL: https://issues.apache.org/jira/browse/SOLR-7883
> Project: Solr
>  Issue Type: Bug
>  Components: faceting, MoreLikeThis
>Affects Versions: 5.2.1
> Environment: Arch Linux / OpenJDK 7.u85_2.6.1-1
>Reporter: Thomas Seidl
>Assignee: Shalin Shekhar Mangar
>
> When using the {{MoreLikeThis}} request handler, it doesn't seem possible to 
> also have facets. This worked in Solr 4, but seems to be broken now.
> Example:
> This works: {{?qt=mlt=id:item1=content}}
> This doesn't: {{?qt=mlt=id:item1=content=true}}
> (Yes, you don't even need to specify any facet fields/ranges/queries. The 
> {{q}} query just has to match an item.)
> While the latter will actually return the same result set as the former, the 
> HTTP status is 500 and the following error included in the response:
> {noformat}
> java.lang.NullPointerException
> 

[JENKINS] Lucene-Solr-master-Linux (32bit/jdk1.8.0_92) - Build # 17088 - Failure!

2016-06-28 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/17088/
Java: 32bit/jdk1.8.0_92 -client -XX:+UseSerialGC

3 tests failed.
FAILED:  junit.framework.TestSuite.org.apache.solr.schema.TestManagedSchemaAPI

Error Message:
ObjectTracker found 8 object(s) that were not released!!! 
[MDCAwareThreadPoolExecutor, MockDirectoryWrapper, MockDirectoryWrapper, 
MockDirectoryWrapper, MDCAwareThreadPoolExecutor, TransactionLog, 
TransactionLog, MockDirectoryWrapper]

Stack Trace:
java.lang.AssertionError: ObjectTracker found 8 object(s) that were not 
released!!! [MDCAwareThreadPoolExecutor, MockDirectoryWrapper, 
MockDirectoryWrapper, MockDirectoryWrapper, MDCAwareThreadPoolExecutor, 
TransactionLog, TransactionLog, MockDirectoryWrapper]
at __randomizedtesting.SeedInfo.seed([8083CB38BBA61C3D]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at org.junit.Assert.assertNull(Assert.java:551)
at 
org.apache.solr.SolrTestCaseJ4.teardownTestCases(SolrTestCaseJ4.java:256)
at sun.reflect.GeneratedMethodAccessor45.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:834)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at java.lang.Thread.run(Thread.java:745)


FAILED:  org.apache.solr.cloud.RecoveryZkTest.test

Error Message:
There are still nodes recoverying - waited for 120 seconds

Stack Trace:
java.lang.AssertionError: There are still nodes recoverying - waited for 120 
seconds
at 
__randomizedtesting.SeedInfo.seed([8083CB38BBA61C3D:8D7F4E2155A71C5]:0)
at org.junit.Assert.fail(Assert.java:93)
at 
org.apache.solr.cloud.AbstractDistribZkTestBase.waitForRecoveriesToFinish(AbstractDistribZkTestBase.java:182)
at 
org.apache.solr.cloud.AbstractFullDistribZkTestBase.waitForRecoveriesToFinish(AbstractFullDistribZkTestBase.java:862)
at 
org.apache.solr.cloud.AbstractFullDistribZkTestBase.waitForThingsToLevelOut(AbstractFullDistribZkTestBase.java:1418)
at org.apache.solr.cloud.RecoveryZkTest.test(RecoveryZkTest.java:105)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 

[JENKINS] Lucene-Solr-master-Windows (64bit/jdk1.8.0_92) - Build # 5942 - Still Failing!

2016-06-28 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-master-Windows/5942/
Java: 64bit/jdk1.8.0_92 -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC

2 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.solr.cloud.CdcrVersionReplicationTest

Error Message:
ObjectTracker found 1 object(s) that were not released!!! [InternalHttpClient]

Stack Trace:
java.lang.AssertionError: ObjectTracker found 1 object(s) that were not 
released!!! [InternalHttpClient]
at __randomizedtesting.SeedInfo.seed([F8495C8E51511DB3]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at org.junit.Assert.assertNull(Assert.java:551)
at 
org.apache.solr.SolrTestCaseJ4.teardownTestCases(SolrTestCaseJ4.java:256)
at sun.reflect.GeneratedMethodAccessor16.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:834)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at java.lang.Thread.run(Thread.java:745)


FAILED:  org.apache.solr.handler.TestSQLHandler.doTest

Error Message:


Stack Trace:
java.lang.AssertionError
at 
__randomizedtesting.SeedInfo.seed([F8495C8E51511DB3:5F0DE42A3CEA0E0A]:0)
at 
org.apache.solr.handler.TestSQLHandler.testBasicSelect(TestSQLHandler.java:298)
at org.apache.solr.handler.TestSQLHandler.doTest(TestSQLHandler.java:91)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:985)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:960)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)

[jira] [Comment Edited] (SOLR-7883) MoreLikeThis is incompatible with facets

2016-06-28 Thread Erik Hatcher (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7883?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15353585#comment-15353585
 ] 

Erik Hatcher edited comment on SOLR-7883 at 6/28/16 9:21 PM:
-

q=\{!mlt\} does work out somehow yet this is still not correct. No matter what 
document id: is used facets are always the same values and counts. It seems to 
facet on the main results, not on the MLT result set. 


was (Author: nuddlegg):
q={!mlt} does work out somehow yet this is still not correct. No matter what 
document id: is used facets are always the same values and counts. It seems to 
facet on the main results, not on the MLT result set. 

> MoreLikeThis is incompatible with facets
> 
>
> Key: SOLR-7883
> URL: https://issues.apache.org/jira/browse/SOLR-7883
> Project: Solr
>  Issue Type: Bug
>  Components: faceting, MoreLikeThis
>Affects Versions: 5.2.1
> Environment: Arch Linux / OpenJDK 7.u85_2.6.1-1
>Reporter: Thomas Seidl
>Assignee: Shalin Shekhar Mangar
>
> When using the {{MoreLikeThis}} request handler, it doesn't seem possible to 
> also have facets. This worked in Solr 4, but seems to be broken now.
> Example:
> This works: {{?qt=mlt&q=id:item1&mlt.fl=content}}
> This doesn't: {{?qt=mlt&q=id:item1&mlt.fl=content&facet=true}}
> (Yes, you don't even need to specify any facet fields/ranges/queries. The 
> {{q}} query just has to match an item.)
> While the latter will actually return the same result set as the former, the 
> HTTP status is 500 and the following error included in the response:
> {noformat}
> java.lang.NullPointerException
>   at 
> org.apache.solr.request.SimpleFacets.getHeatmapCounts(SimpleFacets.java:1753)
>   at 
> org.apache.solr.request.SimpleFacets.getFacetCounts(SimpleFacets.java:289)
>   at 
> org.apache.solr.handler.MoreLikeThisHandler.handleRequestBody(MoreLikeThisHandler.java:233)
>   at 
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:143)
>   at org.apache.solr.core.SolrCore.execute(SolrCore.java:2064)
>   at org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:654)
>   at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:450)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:227)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:196)
>   at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1652)
>   at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:585)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
>   at 
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:577)
>   at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:223)
>   at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1127)
>   at 
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:515)
>   at 
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
>   at 
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1061)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
>   at 
> org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:215)
>   at 
> org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:110)
>   at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:97)
>   at org.eclipse.jetty.server.Server.handle(Server.java:497)
>   at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:310)
>   at 
> org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:257)
>   at 
> org.eclipse.jetty.io.AbstractConnection$2.run(AbstractConnection.java:540)
>   at 
> org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:635)
>   at 
> org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:555)
>   at java.lang.Thread.run(Thread.java:745)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7361) Terms.toStringDebug

2016-06-28 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7361?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15353730#comment-15353730
 ] 

Robert Muir commented on LUCENE-7361:
-

I don't really think the two are comparable. For memoryindex maybe it is ok to 
loop over things like fields or terms, but this is not ok for a real index. 
Can't memoryindex just do this stuff itself?

> Terms.toStringDebug
> ---
>
> Key: LUCENE-7361
> URL: https://issues.apache.org/jira/browse/LUCENE-7361
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: David Smiley
> Attachments: MemoryIndexToString.java
>
>
> While fixing LUCENE-7340, MemoryIndex.toString(), I thought MemoryIndex 
> shouldn't need it's own debug toString() impl for its Terms when there could 
> be a generic one.  So here I propose that we create a 
> Terms.toStringDebug(Appendable result, int charLimit, String indent) or 
> some-such but probably not override toString() for obvious reasons.  Maybe 
> also have this on Fields() that simply loops and calls out to the one on 
> Terms.
> The format is debatable.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8777) Duplicate Solr process can cripple a running process

2016-06-28 Thread Scott Blum (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8777?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15353715#comment-15353715
 ] 

Scott Blum commented on SOLR-8777:
--

LGTM.  One suggestion: it's almost as easy to make 
checkForExistingEphemeralNode() use a watcher instead of a loop.

{code}
private void checkForExistingEphemeralNode() throws KeeperException, InterruptedException {
  if (zkRunOnly) {
    return;
  }
  String nodeName = getNodeName();
  String nodePath = ZkStateReader.LIVE_NODES_ZKNODE + "/" + nodeName;

  if (!zkClient.exists(nodePath, true)) {
    return;
  }

  final CountDownLatch deletedLatch = new CountDownLatch(1);
  Stat stat = zkClient.exists(nodePath, new Watcher() {
    @Override
    public void process(WatchedEvent event) {
      if (Event.EventType.None.equals(event.getType())) {
        return;
      }
      if (Event.EventType.NodeDeleted.equals(event.getType())) {
        deletedLatch.countDown();
      }
    }
  }, true);

  if (stat == null) {
    // suddenly disappeared
    return;
  }

  boolean deleted = deletedLatch.await(zkClient.getSolrZooKeeper().getSessionTimeout() * 2,
      TimeUnit.MILLISECONDS);
  if (!deleted) {
    throw new SolrException(ErrorCode.SERVER_ERROR, "A previous ephemeral live node still exists. "
        + "Solr cannot continue. Please ensure that no other Solr process using the same port is running already.");
  }
}
{code}


> Duplicate Solr process can cripple a running process
> 
>
> Key: SOLR-8777
> URL: https://issues.apache.org/jira/browse/SOLR-8777
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
>Affects Versions: 5.3.1
>Reporter: Shalin Shekhar Mangar
> Attachments: SOLR-8777.patch
>
>
> Thanks to [~mewmewball] for catching this one.
> Accidentally executing the same instance of Solr twice causes the second 
> start instance to die with an "Address already in use", but not before 
> deleting the first instance's live_node entry, emitting "Found a previous 
> node that still exists while trying to register a new live node  - 
> removing existing node to create another".
> The second start instance dies and its ephemeral node is then removed, 
> causing /live_nodes/ to be empty since the first start instance's 
> live_node was deleted by the second.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-NightlyTests-6.x - Build # 104 - Still Failing

2016-06-28 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-6.x/104/

4 tests failed.
FAILED:  
org.apache.solr.cloud.LeaderInitiatedRecoveryOnShardRestartTest.testRestartWithAllInLIR

Error Message:
There are still nodes recoverying - waited for 330 seconds

Stack Trace:
java.lang.AssertionError: There are still nodes recoverying - waited for 330 
seconds
at 
__randomizedtesting.SeedInfo.seed([CA8585B597C94BF3:140FEBC8AC6B2900]:0)
at org.junit.Assert.fail(Assert.java:93)
at 
org.apache.solr.cloud.AbstractDistribZkTestBase.waitForRecoveriesToFinish(AbstractDistribZkTestBase.java:182)
at 
org.apache.solr.cloud.AbstractDistribZkTestBase.waitForRecoveriesToFinish(AbstractDistribZkTestBase.java:139)
at 
org.apache.solr.cloud.AbstractDistribZkTestBase.waitForRecoveriesToFinish(AbstractDistribZkTestBase.java:134)
at 
org.apache.solr.cloud.AbstractFullDistribZkTestBase.waitForRecoveriesToFinish(AbstractFullDistribZkTestBase.java:856)
at 
org.apache.solr.cloud.LeaderInitiatedRecoveryOnShardRestartTest.testRestartWithAllInLIR(LeaderInitiatedRecoveryOnShardRestartTest.java:158)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:992)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:967)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 

[jira] [Commented] (LUCENE-7340) MemoryIndex.toString is broken if you enable payloads

2016-06-28 Thread Daniel Collins (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7340?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15353689#comment-15353689
 ] 

Daniel Collins commented on LUCENE-7340:


The only potential issue I see with the output as it stands with your patch is 
that the bracketing suggests the payload is part of the position 
information (at least that's how I would interpret it), when really it's 
something separate.  But payloads aren't an area I know well (we came upon this 
bug by accident), so I don't feel that strongly about it.

Agreed, there is no real value in the number of payloads, I only added it as 
both terms and positions had counts, so it was purely for consistency with them.

> MemoryIndex.toString is broken if you enable payloads
> -
>
> Key: LUCENE-7340
> URL: https://issues.apache.org/jira/browse/LUCENE-7340
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: modules/highlighter
>Affects Versions: 5.4.1, 6.0.1, master (7.0)
>Reporter: Daniel Collins
>Assignee: David Smiley
>Priority: Minor
> Attachments: LUCENE-7340.diff, LUCENE-7340.diff, LUCENE-7340.patch
>
>
> Noticed this as we use Luwak which creates a MemoryIndex(true, true) storing 
> both offsets and payloads (though in reality we never put any payloads in it).
> We used to use MemoryIndex.toString() for debugging and noticed it broke in 
> Lucene 5.x  and beyond.  I think LUCENE-6155 broke it when it added support 
> for payloads?
> Creating default memoryindex (as all the tests currently do) works fine, as 
> does one with just offsets, it is just the payload version which is broken.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-6.x-Linux (64bit/jdk1.8.0_92) - Build # 992 - Failure!

2016-06-28 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Linux/992/
Java: 64bit/jdk1.8.0_92 -XX:-UseCompressedOops -XX:+UseParallelGC

3 tests failed.
FAILED:  org.apache.solr.cloud.CustomCollectionTest.test

Error Message:
Could not load collection from ZK: testCreateShardRepFactor

Stack Trace:
org.apache.solr.common.SolrException: Could not load collection from ZK: 
testCreateShardRepFactor
at 
__randomizedtesting.SeedInfo.seed([1D68AC798DCAF1E0:953C93A323369C18]:0)
at 
org.apache.solr.common.cloud.ZkStateReader.getCollectionLive(ZkStateReader.java:1047)
at 
org.apache.solr.common.cloud.ZkStateReader$LazyCollectionRef.get(ZkStateReader.java:610)
at 
org.apache.solr.common.cloud.ClusterState.getCollectionOrNull(ClusterState.java:211)
at 
org.apache.solr.common.cloud.ClusterState.getSlicesMap(ClusterState.java:151)
at 
org.apache.solr.cloud.AbstractDistribZkTestBase.waitForRecoveriesToFinish(AbstractDistribZkTestBase.java:153)
at 
org.apache.solr.cloud.AbstractDistribZkTestBase.waitForRecoveriesToFinish(AbstractDistribZkTestBase.java:139)
at 
org.apache.solr.cloud.AbstractDistribZkTestBase.waitForRecoveriesToFinish(AbstractDistribZkTestBase.java:134)
at 
org.apache.solr.cloud.CustomCollectionTest.testCreateShardRepFactor(CustomCollectionTest.java:393)
at 
org.apache.solr.cloud.CustomCollectionTest.test(CustomCollectionTest.java:101)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:992)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:967)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[jira] [Updated] (LUCENE-7361) Terms.toStringDebug

2016-06-28 Thread David Smiley (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7361?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Smiley updated LUCENE-7361:
-
Attachment: MemoryIndexToString.java

Attached is some WIP on MemoryIndex's impl that I started to make generic.  
Again, the format is debatable, but it'd be nice to hear whether this is a good 
idea and what the API should be before continuing.

> Terms.toStringDebug
> ---
>
> Key: LUCENE-7361
> URL: https://issues.apache.org/jira/browse/LUCENE-7361
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: David Smiley
> Attachments: MemoryIndexToString.java
>
>
> While fixing LUCENE-7340, MemoryIndex.toString(), I thought MemoryIndex 
> shouldn't need it's own debug toString() impl for its Terms when there could 
> be a generic one.  So here I propose that we create a 
> Terms.toStringDebug(Appendable result, int charLimit, String indent) or 
> some-such but probably not override toString() for obvious reasons.  Maybe 
> also have this on Fields() that simply loops and calls out to the one on 
> Terms.
> The format is debatable.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (LUCENE-7362) Implement FieldInfos and FieldInfo toString()

2016-06-28 Thread David Smiley (JIRA)
David Smiley created LUCENE-7362:


 Summary: Implement FieldInfos and FieldInfo toString()
 Key: LUCENE-7362
 URL: https://issues.apache.org/jira/browse/LUCENE-7362
 Project: Lucene - Core
  Issue Type: Improvement
  Components: core/index
Reporter: David Smiley


FieldInfos and FieldInfo ought to override toString().  Perhaps 
FieldInfo.toString() can look like the pattern popularized by Luke, also seen 
in Solr?
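
For illustration only, here is a rough sketch of what a Luke-style FieldInfo.toString() might print; the particular flags shown and the formatting are guesses, not a proposed final format:

{code}
// Illustrative sketch only; which attributes to include is up for discussion.
@Override
public String toString() {
  return name
      + " indexOptions=" + getIndexOptions()
      + " docValues=" + getDocValuesType()
      + " storeTermVectors=" + hasVectors()
      + " omitNorms=" + omitsNorms()
      + " storePayloads=" + hasPayloads();
}
{code}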



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Assigned] (SOLR-8657) SolrRequestInfo logs an error if QuerySenderListener is being used

2016-06-28 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/SOLR-8657?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tomás Fernández Löbbe reassigned SOLR-8657:
---

Assignee: Tomás Fernández Löbbe

> SolrRequestInfo logs an error if QuerySenderListener is being used
> --
>
> Key: SOLR-8657
> URL: https://issues.apache.org/jira/browse/SOLR-8657
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 5.4.1
>Reporter: Pascal Chollet
>Assignee: Tomás Fernández Löbbe
> Attachments: SOLR-8657.patch, SOLR-8657.patch, Screen Shot 2016-02-10 
> at 09.43.56.png
>
>
> This is the stack trace:
> {code}
> at 
> org.apache.solr.request.SolrRequestInfo.setRequestInfo(SolrRequestInfo.java:59)
> at 
> org.apache.solr.core.QuerySenderListener.newSearcher(QuerySenderListener.java:68)
> at org.apache.solr.core.SolrCore$6.call(SolrCore.java:1859)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at 
> org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor$1.run(ExecutorUtil.java:232)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745)
> {code}
> SolrRequestInfo is being set in MDCAwareThreadPoolExecutor.execute() and 
> later in QuerySenderListener.newSearcher() in the same thread.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8657) SolrRequestInfo logs an error if QuerySenderListener is being used

2016-06-28 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8657?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15353633#comment-15353633
 ] 

ASF subversion and git services commented on SOLR-8657:
---

Commit 4070bdd8d8b2095b406c404720e5f2c347596350 in lucene-solr's branch 
refs/heads/master from Tomas Fernandez Lobbe
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=4070bdd ]

SOLR-8657: Fix SolrRequestInfo error logs if QuerySenderListener is being used


> SolrRequestInfo logs an error if QuerySenderListener is being used
> --
>
> Key: SOLR-8657
> URL: https://issues.apache.org/jira/browse/SOLR-8657
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 5.4.1
>Reporter: Pascal Chollet
> Attachments: SOLR-8657.patch, SOLR-8657.patch, Screen Shot 2016-02-10 
> at 09.43.56.png
>
>
> This is the stack trace:
> {code}
> at 
> org.apache.solr.request.SolrRequestInfo.setRequestInfo(SolrRequestInfo.java:59)
> at 
> org.apache.solr.core.QuerySenderListener.newSearcher(QuerySenderListener.java:68)
> at org.apache.solr.core.SolrCore$6.call(SolrCore.java:1859)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at 
> org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor$1.run(ExecutorUtil.java:232)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745)
> {code}
> SolrRequestInfo is being set in MDCAwareThreadPoolExecutor.execute() and 
> later in QuerySenderListener.newSearcher() in the same thread.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9194) Enhance the bin/solr script to perform file operations to/from Zookeeper

2016-06-28 Thread JIRA

[ 
https://issues.apache.org/jira/browse/SOLR-9194?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15353631#comment-15353631
 ] 

Jan Høydahl commented on SOLR-9194:
---

I'll take a stab at getting it running...

> Enhance the bin/solr script to perform file operations to/from Zookeeper
> 
>
> Key: SOLR-9194
> URL: https://issues.apache.org/jira/browse/SOLR-9194
> Project: Solr
>  Issue Type: Improvement
>Reporter: Erick Erickson
>Assignee: Erick Erickson
>Priority: Minor
> Attachments: SOLR-9194.patch, SOLR-9194.patch, SOLR-9194.patch
>
>
> There are a few other files that can reasonably be pushed to Zookeeper, e.g. 
> solr.xml, security.json, clusterprops.json. Who knows? Even 
> /state.json for the brave.
> This could reduce further the need for bouncing out to zkcli.
> Assigning to myself just so I don't lose track, but I would _love_ it if 
> someone else wanted to take it...
> I'm thinking the commands would be 
> bin/solr zk -putfile -z  -p  -f 
> bin/solr zk -getfile -z  -p  -f 
> but I'm not wedded to those, all suggestions welcome.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (LUCENE-7361) Terms.toStringDebug

2016-06-28 Thread David Smiley (JIRA)
David Smiley created LUCENE-7361:


 Summary: Terms.toStringDebug
 Key: LUCENE-7361
 URL: https://issues.apache.org/jira/browse/LUCENE-7361
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: David Smiley


While fixing LUCENE-7340, MemoryIndex.toString(), I thought MemoryIndex 
shouldn't need its own debug toString() impl for its Terms when there could be 
a generic one.  So here I propose that we create a 
Terms.toStringDebug(Appendable result, int charLimit, String indent) or 
some-such but probably not override toString() for obvious reasons.  Maybe also 
have this on Fields() that simply loops and calls out to the one on Terms.

The format is debatable.
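
As a rough illustration only (the method name, signature, and output format below are placeholders, not an agreed API), such a helper could simply walk the TermsEnum and stop once the character limit is reached:

{code}
// Sketch of a generic debug dump for a Terms instance; not actual Lucene API.
public static void toStringDebug(Terms terms, Appendable out, int charLimit) throws IOException {
  int written = 0;
  TermsEnum termsEnum = terms.iterator();
  for (BytesRef term = termsEnum.next(); term != null; term = termsEnum.next()) {
    String line = term.utf8ToString() + " (docFreq=" + termsEnum.docFreq() + ")\n";
    if (written + line.length() > charLimit) {
      out.append("...\n");
      break;
    }
    out.append(line);
    written += line.length();
  }
}
{code}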



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-7340) MemoryIndex.toString is broken if you enable payloads

2016-06-28 Thread David Smiley (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7340?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Smiley updated LUCENE-7340:
-
Attachment: LUCENE-7340.patch

I tweaked your patch a bit, including modifying the output a little.  I think 
there's no value in counting the number of payloads, so I removed that; do you 
disagree?  I added a dedicated toString() test.

What I really don't like about MemoryIndex.toString() is that it isn't generic 
when it so obviously could be.  Why have MemoryIndex specific logic that needs 
to be maintained (it broke here causing this bug) when there might be a 
Terms.toString() or at least a utility method on Terms (or Fields?) that a 
better MemoryIndex might call?  I'm filing a separate issue for that.

> MemoryIndex.toString is broken if you enable payloads
> -
>
> Key: LUCENE-7340
> URL: https://issues.apache.org/jira/browse/LUCENE-7340
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: modules/highlighter
>Affects Versions: 5.4.1, 6.0.1, master (7.0)
>Reporter: Daniel Collins
>Assignee: David Smiley
>Priority: Minor
> Attachments: LUCENE-7340.diff, LUCENE-7340.diff, LUCENE-7340.patch
>
>
> Noticed this as we use Luwak which creates a MemoryIndex(true, true) storing 
> both offsets and payloads (though in reality we never put any payloads in it).
> We used to use MemoryIndex.toString() for debugging and noticed it broke in 
> Lucene 5.x  and beyond.  I think LUCENE-6155 broke it when it added support 
> for payloads?
> Creating default memoryindex (as all the tests currently do) works fine, as 
> does one with just offsets, it is just the payload version which is broken.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9254) GraphTermsQueryQParserPlugin throws NPE when field being searched is not present in segment

2016-06-28 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9254?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15353590#comment-15353590
 ] 

ASF subversion and git services commented on SOLR-9254:
---

Commit 723fc1dc8560b4255cca5fe198115c894205683c in lucene-solr's branch 
refs/heads/branch_6x from jbernste
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=723fc1d ]

SOLR-9254: Fixed typo in CHANGES.txt


> GraphTermsQueryQParserPlugin throws NPE when field being searched is not 
> present in segment
> ---
>
> Key: SOLR-9254
> URL: https://issues.apache.org/jira/browse/SOLR-9254
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Steve Rowe
>Assignee: Joel Bernstein
> Fix For: 6.2, master (7.0)
>
>
> My Jenkins found a reproducing seed on branch_6x:
> {noformat}
> Checking out Revision d1a047ad6f24078f23c9b4adf15210ac8a6e8f8a 
> (refs/remotes/origin/branch_6x)
> [...]
>[junit4]   2> NOTE: reproduce with: ant test  
> -Dtestcase=TestGraphTermsQParserPlugin -Dtests.method=testQueries 
> -Dtests.seed=E47472DC605D2D21 -Dtests.slow=true 
> -Dtests.linedocsfile=/home/jenkins/lucene-data/enwiki.random.lines.txt 
> -Dtests.locale=sr-Latn-ME -Dtests.timezone=America/Guadeloupe 
> -Dtests.asserts=true -Dtests.file.encoding=UTF-8
>[junit4] ERROR   0.06s J11 | TestGraphTermsQParserPlugin.testQueries <<<
>[junit4]> Throwable #1: java.lang.RuntimeException: Exception during 
> query
>[junit4]>  at 
> __randomizedtesting.SeedInfo.seed([E47472DC605D2D21:B8FABE077A34988F]:0)
>[junit4]>  at 
> org.apache.solr.SolrTestCaseJ4.assertQ(SolrTestCaseJ4.java:781)
>[junit4]>  at 
> org.apache.solr.SolrTestCaseJ4.assertQ(SolrTestCaseJ4.java:748)
>[junit4]>  at 
> org.apache.solr.search.TestGraphTermsQParserPlugin.testQueries(TestGraphTermsQParserPlugin.java:76)
>[junit4]>  at java.lang.Thread.run(Thread.java:745)
>[junit4]> Caused by: java.lang.NullPointerException
>[junit4]>  at 
> org.apache.solr.search.GraphTermsQParserPlugin$GraphTermsQuery$1.rewrite(GraphTermsQParserPlugin.java:223)
>[junit4]>  at 
> org.apache.solr.search.GraphTermsQParserPlugin$GraphTermsQuery$1.bulkScorer(GraphTermsQParserPlugin.java:252)
>[junit4]>  at 
> org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:666)
>[junit4]>  at 
> org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:473)
>[junit4]>  at 
> org.apache.solr.search.SolrIndexSearcher.buildAndRunCollectorChain(SolrIndexSearcher.java:261)
>[junit4]>  at 
> org.apache.solr.search.SolrIndexSearcher.getDocListNC(SolrIndexSearcher.java:1818)
>[junit4]>  at 
> org.apache.solr.search.SolrIndexSearcher.getDocListC(SolrIndexSearcher.java:1635)
>[junit4]>  at 
> org.apache.solr.search.SolrIndexSearcher.search(SolrIndexSearcher.java:644)
>[junit4]>  at 
> org.apache.solr.handler.component.QueryComponent.process(QueryComponent.java:528)
>[junit4]>  at 
> org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:293)
>[junit4]>  at 
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:154)
>[junit4]>  at 
> org.apache.solr.core.SolrCore.execute(SolrCore.java:2035)
>[junit4]>  at 
> org.apache.solr.util.TestHarness.query(TestHarness.java:310)
>[junit4]>  at 
> org.apache.solr.util.TestHarness.query(TestHarness.java:292)
>[junit4]>  at 
> org.apache.solr.SolrTestCaseJ4.assertQ(SolrTestCaseJ4.java:755)
>[junit4]>  ... 41 more
>[junit4]   2> NOTE: test params are: codec=Asserting(Lucene62): 
> {test_tl=PostingsFormat(name=Direct), _version_=BlockTreeOrds(blocksize=128), 
> test_ti=BlockTreeOrds(blocksize=128), term_s=PostingsFormat(name=Asserting), 
> test_tf=PostingsFormat(name=LuceneVarGapDocFreqInterval), 
> id=PostingsFormat(name=LuceneVarGapDocFreqInterval), 
> group_s=PostingsFormat(name=LuceneVarGapDocFreqInterval)}, docValues:{}, 
> maxPointsInLeafNode=1102, maxMBSortInHeap=5.004024995692577, 
> sim=ClassicSimilarity, locale=sr-Latn-ME, timezone=America/Guadeloupe
>[junit4]   2> NOTE: Linux 4.1.0-custom2-amd64 amd64/Oracle Corporation 
> 1.8.0_77 (64-bit)/cpus=16,threads=1,free=283609904,total=531628032
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9254) GraphTermsQueryQParserPlugin throws NPE when field being searched is not present in segment

2016-06-28 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9254?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15353589#comment-15353589
 ] 

ASF subversion and git services commented on SOLR-9254:
---

Commit 3f7acb5cf90e8e3e7ed21e927d10b867d0b307f1 in lucene-solr's branch 
refs/heads/master from jbernste
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=3f7acb5 ]

SOLR-9254: Fixed typo in CHANGES.txt


> GraphTermsQueryQParserPlugin throws NPE when field being searched is not 
> present in segment
> ---
>
> Key: SOLR-9254
> URL: https://issues.apache.org/jira/browse/SOLR-9254
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Steve Rowe
>Assignee: Joel Bernstein
> Fix For: 6.2, master (7.0)
>
>
> My Jenkins found a reproducing seed on branch_6x:
> {noformat}
> Checking out Revision d1a047ad6f24078f23c9b4adf15210ac8a6e8f8a 
> (refs/remotes/origin/branch_6x)
> [...]
>[junit4]   2> NOTE: reproduce with: ant test  
> -Dtestcase=TestGraphTermsQParserPlugin -Dtests.method=testQueries 
> -Dtests.seed=E47472DC605D2D21 -Dtests.slow=true 
> -Dtests.linedocsfile=/home/jenkins/lucene-data/enwiki.random.lines.txt 
> -Dtests.locale=sr-Latn-ME -Dtests.timezone=America/Guadeloupe 
> -Dtests.asserts=true -Dtests.file.encoding=UTF-8
>[junit4] ERROR   0.06s J11 | TestGraphTermsQParserPlugin.testQueries <<<
>[junit4]> Throwable #1: java.lang.RuntimeException: Exception during 
> query
>[junit4]>  at 
> __randomizedtesting.SeedInfo.seed([E47472DC605D2D21:B8FABE077A34988F]:0)
>[junit4]>  at 
> org.apache.solr.SolrTestCaseJ4.assertQ(SolrTestCaseJ4.java:781)
>[junit4]>  at 
> org.apache.solr.SolrTestCaseJ4.assertQ(SolrTestCaseJ4.java:748)
>[junit4]>  at 
> org.apache.solr.search.TestGraphTermsQParserPlugin.testQueries(TestGraphTermsQParserPlugin.java:76)
>[junit4]>  at java.lang.Thread.run(Thread.java:745)
>[junit4]> Caused by: java.lang.NullPointerException
>[junit4]>  at 
> org.apache.solr.search.GraphTermsQParserPlugin$GraphTermsQuery$1.rewrite(GraphTermsQParserPlugin.java:223)
>[junit4]>  at 
> org.apache.solr.search.GraphTermsQParserPlugin$GraphTermsQuery$1.bulkScorer(GraphTermsQParserPlugin.java:252)
>[junit4]>  at 
> org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:666)
>[junit4]>  at 
> org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:473)
>[junit4]>  at 
> org.apache.solr.search.SolrIndexSearcher.buildAndRunCollectorChain(SolrIndexSearcher.java:261)
>[junit4]>  at 
> org.apache.solr.search.SolrIndexSearcher.getDocListNC(SolrIndexSearcher.java:1818)
>[junit4]>  at 
> org.apache.solr.search.SolrIndexSearcher.getDocListC(SolrIndexSearcher.java:1635)
>[junit4]>  at 
> org.apache.solr.search.SolrIndexSearcher.search(SolrIndexSearcher.java:644)
>[junit4]>  at 
> org.apache.solr.handler.component.QueryComponent.process(QueryComponent.java:528)
>[junit4]>  at 
> org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:293)
>[junit4]>  at 
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:154)
>[junit4]>  at 
> org.apache.solr.core.SolrCore.execute(SolrCore.java:2035)
>[junit4]>  at 
> org.apache.solr.util.TestHarness.query(TestHarness.java:310)
>[junit4]>  at 
> org.apache.solr.util.TestHarness.query(TestHarness.java:292)
>[junit4]>  at 
> org.apache.solr.SolrTestCaseJ4.assertQ(SolrTestCaseJ4.java:755)
>[junit4]>  ... 41 more
>[junit4]   2> NOTE: test params are: codec=Asserting(Lucene62): 
> {test_tl=PostingsFormat(name=Direct), _version_=BlockTreeOrds(blocksize=128), 
> test_ti=BlockTreeOrds(blocksize=128), term_s=PostingsFormat(name=Asserting), 
> test_tf=PostingsFormat(name=LuceneVarGapDocFreqInterval), 
> id=PostingsFormat(name=LuceneVarGapDocFreqInterval), 
> group_s=PostingsFormat(name=LuceneVarGapDocFreqInterval)}, docValues:{}, 
> maxPointsInLeafNode=1102, maxMBSortInHeap=5.004024995692577, 
> sim=ClassicSimilarity, locale=sr-Latn-ME, timezone=America/Guadeloupe
>[junit4]   2> NOTE: Linux 4.1.0-custom2-amd64 amd64/Oracle Corporation 
> 1.8.0_77 (64-bit)/cpus=16,threads=1,free=283609904,total=531628032
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-7883) MoreLikeThis is incompatible with facets

2016-06-28 Thread Michael Daum (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7883?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15353585#comment-15353585
 ] 

Michael Daum commented on SOLR-7883:


q={!mlt} does work out somehow yet this is still not correct. No matter what 
document id: is used facets are always the same values and counts. It seems to 
facet on the main results, not on the MLT result set. 

> MoreLikeThis is incompatible with facets
> 
>
> Key: SOLR-7883
> URL: https://issues.apache.org/jira/browse/SOLR-7883
> Project: Solr
>  Issue Type: Bug
>  Components: faceting, MoreLikeThis
>Affects Versions: 5.2.1
> Environment: Arch Linux / OpenJDK 7.u85_2.6.1-1
>Reporter: Thomas Seidl
>Assignee: Shalin Shekhar Mangar
>
> When using the {{MoreLikeThis}} request handler, it doesn't seem possible to 
> also have facets. This worked in Solr 4, but seems to be broken now.
> Example:
> This works: {{?qt=mlt&q=id:item1&mlt.fl=content}}
> This doesn't: {{?qt=mlt&q=id:item1&mlt.fl=content&facet=true}}
> (Yes, you don't even need to specify any facet fields/ranges/queries. The 
> {{q}} query just has to match an item.)
> While the latter will actually return the same result set as the former, the 
> HTTP status is 500 and the following error included in the response:
> {noformat}
> java.lang.NullPointerException
>   at 
> org.apache.solr.request.SimpleFacets.getHeatmapCounts(SimpleFacets.java:1753)
>   at 
> org.apache.solr.request.SimpleFacets.getFacetCounts(SimpleFacets.java:289)
>   at 
> org.apache.solr.handler.MoreLikeThisHandler.handleRequestBody(MoreLikeThisHandler.java:233)
>   at 
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:143)
>   at org.apache.solr.core.SolrCore.execute(SolrCore.java:2064)
>   at org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:654)
>   at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:450)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:227)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:196)
>   at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1652)
>   at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:585)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
>   at 
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:577)
>   at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:223)
>   at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1127)
>   at 
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:515)
>   at 
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
>   at 
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1061)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
>   at 
> org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:215)
>   at 
> org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:110)
>   at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:97)
>   at org.eclipse.jetty.server.Server.handle(Server.java:497)
>   at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:310)
>   at 
> org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:257)
>   at 
> org.eclipse.jetty.io.AbstractConnection$2.run(AbstractConnection.java:540)
>   at 
> org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:635)
>   at 
> org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:555)
>   at java.lang.Thread.run(Thread.java:745)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-8777) Duplicate Solr process can cripple a running process

2016-06-28 Thread Shalin Shekhar Mangar (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8777?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shalin Shekhar Mangar updated SOLR-8777:

Attachment: SOLR-8777.patch

Here's a patch which waits for up to twice the session timeout for the ephemeral 
node to go away before setting up overseer election and creating the live node. If 
the node doesn't go away, we raise an exception and bail out.
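
For readers following along, here is a minimal sketch of the waiting behaviour described above. This is illustrative only, not the attached patch: the SolrZkClient calls mirror what ZkController already uses, but the method name and the loop itself are made up for this example.

{code}
// Illustrative sketch, not the actual SOLR-8777.patch.
// Poll for a stale ephemeral live node to disappear, waiting up to twice
// the ZooKeeper session timeout before giving up with an error.
private void waitForStaleEphemeralNode(String nodePath) throws KeeperException, InterruptedException {
  long timeoutMs = zkClient.getSolrZooKeeper().getSessionTimeout() * 2L;
  long deadlineNanos = System.nanoTime() + TimeUnit.MILLISECONDS.toNanos(timeoutMs);
  while (zkClient.exists(nodePath, true)) {
    if (System.nanoTime() > deadlineNanos) {
      throw new SolrException(ErrorCode.SERVER_ERROR,
          "A previous ephemeral live node still exists. Solr cannot continue.");
    }
    Thread.sleep(1000); // re-check once per second
  }
}
{code}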

> Duplicate Solr process can cripple a running process
> 
>
> Key: SOLR-8777
> URL: https://issues.apache.org/jira/browse/SOLR-8777
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
>Affects Versions: 5.3.1
>Reporter: Shalin Shekhar Mangar
> Attachments: SOLR-8777.patch
>
>
> Thanks to [~mewmewball] for catching this one.
> Accidentally executing the same instance of Solr twice causes the second 
> start instance to die with an "Address already in use", but not before 
> deleting the first instance's live_node entry, emitting "Found a previous 
> node that still exists while trying to register a new live node  - 
> removing existing node to create another".
> The second start instance dies and its ephemeral node is then removed, 
> causing /live_nodes/ to be empty since the first start instance's 
> live_node was deleted by the second.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-6.x-Solaris (64bit/jdk1.8.0) - Build # 228 - Still Failing!

2016-06-28 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Solaris/228/
Java: 64bit/jdk1.8.0 -XX:+UseCompressedOops -XX:+UseG1GC

4 tests failed.
FAILED:  org.apache.solr.core.TestArbitraryIndexDir.testLoadNewIndexDir

Error Message:
Exception during query

Stack Trace:
java.lang.RuntimeException: Exception during query
at 
__randomizedtesting.SeedInfo.seed([499DD27EC24E021F:A0C769465CD792B7]:0)
at org.apache.solr.SolrTestCaseJ4.assertQ(SolrTestCaseJ4.java:781)
at 
org.apache.solr.core.TestArbitraryIndexDir.testLoadNewIndexDir(TestArbitraryIndexDir.java:107)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.RuntimeException: REQUEST FAILED: xpath=*[count(//doc)=1]
xml response was: 

00


request was:q=id:2=standard=0=20=2.2
at 

[jira] [Commented] (SOLR-9253) solrcloud goes dowm

2016-06-28 Thread Shalin Shekhar Mangar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9253?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15353558#comment-15353558
 ] 

Shalin Shekhar Mangar commented on SOLR-9253:
-

bq. Before hitting a critical failure, would there be any indications in the 
logs of the possible need to increase the pool? Issuing a warning might be 
useful when lease times exceed a certain threshold.

If a sane connection timeout has been specified then there should be a 
ConnectionPoolTimeoutException wrapped inside a RequestAbortedException. AFAIK, 
the lease times aren't available to us. The PoolingHttpClientConnectionManager 
exposes pool statistics but we don't surface those anywhere. 
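
For reference, a minimal sketch of what the underlying HttpClient 4.x API makes available (Solr does not wire this up anywhere today; this only shows the statistics the connection manager itself can report):

{code}
// Reading pool statistics directly from HttpClient's connection manager.
// Classes come from org.apache.http.impl.conn and org.apache.http.pool.
PoolingHttpClientConnectionManager cm = new PoolingHttpClientConnectionManager();
PoolStats stats = cm.getTotalStats();
System.out.println("leased=" + stats.getLeased()
    + " pending=" + stats.getPending()
    + " available=" + stats.getAvailable()
    + " max=" + stats.getMax());
{code}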

> solrcloud goes dowm
> ---
>
> Key: SOLR-9253
> URL: https://issues.apache.org/jira/browse/SOLR-9253
> Project: Solr
>  Issue Type: Wish
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: clients - java
>Affects Versions: 4.9.1
> Environment: jboss, zookeeper
>Reporter: Junfeng Mu
> Attachments: 20160627161845.png, javacore.165.txt
>
>   Original Estimate: 96h
>  Remaining Estimate: 96h
>
> We use SolrCloud in our project. Currently we use Solr, but the data grows bigger 
> and bigger, so we want to switch to SolrCloud. However, once we switch to 
> SolrCloud, it goes down. It seems that SolrCloud is blocked and cannot deal 
> with new queries. Please see the attachments and help us ASAP. Thanks!



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-7191) Improve stability and startup performance of SolrCloud with thousands of collections

2016-06-28 Thread Erick Erickson (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7191?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15353556#comment-15353556
 ] 

Erick Erickson commented on SOLR-7191:
--

Yeah, probably, as I'm out of my depth here. Since each replica goes through at 
least three state changes (down->recovering->active), not to mention leadership 
election and such, and each state change needs to get to ZK, I'm really not at 
all sure how to cut that number down.



> Improve stability and startup performance of SolrCloud with thousands of 
> collections
> 
>
> Key: SOLR-7191
> URL: https://issues.apache.org/jira/browse/SOLR-7191
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
>Affects Versions: 5.0
>Reporter: Shawn Heisey
>Assignee: Shalin Shekhar Mangar
>  Labels: performance, scalability
> Attachments: SOLR-7191.patch, SOLR-7191.patch, SOLR-7191.patch, 
> SOLR-7191.patch, SOLR-7191.patch, SOLR-7191.patch, SOLR-7191.patch, 
> lots-of-zkstatereader-updates-branch_5x.log
>
>
> A user on the mailing list with thousands of collections (5000 on 4.10.3, 
> 4000 on 5.0) is having severe problems with getting Solr to restart.
> I tried as hard as I could to duplicate the user setup, but I ran into many 
> problems myself even before I was able to get 4000 collections created on a 
> 5.0 example cloud setup.  Restarting Solr takes a very long time, and it is 
> not very stable once it's up and running.
> This kind of setup is very much pushing the envelope on SolrCloud performance 
> and scalability.  It doesn't help that I'm running both Solr nodes on one 
> machine (I started with 'bin/solr -e cloud') and that ZK is embedded.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Closed] (SOLR-9261) facet.field.limit doesn't work in local params for distributed search

2016-06-28 Thread Mike Drob (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9261?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mike Drob closed SOLR-9261.
---
Resolution: Duplicate

Accidental duplicate submission.

> facet.field.limit doesn't work in local params for distributed search
> -
>
> Key: SOLR-9261
> URL: https://issues.apache.org/jira/browse/SOLR-9261
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Mike Drob
>
> This is kind of an esoteric usage and several things have to be true for the 
> issue to surface. We discovered it using Hue generated queries.
> If you 1) specify a global facet.limit, 2) specify a facet.field.limit in 
> local params, 3) on a distributed search, then the field limit will be 
> ignored.
> Some examples:
> {{facet.limit=2&facet.field=\{!key=other f.cat.facet.limit=15\}cat}} -- works 
> for single shard, but not multiple shards.
> {{facet.limit=2&facet.field=cat&f.cat.facet.limit=15}} -- works in all cases
> This could be treated as a docs bug to explicitly say that while some facet 
> settings in local params work, they should not be relied upon, or we could 
> treat this as a parser bug and make distributed search work like non-distrib.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-9260) facet.field.limit doesn't work in local params for distributed search

2016-06-28 Thread Mike Drob (JIRA)
Mike Drob created SOLR-9260:
---

 Summary: facet.field.limit doesn't work in local params for 
distributed search
 Key: SOLR-9260
 URL: https://issues.apache.org/jira/browse/SOLR-9260
 Project: Solr
  Issue Type: Bug
  Security Level: Public (Default Security Level. Issues are Public)
Reporter: Mike Drob


This is kind of an esoteric usage and several things have to be true for the 
issue to surface. We discovered it using Hue generated queries.

If you 1) specify a global facet.limit, 2) specify a facet.field.limit in local 
params, 3) on a distributed search, then the field limit will be ignored.

Some examples:
{{facet.limit=2&facet.field=\{!key=other f.cat.facet.limit=15\}cat}} -- works 
for single shard, but not multiple shards.
{{facet.limit=2&facet.field=cat&f.cat.facet.limit=15}} -- works in all cases

This could be treated as a docs bug to explicitly say that while some facet 
settings in local params work, they should not be relied upon, or we could 
treat this as a parser bug and make distributed search work like non-distrib.
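
Until that decision is made, the reliable form is the plain per-field parameter. A 
minimal SolrJ sketch of that parameter layout (the query and field names here are 
illustrative, not taken from the report):

{code:java}
import org.apache.solr.client.solrj.SolrQuery;

public class FacetLimitParams {
  public static void main(String[] args) {
    SolrQuery q = new SolrQuery("*:*");
    q.setFacet(true);
    q.setFacetLimit(2);              // global facet.limit
    q.addFacetField("cat");          // plain facet.field, no local params
    q.set("f.cat.facet.limit", 15);  // per-field override as a top-level parameter
    // yields q=*:*&facet=true&facet.limit=2&facet.field=cat&f.cat.facet.limit=15
    System.out.println(q);
  }
}
{code}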





--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-9261) facet.field.limit doesn't work in local params for distributed search

2016-06-28 Thread Mike Drob (JIRA)
Mike Drob created SOLR-9261:
---

 Summary: facet.field.limit doesn't work in local params for 
distributed search
 Key: SOLR-9261
 URL: https://issues.apache.org/jira/browse/SOLR-9261
 Project: Solr
  Issue Type: Bug
  Security Level: Public (Default Security Level. Issues are Public)
Reporter: Mike Drob


This is kind of an esoteric usage and several things have to be true for the 
issue to surface. We discovered it using Hue generated queries.

If you 1) specify a global facet.limit, 2) specify a facet.field.limit in local 
params, 3) on a distributed search, then the field limit will be ignored.

Some examples:
{{facet.limit=2&facet.field=\{!key=other f.cat.facet.limit=15\}cat}} -- works 
for single shard, but not multiple shards.
{{facet.limit=2&facet.field=cat&f.cat.facet.limit=15}} -- works in all cases

This could be treated as a docs bug to explicitly say that while some facet 
settings in local params work, they should not be relied upon, or we could 
treat this as a parser bug and make distributed search work like non-distrib.





--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9076) Update to Hadoop 2.7.2

2016-06-28 Thread Gregory Chanan (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9076?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15353532#comment-15353532
 ] 

Gregory Chanan commented on SOLR-9076:
--

bq. Which is odd, because this drove moving to Netty 4 as well, so why does it 
want Netty 3 classes - does it have conflicting Netty reqs?

Good question.  Looks like the dependency is coming from the bkjournal contrib 
-- I wonder if there's some configuration we can use to disable that in the 
tests.

Here's mvn dependency:tree output:
{code}
[INFO] org.apache.hadoop.contrib:hadoop-hdfs-bkjournal:jar:2.7.2
...
[INFO] +- org.apache.bookkeeper:bookkeeper-server:jar:4.2.3:compile
[INFO] |  +- org.slf4j:slf4j-api:jar:1.7.10:compile
[INFO] |  +- org.slf4j:slf4j-log4j12:jar:1.7.10:compile
[INFO] |  +- org.jboss.netty:netty:jar:3.2.4.Final:compile
...
{code}

> Update to Hadoop 2.7.2
> --
>
> Key: SOLR-9076
> URL: https://issues.apache.org/jira/browse/SOLR-9076
> Project: Solr
>  Issue Type: Improvement
>Reporter: Mark Miller
>Assignee: Mark Miller
> Fix For: 6.2, master (7.0)
>
> Attachments: SOLR-9076-Hack.patch, SOLR-9076-fixnetty.patch, 
> SOLR-9076.patch, SOLR-9076.patch, SOLR-9076.patch, SOLR-9076.patch, 
> SOLR-9076.patch, SOLR-9076.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-8657) SolrRequestInfo logs an error if QuerySenderListener is being used

2016-06-28 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/SOLR-8657?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tomás Fernández Löbbe updated SOLR-8657:

Attachment: SOLR-8657.patch

Slightly updated patch that considers when to clean up the SolrRequestInfo in 
the QuerySenderListener. Will commit this soon.
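
A minimal sketch of the kind of guard involved, using only the public SolrRequestInfo 
API (the class and method names below are illustrative, not the committed patch): bind 
a SolrRequestInfo only when the thread has none, and clear only what was bound here, so 
the info installed by MDCAwareThreadPoolExecutor is left alone.

{code:java}
import org.apache.solr.request.SolrQueryRequest;
import org.apache.solr.request.SolrRequestInfo;
import org.apache.solr.response.SolrQueryResponse;

/** Sketch: bind SolrRequestInfo only if the thread has none, and clear only what we bound. */
public final class RequestInfoGuard {
  public static void runWithRequestInfo(SolrQueryRequest req, SolrQueryResponse rsp, Runnable body) {
    boolean bound = false;
    if (SolrRequestInfo.getRequestInfo() == null) {
      SolrRequestInfo.setRequestInfo(new SolrRequestInfo(req, rsp));
      bound = true;
    }
    try {
      body.run();  // e.g. execute the warming query against the new searcher
    } finally {
      if (bound) {
        SolrRequestInfo.clearRequestInfo();  // leave any info set by the executor untouched
      }
    }
  }
}
{code}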

> SolrRequestInfo logs an error if QuerySenderListener is being used
> --
>
> Key: SOLR-8657
> URL: https://issues.apache.org/jira/browse/SOLR-8657
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 5.4.1
>Reporter: Pascal Chollet
> Attachments: SOLR-8657.patch, SOLR-8657.patch, Screen Shot 2016-02-10 
> at 09.43.56.png
>
>
> This is the stack trace:
> {code}
> at 
> org.apache.solr.request.SolrRequestInfo.setRequestInfo(SolrRequestInfo.java:59)
> at 
> org.apache.solr.core.QuerySenderListener.newSearcher(QuerySenderListener.java:68)
> at org.apache.solr.core.SolrCore$6.call(SolrCore.java:1859)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at 
> org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor$1.run(ExecutorUtil.java:232)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745)
> {code}
> SolrRequestInfo is being set in MDCAwareThreadPoolExecutor.execute() and 
> later in QuerySenderListener.newSearcher() in the same thread.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-7883) MoreLikeThis is incompatible with facets

2016-06-28 Thread Erik Hatcher (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7883?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15353524#comment-15353524
 ] 

Erik Hatcher commented on SOLR-7883:


While there's a bug here in the MLT handler's use of faceting, I'm not sure why 
the MLT query *parser* doesn't suffice for the need here?   [~SeanXie] - what 
are your needs that aren't met by q={!mlt}?
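
(Illustratively, a request along the lines of 
{{q=\{!mlt qf=content\}item1&facet=true&facet.field=cat}} runs MLT through the query 
parser and leaves the normal faceting code path untouched; the field names here are 
assumptions, not taken from the report.)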

> MoreLikeThis is incompatible with facets
> 
>
> Key: SOLR-7883
> URL: https://issues.apache.org/jira/browse/SOLR-7883
> Project: Solr
>  Issue Type: Bug
>  Components: faceting, MoreLikeThis
>Affects Versions: 5.2.1
> Environment: Arch Linux / OpenJDK 7.u85_2.6.1-1
>Reporter: Thomas Seidl
>Assignee: Shalin Shekhar Mangar
>
> When using the {{MoreLikeThis}} request handler, it doesn't seem possible to 
> also have facets. This worked in Solr 4, but seems to be broken now.
> Example:
> This works: {{?qt=mlt&q=id:item1&mlt.fl=content}}
> This doesn't: {{?qt=mlt&q=id:item1&mlt.fl=content&facet=true}}
> (Yes, you don't even need to specify any facet fields/ranges/queries. The 
> {{q}} query just has to match an item.)
> While the latter will actually return the same result set as the former, the 
> HTTP status is 500 and the following error included in the response:
> {noformat}
> java.lang.NullPointerException
>   at 
> org.apache.solr.request.SimpleFacets.getHeatmapCounts(SimpleFacets.java:1753)
>   at 
> org.apache.solr.request.SimpleFacets.getFacetCounts(SimpleFacets.java:289)
>   at 
> org.apache.solr.handler.MoreLikeThisHandler.handleRequestBody(MoreLikeThisHandler.java:233)
>   at 
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:143)
>   at org.apache.solr.core.SolrCore.execute(SolrCore.java:2064)
>   at org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:654)
>   at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:450)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:227)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:196)
>   at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1652)
>   at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:585)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
>   at 
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:577)
>   at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:223)
>   at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1127)
>   at 
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:515)
>   at 
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
>   at 
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1061)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
>   at 
> org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:215)
>   at 
> org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:110)
>   at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:97)
>   at org.eclipse.jetty.server.Server.handle(Server.java:497)
>   at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:310)
>   at 
> org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:257)
>   at 
> org.eclipse.jetty.io.AbstractConnection$2.run(AbstractConnection.java:540)
>   at 
> org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:635)
>   at 
> org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:555)
>   at java.lang.Thread.run(Thread.java:745)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-7883) MoreLikeThis is incompatible with facets

2016-06-28 Thread Erik Hatcher (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7883?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15353524#comment-15353524
 ] 

Erik Hatcher edited comment on SOLR-7883 at 6/28/16 6:53 PM:
-

While there's a bug here in the MLT handler's use of faceting, I'm not sure why 
the MLT query *parser* doesn't suffice for the need here?   [~SeanXie] - what 
are your needs that aren't met by q=\{!mlt\}?


was (Author: ehatcher):
While there's a bug here in the MLT handler's use of faceting, I'm not sure why 
the MLT query *parser* doesn't suffice for the need here?   [~SeanXie] - what 
are your needs that aren't met by q={!mlt}?

> MoreLikeThis is incompatible with facets
> 
>
> Key: SOLR-7883
> URL: https://issues.apache.org/jira/browse/SOLR-7883
> Project: Solr
>  Issue Type: Bug
>  Components: faceting, MoreLikeThis
>Affects Versions: 5.2.1
> Environment: Arch Linux / OpenJDK 7.u85_2.6.1-1
>Reporter: Thomas Seidl
>Assignee: Shalin Shekhar Mangar
>
> When using the {{MoreLikeThis}} request handler, it doesn't seem possible to 
> also have facets. This worked in Solr 4, but seems to be broken now.
> Example:
> This works: {{?qt=mlt&q=id:item1&mlt.fl=content}}
> This doesn't: {{?qt=mlt&q=id:item1&mlt.fl=content&facet=true}}
> (Yes, you don't even need to specify any facet fields/ranges/queries. The 
> {{q}} query just has to match an item.)
> While the latter will actually return the same result set as the former, the 
> HTTP status is 500 and the following error included in the response:
> {noformat}
> java.lang.NullPointerException
>   at 
> org.apache.solr.request.SimpleFacets.getHeatmapCounts(SimpleFacets.java:1753)
>   at 
> org.apache.solr.request.SimpleFacets.getFacetCounts(SimpleFacets.java:289)
>   at 
> org.apache.solr.handler.MoreLikeThisHandler.handleRequestBody(MoreLikeThisHandler.java:233)
>   at 
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:143)
>   at org.apache.solr.core.SolrCore.execute(SolrCore.java:2064)
>   at org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:654)
>   at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:450)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:227)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:196)
>   at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1652)
>   at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:585)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
>   at 
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:577)
>   at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:223)
>   at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1127)
>   at 
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:515)
>   at 
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
>   at 
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1061)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
>   at 
> org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:215)
>   at 
> org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:110)
>   at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:97)
>   at org.eclipse.jetty.server.Server.handle(Server.java:497)
>   at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:310)
>   at 
> org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:257)
>   at 
> org.eclipse.jetty.io.AbstractConnection$2.run(AbstractConnection.java:540)
>   at 
> org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:635)
>   at 
> org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:555)
>   at java.lang.Thread.run(Thread.java:745)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-7191) Improve stability and startup performance of SolrCloud with thousands of collections

2016-06-28 Thread Scott Blum (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7191?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15353502#comment-15353502
 ] 

Scott Blum commented on SOLR-7191:
--

[~erickerickson] we may be talking about 2 different things?  I'm referring to 
the total number of Overseer state update operations that happen.  Is there a 
relationship between that and the watcher side that I'm unaware of?

Also, in my case, we only have 1 replica per shard, period, so leadership 
contention shouldn't be an issue at all.

> Improve stability and startup performance of SolrCloud with thousands of 
> collections
> 
>
> Key: SOLR-7191
> URL: https://issues.apache.org/jira/browse/SOLR-7191
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
>Affects Versions: 5.0
>Reporter: Shawn Heisey
>Assignee: Shalin Shekhar Mangar
>  Labels: performance, scalability
> Attachments: SOLR-7191.patch, SOLR-7191.patch, SOLR-7191.patch, 
> SOLR-7191.patch, SOLR-7191.patch, SOLR-7191.patch, SOLR-7191.patch, 
> lots-of-zkstatereader-updates-branch_5x.log
>
>
> A user on the mailing list with thousands of collections (5000 on 4.10.3, 
> 4000 on 5.0) is having severe problems with getting Solr to restart.
> I tried as hard as I could to duplicate the user setup, but I ran into many 
> problems myself even before I was able to get 4000 collections created on a 
> 5.0 example cloud setup.  Restarting Solr takes a very long time, and it is 
> not very stable once it's up and running.
> This kind of setup is very much pushing the envelope on SolrCloud performance 
> and scalability.  It doesn't help that I'm running both Solr nodes on one 
> machine (I started with 'bin/solr -e cloud') and that ZK is embedded.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-7883) MoreLikeThis is incompatible with facets

2016-06-28 Thread Sean Xie (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7883?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15353474#comment-15353474
 ] 

Sean Xie commented on SOLR-7883:


[~nuddlegg]'s case is exactly what I have experienced. For our needs we won't 
be able to use MLT as a search component nor as an MLT query parser. Having 
looked into the MLT source code, what we have done as a hack is to implement a 
second search using the interesting terms (same default max terms, and 
adjustable).
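
A minimal SolrJ sketch of that kind of hack, assuming a registered {{/mlt}} handler, a 
{{content}} MLT field and a {{cat}} facet field (all names illustrative), and assuming 
the handler returns {{interestingTerms}} in the "details" style as field:term names 
with boosts: first ask the MLT handler only for its interesting terms, then run a 
second, faceted query built from those terms.

{code:java}
import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.client.solrj.response.QueryResponse;
import org.apache.solr.common.util.NamedList;

public class MltThenFacet {
  public static void main(String[] args) throws Exception {
    try (HttpSolrClient client = new HttpSolrClient("http://localhost:8983/solr/collection1")) {
      // 1) Ask the MLT handler only for the interesting terms of the seed document.
      SolrQuery mlt = new SolrQuery("id:item1");
      mlt.setRequestHandler("/mlt");
      mlt.set("mlt.fl", "content");
      mlt.set("mlt.interestingTerms", "details");
      mlt.setRows(0);
      QueryResponse mltRsp = client.query(mlt);
      NamedList<?> terms = (NamedList<?>) mltRsp.getResponse().get("interestingTerms");

      // 2) Build a second query from those terms and facet on it.
      StringBuilder q = new StringBuilder();
      if (terms != null) {
        for (int i = 0; i < terms.size(); i++) {
          if (q.length() > 0) q.append(' ');
          q.append(terms.getName(i));  // "details" style: names are field:term, values are boosts
        }
      }
      if (q.length() == 0) {
        q.append("*:*");               // no interesting terms found: fall back to match-all
      }
      SolrQuery faceted = new SolrQuery(q.toString());
      faceted.setFacet(true);
      faceted.addFacetField("cat");
      QueryResponse rsp = client.query(faceted);
      System.out.println(rsp.getFacetField("cat").getValues());
    }
  }
}
{code}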

> MoreLikeThis is incompatible with facets
> 
>
> Key: SOLR-7883
> URL: https://issues.apache.org/jira/browse/SOLR-7883
> Project: Solr
>  Issue Type: Bug
>  Components: faceting, MoreLikeThis
>Affects Versions: 5.2.1
> Environment: Arch Linux / OpenJDK 7.u85_2.6.1-1
>Reporter: Thomas Seidl
>Assignee: Shalin Shekhar Mangar
>
> When using the {{MoreLikeThis}} request handler, it doesn't seem possible to 
> also have facets. This worked in Solr 4, but seems to be broken now.
> Example:
> This works: {{?qt=mlt&q=id:item1&mlt.fl=content}}
> This doesn't: {{?qt=mlt&q=id:item1&mlt.fl=content&facet=true}}
> (Yes, you don't even need to specify any facet fields/ranges/queries. The 
> {{q}} query just has to match an item.)
> While the latter will actually return the same result set as the former, the 
> HTTP status is 500 and the following error included in the response:
> {noformat}
> java.lang.NullPointerException
>   at 
> org.apache.solr.request.SimpleFacets.getHeatmapCounts(SimpleFacets.java:1753)
>   at 
> org.apache.solr.request.SimpleFacets.getFacetCounts(SimpleFacets.java:289)
>   at 
> org.apache.solr.handler.MoreLikeThisHandler.handleRequestBody(MoreLikeThisHandler.java:233)
>   at 
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:143)
>   at org.apache.solr.core.SolrCore.execute(SolrCore.java:2064)
>   at org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:654)
>   at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:450)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:227)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:196)
>   at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1652)
>   at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:585)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
>   at 
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:577)
>   at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:223)
>   at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1127)
>   at 
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:515)
>   at 
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
>   at 
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1061)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
>   at 
> org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:215)
>   at 
> org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:110)
>   at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:97)
>   at org.eclipse.jetty.server.Server.handle(Server.java:497)
>   at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:310)
>   at 
> org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:257)
>   at 
> org.eclipse.jetty.io.AbstractConnection$2.run(AbstractConnection.java:540)
>   at 
> org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:635)
>   at 
> org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:555)
>   at java.lang.Thread.run(Thread.java:745)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-master-Solaris (64bit/jdk1.8.0) - Build # 678 - Still Failing!

2016-06-28 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-master-Solaris/678/
Java: 64bit/jdk1.8.0 -XX:+UseCompressedOops -XX:+UseParallelGC

1 tests failed.
FAILED:  org.apache.solr.schema.TestManagedSchemaAPI.test

Error Message:
Error from server at http://127.0.0.1:41752/solr/testschemaapi_shard1_replica2: 
ERROR: [doc=2] unknown field 'myNewField1'

Stack Trace:
org.apache.solr.client.solrj.impl.CloudSolrClient$RouteException: Error from 
server at http://127.0.0.1:41752/solr/testschemaapi_shard1_replica2: ERROR: 
[doc=2] unknown field 'myNewField1'
at 
__randomizedtesting.SeedInfo.seed([67388879BB403A54:EF6CB7A315BC57AC]:0)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.directUpdate(CloudSolrClient.java:697)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1109)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:998)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:934)
at 
org.apache.solr.schema.TestManagedSchemaAPI.testAddFieldAndDocument(TestManagedSchemaAPI.java:86)
at 
org.apache.solr.schema.TestManagedSchemaAPI.test(TestManagedSchemaAPI.java:55)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 

[jira] [Commented] (SOLR-9248) HttpSolrClient not compatible with compression option

2016-06-28 Thread Gary Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9248?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15353445#comment-15353445
 ] 

Gary Lee commented on SOLR-9248:


Hi Mike, yes this did work in 5.4. I tested it before on 5.4 and the logic was 
to close the response input stream directly (respBody.close()) instead of 
calling EntityUtils.consumeFully, so the GZIPInputStream was getting closed 
properly and we weren't losing connections.

The stack trace is based on 5.5, so doesn't directly correspond with 5.5.2 - 
sorry if that led to any confusion. But your explanation is correct and that is 
exactly the problem I see. The exception is now ignored (which is why it's not 
straightforward to get a stack trace in the logs anymore), but the end result 
is that the respBody input stream is never closed. I believe respBody is the 
GZIPInputStream that needs to be closed, because I'm seeing that the connection 
continues to stay leased and eventually the httpClient doesn't accept new 
connections anymore. 

Your comment on "The GZIPInputStream from the GzipDecompressingEntity was never 
fully constructed" is true when calling EntityUtils.consumeFully, but the 
GZIPInputStream is first constructed at the time we need to read the response, 
and that completes without a problem. It's the next time that we try to do the 
same thing where the error occurs, and the initial GZIPInputStream (respBody) 
never gets closed. Since the GZipDecompressingEntity is providing a new stream 
every time, it essentially ignores the one that was previously constructed, and 
thus never achieves the purpose of closing out an input stream.
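
The lifecycle can be reproduced with plain JDK streams, independent of Solr or 
HttpClient: once the first GZIPInputStream has drained the underlying stream, wrapping 
the same exhausted stream again fails in the constructor with EOFException, and nothing 
ever closes the first wrapper. A self-contained sketch:

{code:java}
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.EOFException;
import java.io.InputStream;
import java.nio.charset.StandardCharsets;
import java.util.zip.GZIPInputStream;
import java.util.zip.GZIPOutputStream;

public class GzipReuseDemo {
  public static void main(String[] args) throws Exception {
    // Simulate a gzip-compressed HTTP response body.
    ByteArrayOutputStream bos = new ByteArrayOutputStream();
    try (GZIPOutputStream gz = new GZIPOutputStream(bos)) {
      gz.write("response body".getBytes(StandardCharsets.UTF_8));
    }
    InputStream rawResponse = new ByteArrayInputStream(bos.toByteArray());

    // First wrapper: this is the stream (respBody) that actually needs closing.
    GZIPInputStream first = new GZIPInputStream(rawResponse);
    while (first.read() != -1) { /* drain the response */ }

    // Second wrapper over the same, now-exhausted underlying stream:
    // the GZIPInputStream constructor reads the gzip header and hits EOF.
    try {
      new GZIPInputStream(rawResponse);
    } catch (EOFException expected) {
      System.out.println("second wrap failed: " + expected);
    }
    // Nothing above closed 'first'; in the HttpSolrClient case that is what
    // leaves the connection leased in the pool.
  }
}
{code}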

> HttpSolrClient not compatible with compression option
> -
>
> Key: SOLR-9248
> URL: https://issues.apache.org/jira/browse/SOLR-9248
> Project: Solr
>  Issue Type: Bug
>  Components: SolrJ
>Affects Versions: 5.5, 5.5.1
>Reporter: Gary Lee
> Attachments: CompressedConnectionTest.java
>
>
> Since Solr 5.5, using the compression option 
> (solrClient.setAllowCompression(true)) causes the HTTP client to quickly run 
> out of connections in the connection pool. After debugging through this, we 
> found that the GZIPInputStream is incompatible with changes to how the 
> response input stream is closed in 5.5. It is at this point when the 
> GZIPInputStream throws an EOFException, and while this is silently eaten up, 
> the net effect is that the stream is never closed, leaving the connection 
> open. After a number of requests, the pool is exhausted and no further 
> requests can be served.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-7191) Improve stability and startup performance of SolrCloud with thousands of collections

2016-06-28 Thread Erick Erickson (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7191?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15353437#comment-15353437
 ] 

Erick Erickson commented on SOLR-7191:
--

FWIW, I tried both and had no trouble even when overseer was on one of the 
replicas with lots of cores. That said, I agree it's wise to put the overseer 
somewhere else in these cases. Certainly can't hurt!

> Improve stability and startup performance of SolrCloud with thousands of 
> collections
> 
>
> Key: SOLR-7191
> URL: https://issues.apache.org/jira/browse/SOLR-7191
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
>Affects Versions: 5.0
>Reporter: Shawn Heisey
>Assignee: Shalin Shekhar Mangar
>  Labels: performance, scalability
> Attachments: SOLR-7191.patch, SOLR-7191.patch, SOLR-7191.patch, 
> SOLR-7191.patch, SOLR-7191.patch, SOLR-7191.patch, SOLR-7191.patch, 
> lots-of-zkstatereader-updates-branch_5x.log
>
>
> A user on the mailing list with thousands of collections (5000 on 4.10.3, 
> 4000 on 5.0) is having severe problems with getting Solr to restart.
> I tried as hard as I could to duplicate the user setup, but I ran into many 
> problems myself even before I was able to get 4000 collections created on a 
> 5.0 example cloud setup.  Restarting Solr takes a very long time, and it is 
> not very stable once it's up and running.
> This kind of setup is very much pushing the envelope on SolrCloud performance 
> and scalability.  It doesn't help that I'm running both Solr nodes on one 
> machine (I started with 'bin/solr -e cloud') and that ZK is embedded.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-9248) HttpSolrClient not compatible with compression option

2016-06-28 Thread Mike Drob (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9248?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mike Drob updated SOLR-9248:

Attachment: CompressedConnectionTest.java

Ok, thanks for taking an initial look. Can you confirm that this is something 
that worked in earlier versions (5.4)?

I'm trying to build a unit test to reproduce this, but having a little bit of 
trouble. I think I see where things are going wrong, but haven't been able to 
reproduce this in a test.

When the call to {{EntityUtils.consumeFully}} happens, the code tries to get 
the stream and read the rest of the bytes from it to clear out the buffers and 
make servlet containers happy. However, {{GzipDecompressingEntity}} attempts to 
provide a new stream each time, wrapping the same underlying content (which has 
already been exhausted at this point).

Absent a way to reproduce this, it's hard to tell if this has been fixed. Your 
stack trace shows {{HttpSolrClient::executeMethod}} calling 
{{org.apache.http.util.EntityUtils::consume}} directly, whereas in 5.5.2 I see 
{{executeMethod}} -> {{org.apache.solr.common.util.Utils::consumeFully}} -> 
{{GzipDecompressingEntity::getContent}} (this will throw the exception) -> 
{{org.apache.http.util.EntityUtils::consumeQuietly}}, which will make another 
attempt at {{getContent}} and ignore the exception again.

Which stream are you indicating should be closed? The {{GZIPInputStream}} from 
the {{GzipDecompressingEntity}} was never fully constructed, and doesn't need 
to be closed. The connection itself should be managed by the servlet container, 
so we shouldn't be closing it either.

I'm attaching my in-progress unit test for this.

> HttpSolrClient not compatible with compression option
> -
>
> Key: SOLR-9248
> URL: https://issues.apache.org/jira/browse/SOLR-9248
> Project: Solr
>  Issue Type: Bug
>  Components: SolrJ
>Affects Versions: 5.5, 5.5.1
>Reporter: Gary Lee
> Attachments: CompressedConnectionTest.java
>
>
> Since Solr 5.5, using the compression option 
> (solrClient.setAllowCompression(true)) causes the HTTP client to quickly run 
> out of connections in the connection pool. After debugging through this, we 
> found that the GZIPInputStream is incompatible with changes to how the 
> response input stream is closed in 5.5. It is at this point when the 
> GZIPInputStream throws an EOFException, and while this is silently eaten up, 
> the net effect is that the stream is never closed, leaving the connection 
> open. After a number of requests, the pool is exhausted and no further 
> requests can be served.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9076) Update to Hadoop 2.7.2

2016-06-28 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9076?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15353414#comment-15353414
 ] 

Mark Miller commented on SOLR-9076:
---

Here is a nasty hack patch that gets the test passing (including hacks for 
SOLR-9073).

It hacks in a couple of missing dependencies, moves around some config files, and 
allows creating a core when a core.properties file already exists.

Need to make it all work without those hacks.

> Update to Hadoop 2.7.2
> --
>
> Key: SOLR-9076
> URL: https://issues.apache.org/jira/browse/SOLR-9076
> Project: Solr
>  Issue Type: Improvement
>Reporter: Mark Miller
>Assignee: Mark Miller
> Fix For: 6.2, master (7.0)
>
> Attachments: SOLR-9076-Hack.patch, SOLR-9076-fixnetty.patch, 
> SOLR-9076.patch, SOLR-9076.patch, SOLR-9076.patch, SOLR-9076.patch, 
> SOLR-9076.patch, SOLR-9076.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-9076) Update to Hadoop 2.7.2

2016-06-28 Thread Mark Miller (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9076?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Miller updated SOLR-9076:
--
Attachment: SOLR-9076-Hack.patch

> Update to Hadoop 2.7.2
> --
>
> Key: SOLR-9076
> URL: https://issues.apache.org/jira/browse/SOLR-9076
> Project: Solr
>  Issue Type: Improvement
>Reporter: Mark Miller
>Assignee: Mark Miller
> Fix For: 6.2, master (7.0)
>
> Attachments: SOLR-9076-Hack.patch, SOLR-9076-fixnetty.patch, 
> SOLR-9076.patch, SOLR-9076.patch, SOLR-9076.patch, SOLR-9076.patch, 
> SOLR-9076.patch, SOLR-9076.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-9254) GraphTermsQueryQParserPlugin throws NPE when field being searched is not present in segment

2016-06-28 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9254?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-9254:
-
Summary: GraphTermsQueryQParserPlugin throws NPE when field being searched 
is not present in segment  (was: GraphTermsQueryQParserPlugin throws NPE when 
field being search is not present in segment)

> GraphTermsQueryQParserPlugin throws NPE when field being searched is not 
> present in segment
> ---
>
> Key: SOLR-9254
> URL: https://issues.apache.org/jira/browse/SOLR-9254
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Steve Rowe
>Assignee: Joel Bernstein
> Fix For: 6.2, master (7.0)
>
>
> My Jenkins found a reproducing seed on branch_6x:
> {noformat}
> Checking out Revision d1a047ad6f24078f23c9b4adf15210ac8a6e8f8a 
> (refs/remotes/origin/branch_6x)
> [...]
>[junit4]   2> NOTE: reproduce with: ant test  
> -Dtestcase=TestGraphTermsQParserPlugin -Dtests.method=testQueries 
> -Dtests.seed=E47472DC605D2D21 -Dtests.slow=true 
> -Dtests.linedocsfile=/home/jenkins/lucene-data/enwiki.random.lines.txt 
> -Dtests.locale=sr-Latn-ME -Dtests.timezone=America/Guadeloupe 
> -Dtests.asserts=true -Dtests.file.encoding=UTF-8
>[junit4] ERROR   0.06s J11 | TestGraphTermsQParserPlugin.testQueries <<<
>[junit4]> Throwable #1: java.lang.RuntimeException: Exception during 
> query
>[junit4]>  at 
> __randomizedtesting.SeedInfo.seed([E47472DC605D2D21:B8FABE077A34988F]:0)
>[junit4]>  at 
> org.apache.solr.SolrTestCaseJ4.assertQ(SolrTestCaseJ4.java:781)
>[junit4]>  at 
> org.apache.solr.SolrTestCaseJ4.assertQ(SolrTestCaseJ4.java:748)
>[junit4]>  at 
> org.apache.solr.search.TestGraphTermsQParserPlugin.testQueries(TestGraphTermsQParserPlugin.java:76)
>[junit4]>  at java.lang.Thread.run(Thread.java:745)
>[junit4]> Caused by: java.lang.NullPointerException
>[junit4]>  at 
> org.apache.solr.search.GraphTermsQParserPlugin$GraphTermsQuery$1.rewrite(GraphTermsQParserPlugin.java:223)
>[junit4]>  at 
> org.apache.solr.search.GraphTermsQParserPlugin$GraphTermsQuery$1.bulkScorer(GraphTermsQParserPlugin.java:252)
>[junit4]>  at 
> org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:666)
>[junit4]>  at 
> org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:473)
>[junit4]>  at 
> org.apache.solr.search.SolrIndexSearcher.buildAndRunCollectorChain(SolrIndexSearcher.java:261)
>[junit4]>  at 
> org.apache.solr.search.SolrIndexSearcher.getDocListNC(SolrIndexSearcher.java:1818)
>[junit4]>  at 
> org.apache.solr.search.SolrIndexSearcher.getDocListC(SolrIndexSearcher.java:1635)
>[junit4]>  at 
> org.apache.solr.search.SolrIndexSearcher.search(SolrIndexSearcher.java:644)
>[junit4]>  at 
> org.apache.solr.handler.component.QueryComponent.process(QueryComponent.java:528)
>[junit4]>  at 
> org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:293)
>[junit4]>  at 
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:154)
>[junit4]>  at 
> org.apache.solr.core.SolrCore.execute(SolrCore.java:2035)
>[junit4]>  at 
> org.apache.solr.util.TestHarness.query(TestHarness.java:310)
>[junit4]>  at 
> org.apache.solr.util.TestHarness.query(TestHarness.java:292)
>[junit4]>  at 
> org.apache.solr.SolrTestCaseJ4.assertQ(SolrTestCaseJ4.java:755)
>[junit4]>  ... 41 more
>[junit4]   2> NOTE: test params are: codec=Asserting(Lucene62): 
> {test_tl=PostingsFormat(name=Direct), _version_=BlockTreeOrds(blocksize=128), 
> test_ti=BlockTreeOrds(blocksize=128), term_s=PostingsFormat(name=Asserting), 
> test_tf=PostingsFormat(name=LuceneVarGapDocFreqInterval), 
> id=PostingsFormat(name=LuceneVarGapDocFreqInterval), 
> group_s=PostingsFormat(name=LuceneVarGapDocFreqInterval)}, docValues:{}, 
> maxPointsInLeafNode=1102, maxMBSortInHeap=5.004024995692577, 
> sim=ClassicSimilarity, locale=sr-Latn-ME, timezone=America/Guadeloupe
>[junit4]   2> NOTE: Linux 4.1.0-custom2-amd64 amd64/Oracle Corporation 
> 1.8.0_77 (64-bit)/cpus=16,threads=1,free=283609904,total=531628032
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-9254) GraphTermsQueryQParserPlugin throws NPE when field being search is not present in segment

2016-06-28 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9254?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein resolved SOLR-9254.
--
   Resolution: Fixed
Fix Version/s: master (7.0)
   6.2

> GraphTermsQueryQParserPlugin throws NPE when field being search is not 
> present in segment
> -
>
> Key: SOLR-9254
> URL: https://issues.apache.org/jira/browse/SOLR-9254
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Steve Rowe
>Assignee: Joel Bernstein
> Fix For: 6.2, master (7.0)
>
>
> My Jenkins found a reproducing seed on branch_6x:
> {noformat}
> Checking out Revision d1a047ad6f24078f23c9b4adf15210ac8a6e8f8a 
> (refs/remotes/origin/branch_6x)
> [...]
>[junit4]   2> NOTE: reproduce with: ant test  
> -Dtestcase=TestGraphTermsQParserPlugin -Dtests.method=testQueries 
> -Dtests.seed=E47472DC605D2D21 -Dtests.slow=true 
> -Dtests.linedocsfile=/home/jenkins/lucene-data/enwiki.random.lines.txt 
> -Dtests.locale=sr-Latn-ME -Dtests.timezone=America/Guadeloupe 
> -Dtests.asserts=true -Dtests.file.encoding=UTF-8
>[junit4] ERROR   0.06s J11 | TestGraphTermsQParserPlugin.testQueries <<<
>[junit4]> Throwable #1: java.lang.RuntimeException: Exception during 
> query
>[junit4]>  at 
> __randomizedtesting.SeedInfo.seed([E47472DC605D2D21:B8FABE077A34988F]:0)
>[junit4]>  at 
> org.apache.solr.SolrTestCaseJ4.assertQ(SolrTestCaseJ4.java:781)
>[junit4]>  at 
> org.apache.solr.SolrTestCaseJ4.assertQ(SolrTestCaseJ4.java:748)
>[junit4]>  at 
> org.apache.solr.search.TestGraphTermsQParserPlugin.testQueries(TestGraphTermsQParserPlugin.java:76)
>[junit4]>  at java.lang.Thread.run(Thread.java:745)
>[junit4]> Caused by: java.lang.NullPointerException
>[junit4]>  at 
> org.apache.solr.search.GraphTermsQParserPlugin$GraphTermsQuery$1.rewrite(GraphTermsQParserPlugin.java:223)
>[junit4]>  at 
> org.apache.solr.search.GraphTermsQParserPlugin$GraphTermsQuery$1.bulkScorer(GraphTermsQParserPlugin.java:252)
>[junit4]>  at 
> org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:666)
>[junit4]>  at 
> org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:473)
>[junit4]>  at 
> org.apache.solr.search.SolrIndexSearcher.buildAndRunCollectorChain(SolrIndexSearcher.java:261)
>[junit4]>  at 
> org.apache.solr.search.SolrIndexSearcher.getDocListNC(SolrIndexSearcher.java:1818)
>[junit4]>  at 
> org.apache.solr.search.SolrIndexSearcher.getDocListC(SolrIndexSearcher.java:1635)
>[junit4]>  at 
> org.apache.solr.search.SolrIndexSearcher.search(SolrIndexSearcher.java:644)
>[junit4]>  at 
> org.apache.solr.handler.component.QueryComponent.process(QueryComponent.java:528)
>[junit4]>  at 
> org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:293)
>[junit4]>  at 
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:154)
>[junit4]>  at 
> org.apache.solr.core.SolrCore.execute(SolrCore.java:2035)
>[junit4]>  at 
> org.apache.solr.util.TestHarness.query(TestHarness.java:310)
>[junit4]>  at 
> org.apache.solr.util.TestHarness.query(TestHarness.java:292)
>[junit4]>  at 
> org.apache.solr.SolrTestCaseJ4.assertQ(SolrTestCaseJ4.java:755)
>[junit4]>  ... 41 more
>[junit4]   2> NOTE: test params are: codec=Asserting(Lucene62): 
> {test_tl=PostingsFormat(name=Direct), _version_=BlockTreeOrds(blocksize=128), 
> test_ti=BlockTreeOrds(blocksize=128), term_s=PostingsFormat(name=Asserting), 
> test_tf=PostingsFormat(name=LuceneVarGapDocFreqInterval), 
> id=PostingsFormat(name=LuceneVarGapDocFreqInterval), 
> group_s=PostingsFormat(name=LuceneVarGapDocFreqInterval)}, docValues:{}, 
> maxPointsInLeafNode=1102, maxMBSortInHeap=5.004024995692577, 
> sim=ClassicSimilarity, locale=sr-Latn-ME, timezone=America/Guadeloupe
>[junit4]   2> NOTE: Linux 4.1.0-custom2-amd64 amd64/Oracle Corporation 
> 1.8.0_77 (64-bit)/cpus=16,threads=1,free=283609904,total=531628032
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7354) MoreLikeThis incorrectly does toString on Field object

2016-06-28 Thread Steve Rowe (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7354?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15353400#comment-15353400
 ] 

Steve Rowe commented on LUCENE-7354:


{quote}
In MoreLikeThis.java, circa line 763, when calling addTermFrequencies on a 
Field object, we are incorrectly calling toString on the Field object, which 
puts the Field attributes (indexed, stored, et. al) into the String that is 
returned.
{quote}

I don't see this - when I run {{CloudMLTQParserTest}} without your patch, and I 
look at {{MoreLikeThis.retrieveTerms()}} where {{String.valueOf(fieldValue)}} 
is called (by pulling the value of that expression out into a variable and 
breaking there in the debugger), I only see the actual field values - no 
indexed, stored, et al.

Indexed, stored, et al. are Field*Type* attributes, not Field attributes, 
right?  

In {{CloudMLTQParser.parse()}} where the filtered doc is composed, in your 
patch you have a nocommit (the only one I see in your patch) - 
Field.stringValue() returns {{value.toString()}}, but only if it's a String or 
a Number, and otherwise null, so it's definitely possible to not have a string 
value for binary fields or geo fields - I guess the question is whether people 
want to use non-text/non-scalar fields for MLT?:

{code:java}
for (String field : fieldNames) {
  Collection<Object> fieldValues = doc.getFieldValues(field);
  if (fieldValues != null) {
    Collection<String> strings = new ArrayList<>(fieldValues.size());
    for (Object value : fieldValues) {
      if (value instanceof Field) {
        String sv = ((Field) value).stringValue();
        if (sv != null) {
          strings.add(sv);
        } // TODO: nocommit: what to do when we don't have a stringValue()? I don't think it is possible in this case, but need to check on this
      } else {
        strings.add(value.toString());
      }
    }
    filteredDocument.put(field, strings);
  }
}
{code}
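
For reference, the difference between the two calls can be seen with a plain TextField 
(a small sketch; the exact toString() layout is Lucene's and may vary by version):

{code:java}
import org.apache.lucene.document.Field.Store;
import org.apache.lucene.document.TextField;

public class FieldToStringDemo {
  public static void main(String[] args) {
    TextField f = new TextField("content", "hello world", Store.YES);
    // Prints something like: stored,indexed,tokenized<content:hello world>
    System.out.println(f.toString());
    // Prints just the value: hello world
    System.out.println(f.stringValue());
  }
}
{code}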

> MoreLikeThis incorrectly does toString on Field object
> --
>
> Key: LUCENE-7354
> URL: https://issues.apache.org/jira/browse/LUCENE-7354
> Project: Lucene - Core
>  Issue Type: Bug
>Affects Versions: 6.0.1, 5.5.1, master (7.0)
>Reporter: Grant Ingersoll
>Assignee: Grant Ingersoll
>Priority: Minor
> Attachments: LUCENE-7354-mlt-fix
>
>
> In MoreLikeThis.java, circa line 763, when calling addTermFrequencies on a 
> Field object, we are incorrectly calling toString on the Field object, which 
> puts the Field attributes (indexed, stored, et. al) into the String that is 
> returned.
> I'll put up a patch/fix shortly.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9254) GraphTermsQueryQParserPlugin throws NPE when field being search is not present in segment

2016-06-28 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9254?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15353398#comment-15353398
 ] 

ASF subversion and git services commented on SOLR-9254:
---

Commit 59c5e6014bc8f2e3f89a269938145dc7da5e9019 in lucene-solr's branch 
refs/heads/branch_6x from jbernste
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=59c5e60 ]

SOLR-9254: GraphTermsQueryQParserPlugin throws NPE when field being search is 
not present in segment
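
The shape of such a fix is the usual per-segment guard: a segment may have no postings 
at all for the queried field, so the Terms object must be null-checked before iterating. 
A minimal sketch, not the committed diff (names are illustrative):

{code:java}
import java.io.IOException;
import org.apache.lucene.index.LeafReaderContext;
import org.apache.lucene.index.Terms;
import org.apache.lucene.index.TermsEnum;

/** Per-segment guard: a segment with no postings for the field yields a null Terms. */
final class SegmentTermsGuard {
  static TermsEnum termsEnumOrNull(LeafReaderContext context, String field) throws IOException {
    Terms terms = context.reader().terms(field);
    if (terms == null) {
      return null;  // field absent from this segment; the caller should skip it
    }
    return terms.iterator();
  }
}
{code}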


> GraphTermsQueryQParserPlugin throws NPE when field being search is not 
> present in segment
> -
>
> Key: SOLR-9254
> URL: https://issues.apache.org/jira/browse/SOLR-9254
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Steve Rowe
>Assignee: Joel Bernstein
>
> My Jenkins found a reproducing seed on branch_6x:
> {noformat}
> Checking out Revision d1a047ad6f24078f23c9b4adf15210ac8a6e8f8a 
> (refs/remotes/origin/branch_6x)
> [...]
>[junit4]   2> NOTE: reproduce with: ant test  
> -Dtestcase=TestGraphTermsQParserPlugin -Dtests.method=testQueries 
> -Dtests.seed=E47472DC605D2D21 -Dtests.slow=true 
> -Dtests.linedocsfile=/home/jenkins/lucene-data/enwiki.random.lines.txt 
> -Dtests.locale=sr-Latn-ME -Dtests.timezone=America/Guadeloupe 
> -Dtests.asserts=true -Dtests.file.encoding=UTF-8
>[junit4] ERROR   0.06s J11 | TestGraphTermsQParserPlugin.testQueries <<<
>[junit4]> Throwable #1: java.lang.RuntimeException: Exception during 
> query
>[junit4]>  at 
> __randomizedtesting.SeedInfo.seed([E47472DC605D2D21:B8FABE077A34988F]:0)
>[junit4]>  at 
> org.apache.solr.SolrTestCaseJ4.assertQ(SolrTestCaseJ4.java:781)
>[junit4]>  at 
> org.apache.solr.SolrTestCaseJ4.assertQ(SolrTestCaseJ4.java:748)
>[junit4]>  at 
> org.apache.solr.search.TestGraphTermsQParserPlugin.testQueries(TestGraphTermsQParserPlugin.java:76)
>[junit4]>  at java.lang.Thread.run(Thread.java:745)
>[junit4]> Caused by: java.lang.NullPointerException
>[junit4]>  at 
> org.apache.solr.search.GraphTermsQParserPlugin$GraphTermsQuery$1.rewrite(GraphTermsQParserPlugin.java:223)
>[junit4]>  at 
> org.apache.solr.search.GraphTermsQParserPlugin$GraphTermsQuery$1.bulkScorer(GraphTermsQParserPlugin.java:252)
>[junit4]>  at 
> org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:666)
>[junit4]>  at 
> org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:473)
>[junit4]>  at 
> org.apache.solr.search.SolrIndexSearcher.buildAndRunCollectorChain(SolrIndexSearcher.java:261)
>[junit4]>  at 
> org.apache.solr.search.SolrIndexSearcher.getDocListNC(SolrIndexSearcher.java:1818)
>[junit4]>  at 
> org.apache.solr.search.SolrIndexSearcher.getDocListC(SolrIndexSearcher.java:1635)
>[junit4]>  at 
> org.apache.solr.search.SolrIndexSearcher.search(SolrIndexSearcher.java:644)
>[junit4]>  at 
> org.apache.solr.handler.component.QueryComponent.process(QueryComponent.java:528)
>[junit4]>  at 
> org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:293)
>[junit4]>  at 
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:154)
>[junit4]>  at 
> org.apache.solr.core.SolrCore.execute(SolrCore.java:2035)
>[junit4]>  at 
> org.apache.solr.util.TestHarness.query(TestHarness.java:310)
>[junit4]>  at 
> org.apache.solr.util.TestHarness.query(TestHarness.java:292)
>[junit4]>  at 
> org.apache.solr.SolrTestCaseJ4.assertQ(SolrTestCaseJ4.java:755)
>[junit4]>  ... 41 more
>[junit4]   2> NOTE: test params are: codec=Asserting(Lucene62): 
> {test_tl=PostingsFormat(name=Direct), _version_=BlockTreeOrds(blocksize=128), 
> test_ti=BlockTreeOrds(blocksize=128), term_s=PostingsFormat(name=Asserting), 
> test_tf=PostingsFormat(name=LuceneVarGapDocFreqInterval), 
> id=PostingsFormat(name=LuceneVarGapDocFreqInterval), 
> group_s=PostingsFormat(name=LuceneVarGapDocFreqInterval)}, docValues:{}, 
> maxPointsInLeafNode=1102, maxMBSortInHeap=5.004024995692577, 
> sim=ClassicSimilarity, locale=sr-Latn-ME, timezone=America/Guadeloupe
>[junit4]   2> NOTE: Linux 4.1.0-custom2-amd64 amd64/Oracle Corporation 
> 1.8.0_77 (64-bit)/cpus=16,threads=1,free=283609904,total=531628032
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9254) GraphTermsQueryQParserPlugin throws NPE when field being search is not present in segment

2016-06-28 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9254?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15353384#comment-15353384
 ] 

ASF subversion and git services commented on SOLR-9254:
---

Commit 407080af5bc68c9eb11c05c587368a783ff78d0c in lucene-solr's branch 
refs/heads/master from jbernste
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=407080a ]

SOLR-9254: GraphTermsQueryQParserPlugin throws NPE when field being search is 
not present in segment


> GraphTermsQueryQParserPlugin throws NPE when field being search is not 
> present in segment
> -
>
> Key: SOLR-9254
> URL: https://issues.apache.org/jira/browse/SOLR-9254
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Steve Rowe
>Assignee: Joel Bernstein
>
> My Jenkins found a reproducing seed on branch_6x:
> {noformat}
> Checking out Revision d1a047ad6f24078f23c9b4adf15210ac8a6e8f8a 
> (refs/remotes/origin/branch_6x)
> [...]
>[junit4]   2> NOTE: reproduce with: ant test  
> -Dtestcase=TestGraphTermsQParserPlugin -Dtests.method=testQueries 
> -Dtests.seed=E47472DC605D2D21 -Dtests.slow=true 
> -Dtests.linedocsfile=/home/jenkins/lucene-data/enwiki.random.lines.txt 
> -Dtests.locale=sr-Latn-ME -Dtests.timezone=America/Guadeloupe 
> -Dtests.asserts=true -Dtests.file.encoding=UTF-8
>[junit4] ERROR   0.06s J11 | TestGraphTermsQParserPlugin.testQueries <<<
>[junit4]> Throwable #1: java.lang.RuntimeException: Exception during 
> query
>[junit4]>  at 
> __randomizedtesting.SeedInfo.seed([E47472DC605D2D21:B8FABE077A34988F]:0)
>[junit4]>  at 
> org.apache.solr.SolrTestCaseJ4.assertQ(SolrTestCaseJ4.java:781)
>[junit4]>  at 
> org.apache.solr.SolrTestCaseJ4.assertQ(SolrTestCaseJ4.java:748)
>[junit4]>  at 
> org.apache.solr.search.TestGraphTermsQParserPlugin.testQueries(TestGraphTermsQParserPlugin.java:76)
>[junit4]>  at java.lang.Thread.run(Thread.java:745)
>[junit4]> Caused by: java.lang.NullPointerException
>[junit4]>  at 
> org.apache.solr.search.GraphTermsQParserPlugin$GraphTermsQuery$1.rewrite(GraphTermsQParserPlugin.java:223)
>[junit4]>  at 
> org.apache.solr.search.GraphTermsQParserPlugin$GraphTermsQuery$1.bulkScorer(GraphTermsQParserPlugin.java:252)
>[junit4]>  at 
> org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:666)
>[junit4]>  at 
> org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:473)
>[junit4]>  at 
> org.apache.solr.search.SolrIndexSearcher.buildAndRunCollectorChain(SolrIndexSearcher.java:261)
>[junit4]>  at 
> org.apache.solr.search.SolrIndexSearcher.getDocListNC(SolrIndexSearcher.java:1818)
>[junit4]>  at 
> org.apache.solr.search.SolrIndexSearcher.getDocListC(SolrIndexSearcher.java:1635)
>[junit4]>  at 
> org.apache.solr.search.SolrIndexSearcher.search(SolrIndexSearcher.java:644)
>[junit4]>  at 
> org.apache.solr.handler.component.QueryComponent.process(QueryComponent.java:528)
>[junit4]>  at 
> org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:293)
>[junit4]>  at 
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:154)
>[junit4]>  at 
> org.apache.solr.core.SolrCore.execute(SolrCore.java:2035)
>[junit4]>  at 
> org.apache.solr.util.TestHarness.query(TestHarness.java:310)
>[junit4]>  at 
> org.apache.solr.util.TestHarness.query(TestHarness.java:292)
>[junit4]>  at 
> org.apache.solr.SolrTestCaseJ4.assertQ(SolrTestCaseJ4.java:755)
>[junit4]>  ... 41 more
>[junit4]   2> NOTE: test params are: codec=Asserting(Lucene62): 
> {test_tl=PostingsFormat(name=Direct), _version_=BlockTreeOrds(blocksize=128), 
> test_ti=BlockTreeOrds(blocksize=128), term_s=PostingsFormat(name=Asserting), 
> test_tf=PostingsFormat(name=LuceneVarGapDocFreqInterval), 
> id=PostingsFormat(name=LuceneVarGapDocFreqInterval), 
> group_s=PostingsFormat(name=LuceneVarGapDocFreqInterval)}, docValues:{}, 
> maxPointsInLeafNode=1102, maxMBSortInHeap=5.004024995692577, 
> sim=ClassicSimilarity, locale=sr-Latn-ME, timezone=America/Guadeloupe
>[junit4]   2> NOTE: Linux 4.1.0-custom2-amd64 amd64/Oracle Corporation 
> 1.8.0_77 (64-bit)/cpus=16,threads=1,free=283609904,total=531628032
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7360) Remove Explanation.toHtml()

2016-06-28 Thread Alan Woodward (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7360?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15353386#comment-15353386
 ] 

Alan Woodward commented on LUCENE-7360:
---

Good idea, and I'll just deprecate the method in 6.x.

> Remove Explanation.toHtml()
> ---
>
> Key: LUCENE-7360
> URL: https://issues.apache.org/jira/browse/LUCENE-7360
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Alan Woodward
> Attachments: LUCENE-7360.patch
>
>
> This seems to be something of a relic.  It's still used in Solr, but I think 
> it makes more sense to move it directly into the ExplainAugmenter there 
> rather than having it in Lucene itself.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9242) Collection level backup/restore should provide a param for specifying the repository implementation it should use

2016-06-28 Thread Hrishikesh Gadre (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9242?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15353377#comment-15353377
 ] 

Hrishikesh Gadre commented on SOLR-9242:


[~varunthacker] [~markrmil...@gmail.com] Any thoughts on this?

> Collection level backup/restore should provide a param for specifying the 
> repository implementation it should use
> -
>
> Key: SOLR-9242
> URL: https://issues.apache.org/jira/browse/SOLR-9242
> Project: Solr
>  Issue Type: Improvement
>Reporter: Hrishikesh Gadre
>Assignee: Varun Thacker
> Attachments: SOLR-9242.patch
>
>
> SOLR-7374 provides BackupRepository interface to enable storing Solr index 
> data to a configured file-system (e.g. HDFS, local file-system etc.). This 
> JIRA is to track the work required to extend this functionality at the 
> collection level.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-9254) GraphTermsQueryQParserPlugin throws NPE when field being search is not present in segment

2016-06-28 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9254?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-9254:
-
Summary: GraphTermsQueryQParserPlugin throws NPE when field being search is 
not present in segment  (was: GraphTermsQueryQParserPlugin NPE when field being 
search is not present in segment)

> GraphTermsQueryQParserPlugin throws NPE when field being search is not 
> present in segment
> -
>
> Key: SOLR-9254
> URL: https://issues.apache.org/jira/browse/SOLR-9254
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Steve Rowe
>Assignee: Joel Bernstein
>
> My Jenkins found a reproducing seed on branch_6x:
> {noformat}
> Checking out Revision d1a047ad6f24078f23c9b4adf15210ac8a6e8f8a 
> (refs/remotes/origin/branch_6x)
> [...]
>[junit4]   2> NOTE: reproduce with: ant test  
> -Dtestcase=TestGraphTermsQParserPlugin -Dtests.method=testQueries 
> -Dtests.seed=E47472DC605D2D21 -Dtests.slow=true 
> -Dtests.linedocsfile=/home/jenkins/lucene-data/enwiki.random.lines.txt 
> -Dtests.locale=sr-Latn-ME -Dtests.timezone=America/Guadeloupe 
> -Dtests.asserts=true -Dtests.file.encoding=UTF-8
>[junit4] ERROR   0.06s J11 | TestGraphTermsQParserPlugin.testQueries <<<
>[junit4]> Throwable #1: java.lang.RuntimeException: Exception during 
> query
>[junit4]>  at 
> __randomizedtesting.SeedInfo.seed([E47472DC605D2D21:B8FABE077A34988F]:0)
>[junit4]>  at 
> org.apache.solr.SolrTestCaseJ4.assertQ(SolrTestCaseJ4.java:781)
>[junit4]>  at 
> org.apache.solr.SolrTestCaseJ4.assertQ(SolrTestCaseJ4.java:748)
>[junit4]>  at 
> org.apache.solr.search.TestGraphTermsQParserPlugin.testQueries(TestGraphTermsQParserPlugin.java:76)
>[junit4]>  at java.lang.Thread.run(Thread.java:745)
>[junit4]> Caused by: java.lang.NullPointerException
>[junit4]>  at 
> org.apache.solr.search.GraphTermsQParserPlugin$GraphTermsQuery$1.rewrite(GraphTermsQParserPlugin.java:223)
>[junit4]>  at 
> org.apache.solr.search.GraphTermsQParserPlugin$GraphTermsQuery$1.bulkScorer(GraphTermsQParserPlugin.java:252)
>[junit4]>  at 
> org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:666)
>[junit4]>  at 
> org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:473)
>[junit4]>  at 
> org.apache.solr.search.SolrIndexSearcher.buildAndRunCollectorChain(SolrIndexSearcher.java:261)
>[junit4]>  at 
> org.apache.solr.search.SolrIndexSearcher.getDocListNC(SolrIndexSearcher.java:1818)
>[junit4]>  at 
> org.apache.solr.search.SolrIndexSearcher.getDocListC(SolrIndexSearcher.java:1635)
>[junit4]>  at 
> org.apache.solr.search.SolrIndexSearcher.search(SolrIndexSearcher.java:644)
>[junit4]>  at 
> org.apache.solr.handler.component.QueryComponent.process(QueryComponent.java:528)
>[junit4]>  at 
> org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:293)
>[junit4]>  at 
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:154)
>[junit4]>  at 
> org.apache.solr.core.SolrCore.execute(SolrCore.java:2035)
>[junit4]>  at 
> org.apache.solr.util.TestHarness.query(TestHarness.java:310)
>[junit4]>  at 
> org.apache.solr.util.TestHarness.query(TestHarness.java:292)
>[junit4]>  at 
> org.apache.solr.SolrTestCaseJ4.assertQ(SolrTestCaseJ4.java:755)
>[junit4]>  ... 41 more
>[junit4]   2> NOTE: test params are: codec=Asserting(Lucene62): 
> {test_tl=PostingsFormat(name=Direct), _version_=BlockTreeOrds(blocksize=128), 
> test_ti=BlockTreeOrds(blocksize=128), term_s=PostingsFormat(name=Asserting), 
> test_tf=PostingsFormat(name=LuceneVarGapDocFreqInterval), 
> id=PostingsFormat(name=LuceneVarGapDocFreqInterval), 
> group_s=PostingsFormat(name=LuceneVarGapDocFreqInterval)}, docValues:{}, 
> maxPointsInLeafNode=1102, maxMBSortInHeap=5.004024995692577, 
> sim=ClassicSimilarity, locale=sr-Latn-ME, timezone=America/Guadeloupe
>[junit4]   2> NOTE: Linux 4.1.0-custom2-amd64 amd64/Oracle Corporation 
> 1.8.0_77 (64-bit)/cpus=16,threads=1,free=283609904,total=531628032
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-9254) GraphTermsQueryQParserPlugin NPE when field being search is not present in segment

2016-06-28 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9254?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-9254:
-
Summary: GraphTermsQueryQParserPlugin NPE when field being search is not 
present in segment  (was: TestGraphTermsQParserPlugin.testQueries() 
NullPointerException)

> GraphTermsQueryQParserPlugin NPE when field being search is not present in 
> segment
> --
>
> Key: SOLR-9254
> URL: https://issues.apache.org/jira/browse/SOLR-9254
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Steve Rowe
>Assignee: Joel Bernstein
>
> My Jenkins found a reproducing seed on branch_6x:
> {noformat}
> Checking out Revision d1a047ad6f24078f23c9b4adf15210ac8a6e8f8a 
> (refs/remotes/origin/branch_6x)
> [...]
>[junit4]   2> NOTE: reproduce with: ant test  
> -Dtestcase=TestGraphTermsQParserPlugin -Dtests.method=testQueries 
> -Dtests.seed=E47472DC605D2D21 -Dtests.slow=true 
> -Dtests.linedocsfile=/home/jenkins/lucene-data/enwiki.random.lines.txt 
> -Dtests.locale=sr-Latn-ME -Dtests.timezone=America/Guadeloupe 
> -Dtests.asserts=true -Dtests.file.encoding=UTF-8
>[junit4] ERROR   0.06s J11 | TestGraphTermsQParserPlugin.testQueries <<<
>[junit4]> Throwable #1: java.lang.RuntimeException: Exception during 
> query
>[junit4]>  at 
> __randomizedtesting.SeedInfo.seed([E47472DC605D2D21:B8FABE077A34988F]:0)
>[junit4]>  at 
> org.apache.solr.SolrTestCaseJ4.assertQ(SolrTestCaseJ4.java:781)
>[junit4]>  at 
> org.apache.solr.SolrTestCaseJ4.assertQ(SolrTestCaseJ4.java:748)
>[junit4]>  at 
> org.apache.solr.search.TestGraphTermsQParserPlugin.testQueries(TestGraphTermsQParserPlugin.java:76)
>[junit4]>  at java.lang.Thread.run(Thread.java:745)
>[junit4]> Caused by: java.lang.NullPointerException
>[junit4]>  at 
> org.apache.solr.search.GraphTermsQParserPlugin$GraphTermsQuery$1.rewrite(GraphTermsQParserPlugin.java:223)
>[junit4]>  at 
> org.apache.solr.search.GraphTermsQParserPlugin$GraphTermsQuery$1.bulkScorer(GraphTermsQParserPlugin.java:252)
>[junit4]>  at 
> org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:666)
>[junit4]>  at 
> org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:473)
>[junit4]>  at 
> org.apache.solr.search.SolrIndexSearcher.buildAndRunCollectorChain(SolrIndexSearcher.java:261)
>[junit4]>  at 
> org.apache.solr.search.SolrIndexSearcher.getDocListNC(SolrIndexSearcher.java:1818)
>[junit4]>  at 
> org.apache.solr.search.SolrIndexSearcher.getDocListC(SolrIndexSearcher.java:1635)
>[junit4]>  at 
> org.apache.solr.search.SolrIndexSearcher.search(SolrIndexSearcher.java:644)
>[junit4]>  at 
> org.apache.solr.handler.component.QueryComponent.process(QueryComponent.java:528)
>[junit4]>  at 
> org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:293)
>[junit4]>  at 
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:154)
>[junit4]>  at 
> org.apache.solr.core.SolrCore.execute(SolrCore.java:2035)
>[junit4]>  at 
> org.apache.solr.util.TestHarness.query(TestHarness.java:310)
>[junit4]>  at 
> org.apache.solr.util.TestHarness.query(TestHarness.java:292)
>[junit4]>  at 
> org.apache.solr.SolrTestCaseJ4.assertQ(SolrTestCaseJ4.java:755)
>[junit4]>  ... 41 more
>[junit4]   2> NOTE: test params are: codec=Asserting(Lucene62): 
> {test_tl=PostingsFormat(name=Direct), _version_=BlockTreeOrds(blocksize=128), 
> test_ti=BlockTreeOrds(blocksize=128), term_s=PostingsFormat(name=Asserting), 
> test_tf=PostingsFormat(name=LuceneVarGapDocFreqInterval), 
> id=PostingsFormat(name=LuceneVarGapDocFreqInterval), 
> group_s=PostingsFormat(name=LuceneVarGapDocFreqInterval)}, docValues:{}, 
> maxPointsInLeafNode=1102, maxMBSortInHeap=5.004024995692577, 
> sim=ClassicSimilarity, locale=sr-Latn-ME, timezone=America/Guadeloupe
>[junit4]   2> NOTE: Linux 4.1.0-custom2-amd64 amd64/Oracle Corporation 
> 1.8.0_77 (64-bit)/cpus=16,threads=1,free=283609904,total=531628032
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-7191) Improve stability and startup performance of SolrCloud with thousands of collections

2016-06-28 Thread Erick Erickson (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7191?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15353362#comment-15353362
 ] 

Erick Erickson commented on SOLR-7191:
--

IIUC, the whole watcher mechanism is replica-based, so N replicas for the same 
collection in the same JVM register N watchers.

If that's true, does it make sense to think about watchers being set per 
_collection_ in a JVM rather than per _replica_? I admit I'm completely 
ignorant of the nuances here. It also wouldn't make any difference in a setup 
where each instance hosted exactly one replica per collection, but practically 
I'm not sure there's anything we can do about that anyway.

Although it seems that each replica could be an Observer for a given collection 
(a watcher at the JVM level?) without doing much violence to the current 
architecture. Or maybe it'd just be simpler to have the replicas get their 
state information from some kind of cache maintained at the JVM level, where the 
cache is updated via the watcher. I admit I'm talking through my hat here. Maybe 
there should be a JIRA to discuss this?
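
A minimal sketch of the JVM-level cache idea, just to make the shape concrete; 
the class, path, and method names here are illustrative and not taken from any 
patch on this issue:

{noformat}
// Hypothetical sketch only: one ZooKeeper watch per collection per JVM,
// shared by every local replica of that collection.
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

import org.apache.zookeeper.KeeperException;
import org.apache.zookeeper.WatchedEvent;
import org.apache.zookeeper.Watcher;
import org.apache.zookeeper.ZooKeeper;
import org.apache.zookeeper.data.Stat;

public class CollectionStateCache {
  private final ZooKeeper zk;
  private final ConcurrentMap<String, byte[]> stateByCollection = new ConcurrentHashMap<>();

  public CollectionStateCache(ZooKeeper zk) {
    this.zk = zk;
  }

  /** Every replica of the collection in this JVM reads from the cache; only one watch exists. */
  public byte[] getState(String collection) throws KeeperException, InterruptedException {
    byte[] cached = stateByCollection.get(collection);
    return cached != null ? cached : watchAndFetch(collection);
  }

  private byte[] watchAndFetch(final String collection) throws KeeperException, InterruptedException {
    String path = "/collections/" + collection + "/state.json";
    Watcher watcher = new Watcher() {
      @Override
      public void process(WatchedEvent event) {
        try {
          watchAndFetch(collection); // refresh the cache and re-register the watch
        } catch (Exception e) {
          stateByCollection.remove(collection); // force an uncached read on next access
        }
      }
    };
    byte[] data = zk.getData(path, watcher, new Stat());
    stateByCollection.put(collection, data);
    return data;
  }
}
{noformat}

(Whether something like this fits the current ZkStateReader design is exactly 
the open question.)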

> Improve stability and startup performance of SolrCloud with thousands of 
> collections
> 
>
> Key: SOLR-7191
> URL: https://issues.apache.org/jira/browse/SOLR-7191
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
>Affects Versions: 5.0
>Reporter: Shawn Heisey
>Assignee: Shalin Shekhar Mangar
>  Labels: performance, scalability
> Attachments: SOLR-7191.patch, SOLR-7191.patch, SOLR-7191.patch, 
> SOLR-7191.patch, SOLR-7191.patch, SOLR-7191.patch, SOLR-7191.patch, 
> lots-of-zkstatereader-updates-branch_5x.log
>
>
> A user on the mailing list with thousands of collections (5000 on 4.10.3, 
> 4000 on 5.0) is having severe problems with getting Solr to restart.
> I tried as hard as I could to duplicate the user setup, but I ran into many 
> problems myself even before I was able to get 4000 collections created on a 
> 5.0 example cloud setup.  Restarting Solr takes a very long time, and it is 
> not very stable once it's up and running.
> This kind of setup is very much pushing the envelope on SolrCloud performance 
> and scalability.  It doesn't help that I'm running both Solr nodes on one 
> machine (I started with 'bin/solr -e cloud') and that ZK is embedded.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7360) Remove Explanation.toHtml()

2016-06-28 Thread Adrien Grand (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7360?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15353313#comment-15353313
 ] 

Adrien Grand commented on LUCENE-7360:
--

+1 to the patch. Maybe add a quick note to the lucene/MIGRATE.txt?

> Remove Explanation.toHtml()
> ---
>
> Key: LUCENE-7360
> URL: https://issues.apache.org/jira/browse/LUCENE-7360
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Alan Woodward
> Attachments: LUCENE-7360.patch
>
>
> This seems to be something of a relic.  It's still used in Solr, but I think 
> it makes more sense to move it directly into the ExplainAugmenter there 
> rather than having it in Lucene itself.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Assigned] (SOLR-7282) Cache config or index schema objects by configset and share them across cores

2016-06-28 Thread Noble Paul (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7282?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Noble Paul reassigned SOLR-7282:


Assignee: Noble Paul  (was: Shalin Shekhar Mangar)

> Cache config or index schema objects by configset and share them across cores
> -
>
> Key: SOLR-7282
> URL: https://issues.apache.org/jira/browse/SOLR-7282
> Project: Solr
>  Issue Type: Sub-task
>  Components: SolrCloud
>Reporter: Shalin Shekhar Mangar
>Assignee: Noble Paul
> Fix For: 5.2, 6.0
>
> Attachments: SOLR-7282.patch
>
>
> Sharing schema and config objects has been known to improve startup 
> performance when a large number of cores are on the same box (See 
> http://wiki.apache.org/solr/LotsOfCores).Damien also saw improvements to 
> cluster startup speed upon caching the index schema in SOLR-7191.
> Now that SolrCloud configuration is based on config sets in ZK, we should 
> explore how we can minimize config/schema parsing for each core in a way that 
> is compatible with the recent/planned changes in the config and schema APIs.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-7360) Remove Explanation.toHtml()

2016-06-28 Thread Alan Woodward (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7360?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alan Woodward updated LUCENE-7360:
--
Attachment: LUCENE-7360.patch

Here's a patch against master.

> Remove Explanation.toHtml()
> ---
>
> Key: LUCENE-7360
> URL: https://issues.apache.org/jira/browse/LUCENE-7360
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Alan Woodward
> Attachments: LUCENE-7360.patch
>
>
> This seems to be something of a relic.  It's still used in Solr, but I think 
> it makes more sense to move it directly into the ExplainAugmenter there 
> rather than having it in Lucene itself.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-7191) Improve stability and startup performance of SolrCloud with thousands of collections

2016-06-28 Thread Noble Paul (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7191?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15353293#comment-15353293
 ] 

Noble Paul commented on SOLR-7191:
--

The cluster was very stable when I used a dedicated overseer via the 
{{ADDROLE}} command. I used the replica placement strategy to ensure that the 
overseer nodes did not have any replicas created. For any reasonably large 
cluster, I recommend using dedicated overseer nodes. Another observation was 
that the overseer nodes use very little memory; the heap never went beyond 
200MB.
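
For reference, the dedicated-overseer role described here is assigned with the 
Collections API {{ADDROLE}} action, along these lines (host, port, and node 
name below are placeholders):

{noformat}
http://host:8983/solr/admin/collections?action=ADDROLE&role=overseer&node=host:8983_solr
{noformat}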

> Improve stability and startup performance of SolrCloud with thousands of 
> collections
> 
>
> Key: SOLR-7191
> URL: https://issues.apache.org/jira/browse/SOLR-7191
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
>Affects Versions: 5.0
>Reporter: Shawn Heisey
>Assignee: Shalin Shekhar Mangar
>  Labels: performance, scalability
> Attachments: SOLR-7191.patch, SOLR-7191.patch, SOLR-7191.patch, 
> SOLR-7191.patch, SOLR-7191.patch, SOLR-7191.patch, SOLR-7191.patch, 
> lots-of-zkstatereader-updates-branch_5x.log
>
>
> A user on the mailing list with thousands of collections (5000 on 4.10.3, 
> 4000 on 5.0) is having severe problems with getting Solr to restart.
> I tried as hard as I could to duplicate the user setup, but I ran into many 
> problems myself even before I was able to get 4000 collections created on a 
> 5.0 example cloud setup.  Restarting Solr takes a very long time, and it is 
> not very stable once it's up and running.
> This kind of setup is very much pushing the envelope on SolrCloud performance 
> and scalability.  It doesn't help that I'm running both Solr nodes on one 
> machine (I started with 'bin/solr -e cloud') and that ZK is embedded.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-7191) Improve stability and startup performance of SolrCloud with thousands of collections

2016-06-28 Thread Erick Erickson (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7191?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15353278#comment-15353278
 ] 

Erick Erickson commented on SOLR-7191:
--

Empirically it was fine, but that was on very few runs. I had 10 replicas in 
each JVM and 3 load threads in one variant. Of course I might just have gotten 
lucky.

> Improve stability and startup performance of SolrCloud with thousands of 
> collections
> 
>
> Key: SOLR-7191
> URL: https://issues.apache.org/jira/browse/SOLR-7191
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
>Affects Versions: 5.0
>Reporter: Shawn Heisey
>Assignee: Shalin Shekhar Mangar
>  Labels: performance, scalability
> Attachments: SOLR-7191.patch, SOLR-7191.patch, SOLR-7191.patch, 
> SOLR-7191.patch, SOLR-7191.patch, SOLR-7191.patch, SOLR-7191.patch, 
> lots-of-zkstatereader-updates-branch_5x.log
>
>
> A user on the mailing list with thousands of collections (5000 on 4.10.3, 
> 4000 on 5.0) is having severe problems with getting Solr to restart.
> I tried as hard as I could to duplicate the user setup, but I ran into many 
> problems myself even before I was able to get 4000 collections created on a 
> 5.0 example cloud setup.  Restarting Solr takes a very long time, and it is 
> not very stable once it's up and running.
> This kind of setup is very much pushing the envelope on SolrCloud performance 
> and scalability.  It doesn't help that I'm running both Solr nodes on one 
> machine (I started with 'bin/solr -e cloud') and that ZK is embedded.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-7191) Improve stability and startup performance of SolrCloud with thousands of collections

2016-06-28 Thread Scott Blum (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7191?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15353262#comment-15353262
 ] 

Scott Blum commented on SOLR-7191:
--

(for the record, this was on a 5.5.1 based build)

> Improve stability and startup performance of SolrCloud with thousands of 
> collections
> 
>
> Key: SOLR-7191
> URL: https://issues.apache.org/jira/browse/SOLR-7191
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
>Affects Versions: 5.0
>Reporter: Shawn Heisey
>Assignee: Shalin Shekhar Mangar
>  Labels: performance, scalability
> Attachments: SOLR-7191.patch, SOLR-7191.patch, SOLR-7191.patch, 
> SOLR-7191.patch, SOLR-7191.patch, SOLR-7191.patch, SOLR-7191.patch, 
> lots-of-zkstatereader-updates-branch_5x.log
>
>
> A user on the mailing list with thousands of collections (5000 on 4.10.3, 
> 4000 on 5.0) is having severe problems with getting Solr to restart.
> I tried as hard as I could to duplicate the user setup, but I ran into many 
> problems myself even before I was able to get 4000 collections created on a 
> 5.0 example cloud setup.  Restarting Solr takes a very long time, and it is 
> not very stable once it's up and running.
> This kind of setup is very much pushing the envelope on SolrCloud performance 
> and scalability.  It doesn't help that I'm running both Solr nodes on one 
> machine (I started with 'bin/solr -e cloud') and that ZK is embedded.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-7191) Improve stability and startup performance of SolrCloud with thousands of collections

2016-06-28 Thread Scott Blum (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7191?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15353261#comment-15353261
 ] 

Scott Blum commented on SOLR-7191:
--

Paginated getChildren()... always wondered why that wasn't a thing.

> Improve stability and startup performance of SolrCloud with thousands of 
> collections
> 
>
> Key: SOLR-7191
> URL: https://issues.apache.org/jira/browse/SOLR-7191
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
>Affects Versions: 5.0
>Reporter: Shawn Heisey
>Assignee: Shalin Shekhar Mangar
>  Labels: performance, scalability
> Attachments: SOLR-7191.patch, SOLR-7191.patch, SOLR-7191.patch, 
> SOLR-7191.patch, SOLR-7191.patch, SOLR-7191.patch, SOLR-7191.patch, 
> lots-of-zkstatereader-updates-branch_5x.log
>
>
> A user on the mailing list with thousands of collections (5000 on 4.10.3, 
> 4000 on 5.0) is having severe problems with getting Solr to restart.
> I tried as hard as I could to duplicate the user setup, but I ran into many 
> problems myself even before I was able to get 4000 collections created on a 
> 5.0 example cloud setup.  Restarting Solr takes a very long time, and it is 
> not very stable once it's up and running.
> This kind of setup is very much pushing the envelope on SolrCloud performance 
> and scalability.  It doesn't help that I'm running both Solr nodes on one 
> machine (I started with 'bin/solr -e cloud') and that ZK is embedded.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-7191) Improve stability and startup performance of SolrCloud with thousands of collections

2016-06-28 Thread Shawn Heisey (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7191?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15353258#comment-15353258
 ] 

Shawn Heisey commented on SOLR-7191:


bq. That seems... insane.

I am glad that I am not the only one to think the number of updates in the 
overseer queue for node startup is insane.

When you get that many updates in the queue and haven't made a big change to 
jute.maxbuffer, zookeeper starts failing because the size of the znode becomes 
too large.  I think it's crazy that zookeeper allows *writes* to a znode 
when the write will make the node too big.  See ZOOKEEPER-1162.
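
(For context, "a big change to jute.maxbuffer" means raising the system 
property on both the ZooKeeper servers and the Solr JVMs, since both sides 
enforce the limit; the value below is purely illustrative.)

{noformat}
# illustrative value (~10 MB); set on every ZooKeeper server and Solr node
-Djute.maxbuffer=10485760
{noformat}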

> Improve stability and startup performance of SolrCloud with thousands of 
> collections
> 
>
> Key: SOLR-7191
> URL: https://issues.apache.org/jira/browse/SOLR-7191
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
>Affects Versions: 5.0
>Reporter: Shawn Heisey
>Assignee: Shalin Shekhar Mangar
>  Labels: performance, scalability
> Attachments: SOLR-7191.patch, SOLR-7191.patch, SOLR-7191.patch, 
> SOLR-7191.patch, SOLR-7191.patch, SOLR-7191.patch, SOLR-7191.patch, 
> lots-of-zkstatereader-updates-branch_5x.log
>
>
> A user on the mailing list with thousands of collections (5000 on 4.10.3, 
> 4000 on 5.0) is having severe problems with getting Solr to restart.
> I tried as hard as I could to duplicate the user setup, but I ran into many 
> problems myself even before I was able to get 4000 collections created on a 
> 5.0 example cloud setup.  Restarting Solr takes a very long time, and it is 
> not very stable once it's up and running.
> This kind of setup is very much pushing the envelope on SolrCloud performance 
> and scalability.  It doesn't help that I'm running both Solr nodes on one 
> machine (I started with 'bin/solr -e cloud') and that ZK is embedded.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-7191) Improve stability and startup performance of SolrCloud with thousands of collections

2016-06-28 Thread Scott Blum (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7191?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15353247#comment-15353247
 ] 

Scott Blum commented on SOLR-7191:
--

This may be unrelated to the current patch work, but it seems relevant to the 
uber ticket:

I rebooted our solr cluster the other night to pick up an update, and I ran 
into what seemed to be pathological behavior around state updates.  My first 
attempt to bring up everything at once resulted in utter deadlock, so I shut 
everything down, manually nuked all the overseer queues/maps in ZK, and started 
bringing them up one at a time.  What I saw was kind of astounding.

I was monitoring OVERSEERSTATUS and tracking the number of outstanding overseer 
ops + the total number of update_state ops, and I noticed that every VM I 
brought up needed ~4000 update_state ops to stabilize, despite the fact that 
each VM only manages ~128 cores.  We have 32 vms with ~128 cores each, or ~4096 
cores in our entire cluster... it took over 100,000 update_state operations to 
bring the whole cluster up.  That seems... insane.  3 or 4 update_state ops per 
core would seem reasonable to me, but I saw over 30 ops per core loaded as I 
went.  This number was extremely consistent for every node I brought up.
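
(For reference, the monitoring described above polls the Collections API 
OVERSEERSTATUS action, e.g. {{/admin/collections?action=OVERSEERSTATUS}}, with 
host and port as appropriate.)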

> Improve stability and startup performance of SolrCloud with thousands of 
> collections
> 
>
> Key: SOLR-7191
> URL: https://issues.apache.org/jira/browse/SOLR-7191
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
>Affects Versions: 5.0
>Reporter: Shawn Heisey
>Assignee: Shalin Shekhar Mangar
>  Labels: performance, scalability
> Attachments: SOLR-7191.patch, SOLR-7191.patch, SOLR-7191.patch, 
> SOLR-7191.patch, SOLR-7191.patch, SOLR-7191.patch, SOLR-7191.patch, 
> lots-of-zkstatereader-updates-branch_5x.log
>
>
> A user on the mailing list with thousands of collections (5000 on 4.10.3, 
> 4000 on 5.0) is having severe problems with getting Solr to restart.
> I tried as hard as I could to duplicate the user setup, but I ran into many 
> problems myself even before I was able to get 4000 collections created on a 
> 5.0 example cloud setup.  Restarting Solr takes a very long time, and it is 
> not very stable once it's up and running.
> This kind of setup is very much pushing the envelope on SolrCloud performance 
> and scalability.  It doesn't help that I'm running both Solr nodes on one 
> machine (I started with 'bin/solr -e cloud') and that ZK is embedded.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9253) solrcloud goes dowm

2016-06-28 Thread Scott Blum (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9253?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15353215#comment-15353215
 ] 

Scott Blum commented on SOLR-9253:
--

Interesting.  Before hitting a critical failure, would there be any indications 
in the logs of the possible need to increase the pool?  Issuing a warning might 
be useful when lease times exceed a certain threshold.


> solrcloud goes dowm
> ---
>
> Key: SOLR-9253
> URL: https://issues.apache.org/jira/browse/SOLR-9253
> Project: Solr
>  Issue Type: Wish
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: clients - java
>Affects Versions: 4.9.1
> Environment: jboss, zookeeper
>Reporter: Junfeng Mu
> Attachments: 20160627161845.png, javacore.165.txt
>
>   Original Estimate: 96h
>  Remaining Estimate: 96h
>
> We use solrcloud in our project. now we use solr, but the data grows bigger 
> and bigger, so we want to switch to solrcloud, however, once we switch to 
> solrcloud, solrcloud goes down, It seems that solrcloud blocked, can not deal 
> with the new query, please see the attachments and help us ASAP. Thanks!



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Closed] (SOLR-9253) solrcloud goes dowm

2016-06-28 Thread Erick Erickson (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9253?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erick Erickson closed SOLR-9253.

Resolution: Invalid

Please move this discussion to the user's list as you have been asked.

> solrcloud goes dowm
> ---
>
> Key: SOLR-9253
> URL: https://issues.apache.org/jira/browse/SOLR-9253
> Project: Solr
>  Issue Type: Wish
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: clients - java
>Affects Versions: 4.9.1
> Environment: jboss, zookeeper
>Reporter: Junfeng Mu
> Attachments: 20160627161845.png, javacore.165.txt
>
>   Original Estimate: 96h
>  Remaining Estimate: 96h
>
> We use solrcloud in our project. now we use solr, but the data grows bigger 
> and bigger, so we want to switch to solrcloud, however, once we switch to 
> solrcloud, solrcloud goes down, It seems that solrcloud blocked, can not deal 
> with the new query, please see the attachments and help us ASAP. Thanks!



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9194) Enhance the bin/solr script to perform file operations to/from Zookeeper

2016-06-28 Thread Erick Erickson (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9194?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15353208#comment-15353208
 ] 

Erick Erickson commented on SOLR-9194:
--

_Of course_ it fails miserably; I _said_ I moved stuff over and didn't have a 
way to test ;).

I'm not doing any work on this currently, so if someone (hint hint) would be so 
kind as to fix the errors in the Windows script and post it back, I'd be forever 
grateful. A patch would be fine, or just the entire Windows script. It seems 
like a long way around to tell me about the errors, have me post a new patch, 
and then just find the others...



> Enhance the bin/solr script to perform file operations to/from Zookeeper
> 
>
> Key: SOLR-9194
> URL: https://issues.apache.org/jira/browse/SOLR-9194
> Project: Solr
>  Issue Type: Improvement
>Reporter: Erick Erickson
>Assignee: Erick Erickson
>Priority: Minor
> Attachments: SOLR-9194.patch, SOLR-9194.patch, SOLR-9194.patch
>
>
> There are a few other files that can reasonably be pushed to Zookeeper, e.g. 
> solr.xml, security.json, clusterprops.json. Who knows? Even 
> /state.json for the brave.
> This could reduce further the need for bouncing out to zkcli.
> Assigning to myself just so I don't lose track, but I would _love_ it if 
> someone else wanted to take it...
> I'm thinking the commands would be 
> bin/solr zk -putfile -z  -p  -f 
> bin/solr zk -getfile -z  -p  -f 
> but I'm not wedded to those, all suggestions welcome.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7355) Leverage MultiTermAwareComponent in query parsers

2016-06-28 Thread Adrien Grand (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7355?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15353173#comment-15353173
 ] 

Adrien Grand commented on LUCENE-7355:
--

This sounded appealing so I gave it a try, but I hit a couple of problems:
 - some analyzers need to apply char filters too, so we cannot expect to have a 
String in all cases; we need some sort of KeywordTokenizer
 - some consumers need to get the binary representation of terms, which depends 
on the AttributeFactory (LUCENE-4176). So maybe we should return a TokenStream 
rather than a String and let consumers decide whether they want to add a 
CharTermAttribute or a TermToBytesRefAttribute (see the sketch below). Is there 
a better option?
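
A minimal sketch of the consumer side being discussed, assuming an ordinary 
Analyzer; the class and method names are illustrative and not from the attached 
patch:

{noformat}
// Illustrative only: analyze one fragment of a multi-term query and expose both
// the char form (CharTermAttribute) and the binary form (TermToBytesRefAttribute).
import java.io.IOException;

import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;
import org.apache.lucene.analysis.tokenattributes.TermToBytesRefAttribute;
import org.apache.lucene.util.BytesRef;

public class MultiTermAnalysisSketch {
  public static void dump(Analyzer analyzer, String field, String text) throws IOException {
    try (TokenStream ts = analyzer.tokenStream(field, text)) {
      CharTermAttribute chars = ts.addAttribute(CharTermAttribute.class);
      TermToBytesRefAttribute bytes = ts.addAttribute(TermToBytesRefAttribute.class);
      ts.reset();
      while (ts.incrementToken()) {
        String asString = chars.toString();      // what a String-returning API would hand back
        BytesRef asBytes = bytes.getBytesRef();  // what binary-term consumers need
        System.out.println(asString + " -> " + asBytes);
      }
      ts.end();
    }
  }
}
{noformat}

Returning the TokenStream rather than a String would leave that choice to the 
caller, at the cost of pushing the reset/incrementToken/end/close ceremony onto 
every consumer.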

> Leverage MultiTermAwareComponent in query parsers
> -
>
> Key: LUCENE-7355
> URL: https://issues.apache.org/jira/browse/LUCENE-7355
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Adrien Grand
>Priority: Minor
> Attachments: LUCENE-7355.patch, LUCENE-7355.patch
>
>
> MultiTermAwareComponent is designed to make it possible to do the right thing 
> in query parsers when in comes to analysis of multi-term queries. However, 
> since query parsers just take an analyzer and since analyzers do not 
> propagate the information about what to do for multi-term analysis, query 
> parsers cannot do the right thing out of the box.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9076) Update to Hadoop 2.7.2

2016-06-28 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9076?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15353155#comment-15353155
 ] 

Mark Miller commented on SOLR-9076:
---

It still requires a couple of dependencies:
 
  

I'm looking at SOLR-9073 - we have to start using a core name for embedded and 
that causes some random grief.

> Update to Hadoop 2.7.2
> --
>
> Key: SOLR-9076
> URL: https://issues.apache.org/jira/browse/SOLR-9076
> Project: Solr
>  Issue Type: Improvement
>Reporter: Mark Miller
>Assignee: Mark Miller
> Fix For: 6.2, master (7.0)
>
> Attachments: SOLR-9076-fixnetty.patch, SOLR-9076.patch, 
> SOLR-9076.patch, SOLR-9076.patch, SOLR-9076.patch, SOLR-9076.patch, 
> SOLR-9076.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9194) Enhance the bin/solr script to perform file operations to/from Zookeeper

2016-06-28 Thread JIRA

[ 
https://issues.apache.org/jira/browse/SOLR-9194?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15353122#comment-15353122
 ] 

Jan Høydahl commented on SOLR-9194:
---

First test on Windows 10 fails miserably :-)

{noformat}
C:\Users\janms\Desktop\solr-7.0.0-SNAPSHOT>bin\solr zk
'ELSE' is not recognized as an internal or external command,
operable program or batch file.
'ELSE' is not recognized as an internal or external command,
operable program or batch file.
'ELSE' is not recognized as an internal or external command,
operable program or batch file.
'ELSE' is not recognized as an internal or external command,
operable program or batch file.
'ELSE' is not recognized as an internal or external command,
operable program or batch file.
'ELSE' is not recognized as an internal or external command,
operable program or batch file.
'ELSE' is not recognized as an internal or external command,
operable program or batch file.
'ELSE' is not recognized as an internal or external command,
operable program or batch file.
'ELSE' is not recognized as an internal or external command,
operable program or batch file.
'ELSE' is not recognized as an internal or external command,
operable program or batch file.
'ELSE' is not recognized as an internal or external command,
operable program or batch file.
'ELSE' is not recognized as an internal or external command,
operable program or batch file.
'ELSE' is not recognized as an internal or external command,
operable program or batch file.
'ELSE' is not recognized as an internal or external command,
operable program or batch file.
'ELSE' is not recognized as an internal or external command,
operable program or batch file.
'ELSE' is not recognized as an internal or external command,
operable program or batch file.
'ELSE' is not recognized as an internal or external command,
operable program or batch file.
'ELSE' is not recognized as an internal or external command,
operable program or batch file.
'ELSE' is not recognized as an internal or external command,
operable program or batch file.
The system cannot find the batch label specified - zk_short_usage
{noformat}

This seems to be the line executed before the ELSE failures:
{noformat}
IF EXIST "C:\Users\janms\Desktop\solr-7.0.0-SNAPSHOT\bin\solr.in.cmd" CALL 
"C:\Users\janms\Desktop\solr-7.0.0-SNAPSHOT\bin\solr.in.cmd"
{noformat}

Found two bugs for sure:
* the label {{zk_short_usage}} should be prefixed with {{:}}, not suffixed
* the line {{set ERROR_MSG="-n option must be set for upconfig"}} was 
copy/pasted, so the same message is used for both upconfig and downconfig

> Enhance the bin/solr script to perform file operations to/from Zookeeper
> 
>
> Key: SOLR-9194
> URL: https://issues.apache.org/jira/browse/SOLR-9194
> Project: Solr
>  Issue Type: Improvement
>Reporter: Erick Erickson
>Assignee: Erick Erickson
>Priority: Minor
> Attachments: SOLR-9194.patch, SOLR-9194.patch, SOLR-9194.patch
>
>
> There are a few other files that can reasonably be pushed to Zookeeper, e.g. 
> solr.xml, security.json, clusterprops.json. Who knows? Even 
> /state.json for the brave.
> This could reduce further the need for bouncing out to zkcli.
> Assigning to myself just so I don't lose track, but I would _love_ it if 
> someone else wanted to take it...
> I'm thinking the commands would be 
> bin/solr zk -putfile -z  -p  -f 
> bin/solr zk -getfile -z  -p  -f 
> but I'm not wedded to those, all suggestions welcome.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9253) solrcloud goes dowm

2016-06-28 Thread Junfeng Mu (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9253?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15353108#comment-15353108
 ] 

Junfeng Mu commented on SOLR-9253:
--

Thanks very much!

I wonder if there is a standard way to configure "maxConnections" and 
"maxConnectionsPerHost", or a recommended ratio between them.

Our application is an e-commerce site, so the traffic volume is large, and I 
wonder how high we should set them.

> solrcloud goes dowm
> ---
>
> Key: SOLR-9253
> URL: https://issues.apache.org/jira/browse/SOLR-9253
> Project: Solr
>  Issue Type: Wish
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: clients - java
>Affects Versions: 4.9.1
> Environment: jboss, zookeeper
>Reporter: Junfeng Mu
> Attachments: 20160627161845.png, javacore.165.txt
>
>   Original Estimate: 96h
>  Remaining Estimate: 96h
>
> We use solrcloud in our project. now we use solr, but the data grows bigger 
> and bigger, so we want to switch to solrcloud, however, once we switch to 
> solrcloud, solrcloud goes down, It seems that solrcloud blocked, can not deal 
> with the new query, please see the attachments and help us ASAP. Thanks!



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-4587) Implement Saved Searches a la ElasticSearch Percolator

2016-06-28 Thread Joel Bernstein (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4587?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15353106#comment-15353106
 ] 

Joel Bernstein commented on SOLR-4587:
--

We also have the topic() function, which stores its checkpoints for a topic in 
a SolrCloud collection. Topics currently store just the checkpoints, but we 
could have them store the query as well. This would satisfy the stored query 
feature.

Then you could shuffle stored queries off to worker nodes to be executed in 
parallel. If you need to scale up you just add more workers and replicas.
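
A rough sketch of the kind of expression this would enable, with a {{daemon()}} 
re-running a stored-query {{topic()}} on an interval; the collection names and 
parameters below are illustrative, not a committed API for stored queries:

{noformat}
daemon(id="alerts",
       runInterval="1000",
       topic(checkpointCollection,
             contentCollection,
             id="storedQuery42",
             q="body:(solr AND alerting)",
             fl="id,title"))
{noformat}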





> Implement Saved Searches a la ElasticSearch Percolator
> --
>
> Key: SOLR-4587
> URL: https://issues.apache.org/jira/browse/SOLR-4587
> Project: Solr
>  Issue Type: New Feature
>  Components: SearchComponents - other, SolrCloud
>Reporter: Otis Gospodnetic
> Fix For: 6.0
>
>
> Use Lucene MemoryIndex for this.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-master-Linux (64bit/jdk1.8.0_92) - Build # 17085 - Still Failing!

2016-06-28 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/17085/
Java: 64bit/jdk1.8.0_92 -XX:-UseCompressedOops -XX:+UseSerialGC

1 tests failed.
FAILED:  org.apache.solr.update.PeerSyncTest.test

Error Message:
.response[0][id][0]:3!=1

Stack Trace:
junit.framework.AssertionFailedError: .response[0][id][0]:3!=1
at 
__randomizedtesting.SeedInfo.seed([BE256205CF098A49:36715DDF61F5E7B1]:0)
at junit.framework.Assert.fail(Assert.java:50)
at 
org.apache.solr.BaseDistributedSearchTestCase.compareSolrResponses(BaseDistributedSearchTestCase.java:913)
at 
org.apache.solr.BaseDistributedSearchTestCase.compareResponses(BaseDistributedSearchTestCase.java:932)
at 
org.apache.solr.BaseDistributedSearchTestCase.queryAndCompare(BaseDistributedSearchTestCase.java:650)
at 
org.apache.solr.BaseDistributedSearchTestCase.queryAndCompare(BaseDistributedSearchTestCase.java:641)
at org.apache.solr.update.PeerSyncTest.test(PeerSyncTest.java:97)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:985)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:960)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 

[jira] [Commented] (SOLR-8612) DIH JdbcDataSource - statement not always closed

2016-06-28 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8612?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15353049#comment-15353049
 ] 

ASF GitHub Bot commented on SOLR-8612:
--

Github user shalinmangar commented on the issue:

https://github.com/apache/lucene-solr/pull/6
  
This has already been merged so this pull request can be closed.


> DIH JdbcDataSource - statement not always closed
> 
>
> Key: SOLR-8612
> URL: https://issues.apache.org/jira/browse/SOLR-8612
> Project: Solr
>  Issue Type: Bug
>  Components: contrib - DataImportHandler
>Affects Versions: 5.4.1
>Reporter: Kristine Jetzke
>Assignee: Mikhail Khludnev
> Fix For: 5.5.2, 5.6, 6.0.2, 6.1, master (7.0)
>
> Attachments: SOLR-8612.patch, SOLR-8612.patch, SOLR-8612.patch, 
> SOLR-8612.patch, SOLR-8612.patch
>
>
> There are several cases where the Statement used by JdbcDataSource is not 
> closed, potentially resulting in too many open connections:
> - an exception is throw in the {{ResultSetIterator}} constructor
> - the result set is null in the {{ResultSetIterator}} constructor
> - an exception is thrown during import and the import is aborted (onError 
> flag set to abort)
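
The general JDBC pattern that avoids leaks like these is try-with-resources; a 
generic sketch (not the actual DIH code), assuming a plain java.sql connection:

{noformat}
// Generic sketch: the Statement and ResultSet are closed even if the query
// or the per-row processing throws.
import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class JdbcCloseSketch {
  public static void readAll(Connection conn, String query) throws SQLException {
    try (Statement stmt = conn.createStatement();
         ResultSet rs = stmt.executeQuery(query)) {
      while (rs.next()) {
        // process the row; resources are released when this block exits
      }
    }
  }
}
{noformat}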



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[GitHub] lucene-solr issue #6: [SOLR-8612] DIH JdbcDataSource: Always close ResultSet...

2016-06-28 Thread shalinmangar
Github user shalinmangar commented on the issue:

https://github.com/apache/lucene-solr/pull/6
  
This has already been merged so this pull request can be closed.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9076) Update to Hadoop 2.7.2

2016-06-28 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9076?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15353043#comment-15353043
 ] 

Mark Miller commented on SOLR-9076:
---

Actually, the first thing I see failing in the MR job is:
{noformat}
Caused by: java.lang.NoClassDefFoundError: 
org/bouncycastle/operator/OperatorCreationException
at java.lang.Class.forName0(Native Method)
at java.lang.Class.forName(Class.java:264)
at 
org.apache.solr.morphlines.cell.SolrCellBuilder$SolrCell.(SolrCellBuilder.java:175)
... 21 more
Caused by: java.lang.ClassNotFoundException: 
org.bouncycastle.operator.OperatorCreationException
at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:331)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
... 24 more
{noformat}

> Update to Hadoop 2.7.2
> --
>
> Key: SOLR-9076
> URL: https://issues.apache.org/jira/browse/SOLR-9076
> Project: Solr
>  Issue Type: Improvement
>Reporter: Mark Miller
>Assignee: Mark Miller
> Fix For: 6.2, master (7.0)
>
> Attachments: SOLR-9076-fixnetty.patch, SOLR-9076.patch, 
> SOLR-9076.patch, SOLR-9076.patch, SOLR-9076.patch, SOLR-9076.patch, 
> SOLR-9076.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7360) Remove Explanation.toHtml()

2016-06-28 Thread Adrien Grand (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7360?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15353042#comment-15353042
 ] 

Adrien Grand commented on LUCENE-7360:
--

+1

> Remove Explanation.toHtml()
> ---
>
> Key: LUCENE-7360
> URL: https://issues.apache.org/jira/browse/LUCENE-7360
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Alan Woodward
>
> This seems to be something of a relic.  It's still used in Solr, but I think 
> it makes more sense to move it directly into the ExplainAugmenter there 
> rather than having it in Lucene itself.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (LUCENE-7360) Remove Explanation.toHtml()

2016-06-28 Thread Alan Woodward (JIRA)
Alan Woodward created LUCENE-7360:
-

 Summary: Remove Explanation.toHtml()
 Key: LUCENE-7360
 URL: https://issues.apache.org/jira/browse/LUCENE-7360
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Alan Woodward


This seems to be something of a relic.  It's still used in Solr, but I think it 
makes more sense to move it directly into the ExplainAugmenter there rather 
than having it in Lucene itself.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-7359) Add equals() and hashcode() to Explanation

2016-06-28 Thread Alan Woodward (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7359?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alan Woodward updated LUCENE-7359:
--
Attachment: LUCENE-7359.patch

Patch with some basic tests.

> Add equals() and hashcode() to Explanation
> --
>
> Key: LUCENE-7359
> URL: https://issues.apache.org/jira/browse/LUCENE-7359
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Alan Woodward
>Priority: Minor
> Attachments: LUCENE-7359.patch
>
>
> I don't think there's any reason *not* to add these?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-4587) Implement Saved Searches a la ElasticSearch Percolator

2016-06-28 Thread JIRA

[ 
https://issues.apache.org/jira/browse/SOLR-4587?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15353023#comment-15353023
 ] 

Jan Høydahl commented on SOLR-4587:
---

Lots of things have happened the last 18 months... We got streaming 
expressions, which could perhaps be a way for clients to consume the stream of 
matches in an asynchronous fashion? And we could create a configset for 
alerting which keeps all the wiring in one place... [~joel.bernstein] do you 
think that the {{daemon()}} stuff from streaming could be suitable as an API 
for consuming alerts in this context?

> Implement Saved Searches a la ElasticSearch Percolator
> --
>
> Key: SOLR-4587
> URL: https://issues.apache.org/jira/browse/SOLR-4587
> Project: Solr
>  Issue Type: New Feature
>  Components: SearchComponents - other, SolrCloud
>Reporter: Otis Gospodnetic
> Fix For: 6.0
>
>
> Use Lucene MemoryIndex for this.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9076) Update to Hadoop 2.7.2

2016-06-28 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9076?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15352999#comment-15352999
 ] 

Mark Miller commented on SOLR-9076:
---

Okay, I tried again with a clean checkout and your patch. Things seem to work 
on this attempt, except the mapreduce job fails, but I think that is just 
SOLR-9073.

I still think it's really strange we need two versions of Netty though...

> Update to Hadoop 2.7.2
> --
>
> Key: SOLR-9076
> URL: https://issues.apache.org/jira/browse/SOLR-9076
> Project: Solr
>  Issue Type: Improvement
>Reporter: Mark Miller
>Assignee: Mark Miller
> Fix For: 6.2, master (7.0)
>
> Attachments: SOLR-9076-fixnetty.patch, SOLR-9076.patch, 
> SOLR-9076.patch, SOLR-9076.patch, SOLR-9076.patch, SOLR-9076.patch, 
> SOLR-9076.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6312) CloudSolrServer doesn't honor updatesToLeaders constructor argument

2016-06-28 Thread Christine Poerschke (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6312?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15352997#comment-15352997
 ] 

Christine Poerschke commented on SOLR-6312:
---

Refreshed patch file for SOLR-9090 (which is related to this ticket but also 
slightly different), with a view towards committing the change towards the end 
of this or beginning of next week. Questions, comments, reviews etc. welcome as 
usual. Thank you.

> CloudSolrServer doesn't honor updatesToLeaders constructor argument
> ---
>
> Key: SOLR-6312
> URL: https://issues.apache.org/jira/browse/SOLR-6312
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 4.9
>Reporter: Steve Davids
> Fix For: 4.10
>
> Attachments: SOLR-6312.patch
>
>
> The CloudSolrServer doesn't use the updatesToLeaders property - all SolrJ 
> requests are being sent to the shard leaders.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (LUCENE-7359) Add equals() and hashcode() to Explanation

2016-06-28 Thread Alan Woodward (JIRA)
Alan Woodward created LUCENE-7359:
-

 Summary: Add equals() and hashcode() to Explanation
 Key: LUCENE-7359
 URL: https://issues.apache.org/jira/browse/LUCENE-7359
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Alan Woodward
Priority: Minor


I don't think there's any reason *not* to add these?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-9090) solrj CloudSolrClient: add directUpdatesToLeadersOnly support

2016-06-28 Thread Christine Poerschke (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9090?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Christine Poerschke updated SOLR-9090:
--
Attachment: SOLR-9090.patch

Refreshed patch file (adding solr/CHANGES.txt entry), with a view towards 
committing the change towards the end of this or beginning of next week. 
Questions, comments, reviews etc. welcome as usual. Thank you.

> solrj CloudSolrClient: add directUpdatesToLeadersOnly support
> -
>
> Key: SOLR-9090
> URL: https://issues.apache.org/jira/browse/SOLR-9090
> Project: Solr
>  Issue Type: New Feature
>  Components: SolrJ
>Reporter: Christine Poerschke
>Assignee: Christine Poerschke
>Priority: Minor
> Attachments: SOLR-9090.patch, SOLR-9090.patch
>
>
> solrj CloudSolrClient: add directUpdatesToLeadersOnly support
> (Marvin Justice, Christine Poerschke)
> Proposed change:
> * Addition of a {{directUpdatesToLeadersOnly}} flag to allow clients to 
> request that direct updates be sent to the shard leaders and only to the 
> shard leaders.
> Motivation:
> * In a scenario where there is temporarily no shard leader the update request 
> will 'fail fast' allowing the client to handle retry logic.
> Related tickets:
> * SOLR-6312 concerns the (currently no longer used) {{updatesToLeaders}} 
> flag. The updatesToLeaders logic, however, appears to be slightly different 
> from the proposed directUpdatesToLeadersOnly logic: {{updatesToLeaders}} 
> indicates that sending to leaders is preferred but not mandatory, whereas 
> {{directUpdatesToLeadersOnly}} mandates sending to leaders only.
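
To make the proposed behaviour concrete, here is a rough SolrJ sketch of how a client might opt in. It is only an illustration: the builder toggle name below is a guess derived from the flag name, not the API from the attached patch.

{noformat}
import org.apache.solr.client.solrj.impl.CloudSolrClient;
import org.apache.solr.common.SolrInputDocument;

public class DirectUpdatesExample {
  public static void main(String[] args) throws Exception {
    // 'sendDirectUpdatesToShardLeadersOnly()' is hypothetical here; check the
    // committed SOLR-9090 API for the real method name.
    try (CloudSolrClient client = new CloudSolrClient.Builder()
        .withZkHost("zk1:2181,zk2:2181,zk3:2181")
        .sendDirectUpdatesToShardLeadersOnly()
        .build()) {
      client.setDefaultCollection("collection1");

      SolrInputDocument doc = new SolrInputDocument();
      doc.addField("id", "doc-1");

      // With the flag set, the update goes only to the shard leader; if the
      // leader is temporarily missing, the request fails fast so the caller
      // can apply its own retry logic.
      client.add(doc);
      client.commit();
    }
  }
}
{noformat}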



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Assigned] (LUCENE-7340) MemoryIndex.toString is broken if you enable payloads

2016-06-28 Thread David Smiley (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7340?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Smiley reassigned LUCENE-7340:


Assignee: David Smiley

> MemoryIndex.toString is broken if you enable payloads
> -
>
> Key: LUCENE-7340
> URL: https://issues.apache.org/jira/browse/LUCENE-7340
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: modules/highlighter
>Affects Versions: 5.4.1, 6.0.1, master (7.0)
>Reporter: Daniel Collins
>Assignee: David Smiley
>Priority: Minor
> Attachments: LUCENE-7340.diff, LUCENE-7340.diff
>
>
> Noticed this as we use Luwak, which creates a MemoryIndex(true, true) storing 
> both offsets and payloads (though in reality we never put any payloads in it).
> We used to use MemoryIndex.toString() for debugging and noticed it broke in 
> Lucene 5.x and beyond. I think LUCENE-6155 broke it when it added support 
> for payloads?
> Creating a default MemoryIndex (as all the tests currently do) works fine, as 
> does one with just offsets; it is only the payload-enabled version that is broken.
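
For reference, a minimal repro along the lines described above (not taken from the attached patch) would look something like this; the field name and text are arbitrary.

{noformat}
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.index.memory.MemoryIndex;

public class MemoryIndexToStringRepro {
  public static void main(String[] args) throws Exception {
    try (StandardAnalyzer analyzer = new StandardAnalyzer()) {
      // Payload-enabled MemoryIndex: storeOffsets = true, storePayloads = true.
      MemoryIndex mi = new MemoryIndex(true, true);
      mi.addField("body", "hello memory index", analyzer);
      // Per this report, the output is broken when payloads are enabled,
      // while default and offsets-only instances print fine.
      System.out.println(mi.toString());
    }
  }
}
{noformat}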



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-7883) MoreLikeThis is incompatible with facets

2016-06-28 Thread Michael Daum (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7883?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15352951#comment-15352951
 ] 

Michael Daum edited comment on SOLR-7883 at 6/28/16 1:10 PM:
-

My use case is to return similar tags given the ones provided in a document. 
The MLT search component of a /select instead seems to return pointers to 
other documents. Unfortunately, faceting these doesn't work anymore. A /select?q=id:xxx 
only returns facets of this query itself (which is of course a bit bogus), but not 
those of the results of the MLT component.

Note that this once worked perfectly in 4.2 or so. 


was (Author: nuddlegg):
My use case is to return similar tags given the ones provided in a document. 
The MLT search component of a /select rather seems to be returning pointers to 
other documents. Alas faceting these doesn't work anymore. A /select?q=id:xxx 
only returns facets of this query (which is of course a bit bogus) ... but not 
those of the results of the MLT component.

Note that this once perfectly worked in 4.2 or so. 

> MoreLikeThis is incompatible with facets
> 
>
> Key: SOLR-7883
> URL: https://issues.apache.org/jira/browse/SOLR-7883
> Project: Solr
>  Issue Type: Bug
>  Components: faceting, MoreLikeThis
>Affects Versions: 5.2.1
> Environment: Arch Linux / OpenJDK 7.u85_2.6.1-1
>Reporter: Thomas Seidl
>Assignee: Shalin Shekhar Mangar
>
> When using the {{MoreLikeThis}} request handler, it doesn't seem possible to 
> also have facets. This worked in Solr 4, but seems to be broken now.
> Example:
> This works: {{?qt=mlt=id:item1=content}}
> This doesn't: {{?qt=mlt=id:item1=content=true}}
> (Yes, you don't even need to specify any facet fields/ranges/queries. The 
> {{q}} query just has to match an item.)
> While the latter will actually return the same result set as the former, the 
> HTTP status is 500 and the following error is included in the response:
> {noformat}
> java.lang.NullPointerException
>   at 
> org.apache.solr.request.SimpleFacets.getHeatmapCounts(SimpleFacets.java:1753)
>   at 
> org.apache.solr.request.SimpleFacets.getFacetCounts(SimpleFacets.java:289)
>   at 
> org.apache.solr.handler.MoreLikeThisHandler.handleRequestBody(MoreLikeThisHandler.java:233)
>   at 
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:143)
>   at org.apache.solr.core.SolrCore.execute(SolrCore.java:2064)
>   at org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:654)
>   at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:450)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:227)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:196)
>   at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1652)
>   at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:585)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
>   at 
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:577)
>   at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:223)
>   at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1127)
>   at 
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:515)
>   at 
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
>   at 
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1061)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
>   at 
> org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:215)
>   at 
> org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:110)
>   at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:97)
>   at org.eclipse.jetty.server.Server.handle(Server.java:497)
>   at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:310)
>   at 
> org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:257)
>   at 
> org.eclipse.jetty.io.AbstractConnection$2.run(AbstractConnection.java:540)
>   at 
> org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:635)
>   at 
> org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:555)
>   at java.lang.Thread.run(Thread.java:745)
> {noformat}
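
For anyone trying to reproduce this from SolrJ, a rough sketch of the failing request is below. The handler path (/mlt), the collection URL, the field name, and the mlt.* parameter choices are assumptions for illustration only, not copied from the report above.

{noformat}
import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.client.solrj.response.QueryResponse;

public class MltFacetRepro {
  public static void main(String[] args) throws Exception {
    try (HttpSolrClient client =
             new HttpSolrClient("http://localhost:8983/solr/collection1")) {
      SolrQuery q = new SolrQuery("id:item1");
      q.setRequestHandler("/mlt");   // assumes a MoreLikeThisHandler at /mlt
      q.set("mlt.fl", "content");    // field(s) to use for similarity
      q.setFacet(true);              // per this report, adding this triggers the 500/NPE
      QueryResponse rsp = client.query(q);
      System.out.println(rsp.getResults());
    }
  }
}
{noformat}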



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

[jira] [Commented] (SOLR-7883) MoreLikeThis is incompatible with facets

2016-06-28 Thread Michael Daum (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7883?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15352951#comment-15352951
 ] 

Michael Daum commented on SOLR-7883:


My use case is to return similar tags given the ones provided in a document. 
The MLT search component of a /select instead seems to return pointers to 
other documents. Unfortunately, faceting these doesn't work anymore. A /select?q=id:xxx 
only returns facets of this query itself (which is of course a bit bogus), but not 
those of the results of the MLT component.

Note that this once worked perfectly in 4.2 or so. 

> MoreLikeThis is incompatible with facets
> 
>
> Key: SOLR-7883
> URL: https://issues.apache.org/jira/browse/SOLR-7883
> Project: Solr
>  Issue Type: Bug
>  Components: faceting, MoreLikeThis
>Affects Versions: 5.2.1
> Environment: Arch Linux / OpenJDK 7.u85_2.6.1-1
>Reporter: Thomas Seidl
>Assignee: Shalin Shekhar Mangar
>
> When using the {{MoreLikeThis}} request handler, it doesn't seem possible to 
> also have facets. This worked in Solr 4, but seems to be broken now.
> Example:
> This works: {{?qt=mlt=id:item1=content}}
> This doesn't: {{?qt=mlt=id:item1=content=true}}
> (Yes, you don't even need to specify any facet fields/ranges/queries. The 
> {{q}} query just has to match an item.)
> While the latter will actually return the same result set as the former, the 
> HTTP status is 500 and the following error is included in the response:
> {noformat}
> java.lang.NullPointerException
>   at 
> org.apache.solr.request.SimpleFacets.getHeatmapCounts(SimpleFacets.java:1753)
>   at 
> org.apache.solr.request.SimpleFacets.getFacetCounts(SimpleFacets.java:289)
>   at 
> org.apache.solr.handler.MoreLikeThisHandler.handleRequestBody(MoreLikeThisHandler.java:233)
>   at 
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:143)
>   at org.apache.solr.core.SolrCore.execute(SolrCore.java:2064)
>   at org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:654)
>   at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:450)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:227)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:196)
>   at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1652)
>   at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:585)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
>   at 
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:577)
>   at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:223)
>   at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1127)
>   at 
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:515)
>   at 
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
>   at 
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1061)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
>   at 
> org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:215)
>   at 
> org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:110)
>   at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:97)
>   at org.eclipse.jetty.server.Server.handle(Server.java:497)
>   at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:310)
>   at 
> org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:257)
>   at 
> org.eclipse.jetty.io.AbstractConnection$2.run(AbstractConnection.java:540)
>   at 
> org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:635)
>   at 
> org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:555)
>   at java.lang.Thread.run(Thread.java:745)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-6.x-Linux (32bit/jdk1.8.0_92) - Build # 989 - Failure!

2016-06-28 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Linux/989/
Java: 32bit/jdk1.8.0_92 -client -XX:+UseConcMarkSweepGC

1 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.solr.handler.TestReplicationHandler

Error Message:
ObjectTracker found 2 object(s) that were not released!!! [NRTCachingDirectory, 
NRTCachingDirectory]

Stack Trace:
java.lang.AssertionError: ObjectTracker found 2 object(s) that were not 
released!!! [NRTCachingDirectory, NRTCachingDirectory]
at __randomizedtesting.SeedInfo.seed([B53D6FBB8C4D54E8]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at org.junit.Assert.assertNull(Assert.java:551)
at 
org.apache.solr.SolrTestCaseJ4.teardownTestCases(SolrTestCaseJ4.java:257)
at sun.reflect.GeneratedMethodAccessor33.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:834)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at java.lang.Thread.run(Thread.java:745)




Build Log:
[...truncated 11607 lines...]
   [junit4] Suite: org.apache.solr.handler.TestReplicationHandler
   [junit4]   2> Creating dataDir: 
/home/jenkins/workspace/Lucene-Solr-6.x-Linux/solr/build/solr-core/test/J1/temp/solr.handler.TestReplicationHandler_B53D6FBB8C4D54E8-001/init-core-data-001
   [junit4]   2> 1327895 INFO  
(SUITE-TestReplicationHandler-seed#[B53D6FBB8C4D54E8]-worker) [] 
o.a.s.SolrTestCaseJ4 Randomized ssl (false) and clientAuth (false) via: 
@org.apache.solr.SolrTestCaseJ4$SuppressSSL(bugUrl=None)
   [junit4] IGNOR/A 0.00s J1 | 
TestReplicationHandler.doTestIndexFetchOnMasterRestart
   [junit4]> Assumption #1: 'awaitsfix' test group is disabled 
(@AwaitsFix(bugUrl=https://issues.apache.org/jira/browse/SOLR-9036))
   [junit4]   2> 1327898 INFO  
(TEST-TestReplicationHandler.doTestIndexFetchWithMasterUrl-seed#[B53D6FBB8C4D54E8])
 [] o.a.s.SolrTestCaseJ4 ###Starting doTestIndexFetchWithMasterUrl
   [junit4]   2> 1327898 INFO  
(TEST-TestReplicationHandler.doTestIndexFetchWithMasterUrl-seed#[B53D6FBB8C4D54E8])
 [] o.a.s.SolrTestCaseJ4 Writing core.properties file to 
/home/jenkins/workspace/Lucene-Solr-6.x-Linux/solr/build/solr-core/test/J1/temp/solr.handler.TestReplicationHandler_B53D6FBB8C4D54E8-001/solr-instance-001/collection1
   [junit4]   2> 1327901 INFO  
(TEST-TestReplicationHandler.doTestIndexFetchWithMasterUrl-seed#[B53D6FBB8C4D54E8])
 [] o.e.j.s.Server jetty-9.3.8.v20160314
   [junit4]   2> 1327902 INFO  
(TEST-TestReplicationHandler.doTestIndexFetchWithMasterUrl-seed#[B53D6FBB8C4D54E8])
 [] o.e.j.s.h.ContextHandler Started 
o.e.j.s.ServletContextHandler@ab51af{/solr,null,AVAILABLE}
   [junit4]   2> 1327905 INFO  
(TEST-TestReplicationHandler.doTestIndexFetchWithMasterUrl-seed#[B53D6FBB8C4D54E8])
 [] o.e.j.s.ServerConnector Started 
ServerConnector@e5ce17{HTTP/1.1,[http/1.1]}{127.0.0.1:40543}
   [junit4]   2> 1327905 INFO  

[jira] [Commented] (SOLR-8858) SolrIndexSearcher#doc() Completely Ignores Field Filters Unless Lazy Field Loading is Enabled

2016-06-28 Thread Shalin Shekhar Mangar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8858?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15352910#comment-15352910
 ] 

Shalin Shekhar Mangar commented on SOLR-8858:
-

There are many tests that fail after applying this patch on master. I don't 
have the time to dig into this right now. Caleb, if you can fix the failures, 
I'd be happy to commit this patch.

> SolrIndexSearcher#doc() Completely Ignores Field Filters Unless Lazy Field 
> Loading is Enabled
> -
>
> Key: SOLR-8858
> URL: https://issues.apache.org/jira/browse/SOLR-8858
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 4.6, 4.10, 5.5
>Reporter: Caleb Rackliffe
>  Labels: easyfix
>
> If {{enableLazyFieldLoading=false}}, a perfectly valid fields filter will be 
> ignored, and we'll create a {{DocumentStoredFieldVisitor}} without it.
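
As a concrete illustration of the intended behaviour (a sketch only, not the actual SolrIndexSearcher code or the attached patch), the requested field set should drive the stored-field visitor regardless of the lazy-loading setting; the helper and its names below are made up for this example.

{noformat}
import java.io.IOException;
import java.util.Set;
import org.apache.lucene.document.Document;
import org.apache.lucene.document.DocumentStoredFieldVisitor;
import org.apache.lucene.index.IndexReader;

final class FilteredDocLoader {
  // Pick the visitor from the requested field set, independent of
  // enableLazyFieldLoading.
  static Document loadDoc(IndexReader reader, int docId, Set<String> fields)
      throws IOException {
    DocumentStoredFieldVisitor visitor = (fields == null)
        ? new DocumentStoredFieldVisitor()        // no filter: load all fields
        : new DocumentStoredFieldVisitor(fields); // honor the fields filter
    reader.document(docId, visitor);
    return visitor.getDocument();
  }
}
{noformat}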



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org


