[jira] [Commented] (LUCENE-6430) FilterPath needs hashCode/equals

2015-04-17 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6430?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14499078#comment-14499078
 ] 

ASF subversion and git services commented on LUCENE-6430:
-

Commit 1674177 from [~rcmuir] in branch 'dev/trunk'
[ https://svn.apache.org/r1674177 ]

LUCENE-6430: fix URI delegation for non-ascii files

 FilterPath needs hashCode/equals
 

 Key: LUCENE-6430
 URL: https://issues.apache.org/jira/browse/LUCENE-6430
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Robert Muir
 Fix For: Trunk, 5.2

 Attachments: LUCENE-6430.patch


 It's defined here:
 https://docs.oracle.com/javase/7/docs/api/java/nio/file/Path.html#equals%28java.lang.Object%29
 Currently we always use Object.equals/hashCode.
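For reference, a minimal sketch (names are illustrative; this is not the actual Lucene FilterPath) of delegating equals/hashCode to the wrapped Path, per the contract linked above:

{code}
// Minimal sketch only (not the actual Lucene FilterPath): a wrapper that
// delegates equals() and hashCode() to the wrapped Path, as the
// java.nio.file.Path contract requires.
import java.nio.file.Path;

class DelegatingPathSketch {
  final Path delegate;

  DelegatingPathSketch(Path delegate) {
    this.delegate = delegate;
  }

  @Override
  public boolean equals(Object obj) {
    if (this == obj) return true;
    if (obj == null || getClass() != obj.getClass()) return false;
    return delegate.equals(((DelegatingPathSketch) obj).delegate);
  }

  @Override
  public int hashCode() {
    return delegate.hashCode();
  }
}
{code}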



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (LUCENE-6285) test seeds are not reproducing.

2015-04-17 Thread Robert Muir (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6285?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Muir resolved LUCENE-6285.
-
Resolution: Duplicate

I have a patch here: LUCENE-6431

 test seeds are not reproducing.
 ---

 Key: LUCENE-6285
 URL: https://issues.apache.org/jira/browse/LUCENE-6285
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Robert Muir

 even for very simple tests.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (LUCENE-6431) Make extrasfs reproducible

2015-04-17 Thread Robert Muir (JIRA)
Robert Muir created LUCENE-6431:
---

 Summary: Make extrasfs reproducible
 Key: LUCENE-6431
 URL: https://issues.apache.org/jira/browse/LUCENE-6431
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Robert Muir


Today this is really bad: it can easily cause non-reproducible test failures. 
It's a per-class thing, but its decisions are based on previous events happening 
for that class (e.g. directory operations). 

Even using the filename can't work: it's set up so early in the process, before 
the test framework even ensures java.io.tmpdir and similar exist. Even 
disregarding that, test files use a temp directory facility and those names are 
not reproducible (they depend on what already exists, e.g. from a previous test 
run).




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-7413) Website: downloading past releases is harder than it should be

2015-04-17 Thread Steve Rowe (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7413?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Rowe updated SOLR-7413:
-
Description: 
Clicking on the Download button at the top of every Solr website page takes 
you to [http://lucene.apache.org/solr/mirrors-solr-latest-redir.html] (let's 
call it the download-redirect page), which pauses for 3 seconds and then 
auto-redirects to the Apache download mirror page for the latest Solr release.  
The download-redirect page has info about downloading past releases, but there 
are problems with the current setup:

# The 3 second auto-redirect doesn't allow enough time to read the page before 
it's gone.
# Firefox (latest version, on OS X and Windows) doesn't include the page in its 
browser history, so you can't go back - clicking the back button will take you 
to the page you were on when you clicked the Download button, not back to the 
download-redirect page.
# Internet Explorer and Chrome include the download-redirect page in their 
history, so clicking the back button will go there, but then after three 
seconds you get redirected to the Apache download mirrors page, whack-a-mole 
style.

When I was putting the download-redirect page together, I guess I only tested 
on Safari on OS X 10.10.  This browser keeps the download-redirect page in its 
history, so clicking the back button after the auto-redirect takes you to the 
mirror pages will take you back to the download-redirect page, and the 
auto-redirect never recurs.


  was:
Clicking on the Download button at the top of every Solr website page takes 
you to [http://lucene.apache.org/solr/mirrors-solr-latest-redir.html] (let's 
call it the download-redirect page), which pauses for 3 seconds and then 
auto-redirects to the Apache download mirror page for the latest Solr release.  
The download-redirect page has info about downloading past releases, but there 
are problems with the current setup:

# The 3 second auto-redirect doesn't allow enough time to read the page before 
it's gone.
# Firefox (latest version, on OS X and Windows) doesn't include the page in its 
browser history, so you can't go back - clicking the back button will take you 
to the page you were on when you clicked the Download button, not back to the 
download-redirect page.
# Internet Explorer and Chrome include the download-redirect page in their 
history, so clicking the back button will go there, but then after three 
seconds you get redirected to the Apache download mirrors page, whack-a-mole 
style.

When I was putting this page together, I guess I only tested on Safari on OS X 
10.10.  This browser keeps the download-redirect page in its history, so 
clicking the back button after the auto-redirect takes you to the mirror pages 
will take you back to the download-redirect page, and the auto-redirect never 
recurs.



 Website: downloading past releases is harder than it should be
 --

 Key: SOLR-7413
 URL: https://issues.apache.org/jira/browse/SOLR-7413
 Project: Solr
  Issue Type: Bug
  Components: website
Reporter: Steve Rowe
Assignee: Steve Rowe
Priority: Minor

 Clicking on the Download button at the top of every Solr website page takes 
 you to [http://lucene.apache.org/solr/mirrors-solr-latest-redir.html] (let's 
 call it the download-redirect page), which pauses for 3 seconds and then 
 auto-redirects to the Apache download mirror page for the latest Solr 
 release.  The download-redirect page has info about downloading past 
 releases, but there are problems with the current setup:
 # The 3 second auto-redirect doesn't allow enough time to read the page 
 before it's gone.
 # Firefox (latest version, on OS X and Windows) doesn't include the page in 
 its browser history, so you can't go back - clicking the back button will 
 take you to the page you were on when you clicked the Download button, not 
 back to the download-redirect page.
 # Internet Explorer and Chrome include the download-redirect page in their 
 history, so clicking the back button will go there, but then after three 
 seconds you get redirected to the Apache download mirrors page, whack-a-mole 
 style.
 When I was putting the download-redirect page together, I guess I only tested 
 on Safari on OS X 10.10.  This browser keeps the download-redirect page in 
 its history, so clicking the back button after the auto-redirect takes you to 
 the mirror pages will take you back to the download-redirect page, and the 
 auto-redirect never recurs.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-6432) Add SuppressReproduceLine

2015-04-17 Thread Robert Muir (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6432?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Muir updated LUCENE-6432:

Attachment: LUCENE-6432.patch

 Add SuppressReproduceLine
 -

 Key: LUCENE-6432
 URL: https://issues.apache.org/jira/browse/LUCENE-6432
 Project: Lucene - Core
  Issue Type: Task
Reporter: Robert Muir
 Attachments: LUCENE-6432.patch


 {code}
   /**
* Suppress the default {@code reproduce with: ant test...}
* Your own listener can be added as needed for your build.
*/
   @Documented
   @Inherited
   @Retention(RetentionPolicy.RUNTIME)
   @Target(ElementType.TYPE)
   public @interface SuppressReproduceLine {}
 {code}
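A hypothetical usage sketch (the test class name and listener wiring are made up for illustration): annotate a suite to drop the default reproduce line when the build installs its own listener.

{code}
// Hypothetical usage (class name is illustrative): suppress the default
// "reproduce with: ant test..." line for this suite; the build is expected to
// register its own failure listener instead.
@SuppressReproduceLine
public class MyBuildIntegratedTest extends LuceneTestCase {
  public void testSomething() {
    // a failure here would be reported by the build's own listener
  }
}
{code}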



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-7243) 4.10.3 SolrJ is throwing a SERVER_ERROR exception instead of BAD_REQUEST

2015-04-17 Thread Shawn Heisey (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7243?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shawn Heisey updated SOLR-7243:
---
Attachment: SOLR-7243.patch

Updated patch.  Still need to run tests and precommit.

 4.10.3 SolrJ is throwing a SERVER_ERROR exception instead of BAD_REQUEST
 

 Key: SOLR-7243
 URL: https://issues.apache.org/jira/browse/SOLR-7243
 Project: Solr
  Issue Type: Bug
Affects Versions: 4.10.3
Reporter: Hrishikesh Gadre
Priority: Minor
 Attachments: SOLR-7243.patch, SOLR-7243.patch, SOLR-7243.patch, 
 SOLR-7243.patch, SOLR-7243.patch


 We found this problem while upgrading Solr from 4.4 to 4.10.3. Our 
 integration test is similar to this Solr unit test:
 https://github.com/apache/lucene-solr/blob/trunk/solr/core/src/test/org/apache/solr/schema/TestCloudSchemaless.java
 Specifically, we test whether the Solr server returns BAD_REQUEST when provided 
 with incorrect input. The only difference is that it uses CloudSolrServer 
 instead of HttpSolrServer. CloudSolrServer always returns the SERVER_ERROR 
 error code. Please take a look at
 https://github.com/apache/lucene-solr/blob/817303840fce547a1557e330e93e5a8ac0618f34/solr/solrj/src/java/org/apache/solr/client/solrj/impl/CloudSolrServer.java#L359
 I think we can improve the error handling by checking whether the first exception 
 in the list is of type SolrException and, if that is the case, returning the error 
 code associated with that exception. If the first exception is not of type 
 SolrException, then we can return the SERVER_ERROR code. 
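 A rough sketch of the handling described above (the helper name is made up; this is not the actual CloudSolrServer code):

 {code}
 // Illustrative helper only (not the actual CloudSolrServer code): report the
 // first exception's own error code when it is a SolrException, otherwise fall
 // back to SERVER_ERROR.
 private static SolrException firstExceptionToReport(List<Throwable> exceptions) {
   Throwable first = exceptions.get(0);
   if (first instanceof SolrException) {
     SolrException se = (SolrException) first;
     return new SolrException(SolrException.ErrorCode.getErrorCode(se.code()),
         se.getMessage(), se);
   }
   return new SolrException(SolrException.ErrorCode.SERVER_ERROR,
       first.getMessage(), first);
 }
 {code}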



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (LUCENE-6432) Add SuppressReproduceLine

2015-04-17 Thread Robert Muir (JIRA)
Robert Muir created LUCENE-6432:
---

 Summary: Add SuppressReproduceLine
 Key: LUCENE-6432
 URL: https://issues.apache.org/jira/browse/LUCENE-6432
 Project: Lucene - Core
  Issue Type: Task
Reporter: Robert Muir


{code}
  /**
   * Suppress the default {@code reproduce with: ant test...}
   * Your own listener can be added as needed for your build.
   */
  @Documented
  @Inherited
  @Retention(RetentionPolicy.RUNTIME)
  @Target(ElementType.TYPE)
  public @interface SuppressReproduceLine {}
{code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-6431) Make extrasfs reproducible

2015-04-17 Thread Robert Muir (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6431?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Muir updated LUCENE-6431:

Attachment: LUCENE-6431.patch

Here is a modification that is reproducible. Decisions are based solely upon the 
target test class name. That means that, randomly, a test class either does not 
get impacted at all or gets completely terrorized by extras. And the name of the 
file added is always extra0, since it happens on creation.
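Roughly, the idea (illustrative only; the exact condition in the attached patch may differ):

{code}
// Illustrative only -- not the attached patch. The decision is a pure function
// of the target test class name and the run's master seed, so re-running with
// the same seed makes the same decision for the same class.
static boolean terrorizeWithExtras(Class<?> targetTestClass, long masterSeed) {
  int hash = targetTestClass.toString().hashCode() ^ (int) masterSeed;
  return (hash & 3) == 0; // stand-in condition: roughly one class in four gets extras
}
{code}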

 Make extrasfs reproducible
 --

 Key: LUCENE-6431
 URL: https://issues.apache.org/jira/browse/LUCENE-6431
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Robert Muir
 Attachments: LUCENE-6431.patch


 Today this is really bad: it can easily cause non-reproducible test failures. 
 It's a per-class thing, but its decisions are based on previous events 
 happening for that class (e.g. directory operations). 
 Even using the filename can't work: it's set up so early in the process, before 
 the test framework even ensures java.io.tmpdir and similar exist. Even 
 disregarding that, test files use a temp directory facility and those names 
 are not reproducible (they depend on what already exists, e.g. from a 
 previous test run).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-7408) Race condition can cause a config directory listener to be removed

2015-04-17 Thread Shai Erera (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7408?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14499217#comment-14499217
 ] 

Shai Erera commented on SOLR-7408:
--

bq. Though I think I understand what you're saying here, can you elaborate more 
on this?

If we wanted to change the code such that we put a listener in the map on a 
SolrCore creation, and remove it from the map on a SolrCore close, I believe we 
wouldn't be running into such concurrency issues. In a sense, this is what is 
done when all is *good*: a SolrCore puts a listener in its ctor, and removes it 
in its close().

But if something goes *wrong*, we may leave dangling listeners of SolrCore 
instances that no longer exist. This is what I believe 
{{CoreAdminHandler.handleCreateAction}} attempts to do -- if a core creation 
failed, it attempts to unregister all of a configDir's listeners from the map, 
and lets {{unregister}} decide whether the entry itself can be removed. This 
ensures that we won't be left with dangling listeners that will never be 
released -- what I referred to as leaking listeners.

The code in {{unregister}} relies on the same logic that introduces the bug -- 
if there is a core in SolrCores which references this configDir, remove all 
listeners. The problem is that a core registers a listener before it is put in 
SolrCores, hence the race condition.

I would personally prefer that we stop removing all listeners and let a core 
take care of itself, but I don't know how safe the Solr code is in that regard. 
I.e. do all places that create a SolrCore clean up after it in the event of a 
failure? Clearly {{CoreAdminHandler.handleCreateAction}} doesn't, which got me 
thinking about what other places don't do that as well.

But, if we want to change the logic like that, we can certainly look at all the 
places that do {{new SolrCore(...)}} and make sure they call 
{{SolrCore.close()}} in the event of any failure.
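As an illustration of that pattern (not actual Solr code; the factory name is made up):

{code}
// Illustrative pattern only (not actual Solr code): any path that constructs a
// SolrCore cleans up on failure, so the config-directory listener registered in
// the constructor gets removed again via close().
SolrCore core = null;
boolean success = false;
try {
  core = createSolrCore();   // hypothetical factory standing in for new SolrCore(...)
  // ... register the core, etc. ...
  success = true;
} finally {
  if (!success && core != null) {
    core.close();            // unregisters the listener the constructor added
  }
}
{code}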

 Race condition can cause a config directory listener to be removed
 --

 Key: SOLR-7408
 URL: https://issues.apache.org/jira/browse/SOLR-7408
 Project: Solr
  Issue Type: Bug
  Components: SolrCloud
Reporter: Shai Erera
Assignee: Shai Erera
 Attachments: SOLR-7408.patch, SOLR-7408.patch


 This has been reported here: http://markmail.org/message/ynkm2axkdprppgef, 
 and I was able to reproduce it in a test, although I am only able to 
 reproduce if I put break points and manually simulate the problematic context 
 switches.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-5.1-Linux (64bit/jdk1.7.0_76) - Build # 282 - Failure!

2015-04-17 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.1-Linux/282/
Java: 64bit/jdk1.7.0_76 -XX:+UseCompressedOops -XX:+UseSerialGC

1 tests failed.
FAILED:  org.apache.solr.cloud.ChaosMonkeyNothingIsSafeTest.test

Error Message:
document count mismatch.  control=627 sum(shards)=626 cloudClient=626

Stack Trace:
java.lang.AssertionError: document count mismatch.  control=627 sum(shards)=626 
cloudClient=626
at 
__randomizedtesting.SeedInfo.seed([A5A4BF5807594830:2DF08082A9A525C8]:0)
at org.junit.Assert.fail(Assert.java:93)
at 
org.apache.solr.cloud.AbstractFullDistribZkTestBase.checkShardConsistency(AbstractFullDistribZkTestBase.java:1347)
at 
org.apache.solr.cloud.ChaosMonkeyNothingIsSafeTest.test(ChaosMonkeyNothingIsSafeTest.java:240)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1627)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:872)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:886)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:960)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:935)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:845)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:747)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:792)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

Re: Updating CWIKI jdoc link macros -- was: Re: Solr Ref Guide for 5.1

2015-04-17 Thread Chris Hostetter

FYI: Uwe replied to me privately a few hours ago that he had done this.

: Date: Wed, 15 Apr 2015 09:14:11 -0700 (MST)
: From: Chris Hostetter hossman_luc...@fucit.org
: To: u...@thetaphi.de
: Cc: Lucene Dev dev@lucene.apache.org
: Subject: Updating CWIKI jdoc link macros -- was: Re: Solr Ref Guide for 5.1
: 
: 
: Uwe: can you please update the confluence link macros for the lucene/solr 
: javadoc urls to reflect 5_1_0 ?
: 
:  To update the shortcut links to point to the current version, remove 
:  and recreate the shortcut links with keys SolrReleaseDocs and 
:  LuceneReleaseDocs , making their expanded values include the 
:  underscore-separated release version followed by a slash, e.g. for the 
:  4.8 release, the expanded values should be 
:  http://lucene.apache.org/solr/4_8_0/ and  
:  http://lucene.apache.org/core/4_8_0/, respectively.  See the Confluence  
:  documentation for instructions.  Note: Uwe Schindler says that 
:  Confluence has a bug that disallows editing shortcut links' expanded 
:  values - that's why you have to remove and then recreate the shortcut 
links.
: 
: 
https://cwiki.apache.org/confluence/display/solr/Internal+-+How+To+Publish+This+Documentation#Internal-HowToPublishThisDocumentation-Pre-publicationActions
: 
: 
https://cwiki.apache.org/confluence/display/solr/Internal+-+Maintaining+Documentation
: 
: 
: 
: : Date: Tue, 14 Apr 2015 10:42:50 -0500
: : From: Cassandra Targett casstarg...@gmail.com
: : Reply-To: dev@lucene.apache.org
: : To: dev@lucene.apache.org
: : Subject: Re: Solr Ref Guide for 5.1
: : 
: : Just to let folks know, I'm targeting early Thursday morning for making a
: : release candidate for the Ref Guide and putting it up for a vote.
: : 
: : If anyone really needs more time, please let me know.
: : 
: : Cassandra
: : 
: : On Thu, Apr 9, 2015 at 6:11 PM, Yonik Seeley ysee...@gmail.com wrote:
: : 
: :  On Thu, Apr 9, 2015 at 6:42 PM, Cassandra Targett casstarg...@gmail.com
: :  wrote:
: :   Thanks Yonik. I think there are a lot of places that use 'curl'; not
: :  sure if
: :   there will be time to fix them for 5.1, but if not we'll add it to the
: :   backlog list for future edits.
: :  
: :   Since you replied :) - are you going to have a chance to add anything on
: :  the
: :   other new stuff you added, like SOLR-7214, 7218, or 7212?
: : 
: :  Yep, I do plan on it... just not sure of the timing (this weekend is
: :  tax weekend for me... can't put that off much longer ;-)
: : 
: :  -Yonik
: : 
: :  -
: :  To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
: :  For additional commands, e-mail: dev-h...@lucene.apache.org
: : 
: : 
: : 
: 
: -Hoss
: http://www.lucidworks.com/
: 

-Hoss
http://www.lucidworks.com/

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-6375) facet_ranges count for before,after,between differ if #shards > 1

2015-04-17 Thread Hoss Man (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6375?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hoss Man resolved SOLR-6375.

   Resolution: Duplicate
Fix Version/s: 5.2
 Assignee: Tomás Fernández Löbbe

thanks for reporting this ... sorry it slipped through the cracks for so long, 
but it looks like another user recently reported the same problem and Tomás 
spotted the quick fix: SOLR-7412

 facet_ranges count for before,after,between differ if #shards > 1
 ---

 Key: SOLR-6375
 URL: https://issues.apache.org/jira/browse/SOLR-6375
 Project: Solr
  Issue Type: Bug
  Components: SearchComponents - other, SolrCloud
Affects Versions: Trunk
Reporter: Vamsee Yarlagadda
Assignee: Tomás Fernández Löbbe
 Fix For: 5.2


 I am currently running the test 
 https://github.com/apache/lucene-solr/blob/trunk/solr/core/src/test/org/apache/solr/request/SimpleFacetsTest.java#L859
  in a multi-shard environment and I notice some discrepancies in the facet_range 
 counts for the after, before, and between tags if the # of shards != 1.
 Running the 
 query (https://github.com/apache/lucene-solr/blob/trunk/solr/core/src/test/org/apache/solr/request/SimpleFacetsTest.java#L874)
  on #shards = 1 matches the expected output:
 {code}
 <?xml version="1.0" encoding="UTF-8"?>
 <response>
 <lst name="responseHeader">
   <int name="status">0</int>
   <int name="QTime">12</int>
   <lst name="params">
     <str name="facet.range.include">lower</str>
     <str name="facet.range.other">all</str>
     <str name="facet">true</str>
     <str name="indent">true</str>
     <str name="q">*:*</str>
     <str name="facet.range.start">1976-07-01T00:00:00.000Z</str>
     <str name="facet.range">a_tdt</str>
     <str name="facet.range.end">1976-07-16T00:00:00.000Z</str>
     <str name="facet.range.gap">+1DAY</str>
     <str name="wt">xml</str>
     <str name="rows">0</str>
   </lst>
 </lst>
 <result name="response" numFound="63" start="0">
 </result>
 <lst name="facet_counts">
   <lst name="facet_queries"/>
   <lst name="facet_fields"/>
   <lst name="facet_dates"/>
   <lst name="facet_ranges">
     <lst name="a_tdt">
       <lst name="counts">
         <int name="1976-07-01T00:00:00Z">1</int>
         <int name="1976-07-02T00:00:00Z">0</int>
         <int name="1976-07-03T00:00:00Z">0</int>
         <int name="1976-07-04T00:00:00Z">1</int>
         <int name="1976-07-05T00:00:00Z">2</int>
         <int name="1976-07-06T00:00:00Z">0</int>
         <int name="1976-07-07T00:00:00Z">1</int>
         <int name="1976-07-08T00:00:00Z">0</int>
         <int name="1976-07-09T00:00:00Z">0</int>
         <int name="1976-07-10T00:00:00Z">0</int>
         <int name="1976-07-11T00:00:00Z">0</int>
         <int name="1976-07-12T00:00:00Z">0</int>
         <int name="1976-07-13T00:00:00Z">2</int>
         <int name="1976-07-14T00:00:00Z">0</int>
         <int name="1976-07-15T00:00:00Z">1</int>
       </lst>
       <str name="gap">+1DAY</str>
       <date name="start">1976-07-01T00:00:00Z</date>
       <date name="end">1976-07-16T00:00:00Z</date>
       <int name="before">1</int>
       <int name="after">1</int>
       <int name="between">8</int>
     </lst>
   </lst>
 </lst>
 </response>
 {code}
 Running the same query as above on #shards > 1, the facet_range counts for 
 after, before, and between differ:
 {code}
 <response>
 <lst name="responseHeader">
   <int name="status">0</int>
   <int name="QTime">7</int>
   <lst name="params">
     <str name="facet.range.include">lower</str>
     <str name="facet.range.other">all</str>
     <str name="facet">true</str>
     <str name="indent">true</str>
     <str name="q">*:*</str>
     <str name="facet.range.start">1976-07-01T00:00:00.000Z</str>
     <str name="facet.range">a_tdt</str>
     <str name="facet.range.end">1976-07-16T00:00:00.000Z</str>
     <str name="facet.range.gap">+1DAY</str>
     <str name="wt">xml</str>
     <str name="rows">0</str>
   </lst>
 </lst>
 <result name="response" numFound="63" start="0" maxScore="1.0">
 </result>
 <lst name="facet_counts">
   <lst name="facet_queries"/>
   <lst name="facet_fields"/>
   <lst name="facet_dates"/>
   <lst name="facet_ranges">
     <lst name="a_tdt">
       <lst name="counts">
         <int name="1976-07-01T00:00:00Z">1</int>
         <int name="1976-07-02T00:00:00Z">0</int>
         <int name="1976-07-03T00:00:00Z">0</int>
         <int name="1976-07-04T00:00:00Z">1</int>
         <int name="1976-07-05T00:00:00Z">2</int>
         <int name="1976-07-06T00:00:00Z">0</int>
         <int name="1976-07-07T00:00:00Z">1</int>
         <int name="1976-07-08T00:00:00Z">0</int>
         <int name="1976-07-09T00:00:00Z">0</int>
         <int name="1976-07-10T00:00:00Z">0</int>
         <int name="1976-07-11T00:00:00Z">0</int>
         <int name="1976-07-12T00:00:00Z">0</int>
         <int name="1976-07-13T00:00:00Z">2</int>
         <int name="1976-07-14T00:00:00Z">0</int>
         <int name="1976-07-15T00:00:00Z">1</int>
       </lst>
       <str name="gap">+1DAY</str>
       <date name="start">1976-07-01T00:00:00Z</date>
       <date name="end">1976-07-16T00:00:00Z</date>
       <int name="before">1</int>
       <int name="after">0</int>
       <int name="between">3</int>
     </lst>
   </lst>
 </lst>
 </response>
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (LUCENE-6431) Make extrasfs reproducible

2015-04-17 Thread Ryan Ernst (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6431?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14499244#comment-14499244
 ] 

Ryan Ernst commented on LUCENE-6431:


+1!

 Make extrasfs reproducible
 --

 Key: LUCENE-6431
 URL: https://issues.apache.org/jira/browse/LUCENE-6431
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Robert Muir
 Attachments: LUCENE-6431.patch


 Today this is really bad: it can easily cause non-reproducible test failures. 
 It's a per-class thing, but its decisions are based on previous events 
 happening for that class (e.g. directory operations). 
 Even using the filename can't work: it's set up so early in the process, before 
 the test framework even ensures java.io.tmpdir and similar exist. Even 
 disregarding that, test files use a temp directory facility and those names 
 are not reproducible (they depend on what already exists, e.g. from a 
 previous test run).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-5.x-Windows (64bit/jdk1.8.0_40) - Build # 4570 - Failure!

2015-04-17 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-Windows/4570/
Java: 64bit/jdk1.8.0_40 -XX:-UseCompressedOops -XX:+UseConcMarkSweepGC

3 tests failed.
FAILED:  junit.framework.TestSuite.org.apache.solr.core.TestLazyCores

Error Message:
ERROR: SolrIndexSearcher opens=51 closes=50

Stack Trace:
java.lang.AssertionError: ERROR: SolrIndexSearcher opens=51 closes=50
at __randomizedtesting.SeedInfo.seed([325806DE56F740AD]:0)
at org.junit.Assert.fail(Assert.java:93)
at 
org.apache.solr.SolrTestCaseJ4.endTrackingSearchers(SolrTestCaseJ4.java:496)
at org.apache.solr.SolrTestCaseJ4.afterClass(SolrTestCaseJ4.java:232)
at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1627)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:799)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at java.lang.Thread.run(Thread.java:745)


FAILED:  junit.framework.TestSuite.org.apache.solr.core.TestLazyCores

Error Message:
1 thread leaked from SUITE scope at org.apache.solr.core.TestLazyCores: 1) 
Thread[id=10023, name=searcherExecutor-5272-thread-1, state=WAITING, 
group=TGRP-TestLazyCores] at sun.misc.Unsafe.park(Native Method)
 at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) 
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
 at 
java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) 
at 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1067)   
  at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127) 
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) 
at java.lang.Thread.run(Thread.java:745)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: 1 thread leaked from SUITE 
scope at org.apache.solr.core.TestLazyCores: 
   1) Thread[id=10023, name=searcherExecutor-5272-thread-1, state=WAITING, 
group=TGRP-TestLazyCores]
at sun.misc.Unsafe.park(Native Method)
at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
at 
java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
at 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1067)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
at __randomizedtesting.SeedInfo.seed([325806DE56F740AD]:0)


FAILED:  junit.framework.TestSuite.org.apache.solr.core.TestLazyCores

Error Message:
There are still zombie threads that couldn't be terminated:1) 
Thread[id=10023, 

[jira] [Commented] (LUCENE-6432) Add SuppressReproduceLine

2015-04-17 Thread Ryan Ernst (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6432?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14499327#comment-14499327
 ] 

Ryan Ernst commented on LUCENE-6432:


+1

 Add SuppressReproduceLine
 -

 Key: LUCENE-6432
 URL: https://issues.apache.org/jira/browse/LUCENE-6432
 Project: Lucene - Core
  Issue Type: Task
Reporter: Robert Muir
 Attachments: LUCENE-6432.patch


 {code}
   /**
* Suppress the default {@code reproduce with: ant test...}
* Your own listener can be added as needed for your build.
*/
   @Documented
   @Inherited
   @Retention(RetentionPolicy.RUNTIME)
   @Target(ElementType.TYPE)
   public @interface SuppressReproduceLine {}
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-7413) Website: downloading past releases is harder than it should be

2015-04-17 Thread Shawn Heisey (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7413?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14499233#comment-14499233
 ] 

Shawn Heisey commented on SOLR-7413:


I like your ideas.  I think the current setup is problematic, as you said, 
because the redirect page isn't visible long enough for a newcomer to find the 
link for older releases.

It took me a minute to find the Download Older Releases link at the bottom of 
the page, because it is in a different place than the Download link.  I'm not 
sure if it's a bad idea or a good idea to move it so it's right below Download.


 Website: downloading past releases is harder than it should be
 --

 Key: SOLR-7413
 URL: https://issues.apache.org/jira/browse/SOLR-7413
 Project: Solr
  Issue Type: Bug
  Components: website
Reporter: Steve Rowe
Assignee: Steve Rowe
Priority: Minor

 Clicking on the Download button at the top of every Solr website page takes 
 you to [http://lucene.apache.org/solr/mirrors-solr-latest-redir.html] (let's 
 call it the download-redirect page), which pauses for 3 seconds and then 
 auto-redirects to the Apache download mirror page for the latest Solr 
 release.  The download-redirect page has info about downloading past 
 releases, but there are problems with the current setup:
 # The 3 second auto-redirect doesn't allow enough time to read the page 
 before it's gone.
 # Firefox (latest version, on OS X and Windows) doesn't include the page in 
 its browser history, so you can't go back - clicking the back button will 
 take you to the page you were on when you clicked the Download button, not 
 back to the download-redirect page.
 # Internet Explorer and Chrome include the download-redirect page in their 
 history, so clicking the back button will go there, but then after three 
 seconds you get redirected to the Apache download mirrors page, whack-a-mole 
 style.
 When I was putting the download-redirect page together, I guess I only tested 
 on Safari on OS X 10.10.  This browser keeps the download-redirect page in 
 its history, so clicking the back button after the auto-redirect takes you to 
 the mirror pages will take you back to the download-redirect page, and the 
 auto-redirect never recurs.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-7408) Race condition can cause a config directory listener to be removed

2015-04-17 Thread Anshum Gupta (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7408?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14499394#comment-14499394
 ] 

Anshum Gupta commented on SOLR-7408:


I like the idea of calling {{SolrCore.close()}} and letting that handle the 
responsibility of unregistering (already happens).
Does it make more sense to have this in a different JIRA or at least change the 
title/summary of this one to highlight the new/updated intention?


 Race condition can cause a config directory listener to be removed
 --

 Key: SOLR-7408
 URL: https://issues.apache.org/jira/browse/SOLR-7408
 Project: Solr
  Issue Type: Bug
  Components: SolrCloud
Reporter: Shai Erera
Assignee: Shai Erera
 Attachments: SOLR-7408.patch, SOLR-7408.patch


 This has been reported here: http://markmail.org/message/ynkm2axkdprppgef, 
 and I was able to reproduce it in a test, although I am only able to 
 reproduce if I put break points and manually simulate the problematic context 
 switches.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-7408) Race condition can cause a config directory listener to be removed

2015-04-17 Thread Noble Paul (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7408?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14499354#comment-14499354
 ] 

Noble Paul commented on SOLR-7408:
--

looks good to me

 Race condition can cause a config directory listener to be removed
 --

 Key: SOLR-7408
 URL: https://issues.apache.org/jira/browse/SOLR-7408
 Project: Solr
  Issue Type: Bug
  Components: SolrCloud
Reporter: Shai Erera
Assignee: Shai Erera
 Attachments: SOLR-7408.patch, SOLR-7408.patch


 This has been reported here: http://markmail.org/message/ynkm2axkdprppgef, 
 and I was able to reproduce it in a test, although I am only able to 
 reproduce if I put break points and manually simulate the problematic context 
 switches.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-7176) allow zkcli to modify JSON

2015-04-17 Thread Noble Paul (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7176?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14499397#comment-14499397
 ] 

Noble Paul commented on SOLR-7176:
--

I would like to make another proposal

{noformat}
zkcli.sh -zkhost 127.0.0.1:9983 -collection-action CLUSTERPROP -name urlScheme 
-val https 
{noformat}

This should behave exactly like the Collections API. All the params and 
behavior will be the same as the CLUSTERPROP API, but it will work directly on 
the command line.

The advantage is that the user does not need to learn new param names and their 
semantics. Moreover, we can extend the same pattern to all our other collection 
APIs as required.

 allow zkcli to modify JSON
 --

 Key: SOLR-7176
 URL: https://issues.apache.org/jira/browse/SOLR-7176
 Project: Solr
  Issue Type: New Feature
Reporter: Yonik Seeley
Priority: Minor

 To enable SSL, we have instructions like the following:
 {code}
 server/scripts/cloud-scripts/zkcli.sh -zkhost localhost:2181 -cmd put 
 /clusterprops.json '{"urlScheme":"https"}'
 {code}
 Overwriting the value won't work well when we have more properties to put in 
 clusterprops.  We should be able to change individual values or perhaps merge 
 values.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-6431) Make extrasfs reproducible

2015-04-17 Thread Dawid Weiss (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6431?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14499400#comment-14499400
 ] 

Dawid Weiss commented on LUCENE-6431:
-

I see the problem... You'd like to reset the state of this thing before every 
test (and likely after, for hook methods) so that it doesn't rely on any 
previous calls within the suite. I don't see the point of this though:
{code}
+// a little funky: we only look at hashcode (well-defined) of the target class name.
+// using a generator won't reproduce, because we are a per-class resource.
+// using hashing on filenames won't reproduce, because many of the names rely on other things
+// the test class did.
+// so a test gets terrorized with extras or gets none at all depending on the initial seed.
+int hash = RandomizedContext.current().getTargetClass().toString().hashCode() ^ seed;
+if ((hash & 3) == 0) {
{code}

the class is going to be the same for every instantiation of the ExtraFS class. 
You might as well just initialize it in the constructor -- it's never going to 
change for the same suite (?).

Thinking if there's any other way to have it consistently reset its state.

 Make extrasfs reproducible
 --

 Key: LUCENE-6431
 URL: https://issues.apache.org/jira/browse/LUCENE-6431
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Robert Muir
 Attachments: LUCENE-6431.patch


 Today this is really bad: it can easily cause non-reproducible test failures. 
 It's a per-class thing, but its decisions are based on previous events 
 happening for that class (e.g. directory operations). 
 Even using the filename can't work: it's set up so early in the process, before 
 the test framework even ensures java.io.tmpdir and similar exist. Even 
 disregarding that, test files use a temp directory facility and those names 
 are not reproducible (they depend on what already exists, e.g. from a 
 previous test run).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-6431) Make extrasfs reproducible

2015-04-17 Thread Dawid Weiss (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6431?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14499409#comment-14499409
 ] 

Dawid Weiss commented on LUCENE-6431:
-

Ok. I guess you could do it by combining a static ExtraFs rule (and instance) 
with a before-test rule that would reset extrafs's random seed before every 
test. This way every test gets its independent ExtraFs call chain. But it adds 
complexity and I don't think it buys us anything (or not much).
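Roughly, that alternative would look something like this (illustrative only; the class and field names are made up):

{code}
// Hypothetical shape of that alternative (names are made up): a per-test rule
// that resets the shared ExtraFS-like seed before every test, so each test sees
// its own reproducible call chain.
public class ResetExtraFsSeed extends org.junit.rules.ExternalResource {
  private final java.util.concurrent.atomic.AtomicLong sharedSeed; // consulted by the static ExtraFS
  private final long perSuiteSeed;

  public ResetExtraFsSeed(java.util.concurrent.atomic.AtomicLong sharedSeed, long perSuiteSeed) {
    this.sharedSeed = sharedSeed;
    this.perSuiteSeed = perSuiteSeed;
  }

  @Override
  protected void before() {
    sharedSeed.set(perSuiteSeed); // every test starts from the same known state
  }
}
{code}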

 Make extrasfs reproducible
 --

 Key: LUCENE-6431
 URL: https://issues.apache.org/jira/browse/LUCENE-6431
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Robert Muir
 Attachments: LUCENE-6431.patch


 Today this is really bad: it can easily cause non-reproducible test failures. 
 It's a per-class thing, but its decisions are based on previous events 
 happening for that class (e.g. directory operations). 
 Even using the filename can't work: it's set up so early in the process, before 
 the test framework even ensures java.io.tmpdir and similar exist. Even 
 disregarding that, test files use a temp directory facility and those names 
 are not reproducible (they depend on what already exists, e.g. from a 
 previous test run).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-6427) BitSet fixes - assert on presence of 'ghost bits' and others

2015-04-17 Thread Luc Vanlerberghe (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6427?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14499509#comment-14499509
 ] 

Luc Vanlerberghe commented on LUCENE-6427:
--

I updated my pull request:
* Deleted obsolete doc comments on @Override methods
* TestFixedBitSet: Made an accidentally public method private again
* org.apache.solr.search.TestFiltering: Corrected possible generation of 
'ghost' bits for FixedBitSet

bq. But it doesn't bring anything either since this method is not used anywhere 
for now?
I did find a case where it would be useful: In oals.SloppyPhraseScorer there's 
this code:
{code}
// collisions resolved, now re-queue
// empty (partially) the queue until seeing all pps advanced for resolving collisions
int n = 0;
// TODO would be good if we can avoid calling cardinality() in each iteration!
int numBits = bits.length(); // larges bit we set
while (bits.cardinality() > 0) {
  PhrasePositions pp2 = pq.pop();
  rptStack[n++] = pp2;
  if (pp2.rptGroup >= 0 
      && pp2.rptInd < numBits  // this bit may not have been set
      && bits.get(pp2.rptInd)) {
    bits.clear(pp2.rptInd);
  }
}
{code}
and some places that assert that .cardinality() == 0.


 BitSet fixes - assert on presence of 'ghost bits' and others
 

 Key: LUCENE-6427
 URL: https://issues.apache.org/jira/browse/LUCENE-6427
 Project: Lucene - Core
  Issue Type: Bug
  Components: core/other
Reporter: Luc Vanlerberghe

 Fixes after reviewing org.apache.lucene.util.FixedBitSet, LongBitSet and 
 corresponding tests:
 * Some methods rely on the fact that no bits are set after numBits (what I 
 call 'ghost' bits here).
 ** cardinality, nextSetBit, intersects and others may yield wrong results
 ** If ghost bits are present, they may become visible after ensureCapacity is 
 called.
 ** The tests deliberately create bitsets with ghost bits, but then do not 
 detect these failures
 * FixedBitSet.cardinality scans the complete backing array, even if only 
 numWords are in use
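
 For illustration, a small sketch of how a 'ghost' bit can arise (uses the public FixedBitSet(long[], int) constructor; the printed count reflects the pre-fix behavior described above):
 {code}
 // Illustrative only: hand FixedBitSet a backing array with a bit set past numBits.
 long[] backing = new long[1];
 backing[0] = 1L << 10;                          // bit 10 is set in the backing word...
 FixedBitSet bits = new FixedBitSet(backing, 5); // ...but the set is only 5 bits wide
 // With such a 'ghost' bit present, methods like cardinality() can report a
 // non-zero count even though no bit below numBits was ever set.
 System.out.println(bits.cardinality());
 {code}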



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-Tests-5.x-Java7 - Build # 2962 - Failure

2015-04-17 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-5.x-Java7/2962/

1 tests failed.
REGRESSION:  org.apache.solr.cloud.RecoveryAfterSoftCommitTest.test

Error Message:
Didn't see all replicas for shard shard1 in collection1 come up within 3 
ms! ClusterState: {   control_collection:{ replicationFactor:1, 
maxShardsPerNode:1, autoAddReplicas:false, shards:{shard1:{ 
range:8000-7fff, state:active, 
replicas:{core_node1:{ node_name:127.0.0.1:37291_,  
   core:collection1, base_url:http://127.0.0.1:37291;,   
  state:active, leader:true, 
autoCreated:true, router:{name:compositeId}},   collection1:{   
  replicationFactor:1, maxShardsPerNode:1, 
autoAddReplicas:false, shards:{shard1:{ 
range:8000-7fff, state:active, replicas:{ 
  core_node1:{ node_name:127.0.0.1:37297_, 
core:collection1, base_url:http://127.0.0.1:37297;,  
   state:active, leader:true},   core_node2:{ 
node_name:127.0.0.1:37301_, core:collection1,   
  base_url:http://127.0.0.1:37301;, 
state:recovering, autoCreated:true, 
router:{name:compositeId}}}

Stack Trace:
java.lang.AssertionError: Didn't see all replicas for shard shard1 in 
collection1 come up within 3 ms! ClusterState: {
  control_collection:{
replicationFactor:1,
maxShardsPerNode:1,
autoAddReplicas:false,
shards:{shard1:{
range:8000-7fff,
state:active,
replicas:{core_node1:{
node_name:127.0.0.1:37291_,
core:collection1,
base_url:http://127.0.0.1:37291;,
state:active,
leader:true,
autoCreated:true,
router:{name:compositeId}},
  collection1:{
replicationFactor:1,
maxShardsPerNode:1,
autoAddReplicas:false,
shards:{shard1:{
range:8000-7fff,
state:active,
replicas:{
  core_node1:{
node_name:127.0.0.1:37297_,
core:collection1,
base_url:http://127.0.0.1:37297;,
state:active,
leader:true},
  core_node2:{
node_name:127.0.0.1:37301_,
core:collection1,
base_url:http://127.0.0.1:37301;,
state:recovering,
autoCreated:true,
router:{name:compositeId}}}
at 
__randomizedtesting.SeedInfo.seed([B2BA3FF0D83C0F12:3AEE002A76C062EA]:0)
at org.junit.Assert.fail(Assert.java:93)
at 
org.apache.solr.cloud.AbstractFullDistribZkTestBase.ensureAllReplicasAreActive(AbstractFullDistribZkTestBase.java:1920)
at 
org.apache.solr.cloud.RecoveryAfterSoftCommitTest.test(RecoveryAfterSoftCommitTest.java:102)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1627)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:872)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:886)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:960)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:935)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 

[jira] [Commented] (LUCENE-6422) Add StreamingQuadPrefixTree

2015-04-17 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6422?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14499520#comment-14499520
 ] 

Michael McCandless commented on LUCENE-6422:


bq. baselined on Lucene trunk (standard practice for contributing to Lucene)

Please don't require this of contributors: it is not standard practice.  Make 
the bar as low as possible!

 Add StreamingQuadPrefixTree
 ---

 Key: LUCENE-6422
 URL: https://issues.apache.org/jira/browse/LUCENE-6422
 Project: Lucene - Core
  Issue Type: Improvement
  Components: modules/spatial
Affects Versions: 5.x
Reporter: Nicholas Knize
 Attachments: LUCENE-6422.patch, 
 LUCENE-6422_with_SPT_factory_and_benchmark.patch


 To conform to Lucene's inverted index, SpatialStrategies use strings to 
 represent QuadCells and GeoHash cells, yielding 1 byte per QuadCell and 5 
 bits per GeoHash cell, respectively.  To create the terms representing a 
 Shape, the BytesRefIteratorTokenStream first builds all of the terms into an 
 ArrayList of Cells in memory, then passes the ArrayList's Iterator back to 
 invert(), which creates a second lexicographically sorted array of Terms. This 
 doubles the memory consumption when indexing a shape.
 This task introduces a PackedQuadPrefixTree that uses a StreamingStrategy to 
 accomplish the following:
 1.  Create a packed 8-byte representation for a QuadCell
 2.  Build the packed cells 'on demand' when incrementToken is called
 Improvements over this approach include the generation of the packed cells 
 using an AutoPrefixAutomaton.
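 As a rough illustration of the packing idea (the layout below is made up, not the actual PackedQuadPrefixTree encoding):
 {code}
 // Illustrative only (not the actual PackedQuadPrefixTree layout): pack a quad
 // cell's path into one long, two bits per level, with the depth kept in the low
 // six bits so the whole cell fits a single 8-byte value.
 static long packQuadCell(int[] quadrantPerLevel, int depth) {
   long packed = 0L;
   for (int level = 0; level < depth; level++) {
     packed = (packed << 2) | (quadrantPerLevel[level] & 0x3); // quadrant 0..3 at this level
   }
   return (packed << 6) | (depth & 0x3F); // depth in the low 6 bits
 }
 {code}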



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-4212) Let facet queries hang off of pivots

2015-04-17 Thread Shalin Shekhar Mangar (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-4212?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shalin Shekhar Mangar updated SOLR-4212:

Attachment: SOLR-6353-6686-4212.patch

Thanks for the detailed review, Hoss.

bq. verbose: why not just ranges and queries for the local param names, 
similar to how stats is already used?

That was an oversight. Fixed.

bq. I'm not a big fan of the fact that this patch breaks the determinism of the 
order that the different types of facets are returned in – It's probably not a 
blocker, but i suspect there may be some clients (or perhaps even client 
libraries other than SolrJ) which will be broken by this

Fixed.

bq. In the SolrJ PivotField class, this patch adds NamedList<Object> 
getRanges() and NamedList<Integer> getQueryCounts() methods. We should 
really be consistent with the existing equivalent methods in QueryResponse

Fixed.

bq. why can't we just use Number instead of Object in this new code since 
that's the level all of the casting code that deals with this list seems to use 
anyway?

Done.

bq. doesn't this break the SolrJ code if/when it returns a Long? (see above new 
method NamedList<Integer> getQueryCounts())

Yes, it does. And the current SolrJ code for range & query facets also needs to 
be fixed. I'll open another issue to fix the client side of things.

bq. DateFacetProcessor -- this entire class should be marked deprecated

Done.

bq. maybe i'm missing something, but what exactly is the advantage of this 
subclassing RangeFacetProcessor? ... if they are sharing code it's not obvious 
to me, and if they aren't (intentionally) sharing code then this subclass 
relationship seems dangerous if/when future improvements to range faceting are 
made.

This was an oversight. Fixed.

bq. FacetComponent -- why does doDistribRanges() still need to exist? why not 
just move that casting logic directly into 
RangeFacetAccumulator.mergeContributionFromShard like the rest of the code that 
used to be here and call it directly?

I inlined the method into FacetComponent.countFacets. I didn't move the casting 
logic though. The mergeContributionFromShard method could in theory accept an 
object and cast it to the right type but a method accepting just a 
java.lang.Object doesn't feel right to me.

bq. FacetComponent -- now that these sketchy num() functions are no longer 
private, can we please get some javadocs on them.

Done.

{quote}
* PivotFacetProcessor
** unless i'm misunderstanding the usage, the way addPivotQueriesAndRanges 
(and removeUnwantedQueriesAndRanges) works means that every facet.query and 
facet.range param value (with all localparams) is going to be reparsed over and 
over and over again for every unique value in every pivot field – just to check 
the tag values and see if it's one that should be computed for this pivot.
This seems really unnecessary – why not parse each param once into a simple 
datastructure (isn't that what the new ParsedParams class is designed for?), 
and then put them in a map by tag available from the request context – just 
like we did for the stats with StatsInfo.getStatsFieldsByTag(String)?
** in particular won't this slow down existing requests containing both 
facet.pivot and (facet.range || facet.query) ... even if the latter aren't tagged 
or hung off of the pivots at all? because they'll still get parsed over and 
over again, won't they?
{quote}

You're right. This was horrible and I should've noticed it myself. We now cache 
the ParsedParam by tags and use them instead of removing unwanted 
ranges/queries and re-parsing the request.
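Roughly, the caching looks like this sketch (parseParams and the input list are assumed names, not the exact patch):
{code}
// Illustrative sketch: parse each facet.query/facet.range param once, index the
// parsed result by its {!tag=...} local param, and look it up per pivot later.
Map<String, List<ParsedParams>> parsedByTag = new HashMap<>();
for (String param : facetQueryAndRangeParams) {                      // assumed input
  ParsedParams parsed = parseParams(FacetParams.FACET_QUERY, param); // assumed parser
  String[] tags = parsed.localParams == null ? null : parsed.localParams.getParams("tag");
  if (tags == null) continue;
  for (String tag : tags) {
    List<ParsedParams> forTag = parsedByTag.get(tag);
    if (forTag == null) {
      forTag = new ArrayList<>();
      parsedByTag.put(tag, forTag);
    }
    forTag.add(parsed);
  }
}
// When processing a pivot tagged with "r1", reuse the already-parsed entries:
List<ParsedParams> toCompute = parsedByTag.get("r1");
{code}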

bq. this logic also seems to completely break instances of facet.query used w/o 
linking it to a facet.pivot

This is also fixed.

bq. also broken: neither of these requests should result in the facet.query 
hanging off of the pivots, but because of how StringUtils.contains() is used 
they both do erroneously...

Also fixed.

{quote}
* tests...
** a lot of code in the new test classes seems to have been copied verbatim 
from other existing tests – in some cases this is fine, because the copied test 
logic has been modified to include new params+asserts of the new functionality 
– but there's still a lot of redundant copy/paste cruft w/o any logic changes
*** eg: DistributedFacetPivotQuerySmallTest lines 428-532 seem to be verbatim 
copied from DistributedFacetPivotSmallTest w/o any additions to test the 
facet.query logic, or even new negative-assertions that stray facet.queries are 
hung off by mistake (ie: to catch the bugs i mentioned above)
*** ditto for DistributedFacetPivotRangeSmallTest
** there doesn't seem to be any new tests that show hanging both ranges & 
queries onto a pivot at the same time – let alone combining it with the 
existing stats logic
** likewise i don't see any testing of using the same tag with multiple 
facet.query instances (or multiple facet.range instances) and confirming that 
both get hung 

[jira] [Updated] (SOLR-4212) Let facet queries hang off of pivots

2015-04-17 Thread Shalin Shekhar Mangar (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-4212?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shalin Shekhar Mangar updated SOLR-4212:

Fix Version/s: (was: 4.9)
   5.2

 Let facet queries hang off of pivots
 

 Key: SOLR-4212
 URL: https://issues.apache.org/jira/browse/SOLR-4212
 Project: Solr
  Issue Type: Sub-task
  Components: search
Affects Versions: 4.0
Reporter: Steve Molloy
Assignee: Shalin Shekhar Mangar
 Fix For: Trunk, 5.2

 Attachments: SOLR-4212-multiple-q.patch, SOLR-4212-multiple-q.patch, 
 SOLR-4212.patch, SOLR-4212.patch, SOLR-4212.patch, SOLR-4212.patch, 
 SOLR-4212.patch, SOLR-4212.patch, SOLR-6353-6686-4212.patch, 
 SOLR-6353-6686-4212.patch, SOLR-6353-6686-4212.patch, patch-4212.txt


 Facet pivot provide hierarchical support for computing data used to populate 
 a treemap or similar visualization. TreeMaps usually offer users extra 
 information by applying an overlay color on top of the existing square sizes 
 based on hierarchical counts. This second count is based on user choices, 
 representing, usually with gradient, the proportion of the square that fits 
 the user's choices.
 The proposition is to use local parameters to specify facet query to apply 
 for pivot which matches a tag set on facet query. Parameter format would look 
 like:
 facet.pivot={!query=r1}category,manufacturer
 facet.query={!tag=r1}somequery
 facet.query={!tag=r1}somedate:[NOW-1YEAR TO NOW]



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-7176) allow zkcli to modify JSON

2015-04-17 Thread JIRA

[ 
https://issues.apache.org/jira/browse/SOLR-7176?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14499507#comment-14499507
 ] 

Jan Høydahl commented on SOLR-7176:
---

Yea, I even wonder if we should have a Cluster API {{/admin/cluster/}} and move 
commands like {{CLUSTERPROP}}, {{ADDROLE}}, {{REMOVEROLE}}, {{OVERSEERSTATUS}}, 
{{CLUSTERSTATUS}} away from collections API? Then we could have a 
{{cluster.sh}} which aids in calling these from cmdline. Of course some cmds 
may require Solr to be running while others can work with ZK only?

 allow zkcli to modify JSON
 --

 Key: SOLR-7176
 URL: https://issues.apache.org/jira/browse/SOLR-7176
 Project: Solr
  Issue Type: New Feature
Reporter: Yonik Seeley
Priority: Minor

 To enable SSL, we have instructions like the following:
 {code}
 server/scripts/cloud-scripts/zkcli.sh -zkhost localhost:2181 -cmd put 
 /clusterprops.json '{"urlScheme":"https"}'
 {code}
 Overwriting the value won't work well when we have more properties to put in 
 clusterprops.  We should be able to change individual values or perhaps merge 
 values.
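 A merge-style update would look roughly like this sketch (the JSON helpers are assumed, not existing zkcli code):
 {code}
 // Illustrative read-modify-write: change a single property instead of
 // overwriting the whole clusterprops.json blob. JSON helpers are assumed.
 Map<String, Object> props = parseJsonToMap(currentClusterPropsJson); // assumed helper
 props.put("urlScheme", "https");                                     // touch only this key
 byte[] merged = writeMapAsJson(props);                               // assumed helper
 // then write `merged` back to /clusterprops.json via the existing zkcli put
 {code}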



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (LUCENE-6427) BitSet fixes - assert on presence of 'ghost bits' and others

2015-04-17 Thread Luc Vanlerberghe (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6427?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14499509#comment-14499509
 ] 

Luc Vanlerberghe edited comment on LUCENE-6427 at 4/17/15 9:08 AM:
---

I updated my pull request:
* Deleted obsolete doc comments on @Override methods
* TestFixedBitSet: Made an accidentally public method private again
* org.apache.solr.search.TestFiltering: Corrected possible generation of 
'ghost' bits for FixedBitSet

About scanIsEmpty():
bq. But it doesn't bring anything either since this method is not used anywhere 
for now?
I did find a case where it would be useful: In oals.SloppyPhraseScorer there's 
this code:
{code}
// collisions resolved, now re-queue
// empty (partially) the queue until seeing all pps advanced for resolving 
collisions
int n = 0;
// TODO would be good if we can avoid calling cardinality() in each 
iteration!
int numBits = bits.length(); // larges bit we set
while (bits.cardinality() > 0) {
  PhrasePositions pp2 = pq.pop();
  rptStack[n++] = pp2;
  if (pp2.rptGroup >= 0 
      && pp2.rptInd < numBits  // this bit may not have been set
      && bits.get(pp2.rptInd)) {
    bits.clear(pp2.rptInd);
  }
}
{code}
and some places that assert that .cardinality() == 0.



was (Author: lvl):
I updated my pull request:
* Deleted obsolete doc comments on @Override methods
* TestFixedBitSet: Made an accidentally public method private again
* org.apache.solr.search.TestFiltering: Corrected possible generation of 
'ghost' bits for FixedBitSet

bq. But it doesn't bring anything either since this method is not used anywhere 
for now?
I did find a case where it would be useful: In oals.SloppyPhraseScorer there's 
this code:
{code}
// collisions resolved, now re-queue
// empty (partially) the queue until seeing all pps advanced for resolving 
collisions
int n = 0;
// TODO would be good if we can avoid calling cardinality() in each 
iteration!
int numBits = bits.length(); // larges bit we set
while (bits.cardinality() > 0) {
  PhrasePositions pp2 = pq.pop();
  rptStack[n++] = pp2;
  if (pp2.rptGroup >= 0 
      && pp2.rptInd < numBits  // this bit may not have been set
      && bits.get(pp2.rptInd)) {
    bits.clear(pp2.rptInd);
  }
}
{code}
and some places that assert that .cardinality() == 0.


 BitSet fixes - assert on presence of 'ghost bits' and others
 

 Key: LUCENE-6427
 URL: https://issues.apache.org/jira/browse/LUCENE-6427
 Project: Lucene - Core
  Issue Type: Bug
  Components: core/other
Reporter: Luc Vanlerberghe

 Fixes after reviewing org.apache.lucene.util.FixedBitSet, LongBitSet and 
 corresponding tests:
 * Some methods rely on the fact that no bits are set after numBits (what I 
 call 'ghost' bits here).
 ** cardinality, nextSetBit, intersects and others may yield wrong results
 ** If ghost bits are present, they may become visible after ensureCapacity is 
 called.
 ** The tests deliberately create bitsets with ghost bits, but then do not 
 detect these failures
 * FixedBitSet.cardinality scans the complete backing array, even if only 
 numWords are in use



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-7176) allow zkcli to modify JSON

2015-04-17 Thread Noble Paul (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7176?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14499527#comment-14499527
 ] 

Noble Paul commented on SOLR-7176:
--

A lot of commands will need a running cluster. What good is an 
{{OVERSEERSTATUS}} without an overseer? 

The only relevant one I see now is CLUSTERPROP, and there is an immediate 
need for that as well. 

 allow zkcli to modify JSON
 --

 Key: SOLR-7176
 URL: https://issues.apache.org/jira/browse/SOLR-7176
 Project: Solr
  Issue Type: New Feature
Reporter: Yonik Seeley
Priority: Minor

 To enable SSL, we have instructions like the following:
 {code}
 server/scripts/cloud-scripts/zkcli.sh -zkhost localhost:2181 -cmd put 
 /clusterprops.json '{"urlScheme":"https"}'
 {code}
 Overwriting the value won't work well when we have more properties to put in 
 clusterprops.  We should be able to change individual values or perhaps merge 
 values.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (LUCENE-6433) Classifier API should use generics in getClasses

2015-04-17 Thread Tommaso Teofili (JIRA)
Tommaso Teofili created LUCENE-6433:
---

 Summary: Classifier API should use generics in getClasses
 Key: LUCENE-6433
 URL: https://issues.apache.org/jira/browse/LUCENE-6433
 Project: Lucene - Core
  Issue Type: Bug
  Components: modules/classification
Affects Versions: 5.1
Reporter: Tommaso Teofili
Assignee: Tommaso Teofili
 Fix For: Trunk


{{Classifier#getClasses}} APIs return {{List<ClassificationResult<BytesRef>>}} 
while they should be consistent with the generics used in the other APIs (e.g. 
{{assignClass}} returns {{ClassificationResult<T>}}).
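
In other words, the intent is a fully generic shape along these lines (a sketch of the desired signatures, not the committed change):
{code}
public interface Classifier<T> {
  ClassificationResult<T> assignClass(String text) throws IOException;
  // currently returns List<ClassificationResult<BytesRef>> regardless of T:
  List<ClassificationResult<T>> getClasses(String text) throws IOException;
  List<ClassificationResult<T>> getClasses(String text, int max) throws IOException;
}
{code}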



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-7176) allow zkcli to modify JSON

2015-04-17 Thread Hrishikesh Gadre (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7176?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14499407#comment-14499407
 ] 

Hrishikesh Gadre commented on SOLR-7176:


The advantage is that the user does not need to learn new param names and 
their semantics. Moreover, we can extend the same pattern to all our other 
collection APIs as required.

Sure. I like this idea. But can we define it as a separate script (in line with 
the earlier reasoning for not adding it to zkcli.sh)?


 allow zkcli to modify JSON
 --

 Key: SOLR-7176
 URL: https://issues.apache.org/jira/browse/SOLR-7176
 Project: Solr
  Issue Type: New Feature
Reporter: Yonik Seeley
Priority: Minor

 To enable SSL, we have instructions like the following:
 {code}
 server/scripts/cloud-scripts/zkcli.sh -zkhost localhost:2181 -cmd put 
 /clusterprops.json '{"urlScheme":"https"}'
 {code}
 Overwriting the value won't work well when we have more properties to put in 
 clusterprops.  We should be able to change individual values or perhaps merge 
 values.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-6420) Update forbiddenapis to 1.8

2015-04-17 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6420?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14499488#comment-14499488
 ] 

Uwe Schindler commented on LUCENE-6420:
---

[~simonw] did include the new Annotation support of forbiddenapis 1.8 into 
elasticsearch: [https://github.com/elastic/elasticsearch/pull/10560/files]
We can do the same in Lucene, so we have a more fine-granular exclusion pattern 
than the current file-level exclusions. I also like the reason element on his 
annotation, so you can/have to give a reason why you apply 
{{@SuppressForbidden}}.
I would suggest adding this annotation to lucene-core.jar as a class-level, 
non-runtime annotation. I can work on that next week.
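
Such an annotation could be as small as this sketch (name, reason element and CLASS retention are assumptions matching the description above, not a committed API):
{code}
import java.lang.annotation.Documented;
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

/** Suppresses forbidden-apis checks on a class; a reason must always be given. */
@Documented
@Retention(RetentionPolicy.CLASS)
@Target(ElementType.TYPE)
public @interface SuppressForbidden {
  /** Why using the forbidden API is acceptable here. */
  String reason();
}
{code}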

 Update forbiddenapis to 1.8
 ---

 Key: LUCENE-6420
 URL: https://issues.apache.org/jira/browse/LUCENE-6420
 Project: Lucene - Core
  Issue Type: Improvement
  Components: general/build
Reporter: Uwe Schindler
Assignee: Uwe Schindler
 Fix For: Trunk, 5.2

 Attachments: LUCENE-6420.patch


 Update forbidden-apis plugin to 1.8:
 - Initial support for Java 9 including JIGSAW
 - Errors are now reported sorted by line numbers and correctly grouped 
 (synthetic methods/lambdas)
 - Package-level forbids: Deny all classes from a package: org.hatedpkg.** 
 (also other globs work)
 - In addition to file-level excludes, forbiddenapis now supports fine-granular 
 excludes using Java annotations. You can use the one shipped, or 
 define your own, e.g. inside Lucene, and pass its name to forbidden (e.g. 
 using a glob: **.SuppressForbidden would allow any annotation with that name 
 in any package to suppress errors). Annotations need to be on class level; 
 no runtime annotation is required.
 This will for now only update the dependency and remove the additional forbid 
 by [~shalinmangar] for MessageFormat (which is now shipped with forbidden). 
 But we should review and for example suppress forbidden failures in command 
 line tools using @SuppressForbidden (or similar annotation). The discussion 
 is open, I can make a patch.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6686) facet.threads can return wrong results when using facet.prefix multiple times on same field

2015-04-17 Thread Shalin Shekhar Mangar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6686?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14499593#comment-14499593
 ] 

Shalin Shekhar Mangar commented on SOLR-6686:
-

Hi Tim, this will go into 5.2. I was about to do something similar for 
SOLR-6353/SOLR-4212 so I incorporated your changes to my patch on SOLR-4212. 
It's almost ready and should be committed next week.

 facet.threads can return wrong results when using facet.prefix multiple times 
 on same field
 ---

 Key: SOLR-6686
 URL: https://issues.apache.org/jira/browse/SOLR-6686
 Project: Solr
  Issue Type: Bug
  Components: search
Affects Versions: 4.9
Reporter: Michael Ryan
Assignee: Shalin Shekhar Mangar
 Attachments: SOLR-6686.patch


 When using facet.threads, SimpleFacets can return the wrong results when 
 using facet.prefix multiple times on the same field.
 The problem is that SimpleFacets essentially stores the prefix value in a 
 global variable, rather than passing the current prefix value into the 
 Callable. So, the prefix value that is used when getting the term counts is 
 whichever one was the last one parsed.
 STEPS TO REPRODUCE:
 # Create a document with a string field named "myFieldName" and value "foo"
 # Create another document with a string field named "myFieldName" and value 
 "bar"
 # Run this query: {noformat}q=*:*&rows=0&facet=true&facet.field={!key=key1 
 facet.prefix=foo}myFieldName&facet.field={!key=key2 
 facet.prefix=bar}myFieldName&facet.threads=1{noformat}
 EXPECTED:
 {noformat}<lst name="facet_fields">
   <lst name="key1">
     <int name="foo">1</int>
   </lst>
   <lst name="key2">
     <int name="bar">1</int>
   </lst>
 </lst>{noformat}
 ACTUAL:
 {noformat}<lst name="facet_fields">
   <lst name="key1">
     <int name="bar">1</int>
   </lst>
   <lst name="key2">
     <int name="bar">1</int>
   </lst>
 </lst>{noformat}
 I'm using 4.9, but I think this affects all versions.
 A user previously reported this here:
 http://mail-archives.apache.org/mod_mbox/lucene-solr-user/201405.mbox/%3cbay169-w52cef09187a88286de5417d5...@phx.gbl%3E
 I think this affects parameters other than facet.prefix, but I have not tried 
 that yet.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-7176) allow zkcli to modify JSON

2015-04-17 Thread Per Steffensen (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7176?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14499632#comment-14499632
 ] 

Per Steffensen commented on SOLR-7176:
--

{quote}
{code}
zkcli.sh -zkhost 127.0.0.1:9983 -collection-action CLUSTERPROP -name urlScheme 
-val https
{code}
{quote}
I agree, except that it should not be the zkcli.sh tool that is extended. Since 
it is the collections API you make a CLI for, so to speak, make a 
collectionscli.sh script
{code}
collectionscli.sh -zkhost 127.0.0.1:9983 -action CLUSTERPROP -name urlScheme 
-val https
{code}
And later maybe
{code}
collectionscli.sh -zkhost 127.0.0.1:9983 -action ADDROLE -role overseer -val ...
{code}
etc

It think also, that it needs to be considered how and if this is an 
extension/modification to the SolrCLI-tool (used from solr/bin/solr and 
solr/bin/solr.cmd)
{code}
solr.sh CLUSTERPROP -zkhost 127.0.0.1:9983 -name urlScheme -val https
{code}
Just saying, even though I do not like the current state of it, because of the 
enormous amounts of redundant code. But we do not want to end up with a million 
different cli-tools either.
BTW, I think solr/bin/solr should be renamed to solr.sh, so I pretended above

 allow zkcli to modify JSON
 --

 Key: SOLR-7176
 URL: https://issues.apache.org/jira/browse/SOLR-7176
 Project: Solr
  Issue Type: New Feature
Reporter: Yonik Seeley
Priority: Minor

 To enable SSL, we have instructions like the following:
 {code}
 server/scripts/cloud-scripts/zkcli.sh -zkhost localhost:2181 -cmd put 
 /clusterprops.json '{"urlScheme":"https"}'
 {code}
 Overwriting the value won't work well when we have more properties to put in 
 clusterprops.  We should be able to change individual values or perhaps merge 
 values.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-7414) CSVResponseWriter returns empty field when alias requested

2015-04-17 Thread Michael Lawrence (JIRA)
Michael Lawrence created SOLR-7414:
--

 Summary: CSVResponseWriter returns empty field when alias requested
 Key: SOLR-7414
 URL: https://issues.apache.org/jira/browse/SOLR-7414
 Project: Solr
  Issue Type: Bug
  Components: Response Writers
Reporter: Michael Lawrence


Attempting to retrieve all fields while renaming one, e.g., inStock to 
stocked (URL below), results in CSV output that has a column for inStock 
(should be stocked), and the column has no values. I would have expected this 
to behave like the JSON and XML response writers.

http://localhost:8983/solr/select?q=*&fl=*,stocked:inStock&wt=csv



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-7408) Race condition can cause a config directory listener to be removed

2015-04-17 Thread Shai Erera (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7408?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14499636#comment-14499636
 ] 

Shai Erera commented on SOLR-7408:
--

I will update the title of this JIRA and handle it here. I like this better 
than doing what I consider more of a hack to the code and later change it. 
SolrCore is initialized in two places, so shouldn't be complicated to ensure it 
is closed in case of errors.

While I'm at it, I'll try to simplify the ctor by breaking it out into some 
auxiliary methods, instead of having a 250-line method!
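
The shape of that change is roughly the following sketch (hypothetical helper names, not the actual patch):
{code}
// Sketch: if construction fails partway, close what was already initialized so
// the config directory listener gets unregistered instead of leaking.
public SolrCore(CoreDescriptor cd, ConfigSet configSet) {   // signature assumed
  try {
    initListeners(cd);      // assumed auxiliary methods broken out of the ctor
    initIndex(configSet);
    initSearcher();
  } catch (RuntimeException e) {
    try {
      close();              // removes listeners, releases resources
    } catch (RuntimeException inner) {
      e.addSuppressed(inner);
    }
    throw e;
  }
}
{code}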

 Race condition can cause a config directory listener to be removed
 --

 Key: SOLR-7408
 URL: https://issues.apache.org/jira/browse/SOLR-7408
 Project: Solr
  Issue Type: Bug
  Components: SolrCloud
Reporter: Shai Erera
Assignee: Shai Erera
 Attachments: SOLR-7408.patch, SOLR-7408.patch


 This has been reported here: http://markmail.org/message/ynkm2axkdprppgef, 
 and I was able to reproduce it in a test, although I am only able to 
 reproduce if I put break points and manually simulate the problematic context 
 switches.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-6431) Make extrasfs reproducible

2015-04-17 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6431?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14499647#comment-14499647
 ] 

ASF subversion and git services commented on LUCENE-6431:
-

Commit 1674275 from [~rcmuir] in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1674275 ]

LUCENE-6431: make ExtrasFS reproducible

 Make extrasfs reproducible
 --

 Key: LUCENE-6431
 URL: https://issues.apache.org/jira/browse/LUCENE-6431
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Robert Muir
 Attachments: LUCENE-6431.patch


 Today this is really bad; it can easily cause non-reproducible test failures. 
 It's a per-class thing, but its decisions are based on previous events 
 happening for that class (e.g. directory operations). 
 Even using the filename can't work: it's set up so early in the process, before 
 the test framework even ensures java.io.tempdir and similar exist. Even 
 disregarding that, test files use a temp directory facility and those names 
 are not reproducible (they depend on what already exists, e.g. from a 
 previous test run).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-7176) allow zkcli to modify JSON

2015-04-17 Thread Noble Paul (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7176?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14499827#comment-14499827
 ] 

Noble Paul commented on SOLR-7176:
--

I don't expect a lot of commands to be exposed with this. This will be used 
when you can't use the command because the Solr cluster is not up and running. 
It will be an expert thing. Having a dedicated script for this seems 
overkill.

I would still prefer to overload the zkCli command 

 allow zkcli to modify JSON
 --

 Key: SOLR-7176
 URL: https://issues.apache.org/jira/browse/SOLR-7176
 Project: Solr
  Issue Type: New Feature
Reporter: Yonik Seeley
Priority: Minor

 To enable SSL, we have instructions like the following:
 {code}
 server/scripts/cloud-scripts/zkcli.sh -zkhost localhost:2181 -cmd put 
 /clusterprops.json '{"urlScheme":"https"}'
 {code}
 Overwriting the value won't work well when we have more properties to put in 
 clusterprops.  We should be able to change individual values or perhaps merge 
 values.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-7177) ConcurrentUpdateSolrClient should log connection information on http failures

2015-04-17 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7177?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14499842#comment-14499842
 ] 

Mark Miller commented on SOLR-7177:
---

I just came back to this this morning.

{noformat}
+try {
   response = client.getHttpClient().execute(method);
+} catch (Exception ex) {
+  SolrServerException solrExc = new SolrServerException("Error 
during http connection. Request: " + method.getURI(), ex);
+  throw solrExc;
+}
+
{noformat}

I'm not sure if it's a good idea to wrap any exception from execute as a 
SolrServerException. That seems like it can be a little tricky.

Looking at the code though, isn't this already handled? If the status is not 
200, there should be a log of 'error' and the exception, including a message 
that includes the method.getURI() info. Was that added after this issue 
perhaps? Or is that not working as intended?

 ConcurrentUpdateSolrClient should log connection information on http failures 
 --

 Key: SOLR-7177
 URL: https://issues.apache.org/jira/browse/SOLR-7177
 Project: Solr
  Issue Type: Improvement
Affects Versions: 4.10.3, 5.0
Reporter: Vamsee Yarlagadda
Priority: Minor
 Attachments: SOLR-7177.patch, SOLR-7177v2.patch


 I notice when there is an http connection failure, we simply log the error 
 but not the connection information. It would be good to log this info to make 
 debugging easier.
 e.g:
 1.
 {code}
 2015-02-27 08:56:51,503 ERROR org.apache.solr.update.StreamingSolrServers: 
 error
 java.net.SocketException: Connection reset
   at java.net.SocketInputStream.read(SocketInputStream.java:196)
   at java.net.SocketInputStream.read(SocketInputStream.java:122)
   at 
 org.apache.http.impl.io.AbstractSessionInputBuffer.fillBuffer(AbstractSessionInputBuffer.java:166)
   at 
 org.apache.http.impl.io.SocketInputBuffer.fillBuffer(SocketInputBuffer.java:90)
   at 
 org.apache.http.impl.io.AbstractSessionInputBuffer.readLine(AbstractSessionInputBuffer.java:281)
   at 
 org.apache.http.impl.conn.DefaultHttpResponseParser.parseHead(DefaultHttpResponseParser.java:92)
   at 
 org.apache.http.impl.conn.DefaultHttpResponseParser.parseHead(DefaultHttpResponseParser.java:62)
   at 
 org.apache.http.impl.io.AbstractMessageParser.parse(AbstractMessageParser.java:254)
   at 
 org.apache.http.impl.AbstractHttpClientConnection.receiveResponseHeader(AbstractHttpClientConnection.java:289)
   at 
 org.apache.http.impl.conn.DefaultClientConnection.receiveResponseHeader(DefaultClientConnection.java:252)
   at 
 org.apache.http.impl.conn.ManagedClientConnectionImpl.receiveResponseHeader(ManagedClientConnectionImpl.java:191)
   at 
 org.apache.http.protocol.HttpRequestExecutor.doReceiveResponse(HttpRequestExecutor.java:300)
   at 
 org.apache.http.protocol.HttpRequestExecutor.execute(HttpRequestExecutor.java:127)
   at 
 org.apache.http.impl.client.DefaultRequestDirector.tryExecute(DefaultRequestDirector.java:715)
   at 
 org.apache.http.impl.client.DefaultRequestDirector.execute(DefaultRequestDirector.java:520)
   at 
 org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:906)
   at 
 org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:805)
   at 
 org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:784)
   at 
 org.apache.solr.client.solrj.impl.ConcurrentUpdateSolrServer$Runner.run(ConcurrentUpdateSolrServer.java:235)
 {code}
  
 2.
 {code}
 2015-02-27 10:26:12,363 ERROR org.apache.solr.update.StreamingSolrServers: 
 error
 org.apache.http.NoHttpResponseException: The target server failed to respond
   at 
 org.apache.http.impl.conn.DefaultHttpResponseParser.parseHead(DefaultHttpResponseParser.java:95)
   at 
 org.apache.http.impl.conn.DefaultHttpResponseParser.parseHead(DefaultHttpResponseParser.java:62)
   at 
 org.apache.http.impl.io.AbstractMessageParser.parse(AbstractMessageParser.java:254)
   at 
 org.apache.http.impl.AbstractHttpClientConnection.receiveResponseHeader(AbstractHttpClientConnection.java:289)
   at 
 org.apache.http.impl.conn.DefaultClientConnection.receiveResponseHeader(DefaultClientConnection.java:252)
   at 
 org.apache.http.impl.conn.ManagedClientConnectionImpl.receiveResponseHeader(ManagedClientConnectionImpl.java:191)
   at 
 org.apache.http.protocol.HttpRequestExecutor.doReceiveResponse(HttpRequestExecutor.java:300)
   at 
 org.apache.http.protocol.HttpRequestExecutor.execute(HttpRequestExecutor.java:127)
   at 
 org.apache.http.impl.client.DefaultRequestDirector.tryExecute(DefaultRequestDirector.java:715)
   at 
 

Re: Updating CWIKI jdoc link macros -- was: Re: Solr Ref Guide for 5.1

2015-04-17 Thread Cassandra Targett
Great, thanks Uwe!

I'll get started on the RC for the Ref Guide now.

Cassandra

On Fri, Apr 17, 2015 at 2:18 AM, Chris Hostetter hossman_luc...@fucit.org
wrote:


 FYI: Uwe replied to me privately a few hours ago that he had done this.

 : Date: Wed, 15 Apr 2015 09:14:11 -0700 (MST)
 : From: Chris Hostetter hossman_luc...@fucit.org
 : To: u...@thetaphi.de
 : Cc: Lucene Dev dev@lucene.apache.org
 : Subject: Updating CWIKI jdoc link macros -- was: Re: Solr Ref Guide for
 5.1
 :
 :
 : Uwe: can you please update the confluence link macros for the lucene/solr
 : javadoc urls to reflect 5_1_0 ?
 :
 :  To update the shortcut links to point to the current version, remove
 :  and recreate the shortcut links with keys SolrReleaseDocs and
 :  LuceneReleaseDocs , making their expanded values include the
 :  underscore-separated release version followed by a slash, e.g. for the
 :  4.8 release, the expanded values should be
 :  http://lucene.apache.org/solr/4_8_0/ and
 :  http://lucene.apache.org/core/4_8_0/, respectively.  See the
 Confluence
 :  documentation for instructions.  Note: Uwe Schindler says that
 :  Confluence has a bug that disallows editing shortcut links' expanded
 :  values - that's why you have to remove and then recreate the shortcut
 links.
 :
 :
 https://cwiki.apache.org/confluence/display/solr/Internal+-+How+To+Publish+This+Documentation#Internal-HowToPublishThisDocumentation-Pre-publicationActions
 :
 :
 https://cwiki.apache.org/confluence/display/solr/Internal+-+Maintaining+Documentation
 :
 :
 :
 : : Date: Tue, 14 Apr 2015 10:42:50 -0500
 : : From: Cassandra Targett casstarg...@gmail.com
 : : Reply-To: dev@lucene.apache.org
 : : To: dev@lucene.apache.org
 : : Subject: Re: Solr Ref Guide for 5.1
 : :
 : : Just to let folks know, I'm targeting early Thursday morning for
 making a
 : : release candidate for the Ref Guide and putting it up for a vote.
 : :
 : : If anyone really needs more time, please let me know.
 : :
 : : Cassandra
 : :
 : : On Thu, Apr 9, 2015 at 6:11 PM, Yonik Seeley ysee...@gmail.com
 wrote:
 : :
 : :  On Thu, Apr 9, 2015 at 6:42 PM, Cassandra Targett 
 casstarg...@gmail.com
 : :  wrote:
 : :   Thanks Yonik. I think there are a lot of places that use 'curl';
 not
 : :  sure if
 : :   there will be time to fix them for 5.1, but if not we'll add it to
 the
 : :   backlog list for future edits.
 : :  
 : :   Since you replied :) - are you going to have a chance to add
 anything on
 : :  the
 : :   other new stuff you added, like SOLR-7214, 7218, or 7212?
 : : 
 : :  Yep, I do plan on it... just not sure of the timing (this weekend is
 : :  tax weekend for me... can't put that off much longer ;-)
 : : 
 : :  -Yonik
 : : 
 : :  -
 : :  To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
 : :  For additional commands, e-mail: dev-h...@lucene.apache.org
 : : 
 : : 
 : :
 :
 : -Hoss
 : http://www.lucidworks.com/
 :

 -Hoss
 http://www.lucidworks.com/

 -
 To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
 For additional commands, e-mail: dev-h...@lucene.apache.org




[jira] [Commented] (SOLR-4260) Inconsistent numDocs between leader and replica

2015-04-17 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4260?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14499853#comment-14499853
 ] 

Mark Miller commented on SOLR-4260:
---

This ticket addressed specific issues - please open a new ticket for any 
further reports.

 Inconsistent numDocs between leader and replica
 ---

 Key: SOLR-4260
 URL: https://issues.apache.org/jira/browse/SOLR-4260
 Project: Solr
  Issue Type: Bug
  Components: SolrCloud
 Environment: 5.0.0.2013.01.04.15.31.51
Reporter: Markus Jelsma
Assignee: Mark Miller
Priority: Critical
 Fix For: 4.6.1, Trunk

 Attachments: 192.168.20.102-replica1.png, 
 192.168.20.104-replica2.png, SOLR-4260.patch, clusterstate.png, 
 demo_shard1_replicas_out_of_sync.tgz


 After wiping all cores and reindexing some 3.3 million docs from Nutch using 
 CloudSolrServer we see inconsistencies between the leader and replica for 
 some shards.
 Each core holds about 3.3k documents. For some reason 5 out of 10 shards have 
 a small deviation in the number of documents. The leader and slave deviate 
 by roughly 10-20 documents, not more.
 Results hopping ranks in the result set for identical queries got my 
 attention: there were small IDF differences for exactly the same record, 
 causing a record to shift positions in the result set. During those tests no 
 records were indexed. Consecutive catch-all queries also return a different 
 number of numDocs.
 We're running a 10 node test cluster with 10 shards and a replication factor 
 of two and frequently reindex using a fresh build from trunk. I've not seen 
 this issue for quite some time until a few days ago.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-7176) allow zkcli to modify JSON

2015-04-17 Thread Shawn Heisey (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7176?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14499807#comment-14499807
 ] 

Shawn Heisey commented on SOLR-7176:


bq. BTW, I think solr/bin/solr should be renamed to solr.sh, so I pretended 
above

Renaming the script would be a bad idea, IMHO.  With the current setup, you can 
use bin/solr at the commandline on *NIX and bin\solr on Windows, the only 
difference is the path separator, which will not be a surprise to most admins.

If we rename solr to solr.sh, then the command will be different on *NIX and 
unified documentation becomes more difficult.

If there is going to be any renaming, I believe it should be to remove .sh from 
the other scripts, so zkCli.sh becomes zkCli ... and it should be handled in a 
separate issue.


 allow zkcli to modify JSON
 --

 Key: SOLR-7176
 URL: https://issues.apache.org/jira/browse/SOLR-7176
 Project: Solr
  Issue Type: New Feature
Reporter: Yonik Seeley
Priority: Minor

 To enable SSL, we have instructions like the following:
 {code}
 server/scripts/cloud-scripts/zkcli.sh -zkhost localhost:2181 -cmd put 
 /clusterprops.json '{"urlScheme":"https"}'
 {code}
 Overwriting the value won't work well when we have more properties to put in 
 clusterprops.  We should be able to change individual values or perhaps merge 
 values.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-Tests-5.x-Java7 - Build # 2963 - Still Failing

2015-04-17 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-5.x-Java7/2963/

1 tests failed.
FAILED:  org.apache.solr.cloud.RecoveryAfterSoftCommitTest.test

Error Message:
Didn't see all replicas for shard shard1 in collection1 come up within 3 ms! 
ClusterState: {"control_collection":{"shards":{"shard1":{"range":"8000-7fff", 
"state":"active", "replicas":{"core_node1":{"node_name":"127.0.0.1:47431_ei%2Fuu", 
"core":"collection1", "state":"active", "base_url":"http://127.0.0.1:47431/ei/uu", 
"leader":"true"}}}}, "router":{"name":"compositeId"}, "replicationFactor":"1", 
"autoCreated":"true", "autoAddReplicas":"false", "maxShardsPerNode":"1"}, 
"collection1":{"shards":{"shard1":{"range":"8000-7fff", "state":"active", 
"replicas":{"core_node1":{"node_name":"127.0.0.1:47437_ei%2Fuu", 
"core":"collection1", "state":"active", "base_url":"http://127.0.0.1:47437/ei/uu", 
"leader":"true"}, "core_node2":{"node_name":"127.0.0.1:47441_ei%2Fuu", 
"core":"collection1", "state":"recovering", 
"base_url":"http://127.0.0.1:47441/ei/uu"}}}}, "router":{"name":"compositeId"}, 
"replicationFactor":"1", "autoCreated":"true", "autoAddReplicas":"false", 
"maxShardsPerNode":"1"}}

Stack Trace:
java.lang.AssertionError: Didn't see all replicas for shard shard1 in 
collection1 come up within 3 ms! ClusterState: {
  "control_collection":{
    "shards":{"shard1":{
        "range":"8000-7fff",
        "state":"active",
        "replicas":{"core_node1":{
            "node_name":"127.0.0.1:47431_ei%2Fuu",
            "core":"collection1",
            "state":"active",
            "base_url":"http://127.0.0.1:47431/ei/uu",
            "leader":"true"}}}},
    "router":{"name":"compositeId"},
    "replicationFactor":"1",
    "autoCreated":"true",
    "autoAddReplicas":"false",
    "maxShardsPerNode":"1"},
  "collection1":{
    "shards":{"shard1":{
        "range":"8000-7fff",
        "state":"active",
        "replicas":{
          "core_node1":{
            "node_name":"127.0.0.1:47437_ei%2Fuu",
            "core":"collection1",
            "state":"active",
            "base_url":"http://127.0.0.1:47437/ei/uu",
            "leader":"true"},
          "core_node2":{
            "node_name":"127.0.0.1:47441_ei%2Fuu",
            "core":"collection1",
            "state":"recovering",
            "base_url":"http://127.0.0.1:47441/ei/uu"}}}},
    "router":{"name":"compositeId"},
    "replicationFactor":"1",
    "autoCreated":"true",
    "autoAddReplicas":"false",
    "maxShardsPerNode":"1"}}
at 
__randomizedtesting.SeedInfo.seed([D815A505B8855056:50419ADF16793DAE]:0)
at org.junit.Assert.fail(Assert.java:93)
at 
org.apache.solr.cloud.AbstractFullDistribZkTestBase.ensureAllReplicasAreActive(AbstractFullDistribZkTestBase.java:1920)
at 
org.apache.solr.cloud.RecoveryAfterSoftCommitTest.test(RecoveryAfterSoftCommitTest.java:102)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1627)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:872)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:886)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:960)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:935)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 

[jira] [Commented] (LUCENE-6422) Add StreamingQuadPrefixTree

2015-04-17 Thread David Smiley (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6422?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14499814#comment-14499814
 ] 

David Smiley commented on LUCENE-6422:
--

That's fair; you're right that the bar shouldn't be the same for committers vs. 
contributors.

 Add StreamingQuadPrefixTree
 ---

 Key: LUCENE-6422
 URL: https://issues.apache.org/jira/browse/LUCENE-6422
 Project: Lucene - Core
  Issue Type: Improvement
  Components: modules/spatial
Affects Versions: 5.x
Reporter: Nicholas Knize
 Attachments: LUCENE-6422.patch, 
 LUCENE-6422_with_SPT_factory_and_benchmark.patch


 To conform to Lucene's inverted index, SpatialStrategies use strings to 
 represent QuadCells and GeoHash cells. Yielding 1 byte per QuadCell and 5 
 bits per GeoHash cell, respectively.  To create the terms representing a 
 Shape, the BytesRefIteratorTokenStream first builds all of the terms into an 
 ArrayList of Cells in memory, then passes the ArrayList.Iterator back to 
 invert() which creates a second lexicographically sorted array of Terms. This 
 doubles the memory consumption when indexing a shape.
 This task introduces a PackedQuadPrefixTree that uses a StreamingStrategy to 
 accomplish the following:
 1.  Create a packed 8-byte representation for a QuadCell
 2.  Build the Packed cells 'on demand' when incrementToken is called
 Improvements over this approach include the generation of the packed cells 
 using an AutoPrefixAutomaton
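 As a rough illustration of the packing idea (the exact bit layout in the patch 
 may differ), a quad cell can be folded into a single long:
 {code}
 // Illustrative only: interleave the x/y path of a quad cell into the high bits
 // of a long and keep the level in the low 6 bits. Layout is an assumption.
 static long packCell(int x, int y, int level) {
   long packed = 0L;
   for (int i = level - 1; i >= 0; i--) {
     packed <<= 2;
     packed |= (((long) (x >>> i) & 1L) << 1) | ((long) (y >>> i) & 1L);
   }
   return (packed << 6) | level;
 }
 {code}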



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-6433) Classifier API should use generics in getClasses

2015-04-17 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6433?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14499847#comment-14499847
 ] 

ASF subversion and git services commented on LUCENE-6433:
-

Commit 1674304 from [~teofili] in branch 'dev/trunk'
[ https://svn.apache.org/r1674304 ]

LUCENE-6433 - using generics in Classifier#getClasses

 Classifier API should use generics in getClasses
 

 Key: LUCENE-6433
 URL: https://issues.apache.org/jira/browse/LUCENE-6433
 Project: Lucene - Core
  Issue Type: Bug
  Components: modules/classification
Affects Versions: 5.1
Reporter: Tommaso Teofili
Assignee: Tommaso Teofili
 Fix For: Trunk


 {{Classifier#getClasses}} APIs return 
 {{List<ClassificationResult<BytesRef>>}} while they should be consistent with 
 the generics used in the other APIs (e.g. {{assignClass}} returns 
 {{ClassificationResult<T>}}).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (LUCENE-6431) Make extrasfs reproducible

2015-04-17 Thread Robert Muir (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6431?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Muir resolved LUCENE-6431.
-
   Resolution: Fixed
Fix Version/s: 5.2
   Trunk

 Make extrasfs reproducible
 --

 Key: LUCENE-6431
 URL: https://issues.apache.org/jira/browse/LUCENE-6431
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Robert Muir
 Fix For: Trunk, 5.2

 Attachments: LUCENE-6431.patch


 Today this is really bad; it can easily cause non-reproducible test failures. 
 It's a per-class thing, but its decisions are based on previous events 
 happening for that class (e.g. directory operations). 
 Even using the filename can't work: it's set up so early in the process, before 
 the test framework even ensures java.io.tempdir and similar exist. Even 
 disregarding that, test files use a temp directory facility and those names 
 are not reproducible (they depend on what already exists, e.g. from a 
 previous test run).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (LUCENE-6434) simplify extrasfs more

2015-04-17 Thread Robert Muir (JIRA)
Robert Muir created LUCENE-6434:
---

 Summary: simplify extrasfs more
 Key: LUCENE-6434
 URL: https://issues.apache.org/jira/browse/LUCENE-6434
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Robert Muir


As Dawid mentions on LUCENE-6431, we can do all conditions once in the ctor, 
since it will not change at the very least.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-6431) Make extrasfs reproducible

2015-04-17 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6431?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14499650#comment-14499650
 ] 

Robert Muir commented on LUCENE-6431:
-

I opened LUCENE-6434 to simplify the code as you suggested. 

 Make extrasfs reproducible
 --

 Key: LUCENE-6431
 URL: https://issues.apache.org/jira/browse/LUCENE-6431
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Robert Muir
 Attachments: LUCENE-6431.patch


 Today this is really bad; it can easily cause non-reproducible test failures. 
 It's a per-class thing, but its decisions are based on previous events 
 happening for that class (e.g. directory operations). 
 Even using the filename can't work: it's set up so early in the process, before 
 the test framework even ensures java.io.tempdir and similar exist. Even 
 disregarding that, test files use a temp directory facility and those names 
 are not reproducible (they depend on what already exists, e.g. from a 
 previous test run).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-6431) Make extrasfs reproducible

2015-04-17 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6431?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14499646#comment-14499646
 ] 

ASF subversion and git services commented on LUCENE-6431:
-

Commit 1674274 from [~rcmuir] in branch 'dev/trunk'
[ https://svn.apache.org/r1674274 ]

LUCENE-6431: make ExtrasFS reproducible

 Make extrasfs reproducible
 --

 Key: LUCENE-6431
 URL: https://issues.apache.org/jira/browse/LUCENE-6431
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Robert Muir
 Attachments: LUCENE-6431.patch


 Today this is really bad; it can easily cause non-reproducible test failures. 
 It's a per-class thing, but its decisions are based on previous events 
 happening for that class (e.g. directory operations). 
 Even using the filename can't work: it's set up so early in the process, before 
 the test framework even ensures java.io.tempdir and similar exist. Even 
 disregarding that, test files use a temp directory facility and those names 
 are not reproducible (they depend on what already exists, e.g. from a 
 previous test run).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-6431) Make extrasfs reproducible

2015-04-17 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6431?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14499645#comment-14499645
 ] 

Robert Muir commented on LUCENE-6431:
-

I'm not worried about randomness of this thing at all. That's a false economy.

Reproducibility has to be number one. I'll commit this and anything else is 
sugar-on-top.

 Make extrasfs reproducible
 --

 Key: LUCENE-6431
 URL: https://issues.apache.org/jira/browse/LUCENE-6431
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Robert Muir
 Attachments: LUCENE-6431.patch


 Today this is really bad; it can easily cause non-reproducible test failures. 
 It's a per-class thing, but its decisions are based on previous events 
 happening for that class (e.g. directory operations). 
 Even using the filename can't work: it's set up so early in the process, before 
 the test framework even ensures java.io.tempdir and similar exist. Even 
 disregarding that, test files use a temp directory facility and those names 
 are not reproducible (they depend on what already exists, e.g. from a 
 previous test run).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (LUCENE-6432) Add SuppressReproduceLine

2015-04-17 Thread Robert Muir (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6432?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Muir resolved LUCENE-6432.
-
   Resolution: Fixed
Fix Version/s: 5.2
   Trunk

 Add SuppressReproduceLine
 -

 Key: LUCENE-6432
 URL: https://issues.apache.org/jira/browse/LUCENE-6432
 Project: Lucene - Core
  Issue Type: Task
Reporter: Robert Muir
 Fix For: Trunk, 5.2

 Attachments: LUCENE-6432.patch


 {code}
   /**
* Suppress the default {@code reproduce with: ant test...}
* Your own listener can be added as needed for your build.
*/
   @Documented
   @Inherited
   @Retention(RetentionPolicy.RUNTIME)
   @Target(ElementType.TYPE)
   public @interface SuppressReproduceLine {}
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-6432) Add SuppressReproduceLine

2015-04-17 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6432?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14499660#comment-14499660
 ] 

ASF subversion and git services commented on LUCENE-6432:
-

Commit 1674278 from [~rcmuir] in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1674278 ]

LUCENE-6432: add SuppressReproduceLine

 Add SuppressReproduceLine
 -

 Key: LUCENE-6432
 URL: https://issues.apache.org/jira/browse/LUCENE-6432
 Project: Lucene - Core
  Issue Type: Task
Reporter: Robert Muir
 Fix For: Trunk, 5.2

 Attachments: LUCENE-6432.patch


 {code}
   /**
* Suppress the default {@code reproduce with: ant test...}
* Your own listener can be added as needed for your build.
*/
   @Documented
   @Inherited
   @Retention(RetentionPolicy.RUNTIME)
   @Target(ElementType.TYPE)
   public @interface SuppressReproduceLine {}
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-6432) Add SuppressReproduceLine

2015-04-17 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6432?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14499657#comment-14499657
 ] 

ASF subversion and git services commented on LUCENE-6432:
-

Commit 1674277 from [~rcmuir] in branch 'dev/trunk'
[ https://svn.apache.org/r1674277 ]

LUCENE-6432: add SuppressReproduceLine

 Add SuppressReproduceLine
 -

 Key: LUCENE-6432
 URL: https://issues.apache.org/jira/browse/LUCENE-6432
 Project: Lucene - Core
  Issue Type: Task
Reporter: Robert Muir
 Attachments: LUCENE-6432.patch


 {code}
   /**
* Suppress the default {@code reproduce with: ant test...}
* Your own listener can be added as needed for your build.
*/
   @Documented
   @Inherited
   @Retention(RetentionPolicy.RUNTIME)
   @Target(ElementType.TYPE)
   public @interface SuppressReproduceLine {}
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-7121) Solr nodes should go down based on configurable thresholds and not rely on resource exhaustion

2015-04-17 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7121?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14499864#comment-14499864
 ] 

Mark Miller commented on SOLR-7121:
---

Sorry it's been a while with no response. I'll try and do a review of this soon.

 Solr nodes should go down based on configurable thresholds and not rely on 
 resource exhaustion
 --

 Key: SOLR-7121
 URL: https://issues.apache.org/jira/browse/SOLR-7121
 Project: Solr
  Issue Type: New Feature
Reporter: Sachin Goyal
 Attachments: SOLR-7121.patch, SOLR-7121.patch, SOLR-7121.patch, 
 SOLR-7121.patch, SOLR-7121.patch


 Currently, there is no way to control when a Solr node goes down.
 If the server is having high GC pauses or too many threads or is just getting 
 too many queries due to some bad load-balancer, the cores in the machine keep 
 on serving until they exhaust the machine's resources and everything comes 
 to a stall.
 Such a slow-dying core can affect other cores as well by taking a huge amount 
 of time to serve their distributed queries.
 There should be a way to specify some threshold values beyond which the 
 targeted core can detect its ill-health and proactively go down to recover.
 When the load improves, the core should come up automatically.
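 As a purely illustrative sketch of such a threshold check (metric sources, 
 hooks and threshold names are all made up):
 {code}
 // Illustrative only: compare simple health metrics against configured
 // thresholds and take the core out of rotation while they are exceeded.
 long gcPauseMillis = recentGcPauseMillis();        // assumed metric source
 int activeThreads = Thread.activeCount();
 boolean unhealthy = gcPauseMillis > maxGcPauseMillis || activeThreads > maxThreads;
 if (unhealthy && !coreMarkedDown) {
   markCoreDown();   // assumed hook: stop serving queries until metrics recover
 } else if (!unhealthy && coreMarkedDown) {
   markCoreUp();     // assumed hook: rejoin once the load improves
 }
 {code}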



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (LUCENE-6433) Classifier API should use generics in getClasses

2015-04-17 Thread Tommaso Teofili (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6433?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tommaso Teofili resolved LUCENE-6433.
-
   Resolution: Fixed
Fix Version/s: 5.2

 Classifier API should use generics in getClasses
 

 Key: LUCENE-6433
 URL: https://issues.apache.org/jira/browse/LUCENE-6433
 Project: Lucene - Core
  Issue Type: Bug
  Components: modules/classification
Affects Versions: 5.1
Reporter: Tommaso Teofili
Assignee: Tommaso Teofili
 Fix For: Trunk, 5.2


 {{Classifier#getClasses}} APIs return 
 {{List<ClassificationResult<BytesRef>>}} while they should be consistent with 
 the generics used in the other APIs (e.g. {{assignClass}} returns 
 {{ClassificationResult<T>}}).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-7176) allow zkcli to modify JSON

2015-04-17 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7176?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14499895#comment-14499895
 ] 

Mark Miller commented on SOLR-7176:
---

bq. I would like to make another proposal, zkcli.sh -zkhost 127.0.0.1:9983 
-collection-action CLUSTERPROP -name urlScheme -val https

Something along these lines seems like the best current proposal to me. I don't 
think it really calls for anything more expansive.

bq. If we rename solr to solr.sh, then the command will be different on *NIX 
and unified documentation becomes more difficult.

FWIW, I think dropping the accepted and normal usage of file extensions to aid 
in unified doc is a terrible idea.

 allow zkcli to modify JSON
 --

 Key: SOLR-7176
 URL: https://issues.apache.org/jira/browse/SOLR-7176
 Project: Solr
  Issue Type: New Feature
Reporter: Yonik Seeley
Priority: Minor

 To enable SSL, we have instructions like the following:
 {code}
 server/scripts/cloud-scripts/zkcli.sh -zkhost localhost:2181 -cmd put 
 /clusterprops.json '{"urlScheme":"https"}'
 {code}
 Overwriting the value won't work well when we have more properties to put in 
 clusterprops.  We should be able to change individual values or perhaps merge 
 values.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-6435) java.util.ConcurrentModificationException: Removal from the cache failed error in SimpleNaiveBayesClassifier

2015-04-17 Thread Tommaso Teofili (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6435?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tommaso Teofili updated LUCENE-6435:

Description: 
While using {{SimpleNaiveBayesClassifier}} on a very large index (all Italian 
Wikipedia articles) I see the following code triggering a 
{{ConcurrentModificationException}} when evicting the {{Query}} from the 
{{LRUCache}}.
{code}
BooleanQuery booleanQuery = new BooleanQuery();
BooleanQuery subQuery = new BooleanQuery();
for (String textFieldName : textFieldNames) {
  subQuery.add(new BooleanClause(new TermQuery(new Term(textFieldName, 
word)), BooleanClause.Occur.SHOULD));
}
booleanQuery.add(new BooleanClause(subQuery, BooleanClause.Occur.MUST));
booleanQuery.add(new BooleanClause(new TermQuery(new Term(classFieldName, 
c)), BooleanClause.Occur.MUST));
//...
TotalHitCountCollector totalHitCountCollector = new 
TotalHitCountCollector();
indexSearcher.search(booleanQuery, totalHitCountCollector);
return totalHitCountCollector.getTotalHits();
{code}

this is the complete stacktrace:
{code}
java.util.ConcurrentModificationException: Removal from the cache failed! This 
is probably due to a query which has been modified after having been put into  
the cache or a badly implemented clone(). Query class: [class 
org.apache.lucene.search.BooleanQuery], query: [#text:panoram #cat:1356]
{code}

The strange thing is that the above doesn't happen if I change the last lines 
of the above piece of code to not use the {{TotalHitCountCollector}}:
{code}
return indexSearcher.search(booleanQuery, 1).totalHits;
{code}

  was:
While using {{SimpleNaiveBayesClassifier}} on a very large index (all Italian 
Wikipedia articles) I see the following code triggering a 
{{ConcurrentModificationException}} when evicting the {{Query}} from the 
{{LRUCache}}.
{code}
BooleanQuery booleanQuery = new BooleanQuery();
BooleanQuery subQuery = new BooleanQuery();
for (String textFieldName : textFieldNames) {
  subQuery.add(new BooleanClause(new TermQuery(new Term(textFieldName, 
word)), BooleanClause.Occur.SHOULD));
}
booleanQuery.add(new BooleanClause(subQuery, BooleanClause.Occur.MUST));
booleanQuery.add(new BooleanClause(new TermQuery(new Term(classFieldName, 
c)), BooleanClause.Occur.MUST));
//...
TotalHitCountCollector totalHitCountCollector = new 
TotalHitCountCollector();
indexSearcher.search(booleanQuery, totalHitCountCollector);
return totalHitCountCollector.getTotalHits();
{code}

this is the complete stacktrace:
{noformat}
java.util.ConcurrentModificationException: Removal from the cache failed! This 
is probably due to a query which has been modified after having been put into  
the cache or a badly implemented clone(). Query class: [class 
org.apache.lucene.search.BooleanQuery], query: [#text:panoram #cat:1356]
{noformat}

The strange thing is that the above doesn't happen if I change the last lines 
of the above piece of code to not use the {{TotalHitCountCollector}}:
{code}
return indexSearcher.search(booleanQuery, 1).totalHits;
{code}


 java.util.ConcurrentModificationException: Removal from the cache failed 
 error in SimpleNaiveBayesClassifier
 

 Key: LUCENE-6435
 URL: https://issues.apache.org/jira/browse/LUCENE-6435
 Project: Lucene - Core
  Issue Type: Bug
  Components: modules/classification
Affects Versions: 5.1
Reporter: Tommaso Teofili
Assignee: Tommaso Teofili
 Fix For: Trunk


 While using {{SimpleNaiveBayesClassifier}} on a very large index (all Italian 
 Wikipedia articles) I see the following code triggering a 
 {{ConcurrentModificationException}} when evicting the {{Query}} from the 
 {{LRUCache}}.
 {code}
 BooleanQuery booleanQuery = new BooleanQuery();
 BooleanQuery subQuery = new BooleanQuery();
 for (String textFieldName : textFieldNames) {
   subQuery.add(new BooleanClause(new TermQuery(new Term(textFieldName, 
 word)), BooleanClause.Occur.SHOULD));
 }
 booleanQuery.add(new BooleanClause(subQuery, BooleanClause.Occur.MUST));
 booleanQuery.add(new BooleanClause(new TermQuery(new Term(classFieldName, 
 c)), BooleanClause.Occur.MUST));
 //...
 TotalHitCountCollector totalHitCountCollector = new 
 TotalHitCountCollector();
 indexSearcher.search(booleanQuery, totalHitCountCollector);
 return totalHitCountCollector.getTotalHits();
 {code}
 this is the complete stacktrace:
 {code}
 java.util.ConcurrentModificationException: Removal from the cache failed! 
 This is probably due to a query which has been modified after having been put 
 into  the cache or a badly implemented clone(). Query class: [class 
 org.apache.lucene.search.BooleanQuery], query: [#text:panoram 

[jira] [Commented] (LUCENE-6433) Classifier API should use generics in getClasses

2015-04-17 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6433?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14499875#comment-14499875
 ] 

ASF subversion and git services commented on LUCENE-6433:
-

Commit 1674317 from [~teofili] in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1674317 ]

LUCENE-6433 - using generics in Classifier#getClasses [branch_5x]

 Classifier API should use generics in getClasses
 

 Key: LUCENE-6433
 URL: https://issues.apache.org/jira/browse/LUCENE-6433
 Project: Lucene - Core
  Issue Type: Bug
  Components: modules/classification
Affects Versions: 5.1
Reporter: Tommaso Teofili
Assignee: Tommaso Teofili
 Fix For: Trunk


 {{Classifier#getClasses}} APIs return 
 {{List<ClassificationResult<BytesRef>>}} while they should be consistent with 
 the generics used in the other APIs (e.g. {{assignClass}} returns 
 {{ClassificationResult<T>}}).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (LUCENE-6435) java.util.ConcurrentModificationException: Removal from the cache failed error in SimpleNaiveBayesClassifier

2015-04-17 Thread Tommaso Teofili (JIRA)
Tommaso Teofili created LUCENE-6435:
---

 Summary: java.util.ConcurrentModificationException: Removal from 
the cache failed error in SimpleNaiveBayesClassifier
 Key: LUCENE-6435
 URL: https://issues.apache.org/jira/browse/LUCENE-6435
 Project: Lucene - Core
  Issue Type: Bug
  Components: modules/classification
Affects Versions: 5.1
Reporter: Tommaso Teofili
Assignee: Tommaso Teofili
 Fix For: Trunk


While using {{SimpleNaiveBayesClassifier}} on a very large index (all Italian 
Wikipedia articles) I see the following code triggering a 
{{ConcurrentModificationException}} when evicting the {{Query}} from the 
{{LRUCache}}.
{code}
BooleanQuery booleanQuery = new BooleanQuery();
BooleanQuery subQuery = new BooleanQuery();
for (String textFieldName : textFieldNames) {
  subQuery.add(new BooleanClause(new TermQuery(new Term(textFieldName, 
word)), BooleanClause.Occur.SHOULD));
}
booleanQuery.add(new BooleanClause(subQuery, BooleanClause.Occur.MUST));
booleanQuery.add(new BooleanClause(new TermQuery(new Term(classFieldName, 
c)), BooleanClause.Occur.MUST));
//...
TotalHitCountCollector totalHitCountCollector = new 
TotalHitCountCollector();
indexSearcher.search(booleanQuery, totalHitCountCollector);
return totalHitCountCollector.getTotalHits();
{code}

this is the complete stacktrace:
{noformat}
java.util.ConcurrentModificationException: Removal from the cache failed! This 
is probably due to a query which has been modified after having been put into  
the cache or a badly implemented clone(). Query class: [class 
org.apache.lucene.search.BooleanQuery], query: [#text:panoram #cat:1356]
{noformat}

The strange thing is that the above doesn't happen if I change the last lines 
of the above piece of code to not use the {{TotalHitCountCollector}}:
{code}
return indexSearcher.search(booleanQuery, 1).totalHits;
{code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-6435) java.util.ConcurrentModificationException: Removal from the cache failed error in SimpleNaiveBayesClassifier

2015-04-17 Thread Tommaso Teofili (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6435?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tommaso Teofili updated LUCENE-6435:

Description: 
While using {{SimpleNaiveBayesClassifier}} on a very large index (all Italian 
Wikipedia articles) I see the following code triggering a 
{{ConcurrentModificationException}} when evicting the {{Query}} from the 
{{LRUCache}}.
{code}
BooleanQuery booleanQuery = new BooleanQuery();
BooleanQuery subQuery = new BooleanQuery();
for (String textFieldName : textFieldNames) {
  subQuery.add(new BooleanClause(new TermQuery(new Term(textFieldName, 
word)), BooleanClause.Occur.SHOULD));
}
booleanQuery.add(new BooleanClause(subQuery, BooleanClause.Occur.MUST));
booleanQuery.add(new BooleanClause(new TermQuery(new Term(classFieldName, 
c)), BooleanClause.Occur.MUST));
//...
TotalHitCountCollector totalHitCountCollector = new 
TotalHitCountCollector();
indexSearcher.search(booleanQuery, totalHitCountCollector);
return totalHitCountCollector.getTotalHits();
{code}

this is the complete stacktrace:
{code}
java.util.ConcurrentModificationException: Removal from the cache failed! This 
is probably due to a query which has been modified after having been put into  
the cache or a badly implemented clone(). Query class: [class 
org.apache.lucene.search.BooleanQuery], query: [#text:panoram #cat:1356]
at 
__randomizedtesting.SeedInfo.seed([B6513DEC3681FEF5:138235BE33532634]:0)
at 
org.apache.lucene.search.LRUQueryCache.evictIfNecessary(LRUQueryCache.java:285)
at 
org.apache.lucene.search.LRUQueryCache.putIfAbsent(LRUQueryCache.java:268)
at 
org.apache.lucene.search.LRUQueryCache$CachingWrapperWeight.scorer(LRUQueryCache.java:569)
at 
org.apache.lucene.search.ConstantScoreWeight.scorer(ConstantScoreWeight.java:82)
at org.apache.lucene.search.Weight.bulkScorer(Weight.java:137)
at org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:560)
at org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:367)
at 
org.apache.lucene.classification.SimpleNaiveBayesClassifier.getWordFreqForClass(SimpleNaiveBayesClassifier.java:288)
at 
org.apache.lucene.classification.SimpleNaiveBayesClassifier.calculateLogLikelihood(SimpleNaiveBayesClassifier.java:248)
at 
org.apache.lucene.classification.SimpleNaiveBayesClassifier.assignClassNormalizedList(SimpleNaiveBayesClassifier.java:169)
at 
org.apache.lucene.classification.SimpleNaiveBayesClassifier.assignClass(SimpleNaiveBayesClassifier.java:125)
at 
org.apache.lucene.classification.WikipediaTest.testItalianWikipedia(TestLuceneIndexClassifier.java:126)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1627)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:872)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:886)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:845)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:747)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:792)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 

[jira] [Updated] (LUCENE-6435) java.util.ConcurrentModificationException: Removal from the cache failed error in SimpleNaiveBayesClassifier

2015-04-17 Thread Tommaso Teofili (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6435?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tommaso Teofili updated LUCENE-6435:

Description: 
While using {{SimpleNaiveBayesClassifier}} on a very large index (all Italian 
Wikipedia articles) I see the following code triggering a 
{{ConcurrentModificationException}} when evicting the {{Query}} from the 
{{LRUCache}}.
{code}
BooleanQuery booleanQuery = new BooleanQuery();
BooleanQuery subQuery = new BooleanQuery();
for (String textFieldName : textFieldNames) {
  subQuery.add(new BooleanClause(new TermQuery(new Term(textFieldName, 
word)), BooleanClause.Occur.SHOULD));
}
booleanQuery.add(new BooleanClause(subQuery, BooleanClause.Occur.MUST));
booleanQuery.add(new BooleanClause(new TermQuery(new Term(classFieldName, 
c)), BooleanClause.Occur.MUST));
//...
TotalHitCountCollector totalHitCountCollector = new 
TotalHitCountCollector();
indexSearcher.search(booleanQuery, totalHitCountCollector);
return totalHitCountCollector.getTotalHits();
{code}

this is the complete stacktrace:
{code}
java.util.ConcurrentModificationException: Removal from the cache failed! This 
is probably due to a query which has been modified after having been put into  
the cache or a badly implemented clone(). Query class: [class 
org.apache.lucene.search.BooleanQuery], query: [#text:panoram #cat:1356]
at 
__randomizedtesting.SeedInfo.seed([B6513DEC3681FEF5:138235BE33532634]:0)
at 
org.apache.lucene.search.LRUQueryCache.evictIfNecessary(LRUQueryCache.java:285)
at 
org.apache.lucene.search.LRUQueryCache.putIfAbsent(LRUQueryCache.java:268)
at 
org.apache.lucene.search.LRUQueryCache$CachingWrapperWeight.scorer(LRUQueryCache.java:569)
at 
org.apache.lucene.search.ConstantScoreWeight.scorer(ConstantScoreWeight.java:82)
at org.apache.lucene.search.Weight.bulkScorer(Weight.java:137)
at org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:560)
at org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:367)
at 
org.apache.lucene.classification.SimpleNaiveBayesClassifier.getWordFreqForClass(SimpleNaiveBayesClassifier.java:288)
at 
org.apache.lucene.classification.SimpleNaiveBayesClassifier.calculateLogLikelihood(SimpleNaiveBayesClassifier.java:248)
at 
org.apache.lucene.classification.SimpleNaiveBayesClassifier.assignClassNormalizedList(SimpleNaiveBayesClassifier.java:169)
at 
org.apache.lucene.classification.SimpleNaiveBayesClassifier.assignClass(SimpleNaiveBayesClassifier.java:125)
at 
org.apache.lucene.classification.WikipediaTest.testItalianWikipedia(WikipediaTest.java:126)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1627)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:872)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:886)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:845)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:747)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:792)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 

[JENKINS] Lucene-Solr-5.x-Linux (64bit/jdk1.8.0_40) - Build # 12163 - Failure!

2015-04-17 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-Linux/12163/
Java: 64bit/jdk1.8.0_40 -XX:+UseCompressedOops -XX:+UseSerialGC

1 tests failed.
FAILED:  org.apache.solr.cloud.ChaosMonkeySafeLeaderTest.test

Error Message:
expected:<0> but was:<1>

Stack Trace:
java.lang.AssertionError: expected:<0> but was:<1>
at 
__randomizedtesting.SeedInfo.seed([5DE634C3AF4F6420:D5B20B1901B309D8]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.failNotEquals(Assert.java:647)
at org.junit.Assert.assertEquals(Assert.java:128)
at org.junit.Assert.assertEquals(Assert.java:472)
at org.junit.Assert.assertEquals(Assert.java:456)
at 
org.apache.solr.cloud.ChaosMonkeySafeLeaderTest.test(ChaosMonkeySafeLeaderTest.java:165)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1627)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:872)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:886)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:960)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:935)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:845)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:747)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:792)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

VOTE: RC0 Release apache-solr-ref-guide-5.1.pdf

2015-04-17 Thread Cassandra Targett
Please vote for the release of the Apache Solr Reference Guide for Solr 5.1.

The PDF is available at:
https://dist.apache.org/repos/dist/dev/lucene/solr/ref-guide/apache-solr-ref-guide-5.1-RC0/

Steve Rowe & I made some big changes to the styling of the guide, so please
raise any issues you find in your review.

Here's my +1.

Thanks,
Cassandra


[jira] [Commented] (LUCENE-6435) java.util.ConcurrentModificationException: Removal from the cache failed error in SimpleNaiveBayesClassifier

2015-04-17 Thread Adrien Grand (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6435?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14499938#comment-14499938
 ] 

Adrien Grand commented on LUCENE-6435:
--

Hi Tommaso, the reason why it does not fail with 
indexSearcher.search(booleanQuery, 1).totalHits is that in that case you are 
computing scores, so caching does not kick in. The above exception means that 
somehow you executed a query against an IndexSearcher and then modified it 
(e.g. by adding clauses). What happens under the hood is that after the first 
search operation the query is used as a cache key, but the later modifications 
changed its hashcode, which made eviction from the query cache impossible. 
(This is one of the motivations to make queries immutable.)
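
A self-contained sketch of that failure mode (illustrative only, not 
LRUQueryCache's internals): the query is stored under one hashcode, mutated, 
and can then never be found again for eviction.
{code}
import java.util.HashMap;
import java.util.Map;
import org.apache.lucene.index.Term;
import org.apache.lucene.search.BooleanClause;
import org.apache.lucene.search.BooleanQuery;
import org.apache.lucene.search.Query;
import org.apache.lucene.search.TermQuery;

public class MutableCacheKeyDemo {
  public static void main(String[] args) {
    Map<Query, Object> cache = new HashMap<>();  // stand-in for the query cache
    BooleanQuery q = new BooleanQuery();
    q.add(new TermQuery(new Term("text", "panoram")), BooleanClause.Occur.MUST);
    cache.put(q, new Object());                  // stored under the current hashCode
    q.add(new TermQuery(new Term("cat", "1356")), BooleanClause.Occur.MUST); // mutates the key
    System.out.println(cache.remove(q));         // prints null: the entry can no longer be evicted
  }
}
{code}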

 java.util.ConcurrentModificationException: Removal from the cache failed 
 error in SimpleNaiveBayesClassifier
 

 Key: LUCENE-6435
 URL: https://issues.apache.org/jira/browse/LUCENE-6435
 Project: Lucene - Core
  Issue Type: Bug
  Components: modules/classification
Affects Versions: 5.1
Reporter: Tommaso Teofili
Assignee: Tommaso Teofili
 Fix For: Trunk


 While using {{SimpleNaiveBayesClassifier}} on a very large index (all Italian 
 Wikipedia articles) I see the following code triggering a 
 {{ConcurrentModificationException}} when evicting the {{Query}} from the 
 {{LRUCache}}.
 {code}
 BooleanQuery booleanQuery = new BooleanQuery();
 BooleanQuery subQuery = new BooleanQuery();
 for (String textFieldName : textFieldNames) {
   subQuery.add(new BooleanClause(new TermQuery(new Term(textFieldName, 
 word)), BooleanClause.Occur.SHOULD));
 }
 booleanQuery.add(new BooleanClause(subQuery, BooleanClause.Occur.MUST));
 booleanQuery.add(new BooleanClause(new TermQuery(new Term(classFieldName, 
 c)), BooleanClause.Occur.MUST));
 //...
 TotalHitCountCollector totalHitCountCollector = new 
 TotalHitCountCollector();
 indexSearcher.search(booleanQuery, totalHitCountCollector);
 return totalHitCountCollector.getTotalHits();
 {code}
 this is the complete stacktrace:
 {code}
 java.util.ConcurrentModificationException: Removal from the cache failed! 
 This is probably due to a query which has been modified after having been put 
 into  the cache or a badly implemented clone(). Query class: [class 
 org.apache.lucene.search.BooleanQuery], query: [#text:panoram #cat:1356]
   at 
 __randomizedtesting.SeedInfo.seed([B6513DEC3681FEF5:138235BE33532634]:0)
   at 
 org.apache.lucene.search.LRUQueryCache.evictIfNecessary(LRUQueryCache.java:285)
   at 
 org.apache.lucene.search.LRUQueryCache.putIfAbsent(LRUQueryCache.java:268)
   at 
 org.apache.lucene.search.LRUQueryCache$CachingWrapperWeight.scorer(LRUQueryCache.java:569)
   at 
 org.apache.lucene.search.ConstantScoreWeight.scorer(ConstantScoreWeight.java:82)
   at org.apache.lucene.search.Weight.bulkScorer(Weight.java:137)
   at org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:560)
   at org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:367)
   at 
 org.apache.lucene.classification.SimpleNaiveBayesClassifier.getWordFreqForClass(SimpleNaiveBayesClassifier.java:288)
   at 
 org.apache.lucene.classification.SimpleNaiveBayesClassifier.calculateLogLikelihood(SimpleNaiveBayesClassifier.java:248)
   at 
 org.apache.lucene.classification.SimpleNaiveBayesClassifier.assignClassNormalizedList(SimpleNaiveBayesClassifier.java:169)
   at 
 org.apache.lucene.classification.SimpleNaiveBayesClassifier.assignClass(SimpleNaiveBayesClassifier.java:125)
   at 
 org.apache.lucene.classification.WikipediaTest.testItalianWikipedia(WikipediaTest.java:126)
   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
   at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
   at 
 com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1627)
   at 
 com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:836)
   at 
 com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:872)
   at 
 com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:886)
   at 
 org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
   at 
 org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
   at 
 org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
   at 
 

[jira] [Commented] (SOLR-7408) Race condition can cause a config directory listener to be removed

2015-04-17 Thread Anshum Gupta (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7408?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14499956#comment-14499956
 ] 

Anshum Gupta commented on SOLR-7408:


+1 to both of those!

 Race condition can cause a config directory listener to be removed
 --

 Key: SOLR-7408
 URL: https://issues.apache.org/jira/browse/SOLR-7408
 Project: Solr
  Issue Type: Bug
  Components: SolrCloud
Reporter: Shai Erera
Assignee: Shai Erera
 Attachments: SOLR-7408.patch, SOLR-7408.patch


 This has been reported here: http://markmail.org/message/ynkm2axkdprppgef, 
 and I was able to reproduce it in a test, although I am only able to 
 reproduce if I put break points and manually simulate the problematic context 
 switches.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-6427) BitSet fixes - assert on presence of 'ghost bits' and others

2015-04-17 Thread Adrien Grand (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6427?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14499958#comment-14499958
 ] 

Adrien Grand commented on LUCENE-6427:
--

To be honest this doesn't look like a valid use-case of scanIfEmpty to me. As 
the code comment suggests, we should rewrite this code to not check that the 
bitset is empty in a loop. In practice we are trying to move away from these 
linear-time operations of FixedBitSet (nextDoc(), cardinality(), ...) as much 
as we can. For instance when we use this class for query execution (for 
multi-term queries mainly), we strive to only use a FixedBitSet if we know that 
the set of documents that we are going to store is dense enough for FixedBitSet 
to not be slower than a sparser implementation.

Other than the new unused methods (scanIfEmpty and LongBitset.flip) the PR 
looks good to me now.
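
As a hedged sketch of that density trade-off (the 10% threshold below is made 
up for illustration; it is not the heuristic Lucene actually applies):
{code}
import org.apache.lucene.util.BitSet;
import org.apache.lucene.util.FixedBitSet;
import org.apache.lucene.util.SparseFixedBitSet;

public class BitSetChoice {
  // FixedBitSet costs maxDoc/8 bytes regardless of how many bits end up set,
  // so prefer it only when the expected fill ratio makes that worthwhile.
  static BitSet forExpectedCardinality(int expectedDocs, int maxDoc) {
    if ((long) expectedDocs * 10 >= maxDoc) {   // assumed ~10% density cut-off
      return new FixedBitSet(maxDoc);
    }
    return new SparseFixedBitSet(maxDoc);
  }
}
{code}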

 BitSet fixes - assert on presence of 'ghost bits' and others
 

 Key: LUCENE-6427
 URL: https://issues.apache.org/jira/browse/LUCENE-6427
 Project: Lucene - Core
  Issue Type: Bug
  Components: core/other
Reporter: Luc Vanlerberghe

 Fixes after reviewing org.apache.lucene.util.FixedBitSet, LongBitSet and 
 corresponding tests:
 * Some methods rely on the fact that no bits are set after numBits (what I 
 call 'ghost' bits here).
 ** cardinality, nextSetBit, intersects and others may yield wrong results
 ** If ghost bits are present, they may become visible after ensureCapacity is 
 called.
 ** The tests deliberately create bitsets with ghost bits, but then do not 
 detect these failures
 * FixedBitSet.cardinality scans the complete backing array, even if only 
 numWords are in use
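
 For illustration, a minimal check for such ghost bits might look like the 
 following (a sketch of the proposed assertion, not the attached PR; it only 
 assumes FixedBitSet's public bits2words helper):
 {code}
 import org.apache.lucene.util.FixedBitSet;

 public class GhostBitsCheck {
   // Returns true if no bit at index >= numBits is set in the backing array.
   static boolean noGhostBits(long[] bits, int numBits) {
     int numWords = FixedBitSet.bits2words(numBits);
     // bits beyond numBits inside the last used word must be zero
     if ((numBits & 63) != 0 && (bits[numWords - 1] & (-1L << numBits)) != 0) {
       return false;
     }
     // any extra words past numWords must be zero as well
     for (int i = numWords; i < bits.length; i++) {
       if (bits[i] != 0) {
         return false;
       }
     }
     return true;
   }
 }
 {code}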



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6625) HttpClient callback in HttpSolrServer

2015-04-17 Thread Anshum Gupta (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6625?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14499977#comment-14499977
 ] 

Anshum Gupta commented on SOLR-6625:


Thanks for the patch Ishan. I like the simplified approach but among other 
things that Greg pointed out, I'm most concerned about enforcing that every 
call to httpclient has the complete and correct information.

 HttpClient callback in HttpSolrServer
 -

 Key: SOLR-6625
 URL: https://issues.apache.org/jira/browse/SOLR-6625
 Project: Solr
  Issue Type: Improvement
  Components: SolrJ
Reporter: Gregory Chanan
Assignee: Gregory Chanan
Priority: Minor
 Attachments: SOLR-6625.patch, SOLR-6625.patch, SOLR-6625.patch, 
 SOLR-6625.patch, SOLR-6625.patch, SOLR-6625.patch, 
 SOLR-6625_interceptor.patch, SOLR-6625_r1654079.patch, 
 SOLR-6625_r1654079.patch


 Some of our setups use Solr in a SPNego/kerberos setup (we've done this by 
 adding our own filters to the web.xml).  We have an issue in that SPNego 
 requires a negotiation step, but some HttpSolrServer requests are not 
 repeatable, notably the PUT/POST requests.  So, what happens is, 
 HttpSolrServer sends the requests, the server responds with a negotiation 
 request, and the request fails because the request is not repeatable.  We've 
 modified our code to send a repeatable request beforehand in these cases.
 It would be nicer if HttpSolrServer provided a pre/post callback when it was 
 making an httpclient request.  This would allow administrators to make 
 changes to the request for authentication purposes, and would allow users to 
 make per-request changes to the httpclient calls (i.e. modify httpclient 
 requestconfig to modify the timeout on a per-request basis).
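
 As a hedged illustration of the kind of hook being asked for (plain HttpClient 
 4.x, not the SOLR-6625 patch; the 2000 ms timeout is an arbitrary example), a 
 per-request callback could look like:
 {code}
 import org.apache.http.HttpException;
 import org.apache.http.HttpRequest;
 import org.apache.http.HttpRequestInterceptor;
 import org.apache.http.client.config.RequestConfig;
 import org.apache.http.client.protocol.HttpClientContext;
 import org.apache.http.protocol.HttpContext;

 // Hypothetical interceptor adjusting per-request config; HttpSolrServer would
 // need to expose a way to register something like this around its httpclient calls.
 public class PerRequestTimeoutInterceptor implements HttpRequestInterceptor {
   @Override
   public void process(HttpRequest request, HttpContext context) throws HttpException {
     RequestConfig config = RequestConfig.custom()
         .setSocketTimeout(2000)   // example per-request timeout in ms
         .build();
     HttpClientContext.adapt(context).setRequestConfig(config);
   }
 }
 {code}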



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-5.x-Linux (64bit/jdk1.8.0_60-ea-b06) - Build # 12165 - Failure!

2015-04-17 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-Linux/12165/
Java: 64bit/jdk1.8.0_60-ea-b06 -XX:-UseCompressedOops -XX:+UseSerialGC

1 tests failed.
FAILED:  org.apache.solr.cloud.TestDistribDocBasedVersion.test

Error Message:
org.apache.solr.client.solrj.SolrServerException: Timeout occured while waiting 
response from server at: https://127.0.0.1:56798/collection1

Stack Trace:
org.apache.solr.client.solrj.SolrServerException: 
org.apache.solr.client.solrj.SolrServerException: Timeout occured while waiting 
response from server at: https://127.0.0.1:56798/collection1
at 
__randomizedtesting.SeedInfo.seed([B0C30417129F74A0:38973BCDBC631958]:0)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.directUpdate(CloudSolrClient.java:632)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:955)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:846)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:789)
at org.apache.solr.client.solrj.SolrClient.request(SolrClient.java:1220)
at 
org.apache.solr.cloud.TestDistribDocBasedVersion.vadd(TestDistribDocBasedVersion.java:257)
at 
org.apache.solr.cloud.TestDistribDocBasedVersion.doTestDocVersions(TestDistribDocBasedVersion.java:157)
at 
org.apache.solr.cloud.TestDistribDocBasedVersion.test(TestDistribDocBasedVersion.java:100)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1627)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:872)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:886)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:960)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:935)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:845)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:747)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:792)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[jira] [Comment Edited] (LUCENE-6422) Add StreamingQuadPrefixTree

2015-04-17 Thread Nicholas Knize (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6422?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14500324#comment-14500324
 ] 

Nicholas Knize edited comment on LUCENE-6422 at 4/17/15 6:42 PM:
-

Updated to remove StreamingPrefixTreeStrategy.  PackedQuadTree is now self 
contained to one file and uses the RecursivePrefixTreeStrategy but ignores 
leafyBranchPruning.  

Still only integrated and tested on branch_5x, per discussion above.


was (Author: nknize):
Updated to remove StreamingPrefixTreeStrategy.  PackedQuadTree is now self 
contained to one file and uses the RecursivePrefixTreeStrategy but ignores 
leafyBranchPruning.  

Still only integrated and tested on branch_5x, per discussion below.

 Add StreamingQuadPrefixTree
 ---

 Key: LUCENE-6422
 URL: https://issues.apache.org/jira/browse/LUCENE-6422
 Project: Lucene - Core
  Issue Type: Improvement
  Components: modules/spatial
Affects Versions: 5.x
Reporter: Nicholas Knize
 Attachments: LUCENE-6422.patch, LUCENE-6422.patch, 
 LUCENE-6422_with_SPT_factory_and_benchmark.patch


 To conform to Lucene's inverted index, SpatialStrategies use strings to 
 represent QuadCells and GeoHash cells. Yielding 1 byte per QuadCell and 5 
 bits per GeoHash cell, respectively.  To create the terms representing a 
 Shape, the BytesRefIteratorTokenStream first builds all of the terms into an 
 ArrayList of Cells in memory, then passes the ArrayList.Iterator back to 
 invert() which creates a second lexicographically sorted array of Terms. This 
 doubles the memory consumption when indexing a shape.
 This task introduces a PackedQuadPrefixTree that uses a StreamingStrategy to 
 accomplish the following:
 1.  Create a packed 8byte representation for a QuadCell
 2.  Build the Packed cells 'on demand' when incrementToken is called
 Improvements over this approach include the generation of the packed cells 
 using an AutoPrefixAutomaton



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-6422) Add StreamingQuadPrefixTree

2015-04-17 Thread Nicholas Knize (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6422?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14500327#comment-14500327
 ] 

Nicholas Knize commented on LUCENE-6422:


bq.  If you can suggest a better name to what leafy branch pruning does, then 
at a minimum it could be expressed in the javadocs

++.  Just 'prune' is probably clearer since it's universally used all over 
data structures.  We can add a javadoc comment that describes it in further 
detail if necessary.

bq. if SpatialPrefixTree might have a better literature/industry based name 
then I'd love to know what that is.

There are trie-based spatial trees (kd-Trie, kd-b-trie, buddy tree) all over 
industry and the literature. The one you call QuadPrefixTree was originally 
introduced in 1991 as the QuadTrie.  (reference: Gaston H. Gonnet and 
Ricardo Baeza-Yates, Handbook of Algorithms and Data Structures -- in Pascal 
and C, 2nd edition, Addison-Wesley, 1991.)  Dr. Hanan Samet from UMD has a 
great section on MX and PR QuadTrees (the same as QuadPrefixTree, and a name 
someone mentioned to you in another issue).  He provides a nice discussion of 
the differences between MX, PR and their point-based counterparts (compared by 
their decomposition methods).  There's certainly nothing wrong with an 
implementation-specific name. If you are asking for suggestions then I offer 
SpatialTrie, GeoHashTrie, or QuadTrie as being shorter, more to the point, and 
probably more relatable to other spatial SMEs (who I hope would be willing to 
get more involved). 

bq. It's not obvious to me but where in the code of PackedQuadCell are the 5 
depth bits encoded & decoded?

PackedQuadCell.getLevel() decodes, and it's encoded in PackedQuadCell.nextTerm()
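
For intuition, a rough sketch of a packed layout along these lines (the bit 
positions below are assumptions made for illustration; the actual encoding is 
whatever the attached patch implements):
{code}
// Hypothetical packed quad-cell layout: 2 bits per level for the quadrant (0..3)
// in the high bits, and the 5 lowest bits for the depth, so up to 29 levels fit in a long.
public class PackedQuadSketch {
  static long pack(int[] quadrants, int depth) {
    long packed = 0L;
    for (int i = 0; i < depth; i++) {
      packed |= ((long) (quadrants[i] & 0x3)) << (61 - 2 * i); // quadrant for level i
    }
    return packed | depth;                                     // depth in bits 0..4
  }

  static int level(long packed) {
    return (int) (packed & 0x1FL);  // decode the 5 depth bits (what getLevel() returns)
  }
}
{code}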

bq. Preferably it would stay enabled but I think you indicated it's not 
supported by PackedQuadTree? I didn't look closer.

That's correct.

bq. I also wonder whether we need a new, lighter weight spatial module 
(spatial2? spatia_light?), or maybe spatial_sandbox, where the barrier is lower?

+++ I think this is a great idea for experimental/research features we don't 
want cluttering up the spatial module.

bq. RE abstractions: I respect your opinion although I don't agree that there 
are too many here.

IMHO, this is a slippery slope.  There are so many diverse spatial data 
structures we should be taking a look at for improving spatial search in higher 
order dimensions (4D space-time for just a start).  That's a personal interest 
area for me, in how the most powerful high dimension structures (that already 
exist) can fit within the design and implementation of lucene-core (a green 
field to explore).  Something like this does require a sophisticated 
abstraction framework and this particular one has a bit of a learning curve. I 
think that can work itself out over time with a bit of refactoring (which it 
sounds like all are open to?).  In the meantime it does set the bar rather high 
for new contributors. This is another +1 for a spatial sandbox for experimental 
research (heck make it a separate repo). 

bq. Sigh; these conversations are stressfull 

They're very verbose, but maybe that's the kick in the pants needed to help the 
spatial module really take off. That is, after all, the common goal of the 
community?




 Add StreamingQuadPrefixTree
 ---

 Key: LUCENE-6422
 URL: https://issues.apache.org/jira/browse/LUCENE-6422
 Project: Lucene - Core
  Issue Type: Improvement
  Components: modules/spatial
Affects Versions: 5.x
Reporter: Nicholas Knize
 Attachments: LUCENE-6422.patch, LUCENE-6422.patch, 
 LUCENE-6422_with_SPT_factory_and_benchmark.patch


 To conform to Lucene's inverted index, SpatialStrategies use strings to 
 represent QuadCells and GeoHash cells. Yielding 1 byte per QuadCell and 5 
 bits per GeoHash cell, respectively.  To create the terms representing a 
 Shape, the BytesRefIteratorTokenStream first builds all of the terms into an 
 ArrayList of Cells in memory, then passes the ArrayList.Iterator back to 
 invert() which creates a second lexicographically sorted array of Terms. This 
 doubles the memory consumption when indexing a shape.
 This task introduces a PackedQuadPrefixTree that uses a StreamingStrategy to 
 accomplish the following:
 1.  Create a packed 8byte representation for a QuadCell
 2.  Build the Packed cells 'on demand' when incrementToken is called
 Improvements over this approach include the generation of the packed cells 
 using an AutoPrefixAutomaton



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-6886) Grouping.java tweaks

2015-04-17 Thread Shalin Shekhar Mangar (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6886?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shalin Shekhar Mangar updated SOLR-6886:

Attachment: SOLR-6886.patch

Patch in sync with trunk. There was another place where the finish method was 
not being called so I fixed that too.

 Grouping.java tweaks
 

 Key: SOLR-6886
 URL: https://issues.apache.org/jira/browse/SOLR-6886
 Project: Solr
  Issue Type: Wish
Reporter: Christine Poerschke
Priority: Minor
 Attachments: SOLR-6886.patch


 There's a size>0 check which seems to be redundant and some 
 DelegatingCollector.finish calls seem to be missing.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-6422) Add StreamingQuadPrefixTree

2015-04-17 Thread Nicholas Knize (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6422?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nicholas Knize updated LUCENE-6422:
---
Attachment: LUCENE-6422.patch

Updated to remove StreamingPrefixTreeStrategy.  PackedQuadTree is now self 
contained to one file and uses the RecursivePrefixTreeStrategy but ignores 
leafyBranchPruning.  

Still only integrated and tested on branch_5x, per discussion below.

 Add StreamingQuadPrefixTree
 ---

 Key: LUCENE-6422
 URL: https://issues.apache.org/jira/browse/LUCENE-6422
 Project: Lucene - Core
  Issue Type: Improvement
  Components: modules/spatial
Affects Versions: 5.x
Reporter: Nicholas Knize
 Attachments: LUCENE-6422.patch, LUCENE-6422.patch, 
 LUCENE-6422_with_SPT_factory_and_benchmark.patch


 To conform to Lucene's inverted index, SpatialStrategies use strings to 
 represent QuadCells and GeoHash cells. Yielding 1 byte per QuadCell and 5 
 bits per GeoHash cell, respectively.  To create the terms representing a 
 Shape, the BytesRefIteratorTokenStream first builds all of the terms into an 
 ArrayList of Cells in memory, then passes the ArrayList.Iterator back to 
 invert() which creates a second lexicographically sorted array of Terms. This 
 doubles the memory consumption when indexing a shape.
 This task introduces a PackedQuadPrefixTree that uses a StreamingStrategy to 
 accomplish the following:
 1.  Create a packed 8byte representation for a QuadCell
 2.  Build the Packed cells 'on demand' when incrementToken is called
 Improvements over this approach include the generation of the packed cells 
 using an AutoPrefixAutomaton



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: VOTE: RC0 Release apache-solr-ref-guide-5.1.pdf

2015-04-17 Thread Steve Rowe
+1

I skimmed looking for obvious format problems, and only found a couple small 
things that shouldn’t block release.  (I’ll fix them later.)

Checksum matches, signature verifies.

Steve

 On Apr 17, 2015, at 10:34 AM, Cassandra Targett casstarg...@gmail.com wrote:
 
 Please vote for the release of the Apache Solr Reference Guide for Solr 5.1.
 
 The PDF is available at:
 https://dist.apache.org/repos/dist/dev/lucene/solr/ref-guide/apache-solr-ref-guide-5.1-RC0/
 
 Steve Rowe & I made some big changes to the styling of the guide, so please 
 raise any issues you find in your review.
 
 Here's my +1.
 
 Thanks,
 Cassandra


-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: VOTE: RC0 Release apache-solr-ref-guide-5.1.pdf

2015-04-17 Thread Tomás Fernández Löbbe
+1
Went through some of the sections. I didn't find any blockers.
One thing I saw was some strange snippets (empty, or with huge empty
spaces) on pages 203, 297 and 527.
Another trivial thing is that in many pages, the title of a section falls
on a different page than the actual section.

Tomás

On Fri, Apr 17, 2015 at 10:37 AM, Steve Rowe sar...@gmail.com wrote:

 +1

 I skimmed looking for obvious format problems, and only found a couple
 small things that shouldn’t block release.  (I’ll fix them later.)

 Checksum matches, signature verifies.

 Steve

  On Apr 17, 2015, at 10:34 AM, Cassandra Targett casstarg...@gmail.com
 wrote:
 
  Please vote for the release of the Apache Solr Reference Guide for Solr
 5.1.
 
  The PDF is available at:
 
 https://dist.apache.org/repos/dist/dev/lucene/solr/ref-guide/apache-solr-ref-guide-5.1-RC0/
 
  Steve Rowe & I made some big changes to the styling of the guide, so
 please raise any issues you find in your review.
 
  Here's my +1.
 
  Thanks,
  Cassandra


 -
 To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
 For additional commands, e-mail: dev-h...@lucene.apache.org




Re: VOTE: RC0 Release apache-solr-ref-guide-5.1.pdf

2015-04-17 Thread Anshum Gupta
+1 to releasing. I found a few things but no blockers.

I think it'd be good to have a clear mention of all the examples on some
page. I'll work on that one as soon as I get some time.
Right now, the films dataset isn't mentioned anywhere except in the post
tool section.

On Fri, Apr 17, 2015 at 7:34 AM, Cassandra Targett casstarg...@gmail.com
wrote:

 Please vote for the release of the Apache Solr Reference Guide for Solr
 5.1.

 The PDF is available at:

 https://dist.apache.org/repos/dist/dev/lucene/solr/ref-guide/apache-solr-ref-guide-5.1-RC0/

  Steve Rowe & I made some big changes to the styling of the guide, so
 please raise any issues you find in your review.

 Here's my +1.

 Thanks,
 Cassandra




-- 
Anshum Gupta


[jira] [Updated] (SOLR-7414) CSVResponseWriter returns empty field when fl alias is combined with '*' selector

2015-04-17 Thread Hoss Man (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7414?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hoss Man updated SOLR-7414:
---
Description: 
Attempting to retrieve all fields while renaming one, e.g., inStock to 
stocked (URL below), results in CSV output that has a column for inStock 
(should be stocked), and the column has no values. 

steps to reproduce using 5.1...

{noformat}
$ bin/solr -e techproducts
...
$ curl -X POST -H 'Content-Type: application/json' 
'http://localhost:8983/solr/techproducts/update?commit=true' --data-binary '[{ 
"id" : "aaa", "bar_i" : 7, "inStock" : true }, { "id" : "bbb", "bar_i" : 7, 
"inStock" : false }, { "id" : "ccc", "bar_i" : 7, "inStock" : true }]'
{"responseHeader":{"status":0,"QTime":730}}
$ curl 
'http://localhost:8983/solr/techproducts/query?q=bar_i:7&fl=id,stocked:inStock&wt=csv'
id,stocked
aaa,true
bbb,false
ccc,true
$ curl 
'http://localhost:8983/solr/techproducts/query?q=bar_i:7&fl=*,stocked:inStock&wt=csv'
bar_i,id,_version_,inStock
7,aaa,1498719888088236032,
7,bbb,1498719888090333184,
7,ccc,1498719888090333185,
$ curl 
'http://localhost:8983/solr/techproducts/query?q=bar_i:7&fl=stocked:inStock,*&wt=csv'
bar_i,id,_version_,inStock
7,aaa,1498719888088236032,
7,bbb,1498719888090333184,
7,ccc,1498719888090333185,
{noformat}

  was:
Attempting to retrieve all fields while renaming one, e.g., inStock to 
stocked (URL below), results in CSV output that has a column for inStock 
(should be stocked), and the column has no values. I would have expected this 
to behave like the JSON and XML response writers.

http://localhost:8983/solr/select?q=*&fl=*,stocked:inStock&wt=csv

Summary: CSVResponseWriter returns empty field when fl alias is 
combined with '*' selector  (was: CSVResponseWriter returns empty field when 
alias requested)

thanks for reporting this.

tweaked summary/description to clarify conditions that trigger this, and 
provide full steps to reproduce using the stock example configs

 CSVResponseWriter returns empty field when fl alias is combined with '*' 
 selector
 -

 Key: SOLR-7414
 URL: https://issues.apache.org/jira/browse/SOLR-7414
 Project: Solr
  Issue Type: Bug
  Components: Response Writers
Reporter: Michael Lawrence

 Attempting to retrieve all fields while renaming one, e.g., inStock to 
 stocked (URL below), results in CSV output that has a column for inStock 
 (should be stocked), and the column has no values. 
 steps to reproduce using 5.1...
 {noformat}
 $ bin/solr -e techproducts
 ...
 $ curl -X POST -H 'Content-Type: application/json' 
 'http://localhost:8983/solr/techproducts/update?commit=true' --data-binary 
 '[{ "id" : "aaa", "bar_i" : 7, "inStock" : true }, { "id" : "bbb", "bar_i" : 
 7, "inStock" : false }, { "id" : "ccc", "bar_i" : 7, "inStock" : true }]'
 {"responseHeader":{"status":0,"QTime":730}}
 $ curl 
 'http://localhost:8983/solr/techproducts/query?q=bar_i:7&fl=id,stocked:inStock&wt=csv'
 id,stocked
 aaa,true
 bbb,false
 ccc,true
 $ curl 
 'http://localhost:8983/solr/techproducts/query?q=bar_i:7&fl=*,stocked:inStock&wt=csv'
 bar_i,id,_version_,inStock
 7,aaa,1498719888088236032,
 7,bbb,1498719888090333184,
 7,ccc,1498719888090333185,
 $ curl 
 'http://localhost:8983/solr/techproducts/query?q=bar_i:7&fl=stocked:inStock,*&wt=csv'
 bar_i,id,_version_,inStock
 7,aaa,1498719888088236032,
 7,bbb,1498719888090333184,
 7,ccc,1498719888090333185,
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: VOTE: RC0 Release apache-solr-ref-guide-5.1.pdf

2015-04-17 Thread Chris Hostetter

: The PDF is available at:
: 
https://dist.apache.org/repos/dist/dev/lucene/solr/ref-guide/apache-solr-ref-guide-5.1-RC0/

+1 to releasing...

af80178bd864ffe0a354c8780c22296808d0423b  apache-solr-ref-guide-5.1.pdf



-Hoss
http://www.lucidworks.com/

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: [JENKINS] Lucene-Solr-Tests-5.x-Java7 - Build # 2962 - Failure

2015-04-17 Thread Timothy Potter
This keeps happening ... I'll dig (borrowing Mike's terminology) and
if I can't figure it out quickly, I'll @BadApple it for now

On Fri, Apr 17, 2015 at 3:16 AM, Apache Jenkins Server
jenk...@builds.apache.org wrote:
 Build: https://builds.apache.org/job/Lucene-Solr-Tests-5.x-Java7/2962/

 1 tests failed.
 REGRESSION:  org.apache.solr.cloud.RecoveryAfterSoftCommitTest.test

 Error Message:
 Didn't see all replicas for shard shard1 in collection1 come up within 3 
 ms! ClusterState: {   control_collection:{ replicationFactor:1, 
 maxShardsPerNode:1, autoAddReplicas:false, 
 shards:{shard1:{ range:8000-7fff, 
 state:active, replicas:{core_node1:{ 
 node_name:127.0.0.1:37291_, core:collection1, 
 base_url:http://127.0.0.1:37291;, state:active,
  leader:true, autoCreated:true, 
 router:{name:compositeId}},   collection1:{ 
 replicationFactor:1, maxShardsPerNode:1, 
 autoAddReplicas:false, shards:{shard1:{ 
 range:8000-7fff, state:active, replicas:{   
 core_node1:{ node_name:127.0.0.1:37297_,
  core:collection1, base_url:http://127.0.0.1:37297;,   
   state:active, leader:true},   
 core_node2:{ node_name:127.0.0.1:37301_, 
 core:collection1, base_url:http://127.0.0.1:37301;,
  state:recovering, autoCreated:true, 
 router:{name:compositeId}}}

 Stack Trace:
 java.lang.AssertionError: Didn't see all replicas for shard shard1 in 
 collection1 come up within 3 ms! ClusterState: {
   control_collection:{
 replicationFactor:1,
 maxShardsPerNode:1,
 autoAddReplicas:false,
 shards:{shard1:{
 range:8000-7fff,
 state:active,
 replicas:{core_node1:{
 node_name:127.0.0.1:37291_,
 core:collection1,
 base_url:http://127.0.0.1:37291;,
 state:active,
 leader:true,
 autoCreated:true,
 router:{name:compositeId}},
   collection1:{
 replicationFactor:1,
 maxShardsPerNode:1,
 autoAddReplicas:false,
 shards:{shard1:{
 range:8000-7fff,
 state:active,
 replicas:{
   core_node1:{
 node_name:127.0.0.1:37297_,
 core:collection1,
 base_url:http://127.0.0.1:37297;,
 state:active,
 leader:true},
   core_node2:{
 node_name:127.0.0.1:37301_,
 core:collection1,
 base_url:http://127.0.0.1:37301;,
 state:recovering,
 autoCreated:true,
 router:{name:compositeId}}}
 at 
 __randomizedtesting.SeedInfo.seed([B2BA3FF0D83C0F12:3AEE002A76C062EA]:0)
 at org.junit.Assert.fail(Assert.java:93)
 at 
 org.apache.solr.cloud.AbstractFullDistribZkTestBase.ensureAllReplicasAreActive(AbstractFullDistribZkTestBase.java:1920)
 at 
 org.apache.solr.cloud.RecoveryAfterSoftCommitTest.test(RecoveryAfterSoftCommitTest.java:102)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
 at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke(Method.java:606)
 at 
 com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1627)
 at 
 com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:836)
 at 
 com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:872)
 at 
 com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:886)
 at 
 org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:960)
 at 
 org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:935)
 at 
 com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
 at 
 org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
 at 
 org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
 at 
 org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
 at 
 org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
 at 
 org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
 at 
 com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
 at 
 

[jira] [Commented] (SOLR-7176) allow zkcli to modify JSON

2015-04-17 Thread Hrishikesh Gadre (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7176?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14500074#comment-14500074
 ] 

Hrishikesh Gadre commented on SOLR-7176:


[~markrmil...@gmail.com] [~nobel.paul] Thanks a lot for clarification. I will 
submit a patch shortly.

 allow zkcli to modify JSON
 --

 Key: SOLR-7176
 URL: https://issues.apache.org/jira/browse/SOLR-7176
 Project: Solr
  Issue Type: New Feature
Reporter: Yonik Seeley
Priority: Minor

 To enable SSL, we have instructions like the following:
 {code}
 server/scripts/cloud-scripts/zkcli.sh -zkhost localhost:2181 -cmd put 
 /clusterprops.json '{"urlScheme":"https"}'
 {code}
 Overwriting the value won't work well when we have more properties to put in 
 clusterprops.  We should be able to change individual values or perhaps merge 
 values.
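
As a rough illustration of the "change individual values or merge" idea (this is not the actual zkcli implementation; the class and method names are made up, and it assumes a plain ZooKeeper client plus Jackson on the classpath), a tool could read the node, overlay one property and write it back with a version check:

{code}
import java.util.HashMap;
import java.util.Map;

import org.apache.zookeeper.CreateMode;
import org.apache.zookeeper.ZooDefs;
import org.apache.zookeeper.ZooKeeper;
import org.apache.zookeeper.data.Stat;

import com.fasterxml.jackson.databind.ObjectMapper;

public class ClusterPropsMerger {
  private static final ObjectMapper JSON = new ObjectMapper();

  /** Reads /clusterprops.json, sets a single property and writes it back. */
  @SuppressWarnings("unchecked")
  static void mergeClusterProp(ZooKeeper zk, String name, String value) throws Exception {
    final String path = "/clusterprops.json";
    Stat stat = zk.exists(path, false);
    Map<String, Object> props = new HashMap<>();
    if (stat != null) {
      // keep everything that is already stored there
      props.putAll(JSON.readValue(zk.getData(path, false, stat), Map.class));
    }
    props.put(name, value);
    byte[] data = JSON.writeValueAsBytes(props);
    if (stat == null) {
      zk.create(path, data, ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.PERSISTENT);
    } else {
      // conditional update: fails with a BadVersionException if someone else wrote in between
      zk.setData(path, data, stat.getVersion());
    }
  }
}
{code}

With something along those lines, enabling SSL becomes a targeted update of urlScheme instead of overwriting whatever else is already stored in clusterprops.json.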



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-NightlyTests-5.x - Build # 820 - Failure

2015-04-17 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-5.x/820/

1 tests failed.
REGRESSION:  org.apache.solr.handler.component.DistributedMLTComponentTest.test

Error Message:
Timeout occured while waiting response from server at: 
http://127.0.0.1:39323/pf_o/collection1

Stack Trace:
org.apache.solr.client.solrj.SolrServerException: Timeout occured while waiting 
response from server at: http://127.0.0.1:39323/pf_o/collection1
at 
__randomizedtesting.SeedInfo.seed([4F803D02AB4DD869:C7D402D805B1B591]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:570)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:235)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:227)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:135)
at org.apache.solr.client.solrj.SolrClient.query(SolrClient.java:943)
at org.apache.solr.client.solrj.SolrClient.query(SolrClient.java:958)
at 
org.apache.solr.BaseDistributedSearchTestCase.queryServer(BaseDistributedSearchTestCase.java:558)
at 
org.apache.solr.BaseDistributedSearchTestCase.query(BaseDistributedSearchTestCase.java:606)
at 
org.apache.solr.BaseDistributedSearchTestCase.query(BaseDistributedSearchTestCase.java:588)
at 
org.apache.solr.BaseDistributedSearchTestCase.query(BaseDistributedSearchTestCase.java:567)
at 
org.apache.solr.handler.component.DistributedMLTComponentTest.test(DistributedMLTComponentTest.java:126)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1627)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:872)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:886)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:960)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:935)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:845)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:747)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:792)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 

[jira] [Commented] (LUCENE-6420) Update forbiddenapis to 1.8

2015-04-17 Thread Hoss Man (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6420?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14500189#comment-14500189
 ] 

Hoss Man commented on LUCENE-6420:
--

uwe: should probably split this out into a distinct issue yeah?

I think an annotation approach is a great idea ... as long as the txt file 
based approach is still also supported correct? -- that way we won't have to 
introduce a lucene-core.jar dependency on stuff that doesn't already depend on 
it (biggest concern: solrj)

 Update forbiddenapis to 1.8
 ---

 Key: LUCENE-6420
 URL: https://issues.apache.org/jira/browse/LUCENE-6420
 Project: Lucene - Core
  Issue Type: Improvement
  Components: general/build
Reporter: Uwe Schindler
Assignee: Uwe Schindler
 Fix For: Trunk, 5.2

 Attachments: LUCENE-6420.patch


 Update forbidden-apis plugin to 1.8:
 - Initial support for Java 9 including JIGSAW
 - Errors are now reported sorted by line numbers and correctly grouped 
 (synthetic methods/lambdas)
 - Package-level forbids: Deny all classes from a package: org.hatedpkg.** 
 (also other globs work)
 - In addition to file-level excludes, forbiddenapis now supports fine-grained 
 excludes using Java annotations. You can use the one shipped, but you can also 
 define your own, e.g. inside Lucene, and pass its name to forbidden (e.g. 
 using a glob: **.SuppressForbidden would allow any annotation with that name, 
 in any package, to suppress errors). The annotation needs to be at class 
 level; no runtime retention is required.
 This will for now only update the dependency and remove the additional forbid 
 by [~shalinmangar] for MessageFormat (which is now shipped with forbidden). 
 But we should review and for example suppress forbidden failures in command 
 line tools using @SuppressForbidden (or similar annotation). The discussion 
 is open, I can make a patch.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-7176) allow zkcli to modify JSON

2015-04-17 Thread Per Steffensen (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7176?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14500028#comment-14500028
 ] 

Per Steffensen commented on SOLR-7176:
--

I agree, but from time to time I want to add a (async) command to the overseer 
while the cluster is not running, expecting the overseer to pick it up and 
execute it when I start my cluster. If you would enable this tool to do this 
kind of stuff, then suddenly most of the cluster-commands become relevant for 
this tool - if it is able to both execute the command directly (if supported - 
e.g. by {{CLUSTERPROP}} command) or to leave the command for execution by the 
overseer.
And, if you have numerous machines that might or might not currently run a 
Solr-node, maybe you actually want to be able to run the {{OVERSEERSTATUS}} 
 command as a command-line just to get a not running response.

 allow zkcli to modify JSON
 --

 Key: SOLR-7176
 URL: https://issues.apache.org/jira/browse/SOLR-7176
 Project: Solr
  Issue Type: New Feature
Reporter: Yonik Seeley
Priority: Minor

 To enable SSL, we have instructions like the following:
 {code}
 server/scripts/cloud-scripts/zkcli.sh -zkhost localhost:2181 -cmd put 
 /clusterprops.json '{"urlScheme":"https"}'
 {code}
 Overwriting the value won't work well when we have more properties to put in 
 clusterprops.  We should be able to change individual values or perhaps merge 
 values.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org




[jira] [Comment Edited] (SOLR-7176) allow zkcli to modify JSON

2015-04-17 Thread Per Steffensen (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7176?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14500028#comment-14500028
 ] 

Per Steffensen edited comment on SOLR-7176 at 4/17/15 3:27 PM:
---

I agree, but from time to time I want to add a (async) command to the overseer 
while the cluster is not running, expecting the overseer to pick it up and 
execute it when I start my cluster. If you would enable this tool to do this 
kind of stuff, then suddenly most of the cluster-commands become relevant for 
this tool - if it is able to both execute the command directly (if supported - 
e.g. the {{CLUSTERPROP}} command) or to leave the command for execution by the 
overseer.
And, if you have numerous machines that might or might not currently run a 
Solr-node, maybe you actually want to be able to run the {{OVERSEERSTATUS}} 
command as a command-line just to get a not running response.


was (Author: steff1193):
I agree, but from time to time I want to add a (async) command to the overseer 
while the cluster is not running, expecting the overseer to pick it up and 
execute it when I start my cluster. If you would enable this tool to do this 
kind of stuff, then suddenly most of the cluster-commands become relevant for 
this tool - if it is able to both execute the command directly (if supported - 
e.g. by {{CLUSTERPROP}} command) or to leave the command for execution by the 
overseer.
And, if you have numerous machines that might or might not currently run a 
Solr-node, maybe you actually want to be able to run the {{OVERSEERSTATUS}} 
command as a command-line just to get an not running response.

 allow zkcli to modify JSON
 --

 Key: SOLR-7176
 URL: https://issues.apache.org/jira/browse/SOLR-7176
 Project: Solr
  Issue Type: New Feature
Reporter: Yonik Seeley
Priority: Minor

 To enable SSL, we have instructions like the following:
 {code}
 server/scripts/cloud-scripts/zkcli.sh -zkhost localhost:2181 -cmd put 
 /clusterprops.json '{"urlScheme":"https"}'
 {code}
 Overwriting the value won't work well when we have more properties to put in 
 clusterprops.  We should be able to change individual values or perhaps merge 
 values.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-7412) range.facet.other produces incorrect counts in distributed search

2015-04-17 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/SOLR-7412?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tomás Fernández Löbbe resolved SOLR-7412.
-
   Resolution: Fixed
Fix Version/s: 5.2
   Trunk

Thanks for reporting Will.

 range.facet.other produces incorrect counts in distributed search
 -

 Key: SOLR-7412
 URL: https://issues.apache.org/jira/browse/SOLR-7412
 Project: Solr
  Issue Type: Bug
Reporter: Tomás Fernández Löbbe
Assignee: Tomás Fernández Löbbe
 Fix For: Trunk, 5.2

 Attachments: SOLR-7412.patch, SOLR-7412.patch


 Reported by Will Miller in the users list: 
 {quote}
  This first query is against node1 with distrib=false:
  http://localhost:8983/solr/gettingstarted/select/?q=*:*&wt=json&indent=true&distrib=false&facet=true&facet.range=price&f.price.facet.range.start=0.00&f.price.facet.range.end=100.00&f.price.facet.range.gap=20&f.price.facet.range.other=all&defType=edismax&q.op=AND
  There are 7 Results (results omitted).
  "facet_ranges":{
    "price":{
      "counts":[
        "0.0",1,
        "20.0",0,
        "40.0",0,
        "60.0",0,
        "80.0",1],
      "gap":20.0,
      "start":0.0,
      "end":100.0,
      "before":0,
      "after":5,
      "between":2}},
  This second query is against node2 with distrib=false:
  http://localhost:7574/solr/gettingstarted/select/?q=*:*&wt=json&indent=true&distrib=false&facet=true&facet.range=price&f.price.facet.range.start=0.00&f.price.facet.range.end=100.00&f.price.facet.range.gap=20&f.price.facet.range.other=all&defType=edismax&q.op=AND
  7 Results (one product does not have a price):
  "facet_ranges":{
    "price":{
      "counts":[
        "0.0",1,
        "20.0",0,
        "40.0",0,
        "60.0",1,
        "80.0",0],
      "gap":20.0,
      "start":0.0,
      "end":100.0,
      "before":0,
      "after":4,
      "between":2}},
  Finally querying the entire collection:
  http://localhost:7574/solr/gettingstarted/select/?q=*:*&wt=json&indent=true&facet=true&facet.range=price&f.price.facet.range.start=0.00&f.price.facet.range.end=100.00&f.price.facet.range.gap=20&f.price.facet.range.other=all&defType=edismax&q.op=AND
  14 results (one without a price range):
  "facet_ranges":{
    "price":{
      "counts":[
        "0.0",2,
        "20.0",0,
        "40.0",0,
        "60.0",1,
        "80.0",1],
      "gap":20.0,
      "start":0.0,
      "end":100.0,
      "before":0,
      "after":5,
      "between":2}},
  Notice that both the "after" and the "between" are wrong here. The actual 
  buckets do correctly represent the right values, but I would expect "between" 
  to be 5 and "after" to be 13.
 {quote}
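
The expected distributed semantics are purely additive: each shard reports before/after/between for its own documents, and the coordinator should sum them so the merged values match a single non-distributed query over all documents. A minimal sketch of that merge rule (not the actual Solr code or the committed patch; the class and method names are made up):

{code}
/** Accumulates the "other" range-facet counts coming back from each shard. */
class RangeOtherAccumulator {
  long before, after, between;

  void addShardResponse(long shardBefore, long shardAfter, long shardBetween) {
    before += shardBefore;    // docs below facet.range.start on that shard
    after += shardAfter;      // docs above facet.range.end on that shard
    between += shardBetween;  // docs inside [start, end) on that shard
  }
}
{code}

In other words, the merged before/after/between should simply be the element-wise sums of the per-shard values.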



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-7176) allow zkcli to modify JSON

2015-04-17 Thread Per Steffensen (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7176?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14500039#comment-14500039
 ] 

Per Steffensen commented on SOLR-7176:
--

bq. With the current setup, you can use bin/solr at the commandline on *NIX 
and bin\solr on Windows, the only difference is the path separator, which 
will not be a surprise to most admins

Well I think it might come as a surprise to most *NIX admins that the script is 
just called solr - and not e.g. solr.sh. But never mind, this JIRA is not 
about that. I just had a hard time writing {{solr CLUSTERPROP ...}}, because I 
would have to think twice to understand it myself

bq. and it should be handled in a separate issue

Yes, definitely, no one talked about doing the renaming in this issue

 allow zkcli to modify JSON
 --

 Key: SOLR-7176
 URL: https://issues.apache.org/jira/browse/SOLR-7176
 Project: Solr
  Issue Type: New Feature
Reporter: Yonik Seeley
Priority: Minor

 To enable SSL, we have instructions like the following:
 {code}
 server/scripts/cloud-scripts/zkcli.sh -zkhost localhost:2181 -cmd put 
 /clusterprops.json '{"urlScheme":"https"}'
 {code}
 Overwriting the value won't work well when we have more properties to put in 
 clusterprops.  We should be able to change individual values or perhaps merge 
 values.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Building on Mac

2015-04-17 Thread Gus Heck
Is it expected that one should be able to build the whole project (ant test
at top level) on a macbook pro (circa 2012, ssd, OS X 10.9.1, java
1.8.0_20)?

I've attempted it about a dozen times, sometimes updating, sometimes
repeating on the same revision. I've never had success and generally 1-5
tests fail, with no single test failing consistently. This happens on the
5_1 branch and I just switched to 5x to see if it made a difference and it
happens there too. (running a second time now)

Is this expected?

I've seen this message a number of times:
  [junit4] Throwable #1:
org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error
from server at https://127.0.0.1:53208//collection1:
org.apache.solr.client.solrj.SolrServerException:
java.lang.IllegalStateException: Scheme 'http' not registered.

I also notice that the word Distributed is common in failing test names,
but I have seen other failures on occasion (possibly real?)  Here's the
output of the one that just finished as I was typing...

   [junit4] Tests with failures:
   [junit4]   -
org.apache.solr.handler.component.DistributedFacetPivotLongTailTest.test
   [junit4]   -
org.apache.solr.handler.component.DistributedQueryComponentCustomSortTest.test
   [junit4]   - org.apache.solr.DistributedIntervalFacetingTest.test
   [junit4]
   [junit4]
   [junit4] JVM J0: 0.89 ..  1000.84 =   999.95s
   [junit4] JVM J1: 0.89 ..  1000.77 =   999.88s
   [junit4] JVM J2: 0.89 ..  1000.86 =   999.97s
   [junit4] JVM J3: 0.89 ..  1000.73 =   999.84s
   [junit4] Execution time total: 16 minutes 41 seconds
   [junit4] Tests summary: 483 suites, 1916 tests, 3 errors, 37 ignored (21
assumptions)

BUILD FAILED
/Users/gus/projects/solr/solr51/branch_5x/build.xml:61: The following error
occurred while executing this line:
/Users/gus/projects/solr/solr51/branch_5x/extra-targets.xml:39: The
following error occurred while executing this line:
/Users/gus/projects/solr/solr51/branch_5x/solr/build.xml:229: The following
error occurred while executing this line:
/Users/gus/projects/solr/solr51/branch_5x/solr/common-build.xml:511: The
following error occurred while executing this line:
/Users/gus/projects/solr/solr51/branch_5x/lucene/common-build.xml:1434: The
following error occurred while executing this line:
/Users/gus/projects/solr/solr51/branch_5x/lucene/common-build.xml:991:
There were test failures: 483 suites, 1916 tests, 3 errors, 37 ignored (21
assumptions)

Total time: 29 minutes 44 seconds

-Gus


[JENKINS] Lucene-Solr-trunk-Linux (32bit/jdk1.8.0_60-ea-b06) - Build # 12333 - Failure!

2015-04-17 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/12333/
Java: 32bit/jdk1.8.0_60-ea-b06 -client -XX:+UseG1GC

3 tests failed.
FAILED:  junit.framework.TestSuite.org.apache.solr.cloud.SaslZkACLProviderTest

Error Message:
6 threads leaked from SUITE scope at 
org.apache.solr.cloud.SaslZkACLProviderTest: 1) Thread[id=236, 
name=groupCache.data, state=TIMED_WAITING, group=TGRP-SaslZkACLProviderTest]
 at sun.misc.Unsafe.park(Native Method) at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
 at 
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093)
 at 
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809)
 at 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1067)   
  at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127) 
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) 
at java.lang.Thread.run(Thread.java:745)2) Thread[id=235, 
name=kdcReplayCache.data, state=TIMED_WAITING, 
group=TGRP-SaslZkACLProviderTest] at sun.misc.Unsafe.park(Native 
Method) at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
 at 
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093)
 at 
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809)
 at 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1067)   
  at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127) 
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) 
at java.lang.Thread.run(Thread.java:745)3) Thread[id=233, 
name=apacheds, state=WAITING, group=TGRP-SaslZkACLProviderTest] at 
java.lang.Object.wait(Native Method) at 
java.lang.Object.wait(Object.java:502) at 
java.util.TimerThread.mainLoop(Timer.java:526) at 
java.util.TimerThread.run(Timer.java:505)4) Thread[id=237, 
name=changePwdReplayCache.data, state=TIMED_WAITING, 
group=TGRP-SaslZkACLProviderTest] at sun.misc.Unsafe.park(Native 
Method) at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
 at 
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093)
 at 
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809)
 at 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1067)   
  at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127) 
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) 
at java.lang.Thread.run(Thread.java:745)5) Thread[id=238, 
name=NioSocketAcceptor-1, state=RUNNABLE, group=TGRP-SaslZkACLProviderTest] 
at sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) at 
sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) at 
sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:79) at 
sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) at 
sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) at 
sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) at 
org.apache.mina.transport.socket.nio.NioSocketAcceptor.select(NioSocketAcceptor.java:234)
 at 
org.apache.mina.core.polling.AbstractPollingIoAcceptor$Acceptor.run(AbstractPollingIoAcceptor.java:417)
 at 
org.apache.mina.util.NamePreservingRunnable.run(NamePreservingRunnable.java:64) 
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) 
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) 
at java.lang.Thread.run(Thread.java:745)6) Thread[id=234, 
name=ou=system.data, state=TIMED_WAITING, group=TGRP-SaslZkACLProviderTest] 
at sun.misc.Unsafe.park(Native Method) at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
 at 
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093)
 at 

[JENKINS] Lucene-Solr-5.1-Linux (32bit/ibm-j9-jdk7) - Build # 285 - Failure!

2015-04-17 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.1-Linux/285/
Java: 32bit/ibm-j9-jdk7 
-Xjit:exclude={org/apache/lucene/util/fst/FST.pack(IIF)Lorg/apache/lucene/util/fst/FST;}

No tests ran.

Build Log:
[...truncated 272 lines...]
ERROR: Publisher hudson.tasks.junit.JUnitResultArchiver aborted due to exception
hudson.AbortException: No test report files were found. Configuration error?
at 
hudson.tasks.junit.JUnitParser$ParseResultCallable.invoke(JUnitParser.java:116)
at 
hudson.tasks.junit.JUnitParser$ParseResultCallable.invoke(JUnitParser.java:93)
at hudson.FilePath.act(FilePath.java:989)
at hudson.FilePath.act(FilePath.java:967)
at hudson.tasks.junit.JUnitParser.parseResult(JUnitParser.java:90)
at 
hudson.tasks.junit.JUnitResultArchiver.parse(JUnitResultArchiver.java:120)
at 
hudson.tasks.junit.JUnitResultArchiver.perform(JUnitResultArchiver.java:137)
at 
hudson.tasks.BuildStepCompatibilityLayer.perform(BuildStepCompatibilityLayer.java:74)
at hudson.tasks.BuildStepMonitor$1.perform(BuildStepMonitor.java:20)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.perform(AbstractBuild.java:761)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.performAllBuildSteps(AbstractBuild.java:721)
at hudson.model.Build$BuildExecution.post2(Build.java:183)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.post(AbstractBuild.java:670)
at hudson.model.Run.execute(Run.java:1766)
at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:43)
at hudson.model.ResourceController.execute(ResourceController.java:98)
at hudson.model.Executor.run(Executor.java:374)
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

[jira] [Commented] (SOLR-4212) Let facet queries hang off of pivots

2015-04-17 Thread Mike Murphy (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4212?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14500475#comment-14500475
 ] 

Mike Murphy commented on SOLR-4212:
---

bq. A new RangeFacetProcessor is refactored out of SimpleFacets

It's great that you are re-factoring code out of the horror that is 
SimpleFacets, but there is already a class called FacetRangeProcessor in the 
new facet module.  That was very confusing when I was trying to make sense of 
this in eclipse.

 Let facet queries hang off of pivots
 

 Key: SOLR-4212
 URL: https://issues.apache.org/jira/browse/SOLR-4212
 Project: Solr
  Issue Type: Sub-task
  Components: search
Affects Versions: 4.0
Reporter: Steve Molloy
Assignee: Shalin Shekhar Mangar
 Fix For: Trunk, 5.2

 Attachments: SOLR-4212-multiple-q.patch, SOLR-4212-multiple-q.patch, 
 SOLR-4212.patch, SOLR-4212.patch, SOLR-4212.patch, SOLR-4212.patch, 
 SOLR-4212.patch, SOLR-4212.patch, SOLR-6353-6686-4212.patch, 
 SOLR-6353-6686-4212.patch, SOLR-6353-6686-4212.patch, patch-4212.txt


 Facet pivot provide hierarchical support for computing data used to populate 
 a treemap or similar visualization. TreeMaps usually offer users extra 
 information by applying an overlay color on top of the existing square sizes 
 based on hierarchical counts. This second count is based on user choices, 
 representing, usually with gradient, the proportion of the square that fits 
 the user's choices.
 The proposition is to use local parameters to specify which facet query to 
 apply for a pivot, by matching a tag set on the facet query. The parameter format would look 
 like:
 facet.pivot={!query=r1}category,manufacturer
 facet.query={!tag=r1}somequery
 facet.query={!tag=r1}somedate:[NOW-1YEAR TO NOW]
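
In SolrJ terms the proposed parameters would be passed through verbatim; a sketch using exactly the syntax above (the query strings are the placeholders from the description, and the surrounding class and method are made up):

{code}
import org.apache.solr.client.solrj.SolrClient;
import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.response.QueryResponse;

public class PivotWithTaggedQueryDemo {
  static QueryResponse pivotWithOverlay(SolrClient client) throws Exception {
    SolrQuery q = new SolrQuery("*:*");
    q.setFacet(true);
    // the pivot picks up whatever facet.query carries the matching tag
    q.add("facet.pivot", "{!query=r1}category,manufacturer");
    q.add("facet.query", "{!tag=r1}somequery");
    q.add("facet.query", "{!tag=r1}somedate:[NOW-1YEAR TO NOW]");
    return client.query(q);
  }
}
{code}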



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-6420) Update forbiddenapis to 1.8

2015-04-17 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6420?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14500458#comment-14500458
 ] 

Uwe Schindler commented on LUCENE-6420:
---

You can just place the annotation definition into solrj's internals, too. 
Forbiddenapis supports stuff like {{forbiddenAnnotation=**.SuppressForbidden}}, 
so it accepts any annotation with that name, from any package, as the filter. 
It can also be package private!

But sure, the file-based exclude still works.
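
For illustration only, such an annotation can be tiny; the sketch below is not the one shipped with forbiddenapis, the package is just a hypothetical spot inside solrj, and it only needs class-file retention so the tool can see it while nothing leaks into the runtime:

{code}
package org.apache.solr.common.util; // hypothetical location inside solrj

import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

/** Marks a class whose forbidden-api violations are deliberate (e.g. CLI tools using System.out). */
@Retention(RetentionPolicy.CLASS) // visible in the class file, but not at runtime
@Target(ElementType.TYPE)
@interface SuppressForbidden {
  String reason();
}
{code}

A name like this would match a glob such as **.SuppressForbidden regardless of which module declares it.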

 Update forbiddenapis to 1.8
 ---

 Key: LUCENE-6420
 URL: https://issues.apache.org/jira/browse/LUCENE-6420
 Project: Lucene - Core
  Issue Type: Improvement
  Components: general/build
Reporter: Uwe Schindler
Assignee: Uwe Schindler
 Fix For: Trunk, 5.2

 Attachments: LUCENE-6420.patch


 Update forbidden-apis plugin to 1.8:
 - Initial support for Java 9 including JIGSAW
 - Errors are now reported sorted by line numbers and correctly grouped 
 (synthetic methods/lambdas)
 - Package-level forbids: Deny all classes from a package: org.hatedpkg.** 
 (also other globs work)
 - In addition to file-level excludes, forbiddenapis now supports fine-grained 
 excludes using Java annotations. You can use the one shipped, but you can also 
 define your own, e.g. inside Lucene, and pass its name to forbidden (e.g. 
 using a glob: **.SuppressForbidden would allow any annotation with that name, 
 in any package, to suppress errors). The annotation needs to be at class 
 level; no runtime retention is required.
 This will for now only update the dependency and remove the additional forbid 
 by [~shalinmangar] for MessageFormat (which is now shipped with forbidden). 
 But we should review and for example suppress forbidden failures in command 
 line tools using @SuppressForbidden (or similar annotation). The discussion 
 is open, I can make a patch.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-7415) Facet Module Improvements

2015-04-17 Thread Yonik Seeley (JIRA)
Yonik Seeley created SOLR-7415:
--

 Summary: Facet Module Improvements
 Key: SOLR-7415
 URL: https://issues.apache.org/jira/browse/SOLR-7415
 Project: Solr
  Issue Type: Bug
Affects Versions: 5.1
Reporter: Yonik Seeley
 Fix For: 5.2


The new facet module (specifically its JSON Facet API) will be finalized for 
the 5.2 release (we marked it as experimental in 5.1 to give time for feedback 
& changes).  This is a parent issue for any such improvements.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (LUCENE-6436) add LuceneTestCase.SuppressFsync

2015-04-17 Thread Robert Muir (JIRA)
Robert Muir created LUCENE-6436:
---

 Summary: add LuceneTestCase.SuppressFsync
 Key: LUCENE-6436
 URL: https://issues.apache.org/jira/browse/LUCENE-6436
 Project: Lucene - Core
  Issue Type: Task
Reporter: Robert Muir


The filesystem chain is a per-class decision: either fsyncs are passed through 
to the hardware or they are not, globally for the whole test class. If you have 
a really slow test, this can cause occasional unbearably slow runs when it gets 
unlucky.

{code}
  /**
   * Annotation for test classes that should always omit
   * actual fsync calls, so that they never reach the filesystem.
   * <p>
   * This can be useful, e.g. if they make many lucene commits.
   */
  @Documented
  @Inherited
  @Retention(RetentionPolicy.RUNTIME)
  @Target(ElementType.TYPE)
  public @interface SuppressFsync {
String[] value();
  }
{code}
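
A hypothetical usage sketch, assuming the annotation lands roughly as proposed and ends up nested in LuceneTestCase like the other test-group annotations (the test class and reason string are made up):

{code}
import org.apache.lucene.util.LuceneTestCase;

// Hypothetical usage; assumes SuppressFsync is nested in LuceneTestCase as proposed above.
@LuceneTestCase.SuppressFsync("commits thousands of times; real fsyncs make unlucky runs unbearably slow")
public class TestManyCommits extends LuceneTestCase {
  public void testManyCommits() throws Exception {
    // ... open an IndexWriter on newDirectory() and commit in a tight loop ...
  }
}
{code}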




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-7416) Slow loading SolrCores should not hold up all other SolrCores that have finished loading from serving requests.

2015-04-17 Thread Timothy Potter (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7416?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14500740#comment-14500740
 ] 

Timothy Potter commented on SOLR-7416:
--

How does this relate to SOLR-7361? dupe or is this a different issue ...

 Slow loading SolrCores should not hold up all other SolrCores that have 
 finished loading from serving requests.
 ---

 Key: SOLR-7416
 URL: https://issues.apache.org/jira/browse/SOLR-7416
 Project: Solr
  Issue Type: Improvement
Reporter: Mark Miller
Assignee: Mark Miller

 If a SolrCore is really slow to create (say it has to replay a really large 
 transaction log, or on hdfs it takes a long time to recover a lease, etc) 
 other SolrCores should continue to load so that one SolrCore does not 
 unnecessarily hold up other cores and collections.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-7416) Slow loading SolrCores should not hold up all other SolrCores that have finished loading from serving requests.

2015-04-17 Thread Mark Miller (JIRA)
Mark Miller created SOLR-7416:
-

 Summary: Slow loading SolrCores should not hold up all other 
SolrCores that have finished loading from serving requests.
 Key: SOLR-7416
 URL: https://issues.apache.org/jira/browse/SOLR-7416
 Project: Solr
  Issue Type: Improvement
Reporter: Mark Miller
Assignee: Mark Miller


If a SolrCore is really slow to create (say it has to replay a really large 
transaction log, or on hdfs it takes a long time to recover a lease, etc) other 
SolrCores should continue to load so that one SolrCore does not unnecessarily 
hold up other cores and collections.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-7416) Slow loading SolrCores should not hold up all other SolrCores that have finished loading from serving requests.

2015-04-17 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7416?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14500636#comment-14500636
 ] 

Mark Miller commented on SOLR-7416:
---

Ideally we can improve this.

It would likely have some effect on leader election and other timeouts in that 
area.

 Slow loading SolrCores should not hold up all other SolrCores that have 
 finished loading from serving requests.
 ---

 Key: SOLR-7416
 URL: https://issues.apache.org/jira/browse/SOLR-7416
 Project: Solr
  Issue Type: Improvement
Reporter: Mark Miller
Assignee: Mark Miller

 If a SolrCore is really slow to create (say it has to replay a really large 
 transaction log, or on hdfs it takes a long time to recover a lease, etc) 
 other SolrCores should continue to load so that one SolrCore does not 
 unnecessarily hold up other cores and collections.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-Tests-5.x-Java7 - Build # 2965 - Failure

2015-04-17 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-5.x-Java7/2965/

2 tests failed.
REGRESSION:  org.apache.solr.cloud.RecoveryAfterSoftCommitTest.test

Error Message:
Didn't see all replicas for shard shard1 in collection1 come up within 3 ms! ClusterState: {
  "collection1":{
    "router":{"name":"compositeId"},
    "shards":{"shard1":{
        "range":"8000-7fff",
        "state":"active",
        "replicas":{
          "core_node1":{
            "core":"collection1",
            "base_url":"http://127.0.0.1:60941",
            "node_name":"127.0.0.1:60941_",
            "state":"active",
            "leader":"true"},
          "core_node2":{
            "core":"collection1",
            "base_url":"http://127.0.0.1:60948",
            "node_name":"127.0.0.1:60948_",
            "state":"recovering"}}}},
    "maxShardsPerNode":"1",
    "autoCreated":"true",
    "replicationFactor":"1",
    "autoAddReplicas":"false"},
  "control_collection":{
    "router":{"name":"compositeId"},
    "shards":{"shard1":{
        "range":"8000-7fff",
        "state":"active",
        "replicas":{"core_node1":{
            "core":"collection1",
            "base_url":"http://127.0.0.1:60933",
            "node_name":"127.0.0.1:60933_",
            "state":"active",
            "leader":"true"}}}},
    "maxShardsPerNode":"1",
    "autoCreated":"true",
    "replicationFactor":"1",
    "autoAddReplicas":"false"}}

Stack Trace:
java.lang.AssertionError: Didn't see all replicas for shard shard1 in collection1 come up within 3 ms! ClusterState: {
  "collection1":{
    "router":{"name":"compositeId"},
    "shards":{"shard1":{
        "range":"8000-7fff",
        "state":"active",
        "replicas":{
          "core_node1":{
            "core":"collection1",
            "base_url":"http://127.0.0.1:60941",
            "node_name":"127.0.0.1:60941_",
            "state":"active",
            "leader":"true"},
          "core_node2":{
            "core":"collection1",
            "base_url":"http://127.0.0.1:60948",
            "node_name":"127.0.0.1:60948_",
            "state":"recovering"}}}},
    "maxShardsPerNode":"1",
    "autoCreated":"true",
    "replicationFactor":"1",
    "autoAddReplicas":"false"},
  "control_collection":{
    "router":{"name":"compositeId"},
    "shards":{"shard1":{
        "range":"8000-7fff",
        "state":"active",
        "replicas":{"core_node1":{
            "core":"collection1",
            "base_url":"http://127.0.0.1:60933",
            "node_name":"127.0.0.1:60933_",
            "state":"active",
            "leader":"true"}}}},
    "maxShardsPerNode":"1",
    "autoCreated":"true",
    "replicationFactor":"1",
    "autoAddReplicas":"false"}}
at 
__randomizedtesting.SeedInfo.seed([BB353B2BFFC3F1C4:336104F1513F9C3C]:0)
at org.junit.Assert.fail(Assert.java:93)
at 
org.apache.solr.cloud.AbstractFullDistribZkTestBase.ensureAllReplicasAreActive(AbstractFullDistribZkTestBase.java:1920)
at 
org.apache.solr.cloud.RecoveryAfterSoftCommitTest.test(RecoveryAfterSoftCommitTest.java:102)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1627)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:872)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:886)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:960)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:935)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 

[jira] [Commented] (LUCENE-6420) Update forbiddenapis to 1.8

2015-04-17 Thread Hoss Man (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6420?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14500763#comment-14500763
 ] 

Hoss Man commented on LUCENE-6420:
--

bq. ... {{forbiddenAnnotation=**.SuppressForbidden}} ...

Ah, very cool sir.  very cool.


 Update forbiddenapis to 1.8
 ---

 Key: LUCENE-6420
 URL: https://issues.apache.org/jira/browse/LUCENE-6420
 Project: Lucene - Core
  Issue Type: Improvement
  Components: general/build
Reporter: Uwe Schindler
Assignee: Uwe Schindler
 Fix For: Trunk, 5.2

 Attachments: LUCENE-6420.patch


 Update forbidden-apis plugin to 1.8:
 - Initial support for Java 9 including JIGSAW
 - Errors are now reported sorted by line numbers and correctly grouped 
 (synthetic methods/lambdas)
 - Package-level forbids: Deny all classes from a package: org.hatedpkg.** 
 (also other globs work)
 - In addition to file-level excludes, forbiddenapis now supports fine-grained 
 excludes using Java annotations. You can use the one shipped, but you can also 
 define your own, e.g. inside Lucene, and pass its name to forbidden (e.g. 
 using a glob: **.SuppressForbidden would allow any annotation with that name, 
 in any package, to suppress errors). The annotation needs to be at class 
 level; no runtime retention is required.
 This will for now only update the dependency and remove the additional forbid 
 by [~shalinmangar] for MessageFormat (which is now shipped with forbidden). 
 But we should review and for example suppress forbidden failures in command 
 line tools using @SuppressForbidden (or similar annotation). The discussion 
 is open, I can make a patch.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-7417) Aggregation Function unique() returns 0 when an int or date field is passed as argument

2015-04-17 Thread LevanDev (JIRA)
LevanDev created SOLR-7417:
--

 Summary: Aggregation Function unique() returns 0 when an int or 
date field is passed as argument
 Key: SOLR-7417
 URL: https://issues.apache.org/jira/browse/SOLR-7417
 Project: Solr
  Issue Type: Bug
Affects Versions: 5.1
Reporter: LevanDev


uniqueValues:'unique(myIntField)' 
uniqueValues:'unique(myDateField)' 

Result: 

facets:{
count: someNumber,
uniqueValues:0}}
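
For context, a SolrJ sketch of the kind of request that reproduces this (the field name and the uniqueValues label are the reporter's placeholders; the surrounding class is made up and assumes a client pointed at the affected collection):

{code}
import org.apache.solr.client.solrj.SolrClient;
import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.response.QueryResponse;
import org.apache.solr.common.util.NamedList;

public class UniqueAggDemo {
  static Object uniqueValues(SolrClient client) throws Exception {
    SolrQuery q = new SolrQuery("*:*");
    q.setRows(0);
    // JSON Facet API: count distinct values of an int field
    q.add("json.facet", "{uniqueValues:'unique(myIntField)'}");
    QueryResponse rsp = client.query(q);
    NamedList<?> facets = (NamedList<?>) rsp.getResponse().get("facets");
    return facets.get("uniqueValues"); // the reporter sees 0 here for int/date fields
  }
}
{code}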



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Assigned] (SOLR-6878) solr.ManagedSynonymFilterFactory all-to-all synonym switch (aka. expand)

2015-04-17 Thread Timothy Potter (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6878?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Timothy Potter reassigned SOLR-6878:


Assignee: Timothy Potter

 solr.ManagedSynonymFilterFactory all-to-all synonym switch (aka. expand)
 

 Key: SOLR-6878
 URL: https://issues.apache.org/jira/browse/SOLR-6878
 Project: Solr
  Issue Type: Improvement
  Components: Schema and Analysis
Affects Versions: 4.10.2
Reporter: Tomasz Sulkowski
Assignee: Timothy Potter
  Labels: ManagedSynonymFilterFactory, REST, SOLR
 Attachments: SOLR-6878.patch


 Hi,
 After switching from SynonymFilterFactory to ManagedSynonymFilterFactory I 
 have found out that there is no way to set an all-to-all synonyms relation. 
  Basically (judging from a Google search) there is a need for an expand 
  functionality switch (known from SynonymFilterFactory) which will treat all 
  synonyms together with their keyword as equal.
  For example: if we define a "car":["wagen","ride"] relation, it would 
  translate a query that includes one of the synonyms or the keyword to "car or 
  wagen or ride", independently of which word was used from those three.
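
To pin down the requested semantics, here is a toy, self-contained illustration in plain Java (not Solr code) of what such an all-to-all expand switch would mean for the mapping above:

{code}
import java.util.*;

public class ExpandDemo {
  public static void main(String[] args) {
    // managed mapping as registered via the REST API: "car" -> [wagen, ride]
    Map<String, Set<String>> managed = new HashMap<>();
    managed.put("car", new TreeSet<>(Arrays.asList("wagen", "ride")));

    // with an expand switch, every member of the group maps to the whole group
    Map<String, Set<String>> expanded = new HashMap<>();
    for (Map.Entry<String, Set<String>> e : managed.entrySet()) {
      Set<String> group = new TreeSet<>(e.getValue());
      group.add(e.getKey());
      for (String term : group) {
        expanded.put(term, group);
      }
    }
    // prints: {car=[car, ride, wagen], ride=[car, ride, wagen], wagen=[car, ride, wagen]}
    System.out.println(new TreeMap<>(expanded));
  }
}
{code}

A query for any of the three terms would then match documents containing any of them, which is the behaviour the expand flag of SynonymFilterFactory gives today.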



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org


