Re: JCC Project Extensions

2014-07-10 Thread Lee Skillen
Hey,

On 9 July 2014 18:38, Andi Vajda va...@apache.org wrote:

 On Wed, 9 Jul 2014, Andi Vajda wrote:



 On Wed, 9 Jul 2014, Lee Skillen wrote:

 Hey Andi,

 On 9 July 2014 13:50, Andi Vajda va...@apache.org wrote:



  Hi Lee,


 On Tue, 8 Jul 2014, Lee Skillen wrote:

 Andi, thanks for the reply - I've created a github mirror of the
 pylucene project for our own use (which I intend to keep synced with
 your SVN repository as its official upstream), located at:

 https://github.com/lskillen/pylucene

 As suggested I have formatted (and attached) a patch of the unfinished
 code that we're using for the through-layer exceptions.  Alternatively
 the diff can be inspected via github by diffing between the new
 feature-thru-exception branch that I have created and the master
 branch, as such:

 https://github.com/lskillen/pylucene/compare/feature-thru-exception?expand=1

 Although we've run the test suite without issues, I realise there may
 still be functionality/style/logical issues with the code.  I also
 suspect that there may not be specific test cases that target
 regression failures for exceptions (yet), so confidence isn't high!
 All comments are welcome, and I realise this will likely require
 further changes and a repeatable test case before it is acceptable,
 but that's fine.



 I took a look at your patch and used the main idea to rewrite it so that:
   - ref handling is correct (hopefully): you had it backwards when calling
 PyErr_Restore(), which steals references you must currently own.
   - the Python error state is now saved as one tuple of (type, value, tb)
 on PythonException, not as three strings and three fake Objects (longs).
   - ref handling is correct via a finalize() method DECREF'ing the saved
 error state tuple on PythonException when that exception is collected.
   - getMessage() still works as before, but the traceback is extracted on
 demand, not at throw time as was done before.

 The previous implementation was done so that no Python cross-VM
 reference had to be tracked; all the error data was stringified at
 error time.
 Your change now requires that the error be kept 'alive' across layers,
 and refs must be tracked to avoid leaks.


 Great feedback, thank you - I suspected that the reference handling
 wasn't quite there (first foray into CPython internals), and I also
 really wanted to use a tuple and get rid of the strings, but wasn't
 sure what the standard was for extending a class such as
 PythonException.  I actually made a quick attempt at running everything
 under debug mode to inspect for leaks, but suffice to say that my
 usual toolbox for debugging and tracking memory leaks
 (gdb/clang/valgrind) didn't work too well - I had some minor success
 with the excellent objgraph Python package, but would need to spend
 more time on it.


 I did _not_ test this as I don't have a system currently running that
 uses JCC in reverse like this. Since you're actively using the feature,
 please test the attached patch (against trunk) and report back with
 comments, bugs, fixes, etc...


 Not a problem!  I applied your modified patch against trunk, rebuilt,
 and re-ran our application.  The good news is that everything is
 working really well; the only bug that I could see was within the
 RegisterNatives() call inside registerNatives() for PythonException
 (the size was still 2; I changed it to calculate the size of the
 struct).  I'll re-attach the patch with the changes for you to review.


 Woops, I missed that - even though I looked for it. Oh well. Thanks.


 I attached a new patch with your fix. I also removed the 'clear()' method
 since it's no longer necessary: once the PythonException is constructed, the
 error state in the Python VM is cleared because of PyErr_Fetch() being
 called during saveErrorState().

Changes are looking great - I applied them against trunk again here and
everything is working well.  I've also added a new (simple) test case
within the test directory under the pylucene root
(test_PythonException.py) that checks that the through-layer exceptions
are working.  Realistically it should probably check the traceback as
well, but I think this suffices to show that the exceptions are
propagating properly.  On a side note, I had an issue building pylucene
because java/org/apache/pylucene/store/PythonIndexOutput.java refused
to build, as it references a BufferedIndexOutput that seems to have
been removed by LUCENE-5678 (I removed the reference in my local trunk
but didn't want to add that to the patch).
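For reference, a pure-Python analogue of what such a through-layer test has to assert (hypothetical names; the real test_PythonException.py exercises the Java layer via JCC):

```python
import traceback

class PythonException(Exception):
    """Stand-in for JCC's PythonException carrying the saved error state."""
    def __init__(self, state):
        self.state = state  # the (type, value, traceback) tuple
        super().__init__(str(state[1]))

def java_layer(callback):
    """Stand-in for the Java/JCC layer: wraps a Python error and re-raises."""
    try:
        callback()
    except Exception as exc:
        raise PythonException((type(exc), exc, exc.__traceback__))

def test_exception_propagates_through_layer():
    def failing_callback():
        raise RuntimeError("python-side failure")
    try:
        java_layer(failing_callback)
    except PythonException as exc:
        # The original type, value and traceback survived the layer.
        exc_type, exc_value, exc_tb = exc.state
        assert exc_type is RuntimeError
        assert "python-side failure" in str(exc_value)
        assert any("failing_callback" in line
                   for line in traceback.format_tb(exc_tb))
    else:
        raise AssertionError("exception did not propagate")

test_exception_propagates_through_layer()
```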

Cheers,
Lee


 Andi..



 The only other comments I have are that:

 (1) This was still an issue with my patch, but I don't know if there
 is anyone out there relying upon the exact format of the string that
 gets returned by getMessage(), as it is probably going to be different
 now that it is calculated at a different point.  In saying that, I
 imagine you would only be doing that if you were specifically trying
 to handle an exception 

[jira] [Commented] (SOLR-6234) Scoring modes for query time join

2014-07-10 Thread Mikhail Khludnev (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6234?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14057188#comment-14057188
 ] 

Mikhail Khludnev commented on SOLR-6234:


[~jacklo]
I still think we need to *add* this QParser for Solr users, *rather than 
decommission the current Solr join*. I agree its code is not easy to read, but 
I suppose it performs better in certain cases, and/or consumes less memory than 
the straightforward JoinUtil. 

re LUCENE-3759 : I don't believe that a true distributed join performs well for 
practical usage (here I agree with Yonik's comment at SOLR-4905). As far as I 
understand, what you've done in SOLR-4905 allows doing a _collocated join_ even 
with multiple shards, i.e. if we place the _from_ and _to_ side documents on the 
same shard, it's able to join them. I think that's what we need.

 

 Scoring modes for query time join 
 --

 Key: SOLR-6234
 URL: https://issues.apache.org/jira/browse/SOLR-6234
 Project: Solr
  Issue Type: New Feature
  Components: query parsers
Affects Versions: 5.0, 4.10
Reporter: Mikhail Khludnev
 Attachments: lucene-join-solr-query-parser-0.0.2.zip


 It adds a {{scorejoin}} query parser which calls Lucene's JoinUtil underneath. 
 It supports:
 - the {{score=none|avg|max|total}} local param (passed as a ScoreMode to 
 JoinUtil), and also 
 - a {{b=100}} param to pass to {{Query.setBoost()}}.
 So far:
 - it always passes {{multipleValuesPerDocument=true}}
 - it doesn't cover the cross-core join case; I just can't find a multicore 
 test case in the Solr tests, so I'd appreciate it if you could point me to one. 
 - I've attached a standalone plugin project; let me know if somebody is 
 interested and I'll convert it into a proper Solr codebase patch. Also please 
 mention any blockers!
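Going by the local params listed above, a request using the parser would presumably look something like this (field names and values are illustrative, not taken from the attached plugin):

```
q={!scorejoin from=manu_id_s to=id score=max b=100}name:ipod
```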



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Korean Tokenizer in solr

2014-07-10 Thread Poornima Jay
Hi,

Has anyone tried to implement the Korean language in Solr 3.6.1? I define the 
field as below in my schema file but the fieldtype is not working.

<fieldType name="text_kr" class="solr.TextField" positionIncrementGap="1000">
  <analyzer type="index">
    <tokenizer class="solr.KoreanTokenizerFactory"/>
    <filter class="solr.KoreanFilterFactory" hasOrigin="true" hasCNoun="true" bigrammable="true"/>
    <filter class="solr.LowerCaseFilterFactory"/>
    <filter class="solr.StopFilterFactory" ignoreCase="true" words="stopwords_kr.txt"/>
  </analyzer>
  <analyzer type="query">
    <tokenizer class="solr.KoreanTokenizerFactory"/>
    <filter class="solr.KoreanFilterFactory" hasOrigin="false" hasCNoun="false" bigrammable="false"/>
    <filter class="solr.LowerCaseFilterFactory"/>
    <filter class="solr.StopFilterFactory" ignoreCase="true" words="stopwords_kr.txt"/>
  </analyzer>
</fieldType>
    
Error : Caused by: org.apache.solr.common.SolrException: Unknown fieldtype 
'text_kr' specified on field product_name_kr

Regards,
Poornima


[jira] [Created] (SOLR-6235) SyncSliceTest fails on jenkins with no live servers available error

2014-07-10 Thread Shalin Shekhar Mangar (JIRA)
Shalin Shekhar Mangar created SOLR-6235:
---

 Summary: SyncSliceTest fails on jenkins with no live servers 
available error
 Key: SOLR-6235
 URL: https://issues.apache.org/jira/browse/SOLR-6235
 Project: Solr
  Issue Type: Bug
  Components: SolrCloud, Tests
Reporter: Shalin Shekhar Mangar
 Fix For: 4.10


{code}
1 tests failed.
FAILED:  org.apache.solr.cloud.SyncSliceTest.testDistribSearch

Error Message:
No live SolrServers available to handle this request

Stack Trace:
org.apache.solr.client.solrj.SolrServerException: No live SolrServers available 
to handle this request
at 
__randomizedtesting.SeedInfo.seed([685C57B3F25C854B:E9BAD9AB8503E577]:0)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrServer.request(LBHttpSolrServer.java:317)
at 
org.apache.solr.client.solrj.impl.CloudSolrServer.request(CloudSolrServer.java:659)
at 
org.apache.solr.client.solrj.request.QueryRequest.process(QueryRequest.java:91)
at org.apache.solr.client.solrj.SolrServer.query(SolrServer.java:301)
at 
org.apache.solr.cloud.AbstractFullDistribZkTestBase.checkShardConsistency(AbstractFullDistribZkTestBase.java:1149)
at 
org.apache.solr.cloud.AbstractFullDistribZkTestBase.checkShardConsistency(AbstractFullDistribZkTestBase.java:1118)
at org.apache.solr.cloud.SyncSliceTest.doTest(SyncSliceTest.java:236)
at 
org.apache.solr.BaseDistributedSearchTestCase.testDistribSearch(BaseDistributedSearchTestCase.java:865)
{code}






Building Solr 4.9.0 with maven: FileNotFoundException for lucene\tools\forbiddenApis\rue.txt

2014-07-10 Thread Artem Karpenko

Hi,

I'm trying to build Apache Solr 4.9.0 using Apache Maven 3.0 and am 
getting an error:


[ERROR] Failed to execute goal de.thetaphi:forbiddenapis:1.4:check 
(check-rue) on project lucene-core: IO problem while reading files with 
API signatures: java.io.FileNotFoundException: 
D:\work-files\workspace\solr-4.9.0.OXSEED.1\lucene\tools\forbiddenApis\rue.txt 
(The system cannot find the specified file) - [Help 1]


I've found the corresponding forbiddenapis declaration in the POM file, but 
there is no rue.txt file in the project at all. Is this a bug? Should I raise 
an issue in JIRA?


Best,
Artem.




[JENKINS-MAVEN] Lucene-Solr-Maven-4.x #655: POMs out of sync

2014-07-10 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Maven-4.x/655/

1 tests failed.
FAILED:  org.apache.solr.cloud.HttpPartitionTest.testDistribSearch

Error Message:
No registered leader was found after waiting for 6ms , collection: 
c8n_1x3_lf slice: shard1

Stack Trace:
org.apache.solr.common.SolrException: No registered leader was found after 
waiting for 6ms , collection: c8n_1x3_lf slice: shard1
at 
__randomizedtesting.SeedInfo.seed([97B1D5EEC38C078B:16575BF6B4D367B7]:0)
at 
org.apache.solr.common.cloud.ZkStateReader.getLeaderRetry(ZkStateReader.java:545)
at 
org.apache.solr.cloud.HttpPartitionTest.testRf3WithLeaderFailover(HttpPartitionTest.java:350)
at 
org.apache.solr.cloud.HttpPartitionTest.doTest(HttpPartitionTest.java:149)




Build Log:
[...truncated 55189 lines...]
BUILD FAILED
/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-Maven-4.x/build.xml:490: 
The following error occurred while executing this line:
/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-Maven-4.x/build.xml:182: 
The following error occurred while executing this line:
/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-Maven-4.x/extra-targets.xml:77:
 Java returned: 1

Total time: 194 minutes 2 seconds
Build step 'Invoke Ant' marked build as failure
Recording test results
Email was triggered for: Failure
Sending email for trigger: Failure




[jira] [Assigned] (SOLR-6235) SyncSliceTest fails on jenkins with no live servers available error

2014-07-10 Thread Shalin Shekhar Mangar (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6235?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shalin Shekhar Mangar reassigned SOLR-6235:
---

Assignee: Shalin Shekhar Mangar

 SyncSliceTest fails on jenkins with no live servers available error
 ---

 Key: SOLR-6235
 URL: https://issues.apache.org/jira/browse/SOLR-6235
 Project: Solr
  Issue Type: Bug
  Components: SolrCloud, Tests
Reporter: Shalin Shekhar Mangar
Assignee: Shalin Shekhar Mangar
 Fix For: 4.10


 {code}
 1 tests failed.
 FAILED:  org.apache.solr.cloud.SyncSliceTest.testDistribSearch
 Error Message:
 No live SolrServers available to handle this request
 Stack Trace:
 org.apache.solr.client.solrj.SolrServerException: No live SolrServers 
 available to handle this request
 at 
 __randomizedtesting.SeedInfo.seed([685C57B3F25C854B:E9BAD9AB8503E577]:0)
 at 
 org.apache.solr.client.solrj.impl.LBHttpSolrServer.request(LBHttpSolrServer.java:317)
 at 
 org.apache.solr.client.solrj.impl.CloudSolrServer.request(CloudSolrServer.java:659)
 at 
 org.apache.solr.client.solrj.request.QueryRequest.process(QueryRequest.java:91)
 at org.apache.solr.client.solrj.SolrServer.query(SolrServer.java:301)
 at 
 org.apache.solr.cloud.AbstractFullDistribZkTestBase.checkShardConsistency(AbstractFullDistribZkTestBase.java:1149)
 at 
 org.apache.solr.cloud.AbstractFullDistribZkTestBase.checkShardConsistency(AbstractFullDistribZkTestBase.java:1118)
 at org.apache.solr.cloud.SyncSliceTest.doTest(SyncSliceTest.java:236)
 at 
 org.apache.solr.BaseDistributedSearchTestCase.testDistribSearch(BaseDistributedSearchTestCase.java:865)
 {code}






[jira] [Commented] (LUCENE-5808) clean up postingsreader

2014-07-10 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5808?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14057408#comment-14057408
 ] 

Robert Muir commented on LUCENE-5808:
-

Thanks Mike: very possible! At first, during the refactoring, I was careful and 
ran this thing every time I made a change to prevent stuff like this. But it is 
so time-consuming and slow, and I also had the problem that OR* is highly 
variable: e.g. I'd run the benchmark a second time and it would show huge gains...

 clean up postingsreader
 ---

 Key: LUCENE-5808
 URL: https://issues.apache.org/jira/browse/LUCENE-5808
 Project: Lucene - Core
  Issue Type: Task
Reporter: Robert Muir
 Attachments: LUCENE-5808.patch


 The current postingsreader is ~ 1500 lines of code (mostly duplicated) 
 calling something like 4,000 lines of generated decompression code.
 This is really heavyweight and complicated, and bloats the lucene jar. It 
 would be nice to simplify it so we can eventually remove the baggage.






[jira] [Commented] (LUCENE-5809) Simplify ExactPhraseScorer

2014-07-10 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5809?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14057411#comment-14057411
 ] 

Robert Muir commented on LUCENE-5809:
-

Thanks Mike: it looks all within the noise to me. It's kind of sad that 
luceneutil doesn't show the leapfrog bug, though. Maybe that's partially because 
we only use two-term phrases. As the number of terms increases, wrong 
leapfrogging has a higher impact, I think. Maybe another reason is that the 
phrase selection is based on how common the phrase is, but it doesn't have good 
enough variety (like a rare phrase with high-frequency terms).

Anyway I think this is good to go, unless you have concerns.

 Simplify ExactPhraseScorer
 --

 Key: LUCENE-5809
 URL: https://issues.apache.org/jira/browse/LUCENE-5809
 Project: Lucene - Core
  Issue Type: Task
  Components: core/search
Reporter: Robert Muir
 Attachments: LUCENE-5809.patch


 While looking at this scorer I see a few little things which are remnants of 
 the past:
 * crazy heuristics to use next() over advance(): I think it should just use 
 advance(), like ConjunctionScorer. These days advance() isn't stupid anymore.
 * incorrect leapfrogging: the lead scorer is never advanced if a subsequent 
 scorer goes past it; it just falls into this nextDoc() loop.
 * pre-next()'ing: we are using the cost() API to sort, so there is no need to 
 do that.
 * UnionDocsAndPositionsEnum doesn't follow the DocsEnum contract and set the 
 initial doc to -1.
 * postingsreader advance() doesn't need to check docFreq > BLOCK_SIZE on each 
 advance call; that's easy to remove.
 So I think really this scorer should just look like a ConjunctionScorer that 
 verifies positions on match.
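The leapfrogging being discussed - advancing the lagging iterator directly to the other side's candidate document instead of stepping one doc at a time - can be sketched over sorted doc-ID lists (illustrative Python, not Lucene's ConjunctionScorer):

```python
import bisect

def leapfrog_intersect(a, b):
    """Intersect two sorted doc-ID lists by jumping the lagging side
    directly to the other side's candidate, like an advance()-based
    conjunction loop rather than a nextDoc() loop."""
    hits, i, j = [], 0, 0
    while i < len(a) and j < len(b):
        if a[i] == b[j]:
            hits.append(a[i])
            i += 1
            j += 1
        elif a[i] < b[j]:
            # advance(target=b[j]) instead of stepping one doc at a time
            i = bisect.bisect_left(a, b[j], i)
        else:
            j = bisect.bisect_left(b, a[i], j)
    return hits

assert leapfrog_intersect([1, 5, 9, 12], [2, 5, 12, 20]) == [5, 12]
```

The "incorrect leapfrogging" bullet describes the opposite behaviour: the lead iterator never jumps when another iterator passes it, so it degenerates into one-step scanning.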






[jira] [Commented] (LUCENE-5809) Simplify ExactPhraseScorer

2014-07-10 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5809?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14057423#comment-14057423
 ] 

ASF subversion and git services commented on LUCENE-5809:
-

Commit 1609453 from [~rcmuir] in branch 'dev/trunk'
[ https://svn.apache.org/r1609453 ]

LUCENE-5809: Simplify ExactPhraseScorer

 Simplify ExactPhraseScorer
 --

 Key: LUCENE-5809
 URL: https://issues.apache.org/jira/browse/LUCENE-5809
 Project: Lucene - Core
  Issue Type: Task
  Components: core/search
Reporter: Robert Muir
 Attachments: LUCENE-5809.patch


 While looking at this scorer i see a few little things which are remnants of 
 the past:
 * crazy heuristics to use next() over advance(): I think it should just use 
 advance(), like conjunctionscorer. these days advance() isnt stupid anymore
 * incorrect leapfrogging. the lead scorer is never advanced if a subsequent 
 scorer goes past it, it just falls into this nextDoc() loop.
 * pre-next()'ing: we are using cost() api to sort, so there is no need to do 
 that.
 * UnionDocsAndPositionsEnum doesnt follow docsenum contract and set initial 
 doc to -1
 postingsreader advance() doesn't need to check docFreq > BLOCK_SIZE on each 
 advance call; that's easy to remove.
 So I think really this scorer should just look like conjunctionscorer that 
 verifies positions on match.






[jira] [Commented] (LUCENE-5809) Simplify ExactPhraseScorer

2014-07-10 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5809?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14057435#comment-14057435
 ] 

ASF subversion and git services commented on LUCENE-5809:
-

Commit 1609455 from [~rcmuir] in branch 'dev/branches/branch_4x'
[ https://svn.apache.org/r1609455 ]

LUCENE-5809: Simplify ExactPhraseScorer

 Simplify ExactPhraseScorer
 --

 Key: LUCENE-5809
 URL: https://issues.apache.org/jira/browse/LUCENE-5809
 Project: Lucene - Core
  Issue Type: Task
  Components: core/search
Reporter: Robert Muir
 Fix For: 5.0, 4.10

 Attachments: LUCENE-5809.patch


 While looking at this scorer i see a few little things which are remnants of 
 the past:
 * crazy heuristics to use next() over advance(): I think it should just use 
 advance(), like conjunctionscorer. these days advance() isnt stupid anymore
 * incorrect leapfrogging. the lead scorer is never advanced if a subsequent 
 scorer goes past it, it just falls into this nextDoc() loop.
 * pre-next()'ing: we are using cost() api to sort, so there is no need to do 
 that.
 * UnionDocsAndPositionsEnum doesnt follow docsenum contract and set initial 
 doc to -1
 postingsreader advance() doesn't need to check docFreq > BLOCK_SIZE on each 
 advance call; that's easy to remove.
 So I think really this scorer should just look like conjunctionscorer that 
 verifies positions on match.






[jira] [Resolved] (LUCENE-5809) Simplify ExactPhraseScorer

2014-07-10 Thread Robert Muir (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5809?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Muir resolved LUCENE-5809.
-

   Resolution: Fixed
Fix Version/s: 4.10
   5.0

 Simplify ExactPhraseScorer
 --

 Key: LUCENE-5809
 URL: https://issues.apache.org/jira/browse/LUCENE-5809
 Project: Lucene - Core
  Issue Type: Task
  Components: core/search
Reporter: Robert Muir
 Fix For: 5.0, 4.10

 Attachments: LUCENE-5809.patch


 While looking at this scorer i see a few little things which are remnants of 
 the past:
 * crazy heuristics to use next() over advance(): I think it should just use 
 advance(), like conjunctionscorer. these days advance() isnt stupid anymore
 * incorrect leapfrogging. the lead scorer is never advanced if a subsequent 
 scorer goes past it, it just falls into this nextDoc() loop.
 * pre-next()'ing: we are using cost() api to sort, so there is no need to do 
 that.
 * UnionDocsAndPositionsEnum doesnt follow docsenum contract and set initial 
 doc to -1
 postingsreader advance() doesn't need to check docFreq > BLOCK_SIZE on each 
 advance call; that's easy to remove.
 So I think really this scorer should just look like conjunctionscorer that 
 verifies positions on match.






[jira] [Commented] (SOLR-6136) ConcurrentUpdateSolrServer includes a Spin Lock

2014-07-10 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6136?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14057445#comment-14057445
 ] 

Mark Miller commented on SOLR-6136:
---

bq. My thinking is to relax this a little to allow an additional runner if the 
queue is half full but otherwise just keep it at 1.

That sounds okay to me - my only worry is doing something that spins up too 
many threads for a small queue.
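The wait/notify idea behind the attached wait___notify_all.patch - block waiters on a condition instead of polling in a spin loop - looks roughly like this (an illustrative Python sketch, not the actual Solr code):

```python
import threading

class RunnerPool:
    """Illustrative: wait for all runners to finish without spinning."""
    def __init__(self):
        self._lock = threading.Lock()
        self._done = threading.Condition(self._lock)
        self._active = 0

    def runner_started(self):
        with self._lock:
            self._active += 1

    def runner_finished(self):
        with self._lock:
            self._active -= 1
            if self._active == 0:
                self._done.notify_all()  # wake block_until_finished() waiters

    def block_until_finished(self):
        with self._lock:
            # wait() releases the lock and sleeps; no CPU-burning spin loop
            while self._active > 0:
                self._done.wait()
```

Whether more runners are spun up when the queue fills is a separate policy decision layered on top of this; the sketch only shows the blocking primitive.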

 ConcurrentUpdateSolrServer includes a Spin Lock
 ---

 Key: SOLR-6136
 URL: https://issues.apache.org/jira/browse/SOLR-6136
 Project: Solr
  Issue Type: Bug
  Components: SolrCloud
Affects Versions: 4.6, 4.6.1, 4.7, 4.7.1, 4.7.2, 4.8, 4.8.1
Reporter: Brandon Chapman
Assignee: Timothy Potter
Priority: Critical
 Attachments: wait___notify_all.patch


 ConcurrentUpdateSolrServer.blockUntilFinished() includes a Spin Lock. This 
 causes an extremely high amount of CPU to be used on the Cloud Leader during 
 indexing.
 Here is a summary of our system testing. 
 Importing data on Solr4.5.0: 
 Throughput gets as high as 240 documents per second.
 [tomcat@solr-stg01 logs]$ uptime 
 09:53:50 up 310 days, 23:52, 1 user, load average: 3.33, 3.72, 5.43
 PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND 
 9547 tomcat 21 0 6850m 1.2g 16m S 86.2 5.0 1:48.81 java
 Importing data on Solr4.7.0 with no replicas: 
 Throughput peaks at 350 documents per second.
 [tomcat@solr-stg01 logs]$ uptime 
 10:03:44 up 311 days, 2 min, 1 user, load average: 4.57, 2.55, 4.18
 PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND 
 9728 tomcat 23 0 6859m 2.2g 28m S 62.3 9.0 2:20.20 java
 Importing data on Solr4.7.0 with replicas: 
 Throughput peaks at 30 documents per second because the Solr machine is out 
 of CPU.
 [tomcat@solr-stg01 logs]$ uptime 
 09:40:04 up 310 days, 23:38, 1 user, load average: 30.54, 12.39, 4.79
 PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND 
 9190 tomcat 17 0 7005m 397m 15m S 198.5 1.6 7:14.87 java






[jira] [Updated] (LUCENE-5795) More Like This: ensures selection of best terms is indeed O(n)

2014-07-10 Thread Alex Ksikes (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5795?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alex Ksikes updated LUCENE-5795:


Attachment: LUCENE-5795

 More Like This: ensures selection of best terms is indeed O(n)
 --

 Key: LUCENE-5795
 URL: https://issues.apache.org/jira/browse/LUCENE-5795
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Alex Ksikes
Priority: Minor
 Attachments: LUCENE-5795, LUCENE-5795, LUCENE-5795, LUCENE-5795, 
 LUCENE-5795









[jira] [Commented] (LUCENE-5795) More Like This: ensures selection of best terms is indeed O(n)

2014-07-10 Thread Alex Ksikes (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5795?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14057463#comment-14057463
 ] 

Alex Ksikes commented on LUCENE-5795:
-

Thanks for the comment Simon. I've just updated the patch.

 More Like This: ensures selection of best terms is indeed O(n)
 --

 Key: LUCENE-5795
 URL: https://issues.apache.org/jira/browse/LUCENE-5795
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Alex Ksikes
Priority: Minor
 Attachments: LUCENE-5795, LUCENE-5795, LUCENE-5795, LUCENE-5795, 
 LUCENE-5795









[jira] [Updated] (LUCENE-5811) TestFieldCacheSort.testStringValReverse reproduce failure: java.lang.RuntimeException: CheckReader failed

2014-07-10 Thread Robert Muir (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5811?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Muir updated LUCENE-5811:


Attachment: LUCENE-5811.patch

The problem is that checkreader doesn't like the exception we throw if the user 
screws up.

On the other hand, I sorta like the idea of this reader always passing 
checkreader: it allows it to be used in IW.addIndexes() as a viable way to 
upgrade to docvalues when you don't have them.

So to fix that, we just have to fix the leniency and exceptions to behave 
properly: here is a patch.

 TestFieldCacheSort.testStringValReverse reproduce failure: 
 java.lang.RuntimeException: CheckReader failed
 -

 Key: LUCENE-5811
 URL: https://issues.apache.org/jira/browse/LUCENE-5811
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Hoss Man
 Attachments: LUCENE-5811.patch


 Found here...
 Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-MacOSX/1702/
 Java: 64bit/jdk1.7.0 -XX:-UseCompressedOops -XX:+UseG1GC
 Reproduces on my linux machine @ trunk r1609232 
 {noformat}
[junit4]   2 NOTE: reproduce with: ant test  
 -Dtestcase=TestFieldCacheSort -Dtests.method=testStringValReverse 
 -Dtests.seed=E9ADA2F0253960ED -Dtests.slow=true -Dtests.locale=th_TH 
 -Dtests.timezone=Asia/Urumqi -Dtests.file.encoding=UTF-8
[junit4] ERROR   0.60s | TestFieldCacheSort.testStringValReverse 
[junit4] Throwable #1: java.lang.RuntimeException: CheckReader failed
[junit4]  at 
 __randomizedtesting.SeedInfo.seed([E9ADA2F0253960ED:2E067D8E7A200475]:0)
[junit4]  at 
 org.apache.lucene.util.TestUtil.checkReader(TestUtil.java:240)
[junit4]  at 
 org.apache.lucene.util.TestUtil.checkReader(TestUtil.java:218)
[junit4]  at 
 org.apache.lucene.util.LuceneTestCase.newSearcher(LuceneTestCase.java:1598)
[junit4]  at 
 org.apache.lucene.util.LuceneTestCase.newSearcher(LuceneTestCase.java:1572)
[junit4]  at 
 org.apache.lucene.util.LuceneTestCase.newSearcher(LuceneTestCase.java:1564)
[junit4]  at 
 org.apache.lucene.uninverting.TestFieldCacheSort.testStringValReverse(TestFieldCacheSort.java:343)
[junit4]  at java.lang.Thread.run(Thread.java:744)
 {noformat}






[jira] [Commented] (LUCENE-5811) TestFieldCacheSort.testStringValReverse reproduce failure: java.lang.RuntimeException: CheckReader failed

2014-07-10 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5811?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14057466#comment-14057466
 ] 

Michael McCandless commented on LUCENE-5811:


+1

I'm sad to see tryToBeInsane is departing.  Maybe it can be re-incarnated 
sometime soon!

 TestFieldCacheSort.testStringValReverse reproduce failure: 
 java.lang.RuntimeException: CheckReader failed
 -

 Key: LUCENE-5811
 URL: https://issues.apache.org/jira/browse/LUCENE-5811
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Hoss Man
 Attachments: LUCENE-5811.patch


 Found here...
 Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-MacOSX/1702/
 Java: 64bit/jdk1.7.0 -XX:-UseCompressedOops -XX:+UseG1GC
 Reproduces on my linux machine @ trunk r1609232 
 {noformat}
[junit4]   2 NOTE: reproduce with: ant test  
 -Dtestcase=TestFieldCacheSort -Dtests.method=testStringValReverse 
 -Dtests.seed=E9ADA2F0253960ED -Dtests.slow=true -Dtests.locale=th_TH 
 -Dtests.timezone=Asia/Urumqi -Dtests.file.encoding=UTF-8
[junit4] ERROR   0.60s | TestFieldCacheSort.testStringValReverse 
[junit4] Throwable #1: java.lang.RuntimeException: CheckReader failed
[junit4]  at 
 __randomizedtesting.SeedInfo.seed([E9ADA2F0253960ED:2E067D8E7A200475]:0)
[junit4]  at 
 org.apache.lucene.util.TestUtil.checkReader(TestUtil.java:240)
[junit4]  at 
 org.apache.lucene.util.TestUtil.checkReader(TestUtil.java:218)
[junit4]  at 
 org.apache.lucene.util.LuceneTestCase.newSearcher(LuceneTestCase.java:1598)
[junit4]  at 
 org.apache.lucene.util.LuceneTestCase.newSearcher(LuceneTestCase.java:1572)
[junit4]  at 
 org.apache.lucene.util.LuceneTestCase.newSearcher(LuceneTestCase.java:1564)
[junit4]  at 
 org.apache.lucene.uninverting.TestFieldCacheSort.testStringValReverse(TestFieldCacheSort.java:343)
[junit4]  at java.lang.Thread.run(Thread.java:744)
 {noformat}






[jira] [Updated] (LUCENE-5795) More Like This: ensures selection of best terms is indeed O(n)

2014-07-10 Thread Alex Ksikes (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5795?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alex Ksikes updated LUCENE-5795:


Attachment: LUCENE-5795

 More Like This: ensures selection of best terms is indeed O(n)
 --

 Key: LUCENE-5795
 URL: https://issues.apache.org/jira/browse/LUCENE-5795
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Alex Ksikes
Priority: Minor
 Attachments: LUCENE-5795, LUCENE-5795, LUCENE-5795, LUCENE-5795, 
 LUCENE-5795









[jira] [Updated] (LUCENE-5795) More Like This: ensures selection of best terms is indeed O(n)

2014-07-10 Thread Alex Ksikes (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5795?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alex Ksikes updated LUCENE-5795:


Attachment: (was: LUCENE-5795)

 More Like This: ensures selection of best terms is indeed O(n)
 --

 Key: LUCENE-5795
 URL: https://issues.apache.org/jira/browse/LUCENE-5795
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Alex Ksikes
Priority: Minor
 Attachments: LUCENE-5795, LUCENE-5795, LUCENE-5795, LUCENE-5795, 
 LUCENE-5795









[jira] [Commented] (SOLR-6235) SyncSliceTest fails on jenkins with no live servers available error

2014-07-10 Thread Shalin Shekhar Mangar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6235?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14057467#comment-14057467
 ] 

Shalin Shekhar Mangar commented on SOLR-6235:
-

Wow, crazy crazy bug! I finally found the root cause.

The problem is with the leader-initiated recovery code, which uses the core 
name to set/get status. This works fine as long as the core names for all 
nodes are different, but if they all happen to be collection1 then we have 
this problem :)

In this particular failure that I investigated:
http://jenkins.thetaphi.de/job/Lucene-Solr-4.x-MacOSX/1667/consoleText

Here's the sequence of events:
# port:51916 - core_node1 was initially the leader, docs were indexed and then 
it was killed
# port:51919 - core_node2 became the leader, peer sync happened, shards were 
checked for consistency
# port:51916 - core_node1 was brought back online, it recovered from the 
leader, consistency check passed
# port:51923 core_node3 and port:51932 core_node4 were added to the skipped 
servers
# 300 docs were indexed (to go beyond the peer sync limit)
# port:51919 - core_node2 (the leader) was killed

Here is where things get interesting:
# port:51923 core_node3 tries to become the leader and initiates sync with 
other replicas
# In the meanwhile, a commit request from checkShardConsistency makes its way 
to port:51923 core_node3 (even though it's not clear whether it has indeed 
become the leader)
# port:51923 core_node3 calls commit on all shards including port:51919 
core_node2 which should've been down but perhaps the local state at 51923 is 
not updated yet?
# port:51923 core_node3 puts replica collection1 on 127.0.0.1:51919_ into 
leader-initiated recovery
# port:51923 - core_node3 fails to peersync (because the number of changes was 
too large) and rejoins the election
# After this point each shard that tries to become the leader fails because it 
thinks that it has been put under leader initiated recovery and goes into 
actual recovery
# Of course, since there is no leader, recovery cannot happen and each shard 
eventually goes to recovery_failed state
# Eventually the test gives up and throws an error saying that there are no 
live servers available to handle the request.
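The collision described above can be reduced to a minimal sketch (hypothetical class and method names, not the actual Solr code): keying LIR state by core name alone collapses distinct replicas that all carry the default name collection1, while keying by coreNodeName keeps them apart.

```java
import java.util.HashMap;
import java.util.Map;

// Minimal sketch (hypothetical names): why keying leader-initiated
// recovery (LIR) state by core name alone is ambiguous when every
// node uses the default core name "collection1".
public class LirKeyDemo {
    // Keyed by core name: both replicas collapse onto one map entry.
    static Map<String, String> keyByCoreName() {
        Map<String, String> lir = new HashMap<>();
        lir.put("collection1", "down");       // replica on port 51919
        lir.put("collection1", "recovering"); // replica on port 51923 overwrites it
        return lir;
    }

    // Keyed by coreNodeName: each replica keeps its own state.
    static Map<String, String> keyByCoreNode() {
        Map<String, String> lir = new HashMap<>();
        lir.put("core_node2", "down");
        lir.put("core_node3", "recovering");
        return lir;
    }

    public static void main(String[] args) {
        System.out.println(keyByCoreName().size()); // 1 entry: states collide
        System.out.println(keyByCoreNode().size()); // 2 entries: states distinct
    }
}
```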

 SyncSliceTest fails on jenkins with no live servers available error
 ---

 Key: SOLR-6235
 URL: https://issues.apache.org/jira/browse/SOLR-6235
 Project: Solr
  Issue Type: Bug
  Components: SolrCloud, Tests
Reporter: Shalin Shekhar Mangar
Assignee: Shalin Shekhar Mangar
 Fix For: 4.10


 {code}
 1 tests failed.
 FAILED:  org.apache.solr.cloud.SyncSliceTest.testDistribSearch
 Error Message:
 No live SolrServers available to handle this request
 Stack Trace:
 org.apache.solr.client.solrj.SolrServerException: No live SolrServers 
 available to handle this request
 at 
 __randomizedtesting.SeedInfo.seed([685C57B3F25C854B:E9BAD9AB8503E577]:0)
 at 
 org.apache.solr.client.solrj.impl.LBHttpSolrServer.request(LBHttpSolrServer.java:317)
 at 
 org.apache.solr.client.solrj.impl.CloudSolrServer.request(CloudSolrServer.java:659)
 at 
 org.apache.solr.client.solrj.request.QueryRequest.process(QueryRequest.java:91)
 at org.apache.solr.client.solrj.SolrServer.query(SolrServer.java:301)
 at 
 org.apache.solr.cloud.AbstractFullDistribZkTestBase.checkShardConsistency(AbstractFullDistribZkTestBase.java:1149)
 at 
 org.apache.solr.cloud.AbstractFullDistribZkTestBase.checkShardConsistency(AbstractFullDistribZkTestBase.java:1118)
 at org.apache.solr.cloud.SyncSliceTest.doTest(SyncSliceTest.java:236)
 at 
 org.apache.solr.BaseDistributedSearchTestCase.testDistribSearch(BaseDistributedSearchTestCase.java:865)
 {code}






[jira] [Commented] (SOLR-6235) SyncSliceTest fails on jenkins with no live servers available error

2014-07-10 Thread Shalin Shekhar Mangar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6235?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14057469#comment-14057469
 ] 

Shalin Shekhar Mangar commented on SOLR-6235:
-

We should use coreNode instead of core names for setting leader initiated 
recovery. I'll put up a patch.

 SyncSliceTest fails on jenkins with no live servers available error
 ---

 Key: SOLR-6235
 URL: https://issues.apache.org/jira/browse/SOLR-6235
 Project: Solr
  Issue Type: Bug
  Components: SolrCloud, Tests
Reporter: Shalin Shekhar Mangar
Assignee: Shalin Shekhar Mangar
 Fix For: 4.10


 {code}
 1 tests failed.
 FAILED:  org.apache.solr.cloud.SyncSliceTest.testDistribSearch
 Error Message:
 No live SolrServers available to handle this request
 Stack Trace:
 org.apache.solr.client.solrj.SolrServerException: No live SolrServers 
 available to handle this request
 at 
 __randomizedtesting.SeedInfo.seed([685C57B3F25C854B:E9BAD9AB8503E577]:0)
 at 
 org.apache.solr.client.solrj.impl.LBHttpSolrServer.request(LBHttpSolrServer.java:317)
 at 
 org.apache.solr.client.solrj.impl.CloudSolrServer.request(CloudSolrServer.java:659)
 at 
 org.apache.solr.client.solrj.request.QueryRequest.process(QueryRequest.java:91)
 at org.apache.solr.client.solrj.SolrServer.query(SolrServer.java:301)
 at 
 org.apache.solr.cloud.AbstractFullDistribZkTestBase.checkShardConsistency(AbstractFullDistribZkTestBase.java:1149)
 at 
 org.apache.solr.cloud.AbstractFullDistribZkTestBase.checkShardConsistency(AbstractFullDistribZkTestBase.java:1118)
 at org.apache.solr.cloud.SyncSliceTest.doTest(SyncSliceTest.java:236)
 at 
 org.apache.solr.BaseDistributedSearchTestCase.testDistribSearch(BaseDistributedSearchTestCase.java:865)
 {code}






Re: Building Solr 4.9.0 with maven: FileNotFoundException for lucene\tools\forbiddenApis\rue.txt

2014-07-10 Thread Steve Rowe
Hi Artem,

LUCENE-5757 https://issues.apache.org/jira/browse/LUCENE-5757 removed this 
file, but didn’t modify the Maven POM templates. 

I didn’t notice the problem until after 4.9 was released, but I fixed the 
problem on branch_4x in the following commit: 
http://svn.apache.org/viewvc?view=revision&revision=r1607523.  You can make 
the same changes locally to get the Maven build to work, e.g. from a checked 
out 4.9 tag 
http://svn.apache.org/repos/asf/lucene/dev/tags/lucene_solr_4_9_0/:

svn merge -c 1607523 http://svn.apache.org/repos/asf/lucene/dev/branch_4x

When I did the above, ‘mvn -DskipTests install’ worked for me.

Steve

On Jul 10, 2014, at 5:02 AM, Artem Karpenko gooy...@gmail.com wrote:

 Hi,
 
 I'm trying to build Apache Solr 4.9.0 using Apache Maven 3.0 and am getting 
 an error
 
 [ERROR] Failed to execute goal de.thetaphi:forbiddenapis:1.4:check 
 (check-rue) on project lucene-core: IO problem while reading files with API 
 signatures: java.io.FileNotFoundException: 
 D:\work-files\workspace\solr-4.9.0.OXSEED.1\lucene\tools\forbiddenApis\rue.txt
 (The system cannot find the specified file) - [Help 1]
 
 I've found appropriate forbiddenapis declaration in POM file but there is no 
 rue.txt file in the project at all. Is this a bug, should I raise an issue in 
 JIRA?
 
 Best,
 Artem.
 
 





[jira] [Commented] (LUCENE-5812) NRTCachingDirectory should implement Accountable

2014-07-10 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5812?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14057486#comment-14057486
 ] 

ASF subversion and git services commented on LUCENE-5812:
-

Commit 1609459 from [~simonw] in branch 'dev/trunk'
[ https://svn.apache.org/r1609459 ]

LUCENE-5812: NRTCachingDirectory now implements Accountable

 NRTCachingDirectory should implement Accountable
 

 Key: LUCENE-5812
 URL: https://issues.apache.org/jira/browse/LUCENE-5812
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Simon Willnauer
Priority: Minor
 Attachments: LUCENE-5812.patch









Re: Building Solr 4.9.0 with maven: FileNotFoundException for lucene\tools\forbiddenApis\rue.txt

2014-07-10 Thread Steve Rowe

On Jul 10, 2014, at 9:48 AM, Steve Rowe sar...@gmail.com wrote:

 You can make the same changes locally to get the Maven build to work, e.g. 
 from a checked out 4.9 tag 
 http://svn.apache.org/repos/asf/lucene/dev/tags/lucene_solr_4_9_0/:
 
 svn merge -c 1607523 http://svn.apache.org/repos/asf/lucene/dev/branch_4x

Oops, the above URL isn’t quite right - “branches/” is missing after “dev/“ - 
here’s the corrected version:

svn merge -c 1607523 
http://svn.apache.org/repos/asf/lucene/dev/branches/branch_4x

Steve
www.lucidworks.com



[jira] [Commented] (LUCENE-5812) NRTCachingDirectory should implement Accountable

2014-07-10 Thread Adrien Grand (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5812?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14057495#comment-14057495
 ] 

Adrien Grand commented on LUCENE-5812:
--

Instead of just having NRTCachingDirectory implement this interface, should 
we just make Directory implement it?

 NRTCachingDirectory should implement Accountable
 

 Key: LUCENE-5812
 URL: https://issues.apache.org/jira/browse/LUCENE-5812
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Simon Willnauer
Priority: Minor
 Attachments: LUCENE-5812.patch









[jira] [Updated] (SOLR-5746) solr.xml parsing of str vs int vs bool is brittle; fails silently; expects odd type for shareSchema

2014-07-10 Thread Maciej Zasada (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5746?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Maciej Zasada updated SOLR-5746:


Attachment: SOLR-5746.patch

Hi [~hossman],

I'd like to submit a patch for this issue. I made the changes according to your 
suggestions:
* The parsing logic has changed so that config parameters are transformed to 
their expected types at parse time instead of at value-reading time. Each 
solr.xml section is transformed into a NamedList, and later into SolrParams. 
Essentially, if a {{boolean}} type is expected for parameter {{foo}}, 
{{<str name="foo">true</str>}} will work just fine. The same goes for other types.
* An exception is thrown at parse time if any unexpected values are found in 
the config.

If you have any suggestions, I'm more than happy to hear them.

Cheers,
Maciej
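The parse-time coercion described in the patch can be sketched as follows (hypothetical helper methods, not the actual SOLR-5746 code): either element form is accepted for a typed option, and an un-coercible value fails fast when the config is parsed rather than when it is first read.

```java
// Sketch of lenient parse-time coercion (hypothetical helper, not the
// actual SOLR-5746 patch): accept the text of either
// <bool name="foo">true</bool> or <str name="foo">true</str>, but fail
// fast at parse time on values that cannot be coerced.
public class ConfigCoercion {
    static boolean parseBool(String name, String raw) {
        if ("true".equalsIgnoreCase(raw))  return true;
        if ("false".equalsIgnoreCase(raw)) return false;
        // The error surfaces when the config is parsed, not when the
        // value is first needed.
        throw new IllegalArgumentException(
            "Invalid boolean for '" + name + "': " + raw);
    }

    static int parseInt(String name, String raw) {
        try {
            return Integer.parseInt(raw.trim());
        } catch (NumberFormatException e) {
            throw new IllegalArgumentException(
                "Invalid int for '" + name + "': " + raw);
        }
    }
}
```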

 solr.xml parsing of str vs int vs bool is brittle; fails silently; 
 expects odd type for shareSchema   
 --

 Key: SOLR-5746
 URL: https://issues.apache.org/jira/browse/SOLR-5746
 Project: Solr
  Issue Type: Bug
Affects Versions: 4.3, 4.4, 4.5, 4.6
Reporter: Hoss Man
 Attachments: SOLR-5746.patch


 A comment in the ref guide got me looking at ConfigSolrXml.java and noticing 
 that the parsing of solr.xml options here is very brittle and confusing.  In 
 particular:
 * if a boolean option foo is expected along the lines of {{<bool 
 name="foo">true</bool>}} it will silently ignore {{<str 
 name="foo">true</str>}}
 * likewise for an int option {{<int name="bar">32</int>}} vs {{<str 
 name="bar">32</str>}}
 ... this is inconsistent with the way solrconfig.xml is parsed.  In 
 solrconfig.xml, the xml nodes are parsed into a NamedList, and the above 
 options will work in either form, but an invalid value such as {{<bool 
 name="foo">NOT A BOOLEAN</bool>}} will generate an error earlier (when 
 parsing config) than {{<str name="foo">NOT A BOOLEAN</str>}} (attempt to 
 parse the string as a bool the first time the config value is needed)
 In addition, i notice this really confusing line...
 {code}
 propMap.put(CfgProp.SOLR_SHARESCHEMA, 
 doSub("solr/str[@name='shareSchema']"));
 {code}
 shareSchema is used internally as a boolean option, but as written the 
 parsing code will ignore it unless the user explicitly configures it as a 
 {{<str/>}}






[jira] [Resolved] (LUCENE-5812) NRTCachingDirectory should implement Accountable

2014-07-10 Thread Simon Willnauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5812?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Simon Willnauer resolved LUCENE-5812.
-

Resolution: Fixed
  Assignee: Simon Willnauer

 NRTCachingDirectory should implement Accountable
 

 Key: LUCENE-5812
 URL: https://issues.apache.org/jira/browse/LUCENE-5812
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Simon Willnauer
Assignee: Simon Willnauer
Priority: Minor
 Attachments: LUCENE-5812.patch









[jira] [Commented] (LUCENE-5812) NRTCachingDirectory should implement Accountable

2014-07-10 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5812?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14057502#comment-14057502
 ] 

ASF subversion and git services commented on LUCENE-5812:
-

Commit 1609465 from [~simonw] in branch 'dev/branches/branch_4x'
[ https://svn.apache.org/r1609465 ]

LUCENE-5812: NRTCachingDirectory now implements Accountable

 NRTCachingDirectory should implement Accountable
 

 Key: LUCENE-5812
 URL: https://issues.apache.org/jira/browse/LUCENE-5812
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Simon Willnauer
Priority: Minor
 Attachments: LUCENE-5812.patch









[jira] [Commented] (LUCENE-5714) Improve tests for BBoxStrategy then port to 4x.

2014-07-10 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5714?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14057512#comment-14057512
 ] 

ASF subversion and git services commented on LUCENE-5714:
-

Commit 1609468 from [~dsmiley] in branch 'dev/trunk'
[ https://svn.apache.org/r1609468 ]

LUCENE-5714: BBoxStrategy should convert shapes to bounding box on indexing 
(but not search)

 Improve tests for BBoxStrategy then port to 4x.
 ---

 Key: LUCENE-5714
 URL: https://issues.apache.org/jira/browse/LUCENE-5714
 Project: Lucene - Core
  Issue Type: Improvement
  Components: modules/spatial
Reporter: David Smiley
Assignee: David Smiley
 Fix For: 5.0, 4.10

 Attachments: LUCENE-5714_Enhance_BBoxStrategy.patch, 
 LUCENE-5714__Enhance_BBoxStrategy__more_tests,_fix_dateline_bugs,_new_AreaSimilarity_algor.patch


 BBoxStrategy needs better tests before I'm comfortable seeing it in 4x.  
 Specifically it should use random rectangles based validation (ones that may 
 cross the dateline), akin to the other tests.  And I think I see an 
 equals/hashcode bug to be fixed in there too.
 One particular thing I'd like to see added is how to handle a zero-area case 
 for AreaSimilarity.  I think an additional feature in which you declare a 
 minimum % area (relative to the query shape) would be good.
 It should be possible for the user to combine rectangle center-point to query 
 shape center-point distance sorting as well.  I think it is but I need to 
 make sure it's possible without _having_ to index a separate center point 
 field.
 Another possibility (probably not to be addressed here) is a minimum ratio 
 between width/height, perhaps 10%.  A long but nearly no height line should 
 not be massively disadvantaged relevancy-wise to an equivalently long 
 diagonal road that has a square bbox.






[jira] [Commented] (LUCENE-5714) Improve tests for BBoxStrategy then port to 4x.

2014-07-10 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5714?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14057516#comment-14057516
 ] 

ASF subversion and git services commented on LUCENE-5714:
-

Commit 1609469 from [~dsmiley] in branch 'dev/branches/branch_4x'
[ https://svn.apache.org/r1609469 ]

LUCENE-5714: BBoxStrategy should convert shapes to bounding box on indexing 
(but not search)

 Improve tests for BBoxStrategy then port to 4x.
 ---

 Key: LUCENE-5714
 URL: https://issues.apache.org/jira/browse/LUCENE-5714
 Project: Lucene - Core
  Issue Type: Improvement
  Components: modules/spatial
Reporter: David Smiley
Assignee: David Smiley
 Fix For: 5.0, 4.10

 Attachments: LUCENE-5714_Enhance_BBoxStrategy.patch, 
 LUCENE-5714__Enhance_BBoxStrategy__more_tests,_fix_dateline_bugs,_new_AreaSimilarity_algor.patch


 BBoxStrategy needs better tests before I'm comfortable seeing it in 4x.  
 Specifically it should use random rectangles based validation (ones that may 
 cross the dateline), akin to the other tests.  And I think I see an 
 equals/hashcode bug to be fixed in there too.
 One particular thing I'd like to see added is how to handle a zero-area case 
 for AreaSimilarity.  I think an additional feature in which you declare a 
 minimum % area (relative to the query shape) would be good.
 It should be possible for the user to combine rectangle center-point to query 
 shape center-point distance sorting as well.  I think it is but I need to 
 make sure it's possible without _having_ to index a separate center point 
 field.
 Another possibility (probably not to be addressed here) is a minimum ratio 
 between width/height, perhaps 10%.  A long but nearly no height line should 
 not be massively disadvantaged relevancy-wise to an equivalently long 
 diagonal road that has a square bbox.






[jira] [Commented] (SOLR-6235) SyncSliceTest fails on jenkins with no live servers available error

2014-07-10 Thread Timothy Potter (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6235?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14057546#comment-14057546
 ] 

Timothy Potter commented on SOLR-6235:
--

Hi Shalin,

Great find! Using coreNode is a good idea, but why would all the cores have the 
name collection1? Is that valid or an indication of a problem upstream from 
this code?

Also, you raise a good point about all replicas thinking they are in 
leader-initiated recovery (LIR). In ElectionContext, when running 
shouldIBeLeader, the node will choose to not be the leader if it is in LIR. 
However, this could lead to no leader. My thinking there is the state is bad 
enough that we would need manual intervention to clear one of the LIR znodes to 
allow a replica to get past this point. But maybe we can do better here?

 SyncSliceTest fails on jenkins with no live servers available error
 ---

 Key: SOLR-6235
 URL: https://issues.apache.org/jira/browse/SOLR-6235
 Project: Solr
  Issue Type: Bug
  Components: SolrCloud, Tests
Reporter: Shalin Shekhar Mangar
Assignee: Shalin Shekhar Mangar
 Fix For: 4.10


 {code}
 1 tests failed.
 FAILED:  org.apache.solr.cloud.SyncSliceTest.testDistribSearch
 Error Message:
 No live SolrServers available to handle this request
 Stack Trace:
 org.apache.solr.client.solrj.SolrServerException: No live SolrServers 
 available to handle this request
 at 
 __randomizedtesting.SeedInfo.seed([685C57B3F25C854B:E9BAD9AB8503E577]:0)
 at 
 org.apache.solr.client.solrj.impl.LBHttpSolrServer.request(LBHttpSolrServer.java:317)
 at 
 org.apache.solr.client.solrj.impl.CloudSolrServer.request(CloudSolrServer.java:659)
 at 
 org.apache.solr.client.solrj.request.QueryRequest.process(QueryRequest.java:91)
 at org.apache.solr.client.solrj.SolrServer.query(SolrServer.java:301)
 at 
 org.apache.solr.cloud.AbstractFullDistribZkTestBase.checkShardConsistency(AbstractFullDistribZkTestBase.java:1149)
 at 
 org.apache.solr.cloud.AbstractFullDistribZkTestBase.checkShardConsistency(AbstractFullDistribZkTestBase.java:1118)
 at org.apache.solr.cloud.SyncSliceTest.doTest(SyncSliceTest.java:236)
 at 
 org.apache.solr.BaseDistributedSearchTestCase.testDistribSearch(BaseDistributedSearchTestCase.java:865)
 {code}






[jira] [Commented] (LUCENE-5812) NRTCachingDirectory should implement Accountable

2014-07-10 Thread Adrien Grand (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5812?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14057553#comment-14057553
 ] 

Adrien Grand commented on LUCENE-5812:
--

This change doesn't look right to me: by having a class implementing 
{{Accountable}}, I would expect {{ramBytesUsed()}} to return memory usage for 
the whole instance, but in that case we only return memory usage for the NRT 
cache. I think this is confusing if the directory implementation that you are 
wrapping is not purely disk-based (such as BlockDirectory).
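Adrien's concern can be illustrated with a minimal sketch (hypothetical types that shadow the real org.apache.lucene.util.Accountable, not Lucene's code): a caching wrapper that reports only its own cache under-counts whenever the wrapped directory also holds RAM, so whole-instance accounting should fold in the delegate when it is itself Accountable.

```java
// Sketch of the accounting concern (hypothetical types): counting only
// the wrapper's cache misses memory held by a RAM-backed delegate.
interface Accountable {
    long ramBytesUsed();
}

class RamBackedDir implements Accountable {
    public long ramBytesUsed() { return 512; } // the wrapped dir's own RAM
}

class CachingWrapper implements Accountable {
    private final Object delegate;
    private final long cacheBytes;
    CachingWrapper(Object delegate, long cacheBytes) {
        this.delegate = delegate;
        this.cacheBytes = cacheBytes;
    }
    // Counting only the cache misses the delegate's memory...
    long cacheOnly() { return cacheBytes; }
    // ...so whole-instance accounting includes the delegate when possible.
    public long ramBytesUsed() {
        long total = cacheBytes;
        if (delegate instanceof Accountable) {
            total += ((Accountable) delegate).ramBytesUsed();
        }
        return total;
    }
}

public class AccountableDemo {
    public static void main(String[] args) {
        CachingWrapper w = new CachingWrapper(new RamBackedDir(), 1024);
        System.out.println(w.cacheOnly());    // cache only: under-counts
        System.out.println(w.ramBytesUsed()); // cache + delegate
    }
}
```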

 NRTCachingDirectory should implement Accountable
 

 Key: LUCENE-5812
 URL: https://issues.apache.org/jira/browse/LUCENE-5812
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Simon Willnauer
Assignee: Simon Willnauer
Priority: Minor
 Attachments: LUCENE-5812.patch









[jira] [Commented] (LUCENE-5795) More Like This: ensures selection of best terms is indeed O(n)

2014-07-10 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5795?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14057563#comment-14057563
 ] 

ASF subversion and git services commented on LUCENE-5795:
-

Commit 1609474 from [~simonw] in branch 'dev/trunk'
[ https://svn.apache.org/r1609474 ]

LUCENE-5795:  MoreLikeThisQuery now only collects the top N terms
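One standard way to collect only the top N terms, which keeps selection effectively linear for a fixed small N, is a bounded min-heap: O(n log N) instead of the O(n log n) full sort. The sketch below is illustrative only and is not the actual LUCENE-5795 patch.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.PriorityQueue;

// Illustrative only (not the LUCENE-5795 patch): select the N
// best-scoring terms with a bounded min-heap in O(n log N).
public class TopTermsDemo {
    static List<String> topN(List<String> terms, List<Double> scores, int n) {
        // Min-heap ordered by score; the weakest kept term sits on top.
        PriorityQueue<double[]> heap =
            new PriorityQueue<>((a, b) -> Double.compare(a[0], b[0]));
        for (int i = 0; i < terms.size(); i++) {
            heap.offer(new double[]{scores.get(i), i});
            if (heap.size() > n) {
                heap.poll(); // evict the current weakest term
            }
        }
        List<String> result = new ArrayList<>();
        while (!heap.isEmpty()) {
            result.add(0, terms.get((int) heap.poll()[1])); // best term first
        }
        return result;
    }
}
```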

 More Like This: ensures selection of best terms is indeed O(n)
 --

 Key: LUCENE-5795
 URL: https://issues.apache.org/jira/browse/LUCENE-5795
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Alex Ksikes
Priority: Minor
 Attachments: LUCENE-5795, LUCENE-5795, LUCENE-5795, LUCENE-5795, 
 LUCENE-5795









[jira] [Commented] (SOLR-6235) SyncSliceTest fails on jenkins with no live servers available error

2014-07-10 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6235?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14057566#comment-14057566
 ] 

Mark Miller commented on SOLR-6235:
---

bq.  but why would all the cores have the name collection1?

It's probably historical. When we first were trying to make it easier to use 
SolrCloud and no Collections API existed, you could start up cores and have 
them be part of the same collection by giving them the same core name. This 
helped in making a demo startup that required minimal extra work. So, 
most of the original tests probably just followed suit.

As we get rid of predefined cores in SolrCloud and move to the collections API, 
that stuff will go away.

 SyncSliceTest fails on jenkins with no live servers available error
 ---

 Key: SOLR-6235
 URL: https://issues.apache.org/jira/browse/SOLR-6235
 Project: Solr
  Issue Type: Bug
  Components: SolrCloud, Tests
Reporter: Shalin Shekhar Mangar
Assignee: Shalin Shekhar Mangar
 Fix For: 4.10


 {code}
 1 tests failed.
 FAILED:  org.apache.solr.cloud.SyncSliceTest.testDistribSearch
 Error Message:
 No live SolrServers available to handle this request
 Stack Trace:
 org.apache.solr.client.solrj.SolrServerException: No live SolrServers 
 available to handle this request
 at 
 __randomizedtesting.SeedInfo.seed([685C57B3F25C854B:E9BAD9AB8503E577]:0)
 at 
 org.apache.solr.client.solrj.impl.LBHttpSolrServer.request(LBHttpSolrServer.java:317)
 at 
 org.apache.solr.client.solrj.impl.CloudSolrServer.request(CloudSolrServer.java:659)
 at 
 org.apache.solr.client.solrj.request.QueryRequest.process(QueryRequest.java:91)
 at org.apache.solr.client.solrj.SolrServer.query(SolrServer.java:301)
 at 
 org.apache.solr.cloud.AbstractFullDistribZkTestBase.checkShardConsistency(AbstractFullDistribZkTestBase.java:1149)
 at 
 org.apache.solr.cloud.AbstractFullDistribZkTestBase.checkShardConsistency(AbstractFullDistribZkTestBase.java:1118)
 at org.apache.solr.cloud.SyncSliceTest.doTest(SyncSliceTest.java:236)
 at 
 org.apache.solr.BaseDistributedSearchTestCase.testDistribSearch(BaseDistributedSearchTestCase.java:865)
 {code}






[jira] [Commented] (LUCENE-5812) NRTCachingDirectory should implement Accountable

2014-07-10 Thread Simon Willnauer (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5812?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14057569#comment-14057569
 ] 

Simon Willnauer commented on LUCENE-5812:
-

I guess we can make Directory implement that - do you want to open a new issue 
for this?

 NRTCachingDirectory should implement Accountable
 

 Key: LUCENE-5812
 URL: https://issues.apache.org/jira/browse/LUCENE-5812
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Simon Willnauer
Assignee: Simon Willnauer
Priority: Minor
 Attachments: LUCENE-5812.patch









[jira] [Created] (LUCENE-5813) Directory should implement Accountable

2014-07-10 Thread Adrien Grand (JIRA)
Adrien Grand created LUCENE-5813:


 Summary: Directory should implement Accountable
 Key: LUCENE-5813
 URL: https://issues.apache.org/jira/browse/LUCENE-5813
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Adrien Grand
Assignee: Adrien Grand
Priority: Minor
 Fix For: 4.10


Follow-up of LUCENE-5812.






[jira] [Commented] (LUCENE-5812) NRTCachingDirectory should implement Accountable

2014-07-10 Thread Adrien Grand (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5812?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14057573#comment-14057573
 ] 

Adrien Grand commented on LUCENE-5812:
--

Here it is: LUCENE-5813

 NRTCachingDirectory should implement Accountable
 

 Key: LUCENE-5812
 URL: https://issues.apache.org/jira/browse/LUCENE-5812
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Simon Willnauer
Assignee: Simon Willnauer
Priority: Minor
 Attachments: LUCENE-5812.patch









[jira] [Commented] (SOLR-6235) SyncSliceTest fails on jenkins with no live servers available error

2014-07-10 Thread Shalin Shekhar Mangar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6235?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14057613#comment-14057613
 ] 

Shalin Shekhar Mangar commented on SOLR-6235:
-

bq. but why would all the cores have the name collection1? Is that valid or 
an indication of a problem upstream from this code?

The reasons are what Mark said but it is a supported use-case and pretty 
common. Imagine stock solr running on 4 nodes - each node would have the same 
collection1 core name.

bq. Also, you raise a good point about all replicas thinking they are in 
leader-initiated recovery (LIR). In ElectionContext, when running 
shouldIBeLeader, the node will choose to not be the leader if it is in LIR. 
However, this could lead to no leader. My thinking there is the state is bad 
enough that we would need manual intervention to clear one of the LIR znodes to 
allow a replica to get past this point. But maybe we can do better here?

Good question. With careful use of minRf, the user can retry operations and 
maintain consistency even if we arbitrarily elect a leader in this case. But 
most people won't use minRf and don't care about consistency as much as 
availability. For them there should be a way to get out of this mess easily. We 
can have a collection property (boolean + timeout value) to force elect a 
leader even if all shards were in LIR. What do you think?

 SyncSliceTest fails on jenkins with no live servers available error
 ---

 Key: SOLR-6235
 URL: https://issues.apache.org/jira/browse/SOLR-6235
 Project: Solr
  Issue Type: Bug
  Components: SolrCloud, Tests
Reporter: Shalin Shekhar Mangar
Assignee: Shalin Shekhar Mangar
 Fix For: 4.10


 {code}
 1 tests failed.
 FAILED:  org.apache.solr.cloud.SyncSliceTest.testDistribSearch
 Error Message:
 No live SolrServers available to handle this request
 Stack Trace:
 org.apache.solr.client.solrj.SolrServerException: No live SolrServers 
 available to handle this request
 at 
 __randomizedtesting.SeedInfo.seed([685C57B3F25C854B:E9BAD9AB8503E577]:0)
 at 
 org.apache.solr.client.solrj.impl.LBHttpSolrServer.request(LBHttpSolrServer.java:317)
 at 
 org.apache.solr.client.solrj.impl.CloudSolrServer.request(CloudSolrServer.java:659)
 at 
 org.apache.solr.client.solrj.request.QueryRequest.process(QueryRequest.java:91)
 at org.apache.solr.client.solrj.SolrServer.query(SolrServer.java:301)
 at 
 org.apache.solr.cloud.AbstractFullDistribZkTestBase.checkShardConsistency(AbstractFullDistribZkTestBase.java:1149)
 at 
 org.apache.solr.cloud.AbstractFullDistribZkTestBase.checkShardConsistency(AbstractFullDistribZkTestBase.java:1118)
 at org.apache.solr.cloud.SyncSliceTest.doTest(SyncSliceTest.java:236)
 at 
 org.apache.solr.BaseDistributedSearchTestCase.testDistribSearch(BaseDistributedSearchTestCase.java:865)
 {code}



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6235) SyncSliceTest fails on jenkins with no live servers available error

2014-07-10 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6235?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14057620#comment-14057620
 ] 

Mark Miller commented on SOLR-6235:
---

bq. you could start up cores and have them be part of the same collection by 
giving them the same core name.

If you don't specify a collection name, it also defaults to the core name - 
hence collection1 for the core name.




[jira] [Commented] (SOLR-6235) SyncSliceTest fails on jenkins with no live servers available error

2014-07-10 Thread Shalin Shekhar Mangar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6235?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14057622#comment-14057622
 ] 

Shalin Shekhar Mangar commented on SOLR-6235:
-

bq.  We can have a collection property (boolean + timeout value) to force elect 
a leader even if all shards were in LIR

In case it wasn't clear, I think it should be true by default.




[jira] [Commented] (LUCENE-5795) More Like This: ensures selection of best terms is indeed O(n)

2014-07-10 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5795?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14057634#comment-14057634
 ] 

ASF subversion and git services commented on LUCENE-5795:
-

Commit 1609493 from [~simonw] in branch 'dev/branches/branch_4x'
[ https://svn.apache.org/r1609493 ]

LUCENE-5795:  MoreLikeThisQuery now only collects the top N terms

 More Like This: ensures selection of best terms is indeed O(n)
 --

 Key: LUCENE-5795
 URL: https://issues.apache.org/jira/browse/LUCENE-5795
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Alex Ksikes
Priority: Minor
 Fix For: 5.0, 4.10

 Attachments: LUCENE-5795, LUCENE-5795, LUCENE-5795, LUCENE-5795, 
 LUCENE-5795









[jira] [Commented] (SOLR-6235) SyncSliceTest fails on jenkins with no live servers available error

2014-07-10 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6235?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14057631#comment-14057631
 ] 

Mark Miller commented on SOLR-6235:
---

Great work tracking this down!

Indeed, it's a current limitation that you can have all nodes in a shard 
thinking they cannot be leader, even when all of them are available. This is 
not required by our distributed model at all; it's just a consequence of being 
overly restrictive in the initial implementation - if all known replicas are 
participating, you should be able to get a leader. So I'm not sure this case 
should be optional. But if not all known replicas are participating and you 
still want to force a leader, that should be optional - I think it should 
default to false though. I think the system should default to reasonable data 
safety in these cases.

How best to solve this, I'm not quite sure, but happy to look at a patch. How 
do you plan on monitoring and taking action? Via the Overseer? It seems tricky 
to do it from the replicas.




[jira] [Resolved] (LUCENE-5795) More Like This: ensures selection of best terms is indeed O(n)

2014-07-10 Thread Simon Willnauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5795?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Simon Willnauer resolved LUCENE-5795.
-

   Resolution: Fixed
Fix Version/s: 4.10
   5.0
 Assignee: Simon Willnauer

committed thanks Alex




[jira] [Commented] (SOLR-6235) SyncSliceTest fails on jenkins with no live servers available error

2014-07-10 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6235?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14057637#comment-14057637
 ] 

Mark Miller commented on SOLR-6235:
---

On another note, it almost seems we could do better than asking for a recovery 
on a failed commit.




[jira] [Commented] (SOLR-6235) SyncSliceTest fails on jenkins with no live servers available error

2014-07-10 Thread Timothy Potter (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6235?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14057639#comment-14057639
 ] 

Timothy Potter commented on SOLR-6235:
--

We have a similar issue where a replica attempting to be the leader needs to 
wait a while to see other replicas before declaring itself the leader, see 
ElectionContext around line 200:

{code}
int leaderVoteWait = cc.getZkController().getLeaderVoteWait();
if (!weAreReplacement) {
  waitForReplicasToComeUp(weAreReplacement, leaderVoteWait);
}
{code}

So one quick idea might be to have the code that checks if it's in LIR see if 
all replicas are in LIR and if so, wait out the leaderVoteWait period and check 
again. If all are still in LIR, then move on with becoming the leader (in the 
spirit of availability).
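That idea could be sketched roughly as follows (the class, method, and parameter names are made up for illustration, not actual ElectionContext APIs):

```java
import java.util.Set;
import java.util.function.Supplier;

class LirElectionSketch {
    /**
     * Sketch of the idea above, with hypothetical names: if every replica of
     * the shard is in LIR, wait out the leaderVoteWait period and re-check;
     * if all are *still* in LIR, proceed with becoming the leader (in the
     * spirit of availability).
     */
    static boolean proceedAsLeader(Set<String> replicas,
                                   Supplier<Set<String>> lirReplicas,
                                   long leaderVoteWaitMs) {
        if (!lirReplicas.get().containsAll(replicas)) {
            // At least one replica is not in LIR; normal election applies.
            return false;
        }
        try {
            Thread.sleep(leaderVoteWaitMs); // wait out the leaderVoteWait period
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            return false;
        }
        // Everyone still in LIR after the wait: move on with becoming leader.
        return lirReplicas.get().containsAll(replicas);
    }
}
```

The re-check after the wait is the key step: a transiently partitioned replica that recovers during leaderVoteWait takes the shard out of the all-in-LIR state, and the forced promotion never happens.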




[jira] [Commented] (SOLR-5473) Split clusterstate.json per collection and watch states selectively

2014-07-10 Thread Noble Paul (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5473?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14057641#comment-14057641
 ] 

Noble Paul commented on SOLR-5473:
--

 [~markrmil...@gmail.com] If you are fine with the watch stuff, I shall go 
ahead with a complete patch with the new approach outlined in the latest patch.

 Split clusterstate.json per collection and watch states selectively 
 

 Key: SOLR-5473
 URL: https://issues.apache.org/jira/browse/SOLR-5473
 Project: Solr
  Issue Type: Sub-task
  Components: SolrCloud
Reporter: Noble Paul
Assignee: Noble Paul
 Fix For: 5.0

 Attachments: SOLR-5473-74 .patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74_POC.patch, 
 SOLR-5473-configname-fix.patch, SOLR-5473.patch, SOLR-5473.patch, 
 SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, 
 SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, 
 SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, 
 SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, 
 SOLR-5473_undo.patch, ec2-23-20-119-52_solr.log, ec2-50-16-38-73_solr.log


 As defined in the parent issue, store the states of each collection under 
 /collections/collectionname/state.json node






[jira] [Commented] (SOLR-6235) SyncSliceTest fails on jenkins with no live servers available error

2014-07-10 Thread Shalin Shekhar Mangar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6235?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14057652#comment-14057652
 ] 

Shalin Shekhar Mangar commented on SOLR-6235:
-

bq.  But if not all known replicas are participating and you still want to 
force a leader, that should be optional - I think it should default to false 
though. I think the system should default to reasonable data safety in these 
cases.

That's the same case as the leaderVoteWait situation and we do go ahead after 
that amount of time even if all replicas aren't participating. Therefore, I 
think that we should handle it the same way. But to help people who care about 
consistency over availability, there should be a configurable property which 
bans this auto-promotion completely.

In any case, we should switch to coreNodeName instead of coreName and open an 
issue to improve the leader election part.




[jira] [Commented] (SOLR-5473) Split clusterstate.json per collection and watch states selectively

2014-07-10 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5473?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14057659#comment-14057659
 ] 

Mark Miller commented on SOLR-5473:
---

bq. ClusterState has no reference to ZkStateReader.

+1 on that part, but it doesn't seem to address much else, so I don't have too 
much to say.

{quote}
All changes will be visible realtime. The point is nodes NEVER cache any states 
(only SolrJ does, see SOLR-5474). Nodes watch collections where they are 
members. Other states are always fetched just in time from ZK.
{quote}

It sounds like what I said is an issue? You can easily be on a node in your 
cluster that doesn't have part of a collection. If you are using the admin UI 
to view your cluster and a node from another collection goes down, will that 
reflect on the Solr admin UI you are using that doesn't host part of that 
collection? I think this is a big deal if not, and nothing in the patch 
addresses these kinds of issues for users or developers. You are telling me all 
the behavior is the same? I don't believe that yet.




[jira] [Comment Edited] (SOLR-5473) Split clusterstate.json per collection and watch states selectively

2014-07-10 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5473?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14057659#comment-14057659
 ] 

Mark Miller edited comment on SOLR-5473 at 7/10/14 4:50 PM:


bq. ClusterState has no reference to ZkStateReader.

+1 on that part, but it doesn't seem to address much else, so I don't have too 
much to say.

{quote}
All changes will be visible realtime. The point is nodes NEVER cache any states 
(only SolrJ does, see SOLR-5474). Nodes watch collections where they are 
members. Other states are always fetched just in time from ZK.
{quote}

It sounds like what I said is an issue? You can easily be on a node in your 
cluster that doesn't have part of a collection. If you are using the admin UI 
to view your cluster and a node from another collection goes down, will that 
reflect on the solr admin UI you are using that doesn't host part of that 
collection? I think this is a big deal if not, and nothing in the patch 
addresses these kinds of issues for users or developers. You are telling me all 
the behavior is the same? I don't believe that yet.






[jira] [Commented] (SOLR-5473) Split clusterstate.json per collection and watch states selectively

2014-07-10 Thread Noble Paul (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5473?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14057663#comment-14057663
 ] 

Noble Paul commented on SOLR-5473:
--

bq.You can easily be on a node in your cluster that doesn't have part of a 
collection. If you are using the admin UI to view your cluster and a node from 
another collection goes down, will that reflect on the solr admin UI

For all collections that this node is not a part of, the states are fetched in 
real time from ZK. So whatever you see in the admin console will be the latest 
state. The full patch already takes care of this.




[jira] [Commented] (SOLR-5473) Split clusterstate.json per collection and watch states selectively

2014-07-10 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5473?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14057669#comment-14057669
 ] 

Mark Miller commented on SOLR-5473:
---

bq. For all collections where this node is not a part of, the states are 
fetched realtime from ZK.

Okay, great, that's off the table then.

So the *only* real trade-off is that a leader might not learn about a state 
change until a request fails, rather than being alerted when the state change 
happens?




[jira] [Commented] (SOLR-5473) Split clusterstate.json per collection and watch states selectively

2014-07-10 Thread Noble Paul (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5473?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14057677#comment-14057677
 ] 

Noble Paul commented on SOLR-5473:
--

bq.So the only real trade off is that a leader might not learn about a state 
change until a request fails rather than being alerted when the state change 
happens?

A leader is always a part of a collection and is always notified of state 
changes.

OTOH, SolrJ will not learn about state changes until it makes a request.




[jira] [Commented] (SOLR-5746) solr.xml parsing of str vs int vs bool is brittle; fails silently; expects odd type for shareSchema

2014-07-10 Thread Hoss Man (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5746?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14057675#comment-14057675
 ] 

Hoss Man commented on SOLR-5746:


Maciej:

At first glance this looks awesome -- I'll try to review it more closely in the 
next few days.

A few quick things I noticed:

* can you update your tests to use the framework's randomization when picking 
the boolean/numeric values that you put into the config strings -- instead of 
using hardcoded values? That way we reduce the risk of false positives due to 
the code using defaults instead of the value you intended (even if the 
defaults change).
* can you add some asserts regarding the error messages included in the 
SolrExceptions that are caught by the tests, so we verify that the user is 
getting a useful message?
* in the case where there might be multiple unexpected config keys found, can 
you add logging of each of the unexpected keys, and then make the exception 
thrown something like "Found 5 unexpected config options in solr.xml: foo, 
bar, baz, yak, zot"?
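Hoss's first suggestion (randomized rather than hardcoded config values) might look roughly like the minimal Java sketch below. The `buildSolrXml` helper and the option names are hypothetical illustrations, not the actual Solr test framework API:

```java
import java.util.Random;

public class RandomizedConfigSketch {
    // Hypothetical helper: embed randomized values in the config string so a
    // silently-applied default cannot be mistaken for the configured value.
    static String buildSolrXml(boolean shareSchema, int cacheSize) {
        return "<solr>"
             + "<bool name=\"shareSchema\">" + shareSchema + "</bool>"
             + "<int name=\"transientCacheSize\">" + cacheSize + "</int>"
             + "</solr>";
    }

    public static void main(String[] args) {
        Random random = new Random();       // the test framework would seed this
        boolean shareSchema = random.nextBoolean();
        int cacheSize = 1 + random.nextInt(1000);
        String xml = buildSolrXml(shareSchema, cacheSize);
        // Assert against the randomized values, not constants, so a default
        // sneaking in would fail the check for at least one random choice.
        if (!xml.contains(">" + shareSchema + "<")) throw new AssertionError(xml);
        if (!xml.contains(">" + cacheSize + "<")) throw new AssertionError(xml);
        System.out.println("randomized config round-tripped");
    }
}
```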




 solr.xml parsing of str vs int vs bool is brittle; fails silently; 
 expects odd type for shareSchema   
 --

 Key: SOLR-5746
 URL: https://issues.apache.org/jira/browse/SOLR-5746
 Project: Solr
  Issue Type: Bug
Affects Versions: 4.3, 4.4, 4.5, 4.6
Reporter: Hoss Man
 Attachments: SOLR-5746.patch


 A comment in the ref guide got me looking at ConfigSolrXml.java and noticing 
 that the parsing of solr.xml options here is very brittle and confusing.  In 
 particular:
 * if a boolean option foo is expected along the lines of {{<bool 
 name="foo">true</bool>}} it will silently ignore {{<str 
 name="foo">true</str>}}
 * likewise for an int option {{<int name="bar">32</int>}} vs {{<str 
 name="bar">32</str>}}
 ... this is inconsistent with the way solrconfig.xml is parsed.  In 
 solrconfig.xml, the xml nodes are parsed into a NamedList, and the above 
 options will work in either form, but an invalid value such as {{<bool 
 name="foo">NOT A BOOLEAN</bool>}} will generate an error earlier (when 
 parsing config) than {{<str name="foo">NOT A BOOLEAN</str>}} (attempt to 
 parse the string as a bool the first time the config value is needed)
 In addition, i notice this really confusing line...
 {code}
 propMap.put(CfgProp.SOLR_SHARESCHEMA, 
 doSub("solr/str[@name='shareSchema']"));
 {code}
 shareSchema is used internally as a boolean option, but as written the 
 parsing code will ignore it unless the user explicitly configures it as a 
 {{<str/>}}



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-6236) Need an optional fallback mechanism for selecting a leader when all replicas are in leader-initiated recovery.

2014-07-10 Thread Timothy Potter (JIRA)
Timothy Potter created SOLR-6236:


 Summary: Need an optional fallback mechanism for selecting a 
leader when all replicas are in leader-initiated recovery.
 Key: SOLR-6236
 URL: https://issues.apache.org/jira/browse/SOLR-6236
 Project: Solr
  Issue Type: Improvement
  Components: SolrCloud
Reporter: Timothy Potter


Offshoot from discussion in SOLR-6235, key points are:

Tim: In ElectionContext, when running shouldIBeLeader, the node will choose 
not to be the leader if it is in LIR. However, this could lead to no leader. 
My thinking there is that the state is bad enough that we would need manual 
intervention to clear one of the LIR znodes to allow a replica to get past 
this point. But maybe we can do better here?

Shalin: Good question. With careful use of minRf, the user can retry operations 
and maintain consistency even if we arbitrarily elect a leader in this case. 
But most people won't use minRf and don't care about consistency as much as 
availability. For them there should be a way to get out of this mess easily. We 
can have a collection property (boolean + timeout value) to force elect a 
leader even if all shards were in LIR. What do you think?

Mark: Indeed, it's a current limitation that you can have all nodes in a shard 
thinking they cannot be leader, even when all of them are available. This is 
not required by the distributed model we have at all, it's just a consequence 
of being over restrictive on the initial implementation - if all known replicas 
are participating, you should be able to get a leader. So I'm not sure if this 
case should be optional. But iff not all known replicas are participating and 
you still want to force a leader, that should be optional - I think it should 
default to false though. I think the system should default to reasonable data 
safety in these cases.
How best to solve this, I'm not quite sure, but happy to look at a patch. How 
do you plan on monitoring and taking action? Via the Overseer? It seems tricky 
to do it from the replicas.

Tim: We have a similar issue where a replica attempting to be the leader needs 
to wait a while to see other replicas before declaring itself the leader, see 
ElectionContext around line 200:
int leaderVoteWait = cc.getZkController().getLeaderVoteWait();
if (!weAreReplacement) {
  waitForReplicasToComeUp(weAreReplacement, leaderVoteWait);
}
So one quick idea might be to have the code that checks if it's in LIR see if 
all replicas are in LIR and if so, wait out the leaderVoteWait period and check 
again. If all are still in LIR, then move on with becoming the leader (in the 
spirit of availability).
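Tim's quick idea above could be reduced to a small decision helper, sketched below. `allReplicasInLir` and `waitOutVotePeriod` stand in for the real ZK check and the leaderVoteWait sleep; this is an illustrative sketch, not ElectionContext's actual API:

```java
import java.util.function.BooleanSupplier;

public class LirFallbackSketch {
    // Hypothetical decision helper: only force leadership when every replica
    // in the shard is in leader-initiated recovery (LIR), and only after
    // waiting out the leaderVoteWait period and re-checking.
    static boolean shouldBecomeLeader(BooleanSupplier allReplicasInLir,
                                      Runnable waitOutVotePeriod) {
        if (!allReplicasInLir.getAsBoolean()) {
            return false;                 // some replica is healthy; let it lead
        }
        waitOutVotePeriod.run();          // stand-in for waiting leaderVoteWait ms
        return allReplicasInLir.getAsBoolean(); // still all in LIR: proceed anyway
    }

    public static void main(String[] args) {
        // All replicas stuck in LIR before and after the wait: become leader
        // in the spirit of availability.
        System.out.println(shouldBecomeLeader(() -> true, () -> {}));
    }
}
```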

{quote}
But iff not all known replicas are participating and you still want to force a 
leader, that should be optional - I think it should default to false though. I 
think the system should default to reasonable data safety in these cases.
{quote}
Shalin: That's the same case as the leaderVoteWait situation and we do go ahead 
after that amount of time even if all replicas aren't participating. Therefore, 
I think that we should handle it the same way. But to help people who care 
about consistency over availability, there should be a configurable property 
which bans this auto-promotion completely.
In any case, we should switch to coreNodeName instead of coreName and open an 
issue to improve the leader election part.



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Assigned] (SOLR-6236) Need an optional fallback mechanism for selecting a leader when all replicas are in leader-initiated recovery.

2014-07-10 Thread Timothy Potter (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6236?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Timothy Potter reassigned SOLR-6236:


Assignee: Timothy Potter

 Need an optional fallback mechanism for selecting a leader when all replicas 
 are in leader-initiated recovery.
 --

 Key: SOLR-6236
 URL: https://issues.apache.org/jira/browse/SOLR-6236
 Project: Solr
  Issue Type: Improvement
  Components: SolrCloud
Reporter: Timothy Potter
Assignee: Timothy Potter

 Offshoot from discussion in SOLR-6235, key points are:
 Tim: In ElectionContext, when running shouldIBeLeader, the node will choose 
 to not be the leader if it is in LIR. However, this could lead to no leader. 
 My thinking there is the state is bad enough that we would need manual 
 intervention to clear one of the LIR znodes to allow a replica to get past 
 this point. But maybe we can do better here?
 Shalin: Good question. With careful use of minRf, the user can retry 
 operations and maintain consistency even if we arbitrarily elect a leader in 
 this case. But most people won't use minRf and don't care about consistency 
 as much as availability. For them there should be a way to get out of this 
 mess easily. We can have a collection property (boolean + timeout value) to 
 force elect a leader even if all shards were in LIR. What do you think?
 Mark: Indeed, it's a current limitation that you can have all nodes in a 
 shard thinking they cannot be leader, even when all of them are available. 
 This is not required by the distributed model we have at all, it's just a 
 consequence of being over restrictive on the initial implementation - if all 
 known replicas are participating, you should be able to get a leader. So I'm 
 not sure if this case should be optional. But iff not all known replicas are 
 participating and you still want to force a leader, that should be optional - 
 I think it should default to false though. I think the system should default 
 to reasonable data safety in these cases.
 How best to solve this, I'm not quite sure, but happy to look at a patch. How 
 do you plan on monitoring and taking action? Via the Overseer? It seems 
 tricky to do it from the replicas.
 Tim: We have a similar issue where a replica attempting to be the leader 
 needs to wait a while to see other replicas before declaring itself the 
 leader, see ElectionContext around line 200:
 int leaderVoteWait = cc.getZkController().getLeaderVoteWait();
 if (!weAreReplacement) {
   waitForReplicasToComeUp(weAreReplacement, leaderVoteWait);
 }
 So one quick idea might be to have the code that checks if it's in LIR see if 
 all replicas are in LIR and if so, wait out the leaderVoteWait period and 
 check again. If all are still in LIR, then move on with becoming the leader 
 (in the spirit of availability).
 {quote}
 But iff not all known replicas are participating and you still want to force 
 a leader, that should be optional - I think it should default to false 
 though. I think the system should default to reasonable data safety in these 
 cases.
 {quote}
 Shalin: That's the same case as the leaderVoteWait situation and we do go 
 ahead after that amount of time even if all replicas aren't participating. 
 Therefore, I think that we should handle it the same way. But to help people 
 who care about consistency over availability, there should be a configurable 
 property which bans this auto-promotion completely.
 In any case, we should switch to coreNodeName instead of coreName and open an 
 issue to improve the leader election part.



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6235) SyncSliceTest fails on jenkins with no live servers available error

2014-07-10 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6235?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14057682#comment-14057682
 ] 

Mark Miller commented on SOLR-6235:
---

bq. That's the same case as the leaderVoteWait situation and we do go ahead 
after that amount of time even if all replicas aren't participating. 

No, we don't - only if a new leader is elected does he try and do the wait. 
There are situations where that doesn't happen. This is like the issue where 
the leader loses its connection to zk after sending docs to replicas; the 
replicas then fail, the leader asks them to recover, and you end up with no 
leader for the shard. We did a kind of workaround for that specific issue, but 
I've seen it happen with other errors as well. You can certainly lose a whole 
shard while everyone is participating in the election - no one thinks they can 
be the leader because they all published recovery last.

There are lots and lots of improvements to be made to recovery still - it's a 
baby.

 SyncSliceTest fails on jenkins with no live servers available error
 ---

 Key: SOLR-6235
 URL: https://issues.apache.org/jira/browse/SOLR-6235
 Project: Solr
  Issue Type: Bug
  Components: SolrCloud, Tests
Reporter: Shalin Shekhar Mangar
Assignee: Shalin Shekhar Mangar
 Fix For: 4.10


 {code}
 1 tests failed.
 FAILED:  org.apache.solr.cloud.SyncSliceTest.testDistribSearch
 Error Message:
 No live SolrServers available to handle this request
 Stack Trace:
 org.apache.solr.client.solrj.SolrServerException: No live SolrServers 
 available to handle this request
 at 
 __randomizedtesting.SeedInfo.seed([685C57B3F25C854B:E9BAD9AB8503E577]:0)
 at 
 org.apache.solr.client.solrj.impl.LBHttpSolrServer.request(LBHttpSolrServer.java:317)
 at 
 org.apache.solr.client.solrj.impl.CloudSolrServer.request(CloudSolrServer.java:659)
 at 
 org.apache.solr.client.solrj.request.QueryRequest.process(QueryRequest.java:91)
 at org.apache.solr.client.solrj.SolrServer.query(SolrServer.java:301)
 at 
 org.apache.solr.cloud.AbstractFullDistribZkTestBase.checkShardConsistency(AbstractFullDistribZkTestBase.java:1149)
 at 
 org.apache.solr.cloud.AbstractFullDistribZkTestBase.checkShardConsistency(AbstractFullDistribZkTestBase.java:1118)
 at org.apache.solr.cloud.SyncSliceTest.doTest(SyncSliceTest.java:236)
 at 
 org.apache.solr.BaseDistributedSearchTestCase.testDistribSearch(BaseDistributedSearchTestCase.java:865)
 {code}



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6235) SyncSliceTest fails on jenkins with no live servers available error

2014-07-10 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6235?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14057686#comment-14057686
 ] 

Mark Miller commented on SOLR-6235:
---

bq. only if a new leader is elected does he try and do the wait.

Sorry - that line is confusing - the issue is that waiting for everyone doesn't 
matter. They might all be participating anyway, the wait is irrelevant. The 
issue comes after that code, when no one will become the leader.

 SyncSliceTest fails on jenkins with no live servers available error
 ---

 Key: SOLR-6235
 URL: https://issues.apache.org/jira/browse/SOLR-6235
 Project: Solr
  Issue Type: Bug
  Components: SolrCloud, Tests
Reporter: Shalin Shekhar Mangar
Assignee: Shalin Shekhar Mangar
 Fix For: 4.10


 {code}
 1 tests failed.
 FAILED:  org.apache.solr.cloud.SyncSliceTest.testDistribSearch
 Error Message:
 No live SolrServers available to handle this request
 Stack Trace:
 org.apache.solr.client.solrj.SolrServerException: No live SolrServers 
 available to handle this request
 at 
 __randomizedtesting.SeedInfo.seed([685C57B3F25C854B:E9BAD9AB8503E577]:0)
 at 
 org.apache.solr.client.solrj.impl.LBHttpSolrServer.request(LBHttpSolrServer.java:317)
 at 
 org.apache.solr.client.solrj.impl.CloudSolrServer.request(CloudSolrServer.java:659)
 at 
 org.apache.solr.client.solrj.request.QueryRequest.process(QueryRequest.java:91)
 at org.apache.solr.client.solrj.SolrServer.query(SolrServer.java:301)
 at 
 org.apache.solr.cloud.AbstractFullDistribZkTestBase.checkShardConsistency(AbstractFullDistribZkTestBase.java:1149)
 at 
 org.apache.solr.cloud.AbstractFullDistribZkTestBase.checkShardConsistency(AbstractFullDistribZkTestBase.java:1118)
 at org.apache.solr.cloud.SyncSliceTest.doTest(SyncSliceTest.java:236)
 at 
 org.apache.solr.BaseDistributedSearchTestCase.testDistribSearch(BaseDistributedSearchTestCase.java:865)
 {code}



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6235) SyncSliceTest fails on jenkins with no live servers available error

2014-07-10 Thread Timothy Potter (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6235?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14057691#comment-14057691
 ] 

Timothy Potter commented on SOLR-6235:
--

I opened this ticket SOLR-6236 for the leader election issue we're discussing, 
but the title might not be quite accurate ;-)

 SyncSliceTest fails on jenkins with no live servers available error
 ---

 Key: SOLR-6235
 URL: https://issues.apache.org/jira/browse/SOLR-6235
 Project: Solr
  Issue Type: Bug
  Components: SolrCloud, Tests
Reporter: Shalin Shekhar Mangar
Assignee: Shalin Shekhar Mangar
 Fix For: 4.10


 {code}
 1 tests failed.
 FAILED:  org.apache.solr.cloud.SyncSliceTest.testDistribSearch
 Error Message:
 No live SolrServers available to handle this request
 Stack Trace:
 org.apache.solr.client.solrj.SolrServerException: No live SolrServers 
 available to handle this request
 at 
 __randomizedtesting.SeedInfo.seed([685C57B3F25C854B:E9BAD9AB8503E577]:0)
 at 
 org.apache.solr.client.solrj.impl.LBHttpSolrServer.request(LBHttpSolrServer.java:317)
 at 
 org.apache.solr.client.solrj.impl.CloudSolrServer.request(CloudSolrServer.java:659)
 at 
 org.apache.solr.client.solrj.request.QueryRequest.process(QueryRequest.java:91)
 at org.apache.solr.client.solrj.SolrServer.query(SolrServer.java:301)
 at 
 org.apache.solr.cloud.AbstractFullDistribZkTestBase.checkShardConsistency(AbstractFullDistribZkTestBase.java:1149)
 at 
 org.apache.solr.cloud.AbstractFullDistribZkTestBase.checkShardConsistency(AbstractFullDistribZkTestBase.java:1118)
 at org.apache.solr.cloud.SyncSliceTest.doTest(SyncSliceTest.java:236)
 at 
 org.apache.solr.BaseDistributedSearchTestCase.testDistribSearch(BaseDistributedSearchTestCase.java:865)
 {code}



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6235) SyncSliceTest fails on jenkins with no live servers available error

2014-07-10 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6235?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14057705#comment-14057705
 ] 

Mark Miller commented on SOLR-6235:
---

bq.  there should be a configurable property which bans this auto-promotion 
completely.

That's why I'm drawing the distinction between everyone participating and not 
everyone participating.

Sometimes you can lose a shard and it's because the leader-zk connection 
blinks. In this case, if you have all the replicas in a shard, it's safe to 
force an election anyway.

Sometimes you lose a shard and you don't have all the replicas - in that case, 
it should be optional to force an election and default to false.

 SyncSliceTest fails on jenkins with no live servers available error
 ---

 Key: SOLR-6235
 URL: https://issues.apache.org/jira/browse/SOLR-6235
 Project: Solr
  Issue Type: Bug
  Components: SolrCloud, Tests
Reporter: Shalin Shekhar Mangar
Assignee: Shalin Shekhar Mangar
 Fix For: 4.10


 {code}
 1 tests failed.
 FAILED:  org.apache.solr.cloud.SyncSliceTest.testDistribSearch
 Error Message:
 No live SolrServers available to handle this request
 Stack Trace:
 org.apache.solr.client.solrj.SolrServerException: No live SolrServers 
 available to handle this request
 at 
 __randomizedtesting.SeedInfo.seed([685C57B3F25C854B:E9BAD9AB8503E577]:0)
 at 
 org.apache.solr.client.solrj.impl.LBHttpSolrServer.request(LBHttpSolrServer.java:317)
 at 
 org.apache.solr.client.solrj.impl.CloudSolrServer.request(CloudSolrServer.java:659)
 at 
 org.apache.solr.client.solrj.request.QueryRequest.process(QueryRequest.java:91)
 at org.apache.solr.client.solrj.SolrServer.query(SolrServer.java:301)
 at 
 org.apache.solr.cloud.AbstractFullDistribZkTestBase.checkShardConsistency(AbstractFullDistribZkTestBase.java:1149)
 at 
 org.apache.solr.cloud.AbstractFullDistribZkTestBase.checkShardConsistency(AbstractFullDistribZkTestBase.java:1118)
 at org.apache.solr.cloud.SyncSliceTest.doTest(SyncSliceTest.java:236)
 at 
 org.apache.solr.BaseDistributedSearchTestCase.testDistribSearch(BaseDistributedSearchTestCase.java:865)
 {code}



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-5473) Split clusterstate.json per collection and watch states selectively

2014-07-10 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5473?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14057710#comment-14057710
 ] 

Mark Miller commented on SOLR-5473:
---

Okay, now we are getting somewhere.

So the important sum of these changes can be listed as:

* Nodes talk to zk directly when dealing with collections they don't host, and 
do not watch those collections in zk.
* SolrJ does not update its cached cluster state unless a request to a node 
fails?

How much does this complicate things for those that try to write a cloud-aware 
client in other languages? 

 Split clusterstate.json per collection and watch states selectively 
 

 Key: SOLR-5473
 URL: https://issues.apache.org/jira/browse/SOLR-5473
 Project: Solr
  Issue Type: Sub-task
  Components: SolrCloud
Reporter: Noble Paul
Assignee: Noble Paul
 Fix For: 5.0

 Attachments: SOLR-5473-74 .patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74_POC.patch, 
 SOLR-5473-configname-fix.patch, SOLR-5473.patch, SOLR-5473.patch, 
 SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, 
 SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, 
 SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, 
 SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, 
 SOLR-5473_undo.patch, ec2-23-20-119-52_solr.log, ec2-50-16-38-73_solr.log


 As defined in the parent issue, store the states of each collection under 
 /collections/collectionname/state.json node



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-5473) Split clusterstate.json per collection and watch states selectively

2014-07-10 Thread Noble Paul (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5473?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14057721#comment-14057721
 ] 

Noble Paul commented on SOLR-5473:
--

bq. How much does this complicate things for those that try and write a cloud 
aware client in other languages?

This is just an optimization in CloudSolrServer to minimize watches. The 
caching is done inside the CloudSolrServer class. The server is not aware of 
the cache.

The only thing the server does is handle the _stateVer_ param (which contains 
the collection name and its znode version as held in the cache) sent with the 
request. If the assumed collection name and version are wrong, the server 
throws an error (STALE_STATE). Any other-language client that wants to do an 
intelligent watch must send this extra request param. Others can choose a 
scheme of their own (like watching all states) 
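The _stateVer_ handshake described above could be sketched roughly like this; the class and method names, and the exception shape, are hypothetical stand-ins for the real server-side check and SolrJ cache, not the actual API:

```java
public class StateVerSketch {
    // Hypothetical stand-in for the STALE_STATE error the server returns when
    // the client's cached znode version no longer matches.
    static class StaleStateException extends RuntimeException {
        final int currentVersion;
        StaleStateException(int v) { currentVersion = v; }
    }

    // Server side: reject a request whose _stateVer_ does not match.
    static String serve(int serverVersion, int clientStateVer) {
        if (clientStateVer != serverVersion) {
            throw new StaleStateException(serverVersion);
        }
        return "ok";
    }

    // Client side: on STALE_STATE, refresh the cached version (i.e. re-read
    // the collection's state.json) and retry once.
    static String requestWithRetry(int serverVersion, int cachedVersion) {
        try {
            return serve(serverVersion, cachedVersion);
        } catch (StaleStateException e) {
            return serve(serverVersion, e.currentVersion);
        }
    }

    public static void main(String[] args) {
        // Stale cache (3) vs current server state (7): retried transparently.
        System.out.println(requestWithRetry(7, 3));
    }
}
```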



 Split clusterstate.json per collection and watch states selectively 
 

 Key: SOLR-5473
 URL: https://issues.apache.org/jira/browse/SOLR-5473
 Project: Solr
  Issue Type: Sub-task
  Components: SolrCloud
Reporter: Noble Paul
Assignee: Noble Paul
 Fix For: 5.0

 Attachments: SOLR-5473-74 .patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74_POC.patch, 
 SOLR-5473-configname-fix.patch, SOLR-5473.patch, SOLR-5473.patch, 
 SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, 
 SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, 
 SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, 
 SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, 
 SOLR-5473_undo.patch, ec2-23-20-119-52_solr.log, ec2-50-16-38-73_solr.log


 As defined in the parent issue, store the states of each collection under 
 /collections/collectionname/state.json node



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-5473) Split clusterstate.json per collection and watch states selectively

2014-07-10 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5473?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14057723#comment-14057723
 ] 

Mark Miller commented on SOLR-5473:
---

Anyway, regardless, if all that holds, I'm sold on that as a change then.

Let's talk about the code.

Beyond fixing the ZkStateReader in ClusterState issue, I think we need to work 
more on making it clear what the two modes are in the code, which APIs belong 
to which mode, what our plans are, and how developers should deal with all 
this. Someone should be able to get up to speed on all this very quickly so 
they don't break things, tie together APIs that shouldn't be, build much on 
the old mode, etc. Let's make our intentions clear in the code, through the 
APIs we pick and the comments.



 Split clusterstate.json per collection and watch states selectively 
 

 Key: SOLR-5473
 URL: https://issues.apache.org/jira/browse/SOLR-5473
 Project: Solr
  Issue Type: Sub-task
  Components: SolrCloud
Reporter: Noble Paul
Assignee: Noble Paul
 Fix For: 5.0

 Attachments: SOLR-5473-74 .patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74_POC.patch, 
 SOLR-5473-configname-fix.patch, SOLR-5473.patch, SOLR-5473.patch, 
 SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, 
 SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, 
 SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, 
 SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, 
 SOLR-5473_undo.patch, ec2-23-20-119-52_solr.log, ec2-50-16-38-73_solr.log


 As defined in the parent issue, store the states of each collection under 
 /collections/collectionname/state.json node



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-5473) Split clusterstate.json per collection and watch states selectively

2014-07-10 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5473?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14057731#comment-14057731
 ] 

Mark Miller commented on SOLR-5473:
---

Why don't you finish the path of removing ZkStateReader and then let me take a 
pass of suggestions.

 Split clusterstate.json per collection and watch states selectively 
 

 Key: SOLR-5473
 URL: https://issues.apache.org/jira/browse/SOLR-5473
 Project: Solr
  Issue Type: Sub-task
  Components: SolrCloud
Reporter: Noble Paul
Assignee: Noble Paul
 Fix For: 5.0

 Attachments: SOLR-5473-74 .patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74_POC.patch, 
 SOLR-5473-configname-fix.patch, SOLR-5473.patch, SOLR-5473.patch, 
 SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, 
 SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, 
 SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, 
 SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, 
 SOLR-5473_undo.patch, ec2-23-20-119-52_solr.log, ec2-50-16-38-73_solr.log


 As defined in the parent issue, store the states of each collection under 
 /collections/collectionname/state.json node



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-5473) Split clusterstate.json per collection and watch states selectively

2014-07-10 Thread Noble Paul (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5473?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14057734#comment-14057734
 ] 

Noble Paul commented on SOLR-5473:
--

bq.Why don't you finish the path of removing ZkStateReader and then let me take 
a pass of suggestions.

Yes, this is the best way to iron out the internal APIs. I'll post a full patch 
shortly based on the new approach. 



 Split clusterstate.json per collection and watch states selectively 
 

 Key: SOLR-5473
 URL: https://issues.apache.org/jira/browse/SOLR-5473
 Project: Solr
  Issue Type: Sub-task
  Components: SolrCloud
Reporter: Noble Paul
Assignee: Noble Paul
 Fix For: 5.0

 Attachments: SOLR-5473-74 .patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74_POC.patch, 
 SOLR-5473-configname-fix.patch, SOLR-5473.patch, SOLR-5473.patch, 
 SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, 
 SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, 
 SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, 
 SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, 
 SOLR-5473_undo.patch, ec2-23-20-119-52_solr.log, ec2-50-16-38-73_solr.log


 As defined in the parent issue, store the states of each collection under 
 /collections/collectionname/state.json node






Re: Using a patch review tool for Lucene / Solr development.

2014-07-10 Thread david.w.smi...@gmail.com
On Wed, Jul 9, 2014 at 1:34 PM, Mark Miller markrmil...@gmail.com wrote:

 A few months ago, I filed an INFRA JIRA issue to add the Lucene project to
 review board (https://reviews.apache.org) and it was just resolved (
 https://issues.apache.org/jira/browse/INFRA-7630).


Awesome.


 I’m not the biggest fan of review board, but it’s well supported by Apache
 and is sufficient at the key points for a patch review tool I think.


Have you considered using GitHub instead?  I’m using that with my GSOC
student, Varun Shenoy, on his fork of the lucene-solr mirror on GitHub.
 That is, he has a branch and I’m commenting on his commits.  No need to
upload diff files or have a login to yet another system (doesn’t everyone
have a GitHub account by now?).


 I don’t think we should make this mandatory; it should just be
 treated as an additional, optional resource, but I wanted to alert people
 to its existence and perhaps start a discussion around a few points.

 * I’ve been sold more and more over time of the advantages of a review
 tool, especially for large patches. The ability to comment at code points
 and easily view differences between successive patches is super useful.

 * I don’t know how I feel about moving comments and discussion for patches
 out of JIRA and into review board. I’m not sure what kind of integration
 there is.

 I’m using https://issues.apache.org/jira/browse/SOLR-5656 as a first
 trial issue: https://reviews.apache.org/r/23371/


I think it’s fine that line-number-oriented discussion isn’t in JIRA so
long as the relevant JIRA issue links to the discussion so people know
where to look.  It would be nice if the high-level discussion could be kept
in JIRA, which is more searchable (e.g. McCandless’s jirasearch) and
observable by interested parties.

~ David


[JENKINS-MAVEN] Lucene-Solr-Maven-trunk #1165: POMs out of sync

2014-07-10 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Maven-trunk/1165/

2 tests failed.
REGRESSION:  org.apache.solr.cloud.HttpPartitionTest.testDistribSearch

Error Message:
No registered leader was found after waiting for 6ms , collection: 
c8n_1x3_lf slice: shard1

Stack Trace:
org.apache.solr.common.SolrException: No registered leader was found after 
waiting for 6ms , collection: c8n_1x3_lf slice: shard1
at 
__randomizedtesting.SeedInfo.seed([A77B944FA57F66A0:269D1A57D220069C]:0)
at 
org.apache.solr.common.cloud.ZkStateReader.getLeaderRetry(ZkStateReader.java:567)
at 
org.apache.solr.cloud.HttpPartitionTest.testRf3WithLeaderFailover(HttpPartitionTest.java:349)
at 
org.apache.solr.cloud.HttpPartitionTest.doTest(HttpPartitionTest.java:148)


FAILED:  org.apache.solr.cloud.MultiThreadedOCPTest.testDistribSearch

Error Message:
Task 3002 did not complete, final state: running

Stack Trace:
java.lang.AssertionError: Task 3002 did not complete, final state: running
at 
__randomizedtesting.SeedInfo.seed([4FBFA36AFA64004E:CE592D728D3B6072]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at 
org.apache.solr.cloud.MultiThreadedOCPTest.testDeduplicationOfSubmittedTasks(MultiThreadedOCPTest.java:162)
at 
org.apache.solr.cloud.MultiThreadedOCPTest.doTest(MultiThreadedOCPTest.java:71)




Build Log:
[...truncated 54911 lines...]
BUILD FAILED
/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-Maven-trunk/build.xml:490: 
The following error occurred while executing this line:
/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-Maven-trunk/build.xml:182: 
The following error occurred while executing this line:
/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-Maven-trunk/extra-targets.xml:77:
 Java returned: 1

Total time: 164 minutes 58 seconds
Build step 'Invoke Ant' marked build as failure
Recording test results
Email was triggered for: Failure
Sending email for trigger: Failure




[jira] [Commented] (SOLR-5473) Split clusterstate.json per collection and watch states selectively

2014-07-10 Thread Ramkumar Aiyengar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5473?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14057937#comment-14057937
 ] 

Ramkumar Aiyengar commented on SOLR-5473:
-

What about nodes which just act as search federators (i.e. host no data but just 
distribute searches to shards and collect results)? There should at least 
be an option to listen only to collections of choice so that you don't have to 
fetch their state for each request.
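The selective-watch idea raised above can be illustrated with a small, hypothetical sketch (the class name `SelectiveStateCache` and its methods are invented for illustration and are not Solr's `ZkStateReader`): with per-collection state.json nodes, a node keeps watches only on the collections it cares about, serving those from cache, and fetches everything else on demand.

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;
import java.util.function.Function;

/** Illustrative sketch only: selectively "watch" some collections' state,
 *  fetch the rest on demand. Not Solr code. */
public class SelectiveStateCache {
    private final Set<String> watched = new HashSet<>();
    private final Map<String, String> cached = new HashMap<>();
    private final Function<String, String> fetcher; // stands in for a ZK read

    public SelectiveStateCache(Function<String, String> fetcher) {
        this.fetcher = fetcher;
    }

    /** Watch a collection: keep its state cached and refreshed. */
    public void watch(String collection) {
        watched.add(collection);
        cached.put(collection, fetcher.apply(collection));
    }

    /** Simulated watch event: refresh only collections we actually watch. */
    public void onStateChanged(String collection) {
        if (watched.contains(collection)) {
            cached.put(collection, fetcher.apply(collection));
        }
    }

    /** Watched collections come from cache; others cost a fetch per call. */
    public String getState(String collection) {
        String s = cached.get(collection);
        return s != null ? s : fetcher.apply(collection);
    }

    public static void main(String[] args) {
        SelectiveStateCache cache = new SelectiveStateCache(name -> "state-of-" + name);
        cache.watch("collection1");
        System.out.println(cache.getState("collection1") + " " + cache.getState("other"));
    }
}
```

A federator-style node would simply never call `watch()` for collections it only occasionally queries, trading a per-request fetch for fewer standing watches.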




[jira] [Commented] (SOLR-3029) Poor json formatting of spelling collation info

2014-07-10 Thread Nalini Kartha (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-3029?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14057946#comment-14057946
 ] 

Nalini Kartha commented on SOLR-3029:
-

Fixing in 5.0 sounds good to me too. I guess the next step is for one of the 
committers to review and then check in the changes? Let me know if there's 
anything in the patch that needs fixing or improving.

 Poor json formatting of spelling collation info
 ---

 Key: SOLR-3029
 URL: https://issues.apache.org/jira/browse/SOLR-3029
 Project: Solr
  Issue Type: Bug
  Components: spellchecker
Affects Versions: 4.0-ALPHA
Reporter: Antony Stubbs
 Fix For: 4.9, 5.0

 Attachments: SOLR-3029.patch, SOLR-3029.patch


 {noformat}
 spellcheck: {
   suggestions: [
     "dalllas",
     {
       ...snip...
       {
         "word": "canallas",
         "freq": 1
       }
     },
     "correctlySpelled",
     false,
     "collation",
     "dallas"
   ]
 }
 {noformat}
 The correctlySpelled and collation key/values are stored as consecutive 
 elements in an array - quite odd. Is there a reason it's not a key/value map 
 like most things?
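For context: Solr's NamedList response structures can be rendered in JSON as a flat [key, value, key, value, ...] array, and the json.nl request parameter (e.g. json.nl=map) changes that rendering. As a purely client-side illustration (the class name `FlatNamedList` is invented here), such a flat list can be folded back into a map:

```java
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

/** Client-side workaround sketch: fold a flat
 *  [key, value, key, value, ...] list back into an ordered map. */
public class FlatNamedList {
    public static Map<String, Object> toMap(List<Object> flat) {
        Map<String, Object> out = new LinkedHashMap<>();
        // Pairs are consecutive elements: even index = key, odd index = value.
        for (int i = 0; i + 1 < flat.size(); i += 2) {
            out.put(String.valueOf(flat.get(i)), flat.get(i + 1));
        }
        return out;
    }

    public static void main(String[] args) {
        List<Object> flat = List.of("correctlySpelled", false, "collation", "dallas");
        Map<String, Object> m = toMap(flat);
        System.out.println(m.get("correctlySpelled") + " " + m.get("collation"));
    }
}
```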






[jira] [Created] (SOLR-6237) An option to have only leaders write and replicas read when using a shared file system with SolrCloud.

2014-07-10 Thread Mark Miller (JIRA)
Mark Miller created SOLR-6237:
-

 Summary: An option to have only leaders write and replicas read 
when using a shared file system with SolrCloud.
 Key: SOLR-6237
 URL: https://issues.apache.org/jira/browse/SOLR-6237
 Project: Solr
  Issue Type: New Feature
  Components: hdfs, SolrCloud
Reporter: Mark Miller
Assignee: Mark Miller









[jira] [Commented] (SOLR-5656) Add autoAddReplicas feature for shared file systems.

2014-07-10 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5656?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14057955#comment-14057955
 ] 

Mark Miller commented on SOLR-5656:
---

bq. It seems this may be the case but I just want to confirm it: will this 
issue obviate the pointless replication (duplication) of data on a shared file 
system between replicas?

This is just another option. It works with or without replicas for a 
shard. There are trade-offs in failover transparency, time, and query 
throughput depending on what you choose.

Another option I'm about to start pursuing is SOLR-6237, An option to have only 
leaders write and replicas read when using a shared file system with SolrCloud.

I don't yet fully know what trade-offs may come up in that.

 Add autoAddReplicas feature for shared file systems.
 

 Key: SOLR-5656
 URL: https://issues.apache.org/jira/browse/SOLR-5656
 Project: Solr
  Issue Type: New Feature
Reporter: Mark Miller
Assignee: Mark Miller
 Attachments: SOLR-5656.patch, SOLR-5656.patch, SOLR-5656.patch, 
 SOLR-5656.patch


 When using HDFS, the Overseer should have the ability to reassign the cores 
 from failed nodes to running nodes.
 Given that the index and transaction logs are in hdfs, it's simple for 
 surviving hardware to take over serving cores for failed hardware.
 There are some tricky issues around having the Overseer handle this for you, 
 but seems a simple first pass is not too difficult.
 This will add another alternative to replicating both with hdfs and solr.
 It shouldn't be specific to hdfs, and would be an option for any shared file 
 system Solr supports.






[jira] [Commented] (SOLR-6235) SyncSliceTest fails on jenkins with no live servers available error

2014-07-10 Thread Jessica Cheng (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6235?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14057969#comment-14057969
 ] 

Jessica Cheng commented on SOLR-6235:
-

Obviously, this is not to say that the one LIR core_node3 wrote to 
'collection1', which set everyone else in LIR, is not a problem.

 SyncSliceTest fails on jenkins with no live servers available error
 ---

 Key: SOLR-6235
 URL: https://issues.apache.org/jira/browse/SOLR-6235
 Project: Solr
  Issue Type: Bug
  Components: SolrCloud, Tests
Reporter: Shalin Shekhar Mangar
Assignee: Shalin Shekhar Mangar
 Fix For: 4.10


 {code}
 1 tests failed.
 FAILED:  org.apache.solr.cloud.SyncSliceTest.testDistribSearch
 Error Message:
 No live SolrServers available to handle this request
 Stack Trace:
 org.apache.solr.client.solrj.SolrServerException: No live SolrServers 
 available to handle this request
 at 
 __randomizedtesting.SeedInfo.seed([685C57B3F25C854B:E9BAD9AB8503E577]:0)
 at 
 org.apache.solr.client.solrj.impl.LBHttpSolrServer.request(LBHttpSolrServer.java:317)
 at 
 org.apache.solr.client.solrj.impl.CloudSolrServer.request(CloudSolrServer.java:659)
 at 
 org.apache.solr.client.solrj.request.QueryRequest.process(QueryRequest.java:91)
 at org.apache.solr.client.solrj.SolrServer.query(SolrServer.java:301)
 at 
 org.apache.solr.cloud.AbstractFullDistribZkTestBase.checkShardConsistency(AbstractFullDistribZkTestBase.java:1149)
 at 
 org.apache.solr.cloud.AbstractFullDistribZkTestBase.checkShardConsistency(AbstractFullDistribZkTestBase.java:1118)
 at org.apache.solr.cloud.SyncSliceTest.doTest(SyncSliceTest.java:236)
 at 
 org.apache.solr.BaseDistributedSearchTestCase.testDistribSearch(BaseDistributedSearchTestCase.java:865)
 {code}






[jira] [Commented] (SOLR-6235) SyncSliceTest fails on jenkins with no live servers available error

2014-07-10 Thread Jessica Cheng (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6235?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14057964#comment-14057964
 ] 

Jessica Cheng commented on SOLR-6235:
-

Why is core_node3 able to put core_node2 (the old leader) into LIR when 
core_node3 has not been elected a leader yet? (Actually, why is core_node3 
processing any update at all when it's not a leader?)

That's really more of a problem than the fact that the one LIR core_node3 wrote 
to collection1 set everyone else in LIR, because what if really only 
core_node2 is up-to-date and it just went through a blip and came back? In this 
case the only right choice for leader is core_node2.




Re: Using a patch review tool for Lucene / Solr development.

2014-07-10 Thread Mark Miller
bq. Have you considered using GitHub instead?

I have, and I'm even less comfortable spreading stuff that is not in JIRA
there, since it's not officially Apache sanctioned. I wish Apache would
make a deal with GitHub to do first-class GitHub for us, as they do for many
companies, but Apache seems to have a list of arcane reasons why that's
impossible. But without that, I want to keep important history on official
Apache systems.

I just gave up using git with svn for like the 10th time as well. I always
forget and try again, but it's not worth it. Until Git is first class for
the project, I'll keep coming back to SVN only.





-- 
- Mark

http://about.me/markrmiller


[jira] [Commented] (SOLR-6235) SyncSliceTest fails on jenkins with no live servers available error

2014-07-10 Thread Shalin Shekhar Mangar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6235?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14058001#comment-14058001
 ] 

Shalin Shekhar Mangar commented on SOLR-6235:
-

bq. Why is core_node3 able to put core_node2 (the old leader) into LIR when 
core_node3 has not been elected a leader yet? (Actually, why is core_node3 
processing any update at all when it's not a leader?)

Yeah, the discussion went in another direction but this is something I found 
odd and I'm gonna find out why that happened.




[jira] [Commented] (SOLR-6235) SyncSliceTest fails on jenkins with no live servers available error

2014-07-10 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6235?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14058089#comment-14058089
 ] 

Mark Miller commented on SOLR-6235:
---

{quote}Why is core_node3 able to put core_node2 (the old leader) into LIR when 
core_node3 has not been elected a leader yet? (Actually, why is core_node3 
processing any update at all when it's not a leader?){quote}

I have not followed the sequences that closely, but I would guess that it's 
because of how we implemented distrib commit.




[jira] [Commented] (SOLR-6235) SyncSliceTest fails on jenkins with no live servers available error

2014-07-10 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6235?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14058094#comment-14058094
 ] 

Mark Miller commented on SOLR-6235:
---

That is part of my motivation for saying:

bq. On another note, it almost seems we can do better than ask for a recovery 
on a failed commit.

The current method was kind of just a least-effort impl, so there might be some 
other things we can do as well. If I remember right, whoever gets the commit 
just broadcasts it out to everyone over HTTP, including itself.




[jira] [Commented] (SOLR-6235) SyncSliceTest fails on jenkins with no live servers available error

2014-07-10 Thread Jessica Cheng (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6235?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14058096#comment-14058096
 ] 

Jessica Cheng commented on SOLR-6235:
-

{quote}I would guess that it's because of how we implemented distrib 
commit.{quote}

As in, anyone (non-leader) can distribute commits to everyone else? Is that why 
you commented earlier:

{quote}On another note, it almost seems we can do better than ask for a 
recovery on a failed commit.{quote}

If so, that totally makes sense.




[jira] [Commented] (SOLR-6216) Better faceting for multiple intervals on DV fields

2014-07-10 Thread Tomás Fernández Löbbe (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6216?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14058134#comment-14058134
 ] 

Tomás Fernández Löbbe commented on SOLR-6216:
-

Anyone else that can take a look at the patch?

 Better faceting for multiple intervals on DV fields
 ---

 Key: SOLR-6216
 URL: https://issues.apache.org/jira/browse/SOLR-6216
 Project: Solr
  Issue Type: Improvement
Reporter: Tomás Fernández Löbbe
 Attachments: SOLR-6216.patch, SOLR-6216.patch, SOLR-6216.patch, 
 SOLR-6216.patch, SOLR-6216.patch, SOLR-6216.patch


 There are two ways to have faceting on values ranges in Solr right now: 
 “Range Faceting” and “Query Faceting” (doing range queries). They both end up 
 doing something similar:
 {code:java}
 searcher.numDocs(rangeQ , docs)
 {code}
 The good thing about this implementation is that it can benefit from caching. 
 The bad thing is that it may be slow with cold caches, and that there will be 
 a query for each of the ranges.
 A different implementation would be one that works similarly to regular field 
 faceting, using doc values and validating ranges for each value of the 
 matching documents. This implementation would sometimes be faster than Range 
 Faceting / Query Faceting, especially in cases where caches are not very 
 effective, such as under a high update rate, or where ranges change frequently.
 Functionally, the result should be exactly the same as the one obtained by 
 doing a facet query for every interval
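 The approach described above can be sketched as follows (a simplified, hypothetical illustration with invented names, not the actual SOLR-6216 patch): instead of running one range query per interval, make a single pass over the matching documents' per-document values and test each value against every interval.

```java
/** Sketch of doc-values-style interval faceting: one pass over matching
 *  docs, counting each value against every interval. Not Solr code. */
public class IntervalFacetSketch {
    public static final class Interval {
        final long start, end; // inclusive bounds, for simplicity
        int count;
        public Interval(long start, long end) { this.start = start; this.end = end; }
    }

    /** One pass over per-document values; O(matches * intervals). */
    public static void count(long[] docValues, int[] matchingDocs, Interval[] intervals) {
        for (int doc : matchingDocs) {
            long v = docValues[doc];
            for (Interval in : intervals) {
                if (v >= in.start && v <= in.end) {
                    in.count++;
                }
            }
        }
    }

    public static void main(String[] args) {
        long[] values = {5, 12, 7, 30, 12};   // value per docId (stand-in for doc values)
        int[] matches = {0, 1, 2, 4};         // docs matching the base query
        Interval[] intervals = { new Interval(0, 9), new Interval(10, 19) };
        count(values, matches, intervals);
        System.out.println(intervals[0].count + " " + intervals[1].count); // prints "2 2"
    }
}
```

 Unlike the cached range-query approach, the cost here is independent of cache state, which is why it can win under high update rates or frequently changing ranges.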






[JENKINS] Lucene-Solr-trunk-Linux (64bit/ibm-j9-jdk7) - Build # 10781 - Failure!

2014-07-10 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/10781/
Java: 64bit/ibm-j9-jdk7 
-Xjit:exclude={org/apache/lucene/util/fst/FST.pack(IIF)Lorg/apache/lucene/util/fst/FST;}

5 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.lucene.search.TestSearchWithThreads

Error Message:
Captured an uncaught exception in thread: Thread[id=35, name=Lucene Merge 
Thread #1, state=RUNNABLE, group=TGRP-TestSearchWithThreads]

Stack Trace:
com.carrotsearch.randomizedtesting.UncaughtExceptionError: Captured an uncaught 
exception in thread: Thread[id=35, name=Lucene Merge Thread #1, state=RUNNABLE, 
group=TGRP-TestSearchWithThreads]
Caused by: org.apache.lucene.index.MergePolicy$MergeException: 
java.lang.NullPointerException
at __randomizedtesting.SeedInfo.seed([FC920280EE6B19CC]:0)
at 
org.apache.lucene.index.ConcurrentMergeScheduler.handleMergeException(ConcurrentMergeScheduler.java:545)
at 
org.apache.lucene.index.ConcurrentMergeScheduler$MergeThread.run(ConcurrentMergeScheduler.java:518)
Caused by: java.lang.NullPointerException
at 
org.apache.lucene.codecs.lucene49.Lucene49NormsConsumer$NormMap.add(Lucene49NormsConsumer.java:226)
at 
org.apache.lucene.codecs.lucene49.Lucene49NormsConsumer.addNumericField(Lucene49NormsConsumer.java:95)
at 
org.apache.lucene.codecs.DocValuesConsumer.mergeNumericField(DocValuesConsumer.java:129)
at 
org.apache.lucene.index.SegmentMerger.mergeNorms(SegmentMerger.java:253)
at org.apache.lucene.index.SegmentMerger.merge(SegmentMerger.java:131)
at 
org.apache.lucene.index.IndexWriter.mergeMiddle(IndexWriter.java:3993)
at org.apache.lucene.index.IndexWriter.merge(IndexWriter.java:3589)
at 
org.apache.lucene.index.ConcurrentMergeScheduler.doMerge(ConcurrentMergeScheduler.java:405)
at 
org.apache.lucene.index.ConcurrentMergeScheduler$MergeThread.run(ConcurrentMergeScheduler.java:482)


REGRESSION:  
org.apache.lucene.codecs.lucene41.TestBlockPostingsFormat.testMergeStability

Error Message:


Stack Trace:
java.lang.NullPointerException
at 
__randomizedtesting.SeedInfo.seed([FC920280EE6B19CC:88DE44AFE3811B7A]:0)
at 
org.apache.lucene.codecs.lucene49.Lucene49NormsConsumer$NormMap.add(Lucene49NormsConsumer.java:226)
at 
org.apache.lucene.codecs.lucene49.Lucene49NormsConsumer.addNumericField(Lucene49NormsConsumer.java:95)
at 
org.apache.lucene.index.NumericDocValuesWriter.flush(NumericDocValuesWriter.java:92)
at 
org.apache.lucene.index.DefaultIndexingChain.writeNorms(DefaultIndexingChain.java:190)
at 
org.apache.lucene.index.DefaultIndexingChain.flush(DefaultIndexingChain.java:94)
at 
org.apache.lucene.index.DocumentsWriterPerThread.flush(DocumentsWriterPerThread.java:415)
at 
org.apache.lucene.index.DocumentsWriter.doFlush(DocumentsWriter.java:509)
at 
org.apache.lucene.index.DocumentsWriter.flushAllThreads(DocumentsWriter.java:620)
at org.apache.lucene.index.IndexWriter.doFlush(IndexWriter.java:3063)
at org.apache.lucene.index.IndexWriter.flush(IndexWriter.java:3039)
at org.apache.lucene.index.IndexWriter.forceMerge(IndexWriter.java:1686)
at org.apache.lucene.index.IndexWriter.forceMerge(IndexWriter.java:1662)
at 
org.apache.lucene.index.BaseIndexFileFormatTestCase.testMergeStability(BaseIndexFileFormatTestCase.java:181)
at 
org.apache.lucene.index.BasePostingsFormatTestCase.testMergeStability(BasePostingsFormatTestCase.java:91)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:94)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:55)
at java.lang.reflect.Method.invoke(Method.java:619)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 

Re: [JENKINS] Lucene-Solr-trunk-Linux (64bit/ibm-j9-jdk7) - Build # 10781 - Failure!

2014-07-10 Thread Robert Muir
This is a J9 bug, I think? This array is initialized in the constructor:

short[] singleByteRange = new short[256];

so it cannot be null at line 226:

short previous = singleByteRange[index];
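
A minimal sketch of the pattern Robert is describing (class and method names here are invented for illustration; this is not the actual Lucene49NormsConsumer source). A final field assigned unconditionally in the constructor can never be observed as null through `this` on a spec-compliant JVM, which is why an NPE at that line points at a JIT miscompilation:

```java
// Hypothetical reconstruction of the NormMap pattern in question.
public class NormMapSketch {
    // Assigned unconditionally in the constructor, so reads through
    // 'this' can never see null on a spec-compliant JVM.
    final short[] singleByteRange;

    public NormMapSketch() {
        singleByteRange = new short[256];
    }

    public short previousFor(int index) {
        // Equivalent to the line that threw NPE under J9; with a valid
        // index this can only fail if the JIT miscompiles the field load.
        return singleByteRange[index];
    }

    public static void main(String[] args) {
        NormMapSketch map = new NormMapSketch();
        // A fresh short[] is zero-filled.
        System.out.println(map.previousFor(0)); // prints 0
    }
}
```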

On Thu, Jul 10, 2014 at 8:35 PM, Policeman Jenkins Server
jenk...@thetaphi.de wrote:
 [...]

[jira] [Created] (LUCENE-5814) JVM crash When Run Lucene

2014-07-10 Thread JIRA
杨维云 created LUCENE-5814:
---

 Summary: JVM crash When Run Lucene
 Key: LUCENE-5814
 URL: https://issues.apache.org/jira/browse/LUCENE-5814
 Project: Lucene - Core
  Issue Type: Bug
Affects Versions: 3.6
 Environment: JVM Info:
java version 1.6.0_43
Java(TM) SE Runtime Environment (build 1.6.0_43-b01)
Java HotSpot(TM) 64-Bit Server VM (build 20.14-b01, mixed mode)



Reporter: 杨维云
Priority: Critical


JVM crash when running Lucene in the following scenario:
1. Two Lucene servers, A and B, both running Linux.
2. The Lucene index files are on server A; we mount A's index path on B via NFS.
3. We run another program on A to refresh the index.
4. The Lucene program on B always crashes, but the one on A does not.
When B crashes, the JVM writes a crash report named hs_err_pid19495.log with the 
following contents:

#
# A fatal error has been detected by the Java Runtime Environment:
#
#  SIGBUS (0x7) at pc=0x2b0787a0, pid=6830, tid=1129146688
#
# JRE version: 6.0_43-b01
# Java VM: Java HotSpot(TM) 64-Bit Server VM (20.14-b01 mixed mode linux-amd64 
compressed oops)
# Problematic frame:
# v  ~StubRoutines::jshort_disjoint_arraycopy
#
# If you would like to submit a bug report, please visit:
#   http://java.sun.com/webapps/bugreport/crash.jsp
#

---  T H R E A D  ---

Current thread (0x54a49800):  JavaThread RMI TCP 
Connection(722)-192.168.251.56 daemon [_thread_in_Java, id=7084, 
stack(0x433d6000,0x434d7000)]

siginfo:si_signo=SIGBUS: si_errno=0, si_code=2 (BUS_ADRERR), 
si_addr=0x2aaab8dbd6f6

Registers:
RAX=0x2aaab8dc1de8, RBX=0x, RCX=0x2379, 
RDX=0xf726
RSP=0x434d4b00, RBP=0x434d4b00, RSI=0x00079860fb40, 
RDI=0x2aaab8dc1dde
R8 =0x2aaab8dbd6f6, R9 =0x46f2, R10=0x2b078d80, 
R11=0x000797f9c420
R12=0x, R13=0x4702, R14=0x2aaab8dc1de8, 
R15=0x54a49800
RIP=0x2b0787a0, EFLAGS=0x00010282, CSGSFS=0x0033, 
ERR=0x0004
  TRAPNO=0x000e

Top of Stack: (sp=0x434d4b00)
0x434d4b00:   46f6 2b0cbc74
0x434d4b10:   00079860b448 46f2
0x434d4b20:   000797f9c420 000797f9c998
0x434d4b30:   000797f7ee70 000797f480f8
0x434d4b40:   54a49800 000797f49c98
0x434d4b50:   000797f7cc08 
0x434d4b60:   00079860b3a8 434d4ae0
0x434d4b70:   000797f9cb70 2b3f7f90
0x434d4b80:   f2ff388446f2 000797f9c420
0x434d4b90:   00079860b288 000797f9cb70
0x434d4ba0:   f2fefd649860b288 00079858c680
0x434d4bb0:   54a49800 0004
0x434d4bc0:   006296bf6801 2b2f861c
0x434d4bd0:   d8c07bb6 2b3a3094
0x434d4be0:   00079860b408 00079860b408
0x434d4bf0:   00079860b288 000798586b98
0x434d4c00:   000797f7cc08 000797f7cc08
0x434d4c10:   00079858c190 000798586b38
0x434d4c20:   000797f9daa8 2b1cfa70
0x434d4c30:   00079858ccf8 
0x434d4c40:   000798586b98 000797f9d9f0
0x434d4c50:   00079858ccf8 000798586b98
0x434d4c60:   d82e8229 00079858ccf8
0x434d4c70:   0007f2fe8a48 000797f45240
0x434d4c80:   000798586b98 2b319440
0x434d4c90:   00079858ccf8 f3095552d8c32f35
0x434d4ca0:    2b301b6c
0x434d4cb0:    2b306b4c
0x434d4cc0:   000797f9d9f0 0007d82e8229
0x434d4cd0:   000746fa 0006c6041a00
0x434d4ce0:   0007984aaa70 045dab416ca4
0x434d4cf0:   000186a0 2b29f730 

Instructions: (pc=0x2b0787a0)
0x2b078780:   c6 04 f7 c1 01 00 00 00 74 08 66 8b 47 08 66 89
0x2b078790:   46 08 48 33 c0 c9 c3 66 0f 1f 84 00 00 00 00 00
0x2b0787a0:   48 8b 44 d7 e8 48 89 44 d6 e8 48 8b 44 d7 f0 48
0x2b0787b0:   89 44 d6 f0 48 8b 44 d7 f8 48 89 44 d6 f8 48 8b 

Register to memory mapping:

RAX=0x2aaab8dc1de8 is an unknown value
RBX=0x is an unknown value
RCX=0x2379 is an unknown value
RDX=0xf726 is an unknown value
RSP=0x434d4b00 is pointing into the stack for thread: 0x54a49800
RBP=0x434d4b00 is pointing into the stack for thread: 0x54a49800
RSI=0x00079860fb40 is an unknown value
RDI=0x2aaab8dc1dde is an unknown value
R8 =0x2aaab8dbd6f6 is an unknown value
R9 =0x46f2 is an unknown value
R10=StubRoutines::unsafe_arraycopy [0x2b078d80, 0x2b078dbb[ (59 bytes)
R11=0x000797f9c420 is an oop

[jira] [Commented] (SOLR-247) Allow facet.field=* to facet on all fields (without knowing what they are)

2014-07-10 Thread Gowtham Gutha (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-247?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14058369#comment-14058369
 ] 

Gowtham Gutha commented on SOLR-247:


Why doesn't it accept wildcards? Then, when creating the *schema.xml*, I could 
include the faceted fields with a suffix that identifies them as facet fields.

This would be a great addition, and it looks fixable.

[http://localhost:8983/solr/select?q=ipod&rows=0&facet=true&facet.limit=-1&facet.field=*_facet&facet.mincount=1]
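
A rough sketch of what the suggested server-side expansion could look like: matching a glob such as `*_facet` against the schema's field names. All names here are invented for the sketch and are not Solr's real API:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of expanding a facet.field wildcard such as
// "*_facet" against the set of field names known to the schema.
public class FacetWildcard {
    public static List<String> expand(String pattern, List<String> schemaFields) {
        // Translate the simple glob into a regex: '*' matches any run of
        // characters, '.' is escaped, everything else matches literally.
        String regex = pattern.replace(".", "\\.").replace("*", ".*");
        List<String> matches = new ArrayList<>();
        for (String field : schemaFields) {
            if (field.matches(regex)) {  // String.matches anchors the regex
                matches.add(field);
            }
        }
        return matches;
    }

    public static void main(String[] args) {
        List<String> fields = List.of("name", "brand_facet", "color_facet", "price");
        System.out.println(expand("*_facet", fields)); // [brand_facet, color_facet]
    }
}
```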

 Allow facet.field=* to facet on all fields (without knowing what they are)
 --

 Key: SOLR-247
 URL: https://issues.apache.org/jira/browse/SOLR-247
 Project: Solr
  Issue Type: Improvement
Reporter: Ryan McKinley
Priority: Minor
  Labels: beginners, newdev
 Attachments: SOLR-247-FacetAllFields.patch, SOLR-247.patch, 
 SOLR-247.patch, SOLR-247.patch


 I don't know if this is a good idea to include -- it is potentially a bad 
 idea to use it, but that can be ok.
 This came out of trying to use faceting for the LukeRequestHandler top term 
 collecting.
 http://www.nabble.com/Luke-request-handler-issue-tf3762155.html



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-6238) Specialized test case for leader recovery scenario

2014-07-10 Thread Varun Thacker (JIRA)
Varun Thacker created SOLR-6238:
---

 Summary: Specialized test case for leader recovery scenario
 Key: SOLR-6238
 URL: https://issues.apache.org/jira/browse/SOLR-6238
 Project: Solr
  Issue Type: Improvement
Reporter: Varun Thacker
Priority: Minor
 Fix For: 4.10


A scenario which, I think, could happen at least before the addition of 
LeaderInitiatedRecoveryThread. It can also only happen if one is using a 
non-cloud-aware client (which might be quite a few users), given that SolrJ 
is the only cloud-aware client we have.

Events in chronological order -
Leader - lost connection with ZK
Replica - became the leader
Leader - the add-document request succeeds locally; it forwards the document 
to the replica
Replica - the add-document request fails, because the replica is now the 
leader and the request says it is coming from a leader

So as of now the Replica (new leader) won't have the document but the leader 
(old leader) will.
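
The event sequence can be modeled as a small sketch (node and method names are invented for illustration, not taken from Solr), showing how the two replicas end up with different documents:

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative model of the leader/replica divergence described above.
public class LeaderRecoverySketch {
    public static class Node {
        public final List<String> docs = new ArrayList<>();
        public boolean isLeader;

        public Node(boolean isLeader) { this.isLeader = isLeader; }

        // A node rejects a forwarded add if it believes it is the leader
        // itself, mirroring the described failure.
        public boolean addForwardedFromLeader(String doc) {
            if (isLeader) return false; // rejected: "I am the leader"
            docs.add(doc);
            return true;
        }
    }

    public static void main(String[] args) {
        Node oldLeader = new Node(true);
        Node replica = new Node(false);

        // 1. Old leader loses ZK; 2. replica is elected leader.
        replica.isLeader = true;

        // 3. Old leader still accepts the add locally...
        oldLeader.docs.add("doc1");
        // 4. ...but its forward to the new leader is rejected.
        boolean accepted = replica.addForwardedFromLeader("doc1");

        System.out.println("forward accepted: " + accepted);            // false
        System.out.println("old leader docs: " + oldLeader.docs.size()); // 1
        System.out.println("new leader docs: " + replica.docs.size());   // 0
    }
}
```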



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-6238) Specialized test case for leader recovery scenario

2014-07-10 Thread Varun Thacker (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6238?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Thacker updated SOLR-6238:


Attachment: SOLR-6238.patch

Currently the counts on the leader and the replica are the same. I will try 
running the test on an older release to see if I can get it to fail.

Any comments on whether this test would be a good addition, and whether my 
approach is correct, would be appreciated.

 Specialized test case for leader recovery scenario
 --

 Key: SOLR-6238
 URL: https://issues.apache.org/jira/browse/SOLR-6238
 Project: Solr
  Issue Type: Improvement
Reporter: Varun Thacker
Priority: Minor
 Fix For: 4.10

 Attachments: SOLR-6238.patch


 A scenario which could happen at least before the addition of 
 LeaderInitiatedRecoveryThread I think. Also this can happen only if one is 
 using a non cloud aware client ( which might be quite a few users ) given 
 that we have only SolrJ
 Events are in chronological order -
 Leader - Lost Connection with ZK
 Replica - Became leader
 Leader - add document is successful. Forwards it to the replica
 Replica - add document is unsuccessful as it is the leader and the request 
 says it is coming from a leader
 So as of now the Replica (new leader) won't have the doc but the 
 leader (old leader) will have the document.



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6238) Specialized test case for leader recovery scenario

2014-07-10 Thread Varun Thacker (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6238?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14058420#comment-14058420
 ] 

Varun Thacker commented on SOLR-6238:
-

On the lucene_solr_4_7 branch the assert check fails most of the time; 
LeaderInitiatedRecoveryThread is not present on that branch.

 Specialized test case for leader recovery scenario
 --

 Key: SOLR-6238
 URL: https://issues.apache.org/jira/browse/SOLR-6238
 Project: Solr
  Issue Type: Improvement
Reporter: Varun Thacker
Priority: Minor
 Fix For: 4.10

 Attachments: SOLR-6238.patch


 A scenario which could happen at least before the addition of 
 LeaderInitiatedRecoveryThread I think. Also this can happen only if one is 
 using a non cloud aware client ( which might be quite a few users ) given 
 that we have only SolrJ
 Events are in chronological order -
 Leader - Lost Connection with ZK
 Replica - Became leader
 Leader - add document is successful. Forwards it to the replica
 Replica - add document is unsuccessful as it is the leader and the request 
 says it is coming from a leader
 So as of now the Replica (new leader) won't have the doc but the 
 leader (old leader) will have the document.



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org