Re: [JENKINS] Lucene-Solr-trunk-MacOSX (64bit/jdk1.7.0) - Build # 280 - Failure!

2013-03-02 Thread Robert Muir
On Sat, Mar 2, 2013 at 1:36 AM, Chris Hostetter hoss...@apache.org wrote:
 : ...but i didn't get any errors when i did ant precommit, and i'm not
 : seeing any actual error reported here -- what does "Tidy was unable to
 : process file ... 1 returned" mean as far as the actual problem?


 still confused as to why the html lint check doesn't print out the actual
 problem found with the html? Didn't it used to do that?


There are many checkers that we run: the Eclipse checker, a couple of Python
scripts, and this jtidy checker.

The jtidy one has crappy output. That's because it's not easy to use
jtidy in the ant build with thousands of source files and also avoid
OOMs!

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-3755) shard splitting

2013-03-02 Thread Shalin Shekhar Mangar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-3755?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13591412#comment-13591412
 ] 

Shalin Shekhar Mangar commented on SOLR-3755:
-

bq. How do we know what collection? I assume there will be a collection 
parameter?

Yes, a collection param will also be present.

bq. shard.keys is currently used in routing request (and the values are often 
not shard names), so we probably shouldn't overload it here. After all, it may 
make sense in the future to be able to use shard.keys to specify which shard 
you want to split!

Yes! That is exactly the thinking behind shard.keys here. It is not being 
overloaded but used to indicate which shard to split by specifying the key 
which resolves to a shard name.

bq. Related: SOLR-4503 - we now have the capability to use restlet, and should 
consider doing so for new APIs like this.

I'm not familiar with restlet. I'll take a look at it.
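For illustration only, a split request might end up looking something like the
following (the action name and parameter values are placeholders, nothing here
is final):

{noformat}
http://localhost:8983/solr/admin/collections?action=SPLITSHARD&collection=collection1&shard.keys=customerA!
{noformat}

where shard.keys resolves, via the collection's hash ranges, to the single
shard that should be split.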

 shard splitting
 ---

 Key: SOLR-3755
 URL: https://issues.apache.org/jira/browse/SOLR-3755
 Project: Solr
  Issue Type: New Feature
  Components: SolrCloud
Reporter: Yonik Seeley
 Attachments: SOLR-3755-CoreAdmin.patch, SOLR-3755.patch, 
 SOLR-3755.patch, SOLR-3755-testSplitter.patch, SOLR-3755-testSplitter.patch


 We can currently easily add replicas to handle increases in query volume, but 
 we should also add a way to add additional shards dynamically by splitting 
 existing shards.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Assigned] (LUCENE-4795) Add FacetsCollector based on SortedSetDocValues

2013-03-02 Thread Michael McCandless (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-4795?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael McCandless reassigned LUCENE-4795:
--

Assignee: Michael McCandless

 Add FacetsCollector based on SortedSetDocValues
 ---

 Key: LUCENE-4795
 URL: https://issues.apache.org/jira/browse/LUCENE-4795
 Project: Lucene - Core
  Issue Type: Improvement
  Components: modules/facet
Reporter: Michael McCandless
Assignee: Michael McCandless
 Attachments: LUCENE-4795.patch, LUCENE-4795.patch, 
 pleaseBenchmarkMe.patch


 Recently (LUCENE-4765) we added multi-valued DocValues field
 (SortedSetDocValuesField), and this can be used for faceting in Solr
 (SOLR-4490).  I think we should also add support in the facet module?
 It'd be an option with different tradeoffs.  Eg, it wouldn't require
 the taxonomy index, since the main index handles label/ord resolving.
 There are at least two possible approaches:
   * On every reopen, build the seg -> global ord map, and then on
 every collect, get the seg ord, map it to the global ord space,
 and increment counts.  This adds cost during reopen in proportion
 to number of unique terms ...
   * On every collect, increment counts based on the seg ords, and then
 do a merge in the end just like distributed faceting does.
 The first approach is much easier so I built a quick prototype using
 that.  The prototype does the counting, but it does NOT do the top K
 facets gathering in the end, and it doesn't know parent/child ord
 relationships, so there's tons more to do before this is real.  I also
 was unsure how to properly integrate it since the existing classes
 seem to expect that you use a taxonomy index to resolve ords.
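 To make the first approach concrete, here is a rough sketch of the
 collect-time counting (the per-segment long[] ord maps, built at reopen,
 are placeholders and not an existing Lucene API; this is not the attached
 patch):
 {noformat}
import java.io.IOException;
import java.util.Map;
import org.apache.lucene.index.AtomicReaderContext;
import org.apache.lucene.index.SortedSetDocValues;
import org.apache.lucene.search.Collector;
import org.apache.lucene.search.Scorer;

// Sketch only: counts a SortedSetDocValues facet field by remapping each
// per-segment ord to a global ord at collect time.
class SortedSetCountingCollector extends Collector {
  private final String field;
  private final int[] counts;                          // indexed by global ord
  private final Map<Object,long[]> segOrdToGlobalOrd;  // built once per reopen
  private SortedSetDocValues segValues;
  private long[] ordMap;

  SortedSetCountingCollector(String field, int numGlobalOrds,
                             Map<Object,long[]> segOrdToGlobalOrd) {
    this.field = field;
    this.counts = new int[numGlobalOrds];
    this.segOrdToGlobalOrd = segOrdToGlobalOrd;
  }

  @Override
  public void setNextReader(AtomicReaderContext context) throws IOException {
    segValues = context.reader().getSortedSetDocValues(field);
    ordMap = segOrdToGlobalOrd.get(context.reader().getCoreCacheKey());
  }

  @Override
  public void collect(int doc) throws IOException {
    if (segValues == null) {
      return;                                          // segment has no values for this field
    }
    segValues.setDocument(doc);
    long segOrd;
    while ((segOrd = segValues.nextOrd()) != SortedSetDocValues.NO_MORE_ORDS) {
      counts[(int) ordMap[(int) segOrd]]++;            // remap to global ord and count
    }
  }

  @Override
  public void setScorer(Scorer scorer) {}

  @Override
  public boolean acceptsDocsOutOfOrder() {
    return true;
  }
}
 {noformat}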
 I ran a quick performance test.  base = trunk except I disabled the
 compute top-K in FacetsAccumulator to make the comparison fair; comp
 = using the prototype collector in the patch:
 {noformat}
 Task               QPS base  StdDev  QPS comp  StdDev  Pct diff
 OrHighLow             18.79  (2.5%)     14.36  (3.3%)    -23.6% ( -28% - -18%)
 HighTerm              21.58  (2.4%)     16.53  (3.7%)    -23.4% ( -28% - -17%)
 OrHighMed             18.20  (2.5%)     13.99  (3.3%)    -23.2% ( -28% - -17%)
 Prefix3               14.37  (1.5%)     11.62  (3.5%)    -19.1% ( -23% - -14%)
 LowTerm              130.80  (1.6%)    106.95  (2.4%)    -18.2% ( -21% - -14%)
 OrHighHigh             9.60  (2.6%)      7.88  (3.5%)    -17.9% ( -23% - -12%)
 AndHighHigh           24.61  (0.7%)     20.74  (1.9%)    -15.7% ( -18% - -13%)
 Fuzzy1                49.40  (2.5%)     43.48  (1.9%)    -12.0% ( -15% -  -7%)
 MedSloppyPhrase       27.06  (1.6%)     23.95  (2.3%)    -11.5% ( -15% -  -7%)
 MedTerm               51.43  (2.0%)     46.21  (2.7%)    -10.2% ( -14% -  -5%)
 IntNRQ                 4.02  (1.6%)      3.63  (4.0%)     -9.7% ( -15% -  -4%)
 Wildcard              29.14  (1.5%)     26.46  (2.5%)     -9.2% ( -13% -  -5%)
 HighSloppyPhrase       0.92  (4.5%)      0.87  (5.8%)     -5.4% ( -15% -   5%)
 MedSpanNear           29.51  (2.5%)     27.94  (2.2%)     -5.3% (  -9% -   0%)
 HighSpanNear           3.55  (2.4%)      3.38  (2.0%)     -4.9% (  -9% -   0%)
 AndHighMed           108.34  (0.9%)    104.55  (1.1%)     -3.5% (  -5% -  -1%)
 LowSloppyPhrase       20.50  (2.0%)     20.09  (4.2%)     -2.0% (  -8% -   4%)
 LowPhrase             21.60  (6.0%)     21.26  (5.1%)     -1.6% ( -11% -  10%)
 Fuzzy2                53.16  (3.9%)     52.40  (2.7%)     -1.4% (  -7% -   5%)
 LowSpanNear            8.42  (3.2%)      8.45  (3.0%)      0.3% (  -5% -   6%)
 Respell               45.17  (4.3%)     45.38  (4.4%)      0.5% (  -7% -   9%)
 MedPhrase            113.93  (5.8%)    115.02  (4.9%)      1.0% (  -9% -  12%)
 AndHighLow           596.42  (2.5%)    617.12  (2.8%)      3.5% (  -1% -   8%)
 HighPhrase            17.30 (10.5%)     18.36  (9.1%)      6.2% ( -12% -  28%)
 {noformat}
 I'm impressed that this approach is only ~24% slower in the worst
 case!  I think this means it's a good option to make available?  Yes
 it has downsides (NRT reopen more costly, small added RAM usage,
 slightly slower faceting), but it's also simpler (no taxo index to
 manage).

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, 

[jira] [Updated] (LUCENE-4795) Add FacetsCollector based on SortedSetDocValues

2013-03-02 Thread Michael McCandless (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-4795?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael McCandless updated LUCENE-4795:
---

Attachment: LUCENE-4795.patch

New patch ... I think it's close but there are still some nocommits.

 I switched to a FacetsAccumulator (SortedSetDVAccumulator) instead of
XXXCollector because:

  * It's more fair since it now does all counting in the end,
matching trunk, which was a bit faster than count-as-you-go when
we last tested.

  * It means you can use this class with DrillSideways ... I fixed
TestDrillSideways to test it (passes!).

I also got a custom topK impl working.

The facets are the same as trunk, except for tie-break differences.
The new collector is better in this regard: it breaks ties in an
understandable-to-the-end-user way (by ord = Unicode sort order),
unlike the taxo index, which uses the order in which the label was
indexed into the taxo index (confusing to the end user).

I first went down the road of making a TaxoReader that wraps a
SlowCompositeReaderWrapper ... but this became problematic because a
DV instance is not thread-safe, yet TaxoReader's APIs are supposed to
be thread-safe.  I also really didn't like making 3 int[maxOrd] to
handle hierarchy when SortedSetDV facets only support a 2-level
hierarchy (dim + child).

So I backed off of that and made a separate State object, which you
must re-init after every top-reader reopen, and it does the heavyish
stuff.

Current results (base = trunk w/ allbutdim, comp = patch, full wikibig
index with 5 flat dims):

{noformat}
Task               QPS base  StdDev  QPS comp  StdDev  Pct diff
HighTerm               9.36  (1.9%)      7.02  (3.6%)    -25.0% ( -29% - -19%)
MedTerm               53.21  (1.5%)     40.65  (2.8%)    -23.6% ( -27% - -19%)
OrHighLow             13.25  (2.1%)     10.55  (3.4%)    -20.4% ( -25% - -15%)
OrHighMed             25.77  (1.9%)     20.90  (3.1%)    -18.9% ( -23% - -14%)
OrHighHigh            13.03  (2.2%)     10.63  (3.2%)    -18.4% ( -23% - -13%)
LowTerm              146.28  (1.7%)    120.22  (1.7%)    -17.8% ( -20% - -14%)
{noformat}


 Add FacetsCollector based on SortedSetDocValues
 ---

 Key: LUCENE-4795
 URL: https://issues.apache.org/jira/browse/LUCENE-4795
 Project: Lucene - Core
  Issue Type: Improvement
  Components: modules/facet
Reporter: Michael McCandless
Assignee: Michael McCandless
 Attachments: LUCENE-4795.patch, LUCENE-4795.patch, LUCENE-4795.patch, 
 pleaseBenchmarkMe.patch


 Recently (LUCENE-4765) we added multi-valued DocValues field
 (SortedSetDocValuesField), and this can be used for faceting in Solr
 (SOLR-4490).  I think we should also add support in the facet module?
 It'd be an option with different tradeoffs.  Eg, it wouldn't require
 the taxonomy index, since the main index handles label/ord resolving.
 There are at least two possible approaches:
   * On every reopen, build the seg -> global ord map, and then on
 every collect, get the seg ord, map it to the global ord space,
 and increment counts.  This adds cost during reopen in proportion
 to number of unique terms ...
   * On every collect, increment counts based on the seg ords, and then
 do a merge in the end just like distributed faceting does.
 The first approach is much easier so I built a quick prototype using
 that.  The prototype does the counting, but it does NOT do the top K
 facets gathering in the end, and it doesn't know parent/child ord
 relationships, so there's tons more to do before this is real.  I also
 was unsure how to properly integrate it since the existing classes
 seem to expect that you use a taxonomy index to resolve ords.
 I ran a quick performance test.  base = trunk except I disabled the
 compute top-K in FacetsAccumulator to make the comparison fair; comp
 = using the prototype collector in the patch:
 {noformat}
 Task               QPS base  StdDev  QPS comp  StdDev  Pct diff
 OrHighLow             18.79  (2.5%)     14.36  (3.3%)    -23.6% ( -28% - -18%)
 HighTerm              21.58  (2.4%)     16.53  (3.7%)    -23.4% ( -28% - -17%)
 OrHighMed             18.20  (2.5%)     13.99  (3.3%)    -23.2% ( -28% - -17%)
 Prefix3               14.37  (1.5%)     11.62  (3.5%)    -19.1% ( -23% - -14%)
 LowTerm              130.80  (1.6%)    106.95  (2.4%)    -18.2% ( -21% - -14%)
 OrHighHigh             9.60  (2.6%)      7.88  (3.5%)    -17.9% ( -23% - -12%)
 AndHighHigh           24.61  (0.7%)     20.74  (1.9%)    -15.7%

[jira] [Commented] (SOLR-2996) make q=* not suck in the lucene and edismax parsers

2013-03-02 Thread Commit Tag Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-2996?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13591452#comment-13591452
 ] 

Commit Tag Bot commented on SOLR-2996:
--

[trunk commit] Yonik Seeley
http://svn.apache.org/viewvc?view=revision&revision=1451906

SOLR-2996: treat un-fielded * as *:*


 make q=* not suck in the lucene and edismax parsers
 -

 Key: SOLR-2996
 URL: https://issues.apache.org/jira/browse/SOLR-2996
 Project: Solr
  Issue Type: Improvement
  Components: query parsers
Reporter: Hoss Man
 Attachments: SOLR-2996.patch


 More than a few users have gotten burned by thinking that {{\*}} is the 
 appropriate syntax for match all docs when what it really does (unless i'm 
 mistaken) is create a prefix query on the default search field using a blank 
 string as the prefix.
 since it seems very unlikely that anyone has a genuine usecase for making a 
 prefix query with a blank prefix, we should change the default behavior of 
 the LuceneQParser and EDismaxQParsers (and any other Qparsers that respect 
 {{\*:\*}} if i'm forgetting them) to treat this situation the same as 
 {{\*:\*}}.  we can offer a (local)param to force the old behavior if someone 
 really wants it.
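 With this change, both of the following parse to the match-all query, where
 the first used to become an empty-prefix query on the default field (request
 paths are illustrative):
 {noformat}
 /solr/collection1/select?q=*
 /solr/collection1/select?q=*:*
 {noformat}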

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-3798) copyField logic in LukeRequestHandler is primitive, doesn't work well with dynamicFields

2013-03-02 Thread Steve Rowe (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-3798?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13591456#comment-13591456
 ] 

Steve Rowe commented on SOLR-3798:
--

After [chatting with Hoss on #lucene-dev 
IRC|http://colabti.org/irclogger/irclogger_log/lucene-dev?date=2013-03-01#l9], 
I understand copyFields a little better.  Hoss argued that "undeclared explicit 
field" is an inaccurate description of the third kind of thing I was 
referring to, and I agree.

A hopefully better characterization - something like this should be on the wiki:

{panel}
{{copyField}} source or dest values can be either field names or dynamic 
field references.

A dynamic field reference is either an exact {{dynamicField}} name, or a 
pattern that accepts a subset of the language accepted by the pattern for a 
referenced dynamic field.  Subset pattern syntax is the same as that for 
dynamic field names (*\*string* or *string\**), with the additional 
possibility of excluding the asterisk (*string*).

A {{copyField *source*}} subset pattern operates as a filter: instead of 
triggering a field copy for all field names matched by the referenced dynamic 
field pattern, only those that match the subset pattern will trigger a field 
copy.

A {{copyField *dest*}} subset pattern operates in two ways: the target 
field's type is drawn from the referenced {{dynamicField}}; and the target 
field name is generated using the subset pattern as a template, unless the 
subset pattern excludes the asterisk, in which case the subset pattern itself 
becomes the target field name.
{panel}
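A concrete (entirely made-up) example, assuming a schema.xml that declares the
{{\*_i}} and {{\*_s}} dynamic fields below plus explicit {{title}} and
{{all_prices}} fields:

{noformat}
<dynamicField name="*_i" type="int"    indexed="true" stored="true"/>
<dynamicField name="*_s" type="string" indexed="true" stored="true"/>

<!-- source subset pattern: of all the fields matched by *_i, only those also
     matching *_price_i trigger a copy into the all_prices field -->
<copyField source="*_price_i" dest="all_prices"/>

<!-- dest subset pattern with no asterisk: "title_s" itself becomes the target
     field name, and its type is drawn from the referenced *_s dynamicField -->
<copyField source="title" dest="title_s"/>
{noformat}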


 copyField logic in LukeRequestHandler is primitive, doesn't work well with 
 dynamicFields
 

 Key: SOLR-3798
 URL: https://issues.apache.org/jira/browse/SOLR-3798
 Project: Solr
  Issue Type: Bug
Reporter: Hoss Man
 Attachments: SOLR-3798.patch


 looking into SOLR-3795 i realized there is a much bigger problem with how 
 LukeRequestHandler tries to get copyfield info for fields and dynamicFields 
 the same way, and it just doesn't work.
 see the patch in SOLR-3795 for a commented out example of a test that still 
 fails (ie: trying to get the copySource info for a dynamicField)

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-2996) make q=* not suck in the lucene and edismax parsers

2013-03-02 Thread Yonik Seeley (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-2996?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yonik Seeley resolved SOLR-2996.


   Resolution: Fixed
Fix Version/s: 5.0
   4.2

 make q=* not suck in the lucene and edismax parsers
 -

 Key: SOLR-2996
 URL: https://issues.apache.org/jira/browse/SOLR-2996
 Project: Solr
  Issue Type: Improvement
  Components: query parsers
Reporter: Hoss Man
 Fix For: 4.2, 5.0

 Attachments: SOLR-2996.patch


 More than a few users have gotten burned by thinking that {{\*}} is the 
 appropriate syntax for match all docs when what it really does (unless i'm 
 mistaken) is create a prefix query on the default search field using a blank 
 string as the prefix.
 since it seems very unlikely that anyone has a genuine usecase for making a 
 prefix query with a blank prefix, we should change the default behavior of 
 the LuceneQParser and EDismaxQParsers (and any other Qparsers that respect 
 {{\*:\*}} if i'm forgetting them) to treat this situation the same as 
 {{\*:\*}}.  we can offer a (local)param to force the old behavior if someone 
 really wants it.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-2996) make q=* not suck in the lucene and edismax parsers

2013-03-02 Thread Commit Tag Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-2996?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13591460#comment-13591460
 ] 

Commit Tag Bot commented on SOLR-2996:
--

[branch_4x commit] Yonik Seeley
http://svn.apache.org/viewvc?view=revision&revision=1451910

SOLR-2996: treat un-fielded * as *:*


 make q=* not suck in the lucene and edismax parsers
 -

 Key: SOLR-2996
 URL: https://issues.apache.org/jira/browse/SOLR-2996
 Project: Solr
  Issue Type: Improvement
  Components: query parsers
Reporter: Hoss Man
 Fix For: 4.2, 5.0

 Attachments: SOLR-2996.patch


 More than a few users have gotten burned by thinking that {{\*}} is the 
 appropriate syntax for match all docs when what it really does (unless i'm 
 mistaken) is create a prefix query on the default search field using a blank 
 string as the prefix.
 since it seems very unlikely that anyone has a genuine usecase for making a 
 prefix query with a blank prefix, we should change the default behavior of 
 the LuceneQParser and EDismaxQParsers (and any other Qparsers that respect 
 {{\*:\*}} if i'm forgetting them) to treat this situation the same as 
 {{\*:\*}}.  we can offer a (local)param to force the old behavior if someone 
 really wants it.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-4492) Please add support for Collection API CREATE method to evenly distribute leader roles among instances

2013-03-02 Thread Hoss Man (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4492?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13584936#comment-13584936
 ] 

Hoss Man edited comment on SOLR-4492 at 3/2/13 6:25 PM:


I don't know a lot about how leader election/creation currently works, but as a 
novice in this area, here are some suggestions i haven't fully thought 
through...

* it seems like picking the leaders on collection creation should use the same 
algo as leader election when a leader goes down
* instead of picking leaders purely randomly, it seems like the leader election 
algo could be a lottery, with the number of tickets each node has inversely 
proportionate to the amount of load that node is handling
* "load" could be a generic numeric concept contributed by various things 
along the lines of (num_local_shards + (17 * num_shards_we_are_leader_for) + 
$some_admin_configurable_property) * 
OperatingSystemMXBean.getAvailableProcessors() / 
OperatingSystemMXBean.getSystemLoadAverage() 

(where $some_admin_configurable_property gives users the ability to say this 
machine is N times beefier than that machine)
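As a sketch in code (the weight of 17 and the admin-configured factor are 
placeholders, exactly as above; this is not real Solr code):

{noformat}
import java.lang.management.ManagementFactory;
import java.lang.management.OperatingSystemMXBean;

class LeaderLotterySketch {
  static double nodeScore(int numLocalShards, int numShardsWeAreLeaderFor,
                          double adminConfiguredFactor) {
    OperatingSystemMXBean os = ManagementFactory.getOperatingSystemMXBean();
    double loadAvg = os.getSystemLoadAverage();   // -1 if unsupported on this platform
    if (loadAvg <= 0) {
      loadAvg = 1.0;                              // avoid dividing by zero or a negative value
    }
    return (numLocalShards + 17 * numShardsWeAreLeaderFor + adminConfiguredFactor)
        * os.getAvailableProcessors() / loadAvg;
  }
}
{noformat}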

  was (Author: hossman):
poorly thought through suggestion from the peanut gallery...

* it seems like picking the leaders on collection creation should use the same 
algo as leader election when a leader goes down
* instead of picking leaders purely randomly, it seems like the leader election 
algo could be a lottery, with the number of tickets each node has inversely 
proportionate to the amount of load that node is handling
* "load" could be a generic numeric concept contributed by various things 
along the lines of (num_local_shards + (17 * num_shards_we_are_leader_for) + 
$some_admin_configurable_property) * 
OperatingSystemMXBean.getAvailableProcessors() / 
OperatingSystemMXBean.getSystemLoadAverage() 

(where $some_admin_configurable_property gives users the ability to say this 
machine is N times beefier than that machine)
  
 Please add support for Collection API CREATE method to evenly distribute 
 leader roles among instances
 -

 Key: SOLR-4492
 URL: https://issues.apache.org/jira/browse/SOLR-4492
 Project: Solr
  Issue Type: New Feature
  Components: SolrCloud
Reporter: Tim Vaillancourt
Priority: Minor

 Currently in SolrCloud 4.1, a CREATE call to the Collection API will cause 
 the server receiving the CREATE call to become the leader of all shards.
 I would like to ask for the ability for the CREATE call to evenly distribute 
 the leader role across all instances, ie: if I create 3 shards over 3 SOLR 
 4.1 instances, each instance/node would only be the leader of 1 shard.
 This would be logically consistent with the way replicas are randomly 
 distributed by this same call across instances/nodes.
 Currently, this CREATE call will cause the server receiving the call to 
 become the leader of 3 shards.
 curl -v 
 'http://HOST:8983/solr/admin/collections?action=CREATE&name=test&numShards=3&replicationFactor=2&maxShardsPerNode=2'
 PS: Thank you SOLR developers for your contributions!
 Tim Vaillancourt

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS-MAVEN] Lucene-Solr-Maven-4.x #262: POMs out of sync

2013-03-02 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Maven-4.x/262/

1 tests failed.
REGRESSION:  org.apache.solr.cloud.SyncSliceTest.testDistribSearch

Error Message:
Test Setup Failure: shard1 should have just been set up to be inconsistent - 
but it's still consistent. Leader:http://127.0.0.1:61582/yrg/s/collection1 skip 
list:[CloudJettyRunner [url=http://127.0.0.1:61590/yrg/s/collection1], 
CloudJettyRunner [url=http://127.0.0.1:61586/yrg/s/collection1]]

Stack Trace:
java.lang.AssertionError: Test Setup Failure: shard1 should have just been set 
up to be inconsistent - but it's still consistent. 
Leader:http://127.0.0.1:61582/yrg/s/collection1 skip list:[CloudJettyRunner 
[url=http://127.0.0.1:61590/yrg/s/collection1], CloudJettyRunner 
[url=http://127.0.0.1:61586/yrg/s/collection1]]
at 
__randomizedtesting.SeedInfo.seed([61FB53E2E6EB6785:E01DDDFA91B407B9]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at org.junit.Assert.assertNotNull(Assert.java:526)
at org.apache.solr.cloud.SyncSliceTest.doTest(SyncSliceTest.java:212)




Build Log:
[...truncated 22886 lines...]



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

[jira] [Updated] (SOLR-4518) CurrencyField needs better errors when encountering a currency that java.util.Currency doesn't support

2013-03-02 Thread Hoss Man (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-4518?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hoss Man updated SOLR-4518:
---

Attachment: SOLR-4518.patch

 CurrencyField needs better errors when encountering a currency that 
 java.util.Currency doesn't support
 --

 Key: SOLR-4518
 URL: https://issues.apache.org/jira/browse/SOLR-4518
 Project: Solr
  Issue Type: Bug
Reporter: Hoss Man
Assignee: Hoss Man
 Attachments: SOLR-4518.patch


 Got a generic IllegalArgumentException when trying to use a currency.xml 
 file referring to ZWL on java6 ... need to wrap those errors with more 
 context.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-4518) CurrencyField needs better errors when encountering a currency that java.util.Currency doesn't support

2013-03-02 Thread Commit Tag Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4518?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13591498#comment-13591498
 ] 

Commit Tag Bot commented on SOLR-4518:
--

[trunk commit] Chris M. Hostetter
http://svn.apache.org/viewvc?view=revision&revision=1451931

SOLR-4518: Improved CurrencyField error messages when attempting to use a 
Currency that is not supported by the current JVM


 CurrencyField needs better errors when encountering a currency that 
 java.util.Currency doesn't support
 --

 Key: SOLR-4518
 URL: https://issues.apache.org/jira/browse/SOLR-4518
 Project: Solr
  Issue Type: Bug
Reporter: Hoss Man
Assignee: Hoss Man
 Attachments: SOLR-4518.patch


 Got a generic IllegalArgumentException when trying to use a currency.xml 
 file referring to ZWL on java6 ... need to wrap those errors with more 
 context.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-4518) CurrencyField needs better errors when encountering a currency that java.util.Currency doesn't support

2013-03-02 Thread Hoss Man (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-4518?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hoss Man resolved SOLR-4518.


   Resolution: Fixed
Fix Version/s: 5.0
   4.2

Committed revision 1451931.
Committed revision 1451940.


 CurrencyField needs better errors when encountering a currency that 
 java.util.Currency doesn't support
 --

 Key: SOLR-4518
 URL: https://issues.apache.org/jira/browse/SOLR-4518
 Project: Solr
  Issue Type: Bug
Reporter: Hoss Man
Assignee: Hoss Man
 Fix For: 4.2, 5.0

 Attachments: SOLR-4518.patch


 Got a generic IllegalArgumentException when trying to use a currency.xml 
 file referring to ZWL on java6 ... need to wrap those errors with more 
 context.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-4518) CurrencyField needs better errors when encountering a currency that java.util.Currency doesn't support

2013-03-02 Thread Commit Tag Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4518?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13591512#comment-13591512
 ] 

Commit Tag Bot commented on SOLR-4518:
--

[branch_4x commit] Chris M. Hostetter
http://svn.apache.org/viewvc?view=revision&revision=1451940

SOLR-4518: Improved CurrencyField error messages when attempting to use a 
Currency that is not supported by the current JVM (merge r1451931)


 CurrencyField needs better errors when encountering a currency that 
 java.util.Currency doesn't support
 --

 Key: SOLR-4518
 URL: https://issues.apache.org/jira/browse/SOLR-4518
 Project: Solr
  Issue Type: Bug
Reporter: Hoss Man
Assignee: Hoss Man
 Fix For: 4.2, 5.0

 Attachments: SOLR-4518.patch


 Got a generic IllegalArgumentException when trying to use a currency.xml 
 file referring to ZWL on java6 ... need to wrap those errors with more 
 context.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



JettySolrRunner and oddly created indexes

2013-03-02 Thread Erick Erickson
Vague question, but I'm wondering if it rings any bells. I'm creating a
unit stress test for opening/closing cores. It's gone pretty well, except
under some conditions (when I'm testing discovery-based code, so
it's something new) I'm getting a bunch of indexes created like:

index26132624tmp

that are in a strange place, i.e. not in the data dir like I expect. Index
directories are _also_ created in the data dirs.

It feels like the JettySolrRunner is somehow making decisions I don't
expect about where to create indexes, anyone got any pointers as to where?

Or is it simpler. Is there an easy way to force JettySolrRunner to be
really stupid and just use the file system?

Thanks,
Erick


[JENKINS] Lucene-Solr-4.x-Linux (32bit/jdk1.8.0-ea-b78) - Build # 4514 - Failure!

2013-03-02 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-4.x-Linux/4514/
Java: 32bit/jdk1.8.0-ea-b78 -client -XX:+UseG1GC

All tests passed

Build Log:
[...truncated 6072 lines...]
[junit4:junit4] ERROR: JVM J0 ended with an exception, command line: 
/var/lib/jenkins/tools/java/32bit/jdk1.8.0-ea-b78/jre/bin/java -client 
-XX:+UseG1GC -XX:+HeapDumpOnOutOfMemoryError 
-XX:HeapDumpPath=/mnt/ssd/jenkins/workspace/Lucene-Solr-4.x-Linux/heapdumps 
-Dtests.prefix=tests -Dtests.seed=6262FBE886BA7902 -Xmx512M -Dtests.iters= 
-Dtests.verbose=false -Dtests.infostream=false -Dtests.codec=random 
-Dtests.postingsformat=random -Dtests.docvaluesformat=random 
-Dtests.locale=random -Dtests.timezone=random -Dtests.directory=random 
-Dtests.linedocsfile=europarl.lines.txt.gz -Dtests.luceneMatchVersion=4.2 
-Dtests.cleanthreads=perMethod 
-Djava.util.logging.config.file=/mnt/ssd/jenkins/workspace/Lucene-Solr-4.x-Linux/lucene/tools/junit4/logging.properties
 -Dtests.nightly=false -Dtests.weekly=false -Dtests.slow=true 
-Dtests.asserts.gracious=false -Dtests.multiplier=3 -DtempDir=. 
-Djava.io.tmpdir=. 
-Djunit4.tempDir=/mnt/ssd/jenkins/workspace/Lucene-Solr-4.x-Linux/lucene/build/analysis/uima/test/temp
 
-Dclover.db.dir=/mnt/ssd/jenkins/workspace/Lucene-Solr-4.x-Linux/lucene/build/clover/db
 -Djava.security.manager=org.apache.lucene.util.TestSecurityManager 
-Djava.security.policy=/mnt/ssd/jenkins/workspace/Lucene-Solr-4.x-Linux/lucene/tools/junit4/tests.policy
 -Dlucene.version=4.2-SNAPSHOT -Djetty.testMode=1 -Djetty.insecurerandom=1 
-Dsolr.directoryFactory=org.apache.solr.core.MockDirectoryFactory 
-Djava.awt.headless=true -Dfile.encoding=ISO-8859-1 -classpath 
/mnt/ssd/jenkins/workspace/Lucene-Solr-4.x-Linux/lucene/build/analysis/uima/classes/test:/mnt/ssd/jenkins/workspace/Lucene-Solr-4.x-Linux/lucene/build/test-framework/classes/java:/mnt/ssd/jenkins/workspace/Lucene-Solr-4.x-Linux/lucene/build/codecs/classes/java:/mnt/ssd/jenkins/workspace/Lucene-Solr-4.x-Linux/lucene/build/analysis/common/lucene-analyzers-common-4.2-SNAPSHOT.jar:/mnt/ssd/jenkins/workspace/Lucene-Solr-4.x-Linux/lucene/analysis/uima/src/test-files:/mnt/ssd/jenkins/workspace/Lucene-Solr-4.x-Linux/lucene/analysis/uima/lib/Tagger-2.3.1.jar:/mnt/ssd/jenkins/workspace/Lucene-Solr-4.x-Linux/lucene/analysis/uima/lib/WhitespaceTokenizer-2.3.1.jar:/mnt/ssd/jenkins/workspace/Lucene-Solr-4.x-Linux/lucene/analysis/uima/lib/uimaj-core-2.3.1.jar:/mnt/ssd/jenkins/workspace/Lucene-Solr-4.x-Linux/lucene/build/core/classes/java:/mnt/ssd/jenkins/workspace/Lucene-Solr-4.x-Linux/lucene/test-framework/lib/junit-4.10.jar:/mnt/ssd/jenkins/workspace/Lucene-Solr-4.x-Linux/lucene/test-framework/lib/randomizedtesting-runner-2.0.8.jar:/mnt/ssd/jenkins/workspace/Lucene-Solr-4.x-Linux/lucene/build/analysis/uima/classes/java:/var/lib/jenkins/tools/hudson.tasks.Ant_AntInstallation/ANT_1.8.2/lib/ant-launcher.jar:/var/lib/jenkins/.ant/lib/ivy-2.3.0.jar:/var/lib/jenkins/tools/hudson.tasks.Ant_AntInstallation/ANT_1.8.2/lib/ant-jai.jar:/var/lib/jenkins/tools/hudson.tasks.Ant_AntInstallation/ANT_1.8.2/lib/ant-swing.jar:/var/lib/jenkins/tools/hudson.tasks.Ant_AntInstallation/ANT_1.8.2/lib/ant-apache-oro.jar:/var/lib/jenkins/tools/hudson.tasks.Ant_AntInstallation/ANT_1.8.2/lib/ant-jmf.jar:/var/lib/jenkins/tools/hudson.tasks.Ant_AntInstallation/ANT_1.8.2/lib/ant-apache-xalan2.jar:/var/lib/jenkins/tools/hudson.tasks.Ant_AntInstallation/ANT_1.8.2/lib/ant-javamail.jar:/var/lib/jenkins/tools/hudson.tasks.Ant_AntInstallation/ANT_1.8.2/lib/ant-apache-resolver.jar:/var/lib/jenkins/tools/hudson.tasks.Ant_AntInstallation/ANT_1.8.2/lib/ant-testutil.jar:/var/lib/jenkins/tools/hudson.tasks.Ant_AntInstallation/ANT_1.8.2/lib/ant-commons-logging.jar:/var/lib/jenkins/tools/hudson.tasks.Ant_AntInstallation/ANT_1.8.2/lib/ant-apache-log4j.jar:/var/lib/jenkins/tools/hudson.tasks.Ant_AntInstallation/ANT_1.8.2/lib/ant-junit.jar:/var/lib/jenkins/tools/hudson.tasks.Ant_AntInstallation/ANT_1.8.2/lib/ant-jsch.jar:/var/lib/jenkins/tools/hudson.tasks.Ant_AntInstallation/ANT_1.8.2/lib/ant-commons-net.jar:/var/lib/jenkins/tools/hudson.tasks.Ant_AntInstallation/ANT_1.8.2/lib/ant-apache-bsf.jar:/var/lib/jenkins/tools/hudson.tasks.Ant_AntInstallation/ANT_1.8.2/lib/ant-jdepend.jar:/var/lib/jenkins/tools/hudson.tasks.Ant_AntInstallation/ANT_1.8.2/lib/ant-netrexx.jar:/var/lib/jenkins/tools/hudson.tasks.Ant_AntInstallation/ANT_1.8.2/lib/ant-apache-regexp.jar:/var/lib/jenkins/tools/hudson.tasks.Ant_AntInstallation/ANT_1.8.2/lib/ant-junit4.jar:/var/lib/jenkins/tools/hudson.tasks.Ant_AntInstallation/ANT_1.8.2/lib/ant.jar:/var/lib/jenkins/tools/hudson.tasks.Ant_AntInstallation/ANT_1.8.2/lib/ant-apache-bcel.jar:/var/lib/jenkins/tools/hudson.tasks.Ant_AntInstallation/ANT_1.8.2/lib/ant-antlr.jar:/var/lib/jenkins/tools/java/32bit/jdk1.8.0-ea-b78/lib/tools.jar:/var/lib/jenkins/.ivy2/cache/com.carrotsearch.randomizedtesting/junit4-ant/jars/junit4-ant-2.0.8.jar
 -ea:org.apache.lucene... -ea:org.apache.solr... 

RE: [JENKINS] Lucene-Solr-4.x-Linux (32bit/jdk1.8.0-ea-b78) - Build # 4514 - Failure!

2013-03-02 Thread Uwe Schindler
JDK 8 b78 seems to hang 30% of the runs in this UIMA test. The JVM is 
completely unresponsive and can only be killed with kill -9.

Uwe

-
Uwe Schindler
H.-H.-Meier-Allee 63, D-28213 Bremen
http://www.thetaphi.de
eMail: u...@thetaphi.de


 -Original Message-
 From: Policeman Jenkins Server [mailto:jenk...@thetaphi.de]
 Sent: Saturday, March 02, 2013 11:24 PM
 To: dev@lucene.apache.org; hoss...@apache.org
 Subject: [JENKINS] Lucene-Solr-4.x-Linux (32bit/jdk1.8.0-ea-b78) - Build #
 4514 - Failure!
 
 Build: http://jenkins.thetaphi.de/job/Lucene-Solr-4.x-Linux/4514/
 Java: 32bit/jdk1.8.0-ea-b78 -client -XX:+UseG1GC
 
 All tests passed
 
 Build Log:
 [...truncated 6072 lines...]
 [junit4:junit4] ERROR: JVM J0 ended with an exception, command line:
 /var/lib/jenkins/tools/java/32bit/jdk1.8.0-ea-b78/jre/bin/java -client -
 XX:+UseG1GC -XX:+HeapDumpOnOutOfMemoryError -
 XX:HeapDumpPath=/mnt/ssd/jenkins/workspace/Lucene-Solr-4.x-
 Linux/heapdumps -Dtests.prefix=tests -Dtests.seed=6262FBE886BA7902 -
 Xmx512M -Dtests.iters= -Dtests.verbose=false -Dtests.infostream=false -
 Dtests.codec=random -Dtests.postingsformat=random -
 Dtests.docvaluesformat=random -Dtests.locale=random -
 Dtests.timezone=random -Dtests.directory=random -
 Dtests.linedocsfile=europarl.lines.txt.gz -Dtests.luceneMatchVersion=4.2 -
 Dtests.cleanthreads=perMethod -
 Djava.util.logging.config.file=/mnt/ssd/jenkins/workspace/Lucene-Solr-4.x-
 Linux/lucene/tools/junit4/logging.properties -Dtests.nightly=false -
 Dtests.weekly=false -Dtests.slow=true -Dtests.asserts.gracious=false -
 Dtests.multiplier=3 -DtempDir=. -Djava.io.tmpdir=. -
 Djunit4.tempDir=/mnt/ssd/jenkins/workspace/Lucene-Solr-4.x-
 Linux/lucene/build/analysis/uima/test/temp -
 Dclover.db.dir=/mnt/ssd/jenkins/workspace/Lucene-Solr-4.x-
 Linux/lucene/build/clover/db -
 Djava.security.manager=org.apache.lucene.util.TestSecurityManager -
 Djava.security.policy=/mnt/ssd/jenkins/workspace/Lucene-Solr-4.x-
 Linux/lucene/tools/junit4/tests.policy -Dlucene.version=4.2-SNAPSHOT -
 Djetty.testMode=1 -Djetty.insecurerandom=1 -
 Dsolr.directoryFactory=org.apache.solr.core.MockDirectoryFactory -
 Djava.awt.headless=true -Dfile.encoding=ISO-8859-1 -classpath
 /mnt/ssd/jenkins/workspace/Lucene-Solr-4.x-
 Linux/lucene/build/analysis/uima/classes/test:/mnt/ssd/jenkins/workspace
 /Lucene-Solr-4.x-Linux/lucene/build/test-
 framework/classes/java:/mnt/ssd/jenkins/workspace/Lucene-Solr-4.x-
 Linux/lucene/build/codecs/classes/java:/mnt/ssd/jenkins/workspace/Lucen
 e-Solr-4.x-Linux/lucene/build/analysis/common/lucene-analyzers-common-
 4.2-SNAPSHOT.jar:/mnt/ssd/jenkins/workspace/Lucene-Solr-4.x-
 Linux/lucene/analysis/uima/src/test-
 files:/mnt/ssd/jenkins/workspace/Lucene-Solr-4.x-
 Linux/lucene/analysis/uima/lib/Tagger-
 2.3.1.jar:/mnt/ssd/jenkins/workspace/Lucene-Solr-4.x-
 Linux/lucene/analysis/uima/lib/WhitespaceTokenizer-
 2.3.1.jar:/mnt/ssd/jenkins/workspace/Lucene-Solr-4.x-
 Linux/lucene/analysis/uima/lib/uimaj-core-
 2.3.1.jar:/mnt/ssd/jenkins/workspace/Lucene-Solr-4.x-
 Linux/lucene/build/core/classes/java:/mnt/ssd/jenkins/workspace/Lucene-
 Solr-4.x-Linux/lucene/test-framework/lib/junit-
 4.10.jar:/mnt/ssd/jenkins/workspace/Lucene-Solr-4.x-Linux/lucene/test-
 framework/lib/randomizedtesting-runner-
 2.0.8.jar:/mnt/ssd/jenkins/workspace/Lucene-Solr-4.x-
 Linux/lucene/build/analysis/uima/classes/java:/var/lib/jenkins/tools/hudson
 .tasks.Ant_AntInstallation/ANT_1.8.2/lib/ant-
 launcher.jar:/var/lib/jenkins/.ant/lib/ivy-
 2.3.0.jar:/var/lib/jenkins/tools/hudson.tasks.Ant_AntInstallation/ANT_1.8.2/
 lib/ant-
 jai.jar:/var/lib/jenkins/tools/hudson.tasks.Ant_AntInstallation/ANT_1.8.2/lib
 /ant-
 swing.jar:/var/lib/jenkins/tools/hudson.tasks.Ant_AntInstallation/ANT_1.8.2
 /lib/ant-apache-
 oro.jar:/var/lib/jenkins/tools/hudson.tasks.Ant_AntInstallation/ANT_1.8.2/li
 b/ant-
 jmf.jar:/var/lib/jenkins/tools/hudson.tasks.Ant_AntInstallation/ANT_1.8.2/li
 b/ant-apache-
 xalan2.jar:/var/lib/jenkins/tools/hudson.tasks.Ant_AntInstallation/ANT_1.8.
 2/lib/ant-
 javamail.jar:/var/lib/jenkins/tools/hudson.tasks.Ant_AntInstallation/ANT_1.
 8.2/lib/ant-apache-
 resolver.jar:/var/lib/jenkins/tools/hudson.tasks.Ant_AntInstallation/ANT_1.
 8.2/lib/ant-
 testutil.jar:/var/lib/jenkins/tools/hudson.tasks.Ant_AntInstallation/ANT_1.8.
 2/lib/ant-commons-
 logging.jar:/var/lib/jenkins/tools/hudson.tasks.Ant_AntInstallation/ANT_1.8.
 2/lib/ant-apache-
 log4j.jar:/var/lib/jenkins/tools/hudson.tasks.Ant_AntInstallation/ANT_1.8.2/
 lib/ant-
 junit.jar:/var/lib/jenkins/tools/hudson.tasks.Ant_AntInstallation/ANT_1.8.2/
 lib/ant-
 jsch.jar:/var/lib/jenkins/tools/hudson.tasks.Ant_AntInstallation/ANT_1.8.2/li
 b/ant-commons-
 net.jar:/var/lib/jenkins/tools/hudson.tasks.Ant_AntInstallation/ANT_1.8.2/li
 b/ant-apache-
 bsf.jar:/var/lib/jenkins/tools/hudson.tasks.Ant_AntInstallation/ANT_1.8.2/li
 b/ant-
 jdepend.jar:/var/lib/jenkins/tools/hudson.tasks.Ant_AntInstallation/ANT_1.
 8.2/lib/ant-
 

[jira] [Created] (SOLR-4523) Show if fields have docvalues or highlighting offsets in admin UI

2013-03-02 Thread Robert Muir (JIRA)
Robert Muir created SOLR-4523:
-

 Summary: Show if fields have docvalues or highlighting offsets in 
admin UI
 Key: SOLR-4523
 URL: https://issues.apache.org/jira/browse/SOLR-4523
 Project: Solr
  Issue Type: Improvement
  Components: web gui
Reporter: Robert Muir
 Attachments: SOLR-4523.patch

Currently it's not reported if a field has docvalues or stores offsets with 
positions; we should show these.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-4523) Show if fields have docvalues or highlighting offsets in admin UI

2013-03-02 Thread Robert Muir (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-4523?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Muir updated SOLR-4523:
--

Attachment: SOLR-4523.patch

 Show if fields have docvalues or highlighting offsets in admin UI
 -

 Key: SOLR-4523
 URL: https://issues.apache.org/jira/browse/SOLR-4523
 Project: Solr
  Issue Type: Improvement
  Components: web gui
Reporter: Robert Muir
 Attachments: SOLR-4523.patch


 Currently it's not reported if a field has docvalues or stores offsets with 
 positions; we should show these.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: JettySolrRunner and oddly created indexes

2013-03-02 Thread Erick Erickson
OK, this seems to work, doing it in @BeforeClass etc., any better ways to
do this?

savedFactory = System.getProperty("solr.directoryFactory");
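
Roughly the whole thing, for the record (StandardDirectoryFactory here is just
one example of a plain filesystem-backed factory, and the usual org.junit
imports are assumed):

private static String savedFactory;

@BeforeClass
public static void beforeClass() {
  // remember whatever the test framework set, then force a filesystem-backed factory
  savedFactory = System.getProperty("solr.directoryFactory");
  System.setProperty("solr.directoryFactory",
      "org.apache.solr.core.StandardDirectoryFactory");
}

@AfterClass
public static void afterClass() {
  // restore the previous setting so other tests aren't affected
  if (savedFactory == null) {
    System.clearProperty("solr.directoryFactory");
  } else {
    System.setProperty("solr.directoryFactory", savedFactory);
  }
}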


On Sat, Mar 2, 2013 at 4:55 PM, Erick Erickson erickerick...@gmail.com wrote:

 Vague question, but I'm wondering if it rings any bells. I'm creating a
 unit stress test for opening/closing cores. It's gone pretty well, except
 under some conditions (when I'm testing discovery-based code, so
 it's something new) I'm getting a bunch of indexes created like:

 index26132624tmp

 that are in a strange place, i.e. not in the data dir like I expect. Index
 directories are _also_ created in the data dirs.

 It feels like the JettySolrRunner is somehow making decisions I don't
 expect about where to create indexes, anyone got any pointers as to where?

 Or is it simpler. Is there an easy way to force JettySolrRunner to be
 really stupid and just use the file system?

 Thanks,
 Erick



[jira] [Updated] (SOLR-4138) currency field doesn't work with functions (ie: isn't compatible with frange query)

2013-03-02 Thread Hoss Man (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-4138?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hoss Man updated SOLR-4138:
---

Attachment: SOLR-4138.patch

Updated previous patch to work with trunk, and to include a new 
{{currency(field_name,[currency_code])}} function like the one i hypothesized 
in my previous comment.

There are still several nocommits related to the docs to make it clear what's 
what -- but the tests all pass and the code seems complete to me.
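
For example (field name and bounds invented), a filter like this should now work:

{noformat}
fq={!frange l=10 u=100}currency(price,USD)
{noformat}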

 currency field doesn't work with functions (ie: isn't compatible with frange 
 query)
 ---

 Key: SOLR-4138
 URL: https://issues.apache.org/jira/browse/SOLR-4138
 Project: Solr
  Issue Type: Bug
  Components: search
Affects Versions: 4.0
Reporter: Grzegorz Sobczyk
 Attachments: SOLR-4135-test.patch, SOLR-4138.patch, SOLR-4138.patch


 In general, using CurrencyField with FunctionQueries doesn't work
 In particular, as originally reported...
 Filtering using {!frange} syntax doesn't work properly (rather, at all).

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Assigned] (SOLR-4138) currency field doesn't work with functions (ie: isn't compatible with frange query)

2013-03-02 Thread Hoss Man (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-4138?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hoss Man reassigned SOLR-4138:
--

Assignee: Hoss Man

 currency field doesn't work with functions (ie: isn't compatible with frange 
 query)
 ---

 Key: SOLR-4138
 URL: https://issues.apache.org/jira/browse/SOLR-4138
 Project: Solr
  Issue Type: Bug
  Components: search
Affects Versions: 4.0
Reporter: Grzegorz Sobczyk
Assignee: Hoss Man
 Attachments: SOLR-4135-test.patch, SOLR-4138.patch, SOLR-4138.patch


 In general, using CurrencyField with FunctionQueries doesn't work
 In particular, as originally reported...
 Filtering using {!frange} syntax doesn't work properly (rather, at all).

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS-MAVEN] Lucene-Solr-Maven-trunk #788: POMs out of sync

2013-03-02 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Maven-trunk/788/

3 tests failed.
FAILED:  
org.apache.solr.cloud.RecoveryZkTest.org.apache.solr.cloud.RecoveryZkTest

Error Message:
Resource in scope SUITE failed to close. Resource was registered from thread 
Thread[id=1199, name=coreLoadExecutor-861-thread-1, state=RUNNABLE, 
group=TGRP-RecoveryZkTest], registration stack trace below.

Stack Trace:
com.carrotsearch.randomizedtesting.ResourceDisposalError: Resource in scope 
SUITE failed to close. Resource was registered from thread Thread[id=1199, 
name=coreLoadExecutor-861-thread-1, state=RUNNABLE, group=TGRP-RecoveryZkTest], 
registration stack trace below.
at org.junit.Assert.fail(Assert.java:93)
at 
org.apache.lucene.util.CloseableDirectory.close(CloseableDirectory.java:47)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$2$1.apply(RandomizedRunner.java:602)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$2$1.apply(RandomizedRunner.java:599)
at 
com.carrotsearch.randomizedtesting.RandomizedContext.closeResources(RandomizedContext.java:167)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$2.afterAlways(RandomizedRunner.java:615)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:43)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:358)
at java.lang.Thread.run(Thread.java:679)


FAILED:  
org.apache.solr.cloud.RecoveryZkTest.org.apache.solr.cloud.RecoveryZkTest

Error Message:
Resource in scope SUITE failed to close. Resource was registered from thread 
Thread[id=1201, name=RecoveryThread, state=RUNNABLE, 
group=TGRP-RecoveryZkTest], registration stack trace below.

Stack Trace:
com.carrotsearch.randomizedtesting.ResourceDisposalError: Resource in scope 
SUITE failed to close. Resource was registered from thread Thread[id=1201, 
name=RecoveryThread, state=RUNNABLE, group=TGRP-RecoveryZkTest], registration 
stack trace below.
at org.junit.Assert.fail(Assert.java:93)
at 
org.apache.lucene.util.CloseableDirectory.close(CloseableDirectory.java:47)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$2$1.apply(RandomizedRunner.java:602)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$2$1.apply(RandomizedRunner.java:599)
at 
com.carrotsearch.randomizedtesting.RandomizedContext.closeResources(RandomizedContext.java:167)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$2.afterAlways(RandomizedRunner.java:615)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:43)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:358)
at java.lang.Thread.run(Thread.java:679)


FAILED:  org.apache.solr.cloud.BasicDistributedZk2Test.testDistribSearch

Error Message:
No registered leader was found, collection:collection1 slice:shard1

Stack Trace:
org.apache.solr.common.SolrException: No registered leader was found, 
collection:collection1 slice:shard1
at 
__randomizedtesting.SeedInfo.seed([2C58C15991FF11F6:ADBE4F41E6A071CA]:0)
at 
org.apache.solr.common.cloud.ZkStateReader.getLeaderRetry(ZkStateReader.java:430)
at 
org.apache.solr.cloud.BasicDistributedZk2Test.brindDownShardIndexSomeDocsAndRecover(BasicDistributedZk2Test.java:295)
at 
org.apache.solr.cloud.BasicDistributedZk2Test.doTest(BasicDistributedZk2Test.java:116)




Build Log:
[...truncated 22656 lines...]



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

[jira] [Commented] (SOLR-4523) Show if fields have docvalues or highlighting offsets in admin UI

2013-03-02 Thread Steve Rowe (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4523?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13591578#comment-13591578
 ] 

Steve Rowe commented on SOLR-4523:
--

+1

 Show if fields have docvalues or highlighting offsets in admin UI
 -

 Key: SOLR-4523
 URL: https://issues.apache.org/jira/browse/SOLR-4523
 Project: Solr
  Issue Type: Improvement
  Components: web gui
Reporter: Robert Muir
 Attachments: SOLR-4523.patch


 Currently it's not reported if a field has docvalues or stores offsets with 
 positions; we should show these.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-3798) copyField logic in LukeRequestHandler is primitive, doesn't work well with dynamicFields

2013-03-02 Thread Steve Rowe (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-3798?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13591583#comment-13591583
 ] 

Steve Rowe commented on SOLR-3798:
--

Returning to the subject of this issue :) ... with the previously attached 
patch, I can see dynamic field copySource info in the response from 
{{/admin/luke?show=schema}}, but not in all combinations of possible 
{{copyField source}} and {{dest}} value types.  

The current situation, with the patch applied:

||case #||{{source}} value type||{{dest}} value type||Example||In {{/admin/luke?show=schema}} response?||Schema parse succeeds?||
|1|{color:red}{{field name}}{color}|{color:red}{{field name}}{color}|{{copyField {color:red}source=title{color} {color:red}dest=text{color}/}}|Yes|Yes|
|2|{color:red}{{field name}}{color}|{color:green}{{dynamicField name}}{color}|{{copyField {color:red}source=title{color} {color:green}dest=\*_s{color}/}}|N/A|No: copyField only supports a dynamic destination if the source is also dynamic|
|3|{color:red}{{field name}}{color}|{color:blue}subset pattern{color}|{{copyField {color:red}source=title{color} {color:blue}dest=\*_dest_sub_s{color}/}}|N/A|No: copyField only supports a dynamic destination if the source is also dynamic|
|4|{color:red}{{field name}}{color}|{color:orange}subset pattern no asterisk{color}|{{copyField {color:red}source=title{color} {color:orange}dest=dest_sub_no_ast_s{color}/}}|Yes|Yes|
| |
|5|{color:green}{{dynamicField name}}{color}|{color:red}{{field name}}{color}|{{copyField {color:green}source=\*_i{color} {color:red}dest=title{color}/}}|Yes|Yes|
|6|{color:green}{{dynamicField name}}{color}|{color:green}{{dynamicField name}}{color}|{{copyField {color:green}source=\*_i{color} {color:green}dest=\*_s{color}/}}|Yes|Yes|
|7|{color:green}{{dynamicField name}}{color}|{color:blue}subset pattern{color}|{{copyField {color:green}source=\*_i{color} {color:blue}dest=\*_dest_sub_s{color}/}}|N/A|No: copyField dynamic destination must match a dynamicField.|
|8|{color:green}{{dynamicField name}}{color}|{color:orange}subset pattern no asterisk{color}|{{copyField {color:green}source=\*_i{color} {color:orange}dest=dest_sub_no_ast_s{color}/}}|Yes|Yes|
| |
|9|{color:blue}subset pattern{color}|{color:red}{{field name}}{color}|{{copyField {color:blue}source=\*_src_sub_i{color} {color:red}dest=title{color}/}}|Yes|Yes|
|10|{color:blue}subset pattern{color}|{color:green}{{dynamicField name}}{color}|{{copyField {color:blue}source=\*_src_sub_i{color} {color:green}dest=\*_s{color}/}}|Yes|Yes|
|11|{color:blue}subset pattern{color}|{color:blue}subset pattern{color}|{{copyField {color:blue}source=\*_src_sub_i{color} {color:blue}dest=\*_dest_sub_s{color}/}}|N/A|No: copyField dynamic destination must match a dynamicField.|
|12|{color:blue}subset pattern{color}|{color:orange}subset pattern no asterisk{color}|{{copyField {color:blue}source=\*_src_sub_i{color} {color:orange}dest=dest_sub_no_ast_s{color}/}}|No|Yes|
| |
|13|{color:orange}subset pattern no asterisk{color}|{color:red}{{field name}}{color}|{{copyField {color:orange}source=src_sub_no_ast_i{color} {color:red}dest=title{color}/}}|Yes|Yes|
|14|{color:orange}subset pattern no asterisk{color}|{color:green}{{dynamicField name}}{color}|{{copyField {color:orange}source=src_sub_no_ast_i{color} {color:green}dest=\*_s{color}/}}|N/A|No: copyField only supports a dynamic destination if the source is also dynamic|
|15|{color:orange}subset pattern no asterisk{color}|{color:blue}subset pattern{color}|{{copyField {color:orange}source=src_sub_no_ast_i{color} {color:blue}dest=\*_dest_sub_s{color}/}}|N/A|No: copyField only supports a dynamic destination if the source is also dynamic|
|16|{color:orange}subset pattern no asterisk{color}|{color:orange}subset pattern no asterisk{color}|{{copyField {color:orange}source=src_sub_no_ast_i{color} {color:orange}dest=dest_sub_no_ast_s{color}/}}|No|Yes|

Hoss pointed out that cases 2 and 3 are expected failures, since Solr doesn't 
have a source name template to use when generating the destination field name.

However, I think it's a bug that cases 7, 11, 14 and 15 cause Solr to puke - 
there's no reason I can see to disallow them.

Cases 12 and 16 are directly relevant to this issue, since they are 
successfully parsed, but aren't returned in LukeRequestHandler's report.



 copyField logic in LukeRequestHandler is primitive, doesn't work well with 
 dynamicFields
 

 Key: SOLR-3798
 URL: https://issues.apache.org/jira/browse/SOLR-3798
 Project: Solr
  Issue Type: Bug
Reporter: Hoss Man
 Attachments: SOLR-3798.patch


 looking into SOLR-3795 i realized there is a much bigger problem with how 
 LukeRequestHandler tries to get copyfield info for fields and 

[jira] [Commented] (SOLR-4523) Show if fields have docvalues or highlighting offsets in admin UI

2013-03-02 Thread Commit Tag Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4523?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13591584#comment-13591584
 ] 

Commit Tag Bot commented on SOLR-4523:
--

[trunk commit] Robert Muir
http://svn.apache.org/viewvc?view=revision&revision=1451970

SOLR-4523: Show if fields have docvalues or highlighting offsets in admin UI


 Show if fields have docvalues or highlighting offsets in admin UI
 -

 Key: SOLR-4523
 URL: https://issues.apache.org/jira/browse/SOLR-4523
 Project: Solr
  Issue Type: Improvement
  Components: web gui
Reporter: Robert Muir
 Attachments: SOLR-4523.patch


 Currently it's not reported if a field has docvalues or stores offsets with 
 positions; we should show these.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-4523) Show if fields have docvalues or highlighting offsets in admin UI

2013-03-02 Thread Robert Muir (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-4523?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Muir resolved SOLR-4523.
---

   Resolution: Fixed
Fix Version/s: 5.0
   4.2

 Show if fields have docvalues or highlighting offsets in admin UI
 -

 Key: SOLR-4523
 URL: https://issues.apache.org/jira/browse/SOLR-4523
 Project: Solr
  Issue Type: Improvement
  Components: web gui
Reporter: Robert Muir
 Fix For: 4.2, 5.0

 Attachments: SOLR-4523.patch


 Currently it is not reported whether a field has docvalues or stores offsets 
 with positions; we should show these.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-4523) Show if fields have docvalues or highlighting offsets in admin UI

2013-03-02 Thread Commit Tag Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4523?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13591590#comment-13591590
 ] 

Commit Tag Bot commented on SOLR-4523:
--

[branch_4x commit] Robert Muir
http://svn.apache.org/viewvc?view=revisionrevision=1451971

SOLR-4523: Show if fields have docvalues or highlighting offsets in admin UI


 Show if fields have docvalues or highlighting offsets in admin UI
 -

 Key: SOLR-4523
 URL: https://issues.apache.org/jira/browse/SOLR-4523
 Project: Solr
  Issue Type: Improvement
  Components: web gui
Reporter: Robert Muir
 Fix For: 4.2, 5.0

 Attachments: SOLR-4523.patch


 Currently it is not reported whether a field has docvalues or stores offsets 
 with positions; we should show these.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-4210) Requests to a Collection that does not exist on the receiving node should be proxied to a suitable node.

2013-03-02 Thread Commit Tag Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4210?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13591593#comment-13591593
 ] 

Commit Tag Bot commented on SOLR-4210:
--

[trunk commit] Mark Robert Miller
http://svn.apache.org/viewvc?view=revisionrevision=1451987

SOLR-4210: Fix it


 Requests to a Collection that does not exist on the receiving node should be 
 proxied to a suitable node.
 

 Key: SOLR-4210
 URL: https://issues.apache.org/jira/browse/SOLR-4210
 Project: Solr
  Issue Type: Improvement
  Components: SolrCloud
Affects Versions: 4.0-BETA, 4.0
Reporter: Po Rui
Assignee: Mark Miller
 Fix For: 4.2, 5.0

 Attachments: SOLR-4210.patch, SOLR-4210.patch, SOLR-4210.patch, 
 SOLR-4210.patch


 Searching only checks the local collection or core and doesn't look at other 
 nodes. E.g. a cluster has 4 nodes: nodes 1, 2 and 3 host collection1, and 
 nodes 2, 3 and 4 host collection2. Sending a query for collection1 to node 4 
 will fail. 
 Searching is incomplete in this respect; it is a TODO in 
 SolrDispatchFilter (line 220).
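
To make the scenario concrete, a small SolrJ sketch (hostnames, ports and collection names are illustrative) of the failing request: node4 hosts no replica of collection1, and before this change the query errors out instead of being proxied.

import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.SolrServer;
import org.apache.solr.client.solrj.impl.HttpSolrServer;
import org.apache.solr.client.solrj.response.QueryResponse;

public class QueryThroughNonHostingNode {
  public static void main(String[] args) throws Exception {
    // node4 carries only collection2, but the request addresses collection1 through it;
    // with this fix the node should proxy the request to a node that does host collection1
    SolrServer node4 = new HttpSolrServer("http://node4:8983/solr/collection1");

    QueryResponse rsp = node4.query(new SolrQuery("*:*"));
    System.out.println("numFound=" + rsp.getResults().getNumFound());
    node4.shutdown();
  }
}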

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-4210) Requests to a Collection that does not exist on the receiving node should be proxied to a suitable node.

2013-03-02 Thread Commit Tag Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4210?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13591594#comment-13591594
 ] 

Commit Tag Bot commented on SOLR-4210:
--

[branch_4x commit] Mark Robert Miller
http://svn.apache.org/viewvc?view=revisionrevision=1451989

SOLR-4210: Fix it


 Requests to a Collection that does not exist on the receiving node should be 
 proxied to a suitable node.
 

 Key: SOLR-4210
 URL: https://issues.apache.org/jira/browse/SOLR-4210
 Project: Solr
  Issue Type: Improvement
  Components: SolrCloud
Affects Versions: 4.0-BETA, 4.0
Reporter: Po Rui
Assignee: Mark Miller
 Fix For: 4.2, 5.0

 Attachments: SOLR-4210.patch, SOLR-4210.patch, SOLR-4210.patch, 
 SOLR-4210.patch


 Searching only checks the local collection or core and doesn't look at other 
 nodes. E.g. a cluster has 4 nodes: nodes 1, 2 and 3 host collection1, and 
 nodes 2, 3 and 4 host collection2. Sending a query for collection1 to node 4 
 will fail. 
 Searching is incomplete in this respect; it is a TODO in 
 SolrDispatchFilter (line 220).

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: [JENKINS-MAVEN] Lucene-Solr-Maven-trunk #788: POMs out of sync

2013-03-02 Thread Mark Miller
Intermittent. Looking into it.

- Mark

On Mar 2, 2013, at 7:13 PM, Apache Jenkins Server jenk...@builds.apache.org 
wrote:

 Build: https://builds.apache.org/job/Lucene-Solr-Maven-trunk/788/
 
 3 tests failed.
 FAILED:  
 org.apache.solr.cloud.RecoveryZkTest.org.apache.solr.cloud.RecoveryZkTest
 
 Error Message:
 Resource in scope SUITE failed to close. Resource was registered from thread 
 Thread[id=1199, name=coreLoadExecutor-861-thread-1, state=RUNNABLE, 
 group=TGRP-RecoveryZkTest], registration stack trace below.
 
 Stack Trace:
 com.carrotsearch.randomizedtesting.ResourceDisposalError: Resource in scope 
 SUITE failed to close. Resource was registered from thread Thread[id=1199, 
 name=coreLoadExecutor-861-thread-1, state=RUNNABLE, 
 group=TGRP-RecoveryZkTest], registration stack trace below.
   at org.junit.Assert.fail(Assert.java:93)
   at 
 org.apache.lucene.util.CloseableDirectory.close(CloseableDirectory.java:47)
   at 
 com.carrotsearch.randomizedtesting.RandomizedRunner$2$1.apply(RandomizedRunner.java:602)
   at 
 com.carrotsearch.randomizedtesting.RandomizedRunner$2$1.apply(RandomizedRunner.java:599)
   at 
 com.carrotsearch.randomizedtesting.RandomizedContext.closeResources(RandomizedContext.java:167)
   at 
 com.carrotsearch.randomizedtesting.RandomizedRunner$2.afterAlways(RandomizedRunner.java:615)
   at 
 com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:43)
   at 
 com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:358)
   at java.lang.Thread.run(Thread.java:679)
 
 
 FAILED:  
 org.apache.solr.cloud.RecoveryZkTest.org.apache.solr.cloud.RecoveryZkTest
 
 Error Message:
 Resource in scope SUITE failed to close. Resource was registered from thread 
 Thread[id=1201, name=RecoveryThread, state=RUNNABLE, 
 group=TGRP-RecoveryZkTest], registration stack trace below.
 
 Stack Trace:
 com.carrotsearch.randomizedtesting.ResourceDisposalError: Resource in scope 
 SUITE failed to close. Resource was registered from thread Thread[id=1201, 
 name=RecoveryThread, state=RUNNABLE, group=TGRP-RecoveryZkTest], registration 
 stack trace below.
   at org.junit.Assert.fail(Assert.java:93)
   at 
 org.apache.lucene.util.CloseableDirectory.close(CloseableDirectory.java:47)
   at 
 com.carrotsearch.randomizedtesting.RandomizedRunner$2$1.apply(RandomizedRunner.java:602)
   at 
 com.carrotsearch.randomizedtesting.RandomizedRunner$2$1.apply(RandomizedRunner.java:599)
   at 
 com.carrotsearch.randomizedtesting.RandomizedContext.closeResources(RandomizedContext.java:167)
   at 
 com.carrotsearch.randomizedtesting.RandomizedRunner$2.afterAlways(RandomizedRunner.java:615)
   at 
 com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:43)
   at 
 com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:358)
   at java.lang.Thread.run(Thread.java:679)
 
 
 FAILED:  org.apache.solr.cloud.BasicDistributedZk2Test.testDistribSearch
 
 Error Message:
 No registered leader was found, collection:collection1 slice:shard1
 
 Stack Trace:
 org.apache.solr.common.SolrException: No registered leader was found, 
 collection:collection1 slice:shard1
   at 
 __randomizedtesting.SeedInfo.seed([2C58C15991FF11F6:ADBE4F41E6A071CA]:0)
   at 
 org.apache.solr.common.cloud.ZkStateReader.getLeaderRetry(ZkStateReader.java:430)
   at 
 org.apache.solr.cloud.BasicDistributedZk2Test.brindDownShardIndexSomeDocsAndRecover(BasicDistributedZk2Test.java:295)
   at 
 org.apache.solr.cloud.BasicDistributedZk2Test.doTest(BasicDistributedZk2Test.java:116)
 
 
 
 
 Build Log:
 [...truncated 22656 lines...]
 
 
 
 -
 To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
 For additional commands, e-mail: dev-h...@lucene.apache.org


-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-4511) Repeater doesn't return correct index version to slaves

2013-03-02 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4511?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13591598#comment-13591598
 ] 

Mark Miller commented on SOLR-4511:
---

Great, thanks for the info.

 Repeater doesn't return correct index version to slaves
 ---

 Key: SOLR-4511
 URL: https://issues.apache.org/jira/browse/SOLR-4511
 Project: Solr
  Issue Type: Bug
  Components: replication (java)
Affects Versions: 4.1
Reporter: Raúl Grande
Assignee: Mark Miller
 Fix For: 4.2, 5.0

 Attachments: o8uzad.jpg, SOLR-4511.patch


 Related to SOLR-4471. I have a master-repeater-2-slaves architecture. 
 Replication between the master and the repeater works fine, but the slaves 
 aren't able to replicate because their master (the repeater node) returns an 
 old index version, even though the admin UI on the repeater shows the correct 
 version.
 When I request http://localhost:17045/solr/replication?command=indexversion 
 the response is <long name="generation">29037</long> when it should be 29042.
 If I restart the repeater node this URL returns the correct index version, 
 but after a while it fails again.
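
For anyone reproducing this, a small SolrJ sketch of the same check a slave performs, assuming the repeater URL above and the indexversion/generation keys ReplicationHandler returns; polling this in a loop is an easy way to watch the reported version go stale.

import org.apache.solr.client.solrj.SolrServer;
import org.apache.solr.client.solrj.impl.HttpSolrServer;
import org.apache.solr.client.solrj.request.QueryRequest;
import org.apache.solr.common.params.ModifiableSolrParams;
import org.apache.solr.common.util.NamedList;

public class IndexVersionProbe {
  public static void main(String[] args) throws Exception {
    SolrServer repeater = new HttpSolrServer("http://localhost:17045/solr");

    ModifiableSolrParams params = new ModifiableSolrParams();
    params.set("command", "indexversion");       // same command a slave polls with

    QueryRequest req = new QueryRequest(params);
    req.setPath("/replication");

    NamedList<Object> rsp = repeater.request(req);
    System.out.println("indexversion=" + rsp.get("indexversion")
        + " generation=" + rsp.get("generation"));
    repeater.shutdown();
  }
}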

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-4401) Move the stress test in SOLR-4196 into a junit test

2013-03-02 Thread Erick Erickson (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-4401?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erick Erickson updated SOLR-4401:
-

Attachment: SOLR-4401.patch

 Move the stress test in SOLR-4196 into a junit test
 ---

 Key: SOLR-4401
 URL: https://issues.apache.org/jira/browse/SOLR-4401
 Project: Solr
  Issue Type: Test
Affects Versions: 4.2, 5.0
Reporter: Erick Erickson
Assignee: Erick Erickson
Priority: Minor
 Fix For: 4.2, 5.0

 Attachments: SOLR-4401.patch, StressTest.zip


 As part of SOLR-4196, I created a stress test process for rapidly opening and 
 closing cores. It'd probably be useful to make this into a JUnit test that 
 runs nightly (it needs some time, on the order of minutes, to show anything). 
 Typical failures occur about 20 minutes into the run, but occasionally they 
 show up sooner.
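
Roughly what such a churn loop looks like if driven over HTTP via CoreAdmin; the core names, iteration count and instanceDir here are made up, and the attached test presumably drives cores in-JVM rather than through SolrJ.

import org.apache.solr.client.solrj.SolrServer;
import org.apache.solr.client.solrj.impl.HttpSolrServer;
import org.apache.solr.client.solrj.request.CoreAdminRequest;

public class CoreChurnSketch {
  public static void main(String[] args) throws Exception {
    // CoreAdmin handler of a running instance
    SolrServer admin = new HttpSolrServer("http://localhost:8983/solr");

    for (int i = 0; i < 1000; i++) {
      String name = "stress_core_" + i;
      // create a core from an existing instanceDir, then unload it right away
      CoreAdminRequest.createCore(name, "collection1", admin);
      CoreAdminRequest.unloadCore(name, admin);
    }
    admin.shutdown();
  }
}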

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-4401) Move the stress test in SOLR-4196 into a junit test

2013-03-02 Thread Commit Tag Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4401?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13591617#comment-13591617
 ] 

Commit Tag Bot commented on SOLR-4401:
--

[trunk commit] Erick Erickson
http://svn.apache.org/viewvc?view=revisionrevision=1451997

SOLR-4401 adding rapidly opening/closing cores to unit tests


 Move the stress test in SOLR-4196 into a junit test
 ---

 Key: SOLR-4401
 URL: https://issues.apache.org/jira/browse/SOLR-4401
 Project: Solr
  Issue Type: Test
Affects Versions: 4.2, 5.0
Reporter: Erick Erickson
Assignee: Erick Erickson
Priority: Minor
 Fix For: 4.2, 5.0

 Attachments: SOLR-4401.patch, StressTest.zip


 As part of SOLR-4196, I created a stress test process for rapidly opening and 
 closing cores. It'd probably be useful to make this into a JUnit test that 
 runs nightly (it needs some time, on the order of minutes, to show anything). 
 Typical failures occur about 20 minutes into the run, but occasionally they 
 show up sooner.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-4401) Move the stress test in SOLR-4196 into a junit test

2013-03-02 Thread Commit Tag Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4401?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13591619#comment-13591619
 ] 

Commit Tag Bot commented on SOLR-4401:
--

[trunk commit] Erick Erickson
http://svn.apache.org/viewvc?view=revisionrevision=1451998

took out some debugging info I left in by mistake when checking in SOLR-4401


 Move the stress test in SOLR-4196 into a junit test
 ---

 Key: SOLR-4401
 URL: https://issues.apache.org/jira/browse/SOLR-4401
 Project: Solr
  Issue Type: Test
Affects Versions: 4.2, 5.0
Reporter: Erick Erickson
Assignee: Erick Erickson
Priority: Minor
 Fix For: 4.2, 5.0

 Attachments: SOLR-4401.patch, StressTest.zip


 As part of SOLR-4196, I created a stress test process for rapidly opening and 
 closing cores. It'd probably be useful to make this into a JUnit test that 
 runs nightly (it needs some time, on the order of minutes, to show anything). 
 Typical failures occur about 20 minutes into the run, but occasionally they 
 show up sooner.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org