[jira] [Created] (LUCENE-5063) Allow GrowableWriter to store negative values

2013-06-18 Thread Adrien Grand (JIRA)
Adrien Grand created LUCENE-5063:


 Summary: Allow GrowableWriter to store negative values
 Key: LUCENE-5063
 URL: https://issues.apache.org/jira/browse/LUCENE-5063
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Adrien Grand
Assignee: Adrien Grand
Priority: Trivial
 Fix For: 4.4


For some use-cases, it would be convenient to be able to store negative values 
in a GrowableWriter, for example to use it in FieldCache: The first term is the 
minimum value and one could use a GrowableWriter to store deltas between this 
minimum value and the current value. (The need for negative values comes from 
the fact that maxValue - minValue might be larger than Long.MAX_VALUE.)
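The overflow the description mentions can be sketched in plain Java (this is an illustration, not Lucene code): the delta can exceed Long.MAX_VALUE and wrap to a negative long, but two's-complement wrap-around makes the round trip exact, so a writer that accepts negative (effectively unsigned 64-bit) values still decodes correctly:

```java
// Sketch (not Lucene code): delta encoding against a minimum value.
// maxValue - minValue can exceed Long.MAX_VALUE and wrap negative, but
// two's-complement arithmetic makes min + delta recover the original value.
public class DeltaOverflowDemo {
    static long encode(long value, long min) {
        return value - min; // may wrap to a negative long
    }

    static long decode(long delta, long min) {
        return min + delta; // wraps back, recovering the original value
    }

    public static void main(String[] args) {
        long min = -10L;
        long value = Long.MAX_VALUE - 5;
        long delta = encode(value, min);
        System.out.println(delta < 0);                   // the delta wrapped negative
        System.out.println(decode(delta, min) == value); // yet the round trip is exact
    }
}
```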

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-5030) FuzzySuggester has to operate FSTs of Unicode-letters, not UTF-8, to work correctly for 1-byte (like English) and multi-byte (non-Latin) letters

2013-06-18 Thread Artem Lukanin (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5030?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13686447#comment-13686447
 ] 

Artem Lukanin commented on LUCENE-5030:
---

You already have
private static final int PAYLOAD_SEP = '\u001f';
in AnalyzingSuggester.

 FuzzySuggester has to operate FSTs of Unicode-letters, not UTF-8, to work 
 correctly for 1-byte (like English) and multi-byte (non-Latin) letters
 

 Key: LUCENE-5030
 URL: https://issues.apache.org/jira/browse/LUCENE-5030
 Project: Lucene - Core
  Issue Type: Bug
Affects Versions: 4.3
Reporter: Artem Lukanin
 Attachments: nonlatin_fuzzySuggester1.patch, 
 nonlatin_fuzzySuggester2.patch, nonlatin_fuzzySuggester3.patch, 
 nonlatin_fuzzySuggester.patch


 There is a limitation in the current FuzzySuggester implementation: it 
 computes edits in UTF-8 space instead of Unicode character (code point) 
 space. 
 This should be fixable: we'd need to fix TokenStreamToAutomaton to work in 
 Unicode character space, then fix FuzzySuggester to do the same steps that 
 FuzzyQuery does: do the LevN expansion in Unicode character space, then 
 convert that automaton to UTF-8, then intersect with the suggest FST.
 See the discussion here: 
 http://lucene.472066.n3.nabble.com/minFuzzyLength-in-FuzzySuggester-behaves-differently-for-English-and-Russian-td4067018.html#none
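The byte-space vs. code-point-space difference can be seen with plain Java (an illustration, not suggester code): a Cyrillic letter is one code point but two UTF-8 bytes, so a one-character typo costs two byte-level edits and can exceed maxEdits:

```java
import java.nio.charset.StandardCharsets;

// Illustration (not suggester code): one Cyrillic character is one code
// point but two UTF-8 bytes, so an edit distance computed over bytes
// charges double for non-Latin typos.
public class EditSpaceDemo {
    public static void main(String[] args) {
        String word = "мир"; // Russian for "world"/"peace"
        System.out.println(word.codePointCount(0, word.length()));        // 3 code points
        System.out.println(word.getBytes(StandardCharsets.UTF_8).length); // 6 bytes
        // A one-character substitution changes 2 bytes: with maxEdits=1 in
        // byte space the fuzzy match fails, while in code-point space it succeeds.
    }
}
```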




Re: 4.3.1 Release notes: review requested

2013-06-18 Thread Adrien Grand
+1, release notes look good to me too.

On Mon, Jun 17, 2013 at 10:23 PM, Shalin Shekhar Mangar
shalinman...@gmail.com wrote:
 All steps of the release process except for website updates and
 announcements have been completed. I'll grab some sleep and continue the
 rest of the steps after 8-9 hours.

Thanks for doing this!

--
Adrien




[jira] [Updated] (LUCENE-5063) Allow GrowableWriter to store negative values

2013-06-18 Thread Adrien Grand (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5063?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adrien Grand updated LUCENE-5063:
-

Attachment: LUCENE-5063.patch

Here is a patch which makes GrowableWriter able to store negative values and 
makes FieldCache.DEFAULT.get(Ints|Longs) use it. To keep field cache loading 
fast, the GrowableWriters are created with an acceptable overhead ratio of 50%, 
so that they can grow the number of bits per value quickly without too much 
resizing.
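The growth strategy can be sketched roughly as follows (a simplification, not the real GrowableWriter, which delegates format selection to PackedInts): when a value needs more bits than the current store provides, reallocate with headroom derived from the overhead ratio, so a run of slightly larger values does not force a reallocation every time:

```java
// Simplified sketch of GrowableWriter's growth idea (not the real class).
public class GrowableWriterSketch {
    private final long[] values; // stand-in for a bit-packed array
    private int bitsPerValue;
    private final float overheadRatio;

    public GrowableWriterSketch(int startBits, int size, float overheadRatio) {
        this.values = new long[size];
        this.bitsPerValue = startBits;
        this.overheadRatio = overheadRatio;
    }

    public int getBitsPerValue() { return bitsPerValue; }

    public void set(int index, long value) {
        // bits required for an unsigned value; a negative long needs all 64
        int needed = Math.max(1, 64 - Long.numberOfLeadingZeros(value));
        if (needed > bitsPerValue) {
            // Grow with headroom instead of to the exact bit count, so the
            // next few slightly larger values don't each trigger a resize.
            bitsPerValue = Math.min(64, (int) Math.ceil(needed * (1 + overheadRatio)));
            // the real implementation copies into a new packed store here
        }
        values[index] = value;
    }

    public long get(int index) { return values[index]; }
}
```

Note how a negative value (leading zero count 0) immediately grows the store to 64 bits per value, which matches the behavior discussed on this issue.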

 Allow GrowableWriter to store negative values
 -

 Key: LUCENE-5063
 URL: https://issues.apache.org/jira/browse/LUCENE-5063
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Adrien Grand
Assignee: Adrien Grand
Priority: Trivial
 Fix For: 4.4

 Attachments: LUCENE-5063.patch


 For some use-cases, it would be convenient to be able to store negative 
 values in a GrowableWriter, for example to use it in FieldCache: The first 
 term is the minimum value and one could use a GrowableWriter to store deltas 
 between this minimum value and the current value. (The need for negative 
 values comes from the fact that maxValue - minValue might be larger than 
 Long.MAX_VALUE.)




[jira] [Updated] (LUCENE-5030) FuzzySuggester has to operate FSTs of Unicode-letters, not UTF-8, to work correctly for 1-byte (like English) and multi-byte (non-Latin) letters

2013-06-18 Thread Artem Lukanin (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5030?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Artem Lukanin updated LUCENE-5030:
--

Attachment: nonlatin_fuzzySuggester4.patch

I have fixed testRandom, which repeats the logic of FuzzySuggester.
Now all the tests pass.
Please review.

 FuzzySuggester has to operate FSTs of Unicode-letters, not UTF-8, to work 
 correctly for 1-byte (like English) and multi-byte (non-Latin) letters
 

 Key: LUCENE-5030
 URL: https://issues.apache.org/jira/browse/LUCENE-5030
 Project: Lucene - Core
  Issue Type: Bug
Affects Versions: 4.3
Reporter: Artem Lukanin
 Attachments: nonlatin_fuzzySuggester1.patch, 
 nonlatin_fuzzySuggester2.patch, nonlatin_fuzzySuggester3.patch, 
 nonlatin_fuzzySuggester4.patch, nonlatin_fuzzySuggester.patch


 There is a limitation in the current FuzzySuggester implementation: it 
 computes edits in UTF-8 space instead of Unicode character (code point) 
 space. 
 This should be fixable: we'd need to fix TokenStreamToAutomaton to work in 
 Unicode character space, then fix FuzzySuggester to do the same steps that 
 FuzzyQuery does: do the LevN expansion in Unicode character space, then 
 convert that automaton to UTF-8, then intersect with the suggest FST.
 See the discussion here: 
 http://lucene.472066.n3.nabble.com/minFuzzyLength-in-FuzzySuggester-behaves-differently-for-English-and-Russian-td4067018.html#none




[jira] [Commented] (SOLR-3076) Solr(Cloud) should support block joins

2013-06-18 Thread Vadim Kirilchuk (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-3076?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13686519#comment-13686519
 ] 

Vadim Kirilchuk commented on SOLR-3076:
---

Yonik, it's great!

Just keep in mind several improvements:
* make _childDocuments inside SolrInputDocument lazy instead of creating a new 
ArrayList() in the constructor.
* in JavaBinCodec there is no need for the SOLRINPUTDOC_CHILDS tag; it is 
simpler to write SOLRINPUTDOC docFieldsSize childrenNum instead of SOLRINPUTDOC 
docFieldsSize SOLRINPUTDOC_CHILDS childrenNum

There is also a blueprint of DIH support for this: 
https://issues.apache.org/jira/secure/attachment/12576960/dih-3076.patch 
Maybe it would be better to move it to its own JIRA ticket.

There is no support for:
* delete block
* overwrite/update block
* JSON

I hope it helps.




 Solr(Cloud) should support block joins
 --

 Key: SOLR-3076
 URL: https://issues.apache.org/jira/browse/SOLR-3076
 Project: Solr
  Issue Type: New Feature
Reporter: Grant Ingersoll
Assignee: Yonik Seeley
 Fix For: 5.0, 4.4

 Attachments: 27M-singlesegment-histogram.png, 27M-singlesegment.png, 
 bjq-vs-filters-backward-disi.patch, bjq-vs-filters-illegal-state.patch, 
 child-bjqparser.patch, dih-3076.patch, dih-config.xml, 
 parent-bjq-qparser.patch, parent-bjq-qparser.patch, Screen Shot 2012-07-17 at 
 1.12.11 AM.png, SOLR-3076-childDocs.patch, SOLR-3076.patch, SOLR-3076.patch, 
 SOLR-3076.patch, SOLR-3076.patch, SOLR-3076.patch, SOLR-3076.patch, 
 SOLR-3076.patch, SOLR-3076.patch, 
 SOLR-7036-childDocs-solr-fork-trunk-patched, 
 solrconf-bjq-erschema-snippet.xml, solrconfig.xml.patch, 
 tochild-bjq-filtered-search-fix.patch


 Lucene has the ability to do block joins, we should add it to Solr.




[jira] [Resolved] (SOLR-4059) Custom Sharding

2013-06-18 Thread Noble Paul (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-4059?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Noble Paul resolved SOLR-4059.
--

Resolution: Duplicate

duplicate of SOLR-4221

 Custom Sharding
 ---

 Key: SOLR-4059
 URL: https://issues.apache.org/jira/browse/SOLR-4059
 Project: Solr
  Issue Type: New Feature
  Components: SolrCloud
Reporter: Mark Miller
 Attachments: SOLR-4059.patch


 Had not fully thought through this one yet, but Yonik caught me up at 
 ApacheCon. We need to be able to skip hashing and let the client choose the 
 shard, but still send to replicas.
 Ideas for the interface? hash=false?




[JENKINS] Solr-Artifacts-trunk - Build # 2220 - Failure

2013-06-18 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Solr-Artifacts-trunk/2220/

No tests ran.

Build Log:
[...truncated 17833 lines...]
BUILD FAILED
/usr/home/hudson/hudson-slave/workspace/Solr-Artifacts-trunk/solr/build.xml:377:
 java.net.ConnectException: Operation timed out
at java.net.PlainSocketImpl.socketConnect(Native Method)
at 
java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339)
at 
java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200)
at 
java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182)
at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:391)
at java.net.Socket.connect(Socket.java:579)
at java.net.Socket.connect(Socket.java:528)
at sun.net.NetworkClient.doConnect(NetworkClient.java:180)
at sun.net.www.http.HttpClient.openServer(HttpClient.java:378)
at sun.net.www.http.HttpClient.openServer(HttpClient.java:473)
at sun.net.www.http.HttpClient.init(HttpClient.java:203)
at sun.net.www.http.HttpClient.New(HttpClient.java:290)
at sun.net.www.http.HttpClient.New(HttpClient.java:306)
at 
sun.net.www.protocol.http.HttpURLConnection.getNewHttpClient(HttpURLConnection.java:995)
at 
sun.net.www.protocol.http.HttpURLConnection.plainConnect(HttpURLConnection.java:931)
at 
sun.net.www.protocol.http.HttpURLConnection.connect(HttpURLConnection.java:849)
at 
org.apache.tools.ant.taskdefs.Get$GetThread.openConnection(Get.java:660)
at org.apache.tools.ant.taskdefs.Get$GetThread.get(Get.java:579)
at org.apache.tools.ant.taskdefs.Get$GetThread.run(Get.java:569)

Total time: 9 minutes 45 seconds
Build step 'Invoke Ant' marked build as failure
Archiving artifacts
Publishing Javadoc
Email was triggered for: Failure
Sending email for trigger: Failure





Reestablishing a Solr node that ran on a completely crashed machine

2013-06-18 Thread Per Steffensen

Hi

Scenario:
* 1) You have a Solr cloud cluster running - several Solr nodes across 
several machines - many collections with many replicas and documents 
indexed into them
* 2) One of the machines running a Solr node completely crashes - 
totally gone, including the local disk with the data/config etc. of the 
Solr node
* 3) You want to be able to insert a new empty machine, 
install/configure Solr on this new machine, give it the same IP and 
hostname as the crashed machine had, and then be able to 
start this new Solr node and have it take the place of the crashed Solr 
node, making the Solr cloud cluster work again
* 4) No replication (only one replica per shard), so we will accept that 
the data on the crashed machine is gone forever, but of course we want 
the Solr cloud cluster to continue running with the documents indexed on 
the other Solr nodes


At my company we are establishing a procedure for what to do in 3) above.

Basically we use our install script to install/configure the new Solr 
node on the new machine as it was originally installed/configured on the 
crashed machine back when the system was originally set up - this 
includes an empty solr.xml file (no cores mentioned). We then start all 
the Solr nodes (including the newly reestablished one) again. They all 
start successfully, but the Solr cloud cluster does not work - at least 
when doing distributed searches touching replicas that used to run on the 
crashed Solr node, because those replicas are not loaded on the 
reestablished node.


How do we make sure a reestablished Solr node, on a machine with the same 
IP and hostname as the machine that crashed, will load all the replicas 
that the old Solr node used to run?


Potential solutions
* We have tried to make sure that the solr.xml on the reestablished Solr 
node contains the same core-list as on the crashed one. Then 
everything works as we want. But this is a little fragile and it is a 
solution outside Solr - you need to figure out how to reestablish the 
solr.xml yourself - probably something like looking into 
clusterstate.json and generating the solr.xml from that
* Untested by us: Maybe we would also succeed just running Core API LOAD 
operations against the newly reestablished Solr node - a LOAD operation 
for each replica that used to run on the Solr node. But this is also a 
little fragile and it is also (partly) a solution outside Solr - you 
need to figure out which cores to load yourself.
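The generation step in the first potential solution could be sketched like this (a hypothetical helper: it assumes the legacy 4.x solr.xml &lt;core&gt; attributes, and the core-to-shard map stands in for whatever you parse out of clusterstate.json for the crashed node):

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Hypothetical sketch: rebuild the <cores> section of a legacy-style solr.xml
// from a core -> shard mapping recovered from clusterstate.json.
public class SolrXmlRebuilder {
    static String coresXml(Map<String, String> coreToShard) {
        StringBuilder sb = new StringBuilder("<cores adminPath=\"/admin/cores\">\n");
        for (Map.Entry<String, String> e : coreToShard.entrySet()) {
            sb.append("  <core name=\"").append(e.getKey())
              .append("\" shard=\"").append(e.getValue())
              .append("\" instanceDir=\"").append(e.getKey()).append("/\"/>\n");
        }
        return sb.append("</cores>\n").toString();
    }

    public static void main(String[] args) {
        Map<String, String> cores = new LinkedHashMap<>();
        cores.put("collection1_shard2_replica1", "shard2");
        System.out.print(coresXml(cores));
    }
}
```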


I have to say that we do not use the latest Solr version - we use a 
version of Solr based on 4.0.0. So there might be a solution already in 
Solr, but I would be surprised.


Any thoughts about how this ought to be done? Support in Solr? E.g. an 
operation to tell a Solr node to load all the replicas that used to run 
on a machine with the same IP and hostname? Or...?


Regards, Per Steffensen




[jira] [Commented] (LUCENE-5063) Allow GrowableWriter to store negative values

2013-06-18 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5063?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13686561#comment-13686561
 ] 

Robert Muir commented on LUCENE-5063:
-

On one hand we pay the price of an add:
{code}
 @Override
 public long get(int docID) {
-  return values[docID];
+  return minValue + values.get(docID);
 }
{code}

But we get no benefit...
{code}
+ * <p>Beware that this class will accept to set negative values but in order
+ * to do this, it will grow the number of bits per value to 64.
{code}

This doesn't seem right...

 Allow GrowableWriter to store negative values
 -

 Key: LUCENE-5063
 URL: https://issues.apache.org/jira/browse/LUCENE-5063
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Adrien Grand
Assignee: Adrien Grand
Priority: Trivial
 Fix For: 4.4

 Attachments: LUCENE-5063.patch


 For some use-cases, it would be convenient to be able to store negative 
 values in a GrowableWriter, for example to use it in FieldCache: The first 
 term is the minimum value and one could use a GrowableWriter to store deltas 
 between this minimum value and the current value. (The need for negative 
 values comes from the fact that maxValue - minValue might be larger than 
 Long.MAX_VALUE.)




[jira] [Commented] (LUCENE-5063) Allow GrowableWriter to store negative values

2013-06-18 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5063?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13686594#comment-13686594
 ] 

Robert Muir commented on LUCENE-5063:
-

I see, so we only need negatives in GrowableWriter for the case where we'd use 
64 bpv for longs anyway.
Can we add a comment?

Also, we start at 4 bpv here, but we don't bit-pack for byte/short too? It could 
be a little unintuitive that using long takes less RAM than byte :)

Or, maybe FC should only have a 'long' API to better match DV?

{quote}
To keep field cache loading fast, the GrowableWriters are created with an 
acceptable overhead ratio of 50%, so that they can grow the number of bits per 
value quickly without too much resizing.
{quote}

This is consistent with SortedDocValuesImpl, except SortedDocValuesImpl has a 
'startBPV' of 1, whereas it's 4 here. Maybe we should use 1 here too?
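A back-of-the-envelope for the byte-vs-long point (assuming the values are small enough to pack tightly): a byte[]-backed cache always spends 8 bits per value, while a bit-packed long store spends only the bits the values need:

```java
// Rough arithmetic (not Lucene code): RAM for 1M cached values in 0..15.
public class PackingMathDemo {
    // bytes needed to store numValues at bitsPerValue bits each (ignoring padding)
    static long packedBytes(long numValues, int bitsPerValue) {
        return numValues * bitsPerValue / 8;
    }

    public static void main(String[] args) {
        long numDocs = 1_000_000;
        // byte[] backing: always 8 bits per value
        System.out.println(packedBytes(numDocs, 8)); // 1000000
        // bit-packed store, values fitting in 4 bits
        System.out.println(packedBytes(numDocs, 4)); // 500000
        // So the 'byte' API can cost twice the RAM of a bit-packed 'long' cache.
    }
}
```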

 Allow GrowableWriter to store negative values
 -

 Key: LUCENE-5063
 URL: https://issues.apache.org/jira/browse/LUCENE-5063
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Adrien Grand
Assignee: Adrien Grand
Priority: Trivial
 Fix For: 4.4

 Attachments: LUCENE-5063.patch


 For some use-cases, it would be convenient to be able to store negative 
 values in a GrowableWriter, for example to use it in FieldCache: The first 
 term is the minimum value and one could use a GrowableWriter to store deltas 
 between this minimum value and the current value. (The need for negative 
 values comes from the fact that maxValue - minValue might be larger than 
 Long.MAX_VALUE.)




Re: [jira] [Resolved] (SOLR-4932) Persisting solr.xml still adds some elements that aren't in the original

2013-06-18 Thread Erick Erickson
Not a problem. I love that the bot does its tricks, but I put in the 
revisions manually as a backup anyway.

Erick

On Tue, Jun 18, 2013 at 12:09 AM, Mark Miller markrmil...@gmail.com wrote:
 Sorry bout that - I briefly shut down the commit bot earlier today to fix an 
 issue I noticed (multiple jira tags would tag the same issue multiple times), 
 and I changed the folder I start out of, didn't realize I hadn't fixed the 
 classpath and that it didn't actually start. I just noticed this and started 
 it again. Output goes to nohup.out and I had forgotten to check that file after 
 starting up.

 (A while back I also updated it to retry on IOExceptions, so future drop-offs 
 will likely be human error)

 - Mark

 On Jun 17, 2013, at 10:27 PM, Erick Erickson (JIRA) j...@apache.org wrote:


 [ 
 https://issues.apache.org/jira/browse/SOLR-4932?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
  ]

 Erick Erickson resolved SOLR-4932.
 --

   Resolution: Fixed
Fix Version/s: 4.4
   5.0

 trunk: r-1493982
 4x:r-1493986

 Persisting solr.xml still adds some elements that aren't in the original
 

Key: SOLR-4932
URL: https://issues.apache.org/jira/browse/SOLR-4932
Project: Solr
 Issue Type: Bug
   Affects Versions: 4.3, 5.0
   Reporter: Erick Erickson
   Assignee: Erick Erickson
   Priority: Minor
Fix For: 5.0, 4.4

Attachments: SOLR-4932.patch


 From elyograg: distribUpdateSoTimeout=0 distribUpdateConnTimeout=0 are 
 added to the cores element when persisted.
 The problem with the current test is that it has _everything_ I could think of 
 in it, and it is all preserved. Adding a test that has only the minimal 
 solr.xml should flush out persisting the attribs not in the original.






[jira] [Commented] (LUCENE-5063) Allow GrowableWriter to store negative values

2013-06-18 Thread Adrien Grand (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5063?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13686622#comment-13686622
 ] 

Adrien Grand commented on LUCENE-5063:
--

bq. I see, so we only need negatives in GrowableWriter for the case where we'd 
use 64 bpv for longs anyway.

Exactly. Negative values in a GrowableWriter are really 64-bit unsigned values 
rather than actual negative values.

bq. Or, maybe FC should only have a 'long' API to better match DV?

Are you talking about removing all get(Bytes|Shorts|Ints|Floats|Doubles) and 
only having getLongs, which would return a NumericDocValues instance? Indeed I 
think it would make things simpler and more consistent (e.g. comparators and 
FieldCacheRangeFilter), but this looks like a big change!

bq. This is consistent with SortedDocValuesImpl, except SortedDocValuesImpl has 
a 'startBPV' of 1, whereas its 4 here. Maybe we should use 1 here too?

Agreed.

 Allow GrowableWriter to store negative values
 -

 Key: LUCENE-5063
 URL: https://issues.apache.org/jira/browse/LUCENE-5063
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Adrien Grand
Assignee: Adrien Grand
Priority: Trivial
 Fix For: 4.4

 Attachments: LUCENE-5063.patch


 For some use-cases, it would be convenient to be able to store negative 
 values in a GrowableWriter, for example to use it in FieldCache: The first 
 term is the minimum value and one could use a GrowableWriter to store deltas 
 between this minimum value and the current value. (The need for negative 
 values comes from the fact that maxValue - minValue might be larger than 
 Long.MAX_VALUE.)




[jira] [Commented] (LUCENE-5063) Allow GrowableWriter to store negative values

2013-06-18 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5063?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13686627#comment-13686627
 ] 

Robert Muir commented on LUCENE-5063:
-

{quote}
Indeed I think it would make things simpler and more consistent (e.g. 
comparators and FieldCacheRangeFilter), but this looks like a big change!
{quote}

It doesn't need to hold up this issue; we can make a follow-up issue for that. 
Maybe we should do something about the Bytes/Shorts here, though...

 Allow GrowableWriter to store negative values
 -

 Key: LUCENE-5063
 URL: https://issues.apache.org/jira/browse/LUCENE-5063
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Adrien Grand
Assignee: Adrien Grand
Priority: Trivial
 Fix For: 4.4

 Attachments: LUCENE-5063.patch


 For some use-cases, it would be convenient to be able to store negative 
 values in a GrowableWriter, for example to use it in FieldCache: The first 
 term is the minimum value and one could use a GrowableWriter to store deltas 
 between this minimum value and the current value. (The need for negative 
 values comes from the fact that maxValue - minValue might be larger than 
 Long.MAX_VALUE.)




[jira] [Commented] (LUCENE-2880) SpanQuery scoring inconsistencies

2013-06-18 Thread Adam Ringel (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-2880?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13686631#comment-13686631
 ] 

Adam Ringel commented on LUCENE-2880:
-

I subclassed DefaultSimilarity to work around this.
Seemed simple enough.

{code}
public class LUCENE2880_SloppyFreqDistanceAdjuster {
    private static final Logger logger =
            Logger.getLogger(LUCENE2880_SloppyFreqDistanceAdjuster.class);

    public int distance(int distance) {
        if (distance < 2) {
            logger.warn("distance " + distance + " is < 2, has LUCENE-2880 been resolved?");
            return 0;
        }
        return distance - 2;
    }
}

public class LUCENE2880_DefaultSimilarity extends DefaultSimilarity {
    private static final long serialVersionUID = 1L;
    private static final LUCENE2880_SloppyFreqDistanceAdjuster ADJUSTER =
            new LUCENE2880_SloppyFreqDistanceAdjuster();

    @Override
    public float sloppyFreq(int distance) {
        return super.sloppyFreq(ADJUSTER.distance(distance));
    }
}
{code}
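For reference, the arithmetic the workaround above targets (assuming DefaultSimilarity's sloppyFreq of 1/(distance + 1), which matches the 0.5-vs-1.0 gap described in this issue): a SpanTermQuery yields spans.end() - spans.start() == 1, and shifting the distance down (floored at 0) brings the single-term case back in line with TermQuery:

```java
// Sketch of the scoring arithmetic behind the workaround (assumes
// DefaultSimilarity.sloppyFreq(distance) == 1.0f / (distance + 1)).
public class SloppyFreqDemo {
    static float sloppyFreq(int distance) {
        return 1.0f / (distance + 1);
    }

    static int adjust(int distance) { // the workaround's shift
        return distance < 2 ? 0 : distance - 2;
    }

    public static void main(String[] args) {
        // SpanTermQuery: spans.end() - spans.start() == 1
        System.out.println(sloppyFreq(1));         // 0.5: half of TermQuery's tf of 1.0
        System.out.println(sloppyFreq(adjust(1))); // 1.0: back in line with TermQuery
    }
}
```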



 SpanQuery scoring inconsistencies
 -

 Key: LUCENE-2880
 URL: https://issues.apache.org/jira/browse/LUCENE-2880
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Robert Muir
 Fix For: 4.4

 Attachments: LUCENE-2880.patch


 Spinoff of LUCENE-2879.
 You can see a full description there, but the gist is that SpanQuery sums up 
 freqs with sloppyFreq.
 However this slop is simply spans.end() - spans.start()
 For a SpanTermQuery for example, this means its scoring 0.5 for TF versus 
 TermQuery's 1.0.
 As you can imagine, I think in practical situations this would make it 
 difficult for SpanQuery users to
 really use SpanQueries for effective ranking, especially in combination with 
 non-Spanqueries (maybe via DisjunctionMaxQuery, etc)
 The problem is more general than this simple example: for example 
 SpanNearQuery should be consistent with PhraseQuery's slop.




Re: Reestablishing a Solr node that ran on a completely crashed machine

2013-06-18 Thread Mark Miller
I don't know what the best method to use now is, but the slightly longer term 
plan is to:

* Have a new mode where you cannot preconfigure cores; you can only use the 
Collections API.
* ZK becomes the cluster state truth.
* The Overseer takes actions to ensure cores live/die in different places based 
on the truth in ZK.

- Mark





[jira] [Commented] (SOLR-4221) Custom sharding

2013-06-18 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4221?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13686639#comment-13686639
 ] 

Mark Miller commented on SOLR-4221:
---

It's only a dupe by name, afaict - really it should be tracked as part of this 
issue, IMO.

 Custom sharding
 ---

 Key: SOLR-4221
 URL: https://issues.apache.org/jira/browse/SOLR-4221
 Project: Solr
  Issue Type: New Feature
Reporter: Yonik Seeley
Assignee: Noble Paul
 Attachments: SOLR-4221.patch


 Features to let users control everything about sharding/routing.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Reopened] (SOLR-4059) Custom Sharding

2013-06-18 Thread Mark Miller (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-4059?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Miller reopened SOLR-4059:
---


 Custom Sharding
 ---

 Key: SOLR-4059
 URL: https://issues.apache.org/jira/browse/SOLR-4059
 Project: Solr
  Issue Type: New Feature
  Components: SolrCloud
Reporter: Mark Miller
 Attachments: SOLR-4059.patch


 Had not fully thought through this one yet, but Yonik caught me up at 
 ApacheCon. We need to be able to skip hashing and let the client choose the 
 shard, but still send to replicas.
 Ideas for the interface? hash=false?
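As a hedged sketch of the behavior being discussed: if the update request carries an explicit shard parameter, use it verbatim (skip hashing); otherwise fall back to hashing the document id. The param name "shard" and the hash function here are illustrative only, not Solr's actual API:

```java
import java.util.HashMap;
import java.util.Map;

public class RoutingDemo {

    // Pick the target shard: explicit param wins, otherwise hash the doc id.
    static String targetShard(String docId, Map<String, String> params, int numShards) {
        String explicit = params.get("shard");
        if (explicit != null) {
            return explicit;                          // client-chosen shard
        }
        int bucket = Math.floorMod(docId.hashCode(), numShards);
        return "shard" + (bucket + 1);                // default: hash routing
    }

    public static void main(String[] args) {
        Map<String, String> params = new HashMap<>();
        System.out.println(targetShard("doc1", params, 4));  // hashed
        params.put("shard", "shard7");
        System.out.println(targetShard("doc1", params, 4));  // prints shard7
    }
}
```

The replicas of the chosen shard would still receive the update as usual; only the shard-selection step is bypassed.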




[jira] [Updated] (SOLR-4059) Allow forwarding to updates based on the shard updates arrive at rather than hashing.

2013-06-18 Thread Mark Miller (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-4059?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Miller updated SOLR-4059:
--

Summary: Allow forwarding to updates based on the shard updates arrive at 
rather than hashing.  (was: Custom Sharding)

 Allow forwarding to updates based on the shard updates arrive at rather than 
 hashing.
 -

 Key: SOLR-4059
 URL: https://issues.apache.org/jira/browse/SOLR-4059
 Project: Solr
  Issue Type: New Feature
  Components: SolrCloud
Reporter: Mark Miller
 Attachments: SOLR-4059.patch


 Had not fully thought through this one yet, but Yonik caught me up at 
 ApacheCon. We need to be able to skip hashing and let the client choose the 
 shard, but still send to replicas.
 Ideas for the interface? hash=false?




[jira] [Updated] (SOLR-4059) Allow forwarding updates to replicas based on the shard updates arrive at rather than hashing.

2013-06-18 Thread Mark Miller (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-4059?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Miller updated SOLR-4059:
--

Summary: Allow forwarding updates to replicas based on the shard updates 
arrive at rather than hashing.  (was: Allow forwarding to updates based on the 
shard updates arrive at rather than hashing.)

 Allow forwarding updates to replicas based on the shard updates arrive at 
 rather than hashing.
 --

 Key: SOLR-4059
 URL: https://issues.apache.org/jira/browse/SOLR-4059
 Project: Solr
  Issue Type: New Feature
  Components: SolrCloud
Reporter: Mark Miller
 Attachments: SOLR-4059.patch


 Had not fully thought through this one yet, but Yonik caught me up at 
 ApacheCon. We need to be able to skip hashing and let the client choose the 
 shard, but still send to replicas.
 Ideas for the interface? hash=false?




[jira] [Commented] (SOLR-4059) Allow forwarding updates to replicas based on the shard updates arrive at rather than hashing.

2013-06-18 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4059?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13686642#comment-13686642
 ] 

Mark Miller commented on SOLR-4059:
---

I don't think it is - that's a large catch-all issue and this one is much more 
specific - the only thing that's duped is the title. If anything, you might 
consider this feature part of SOLR-4221.

 Allow forwarding updates to replicas based on the shard updates arrive at 
 rather than hashing.
 --

 Key: SOLR-4059
 URL: https://issues.apache.org/jira/browse/SOLR-4059
 Project: Solr
  Issue Type: New Feature
  Components: SolrCloud
Reporter: Mark Miller
 Attachments: SOLR-4059.patch


 Had not fully thought through this one yet, but Yonik caught me up at 
 ApacheCon. We need to be able to skip hashing and let the client choose the 
 shard, but still send to replicas.
 Ideas for the interface? hash=false?




[jira] [Created] (SOLR-4935) persisting solr.xml preserves extraneous values like wt=json in core tags

2013-06-18 Thread Erick Erickson (JIRA)
Erick Erickson created SOLR-4935:


 Summary: persisting solr.xml preserves extraneous values like 
wt=json in core tags
 Key: SOLR-4935
 URL: https://issues.apache.org/jira/browse/SOLR-4935
 Project: Solr
  Issue Type: Bug
Affects Versions: 5.0, 4.4
Reporter: Erick Erickson
Assignee: Erick Erickson


I'll be so happy when we stop supporting persistence.




Re: Reestablishing a Solr node that ran on a completely crashed machine

2013-06-18 Thread Per Steffensen
Ok, thanks. I think we will just reconstruct solr.xml (from 
clusterstate.json) ourselves for now.


On 6/18/13 2:15 PM, Mark Miller wrote:

I don't know what the best method to use now is, but the slightly longer term 
plan is to:

* Have a new mode where you cannot preconfigure cores, only use the 
collection's API.
* ZK becomes the cluster state truth.
* The Overseer takes actions to ensure cores live/die in different places based 
on the truth in ZK.

- Mark

On Jun 18, 2013, at 6:03 AM, Per Steffensen st...@designware.dk wrote:


Hi

Scenario:
* 1) You have a Solr cloud cluster running - several Solr nodes across several 
machines - many collections with many replicas and documents indexed into them
* 2) One of the machines running a Solr node completely crashes - totally gone 
including local disk with data/config etc. of the Solr node
* 3) You want to be able to insert a new empty machine, install/configure Solr 
on this new machine, give it the same IP and hostname as the crashed machine 
had, and then we want to be able to start this new Solr node and have it take 
the place of the crashed Solr node, making the Solr cloud cluster work again
* 4) No replication (only one replica per shard), so we will accept that the 
data on the crashed machine is gone forever, but of course we want the Solr 
cloud cluster to continue running with the documents indexed on the other Solr 
nodes

At my company we are establishing a procedure for what to do in 3) above.

Basically we use our install script to install/configure the new Solr node on the new 
machine as it was originally installed/configured on the crashed machine back when the 
system was originally set up - this includes an empty solr.xml file (no cores mentioned). 
We then start all the Solr nodes (including the new reestablished one) again. They all 
start successfully but the Solr cloud cluster does not work - at least when doing 
distributed searches touching replicas that used to run on the crashed Solr node, because 
those replicas are not loaded on the reestablished node.

How to make sure a reestablished Solr node on a machine with the same IP and 
hostname as the machine that crashed will load all the replicas that the old 
Solr node used to run?

Potential solutions
* We have tried to make sure that the solr.xml on the reestablished Solr node 
contains the same core list as on the crashed one. Then everything works as we want. 
But this is a little fragile and it is a solution outside Solr - you need to 
figure out how to reestablish the solr.xml yourself - probably something like looking 
into clusterstate.json and generating the solr.xml from that
* Untested by us: Maybe we would also succeed by just running Core API LOAD operations 
against the new reestablished Solr node - a LOAD operation for each replica that used to 
run on the Solr node. But this is also a little fragile and it is also (partly) a 
solution outside Solr - you need to figure out which cores to load yourself.

I have to say that we do not use the latest Solr version - we use a version 
of Solr based on 4.0.0. So there might be a solution already in Solr, but I would be 
surprised.

Any thoughts about how this ought to be done? Support in Solr? E.g. an 
operation to tell a Solr node to load all the replicas that used to run on a machine 
with the same IP and hostname? Or...?

Regards, Per Steffensen




[jira] [Commented] (SOLR-4059) Allow forwarding updates to replicas based on the shard updates arrive at rather than hashing.

2013-06-18 Thread Noble Paul (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4059?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13686652#comment-13686652
 ] 

Noble Paul commented on SOLR-4059:
--

It is still not very clear what the objective is. Can you edit the 
description too?

 Allow forwarding updates to replicas based on the shard updates arrive at 
 rather than hashing.
 --

 Key: SOLR-4059
 URL: https://issues.apache.org/jira/browse/SOLR-4059
 Project: Solr
  Issue Type: New Feature
  Components: SolrCloud
Reporter: Mark Miller
 Attachments: SOLR-4059.patch


 Had not fully thought through this one yet, but Yonik caught me up at 
 ApacheCon. We need to be able to skip hashing and let the client choose the 
 shard, but still send to replicas.
 Ideas for the interface? hash=false?




Re: Question about lengthNorm(numTerms)

2013-06-18 Thread jiangwen jiang
I got it, thanks, Jack

2013/6/18 Jack Krupansky j...@basetechnology.com

   The length normalization gets compressed down to a single byte “norm”,
 stored in the “.nrm” files.

 See:
 norm(t,d)

 http://lucene.apache.org/core/4_3_0/core/org/apache/lucene/search/similarities/TFIDFSimilarity.html

 -- Jack Krupansky

  *From:* jiangwen jiang jiangwen...@gmail.com
 *Sent:* Tuesday, June 18, 2013 12:35 AM
 *To:* dev@lucene.apache.org
 *Subject:* Question about lengthNorm(numTerms)

 Hi, guys:

Is it suitable to send a question to this mailing list? There's a question 
about numTerms.

 http://www.lucenetutorial.com/advanced-topics/scoring.html, this website
 describes Lucene scoring.

*4. lengthNorm*
Implementation: 1/sqrt(numTerms)
Implication: a term matched in a field with fewer terms has a higher score
Rationale: a term in a field with fewer terms is more important than one in a 
field with more


numTerms mentioned here, I think, means the number of terms in a field per 
document. But the Lucene file format page doesn't mention it.

 http://lucene.apache.org/core/3_6_2/fileformats.html

Does numTerms really exist in the Lucene index, and if yes, how do I get it?


 Regards
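For illustration, the lengthNorm formula quoted above as a runnable sketch. This is the raw value; as Jack notes, Lucene compresses it into a single-byte norm, and numTerms itself is not stored in the index, only this derived norm:

```java
public class LengthNormDemo {

    // lengthNorm as described on the quoted page: 1 / sqrt(number of terms
    // in the field). Fewer terms -> larger norm -> higher score for a match.
    static float lengthNorm(int numTerms) {
        return (float) (1.0 / Math.sqrt(numTerms));
    }

    public static void main(String[] args) {
        // A match in a 4-term field outscores one in a 100-term field.
        System.out.println(lengthNorm(4));    // 0.5
        System.out.println(lengthNorm(100));  // 0.1
    }
}
```

Because the stored norm is only one byte, many nearby field lengths end up with the same decoded value, so small differences in numTerms often do not change the score.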




[jira] [Created] (LUCENE-5064) Add PagedMutable

2013-06-18 Thread Adrien Grand (JIRA)
Adrien Grand created LUCENE-5064:


 Summary: Add PagedMutable
 Key: LUCENE-5064
 URL: https://issues.apache.org/jira/browse/LUCENE-5064
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Adrien Grand
Assignee: Adrien Grand
Priority: Minor
 Fix For: 4.4


In the same way that we now have a PagedGrowableWriter, we could have a 
PagedMutable which would behave just like PackedInts.Mutable but would support 
more than 2B values.
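As a hedged sketch of why paging lifts the 2B limit (the page size here is illustrative, not Lucene's actual constant): a long index is split into an int page number plus an int offset within a power-of-two-sized page, so each page stays small enough for int addressing while the overall index can exceed Integer.MAX_VALUE:

```java
public class PagedIndexDemo {

    // 2^27 values per page, for example; pages must be a power of two.
    static final int PAGE_SHIFT = 27;
    static final long PAGE_MASK = (1L << PAGE_SHIFT) - 1;

    // Which page a long index falls in.
    static int pageIndex(long index) {
        return (int) (index >>> PAGE_SHIFT);
    }

    // Offset of the index inside its page.
    static int indexInPage(long index) {
        return (int) (index & PAGE_MASK);
    }

    public static void main(String[] args) {
        long i = 3_000_000_000L;  // beyond Integer.MAX_VALUE
        System.out.println(pageIndex(i) + " " + indexInPage(i));
    }
}
```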




Re: Reestablishing a Solr node that ran on a completely crashed machine

2013-06-18 Thread Otis Gospodnetic
Hi,

Re ZK becomes the cluster state truth.

I thought that was already the case, no?  Who/what else holds (which)
bits of the total truth?

Thanks,
Otis





On Tue, Jun 18, 2013 at 8:15 AM, Mark Miller markrmil...@gmail.com wrote:
 I don't know what the best method to use now is, but the slightly longer term 
 plan is to:

 * Have a new mode where you cannot preconfigure cores, only use the 
 collection's API.
 * ZK becomes the cluster state truth.
 * The Overseer takes actions to ensure cores live/die in different places 
 based on the truth in ZK.

 - Mark






[jira] [Updated] (SOLR-4059) Allow forwarding updates to replicas based on an update param rather than hashing.

2013-06-18 Thread Mark Miller (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-4059?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Miller updated SOLR-4059:
--

Fix Version/s: 4.4
   5.0
 Assignee: Mark Miller
  Summary: Allow forwarding updates to replicas based on an update 
param rather than hashing.  (was: Allow forwarding updates to replicas based on 
the shard updates arrive at rather than hashing.)

 Allow forwarding updates to replicas based on an update param rather than 
 hashing.
 --

 Key: SOLR-4059
 URL: https://issues.apache.org/jira/browse/SOLR-4059
 Project: Solr
  Issue Type: New Feature
  Components: SolrCloud
Reporter: Mark Miller
Assignee: Mark Miller
 Fix For: 5.0, 4.4

 Attachments: SOLR-4059.patch


 Had not fully thought through this one yet, but Yonik caught me up at 
 ApacheCon. We need to be able to skip hashing and let the client choose the 
 shard, but still send to replicas.
 Ideas for the interface? hash=false?




[jira] [Updated] (SOLR-4935) persisting solr.xml preserves extraneous values like wt=json in core tags

2013-06-18 Thread Erick Erickson (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-4935?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erick Erickson updated SOLR-4935:
-

Attachment: SOLR-4935.patch

Very preliminary patch, haven't run full test suite on it yet, but it fixes 
this problem in my test case.

 persisting solr.xml preserves extraneous values like wt=json in core tags
 -

 Key: SOLR-4935
 URL: https://issues.apache.org/jira/browse/SOLR-4935
 Project: Solr
  Issue Type: Bug
Affects Versions: 5.0, 4.4
Reporter: Erick Erickson
Assignee: Erick Erickson
 Attachments: SOLR-4935.patch


I'll be so happy when we stop supporting persistence.




[jira] [Updated] (SOLR-4935) persisting solr.xml preserves extraneous values like wt=json in core tags

2013-06-18 Thread Erick Erickson (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-4935?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erick Erickson updated SOLR-4935:
-

Attachment: SOLR-4935.patch

Might fix the problem with not preserving the instance dir if it's not 
specified in the create admin action.

 persisting solr.xml preserves extraneous values like wt=json in core tags
 -

 Key: SOLR-4935
 URL: https://issues.apache.org/jira/browse/SOLR-4935
 Project: Solr
  Issue Type: Bug
Affects Versions: 5.0, 4.4
Reporter: Erick Erickson
Assignee: Erick Erickson
 Attachments: SOLR-4935.patch, SOLR-4935.patch


I'll be so happy when we stop supporting persistence.




[jira] [Updated] (SOLR-4935) persisting solr.xml preserves extraneous values like wt=json in core tags when creating cores via the admin handler

2013-06-18 Thread Erick Erickson (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-4935?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erick Erickson updated SOLR-4935:
-

Description: 
I'll be so happy when we stop supporting persistence.

Two problems:
1. if instanceDir is not specified on the create, it's not persisted, and 
subsequent starts of Solr will fail.
2. extraneous params are specified, made worse by SolrJ adding some stuff on 
the create request like wt=javabin etc.

  was:I'll be so happy when we stop supporting persistence.

Summary: persisting solr.xml preserves extraneous values like wt=json 
in core tags when creating cores via the admin handler  (was: persisting 
solr.xml preserves extraneous values like wt=json in core tags)

 persisting solr.xml preserves extraneous values like wt=json in core tags 
 when creating cores via the admin handler
 ---

 Key: SOLR-4935
 URL: https://issues.apache.org/jira/browse/SOLR-4935
 Project: Solr
  Issue Type: Bug
Affects Versions: 5.0, 4.4
Reporter: Erick Erickson
Assignee: Erick Erickson
 Attachments: SOLR-4935.patch, SOLR-4935.patch


 I'll be so happy when we stop supporting persistence.
 Two problems:
 1. if instanceDir is not specified on the create, it's not persisted, and 
 subsequent starts of Solr will fail.
 2. extraneous params are specified, made worse by SolrJ adding some stuff on 
 the create request like wt=javabin etc.




[jira] [Updated] (LUCENE-5064) Add PagedMutable

2013-06-18 Thread Adrien Grand (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5064?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adrien Grand updated LUCENE-5064:
-

Attachment: LUCENE-5064.patch

Patch. Most of the code is shared with PagedGrowableWriter through 
AbstractPagedMutable.

 Add PagedMutable
 

 Key: LUCENE-5064
 URL: https://issues.apache.org/jira/browse/LUCENE-5064
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Adrien Grand
Assignee: Adrien Grand
Priority: Minor
 Fix For: 4.4

 Attachments: LUCENE-5064.patch


 In the same way that we now have a PagedGrowableWriter, we could have a 
 PagedMutable which would behave just like PackedInts.Mutable but would 
 support more than 2B values.




Re: Reestablishing a Solr node that ran on a completely crashed machine

2013-06-18 Thread Mark Miller
With preconfigurable cores, each node with cores also holds some truth.

You might have a core registered in zk but it doesn't exist on a node. You 
might have a core that is not registered in zk, but does on a node. A core that 
comes up might be a really old node coming back or it might be a user that pre 
configured a new core.

Without preconfigurable cores, the Overseer can adjust for these things and 
make ZK the truth by fiat.

- Mark

On Jun 18, 2013, at 8:50 AM, Otis Gospodnetic otis.gospodne...@gmail.com 
wrote:

 Hi,
 
 Re ZK becomes the cluster state truth.
 
 I thought that was already the case, no?  Who/what else holds (which)
 bits of the total truth?
 
 Thanks,
 Otis
 
 
 
 
 
 





[jira] [Commented] (SOLR-4926) I am seeing RecoveryZkTest and ChaosMonkeySafeLeaderTest fail often on trunk.

2013-06-18 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4926?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13686775#comment-13686775
 ] 

Mark Miller commented on SOLR-4926:
---

It looks like replication thinks it's successful, then buffered replays are 
done - but only the buffered replays work - the replication is adding no docs. 
Somehow I think the compound file format stuff affected this, but no clue how 
yet.

 I am seeing RecoveryZkTest and ChaosMonkeySafeLeaderTest fail often on trunk.
 -

 Key: SOLR-4926
 URL: https://issues.apache.org/jira/browse/SOLR-4926
 Project: Solr
  Issue Type: Bug
  Components: SolrCloud
Reporter: Mark Miller
Assignee: Mark Miller
Priority: Critical
 Fix For: 5.0, 4.4







[jira] [Created] (SOLR-4936) Cannot run Solr with zookeeper on multiple IPs

2013-06-18 Thread Grzegorz Sobczyk (JIRA)
Grzegorz Sobczyk created SOLR-4936:
--

 Summary: Cannot run Solr with zookeeper on multiple IPs
 Key: SOLR-4936
 URL: https://issues.apache.org/jira/browse/SOLR-4936
 Project: Solr
  Issue Type: Bug
  Components: SolrCloud
Affects Versions: 4.2
Reporter: Grzegorz Sobczyk


This doesn't run Solr with ZK:

{{java -DzkRun=192.168.1.169:9180 
-DzkHost=192.168.1.169:9180,192.168.1.169:9280 -Djetty.port=8180 -jar 
start.jar}}

{{java -DzkRun=192.168.1.169:9280 
-DzkHost=192.168.1.169:9180,192.168.1.169:9280 -Djetty.port=8280 -jar 
start.jar}}

And this does: 

{{java -DzkRun=localhost:9180 -DzkHost=localhost:9180,localhost:9280 
-Djetty.port=8180 -jar start.jar}}

{{java -DzkRun=localhost:9280 -DzkHost=localhost:9180,localhost:9280 
-Djetty.port=8280 -jar start.jar}}

SolrZkServerProps#getMyServerId() assumes that myHost is localhost rather 
than reading it from the zkRun property.

(tested on example)
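A sketch of the fix the report implies: derive this server's id by matching the zkRun host:port against the zkHost list, instead of assuming the local host is localhost. This is illustrative logic only, not the actual SolrZkServerProps code, and real ZooKeeper server ids come from the configured server list:

```java
import java.util.Arrays;
import java.util.List;

public class ZkServerIdDemo {

    // Find this node's position in the zkHost ensemble by exact host:port
    // match against the zkRun value (ids are conventionally 1-based).
    static int getMyServerId(String zkRun, String zkHost) {
        List<String> ensemble = Arrays.asList(zkHost.split(","));
        int id = ensemble.indexOf(zkRun.trim());
        if (id < 0) {
            throw new IllegalArgumentException(
                "zkRun " + zkRun + " not found in zkHost " + zkHost);
        }
        return id + 1;
    }

    public static void main(String[] args) {
        System.out.println(getMyServerId("192.168.1.169:9280",
                "192.168.1.169:9180,192.168.1.169:9280"));  // prints 2
    }
}
```

With matching done on the literal zkRun value, the non-localhost commands from the report would resolve their server ids the same way the localhost variants do.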





Re: Reestablishing a Solr node that ran on a completely crashed machine

2013-06-18 Thread Otis Gospodnetic
I see.  Thanks for the explanation.  B, yeah, ZK should be the one
and only brain there, I think.  And forget Fiat, go for Mercedes.

Otis



On Tue, Jun 18, 2013 at 10:24 AM, Mark Miller markrmil...@gmail.com wrote:
 With preconfigurable cores, each node with cores also holds some truth.

 You might have a core registered in zk but it doesn't exist on a node. You 
 might have a core that is not registered in zk, but does on a node. A core 
 that comes up might be a really old node coming back or it might be a user 
 that pre configured a new core.

 Without preconfigurable cores, the Overseer can adjust for these things and 
 make ZK the truth by fiat.

 - Mark

 On Jun 18, 2013, at 8:50 AM, Otis Gospodnetic otis.gospodne...@gmail.com 
 wrote:

 Hi,

 Re ZK becomes the cluster state truth.

 I thought that was already the case, no?  Who/what else holds (which)
 bits of the total truth?

 Thanks,
 Otis





 On Tue, Jun 18, 2013 at 8:15 AM, Mark Miller markrmil...@gmail.com wrote:
 I don't know what the best method to use now is, but the slightly longer 
 term plan is to:

 * Have a new mode where you cannot preconfigure cores, only use the 
 collection's API.
 * ZK becomes the cluster state truth.
 * The Overseer takes actions to ensure cores live/die in different places 
 based on the truth in ZK.

 - Mark

 On Jun 18, 2013, at 6:03 AM, Per Steffensen st...@designware.dk wrote:

 Hi

 Scenario:
 * 1) You have a Solr cloud cluster running - several Solr nodes across 
 several machine - many collections with many replica and documents indexed 
 into them
 * 2) One of the machines running a Solr node completely crashes - totally 
 gone including local disk with data/config etc. of the Solr node
 * 3) You want to be able to insert a new empty machine, install/configure 
 Solr on this new machine, give it the same IP and hostname as the crashed 
 machine had, and then we want to be able to start this new Solr node and 
 have it take the place of the crashed Solr node, making the Solr cloud 
 cluster work again
 * 4) No replication (only one replica per shard), so we will accept that 
 the data on the crashed machine is gone forever, but of course we want the 
 Solr cloud cluster to continue running with the documents indexed on the 
 other Solr nodes

 At my company we are establishing a procedure for what to do in 3) above.

 Basically we use our install script to install/configure the new Solr 
 node on the new machine as it was originally installed/configured on the 
 crashed machine back when the system was originally set up - this includes 
 an empty solr.xml file (no cores mentioned). Now starting all the Solr 
 nodes (including the new reestablished one) again. They all start 
 successfully but the Solr cloud cluster does not work - at least when 
 doing distributed searches touching replicas that used to run on the 
 crashed Solr node, because those replicas are not loaded on the 
 reestablished node.

 How to make sure a reestablished Solr node on a machine with the same IP and 
 hostname as the crashed machine will load all the replicas that the old Solr 
 node used to run?

 Potential solutions
 * We have tried to make sure that the solr.xml on the reestablished Solr 
 node is containing the same core-list as on the crashed one. Then 
 everything works as we want. But this is a little fragile and it is a 
 solution outside Solr - you need to figure out how to reestablish the 
 solr.xml yourself - probably something like looking into clusterstate.json 
 and generate the solr.xml from that
 * Untested by us: Maybe we will also succeed just running Core API LOAD 
 operations against the new reestablished Solr node - a LOAD operation for 
 each replica that used to run on the Solr node. But this is also a little 
 fragile and it is also (partly) a solution outside Solr - you need to 
 figure out which cores to load yourself.
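Both workarounds need the same piece of information: which cores the crashed node used to host, as recorded in clusterstate.json. A hedged sketch of deriving that list is below. This is not a Solr API; a real implementation should use a proper JSON parser, whereas this simplified version scans flat replica objects with a regex and is only meant to show the shape of the task.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Hedged sketch (not Solr code): collect the core names that
// clusterstate.json registers to a given node_name, e.g. to regenerate
// solr.xml core entries or drive Core API LOAD calls for a rebuilt node.
// Handles only flat replica objects of the form
// {"core":"...","node_name":"...",...}; a real tool should parse JSON.
public class CoresForNode {
    static List<String> coresOnNode(String clusterStateJson, String nodeName) {
        List<String> cores = new ArrayList<>();
        // Match each innermost JSON object (a replica entry).
        Matcher replica = Pattern.compile("\\{[^{}]*\\}").matcher(clusterStateJson);
        while (replica.find()) {
            String obj = replica.group();
            Matcher core = Pattern.compile("\"core\"\\s*:\\s*\"([^\"]+)\"").matcher(obj);
            if (core.find() && obj.contains("\"node_name\":\"" + nodeName + "\"")) {
                cores.add(core.group(1));
            }
        }
        return cores;
    }

    public static void main(String[] args) {
        String state =
            "{\"r1\":{\"core\":\"coll1_shard1\",\"node_name\":\"host1:8983_solr\"},"
          + "\"r2\":{\"core\":\"coll1_shard2\",\"node_name\":\"host2:8983_solr\"}}";
        System.out.println(coresOnNode(state, "host1:8983_solr")); // [coll1_shard1]
    }
}
```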

 I have to say that we do not use the latest Solr version - we use a 
 version of Solr based on 4.0.0. So there might be a solution already in 
 Solr, but I would be surprised.

 Any thoughts about how this ought to be done? Support in Solr? E.g. an 
 operation to tell a Solr node to load all the replica that used to run 
 on a machine with the same IP and hostname? Or...?

 Regards, Per Steffensen


[jira] [Updated] (SOLR-4926) I am seeing RecoveryZkTest and ChaosMonkeySafeLeaderTest fail often on trunk.

2013-06-18 Thread Yonik Seeley (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-4926?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yonik Seeley updated SOLR-4926:
---

Priority: Blocker  (was: Critical)

 I am seeing RecoveryZkTest and ChaosMonkeySafeLeaderTest fail often on trunk.
 -

 Key: SOLR-4926
 URL: https://issues.apache.org/jira/browse/SOLR-4926
 Project: Solr
  Issue Type: Bug
  Components: SolrCloud
Reporter: Mark Miller
Assignee: Mark Miller
Priority: Blocker
 Fix For: 5.0, 4.4







[jira] [Commented] (LUCENE-5063) Allow GrowableWriter to store negative values

2013-06-18 Thread Adrien Grand (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5063?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13686807#comment-13686807
 ] 

Adrien Grand commented on LUCENE-5063:
--

bq. Maybe we should do something about the Bytes/Shorts though here...

Given that we don't even have numeric support (they are just encoded/decoded as 
strings) for these types, maybe we should just remove or deprecate them?

 Allow GrowableWriter to store negative values
 -

 Key: LUCENE-5063
 URL: https://issues.apache.org/jira/browse/LUCENE-5063
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Adrien Grand
Assignee: Adrien Grand
Priority: Trivial
 Fix For: 4.4

 Attachments: LUCENE-5063.patch


 For some use-cases, it would be convenient to be able to store negative 
 values in a GrowableWriter, for example to use it in FieldCache: The first 
 term is the minimum value and one could use a GrowableWriter to store deltas 
 between this minimum value and the current value. (The need for negative 
 values comes from the fact that maxValue - minValue might be larger than 
 Long.MAX_VALUE.)
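The overflow concern in the description can be made concrete. The sketch below is an illustration, not Lucene's implementation: it shows that maxValue - minValue wraps for a wide enough range of signed 64-bit longs, and demonstrates zig-zag encoding, one standard way to map signed deltas onto the non-negative values a packed-ints structure can store.

```java
// Illustration (not Lucene code) of the motivation above: with 64-bit
// signed longs, maxValue - minValue overflows when the true range exceeds
// Long.MAX_VALUE, so deltas against the minimum may need to be negative.
// Zig-zag encoding maps signed longs to non-negative longs reversibly.
public class ZigZagSketch {
    static long zigZagEncode(long n) {
        return (n << 1) ^ (n >> 63); // 0 -> 0, -1 -> 1, 1 -> 2, -2 -> 3, ...
    }

    static long zigZagDecode(long n) {
        return (n >>> 1) ^ -(n & 1);
    }

    public static void main(String[] args) {
        // Overflow: the true range Long.MAX_VALUE - (-1) = 2^63 does not fit.
        long min = -1L, max = Long.MAX_VALUE;
        System.out.println(max - min); // wraps to Long.MIN_VALUE

        // A negative delta survives a round trip through zig-zag encoding.
        System.out.println(zigZagDecode(zigZagEncode(-42L))); // -42
    }
}
```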




[JENKINS] Lucene-Solr-4.x-MacOSX (64bit/jdk1.7.0) - Build # 557 - Still Failing!

2013-06-18 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-4.x-MacOSX/557/
Java: 64bit/jdk1.7.0 -XX:-UseCompressedOops -XX:+UseG1GC

1 tests failed.
REGRESSION:  org.apache.lucene.index.TestFieldsReader.testExceptions

Error Message:
Java heap space

Stack Trace:
java.lang.OutOfMemoryError: Java heap space
at 
__randomizedtesting.SeedInfo.seed([3DDCA4155ACB41B0:4BDDF6BDF2B63F06]:0)
at org.apache.lucene.util.BytesRef.copyBytes(BytesRef.java:196)
at org.apache.lucene.util.BytesRef.deepCopyOf(BytesRef.java:343)
at 
org.apache.lucene.codecs.lucene3x.TermBuffer.toTerm(TermBuffer.java:113)
at 
org.apache.lucene.codecs.lucene3x.SegmentTermEnum.term(SegmentTermEnum.java:184)
at 
org.apache.lucene.codecs.lucene3x.Lucene3xFields$PreTermsEnum.next(Lucene3xFields.java:863)
at 
org.apache.lucene.index.MultiTermsEnum.pushTop(MultiTermsEnum.java:292)
at org.apache.lucene.index.MultiTermsEnum.next(MultiTermsEnum.java:318)
at org.apache.lucene.codecs.TermsConsumer.merge(TermsConsumer.java:103)
at org.apache.lucene.codecs.FieldsConsumer.merge(FieldsConsumer.java:72)
at 
org.apache.lucene.index.SegmentMerger.mergeTerms(SegmentMerger.java:365)
at org.apache.lucene.index.SegmentMerger.merge(SegmentMerger.java:98)
at 
org.apache.lucene.index.IndexWriter.mergeMiddle(IndexWriter.java:3767)
at org.apache.lucene.index.IndexWriter.merge(IndexWriter.java:3371)
at 
org.apache.lucene.index.SerialMergeScheduler.merge(SerialMergeScheduler.java:40)
at org.apache.lucene.index.IndexWriter.maybeMerge(IndexWriter.java:1887)
at org.apache.lucene.index.IndexWriter.forceMerge(IndexWriter.java:1697)
at org.apache.lucene.index.IndexWriter.forceMerge(IndexWriter.java:1650)
at 
org.apache.lucene.index.TestFieldsReader.testExceptions(TestFieldsReader.java:204)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:601)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1559)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.access$600(RandomizedRunner.java:79)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:773)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:787)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.TestRuleFieldCacheSanity$1.evaluate(TestRuleFieldCacheSanity.java:51)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)




Build Log:
[...truncated 1017 lines...]
[junit4:junit4] Suite: org.apache.lucene.index.TestFieldsReader
[junit4:junit4]   2 NOTE: reproduce with: ant test  
-Dtestcase=TestFieldsReader -Dtests.method=testExceptions 
-Dtests.seed=3DDCA4155ACB41B0 -Dtests.slow=true -Dtests.locale=sr_RS_#Latn 
-Dtests.timezone=Asia/Dhaka -Dtests.file.encoding=ISO-8859-1
[junit4:junit4] ERROR   8.16s | TestFieldsReader.testExceptions 
[junit4:junit4] Throwable #1: java.lang.OutOfMemoryError: Java heap space
[junit4:junit4]at 
__randomizedtesting.SeedInfo.seed([3DDCA4155ACB41B0:4BDDF6BDF2B63F06]:0)
[junit4:junit4]at 
org.apache.lucene.util.BytesRef.copyBytes(BytesRef.java:196)
[junit4:junit4]at 
org.apache.lucene.util.BytesRef.deepCopyOf(BytesRef.java:343)
[junit4:junit4]at 
org.apache.lucene.codecs.lucene3x.TermBuffer.toTerm(TermBuffer.java:113)
[junit4:junit4]at 
org.apache.lucene.codecs.lucene3x.SegmentTermEnum.term(SegmentTermEnum.java:184)
[junit4:junit4]at 
org.apache.lucene.codecs.lucene3x.Lucene3xFields$PreTermsEnum.next(Lucene3xFields.java:863)
[junit4:junit4]at 
org.apache.lucene.index.MultiTermsEnum.pushTop(MultiTermsEnum.java:292)
[junit4:junit4]at 
org.apache.lucene.index.MultiTermsEnum.next(MultiTermsEnum.java:318)
[junit4:junit4]at 
org.apache.lucene.codecs.TermsConsumer.merge(TermsConsumer.java:103)
[junit4:junit4]at 
org.apache.lucene.codecs.FieldsConsumer.merge(FieldsConsumer.java:72)
[junit4:junit4]at 
org.apache.lucene.index.SegmentMerger.mergeTerms(SegmentMerger.java:365)
[junit4:junit4]at 
org.apache.lucene.index.SegmentMerger.merge(SegmentMerger.java:98)
[junit4:junit4]at 

[jira] [Commented] (SOLR-4926) I am seeing RecoveryZkTest and ChaosMonkeySafeLeaderTest fail often on trunk.

2013-06-18 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4926?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13686814#comment-13686814
 ] 

Mark Miller commented on SOLR-4926:
---

Hmm...that may have just been what that one case looked like - looking at 
another case now that may not match. More digging...

 I am seeing RecoveryZkTest and ChaosMonkeySafeLeaderTest fail often on trunk.
 -

 Key: SOLR-4926
 URL: https://issues.apache.org/jira/browse/SOLR-4926
 Project: Solr
  Issue Type: Bug
  Components: SolrCloud
Reporter: Mark Miller
Assignee: Mark Miller
Priority: Blocker
 Fix For: 5.0, 4.4







[jira] [Commented] (SOLR-4926) I am seeing RecoveryZkTest and ChaosMonkeySafeLeaderTest fail often on trunk.

2013-06-18 Thread Yonik Seeley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4926?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13686821#comment-13686821
 ] 

Yonik Seeley commented on SOLR-4926:


In some of the fails, I'm seeing some errors of this form:
{code}
  2 Caused by: org.apache.solr.common.SolrException: Error opening Reader
  2at 
org.apache.solr.search.SolrIndexSearcher.getReader(SolrIndexSearcher.java:174)
  2at 
org.apache.solr.search.SolrIndexSearcher.<init>(SolrIndexSearcher.java:185)
  2at 
org.apache.solr.search.SolrIndexSearcher.<init>(SolrIndexSearcher.java:181)
  2at org.apache.solr.core.SolrCore.openNewSearcher(SolrCore.java:1487)
  2... 15 more
  2 Caused by: java.lang.AssertionError: liveDocs.count()=4 info.docCount=6 
info.getDelCount()=6
  2at 
org.apache.lucene.codecs.lucene40.Lucene40LiveDocsFormat.readLiveDocs(Lucene40LiveDocsFormat.java:92)
{code}

 I am seeing RecoveryZkTest and ChaosMonkeySafeLeaderTest fail often on trunk.
 -

 Key: SOLR-4926
 URL: https://issues.apache.org/jira/browse/SOLR-4926
 Project: Solr
  Issue Type: Bug
  Components: SolrCloud
Reporter: Mark Miller
Assignee: Mark Miller
Priority: Blocker
 Fix For: 5.0, 4.4







[jira] [Commented] (SOLR-4935) persisting solr.xml preserves extraneous values like wt=json in core tags when creating cores via the admin handler

2013-06-18 Thread Erick Erickson (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4935?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13686825#comment-13686825
 ] 

Erick Erickson commented on SOLR-4935:
--

BTW, I'm running tests now and I plan to commit this tonight. I want to give Al 
the chance to run it and be sure it cures what he's seeing.

But I'm confident this is better than the current behavior, so if he doesn't 
get a chance today I'll just check it in regardless and we can open new JIRAs 
if it's not complete yet.

 persisting solr.xml preserves extraneous values like wt=json in core tags 
 when creating cores via the admin handler
 ---

 Key: SOLR-4935
 URL: https://issues.apache.org/jira/browse/SOLR-4935
 Project: Solr
  Issue Type: Bug
Affects Versions: 5.0, 4.4
Reporter: Erick Erickson
Assignee: Erick Erickson
 Attachments: SOLR-4935.patch, SOLR-4935.patch


 I'll be so happy when we stop supporting persistence.
 Two problems:
 1. if instanceDir is not specified on the create, it's not persisted, and 
 subsequent starts of Solr will fail.
 2. extraneous params are persisted, made worse by SolrJ adding some stuff on 
 the create request like wt=javabin etc.




[ANNOUNCE] Apache Lucene 4.3.1 released

2013-06-18 Thread Shalin Shekhar Mangar
June 2013, Apache Lucene™ 4.3.1 available

The Lucene PMC is pleased to announce the release of Apache Lucene 4.3.1

Apache Lucene is a high-performance, full-featured text search engine
library written entirely in Java. It is a technology suitable for
nearly any application that requires full-text search, especially
cross-platform.

The release is available for immediate download at:
http://lucene.apache.org/core/mirrors-core-latest-redir.html

Lucene 4.3.1 includes 12 bug fixes and 1 optimization, including
fixes for a serious bug that can cause deadlock.

See the CHANGES.txt file included with the release for a full list of
changes and further details.

Please report any feedback to the mailing lists
(http://lucene.apache.org/core/discussion.html)

Note: The Apache Software Foundation uses an extensive mirroring
network for distributing releases. It is possible that the mirror you
are using may not have replicated the release yet. If that is the
case, please try another mirror. This also goes for Maven access.

Happy searching,
Lucene/Solr developers




[ANNOUNCE] Apache Solr 4.3.1 released

2013-06-18 Thread Shalin Shekhar Mangar
June 2013, Apache Solr™ 4.3.1 available

The Lucene PMC is pleased to announce the release of Apache Solr 4.3.1

Solr is the popular, blazing fast, open source NoSQL search platform
from the Apache Lucene project. Its major features include powerful
full-text search, hit highlighting, faceted search, dynamic
clustering, database integration, rich document (e.g., Word, PDF)
handling, and geospatial search. Solr is highly scalable, providing
fault tolerant distributed search and indexing, and powers the search
and navigation features of many of the world's largest internet sites.

Solr 4.3.1 is available for immediate download at:
http://lucene.apache.org/solr/mirrors-solr-latest-redir.html

Solr 4.3.1 includes 24 bug fixes. The list includes a lot of SolrCloud
bug fixes around Shard Splitting as well as some fixes in other areas.

See the CHANGES.txt file included with the release for a full list of
changes and further details. Please note that the fix for SOLR-4791 is
*NOT* part of this release even though the CHANGES.txt mentions it.

Please report any feedback to the mailing lists
(http://lucene.apache.org/solr/discussion.html)

Note: The Apache Software Foundation uses an extensive mirroring
network for distributing releases. It is possible that the mirror you
are using may not have replicated the release yet. If that is the
case, please try another mirror. This also goes for Maven access.

Happy searching,
Lucene/Solr developers




[jira] [Commented] (SOLR-4935) persisting solr.xml preserves extraneous values like wt=json in core tags when creating cores via the admin handler

2013-06-18 Thread Shawn Heisey (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4935?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13686831#comment-13686831
 ] 

Shawn Heisey commented on SOLR-4935:


What exactly happens if you don't include instanceDir?  Does it just use 
solr.solr.home, or does it use the name of the core as instanceDir?  If it's 
the name of the core, then IMHO the inferred value should be explicitly 
persisted on RENAME/SWAP.

This will definitely be a lot better with core discovery.


 persisting solr.xml preserves extraneous values like wt=json in core tags 
 when creating cores via the admin handler
 ---

 Key: SOLR-4935
 URL: https://issues.apache.org/jira/browse/SOLR-4935
 Project: Solr
  Issue Type: Bug
Affects Versions: 5.0, 4.4
Reporter: Erick Erickson
Assignee: Erick Erickson
 Attachments: SOLR-4935.patch, SOLR-4935.patch


 I'll be s happy when we stop supporting persistence.
 Two problems
 1 if instanceDir is not specified on the create, it's not persisted. And 
 subsequent starts of Solr will fail.
 2 extraneous params are specified, made worse by SolrJ adding some stuff on 
 the create request like wt=javabin etc.




RE: [ANNOUNCE] Apache Lucene 4.3.1 released

2013-06-18 Thread Uwe Schindler
Nice work! You also managed to master the nice looking Java 7 Javadocs with 
bootclasspath :-)

Uwe

-
Uwe Schindler
H.-H.-Meier-Allee 63, D-28213 Bremen
http://www.thetaphi.de
eMail: u...@thetaphi.de


 -Original Message-
 From: shalinman...@gmail.com [mailto:shalinman...@gmail.com] On Behalf
 Of Shalin Shekhar Mangar
 Sent: Tuesday, June 18, 2013 5:28 PM
 To: Lucene mailing list; dev@lucene.apache.org; java-
 u...@lucene.apache.org; annou...@apache.org
 Subject: [ANNOUNCE] Apache Lucene 4.3.1 released
 
 June 2013, Apache Lucene™ 4.3.1 available
 
 The Lucene PMC is pleased to announce the release of Apache Lucene 4.3.1
 
 Apache Lucene is a high-performance, full-featured text search engine
 library written entirely in Java. It is a technology suitable for nearly any
 application that requires full-text search, especially cross-platform.
 
 The release is available for immediate download at:
 http://lucene.apache.org/core/mirrors-core-latest-redir.html
 
 Lucene 4.3.1 includes 12 bug fixes and 1 optimizations, including fixes for a
 serious bug that can cause deadlock.
 
 See the CHANGES.txt file included with the release for a full list of changes
 and further details.
 
 Please report any feedback to the mailing lists
 (http://lucene.apache.org/core/discussion.html)
 
 Note: The Apache Software Foundation uses an extensive mirroring network
 for distributing releases. It is possible that the mirror you are using may 
 not
 have replicated the release yet. If that is the case, please try another 
 mirror.
 This also goes for Maven access.
 
 Happy searching,
 Lucene/Solr developers
 



Re: [ANNOUNCE] Apache Lucene 4.3.1 released

2013-06-18 Thread Shalin Shekhar Mangar
Thanks! Though I did make a blunder -- I sent the solr release
announcement from my personal email address :(

On Tue, Jun 18, 2013 at 9:14 PM, Uwe Schindler u...@thetaphi.de wrote:
 Nice work! You also managed to master the nice looking Java 7 Javadocs with 
 bootclasspath :-)

 Uwe

 -
 Uwe Schindler
 H.-H.-Meier-Allee 63, D-28213 Bremen
 http://www.thetaphi.de
 eMail: u...@thetaphi.de


 -Original Message-
 From: shalinman...@gmail.com [mailto:shalinman...@gmail.com] On Behalf
 Of Shalin Shekhar Mangar
 Sent: Tuesday, June 18, 2013 5:28 PM
 To: Lucene mailing list; dev@lucene.apache.org; java-
 u...@lucene.apache.org; annou...@apache.org
 Subject: [ANNOUNCE] Apache Lucene 4.3.1 released

 June 2013, Apache Lucene™ 4.3.1 available

 The Lucene PMC is pleased to announce the release of Apache Lucene 4.3.1

 Apache Lucene is a high-performance, full-featured text search engine
 library written entirely in Java. It is a technology suitable for nearly any
 application that requires full-text search, especially cross-platform.

 The release is available for immediate download at:
 http://lucene.apache.org/core/mirrors-core-latest-redir.html

 Lucene 4.3.1 includes 12 bug fixes and 1 optimizations, including fixes for a
 serious bug that can cause deadlock.

 See the CHANGES.txt file included with the release for a full list of changes
 and further details.

 Please report any feedback to the mailing lists
 (http://lucene.apache.org/core/discussion.html)

 Note: The Apache Software Foundation uses an extensive mirroring network
 for distributing releases. It is possible that the mirror you are using may 
 not
 have replicated the release yet. If that is the case, please try another 
 mirror.
 This also goes for Maven access.

 Happy searching,
 Lucene/Solr developers





--
Regards,
Shalin Shekhar Mangar.




Re: [ANNOUNCE] Apache Lucene 4.3.1 released

2013-06-18 Thread Mark Miller
I did the same last time. It bounces from announce@ so I just resent it there.

- Mark

On Jun 18, 2013, at 11:47 AM, Shalin Shekhar Mangar shalinman...@gmail.com 
wrote:

 Thanks! Though I did make a blunder -- I sent the solr release
 announcement from my personal email address :(
 
 On Tue, Jun 18, 2013 at 9:14 PM, Uwe Schindler u...@thetaphi.de wrote:
 Nice work! You also managed to master the nice looking Java 7 Javadocs with 
 bootclasspath :-)
 
 Uwe
 
 -
 Uwe Schindler
 H.-H.-Meier-Allee 63, D-28213 Bremen
 http://www.thetaphi.de
 eMail: u...@thetaphi.de
 
 
 -Original Message-
 From: shalinman...@gmail.com [mailto:shalinman...@gmail.com] On Behalf
 Of Shalin Shekhar Mangar
 Sent: Tuesday, June 18, 2013 5:28 PM
 To: Lucene mailing list; dev@lucene.apache.org; java-
 u...@lucene.apache.org; annou...@apache.org
 Subject: [ANNOUNCE] Apache Lucene 4.3.1 released
 
 June 2013, Apache Lucene™ 4.3.1 available
 
 The Lucene PMC is pleased to announce the release of Apache Lucene 4.3.1
 
 Apache Lucene is a high-performance, full-featured text search engine
 library written entirely in Java. It is a technology suitable for nearly any
 application that requires full-text search, especially cross-platform.
 
 The release is available for immediate download at:
 http://lucene.apache.org/core/mirrors-core-latest-redir.html
 
 Lucene 4.3.1 includes 12 bug fixes and 1 optimizations, including fixes for 
 a
 serious bug that can cause deadlock.
 
 See the CHANGES.txt file included with the release for a full list of 
 changes
 and further details.
 
 Please report any feedback to the mailing lists
 (http://lucene.apache.org/core/discussion.html)
 
 Note: The Apache Software Foundation uses an extensive mirroring network
 for distributing releases. It is possible that the mirror you are using may 
 not
 have replicated the release yet. If that is the case, please try another 
 mirror.
 This also goes for Maven access.
 
 Happy searching,
 Lucene/Solr developers
 
 
 
 
 
 --
 Regards,
 Shalin Shekhar Mangar.
 
 





RE: [ANNOUNCE] Apache Lucene 4.3.1 released

2013-06-18 Thread Uwe Schindler
But it went through... I hope also to annou...@apache.org? If not, resend to 
this address with an apache email!

-
Uwe Schindler
H.-H.-Meier-Allee 63, D-28213 Bremen
http://www.thetaphi.de
eMail: u...@thetaphi.de


 -Original Message-
 From: Shalin Shekhar Mangar [mailto:shalinman...@gmail.com]
 Sent: Tuesday, June 18, 2013 5:48 PM
 To: dev@lucene.apache.org
 Subject: Re: [ANNOUNCE] Apache Lucene 4.3.1 released
 
 Thanks! Though I did make a blunder -- I sent the solr release announcement
 from my personal email address :(
 
 On Tue, Jun 18, 2013 at 9:14 PM, Uwe Schindler u...@thetaphi.de wrote:
  Nice work! You also managed to master the nice looking Java 7 Javadocs
  with bootclasspath :-)
 
  Uwe
 
  -
  Uwe Schindler
  H.-H.-Meier-Allee 63, D-28213 Bremen
  http://www.thetaphi.de
  eMail: u...@thetaphi.de
 
 
  -Original Message-
  From: shalinman...@gmail.com [mailto:shalinman...@gmail.com] On
  Behalf Of Shalin Shekhar Mangar
  Sent: Tuesday, June 18, 2013 5:28 PM
  To: Lucene mailing list; dev@lucene.apache.org; java-
  u...@lucene.apache.org; annou...@apache.org
  Subject: [ANNOUNCE] Apache Lucene 4.3.1 released
 
  June 2013, Apache Lucene™ 4.3.1 available
 
  The Lucene PMC is pleased to announce the release of Apache Lucene
  4.3.1
 
  Apache Lucene is a high-performance, full-featured text search engine
  library written entirely in Java. It is a technology suitable for
  nearly any application that requires full-text search, especially cross-
 platform.
 
  The release is available for immediate download at:
  http://lucene.apache.org/core/mirrors-core-latest-redir.html
 
  Lucene 4.3.1 includes 12 bug fixes and 1 optimizations, including
  fixes for a serious bug that can cause deadlock.
 
  See the CHANGES.txt file included with the release for a full list of
  changes and further details.
 
  Please report any feedback to the mailing lists
  (http://lucene.apache.org/core/discussion.html)
 
  Note: The Apache Software Foundation uses an extensive mirroring
  network for distributing releases. It is possible that the mirror you
  are using may not have replicated the release yet. If that is the case,
 please try another mirror.
  This also goes for Maven access.
 
  Happy searching,
  Lucene/Solr developers
 
 
 
 
 
 --
 Regards,
 Shalin Shekhar Mangar.
 



Re: [ANNOUNCE] Apache Lucene 4.3.1 released

2013-06-18 Thread Shalin Shekhar Mangar
Yes, I re-sent the announcement to announce@ with my @apache email.

On Tue, Jun 18, 2013 at 9:21 PM, Mark Miller markrmil...@gmail.com wrote:
 I did the same last time. It bounces from announce@ so I just resent it there.

 - Mark

 On Jun 18, 2013, at 11:47 AM, Shalin Shekhar Mangar shalinman...@gmail.com 
 wrote:

 Thanks! Though I did make a blunder -- I sent the solr release
 announcement from my personal email address :(

 On Tue, Jun 18, 2013 at 9:14 PM, Uwe Schindler u...@thetaphi.de wrote:
 Nice work! You also managed to master the nice looking Java 7 Javadocs with 
 bootclasspath :-)

 Uwe

 -
 Uwe Schindler
 H.-H.-Meier-Allee 63, D-28213 Bremen
 http://www.thetaphi.de
 eMail: u...@thetaphi.de


 -Original Message-
 From: shalinman...@gmail.com [mailto:shalinman...@gmail.com] On Behalf
 Of Shalin Shekhar Mangar
 Sent: Tuesday, June 18, 2013 5:28 PM
 To: Lucene mailing list; dev@lucene.apache.org; java-
 u...@lucene.apache.org; annou...@apache.org
 Subject: [ANNOUNCE] Apache Lucene 4.3.1 released

 June 2013, Apache Lucene™ 4.3.1 available

 The Lucene PMC is pleased to announce the release of Apache Lucene 4.3.1

 Apache Lucene is a high-performance, full-featured text search engine
 library written entirely in Java. It is a technology suitable for nearly 
 any
 application that requires full-text search, especially cross-platform.

 The release is available for immediate download at:
 http://lucene.apache.org/core/mirrors-core-latest-redir.html

 Lucene 4.3.1 includes 12 bug fixes and 1 optimization, including fixes for a
 serious bug that can cause deadlock.

 See the CHANGES.txt file included with the release for a full list of 
 changes
 and further details.

 Please report any feedback to the mailing lists
 (http://lucene.apache.org/core/discussion.html)

 Note: The Apache Software Foundation uses an extensive mirroring network
 for distributing releases. It is possible that the mirror you are using 
 may not
 have replicated the release yet. If that is the case, please try another 
 mirror.
 This also goes for Maven access.

 Happy searching,
 Lucene/Solr developers





 --
 Regards,
 Shalin Shekhar Mangar.





-- 
Regards,
Shalin Shekhar Mangar.




[jira] [Updated] (LUCENE-5062) Spatial CONTAINS is sometimes incorrect for overlapped indexed shapes

2013-06-18 Thread David Smiley (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5062?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Smiley updated LUCENE-5062:
-

Attachment: LUCENE-5062_Spatial_CONTAINS_with_overlapping_shapes.patch

This patch adds the flag as a boolean constructor parameter, and adds equals and 
hashCode based on it.

I also made this setting and hasPoints (inverse of hasIndexedLeaves) 
protected field members of RecursivePrefixTreeStrategy so that subclassers can 
tune them.

I'll commit this in a day or two.

 Spatial CONTAINS is sometimes incorrect for overlapped indexed shapes
 -

 Key: LUCENE-5062
 URL: https://issues.apache.org/jira/browse/LUCENE-5062
 Project: Lucene - Core
  Issue Type: Bug
  Components: modules/spatial
Affects Versions: 4.3
Reporter: David Smiley
Assignee: David Smiley
 Attachments: 
 LUCENE-5062_Spatial_CONTAINS_with_overlapping_shapes.patch, 
 LUCENE-5062_Spatial_CONTAINS_with_overlapping_shapes.patch


 If the spatial data for a document is comprised of multiple overlapping or 
 adjacent parts, it _might_ fail to match a query shape when doing the 
 CONTAINS predicate when the sum of those shapes contain the query shape but 
 none do individually.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-4929) (ChaosMonkey)ShardSplitTest fails often on jenkins

2013-06-18 Thread Shalin Shekhar Mangar (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-4929?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shalin Shekhar Mangar resolved SOLR-4929.
-

Resolution: Duplicate

 (ChaosMonkey)ShardSplitTest fails often on jenkins
 --

 Key: SOLR-4929
 URL: https://issues.apache.org/jira/browse/SOLR-4929
 Project: Solr
  Issue Type: Bug
Reporter: Shalin Shekhar Mangar
Assignee: Shalin Shekhar Mangar

 ChaosMonkeyShardSplitTest and ShardSplitTest both fail on jenkins quite often 
 with the same message always:
 {code}
 Error Message:
 Server at http://127.0.0.1:20986 returned non ok status:500, message:Server 
 Error
 Stack Trace:
 org.apache.solr.client.solrj.impl.HttpSolrServer$RemoteSolrException: Server 
 at http://127.0.0.1:20986 returned non ok status:500, message:Server Error
 at 
 __randomizedtesting.SeedInfo.seed([7262B9B042D2C205:F38437A8358DA239]:0)
 at 
 org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:385)
 at 
 org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:180)
 at 
 org.apache.solr.cloud.ShardSplitTest.splitShard(ShardSplitTest.java:228)
 at 
 org.apache.solr.cloud.ChaosMonkeyShardSplitTest.doTest(ChaosMonkeyShardSplitTest.java:136)
 at 
 org.apache.solr.BaseDistributedSearchTestCase.testDistribSearch(BaseDistributedSearchTestCase.java:815)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 {code}
 The logs show that the WaitForState action fails saying I am not the leader:
 {code}
 [junit4:junit4]   2 1023092 T1943 oasc.SolrCore.registerSearcher 
 [collection1_shard1_1_replica1] Registered new searcher Searcher@4afb5184 
 main{StandardDirectoryReader(segments_1:1)}
 [junit4:junit4]   2 1023093 T1944 oasc.SolrCore.registerSearcher 
 [collection1_shard1_0_replica1] Registered new searcher Searcher@67d0de96 
 main{StandardDirectoryReader(segments_1:1)}
 [junit4:junit4]   2 1023095 T1939 oasu.UpdateLog.bufferUpdates Starting to 
 buffer updates. FSUpdateLog{state=ACTIVE, tlog=null}
 [junit4:junit4]   2 1023095 T1869 oasu.UpdateLog.bufferUpdates Starting to 
 buffer updates. FSUpdateLog{state=ACTIVE, tlog=null}
 [junit4:junit4]   2 1023095 T1939 oasc.CoreContainer.registerCore 
 registering core: collection1_shard1_1_replica1
 [junit4:junit4]   2 1023096 T1869 oasc.CoreContainer.registerCore 
 registering core: collection1_shard1_0_replica1
 [junit4:junit4]   2 1023096 T1939 oasc.ZkController.register Register 
 replica - core:collection1_shard1_1_replica1 address:http://127.0.0.1:41605 
 collection:collection1 shard:shard1_1
 [junit4:junit4]   2 1023096 T1869 oasc.ZkController.register Register 
 replica - core:collection1_shard1_0_replica1 address:http://127.0.0.1:41605 
 collection:collection1 shard:shard1_0
 [junit4:junit4]   2 1023097 T1939 oascc.SolrZkClient.makePath makePath: 
 /collections/collection1/leader_elect/shard1_1/election
 [junit4:junit4]   2 1023098 T1869 oascc.SolrZkClient.makePath makePath: 
 /collections/collection1/leader_elect/shard1_0/election
 [junit4:junit4]   2 1023129 T1939 
 oasc.ShardLeaderElectionContext.runLeaderProcess Running the leader process.
 [junit4:junit4]   2 1023130 T1869 
 oasc.ShardLeaderElectionContext.runLeaderProcess Running the leader process.
 [junit4:junit4]   2 1023147 T1871 oasc.SolrException.log ERROR 
 org.apache.solr.common.SolrException: We are not the leader
 [junit4:junit4]   2  at 
 org.apache.solr.handler.admin.CoreAdminHandler.handleWaitForStateAction(CoreAdminHandler.java:914)
 [junit4:junit4]   2  at 
 org.apache.solr.handler.admin.CoreAdminHandler.handleRequestBody(CoreAdminHandler.java:190)
 [junit4:junit4]   2  at 
 org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:135)
 [junit4:junit4]   2  at 
 org.apache.solr.servlet.SolrDispatchFilter.handleAdminRequest(SolrDispatchFilter.java:608)
 [junit4:junit4]   2  at 
 org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:206)
 {code}




[jira] [Commented] (SOLR-4935) persisting solr.xml preserves extraneous values like wt=json in core tags when creating cores via the admin handler

2013-06-18 Thread Erick Erickson (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4935?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13686876#comment-13686876
 ] 

Erick Erickson commented on SOLR-4935:
--

bq: What exactly happens if you don't include instanceDir

it defaults to the name of the core, pretty much as you'd expect. For instance 
if I specify name=eoe, the instance dir is eoe/, relative to solr home.

The problem is that the core loading isn't smart enough to apply the same default 
behavior if the instanceDir isn't specified in the core tag. One could easily 
argue that it _should_ (assuming the name is specified), but I'm not all that 
interested in changing functionality there and dealing with the other places such 
a change might affect when it's all going away...

But good point, I'll add a test with the following steps:
1. create 2 cores
2. check persistence is good
3. swap one of the new cores with another core
4. ensure persistence is good
5. rename the other core
6. ensure that persistence is good

It's _probably_ OK, but there's no test that I know of that actually tries this 
kind of stuff...





 persisting solr.xml preserves extraneous values like wt=json in core tags 
 when creating cores via the admin handler
 ---

 Key: SOLR-4935
 URL: https://issues.apache.org/jira/browse/SOLR-4935
 Project: Solr
  Issue Type: Bug
Affects Versions: 5.0, 4.4
Reporter: Erick Erickson
Assignee: Erick Erickson
 Attachments: SOLR-4935.patch, SOLR-4935.patch


 I'll be so happy when we stop supporting persistence.
 Two problems:
 1. if instanceDir is not specified on the create, it's not persisted, and 
 subsequent starts of Solr will fail.
 2. extraneous params are persisted, made worse by SolrJ adding some stuff on 
 the create request like wt=javabin etc.




[jira] [Commented] (SOLR-4929) (ChaosMonkey)ShardSplitTest fails often on jenkins

2013-06-18 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4929?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13686883#comment-13686883
 ] 

Mark Miller commented on SOLR-4929:
---

Whoops - didn't know this one existed when I filed SOLR-4933

 (ChaosMonkey)ShardSplitTest fails often on jenkins
 --

 Key: SOLR-4929
 URL: https://issues.apache.org/jira/browse/SOLR-4929
 Project: Solr
  Issue Type: Bug
Reporter: Shalin Shekhar Mangar
Assignee: Shalin Shekhar Mangar

 ChaosMonkeyShardSplitTest and ShardSplitTest both fail on jenkins quite often 
 with the same message always:
 {code}
 Error Message:
 Server at http://127.0.0.1:20986 returned non ok status:500, message:Server 
 Error
 Stack Trace:
 org.apache.solr.client.solrj.impl.HttpSolrServer$RemoteSolrException: Server 
 at http://127.0.0.1:20986 returned non ok status:500, message:Server Error
 at 
 __randomizedtesting.SeedInfo.seed([7262B9B042D2C205:F38437A8358DA239]:0)
 at 
 org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:385)
 at 
 org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:180)
 at 
 org.apache.solr.cloud.ShardSplitTest.splitShard(ShardSplitTest.java:228)
 at 
 org.apache.solr.cloud.ChaosMonkeyShardSplitTest.doTest(ChaosMonkeyShardSplitTest.java:136)
 at 
 org.apache.solr.BaseDistributedSearchTestCase.testDistribSearch(BaseDistributedSearchTestCase.java:815)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 {code}
 The logs show that the WaitForState action fails saying I am not the leader:
 {code}
 [junit4:junit4]   2 1023092 T1943 oasc.SolrCore.registerSearcher 
 [collection1_shard1_1_replica1] Registered new searcher Searcher@4afb5184 
 main{StandardDirectoryReader(segments_1:1)}
 [junit4:junit4]   2 1023093 T1944 oasc.SolrCore.registerSearcher 
 [collection1_shard1_0_replica1] Registered new searcher Searcher@67d0de96 
 main{StandardDirectoryReader(segments_1:1)}
 [junit4:junit4]   2 1023095 T1939 oasu.UpdateLog.bufferUpdates Starting to 
 buffer updates. FSUpdateLog{state=ACTIVE, tlog=null}
 [junit4:junit4]   2 1023095 T1869 oasu.UpdateLog.bufferUpdates Starting to 
 buffer updates. FSUpdateLog{state=ACTIVE, tlog=null}
 [junit4:junit4]   2 1023095 T1939 oasc.CoreContainer.registerCore 
 registering core: collection1_shard1_1_replica1
 [junit4:junit4]   2 1023096 T1869 oasc.CoreContainer.registerCore 
 registering core: collection1_shard1_0_replica1
 [junit4:junit4]   2 1023096 T1939 oasc.ZkController.register Register 
 replica - core:collection1_shard1_1_replica1 address:http://127.0.0.1:41605 
 collection:collection1 shard:shard1_1
 [junit4:junit4]   2 1023096 T1869 oasc.ZkController.register Register 
 replica - core:collection1_shard1_0_replica1 address:http://127.0.0.1:41605 
 collection:collection1 shard:shard1_0
 [junit4:junit4]   2 1023097 T1939 oascc.SolrZkClient.makePath makePath: 
 /collections/collection1/leader_elect/shard1_1/election
 [junit4:junit4]   2 1023098 T1869 oascc.SolrZkClient.makePath makePath: 
 /collections/collection1/leader_elect/shard1_0/election
 [junit4:junit4]   2 1023129 T1939 
 oasc.ShardLeaderElectionContext.runLeaderProcess Running the leader process.
 [junit4:junit4]   2 1023130 T1869 
 oasc.ShardLeaderElectionContext.runLeaderProcess Running the leader process.
 [junit4:junit4]   2 1023147 T1871 oasc.SolrException.log ERROR 
 org.apache.solr.common.SolrException: We are not the leader
 [junit4:junit4]   2  at 
 org.apache.solr.handler.admin.CoreAdminHandler.handleWaitForStateAction(CoreAdminHandler.java:914)
 [junit4:junit4]   2  at 
 org.apache.solr.handler.admin.CoreAdminHandler.handleRequestBody(CoreAdminHandler.java:190)
 [junit4:junit4]   2  at 
 org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:135)
 [junit4:junit4]   2  at 
 org.apache.solr.servlet.SolrDispatchFilter.handleAdminRequest(SolrDispatchFilter.java:608)
 [junit4:junit4]   2  at 
 org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:206)
 {code}




[jira] [Commented] (SOLR-4933) org.apache.solr.cloud.ShardSplitTest.testDistribSearch fails often with a 500 error.

2013-06-18 Thread Shalin Shekhar Mangar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4933?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13686888#comment-13686888
 ] 

Shalin Shekhar Mangar commented on SOLR-4933:
-

I marked SOLR-4929 as duplicate to have all comments in one issue.

It only happens on slow machines I think. I have never been able to reproduce 
it on my box.

If this happens in a real production environment, the leader may be on a 
different box, so we'll need to go and create the sub-shard cores again (on the 
new leader box); failing the split is therefore correct. The split itself will 
be retried by the Overseer Collection Processor, but the test does not take 
that into account.

 org.apache.solr.cloud.ShardSplitTest.testDistribSearch fails often with a 500 
 error.
 

 Key: SOLR-4933
 URL: https://issues.apache.org/jira/browse/SOLR-4933
 Project: Solr
  Issue Type: Bug
  Components: SolrCloud
Reporter: Mark Miller
 Fix For: 5.0, 4.4







[jira] [Commented] (SOLR-4933) org.apache.solr.cloud.ShardSplitTest.testDistribSearch fails often with a 500 error.

2013-06-18 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4933?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13686892#comment-13686892
 ] 

Mark Miller commented on SOLR-4933:
---

bq. The split itself will be retried by the Overseer Collection Processor again 
but the test does not take that into account.

Oh, okay - so the fix is really just fixing the test.

Is it the same thing with the chaos monkey shard split test?



 org.apache.solr.cloud.ShardSplitTest.testDistribSearch fails often with a 500 
 error.
 

 Key: SOLR-4933
 URL: https://issues.apache.org/jira/browse/SOLR-4933
 Project: Solr
  Issue Type: Bug
  Components: SolrCloud
Reporter: Mark Miller
 Fix For: 5.0, 4.4







[jira] [Commented] (LUCENE-5063) Allow GrowableWriter to store negative values

2013-06-18 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5063?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13686893#comment-13686893
 ] 

Michael McCandless commented on LUCENE-5063:


{quote}
bq. Maybe we should do something about the Bytes/Shorts though here...

Given that we don't even have numeric support (they are just encoded/decoded as 
strings) for these types, maybe we should just remove or deprecate them?
{quote}

+1

 Allow GrowableWriter to store negative values
 -

 Key: LUCENE-5063
 URL: https://issues.apache.org/jira/browse/LUCENE-5063
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Adrien Grand
Assignee: Adrien Grand
Priority: Trivial
 Fix For: 4.4

 Attachments: LUCENE-5063.patch


 For some use-cases, it would be convenient to be able to store negative 
 values in a GrowableWriter, for example to use it in FieldCache: The first 
 term is the minimum value and one could use a GrowableWriter to store deltas 
 between this minimum value and the current value. (The need for negative 
 values comes from the fact that maxValue - minValue might be larger than 
 Long.MAX_VALUE.)
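The delta trick described above can be sketched in plain Java. This is only an illustration of why unsigned 64-bit deltas are needed, not Lucene's actual GrowableWriter API: thanks to two's-complement wraparound, `value - min` followed by `min + delta` round-trips even when the spread between min and max exceeds Long.MAX_VALUE.

```java
public class DeltaRoundTrip {
    public static void main(String[] args) {
        long min = Long.MIN_VALUE + 10;                  // a very negative minimum
        long[] values = {min, -5L, 0L, Long.MAX_VALUE - 3};
        for (long v : values) {
            // The delta may "overflow" Long.MAX_VALUE; interpret it as unsigned.
            long delta = v - min;
            // Two's-complement wraparound restores the original value exactly.
            long decoded = min + delta;
            if (decoded != v) throw new AssertionError("round-trip failed for " + v);
        }
        System.out.println("all values round-trip");
    }
}
```

The same wraparound argument is what lets a packed-ints structure store only non-negative (unsigned) deltas while still representing negative source values.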




[jira] [Created] (SOLR-4937) SolrCloud doesn't distribute null values

2013-06-18 Thread Steve Davids (JIRA)
Steve Davids created SOLR-4937:
--

 Summary: SolrCloud doesn't distribute null values
 Key: SOLR-4937
 URL: https://issues.apache.org/jira/browse/SOLR-4937
 Project: Solr
  Issue Type: Bug
Reporter: Steve Davids
 Fix For: 4.4


When trying to overwrite field values in SolrCloud using 
doc.setField(fieldName, null), the behavior is inconsistent depending on 
the routing of the document to a specific shard. The binary format that is sent 
in preserves the null, but when the DistributedProcessor forwards the message 
to replicas it writes the message to XML using ClientUtils.writeVal(..), which 
drops any null value from the XML representation. This was especially 
problematic with a custom processor that was initially placed after the 
distributed processor and used the setField(null) approach, but was then moved 
ahead of the DistributedProcessor, where it no longer works as expected. It 
appears that I now need to update the code to doc.setField(fieldName, 
Collections.singletonMap("set", null)) for it to properly distribute throughout 
the cloud due to the XML restrictions. The fact that the custom processor needs 
to change depending on its location relative to the DistributedProcessor 
is a drag. I believe there should be a requirement that you can take a 
SolrInputDocument, serialize it to XML, deserialize it back, and assert that the 
two SolrInputDocuments are equivalent, instead of a lossy translation to XML.
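A minimal sketch of the workaround mentioned above, using a plain Map as a stand-in for SolrInputDocument (the real class lives in SolrJ and is not used here): instead of setting the field to a bare null, which the XML writer drops, the null is wrapped in a {"set": null} atomic-update map that survives serialization.

```java
import java.util.Collections;
import java.util.HashMap;
import java.util.Map;

public class NullFieldWorkaround {
    public static void main(String[] args) {
        // Plain Map standing in for a SolrInputDocument, for illustration only.
        Map<String, Object> doc = new HashMap<>();

        // Naive approach: a bare null is silently dropped by the XML writer.
        doc.put("price", null);

        // Workaround: wrap the null in an atomic-update "set" command,
        // which serializes as an explicit element and survives the round trip.
        doc.put("price", Collections.singletonMap("set", null));

        Map<?, ?> cmd = (Map<?, ?>) doc.get("price");
        if (!cmd.containsKey("set") || cmd.get("set") != null) {
            throw new AssertionError("unexpected payload");
        }
        System.out.println("atomic-update payload: " + cmd);
    }
}
```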




[jira] [Commented] (SOLR-4933) org.apache.solr.cloud.ShardSplitTest.testDistribSearch fails often with a 500 error.

2013-06-18 Thread Shalin Shekhar Mangar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4933?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13686895#comment-13686895
 ] 

Shalin Shekhar Mangar commented on SOLR-4933:
-

bq. Is it the same thing with the chaos monkey shard split test?

Yes, though there are other (separate) issues with the chaos monkey test. We 
need to start killing the overseer in there.

 org.apache.solr.cloud.ShardSplitTest.testDistribSearch fails often with a 500 
 error.
 

 Key: SOLR-4933
 URL: https://issues.apache.org/jira/browse/SOLR-4933
 Project: Solr
  Issue Type: Bug
  Components: SolrCloud
Reporter: Mark Miller
 Fix For: 5.0, 4.4







[jira] [Assigned] (SOLR-4933) org.apache.solr.cloud.ShardSplitTest.testDistribSearch fails often with a 500 error.

2013-06-18 Thread Shalin Shekhar Mangar (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-4933?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shalin Shekhar Mangar reassigned SOLR-4933:
---

Assignee: Shalin Shekhar Mangar

 org.apache.solr.cloud.ShardSplitTest.testDistribSearch fails often with a 500 
 error.
 

 Key: SOLR-4933
 URL: https://issues.apache.org/jira/browse/SOLR-4933
 Project: Solr
  Issue Type: Bug
  Components: SolrCloud
Reporter: Mark Miller
Assignee: Shalin Shekhar Mangar
 Fix For: 5.0, 4.4







[jira] [Commented] (SOLR-1726) Deep Paging and Large Results Improvements

2013-06-18 Thread Otis Gospodnetic (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-1726?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13686901#comment-13686901
 ] 

Otis Gospodnetic commented on SOLR-1726:


How ElasticSearch handles this: 
http://www.elasticsearch.org/guide/reference/api/search/scroll/
(and note how this can be used to reindex from old index to new index as 
mentioned at 
http://www.elasticsearch.org/blog/changing-mapping-with-zero-downtime/ )

 Deep Paging and Large Results Improvements
 --

 Key: SOLR-1726
 URL: https://issues.apache.org/jira/browse/SOLR-1726
 Project: Solr
  Issue Type: Improvement
Reporter: Grant Ingersoll
Assignee: Grant Ingersoll
Priority: Minor
 Fix For: 4.4

 Attachments: CommonParams.java, QParser.java, QueryComponent.java, 
 ResponseBuilder.java, SOLR-1726.patch, SOLR-1726.patch, 
 SolrIndexSearcher.java, TopDocsCollector.java, TopScoreDocCollector.java


 There are possibly ways to improve deep paging by passing 
 Solr/Lucene more information about the last page of results seen, thereby 
 saving priority queue operations. See LUCENE-2215.
 There may also be better options for retrieving large numbers of rows at a 
 time that are worth exploring. See LUCENE-2127.
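The "last page seen" idea can be sketched outside Lucene (hypothetical names; Lucene's real mechanism along these lines is IndexSearcher.searchAfter): instead of collecting offset+rows hits and discarding the offset, the client passes back the sort value of the last hit, and the collector only competes hits that sort strictly after it. Requires Java 16+ for records.

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

public class SearchAfterSketch {
    // One (score, docId) hit; scores are assumed distinct for simplicity.
    record Hit(double score, int doc) {}

    // Return the next `rows` hits sorting strictly after `last` (null = first page).
    static List<Hit> page(List<Hit> index, Hit last, int rows) {
        List<Hit> out = new ArrayList<>();
        for (Hit h : index) {
            if (last != null && h.score() >= last.score()) continue; // already paged past
            out.add(h);
            if (out.size() == rows) break;
        }
        return out;
    }

    public static void main(String[] args) {
        List<Hit> index = new ArrayList<>(List.of(
                new Hit(0.9, 1), new Hit(0.8, 2), new Hit(0.7, 3),
                new Hit(0.6, 4), new Hit(0.5, 5)));
        index.sort(Comparator.comparingDouble(Hit::score).reversed());

        List<Hit> p1 = page(index, null, 2);                  // docs 1, 2
        List<Hit> p2 = page(index, p1.get(p1.size() - 1), 2); // docs 3, 4
        System.out.println(p2.get(0).doc() + " " + p2.get(1).doc());
    }
}
```

Under this scheme the priority queue on each page only needs `rows` slots, regardless of how deep the client has paged.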




[jira] [Updated] (SOLR-4787) Join Contrib

2013-06-18 Thread Shalin Shekhar Mangar (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-4787?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shalin Shekhar Mangar updated SOLR-4787:


Fix Version/s: (was: 4.2.1)
   4.4

 Join Contrib
 

 Key: SOLR-4787
 URL: https://issues.apache.org/jira/browse/SOLR-4787
 Project: Solr
  Issue Type: New Feature
  Components: search
Affects Versions: 4.2.1
Reporter: Joel Bernstein
Priority: Minor
 Fix For: 4.4

 Attachments: SOLR-4787.patch, SOLR-4787.patch, SOLR-4787.patch, 
 SOLR-4787.patch, SOLR-4787.patch, SOLR-4787.patch, SOLR-4787.patch, 
 SOLR-4787.patch, SOLR-4787.patch


 This contrib provides a place where different join implementations can be 
 contributed to Solr. This contrib currently includes 2 join implementations. 
 The initial patch was generated from the Solr 4.3 tag. Because of changes in 
 the FieldCache API this patch will only build with Solr 4.2 or above.
 *PostFilterJoinQParserPlugin aka pjoin*
 The pjoin provides a join implementation that filters results in one core 
 based on the results of a search in another core. This is similar in 
 functionality to the JoinQParserPlugin but the implementation differs in a 
 couple of important ways.
 The first way is that the pjoin is designed to work with integer join keys 
 only. So, in order to use pjoin, integer join keys must be included in both 
 the to and from core.
 The second difference is that the pjoin builds memory structures that are 
 used to quickly connect the join keys. It also uses a custom SolrCache named 
 join to hold intermediate DocSets which are needed to build the join memory 
 structures. So, the pjoin will need more memory than the JoinQParserPlugin to 
 perform the join.
 The main advantage of the pjoin is that it can scale to join millions of keys 
 between cores.
 Because it's a PostFilter, it only needs to join records that match the main 
 query.
 The syntax of the pjoin is the same as the JoinQParserPlugin except that the 
 plugin is referenced by the string pjoin rather than join.
 fq={!pjoin fromCore=collection2 from=id_i to=id_i}user:customer1
 The example filter query above will search the fromCore (collection2) for 
 user:customer1. This query will generate a list of values from the "from" 
 field that will be used to filter the main query. Only records from the main 
 query, where the "to" field is present in the "from" list, will be included in 
 the results.
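The filtering semantics described above amount to a hash join, which can be illustrated with a self-contained sketch (plain Java collections standing in for the two cores and their integer join keys; this is not the contrib's actual code): collect the from-core keys matching the inner query, then keep only main-query results whose "to" key appears in that set.

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.List;
import java.util.Set;
import java.util.stream.Collectors;

public class PostFilterJoinSketch {
    public static void main(String[] args) {
        // "from" core: join keys of docs matching user:customer1 (assumed data)
        Set<Integer> fromKeys = new HashSet<>(Arrays.asList(3, 7, 42));

        // "to" core: join keys of docs matching the main query
        List<Integer> mainQueryDocs = Arrays.asList(1, 3, 8, 42, 99);

        // Post-filter: keep only docs whose key is present in the "from" set
        List<Integer> joined = mainQueryDocs.stream()
                .filter(fromKeys::contains)
                .collect(Collectors.toList());

        System.out.println(joined);   // [3, 42]
    }
}
```

Because the filter runs only over main-query matches, the cost scales with the result set rather than the whole index, which is the PostFilter advantage described above.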
 The solrconfig.xml in the main query core must contain the reference to the 
 pjoin:
 <queryParser name="pjoin" class="org.apache.solr.joins.PostFilterJoinQParserPlugin"/>
 And the join contrib jars must be registered in the solrconfig.xml:
 <lib dir="../../../dist/" regex="solr-joins-\d.*\.jar" />
 The solrconfig.xml in the "from" core must have the join SolrCache configured:
 <cache name="join"
        class="solr.LRUCache"
        size="4096"
        initialSize="1024"/>
 *ValueSourceJoinParserPlugin aka vjoin*
 The second implementation is the ValueSourceJoinParserPlugin aka vjoin. 
 This implements a ValueSource function query that can return a value from a 
 second core based on join keys and limiting query. The limiting query can be 
 used to select a specific subset of data from the join core. This allows 
 customer specific relevance data to be stored in a separate core and then 
 joined in the main query.
 The vjoin is called using the vjoin function query. For example:
 bf=vjoin(fromCore, fromKey, fromVal, toKey, query)
 This example shows vjoin being called by the edismax boost function 
 parameter. This example will return the fromVal from the fromCore. The 
 fromKey and toKey are used to link the records from the main query to the 
 records in the fromCore. The query is used to select a specific set of 
 records to join with in fromCore.
 Currently the fromKey and toKey must be longs but this will change in future 
 versions. Like the pjoin, the join SolrCache is used to hold the join 
 memory structures.
 To configure the vjoin you must register the ValueSource plugin in the 
 solrconfig.xml as follows:
 <valueSourceParser name="vjoin" class="org.apache.solr.joins.ValueSourceJoinParserPlugin" />




[jira] [Updated] (SOLR-4783) Rollback is not working in SolrCloud

2013-06-18 Thread Shalin Shekhar Mangar (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-4783?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shalin Shekhar Mangar updated SOLR-4783:


Fix Version/s: (was: 4.2.1)
   4.4

 Rollback is not working in SolrCloud
 

 Key: SOLR-4783
 URL: https://issues.apache.org/jira/browse/SOLR-4783
 Project: Solr
  Issue Type: Bug
  Components: SolrCloud
Affects Versions: 4.2.1
 Environment: AWS Instance
 Linux OS
Reporter: Shekar R
  Labels: features
 Fix For: 4.4


 I have a cluster of 4 Solr 4.2.1 instances and 3 ZooKeeper nodes, with an 
 haproxy frontend to the Solr instances.
 1. Add a doc (without an inline commit; autocommit is disabled in 
 solrconfig.xml)
 2. Issue rollback.
 3. Again add another doc.
 4. Issue commit.
 Both docs will be committed.




[jira] [Commented] (LUCENE-5063) Allow GrowableWriter to store negative values

2013-06-18 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5063?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13686915#comment-13686915
 ] 

Michael McCandless commented on LUCENE-5063:


+1, patch looks good!

 Allow GrowableWriter to store negative values
 -

 Key: LUCENE-5063
 URL: https://issues.apache.org/jira/browse/LUCENE-5063
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Adrien Grand
Assignee: Adrien Grand
Priority: Trivial
 Fix For: 4.4

 Attachments: LUCENE-5063.patch


 For some use-cases, it would be convenient to be able to store negative 
 values in a GrowableWriter, for example to use it in FieldCache: The first 
 term is the minimum value and one could use a GrowableWriter to store deltas 
 between this minimum value and the current value. (The need for negative 
 values comes from the fact that maxValue - minValue might be larger than 
 Long.MAX_VALUE.)




[jira] [Commented] (SOLR-4792) stop shipping a war in 5.0

2013-06-18 Thread Alejandro Abdelnur (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4792?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13686941#comment-13686941
 ] 

Alejandro Abdelnur commented on SOLR-4792:
--

I'd like to provide unsolicited feedback on going from a WAR to an embedded 
deployment model. The following is based on my experience with Oozie (WAR) and 
Hadoop (embedded).

WAR model:

* W1. it runs in any servlet container
* W2. it requires bundling servlet container to run out of the box
* W3. configuration of the webserver is independent of the application 
configuration

Embedded model:

* E1. it runs in a bundled servlet container
* E2. it bundles servlet container container code
* E3. the hosting application must configure the webserver

W1 gives the flexibility of choosing a servlet container, and upgrading it 
independently of the application. E1 requires a new release of the application.

W2 makes the binary packaging of the application fatter and more complex. E2 
streamlines the binary and simplifies the packaging.

W3 leaves completely out of scope webserver configuration from the application. 
For example: memory, threadpool serving incoming HTTP connections, security 
configuration (HTTPS). E3 requires the application to take care of all 
configuration of the 'webserver'.

Also, depending on how the webapp components are declared (servlets, filters), 
things can get messy. For example, the way Hadoop's HttpServer class registers 
servlets and filters programmatically is a big mess. If you go this path, I 
would strongly suggest keeping web.xml around as the place where you define 
your webapp components (embedded Jetty can load that).

I don't know the motivation for moving away from a WAR, but in my experience 
the WAR model has always worked well.

Hope this helps one way or the other.

 stop shipping a war in 5.0
 --

 Key: SOLR-4792
 URL: https://issues.apache.org/jira/browse/SOLR-4792
 Project: Solr
  Issue Type: Task
  Components: Build
Reporter: Robert Muir
Assignee: Robert Muir
 Fix For: 5.0

 Attachments: SOLR-4792.patch


 see the vote on the developer list.
 This is the first step: if we stop shipping a war then we are free to do 
 anything we want. 




[jira] [Commented] (SOLR-1726) Deep Paging and Large Results Improvements

2013-06-18 Thread Dmitry Kan (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-1726?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13686964#comment-13686964
 ] 

Dmitry Kan commented on SOLR-1726:
--

Scrolling is not intended for real-time user requests; it is intended for 
cases like scrolling over large portions of data that exist within 
elasticsearch, to reindex it for example.

Are there any other applications for this besides re-indexing?

Also, is it known how the scrolling is implemented internally, i.e. is it 
efficient in transferring to the client only what is needed?

 Deep Paging and Large Results Improvements
 --

 Key: SOLR-1726
 URL: https://issues.apache.org/jira/browse/SOLR-1726
 Project: Solr
  Issue Type: Improvement
Reporter: Grant Ingersoll
Assignee: Grant Ingersoll
Priority: Minor
 Fix For: 4.4

 Attachments: CommonParams.java, QParser.java, QueryComponent.java, 
 ResponseBuilder.java, SOLR-1726.patch, SOLR-1726.patch, 
 SolrIndexSearcher.java, TopDocsCollector.java, TopScoreDocCollector.java


 There are possibly ways to improve collections of deep paging by passing 
 Solr/Lucene more information about the last page of results seen, thereby 
 saving priority queue operations.   See LUCENE-2215.
 There may also be better options for retrieving large numbers of rows at a 
 time that are worth exploring.  LUCENE-2127.
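The saving described above — passing in the last entry already seen so that hits from earlier pages never re-enter the priority queue — can be sketched in plain Java. This is an illustrative sketch of the "search after" idea (Lucene exposes it as IndexSearcher.searchAfter), not Solr's actual implementation; the scores and doc ids are made up.

```java
import java.util.ArrayList;
import java.util.List;

public class SearchAfterSketch {
    // A hit is {score, docId}; allHits is ordered by descending score,
    // ascending docId as the tie-break.
    public static List<long[]> page(long[][] allHits, long[] after, int pageSize) {
        List<long[]> out = new ArrayList<>();
        for (long[] hit : allHits) {
            // Skip anything that sorts at or before the last seen hit,
            // instead of collecting and discarding whole earlier pages.
            if (after != null
                && (hit[0] > after[0] || (hit[0] == after[0] && hit[1] <= after[1]))) {
                continue;
            }
            out.add(hit);
            if (out.size() == pageSize) {
                break;
            }
        }
        return out;
    }
}
```

Calling page(hits, null, n) returns the first page; passing that page's last hit as `after` returns the next page without buffering the earlier one.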




[jira] [Commented] (LUCENE-5064) Add PagedMutable

2013-06-18 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5064?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13686977#comment-13686977
 ] 

Michael McCandless commented on LUCENE-5064:


+1

 Add PagedMutable
 

 Key: LUCENE-5064
 URL: https://issues.apache.org/jira/browse/LUCENE-5064
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Adrien Grand
Assignee: Adrien Grand
Priority: Minor
 Fix For: 4.4

 Attachments: LUCENE-5064.patch


 In the same way that we now have a PagedGrowableWriter, we could have a 
 PagedMutable which would behave just like PackedInts.Mutable but would 
 support more than 2B values.
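The idea behind such a paged structure can be sketched in plain Java. This is an illustrative sketch, not the actual PagedMutable API: values are split across fixed-size pages, the overall index is a long (so more than 2^31 values are addressable), and each page remains an ordinary array indexed by an int.

```java
public class PagedLongs {
    private static final int PAGE_SHIFT = 10;           // 1024 values per page
    private static final int PAGE_SIZE = 1 << PAGE_SHIFT;
    private static final int PAGE_MASK = PAGE_SIZE - 1;

    private final long[][] pages;

    public PagedLongs(long size) {
        // Round up to a whole number of pages.
        int pageCount = (int) ((size + PAGE_SIZE - 1) >>> PAGE_SHIFT);
        pages = new long[pageCount][];
        for (int i = 0; i < pageCount; i++) {
            pages[i] = new long[PAGE_SIZE];
        }
    }

    // High bits of the long index select the page, low bits the slot.
    public void set(long index, long value) {
        pages[(int) (index >>> PAGE_SHIFT)][(int) (index & PAGE_MASK)] = value;
    }

    public long get(long index) {
        return pages[(int) (index >>> PAGE_SHIFT)][(int) (index & PAGE_MASK)];
    }
}
```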




[jira] [Commented] (SOLR-4792) stop shipping a war in 5.0

2013-06-18 Thread Noble Paul (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4792?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13686984#comment-13686984
 ] 

Noble Paul commented on SOLR-4792:
--

bq. W1 gives the flexibility of choosing a servlet container, and upgrading it 
independently of the application

It gives flexibility, but it also adds complexity to deployment. Every shop 
has a preference for the web container they use, and this leads to an 
explosion of possible permutations. That leads to confusion about how to 
upgrade things, how to add libs to the classpath, etc. When someone posts a 
question about how to do 'x' in appserver 'y', we will be scrambling to get a 
copy of that appserver and reproduce the situation.

bq. W3 leaves completely out of scope webserver configuration from the 
application.
This is a valid point. But Solr is not a regular web application; it just 
needs a subset of the properties which webservers normally expose. We will 
probably only have a handful of configurable properties for Solr (on the 
app/webserver side)


bq.Also, depending how the webapp components are declared (servlets, filters) 
things can get messy.

We don't always need a webserver. The entire Play web framework is written 
without a servlet container. Even Solr does not need a servlet container; it 
is happy as long as it can expose itself through an HTTP interface. I 
personally believe a servlet container is overkill for Solr.

AFAIK ElasticSearch does not ship as a .war. Not shipping a .war adds a lot 
of simplicity.



 stop shipping a war in 5.0
 --

 Key: SOLR-4792
 URL: https://issues.apache.org/jira/browse/SOLR-4792
 Project: Solr
  Issue Type: Task
  Components: Build
Reporter: Robert Muir
Assignee: Robert Muir
 Fix For: 5.0

 Attachments: SOLR-4792.patch


 see the vote on the developer list.
 This is the first step: if we stop shipping a war then we are free to do 
 anything we want. 




[jira] [Commented] (SOLR-4792) stop shipping a war in 5.0

2013-06-18 Thread Shawn Heisey (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4792?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13686996#comment-13686996
 ] 

Shawn Heisey commented on SOLR-4792:


[~tucu00], thank you for taking a look and giving your input.

The issue description says to see the vote on the developer list.  If you were 
not a subscriber to the dev list in early May of this year when that vote 
happened, then here is where you can look at it:

http://mail-archives.apache.org/mod_mbox/lucene-dev/201305.mbox/browser

Make sure you are on the Thread view. Click Next to go to page 2. Search 
the page for "webapp" and then you can see/read the many emails with the 
subject "VOTE: solr no longer webapp". Looking back through the thread, I 
don't recall seeing a -1 vote from any Lucene/Solr committers. The few -1 
votes came from users/contributors.

It is completely expected that this change will be unpopular with some 
experienced users. I was initially very skeptical. [~markrmil...@gmail.com] 
put forth a rather large list of compelling reasons for switching, so I 
changed my vote from -0 to +1. Many of the good reasons seem like they are 
developer-only things ... but when the developers find themselves seriously 
constrained, the users will not see rapid development of the features that 
they want, so reducing developer pain is good for end users.

Since that vote, at least once a week I seem to come across a problem that 
would not exist (or would be very easy to fix) if Solr were a self-contained 
application.  I can't think of any examples right now.


 stop shipping a war in 5.0
 --

 Key: SOLR-4792
 URL: https://issues.apache.org/jira/browse/SOLR-4792
 Project: Solr
  Issue Type: Task
  Components: Build
Reporter: Robert Muir
Assignee: Robert Muir
 Fix For: 5.0

 Attachments: SOLR-4792.patch


 see the vote on the developer list.
 This is the first step: if we stop shipping a war then we are free to do 
 anything we want. 




[jira] [Created] (LUCENE-5065) Refactor TestGrouping.java to break TestRandom into separate tests

2013-06-18 Thread Tom Burton-West (JIRA)
Tom Burton-West created LUCENE-5065:
---

 Summary: Refactor TestGrouping.java to break TestRandom into 
separate tests
 Key: LUCENE-5065
 URL: https://issues.apache.org/jira/browse/LUCENE-5065
 Project: Lucene - Core
  Issue Type: Test
  Components: modules/grouping
Affects Versions: 4.3.1
Reporter: Tom Burton-West
Priority: Minor


 lucene/grouping/src/test/org/apache/lucene/search/grouping/TestGrouping.java 
combines multiple tests inside one test: TestRandom(). This makes it difficult 
to understand, and hard for new users to use TestGrouping.java as an entry 
point to understanding grouping functionality.

Either break TestRandom into separate tests, or add small separate tests for 
the most important parts of TestRandom.





[jira] [Updated] (LUCENE-5063) Allow GrowableWriter to store negative values

2013-06-18 Thread Adrien Grand (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5063?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adrien Grand updated LUCENE-5063:
-

Attachment: LUCENE-5063.patch

Same patch with added deprecation warnings:
 - FieldCache.get(Byte|Short)s
 - FieldCache.DEFAULT_*_PARSER (because they assume numeric data is encoded as 
strings)
 - SortField.Type.(Byte|Short)
 - (Byte|Short)FieldSource
 - Solr's ByteField and ShortField

 Allow GrowableWriter to store negative values
 -

 Key: LUCENE-5063
 URL: https://issues.apache.org/jira/browse/LUCENE-5063
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Adrien Grand
Assignee: Adrien Grand
Priority: Trivial
 Fix For: 4.4

 Attachments: LUCENE-5063.patch, LUCENE-5063.patch


 For some use-cases, it would be convenient to be able to store negative 
 values in a GrowableWriter, for example to use it in FieldCache: The first 
 term is the minimum value and one could use a GrowableWriter to store deltas 
 between this minimum value and the current value. (The need for negative 
 values comes from the fact that maxValue - minValue might be larger than 
 Long.MAX_VALUE.)




[jira] [Created] (SOLR-4938) Solr should be able to use Lucene's BlockGroupingCollector for field-collapsing

2013-06-18 Thread Tom Burton-West (JIRA)
Tom Burton-West created SOLR-4938:
-

 Summary: Solr should be able to use Lucene's 
BlockGroupingCollector for field-collapsing
 Key: SOLR-4938
 URL: https://issues.apache.org/jira/browse/SOLR-4938
 Project: Solr
  Issue Type: New Feature
Affects Versions: 4.3.1
Reporter: Tom Burton-West
Priority: Minor


In Lucene it is possible to use the BlockGroupingCollector  for grouping in 
order to take advantage of indexing document blocks: 
IndexWriter.addDocuments().   With SOLR-3076 and SOLR-3535, it is possible to 
index document blocks.   It would be nice to have an option to use the 
BlockGroupingCollector with Solr field-collapsing/grouping.   
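The advantage of block grouping can be sketched in plain Java. This is an illustrative sketch, not the BlockGroupingCollector API: when a group's documents are indexed contiguously (as IndexWriter.addDocuments guarantees), a collector can determine a document's group by walking block boundaries in doc-id order instead of doing a per-document lookup on a group key.

```java
public class BlockGroupingSketch {
    // groupEnds[i] is the docId of the last document in group i, in
    // ascending order; docId must not exceed the last group end.
    // With contiguous blocks, group membership is a boundary comparison,
    // not a hash lookup on a group-key field value.
    public static int groupOf(int docId, int[] groupEnds) {
        int group = 0;
        while (docId > groupEnds[group]) {
            group++;
        }
        return group;
    }
}
```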




[jira] [Commented] (SOLR-4916) Add support to write and read Solr index files and transaction log files to and from HDFS.

2013-06-18 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4916?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13687031#comment-13687031
 ] 

Mark Miller commented on SOLR-4916:
---

I think this is a pretty solid base to iterate on, so I'd like to commit before 
long to minimize the cost of keeping this set of changes in sync. I'll upload a 
patch updated to trunk in a bit.

 Add support to write and read Solr index files and transaction log files to 
 and from HDFS.
 --

 Key: SOLR-4916
 URL: https://issues.apache.org/jira/browse/SOLR-4916
 Project: Solr
  Issue Type: New Feature
Reporter: Mark Miller
Assignee: Mark Miller
 Attachments: SOLR-4916.patch







[jira] [Updated] (SOLR-4916) Add support to write and read Solr index files and transaction log files to and from HDFS.

2013-06-18 Thread Mark Miller (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-4916?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Miller updated SOLR-4916:
--

Attachment: SOLR-4916.patch

Patch to trunk.

 Add support to write and read Solr index files and transaction log files to 
 and from HDFS.
 --

 Key: SOLR-4916
 URL: https://issues.apache.org/jira/browse/SOLR-4916
 Project: Solr
  Issue Type: New Feature
Reporter: Mark Miller
Assignee: Mark Miller
 Attachments: SOLR-4916.patch, SOLR-4916.patch







[jira] [Updated] (SOLR-4816) Add document routing to CloudSolrServer

2013-06-18 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-4816?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-4816:
-

Attachment: SOLR-4816.patch

New patch: added a setter to turn threaded updates on and off, default off.

Added a thread pool for threaded updates.

Removed the javabin transport. We can add this back when Shawn wraps up his work. 

 Add document routing to CloudSolrServer
 ---

 Key: SOLR-4816
 URL: https://issues.apache.org/jira/browse/SOLR-4816
 Project: Solr
  Issue Type: Improvement
  Components: SolrCloud
Affects Versions: 4.3
Reporter: Joel Bernstein
Assignee: Mark Miller
Priority: Minor
 Fix For: 5.0, 4.4

 Attachments: SOLR-4816.patch, SOLR-4816.patch, SOLR-4816.patch, 
 SOLR-4816.patch, SOLR-4816.patch, SOLR-4816.patch, SOLR-4816.patch, 
 SOLR-4816.patch, SOLR-4816.patch, SOLR-4816.patch, SOLR-4816.patch, 
 SOLR-4816.patch, SOLR-4816.patch, SOLR-4816.patch, SOLR-4816.patch, 
 SOLR-4816.patch, SOLR-4816.patch, SOLR-4816.patch, SOLR-4816.patch, 
 SOLR-4816.patch, SOLR-4816.patch, SOLR-4816.patch, SOLR-4816.patch, 
 SOLR-4816-sriesenberg.patch


 This issue adds the following enhancements to CloudSolrServer's update logic:
 1) Document routing: Updates are routed directly to the correct shard leader 
 eliminating document routing at the server.
 2) Parallel update execution: Updates for each shard are executed in a 
 separate thread so parallel indexing can occur across the cluster.
 3) Javabin transport: Update requests are sent via javabin transport.
 These enhancements should allow for near linear scalability on indexing 
 throughput.
 Usage:
 CloudSolrServer cloudClient = new CloudSolrServer(zkAddress);
 SolrInputDocument doc1 = new SolrInputDocument();
 doc1.addField("id", "0");
 doc1.addField("a_t", "hello1");
 SolrInputDocument doc2 = new SolrInputDocument();
 doc2.addField("id", "2");
 doc2.addField("a_t", "hello2");
 UpdateRequest request = new UpdateRequest();
 request.add(doc1);
 request.add(doc2);
 request.setAction(AbstractUpdateRequest.ACTION.OPTIMIZE, false, false);
 NamedList response = cloudClient.request(request); // Returns a backwards 
 compatible condensed response.
 //To get more detailed response down cast to RouteResponse:
 CloudSolrServer.RouteResponse rr = (CloudSolrServer.RouteResponse)response;
 NamedList responses = rr.getRouteResponse(); 




[jira] [Updated] (SOLR-4816) Add document routing to CloudSolrServer

2013-06-18 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-4816?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-4816:
-

Description: 
This issue adds the following enhancements to CloudSolrServer's update logic:

1) Document routing: Updates are routed directly to the correct shard leader 
eliminating document routing at the server.

2) Optional parallel update execution: Updates for each shard are executed in a 
separate thread so parallel indexing can occur across the cluster.


These enhancements should allow for near linear scalability on indexing 
throughput.

Usage:

CloudSolrServer cloudClient = new CloudSolrServer(zkAddress);
cloudClient.setParallelUpdates(true); 
SolrInputDocument doc1 = new SolrInputDocument();
doc1.addField("id", "0");
doc1.addField("a_t", "hello1");
SolrInputDocument doc2 = new SolrInputDocument();
doc2.addField("id", "2");
doc2.addField("a_t", "hello2");

UpdateRequest request = new UpdateRequest();
request.add(doc1);
request.add(doc2);
request.setAction(AbstractUpdateRequest.ACTION.OPTIMIZE, false, false);

NamedList response = cloudClient.request(request); // Returns a backwards 
compatible condensed response.

//To get more detailed response down cast to RouteResponse:
CloudSolrServer.RouteResponse rr = (CloudSolrServer.RouteResponse)response;


  was:
This issue adds the following enhancements to CloudSolrServer's update logic:

1) Document routing: Updates are routed directly to the correct shard leader 
eliminating document routing at the server.

2) Parallel update execution: Updates for each shard are executed in a separate 
thread so parallel indexing can occur across the cluster.

3) Javabin transport: Update requests are sent via javabin transport.

These enhancements should allow for near linear scalability on indexing 
throughput.

Usage:

CloudSolrServer cloudClient = new CloudSolrServer(zkAddress);
SolrInputDocument doc1 = new SolrInputDocument();
doc1.addField("id", "0");
doc1.addField("a_t", "hello1");
SolrInputDocument doc2 = new SolrInputDocument();
doc2.addField("id", "2");
doc2.addField("a_t", "hello2");

UpdateRequest request = new UpdateRequest();
request.add(doc1);
request.add(doc2);
request.setAction(AbstractUpdateRequest.ACTION.OPTIMIZE, false, false);

NamedList response = cloudClient.request(request); // Returns a backwards 
compatible condensed response.

//To get more detailed response down cast to RouteResponse:
CloudSolrServer.RouteResponse rr = (CloudSolrServer.RouteResponse)response;
NamedList responses = rr.getRouteResponse(); 


 Add document routing to CloudSolrServer
 ---

 Key: SOLR-4816
 URL: https://issues.apache.org/jira/browse/SOLR-4816
 Project: Solr
  Issue Type: Improvement
  Components: SolrCloud
Affects Versions: 4.3
Reporter: Joel Bernstein
Assignee: Mark Miller
Priority: Minor
 Fix For: 5.0, 4.4

 Attachments: SOLR-4816.patch, SOLR-4816.patch, SOLR-4816.patch, 
 SOLR-4816.patch, SOLR-4816.patch, SOLR-4816.patch, SOLR-4816.patch, 
 SOLR-4816.patch, SOLR-4816.patch, SOLR-4816.patch, SOLR-4816.patch, 
 SOLR-4816.patch, SOLR-4816.patch, SOLR-4816.patch, SOLR-4816.patch, 
 SOLR-4816.patch, SOLR-4816.patch, SOLR-4816.patch, SOLR-4816.patch, 
 SOLR-4816.patch, SOLR-4816.patch, SOLR-4816.patch, SOLR-4816.patch, 
 SOLR-4816-sriesenberg.patch


 This issue adds the following enhancements to CloudSolrServer's update logic:
 1) Document routing: Updates are routed directly to the correct shard leader 
 eliminating document routing at the server.
 2) Optional parallel update execution: Updates for each shard are executed in 
 a separate thread so parallel indexing can occur across the cluster.
 These enhancements should allow for near linear scalability on indexing 
 throughput.
 Usage:
 CloudSolrServer cloudClient = new CloudSolrServer(zkAddress);
 cloudClient.setParallelUpdates(true); 
 SolrInputDocument doc1 = new SolrInputDocument();
 doc1.addField("id", "0");
 doc1.addField("a_t", "hello1");
 SolrInputDocument doc2 = new SolrInputDocument();
 doc2.addField("id", "2");
 doc2.addField("a_t", "hello2");
 UpdateRequest request = new UpdateRequest();
 request.add(doc1);
 request.add(doc2);
 request.setAction(AbstractUpdateRequest.ACTION.OPTIMIZE, false, false);
 NamedList response = cloudClient.request(request); // Returns a backwards 
 compatible condensed response.
 //To get more detailed response down cast to RouteResponse:
 CloudSolrServer.RouteResponse rr = (CloudSolrServer.RouteResponse)response;


[JENKINS] Lucene-Solr-4.x-Linux (64bit/jdk1.8.0-ea-b93) - Build # 6125 - Failure!

2013-06-18 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-4.x-Linux/6125/
Java: 64bit/jdk1.8.0-ea-b93 -XX:-UseCompressedOops -XX:+UseSerialGC

1 tests failed.
REGRESSION:  
org.apache.lucene.util.packed.TestPackedInts.testPagedGrowableWriter

Error Message:
expected:<7013336> but was:<7013328>

Stack Trace:
java.lang.AssertionError: expected:<7013336> but was:<7013328>
at 
__randomizedtesting.SeedInfo.seed([75B3938E6BFFE272:928D659AF5DC8D73]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.failNotEquals(Assert.java:647)
at org.junit.Assert.assertEquals(Assert.java:128)
at org.junit.Assert.assertEquals(Assert.java:472)
at org.junit.Assert.assertEquals(Assert.java:456)
at 
org.apache.lucene.util.packed.TestPackedInts.testPagedGrowableWriter(TestPackedInts.java:689)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:491)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1559)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.access$600(RandomizedRunner.java:79)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:773)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:787)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.TestRuleFieldCacheSanity$1.evaluate(TestRuleFieldCacheSanity.java:51)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:70)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:358)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:782)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:442)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:746)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:648)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:682)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:693)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:43)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:70)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:358)
at java.lang.Thread.run(Thread.java:724)




Build Log:
[...truncated 897 lines...]
[junit4:junit4] Suite: org.apache.lucene.util.packed.TestPackedInts
[junit4:junit4]   2 NOTE: reproduce with: ant test  -Dtestcase=TestPackedInts 
-Dtests.method=testPagedGrowableWriter -Dtests.seed=75B3938E6BFFE272 
-Dtests.multiplier=3 -Dtests.slow=true 

Re: [JENKINS] Lucene-Solr-4.x-Linux (64bit/jdk1.8.0-ea-b93) - Build # 6125 - Failure!

2013-06-18 Thread Adrien Grand
PagedMutable.ramBytesUsed is wrong when compressed oops are off. I'm
looking into it...

--
Adrien




[jira] [Commented] (SOLR-4916) Add support to write and read Solr index files and transaction log files to and from HDFS.

2013-06-18 Thread Jack Krupansky (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4916?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13687167#comment-13687167
 ] 

Jack Krupansky commented on SOLR-4916:
--

Is this intended to be a 5.0-only feature, or 4.x or maybe 4.5 or maybe even 
4.4?

Aren't there a lot of different distributions of Hadoop? So, when Andrzej 
mentions this patch adding Hadoop as a core Solr dependency, what exactly will 
that dependency be? 1.2.0? Or, will the Hadoop release be 
pluggable/configurable?



 Add support to write and read Solr index files and transaction log files to 
 and from HDFS.
 --

 Key: SOLR-4916
 URL: https://issues.apache.org/jira/browse/SOLR-4916
 Project: Solr
  Issue Type: New Feature
Reporter: Mark Miller
Assignee: Mark Miller
 Attachments: SOLR-4916.patch, SOLR-4916.patch







Re: [JENKINS] Lucene-Solr-4.x-Linux (64bit/jdk1.8.0-ea-b93) - Build # 6125 - Failure!

2013-06-18 Thread Adrien Grand
Sorry, I meant PagedGrowableWriter, not PagedMutable. The reason why
RamUsageEstimator gives a different result is that it relies on Unsafe
to compute field offsets (RamUsageEstimator.JvmFeature.FIELD_OFFSETS;
when this feature is disabled, RamUsageEstimator agrees with
PagedGrowableWriter.ramBytesUsed), so it seems to me that there is no
way to fix this method without using Unsafe and reflection.
Since constant deltas don't matter for this method, I will just relax
the test a bit and ensure the estimate is not too far from the actual
result.

--
Adrien
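Relaxing an exact equality assertion into a bounded-error check, as described above, can be sketched like this. This is illustrative only: the class and method names are made up, and the 10% slack used in the test below is an arbitrary choice, not the value used in the actual Lucene test.

```java
public class RamBytesCheck {
    // Accept an estimate within a ratio `slack` of the expected value,
    // since constant per-object deltas don't matter for this check.
    public static boolean closeEnough(long expected, long actual, double slack) {
        return Math.abs(expected - actual) <= expected * slack;
    }
}
```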




[jira] [Commented] (SOLR-4935) persisting solr.xml preserves extraneous values like wt=json in core tags when creating cores via the admin handler

2013-06-18 Thread Al Wold (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4935?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13687246#comment-13687246
 ] 

Al Wold commented on SOLR-4935:
---

After applying the patch to branch_4x, everything seems to be working well for 
me. I'll continue to test with this and update if I see any more problems.

 persisting solr.xml preserves extraneous values like wt=json in core tags 
 when creating cores via the admin handler
 ---

 Key: SOLR-4935
 URL: https://issues.apache.org/jira/browse/SOLR-4935
 Project: Solr
  Issue Type: Bug
Affects Versions: 5.0, 4.4
Reporter: Erick Erickson
Assignee: Erick Erickson
 Attachments: SOLR-4935.patch, SOLR-4935.patch


 I'll be so happy when we stop supporting persistence.
 Two problems:
 1. if instanceDir is not specified on the create, it's not persisted, and 
 subsequent starts of Solr will fail.
 2. extraneous params are specified, made worse by SolrJ adding some stuff on 
 the create request like wt=javabin etc.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (SOLR-4934) Prevent runtime failure if users use initargs useCompoundFile setting on LogMergePolicy or TieredMergePolicy

2013-06-18 Thread Hoss Man (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4934?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13687310#comment-13687310
 ] 

Hoss Man commented on SOLR-4934:


Committed revision 1494348.

The fix itself was fairly small, and the bulk of the change was svn copying of 
test configs, so i just went ahead and committed to trunk instead of attaching a 
patch.

If there are no objections, i'll backport to 4x later tonight or early tomorrow.

 Prevent runtime failure if users use initargs useCompoundFile setting on 
 LogMergePolicy or TieredMergePolicy
 --

 Key: SOLR-4934
 URL: https://issues.apache.org/jira/browse/SOLR-4934
 Project: Solr
  Issue Type: Bug
Reporter: Hoss Man
Assignee: Hoss Man
 Fix For: 5.0, 4.4


 * LUCENE-5038 eliminated setUseCompoundFile(boolean) from the built in 
 MergePolicies
 * existing users may have configs that use mergePolicy init args to try and 
 call that setter
 * we already do some explicit checks for these MergePolices in 
 SolrIndexConfig to deal with legacy syntax
 * update the existing logic to remove useCompoundFile from the MergePolicy 
 initArgs for these known policies if found, and log a warning.
 (NOTE: i don't want to arbitrarily remove useCompoundFile from the initArgs 
 regardless of class in case someone has a custom MergePolicy that implements 
 that logic -- that would suck)
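The initArgs handling described above can be sketched as follows. This is a hypothetical illustration, not SolrIndexConfig's actual code: the class name, method, and the suffix-based class check are assumptions made for the example.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of the legacy-config handling described above: strip the
// no-longer-supported useCompoundFile init arg for the known built-in merge
// policies and warn, but leave it untouched for unknown (possibly custom)
// policy classes that may still implement the setter themselves.
public class MergePolicyInitArgs {
    public static Map<String, Object> sanitize(String policyClass, Map<String, Object> initArgs) {
        Map<String, Object> args = new HashMap<>(initArgs);
        boolean builtIn = policyClass.endsWith("TieredMergePolicy")
                || policyClass.endsWith("LogByteSizeMergePolicy")
                || policyClass.endsWith("LogDocMergePolicy");
        if (builtIn && args.remove("useCompoundFile") != null) {
            // warn instead of failing at runtime when the setter is invoked
            System.err.println("WARN: ignoring legacy useCompoundFile init arg for " + policyClass);
        }
        return args;
    }
}
```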




[JENKINS] Lucene-Solr-4.x-Linux (32bit/jdk1.8.0-ea-b93) - Build # 6128 - Still Failing!

2013-06-18 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-4.x-Linux/6128/
Java: 32bit/jdk1.8.0-ea-b93 -server -XX:+UseG1GC

2 tests failed.
FAILED:  junit.framework.TestSuite.org.apache.lucene.index.TestFieldsReader

Error Message:
Captured an uncaught exception in thread: Thread[id=298, name=Lucene Merge 
Thread #0, state=RUNNABLE, group=TGRP-TestFieldsReader]

Stack Trace:
com.carrotsearch.randomizedtesting.UncaughtExceptionError: Captured an uncaught 
exception in thread: Thread[id=298, name=Lucene Merge Thread #0, 
state=RUNNABLE, group=TGRP-TestFieldsReader]
Caused by: org.apache.lucene.index.MergePolicy$MergeException: 
java.lang.OutOfMemoryError: Java heap space
at __randomizedtesting.SeedInfo.seed([2EE02ABE17F63E4B]:0)
at 
org.apache.lucene.index.ConcurrentMergeScheduler.handleMergeException(ConcurrentMergeScheduler.java:541)
at 
org.apache.lucene.index.ConcurrentMergeScheduler$MergeThread.run(ConcurrentMergeScheduler.java:514)
Caused by: java.lang.OutOfMemoryError: Java heap space
at org.apache.lucene.util.BytesRef.copyBytes(BytesRef.java:196)
at org.apache.lucene.util.BytesRef.deepCopyOf(BytesRef.java:343)
at 
org.apache.lucene.codecs.lucene3x.TermBuffer.toTerm(TermBuffer.java:113)
at 
org.apache.lucene.codecs.lucene3x.SegmentTermEnum.term(SegmentTermEnum.java:184)
at 
org.apache.lucene.codecs.lucene3x.Lucene3xFields$PreTermsEnum.next(Lucene3xFields.java:863)
at 
org.apache.lucene.index.MultiTermsEnum.pushTop(MultiTermsEnum.java:292)
at org.apache.lucene.index.MultiTermsEnum.next(MultiTermsEnum.java:318)
at org.apache.lucene.codecs.TermsConsumer.merge(TermsConsumer.java:103)
at org.apache.lucene.codecs.FieldsConsumer.merge(FieldsConsumer.java:72)
at 
org.apache.lucene.index.SegmentMerger.mergeTerms(SegmentMerger.java:365)
at org.apache.lucene.index.SegmentMerger.merge(SegmentMerger.java:98)
at 
org.apache.lucene.index.IndexWriter.mergeMiddle(IndexWriter.java:3767)
at org.apache.lucene.index.IndexWriter.merge(IndexWriter.java:3371)
at 
org.apache.lucene.index.ConcurrentMergeScheduler.doMerge(ConcurrentMergeScheduler.java:401)
at 
org.apache.lucene.index.ConcurrentMergeScheduler$MergeThread.run(ConcurrentMergeScheduler.java:478)


REGRESSION:  org.apache.lucene.index.TestFieldsReader.testExceptions

Error Message:
this writer hit an OutOfMemoryError; cannot complete forceMerge

Stack Trace:
java.lang.IllegalStateException: this writer hit an OutOfMemoryError; cannot 
complete forceMerge
at 
__randomizedtesting.SeedInfo.seed([2EE02ABE17F63E4B:58E17816BF8B40FD]:0)
at org.apache.lucene.index.IndexWriter.forceMerge(IndexWriter.java:1704)
at org.apache.lucene.index.IndexWriter.forceMerge(IndexWriter.java:1650)
at 
org.apache.lucene.index.TestFieldsReader.testExceptions(TestFieldsReader.java:204)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:491)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1559)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.access$600(RandomizedRunner.java:79)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:773)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:787)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.TestRuleFieldCacheSanity$1.evaluate(TestRuleFieldCacheSanity.java:51)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:70)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:358)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:782)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:442)
at 

Re: [JENKINS] Lucene-Solr-4.x-Linux (32bit/jdk1.8.0-ea-b93) - Build # 6128 - Still Failing!

2013-06-18 Thread Robert Muir
This has failed several times, since LUCENE-5038 was committed. But that
commit didn't really change the test, except to setCFSRatio(0.0)

and this test indexes only one document!!

I'll look at this later tonight if nobody beats me to it, seems like
something is really wrong.

On Tue, Jun 18, 2013 at 4:21 PM, Policeman Jenkins Server 
jenk...@thetaphi.de wrote:


Re: [JENKINS] Lucene-Solr-4.x-Linux (32bit/jdk1.8.0-ea-b93) - Build # 6128 - Still Failing!

2013-06-18 Thread Robert Muir
somehow this relates to the delegation in FaultyIndexInput (NOT its
exception-generation, which isn't yet happening)

and the surrogates dance is somehow tickling the delegator bug in such a
way that it thinks a single term is 262MB. That's why it only fails on 3.x

On Tue, Jun 18, 2013 at 4:38 PM, Robert Muir rcm...@gmail.com wrote:

 This has failed several times, since LUCENE-5038 was committed. But that
 commit didn't really change the test, except to setCFSRatio(0.0)

 and this test indexes only one document!!

 I'll look at this later tonight if nobody beats me to it, seems like
 something is really wrong.

 On Tue, Jun 18, 2013 at 4:21 PM, Policeman Jenkins Server 
 jenk...@thetaphi.de wrote:


[jira] [Comment Edited] (SOLR-4926) I am seeing RecoveryZkTest and ChaosMonkeySafeLeaderTest fail often on trunk.

2013-06-18 Thread Yonik Seeley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4926?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13686821#comment-13686821
 ] 

Yonik Seeley edited comment on SOLR-4926 at 6/19/13 12:16 AM:
--

In some of the fails, I'm seeing some errors of this form:
{code}
  2 Caused by: org.apache.solr.common.SolrException: Error opening Reader
  2at 
org.apache.solr.search.SolrIndexSearcher.getReader(SolrIndexSearcher.java:174)
  2at 
org.apache.solr.search.SolrIndexSearcher.init(SolrIndexSearcher.java:185)
  2at 
org.apache.solr.search.SolrIndexSearcher.init(SolrIndexSearcher.java:181)
  2at org.apache.solr.core.SolrCore.openNewSearcher(SolrCore.java:1487)
  2... 15 more
  2 Caused by: java.lang.AssertionError: liveDocs.count()=4 info.docCount=6 
info.getDelCount()=6
  2at 
org.apache.lucene.codecs.lucene40.Lucene40LiveDocsFormat.readLiveDocs(Lucene40LiveDocsFormat.java:92)
{code}

edit: it looks like this type of error is appearing in about 20% of my fails.

  was (Author: ysee...@gmail.com):
In some of the fails, I'm seeing some errors of this form:
{code}
  2 Caused by: org.apache.solr.common.SolrException: Error opening Reader
  2at 
org.apache.solr.search.SolrIndexSearcher.getReader(SolrIndexSearcher.java:174)
  2at 
org.apache.solr.search.SolrIndexSearcher.init(SolrIndexSearcher.java:185)
  2at 
org.apache.solr.search.SolrIndexSearcher.init(SolrIndexSearcher.java:181)
  2at org.apache.solr.core.SolrCore.openNewSearcher(SolrCore.java:1487)
  2... 15 more
  2 Caused by: java.lang.AssertionError: liveDocs.count()=4 info.docCount=6 
info.getDelCount()=6
  2at 
org.apache.lucene.codecs.lucene40.Lucene40LiveDocsFormat.readLiveDocs(Lucene40LiveDocsFormat.java:92)
{code}
  
 I am seeing RecoveryZkTest and ChaosMonkeySafeLeaderTest fail often on trunk.
 -

 Key: SOLR-4926
 URL: https://issues.apache.org/jira/browse/SOLR-4926
 Project: Solr
  Issue Type: Bug
  Components: SolrCloud
Reporter: Mark Miller
Assignee: Mark Miller
Priority: Blocker
 Fix For: 5.0, 4.4







[JENKINS-MAVEN] Lucene-Solr-Maven-trunk #884: POMs out of sync

2013-06-18 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Maven-trunk/884/

1 tests failed.
REGRESSION:  org.apache.solr.cloud.ChaosMonkeySafeLeaderTest.testDistribSearch

Error Message:
shard3 is not consistent.  Got 870 from 
http://127.0.0.1:39122/jbx/collection1lastClient and got 868 from 
http://127.0.0.1:16727/jbx/collection1

Stack Trace:
java.lang.AssertionError: shard3 is not consistent.  Got 870 from 
http://127.0.0.1:39122/jbx/collection1lastClient and got 868 from 
http://127.0.0.1:16727/jbx/collection1
at 
__randomizedtesting.SeedInfo.seed([4C57DDF8E236F645:CDB153E095699679]:0)
at org.junit.Assert.fail(Assert.java:93)
at 
org.apache.solr.cloud.AbstractFullDistribZkTestBase.checkShardConsistency(AbstractFullDistribZkTestBase.java:1018)
at 
org.apache.solr.cloud.ChaosMonkeySafeLeaderTest.doTest(ChaosMonkeySafeLeaderTest.java:137)




Build Log:
[...truncated 23644 lines...]




[JENKINS] Lucene-Solr-trunk-MacOSX (64bit/jdk1.7.0) - Build # 575 - Still Failing!

2013-06-18 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-MacOSX/575/
Java: 64bit/jdk1.7.0 -XX:-UseCompressedOops -XX:+UseG1GC

1 tests failed.
FAILED:  org.apache.solr.client.solrj.TestBatchUpdate.testWithBinary

Error Message:
IOException occured when talking to server at: 
https://127.0.0.1:51727/solr/collection1

Stack Trace:
org.apache.solr.client.solrj.SolrServerException: IOException occured when 
talking to server at: https://127.0.0.1:51727/solr/collection1
at 
__randomizedtesting.SeedInfo.seed([77D8AE3379F9AC8E:25AEE02CFE16769D]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:435)
at 
org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:180)
at 
org.apache.solr.client.solrj.request.AbstractUpdateRequest.process(AbstractUpdateRequest.java:117)
at org.apache.solr.client.solrj.SolrServer.commit(SolrServer.java:168)
at org.apache.solr.client.solrj.SolrServer.commit(SolrServer.java:146)
at 
org.apache.solr.client.solrj.TestBatchUpdate.doIt(TestBatchUpdate.java:130)
at 
org.apache.solr.client.solrj.TestBatchUpdate.testWithBinary(TestBatchUpdate.java:62)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:601)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1559)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.access$600(RandomizedRunner.java:79)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:773)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:787)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.TestRuleFieldCacheSanity$1.evaluate(TestRuleFieldCacheSanity.java:51)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:70)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:358)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:782)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:442)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:746)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:648)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:682)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:693)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:43)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 

[jira] [Created] (LUCENE-5066) TestFieldsReader fails in 4.x with OOM

2013-06-18 Thread Robert Muir (JIRA)
Robert Muir created LUCENE-5066:
---

 Summary: TestFieldsReader fails in 4.x with OOM
 Key: LUCENE-5066
 URL: https://issues.apache.org/jira/browse/LUCENE-5066
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Robert Muir
 Attachments: LUCENE-5066.patch

Its FaultyIndexInput is broken (doesn't implement seek/clone correctly).

This causes it to read bogus data and try to allocate an enormous byte[] for a 
term.

The bug was previously hidden:
FaultyDirectory doesn't override openSlice, so CFS must not be used at flush if 
you want to trigger the bug.
FaultyIndexInput's clone is broken: it uses new but doesn't seek the clone to 
the right place. This causes a disaster with BufferedIndexInput (which it 
extends), because BufferedIndexInput (not just the delegate) must know its 
position since it has seek-within-block etc. code...

It seems with this test (a very simple one) that only the 3.x codec triggers it, 
because its term dict relies upon clones being seek'd to the right place. 

I'm not sure what other codecs rely upon this, but imo we should also add a 
low-level test for directories that does something like this to ensure it's 
really tested:

{code}
dir.createOutput(x);
dir.openInput(x);
input.seek(somewhere);
clone = input.clone();
assertEquals(somewhere, clone.getFilePointer());
{code}
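The failure mode behind that assert can be modeled without any Lucene types. This is a hedged sketch: the Reader class and its methods are stand-ins for the real IndexInput API, showing why a clone built with plain `new` loses the parent's position.

```java
// Self-contained model (not Lucene's API) of the clone bug described above:
// a clone created with plain `new` starts at position 0 instead of inheriting
// the parent's file pointer, so subsequent reads come from the wrong offset.
public class ClonePositionDemo {
    static class Reader {
        final byte[] data;
        long pos;
        Reader(byte[] data) { this.data = data; }
        void seek(long p) { pos = p; }
        long getFilePointer() { return pos; }

        // Broken: a fresh object, the parent's position is lost
        // (analogous to FaultyIndexInput's mistake).
        Reader brokenClone() { return new Reader(data); }

        // Correct: the copy continues from the parent's current position.
        Reader goodClone() { Reader c = new Reader(data); c.pos = pos; return c; }
    }

    public static void main(String[] args) {
        Reader in = new Reader(new byte[64]);
        in.seek(42);
        System.out.println(in.brokenClone().getFilePointer()); // prints 0
        System.out.println(in.goodClone().getFilePointer());   // prints 42
    }
}
```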





[jira] [Updated] (LUCENE-5066) TestFieldsReader fails in 4.x with OOM

2013-06-18 Thread Robert Muir (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5066?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Muir updated LUCENE-5066:


Attachment: LUCENE-5066.patch

here's a patch against 4.x

 TestFieldsReader fails in 4.x with OOM
 --

 Key: LUCENE-5066
 URL: https://issues.apache.org/jira/browse/LUCENE-5066
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Robert Muir
 Attachments: LUCENE-5066.patch


 Its FaultyIndexInput is broken (doesn't implement seek/clone correctly).
 This causes it to read bogus data and try to allocate an enormous byte[] for 
 a term.
 The bug was previously hidden:
 FaultyDirectory doesn't override openSlice, so CFS must not be used at flush 
 if you want to trigger the bug.
 FaultyIndexInput's clone is broken: it uses new but doesn't seek the clone 
 to the right place. This causes a disaster with BufferedIndexInput (which it 
 extends), because BufferedIndexInput (not just the delegate) must know its 
 position since it has seek-within-block etc. code...
 It seems with this test (a very simple one) that only the 3.x codec triggers 
 it, because its term dict relies upon clones being seek'd to the right place. 
 I'm not sure what other codecs rely upon this, but imo we should also add a 
 low-level test for directories that does something like this to ensure it's 
 really tested:
 {code}
 dir.createOutput(x);
 dir.openInput(x);
 input.seek(somewhere);
 clone = input.clone();
 assertEquals(somewhere, clone.getFilePointer());
 {code}




Re: [JENKINS] Lucene-Solr-4.x-Linux (32bit/jdk1.8.0-ea-b93) - Build # 6128 - Still Failing!

2013-06-18 Thread Robert Muir
this is an ugly test bug: i opened an issue for it (
https://issues.apache.org/jira/browse/LUCENE-5066)

I think the patch is ok to fix this test fail, but as noted in the issue, we
should probably add a simple low-level test for this to all real
directories.

also, i think it would be good to move
DocumentsWriterPerThread.MAX_TERM_LENGTH_UTF8 somewhere else (e.g.
IndexWriter), so we can add a similar assert to all the codecs: this way,
instead of an OOM, we know stuff is really jacked up.
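The suggested codec-side sanity check could look roughly like this. It is an illustrative sketch, not a real codec API: the method name is invented, though 32766 does match DocumentsWriterPerThread.MAX_TERM_LENGTH_UTF8 at the time.

```java
// Hedged sketch of the check suggested above: validate a term length read
// from the index against the indexing-time maximum before allocating, so a
// corrupt length fails fast instead of surfacing later as an OutOfMemoryError.
public class TermLengthCheck {
    static final int MAX_TERM_LENGTH_UTF8 = 32766;

    static byte[] allocateTermBuffer(int claimedLength) {
        if (claimedLength < 0 || claimedLength > MAX_TERM_LENGTH_UTF8) {
            throw new IllegalStateException("corrupt term length: " + claimedLength);
        }
        return new byte[claimedLength];
    }

    public static void main(String[] args) {
        System.out.println(allocateTermBuffer(100).length); // prints 100
        try {
            // the bogus ~262MB length from this failure would be rejected
            allocateTermBuffer(262 * 1024 * 1024);
        } catch (IllegalStateException e) {
            System.out.println(e.getMessage());
        }
    }
}
```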

On Tue, Jun 18, 2013 at 5:13 PM, Robert Muir rcm...@gmail.com wrote:

 somehow this relates to the delegation in faultyindexinput (NOT its
 exception-generation, which isnt yet happening)

 and the surrogates dance is somehow tickling the delegator-bug in such a
 way that it thinks a single term is 262MB. thats why it only fails on 3.x


 On Tue, Jun 18, 2013 at 4:38 PM, Robert Muir rcm...@gmail.com wrote:

 This has failed several times, since LUCENE-5038 was committed. But that
 commit didn't really change the test, except to setCFSRatio(0.0)

 and this test indexes only one document!!

 I'll look at this later tonight if nobody beats me to it, seems like
 something is really wrong.

 On Tue, Jun 18, 2013 at 4:21 PM, Policeman Jenkins Server 
 jenk...@thetaphi.de wrote:

 Build: http://jenkins.thetaphi.de/job/Lucene-Solr-4.x-Linux/6128/
 Java: 32bit/jdk1.8.0-ea-b93 -server -XX:+UseG1GC

 2 tests failed.
 FAILED:
  junit.framework.TestSuite.org.apache.lucene.index.TestFieldsReader

 Error Message:
 Captured an uncaught exception in thread: Thread[id=298, name=Lucene
 Merge Thread #0, state=RUNNABLE, group=TGRP-TestFieldsReader]

 Stack Trace:
 com.carrotsearch.randomizedtesting.UncaughtExceptionError: Captured an
 uncaught exception in thread: Thread[id=298, name=Lucene Merge Thread #0,
 state=RUNNABLE, group=TGRP-TestFieldsReader]
 Caused by: org.apache.lucene.index.MergePolicy$MergeException:
 java.lang.OutOfMemoryError: Java heap space
 at __randomizedtesting.SeedInfo.seed([2EE02ABE17F63E4B]:0)
 at org.apache.lucene.index.ConcurrentMergeScheduler.handleMergeException(ConcurrentMergeScheduler.java:541)
 at org.apache.lucene.index.ConcurrentMergeScheduler$MergeThread.run(ConcurrentMergeScheduler.java:514)
 Caused by: java.lang.OutOfMemoryError: Java heap space
 at org.apache.lucene.util.BytesRef.copyBytes(BytesRef.java:196)
 at org.apache.lucene.util.BytesRef.deepCopyOf(BytesRef.java:343)
 at org.apache.lucene.codecs.lucene3x.TermBuffer.toTerm(TermBuffer.java:113)
 at org.apache.lucene.codecs.lucene3x.SegmentTermEnum.term(SegmentTermEnum.java:184)
 at org.apache.lucene.codecs.lucene3x.Lucene3xFields$PreTermsEnum.next(Lucene3xFields.java:863)
 at org.apache.lucene.index.MultiTermsEnum.pushTop(MultiTermsEnum.java:292)
 at org.apache.lucene.index.MultiTermsEnum.next(MultiTermsEnum.java:318)
 at org.apache.lucene.codecs.TermsConsumer.merge(TermsConsumer.java:103)
 at org.apache.lucene.codecs.FieldsConsumer.merge(FieldsConsumer.java:72)
 at org.apache.lucene.index.SegmentMerger.mergeTerms(SegmentMerger.java:365)
 at org.apache.lucene.index.SegmentMerger.merge(SegmentMerger.java:98)
 at org.apache.lucene.index.IndexWriter.mergeMiddle(IndexWriter.java:3767)
 at org.apache.lucene.index.IndexWriter.merge(IndexWriter.java:3371)
 at org.apache.lucene.index.ConcurrentMergeScheduler.doMerge(ConcurrentMergeScheduler.java:401)
 at org.apache.lucene.index.ConcurrentMergeScheduler$MergeThread.run(ConcurrentMergeScheduler.java:478)


 REGRESSION:  org.apache.lucene.index.TestFieldsReader.testExceptions

 Error Message:
 this writer hit an OutOfMemoryError; cannot complete forceMerge

 Stack Trace:
 java.lang.IllegalStateException: this writer hit an OutOfMemoryError;
 cannot complete forceMerge
 at __randomizedtesting.SeedInfo.seed([2EE02ABE17F63E4B:58E17816BF8B40FD]:0)
 at org.apache.lucene.index.IndexWriter.forceMerge(IndexWriter.java:1704)
 at org.apache.lucene.index.IndexWriter.forceMerge(IndexWriter.java:1650)
 at org.apache.lucene.index.TestFieldsReader.testExceptions(TestFieldsReader.java:204)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
 at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke(Method.java:491)
 at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1559)
 at com.carrotsearch.randomizedtesting.RandomizedRunner.access$600(RandomizedRunner.java:79)
 at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:737)
 at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:773)
 at
 

[jira] [Created] (LUCENE-5067) add a BaseDirectoryTestCase

2013-06-18 Thread Robert Muir (JIRA)
Robert Muir created LUCENE-5067:
---

 Summary: add a BaseDirectoryTestCase
 Key: LUCENE-5067
 URL: https://issues.apache.org/jira/browse/LUCENE-5067
 Project: Lucene - Core
  Issue Type: Test
Reporter: Robert Muir


Currently most directory code is tested indirectly. But there are still corner-case 
tests like LUCENE-5066, NRTCachingDirectory.testNoDir, and 
TestRAMDirectory.testSeekToEOFThenBack that only target the specific directories 
where some user reported the bug. If one of our other directories has these 
bugs, the best we can hope for is that some other Lucene test will trip it 
indirectly and we will find it after lots of debugging...

Instead we should herd up all these tests into a base class and test every 
directory explicitly and directly with it (like we do with the codec API).
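To make the shape of this concrete: the idea is an abstract base class that holds every shared corner-case test once, with each concrete subclass only supplying the implementation under test. A minimal sketch, assuming a made-up `Store` interface rather than the real `Directory` API (all names here are illustrative, not what BaseDirectoryTestCase would actually look like):

```java
import java.util.HashMap;
import java.util.Map;

public class BaseStoreTestCase {
    // Stand-in for the Directory abstraction under test.
    interface Store {
        void put(String name, byte[] data);
        byte[] get(String name);
    }

    // One concrete implementation; each real Directory would get its own subclass.
    static class MapStore implements Store {
        private final Map<String, byte[]> files = new HashMap<>();
        public void put(String name, byte[] data) { files.put(name, data); }
        public byte[] get(String name) { return files.get(name); }
    }

    // Shared corner-case tests: written once, run against every implementation.
    static void testRoundTrip(Store store) {
        store.put("x", new byte[] {1, 2, 3});
        byte[] back = store.get("x");
        if (back == null || back.length != 3) throw new AssertionError("round trip failed");
    }

    static void testMissingFile(Store store) {
        if (store.get("does-not-exist") != null) throw new AssertionError("expected null for missing name");
    }

    public static void main(String[] args) {
        Store[] impls = { new MapStore() };
        for (Store s : impls) { // every implementation gets every test
            testRoundTrip(s);
            testMissingFile(s);
        }
        System.out.println("ok");
    }
}
```

In the real test framework the "run every test against every implementation" loop falls out of JUnit inheritance instead of an explicit array, but the payoff is the same: a bug fixed for one directory is automatically guarded for all of them.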


--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-5066) TestFieldsReader fails in 4.x with OOM

2013-06-18 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5066?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13687581#comment-13687581
 ] 

Robert Muir commented on LUCENE-5066:
-

I opened LUCENE-5067 for a way to add tests for this (and other existing ones) 
for all of our directories.

We should also open another issue to improve codecs so that they don't OOM 
trying to allocate absurdly huge data structures when a bug produces a bogus 
size, but instead trip a check or an assert.

We discussed some of these ideas at berlin buzzwords:

instead of:
{code}
int size = readVInt();
byte something[] = new byte[size]; // gives OOM if 'size' is corrupt: no chance for read-past-EOF
readBytes(something, 0, size);
{code}

we can do:
{code}
int size = readVInt();
assert size < MAX_SIZE; // for something like terms with a bounded limit
assert getFilePointer() + size <= length(); // for something unbounded: at least it must not exceed the file's length
byte something[] = new byte[size];
readBytes(something, 0, size);
{code}
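Outside of Lucene's DataInput, the same idea can be shown with plain java.io: validate the length header against the stream's known size before allocating, so a corrupt length surfaces as a clean exception instead of an OutOfMemoryError. This is an illustrative sketch (fixed 4-byte length instead of a vInt, names are made up):

```java
import java.io.ByteArrayInputStream;
import java.io.DataInputStream;
import java.io.EOFException;
import java.io.IOException;

public class BoundedRead {
    static byte[] readLengthPrefixed(byte[] file) throws IOException {
        DataInputStream in = new DataInputStream(new ByteArrayInputStream(file));
        int size = in.readInt(); // stand-in for readVInt()
        // Reject sizes that cannot possibly fit in the remaining bytes:
        // the check runs BEFORE 'new byte[size]', so no giant allocation.
        if (size < 0 || size > file.length - 4) {
            throw new EOFException("corrupt length " + size + " exceeds file length " + file.length);
        }
        byte[] data = new byte[size];
        in.readFully(data);
        return data;
    }

    public static void main(String[] args) throws IOException {
        // Well-formed: 4-byte length header (3) followed by 3 payload bytes.
        byte[] good = {0, 0, 0, 3, 7, 8, 9};
        System.out.println(readLengthPrefixed(good).length); // 3

        // Corrupt: header claims Integer.MAX_VALUE bytes; caught before allocation.
        byte[] bad = {0x7f, (byte) 0xff, (byte) 0xff, (byte) 0xff};
        try {
            readLengthPrefixed(bad);
        } catch (EOFException e) {
            System.out.println("caught corrupt length");
        }
    }
}
```

An exception here is much better than an OOM: the reader fails fast with a message that points at corruption, instead of taking the whole JVM down.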



 TestFieldsReader fails in 4.x with OOM
 --

 Key: LUCENE-5066
 URL: https://issues.apache.org/jira/browse/LUCENE-5066
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Robert Muir
 Attachments: LUCENE-5066.patch


 Its FaultyIndexInput is broken (doesn't implement seek/clone correctly).
 This causes it to read bogus data and try to allocate an enormous byte[] for 
 a term.
 The bug was previously hidden:
 FaultyDirectory doesn't override openSlice, so CFS must not be used at flush 
 if you want to trigger the bug.
 FaultyIndexInput's clone() is broken: it uses new but doesn't seek the clone 
 to the right place. This causes a disaster with BufferedIndexInput (which it 
 extends), because BufferedIndexInput (not just the delegate) must know its 
 position, since it has seek-within-block etc. code...
 It seems with this test (a very simple one) that only the 3.x codec triggers it, 
 because its term dict relies upon clone()s being seek'd to the right place. 
 I'm not sure what other codecs rely upon this, but imo we should also add a 
 low-level test for directories that does something like this, to ensure it's 
 really tested:
 {code}
 dir.createOutput("x");
 IndexInput input = dir.openInput("x");
 input.seek(somewhere);
 IndexInput clone = input.clone();
 assertEquals(somewhere, clone.getFilePointer());
 {code}
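The contract that test would enforce — a clone must start at its parent's current position — can be modeled with a tiny in-memory input. Everything below (SliceInput and friends) is an illustrative stand-in, not the real IndexInput API; the commented line is exactly the step the broken FaultyIndexInput clone skipped:

```java
public class CloneSeek {
    static class SliceInput {
        final byte[] data;
        long pos;
        SliceInput(byte[] data) { this.data = data; }
        void seek(long p) { pos = p; }
        long getFilePointer() { return pos; }
        byte readByte() { return data[(int) pos++]; }
        public SliceInput clone() {
            SliceInput copy = new SliceInput(data);
            copy.pos = pos; // without this line, the clone silently restarts at 0
            return copy;
        }
    }

    public static void main(String[] args) {
        SliceInput input = new SliceInput(new byte[] {10, 20, 30, 40});
        input.seek(2); // "somewhere"
        SliceInput clone = input.clone();
        // The contract the proposed directory test enforces:
        if (clone.getFilePointer() != 2) throw new AssertionError("clone lost position");
        System.out.println(clone.readByte()); // 30
    }
}
```

Dropping the `copy.pos = pos;` line reproduces the failure mode described above: every clone reads from offset 0, so whatever sits at the start of the file gets misinterpreted as a length or a term.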
