Re: [VOTE] Release PyLucene 4.3.0-1

2013-05-13 Thread Andi Vajda


On Sun, 12 May 2013, Andi Vajda wrote:


On May 12, 2013, at 22:21, Barry Wark ba...@physion.us wrote:


On Mon, May 13, 2013 at 1:06 AM, Andi Vajda va...@apache.org wrote:


On May 12, 2013, at 21:04, Barry Wark ba...@physion.us wrote:


Hi all,

I'm new to the pylucene-dev list, so please forgive me if I'm stepping out of 
line in the voting process.

We're successfully using JCC 2.15 to generate a wrapper for our Java API. JCC 
2.16 from SVN HEAD produces the following error (output log attached) when 
using the new --use_full_names option, with Python 2.7 on OS X 10.8:

build/_ovation/com/__init__.cpp:15:14: error: use of undeclared identifier
  'getJavaModule'
module = getJavaModule(module, "", "com");
 ^
build/_ovation/com/__init__.cpp:22:14: error: use of undeclared identifier
  'getJavaModule'
module = getJavaModule(module, "", "com");


You might be mixing in headers from an old version here...


I'm back at my computer and checked where getJavaModule() is declared: it's 
declared up there in the same __init__.cpp file, just below initVM(), at line 7. 
The function itself is defined in jcc.cpp.


The compiler error you're encountering is rather unexpected...
Did you change anything else other than upgrading to trunk's 2.16 version?

Andi..



Hi Andi,

This was run in a virtualenv with no previous JCC installation. The system 
python site-packages does not have JCC either (we always use a virtualenv to 
build our wrapper, installing JCC at build time).

How can I confirm that we don't have headers from an older JCC?


Look for where this function is defined and see why it's not picked up.
(I'm not at my computer right now to verify that this is the right hypothesis.)

Andi..



Cheers,
Barry




Andi..


 ^
2 errors generated.
error: command 'clang' failed with exit status 1


The generated _ovation/__init__.cpp and _ovation/com/__init__.cpp are also 
attached.

Cheers,
Barry


On Mon, May 6, 2013 at 8:27 PM, Andi Vajda va...@apache.org wrote:


It looks like the time has finally come for a PyLucene 4.x release !

The PyLucene 4.3.0-1 release tracking the recent release of Apache Lucene 4.3.0 
is ready.

A release candidate is available from:
http://people.apache.org/~vajda/staging_area/

A list of changes in this release can be seen at:
http://svn.apache.org/repos/asf/lucene/pylucene/branches/pylucene_4_3/CHANGES

PyLucene 4.3.0 is built with JCC 2.16 included in these release artifacts:
http://svn.apache.org/repos/asf/lucene/pylucene/trunk/jcc/CHANGES

A list of Lucene Java changes can be seen at:
http://svn.apache.org/repos/asf/lucene/dev/tags/lucene_solr_4_3_0/lucene/CHANGES.txt

Please vote to release these artifacts as PyLucene 4.3.0-1.

Thanks !

Andi..

ps: the KEYS file for PyLucene release signing is at:
http://svn.apache.org/repos/asf/lucene/pylucene/dist/KEYS
http://people.apache.org/~vajda/staging_area/KEYS

pps: here is my +1


jcc_2.16_osx_10.8_py2.7.out.txt
__init__.cpp
__init__.cpp






Re: [VOTE] Release PyLucene 4.3.0-1

2013-05-13 Thread Thomas Koch
I was able to build PyLucene 4.3.0-1 on Mac OS X (Darwin Kernel Version 
12.3.0) with Python 2.7.2.

All tests did pass.

>>> import lucene
>>> lucene.VERSION
'4.3.0'
>>>

regards,
Thomas

On 07.05.2013 at 02:27, Andi Vajda va...@apache.org wrote:

 
 It looks like the time has finally come for a PyLucene 4.x release !
 
 The PyLucene 4.3.0-1 release tracking the recent release of Apache Lucene 
 4.3.0 is ready.
 
 A release candidate is available from:
 http://people.apache.org/~vajda/staging_area/
 
 A list of changes in this release can be seen at:
 http://svn.apache.org/repos/asf/lucene/pylucene/branches/pylucene_4_3/CHANGES
 
 PyLucene 4.3.0 is built with JCC 2.16 included in these release artifacts:
 http://svn.apache.org/repos/asf/lucene/pylucene/trunk/jcc/CHANGES
 
 A list of Lucene Java changes can be seen at:
 http://svn.apache.org/repos/asf/lucene/dev/tags/lucene_solr_4_3_0/lucene/CHANGES.txt
 
 Please vote to release these artifacts as PyLucene 4.3.0-1.
 
 Thanks !
 
 Andi..
 
 ps: the KEYS file for PyLucene release signing is at:
 http://svn.apache.org/repos/asf/lucene/pylucene/dist/KEYS
 http://people.apache.org/~vajda/staging_area/KEYS
 
 pps: here is my +1



Re: VOTE: solr no longer webapp

2013-05-13 Thread Toke Eskildsen
On Wed, 2013-05-08 at 15:46 +0200, Mark Miller wrote:
 I want Solr out of the box to be a scaling monster. I have very little
 concern for the guy that wants to run Solr with 5 other webapps. That
 guy has to suffer for the greater good.

I do not really have any objections to that, I would just like to know
if this is the general consensus.

My impression of Solr up until 3.5 was that it was intended for
everyone that wanted to get started with search, big or small. It
makes sense to narrow the focus, but that should be followed by a clear
statement like "Solr is primarily intended for large scale projects but
can also be used for small scale".

- Toke Eskildsen, State and University Library, Denmark


-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-1913) QParserPlugin plugin for Search Results Filtering Based on Bitwise Operations on Integer Fields

2013-05-13 Thread Christopher (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-1913?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13655846#comment-13655846
 ] 

Christopher commented on SOLR-1913:
---

Hi Deepthi Sigireddi,
Thank you for your help, but I can't create the jar :/ Could you attach it, 
please?

 QParserPlugin plugin for Search Results Filtering Based on Bitwise Operations 
 on Integer Fields
 ---

 Key: SOLR-1913
 URL: https://issues.apache.org/jira/browse/SOLR-1913
 Project: Solr
  Issue Type: New Feature
  Components: search
Reporter: Israel Ekpo
 Fix For: 4.4

 Attachments: bitwise_filter_plugin.jar, SOLR-1913.bitwise.tar.gz, 
 SOLR-1913-src.tar.gz

   Original Estimate: 1h
  Remaining Estimate: 1h

 BitwiseQueryParserPlugin is a org.apache.solr.search.QParserPlugin that 
 allows 
 users to filter the documents returned from a query
 by performing bitwise operations between a particular integer field in the 
 index
 and the specified value.
 This Solr plugin is based on the BitwiseFilter in LUCENE-2460
 See https://issues.apache.org/jira/browse/LUCENE-2460 for more details
 This is the syntax for searching in Solr:
 http://localhost:8983/path/to/solr/select/?q={!bitwise field=fieldname 
 op=OPERATION_NAME source=sourcevalue negate=boolean}remainder of query
 Example :
 http://localhost:8983/solr/bitwise/select/?q={!bitwise field=user_permissions 
 op=AND source=3 negate=true}state:FL
 The negate parameter is optional
 The field parameter is the name of the integer field
 The op parameter is the name of the operation; one of {AND, OR, XOR}
 The source parameter is the specified integer value
 The negate parameter is a boolean indicating whether or not to negate the 
 results of the bitwise operation
 To test out this plugin, simply copy the jar file containing the plugin 
 classes into your $SOLR_HOME/lib directory and then
 add the following to your solrconfig.xml file after the dismax request 
 handler:
 <queryParser name="bitwise" 
 class="org.apache.solr.bitwise.BitwiseQueryParserPlugin" basedOn="dismax"/>
 Restart your servlet container.
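For illustration only (this is not code from the attached plugin), a minimal Java sketch of 
the per-document check the description above implies. The exact match predicate (comparing 
the bitwise result to the source value) is an assumption; see LUCENE-2460 for the actual 
BitwiseFilter semantics.
{code:java}
enum BitwiseOp { AND, OR, XOR }

final class BitwiseCheck {
  /** Returns true if the document's integer field value passes the bitwise test. */
  static boolean matches(int fieldValue, BitwiseOp op, int source, boolean negate) {
    int result;
    switch (op) {
      case AND: result = fieldValue & source; break;
      case OR:  result = fieldValue | source; break;
      default:  result = fieldValue ^ source; break; // XOR
    }
    boolean match = (result == source); // assumed predicate
    return negate ? !match : match;
  }
}
{code}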

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-4751) The replication problem of the file in a subdirectory.

2013-05-13 Thread Minoru Osuka (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4751?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13655849#comment-13655849
 ] 

Minoru Osuka commented on SOLR-4751:


Sorry for the inconvenience.

I'll modify test code.


 The replication problem of the file in a subdirectory.
 --

 Key: SOLR-4751
 URL: https://issues.apache.org/jira/browse/SOLR-4751
 Project: Solr
  Issue Type: Bug
  Components: replication (java)
Affects Versions: 4.2.1
Reporter: Minoru Osuka
Assignee: Koji Sekiguchi
Priority: Minor
 Attachments: SOLR-4751.patch


 When lang/stopwords_ja.txt is set in confFiles in the replication settings,
 {code:xml}
 <requestHandler name="/replication" class="solr.ReplicationHandler">
   <lst name="master">
     <str name="replicateAfter">commit</str>
     <str name="replicateAfter">startup</str>
     <str name="confFiles">schema.xml,stopwords.txt,lang/stopwords_ja.txt</str>
   </lst>
 </requestHandler>
 {code}
 only stopwords_ja.txt exists in the solr/collection1/conf/lang directory on the 
 slave node.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: VOTE: solr no longer webapp

2013-05-13 Thread Mark Miller

On May 13, 2013, at 3:13 AM, Toke Eskildsen t...@statsbiblioteket.dk wrote:

 My impression of Solr up until 3.5 was that it was intended for
 everyone that wanted to get started with search, big or small. It
 makes sense to narrow the focus, but that should be followed by a clear
 statement like "Solr is primarily intended for large scale projects but
 can also be used for small scale".


Meh - I just see Solr as a search engine. People will still be able to use it for 
whatever scale they wish. I'm not personally going to tag line it as intended 
for anything. I don't think a 'webapp' is pro small scale either. When 
developing software, I have never once thought, oh, this is for small scale 
stuff, I should put it in a webapp!

If I were making a search engine, whatever my intentions for it, I'd personally 
never start with the idea that it's a webapp even if I used that for the 
implementation.

- Mark
-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-4796) zkcli.sh should honor JAVA_HOME

2013-05-13 Thread Commit Tag Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4796?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13655871#comment-13655871
 ] 

Commit Tag Bot commented on SOLR-4796:
--

[trunk commit] markrmiller
http://svn.apache.org/viewvc?view=revision&revision=1481753

SOLR-4796: zkcli.sh should honor JAVA_HOME

 zkcli.sh should honor JAVA_HOME
 ---

 Key: SOLR-4796
 URL: https://issues.apache.org/jira/browse/SOLR-4796
 Project: Solr
  Issue Type: Bug
  Components: scripts and tools
Affects Versions: 4.2
Reporter: Roman Shaposhnik
Assignee: Mark Miller
 Fix For: 5.0, 4.4

 Attachments: SOLR-4796.patch.txt


 On a system with GNU Java installed, the fact that zkcli.sh doesn't honor 
 JAVA_HOME could lead to a hard-to-diagnose failure:
 {noformat}
 Exception in thread "main" java.lang.NoClassDefFoundError: 
 org.apache.solr.cloud.ZkCLI
at gnu.java.lang.MainThread.run(libgcj.so.7rh)
 Caused by: java.lang.ClassNotFoundException: org.apache.solr.cloud.ZkCLI not 
 found in gnu.gcj.runtime.SystemClassLoader{urls=[], 
 parent=gnu.gcj.runtime.ExtensionClassLoader{urls=[], parent=null}}
at java.net.URLClassLoader.findClass(libgcj.so.7rh)
at java.lang.ClassLoader.loadClass(libgcj.so.7rh)
at java.lang.ClassLoader.loadClass(libgcj.so.7rh)
at gnu.java.lang.MainThread.run(libgcj.so.7rh)
 {noformat}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-4796) zkcli.sh should honor JAVA_HOME

2013-05-13 Thread Commit Tag Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4796?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13655873#comment-13655873
 ] 

Commit Tag Bot commented on SOLR-4796:
--

[branch_4x commit] markrmiller
http://svn.apache.org/viewvc?view=revision&revision=1481757

SOLR-4796: zkcli.sh should honor JAVA_HOME

 zkcli.sh should honor JAVA_HOME
 ---

 Key: SOLR-4796
 URL: https://issues.apache.org/jira/browse/SOLR-4796
 Project: Solr
  Issue Type: Bug
  Components: scripts and tools
Affects Versions: 4.2
Reporter: Roman Shaposhnik
Assignee: Mark Miller
 Fix For: 5.0, 4.4

 Attachments: SOLR-4796.patch.txt


 On a system with GNU Java installed, the fact that zkcli.sh doesn't honor 
 JAVA_HOME could lead to a hard-to-diagnose failure:
 {noformat}
 Exception in thread "main" java.lang.NoClassDefFoundError: 
 org.apache.solr.cloud.ZkCLI
at gnu.java.lang.MainThread.run(libgcj.so.7rh)
 Caused by: java.lang.ClassNotFoundException: org.apache.solr.cloud.ZkCLI not 
 found in gnu.gcj.runtime.SystemClassLoader{urls=[], 
 parent=gnu.gcj.runtime.ExtensionClassLoader{urls=[], parent=null}}
at java.net.URLClassLoader.findClass(libgcj.so.7rh)
at java.lang.ClassLoader.loadClass(libgcj.so.7rh)
at java.lang.ClassLoader.loadClass(libgcj.so.7rh)
at gnu.java.lang.MainThread.run(libgcj.so.7rh)
 {noformat}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-4796) zkcli.sh should honor JAVA_HOME

2013-05-13 Thread Mark Miller (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-4796?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Miller resolved SOLR-4796.
---

Resolution: Fixed

Thanks Roman!

 zkcli.sh should honor JAVA_HOME
 ---

 Key: SOLR-4796
 URL: https://issues.apache.org/jira/browse/SOLR-4796
 Project: Solr
  Issue Type: Bug
  Components: scripts and tools
Affects Versions: 4.2
Reporter: Roman Shaposhnik
Assignee: Mark Miller
 Fix For: 5.0, 4.4

 Attachments: SOLR-4796.patch.txt


 On a system with GNU Java installed, the fact that zkcli.sh doesn't honor 
 JAVA_HOME could lead to a hard-to-diagnose failure:
 {noformat}
 Exception in thread "main" java.lang.NoClassDefFoundError: 
 org.apache.solr.cloud.ZkCLI
at gnu.java.lang.MainThread.run(libgcj.so.7rh)
 Caused by: java.lang.ClassNotFoundException: org.apache.solr.cloud.ZkCLI not 
 found in gnu.gcj.runtime.SystemClassLoader{urls=[], 
 parent=gnu.gcj.runtime.ExtensionClassLoader{urls=[], parent=null}}
at java.net.URLClassLoader.findClass(libgcj.so.7rh)
at java.lang.ClassLoader.loadClass(libgcj.so.7rh)
at java.lang.ClassLoader.loadClass(libgcj.so.7rh)
at gnu.java.lang.MainThread.run(libgcj.so.7rh)
 {noformat}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-4814) If a SolrCore cannot be created it should remove any information it published about itself from ZooKeeper.

2013-05-13 Thread Mark Miller (JIRA)
Mark Miller created SOLR-4814:
-

 Summary: If a SolrCore cannot be created it should remove any 
information it published about itself from ZooKeeper.
 Key: SOLR-4814
 URL: https://issues.apache.org/jira/browse/SOLR-4814
 Project: Solr
  Issue Type: Bug
  Components: SolrCloud
Reporter: Mark Miller
Assignee: Mark Miller
 Fix For: 5.0, 4.4




--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-4813) Unavoidable IllegalArgumentException occurs when SynonymFilterFactory's setting has tokenizer's parameter.

2013-05-13 Thread Shingo Sasaki (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-4813?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shingo Sasaki updated SOLR-4813:


Attachment: SOLR-4813.patch

I posted a patch with the bug fix and an improvement. Please review it.

The improvement is the following:

{quote}
If a TokenizerFactory parameter name is the same as a SynonymFilterFactory 
parameter name, you must add the prefix tokenizerFactory. to the 
TokenizerFactory parameter name.
{quote}
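For illustration only (this is not the attached SOLR-4813 patch), a minimal Java sketch of 
how arguments carrying the tokenizerFactory. prefix described above could be split off 
from the filter's own arguments. All class and method names here are hypothetical.
{code:java}
import java.util.HashMap;
import java.util.Map;

final class PrefixedArgs {
  private static final String PREFIX = "tokenizerFactory.";

  /** Moves entries prefixed with "tokenizerFactory." into a separate map, stripping the prefix. */
  static Map<String, String> extractTokenizerArgs(Map<String, String> args) {
    Map<String, String> tokenizerArgs = new HashMap<String, String>();
    for (Map.Entry<String, String> e : new HashMap<String, String>(args).entrySet()) {
      if (e.getKey().startsWith(PREFIX)) {
        // e.g. "tokenizerFactory.maxGramSize" becomes tokenizer arg "maxGramSize"
        tokenizerArgs.put(e.getKey().substring(PREFIX.length()), e.getValue());
        args.remove(e.getKey());
      }
    }
    return tokenizerArgs;
  }
}
{code}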

 Unavoidable IllegalArgumentException occurs when SynonymFilterFactory's 
 setting has tokenizer's parameter.
 --

 Key: SOLR-4813
 URL: https://issues.apache.org/jira/browse/SOLR-4813
 Project: Solr
  Issue Type: Bug
  Components: Schema and Analysis
Affects Versions: 4.3
Reporter: Shingo Sasaki
Priority: Critical
  Labels: SynonymFilterFactory
 Attachments: SOLR-4813.patch


 When I write the SynonymFilterFactory setting in schema.xml as follows, ...
 {code:xml}
 <analyzer>
   <tokenizer class="solr.NGramTokenizerFactory" maxGramSize="2" minGramSize="2"/>
   <filter class="solr.SynonymFilterFactory" synonyms="synonyms.txt"
           ignoreCase="true" expand="true"
           tokenizerFactory="solr.NGramTokenizerFactory" maxGramSize="2" minGramSize="2"/>
 </analyzer>
 {code}
 IllegalArgumentException (Unknown parameters) occurs.
 {noformat}
 Caused by: java.lang.IllegalArgumentException: Unknown parameters: 
 {maxGramSize=2, minGramSize=2}
   at 
 org.apache.lucene.analysis.synonym.FSTSynonymFilterFactory.init(FSTSynonymFilterFactory.java:71)
   at 
 org.apache.lucene.analysis.synonym.SynonymFilterFactory.init(SynonymFilterFactory.java:50)
   ... 28 more
 {noformat}
 However, the TokenizerFactory's params should be passed to the loadTokenizerFactory 
 method in [FST|Slow]SynonymFilterFactory (ref. SOLR-2909).
 I think the problem was caused by LUCENE-4877 (Fix analyzer factories to 
 throw exception when arguments are invalid) and SOLR-3402 (Parse Version 
 outside of Analysis Factories).

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-4258) Incremental Field Updates through Stacked Segments

2013-05-13 Thread Sivan Yogev (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-4258?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sivan Yogev updated LUCENE-4258:


Attachment: LUCENE-4258.branch.5.patch

New patch handling another scenario: an update relating to a previous update in 
the same segment.

 Incremental Field Updates through Stacked Segments
 --

 Key: LUCENE-4258
 URL: https://issues.apache.org/jira/browse/LUCENE-4258
 Project: Lucene - Core
  Issue Type: Improvement
  Components: core/index
Reporter: Sivan Yogev
 Fix For: 4.4

 Attachments: IncrementalFieldUpdates.odp, 
 LUCENE-4258-API-changes.patch, LUCENE-4258.branch.1.patch, 
 LUCENE-4258.branch.2.patch, LUCENE-4258.branch3.patch, 
 LUCENE-4258.branch.4.patch, LUCENE-4258.branch.5.patch, 
 LUCENE-4258.r1410593.patch, LUCENE-4258.r1412262.patch, 
 LUCENE-4258.r1416438.patch, LUCENE-4258.r1416617.patch, 
 LUCENE-4258.r1422495.patch, LUCENE-4258.r1423010.patch

   Original Estimate: 2,520h
  Remaining Estimate: 2,520h

 Shai and I would like to start working on the proposal for Incremental Field 
 Updates outlined here (http://markmail.org/message/zhrdxxpfk6qvdaex).

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-4975) Add Replication module to Lucene

2013-05-13 Thread Commit Tag Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-4975?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13655915#comment-13655915
 ] 

Commit Tag Bot commented on LUCENE-4975:


[trunk commit] shaie
http://svn.apache.org/viewvc?view=revision&revision=1481804

LUCENE-4975: Add Replication module to Lucene

 Add Replication module to Lucene
 

 Key: LUCENE-4975
 URL: https://issues.apache.org/jira/browse/LUCENE-4975
 Project: Lucene - Core
  Issue Type: New Feature
Reporter: Shai Erera
Assignee: Shai Erera
 Attachments: LUCENE-4975.patch, LUCENE-4975.patch, LUCENE-4975.patch, 
 LUCENE-4975.patch, LUCENE-4975.patch, LUCENE-4975.patch, LUCENE-4975.patch, 
 LUCENE-4975.patch, LUCENE-4975.patch, LUCENE-4975.patch, LUCENE-4975.patch, 
 LUCENE-4975.patch, LUCENE-4975.patch


 I wrote a replication module which I think will be useful to Lucene users who 
 want to replicate their indexes for e.g. high availability, taking hot backups, 
 etc.
 I will upload a patch soon where I'll describe in general how it works.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: solr no longer webapp

2013-05-13 Thread Jack Krupansky
Maybe somebody could supply a little history about how Solr actually became 
a webapp as opposed to a standalone web server. If the only/main/primary 
reason was because it's easier, then it may indeed be time to move on. If 
it was some other reason or requirement, then let's revisit a list of 
those reasons/requirements.


I have no problem with multiple packagings of Solr - embedded, webapp, raw 
SolrCloud node server, etc. It might simply be a question of whether to 
drop any of the Solr modes/packagings.


-- Jack Krupansky

-Original Message- 
From: Mark Miller

Sent: Monday, May 13, 2013 5:02 AM
To: dev@lucene.apache.org
Subject: Re: VOTE: solr no longer webapp


On May 13, 2013, at 3:13 AM, Toke Eskildsen t...@statsbiblioteket.dk wrote:


My impression of Solr up until 3.5 was that it was intended for
everyone that wanted to get started with search, big or small. It
makes sense to narrow the focus, but that should be followed by a clear
statement like "Solr is primarily intended for large scale projects but
can also be used for small scale".



Meh - I just see Solr as a search engine. People will still be able to use it 
for whatever scale they wish. I'm not personally going to tag line it as 
intended for anything. I don't think a 'webapp' is pro small scale either. 
When developing software, I have never once thought, oh, this is for small 
scale stuff, I should put it in a webapp!


If I were making a search engine, whatever my intentions for it, I'd 
personally never start with the idea that it's a webapp even if I used that 
for the implementation.


- Mark
-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org 



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-4815) DIH: Let commit be checked by default

2013-05-13 Thread JIRA
Jan Høydahl created SOLR-4815:
-

 Summary: DIH: Let commit be checked by default
 Key: SOLR-4815
 URL: https://issues.apache.org/jira/browse/SOLR-4815
 Project: Solr
  Issue Type: Improvement
  Components: contrib - DataImportHandler
Reporter: Jan Høydahl
Priority: Trivial
 Fix For: 4.4


The new DIH GUI should have commit checked by default.

According to http://wiki.apache.org/solr/DataImportHandler#Commands, the REST 
API has commit=true by default, so it makes sense for the GUI to do the same.


--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS-MAVEN] Lucene-Solr-Maven-4.x #325: POMs out of sync

2013-05-13 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Maven-4.x/325/

2 tests failed.
REGRESSION:  org.apache.solr.cloud.ZkCLITest.testUpConfigLinkConfigClearZk

Error Message:
/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-Maven-4.x/maven-build/solr/core/src/test/target/test/./solrtest-confdropspot-org.apache.solr.cloud.ZkCLITest-1368447096179/xslt/.svn/prop-base/updateXml.xsl.svn-base
 does not exist 
source:/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-Maven-4.x/solr/example/solr/collection1/conf/xslt/.svn/prop-base/updateXml.xsl.svn-base

Stack Trace:
java.lang.AssertionError: 
/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-Maven-4.x/maven-build/solr/core/src/test/target/test/./solrtest-confdropspot-org.apache.solr.cloud.ZkCLITest-1368447096179/xslt/.svn/prop-base/updateXml.xsl.svn-base
 does not exist 
source:/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-Maven-4.x/solr/example/solr/collection1/conf/xslt/.svn/prop-base/updateXml.xsl.svn-base
at 
__randomizedtesting.SeedInfo.seed([926489E35AB026F5:B60224EDD07F9093]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at 
org.apache.solr.cloud.ZkCLITest.testUpConfigLinkConfigClearZk(ZkCLITest.java:198)


FAILED:  
org.apache.solr.search.QueryEqualityTest.org.apache.solr.search.QueryEqualityTest

Error Message:
testParserCoverage was run w/o any other method explicitly testing qparser: 
maxscore

Stack Trace:
java.lang.AssertionError: testParserCoverage was run w/o any other method 
explicitly testing qparser: maxscore
at __randomizedtesting.SeedInfo.seed([DE35EDD524DA0BA6]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at 
org.apache.solr.search.QueryEqualityTest.afterClassParserCoverageTest(QueryEqualityTest.java:61)




Build Log:
[...truncated 23627 lines...]



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

[jira] [Resolved] (LUCENE-4975) Add Replication module to Lucene

2013-05-13 Thread Shai Erera (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-4975?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shai Erera resolved LUCENE-4975.


   Resolution: Fixed
Fix Version/s: 4.4
   5.0
Lucene Fields: New,Patch Available  (was: New)

Committed to trunk and 4x. Thanks Mike!

 Add Replication module to Lucene
 

 Key: LUCENE-4975
 URL: https://issues.apache.org/jira/browse/LUCENE-4975
 Project: Lucene - Core
  Issue Type: New Feature
Reporter: Shai Erera
Assignee: Shai Erera
 Fix For: 5.0, 4.4

 Attachments: LUCENE-4975.patch, LUCENE-4975.patch, LUCENE-4975.patch, 
 LUCENE-4975.patch, LUCENE-4975.patch, LUCENE-4975.patch, LUCENE-4975.patch, 
 LUCENE-4975.patch, LUCENE-4975.patch, LUCENE-4975.patch, LUCENE-4975.patch, 
 LUCENE-4975.patch, LUCENE-4975.patch


 I wrote a replication module which I think will be useful to Lucene users who 
 want to replicate their indexes for e.g. high availability, taking hot backups, 
 etc.
 I will upload a patch soon where I'll describe in general how it works.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-4922) A SpatialPrefixTree based on the Hilbert Curve and variable grid sizes

2013-05-13 Thread David Smiley (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-4922?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13655944#comment-13655944
 ] 

David Smiley commented on LUCENE-4922:
--

Thanks for your interest in this, John. You've gotten the ball rolling. I 
confess I don't do any byte or bit manipulation work, so it'll take some time to 
fully digest what's going on in this code, which uses a lot of it.  More 
comments would have helped. It might be interesting to look at how Lucene Trie 
floating point fields work.  Thanks for the reference to efficient ways to 
interleave bits; I'll re-post here for convenient reference: 
http://www-graphics.stanford.edu/~seander/bithacks.html#InterleaveBMN
 (there are 2 approaches there).  I'm not sure when I'll have significant time 
to contribute more directly, other than adding to the conversation.
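For convenient reference, a minimal Java sketch (not code from this issue) of the 
"Binary Magic Numbers" bit-interleaving approach from the page linked above, producing a 
32-bit Morton code from two 16-bit coordinates:
{code:java}
final class Morton {
  /** Interleaves the low 16 bits of x (even bit positions) and y (odd bit positions). */
  static int interleave16(int x, int y) {
    x &= 0xFFFF;
    y &= 0xFFFF;
    x = (x | (x << 8)) & 0x00FF00FF;
    x = (x | (x << 4)) & 0x0F0F0F0F;
    x = (x | (x << 2)) & 0x33333333;
    x = (x | (x << 1)) & 0x55555555;
    y = (y | (y << 8)) & 0x00FF00FF;
    y = (y | (y << 4)) & 0x0F0F0F0F;
    y = (y | (y << 2)) & 0x33333333;
    y = (y | (y << 1)) & 0x55555555;
    return x | (y << 1);
  }
}
{code}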

 A SpatialPrefixTree based on the Hilbert Curve and variable grid sizes
 --

 Key: LUCENE-4922
 URL: https://issues.apache.org/jira/browse/LUCENE-4922
 Project: Lucene - Core
  Issue Type: New Feature
  Components: modules/spatial
Reporter: David Smiley
Assignee: David Smiley
  Labels: gsoc2013, mentor, newdev
 Attachments: HilbertConverter.zip


 My wish-list for an ideal SpatialPrefixTree has these properties:
 * Hilbert Curve ordering
 * Variable grid size per level (ex: 256 at the top, 64 at the bottom, 16 for 
 all in-between)
 * Compact binary encoding (so-called Morton number)
 * Works for geodetic (i.e. lat & lon) and non-geodetic
 Some bonus wishes for use in geospatial:
 * Use an equal-area projection such that each cell has an equal area to all 
 others at the same level.
 * When advancing a grid level, if a cell's width is less than half its 
 height, then divide it as 4 vertically stacked instead of 2 by 2. The point 
 is to avoid super-skinny cells, which occur towards the poles and degrade 
 performance.
 All of this requires some basic performance benchmarks to measure the effects 
 of these characteristics.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-4813) Unavoidable IllegalArgumentException occurs when SynonymFilterFactory's setting has tokenizer's parameter.

2013-05-13 Thread Jack Krupansky (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4813?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13655951#comment-13655951
 ] 

Jack Krupansky commented on SOLR-4813:
--

Yes, you are right, this is a regression - I checked 4.2 - all the original 
args get passed to the tokenizer factory. And your suggested improvement 
makes sense.

You should lobby to have this fix in 4.3.1.

Some enhancement to the synonym filter factory Javadoc is also needed - to 
explain how tokenizer factory args are passed.


 Unavoidable IllegalArgumentException occurs when SynonymFilterFactory's 
 setting has tokenizer's parameter.
 --

 Key: SOLR-4813
 URL: https://issues.apache.org/jira/browse/SOLR-4813
 Project: Solr
  Issue Type: Bug
  Components: Schema and Analysis
Affects Versions: 4.3
Reporter: Shingo Sasaki
Priority: Critical
  Labels: SynonymFilterFactory
 Attachments: SOLR-4813.patch


 When I write the SynonymFilterFactory setting in schema.xml as follows, ...
 {code:xml}
 <analyzer>
   <tokenizer class="solr.NGramTokenizerFactory" maxGramSize="2" minGramSize="2"/>
   <filter class="solr.SynonymFilterFactory" synonyms="synonyms.txt"
           ignoreCase="true" expand="true"
           tokenizerFactory="solr.NGramTokenizerFactory" maxGramSize="2" minGramSize="2"/>
 </analyzer>
 {code}
 IllegalArgumentException (Unknown parameters) occurs.
 {noformat}
 Caused by: java.lang.IllegalArgumentException: Unknown parameters: 
 {maxGramSize=2, minGramSize=2}
   at 
 org.apache.lucene.analysis.synonym.FSTSynonymFilterFactory.init(FSTSynonymFilterFactory.java:71)
   at 
 org.apache.lucene.analysis.synonym.SynonymFilterFactory.init(SynonymFilterFactory.java:50)
   ... 28 more
 {noformat}
 However, the TokenizerFactory's params should be passed to the loadTokenizerFactory 
 method in [FST|Slow]SynonymFilterFactory (ref. SOLR-2909).
 I think the problem was caused by LUCENE-4877 (Fix analyzer factories to 
 throw exception when arguments are invalid) and SOLR-3402 (Parse Version 
 outside of Analysis Factories).

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-4258) Incremental Field Updates through Stacked Segments

2013-05-13 Thread Shai Erera (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-4258?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13655957#comment-13655957
 ] 

Shai Erera commented on LUCENE-4258:


Committed this to the branch + upgraded to trunk. Sivan, there are tests still 
failing -- is this expected?

 Incremental Field Updates through Stacked Segments
 --

 Key: LUCENE-4258
 URL: https://issues.apache.org/jira/browse/LUCENE-4258
 Project: Lucene - Core
  Issue Type: Improvement
  Components: core/index
Reporter: Sivan Yogev
 Fix For: 4.4

 Attachments: IncrementalFieldUpdates.odp, 
 LUCENE-4258-API-changes.patch, LUCENE-4258.branch.1.patch, 
 LUCENE-4258.branch.2.patch, LUCENE-4258.branch3.patch, 
 LUCENE-4258.branch.4.patch, LUCENE-4258.branch.5.patch, 
 LUCENE-4258.r1410593.patch, LUCENE-4258.r1412262.patch, 
 LUCENE-4258.r1416438.patch, LUCENE-4258.r1416617.patch, 
 LUCENE-4258.r1422495.patch, LUCENE-4258.r1423010.patch

   Original Estimate: 2,520h
  Remaining Estimate: 2,520h

 Shai and I would like to start working on the proposal for Incremental Field 
 Updates outlined here (http://markmail.org/message/zhrdxxpfk6qvdaex).

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Solr-Artifacts-trunk - Build # 2187 - Failure

2013-05-13 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Solr-Artifacts-trunk/2187/

No tests ran.

Build Log:
[...truncated 17904 lines...]
BUILD FAILED
/usr/home/hudson/hudson-slave/workspace/Solr-Artifacts-trunk/solr/build.xml:376:
 Can't get http://people.apache.org/keys/group/lucene.asc to 
/usr/home/hudson/hudson-slave/workspace/Solr-Artifacts-trunk/solr/package/KEYS

Total time: 7 minutes 38 seconds
Build step 'Invoke Ant' marked build as failure
Archiving artifacts
Publishing Javadoc
Email was triggered for: Failure
Sending email for trigger: Failure




-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

[JENKINS] Lucene-Solr-Tests-4.x-java7 - Build # 1239 - Still Failing

2013-05-13 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-4.x-java7/1239/

2 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.lucene.replicator.http.HttpReplicatorTest

Error Message:
4 threads leaked from SUITE scope at 
org.apache.lucene.replicator.http.HttpReplicatorTest: 1) Thread[id=13, 
name=qtp711776018-13 Acceptor0 SelectChannelConnector@0.0.0.0:7000, 
state=BLOCKED, group=TGRP-HttpReplicatorTest] at 
sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:210) 
at 
org.eclipse.jetty.server.nio.SelectChannelConnector.accept(SelectChannelConnector.java:109)
 at 
org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:938)
 at 
org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:608)
 at 
org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:543) 
at java.lang.Thread.run(Thread.java:722)2) Thread[id=17, 
name=qtp711776018-17 Acceptor1 SelectChannelConnector@0.0.0.0:7000, 
state=RUNNABLE, group=TGRP-HttpReplicatorTest] at 
sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) at 
sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:226) 
at 
org.eclipse.jetty.server.nio.SelectChannelConnector.accept(SelectChannelConnector.java:109)
 at 
org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:938)
 at 
org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:608)
 at 
org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:543) 
at java.lang.Thread.run(Thread.java:722)3) Thread[id=19, 
name=qtp711776018-19 Acceptor3 SelectChannelConnector@0.0.0.0:7000, 
state=BLOCKED, group=TGRP-HttpReplicatorTest] at 
sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:210) 
at 
org.eclipse.jetty.server.nio.SelectChannelConnector.accept(SelectChannelConnector.java:109)
 at 
org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:938)
 at 
org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:608)
 at 
org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:543) 
at java.lang.Thread.run(Thread.java:722)4) Thread[id=18, 
name=qtp711776018-18 Acceptor2 SelectChannelConnector@0.0.0.0:7000, 
state=BLOCKED, group=TGRP-HttpReplicatorTest] at 
sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:210) 
at 
org.eclipse.jetty.server.nio.SelectChannelConnector.accept(SelectChannelConnector.java:109)
 at 
org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:938)
 at 
org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:608)
 at 
org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:543) 
at java.lang.Thread.run(Thread.java:722)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: 4 threads leaked from SUITE 
scope at org.apache.lucene.replicator.http.HttpReplicatorTest: 
   1) Thread[id=13, name=qtp711776018-13 Acceptor0 
SelectChannelConnector@0.0.0.0:7000, state=BLOCKED, 
group=TGRP-HttpReplicatorTest]
at 
sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:210)
at 
org.eclipse.jetty.server.nio.SelectChannelConnector.accept(SelectChannelConnector.java:109)
at 
org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:938)
at 
org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:608)
at 
org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:543)
at java.lang.Thread.run(Thread.java:722)
   2) Thread[id=17, name=qtp711776018-17 Acceptor1 
SelectChannelConnector@0.0.0.0:7000, state=RUNNABLE, 
group=TGRP-HttpReplicatorTest]
at sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method)
at 
sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:226)
at 
org.eclipse.jetty.server.nio.SelectChannelConnector.accept(SelectChannelConnector.java:109)
at 
org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:938)
at 
org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:608)
at 
org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:543)
at java.lang.Thread.run(Thread.java:722)
   3) Thread[id=19, name=qtp711776018-19 Acceptor3 
SelectChannelConnector@0.0.0.0:7000, state=BLOCKED, 
group=TGRP-HttpReplicatorTest]
at 
sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:210)
at 
org.eclipse.jetty.server.nio.SelectChannelConnector.accept(SelectChannelConnector.java:109)
at 
org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:938)
at 

[JENKINS] Lucene-Solr-Tests-trunk-java7 - Build # 3976 - Still Failing

2013-05-13 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-trunk-java7/3976/

2 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.lucene.replicator.http.HttpReplicatorTest

Error Message:
4 threads leaked from SUITE scope at 
org.apache.lucene.replicator.http.HttpReplicatorTest: 1) Thread[id=17, 
name=qtp1367387251-17 Acceptor0 SelectChannelConnector@0.0.0.0:7000, 
state=BLOCKED, group=TGRP-HttpReplicatorTest] at 
sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:210) 
at 
org.eclipse.jetty.server.nio.SelectChannelConnector.accept(SelectChannelConnector.java:109)
 at 
org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:938)
 at 
org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:608)
 at 
org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:543) 
at java.lang.Thread.run(Thread.java:722)2) Thread[id=13, 
name=qtp1367387251-13 Acceptor3 SelectChannelConnector@0.0.0.0:7000, 
state=RUNNABLE, group=TGRP-HttpReplicatorTest] at 
sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) at 
sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:226) 
at 
org.eclipse.jetty.server.nio.SelectChannelConnector.accept(SelectChannelConnector.java:109)
 at 
org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:938)
 at 
org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:608)
 at 
org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:543) 
at java.lang.Thread.run(Thread.java:722)3) Thread[id=19, 
name=qtp1367387251-19 Acceptor2 SelectChannelConnector@0.0.0.0:7000, 
state=BLOCKED, group=TGRP-HttpReplicatorTest] at 
sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:210) 
at 
org.eclipse.jetty.server.nio.SelectChannelConnector.accept(SelectChannelConnector.java:109)
 at 
org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:938)
 at 
org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:608)
 at 
org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:543) 
at java.lang.Thread.run(Thread.java:722)4) Thread[id=18, 
name=qtp1367387251-18 Acceptor1 SelectChannelConnector@0.0.0.0:7000, 
state=BLOCKED, group=TGRP-HttpReplicatorTest] at 
sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:210) 
at 
org.eclipse.jetty.server.nio.SelectChannelConnector.accept(SelectChannelConnector.java:109)
 at 
org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:938)
 at 
org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:608)
 at 
org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:543) 
at java.lang.Thread.run(Thread.java:722)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: 4 threads leaked from SUITE 
scope at org.apache.lucene.replicator.http.HttpReplicatorTest: 
   1) Thread[id=17, name=qtp1367387251-17 Acceptor0 
SelectChannelConnector@0.0.0.0:7000, state=BLOCKED, 
group=TGRP-HttpReplicatorTest]
at 
sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:210)
at 
org.eclipse.jetty.server.nio.SelectChannelConnector.accept(SelectChannelConnector.java:109)
at 
org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:938)
at 
org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:608)
at 
org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:543)
at java.lang.Thread.run(Thread.java:722)
   2) Thread[id=13, name=qtp1367387251-13 Acceptor3 
SelectChannelConnector@0.0.0.0:7000, state=RUNNABLE, 
group=TGRP-HttpReplicatorTest]
at sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method)
at 
sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:226)
at 
org.eclipse.jetty.server.nio.SelectChannelConnector.accept(SelectChannelConnector.java:109)
at 
org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:938)
at 
org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:608)
at 
org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:543)
at java.lang.Thread.run(Thread.java:722)
   3) Thread[id=19, name=qtp1367387251-19 Acceptor2 
SelectChannelConnector@0.0.0.0:7000, state=BLOCKED, 
group=TGRP-HttpReplicatorTest]
at 
sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:210)
at 
org.eclipse.jetty.server.nio.SelectChannelConnector.accept(SelectChannelConnector.java:109)
at 
org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:938)
at 

[jira] [Commented] (LUCENE-4583) StraightBytesDocValuesField fails if bytes > 32k

2013-05-13 Thread David Smiley (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-4583?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13655998#comment-13655998
 ] 

David Smiley commented on LUCENE-4583:
--

Quoting Mike:
bq. I don't think we should change the limit for sorted/set nor terms: I
think we should raise the limit ONLY for BINARY, and declare that DV
BINARY is for these abuse cases. So if you really really want
sorted set with a higher limit then you will have to encode yourself
into DV BINARY.

+1.  DV Binary is generic for applications to use as it might see fit.  _There 
is no use case to abuse._  If this issue passes, I'm not going to then ask for 
terms > 32k or something silly like that.
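For illustration only, a minimal sketch (plain JDK, not a Lucene API) of the "encode it 
yourself into DV BINARY" idea above: several values are packed into one length-prefixed 
byte[] that an application could store in a single binary doc-values field. The class and 
method names are hypothetical.
{code:java}
import java.io.ByteArrayOutputStream;
import java.io.DataOutputStream;
import java.io.IOException;

final class PackedBinaryValue {
  /** Packs multiple byte[] values into one buffer: a count, then length-prefixed payloads. */
  static byte[] pack(byte[][] values) throws IOException {
    ByteArrayOutputStream buffer = new ByteArrayOutputStream();
    DataOutputStream out = new DataOutputStream(buffer);
    out.writeInt(values.length);   // number of packed values
    for (byte[] value : values) {
      out.writeInt(value.length);  // length prefix
      out.write(value);            // raw payload
    }
    out.flush();
    return buffer.toByteArray();
  }
}
{code}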

 StraightBytesDocValuesField fails if bytes > 32k
 

 Key: LUCENE-4583
 URL: https://issues.apache.org/jira/browse/LUCENE-4583
 Project: Lucene - Core
  Issue Type: Bug
  Components: core/index
Affects Versions: 4.0, 4.1, 5.0
Reporter: David Smiley
Priority: Critical
 Fix For: 4.4

 Attachments: LUCENE-4583.patch, LUCENE-4583.patch, LUCENE-4583.patch


 I didn't observe any limitations on the size of a bytes-based DocValues field 
 value in the docs.  It appears that the limit is 32k, although I didn't get 
 any friendly error telling me that was the limit.  32k is kind of small IMO; 
 I suspect this limit is unintended and as such is a bug. The following 
 test fails:
 {code:java}
 public void testBigDocValue() throws IOException {
   Directory dir = newDirectory();
   IndexWriter writer = new IndexWriter(dir, writerConfig(false));
   Document doc = new Document();
   BytesRef bytes = new BytesRef((4+4)*4097); // 4096 works
   bytes.length = bytes.bytes.length; // byte data doesn't matter
   doc.add(new StraightBytesDocValuesField("dvField", bytes));
   writer.addDocument(doc);
   writer.commit();
   writer.close();
   DirectoryReader reader = DirectoryReader.open(dir);
   DocValues docValues = MultiDocValues.getDocValues(reader, "dvField");
   // FAILS IF BYTES IS BIG!
   docValues.getSource().getBytes(0, bytes);
   reader.close();
   dir.close();
 }
 {code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: svn commit: r1481880 - in /lucene/dev/trunk: dev-tools/idea/.idea/ dev-tools/idea/lucene/replicator/ lucene/replicator/ lucene/replicator/src/test/org/apache/lucene/replicator/

2013-05-13 Thread Shai Erera
Thanks Steve!

Shai


On Mon, May 13, 2013 at 5:36 PM, sar...@apache.org wrote:

 Author: sarowe
 Date: Mon May 13 14:36:58 2013
 New Revision: 1481880

 URL: http://svn.apache.org/r1481880
 Log:
 LUCENE-4975: replicator module: IntelliJ configuration

 Added:
 lucene/dev/trunk/dev-tools/idea/lucene/replicator/
 lucene/dev/trunk/dev-tools/idea/lucene/replicator/replicator.iml
 Modified:
 lucene/dev/trunk/dev-tools/idea/.idea/ant.xml
 lucene/dev/trunk/dev-tools/idea/.idea/modules.xml
 lucene/dev/trunk/dev-tools/idea/.idea/workspace.xml
 lucene/dev/trunk/lucene/replicator/   (props changed)

 lucene/dev/trunk/lucene/replicator/src/test/org/apache/lucene/replicator/ReplicatorTestCase.java

 Modified: lucene/dev/trunk/dev-tools/idea/.idea/ant.xml
 URL:
 http://svn.apache.org/viewvc/lucene/dev/trunk/dev-tools/idea/.idea/ant.xml?rev=1481880&r1=1481879&r2=1481880&view=diff

 ==============================================================================
 --- lucene/dev/trunk/dev-tools/idea/.idea/ant.xml (original)
 +++ lucene/dev/trunk/dev-tools/idea/.idea/ant.xml Mon May 13 14:36:58 2013
 @@ -27,6 +27,7 @@
      <buildFile url="file://$PROJECT_DIR$/lucene/misc/build.xml" />
      <buildFile url="file://$PROJECT_DIR$/lucene/queries/build.xml" />
      <buildFile url="file://$PROJECT_DIR$/lucene/queryparser/build.xml" />
 +    <buildFile url="file://$PROJECT_DIR$/lucene/replicator/build.xml" />
      <buildFile url="file://$PROJECT_DIR$/lucene/sandbox/build.xml" />
      <buildFile url="file://$PROJECT_DIR$/lucene/spatial/build.xml" />
      <buildFile url="file://$PROJECT_DIR$/lucene/suggest/build.xml" />

 Modified: lucene/dev/trunk/dev-tools/idea/.idea/modules.xml
 URL:
 http://svn.apache.org/viewvc/lucene/dev/trunk/dev-tools/idea/.idea/modules.xml?rev=1481880&r1=1481879&r2=1481880&view=diff

 ==============================================================================
 --- lucene/dev/trunk/dev-tools/idea/.idea/modules.xml (original)
 +++ lucene/dev/trunk/dev-tools/idea/.idea/modules.xml Mon May 13 14:36:58 2013
 @@ -31,6 +31,7 @@
        <module filepath="$PROJECT_DIR$/lucene/misc/misc.iml" />
        <module filepath="$PROJECT_DIR$/lucene/queries/queries.iml" />
        <module filepath="$PROJECT_DIR$/lucene/queryparser/queryparser.iml" />
 +      <module filepath="$PROJECT_DIR$/lucene/replicator/replicator.iml" />
        <module filepath="$PROJECT_DIR$/lucene/sandbox/sandbox.iml" />
        <module filepath="$PROJECT_DIR$/lucene/spatial/spatial.iml" />
        <module filepath="$PROJECT_DIR$/lucene/suggest/suggest.iml" />

 Modified: lucene/dev/trunk/dev-tools/idea/.idea/workspace.xml
 URL:
 http://svn.apache.org/viewvc/lucene/dev/trunk/dev-tools/idea/.idea/workspace.xml?rev=1481880&r1=1481879&r2=1481880&view=diff

 ==============================================================================
 --- lucene/dev/trunk/dev-tools/idea/.idea/workspace.xml (original)
 +++ lucene/dev/trunk/dev-tools/idea/.idea/workspace.xml Mon May 13 14:36:58 2013
 @@ -144,6 +144,13 @@
        <option name="VM_PARAMETERS" value="-ea -DtempDir=temp" />
        <option name="TEST_SEARCH_SCOPE"><value defaultName="singleModule" /></option>
      </configuration>
 +    <configuration default="false" name="Module replicator" type="JUnit" factoryName="JUnit">
 +      <module name="replicator" />
 +      <option name="TEST_OBJECT" value="package" />
 +      <option name="WORKING_DIRECTORY" value="file://$PROJECT_DIR$/idea-build/lucene/replicator" />
 +      <option name="VM_PARAMETERS" value="-ea -DtempDir=temp" />
 +      <option name="TEST_SEARCH_SCOPE"><value defaultName="singleModule" /></option>
 +    </configuration>
      <configuration default="false" name="Module sandbox" type="JUnit" factoryName="JUnit">
        <module name="sandbox" />
        <option name="TEST_OBJECT" value="package" />
 @@ -235,7 +242,7 @@
        <option name="VM_PARAMETERS" value="-ea" />
        <option name="TEST_SEARCH_SCOPE"><value defaultName="singleModule" /></option>
      </configuration>
 -    <list size="33">
 +    <list size="34">
        <item index="0" class="java.lang.String" itemvalue="JUnit.Lucene core" />
        <item index="1" class="java.lang.String" itemvalue="JUnit.Module analyzers-common" />
        <item index="2" class="java.lang.String" itemvalue="JUnit.Module analyzers-icu" />
 @@ -256,19 +263,20 @@
        <item index="17" class="java.lang.String" itemvalue="JUnit.Module misc" />
        <item index="18" class="java.lang.String" itemvalue="JUnit.Module queries" />
        <item index="19" class="java.lang.String" itemvalue="JUnit.Module queryparser" />
 -      <item index="20" class="java.lang.String" itemvalue="JUnit.Module sandbox" />
 -      <item index="21" class="java.lang.String" itemvalue="JUnit.Module spatial" />
 -      <item index="22" class="java.lang.String" itemvalue="JUnit.Module suggest" />
 -      <item index="23" class="java.lang.String" itemvalue="JUnit.Solr core" />
 -      <item index="24" class="java.lang.String" itemvalue="JUnit.Solr analysis-extras contrib" />
 -      <item index="25" class="java.lang.String" itemvalue="JUnit.Solr clustering contrib" />
 -      <item index="26" class="java.lang.String" itemvalue="JUnit.Solr

[jira] [Updated] (SOLR-1913) QParserPlugin plugin for Search Results Filtering Based on Bitwise Operations on Integer Fields

2013-05-13 Thread Deepthi Sigireddi (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-1913?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Deepthi Sigireddi updated SOLR-1913:


Attachment: solr-bitwise-plugin.jar

Hi Christopher,
I'm attaching the jar from my local build. Note that I have named it a bit 
differently - solr-bitwise-plugin.jar


 QParserPlugin plugin for Search Results Filtering Based on Bitwise Operations 
 on Integer Fields
 ---

 Key: SOLR-1913
 URL: https://issues.apache.org/jira/browse/SOLR-1913
 Project: Solr
  Issue Type: New Feature
  Components: search
Reporter: Israel Ekpo
 Fix For: 4.4

 Attachments: bitwise_filter_plugin.jar, SOLR-1913.bitwise.tar.gz, 
 SOLR-1913-src.tar.gz, solr-bitwise-plugin.jar

   Original Estimate: 1h
  Remaining Estimate: 1h

 BitwiseQueryParserPlugin is a org.apache.solr.search.QParserPlugin that 
 allows 
 users to filter the documents returned from a query
 by performing bitwise operations between a particular integer field in the 
 index
 and the specified value.
 This Solr plugin is based on the BitwiseFilter in LUCENE-2460
 See https://issues.apache.org/jira/browse/LUCENE-2460 for more details
 This is the syntax for searching in Solr:
 http://localhost:8983/path/to/solr/select/?q={!bitwise field=fieldname 
 op=OPERATION_NAME source=sourcevalue negate=boolean}remainder of query
 Example :
 http://localhost:8983/solr/bitwise/select/?q={!bitwise field=user_permissions 
 op=AND source=3 negate=true}state:FL
 The negate parameter is optional
 The field parameter is the name of the integer field
 The op parameter is the name of the operation; one of {AND, OR, XOR}
 The source parameter is the specified integer value
 The negate parameter is a boolean indicating whether or not to negate the 
 results of the bitwise operation
 To test out this plugin, simply copy the jar file containing the plugin 
 classes into your $SOLR_HOME/lib directory and then
 add the following to your solrconfig.xml file after the dismax request 
 handler:
 <queryParser name="bitwise" 
 class="org.apache.solr.bitwise.BitwiseQueryParserPlugin" basedOn="dismax"/>
 Restart your servlet container.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-1913) QParserPlugin plugin for Search Results Filtering Based on Bitwise Operations on Integer Fields

2013-05-13 Thread Deepthi Sigireddi (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-1913?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13656012#comment-13656012
 ] 

Deepthi Sigireddi edited comment on SOLR-1913 at 5/13/13 2:47 PM:
--

Hi Christopher,
I'm attaching the jar from my local build. Note that I have named it a bit 
differently - solr-bitwise-plugin.jar.


  was (Author: dsigireddi):
Hi Christopher,
I'm attaching the jar from my local build. Note that I have named it a bit 
differently - solr-bitwise-plugin.jar

  
 QParserPlugin plugin for Search Results Filtering Based on Bitwise Operations 
 on Integer Fields
 ---

 Key: SOLR-1913
 URL: https://issues.apache.org/jira/browse/SOLR-1913
 Project: Solr
  Issue Type: New Feature
  Components: search
Reporter: Israel Ekpo
 Fix For: 4.4

 Attachments: bitwise_filter_plugin.jar, SOLR-1913.bitwise.tar.gz, 
 SOLR-1913-src.tar.gz, solr-bitwise-plugin.jar

   Original Estimate: 1h
  Remaining Estimate: 1h

 BitwiseQueryParserPlugin is a org.apache.solr.search.QParserPlugin that 
 allows 
 users to filter the documents returned from a query
 by performing bitwise operations between a particular integer field in the 
 index
 and the specified value.
 This Solr plugin is based on the BitwiseFilter in LUCENE-2460
 See https://issues.apache.org/jira/browse/LUCENE-2460 for more details
 This is the syntax for searching in Solr:
 http://localhost:8983/path/to/solr/select/?q={!bitwise field=fieldname 
 op=OPERATION_NAME source=sourcevalue negate=boolean}remainder of query
 Example :
 http://localhost:8983/solr/bitwise/select/?q={!bitwise field=user_permissions 
 op=AND source=3 negate=true}state:FL
 The negate parameter is optional
 The field parameter is the name of the integer field
 The op parameter is the name of the operation; one of {AND, OR, XOR}
 The source parameter is the specified integer value
 The negate parameter is a boolean indicating whether or not to negate the 
 results of the bitwise operation
 To test out this plugin, simply copy the jar file containing the plugin 
 classes into your $SOLR_HOME/lib directory and then
 add the following to your solrconfig.xml file after the dismax request 
 handler:
 <queryParser name="bitwise" 
  class="org.apache.solr.bitwise.BitwiseQueryParserPlugin" basedOn="dismax"/>
 Restart your servlet container.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-NightlyTests-4.x - Build # 256 - Failure

2013-05-13 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-4.x/256/

1 tests failed.
REGRESSION:  
org.apache.lucene.codecs.simpletext.TestSimpleTextStoredFieldsFormat.testBigDocuments

Error Message:
GC overhead limit exceeded

Stack Trace:
java.lang.OutOfMemoryError: GC overhead limit exceeded
at 
__randomizedtesting.SeedInfo.seed([D28C83E49F7A7B13:23A6D9BB2C56D534]:0)
at org.apache.lucene.store.RAMFile.newBuffer(RAMFile.java:75)
at org.apache.lucene.store.RAMFile.addBuffer(RAMFile.java:48)
at 
org.apache.lucene.store.RAMOutputStream.switchCurrentBuffer(RAMOutputStream.java:152)
at 
org.apache.lucene.store.RAMOutputStream.writeByte(RAMOutputStream.java:127)
at 
org.apache.lucene.codecs.simpletext.SimpleTextUtil.write(SimpleTextUtil.java:42)
at 
org.apache.lucene.codecs.simpletext.SimpleTextUtil.write(SimpleTextUtil.java:33)
at 
org.apache.lucene.codecs.simpletext.SimpleTextStoredFieldsWriter.write(SimpleTextStoredFieldsWriter.java:186)
at 
org.apache.lucene.codecs.simpletext.SimpleTextStoredFieldsWriter.writeField(SimpleTextStoredFieldsWriter.java:98)
at 
org.apache.lucene.index.StoredFieldsProcessor.finishDocument(StoredFieldsProcessor.java:125)
at 
org.apache.lucene.index.TwoStoredFieldsConsumers.finishDocument(TwoStoredFieldsConsumers.java:65)
at 
org.apache.lucene.index.DocFieldProcessor.finishDocument(DocFieldProcessor.java:273)
at 
org.apache.lucene.index.DocumentsWriterPerThread.updateDocument(DocumentsWriterPerThread.java:274)
at 
org.apache.lucene.index.DocumentsWriter.updateDocument(DocumentsWriter.java:376)
at 
org.apache.lucene.index.IndexWriter.updateDocument(IndexWriter.java:1508)
at 
org.apache.lucene.index.IndexWriter.addDocument(IndexWriter.java:1183)
at 
org.apache.lucene.index.RandomIndexWriter.addDocument(RandomIndexWriter.java:152)
at 
org.apache.lucene.index.RandomIndexWriter.addDocument(RandomIndexWriter.java:114)
at 
org.apache.lucene.index.BaseStoredFieldsFormatTestCase.testBigDocuments(BaseStoredFieldsFormatTestCase.java:634)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:616)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1559)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.access$600(RandomizedRunner.java:79)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:773)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:787)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.TestRuleFieldCacheSanity$1.evaluate(TestRuleFieldCacheSanity.java:51)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)




Build Log:
[...truncated 6058 lines...]
[junit4:junit4] Suite: 
org.apache.lucene.codecs.simpletext.TestSimpleTextStoredFieldsFormat
[junit4:junit4]   2 NOTE: download the large Jenkins line-docs file by running 
'ant get-jenkins-line-docs' in the lucene directory.
[junit4:junit4]   2 NOTE: reproduce with: ant test  
-Dtestcase=TestSimpleTextStoredFieldsFormat -Dtests.method=testBigDocuments 
-Dtests.seed=D28C83E49F7A7B13 -Dtests.multiplier=2 -Dtests.nightly=true 
-Dtests.slow=true 
-Dtests.linedocsfile=/home/hudson/lucene-data/enwiki.random.lines.txt 
-Dtests.locale=mt -Dtests.timezone=Asia/Dili -Dtests.file.encoding=UTF-8
[junit4:junit4] ERROR   19.1s J1 | 
TestSimpleTextStoredFieldsFormat.testBigDocuments 
[junit4:junit4] Throwable #1: java.lang.OutOfMemoryError: GC overhead 
limit exceeded
[junit4:junit4]at 
__randomizedtesting.SeedInfo.seed([D28C83E49F7A7B13:23A6D9BB2C56D534]:0)
[junit4:junit4]at 
org.apache.lucene.store.RAMFile.newBuffer(RAMFile.java:75)
[junit4:junit4]at 
org.apache.lucene.store.RAMFile.addBuffer(RAMFile.java:48)
[junit4:junit4]at 
org.apache.lucene.store.RAMOutputStream.switchCurrentBuffer(RAMOutputStream.java:152)
[junit4:junit4]at 
org.apache.lucene.store.RAMOutputStream.writeByte(RAMOutputStream.java:127)
[junit4:junit4]at 
org.apache.lucene.codecs.simpletext.SimpleTextUtil.write(SimpleTextUtil.java:42)
[junit4:junit4]at 

Re: svn commit: r1481912 - in /lucene/dev/trunk/dev-tools/maven/lucene: pom.xml.template replicator/pom.xml.template

2013-05-13 Thread Shai Erera
Thanks Steve. But you may want to rename this comment: <!-- HttpSolrServer
requires this dependency. --> to <!-- HttpReplicator requires this
dependency. -->?

Shai


On Mon, May 13, 2013 at 6:15 PM, sar...@apache.org wrote:

 Author: sarowe
 Date: Mon May 13 15:15:01 2013
 New Revision: 1481912

 URL: http://svn.apache.org/r1481912
 Log:
 LUCENE-4975: replicator module: make Maven configuration functional

 Modified:
 lucene/dev/trunk/dev-tools/maven/lucene/pom.xml.template
 lucene/dev/trunk/dev-tools/maven/lucene/replicator/pom.xml.template

 Modified: lucene/dev/trunk/dev-tools/maven/lucene/pom.xml.template
 URL:
 http://svn.apache.org/viewvc/lucene/dev/trunk/dev-tools/maven/lucene/pom.xml.template?rev=1481912&r1=1481911&r2=1481912&view=diff

 ==
 --- lucene/dev/trunk/dev-tools/maven/lucene/pom.xml.template (original)
 +++ lucene/dev/trunk/dev-tools/maven/lucene/pom.xml.template Mon May 13
 15:15:01 2013
 @@ -55,6 +55,7 @@
      <module>misc</module>
      <module>queries</module>
      <module>queryparser</module>
 +    <module>replicator</module>
      <module>sandbox</module>
      <module>spatial</module>
      <module>suggest</module>

 Modified:
 lucene/dev/trunk/dev-tools/maven/lucene/replicator/pom.xml.template
 URL:
 http://svn.apache.org/viewvc/lucene/dev/trunk/dev-tools/maven/lucene/replicator/pom.xml.template?rev=1481912&r1=1481911&r2=1481912&view=diff

 ==
 --- lucene/dev/trunk/dev-tools/maven/lucene/replicator/pom.xml.template
 (original)
 +++ lucene/dev/trunk/dev-tools/maven/lucene/replicator/pom.xml.template
 Mon May 13 15:15:01 2013
 @@ -59,6 +59,33 @@
       <artifactId>lucene-facet</artifactId>
       <version>${project.version}</version>
     </dependency>
 +    <dependency>
 +      <groupId>org.apache.httpcomponents</groupId>
 +      <artifactId>httpclient</artifactId>
 +      <!-- HttpSolrServer requires this dependency. -->
 +      <exclusions>
 +        <exclusion>
 +          <groupId>commons-logging</groupId>
 +          <artifactId>commons-logging</artifactId>
 +        </exclusion>
 +      </exclusions>
 +    </dependency>
 +    <dependency>
 +      <groupId>org.eclipse.jetty</groupId>
 +      <artifactId>jetty-server</artifactId>
 +    </dependency>
 +    <dependency>
 +      <groupId>org.eclipse.jetty</groupId>
 +      <artifactId>jetty-servlet</artifactId>
 +    </dependency>
 +    <dependency>
 +      <groupId>org.eclipse.jetty</groupId>
 +      <artifactId>jetty-util</artifactId>
 +    </dependency>
 +    <dependency>
 +      <groupId>org.slf4j</groupId>
 +      <artifactId>jcl-over-slf4j</artifactId>
 +    </dependency>
     </dependencies>
     <build>
       <sourceDirectory>${module-path}/src/java</sourceDirectory>





Re: svn commit: r1481912 - in /lucene/dev/trunk/dev-tools/maven/lucene: pom.xml.template replicator/pom.xml.template

2013-05-13 Thread Steve Rowe
Thanks for the review, Shai, I'll just remove the comment altogether 
(copy-paste-o from Solrj). - Steve

On May 13, 2013, at 11:15 AM, sar...@apache.org wrote:

 Author: sarowe
 Date: Mon May 13 15:15:01 2013
 New Revision: 1481912
 
 URL: http://svn.apache.org/r1481912
 Log:
 LUCENE-4975: replicator module: make Maven configuration functional
 
 Modified:
lucene/dev/trunk/dev-tools/maven/lucene/pom.xml.template
lucene/dev/trunk/dev-tools/maven/lucene/replicator/pom.xml.template
 
 Modified: lucene/dev/trunk/dev-tools/maven/lucene/pom.xml.template
 URL: 
 http://svn.apache.org/viewvc/lucene/dev/trunk/dev-tools/maven/lucene/pom.xml.template?rev=1481912r1=1481911r2=1481912view=diff
 ==
 --- lucene/dev/trunk/dev-tools/maven/lucene/pom.xml.template (original)
 +++ lucene/dev/trunk/dev-tools/maven/lucene/pom.xml.template Mon May 13 
 15:15:01 2013
 @@ -55,6 +55,7 @@
 modulemisc/module
 modulequeries/module
 modulequeryparser/module
 +modulereplicator/module
 modulesandbox/module
 modulespatial/module
 modulesuggest/module
 
 Modified: lucene/dev/trunk/dev-tools/maven/lucene/replicator/pom.xml.template
 URL: 
 http://svn.apache.org/viewvc/lucene/dev/trunk/dev-tools/maven/lucene/replicator/pom.xml.template?rev=1481912r1=1481911r2=1481912view=diff
 ==
 --- lucene/dev/trunk/dev-tools/maven/lucene/replicator/pom.xml.template 
 (original)
 +++ lucene/dev/trunk/dev-tools/maven/lucene/replicator/pom.xml.template Mon 
 May 13 15:15:01 2013
 @@ -59,6 +59,33 @@
   artifactIdlucene-facet/artifactId
   version${project.version}/version
 /dependency
 +dependency
 +  groupIdorg.apache.httpcomponents/groupId
 +  artifactIdhttpclient/artifactId
 +  !-- HttpSolrServer requires this dependency. --
 +  exclusions
 +exclusion
 +  groupIdcommons-logging/groupId
 +  artifactIdcommons-logging/artifactId
 +/exclusion
 +  /exclusions
 +/dependency
 +dependency
 +  groupIdorg.eclipse.jetty/groupId
 +  artifactIdjetty-server/artifactId
 +/dependency
 +dependency
 +  groupIdorg.eclipse.jetty/groupId
 +  artifactIdjetty-servlet/artifactId
 +/dependency
 +dependency
 +  groupIdorg.eclipse.jetty/groupId
 +  artifactIdjetty-util/artifactId
 +/dependency
 +dependency
 +  groupIdorg.slf4j/groupId
 +  artifactIdjcl-over-slf4j/artifactId
 +/dependency
   /dependencies
   build
 sourceDirectory${module-path}/src/java/sourceDirectory
 
 


-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-4813) Unavoidable IllegalArgumentException occurs when SynonymFilterFactory's setting has tokenizer factory's parameter.

2013-05-13 Thread Shingo Sasaki (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-4813?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shingo Sasaki updated SOLR-4813:


Summary: Unavoidable IllegalArgumentException occurs when 
SynonymFilterFactory's setting has tokenizer factory's parameter.  (was: 
Unavoidable IllegalArgumentException occurs when SynonymFilterFactory's setting 
has tokenizer's parameter.)

 Unavoidable IllegalArgumentException occurs when SynonymFilterFactory's 
 setting has tokenizer factory's parameter.
 --

 Key: SOLR-4813
 URL: https://issues.apache.org/jira/browse/SOLR-4813
 Project: Solr
  Issue Type: Bug
  Components: Schema and Analysis
Affects Versions: 4.3
Reporter: Shingo Sasaki
Priority: Critical
  Labels: SynonymFilterFactory
 Attachments: SOLR-4813.patch


 When I write the SynonymFilterFactory setting in schema.xml as follows, ...
 {code:xml}
 <analyzer>
   <tokenizer class="solr.NGramTokenizerFactory" maxGramSize="2" minGramSize="2"/>
   <filter class="solr.SynonymFilterFactory" synonyms="synonyms.txt"
           ignoreCase="true" expand="true"
           tokenizerFactory="solr.NGramTokenizerFactory" maxGramSize="2" minGramSize="2"/>
 </analyzer>
 {code}
 IllegalArgumentException (Unknown parameters) occurs.
 {noformat}
 Caused by: java.lang.IllegalArgumentException: Unknown parameters: 
 {maxGramSize=2, minGramSize=2}
   at 
 org.apache.lucene.analysis.synonym.FSTSynonymFilterFactory.init(FSTSynonymFilterFactory.java:71)
   at 
 org.apache.lucene.analysis.synonym.SynonymFilterFactory.init(SynonymFilterFactory.java:50)
   ... 28 more
 {noformat}
 However, the TokenizerFactory's params should be passed to the loadTokenizerFactory 
 method in [FST|Slow]SynonymFilterFactory. (ref. SOLR-2909)
 I think the problem was caused by LUCENE-4877 (Fix analyzer factories to 
 throw exception when arguments are invalid) and SOLR-3402 (Parse Version 
 outside of Analysis Factories).

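 A rough sketch of the kind of fix this implies (illustrative only, not the 
 attached SOLR-4813.patch; FILTER_PARAMS is a made-up list of the filter's own 
 argument names):
 {code:java}
 import java.util.Arrays;
 import java.util.HashMap;
 import java.util.HashSet;
 import java.util.Iterator;
 import java.util.Map;
 import java.util.Set;

 public class SynonymArgSplitter {
   // Arguments the synonym filter factory itself understands (hypothetical list).
   static final Set<String> FILTER_PARAMS = new HashSet<String>(Arrays.asList(
       "synonyms", "ignoreCase", "expand", "format", "tokenizerFactory"));

   /** Move everything the filter does not own into a map destined for the wrapped
    *  tokenizer factory, so the "Unknown parameters" check no longer trips. */
   static Map<String, String> splitTokenizerArgs(Map<String, String> filterArgs) {
     Map<String, String> tokenizerArgs = new HashMap<String, String>();
     Iterator<Map.Entry<String, String>> it = filterArgs.entrySet().iterator();
     while (it.hasNext()) {
       Map.Entry<String, String> e = it.next();
       if (!FILTER_PARAMS.contains(e.getKey())) {   // e.g. maxGramSize, minGramSize
         tokenizerArgs.put(e.getKey(), e.getValue());
         it.remove();
       }
     }
     return tokenizerArgs;   // would be handed to loadTokenizerFactory(...)
   }
 }
 {code}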
--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: [JENKINS] Lucene-Solr-NightlyTests-4.x - Build # 256 - Failure

2013-05-13 Thread Robert Muir
I can reproduce this, I'll try to see what's going on.

On Mon, May 13, 2013 at 11:07 AM, Apache Jenkins Server 
jenk...@builds.apache.org wrote:

 Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-4.x/256/

 1 tests failed.
 REGRESSION:
  
 org.apache.lucene.codecs.simpletext.TestSimpleTextStoredFieldsFormat.testBigDocuments

 Error Message:
 GC overhead limit exceeded

 Stack Trace:
 java.lang.OutOfMemoryError: GC overhead limit exceeded
 at
 __randomizedtesting.SeedInfo.seed([D28C83E49F7A7B13:23A6D9BB2C56D534]:0)
 at org.apache.lucene.store.RAMFile.newBuffer(RAMFile.java:75)
 at org.apache.lucene.store.RAMFile.addBuffer(RAMFile.java:48)
 at
 org.apache.lucene.store.RAMOutputStream.switchCurrentBuffer(RAMOutputStream.java:152)
 at
 org.apache.lucene.store.RAMOutputStream.writeByte(RAMOutputStream.java:127)
 at
 org.apache.lucene.codecs.simpletext.SimpleTextUtil.write(SimpleTextUtil.java:42)
 at
 org.apache.lucene.codecs.simpletext.SimpleTextUtil.write(SimpleTextUtil.java:33)
 at
 org.apache.lucene.codecs.simpletext.SimpleTextStoredFieldsWriter.write(SimpleTextStoredFieldsWriter.java:186)
 at
 org.apache.lucene.codecs.simpletext.SimpleTextStoredFieldsWriter.writeField(SimpleTextStoredFieldsWriter.java:98)
 at
 org.apache.lucene.index.StoredFieldsProcessor.finishDocument(StoredFieldsProcessor.java:125)
 at
 org.apache.lucene.index.TwoStoredFieldsConsumers.finishDocument(TwoStoredFieldsConsumers.java:65)
 at
 org.apache.lucene.index.DocFieldProcessor.finishDocument(DocFieldProcessor.java:273)
 at
 org.apache.lucene.index.DocumentsWriterPerThread.updateDocument(DocumentsWriterPerThread.java:274)
 at
 org.apache.lucene.index.DocumentsWriter.updateDocument(DocumentsWriter.java:376)
 at
 org.apache.lucene.index.IndexWriter.updateDocument(IndexWriter.java:1508)
 at
 org.apache.lucene.index.IndexWriter.addDocument(IndexWriter.java:1183)
 at
 org.apache.lucene.index.RandomIndexWriter.addDocument(RandomIndexWriter.java:152)
 at
 org.apache.lucene.index.RandomIndexWriter.addDocument(RandomIndexWriter.java:114)
 at
 org.apache.lucene.index.BaseStoredFieldsFormatTestCase.testBigDocuments(BaseStoredFieldsFormatTestCase.java:634)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
 at
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke(Method.java:616)
 at
 com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1559)
 at
 com.carrotsearch.randomizedtesting.RandomizedRunner.access$600(RandomizedRunner.java:79)
 at
 com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:737)
 at
 com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:773)
 at
 com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:787)
 at
 org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
 at
 org.apache.lucene.util.TestRuleFieldCacheSanity$1.evaluate(TestRuleFieldCacheSanity.java:51)
 at
 org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
 at
 com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
 at
 org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)




 Build Log:
 [...truncated 6058 lines...]
 [junit4:junit4] Suite:
 org.apache.lucene.codecs.simpletext.TestSimpleTextStoredFieldsFormat
 [junit4:junit4]   2 NOTE: download the large Jenkins line-docs file by
 running 'ant get-jenkins-line-docs' in the lucene directory.
 [junit4:junit4]   2 NOTE: reproduce with: ant test
  -Dtestcase=TestSimpleTextStoredFieldsFormat
 -Dtests.method=testBigDocuments -Dtests.seed=D28C83E49F7A7B13
 -Dtests.multiplier=2 -Dtests.nightly=true -Dtests.slow=true
 -Dtests.linedocsfile=/home/hudson/lucene-data/enwiki.random.lines.txt
 -Dtests.locale=mt -Dtests.timezone=Asia/Dili -Dtests.file.encoding=UTF-8
 [junit4:junit4] ERROR   19.1s J1 |
 TestSimpleTextStoredFieldsFormat.testBigDocuments 
 [junit4:junit4] Throwable #1: java.lang.OutOfMemoryError: GC overhead
 limit exceeded
 [junit4:junit4]at
 __randomizedtesting.SeedInfo.seed([D28C83E49F7A7B13:23A6D9BB2C56D534]:0)
 [junit4:junit4]at
 org.apache.lucene.store.RAMFile.newBuffer(RAMFile.java:75)
 [junit4:junit4]at
 org.apache.lucene.store.RAMFile.addBuffer(RAMFile.java:48)
 [junit4:junit4]at
 org.apache.lucene.store.RAMOutputStream.switchCurrentBuffer(RAMOutputStream.java:152)
 [junit4:junit4]at
 

Re: svn commit: r1481912 - in /lucene/dev/trunk/dev-tools/maven/lucene: pom.xml.template replicator/pom.xml.template

2013-05-13 Thread Shai Erera
ok

I just committed a fix to the template following the build failure.
Appreciate if you can review the change too!

Shai


On Mon, May 13, 2013 at 6:21 PM, Steve Rowe sar...@gmail.com wrote:

 Thanks for the review, Shai, I'll just remove the comment altogether
 (copy-paste-o from Solrj). - Steve

 On May 13, 2013, at 11:15 AM, sar...@apache.org wrote:

  Author: sarowe
  Date: Mon May 13 15:15:01 2013
  New Revision: 1481912
 
  URL: http://svn.apache.org/r1481912
  Log:
  LUCENE-4975: replicator module: make Maven configuration functional
 
  Modified:
 lucene/dev/trunk/dev-tools/maven/lucene/pom.xml.template
 lucene/dev/trunk/dev-tools/maven/lucene/replicator/pom.xml.template
 
  Modified: lucene/dev/trunk/dev-tools/maven/lucene/pom.xml.template
  URL:
 http://svn.apache.org/viewvc/lucene/dev/trunk/dev-tools/maven/lucene/pom.xml.template?rev=1481912r1=1481911r2=1481912view=diff
 
 ==
  --- lucene/dev/trunk/dev-tools/maven/lucene/pom.xml.template (original)
  +++ lucene/dev/trunk/dev-tools/maven/lucene/pom.xml.template Mon May 13
 15:15:01 2013
  @@ -55,6 +55,7 @@
  modulemisc/module
  modulequeries/module
  modulequeryparser/module
  +modulereplicator/module
  modulesandbox/module
  modulespatial/module
  modulesuggest/module
 
  Modified:
 lucene/dev/trunk/dev-tools/maven/lucene/replicator/pom.xml.template
  URL:
 http://svn.apache.org/viewvc/lucene/dev/trunk/dev-tools/maven/lucene/replicator/pom.xml.template?rev=1481912r1=1481911r2=1481912view=diff
 
 ==
  --- lucene/dev/trunk/dev-tools/maven/lucene/replicator/pom.xml.template
 (original)
  +++ lucene/dev/trunk/dev-tools/maven/lucene/replicator/pom.xml.template
 Mon May 13 15:15:01 2013
  @@ -59,6 +59,33 @@
artifactIdlucene-facet/artifactId
version${project.version}/version
  /dependency
  +dependency
  +  groupIdorg.apache.httpcomponents/groupId
  +  artifactIdhttpclient/artifactId
  +  !-- HttpSolrServer requires this dependency. --
  +  exclusions
  +exclusion
  +  groupIdcommons-logging/groupId
  +  artifactIdcommons-logging/artifactId
  +/exclusion
  +  /exclusions
  +/dependency
  +dependency
  +  groupIdorg.eclipse.jetty/groupId
  +  artifactIdjetty-server/artifactId
  +/dependency
  +dependency
  +  groupIdorg.eclipse.jetty/groupId
  +  artifactIdjetty-servlet/artifactId
  +/dependency
  +dependency
  +  groupIdorg.eclipse.jetty/groupId
  +  artifactIdjetty-util/artifactId
  +/dependency
  +dependency
  +  groupIdorg.slf4j/groupId
  +  artifactIdjcl-over-slf4j/artifactId
  +/dependency
/dependencies
build
  sourceDirectory${module-path}/src/java/sourceDirectory
 
 


 -
 To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
 For additional commands, e-mail: dev-h...@lucene.apache.org




Re: [JENKINS] Lucene-Solr-NightlyTests-4.x - Build # 256 - Failure

2013-05-13 Thread Robert Muir
This test makes hundreds of thousands of small stored fields, and also a
few large ones, but doesn't really index anything.

Add the nightly multiplier, and the inefficient SimpleText codec, and it blows up
NRTCachingDirectory (LUCENE-4484).

I disabled NRTCachingDirectory in this test.

On Mon, May 13, 2013 at 11:24 AM, Robert Muir rcm...@gmail.com wrote:

 I can reproduce this, ill try to see whats going on.

 On Mon, May 13, 2013 at 11:07 AM, Apache Jenkins Server 
 jenk...@builds.apache.org wrote:

 Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-4.x/256/

 1 tests failed.
 REGRESSION:
  
 org.apache.lucene.codecs.simpletext.TestSimpleTextStoredFieldsFormat.testBigDocuments

 Error Message:
 GC overhead limit exceeded

 Stack Trace:
 java.lang.OutOfMemoryError: GC overhead limit exceeded
 at
 __randomizedtesting.SeedInfo.seed([D28C83E49F7A7B13:23A6D9BB2C56D534]:0)
 at org.apache.lucene.store.RAMFile.newBuffer(RAMFile.java:75)
 at org.apache.lucene.store.RAMFile.addBuffer(RAMFile.java:48)
 at
 org.apache.lucene.store.RAMOutputStream.switchCurrentBuffer(RAMOutputStream.java:152)
 at
 org.apache.lucene.store.RAMOutputStream.writeByte(RAMOutputStream.java:127)
 at
 org.apache.lucene.codecs.simpletext.SimpleTextUtil.write(SimpleTextUtil.java:42)
 at
 org.apache.lucene.codecs.simpletext.SimpleTextUtil.write(SimpleTextUtil.java:33)
 at
 org.apache.lucene.codecs.simpletext.SimpleTextStoredFieldsWriter.write(SimpleTextStoredFieldsWriter.java:186)
 at
 org.apache.lucene.codecs.simpletext.SimpleTextStoredFieldsWriter.writeField(SimpleTextStoredFieldsWriter.java:98)
 at
 org.apache.lucene.index.StoredFieldsProcessor.finishDocument(StoredFieldsProcessor.java:125)
 at
 org.apache.lucene.index.TwoStoredFieldsConsumers.finishDocument(TwoStoredFieldsConsumers.java:65)
 at
 org.apache.lucene.index.DocFieldProcessor.finishDocument(DocFieldProcessor.java:273)
 at
 org.apache.lucene.index.DocumentsWriterPerThread.updateDocument(DocumentsWriterPerThread.java:274)
 at
 org.apache.lucene.index.DocumentsWriter.updateDocument(DocumentsWriter.java:376)
 at
 org.apache.lucene.index.IndexWriter.updateDocument(IndexWriter.java:1508)
 at
 org.apache.lucene.index.IndexWriter.addDocument(IndexWriter.java:1183)
 at
 org.apache.lucene.index.RandomIndexWriter.addDocument(RandomIndexWriter.java:152)
 at
 org.apache.lucene.index.RandomIndexWriter.addDocument(RandomIndexWriter.java:114)
 at
 org.apache.lucene.index.BaseStoredFieldsFormatTestCase.testBigDocuments(BaseStoredFieldsFormatTestCase.java:634)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
 at
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke(Method.java:616)
 at
 com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1559)
 at
 com.carrotsearch.randomizedtesting.RandomizedRunner.access$600(RandomizedRunner.java:79)
 at
 com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:737)
 at
 com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:773)
 at
 com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:787)
 at
 org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
 at
 org.apache.lucene.util.TestRuleFieldCacheSanity$1.evaluate(TestRuleFieldCacheSanity.java:51)
 at
 org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
 at
 com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
 at
 org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)




 Build Log:
 [...truncated 6058 lines...]
 [junit4:junit4] Suite:
 org.apache.lucene.codecs.simpletext.TestSimpleTextStoredFieldsFormat
 [junit4:junit4]   2 NOTE: download the large Jenkins line-docs file by
 running 'ant get-jenkins-line-docs' in the lucene directory.
 [junit4:junit4]   2 NOTE: reproduce with: ant test
  -Dtestcase=TestSimpleTextStoredFieldsFormat
 -Dtests.method=testBigDocuments -Dtests.seed=D28C83E49F7A7B13
 -Dtests.multiplier=2 -Dtests.nightly=true -Dtests.slow=true
 -Dtests.linedocsfile=/home/hudson/lucene-data/enwiki.random.lines.txt
 -Dtests.locale=mt -Dtests.timezone=Asia/Dili -Dtests.file.encoding=UTF-8
 [junit4:junit4] ERROR   19.1s J1 |
 TestSimpleTextStoredFieldsFormat.testBigDocuments 
 [junit4:junit4] Throwable #1: java.lang.OutOfMemoryError: GC
 overhead limit exceeded
 [junit4:junit4]at
 __randomizedtesting.SeedInfo.seed([D28C83E49F7A7B13:23A6D9BB2C56D534]:0)
 [junit4:junit4]at

[jira] [Created] (SOLR-4816) Add method to CloudSolrServer to send updates to the correct shard

2013-05-13 Thread Joel Bernstein (JIRA)
Joel Bernstein created SOLR-4816:


 Summary: Add method to CloudSolrServer to send updates to the 
correct shard
 Key: SOLR-4816
 URL: https://issues.apache.org/jira/browse/SOLR-4816
 Project: Solr
  Issue Type: Improvement
  Components: SolrCloud
Affects Versions: 4.3
Reporter: Joel Bernstein
Priority: Minor


This issue adds a directUpdate method to CloudSolrServer which routes update 
requests to the correct shard. This would be a nice feature to have to 
eliminate the document routing overhead on the Solr servers.

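
As a rough illustration of the routing idea (a sketch under assumptions, not the 
attached patch: shardLeaderUrls is a hypothetical stand-in for a cluster-state 
lookup, and real SolrCloud routing maps hash ranges rather than a simple modulo):

{code:java}
import java.io.IOException;
import org.apache.solr.client.solrj.SolrServerException;
import org.apache.solr.client.solrj.impl.HttpSolrServer;
import org.apache.solr.common.SolrInputDocument;

public class DirectUpdateSketch {
  /** Send a document straight to the leader of the shard that owns it. */
  static void sendDirect(SolrInputDocument doc, String[] shardLeaderUrls)
      throws SolrServerException, IOException {
    String id = (String) doc.getFieldValue("id");
    int shard = (id.hashCode() & 0x7fffffff) % shardLeaderUrls.length; // stand-in for hash-range routing
    HttpSolrServer leader = new HttpSolrServer(shardLeaderUrls[shard]);
    leader.add(doc);   // skips the extra hop of letting a random node forward the update
  }
}
{code}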
--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-4816) Add method to CloudSolrServer to send updates to the correct shard

2013-05-13 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-4816?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-4816:
-

Attachment: SOLR-4816.patch

Initial patch with the new directUpdate method. This implementation has not 
been tested yet; testing and implementation iterations will follow. The patch 
was generated against Solr 4.3.

 Add method to CloudSolrServer to send updates to the correct shard
 --

 Key: SOLR-4816
 URL: https://issues.apache.org/jira/browse/SOLR-4816
 Project: Solr
  Issue Type: Improvement
  Components: SolrCloud
Affects Versions: 4.3
Reporter: Joel Bernstein
Priority: Minor
 Attachments: SOLR-4816.patch


 This issue adds a directUpdate method to CloudSolrServer which routes 
 update requests to the correct shard. This would be a nice feature to have to 
 eliminate the document routing overhead on the Solr servers.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: svn commit: r1481936 - /lucene/dev/branches/branch_4x/lucene/test-framework/src/java/org/apache/lucene/index/BaseStoredFieldsFormatTestCase.java

2013-05-13 Thread Adrien Grand
On Mon, May 13, 2013 at 5:43 PM,  rm...@apache.org wrote:
 Author: rmuir
 Date: Mon May 13 15:43:51 2013
 New Revision: 1481936

 URL: http://svn.apache.org/r1481936
 Log:
 avoid NRTCachingDirectory in this test

Thank you Robert!

--
Adrien

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: svn commit: r1481936 - /lucene/dev/branches/branch_4x/lucene/test-framework/src/java/org/apache/lucene/index/BaseStoredFieldsFormatTestCase.java

2013-05-13 Thread Robert Muir
No problem. This time I found that to debug the OOM, it's much easier to use
System.out.println + RAMUsageEstimator than to try to use crazy heap
dumps/debugger tools.
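
(For instance, something as simple as the following, assuming the suspect object is
in scope; humanSizeOf lives in org.apache.lucene.util.RAMUsageEstimator:)

    // print roughly how much heap the suspect object is holding on to
    System.out.println("size: " + RAMUsageEstimator.humanSizeOf(suspectObject));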

On Mon, May 13, 2013 at 11:55 AM, Adrien Grand jpou...@gmail.com wrote:

 On Mon, May 13, 2013 at 5:43 PM,  rm...@apache.org wrote:
  Author: rmuir
  Date: Mon May 13 15:43:51 2013
  New Revision: 1481936
 
  URL: http://svn.apache.org/r1481936
  Log:
  avoid NRTCachingDirectory in this test

 Thank you Robert!

 --
 Adrien

 -
 To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
 For additional commands, e-mail: dev-h...@lucene.apache.org




[jira] [Created] (LUCENE-4998) be more precise about IOContext for reads

2013-05-13 Thread Shikhar Bhushan (JIRA)
Shikhar Bhushan created LUCENE-4998:
---

 Summary: be more precise about IOContext for reads
 Key: LUCENE-4998
 URL: https://issues.apache.org/jira/browse/LUCENE-4998
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Shikhar Bhushan
Priority: Minor
 Fix For: 5.0, 4.4
 Attachments: LUCENE-4998.patch

Set the context as {{IOContext.READ}} / {{IOContext.READONCE}} where applicable



Motivation:

Custom {{PostingsFormat}} may want to check the context on {{SegmentReadState}} 
and branch differently, but for this to work properly the context has to be 
specified correctly up the stack.

For example, {{DirectPostingsFormat}} only loads postings into memory if the 
{{context != MERGE}}. However a better condition would be {{context == 
Context.READ && !context.readOnce}}.

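For example, a custom format could branch roughly like this (a hedged sketch; the two 
producer class names are invented for the example):

{code:java}
// Sketch of branching on the read context inside a custom PostingsFormat.
@Override
public FieldsProducer fieldsProducer(SegmentReadState state) throws IOException {
  IOContext ctx = state.context;
  if (ctx.context == IOContext.Context.READ && !ctx.readOnce) {
    // a long-lived reader: worth paying the cost of loading postings into memory
    return new MemoryLoadedFieldsProducer(state);
  }
  // merges and read-once consumers stay on disk
  return new OnDiskFieldsProducer(state);
}
{code}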
--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-4998) be more precise about IOContext for reads

2013-05-13 Thread Shikhar Bhushan (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-4998?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shikhar Bhushan updated LUCENE-4998:


Attachment: LUCENE-4998.patch

 be more precise about IOContext for reads
 -

 Key: LUCENE-4998
 URL: https://issues.apache.org/jira/browse/LUCENE-4998
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Shikhar Bhushan
Priority: Minor
 Fix For: 5.0, 4.4

 Attachments: LUCENE-4998.patch


 Set the context as {{IOContext.READ}} / {{IOContext.READONCE}} where 
 applicable
 
 Motivation:
 Custom {{PostingsFormat}} may want to check the context on 
 {{SegmentReadState}} and branch differently, but for this to work properly 
 the context has to be specified correctly up the stack.
 For example, {{DirectPostingsFormat}} only loads postings into memory if the 
 {{context != MERGE}}. However a better condition would be {{context == 
 Context.READ && !context.readOnce}}.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-1913) QParserPlugin plugin for Search Results Filtering Based on Bitwise Operations on Integer Fields

2013-05-13 Thread Christopher (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-1913?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13656075#comment-13656075
 ] 

Christopher commented on SOLR-1913:
---

Thank you for the jar!
I recompiled the war and the plugin now seems to be taken into account. When I omit a 
parameter (e.g. source) I get the expected error: source parameter is 
missing

But I get another error when I access
http://192.168.0.247:8983/solr/GU/select?q={!bitwise field=acl op=AND source=1}*

java.lang.NoSuchMethodError: 
org.apache.lucene.search.FieldCache.getInts(Lorg/apache/lucene/index/AtomicReader;Ljava/lang/String;Z)Lorg/apache/lucene/search/FieldCache$Ints;



 QParserPlugin plugin for Search Results Filtering Based on Bitwise Operations 
 on Integer Fields
 ---

 Key: SOLR-1913
 URL: https://issues.apache.org/jira/browse/SOLR-1913
 Project: Solr
  Issue Type: New Feature
  Components: search
Reporter: Israel Ekpo
 Fix For: 4.4

 Attachments: bitwise_filter_plugin.jar, SOLR-1913.bitwise.tar.gz, 
 SOLR-1913-src.tar.gz, solr-bitwise-plugin.jar

   Original Estimate: 1h
  Remaining Estimate: 1h

 BitwiseQueryParserPlugin is a org.apache.solr.search.QParserPlugin that 
 allows 
 users to filter the documents returned from a query
 by performing bitwise operations between a particular integer field in the 
 index
 and the specified value.
 This Solr plugin is based on the BitwiseFilter in LUCENE-2460
 See https://issues.apache.org/jira/browse/LUCENE-2460 for more details
 This is the syntax for searching in Solr:
 http://localhost:8983/path/to/solr/select/?q={!bitwise field=fieldname 
 op=OPERATION_NAME source=sourcevalue negate=boolean}remainder of query
 Example :
 http://localhost:8983/solr/bitwise/select/?q={!bitwise field=user_permissions 
 op=AND source=3 negate=true}state:FL
 The negate parameter is optional
 The field parameter is the name of the integer field
 The op parameter is the name of the operation; one of {AND, OR, XOR}
 The source parameter is the specified integer value
 The negate parameter is a boolean indicating whether or not to negate the 
 results of the bitwise operation
 To test out this plugin, simply copy the jar file containing the plugin 
 classes into your $SOLR_HOME/lib directory and then
 add the following to your solrconfig.xml file after the dismax request 
 handler:
 <queryParser name="bitwise" 
  class="org.apache.solr.bitwise.BitwiseQueryParserPlugin" basedOn="dismax"/>
 Restart your servlet container.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-4812) Edismax highlighting query doesn't work.

2013-05-13 Thread Nguyen Manh Tien (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-4812?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nguyen Manh Tien updated SOLR-4812:
---

Fix Version/s: 4.4

 Edismax highlighting query doesn't work.
 

 Key: SOLR-4812
 URL: https://issues.apache.org/jira/browse/SOLR-4812
 Project: Solr
  Issue Type: Bug
Affects Versions: 4.2, 4.3
 Environment: When hl.q is an edismax query, highlighting will ignore 
  the query specified in hl.q
Reporter: Nguyen Manh Tien
Priority: Minor
 Fix For: 4.4

 Attachments: SOLR-4812.patch


 When hl.q is an edismax query, highlighting will ignore the query specified in 
 hl.q.
 Example edismax highlighting query: hl.q={!edismax qf=title v=Software}
 The getHighlightQuery function in edismax doesn't parse the highlight query, so it 
 always returns null and hl.q is ignored.

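 A hedged sketch of the kind of change this implies (signatures approximate, not the 
 attached SOLR-4812.patch):
 {code:java}
 // Inside the edismax query parser: fall back to the parsed main query for
 // highlighting instead of effectively returning null.
 public Query getHighlightQuery() throws SyntaxError {
   Query main = getQuery();   // make sure the user query has actually been parsed
   return main;
 }
 {code}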
--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-4816) Add method to CloudSolrServer to send updates to the correct shard

2013-05-13 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4816?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13656083#comment-13656083
 ] 

Mark Miller commented on SOLR-4816:
---

This looks like a dupe of SOLR-3154 - see that issue for more history.

 Add method to CloudSolrServer to send updates to the correct shard
 --

 Key: SOLR-4816
 URL: https://issues.apache.org/jira/browse/SOLR-4816
 Project: Solr
  Issue Type: Improvement
  Components: SolrCloud
Affects Versions: 4.3
Reporter: Joel Bernstein
Priority: Minor
 Attachments: SOLR-4816.patch


 This issue adds a directUpdate method to CloudSolrServer which routes 
 update requests to the correct shard. This would be a nice feature to have to 
 eliminate the document routing overhead on the Solr servers.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-4816) Add method to CloudSolrServer to send updates to the correct shard

2013-05-13 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4816?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13656084#comment-13656084
 ] 

Mark Miller commented on SOLR-4816:
---

This shouldn't really be an extra method - it should just be the default way 
the CloudSolrServer works.

 Add method to CloudSolrServer to send updates to the correct shard
 --

 Key: SOLR-4816
 URL: https://issues.apache.org/jira/browse/SOLR-4816
 Project: Solr
  Issue Type: Improvement
  Components: SolrCloud
Affects Versions: 4.3
Reporter: Joel Bernstein
Priority: Minor
 Attachments: SOLR-4816.patch


 This issue adds a directUpdate method to CloudSolrServer which routes 
 update requests to the correct shard. This would be a nice feature to have to 
 eliminate the document routing overhead on the Solr servers.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS-MAVEN] Lucene-Solr-Maven-trunk #849: POMs out of sync

2013-05-13 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Maven-trunk/849/

3 tests failed.
REGRESSION:  org.apache.solr.cloud.ShardSplitTest.testDistribSearch

Error Message:
Server at http://127.0.0.1:43835/fb_/hw returned non ok status:500, 
message:Server Error

Stack Trace:
org.apache.solr.client.solrj.impl.HttpSolrServer$RemoteSolrException: Server at 
http://127.0.0.1:43835/fb_/hw returned non ok status:500, message:Server Error
at 
__randomizedtesting.SeedInfo.seed([E6E81A6A02FE3E39:670E947275A15E05]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:385)
at 
org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:180)
at 
org.apache.solr.cloud.ShardSplitTest.splitShard(ShardSplitTest.java:208)
at org.apache.solr.cloud.ShardSplitTest.doTest(ShardSplitTest.java:133)


REGRESSION:  org.apache.solr.cloud.ZkCLITest.testUpConfigLinkConfigClearZk

Error Message:
/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-Maven-trunk/maven-build/solr/core/src/test/target/test/./solrtest-confdropspot-org.apache.solr.cloud.ZkCLITest-1368460764035/.svn/all-wcprops
 does not exist 
source:/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-Maven-trunk/solr/example/solr/collection1/conf/.svn/all-wcprops

Stack Trace:
java.lang.AssertionError: 
/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-Maven-trunk/maven-build/solr/core/src/test/target/test/./solrtest-confdropspot-org.apache.solr.cloud.ZkCLITest-1368460764035/.svn/all-wcprops
 does not exist 
source:/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-Maven-trunk/solr/example/solr/collection1/conf/.svn/all-wcprops
at 
__randomizedtesting.SeedInfo.seed([AFB4DCA796AE75DD:8BD271A91C61C3BB]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at 
org.apache.solr.cloud.ZkCLITest.testUpConfigLinkConfigClearZk(ZkCLITest.java:198)


FAILED:  
org.apache.solr.search.QueryEqualityTest.org.apache.solr.search.QueryEqualityTest

Error Message:
testParserCoverage was run w/o any other method explicitly testing qparser: 
maxscore

Stack Trace:
java.lang.AssertionError: testParserCoverage was run w/o any other method 
explicitly testing qparser: maxscore
at __randomizedtesting.SeedInfo.seed([A93B184D02ADD6B8]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at 
org.apache.solr.search.QueryEqualityTest.afterClassParserCoverageTest(QueryEqualityTest.java:61)




Build Log:
[...truncated 24205 lines...]



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

Re: [JENKINS] Lucene-Solr-trunk-Linux (64bit/jdk1.7.0_21) - Build # 5636 - Still Failing!

2013-05-13 Thread Robert Muir
Oh this would explain it! My commit made it into #5638, let's see if it takes.

On Mon, May 13, 2013 at 12:42 PM, Uwe Schindler u...@thetaphi.de wrote:
 I am not sure; maybe the Windows Jenkins slaves use svn 1.7, where the .svn
 folder is only at the root level.

 I am on a business trip so I cannot check at the moment.

 Linux works with 1.6, as Ubuntu LTS ships only with 1.6.

 Uwe



 Robert Muir rcm...@gmail.com schrieb:

 Yeah, I think that's the issue actually. The second check is trying to
 filter individual files based on whether they are .hidden(). But I
 don't know if this really always works: what does it return if
 the file itself (stoptags_ja.txt.svn-base) does not begin with ".", but
 is inside a directory that starts with "."? And what does it do on
 Windows :)

 So I think instead we can try using an explicit filter on directories
 themselves that start with "." and remove the second check:
 http://svn.apache.org/r1481954
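
For example (an illustrative sketch only, not the code in r1481954):

    // skip dot-directories such as .svn while walking, instead of relying on
    // the platform-dependent File.isHidden()
    File[] children = dir.listFiles(new FileFilter() {
      public boolean accept(File f) {
        return !(f.isDirectory() && f.getName().startsWith("."));
      }
    });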

 On Mon, May 13, 2013 at 12:30 PM, Mark Miller markrmil...@gmail.com
 wrote:

 Ah, the new output just points to the .svn issue.

 Perhaps we should
 just exclude .svn explicitly?

 - Mark

 On May 13, 2013, at 12:28 PM, Mark Miller markrmil...@gmail.com wrote:

 I added some output to the fail earlier this morning - hopefully that
 helps. I can also try running it in my windows vm.

 - Mark

 On May 13, 2013, at 12:19 PM, Robert Muir rcm...@gmail.com wrote:

 I cannot reproduce this locally, but at least this seems sheisty:

 for (File sourceFile :sourceFiles){
 if (!sourceFile.isHidden()){

 I don't think this .isHidden will exclude .svn correctly on windows:

 Tests whether the file named by this abstract pathname is a hidden
 file. The exact definition of hidden is system-dependent. On
 UNIX
 systems, a file is considered to be hidden if its name begins with a
 period character ('.'). On Microsoft Windows systems, a file is
 considered to be hidden if it has been marked as such in the
 filesystem.

 Doesn't explain why it fails on linux, but still.

 On Mon, May 13, 2013 at 12:12 PM, Robert Muir rcm...@gmail.com wrote:

 I'm looking into this one. I think the logic for file comparison is
 not quite right and needs to ignore .svn

 On Mon, May 13, 2013 at 10:25 AM, Policeman Jenkins Server
 jenk...@thetaphi.de wrote:

 Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/5636/
 Java: 64bit/jdk1.7.0_21 -XX:+UseCompressedOops
 -XX:+UseConcMarkSweepGC

 2 tests failed.
 FAILED:
 org.apache.solr.cloud.ZkCLITest.testUpConfigLinkConfigClearZk

 Error Message:

 /mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/build/solr-core/test/J0/./solrtest-confdropspot-org.apache.solr.cloud.ZkCLITest-1368454347615/lang/.svn/prop-base/stoptags_ja.txt.svn-base
 does not exist
 source:/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/example/solr/collection1/conf/lang/.svn/prop-base/stoptags_ja.txt.svn-base

 Stack Trace:
 java.lang.AssertionError:
 /mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/build/solr-core/test/J0/./solrtest-confdropspot-org.apache.solr.cloud.ZkCLITest-1368454347615/lang/.svn/prop-base/stoptags_ja.txt.svn-base
 does not exist
 source:/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/example/solr/collection1/conf/lang/.svn/prop-base/stoptags_ja.txt.svn-base




 at

 __randomizedtesting.SeedInfo.seed([935A541927F509FD:B73CF917AD3ABF9B]:0)
 at org.junit.Assert.fail(Assert.java:93)
 at org.junit.Assert.assertTrue(Assert.java:43)
 at
 org.apache.solr.cloud.ZkCLITest.testUpConfigLinkConfigClearZk(ZkCLITest.java:198)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
 at
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke(Method.java:601)
 at
 com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1559)
 at
 com.carrotsearch.randomizedtesting.RandomizedRunner.access$600(RandomizedRunner.java:79)
 at
 com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:737)
 at
 com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:773)
 at

 com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:787)
 at
 com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
 at
 org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
 at
 org.apache.lucene.util.TestRuleFieldCacheSanity$1.evaluate(TestRuleFieldCacheSanity.java:51)
 at
 org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
 at
 com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
 at
 org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
 at
 

[jira] [Commented] (SOLR-4816) Add method to CloudSolrServer to send updates to the correct shard

2013-05-13 Thread Joel Bernstein (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4816?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13656119#comment-13656119
 ] 

Joel Bernstein commented on SOLR-4816:
--

OK, reviewing SOLR-3154. Thanks!

 Add method to CloudSolrServer to send updates to the correct shard
 --

 Key: SOLR-4816
 URL: https://issues.apache.org/jira/browse/SOLR-4816
 Project: Solr
  Issue Type: Improvement
  Components: SolrCloud
Affects Versions: 4.3
Reporter: Joel Bernstein
Priority: Minor
 Attachments: SOLR-4816.patch


 This issue adds a directUpdate method to CloudSolrServer which routes 
 update requests to the correct shard. This would be a nice feature to have to 
 eliminate the document routing overhead on the Solr servers.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-trunk-Linux (64bit/jdk1.8.0-ea-b86) - Build # 5638 - Still Failing!

2013-05-13 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/5638/
Java: 64bit/jdk1.8.0-ea-b86 -XX:+UseCompressedOops -XX:+UseParallelGC

1 tests failed.
FAILED:  junit.framework.TestSuite.org.apache.solr.search.QueryEqualityTest

Error Message:
testParserCoverage was run w/o any other method explicitly testing qparser: 
maxscore

Stack Trace:
java.lang.AssertionError: testParserCoverage was run w/o any other method 
explicitly testing qparser: maxscore
at __randomizedtesting.SeedInfo.seed([F867C78D2EA3DF5F]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at 
org.apache.solr.search.QueryEqualityTest.afterClassParserCoverageTest(QueryEqualityTest.java:61)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:490)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1559)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.access$600(RandomizedRunner.java:79)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:700)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:43)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:70)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:358)
at java.lang.Thread.run(Thread.java:722)




Build Log:
[...truncated 9792 lines...]
[junit4:junit4] Suite: org.apache.solr.search.QueryEqualityTest
[junit4:junit4]   1 INFO  - 2013-05-13 17:04:12.090; 
org.apache.solr.SolrTestCaseJ4; initCore
[junit4:junit4]   1 INFO  - 2013-05-13 17:04:12.091; 
org.apache.solr.core.SolrResourceLoader; new SolrResourceLoader for directory: 
'/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/build/solr-core/test-files/solr/collection1/'
[junit4:junit4]   1 INFO  - 2013-05-13 17:04:12.091; 
org.apache.solr.core.SolrResourceLoader; Adding 
'file:/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/build/solr-core/test-files/solr/collection1/lib/README'
 to classloader
[junit4:junit4]   1 INFO  - 2013-05-13 17:04:12.091; 
org.apache.solr.core.SolrResourceLoader; Adding 
'file:/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/build/solr-core/test-files/solr/collection1/lib/classes/'
 to classloader
[junit4:junit4]   1 INFO  - 2013-05-13 17:04:12.119; 
org.apache.solr.core.SolrConfig; Using Lucene MatchVersion: LUCENE_50
[junit4:junit4]   1 INFO  - 2013-05-13 17:04:12.152; 
org.apache.solr.core.SolrConfig; Loaded SolrConfig: solrconfig.xml
[junit4:junit4]   1 INFO  - 2013-05-13 17:04:12.153; 
org.apache.solr.schema.IndexSchema; Reading Solr Schema from schema15.xml
[junit4:junit4]   1 INFO  - 2013-05-13 17:04:12.156; 
org.apache.solr.schema.IndexSchema; [null] Schema name=test
[junit4:junit4]   1 INFO  - 2013-05-13 17:04:12.415; 
org.apache.solr.schema.IndexSchema; default search field in schema is text
[junit4:junit4]   1 INFO  - 2013-05-13 17:04:12.417; 
org.apache.solr.schema.IndexSchema; unique key field: id
[junit4:junit4]   1 INFO  - 2013-05-13 17:04:12.418; 
org.apache.solr.schema.FileExchangeRateProvider; Reloading exchange rates from 
file currency.xml
[junit4:junit4]   1 INFO  - 2013-05-13 17:04:12.420; 
org.apache.solr.schema.FileExchangeRateProvider; Reloading exchange rates from 
file currency.xml
[junit4:junit4]   1 INFO  - 

[jira] [Commented] (SOLR-4785) New MaxScoreQParserPlugin

2013-05-13 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4785?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13656122#comment-13656122
 ] 

Robert Muir commented on SOLR-4785:
---

{quote}
I had a stab at putting some basic equality tests in place, but looking at the 
test case itself I wonder if QueryEqualityTest should be re-worked with the 
full fury of randomised testing, as it seems to be at best, only testing the 
happy cases.
{quote}

Can you commit this? I think it's more important that the build is unbroken.

 New MaxScoreQParserPlugin
 -

 Key: SOLR-4785
 URL: https://issues.apache.org/jira/browse/SOLR-4785
 Project: Solr
  Issue Type: New Feature
  Components: query parsers
Reporter: Jan Høydahl
Assignee: Jan Høydahl
Priority: Minor
 Fix For: 5.0, 4.4

 Attachments: 
 SOLR-4785-Add-tests-for-maxscore-to-QueryEqualityTest.patch, SOLR-4785.patch, 
 SOLR-4785.patch


 A customer wants to contribute this component back.
 It is a QParser which behaves exactly like the lucene parser (extends it), but 
 returns the max score from the clauses, i.e. max(c1,c2,c3..) instead of the 
 default, which is sum(c1,c2,c3...). It does this by wrapping all SHOULD 
 clauses in a DisjunctionMaxQuery with tie=1.0. Any MUST or PROHIBITED clauses 
 are passed through as-is. Non-boolean queries, e.g. NumericRange, 
 fall through to the lucene parser.
 To use, add to solrconfig.xml:
 {code:xml}
   <queryParser name="maxscore" class="solr.MaxScoreQParserPlugin"/>
 {code}
 Then use it in a query
 {noformat}
 q=A AND B AND {!maxscore v=$max}max=C OR (D AND E)
 {noformat}
 This will return the score of A+B+max(C,sum(D+E))

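 Roughly, the clause rewriting described above amounts to the following (a sketch 
 only; originalQuery is assumed to be the parsed BooleanQuery, and the tie value is 
 the one quoted in the description):
 {code:java}
 BooleanQuery rewritten = new BooleanQuery();
 DisjunctionMaxQuery shoulds = new DisjunctionMaxQuery(1.0f);  // tie from the description
 int shouldCount = 0;
 for (BooleanClause clause : originalQuery.clauses()) {
   if (clause.getOccur() == BooleanClause.Occur.SHOULD) {
     shoulds.add(clause.getQuery());   // the SHOULD clauses score through the dismax
     shouldCount++;
   } else {
     rewritten.add(clause);            // MUST / MUST_NOT pass through as-is
   }
 }
 if (shouldCount > 0) {
   rewritten.add(shoulds, BooleanClause.Occur.SHOULD);
 }
 {code}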
--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-4785) New MaxScoreQParserPlugin

2013-05-13 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4785?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13656136#comment-13656136
 ] 

Robert Muir commented on SOLR-4785:
---

I will commit this for now. Too many tests are failing in Solr (the test suite 
never passes).

 New MaxScoreQParserPlugin
 -

 Key: SOLR-4785
 URL: https://issues.apache.org/jira/browse/SOLR-4785
 Project: Solr
  Issue Type: New Feature
  Components: query parsers
Reporter: Jan Høydahl
Assignee: Jan Høydahl
Priority: Minor
 Fix For: 5.0, 4.4

 Attachments: 
 SOLR-4785-Add-tests-for-maxscore-to-QueryEqualityTest.patch, SOLR-4785.patch, 
 SOLR-4785.patch


 A customer wants to contribute back this component.
 It is a QParser which behaves exactly like lucene parser (extends it), but 
 returns the Max score from the clauses, i.e. max(c1,c2,c3..) instead of the 
 default which is sum(c1,c2,c3...). It does this by wrapping all SHOULD 
 clauses in a DisjunctionMaxQuery with tie=1.0. Any MUST or PROHIBITED clauses 
 are passed through as-is. Non-boolean queries, e.g. NumericRange 
 falls-through to lucene parser.
 To use, add to solrconfig.xml:
 {code:xml}
   <queryParser name="maxscore" class="solr.MaxScoreQParserPlugin"/>
 {code}
 Then use it in a query
 {noformat}
 q=A AND B AND {!maxscore v=$max}max=C OR (D AND E)
 {noformat}
 This will return the score of A+B+max(C,sum(D+E))

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-4.x-Linux (32bit/jdk1.7.0_21) - Build # 5577 - Still Failing!

2013-05-13 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-4.x-Linux/5577/
Java: 32bit/jdk1.7.0_21 -client -XX:+UseConcMarkSweepGC

1 tests failed.
FAILED:  junit.framework.TestSuite.org.apache.solr.search.QueryEqualityTest

Error Message:
testParserCoverage was run w/o any other method explicitly testing qparser: 
maxscore

Stack Trace:
java.lang.AssertionError: testParserCoverage was run w/o any other method 
explicitly testing qparser: maxscore
at __randomizedtesting.SeedInfo.seed([28F0A80915471FDE]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at 
org.apache.solr.search.QueryEqualityTest.afterClassParserCoverageTest(QueryEqualityTest.java:61)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:601)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1559)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.access$600(RandomizedRunner.java:79)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:700)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:43)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:70)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:358)
at java.lang.Thread.run(Thread.java:722)




Build Log:
[...truncated 9872 lines...]
[junit4:junit4] Suite: org.apache.solr.search.QueryEqualityTest
[junit4:junit4]   1 INFO  - 2013-05-13 17:47:10.277; 
org.apache.solr.SolrTestCaseJ4; initCore
[junit4:junit4]   1 INFO  - 2013-05-13 17:47:10.278; 
org.apache.solr.core.SolrResourceLoader; new SolrResourceLoader for directory: 
'/mnt/ssd/jenkins/workspace/Lucene-Solr-4.x-Linux/solr/build/solr-core/test-files/solr/collection1/'
[junit4:junit4]   1 INFO  - 2013-05-13 17:47:10.279; 
org.apache.solr.core.SolrResourceLoader; Adding 
'file:/mnt/ssd/jenkins/workspace/Lucene-Solr-4.x-Linux/solr/build/solr-core/test-files/solr/collection1/lib/README'
 to classloader
[junit4:junit4]   1 INFO  - 2013-05-13 17:47:10.279; 
org.apache.solr.core.SolrResourceLoader; Adding 
'file:/mnt/ssd/jenkins/workspace/Lucene-Solr-4.x-Linux/solr/build/solr-core/test-files/solr/collection1/lib/classes/'
 to classloader
[junit4:junit4]   1 INFO  - 2013-05-13 17:47:10.317; 
org.apache.solr.core.SolrConfig; Using Lucene MatchVersion: LUCENE_44
[junit4:junit4]   1 INFO  - 2013-05-13 17:47:10.385; 
org.apache.solr.core.SolrConfig; Loaded SolrConfig: solrconfig.xml
[junit4:junit4]   1 INFO  - 2013-05-13 17:47:10.386; 
org.apache.solr.schema.IndexSchema; Reading Solr Schema from schema15.xml
[junit4:junit4]   1 INFO  - 2013-05-13 17:47:10.390; 
org.apache.solr.schema.IndexSchema; [null] Schema name=test
[junit4:junit4]   1 INFO  - 2013-05-13 17:47:10.743; 
org.apache.solr.schema.IndexSchema; default search field in schema is text
[junit4:junit4]   1 INFO  - 2013-05-13 17:47:10.746; 
org.apache.solr.schema.IndexSchema; unique key field: id
[junit4:junit4]   1 INFO  - 2013-05-13 17:47:10.747; 
org.apache.solr.schema.FileExchangeRateProvider; Reloading exchange rates from 
file currency.xml
[junit4:junit4]   1 INFO  - 2013-05-13 17:47:10.750; 
org.apache.solr.schema.FileExchangeRateProvider; Reloading exchange rates from 
file currency.xml
[junit4:junit4]   1 INFO  - 2013-05-13 

[jira] [Commented] (LUCENE-4981) Deprecate PositionFilter

2013-05-13 Thread Adrien Grand (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-4981?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13656171#comment-13656171
 ] 

Adrien Grand commented on LUCENE-4981:
--

Steve, may I commit this patch?

 Deprecate PositionFilter
 

 Key: LUCENE-4981
 URL: https://issues.apache.org/jira/browse/LUCENE-4981
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Adrien Grand
Assignee: Adrien Grand
Priority: Minor
 Attachments: LUCENE-4981.patch


 According to the documentation 
 (http://wiki.apache.org/solr/AnalyzersTokenizersTokenFilters#solr.PositionFilterFactory),
  PositionFilter is mainly useful to make query parsers generate boolean 
 queries instead of phrase queries, although this problem can be solved at 
 the query-parsing level instead of the analysis level (e.g. using 
 QueryParser.setAutoGeneratePhraseQueries).
 So given that PositionFilter corrupts token graphs (see TestRandomChains), I 
 propose to deprecate it.
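 
 A minimal sketch of the query-parsing-level alternative mentioned above 
 (the field name and analyzer are placeholders, not part of the issue):
 {code:java}
import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.queryparser.classic.QueryParser;
import org.apache.lucene.search.Query;
import org.apache.lucene.util.Version;

public class NoPhraseQueriesSketch {
  public static Query parse(String userQuery) throws Exception {
    Analyzer analyzer = new StandardAnalyzer(Version.LUCENE_43);
    QueryParser qp = new QueryParser(Version.LUCENE_43, "text", analyzer);
    // Keep multi-token terms as boolean clauses instead of phrase queries,
    // which is the behaviour PositionFilter was typically used to force.
    qp.setAutoGeneratePhraseQueries(false);
    return qp.parse(userQuery);
  }
}
{code}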

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Default Value for All Indexed Fields

2013-05-13 Thread srividhyau
We are using Lucene 3.0.3.  

Is there a way to set a default value for all fields being indexed in Lucene?
Say, I want to set the default value to NULL, indexed=NOT_ANALYZED,
stored=false.

This default value will be used when a particular document does not have a
value set for a given field.

-Vidhya
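
Lucene itself has no schema, so there is nothing built in for this; the usual
workaround is to apply the default in application code just before
addDocument(). A rough sketch against 3.0.x (the field list and the "NULL"
sentinel are whatever the application chooses):

import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;

public class DefaultValueHelper {
  /** Adds a NOT_ANALYZED, unstored "NULL" value for every expected field
      that the document does not already carry. */
  public static void applyDefaults(Document doc, String[] expectedFields) {
    for (String name : expectedFields) {
      if (doc.getField(name) == null) {
        doc.add(new Field(name, "NULL",
            Field.Store.NO, Field.Index.NOT_ANALYZED));
      }
    }
  }
}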



--
View this message in context: 
http://lucene.472066.n3.nabble.com/Default-Value-for-All-Indexed-Fields-tp4063006.html
Sent from the Lucene - Java Developer mailing list archive at Nabble.com.

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (LUCENE-3907) Improve the Edge/NGramTokenizer/Filters

2013-05-13 Thread Adrien Grand (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-3907?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adrien Grand resolved LUCENE-3907.
--

Resolution: Fixed

 Improve the Edge/NGramTokenizer/Filters
 ---

 Key: LUCENE-3907
 URL: https://issues.apache.org/jira/browse/LUCENE-3907
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Michael McCandless
Assignee: Adrien Grand
  Labels: gsoc2013
 Fix For: 4.4

 Attachments: LUCENE-3907.patch


 Our ngram tokenizers/filters could use some love.  E.g., they output ngrams in 
 multiple passes, instead of stacked, which messes up offsets/positions and 
 requires too much buffering (can hit OOME for long tokens).  They clip at 
 1024 chars (tokenizers) but don't (token filters).  They split up surrogate 
 pairs incorrectly.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-4816) Add method to CloudSolrServer to send updates to the correct shard

2013-05-13 Thread Joel Bernstein (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4816?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13656178#comment-13656178
 ] 

Joel Bernstein commented on SOLR-4816:
--

In looking at SOLR-3154, it looks like this was done pre-document routing, so it 
will have to be changed. It also looks like it won't work with batches because it 
checks the first document id only.

The batches issue is why I created another method. We could get batches into 
the original method but the code would be pretty hairy. I like the idea of a 
nice clean separate method for this.

Let me know how you'd like to proceed. I can work on updating the SOLR-3154 
implementation or keep working this implementation.

Thanks

 Add method to CloudSolrServer to send updates to the correct shard
 --

 Key: SOLR-4816
 URL: https://issues.apache.org/jira/browse/SOLR-4816
 Project: Solr
  Issue Type: Improvement
  Components: SolrCloud
Affects Versions: 4.3
Reporter: Joel Bernstein
Priority: Minor
 Attachments: SOLR-4816.patch


 This issue adds a directUpdate method to CloudSolrServer which routes 
 update requests to the correct shard. This would be a nice feature to have to 
 eliminate the document routing overhead on the Solr servers.
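 
 The routing idea, stripped to its essence, looks roughly like the sketch 
 below (illustrative only: the hash function and the cluster-state lookup are 
 simplified stand-ins, not CloudSolrServer's actual implementation):
 {code:java}
import java.util.Map;
import java.util.TreeMap;

/** Sketch: map each shard's hash-range start to its leader URL and pick the
 *  leader whose range covers the hash of the document's unique key. */
public class DirectRoutingSketch {
  private final TreeMap<Integer, String> rangeStartToLeaderUrl =
      new TreeMap<Integer, String>();

  public DirectRoutingSketch(Map<Integer, String> ranges) {
    rangeStartToLeaderUrl.putAll(ranges);
  }

  /** Returns the leader URL that should receive the update for this id. */
  public String leaderFor(String uniqueKey) {
    int hash = uniqueKey.hashCode();             // stand-in for Solr's doc hash
    Map.Entry<Integer, String> e = rangeStartToLeaderUrl.floorEntry(hash);
    if (e == null) {
      e = rangeStartToLeaderUrl.lastEntry();     // wrap around below lowest start
    }
    return e.getValue();
  }
}
{code}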

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-4817) Solr should not fall back to the back compat built in solr.xml in SolrCloud mode.

2013-05-13 Thread Mark Miller (JIRA)
Mark Miller created SOLR-4817:
-

 Summary: Solr should not fall back to the back compat built in 
solr.xml in SolrCloud mode.
 Key: SOLR-4817
 URL: https://issues.apache.org/jira/browse/SOLR-4817
 Project: Solr
  Issue Type: Bug
  Components: SolrCloud
Reporter: Mark Miller
Assignee: Mark Miller
Priority: Minor
 Fix For: 5.0, 4.4


A hard error is much more useful, and this built-in solr.xml is not very good 
for SolrCloud - with the old-style solr.xml with cores in it, you won't have 
persistence, and the new style is not really ideal either.

I think failing on this instead makes it easier to debug solr.home problems - but 
just in SolrCloud mode for now, due to back compat. We might want to pull the 
whole internal solr.xml for 5.0.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-4816) Add method to CloudSolrServer to send updates to the correct shard

2013-05-13 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4816?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13656232#comment-13656232
 ] 

Mark Miller commented on SOLR-4816:
---

bq.  It also looks like it won't work with batches because it checks the first 
document id only.

Yes, see the comments in the issue:

The patch is limited, but a start: right now it's just for String and Integer 
Id's and it only acts upon the first document or deleteby id (favoring 
document) - if you use the bulk methods they are all sent along to the leader 
of the first id.

bq.  I like the idea of a nice clean separate method for this.

I still think it needs to be the default method, and we simply should support 
batching. The other limitation around types is no longer a concern now that 
Yonik changing the hashing.

I don't know that we need to build on the patch in SOLR-3154, it was a very 
quick experiment, but I think that is the right implementation direction. The 
smart client should simply work optimally without having to use alternate 
methods IMO.

 Add method to CloudSolrServer to send updates to the correct shard
 --

 Key: SOLR-4816
 URL: https://issues.apache.org/jira/browse/SOLR-4816
 Project: Solr
  Issue Type: Improvement
  Components: SolrCloud
Affects Versions: 4.3
Reporter: Joel Bernstein
Priority: Minor
 Attachments: SOLR-4816.patch


 This issue adds a directUpdate method to CloudSolrServer which routes 
 update requests to the correct shard. This would be a nice feature to have to 
 eliminate the document routing overhead on the Solr servers.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-4816) Add method to CloudSolrServer to send updates to the correct shard

2013-05-13 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4816?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13656232#comment-13656232
 ] 

Mark Miller edited comment on SOLR-4816 at 5/13/13 6:37 PM:


bq.  It also looks like it won't work with batches because it checks the first 
document id only.

Yes, see the comments in the issue:

The patch is limited, but a start: right now it's just for String and Integer 
Id's and it only acts upon the first document or deleteby id (favoring 
document) - if you use the bulk methods they are all sent along to the leader 
of the first id.

bq.  I like the idea of a nice clean separate method for this.

I still think it needs to be the default method, and we simply should support 
batching. The other limitation around types is no longer a concern now that 
Yonik changed the hashing.

I don't know that we need to build on the patch in SOLR-3154, it was a very 
quick experiment, but I think that is the right implementation direction. The 
smart client should simply work optimally without having to use alternate 
methods IMO.

  was (Author: markrmil...@gmail.com):
bq.  It also looks like it won't work with batches because it checks the 
first document id only.

Yes, see the comments in the issue:

The patch is limited, but a start: right now it's just for String and Integer 
Id's and it only acts upon the first document or deleteby id (favoring 
document) - if you use the bulk methods they are all sent along to the leader 
of the first id.

bq.  I like the idea of a nice clean separate method for this.

I still think it needs to be the default method, and we simply should support 
batching. The other limitation around types is no longer a concern now that 
Yonik changing the hashing.

I don't know that we need to build on the patch in SOLR-3154, it was a very 
quick experiment, but I think that is the right implementation direction. The 
smart client should simply work optimally without having to use alternate 
methods IMO.
  
 Add method to CloudSolrServer to send updates to the correct shard
 --

 Key: SOLR-4816
 URL: https://issues.apache.org/jira/browse/SOLR-4816
 Project: Solr
  Issue Type: Improvement
  Components: SolrCloud
Affects Versions: 4.3
Reporter: Joel Bernstein
Priority: Minor
 Attachments: SOLR-4816.patch


 This issue adds a directUpdate method to CloudSolrServer which routes 
 update requests to the correct shard. This would be a nice feature to have to 
 eliminate the document routing overhead on the Solr servers.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-4817) Solr should not fall back to the back compat built in solr.xml in SolrCloud mode.

2013-05-13 Thread Hoss Man (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4817?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13656264#comment-13656264
 ] 

Hoss Man commented on SOLR-4817:


+1

4.4 non/cloud mode: warn that implicit solr.xml is being used.
4.4 in cloud mode: fail if no solr.xml
5.0 in either mode: fail if no solr.xml


 Solr should not fall back to the back compat built in solr.xml in SolrCloud 
 mode.
 -

 Key: SOLR-4817
 URL: https://issues.apache.org/jira/browse/SOLR-4817
 Project: Solr
  Issue Type: Bug
  Components: SolrCloud
Reporter: Mark Miller
Assignee: Mark Miller
Priority: Minor
 Fix For: 5.0, 4.4


 A hard error is much more useful, and this built-in solr.xml is not very good 
 for SolrCloud - with the old-style solr.xml with cores in it, you won't have 
 persistence, and the new style is not really ideal either.
 I think failing on this instead makes it easier to debug solr.home problems - but 
 just in SolrCloud mode for now, due to back compat. We might want to pull the 
 whole internal solr.xml for 5.0.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: ReleaseTodo update

2013-05-13 Thread Steve Rowe
Good catch, Jan - feel free to add this to the ReleaseTodo wiki page yourself. 
- Steve

On May 12, 2013, at 7:18 PM, Jan Høydahl jan@cominvent.com wrote:

 Hi,
 
 I discovered that the doc redirect still redirects to 4_1_0 javadocs.
 I changed .htaccess so it now points to 4_3_0 
 https://svn.apache.org/repos/asf/lucene/cms/trunk/content/.htaccess
 
 The Release TODO should mention updating this link - 
 http://wiki.apache.org/lucene-java/ReleaseTodo
 
 --
 Jan Høydahl, search solution architect
 Cominvent AS - www.cominvent.com
 
 
 -
 To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
 For additional commands, e-mail: dev-h...@lucene.apache.org
 


-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-4818) Guice vs Solr

2013-05-13 Thread Mikhail Khludnev (JIRA)
Mikhail Khludnev created SOLR-4818:
--

 Summary: Guice vs Solr
 Key: SOLR-4818
 URL: https://issues.apache.org/jira/browse/SOLR-4818
 Project: Solr
  Issue Type: Improvement
Reporter: Mikhail Khludnev


Hello,

I want to follow up on the IRC log from SOLR-1393.

At a minimum, the questions are:
- how much Guice do you accept: should it load only the user's plugins, or fully 
replace solrconfig.xml?
- are there any observable stages for this migration?

I'm ccing [~grant_ingers...@yahoo.com] [~rcmuir] as people who expressed 
interest and/or concerns about Guice.

Please vote/ban!

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-1393) Allow more control over SearchComponents ordering in SearchHandler

2013-05-13 Thread Mikhail Khludnev (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-1393?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13656274#comment-13656274
 ] 

Mikhail Khludnev commented on SOLR-1393:


Follow up SOLR-4818

 Allow more control over SearchComponents ordering in SearchHandler
 --

 Key: SOLR-1393
 URL: https://issues.apache.org/jira/browse/SOLR-1393
 Project: Solr
  Issue Type: Improvement
Reporter: Grant Ingersoll
Priority: Minor
  Labels: newdev

 It would be useful to be able to add the notion of before/after when 
 declaring search components.  Currently, you can either explicitly declare 
 all components or insert at the beginning or end.  It would be nice to be 
 able to say: this new component comes after the Query component without 
 having to declare all the components.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-1913) QParserPlugin plugin for Search Results Filtering Based on Bitwise Operations on Integer Fields

2013-05-13 Thread Deepthi Sigireddi (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-1913?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13656283#comment-13656283
 ] 

Deepthi Sigireddi commented on SOLR-1913:
-

Christopher,
Can you tell me which version of lucene-core you have? For instance, in my web 
app, under WEB-INF/lib, I have lucene-core-4.2.1.jar. The method you are having 
problems with has been in lucene-core since 4.0, but not in 3.5.

 QParserPlugin plugin for Search Results Filtering Based on Bitwise Operations 
 on Integer Fields
 ---

 Key: SOLR-1913
 URL: https://issues.apache.org/jira/browse/SOLR-1913
 Project: Solr
  Issue Type: New Feature
  Components: search
Reporter: Israel Ekpo
 Fix For: 4.4

 Attachments: bitwise_filter_plugin.jar, SOLR-1913.bitwise.tar.gz, 
 SOLR-1913-src.tar.gz, solr-bitwise-plugin.jar

   Original Estimate: 1h
  Remaining Estimate: 1h

 BitwiseQueryParserPlugin is an org.apache.solr.search.QParserPlugin that allows 
 users to filter the documents returned from a query
 by performing bitwise operations between a particular integer field in the 
 index
 and the specified value.
 This Solr plugin is based on the BitwiseFilter in LUCENE-2460
 See https://issues.apache.org/jira/browse/LUCENE-2460 for more details
 This is the syntax for searching in Solr:
 http://localhost:8983/path/to/solr/select/?q={!bitwise field=fieldname 
 op=OPERATION_NAME source=sourcevalue negate=boolean}remainder of query
 Example :
 http://localhost:8983/solr/bitwise/select/?q={!bitwise field=user_permissions 
 op=AND source=3 negate=true}state:FL
 The negate parameter is optional
 The field parameter is the name of the integer field
 The op parameter is the name of the operation; one of {AND, OR, XOR}
 The source parameter is the specified integer value
 The negate parameter is a boolean indicating whether or not to negate the 
 results of the bitwise operation
 To test out this plugin, simply copy the jar file containing the plugin 
 classes into your $SOLR_HOME/lib directory and then
 add the following to your solrconfig.xml file after the dismax request 
 handler:
 <queryParser name="bitwise" 
 class="org.apache.solr.bitwise.BitwiseQueryParserPlugin" basedOn="dismax" />
 Restart your servlet container.
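 
 For readers new to the plugin, the per-document predicate it evaluates is 
 conceptually along these lines (a sketch only; the plugin's exact semantics 
 for each operation may differ from this reading of the description):
 {code:java}
public final class BitwiseMatchSketch {
  public enum Op { AND, OR, XOR }

  /** True when the bitwise combination of the stored field value and the
   *  source value matches, optionally negated. */
  public static boolean matches(int fieldValue, Op op, int source, boolean negate) {
    final boolean result;
    switch (op) {
      case AND: result = (fieldValue & source) == source; break; // all source bits set
      case OR:  result = (fieldValue & source) != 0;      break; // at least one source bit set
      case XOR: result = (fieldValue ^ source) != 0;      break; // values differ in some bit
      default:  throw new IllegalArgumentException("unknown op: " + op);
    }
    return negate ? !result : result;
  }
}
{code}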

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Assigned] (SOLR-4760) Improve logging messages during startup to better identify core

2013-05-13 Thread Shawn Heisey (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-4760?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shawn Heisey reassigned SOLR-4760:
--

Assignee: Shawn Heisey

 Improve logging messages during startup to better identify core
 ---

 Key: SOLR-4760
 URL: https://issues.apache.org/jira/browse/SOLR-4760
 Project: Solr
  Issue Type: Wish
  Components: Schema and Analysis
Affects Versions: 4.3
Reporter: Alexandre Rafalovitch
Assignee: Shawn Heisey
Priority: Minor
 Fix For: 5.0, 4.4, 4.3.1

 Attachments: SOLR-4760.patch, SOLR-4760.patch, SOLR-4760-testfix.patch


 Some log messages could be more informative. For example:
 {code}
 680 [coreLoadExecutor-3-thread-3] WARN org.apache.solr.schema.IndexSchema  – 
 schema has no name!
 {code}
 Would be _very nice_ to know which core this is complaining about.
 Later, once the core is loaded, the core name shows up in the logs,
 but it would be nice to have it earlier without having to
 triangulate it through 'Loading core' messages.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-4785) New MaxScoreQParserPlugin

2013-05-13 Thread JIRA

[ 
https://issues.apache.org/jira/browse/SOLR-4785?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13656284#comment-13656284
 ] 

Jan Høydahl commented on SOLR-4785:
---

Thanks for tackling this Greg and Robert. I'm sure I ran the full tests before 
commit but must have messed it up somewhere along the way.

 New MaxScoreQParserPlugin
 -

 Key: SOLR-4785
 URL: https://issues.apache.org/jira/browse/SOLR-4785
 Project: Solr
  Issue Type: New Feature
  Components: query parsers
Reporter: Jan Høydahl
Assignee: Jan Høydahl
Priority: Minor
 Fix For: 5.0, 4.4

 Attachments: 
 SOLR-4785-Add-tests-for-maxscore-to-QueryEqualityTest.patch, SOLR-4785.patch, 
 SOLR-4785.patch


 A customer wants to contribute this component back.
 It is a QParser which behaves exactly like the lucene parser (it extends it), but 
 returns the max score from the clauses, i.e. max(c1,c2,c3..) instead of the 
 default, which is sum(c1,c2,c3...). It does this by wrapping all SHOULD 
 clauses in a DisjunctionMaxQuery with tie=1.0. Any MUST or PROHIBITED clauses 
 are passed through as-is. Non-boolean queries, e.g. NumericRange, fall 
 through to the lucene parser.
 To use, add to solrconfig.xml:
 {code:xml}
   <queryParser name="maxscore" class="solr.MaxScoreQParserPlugin"/>
 {code}
 Then use it in a query
 {noformat}
 q=A AND B AND {!maxscore v=$max}&max=C OR (D AND E)
 {noformat}
 This will return the score of A+B+max(C,sum(D+E))

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-4799) SQLEntityProcessor for zipper join

2013-05-13 Thread Mikhail Khludnev (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4799?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13656291#comment-13656291
 ] 

Mikhail Khludnev commented on SOLR-4799:


I want to review the functionality; here is the proposed config:

{code:xml}
<dataConfig>
  <document>
    <entity name="parent" processor="SqlEntityProcessor"
            query="SELECT * FROM PARENT ORDER BY id">
      <entity name="child_1"
              processor="OrderedChildrenEntityProcessor"
              where="parent_id=parent.id"
              query="SELECT * FROM CHILD_1 ORDER BY parent_id">
      </entity>
    </entity>
  </document>
</dataConfig>
{code}

Do you like it?

Parent and child SQLs can have different orderings, which kills the zipper. 
OrderedChildrenEntityProcessor can enforce ASC order for the PK and FK keys 
(and throw an exception in case of violation), but it might also detect the order 
itself, which complicates the code a little. What do you expect for a first 
code contribution?


 SQLEntityProcessor for zipper join
 --

 Key: SOLR-4799
 URL: https://issues.apache.org/jira/browse/SOLR-4799
 Project: Solr
  Issue Type: New Feature
  Components: contrib - DataImportHandler
Reporter: Mikhail Khludnev
Priority: Minor
  Labels: dih

 DIH is mostly considered a playground tool, and real usages end up with 
 SolrJ. I want to contribute a few improvements targeting DIH performance.
 This one provides a performant approach for joining SQL entities with a minimal 
 memory footprint, in contrast to 
 http://wiki.apache.org/solr/DataImportHandler#CachedSqlEntityProcessor
 The idea is:
 * the parent table is explicitly ordered by its PK in SQL
 * the children table is explicitly ordered by the parent_id FK in SQL
 * the children entity processor joins the ordered resultsets with a ‘zipper’ (merge) algorithm.
 Do you think it’s worth contributing to DIH?
 cc: [~goksron] [~jdyer]
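 
 For the record, the ‘zipper’ itself is just a merge join over two sorted 
 streams; a minimal sketch (row types and field names are invented for 
 illustration and are not the DIH API) could look like:
 {code:java}
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;

public class ZipperJoinSketch {
  static class Parent { int id; }
  static class Child  { int parentId; }

  /** Single pass over both iterators, assuming parents are sorted by id and
   *  children by parentId, both ascending. */
  public static void zip(Iterator<Parent> parents, Iterator<Child> children) {
    Child pending = children.hasNext() ? children.next() : null;
    while (parents.hasNext()) {
      Parent p = parents.next();
      // skip orphan children whose FK is below the current parent's PK
      while (pending != null && pending.parentId < p.id) {
        pending = children.hasNext() ? children.next() : null;
      }
      List<Child> mine = new ArrayList<Child>();
      // collect children while their FK equals the current parent's PK
      while (pending != null && pending.parentId == p.id) {
        mine.add(pending);
        pending = children.hasNext() ? children.next() : null;
      }
      emit(p, mine); // hand the joined row to the caller
    }
  }

  private static void emit(Parent p, List<Child> children) {
    System.out.println("parent " + p.id + " -> " + children.size() + " children");
  }
}
{code}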

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-trunk-Linux-Java7-64-test-only - Build # 39475 - Failure!

2013-05-13 Thread builder
Build: builds.flonkings.com/job/Lucene-trunk-Linux-Java7-64-test-only/39475/

1 tests failed.
REGRESSION:  org.apache.lucene.util.TestTimSorter.testRandomLowCardinality

Error Message:


Stack Trace:
java.lang.NullPointerException
at 
__randomizedtesting.SeedInfo.seed([393D76D771040186:6B0D2478D5BFB1A9]:0)
at java.lang.System.arraycopy(Native Method)
at org.apache.lucene.util.ArrayTimSorter.save(ArrayTimSorter.java:63)
at org.apache.lucene.util.TimSorter.rotate(TimSorter.java:216)
at org.apache.lucene.util.Sorter.mergeInPlace(Sorter.java:75)
at org.apache.lucene.util.Sorter.mergeInPlace(Sorter.java:77)
at org.apache.lucene.util.Sorter.mergeInPlace(Sorter.java:78)
at org.apache.lucene.util.TimSorter.merge(TimSorter.java:188)
at org.apache.lucene.util.TimSorter.mergeAt(TimSorter.java:169)
at org.apache.lucene.util.TimSorter.ensureInvariants(TimSorter.java:137)
at org.apache.lucene.util.TimSorter.sort(TimSorter.java:200)
at 
org.apache.lucene.util.BaseSortTestCase.test(BaseSortTestCase.java:66)
at 
org.apache.lucene.util.BaseSortTestCase.test(BaseSortTestCase.java:131)
at 
org.apache.lucene.util.BaseSortTestCase.test(BaseSortTestCase.java:135)
at 
org.apache.lucene.util.BaseSortTestCase.testRandomLowCardinality(BaseSortTestCase.java:155)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:601)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1559)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.access$600(RandomizedRunner.java:79)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:773)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:787)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.TestRuleFieldCacheSanity$1.evaluate(TestRuleFieldCacheSanity.java:51)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:70)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:358)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:782)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:442)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:746)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:648)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:682)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:693)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:43)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:70)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at 

Re: [JENKINS] Lucene-trunk-Linux-Java7-64-test-only - Build # 39475 - Failure!

2013-05-13 Thread Robert Muir
This happens if maxTempSlots is 0; then tmp is null.

On Mon, May 13, 2013 at 4:01 PM,  buil...@flonkings.com wrote:
 Build: builds.flonkings.com/job/Lucene-trunk-Linux-Java7-64-test-only/39475/

 1 tests failed.
 REGRESSION:  org.apache.lucene.util.TestTimSorter.testRandomLowCardinality

 Error Message:


 Stack Trace:
 java.lang.NullPointerException
 at 
 __randomizedtesting.SeedInfo.seed([393D76D771040186:6B0D2478D5BFB1A9]:0)
 at java.lang.System.arraycopy(Native Method)
 at org.apache.lucene.util.ArrayTimSorter.save(ArrayTimSorter.java:63)
 at org.apache.lucene.util.TimSorter.rotate(TimSorter.java:216)
 at org.apache.lucene.util.Sorter.mergeInPlace(Sorter.java:75)
 at org.apache.lucene.util.Sorter.mergeInPlace(Sorter.java:77)
 at org.apache.lucene.util.Sorter.mergeInPlace(Sorter.java:78)
 at org.apache.lucene.util.TimSorter.merge(TimSorter.java:188)
 at org.apache.lucene.util.TimSorter.mergeAt(TimSorter.java:169)
 at 
 org.apache.lucene.util.TimSorter.ensureInvariants(TimSorter.java:137)
 at org.apache.lucene.util.TimSorter.sort(TimSorter.java:200)
 at 
 org.apache.lucene.util.BaseSortTestCase.test(BaseSortTestCase.java:66)
 at 
 org.apache.lucene.util.BaseSortTestCase.test(BaseSortTestCase.java:131)
 at 
 org.apache.lucene.util.BaseSortTestCase.test(BaseSortTestCase.java:135)
 at 
 org.apache.lucene.util.BaseSortTestCase.testRandomLowCardinality(BaseSortTestCase.java:155)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
 at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke(Method.java:601)
 at 
 com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1559)
 at 
 com.carrotsearch.randomizedtesting.RandomizedRunner.access$600(RandomizedRunner.java:79)
 at 
 com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:737)
 at 
 com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:773)
 at 
 com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:787)
 at 
 org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
 at 
 org.apache.lucene.util.TestRuleFieldCacheSanity$1.evaluate(TestRuleFieldCacheSanity.java:51)
 at 
 org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
 at 
 com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
 at 
 org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
 at 
 org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:70)
 at 
 org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
 at 
 com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
 at 
 com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:358)
 at 
 com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:782)
 at 
 com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:442)
 at 
 com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:746)
 at 
 com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:648)
 at 
 com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:682)
 at 
 com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:693)
 at 
 org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
 at 
 org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
 at 
 com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
 at 
 com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
 at 
 com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
 at 
 com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
 at 
 org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:43)
 at 
 org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
 at 
 

Re: [JENKINS] Lucene-trunk-Linux-Java7-64-test-only - Build # 39475 - Failure!

2013-05-13 Thread Adrien Grand
I'll dig.

--
Adrien

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-4799) SQLEntityProcessor for zipper join

2013-05-13 Thread James Dyer (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4799?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13656323#comment-13656323
 ] 

James Dyer commented on SOLR-4799:
--

It would be even more awesome if it didn't assume the entities were extending 
SqlEntityProcessor.  I mean, make zipperjoin an option for any entity processor 
as opposed to its own new variant on SqlE.P.

 SQLEntityProcessor for zipper join
 --

 Key: SOLR-4799
 URL: https://issues.apache.org/jira/browse/SOLR-4799
 Project: Solr
  Issue Type: New Feature
  Components: contrib - DataImportHandler
Reporter: Mikhail Khludnev
Priority: Minor
  Labels: dih

 DIH is mostly considered a playground tool, and real usages end up with 
 SolrJ. I want to contribute a few improvements targeting DIH performance.
 This one provides a performant approach for joining SQL entities with a minimal 
 memory footprint, in contrast to 
 http://wiki.apache.org/solr/DataImportHandler#CachedSqlEntityProcessor
 The idea is:
 * the parent table is explicitly ordered by its PK in SQL
 * the children table is explicitly ordered by the parent_id FK in SQL
 * the children entity processor joins the ordered resultsets with a ‘zipper’ (merge) algorithm.
 Do you think it’s worth contributing to DIH?
 cc: [~goksron] [~jdyer]

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-4818) Guice vs Solr

2013-05-13 Thread Grant Ingersoll (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4818?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13656338#comment-13656338
 ] 

Grant Ingersoll commented on SOLR-4818:
---

The user's plugins are a logical starting point, but I could see it eventually 
replacing solrconfig (or at least, we could inject a version that could read 
old solrconfigs).

I've played around w/ Guice in Solr for replacing the up front servlet filter 
stuff too, but don't have anything publishable at this point in time.

 Guice vs Solr
 -

 Key: SOLR-4818
 URL: https://issues.apache.org/jira/browse/SOLR-4818
 Project: Solr
  Issue Type: Improvement
Reporter: Mikhail Khludnev

 Hello,
 I want to follow up on the IRC log from SOLR-1393.
 At a minimum, the questions are:
 - how much Guice do you accept: should it load only the user's plugins, or fully 
 replace solrconfig.xml?
 - are there any observable stages for this migration?
 I'm ccing [~grant_ingers...@yahoo.com] [~rcmuir] as people who expressed 
 interest and/or concerns about Guice.
 Please vote/ban!

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-Tests-trunk-java7 - Build # 3977 - Still Failing

2013-05-13 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-trunk-java7/3977/

2 tests failed.
REGRESSION:  
org.apache.lucene.analysis.uima.UIMATypeAwareAnalyzerTest.testRandomStrings

Error Message:
Test abandoned because suite timeout was reached.

Stack Trace:
java.lang.Exception: Test abandoned because suite timeout was reached.
at __randomizedtesting.SeedInfo.seed([4E618C3911A2075B]:0)


FAILED:  
junit.framework.TestSuite.org.apache.lucene.analysis.uima.UIMATypeAwareAnalyzerTest

Error Message:
Suite timeout exceeded (>= 720 msec).

Stack Trace:
java.lang.Exception: Suite timeout exceeded (>= 720 msec).
at __randomizedtesting.SeedInfo.seed([4E618C3911A2075B]:0)




Build Log:
[...truncated 6045 lines...]
[junit4:junit4] Suite: org.apache.lucene.analysis.uima.UIMATypeAwareAnalyzerTest
[junit4:junit4]   2 ??? 14, 2013 4:21:14 AM 
com.carrotsearch.randomizedtesting.ThreadLeakControl$2 evaluate
[junit4:junit4]   2 WARNING: Suite execution timed out: 
org.apache.lucene.analysis.uima.UIMATypeAwareAnalyzerTest
[junit4:junit4]   2  jstack at approximately timeout time 
[junit4:junit4]   2 Thread-9 ID=22 RUNNABLE
[junit4:junit4]   2at java.util.HashMap.transfer(HashMap.java:584)
[junit4:junit4]   2at java.util.HashMap.resize(HashMap.java:564)
[junit4:junit4]   2at java.util.HashMap.addEntry(HashMap.java:851)
[junit4:junit4]   2at java.util.HashMap.put(HashMap.java:484)
[junit4:junit4]   2at java.util.HashSet.add(HashSet.java:217)
[junit4:junit4]   2at 
org.apache.uima.analysis_engine.impl.AnalysisEngineManagementImpl.setName(AnalysisEngineManagementImpl.java:245)
[junit4:junit4]   2at 
org.apache.uima.analysis_engine.impl.AnalysisEngineImplBase.initialize(AnalysisEngineImplBase.java:181)
[junit4:junit4]   2at 
org.apache.uima.analysis_engine.impl.AggregateAnalysisEngine_impl.initialize(AggregateAnalysisEngine_impl.java:127)
[junit4:junit4]   2at 
org.apache.uima.impl.AnalysisEngineFactory_impl.produceResource(AnalysisEngineFactory_impl.java:94)
[junit4:junit4]   2at 
org.apache.uima.impl.CompositeResourceFactory_impl.produceResource(CompositeResourceFactory_impl.java:62)
[junit4:junit4]   2at 
org.apache.uima.UIMAFramework.produceResource(UIMAFramework.java:267)
[junit4:junit4]   2at 
org.apache.uima.UIMAFramework.produceAnalysisEngine(UIMAFramework.java:335)
[junit4:junit4]   2at 
org.apache.lucene.analysis.uima.ae.BasicAEProvider.getAE(BasicAEProvider.java:73)
[junit4:junit4]   2at 
org.apache.lucene.analysis.uima.BaseUIMATokenizer.analyzeInput(BaseUIMATokenizer.java:63)
[junit4:junit4]   2at 
org.apache.lucene.analysis.uima.UIMATypeAwareAnnotationsTokenizer.initializeIterator(UIMATypeAwareAnnotationsTokenizer.java:72)
[junit4:junit4]   2at 
org.apache.lucene.analysis.uima.UIMATypeAwareAnnotationsTokenizer.incrementToken(UIMATypeAwareAnnotationsTokenizer.java:94)
[junit4:junit4]   2at 
org.apache.lucene.analysis.BaseTokenStreamTestCase.checkAnalysisConsistency(BaseTokenStreamTestCase.java:635)
[junit4:junit4]   2at 
org.apache.lucene.analysis.BaseTokenStreamTestCase.checkRandomData(BaseTokenStreamTestCase.java:546)
[junit4:junit4]   2at 
org.apache.lucene.analysis.BaseTokenStreamTestCase.access$000(BaseTokenStreamTestCase.java:57)
[junit4:junit4]   2at 
org.apache.lucene.analysis.BaseTokenStreamTestCase$AnalysisThread.run(BaseTokenStreamTestCase.java:418)
[junit4:junit4]   2 
[junit4:junit4]   2 
TEST-UIMATypeAwareAnalyzerTest.testRandomStrings-seed#[4E618C3911A2075B] 
ID=19 WAITING on 
org.apache.lucene.analysis.BaseTokenStreamTestCase$AnalysisThread@622dffb1
[junit4:junit4]   2at java.lang.Object.wait(Native Method)
[junit4:junit4]   2- waiting on 
org.apache.lucene.analysis.BaseTokenStreamTestCase$AnalysisThread@622dffb1
[junit4:junit4]   2at java.lang.Thread.join(Thread.java:1258)
[junit4:junit4]   2at java.lang.Thread.join(Thread.java:1332)
[junit4:junit4]   2at 
org.apache.lucene.analysis.BaseTokenStreamTestCase.checkRandomData(BaseTokenStreamTestCase.java:460)
[junit4:junit4]   2at 
org.apache.lucene.analysis.BaseTokenStreamTestCase.checkRandomData(BaseTokenStreamTestCase.java:370)
[junit4:junit4]   2at 
org.apache.lucene.analysis.uima.UIMATypeAwareAnalyzerTest.testRandomStrings(UIMATypeAwareAnalyzerTest.java:65)
[junit4:junit4]   2at sun.reflect.NativeMethodAccessorImpl.invoke0(Native 
Method)
[junit4:junit4]   2at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
[junit4:junit4]   2at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
[junit4:junit4]   2at java.lang.reflect.Method.invoke(Method.java:601)
[junit4:junit4]   2at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1559)
[junit4:junit4]   2at 
com.carrotsearch.randomizedtesting.RandomizedRunner.access$600(RandomizedRunner.java:79)
[junit4:junit4]   2at 

Re: [JENKINS] Lucene-Solr-Tests-trunk-java7 - Build # 3977 - Still Failing

2013-05-13 Thread Robert Muir
I can't reproduce this locally, nor on the jenkins box itself.

On Mon, May 13, 2013 at 5:21 PM, Apache Jenkins Server
jenk...@builds.apache.org wrote:
 Build: https://builds.apache.org/job/Lucene-Solr-Tests-trunk-java7/3977/

 2 tests failed.
 REGRESSION:  
 org.apache.lucene.analysis.uima.UIMATypeAwareAnalyzerTest.testRandomStrings

 Error Message:
 Test abandoned because suite timeout was reached.

 Stack Trace:
 java.lang.Exception: Test abandoned because suite timeout was reached.
 at __randomizedtesting.SeedInfo.seed([4E618C3911A2075B]:0)


 FAILED:  
 junit.framework.TestSuite.org.apache.lucene.analysis.uima.UIMATypeAwareAnalyzerTest

 Error Message:
 Suite timeout exceeded (>= 720 msec).

 Stack Trace:
 java.lang.Exception: Suite timeout exceeded (>= 720 msec).
 at __randomizedtesting.SeedInfo.seed([4E618C3911A2075B]:0)




 Build Log:
 [...truncated 6045 lines...]
 [junit4:junit4] Suite: 
 org.apache.lucene.analysis.uima.UIMATypeAwareAnalyzerTest
 [junit4:junit4]   2 ??? 14, 2013 4:21:14 AM 
 com.carrotsearch.randomizedtesting.ThreadLeakControl$2 evaluate
 [junit4:junit4]   2 WARNING: Suite execution timed out: 
 org.apache.lucene.analysis.uima.UIMATypeAwareAnalyzerTest
 [junit4:junit4]   2  jstack at approximately timeout time 
 [junit4:junit4]   2 Thread-9 ID=22 RUNNABLE
 [junit4:junit4]   2at java.util.HashMap.transfer(HashMap.java:584)
 [junit4:junit4]   2at java.util.HashMap.resize(HashMap.java:564)
 [junit4:junit4]   2at java.util.HashMap.addEntry(HashMap.java:851)
 [junit4:junit4]   2at java.util.HashMap.put(HashMap.java:484)
 [junit4:junit4]   2at java.util.HashSet.add(HashSet.java:217)
 [junit4:junit4]   2at 
 org.apache.uima.analysis_engine.impl.AnalysisEngineManagementImpl.setName(AnalysisEngineManagementImpl.java:245)
 [junit4:junit4]   2at 
 org.apache.uima.analysis_engine.impl.AnalysisEngineImplBase.initialize(AnalysisEngineImplBase.java:181)
 [junit4:junit4]   2at 
 org.apache.uima.analysis_engine.impl.AggregateAnalysisEngine_impl.initialize(AggregateAnalysisEngine_impl.java:127)
 [junit4:junit4]   2at 
 org.apache.uima.impl.AnalysisEngineFactory_impl.produceResource(AnalysisEngineFactory_impl.java:94)
 [junit4:junit4]   2at 
 org.apache.uima.impl.CompositeResourceFactory_impl.produceResource(CompositeResourceFactory_impl.java:62)
 [junit4:junit4]   2at 
 org.apache.uima.UIMAFramework.produceResource(UIMAFramework.java:267)
 [junit4:junit4]   2at 
 org.apache.uima.UIMAFramework.produceAnalysisEngine(UIMAFramework.java:335)
 [junit4:junit4]   2at 
 org.apache.lucene.analysis.uima.ae.BasicAEProvider.getAE(BasicAEProvider.java:73)
 [junit4:junit4]   2at 
 org.apache.lucene.analysis.uima.BaseUIMATokenizer.analyzeInput(BaseUIMATokenizer.java:63)
 [junit4:junit4]   2at 
 org.apache.lucene.analysis.uima.UIMATypeAwareAnnotationsTokenizer.initializeIterator(UIMATypeAwareAnnotationsTokenizer.java:72)
 [junit4:junit4]   2at 
 org.apache.lucene.analysis.uima.UIMATypeAwareAnnotationsTokenizer.incrementToken(UIMATypeAwareAnnotationsTokenizer.java:94)
 [junit4:junit4]   2at 
 org.apache.lucene.analysis.BaseTokenStreamTestCase.checkAnalysisConsistency(BaseTokenStreamTestCase.java:635)
 [junit4:junit4]   2at 
 org.apache.lucene.analysis.BaseTokenStreamTestCase.checkRandomData(BaseTokenStreamTestCase.java:546)
 [junit4:junit4]   2at 
 org.apache.lucene.analysis.BaseTokenStreamTestCase.access$000(BaseTokenStreamTestCase.java:57)
 [junit4:junit4]   2at 
 org.apache.lucene.analysis.BaseTokenStreamTestCase$AnalysisThread.run(BaseTokenStreamTestCase.java:418)
 [junit4:junit4]   2
 [junit4:junit4]   2 
 TEST-UIMATypeAwareAnalyzerTest.testRandomStrings-seed#[4E618C3911A2075B] 
 ID=19 WAITING on 
 org.apache.lucene.analysis.BaseTokenStreamTestCase$AnalysisThread@622dffb1
 [junit4:junit4]   2at java.lang.Object.wait(Native Method)
 [junit4:junit4]   2- waiting on 
 org.apache.lucene.analysis.BaseTokenStreamTestCase$AnalysisThread@622dffb1
 [junit4:junit4]   2at java.lang.Thread.join(Thread.java:1258)
 [junit4:junit4]   2at java.lang.Thread.join(Thread.java:1332)
 [junit4:junit4]   2at 
 org.apache.lucene.analysis.BaseTokenStreamTestCase.checkRandomData(BaseTokenStreamTestCase.java:460)
 [junit4:junit4]   2at 
 org.apache.lucene.analysis.BaseTokenStreamTestCase.checkRandomData(BaseTokenStreamTestCase.java:370)
 [junit4:junit4]   2at 
 org.apache.lucene.analysis.uima.UIMATypeAwareAnalyzerTest.testRandomStrings(UIMATypeAwareAnalyzerTest.java:65)
 [junit4:junit4]   2at 
 sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 [junit4:junit4]   2at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
 [junit4:junit4]   2at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 [junit4:junit4]   2at java.lang.reflect.Method.invoke(Method.java:601)
 [junit4:junit4]   2at 
 

Re: [JENKINS] Lucene-trunk-Linux-Java7-64-test-only - Build # 39475 - Failure!

2013-05-13 Thread Adrien Grand
This was an actual bug: Sorter.rotate could be called even when one of
the two adjacent slices to rotate is empty, which is bad since it
potentially performs lots of copies/swaps while there is nothing to
do.

I committed a fix.
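
The shape of the guard is roughly the following (a sketch of the idea, not the
committed diff):

  class MergeGuardSketch {
    void mergeInPlace(int lo, int mid, int hi) {
      if (mid == lo || mid == hi) {
        return; // one of the slices [lo,mid) / [mid,hi) is empty: nothing to merge
      }
      // ...otherwise fall through to the usual rotate/merge steps
    }
  }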

--
Adrien

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: [JENKINS] Lucene-trunk-Linux-Java7-64-test-only - Build # 39475 - Failure!

2013-05-13 Thread Robert Muir
Thanks Adrien!

On Mon, May 13, 2013 at 5:34 PM, Adrien Grand jpou...@gmail.com wrote:
 This was an actual bug: Sorter.rotate could be called even when one of
 the two adjacent slices to rotate is empty, which is bad since it
 potentially performs lots of copies/swaps while there is nothing to
 do.

 I committed a fix.

 --
 Adrien

 -
 To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
 For additional commands, e-mail: dev-h...@lucene.apache.org


-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-2649) MM ignored in edismax queries with operators

2013-05-13 Thread Shawn Heisey (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-2649?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13656449#comment-13656449
 ] 

Shawn Heisey commented on SOLR-2649:


[~janhoy] I like your suggestion.  I would want to be sure that if I specify mm 
(either in the request handler defaults or in my query params), it will ignore 
q.op and use the value specified.  As for coding, I think I'll be useless in 
this area, though I'm interested in taking a look if anyone can point me at 
specific class names.


 MM ignored in edismax queries with operators
 

 Key: SOLR-2649
 URL: https://issues.apache.org/jira/browse/SOLR-2649
 Project: Solr
  Issue Type: Bug
  Components: query parsers
Reporter: Magnus Bergmark
Priority: Minor
 Fix For: 4.4


 Hypothetical scenario:
   1. User searches for stocks oil gold with MM set to 50%
   2. User adds -stockings to the query: stocks oil gold -stockings
   3. User gets no hits since MM was ignored and all terms were AND-ed 
 together
 The behavior seems to be intentional, although the reason why is never 
 explained:
   // For correct lucene queries, turn off mm processing if there
   // were explicit operators (except for AND).
   boolean doMinMatched = (numOR + numNOT + numPluses + numMinuses) == 0; 
 (lines 232-234 taken from 
 tags/lucene_solr_3_3/solr/src/java/org/apache/solr/search/ExtendedDismaxQParserPlugin.java)
 This makes edismax unsuitable as a replacement for dismax; mm is one of the 
 primary features of dismax.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-4819) Pimp QueryEqualityTest to use random testing

2013-05-13 Thread Greg Bowyer (JIRA)
Greg Bowyer created SOLR-4819:
-

 Summary: Pimp QueryEqualityTest to use random testing
 Key: SOLR-4819
 URL: https://issues.apache.org/jira/browse/SOLR-4819
 Project: Solr
  Issue Type: Improvement
Reporter: Greg Bowyer
Priority: Minor


The current QueryEqualityTest does some (important but) basic tests of query 
parsing to ensure that queries that are produced are equivalent to each other.

Since we do random testing, it might be a good idea to generate random queries 
rather than pre-canned ones

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-4819) Pimp QueryEqualityTest to use random testing

2013-05-13 Thread Hoss Man (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4819?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13656465#comment-13656465
 ] 

Hoss Man commented on SOLR-4819:


bq. Since we do random testing, it might be a good idea to generate random 
queries rather than pre-canned ones

part of the reason there's no randomization in these tests yet is because of 
the difficulty in ensuring that the randomly generated input is in fact valid 
for an arbitrary qparser.

the itch i was scratching when i wrote that class was that there were some 
really blatant basic bugs with the equals methods in some of the queries 
produced by the existing parsers causing really bad cache performance, so i 
wanted something that would ensure no parser would be included out of the box 
w/o at least some basic sanity checking that obviously equivalent queries were 
equivalent.

I think having specific test classes for specific types of queries (or for 
specific qparsers) would be the best place for more randomized testing ... this 
class is really just the front line defense against something really terrible.




 Pimp QueryEqualityTest to use random testing
 

 Key: SOLR-4819
 URL: https://issues.apache.org/jira/browse/SOLR-4819
 Project: Solr
  Issue Type: Improvement
Reporter: Greg Bowyer
Priority: Minor

 The current QueryEqualityTest does some (important but) basic tests of query 
 parsing to ensure that queries that are produced are equivalent to each other.
 Since we do random testing, it might be a good idea to generate random 
 queries rather than pre-canned ones

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-3038) Solrj should use javabin wireformat by default with updaterequests

2013-05-13 Thread Shawn Heisey (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-3038?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shawn Heisey updated SOLR-3038:
---

Attachment: SOLR-3038-abstract-writer.patch

Previous patch failed precommit - missing eol-style.  New patch, also updated 
to latest trunk revision.

 Solrj should use javabin wireformat by default with updaterequests
 --

 Key: SOLR-3038
 URL: https://issues.apache.org/jira/browse/SOLR-3038
 Project: Solr
  Issue Type: Improvement
  Components: clients - java
Affects Versions: 4.0-ALPHA
Reporter: Sami Siren
Priority: Minor
 Attachments: SOLR-3038-abstract-writer.patch, 
 SOLR-3038-abstract-writer.patch, SOLR-3038-abstract-writer.patch, 
 SOLR-3038-abstract-writer.patch, SOLR-3038-abstract-writer.patch


 The javabin wire format is faster than xml when feeding Solr - it should 
 become the default. 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-4819) Pimp QueryEqualityTest to use random testing

2013-05-13 Thread Greg Bowyer (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4819?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13656481#comment-13656481
 ] 

Greg Bowyer commented on SOLR-4819:
---

bq. I think having specific test classes for specific types of queries (or for 
specific qparsers) would be the best place for more randomized testing ... this 
class is really just the front line defense against something really terrible.

That makes sense, I guess I didn't quite understand the purpose of this class

 Pimp QueryEqualityTest to use random testing
 

 Key: SOLR-4819
 URL: https://issues.apache.org/jira/browse/SOLR-4819
 Project: Solr
  Issue Type: Improvement
Reporter: Greg Bowyer
Priority: Minor

 The current QueryEqualityTest does some (important but) basic tests of query 
 parsing to ensure that queries that are produced are equivalent to each other.
 Since we do random testing, it might be a good idea to generate random 
 queries rather than pre-canned ones

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-3038) Solrj should use javabin wireformat by default with updaterequests

2013-05-13 Thread Shawn Heisey (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-3038?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13656517#comment-13656517
 ] 

Shawn Heisey commented on SOLR-3038:


bq. One approach would be to detect the server version and fall back to XML if 
its javabin is known to be incompatible.

[~sokolov] I have been thinking about this since you suggested it.  The idea 
itself is very good, I just think there are too many things that could go wrong.

At this moment, I think that detection really belongs in the user app code.  
The particular way that you have found for detecting 3.x (looking for the 
old-style admin) would fail if the user has taken steps to block access to the 
admin interface - it would think it's dealing with a 4.x server, which wouldn't 
work.

Even if we found a completely reliable way of detecting whether XML is required 
(that couldn't be blocked accidentally or intentionally) it would still involve 
making a request to the server that the app developer did not explicitly put in 
their code.  Also, creating the server object might be conditional on the 
server being up at that moment, a requirement that does not exist today.

I'm the new kid on the Solr committer block.  The veterans may feel differently 
about your idea.  I'm willing to try coding the detection idea; look for an 
alternate patch soon.
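
For reference, this is roughly what the explicit per-application choice looks like with the existing SolrJ API (the server URL is hypothetical; the binary writer/parser classes are the ones discussed in this issue):

{code:java}
import org.apache.solr.client.solrj.impl.BinaryRequestWriter;
import org.apache.solr.client.solrj.impl.BinaryResponseParser;
import org.apache.solr.client.solrj.impl.HttpSolrServer;
import org.apache.solr.client.solrj.impl.XMLResponseParser;
import org.apache.solr.client.solrj.request.RequestWriter;

public class TransportChoice {
  public static void main(String[] args) {
    HttpSolrServer server = new HttpSolrServer("http://localhost:8983/solr/collection1");

    // Talking to a 4.x server: javabin in both directions.
    server.setRequestWriter(new BinaryRequestWriter());
    server.setParser(new BinaryResponseParser());

    // Talking to an older (3.x) server: fall back to XML explicitly.
    // server.setRequestWriter(new RequestWriter());   // XML request writer
    // server.setParser(new XMLResponseParser());
  }
}
{code}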


 Solrj should use javabin wireformat by default with updaterequests
 --

 Key: SOLR-3038
 URL: https://issues.apache.org/jira/browse/SOLR-3038
 Project: Solr
  Issue Type: Improvement
  Components: clients - java
Affects Versions: 4.0-ALPHA
Reporter: Sami Siren
Priority: Minor
 Attachments: SOLR-3038-abstract-writer.patch, 
 SOLR-3038-abstract-writer.patch, SOLR-3038-abstract-writer.patch, 
 SOLR-3038-abstract-writer.patch, SOLR-3038-abstract-writer.patch


 The javabin wire format is faster than xml when feeding Solr - it should 
 become the default. 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-3038) Solrj should use javabin wireformat by default with updaterequests

2013-05-13 Thread Shawn Heisey (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-3038?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13656531#comment-13656531
 ] 

Shawn Heisey commented on SOLR-3038:


Other thoughts:

Autodetection should be discussed and handled in a new issue.  I'm about to start 
my commute; if nobody else creates the issue by the time I get home, I will go 
ahead and do it.

Perhaps it could be done explicitly by the app developer by calling a method 
named something like autoDetectTransport.  The javadoc for the method should 
say that it is not 100% reliable and is highly dependent on server settings.  This 
method would attempt to detect the server's requirements so it can set the 
request writer and parser accordingly.


 Solrj should use javabin wireformat by default with updaterequests
 --

 Key: SOLR-3038
 URL: https://issues.apache.org/jira/browse/SOLR-3038
 Project: Solr
  Issue Type: Improvement
  Components: clients - java
Affects Versions: 4.0-ALPHA
Reporter: Sami Siren
Priority: Minor
 Attachments: SOLR-3038-abstract-writer.patch, 
 SOLR-3038-abstract-writer.patch, SOLR-3038-abstract-writer.patch, 
 SOLR-3038-abstract-writer.patch, SOLR-3038-abstract-writer.patch


 The javabin wire format is faster than xml when feeding Solr - it should 
 become the default. 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-4901) TestIndexWriterOnJRECrash should work on any JRE vendor via Runtime.halt()

2013-05-13 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-4901?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13656561#comment-13656561
 ] 

Michael McCandless commented on LUCENE-4901:


Is Runtime.halt too nice?  I.e., is there any chance it will do any cleanup at 
all?  (The javadocs seem to indicate no, but still.)

I like the crashing version because it's most accurate to what we are trying 
to test here ... i.e., rather than asking the JVM to shoot itself, we do the 
shooting.
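
For reference, a minimal sketch of the difference: Runtime.halt() forcibly terminates the JVM without running shutdown hooks, while System.exit() runs them first (per the JDK javadocs):

{code:java}
public class HaltVsExit {
  public static void main(String[] args) {
    Runtime.getRuntime().addShutdownHook(new Thread() {
      @Override
      public void run() {
        // Printed for System.exit(1), but NOT for Runtime.getRuntime().halt(1).
        System.out.println("shutdown hook ran");
      }
    });

    // halt() skips shutdown hooks and finalizers entirely; it is still a
    // cooperative JVM exit, not a hard kill of the process from outside.
    Runtime.getRuntime().halt(1);
  }
}
{code}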

 TestIndexWriterOnJRECrash should work on any JRE vendor via Runtime.halt()
 --

 Key: LUCENE-4901
 URL: https://issues.apache.org/jira/browse/LUCENE-4901
 Project: Lucene - Core
  Issue Type: Improvement
  Components: general/test
 Environment: Red Hat EL 6.3
 IBM Java 1.6.0
 ANT 1.9.0
Reporter: Rodrigo Trujillo
Assignee: Dawid Weiss
Priority: Minor
 Fix For: 5.0, 4.4

 Attachments: LUCENE-4901.patch, test-IBM-java-vendor.patch


 I successfully compiled Lucene 4.2 with IBM Java.
 Then I ran the unit tests with the nightly option set to true.
 The test case TestIndexWriterOnJRECrash was skipped, returning "IBM 
 Corporation JRE not supported":
 [junit4:junit4] Suite: org.apache.lucene.index.TestIndexWriterOnJRECrash
 [junit4:junit4] IGNOR/A 0.28s | TestIndexWriterOnJRECrash.testNRTThreads
 [junit4:junit4] Assumption #1: IBM Corporation JRE not supported.
 [junit4:junit4] Completed in 0.68s, 1 test, 1 skipped

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-4813) Unavoidable IllegalArgumentException occurs when SynonymFilterFactory's setting has tokenizer factory's parameter.

2013-05-13 Thread Hoss Man (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-4813?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hoss Man updated SOLR-4813:
---

Fix Version/s: 4.3.1
   4.4
   5.0
 Assignee: Hoss Man

Thanks for the bug report, and especially for the patch with test case!

your patch looks correct to me at first glance, but i'll review more closely 
and try to get in for 4.3.1

 Unavoidable IllegalArgumentException occurs when SynonymFilterFactory's 
 setting has tokenizer factory's parameter.
 --

 Key: SOLR-4813
 URL: https://issues.apache.org/jira/browse/SOLR-4813
 Project: Solr
  Issue Type: Bug
  Components: Schema and Analysis
Affects Versions: 4.3
Reporter: Shingo Sasaki
Assignee: Hoss Man
Priority: Critical
  Labels: SynonymFilterFactory
 Fix For: 5.0, 4.4, 4.3.1

 Attachments: SOLR-4813.patch


 When I write the SynonymFilterFactory's setting in schema.xml as follows, ...
 {code:xml}
 <analyzer>
   <tokenizer class="solr.NGramTokenizerFactory" maxGramSize="2" minGramSize="2"/>
   <filter class="solr.SynonymFilterFactory" synonyms="synonyms.txt"
           ignoreCase="true" expand="true"
           tokenizerFactory="solr.NGramTokenizerFactory" maxGramSize="2" minGramSize="2"/>
 </analyzer>
 {code}
 an IllegalArgumentException ("Unknown parameters") occurs.
 {noformat}
 Caused by: java.lang.IllegalArgumentException: Unknown parameters: 
 {maxGramSize=2, minGramSize=2}
   at 
 org.apache.lucene.analysis.synonym.FSTSynonymFilterFactory.<init>(FSTSynonymFilterFactory.java:71)
   at 
 org.apache.lucene.analysis.synonym.SynonymFilterFactory.<init>(SynonymFilterFactory.java:50)
   ... 28 more
 {noformat}
 However, the TokenizerFactory's params should be passed to the loadTokenizerFactory 
 method in [FST|Slow]SynonymFilterFactory. (ref. SOLR-2909)
 I think the problem was caused by LUCENE-4877 (Fix analyzer factories to 
 throw exception when arguments are invalid) and SOLR-3402 (Parse Version 
 outside of Analysis Factories).

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-3038) Solrj should use javabin wireformat by default with updaterequests

2013-05-13 Thread Mike Sokolov (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-3038?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13656570#comment-13656570
 ] 

Mike Sokolov commented on SOLR-3038:


Isn't that just a cop-out?  If we are serious about negotiating protocols, then 
some form of version identification needs to be supported as a first-class 
service.  Maybe the ping service could return a server version id?

 Solrj should use javabin wireformat by default with updaterequests
 --

 Key: SOLR-3038
 URL: https://issues.apache.org/jira/browse/SOLR-3038
 Project: Solr
  Issue Type: Improvement
  Components: clients - java
Affects Versions: 4.0-ALPHA
Reporter: Sami Siren
Priority: Minor
 Attachments: SOLR-3038-abstract-writer.patch, 
 SOLR-3038-abstract-writer.patch, SOLR-3038-abstract-writer.patch, 
 SOLR-3038-abstract-writer.patch, SOLR-3038-abstract-writer.patch


 The javabin wire format is faster than xml when feeding Solr - it should 
 become the default. 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-4816) Add method to CloudSolrServer to send updates to the correct shard

2013-05-13 Thread Joel Bernstein (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4816?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13656607#comment-13656607
 ] 

Joel Bernstein commented on SOLR-4816:
--

Ok, I'll integrate the directUpdate logic into the main request flow.
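
For context, this is roughly how an update flows through CloudSolrServer today (the zkHost and collection name are hypothetical); the directUpdate routing proposed here would happen inside this client path rather than on the receiving Solr node:

{code:java}
import org.apache.solr.client.solrj.impl.CloudSolrServer;
import org.apache.solr.common.SolrInputDocument;

public class CloudUpdateSketch {
  public static void main(String[] args) throws Exception {
    // Connects via ZooKeeper; today the node that receives the update
    // forwards it to the correct shard leader (server-side routing).
    CloudSolrServer server = new CloudSolrServer("zkhost1:2181,zkhost2:2181");
    server.setDefaultCollection("collection1");

    SolrInputDocument doc = new SolrInputDocument();
    doc.addField("id", "doc-1");
    doc.addField("title", "hello");

    server.add(doc);
    server.commit();
    server.shutdown();
  }
}
{code}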

 Add method to CloudSolrServer to send updates to the correct shard
 --

 Key: SOLR-4816
 URL: https://issues.apache.org/jira/browse/SOLR-4816
 Project: Solr
  Issue Type: Improvement
  Components: SolrCloud
Affects Versions: 4.3
Reporter: Joel Bernstein
Priority: Minor
 Attachments: SOLR-4816.patch


 This issue adds a directUpdate method to CloudSolrServer which routes 
 update requests to the correct shard. This would be a nice feature to have to 
 eliminate the document routing overhead on the Solr servers.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-4813) Unavoidable IllegalArgumentException occurs when SynonymFilterFactory's setting has tokenizer factory's parameter.

2013-05-13 Thread Hoss Man (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-4813?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hoss Man updated SOLR-4813:
---

Attachment: SOLR-4813.patch

Shingo: a flaw in your test was that inform was never called on the 
SynonymFilterFactory you were manually constructing, so it never attempted to 
instantiate your TestTokenizerFactory, so all of the checks you had in its 
constructor were being ignored.

I simplified the test to use tokenFilterFactory() to try and deal with this, 
but that exposed another problem: since your TestTokenizerFactory class isn't 
registered with SPI, lookupClass() fails on it.

To simplify all of this, I changed the test to use a real tokenizer factory with 
a mandatory init arg (PatternTokenizerFactory), so we can check both the positive 
and negative cases -- this includes an explicit check that specifying params 
which neither the SynonymFilterFactory nor the tokenizerFactory are expecting 
causes an error.

I also tweaked your javadocs to try and clarify that the param prefix can be 
used even if the param names don't conflict, and re-ordered the param parsing 
code to group all of the tokenizerFactory stuff together.

still running full tests.



 Unavoidable IllegalArgumentException occurs when SynonymFilterFactory's 
 setting has tokenizer factory's parameter.
 --

 Key: SOLR-4813
 URL: https://issues.apache.org/jira/browse/SOLR-4813
 Project: Solr
  Issue Type: Bug
  Components: Schema and Analysis
Affects Versions: 4.3
Reporter: Shingo Sasaki
Assignee: Hoss Man
Priority: Critical
  Labels: SynonymFilterFactory
 Fix For: 5.0, 4.4, 4.3.1

 Attachments: SOLR-4813.patch, SOLR-4813.patch


 When I write the SynonymFilterFactory's setting in schema.xml as follows, ...
 {code:xml}
 <analyzer>
   <tokenizer class="solr.NGramTokenizerFactory" maxGramSize="2" minGramSize="2"/>
   <filter class="solr.SynonymFilterFactory" synonyms="synonyms.txt"
           ignoreCase="true" expand="true"
           tokenizerFactory="solr.NGramTokenizerFactory" maxGramSize="2" minGramSize="2"/>
 </analyzer>
 {code}
 an IllegalArgumentException ("Unknown parameters") occurs.
 {noformat}
 Caused by: java.lang.IllegalArgumentException: Unknown parameters: 
 {maxGramSize=2, minGramSize=2}
   at 
 org.apache.lucene.analysis.synonym.FSTSynonymFilterFactory.<init>(FSTSynonymFilterFactory.java:71)
   at 
 org.apache.lucene.analysis.synonym.SynonymFilterFactory.<init>(SynonymFilterFactory.java:50)
   ... 28 more
 {noformat}
 However, the TokenizerFactory's params should be passed to the loadTokenizerFactory 
 method in [FST|Slow]SynonymFilterFactory. (ref. SOLR-2909)
 I think the problem was caused by LUCENE-4877 (Fix analyzer factories to 
 throw exception when arguments are invalid) and SOLR-3402 (Parse Version 
 outside of Analysis Factories).

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-4813) Unavoidable IllegalArgumentException occurs when SynonymFilterFactory's setting has tokenizer factory's parameter.

2013-05-13 Thread Shingo Sasaki (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4813?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13656650#comment-13656650
 ] 

Shingo Sasaki commented on SOLR-4813:
-

Jack: Thanks for your check and advice.

Hoss: Thank you for compensating for the shortcomings of my patch.

By the way, the patch applies to SynonymFilterFactory on trunk, but 
SynonymFilterFactory on branch_4x is a delegatee of 
[FST|Slow]SynonymFilterFactory.

Is a separate branch_4x patch required for committing to 4.3.1?

 Unavoidable IllegalArgumentException occurs when SynonymFilterFactory's 
 setting has tokenizer factory's parameter.
 --

 Key: SOLR-4813
 URL: https://issues.apache.org/jira/browse/SOLR-4813
 Project: Solr
  Issue Type: Bug
  Components: Schema and Analysis
Affects Versions: 4.3
Reporter: Shingo Sasaki
Assignee: Hoss Man
Priority: Critical
  Labels: SynonymFilterFactory
 Fix For: 5.0, 4.4, 4.3.1

 Attachments: SOLR-4813.patch, SOLR-4813.patch


 When I write the SynonymFilterFactory's setting in schema.xml as follows, ...
 {code:xml}
 <analyzer>
   <tokenizer class="solr.NGramTokenizerFactory" maxGramSize="2" minGramSize="2"/>
   <filter class="solr.SynonymFilterFactory" synonyms="synonyms.txt"
           ignoreCase="true" expand="true"
           tokenizerFactory="solr.NGramTokenizerFactory" maxGramSize="2" minGramSize="2"/>
 </analyzer>
 {code}
 an IllegalArgumentException ("Unknown parameters") occurs.
 {noformat}
 Caused by: java.lang.IllegalArgumentException: Unknown parameters: 
 {maxGramSize=2, minGramSize=2}
   at 
 org.apache.lucene.analysis.synonym.FSTSynonymFilterFactory.<init>(FSTSynonymFilterFactory.java:71)
   at 
 org.apache.lucene.analysis.synonym.SynonymFilterFactory.<init>(SynonymFilterFactory.java:50)
   ... 28 more
 {noformat}
 However, the TokenizerFactory's params should be passed to the loadTokenizerFactory 
 method in [FST|Slow]SynonymFilterFactory. (ref. SOLR-2909)
 I think the problem was caused by LUCENE-4877 (Fix analyzer factories to 
 throw exception when arguments are invalid) and SOLR-3402 (Parse Version 
 outside of Analysis Factories).

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-4820) SolrJ - autodetect xml/javabin transport requirements

2013-05-13 Thread Shawn Heisey (JIRA)
Shawn Heisey created SOLR-4820:
--

 Summary: SolrJ - autodetect xml/javabin transport requirements
 Key: SOLR-4820
 URL: https://issues.apache.org/jira/browse/SOLR-4820
 Project: Solr
  Issue Type: New Feature
  Components: clients - java
Affects Versions: 4.3
Reporter: Shawn Heisey
Assignee: Shawn Heisey
Priority: Minor
 Fix For: 5.0, 4.4


The idea here is to support automatically detecting whether a target Solr 
server will work with the binary request writer and/or response parser, then 
use that information to pick the most efficient combination.  See discussion on 
SOLR-3038.

This issue concerns itself with 4.x clients, which as of 4.3.1, send XML 
requests and ask for a binary response.  SOLR-3038 aims to change the default 
for requests to binary.  That change would break default compatibility with 3.x 
servers, requiring an explicit change to the XML request writer.

This issue is designed to fill the gap if/when the default request writer is 
changed, to allow the server object to detect when it needs to change request 
writers and response parsers.

I see four possible approaches:

1) Run detection when the object is created.  IMHO, this is a bad idea.
2) Require an explicit call to an autodetect method.
3) Run the detection mechanism the first time a request is processed.  If 
adjustment is deemed necessary, adjust the transports and log a warning, and 
possibly even include that warning in the response object.
4) Don't actually autodetect.  The FIRST time a request fails, try downgrading 
the transport.  If the request writer isn't already XML, change it, log a 
warning, and try the request again.  Repeat for the response parser.  If the 
change works, keep going with the new settings.  If not, undo the changes and 
throw the usual exception, including a note saying that downgrading to XML was 
attempted.
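
A rough sketch of what approach 4 could look like on the client side; the helper and its behavior are hypothetical, not an existing SolrJ API:

{code:java}
import org.apache.solr.client.solrj.SolrServerException;
import org.apache.solr.client.solrj.impl.HttpSolrServer;
import org.apache.solr.client.solrj.impl.XMLResponseParser;
import org.apache.solr.client.solrj.request.RequestWriter;
import org.apache.solr.common.SolrInputDocument;

public class DowngradingUpdateSketch {
  // Hypothetical helper: try the current transport, downgrade to XML on failure.
  static void addWithFallback(HttpSolrServer server, SolrInputDocument doc)
      throws Exception {
    try {
      server.add(doc);
    } catch (SolrServerException e) {
      // First failure: switch both transports to XML, log, and retry once.
      System.err.println("update failed, retrying with XML transport: " + e);
      server.setRequestWriter(new RequestWriter());   // XML request writer
      server.setParser(new XMLResponseParser());
      server.add(doc);   // if this also fails, the exception propagates as usual
    }
  }
}
{code}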


--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-4820) SolrJ - autodetect xml/javabin transport requirements

2013-05-13 Thread Shawn Heisey (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4820?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13656656#comment-13656656
 ] 

Shawn Heisey commented on SOLR-4820:


I haven't even attempted to discuss JSON.  Is that something that should 
concern me here?

 SolrJ - autodetect xml/javabin transport requirements
 -

 Key: SOLR-4820
 URL: https://issues.apache.org/jira/browse/SOLR-4820
 Project: Solr
  Issue Type: New Feature
  Components: clients - java
Affects Versions: 4.3
Reporter: Shawn Heisey
Assignee: Shawn Heisey
Priority: Minor
 Fix For: 5.0, 4.4


 The idea here is to support automatically detecting whether a target Solr 
 server will work with the binary request writer and/or response parser, then 
 use that information to pick the most efficient combination.  See discussion 
 on SOLR-3038.
 This issue concerns itself with 4.x clients, which as of 4.3.1, send XML 
 requests and ask for a binary response.  SOLR-3038 aims to change the default 
 for requests to binary.  That change would break default compatibility with 
 3.x servers, requiring an explicit change to the XML request writer.
 This issue is designed to fill the gap if/when the default request writer is 
 changed, to allow the server object to detect when it needs to change request 
 writers and response parsers.
 I see four possible approaches:
 1) Run detection when the object is created.  IMHO, this is a bad idea.
 2) Require an explicit call to an autodetect method.
 3) Run the detection mechanism the first time a request is processed.  If 
 adjustment is deemed necessary, adjust the transports and log a warning, and 
 possibly even include that warning in the response object.
 4) Don't actually autodetect.  The FIRST time a request fails, try 
 downgrading the transport.  If the request writer isn't already XML, change 
 it, log a warning, and try the request again.  Repeat for the response 
 parser.  If the change works, keep going with the new settings.  If not, undo 
 the changes and throw the usual exception, including a note saying that 
 downgrading to XML was attempted.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (LUCENE-4999) Lucene test (testCambridgeMA) fails when JVM 64-bit does not use memory compression

2013-05-13 Thread Rodrigo Trujillo (JIRA)
Rodrigo Trujillo created LUCENE-4999:


 Summary: Lucene test (testCambridgeMA) fails when JVM 64-bit does 
not use memory compression
 Key: LUCENE-4999
 URL: https://issues.apache.org/jira/browse/LUCENE-4999
 Project: Lucene - Core
  Issue Type: Bug
Affects Versions: 4.2.1, 4.3
 Environment: Red Hat 6.3
IBM Java 6 - SR13
Ant 1.9.0
Reporter: Rodrigo Trujillo
Priority: Critical


When I run the Lucene (4.2.1/4.3) test suite with IBM Java I get the following 
error:

[junit4:junit4] Suite: 
org.apache.lucene.search.postingshighlight.TestPostingsHighlighter
[junit4:junit4]   2 NOTE: reproduce with: ant test  
-Dtestcase=TestPostingsHighlighter -Dtests.method=testCambridgeMA 
-Dtests.seed=571E16AEAF72C9F9 -Dtests.s
low=true -Dtests.locale=mt_MT -Dtests.timezone=Pacific/Kiritimati 
-Dtests.file.encoding=UTF-8
[junit4:junit4] ERROR   0.71s J2 | TestPostingsHighlighter.testCambridgeMA 
[junit4:junit4] Throwable #1: java.lang.ArrayIndexOutOfBoundsException: 
Array index out of range: 37
[junit4:junit4]at 
__randomizedtesting.SeedInfo.seed([571E16AEAF72C9F9:D60B7505C1DC91F8]:0)
[junit4:junit4]at 
org.apache.lucene.search.postingshighlight.Passage.addMatch(Passage.java:53)
[junit4:junit4]at 
org.apache.lucene.search.postingshighlight.PostingsHighlighter.highlightDoc(PostingsHighlighter.java:547)
[junit4:junit4]at 
org.apache.lucene.search.postingshighlight.PostingsHighlighter.highlightField(PostingsHighlighter.java:425)
[junit4:junit4]at 
org.apache.lucene.search.postingshighlight.PostingsHighlighter.highlightFields(PostingsHighlighter.java:364)
[junit4:junit4]at 
org.apache.lucene.search.postingshighlight.PostingsHighlighter.highlightFields(PostingsHighlighter.java:268)
[junit4:junit4]at 
org.apache.lucene.search.postingshighlight.PostingsHighlighter.highlight(PostingsHighlighter.java:198)
[junit4:junit4]at 
org.apache.lucene.search.postingshighlight.TestPostingsHighlighter.testCambridgeMA(TestPostingsHighlighter.java:373)
[junit4:junit4]at java.lang.Thread.run(Thread.java:738)
[junit4:junit4]   2 NOTE: test params are: 
codec=FastDecompressionCompressingStoredFields(storedFieldsFormat=CompressingStoredFieldsFormat(compressionMode=FA
ST_DECOMPRESSION, chunkSize=386), 
termVectorsFormat=CompressingTermVectorsFormat(compressionMode=FAST_DECOMPRESSION,
 chunkSize=386)), sim=RandomSimilarityProv
ider(queryNorm=false,coord=yes): {body=DFR I(n)Z(0.3), title=DFR I(F)Z(0.3), 
id=DFR I(n)2}, locale=mt_MT, timezone=Pacific/Kiritimati
[junit4:junit4]   2 NOTE: Linux 2.6.32-279.el6.x86_64 amd64/IBM Corporation 
1.6.0 (64-bit)/cpus=4,threads=1,free=10783032,total=24030208
[junit4:junit4]   2 NOTE: All tests run in this JVM: [FieldQueryTest, 
FieldPhraseListTest, SimpleFragListBuilderTest, FieldTermStackTest, 
OffsetLimitTokenFil
terTest, TokenSourcesTest, TestPostingsHighlighter]
[junit4:junit4] Completed on J2 in 2.46s, 23 tests, 1 error  FAILURES!


This error is not seen with Oracle Java.
A Google search showed that this error has already occurred in community builds 
and the solution proposed was to disable IBM Java in the community tests.

I took a look at the code and found that the root of the problem is the 
assignment of the variable referenceSize in RamUsageEstimator.java:

// get object reference size by getting scale factor of Object[] arrays:
try {
  final Method arrayIndexScaleM = unsafeClass.getMethod("arrayIndexScale", Class.class);
  referenceSize = ((Number) arrayIndexScaleM.invoke(theUnsafe, Object[].class)).intValue();
  supportedFeatures.add(JvmFeature.OBJECT_REFERENCE_SIZE);
} catch (Exception e) {
  // ignore.
}


The Java object reference size for arrays is 8 bytes on 64-bit machines 
(Oracle or IBM) and can be reduced to 4 bytes (as on 32-bit JVMs) using 
Compressed References and Compressed Ordinary Object Pointers (OOPs).

This option seems to be enabled by default in Oracle Java when the heap size 
is under 32GB, but it is not in IBM Java.

As a workaround, when testing with the IBM JVM I can pass the options 
-Xcompressedrefs or -XX:+UseCompressedOops to JUnit.

Similarly, you can reproduce the error if you pass the option 
-XX:-UseCompressedOops when testing with Oracle Java.


The bug is in the oversize method of ArrayUtil.java. It does nothing when the 
object reference size (bytesPerElement) is 8.
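
For reference, a small standalone probe (adapted from the RamUsageEstimator snippet above) that prints the array reference scale; 4 means compressed references/OOPs are in effect, 8 means they are not:

{code:java}
import java.lang.reflect.Field;
import java.lang.reflect.Method;

public class ReferenceSizeProbe {
  public static void main(String[] args) throws Exception {
    Class<?> unsafeClass = Class.forName("sun.misc.Unsafe");
    Field theUnsafeField = unsafeClass.getDeclaredField("theUnsafe");
    theUnsafeField.setAccessible(true);
    Object theUnsafe = theUnsafeField.get(null);

    // Same call RamUsageEstimator uses: the scale factor of Object[] elements
    // is the size of an object reference on this JVM.
    Method arrayIndexScaleM = unsafeClass.getMethod("arrayIndexScale", Class.class);
    int referenceSize = ((Number) arrayIndexScaleM.invoke(theUnsafe, Object[].class)).intValue();

    System.out.println("object reference size: " + referenceSize + " bytes");
  }
}
{code}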



--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-4999) Lucene test (testCambridgeMA) fails when JVM 64-bit does not use memory compression

2013-05-13 Thread Rodrigo Trujillo (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-4999?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rodrigo Trujillo updated LUCENE-4999:
-

Description: 
When I run the Lucene (4.2.1/4.3) test suite with IBM Java I get the following 
error:

[junit4:junit4] Suite: 
org.apache.lucene.search.postingshighlight.TestPostingsHighlighter
[junit4:junit4]   2 NOTE: reproduce with: ant test  
-Dtestcase=TestPostingsHighlighter -Dtests.method=testCambridgeMA 
-Dtests.seed=571E16AEAF72C9F9 -Dtests.s
low=true -Dtests.locale=mt_MT -Dtests.timezone=Pacific/Kiritimati 
-Dtests.file.encoding=UTF-8
[junit4:junit4] ERROR   0.71s J2 | TestPostingsHighlighter.testCambridgeMA 
[junit4:junit4] Throwable #1: java.lang.ArrayIndexOutOfBoundsException: 
Array index out of range: 37
[junit4:junit4]at 
__randomizedtesting.SeedInfo.seed([571E16AEAF72C9F9:D60B7505C1DC91F8]:0)
[junit4:junit4]at 
org.apache.lucene.search.postingshighlight.Passage.addMatch(Passage.java:53)
[junit4:junit4]at 
org.apache.lucene.search.postingshighlight.PostingsHighlighter.highlightDoc(PostingsHighlighter.java:547)
[junit4:junit4]at 
org.apache.lucene.search.postingshighlight.PostingsHighlighter.highlightField(PostingsHighlighter.java:425)
[junit4:junit4]at 
org.apache.lucene.search.postingshighlight.PostingsHighlighter.highlightFields(PostingsHighlighter.java:364)
[junit4:junit4]at 
org.apache.lucene.search.postingshighlight.PostingsHighlighter.highlightFields(PostingsHighlighter.java:268)
[junit4:junit4]at 
org.apache.lucene.search.postingshighlight.PostingsHighlighter.highlight(PostingsHighlighter.java:198)
[junit4:junit4]at 
org.apache.lucene.search.postingshighlight.TestPostingsHighlighter.testCambridgeMA(TestPostingsHighlighter.java:373)
[junit4:junit4]at java.lang.Thread.run(Thread.java:738)
[junit4:junit4]   2 NOTE: test params are: 
codec=FastDecompressionCompressingStoredFields(storedFieldsFormat=CompressingStoredFieldsFormat(compressionMode=FA
ST_DECOMPRESSION, chunkSize=386), 
termVectorsFormat=CompressingTermVectorsFormat(compressionMode=FAST_DECOMPRESSION,
 chunkSize=386)), sim=RandomSimilarityProv
ider(queryNorm=false,coord=yes): {body=DFR I(n)Z(0.3), title=DFR I(F)Z(0.3), 
id=DFR I(n)2}, locale=mt_MT, timezone=Pacific/Kiritimati
[junit4:junit4]   2 NOTE: Linux 2.6.32-279.el6.x86_64 amd64/IBM Corporation 
1.6.0 (64-bit)/cpus=4,threads=1,free=10783032,total=24030208
[junit4:junit4]   2 NOTE: All tests run in this JVM: [FieldQueryTest, 
FieldPhraseListTest, SimpleFragListBuilderTest, FieldTermStackTest, 
OffsetLimitTokenFil
terTest, TokenSourcesTest, TestPostingsHighlighter]
[junit4:junit4] Completed on J2 in 2.46s, 23 tests, 1 error  FAILURES!


This error is not seen with Oracle Java.
A Google search showed that this error has already occurred in community builds 
and the solution proposed was to disable IBM Java in the community tests.

I took a look at the code and found that the root of the problem is the 
assignment of the variable referenceSize in RamUsageEstimator.java:

// get object reference size by getting scale factor of Object[] arrays:
try {
  final Method arrayIndexScaleM = unsafeClass.getMethod("arrayIndexScale", Class.class);
  referenceSize = ((Number) arrayIndexScaleM.invoke(theUnsafe, Object[].class)).intValue();
  supportedFeatures.add(JvmFeature.OBJECT_REFERENCE_SIZE);
} catch (Exception e) {
  // ignore.
}


The Java object reference size for arrays is 8 bytes on 64-bit machines 
(Oracle or IBM) and can be reduced to 4 bytes (as on 32-bit JVMs) using 
Compressed References and Compressed Ordinary Object Pointers (OOPs).

This option seems to be enabled by default in Oracle Java when the heap size 
is under 32GB, but it is not in IBM Java.

As a workaround, when testing with the IBM JVM I can pass the options 
-Xcompressedrefs or -XX:+UseCompressedOops to JUnit.

Similarly, you can reproduce the error if you pass the option 
-XX:-UseCompressedOops when testing with Oracle Java.


The bug is in the oversize method of ArrayUtil.java. It does nothing when the 
object reference size (bytesPerElement) is 8.



[jira] [Commented] (LUCENE-4999) Lucene test (testCambridgeMA) fails when JVM 64-bit does not use memory compression

2013-05-13 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-4999?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13656667#comment-13656667
 ] 

Robert Muir commented on LUCENE-4999:
-

Yes, this at least was a bug in the highlighter. However, it's currently fixed in 
SVN (will be in 4.3.1):

The data structure expected parallel arrays and did not grow them consistently, 
as you noticed.

{noformat}
* LUCENE-4948: Fixed ArrayIndexOutOfBoundsException in PostingsHighlighter
  if you had a 64-bit JVM without compressed OOPS: IBM J9, or Oracle with
  large heap/explicitly disabled.  (Mike McCandless, Uwe Schindler, Robert Muir)
{noformat}

{quote}
Similarly, you can reproduce the error if you pass the option 
-XX:-UseCompressedOops when testing with Oracle Java.
{quote}

Uwe Schindler did exactly that to our jenkins instances: they randomize this 
option so that this class of bugs will be found when using Oracle too, and 
won't be brushed aside as a JVM issue in the future.


 Lucene test (testCambridgeMA) fails when JVM 64-bit does not use memory 
 compression
 ---

 Key: LUCENE-4999
 URL: https://issues.apache.org/jira/browse/LUCENE-4999
 Project: Lucene - Core
  Issue Type: Bug
Affects Versions: 4.3, 4.2.1
 Environment: Red Hat 6.3
 IBM Java 6 - SR13
 Ant 1.9.0
Reporter: Rodrigo Trujillo
Priority: Critical

 When I ran the Lucene (4.2.1/4.3) test suite with IBM Java I get the 
 following error:
 [junit4:junit4] Suite: 
 org.apache.lucene.search.postingshighlight.TestPostingsHighlighter
 [junit4:junit4]   2 NOTE: reproduce with: ant test  
 -Dtestcase=TestPostingsHighlighter -Dtests.method=testCambridgeMA 
 -Dtests.seed=571E16AEAF72C9F9 -Dtests.s
 low=true -Dtests.locale=mt_MT -Dtests.timezone=Pacific/Kiritimati 
 -Dtests.file.encoding=UTF-8
 [junit4:junit4] ERROR   0.71s J2 | TestPostingsHighlighter.testCambridgeMA 
 [junit4:junit4] Throwable #1: java.lang.ArrayIndexOutOfBoundsException: 
 Array index out of range: 37
 [junit4:junit4]at 
 __randomizedtesting.SeedInfo.seed([571E16AEAF72C9F9:D60B7505C1DC91F8]:0)
 [junit4:junit4]at 
 org.apache.lucene.search.postingshighlight.Passage.addMatch(Passage.java:53)
 [junit4:junit4]at 
 org.apache.lucene.search.postingshighlight.PostingsHighlighter.highlightDoc(PostingsHighlighter.java:547)
 [junit4:junit4]at 
 org.apache.lucene.search.postingshighlight.PostingsHighlighter.highlightField(PostingsHighlighter.java:425)
 [junit4:junit4]at 
 org.apache.lucene.search.postingshighlight.PostingsHighlighter.highlightFields(PostingsHighlighter.java:364)
 [junit4:junit4]at 
 org.apache.lucene.search.postingshighlight.PostingsHighlighter.highlightFields(PostingsHighlighter.java:268)
 [junit4:junit4]at 
 org.apache.lucene.search.postingshighlight.PostingsHighlighter.highlight(PostingsHighlighter.java:198)
 [junit4:junit4]at 
 org.apache.lucene.search.postingshighlight.TestPostingsHighlighter.testCambridgeMA(TestPostingsHighlighter.java:373)
 [junit4:junit4]at java.lang.Thread.run(Thread.java:738)
 [junit4:junit4]   2 NOTE: test params are: 
 codec=FastDecompressionCompressingStoredFields(storedFieldsFormat=CompressingStoredFieldsFormat(compressionMode=FA
 ST_DECOMPRESSION, chunkSize=386), 
 termVectorsFormat=CompressingTermVectorsFormat(compressionMode=FAST_DECOMPRESSION,
  chunkSize=386)), sim=RandomSimilarityProv
 ider(queryNorm=false,coord=yes): {body=DFR I(n)Z(0.3), title=DFR I(F)Z(0.3), 
 id=DFR I(n)2}, locale=mt_MT, timezone=Pacific/Kiritimati
 [junit4:junit4]   2 NOTE: Linux 2.6.32-279.el6.x86_64 amd64/IBM Corporation 
 1.6.0 (64-bit)/cpus=4,threads=1,free=10783032,total=24030208
 [junit4:junit4]   2 NOTE: All tests run in this JVM: [FieldQueryTest, 
 FieldPhraseListTest, SimpleFragListBuilderTest, FieldTermStackTest, 
 OffsetLimitTokenFil
 terTest, TokenSourcesTest, TestPostingsHighlighter]
 [junit4:junit4] Completed on J2 in 2.46s, 23 tests, 1 error  FAILURES!
 This error is not seen with Oracle Java.
 A Google search showed that this error has already occurred in community 
 builds and the solution proposed was disable the IBM Java in the community 
 tests.
 I took a look in the code and found that the root of the problem is due to 
 the assignment of the variable referenceSize in RamUsageEstimator.java:
 // get object reference size by getting scale factor of Object[] arrays:
 try {
   final Method arrayIndexScaleM = 
 unsafeClass.getMethod(arrayIndexScale, Class.class);
   referenceSize = ((Number) arrayIndexScaleM.invoke(theUnsafe, 
 Object[].class)).intValue();
   supportedFeatures.add(JvmFeature.OBJECT_REFERENCE_SIZE);
 } catch (Exception e) {
   // ignore.
 }
 The Java Object reference size for arrays have 8 

[jira] [Commented] (LUCENE-4583) StraightBytesDocValuesField fails if bytes > 32k

2013-05-13 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-4583?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13656831#comment-13656831
 ] 

Robert Muir commented on LUCENE-4583:
-

I can compromise with this. 

However, I don't like the current patch.

I don't think we should modify ByteBlockPool. Instead, I think 
BinaryDocValuesWriter should do the following:
* use PagedBytes to append the bytes (which has append-only writing, via 
getDataOutput)
* implement the iterator with PagedBytes.getDataInput (it's just an iterator, so 
this is simple)
* store the lengths instead as absolute offsets with 
MonotonicAppendingLongBuffer (this should be more efficient)

The only thing ByteBlockPool gives us is some automagic RAM accounting, but this 
is not as good as it looks anyway:
* today, it seems to me RAM accounting is broken for this dv type already (the 
lengths are not considered, or am I missing something!?)
* since we need to fix that bug anyway (by just adding updateBytesUsed like the 
other consumers), the magic accounting buys us nothing really anyway.

Anyway, I can help with this tomorrow.
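
A rough sketch of that write path, assuming the APIs named above (PagedBytes.getDataOutput and MonotonicAppendingLongBuffer); exact signatures may differ in the targeted version, so treat this as an outline rather than the eventual patch:

{code:java}
import org.apache.lucene.store.DataOutput;
import org.apache.lucene.util.BytesRef;
import org.apache.lucene.util.PagedBytes;
import org.apache.lucene.util.packed.MonotonicAppendingLongBuffer;

class BinaryDocValuesWriterSketch {
  private final PagedBytes bytes = new PagedBytes(16);          // 64KB blocks
  private final DataOutput bytesOut = bytes.getDataOutput();    // append-only
  private final MonotonicAppendingLongBuffer endOffsets = new MonotonicAppendingLongBuffer();
  private long totalBytes;

  void addValue(BytesRef value) throws java.io.IOException {
    // Append the raw bytes; no per-value length limit as with ByteBlockPool slices.
    bytesOut.writeBytes(value.bytes, value.offset, value.length);
    totalBytes += value.length;
    // Store the running end offset; lengths are recovered by subtracting neighbors.
    endOffsets.add(totalBytes);
  }
}
{code}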


 StraightBytesDocValuesField fails if bytes > 32k
 

 Key: LUCENE-4583
 URL: https://issues.apache.org/jira/browse/LUCENE-4583
 Project: Lucene - Core
  Issue Type: Bug
  Components: core/index
Affects Versions: 4.0, 4.1, 5.0
Reporter: David Smiley
Priority: Critical
 Fix For: 4.4

 Attachments: LUCENE-4583.patch, LUCENE-4583.patch, LUCENE-4583.patch


 I didn't observe any limitations on the size of a bytes-based DocValues field 
 value in the docs.  It appears that the limit is 32k, although I didn't get 
 any friendly error telling me that was the limit.  32k is kind of small IMO; 
 I suspect this limit is unintended and as such is a bug.  The following 
 test fails:
 {code:java}
   public void testBigDocValue() throws IOException {
 Directory dir = newDirectory();
 IndexWriter writer = new IndexWriter(dir, writerConfig(false));
 Document doc = new Document();
 BytesRef bytes = new BytesRef((4+4)*4097);//4096 works
 bytes.length = bytes.bytes.length;//byte data doesn't matter
 doc.add(new StraightBytesDocValuesField("dvField", bytes));
 writer.addDocument(doc);
 writer.commit();
 writer.close();
 DirectoryReader reader = DirectoryReader.open(dir);
 DocValues docValues = MultiDocValues.getDocValues(reader, "dvField");
 //FAILS IF BYTES IS BIG!
 docValues.getSource().getBytes(0, bytes);
 reader.close();
 dir.close();
   }
 {code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org