Re: [VOTE] Release PyLucene 4.2.1-1

2013-04-18 Thread Thomas Koch
Andi,
I now get a different error while compiling __init__.cpp:

org/apache/lucene/util/automaton/CompiledAutomaton$AUTOMATON_TYPE.h(42) :
error C2059: syntax error: 'string'
org/apache/lucene/util/automaton/CompiledAutomaton$AUTOMATON_TYPE.h(42) :
error C2238: unexpected token(s) preceding ';'

The line complained about is #42

40    static CompiledAutomaton$AUTOMATON_TYPE *NONE;
41    static CompiledAutomaton$AUTOMATON_TYPE *NORMAL;
42    static CompiledAutomaton$AUTOMATON_TYPE *PREFIX;
43    static CompiledAutomaton$AUTOMATON_TYPE *SINGLE;

PREFIX seems to be another reserved word ... I could compile __init__.cpp
after renaming PREFIX to PREFIX1.
I tried to Google a list of reserved words used by the VS C++ compiler, but
had no luck...
There are some predefined macros - but none that match our issue:
http://msdn.microsoft.com/en-us/library/b0084kay(v=vs.100).aspx



Make output details:

C:\Program Files\Microsoft Visual Studio 9.0\VC\BIN\cl.exe /c /nologo /Ox
/MD /W3 /GS- /DNDEBUG -DPYTHON -DJCC_VER=2.16 -D_jcc_shared
-D_java_generics -D_dll_lucene=__declspec(dllexport) -IC:\Program
Files\Java\jdk1.6.0_06/include -IC:\Program
Files\Java\jdk1.6.0_06/include/win32 -Ibuild\_lucene
-IC:\Python27\lib\site-packages\jcc-2.16-py2.7-win32.egg\jcc\sources
-IC:\Python27\include -IC:\Python27\PC /Tpbuild\_lucene\__init__.cpp
/Fobuild\temp.win32-2.7\Release\build\_lucene\__init__.obj /EHsc
/D_CRT_SECURE_NO_WARNINGS
__init__.cpp
C:\Python27\lib\site-packages\jcc-2.16-py2.7-win32.egg\jcc\sources\JCCEnv.h(118) :
warning C4251: 'JCCEnv::refs': class 'std::multimap<_Kty,_Ty>' requires a
DLL interface to be used by clients of class 'JCCEnv'
with
[
_Kty=int,
_Ty=countedRef
]
f:\devel\workspaces\workspace.pylucene\pylucene-4.2.1-1\build\_lucene\org/apache/lucene/util/automaton/CompiledAutomaton$AUTOMATON_TYPE.h(42) : error
C2059: syntax error: 'string'
f:\devel\workspaces\workspace.pylucene\pylucene-4.2.1-1\build\_lucene\org/apache/lucene/util/automaton/CompiledAutomaton$AUTOMATON_TYPE.h(42) : error
C2238: unexpected token(s) preceding ';'
error: command 'C:\Program Files\Microsoft Visual Studio 9.0\VC\BIN\cl.exe' failed with exit status 2
make: *** [compile] Error 1


regards,
Thomas

-----Original Message-----
From: Andi Vajda [mailto:va...@apache.org] 
Sent: Wednesday, April 17, 2013 22:11
To: pylucene-dev@lucene.apache.org
Cc: gene...@lucene.apache.org
Subject: [VOTE] Release PyLucene 4.2.1-1


The PyLucene 4.2.1-0 release candidate had a number of problems preventing
its release. A PyLucene 4.2.1-1 release candidate is now ready for review
from:

   http://people.apache.org/~vajda/staging_area/

A list of changes in this release can be seen at:
http://svn.apache.org/repos/asf/lucene/pylucene/branches/pylucene_4_2/CHANGES

PyLucene 4.2.1 is built with JCC 2.16 included in these release artifacts:
http://svn.apache.org/repos/asf/lucene/pylucene/trunk/jcc/CHANGES

A list of Lucene Java changes can be seen at:
http://svn.apache.org/repos/asf/lucene/dev/tags/lucene_solr_4_2_1/lucene/CHANGES.txt

Please vote to release these artifacts as PyLucene 4.2.1-1.

Thanks !

Andi..

ps: the KEYS file for PyLucene release signing is at:
http://svn.apache.org/repos/asf/lucene/pylucene/dist/KEYS
http://people.apache.org/~vajda/staging_area/KEYS

pps: here is my +1



Re: [VOTE] Release PyLucene 4.2.1-1

2013-04-18 Thread Andi Vajda


On Thu, 18 Apr 2013, Thomas Koch wrote:


Andi,
I now get a different error while compiling __init__.cpp:

org/apache/lucene/util/automaton/CompiledAutomaton$AUTOMATON_TYPE.h(42) :
error C2059: syntax error: 'string'
org/apache/lucene/util/automaton/CompiledAutomaton$AUTOMATON_TYPE.h(42) :
error C2238: unexpected token(s) preceding ';'

The line complained about is #42

40    static CompiledAutomaton$AUTOMATON_TYPE *NONE;
41    static CompiledAutomaton$AUTOMATON_TYPE *NORMAL;
42    static CompiledAutomaton$AUTOMATON_TYPE *PREFIX;
43    static CompiledAutomaton$AUTOMATON_TYPE *SINGLE;

PREFIX seems to be another reserved word ... I could compile __init__.cpp
after renaming PREFIX to PREFIX1.


Instead of renaming PREFIX, could you please have JCC do it for you by 
adding it to the list of reserved words in the JCC invocation via the 
--reserved command line flag? And rinse and repeat until all such conflicts 
due to macro definitions are solved?
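
For example (a minimal sketch -- the actual JCC invocation comes from
PyLucene's Makefile and differs per setup; only the --reserved flag itself
is meant literally here):

    python -m jcc ... --reserved PREFIX

with one additional --reserved NAME per conflicting macro.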

Or were you already able to complete the build once PREFIX was renamed?


I tried to Google a list of reserved words used by the VS C++ compiler, but had
no luck...


These are not reserved words but macro definitions that conflict with the 
generated code. If PREFIX is, say, defined to 1, line 42 becomes:


  static CompiledAutomaton$AUTOMATON_TYPE *1;

and that doesn't compile.


There are some predefined macros - but none that match our issue:
http://msdn.microsoft.com/en-us/library/b0084kay(v=vs.100).aspx


Andi..





Make output details:

C:\Program Files\Microsoft Visual Studio 9.0\VC\BIN\cl.exe /c /nologo /Ox
/MD /W3 /GS- /DNDEBUG -DPYTHON -DJCC_VER=2.16 -D_jcc_shared
-D_java_generics -D_dll_lucene=__declspec(dllexport) -IC:\Program
Files\Java\jdk1.6.0_06/include -IC:\Program
Files\Java\jdk1.6.0_06/include/win32 -Ibuild\_lucene
-IC:\Python27\lib\site-packages\jcc-2.16-py2.7-win32.egg\jcc\sources
-IC:\Python27\include -IC:\Python27\PC /Tpbuild\_lucene\__init__.cpp
/Fobuild\temp.win32-2.7\Release\build\_lucene\__init__.obj /EHsc
/D_CRT_SECURE_NO_WARNINGS
__init__.cpp
C:\Python27\lib\site-packages\jcc-2.16-py2.7-win32.egg\jcc\sources\JCCEnv.h(118) :
warning C4251: 'JCCEnv::refs': class 'std::multimap<_Kty,_Ty>' requires a
DLL interface to be used by clients of class 'JCCEnv'
   with
   [
   _Kty=int,
   _Ty=countedRef
   ]
f:\devel\workspaces\workspace.pylucene\pylucene-4.2.1-1\build\_lucene\org/apache/lucene/util/automaton/CompiledAutomaton$AUTOMATON_TYPE.h(42) : error
C2059: syntax error: 'string'
f:\devel\workspaces\workspace.pylucene\pylucene-4.2.1-1\build\_lucene\org/apache/lucene/util/automaton/CompiledAutomaton$AUTOMATON_TYPE.h(42) : error
C2238: unexpected token(s) preceding ';'
error: command 'C:\Program Files\Microsoft Visual Studio 9.0\VC\BIN\cl.exe' failed with exit status 2
make: *** [compile] Error 1


regards,
Thomas

-----Original Message-----
From: Andi Vajda [mailto:va...@apache.org]
Sent: Wednesday, April 17, 2013 22:11
To: pylucene-dev@lucene.apache.org
Cc: gene...@lucene.apache.org
Subject: [VOTE] Release PyLucene 4.2.1-1


The PyLucene 4.2.1-0 release candidate had a number of problems preventing
its release. A PyLucene 4.2.1-1 release candidate is now ready for review
from:

  http://people.apache.org/~vajda/staging_area/

A list of changes in this release can be seen at:
http://svn.apache.org/repos/asf/lucene/pylucene/branches/pylucene_4_2/CHANGES

PyLucene 4.2.1 is built with JCC 2.16 included in these release artifacts:
http://svn.apache.org/repos/asf/lucene/pylucene/trunk/jcc/CHANGES

A list of Lucene Java changes can be seen at:
http://svn.apache.org/repos/asf/lucene/dev/tags/lucene_solr_4_2_1/lucene/CHANGES.txt

Please vote to release these artifacts as PyLucene 4.2.1-1.

Thanks !

Andi..

ps: the KEYS file for PyLucene release signing is at:
http://svn.apache.org/repos/asf/lucene/pylucene/dist/KEYS
http://people.apache.org/~vajda/staging_area/KEYS

pps: here is my +1



Re: [VOTE] Release PyLucene 4.2.1-1

2013-04-18 Thread Andi Vajda


Because of issues on Windows, this release candidate is also not good.
Please hold your votes until a Windows-happy release candidate is made 
available.


Sorry for the bother.

Andi..

On Wed, 17 Apr 2013, Andi Vajda wrote:



The PyLucene 4.2.1-0 release candidate had a number of problems preventing 
its release. A PyLucene 4.2.1-1 release candidate is now ready for review 
from:


 http://people.apache.org/~vajda/staging_area/

A list of changes in this release can be seen at:
http://svn.apache.org/repos/asf/lucene/pylucene/branches/pylucene_4_2/CHANGES

PyLucene 4.2.1 is built with JCC 2.16 included in these release artifacts:
http://svn.apache.org/repos/asf/lucene/pylucene/trunk/jcc/CHANGES

A list of Lucene Java changes can be seen at:
http://svn.apache.org/repos/asf/lucene/dev/tags/lucene_solr_4_2_1/lucene/CHANGES.txt

Please vote to release these artifacts as PyLucene 4.2.1-1.

Thanks !

Andi..

ps: the KEYS file for PyLucene release signing is at:
http://svn.apache.org/repos/asf/lucene/pylucene/dist/KEYS
http://people.apache.org/~vajda/staging_area/KEYS

pps: here is my +1



[jira] [Created] (LUCENE-4939) Join's TermsIncludingScoreQuery Weight has wrong normalization

2013-04-18 Thread David Smiley (JIRA)
David Smiley created LUCENE-4939:


 Summary: Join's TermsIncludingScoreQuery Weight has wrong 
normalization
 Key: LUCENE-4939
 URL: https://issues.apache.org/jira/browse/LUCENE-4939
 Project: Lucene - Core
  Issue Type: Bug
  Components: modules/join
Reporter: David Smiley
Priority: Minor


In the Join module, TermsIncludingScoreQuery's Weight implementation looks 
suspiciously wrong.  It creates a Weight based on the original query and 
delegates a couple calls to it in getValueForNormalization() and normalize() -- 
ok fine.  But then it doesn't do anything with it!  Furthermore, the original 
query has already been run by this point anyway.

Question: Should the original query, which currently runs separately (see 
JoinUtil), participate in the Weight normalization of the main query?  It would 
be tricky to wire all this together based on the current structure but arguably 
that is more correct.
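
A condensed sketch of the pattern in question (paraphrased, not the actual 
source; the class and field names here are illustrative):

{code}
// paraphrased sketch of the suspicious pattern, not the actual Lucene source
class TermsIncludingScoreQueryWeight extends Weight {
  private final Weight originalWeight; // built from the original ('from') query

  @Override
  public float getValueForNormalization() throws IOException {
    return originalWeight.getValueForNormalization(); // delegated...
  }

  @Override
  public void normalize(float norm, float topLevelBoost) {
    originalWeight.normalize(norm, topLevelBoost); // ...but nothing reads the result:
    // scoring reuses the scores collected when the original query already ran
  }

  // other Weight methods (getQuery, explain, scorer) elided for brevity
}
{code}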

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (LUCENE-4940) ToParentBlockJoinQuery throws exception on empty parent filter DocIdSet

2013-04-18 Thread Simon Willnauer (JIRA)
Simon Willnauer created LUCENE-4940:
---

 Summary: ToParentBlockJoinQuery throws exception on empty parent 
filter DocIdSet
 Key: LUCENE-4940
 URL: https://issues.apache.org/jira/browse/LUCENE-4940
 Project: Lucene - Core
  Issue Type: Bug
  Components: modules/join
Affects Versions: 5.0, 4.3
Reporter: Simon Willnauer
 Fix For: 5.0, 4.3


Previously, DocIdSet.EMPTY_DOCIDSET returned NO_MORE_DOCS by default when 
DocIdSetIterator#docID() was called, but since LUCENE-4924 we return -1 if the 
empty iterator is unpositioned / not exhausted. This causes an 
IllegalStateException since we couldn't detect the empty DocIdSet. Our tests 
somehow don't catch this, and I ran into it integrating the RC0 that I built into 
ES last night, so this is unreleased...
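
For context, a minimal sketch of the contract change that bites here (assuming 
the 4.x DocIdSetIterator API; the guard shown is illustrative, not the 
committed fix):

{code}
DocIdSetIterator it = docIdSet.iterator();
// before LUCENE-4924: an empty iterator reported NO_MORE_DOCS from docID() right away
// after LUCENE-4924: an unpositioned iterator reports -1 until it is advanced
if (it == null || it.nextDoc() == DocIdSetIterator.NO_MORE_DOCS) {
  // empty set: bail out instead of assuming docID() already equals NO_MORE_DOCS
}
{code}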

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Assigned] (LUCENE-4940) ToParentBlockJoinQuery throws exception on empty parent filter DocIdSet

2013-04-18 Thread Simon Willnauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-4940?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Simon Willnauer reassigned LUCENE-4940:
---

Assignee: Simon Willnauer

 ToParentBlockJoinQuery throws exception on empty parent filter DocIdSet
 ---

 Key: LUCENE-4940
 URL: https://issues.apache.org/jira/browse/LUCENE-4940
 Project: Lucene - Core
  Issue Type: Bug
  Components: modules/join
Affects Versions: 5.0, 4.3
Reporter: Simon Willnauer
Assignee: Simon Willnauer
 Fix For: 5.0, 4.3


 Previously, DocIdSet.EMPTY_DOCIDSET returned NO_MORE_DOCS by default when 
 DocIdSetIterator#docID() was called, but since LUCENE-4924 we return -1 if the 
 empty iterator is unpositioned / not exhausted. This causes an 
 IllegalStateException since we couldn't detect the empty DocIdSet. Our tests 
 somehow don't catch this, and I ran into it integrating the RC0 that I built 
 into ES last night, so this is unreleased...

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-4940) ToParentBlockJoinQuery throws exception on empty parent filter DocIdSet

2013-04-18 Thread Simon Willnauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-4940?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Simon Willnauer updated LUCENE-4940:


Attachment: LUCENE-4940.patch

here is a test and a fix...

 ToParentBlockJoinQuery throws exception on empty parent filter DocIdSet
 ---

 Key: LUCENE-4940
 URL: https://issues.apache.org/jira/browse/LUCENE-4940
 Project: Lucene - Core
  Issue Type: Bug
  Components: modules/join
Affects Versions: 5.0, 4.3
Reporter: Simon Willnauer
Assignee: Simon Willnauer
 Fix For: 5.0, 4.3

 Attachments: LUCENE-4940.patch


 Previously, DocIdSet.EMPTY_DOCIDSET returned NO_MORE_DOCS by default when 
 DocIdSetIterator#docID() was called, but since LUCENE-4924 we return -1 if the 
 empty iterator is unpositioned / not exhausted. This causes an 
 IllegalStateException since we couldn't detect the empty DocIdSet. Our tests 
 somehow don't catch this, and I ran into it integrating the RC0 that I built 
 into ES last night, so this is unreleased...

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-4940) ToParentBlockJoinQuery throws exception on empty parent filter DocIdSet

2013-04-18 Thread Martijn van Groningen (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-4940?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13634963#comment-13634963
 ] 

Martijn van Groningen commented on LUCENE-4940:
---

+1 The patch looks good! 

 ToParentBlockJoinQuery throws exception on empty parent filter DocIdSet
 ---

 Key: LUCENE-4940
 URL: https://issues.apache.org/jira/browse/LUCENE-4940
 Project: Lucene - Core
  Issue Type: Bug
  Components: modules/join
Affects Versions: 5.0, 4.3
Reporter: Simon Willnauer
Assignee: Simon Willnauer
 Fix For: 5.0, 4.3

 Attachments: LUCENE-4940.patch


 Previously, DocIdSet.EMPTY_DOCIDSET returned NO_MORE_DOCS by default when 
 DocIdSetIterator#docID() was called, but since LUCENE-4924 we return -1 if the 
 empty iterator is unpositioned / not exhausted. This causes an 
 IllegalStateException since we couldn't detect the empty DocIdSet. Our tests 
 somehow don't catch this, and I ran into it integrating the RC0 that I built 
 into ES last night, so this is unreleased...

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (LUCENE-4940) ToParentBlockJoinQuery throws exception on empty parent filter DocIdSet

2013-04-18 Thread Simon Willnauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-4940?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Simon Willnauer resolved LUCENE-4940.
-

Resolution: Fixed

 ToParentBlockJoinQuery throws exception on empty parent filter DocIdSet
 ---

 Key: LUCENE-4940
 URL: https://issues.apache.org/jira/browse/LUCENE-4940
 Project: Lucene - Core
  Issue Type: Bug
  Components: modules/join
Affects Versions: 5.0, 4.3
Reporter: Simon Willnauer
Assignee: Simon Willnauer
 Fix For: 5.0, 4.3

 Attachments: LUCENE-4940.patch


 Previously, DocIdSet.EMPTY_DOCIDSET returned NO_MORE_DOCS by default when 
 DocIdSetIterator#docID() was called, but since LUCENE-4924 we return -1 if the 
 empty iterator is unpositioned / not exhausted. This causes an 
 IllegalStateException since we couldn't detect the empty DocIdSet. Our tests 
 somehow don't catch this, and I ran into it integrating the RC0 that I built 
 into ES last night, so this is unreleased...

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-trunk-MacOSX ([[ Exception while replacing ENV. Please report this as a bug. ]] {{ java.lang.NullPointerException }}) - Build # 405 - Still Failing!

2013-04-18 Thread Policeman Jenkins Server

Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-MacOSX/405/
Java: [[ Exception while replacing ENV. Please report this as a bug. ]]
{{ java.lang.NullPointerException }}

No tests ran.

Build Log:
[...truncated 1647 lines...]
FATAL: hudson.remoting.RequestAbortedException: java.io.IOException: Unexpected 
reader termination

common.init:

compile-lucene-core:

init:

-clover.disable:

-clover.load:

-clover.classpath:

-clover.setup:

clover:

common.compile-core:
[mkdir] Created dir: 
/Users/jenkins/jenkins-slave/workspace/Lucene-Solr-trunk-MacOSX/lucene/build/analysis/kuromoji/classes/java
[javac] Compiling 33 source files to 
/Users/jenkins/jenkins-slave/workspace/Lucene-Solr-trunk-MacOSX/lucene/build/analysis/kuromoji/classes/java
 [copy] Copying 14 files to 
/Users/jenkins/jenkins-slave/workspace/Lucene-Solr-trunk-MacOSX/lucene/build/analysis/kuromoji/classes/java

compile-core:

compile-test-framework:

ivy-availability-check:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/Users/jenkins/jenkins-slave/workspace/Lucene-Solr-trunk-MacOSX/lucene/ivy-settings.xml

resolve:

init:

compile-lucene-core:

compile-codecs:
 [echo] Building codecs...
hudson.remoting.RequestAbortedException: 
hudson.remoting.RequestAbortedException: java.io.IOException: Unexpected reader 
termination
at hudson.remoting.Request.call(Request.java:174)
at hudson.remoting.Channel.call(Channel.java:672)
at 
hudson.remoting.RemoteInvocationHandler.invoke(RemoteInvocationHandler.java:158)
at com.sun.proxy.$Proxy75.join(Unknown Source)
at hudson.Launcher$RemoteLauncher$ProcImpl.join(Launcher.java:915)
at hudson.Launcher$ProcStarter.join(Launcher.java:360)
at hudson.tasks.Ant.perform(Ant.java:217)
at hudson.tasks.BuildStepMonitor$1.perform(BuildStepMonitor.java:19)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.perform(AbstractBuild.java:802)
at hudson.model.Build$BuildExecution.build(Build.java:199)
at hudson.model.Build$BuildExecution.doRun(Build.java:160)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.run(AbstractBuild.java:584)
at hudson.model.Run.execute(Run.java:1575)
at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:46)
at hudson.model.ResourceController.execute(ResourceController.java:88)
at hudson.model.Executor.run(Executor.java:237)
Caused by: hudson.remoting.RequestAbortedException: java.io.IOException: 
Unexpected reader termination
at hudson.remoting.Request.abort(Request.java:299)
at hudson.remoting.Channel.terminate(Channel.java:732)
at 
hudson.remoting.SynchronousCommandTransport$ReaderThread.run(SynchronousCommandTransport.java:76)
Caused by: java.io.IOException: Unexpected reader termination
... 1 more
Caused by: java.lang.OutOfMemoryError: Java heap space



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-4734) Can not create collection via collections API on empty solr

2013-04-18 Thread Alexander Eibner (JIRA)
Alexander Eibner created SOLR-4734:
--

 Summary: Can not create collection via collections API on empty 
solr
 Key: SOLR-4734
 URL: https://issues.apache.org/jira/browse/SOLR-4734
 Project: Solr
  Issue Type: Bug
  Components: SolrCloud
Affects Versions: 4.2.1
 Environment: Linux 64bit on 3.2.0-33-generic kernel
Solr: 4.2.1
ZooKeeper: 3.4.5
Tomcat 7.0.27 
Reporter: Alexander Eibner


The following setup and steps always lead to the same error:
app01: ZooKeeper
app02: ZooKeeper, Solr (in Tomcat)
app03: ZooKeeper, Solr (in Tomcat) 


*) Start ZooKeeper as ensemble on all machines.
*) Start tomcat on app02/app03

{code:javascript|title=clusterstate.json}
null
cZxid = 0x10014
ctime = Thu Apr 18 10:59:24 CEST 2013
mZxid = 0x10014
mtime = Thu Apr 18 10:59:24 CEST 2013
pZxid = 0x10014
cversion = 0
dataVersion = 0
aclVersion = 0
ephemeralOwner = 0x0
dataLength = 0
numChildren = 0
{code}

*) Upload the configuration (on app02) for the collection via the following 
command:
{noformat}
zkcli.sh -cmd upconfig --zkhost app01:4181,app02:4181,app03:4181 --confdir 
config/solr/storage/conf/ --confname storage-conf 
{noformat}

*) Linking the configuration (on app02) via the following command:
{noformat}
zkcli.sh -cmd linkconfig --collection storage --confname storage-conf 
--zkhost app01:4181,app02:4181,app03:4181
{noformat}

*) Create Collection via: 
{noformat}
http://app02/solr/admin/collections?action=CREATE&name=storage&numShards=1&replicationFactor=2&collection.configName=storage-conf
{noformat}

{code:javascript|title=clusterstate.json}
{"storage":{
    "shards":{"shard1":{
        "range":"8000-7fff",
        "state":"active",
        "replicas":{
          "app02:9985_solr_storage_shard1_replica2":{
            "shard":"shard1",
            "state":"down",
            "core":"storage_shard1_replica2",
            "collection":"storage",
            "node_name":"app02:9985_solr",
            "base_url":"http://app02:9985/solr"},
          "app03:9985_solr_storage_shard1_replica1":{
            "shard":"shard1",
            "state":"down",
            "core":"storage_shard1_replica1",
            "collection":"storage",
            "node_name":"app03:9985_solr",
            "base_url":"http://app03:9985/solr",
            "router":"compositeId"}}
cZxid = 0x10014
ctime = Thu Apr 18 10:59:24 CEST 2013
mZxid = 0x10047
mtime = Thu Apr 18 11:04:06 CEST 2013
pZxid = 0x10014
cversion = 0
dataVersion = 2
aclVersion = 0
ephemeralOwner = 0x0
dataLength = 847
numChildren = 0
{code}

This creates the replication of the shard on app02 and app03, but neither of 
them is marked as leader; both are marked as DOWN.
And afterwards I cannot access the collection.
In the browser I get:
{noformat}
SEVERE: org.apache.solr.common.SolrException: no servers hosting shard:
{noformat}

The following stacktrace in the logs:
{code}
Apr 18, 2013 11:04:05 AM org.apache.solr.common.SolrException log
SEVERE: org.apache.solr.common.SolrException: Error CREATEing SolrCore 
'storage_shard1_replica2': 
at 
org.apache.solr.handler.admin.CoreAdminHandler.handleCreateAction(CoreAdminHandler.java:483)
at 
org.apache.solr.handler.admin.CoreAdminHandler.handleRequestBody(CoreAdminHandler.java:140)
at 
org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:135)
at 
org.apache.solr.servlet.SolrDispatchFilter.handleAdminRequest(SolrDispatchFilter.java:591)
at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:192)
at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:141)
at 
org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:243)
at 
org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:210)
at 
org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:225)
at 
org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:169)
at 
org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:168)
at 
org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:98)
at 
org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:118)
at 
org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:407)
at 
org.apache.coyote.http11.AbstractHttp11Processor.process(AbstractHttp11Processor.java:999)
at 
org.apache.coyote.AbstractProtocol$AbstractConnectionHandler.process(AbstractProtocol.java:565)
at 
org.apache.tomcat.util.net.JIoEndpoint$SocketProcessor.run(JIoEndpoint.java:307)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
at 

[jira] [Updated] (SOLR-4734) Can not create collection via collections API on empty solr

2013-04-18 Thread Alexander Eibner (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-4734?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexander Eibner updated SOLR-4734:
---

Attachment: config-logs.zip

Minimal set of configurations for reproducing the error.

Log files for the steps above

 Can not create collection via collections API on empty solr
 ---

 Key: SOLR-4734
 URL: https://issues.apache.org/jira/browse/SOLR-4734
 Project: Solr
  Issue Type: Bug
  Components: SolrCloud
Affects Versions: 4.2.1
 Environment: Linux 64bit on 3.2.0-33-generic kernel
 Solr: 4.2.1
 ZooKeeper: 3.4.5
 Tomcat 7.0.27 
Reporter: Alexander Eibner
 Attachments: config-logs.zip


 The following setup and steps always lead to the same error:
 app01: ZooKeeper
 app02: ZooKeeper, Solr (in Tomcat)
 app03: ZooKeeper, Solr (in Tomcat) 
 *) Start ZooKeeper as ensemble on all machines.
 *) Start tomcat on app02/app03
 {code:javascript|title=clusterstate.json}
 null
 cZxid = 0x10014
 ctime = Thu Apr 18 10:59:24 CEST 2013
 mZxid = 0x10014
 mtime = Thu Apr 18 10:59:24 CEST 2013
 pZxid = 0x10014
 cversion = 0
 dataVersion = 0
 aclVersion = 0
 ephemeralOwner = 0x0
 dataLength = 0
 numChildren = 0
 {code}
 *) Upload the configuration (on app02) for the collection via the following 
 command:
 {noformat}
 zkcli.sh -cmd upconfig --zkhost app01:4181,app02:4181,app03:4181 
 --confdir config/solr/storage/conf/ --confname storage-conf 
 {noformat}
 *) Linking the configuration (on app02) via the following command:
 {noformat}
 zkcli.sh -cmd linkconfig --collection storage --confname storage-conf 
 --zkhost app01:4181,app02:4181,app03:4181
 {noformat}
 *) Create Collection via: 
 {noformat}
 http://app02/solr/admin/collections?action=CREATE&name=storage&numShards=1&replicationFactor=2&collection.configName=storage-conf
 {noformat}
 {code:javascript|title=clusterstate.json}
 {"storage":{
     "shards":{"shard1":{
         "range":"8000-7fff",
         "state":"active",
         "replicas":{
           "app02:9985_solr_storage_shard1_replica2":{
             "shard":"shard1",
             "state":"down",
             "core":"storage_shard1_replica2",
             "collection":"storage",
             "node_name":"app02:9985_solr",
             "base_url":"http://app02:9985/solr"},
           "app03:9985_solr_storage_shard1_replica1":{
             "shard":"shard1",
             "state":"down",
             "core":"storage_shard1_replica1",
             "collection":"storage",
             "node_name":"app03:9985_solr",
             "base_url":"http://app03:9985/solr",
             "router":"compositeId"}}
 cZxid = 0x10014
 ctime = Thu Apr 18 10:59:24 CEST 2013
 mZxid = 0x10047
 mtime = Thu Apr 18 11:04:06 CEST 2013
 pZxid = 0x10014
 cversion = 0
 dataVersion = 2
 aclVersion = 0
 ephemeralOwner = 0x0
 dataLength = 847
 numChildren = 0
 {code}
 This creates the replication of the shard on app02 and app03, but neither of 
 them is marked as leader; both are marked as DOWN.
 And afterwards I cannot access the collection.
 In the browser I get:
 {noformat}
 SEVERE: org.apache.solr.common.SolrException: no servers hosting shard:
 {noformat}
 The following stacktrace in the logs:
 {code}
 Apr 18, 2013 11:04:05 AM org.apache.solr.common.SolrException log
 SEVERE: org.apache.solr.common.SolrException: Error CREATEing SolrCore 
 'storage_shard1_replica2': 
 at 
 org.apache.solr.handler.admin.CoreAdminHandler.handleCreateAction(CoreAdminHandler.java:483)
 at 
 org.apache.solr.handler.admin.CoreAdminHandler.handleRequestBody(CoreAdminHandler.java:140)
 at 
 org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:135)
 at 
 org.apache.solr.servlet.SolrDispatchFilter.handleAdminRequest(SolrDispatchFilter.java:591)
 at 
 org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:192)
 at 
 org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:141)
 at 
 org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:243)
 at 
 org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:210)
 at 
 org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:225)
 at 
 org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:169)
 at 
 org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:168)
 at 
 org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:98)
 at 
 org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:118)
 at 
 org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:407)
 at 
 

[jira] [Commented] (LUCENE-4939) Join's TermsIncludingScoreQuery Weight has wrong normalization

2013-04-18 Thread Martijn van Groningen (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-4939?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13635040#comment-13635040
 ] 

Martijn van Groningen commented on LUCENE-4939:
---

I never really thought about this... I think it doesn't make sense that the 
TermsIncludingScoreQuery's Weight delegates to the original query. Any query 
normalisation should happen on the 'from' query. The 'to' query 
(TermsIncludingScoreQuery) should just use the scores it gets from the 'from' 
query (so query-time boosts should be placed on the 'from' query).
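
A small usage sketch of that advice (the field names and boost value are made 
up; JoinUtil.createJoinQuery is the 4.x join entry point):

{code}
Query fromQuery = new TermQuery(new Term("author", "smith"));
fromQuery.setBoost(2.0f); // query-time boost belongs on the 'from' query
Query joinQuery = JoinUtil.createJoinQuery(
    "fromField", false, "toField", fromQuery, fromSearcher, ScoreMode.Max);
// the 'to' side just reuses the scores produced by the 'from' query
{code}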

 Join's TermsIncludingScoreQuery Weight has wrong normalization
 --

 Key: LUCENE-4939
 URL: https://issues.apache.org/jira/browse/LUCENE-4939
 Project: Lucene - Core
  Issue Type: Bug
  Components: modules/join
Reporter: David Smiley
Priority: Minor

 In the Join module, TermsIncludingScoreQuery's Weight implementation looks 
 suspiciously wrong.  It creates a Weight based on the original query and 
 delegates a couple calls to it in getValueForNormalization() and normalize() 
 -- ok fine.  But then it doesn't do anything with it!  Furthermore, the 
 original query has already been run by this point anyway.
 Question: Should the original query, which currently runs separately (see 
 JoinUtil), participate in the Weight normalization of the main query?  It 
 would be tricky to wire all this together based on the current structure but 
 arguably that is more correct.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-4941) JoinUtil's TermsQuery should sort terms only once.

2013-04-18 Thread Martijn van Groningen (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-4941?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Martijn van Groningen updated LUCENE-4941:
--

Attachment: LUCENE-4941.patch

 JoinUtil's TermsQuery should sort terms only once.
 --

 Key: LUCENE-4941
 URL: https://issues.apache.org/jira/browse/LUCENE-4941
 Project: Lucene - Core
  Issue Type: Improvement
  Components: modules/join
Reporter: Martijn van Groningen
Assignee: Martijn van Groningen
 Attachments: LUCENE-4941.patch


 The sorting of the 'from' terms occurs as many times as there are segments. 
 This only needs to happen once.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (LUCENE-4941) JoinUtil's TermsQuery should sort terms only once.

2013-04-18 Thread Martijn van Groningen (JIRA)
Martijn van Groningen created LUCENE-4941:
-

 Summary: JoinUtil's TermsQuery should sort terms only once.
 Key: LUCENE-4941
 URL: https://issues.apache.org/jira/browse/LUCENE-4941
 Project: Lucene - Core
  Issue Type: Improvement
  Components: modules/join
Reporter: Martijn van Groningen
Assignee: Martijn van Groningen
 Attachments: LUCENE-4941.patch

The sorting of the 'from' terms occurs as many times as there are segments. This 
only needs to happen once.
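
In sketch form (the helper names are hypothetical; the attached patch is 
authoritative):

{code}
// hoist the sort out of the per-segment loop
BytesRef[] fromTerms = collectFromTerms();  // hypothetical helper
ArrayUtil.timSort(fromTerms);               // sort once, up front...
for (AtomicReaderContext ctx : reader.leaves()) {
  seekSortedTerms(fromTerms, ctx);          // ...instead of re-sorting per segment
}
{code}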

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



ReleaseNotes 4.3

2013-04-18 Thread Simon Willnauer
I started with the lucene release notes here:

http://wiki.apache.org/lucene-java/ReleaseNote43


please feel free to add to it as needed!

Aside from this, I'd really appreciate some help with the Solr part. Mark, can
you help with that?

simon


[jira] [Created] (SOLR-4735) Improve Solr metrics reporting

2013-04-18 Thread Alan Woodward (JIRA)
Alan Woodward created SOLR-4735:
---

 Summary: Improve Solr metrics reporting
 Key: SOLR-4735
 URL: https://issues.apache.org/jira/browse/SOLR-4735
 Project: Solr
  Issue Type: Improvement
Reporter: Alan Woodward
Assignee: Alan Woodward
Priority: Minor


Following on from a discussion on the mailing list:
http://search-lucene.com/m/IO0EI1qdyJF1/codahalesubj=Solr+metrics+in+Codahale+metrics+and+Graphite+

It would be good to make Solr play more nicely with existing devops monitoring 
systems, such as Graphite or Ganglia.  Stats monitoring at the moment is 
poll-only, either via JMX or through the admin stats page.  I'd like to 
refactor things a bit to make this more pluggable.

This patch is a start.  It adds a new interface, InstrumentedBean, which 
extends SolrInfoMBean to return a 
[[Metrics|http://metrics.codahale.com/manual/core/]] MetricRegistry, and a 
couple of MetricReporters (which basically just duplicate the JMX and admin 
page reporting that's there at the moment, but which should be more 
extensible).  The patch includes a change to RequestHandlerBase showing how 
this could work.  The idea would be to eventually replace the getStatistics() 
call on SolrInfoMBean with this instead.

The next step would be to allow more MetricReporters to be defined in 
solrconfig.xml.  The Metrics library comes with ganglia and graphite reporting 
modules, and we can add contrib plugins for both of those.

There's some more general cleanup that could be done around SolrInfoMBean 
(we've got two plugin handlers at /mbeans and /plugins that basically do the 
same thing, and the beans themselves have some weirdly inconsistent data on 
them - getVersion() returns different things for different impls, and 
getSource() seems pretty useless), but maybe that's for another issue.
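
A rough sketch of the proposed interface (assuming the Codahale Metrics 3.x 
package layout; the attached patch is the source of truth, and the method name 
here is illustrative):

{code}
import com.codahale.metrics.MetricRegistry;

public interface InstrumentedBean extends SolrInfoMBean {
  /** Registry that pluggable MetricReporters (JMX, Graphite, ...) publish from. */
  MetricRegistry getMetricRegistry();
}
{code}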

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-1413) Add MockSolrServer to SolrJ client tests

2013-04-18 Thread Erick Erickson (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-1413?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13635095#comment-13635095
 ] 

Erick Erickson commented on SOLR-1413:
--

Thanks for closing Lance!

 Add MockSolrServer to SolrJ client tests
 

 Key: SOLR-1413
 URL: https://issues.apache.org/jira/browse/SOLR-1413
 Project: Solr
  Issue Type: Test
  Components: clients - java
 Environment: Any Solr distribution. Uses only the SolrJ client code, 
 nothing in the Solr core.
Reporter: Lance Norskog
Priority: Minor
 Fix For: 3.3

 Attachments: SOLR-1413.patch, SOLR-1413.patch


 The SolrJ unit test suite has no mock solr server for HTTP access, and 
 there are no low-level tests of the Solrj HTTP wire protocols.
 This patch includes org.apache.solr.client.solrj.MockHTTPServer.java and 
 org.apache.solr.client.solrj.TestHTTP_XML_single.java. The mock server does 
 not parse its input and responds with pre-configured byte streams. The latter 
 does a few tests in the XML wire format. Most of the tests do one request and 
 set up success and failure responses.
 Unfortunately, there is a bug: I could not get 2 successive requests to work. 
 The mock server's TCP socket does not work when reading the second request.  
 If someone who knows the JDK socket classes could look at the mock server, I 
 would greatly appreciate it.
 The alternative is to steal a bunch of files from the apache commons 
 httpclient test suite. This is quite a sophisticated bunch of code:
 http://svn.apache.org/repos/asf/httpcomponents/oac.hc3x/trunk/src/test/org/apache/commons/httpclient/server/

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-4735) Improve Solr metrics reporting

2013-04-18 Thread Alan Woodward (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-4735?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alan Woodward updated SOLR-4735:


Attachment: SOLR-4735.patch

Here's the patch.

NB: this uses the same metrics library that I tried to use in my ill-starred 
attempts at SOLR-1972.  However, the various leaky thread abstractions that 
were causing problems there have all been removed from this version.

The JMX reporting doesn't quite work yet either, as it's dependent on being 
able to set the domains per-bean, and that functionality hasn't been released 
yet.

 Improve Solr metrics reporting
 --

 Key: SOLR-4735
 URL: https://issues.apache.org/jira/browse/SOLR-4735
 Project: Solr
  Issue Type: Improvement
Reporter: Alan Woodward
Assignee: Alan Woodward
Priority: Minor
 Attachments: SOLR-4735.patch


 Following on from a discussion on the mailing list:
 http://search-lucene.com/m/IO0EI1qdyJF1/codahalesubj=Solr+metrics+in+Codahale+metrics+and+Graphite+
 It would be good to make Solr play more nicely with existing devops 
 monitoring systems, such as Graphite or Ganglia.  Stats monitoring at the 
 moment is poll-only, either via JMX or through the admin stats page.  I'd 
 like to refactor things a bit to make this more pluggable.
 This patch is a start.  It adds a new interface, InstrumentedBean, which 
 extends SolrInfoMBean to return a 
 [[Metrics|http://metrics.codahale.com/manual/core/]] MetricRegistry, and a 
 couple of MetricReporters (which basically just duplicate the JMX and admin 
 page reporting that's there at the moment, but which should be more 
 extensible).  The patch includes a change to RequestHandlerBase showing how 
 this could work.  The idea would be to eventually replace the getStatistics() 
 call on SolrInfoMBean with this instead.
 The next step would be to allow more MetricReporters to be defined in 
 solrconfig.xml.  The Metrics library comes with ganglia and graphite 
 reporting modules, and we can add contrib plugins for both of those.
 There's some more general cleanup that could be done around SolrInfoMBean 
 (we've got two plugin handlers at /mbeans and /plugins that basically do the 
 same thing, and the beans themselves have some weirdly inconsistent data on 
 them - getVersion() returns different things for different impls, and 
 getSource() seems pretty useless), but maybe that's for another issue.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-4725) Should we stop supporting name and dataDir in the autodiscover mode?

2013-04-18 Thread Erick Erickson (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-4725?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erick Erickson resolved SOLR-4725.
--

   Resolution: Implemented
Fix Version/s: 5.0
   4.3

Resolved by changes to SOLR-4662

 Should we stop supporting name and dataDir in the autodiscover mode?
 

 Key: SOLR-4725
 URL: https://issues.apache.org/jira/browse/SOLR-4725
 Project: Solr
  Issue Type: New Feature
Affects Versions: 4.3, 5.0
Reporter: Erick Erickson
Assignee: Erick Erickson
Priority: Blocker
 Fix For: 4.3, 5.0

 Attachments: SOLR-4725.patch


 Making this a blocker so we resolve it. Should be quick to code if we have 
 consensus, maybe nothing at all to do here.
 I'm not too happy with the fact that the new core discovery process has two 
 real gotchas. The individual core.properties file can define 'name' and 
 'dataDir'. It seems too easy to either use the same name for two different 
 cores or use the same dataDir: just copy the core.properties file around and 
 fail to edit one of them. In large installations this could be a bear to track 
 down.
 The straw-man proposal is that we remove support for them both in discovery 
 mode. The name defaults to the directory in which core.properties is found, 
 and the data dir is immediately below there (see the hedged example after 
 this description).
 Currently, there are checks that fail to load either core if the same 'name' 
 or 'dataDir' is defined in more than one core. I think the error reporting is 
 weak: you probably have to look in the log file, and there should be a way to 
 get this in the admin UI at least.
 Maybe the right thing to do is just leave it as-is and emphasize that 
 specifying the dataDir and name is expert level and you have to get it right, 
 but I wanted to get wider exposure to the problem before we push 4.3 out.
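
The kind of per-core file in question looks like this (a hedged example; the 
path and values are made up):

{noformat}
# $SOLR_HOME/collection1/core.properties -- discovered automatically
# under the straw-man proposal, the two lines below would be disallowed:
name=collection1
dataDir=data
{noformat}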

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-4662) Finalize what we're going to do with solr.xml, auto-discovery, config sets.

2013-04-18 Thread Erick Erickson (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-4662?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erick Erickson resolved SOLR-4662.
--

   Resolution: Fixed
Fix Version/s: 5.0
   4.3

Mark checked the code in last night, merged into 4.x and 4.3.

 Finalize what we're going to do with solr.xml, auto-discovery, config sets.
 ---

 Key: SOLR-4662
 URL: https://issues.apache.org/jira/browse/SOLR-4662
 Project: Solr
  Issue Type: Improvement
Affects Versions: 4.3, 5.0
Reporter: Erick Erickson
Assignee: Mark Miller
Priority: Blocker
 Fix For: 4.3, 5.0

 Attachments: SOLR-4662.patch, SOLR-4662.patch, SOLR-4662.patch, 
 SOLR-4662.patch, SOLR-4662.patch, SOLR-4662.patch


 Spinoff from SOLR-4615, breaking it out here so we can address the changes in 
 pieces.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-4662) Finalize what we're going to do with solr.xml, auto-discovery, config sets.

2013-04-18 Thread Erick Erickson (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4662?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13635199#comment-13635199
 ] 

Erick Erickson commented on SOLR-4662:
--

commit tag bot seems to have skipped this...

4.3 : r -  1469148
4.x : r -  1469112
trunk: r - 1469089

 Finalize what we're going to do with solr.xml, auto-discovery, config sets.
 ---

 Key: SOLR-4662
 URL: https://issues.apache.org/jira/browse/SOLR-4662
 Project: Solr
  Issue Type: Improvement
Affects Versions: 4.3, 5.0
Reporter: Erick Erickson
Assignee: Mark Miller
Priority: Blocker
 Fix For: 4.3, 5.0

 Attachments: SOLR-4662.patch, SOLR-4662.patch, SOLR-4662.patch, 
 SOLR-4662.patch, SOLR-4662.patch, SOLR-4662.patch


 Spinoff from SOLR-4615, breaking it out here so we can address the changes in 
 pieces.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (LUCENE-4942) Indexed non-point shapes index excessive terms

2013-04-18 Thread David Smiley (JIRA)
David Smiley created LUCENE-4942:


 Summary: Indexed non-point shapes index excessive terms
 Key: LUCENE-4942
 URL: https://issues.apache.org/jira/browse/LUCENE-4942
 Project: Lucene - Core
  Issue Type: Improvement
  Components: modules/spatial
Reporter: David Smiley


Indexed non-point shapes are comprised of a set of terms that represent grid 
cells.  Cells completely within the shape or cells on the intersecting edge 
that are at the maximum detail depth being indexed for the shape are denoted as 
leaf cells.  Such cells have a trailing '\+' at the end.  _Such tokens are 
actually indexed twice_, one with the leaf byte and one without.

The TermQuery based PrefixTree Strategy doesn't consider the notion of 'leaf' 
cells and so the tokens with '+' are completely redundant.
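
To make the redundancy concrete (the cell labels are made up):

{noformat}
tokens indexed for one leaf cell of a shape, e.g. cell ABCD:
  ABCD     non-leaf form -- the only one a TermQuery-based strategy looks at
  ABCD+    leaf form ('+' marks the leaf) -- redundant for that strategy
{noformat}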

The Recursive [algorithm] based PrefixTree Strategy better supports correct 
search of indexed non-point shapes than TermQuery does and the distinction is 
relevant.  However, the foundational search algorithms used by this strategy 
(Intersects & Contains; the other 2 are based on these) could each be upgraded 
to deal with this correctly.  Not trivial but very doable.

In the end, spatial non-point indexes can probably be trimmed by ~40% by doing 
this.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-4662) Finalize what we're going to do with solr.xml, auto-discovery, config sets.

2013-04-18 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4662?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13635223#comment-13635223
 ] 

Mark Miller commented on SOLR-4662:
---

bq. commit tag bot seems to have skipped this...

It's not running - I've been meaning to move it from its ad hoc running spot (my 
primary dev machine, where I have not been running it lately) to a permanent 
one, but just have not gotten to it yet. Soon though.

 Finalize what we're going to do with solr.xml, auto-discovery, config sets.
 ---

 Key: SOLR-4662
 URL: https://issues.apache.org/jira/browse/SOLR-4662
 Project: Solr
  Issue Type: Improvement
Affects Versions: 4.3, 5.0
Reporter: Erick Erickson
Assignee: Mark Miller
Priority: Blocker
 Fix For: 4.3, 5.0

 Attachments: SOLR-4662.patch, SOLR-4662.patch, SOLR-4662.patch, 
 SOLR-4662.patch, SOLR-4662.patch, SOLR-4662.patch


 Spinoff from SOLR-4615, breaking it out here so we can address the changes in 
 pieces.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-Tests-4.x-java7 - Build # 1164 - Still Failing

2013-04-18 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-4.x-java7/1164/

1 tests failed.
FAILED:  org.apache.lucene.search.join.TestBlockJoin.testEmptyChildFilter

Error Message:


Stack Trace:
java.lang.AssertionError
at 
__randomizedtesting.SeedInfo.seed([A195D04895346701:903EBBAF049541EB]:0)
at 
org.apache.lucene.search.join.ToParentBlockJoinQuery$BlockJoinScorer.nextDoc(ToParentBlockJoinQuery.java:289)
at 
org.apache.lucene.search.ConjunctionScorer.nextDoc(ConjunctionScorer.java:99)
at 
org.apache.lucene.index.FilterAtomicReader$FilterDocsEnum.nextDoc(FilterAtomicReader.java:240)
at 
org.apache.lucene.index.AssertingAtomicReader$AssertingDocsEnum.nextDoc(AssertingAtomicReader.java:252)
at 
org.apache.lucene.search.AssertingIndexSearcher$AssertingScorer.nextDoc(AssertingIndexSearcher.java:295)
at org.apache.lucene.search.Scorer.score(Scorer.java:64)
at 
org.apache.lucene.search.AssertingIndexSearcher$AssertingScorer.score(AssertingIndexSearcher.java:260)
at org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:612)
at 
org.apache.lucene.search.AssertingIndexSearcher.search(AssertingIndexSearcher.java:102)
at org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:309)
at 
org.apache.lucene.search.join.TestBlockJoin.testEmptyChildFilter(TestBlockJoin.java:106)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:601)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1559)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.access$600(RandomizedRunner.java:79)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:773)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:787)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.TestRuleFieldCacheSanity$1.evaluate(TestRuleFieldCacheSanity.java:51)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:70)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:358)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:782)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:442)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:746)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:648)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:682)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:693)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:43)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:70)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)

[jira] [Created] (SOLR-4736) Support group.mincount for Result Grouping

2013-04-18 Thread yuanyun.cn (JIRA)
yuanyun.cn created SOLR-4736:


 Summary: Support group.mincount for Result Grouping
 Key: SOLR-4736
 URL: https://issues.apache.org/jira/browse/SOLR-4736
 Project: Solr
  Issue Type: Bug
  Components: search
Affects Versions: 4.2
Reporter: yuanyun.cn
Priority: Minor
 Fix For: 4.3, 5.0


Result Grouping is a very useful feature: we can use it to find duplicate data 
in the index, but it lacks one feature: group.mincount.

With group.mincount, we could specify that only groups that have a count equal 
to or greater than ${mincount} for the group field will be returned.

Specifically, we could use group.mincount=2 to return only duplicate data.
Could we add this in a future release? Thanks.
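
Proposed usage would look something like this (hypothetical, since the 
parameter does not exist yet; the host, collection, and field are made up):

{noformat}
http://localhost:8983/solr/collection1/select?q=*:*&group=true&group.field=url&group.mincount=2
{noformat}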


--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-4936) docvalues date compression

2013-04-18 Thread Adrien Grand (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-4936?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adrien Grand updated LUCENE-4936:
-

Attachment: LUCENE-4936.patch

Patch:

 * Adds MathUtil.gcd(long, long)

 * Adds GCD compression to Lucene42, Disk and CheapBastard.

 * Improves BaseDocValuesFormatTest which almost only tested TABLE_COMPRESSED 
with Lucene42DVF

 * No more attempts to compress storage when the values are known to be dense, 
such as SORTED ords.

I measured how much slower doc values indexing is with these new checks, and it is 
completely unnoticeable with random or dense values since the GCD quickly 
reaches 1. When the GCD is larger, it only made indexing 2% slower (every doc 
has a single field which is a NumericDocValuesField). So I think it's fine.
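
In sketch form, the GCD idea (illustrative only; the patch is authoritative):

{code}
// writer-side sketch: store min, gcd, and small per-doc quotients
long min = Long.MAX_VALUE;
for (long v : values) min = Math.min(min, v);
long gcd = 0;
for (long v : values) gcd = MathUtil.gcd(gcd, v - min); // gcd(0, x) == x folds in the first value
// encode (v - min) / gcd per doc; the reader reconstructs v = min + quotient * gcd
{code}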

 docvalues date compression
 --

 Key: LUCENE-4936
 URL: https://issues.apache.org/jira/browse/LUCENE-4936
 Project: Lucene - Core
  Issue Type: Improvement
  Components: core/index
Reporter: Robert Muir
Assignee: Adrien Grand
 Attachments: LUCENE-4936.patch, LUCENE-4936.patch


 DocValues fields can be very wasteful if you are storing dates (like solr's 
 TrieDateField does if you enable docvalues) and don't actually need all the 
 precision: e.g. date-only fields like date of birth with no time component, 
 time fields without milliseconds precision, and so on.
 Ideally we'd compute GCD of all the values to save space 
 (numberOfTrailingZeros is not really enough here), but I think we should at 
 least look for values like 8640, 360, and 1000 to be practical.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: [JENKINS] Lucene-Solr-Tests-4.x-java7 - Build # 1164 - Still Failing

2013-04-18 Thread Simon Willnauer
this is a test bug... I will commit a fix in a bit


On Thu, Apr 18, 2013 at 5:09 PM, Apache Jenkins Server 
jenk...@builds.apache.org wrote:

 Build: https://builds.apache.org/job/Lucene-Solr-Tests-4.x-java7/1164/

 1 tests failed.
 FAILED:  org.apache.lucene.search.join.TestBlockJoin.testEmptyChildFilter

 Error Message:


 Stack Trace:
 java.lang.AssertionError
 at
 __randomizedtesting.SeedInfo.seed([A195D04895346701:903EBBAF049541EB]:0)
 at
 org.apache.lucene.search.join.ToParentBlockJoinQuery$BlockJoinScorer.nextDoc(ToParentBlockJoinQuery.java:289)
 at
 org.apache.lucene.search.ConjunctionScorer.nextDoc(ConjunctionScorer.java:99)
 at
 org.apache.lucene.index.FilterAtomicReader$FilterDocsEnum.nextDoc(FilterAtomicReader.java:240)
 at
 org.apache.lucene.index.AssertingAtomicReader$AssertingDocsEnum.nextDoc(AssertingAtomicReader.java:252)
 at
 org.apache.lucene.search.AssertingIndexSearcher$AssertingScorer.nextDoc(AssertingIndexSearcher.java:295)
 at org.apache.lucene.search.Scorer.score(Scorer.java:64)
 at
 org.apache.lucene.search.AssertingIndexSearcher$AssertingScorer.score(AssertingIndexSearcher.java:260)
 at
 org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:612)
 at
 org.apache.lucene.search.AssertingIndexSearcher.search(AssertingIndexSearcher.java:102)
 at
 org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:309)
 at
 org.apache.lucene.search.join.TestBlockJoin.testEmptyChildFilter(TestBlockJoin.java:106)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
 at
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke(Method.java:601)
 at
 com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1559)
 at
 com.carrotsearch.randomizedtesting.RandomizedRunner.access$600(RandomizedRunner.java:79)
 at
 com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:737)
 at
 com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:773)
 at
 com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:787)
 at
 org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
 at
 org.apache.lucene.util.TestRuleFieldCacheSanity$1.evaluate(TestRuleFieldCacheSanity.java:51)
 at
 org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
 at
 com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
 at
 org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
 at
 org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:70)
 at
 org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
 at
 com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
 at
 com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:358)
 at
 com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:782)
 at
 com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:442)
 at
 com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:746)
 at
 com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:648)
 at
 com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:682)
 at
 com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:693)
 at
 org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
 at
 org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
 at
 com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
 at
 com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
 at
 com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
 at
 com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
 at
 org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:43)
 at
 org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
 at
 

[jira] [Commented] (LUCENE-4936) docvalues date compression

2013-04-18 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-4936?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13635310#comment-13635310
 ] 

Robert Muir commented on LUCENE-4936:
-

Looks great! I'm glad you were able to make this fast.

A few ideas:
* I like the switch with corruption-check on DiskDV. Can we easily integrate 
this into Lucene42?
* Can we update the file format docs? (we attempt to describe the numerics 
strategies succinctly here)

I can do a more thorough review and some additional testing later, but this 
looks awesome.

Later we should think about a place (maybe in codec file format docs, maybe 
even NumericDocValuesField?) to add some practical general guidelines for users 
that might not otherwise be intuitive: e.g., if you are putting dates in 
NumericDV, zero out the portions you don't care about (milliseconds, time of 
day, etc.) to save space; indexing as UTC will be a little more efficient than 
with a local offset; and so on.
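
A concrete illustration of that guideline (a sketch, not project code; it 
assumes non-negative UTC epoch millis, and the constants are the standard 
millisecond counts):

  // Zero out precision you don't need before indexing epoch millis.
  static long truncateToDay(long epochMillis) {
      return (epochMillis / 86400000L) * 86400000L;  // drop time of day
  }

  static long truncateToSecond(long epochMillis) {
      return (epochMillis / 1000L) * 1000L;          // drop milliseconds
  }
  // All values then share a large GCD (86400000 or 1000), which the codec can exploit.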

{quote}
Improves BaseDocValuesFormatTest which almost only tested TABLE_COMPRESSED 
with Lucene42DVF
{quote}

Yeah this is a good catch! We should also maybe open an issue to review DiskDV 
and try to make it more efficient. Optimizations like TABLE_COMPRESSED don't 
exist there, I think: it could be handy if someone wants e.g. a smallfloat 
scoring factor. It's nice this patch provides back compat for DiskDV, but that's 
not totally necessary in the future if we want to review and rewrite it. In 
general that codec was just done very quickly and hasn't seen much benchmarking 
or anything: it could use some work.

 docvalues date compression
 --

 Key: LUCENE-4936
 URL: https://issues.apache.org/jira/browse/LUCENE-4936
 Project: Lucene - Core
  Issue Type: Improvement
  Components: core/index
Reporter: Robert Muir
Assignee: Adrien Grand
 Attachments: LUCENE-4936.patch, LUCENE-4936.patch


 DocValues fields can be very wasteful if you are storing dates (like solr's 
 TrieDateField does if you enable docvalues) and don't actually need all the 
 precision: e.g. date-only fields like date of birth with no time component, 
 time fields without milliseconds precision, and so on.
 Ideally we'd compute GCD of all the values to save space 
 (numberOfTrailingZeros is not really enough here), but i think we should at 
 least look for values like 86400000, 3600000, and 1000 to be practical.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-4936) docvalues date compression

2013-04-18 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-4936?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13635329#comment-13635329
 ] 

Robert Muir commented on LUCENE-4936:
-

{quote}
indexing as UTC will be a little more efficient than with local offset, etc.
{quote}

We could probably solve issues like that too (maybe in something like the 
cheap-bastard codec), if we did a first pass to compute min/max
and then did GCD only on the deltas from min... right?
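
Sketched out (again illustrative only, reusing the gcd helper from the sketch 
earlier in this thread; names are made up):

  // Pass 1: find the minimum; pass 2: GCD of deltas from that minimum.
  // A constant offset (e.g. a timezone shift) then costs nothing.
  static long[] deltaGcdEncode(long[] values) {
      long min = Long.MAX_VALUE;
      for (long v : values) min = Math.min(min, v);
      long g = 0;
      for (long v : values) g = gcd(g, v - min);
      long[] out = new long[values.length];
      for (int i = 0; i < values.length; i++) {
          out[i] = (g == 0) ? 0 : (values[i] - min) / g;  // decode: min + out[i] * g
      }
      return out;
  }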

 docvalues date compression
 --

 Key: LUCENE-4936
 URL: https://issues.apache.org/jira/browse/LUCENE-4936
 Project: Lucene - Core
  Issue Type: Improvement
  Components: core/index
Reporter: Robert Muir
Assignee: Adrien Grand
 Attachments: LUCENE-4936.patch, LUCENE-4936.patch


 DocValues fields can be very wasteful if you are storing dates (like solr's 
 TrieDateField does if you enable docvalues) and don't actually need all the 
 precision: e.g. date-only fields like date of birth with no time component, 
 time fields without milliseconds precision, and so on.
 Ideally we'd compute GCD of all the values to save space 
 (numberOfTrailingZeros is not really enough here), but i think we should at 
 least look for values like 86400000, 3600000, and 1000 to be practical.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-4936) docvalues date compression

2013-04-18 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-4936?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13635374#comment-13635374
 ] 

Uwe Schindler commented on LUCENE-4936:
---

bq. indexing as UTC will be a little more efficient than with local offset, etc.

If you use a NumericField and store the long epoch in it, it is UTC as no 
localization involved.

 docvalues date compression
 --

 Key: LUCENE-4936
 URL: https://issues.apache.org/jira/browse/LUCENE-4936
 Project: Lucene - Core
  Issue Type: Improvement
  Components: core/index
Reporter: Robert Muir
Assignee: Adrien Grand
 Attachments: LUCENE-4936.patch, LUCENE-4936.patch


 DocValues fields can be very wasteful if you are storing dates (like solr's 
 TrieDateField does if you enable docvalues) and don't actually need all the 
 precision: e.g. date-only fields like date of birth with no time component, 
 time fields without milliseconds precision, and so on.
 Ideally we'd compute GCD of all the values to save space 
 (numberOfTrailingZeros is not really enough here), but i think we should at 
 least look for values like 86400000, 3600000, and 1000 to be practical.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-4731) Fresh clone of github lucene-solr repo already has modified files somehow

2013-04-18 Thread Hoss Man (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4731?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13635398#comment-13635398
 ] 

Hoss Man commented on SOLR-4731:


It would also be useful to know what exactly git thinks the modification is 
(i.e., what does `git diff` tell you?).


 Fresh clone of github lucene-solr repo already has modified files somehow
 -

 Key: SOLR-4731
 URL: https://issues.apache.org/jira/browse/SOLR-4731
 Project: Solr
  Issue Type: Bug
Reporter: Uri Laserson

 I forked the lucene-solr repo on github.
 Then
 git clone g...@github.com:laserson/lucene-solr.git
 Then `git status` gives me
 $ git status
 # On branch trunk
 # Changes not staged for commit:
 #   (use "git add <file>..." to update what will be committed)
 #   (use "git checkout -- <file>..." to discard changes in working directory)
 #
 # modified:   solr/example/cloud-scripts/zkcli.bat
 #
 no changes added to commit (use "git add" and/or "git commit -a")
 Even though I never touched anything

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-4942) Indexed non-point shapes index excessive terms

2013-04-18 Thread Ryan McKinley (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-4942?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13635391#comment-13635391
 ] 

Ryan McKinley commented on LUCENE-4942:
---

Without the + (or equivalent) how do you know that everything below that is 
covered by the shape?

 Indexed non-point shapes index excessive terms
 --

 Key: LUCENE-4942
 URL: https://issues.apache.org/jira/browse/LUCENE-4942
 Project: Lucene - Core
  Issue Type: Improvement
  Components: modules/spatial
Reporter: David Smiley

 Indexed non-point shapes are comprised of a set of terms that represent grid 
 cells.  Cells completely within the shape, or cells on the intersecting edge 
 that are at the maximum detail depth being indexed for the shape, are denoted 
 as leaf cells.  Such cells have a trailing '\+' at the end.  _Such tokens 
 are actually indexed twice_, once with the leaf byte and once without.
 The TermQuery based PrefixTree Strategy doesn't consider the notion of 'leaf' 
 cells and so the tokens with '+' are completely redundant.
 The Recursive [algorithm] based PrefixTree Strategy better supports correct 
 search of indexed non-point shapes than TermQuery does, and the distinction is 
 relevant.  However, the foundational search algorithms used by this strategy 
 (Intersects & Contains; the other 2 are based on these) could each be 
 upgraded to deal with this correctly.  Not trivial but very doable.
 In the end, spatial non-point indexes can probably be trimmed by ~40% by 
 doing this.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-4936) docvalues date compression

2013-04-18 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-4936?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13635406#comment-13635406
 ] 

Robert Muir commented on LUCENE-4936:
-

But NumericField is totally unrelated to docvalues!

Besides, delta+GCD has other applications than just GMT offset, e.g. Solr's 
CurrencyField (at least in the US, people love prices like 199, 299, 399...): 
in that case it would save 9 bits per value, where it would do nothing with the 
current patch.

I'm not arguing the extra pass should be in the default codec, I just said 
it might be interesting for cheap-bastard or something.


 docvalues date compression
 --

 Key: LUCENE-4936
 URL: https://issues.apache.org/jira/browse/LUCENE-4936
 Project: Lucene - Core
  Issue Type: Improvement
  Components: core/index
Reporter: Robert Muir
Assignee: Adrien Grand
 Attachments: LUCENE-4936.patch, LUCENE-4936.patch


 DocValues fields can be very wasteful if you are storing dates (like solr's 
 TrieDateField does if you enable docvalues) and don't actually need all the 
 precision: e.g. date-only fields like date of birth with no time component, 
 time fields without milliseconds precision, and so on.
 Ideally we'd compute GCD of all the values to save space 
 (numberOfTrailingZeros is not really enough here), but i think we should at 
 least look for values like 86400000, 3600000, and 1000 to be practical.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-949) AnalyzingQueryParser can't work with leading wildcards.

2013-04-18 Thread Tim Allison (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-949?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tim Allison updated LUCENE-949:
---

Attachment: (was: LUCENE-949.patch)

 AnalyzingQueryParser can't work with leading wildcards.
 ---

 Key: LUCENE-949
 URL: https://issues.apache.org/jira/browse/LUCENE-949
 Project: Lucene - Core
  Issue Type: Bug
  Components: core/queryparser
Affects Versions: 2.2
Reporter: Stefan Klein

 The getWildcardQuery method in AnalyzingQueryParser.java needs the following 
 changes to accept leading wildcards:
   protected Query getWildcardQuery(String field, String termStr) throws ParseException
   {
       String useTermStr = termStr;
       String leadingWildcard = null;
       if ("*".equals(field))
       {
           if ("*".equals(useTermStr))
               return new MatchAllDocsQuery();
       }
       boolean hasLeadingWildcard = (useTermStr.startsWith("*") ||
               useTermStr.startsWith("?")) ? true : false;
       if (!getAllowLeadingWildcard() && hasLeadingWildcard)
           throw new ParseException("'*' or '?' not allowed as " +
                   "first character in WildcardQuery");
       if (getLowercaseExpandedTerms())
       {
           useTermStr = useTermStr.toLowerCase();
       }
       if (hasLeadingWildcard)
       {
           leadingWildcard = useTermStr.substring(0, 1);
           useTermStr = useTermStr.substring(1);
       }
       List tlist = new ArrayList();
       List wlist = new ArrayList();
       /*
        * somewhat a hack: find/store wildcard chars in order to put them back
        * after analyzing
        */
       boolean isWithinToken = (!useTermStr.startsWith("?") &&
               !useTermStr.startsWith("*"));
       isWithinToken = true;
       StringBuffer tmpBuffer = new StringBuffer();
       char[] chars = useTermStr.toCharArray();
       for (int i = 0; i < useTermStr.length(); i++)
       {
           if (chars[i] == '?' || chars[i] == '*')
           {
               if (isWithinToken)
               {
                   tlist.add(tmpBuffer.toString());
                   tmpBuffer.setLength(0);
               }
               isWithinToken = false;
           }
           else
           {
               if (!isWithinToken)
               {
                   wlist.add(tmpBuffer.toString());
                   tmpBuffer.setLength(0);
               }
               isWithinToken = true;
           }
           tmpBuffer.append(chars[i]);
       }
       if (isWithinToken)
       {
           tlist.add(tmpBuffer.toString());
       }
       else
       {
           wlist.add(tmpBuffer.toString());
       }
       // get Analyzer from superclass and tokenize the term
       TokenStream source = getAnalyzer().tokenStream(field, new StringReader(useTermStr));
       org.apache.lucene.analysis.Token t;
       int countTokens = 0;
       while (true)
       {
           try
           {
               t = source.next();
           }
           catch (IOException e)
           {
               t = null;
           }
           if (t == null)
           {
               break;
           }
           if (!"".equals(t.termText()))
           {
               try
               {
                   tlist.set(countTokens++, t.termText());
               }
               catch (IndexOutOfBoundsException ioobe)
               {
                   countTokens = -1;
               }
           }
       }
       try
       {
           source.close();
       }
       catch (IOException e)
       {
           // ignore
       }
       if (countTokens != tlist.size())
       {
           /*
            * this means that the analyzer used either added or consumed

[jira] [Updated] (LUCENE-949) AnalyzingQueryParser can't work with leading wildcards.

2013-04-18 Thread Tim Allison (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-949?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tim Allison updated LUCENE-949:
---

Attachment: (was: AnalyzingQueryParser.java)

 AnalyzingQueryParser can't work with leading wildcards.
 ---

 Key: LUCENE-949
 URL: https://issues.apache.org/jira/browse/LUCENE-949
 Project: Lucene - Core
  Issue Type: Bug
  Components: core/queryparser
Affects Versions: 2.2
Reporter: Stefan Klein

 The getWildcardQuery method in AnalyzingQueryParser.java needs the following 
 changes to accept leading wildcards:
   protected Query getWildcardQuery(String field, String termStr) throws ParseException
   {
       String useTermStr = termStr;
       String leadingWildcard = null;
       if ("*".equals(field))
       {
           if ("*".equals(useTermStr))
               return new MatchAllDocsQuery();
       }
       boolean hasLeadingWildcard = (useTermStr.startsWith("*") ||
               useTermStr.startsWith("?")) ? true : false;
       if (!getAllowLeadingWildcard() && hasLeadingWildcard)
           throw new ParseException("'*' or '?' not allowed as " +
                   "first character in WildcardQuery");
       if (getLowercaseExpandedTerms())
       {
           useTermStr = useTermStr.toLowerCase();
       }
       if (hasLeadingWildcard)
       {
           leadingWildcard = useTermStr.substring(0, 1);
           useTermStr = useTermStr.substring(1);
       }
       List tlist = new ArrayList();
       List wlist = new ArrayList();
       /*
        * somewhat a hack: find/store wildcard chars in order to put them back
        * after analyzing
        */
       boolean isWithinToken = (!useTermStr.startsWith("?") &&
               !useTermStr.startsWith("*"));
       isWithinToken = true;
       StringBuffer tmpBuffer = new StringBuffer();
       char[] chars = useTermStr.toCharArray();
       for (int i = 0; i < useTermStr.length(); i++)
       {
           if (chars[i] == '?' || chars[i] == '*')
           {
               if (isWithinToken)
               {
                   tlist.add(tmpBuffer.toString());
                   tmpBuffer.setLength(0);
               }
               isWithinToken = false;
           }
           else
           {
               if (!isWithinToken)
               {
                   wlist.add(tmpBuffer.toString());
                   tmpBuffer.setLength(0);
               }
               isWithinToken = true;
           }
           tmpBuffer.append(chars[i]);
       }
       if (isWithinToken)
       {
           tlist.add(tmpBuffer.toString());
       }
       else
       {
           wlist.add(tmpBuffer.toString());
       }
       // get Analyzer from superclass and tokenize the term
       TokenStream source = getAnalyzer().tokenStream(field, new StringReader(useTermStr));
       org.apache.lucene.analysis.Token t;
       int countTokens = 0;
       while (true)
       {
           try
           {
               t = source.next();
           }
           catch (IOException e)
           {
               t = null;
           }
           if (t == null)
           {
               break;
           }
           if (!"".equals(t.termText()))
           {
               try
               {
                   tlist.set(countTokens++, t.termText());
               }
               catch (IndexOutOfBoundsException ioobe)
               {
                   countTokens = -1;
               }
           }
       }
       try
       {
           source.close();
       }
       catch (IOException e)
       {
           // ignore
       }
       if (countTokens != tlist.size())
       {
           /*
            * this means that the analyzer used either added or
 

[jira] [Updated] (LUCENE-949) AnalyzingQueryParser can't work with leading wildcards.

2013-04-18 Thread Tim Allison (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-949?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tim Allison updated LUCENE-949:
---

Attachment: LUCENE-949.patch

 AnalyzingQueryParser can't work with leading wildcards.
 ---

 Key: LUCENE-949
 URL: https://issues.apache.org/jira/browse/LUCENE-949
 Project: Lucene - Core
  Issue Type: Bug
  Components: core/queryparser
Affects Versions: 2.2
Reporter: Stefan Klein
 Attachments: LUCENE-949.patch


 The getWildcardQuery method in AnalyzingQueryParser.java needs the following 
 changes to accept leading wildcards:
   protected Query getWildcardQuery(String field, String termStr) throws ParseException
   {
       String useTermStr = termStr;
       String leadingWildcard = null;
       if ("*".equals(field))
       {
           if ("*".equals(useTermStr))
               return new MatchAllDocsQuery();
       }
       boolean hasLeadingWildcard = (useTermStr.startsWith("*") ||
               useTermStr.startsWith("?")) ? true : false;
       if (!getAllowLeadingWildcard() && hasLeadingWildcard)
           throw new ParseException("'*' or '?' not allowed as " +
                   "first character in WildcardQuery");
       if (getLowercaseExpandedTerms())
       {
           useTermStr = useTermStr.toLowerCase();
       }
       if (hasLeadingWildcard)
       {
           leadingWildcard = useTermStr.substring(0, 1);
           useTermStr = useTermStr.substring(1);
       }
       List tlist = new ArrayList();
       List wlist = new ArrayList();
       /*
        * somewhat a hack: find/store wildcard chars in order to put them back
        * after analyzing
        */
       boolean isWithinToken = (!useTermStr.startsWith("?") &&
               !useTermStr.startsWith("*"));
       isWithinToken = true;
       StringBuffer tmpBuffer = new StringBuffer();
       char[] chars = useTermStr.toCharArray();
       for (int i = 0; i < useTermStr.length(); i++)
       {
           if (chars[i] == '?' || chars[i] == '*')
           {
               if (isWithinToken)
               {
                   tlist.add(tmpBuffer.toString());
                   tmpBuffer.setLength(0);
               }
               isWithinToken = false;
           }
           else
           {
               if (!isWithinToken)
               {
                   wlist.add(tmpBuffer.toString());
                   tmpBuffer.setLength(0);
               }
               isWithinToken = true;
           }
           tmpBuffer.append(chars[i]);
       }
       if (isWithinToken)
       {
           tlist.add(tmpBuffer.toString());
       }
       else
       {
           wlist.add(tmpBuffer.toString());
       }
       // get Analyzer from superclass and tokenize the term
       TokenStream source = getAnalyzer().tokenStream(field, new StringReader(useTermStr));
       org.apache.lucene.analysis.Token t;
       int countTokens = 0;
       while (true)
       {
           try
           {
               t = source.next();
           }
           catch (IOException e)
           {
               t = null;
           }
           if (t == null)
           {
               break;
           }
           if (!"".equals(t.termText()))
           {
               try
               {
                   tlist.set(countTokens++, t.termText());
               }
               catch (IndexOutOfBoundsException ioobe)
               {
                   countTokens = -1;
               }
           }
       }
       try
       {
           source.close();
       }
       catch (IOException e)
       {
           // ignore
       }
       if (countTokens != tlist.size())
       {
           /*
            * this means that the analyzer used

[jira] [Commented] (LUCENE-949) AnalyzingQueryParser can't work with leading wildcards.

2013-04-18 Thread Tim Allison (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-949?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13635415#comment-13635415
 ] 

Tim Allison commented on LUCENE-949:


Thank you very much for the feedback. I refactored a bit and added escaped 
wildcard handling.  Let me know how this looks.

 AnalyzingQueryParser can't work with leading wildcards.
 ---

 Key: LUCENE-949
 URL: https://issues.apache.org/jira/browse/LUCENE-949
 Project: Lucene - Core
  Issue Type: Bug
  Components: core/queryparser
Affects Versions: 2.2
Reporter: Stefan Klein
 Attachments: LUCENE-949.patch


 The getWildcardQuery method in AnalyzingQueryParser.java needs the following 
 changes to accept leading wildcards:
   protected Query getWildcardQuery(String field, String termStr) throws ParseException
   {
       String useTermStr = termStr;
       String leadingWildcard = null;
       if ("*".equals(field))
       {
           if ("*".equals(useTermStr))
               return new MatchAllDocsQuery();
       }
       boolean hasLeadingWildcard = (useTermStr.startsWith("*") ||
               useTermStr.startsWith("?")) ? true : false;
       if (!getAllowLeadingWildcard() && hasLeadingWildcard)
           throw new ParseException("'*' or '?' not allowed as " +
                   "first character in WildcardQuery");
       if (getLowercaseExpandedTerms())
       {
           useTermStr = useTermStr.toLowerCase();
       }
       if (hasLeadingWildcard)
       {
           leadingWildcard = useTermStr.substring(0, 1);
           useTermStr = useTermStr.substring(1);
       }
       List tlist = new ArrayList();
       List wlist = new ArrayList();
       /*
        * somewhat a hack: find/store wildcard chars in order to put them back
        * after analyzing
        */
       boolean isWithinToken = (!useTermStr.startsWith("?") &&
               !useTermStr.startsWith("*"));
       isWithinToken = true;
       StringBuffer tmpBuffer = new StringBuffer();
       char[] chars = useTermStr.toCharArray();
       for (int i = 0; i < useTermStr.length(); i++)
       {
           if (chars[i] == '?' || chars[i] == '*')
           {
               if (isWithinToken)
               {
                   tlist.add(tmpBuffer.toString());
                   tmpBuffer.setLength(0);
               }
               isWithinToken = false;
           }
           else
           {
               if (!isWithinToken)
               {
                   wlist.add(tmpBuffer.toString());
                   tmpBuffer.setLength(0);
               }
               isWithinToken = true;
           }
           tmpBuffer.append(chars[i]);
       }
       if (isWithinToken)
       {
           tlist.add(tmpBuffer.toString());
       }
       else
       {
           wlist.add(tmpBuffer.toString());
       }
       // get Analyzer from superclass and tokenize the term
       TokenStream source = getAnalyzer().tokenStream(field, new StringReader(useTermStr));
       org.apache.lucene.analysis.Token t;
       int countTokens = 0;
       while (true)
       {
           try
           {
               t = source.next();
           }
           catch (IOException e)
           {
               t = null;
           }
           if (t == null)
           {
               break;
           }
           if (!"".equals(t.termText()))
           {
               try
               {
                   tlist.set(countTokens++, t.termText());
               }
               catch (IndexOutOfBoundsException ioobe)
               {
                   countTokens = -1;
               }
           }
       }
       try
       {
           source.close();
       }
       catch (IOException e)
       {
           // ignore
   

[jira] [Commented] (LUCENE-4936) docvalues date compression

2013-04-18 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-4936?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13635423#comment-13635423
 ] 

Robert Muir commented on LUCENE-4936:
-

Sorry, I was thinking about car prices when I said 9bpv, but you get the drift :)

 docvalues date compression
 --

 Key: LUCENE-4936
 URL: https://issues.apache.org/jira/browse/LUCENE-4936
 Project: Lucene - Core
  Issue Type: Improvement
  Components: core/index
Reporter: Robert Muir
Assignee: Adrien Grand
 Attachments: LUCENE-4936.patch, LUCENE-4936.patch


 DocValues fields can be very wasteful if you are storing dates (like solr's 
 TrieDateField does if you enable docvalues) and don't actually need all the 
 precision: e.g. date-only fields like date of birth with no time component, 
 time fields without milliseconds precision, and so on.
 Ideally we'd compute GCD of all the values to save space 
 (numberOfTrailingZeros is not really enough here), but i think we should at 
 least look for values like 86400000, 3600000, and 1000 to be practical.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-949) AnalyzingQueryParser can't work with leading wildcards.

2013-04-18 Thread Steve Rowe (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-949?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13635425#comment-13635425
 ] 

Steve Rowe commented on LUCENE-949:
---

Tim, thanks, I'll take a look at your new patch later today.

FYI, you shouldn't remove old patches - when you upload a file with the same 
name, the older versions still appear, but their names appear in grey, and you 
can see the date/time each was uploaded.  See e.g. SOLR-3251, where I've 
uploaded the same-named patch multiple times.


 AnalyzingQueryParser can't work with leading wildcards.
 ---

 Key: LUCENE-949
 URL: https://issues.apache.org/jira/browse/LUCENE-949
 Project: Lucene - Core
  Issue Type: Bug
  Components: core/queryparser
Affects Versions: 2.2
Reporter: Stefan Klein
 Attachments: LUCENE-949.patch


 The getWildcardQuery method in AnalyzingQueryParser.java needs the following 
 changes to accept leading wildcards:
   protected Query getWildcardQuery(String field, String termStr) throws ParseException
   {
       String useTermStr = termStr;
       String leadingWildcard = null;
       if ("*".equals(field))
       {
           if ("*".equals(useTermStr))
               return new MatchAllDocsQuery();
       }
       boolean hasLeadingWildcard = (useTermStr.startsWith("*") ||
               useTermStr.startsWith("?")) ? true : false;
       if (!getAllowLeadingWildcard() && hasLeadingWildcard)
           throw new ParseException("'*' or '?' not allowed as " +
                   "first character in WildcardQuery");
       if (getLowercaseExpandedTerms())
       {
           useTermStr = useTermStr.toLowerCase();
       }
       if (hasLeadingWildcard)
       {
           leadingWildcard = useTermStr.substring(0, 1);
           useTermStr = useTermStr.substring(1);
       }
       List tlist = new ArrayList();
       List wlist = new ArrayList();
       /*
        * somewhat a hack: find/store wildcard chars in order to put them back
        * after analyzing
        */
       boolean isWithinToken = (!useTermStr.startsWith("?") &&
               !useTermStr.startsWith("*"));
       isWithinToken = true;
       StringBuffer tmpBuffer = new StringBuffer();
       char[] chars = useTermStr.toCharArray();
       for (int i = 0; i < useTermStr.length(); i++)
       {
           if (chars[i] == '?' || chars[i] == '*')
           {
               if (isWithinToken)
               {
                   tlist.add(tmpBuffer.toString());
                   tmpBuffer.setLength(0);
               }
               isWithinToken = false;
           }
           else
           {
               if (!isWithinToken)
               {
                   wlist.add(tmpBuffer.toString());
                   tmpBuffer.setLength(0);
               }
               isWithinToken = true;
           }
           tmpBuffer.append(chars[i]);
       }
       if (isWithinToken)
       {
           tlist.add(tmpBuffer.toString());
       }
       else
       {
           wlist.add(tmpBuffer.toString());
       }
       // get Analyzer from superclass and tokenize the term
       TokenStream source = getAnalyzer().tokenStream(field, new StringReader(useTermStr));
       org.apache.lucene.analysis.Token t;
       int countTokens = 0;
       while (true)
       {
           try
           {
               t = source.next();
           }
           catch (IOException e)
           {
               t = null;
           }
           if (t == null)
           {
               break;
           }
           if (!"".equals(t.termText()))
           {
               try
               {
                   tlist.set(countTokens++, t.termText());
               }
               catch (IndexOutOfBoundsException ioobe)
               {
                   countTokens = -1;
               }
 

[jira] [Updated] (SOLR-4622) deprecate usage of DEFAULT_HOST_CONTEXT and DEFAULT_HOST_PORT

2013-04-18 Thread Hoss Man (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-4622?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hoss Man updated SOLR-4622:
---

Attachment: SOLR-4622__phase3_trunk_only.patch

attached a trunk-only patch implementing phase #3 of the original proposal. 
(been holding off while a lot of CoreContainer/solr.xml stuff was up in the 
air)

All tests pass; I'll commit once 4.3 is closer to fully baked, so as to not make 
things hard for any last-minute bug fix backports.

 deprecate usage of DEFAULT_HOST_CONTEXT and DEFAULT_HOST_PORT
 -

 Key: SOLR-4622
 URL: https://issues.apache.org/jira/browse/SOLR-4622
 Project: Solr
  Issue Type: Improvement
Reporter: Hoss Man
Assignee: Hoss Man
 Fix For: 4.3, 5.0

 Attachments: SOLR-4622.patch, SOLR-4622__phase3_trunk_only.patch


 Frequently, people who try to use SolrCloud in a different servlet container 
 (or in a jetty other than the preconfigured one supplied) can be easily 
 confused as to why/how/where solr is trying to use hostContext=solr and 
 hostPort=8983.
 these can be explicitly overridden in solr.xml, and the defaults are set up to 
 read from system properties, but we should eliminate the hardcoded defaults 
 from the java code (where most users can't even grep for them) and instead 
 specify defaults for the sys properties in the example configs.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-4942) Indexed non-point shapes index excessive terms

2013-04-18 Thread David Smiley (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-4942?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13635440#comment-13635440
 ] 

David Smiley commented on LUCENE-4942:
--

You don't ;-)   This is why I believe TermQueryStrategy is fundamentally flawed 
for indexing non-point shapes.  Yet AFAIK it's the choice ElasticSearch wants 
to use (or at least wanted).  In ES if you indexed a country and your search 
box is something small in the middle of that country, you *won't* match that 
country.

To be clear I'm recommending two things:
* Have TermQueryStrategy _not_ index its leaves with the '+' -- it doesn't use 
them.
* Have RecursivePrefixTreeStrategy _only_ index the leaf versions of those leaf 
cells, not a redundant non-leaf version.  Some non-trivial code needs to change 
in a few of the search algorithms.

In *both* cases, the semantics are the same; no more and no fewer documents 
match.  But the spatial index is ~40% smaller, I figure, with faster indexing as 
well.  It's _possible_ some of the search algorithms for 
RecursivePrefixTreeStrategy will be slightly slower, since sometimes they'll 
need to visit an additional token at certain parts of the algorithms to check 
for both leaf and non-leaf indexed cells, but I think it'll be quite negligible.

 Indexed non-point shapes index excessive terms
 --

 Key: LUCENE-4942
 URL: https://issues.apache.org/jira/browse/LUCENE-4942
 Project: Lucene - Core
  Issue Type: Improvement
  Components: modules/spatial
Reporter: David Smiley

 Indexed non-point shapes are comprised of a set of terms that represent grid 
 cells.  Cells completely within the shape, or cells on the intersecting edge 
 that are at the maximum detail depth being indexed for the shape, are denoted 
 as leaf cells.  Such cells have a trailing '\+' at the end.  _Such tokens 
 are actually indexed twice_, once with the leaf byte and once without.
 The TermQuery based PrefixTree Strategy doesn't consider the notion of 'leaf' 
 cells and so the tokens with '+' are completely redundant.
 The Recursive [algorithm] based PrefixTree Strategy better supports correct 
 search of indexed non-point shapes than TermQuery does, and the distinction is 
 relevant.  However, the foundational search algorithms used by this strategy 
 (Intersects & Contains; the other 2 are based on these) could each be 
 upgraded to deal with this correctly.  Not trivial but very doable.
 In the end, spatial non-point indexes can probably be trimmed by ~40% by 
 doing this.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-4936) docvalues date compression

2013-04-18 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-4936?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13635444#comment-13635444
 ] 

Uwe Schindler commented on LUCENE-4936:
---

bq. But NumericField is totally unrelated to docvalues!

That's clear. I just said: if you use a LONG docvalues field and store the long 
epoch, it's always timezone-less. That was what I wanted to say. This applies to 
Solr, too.

 docvalues date compression
 --

 Key: LUCENE-4936
 URL: https://issues.apache.org/jira/browse/LUCENE-4936
 Project: Lucene - Core
  Issue Type: Improvement
  Components: core/index
Reporter: Robert Muir
Assignee: Adrien Grand
 Attachments: LUCENE-4936.patch, LUCENE-4936.patch


 DocValues fields can be very wasteful if you are storing dates (like solr's 
 TrieDateField does if you enable docvalues) and don't actually need all the 
 precision: e.g. date-only fields like date of birth with no time component, 
 time fields without milliseconds precision, and so on.
 Ideally we'd compute GCD of all the values to save space 
 (numberOfTrailingZeros is not really enough here), but i think we should at 
 least look for values like 86400000, 3600000, and 1000 to be practical.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-4731) Fresh clone of github lucene-solr repo already has modified files somehow

2013-04-18 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4731?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13635467#comment-13635467
 ] 

Uwe Schindler commented on SOLR-4731:
-

I am quite sure this is a newline issue, but it would really be good to see the 
changes.

The reason for this problem might be git's different handling of newlines. 
zkcli.bat is a Windows-only file, so in SVN it is marked as eol-style:CRLF; by 
contrast, zkcli.sh is marked as eol-style:LF. Default source files without 
platform dependence are marked as eol-style:native. I think git cannot 
handle that.
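
One way to check the property from an svn checkout (standard svn command):

  svn propget svn:eol-style solr/example/cloud-scripts/zkcli.bat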

 Fresh clone of github lucene-solr repo already has modified files somehow
 -

 Key: SOLR-4731
 URL: https://issues.apache.org/jira/browse/SOLR-4731
 Project: Solr
  Issue Type: Bug
Reporter: Uri Laserson

 I forked the lucene-solr repo on github.
 Then
 git clone g...@github.com:laserson/lucene-solr.git
 Then `git status` gives me
 $ git status
 # On branch trunk
 # Changes not staged for commit:
 #   (use "git add <file>..." to update what will be committed)
 #   (use "git checkout -- <file>..." to discard changes in working directory)
 #
 # modified:   solr/example/cloud-scripts/zkcli.bat
 #
 no changes added to commit (use "git add" and/or "git commit -a")
 Even though I never touched anything

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-4942) Indexed non-point shapes index excessive terms

2013-04-18 Thread Ryan McKinley (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-4942?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13635473#comment-13635473
 ] 

Ryan McKinley commented on LUCENE-4942:
---

I see -- so only index the leaves and traverse the terms for each query rather 
than a pile of term queries.

Sounds good, but it seems like benchmarking is the only way to know if it is a 
reasonable tradeoff! 

 Indexed non-point shapes index excessive terms
 --

 Key: LUCENE-4942
 URL: https://issues.apache.org/jira/browse/LUCENE-4942
 Project: Lucene - Core
  Issue Type: Improvement
  Components: modules/spatial
Reporter: David Smiley

 Indexed non-point shapes are comprised of a set of terms that represent grid 
 cells.  Cells completely within the shape, or cells on the intersecting edge 
 that are at the maximum detail depth being indexed for the shape, are denoted 
 as leaf cells.  Such cells have a trailing '\+' at the end.  _Such tokens 
 are actually indexed twice_, once with the leaf byte and once without.
 The TermQuery based PrefixTree Strategy doesn't consider the notion of 'leaf' 
 cells and so the tokens with '+' are completely redundant.
 The Recursive [algorithm] based PrefixTree Strategy better supports correct 
 search of indexed non-point shapes than TermQuery does, and the distinction is 
 relevant.  However, the foundational search algorithms used by this strategy 
 (Intersects & Contains; the other 2 are based on these) could each be 
 upgraded to deal with this correctly.  Not trivial but very doable.
 In the end, spatial non-point indexes can probably be trimmed by ~40% by 
 doing this.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-4942) Indexed non-point shapes index excessive terms

2013-04-18 Thread David Smiley (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-4942?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13635509#comment-13635509
 ] 

David Smiley commented on LUCENE-4942:
--

There definitely needs to be benchmarking for spatial; but I feel confident in 
this case that it'll be well worth it for RPT; I'm quite familiar with the 
algorithms in there.  It's an unquestionable win-win for TermQueryStrategy.

 Indexed non-point shapes index excessive terms
 --

 Key: LUCENE-4942
 URL: https://issues.apache.org/jira/browse/LUCENE-4942
 Project: Lucene - Core
  Issue Type: Improvement
  Components: modules/spatial
Reporter: David Smiley

 Indexed non-point shapes are comprised of a set of terms that represent grid 
 cells.  Cells completely within the shape, or cells on the intersecting edge 
 that are at the maximum detail depth being indexed for the shape, are denoted 
 as leaf cells.  Such cells have a trailing '\+' at the end.  _Such tokens 
 are actually indexed twice_, once with the leaf byte and once without.
 The TermQuery based PrefixTree Strategy doesn't consider the notion of 'leaf' 
 cells and so the tokens with '+' are completely redundant.
 The Recursive [algorithm] based PrefixTree Strategy better supports correct 
 search of indexed non-point shapes than TermQuery does, and the distinction is 
 relevant.  However, the foundational search algorithms used by this strategy 
 (Intersects & Contains; the other 2 are based on these) could each be 
 upgraded to deal with this correctly.  Not trivial but very doable.
 In the end, spatial non-point indexes can probably be trimmed by ~40% by 
 doing this.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: Solr on AWS EC2 machine

2013-04-18 Thread Bill Au
So you are not using an elastic IP.  Take a look at this, which is available in
4.2:

https://issues.apache.org/jira/browse/SOLR-4078


On Wed, Apr 17, 2013 at 4:04 AM, Piyush piy...@istream.com wrote:

 I'm facing a problem with the setup of SolrCloud on an AWS EC2 machine.
 The scenario is as follows:

 I have three servers for zookeeper and solr.

 Each server has zookeeper running on it.
 When I start Solr with the zookeeper hosts information, it starts and works as
 expected.

 The problem is that zookeeper, when generating the cluster information,
 uses the private IP of the servers, and thus I cannot query it using SolrJ,
 which cannot resolve the private IP.
 For example:
 server1: private IP ip-a,b,c,d
 public IP: u,v,w,x
 Zookeeper recognizes the solr instance by the private IP (which obviously
 won't be visible from outside the EC2 machines)

 The cluster information looks something like this:
 live nodes:[10.165.15.104:8983_solr]
 collections:{vicon=DocCollection(vicon)={
   "shards":{"shard1":{
       "range":"80000000-7fffffff",
       "state":"active",
       "replicas":{"10.165.15.104:8983_solr_vicon":{
           "shard":"shard1",
           "state":"down",
           "core":"vicon",
           "collection":"vicon",
           "node_name":"10.165.15.104:8983_solr",
           "base_url":"http://10.165.15.104:8983/solr",
           "leader":"true"}}}},
   "router":"compositeId"}, collection1=DocCollection(collection1)={
   "shards":{"shard1":{
       "range":"80000000-7fffffff",
       "state":"active",
       "replicas":{"10.165.15.104:8983_solr_collection1":{
           "shard":"shard1",
           "state":"down",
           "core":"collection1",
           "collection":"collection1",
           "node_name":"10.165.15.104:8983_solr",
           "base_url":"http://10.165.15.104:8983/solr",
           "leader":"true"}}}},
   "router":"compositeId"}, collections=DocCollection(collections)={
   "shards":{"shard1":{
       "range":"80000000-7fffffff",
       "state":"active",
       "replicas":{
         "10.165.15.104:8983_solr_collections":{
           "shard":"shard1",
           "state":"active",
           "core":"collections",
           "collection":"collections",
           "node_name":"10.165.15.104:8983_solr",
           "base_url":"http://10.165.15.104:8983/solr",
           "leader":"true"},
         "10.147.129.56:8983_solr_collections":{
           "shard":"shard1",
           "state":"down",
           "core":"collections",
           "collection":"collections",
           "node_name":"10.147.129.56:8983_solr",
           "base_url":"http://10.147.129.56:8983/solr"}}}},
   "router":"compositeId"}}
 Live nodes IP is the private IP and not the public one.

 Is there any way we can get zookeeper to store cluster information as
 the host name rather than the IP? If that cannot be done, how can I run
 SolrCloud on an AWS EC2 machine?



 Thanks and regards,
 Piyush



[JENKINS] Lucene-Solr-4.x-Windows (64bit/jdk1.6.0_45) - Build # 2720 - Failure!

2013-04-18 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-4.x-Windows/2720/
Java: 64bit/jdk1.6.0_45 -XX:+UseSerialGC

No tests ran.

Build Log:
[...truncated 168 lines...]


-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

Re: Solr on AWS EC2 machine

2013-04-18 Thread Mark Miller
We guess the host address to use by default - if you want or need to use a 
different host, just override it per node in solr.xml (or by sys prop 
substitution within solr.xml) - simply set the host attribute on the solr node 
to the desired host name.
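
For reference, a minimal sketch of that override in a 4.x-style solr.xml (the 
host value is a placeholder for the node's public DNS name; the other 
attributes follow the stock example config):

  <solr persistent="true">
    <!-- host / hostPort / hostContext control the address this node
         registers in ZooKeeper -->
    <cores adminPath="/cores"
           host="ec2-x-y-z-w.compute-1.amazonaws.com"
           hostPort="${jetty.port:8983}"
           hostContext="solr">
      <core name="collection1" instanceDir="collection1"/>
    </cores>
  </solr>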

- Mark

On Apr 17, 2013, at 4:04 AM, Piyush piy...@istream.com wrote:

 I'm facing a problem with the setup of SolrCloud on an AWS EC2 machine.
 The scenario is as follows:

 I have three servers for zookeeper and solr.

 Each server has zookeeper running on it.
 When I start Solr with the zookeeper hosts information, it starts and works as 
 expected.

 The problem is that zookeeper, when generating the cluster information, 
 uses the private IP of the servers, and thus I cannot query it using SolrJ, 
 which cannot resolve the private IP.
 For example:
 server1: private IP ip-a,b,c,d
 public IP: u,v,w,x
 Zookeeper recognizes the solr instance by the private IP (which obviously 
 won't be visible from outside the EC2 machines)
 
 The cluster information looks something like this:
 live nodes:[10.165.15.104:8983_solr]
 collections:{vicon=DocCollection(vicon)={
   "shards":{"shard1":{
       "range":"80000000-7fffffff",
       "state":"active",
       "replicas":{"10.165.15.104:8983_solr_vicon":{
           "shard":"shard1",
           "state":"down",
           "core":"vicon",
           "collection":"vicon",
           "node_name":"10.165.15.104:8983_solr",
           "base_url":"http://10.165.15.104:8983/solr",
           "leader":"true"}}}},
   "router":"compositeId"}, collection1=DocCollection(collection1)={
   "shards":{"shard1":{
       "range":"80000000-7fffffff",
       "state":"active",
       "replicas":{"10.165.15.104:8983_solr_collection1":{
           "shard":"shard1",
           "state":"down",
           "core":"collection1",
           "collection":"collection1",
           "node_name":"10.165.15.104:8983_solr",
           "base_url":"http://10.165.15.104:8983/solr",
           "leader":"true"}}}},
   "router":"compositeId"}, collections=DocCollection(collections)={
   "shards":{"shard1":{
       "range":"80000000-7fffffff",
       "state":"active",
       "replicas":{
         "10.165.15.104:8983_solr_collections":{
           "shard":"shard1",
           "state":"active",
           "core":"collections",
           "collection":"collections",
           "node_name":"10.165.15.104:8983_solr",
           "base_url":"http://10.165.15.104:8983/solr",
           "leader":"true"},
         "10.147.129.56:8983_solr_collections":{
           "shard":"shard1",
           "state":"down",
           "core":"collections",
           "collection":"collections",
           "node_name":"10.147.129.56:8983_solr",
           "base_url":"http://10.147.129.56:8983/solr"}}}},
   "router":"compositeId"}}
 Live nodes IP is the private IP and not the public one.
 
 Is there any way we can get zookeeper to store cluster information as 
 the host name rather than the IP? If that cannot be done, how can I run 
 SolrCloud on an AWS EC2 machine?
 
 
 
 Thanks and regards,
 Piyush


-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-2079) Expose HttpServletRequest object from SolrQueryRequest object

2013-04-18 Thread JIRA

[ 
https://issues.apache.org/jira/browse/SOLR-2079?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13635572#comment-13635572
 ] 

Tomás Fernández Löbbe commented on SOLR-2079:
-

I would find this very useful, especially for custom components that may require 
the original request information, like headers. Right now there are no good 
options to get this information: it is not available out of the box, and there is 
no easy way to extend SolrRequestParsers or SolrDispatchFilter (without 
recompiling/redeploying) to customize this parsing. 
I like the proposal of adding the object to the SolrRequest context. I'm 
attaching a possible solution. 
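
To make the use case concrete, a sketch of a custom component reading the 
servlet request from the context (the context key "httpRequest" and the class 
name are assumptions for illustration; the actual key depends on the patch):

  import java.io.IOException;
  import javax.servlet.http.HttpServletRequest;
  import org.apache.solr.handler.component.ResponseBuilder;
  import org.apache.solr.handler.component.SearchComponent;

  public class ClientIpComponent extends SearchComponent {
      @Override
      public void prepare(ResponseBuilder rb) throws IOException {
          // Assumption: the dispatch filter stored the servlet request here.
          Object o = rb.req.getContext().get("httpRequest");
          if (o instanceof HttpServletRequest) {
              // e.g. the client IP for geotargeting
              rb.rsp.add("clientIp", ((HttpServletRequest) o).getRemoteAddr());
          }
      }

      @Override
      public void process(ResponseBuilder rb) throws IOException {
          // no-op: this sketch only annotates the response in prepare()
      }

      @Override
      public String getDescription() { return "Exposes the client IP"; }

      @Override
      public String getSource() { return null; }
  }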

 Expose HttpServletRequest object from SolrQueryRequest object
 -

 Key: SOLR-2079
 URL: https://issues.apache.org/jira/browse/SOLR-2079
 Project: Solr
  Issue Type: Improvement
  Components: Response Writers, search
Reporter: Chris A. Mattmann
 Fix For: 4.3

 Attachments: SOLR-2079.patch, 
 SOLR-2079.Quach.Mattmann.082310.patch.txt


 This patch adds the HttpServletRequest object to the SolrQueryRequest object. 
 The HttpServletRequest object is needed to obtain the client's IP address for 
 geotargetting, and is part of the patches from W. Quach and C. Mattmann.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-4729) Using a copyField with * as the source doesn't work

2013-04-18 Thread Hoss Man (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4729?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13635573#comment-13635573
 ] 

Hoss Man commented on SOLR-4729:


Adam: we're going to need more details on exactly what svn branch & revision 
you're testing, what exactly your schema looks like, and how exactly you 
generated that exception: what exactly did you do in the analysis tab, and what 
appeared in solr's logs around that exception (eg: the underlying request to 
solr made by the UI, the log messages from that request, and the full stack 
trace of the exception).

I just committed a test demonstrating that a source=* copyField works, so I'm 
fairly certain that SOLR-4650 fixed this -- but if you're still getting errors 
then there may be some edge case here we're not understanding...

Committed revision 1469529.
Committed revision 1469533.
Committed revision 1469534.



 Using a copyField with * as the source doesn't work
 ---

 Key: SOLR-4729
 URL: https://issues.apache.org/jira/browse/SOLR-4729
 Project: Solr
  Issue Type: Bug
  Components: Schema and Analysis
Affects Versions: 4.2
Reporter: Adam Hahn

 It seems you can no longer use a wildcard as the source when defining a 
 copyField.  I don't believe that this was fixed as part of SOLR-4650, since 
 I've tested it with the 4/17 nightly build and it doesn't work.
 I'm using the following line: <copyField source="*" dest="text"/>
 If I index something, this line is ignored.  If I go to the Analysis tab, the 
 fields aren't populated and I see the error 
 'org.apache.solr.common.SolrException: undefined field: *' in the log.
 This worked correctly in 4.0, but I didn't test it in 4.1.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-2079) Expose HttpServletRequest object from SolrQueryRequest object

2013-04-18 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/SOLR-2079?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tomás Fernández Löbbe updated SOLR-2079:


Attachment: SOLR-2079.patch

 Expose HttpServletRequest object from SolrQueryRequest object
 -

 Key: SOLR-2079
 URL: https://issues.apache.org/jira/browse/SOLR-2079
 Project: Solr
  Issue Type: Improvement
  Components: Response Writers, search
Reporter: Chris A. Mattmann
 Fix For: 4.3

 Attachments: SOLR-2079.patch, 
 SOLR-2079.Quach.Mattmann.082310.patch.txt


 This patch adds the HttpServletRequest object to the SolrQueryRequest object. 
 The HttpServletRequest object is needed to obtain the client's IP address for 
 geotargetting, and is part of the patches from W. Quach and C. Mattmann.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-2079) Expose HttpServletRequest object from SolrQueryRequest object

2013-04-18 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/SOLR-2079?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tomás Fernández Löbbe updated SOLR-2079:


Attachment: SOLR-2079.patch

 Expose HttpServletRequest object from SolrQueryRequest object
 -

 Key: SOLR-2079
 URL: https://issues.apache.org/jira/browse/SOLR-2079
 Project: Solr
  Issue Type: Improvement
  Components: Response Writers, search
Reporter: Chris A. Mattmann
 Fix For: 4.3

 Attachments: SOLR-2079.patch, SOLR-2079.patch, 
 SOLR-2079.Quach.Mattmann.082310.patch.txt


 This patch adds the HttpServletRequest object to the SolrQueryRequest object. 
 The HttpServletRequest object is needed to obtain the client's IP address for 
 geotargetting, and is part of the patches from W. Quach and C. Mattmann.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-2079) Expose HttpServletRequest object from SolrQueryRequest object

2013-04-18 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/SOLR-2079?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tomás Fernández Löbbe updated SOLR-2079:


Attachment: SOLR-2079.patch

 Expose HttpServletRequest object from SolrQueryRequest object
 -

 Key: SOLR-2079
 URL: https://issues.apache.org/jira/browse/SOLR-2079
 Project: Solr
  Issue Type: Improvement
  Components: Response Writers, search
Reporter: Chris A. Mattmann
 Fix For: 4.3

 Attachments: SOLR-2079.patch, SOLR-2079.patch, SOLR-2079.patch, 
 SOLR-2079.Quach.Mattmann.082310.patch.txt


 This patch adds the HttpServletRequest object to the SolrQueryRequest object. 
 The HttpServletRequest object is needed to obtain the client's IP address for 
 geotargetting, and is part of the patches from W. Quach and C. Mattmann.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-4737) Update Guava to 14.01

2013-04-18 Thread Mark Miller (JIRA)
Mark Miller created SOLR-4737:
-

 Summary: Update Guava to 14.01
 Key: SOLR-4737
 URL: https://issues.apache.org/jira/browse/SOLR-4737
 Project: Solr
  Issue Type: Task
Reporter: Mark Miller
Assignee: Mark Miller
Priority: Minor
 Fix For: 5.0, 4.4




--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: [VOTE] Release PyLucene 4.2.1-1

2013-04-18 Thread Christian Heimes
On 18.04.2013 19:08, Andi Vajda wrote:
 
 On Thu, 18 Apr 2013, Thomas Koch wrote:
 
 Andi,
 I now get a different error while compiling __init__.cpp:

 org/apache/lucene/util/automaton/CompiledAutomaton$AUTOMATON_TYPE.h(42) :
 error C2059: Syntaxfehler: 'Zeichenfolge'
 org/apache/lucene/util/automaton/CompiledAutomaton$AUTOMATON_TYPE.h(42) :
 error C2238: Unerwartete(s) Token vor ';'

 The line complained about is #42

 40    static CompiledAutomaton$AUTOMATON_TYPE *NONE;
 41    static CompiledAutomaton$AUTOMATON_TYPE *NORMAL;
 42    static CompiledAutomaton$AUTOMATON_TYPE *PREFIX;
 43    static CompiledAutomaton$AUTOMATON_TYPE *SINGLE;

 PREFIX seems to be another reserved word ... I could compile __init__.cpp
 after renaming PREFIX to PREFIX1.
 
 Instead of renaming PREFIX, could you please have JCC do it for you by
 adding it to the list of reserved words in the JCC invocation via the
 --reserved command line flag? And rinse and repeat until all such conflicts
 due to macro definitions are solved ?
 
 Or were you able to complete the build already once PREFIX was renamed ?

I'm pretty sure the Windows build issue is caused by the PREFIX macro in
PC/pyconfig.h. I ran into the same issue a while ago. I have created a
bug report for the issue: http://bugs.python.org/issue17791

Christian
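
(For anyone hitting the same wall, a sketch of the suggested JCC change; the elided arguments stand for PyLucene's existing Makefile flags, and the exact module invocation may differ per platform:)

{code}
python -m jcc ... --reserved PREFIX
{code}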


RE: [JENKINS] Lucene-Solr-4.x-Windows (64bit/jdk1.6.0_45) - Build # 2720 - Failure!

2013-04-18 Thread Uwe Schindler
Sorry, my fault.

-
Uwe Schindler
H.-H.-Meier-Allee 63, D-28213 Bremen
http://www.thetaphi.de
eMail: u...@thetaphi.de


 -Original Message-
 From: Policeman Jenkins Server [mailto:jenk...@thetaphi.de]
 Sent: Thursday, April 18, 2013 9:10 PM
 To: dev@lucene.apache.org; hoss...@apache.org
 Subject: [JENKINS] Lucene-Solr-4.x-Windows (64bit/jdk1.6.0_45) - Build #
 2720 - Failure!
 
 Build: http://jenkins.thetaphi.de/job/Lucene-Solr-4.x-Windows/2720/
 Java: 64bit/jdk1.6.0_45 -XX:+UseSerialGC
 
 No tests ran.
 
 Build Log:
 [...truncated 168 lines...]



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-4731) Fresh clone of github lucene-solr repo already has modified files somehow

2013-04-18 Thread Uri Laserson (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-4731?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Uri Laserson updated SOLR-4731:
---

Attachment: weird.diff

 Fresh clone of github lucene-solr repo already has modified files somehow
 -

 Key: SOLR-4731
 URL: https://issues.apache.org/jira/browse/SOLR-4731
 Project: Solr
  Issue Type: Bug
Reporter: Uri Laserson
 Attachments: weird.diff


 I forked the lucene-solr repo on github.
 Then
 git clone g...@github.com:laserson/lucene-solr.git
 Then `git status` gives me
 $ git status
 # On branch trunk
 # Changes not staged for commit:
 #   (use "git add <file>..." to update what will be committed)
 #   (use "git checkout -- <file>..." to discard changes in working directory)
 #
 # modified:   solr/example/cloud-scripts/zkcli.bat
 #
 no changes added to commit (use "git add" and/or "git commit -a")
 Even though I never touched anything

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-4731) Fresh clone of github lucene-solr repo already has modified files somehow

2013-04-18 Thread Uri Laserson (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4731?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13635613#comment-13635613
 ] 

Uri Laserson commented on SOLR-4731:


It is definitely a newline issue.  When I tried to dump the diff into a file, I 
got a warning about it.  I am on OS X.  I attached the diff.

 Fresh clone of github lucene-solr repo already has modified files somehow
 -

 Key: SOLR-4731
 URL: https://issues.apache.org/jira/browse/SOLR-4731
 Project: Solr
  Issue Type: Bug
Reporter: Uri Laserson
 Attachments: weird.diff


 I forked the lucene-solr repo on github.
 Then
 git clone g...@github.com:laserson/lucene-solr.git
 Then `git status` gives me
 $ git status
 # On branch trunk
 # Changes not staged for commit:
 #   (use "git add <file>..." to update what will be committed)
 #   (use "git checkout -- <file>..." to discard changes in working directory)
 #
 # modified:   solr/example/cloud-scripts/zkcli.bat
 #
 no changes added to commit (use "git add" and/or "git commit -a")
 Even though I never touched anything

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-4731) Fresh clone of github lucene-solr repo already has modified files somehow

2013-04-18 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4731?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13635615#comment-13635615
 ] 

Uwe Schindler commented on SOLR-4731:
-

Yeah, that's exactly the problem.

The Lucene team can do nothing about that. Lucene uses Subversion for its 
source code management. The GIT repository is provided for convenience to 
external developers by GITHUB and the ASF infra team. The bug here is that GIT 
does not know svn:eol-style properties.
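
(A possible local workaround, not project policy and untested against this repo: mark the offending file as non-text so git stops normalizing its line endings, then restore it from the index. A sketch:)

{code}
# per-clone attributes file; nothing gets committed
echo "solr/example/cloud-scripts/zkcli.bat -text" >> .git/info/attributes
git checkout -- solr/example/cloud-scripts/zkcli.bat
{code}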

 Fresh clone of github lucene-solr repo already has modified files somehow
 -

 Key: SOLR-4731
 URL: https://issues.apache.org/jira/browse/SOLR-4731
 Project: Solr
  Issue Type: Bug
Reporter: Uri Laserson
 Attachments: weird.diff


 I forked the lucene-solr repo on github.
 Then
 git clone g...@github.com:laserson/lucene-solr.git
 Then `git status` gives me
 $ git status
 # On branch trunk
 # Changes not staged for commit:
 #   (use "git add <file>..." to update what will be committed)
 #   (use "git checkout -- <file>..." to discard changes in working directory)
 #
 # modified:   solr/example/cloud-scripts/zkcli.bat
 #
 no changes added to commit (use "git add" and/or "git commit -a")
 Even though I never touched anything

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-3781) when wiring Solr into a larger web application which controls the web context root,something can't work

2013-04-18 Thread Sam Kass (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-3781?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sam Kass updated SOLR-3781:
---

Attachment: LoadAdminUiServlet.patch

Fix for not finding admin.html with prefix
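
A minimal sketch of that approach (hypothetical, not the attached patch; the response-copy loop is elided):

{code:java}
import java.io.InputStream;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class PrefixAwareAdminUiServlet extends HttpServlet {
  @Override
  protected void doGet(HttpServletRequest request, HttpServletResponse response)
      throws java.io.IOException {
    // resolve admin.html relative to wherever this servlet is mounted,
    // instead of hard-coding "/admin.html" at the context root
    InputStream in = getServletContext().getResourceAsStream(request.getServletPath());
    if (in == null) {
      response.sendError(HttpServletResponse.SC_NOT_FOUND);
      return;
    }
    try {
      // ... copy 'in' to response.getOutputStream() as the original servlet does ...
    } finally {
      in.close();
    }
  }
}
{code}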

 when wiring Solr into a larger web application which controls the web context 
 root,something can't work
 ---

 Key: SOLR-3781
 URL: https://issues.apache.org/jira/browse/SOLR-3781
 Project: Solr
  Issue Type: Bug
  Components: SolrCloud
Affects Versions: 4.0-BETA
 Environment: win7 jetty-distribution-7.6.5.v20120716
 startup param:
 -Djetty.port=8084 -DzkRun -Dbootstrap_conf=true
Reporter: shenjc
Priority: Minor
  Labels: patch
 Attachments: LoadAdminUiServlet.patch

   Original Estimate: 24h
  Remaining Estimate: 24h

 if i am wiring Solr into a larger web application which controls the web 
 context root, you will probably want to mount Solr under a path prefix 
 (app.war with /app/solr mounted into it, for example).
  For example:
 RootApp.war --- /
 myApp.war --- /myApp
   prefixPath --- xxx
     js dir --- js
       js file --- main.js
     admin file --- admin.html
 org.apache.solr.servlet.LoadAdminUiServlet
 line:49  InputStream in = 
 getServletContext().getResourceAsStream("/admin.html");
 can't find admin.html because it's in the prefixPath directory
 org.apache.solr.cloud.ZkController
 line:149-150
 this.nodeName = this.hostName + ':' + this.localHostPort + '_' + 
 this.localHostContext;
 this.baseURL = this.localHost + ":" + this.localHostPort + "/" + 
 this.localHostContext;
 it can't match this condition
 baseURL needs to be http://xx:xx/myApp/myPrefixPath 
 eg. http://xx:xx/myApp/xxx

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-3781) when wiring Solr into a larger web application which controls the web context root,something can't work

2013-04-18 Thread Sam Kass (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-3781?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13635626#comment-13635626
 ] 

Sam Kass edited comment on SOLR-3781 at 4/18/13 8:18 PM:
-

I attached a patch that seems to work for finding the admin.html inside the 
prefix.  Instead of explicitly using the admin.html path, it just takes 
whatever the servlet path in the request is and loads that.

It doesn't solve the entire problem, as there still seems to be a problem with 
the cores request not getting the prefix prepended.

Is getting the admin console working with prefixes targeted for any release 
soon?

  was (Author: samkass):
Fix for not finding admin.html with prefix
  
 when wiring Solr into a larger web application which controls the web context 
 root,something can't work
 ---

 Key: SOLR-3781
 URL: https://issues.apache.org/jira/browse/SOLR-3781
 Project: Solr
  Issue Type: Bug
  Components: SolrCloud
Affects Versions: 4.0-BETA
 Environment: win7 jetty-distribution-7.6.5.v20120716
 startup param:
 -Djetty.port=8084 -DzkRun -Dbootstrap_conf=true
Reporter: shenjc
Priority: Minor
  Labels: patch
 Attachments: LoadAdminUiServlet.patch

   Original Estimate: 24h
  Remaining Estimate: 24h

 if i am wiring Solr into a larger web application which controls the web 
 context root, you will probably want to mount Solr under a path prefix 
 (app.war with /app/solr mounted into it, for example).
  For example:
 RootApp.war --- /
 myApp.war --- /myApp
   prefixPath --- xxx
     js dir --- js
       js file --- main.js
     admin file --- admin.html
 org.apache.solr.servlet.LoadAdminUiServlet
 line:49  InputStream in = 
 getServletContext().getResourceAsStream("/admin.html");
 can't find admin.html because it's in the prefixPath directory
 org.apache.solr.cloud.ZkController
 line:149-150
 this.nodeName = this.hostName + ':' + this.localHostPort + '_' + 
 this.localHostContext;
 this.baseURL = this.localHost + ":" + this.localHostPort + "/" + 
 this.localHostContext;
 it can't match this condition
 baseURL needs to be http://xx:xx/myApp/myPrefixPath 
 eg. http://xx:xx/myApp/xxx

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-3781) when wiring Solr into a larger web application which controls the web context root,something can't work

2013-04-18 Thread Sam Kass (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-3781?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13635626#comment-13635626
 ] 

Sam Kass edited comment on SOLR-3781 at 4/18/13 8:18 PM:
-

I attached a patch that seems to work for finding the admin.html inside the 
prefix.  Instead of explicitly using the admin.html path, it just takes 
whatever the servlet path in the request is and loads that.

It doesn't solve the entire problem, as there still seems to be a problem with 
the cores request not getting the prefix prepended.

Is getting the admin console working with prefixes targeted for any release 
soon?

(Also, forgive me if I didn't do this quite right-- it's my first attempt 
submitting a patch)

  was (Author: samkass):
I attached a patch that seems to work for finding the admin.html inside the 
prefix.  Instead of explicitly using the admin.html path, it just takes 
whatever the servlet path in the request is and loads that.

It doesn't solve the entire problem, as there still seems to be a problem with 
the cores request not getting the prefix prepended.

Is getting the admin console working with prefixes targeted for any release 
soon?
  
 when wiring Solr into a larger web application which controls the web context 
 root,something can't work
 ---

 Key: SOLR-3781
 URL: https://issues.apache.org/jira/browse/SOLR-3781
 Project: Solr
  Issue Type: Bug
  Components: SolrCloud
Affects Versions: 4.0-BETA
 Environment: win7 jetty-distribution-7.6.5.v20120716
 startup param:
 -Djetty.port=8084 -DzkRun -Dbootstrap_conf=true
Reporter: shenjc
Priority: Minor
  Labels: patch
 Attachments: LoadAdminUiServlet.patch

   Original Estimate: 24h
  Remaining Estimate: 24h

 if i am wiring Solr into a larger web application which controls the web 
 context root, you will probably want to mount Solr under a path prefix 
 (app.war with /app/solr mounted into it, for example).
  For example:
 RootApp.war --- /
 myApp.war --- /myApp
   prefixPath --- xxx
     js dir --- js
       js file --- main.js
     admin file --- admin.html
 org.apache.solr.servlet.LoadAdminUiServlet
 line:49  InputStream in = 
 getServletContext().getResourceAsStream("/admin.html");
 can't find admin.html because it's in the prefixPath directory
 org.apache.solr.cloud.ZkController
 line:149-150
 this.nodeName = this.hostName + ':' + this.localHostPort + '_' + 
 this.localHostContext;
 this.baseURL = this.localHost + ":" + this.localHostPort + "/" + 
 this.localHostContext;
 it can't match this condition
 baseURL needs to be http://xx:xx/myApp/myPrefixPath 
 eg. http://xx:xx/myApp/xxx

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS-MAVEN] Lucene-Solr-Maven-trunk #829: POMs out of sync

2013-04-18 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Maven-trunk/829/

1 tests failed.
REGRESSION:  
org.apache.solr.cloud.CollectionsAPIDistributedZkTest.testDistribSearch

Error Message:
Server at http://127.0.0.1:37638 returned non ok status:500, message:Server 
Error

Stack Trace:
org.apache.solr.client.solrj.impl.HttpSolrServer$RemoteSolrException: Server at 
http://127.0.0.1:37638 returned non ok status:500, message:Server Error
at 
__randomizedtesting.SeedInfo.seed([E51D8D15A1949BCC:64FB030DD6CBFBF0]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:372)
at 
org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:180)
at 
org.apache.solr.cloud.CollectionsAPIDistributedZkTest.deleteCollectionWithDownNodes(CollectionsAPIDistributedZkTest.java:206)
at 
org.apache.solr.cloud.CollectionsAPIDistributedZkTest.doTest(CollectionsAPIDistributedZkTest.java:148)




Build Log:
[...truncated 23758 lines...]



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

[jira] [Updated] (SOLR-3251) dynamically add fields to schema

2013-04-18 Thread Steve Rowe (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-3251?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Rowe updated SOLR-3251:
-

Attachment: SOLR-3251.patch

Patch:

- Restores SolrIndexSearcher.getSchema(), but makes it return the schema 
snapshot passed into the SolrIndexSearcher ctor, rather than the latest 
available schema from SolrCore.
- Converts query request code to pull the schema from a searcher if one is 
already available.
- Removes all fields and non-static methods from o.a.s.update.DocumentBuilder - 
this is dead code.
- Removes DIH's o.a.s.handler.dataimport.config.Document entirely - this is 
dead code.
- Reworks DIH schema handling, so that DIHConfiguration, which is created 
per-request, hosts a schema snapshot and derived schema info (lowercase field 
mappings). 
- ExternalFileFieldReloader's newSearcher() callback now checks if the schema 
has changed, and if so, reloads its FileFloatSource cache.

I tried converting query code to always pull a searcher from the request and 
then pull the schema from there, rather than from the request, but this caused 
lots of imbalanced searcher refcounts, because searchers weren't already bound 
to the request in some cases, and request.close() apparently wasn't always 
invoked in some tests.  So I backtracked and only pulled the schema from 
already-available searchers.

So we'll now have three schema sources: 

# SolrCore.getLatestSchema()
# SolrQueryRequest.getSchema() - schema snapshot at request construction
# SolrIndexSearcher.getSchema() - schema snapshot at searcher construction

Update code will use the schema snapshot from the request, when available, and 
the latest schema from SolrCore when it's not.

I believe that since the only permitted schema change now is new fields, it's 
okay for query code to also pull the schema from the request, and for update 
code to also pull the latest schema from SolrCore.
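
Side by side, the three accessors would look like this (a sketch using the names above; req and searcher are assumed to be in scope):

{code:java}
// the three schema sources described above:
IndexSchema latest  = req.getCore().getLatestSchema(); // may see fields added after the request started
IndexSchema perReq  = req.getSchema();                 // snapshot taken when the request was constructed
IndexSchema perSrch = searcher.getSchema();            // snapshot taken when the searcher was constructed
{code}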



 dynamically add fields to schema
 

 Key: SOLR-3251
 URL: https://issues.apache.org/jira/browse/SOLR-3251
 Project: Solr
  Issue Type: New Feature
Reporter: Yonik Seeley
Assignee: Steve Rowe
 Fix For: 4.3, 5.0

 Attachments: SOLR-3251.patch, SOLR-3251.patch, SOLR-3251.patch, 
 SOLR-3251.patch, SOLR-3251.patch, SOLR-3251.patch, SOLR-3251.patch


 One related piece of functionality needed for SOLR-3250 is the ability to 
 dynamically add a field to the schema.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-3781) when wiring Solr into a larger web application which controls the web context root,something can't work

2013-04-18 Thread Sam Kass (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-3781?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13635626#comment-13635626
 ] 

Sam Kass edited comment on SOLR-3781 at 4/18/13 8:46 PM:
-

I attached a patch that seems to work for finding the admin.html inside the 
prefix.  Instead of explicitly using the admin.html path, it just takes 
whatever the servlet path in the request is and loads that.

It doesn't solve the entire problem loading the admin page, as there still 
seems to be a problem with the cores request not getting the prefix 
prepended, but it solves the explicit problem the description specifies.

Is getting the admin console working with prefixes targeted for any release 
soon?

(Also, forgive me if I didn't do this quite right-- it's my first attempt 
submitting a patch)

  was (Author: samkass):
I attached a patch that seems to work for finding the admin.html inside the 
prefix.  Instead of explicitly using the admin.html path, it just takes 
whatever the servlet path in the request is and loads that.

It doesn't solve the entire problem, as there still seems to be a problem with 
the cores request not getting the prefix prepended.

Is getting the admin console working with prefixes targeted for any release 
soon?

(Also, forgive me if I didn't do this quite right-- it's my first attempt 
submitting a patch)
  
 when wiring Solr into a larger web application which controls the web context 
 root,something can't work
 ---

 Key: SOLR-3781
 URL: https://issues.apache.org/jira/browse/SOLR-3781
 Project: Solr
  Issue Type: Bug
  Components: SolrCloud
Affects Versions: 4.0-BETA
 Environment: win7 jetty-distribution-7.6.5.v20120716
 startup param:
 -Djetty.port=8084 -DzkRun -Dbootstrap_conf=true
Reporter: shenjc
Priority: Minor
  Labels: patch
 Attachments: LoadAdminUiServlet.patch

   Original Estimate: 24h
  Remaining Estimate: 24h

 if i am wiring Solr into a larger web application which controls the web 
 context root, you will probably want to mount Solr under a path prefix 
 (app.war with /app/solr mounted into it, for example).
  For example:
 RootApp.war --- /
 myApp.war --- /myApp
   prefixPath --- xxx
     js dir --- js
       js file --- main.js
     admin file --- admin.html
 org.apache.solr.servlet.LoadAdminUiServlet
 line:49  InputStream in = 
 getServletContext().getResourceAsStream("/admin.html");
 can't find admin.html because it's in the prefixPath directory
 org.apache.solr.cloud.ZkController
 line:149-150
 this.nodeName = this.hostName + ':' + this.localHostPort + '_' + 
 this.localHostContext;
 this.baseURL = this.localHost + ":" + this.localHostPort + "/" + 
 this.localHostContext;
 it can't match this condition
 baseURL needs to be http://xx:xx/myApp/myPrefixPath 
 eg. http://xx:xx/myApp/xxx

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-4358) SolrJ, by preventing multi-part post, loses key information about file name that Tika needs

2013-04-18 Thread Ryan McKinley (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-4358?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ryan McKinley updated SOLR-4358:


Attachment: SOLR-4358.patch

Here is an updated patch.

It adds 'setUseMultipart(true)' into the random test configs.

*BUT* it seems to have issues with ZK distributed search.  I don't know if that 
is just a test/environment issue on my side or a real issue.  But I get this 
failure:
{code}
Tests with failures:
 -org.apache.solr.cloud.CollectionsAPIDistributedZkTest.testDistribSearch

{code}



 SolrJ, by preventing multi-part post, loses key information about file name 
 that Tika needs
 ---

 Key: SOLR-4358
 URL: https://issues.apache.org/jira/browse/SOLR-4358
 Project: Solr
  Issue Type: Bug
  Components: clients - java
Affects Versions: 4.0
Reporter: Karl Wright
Assignee: Ryan McKinley
 Attachments: additional_changes.diff, SOLR-4358.patch, 
 SOLR-4358.patch, SOLR-4358.patch


 SolrJ accepts a ContentStream, which has a name field.  Within 
 HttpSolrServer.java, if SolrJ makes the decision to use multipart posts, this 
 filename is transmitted as part of the form boundary information.  However, 
 if SolrJ chooses not to use multipart post, the filename information is lost.
 This information is used by SolrCell (Tika) to make decisions about content 
 extraction, so it is very important that it makes it into Solr in one way or 
 another.  Either SolrJ should set appropriate equivalent headers to send the 
 filename automatically, or it should force multipart posts when this 
 information is present.
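
For context, a SolrJ sketch of the path that carries the name today (URL, file and params are placeholders):

{code:java}
import java.io.File;
import org.apache.solr.client.solrj.SolrServer;
import org.apache.solr.client.solrj.impl.HttpSolrServer;
import org.apache.solr.client.solrj.request.ContentStreamUpdateRequest;

public class ExtractExample {
  public static void main(String[] args) throws Exception {
    SolrServer server = new HttpSolrServer("http://localhost:8983/solr");
    // the stream's name is taken from the file name; with multipart posts
    // it reaches SolrCell, which can use it to pick an extraction strategy
    ContentStreamUpdateRequest req = new ContentStreamUpdateRequest("/update/extract");
    req.addFile(new File("report.pdf"), "application/pdf");
    req.setParam("literal.id", "doc1");
    server.request(req);
    server.commit();
  }
}
{code}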

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-3251) dynamically add fields to schema

2013-04-18 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-3251?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13635737#comment-13635737
 ] 

Robert Muir commented on SOLR-3251:
---

{quote}
I tried converting query code to always pull a searcher from the request and 
then pull the schema from there, rather than from the request, but this caused 
lots of imbalanced searcher refcounts, because searchers weren't already bound 
to the request in some cases, and request.close() apparently wasn't always 
invoked in some tests. So I backtracked and only pulled the schema from 
already-available searchers.

So we'll now have three schema sources: 
{quote}

I don't think we should make bad design decisions because of a few bad tests? 
They should be closing this thing, and it's just random chance that the current 
implementation doesn't leak anything if nobody has called certain methods yet.

There is a real value, I think, in having request.getSchema() == 
request.getSearcher().getSchema().

I took the patch locally and tried this in SolrQueryRequestBase.java and it 
didn't seem like such a disaster to me:

{code}
  // The index schema associated with this request
  @Override
  public IndexSchema getSchema() {
SolrIndexSearcher s = getSearcher();
if (s == null) {
  return null;
} else {
  return s.getSchema();
}
  }
{code}


 dynamically add fields to schema
 

 Key: SOLR-3251
 URL: https://issues.apache.org/jira/browse/SOLR-3251
 Project: Solr
  Issue Type: New Feature
Reporter: Yonik Seeley
Assignee: Steve Rowe
 Fix For: 4.3, 5.0

 Attachments: SOLR-3251.patch, SOLR-3251.patch, SOLR-3251.patch, 
 SOLR-3251.patch, SOLR-3251.patch, SOLR-3251.patch, SOLR-3251.patch


 One related piece of functionality needed for SOLR-3250 is the ability to 
 dynamically add a field to the schema.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-4358) SolrJ, by preventing multi-part post, loses key information about file name that Tika needs

2013-04-18 Thread Karl Wright (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4358?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13635745#comment-13635745
 ] 

Karl Wright commented on SOLR-4358:
---

Why would SolrCloud be affected at all by an HttpSolrServer.java change?


 SolrJ, by preventing multi-part post, loses key information about file name 
 that Tika needs
 ---

 Key: SOLR-4358
 URL: https://issues.apache.org/jira/browse/SOLR-4358
 Project: Solr
  Issue Type: Bug
  Components: clients - java
Affects Versions: 4.0
Reporter: Karl Wright
Assignee: Ryan McKinley
 Attachments: additional_changes.diff, SOLR-4358.patch, 
 SOLR-4358.patch, SOLR-4358.patch


 SolrJ accepts a ContentStream, which has a name field.  Within 
 HttpSolrServer.java, if SolrJ makes the decision to use multipart posts, this 
 filename is transmitted as part of the form boundary information.  However, 
 if SolrJ chooses not to use multipart post, the filename information is lost.
 This information is used by SolrCell (Tika) to make decisions about content 
 extraction, so it is very important that it makes it into Solr in one way or 
 another.  Either SolrJ should set appropriate equivalent headers to send the 
 filename automatically, or it should force multipart posts when this 
 information is present.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-3251) dynamically add fields to schema

2013-04-18 Thread Yonik Seeley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-3251?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13635759#comment-13635759
 ] 

Yonik Seeley commented on SOLR-3251:


bq. There is a real value, I think, in having request.getSchema() == 
request.getSearcher().getSchema().

This introduces a new dependency that did not exist in the past, and I don't 
think we should do that.  There should be no need to get an open searcher to 
get schema information.  As the failing tests show, it can have unintended 
consequences.  getSearcher() is also a blocking operation and if called in the 
wrong context can lead to deadlock (certain callbacks are forbidden to call 
getSearcher).
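
(For readers following along, the refcounting the comments above refer to looks roughly like this; core is an assumed SolrCore in scope:)

{code:java}
// sketch: a searcher borrowed from the core must be released explicitly
RefCounted<SolrIndexSearcher> ref = core.getSearcher();
try {
  IndexSchema schema = ref.get().getSchema();
  // ... use the schema snapshot ...
} finally {
  ref.decref(); // forgetting this is the "imbalanced refcount" failure above
}
{code}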


 dynamically add fields to schema
 

 Key: SOLR-3251
 URL: https://issues.apache.org/jira/browse/SOLR-3251
 Project: Solr
  Issue Type: New Feature
Reporter: Yonik Seeley
Assignee: Steve Rowe
 Fix For: 4.3, 5.0

 Attachments: SOLR-3251.patch, SOLR-3251.patch, SOLR-3251.patch, 
 SOLR-3251.patch, SOLR-3251.patch, SOLR-3251.patch, SOLR-3251.patch


 One related piece of functionality needed for SOLR-3250 is the ability to 
 dynamically add a field to the schema.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-3251) dynamically add fields to schema

2013-04-18 Thread Steve Rowe (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-3251?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13635764#comment-13635764
 ] 

Steve Rowe commented on SOLR-3251:
--

bq. There is a real value, I think, in having request.getSchema() == 
request.getSearcher().getSchema().

This won't work at all for update requests that depend on new fields in a 
schema newer than that on the request's searcher.

 dynamically add fields to schema
 

 Key: SOLR-3251
 URL: https://issues.apache.org/jira/browse/SOLR-3251
 Project: Solr
  Issue Type: New Feature
Reporter: Yonik Seeley
Assignee: Steve Rowe
 Fix For: 4.3, 5.0

 Attachments: SOLR-3251.patch, SOLR-3251.patch, SOLR-3251.patch, 
 SOLR-3251.patch, SOLR-3251.patch, SOLR-3251.patch, SOLR-3251.patch


 One related piece of functionality needed for SOLR-3250 is the ability to 
 dynamically add a field to the schema.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-2082) Performance improvement for merging posting lists

2013-04-18 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-2082?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13635788#comment-13635788
 ] 

Michael McCandless commented on LUCENE-2082:


Hi Aleksandra,

I don't think anyone is working on this now ... it'd be quite a bit of work!

The classes have changed names but the core idea is the same.  Have a look at 
PostingsFormat: that's the Codec component that handles reading/writing/merging 
of all postings files (terms dict, docs/freqs/positions/offsets).  It seems 
like for this issue you'd need to override Fields/Terms/PostingsConsumer.merge 
methods.

But some things here will likely require changes outside of Codec, eg today we 
always remove deletes while merging, but for this issue it looks like you may 
want to have a threshold below which the deletes are not removed...

 Performance improvement for merging posting lists
 -

 Key: LUCENE-2082
 URL: https://issues.apache.org/jira/browse/LUCENE-2082
 Project: Lucene - Core
  Issue Type: Improvement
  Components: core/index
Reporter: Michael Busch
Priority: Minor
  Labels: gsoc2013
 Fix For: 4.3


 A while ago I had an idea about how to improve the merge performance
 for posting lists. This is currently by far the most expensive part of
 segment merging due to all the VInt de-/encoding. Not sure if an idea
 for improving this was already mentioned in the past?
 So the basic idea is it to perform a raw copy of as much posting data
 as possible. The reason why this is difficult is that we have to
 remove deleted documents. But often the fraction of deleted docs in a
 segment is rather low (<10%?), so it's likely that there are quite
 long consecutive sections without any deletions.
 To find these sections we could use the skip lists. Basically at any
 point during the merge we would find the skip entry before the next
 deleted doc. All entries to this point can be copied without
 de-/encoding of the VInts. Then for the section that has deleted docs
 we perform the normal way of merging to remove the deletes. Then we
 check again with the skip lists if we can raw copy the next section.
 To make this work there are a few different necessary changes:
 1) Currently the multilevel skiplist reader/writer can only deal with 
 fixed-size
 skips (16 on the lowest level). It would be an easy change to allow
 variable-size skips, but then the MultiLevelSkipListReader can't
 return numSkippedDocs anymore, which SegmentTermDocs needs - change 2)
 2) Store the last docID in which a term occurred in the term
 dictionary. This would also be beneficial for other use cases. By
 doing that the SegmentTermDocs#next(), #read() and #skipTo() know when
 the end of the postinglist is reached. Currently they have to track
 the df, which is why after a skip it's important to take the
 numSkippedDocs into account.
 3) Change the merging algorithm according to my description above. It's
 important to create a new skiplist entry at the beginning of every
 block that is copied in raw mode, because its next skip entry's values
 are deltas from the beginning of the block. Also the very first posting, and
 that one only, needs to be decoded/encoded to make sure that the
 payload length is explicitly written (i.e. must not depend on the
 previous length). Also such a skip entry has to be created at the
 beginning of each source segment's posting list. With change 2) we don't
 have to worry about the positions of the skip entries. And having a few
 extra skip entries in merged segments won't hurt much.
 If a segment has no deletions at all this will avoid any
 decoding/encoding of VInts (best case). I think it will also work
 great for segments with a rather low amount of deletions. We should
 probably then have a threshold: if the number of deletes exceeds this
 threshold we should fall back to old style merging.
 I haven't implemented any of this, so there might be complications I
 haven't thought about. Please let me know if you can think of reasons
 why this wouldn't work or if you think more changes are necessary.
 I will probably not have time to work on this soon, but I wanted to
 open this issue to not forget about it :). Anyone should feel free to
 take this!
 Btw: I think the flex-indexing branch would be a great place to try this
 out as a new codec. This would also be good to figure out what APIs
 are needed to make merging fully flexible as well.
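
A deletion-ratio guard along the lines proposed above could be as simple as (a sketch; the names are not Lucene APIs):

{code:java}
// fall back to normal decode/re-encode merging above the threshold,
// since raw copying only pays off when long runs are deletion-free
static boolean canRawCopy(int maxDoc, int delCount, double maxDeleteRatio) {
  return maxDoc > 0 && (double) delCount / maxDoc <= maxDeleteRatio;
}
{code}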

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For 

[jira] [Commented] (SOLR-4737) Update Guava to 14.01

2013-04-18 Thread David Smiley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4737?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13635808#comment-13635808
 ] 

David Smiley commented on SOLR-4737:


Just curious; what's in this version that you're looking to benefit from?

 Update Guava to 14.01
 -

 Key: SOLR-4737
 URL: https://issues.apache.org/jira/browse/SOLR-4737
 Project: Solr
  Issue Type: Task
Reporter: Mark Miller
Assignee: Mark Miller
Priority: Minor
 Fix For: 5.0, 4.4




--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-4.x-Linux (64bit/jdk1.8.0-ea-b84) - Build # 5180 - Failure!

2013-04-18 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-4.x-Linux/5180/
Java: 64bit/jdk1.8.0-ea-b84 -XX:+UseParallelGC

1 tests failed.
REGRESSION:  
org.apache.lucene.queries.function.TestFunctionQuerySort.testSearchAfterWhenSortingByFunctionValues

Error Message:
Requested array size exceeds VM limit

Stack Trace:
java.lang.OutOfMemoryError: Requested array size exceeds VM limit
at 
__randomizedtesting.SeedInfo.seed([663D470D4C55B8C1:985D06C8F4923F71]:0)
at org.apache.lucene.util.PriorityQueue.<init>(PriorityQueue.java:64)
at org.apache.lucene.util.PriorityQueue.<init>(PriorityQueue.java:37)
at 
org.apache.lucene.search.FieldValueHitQueue.<init>(FieldValueHitQueue.java:138)
at 
org.apache.lucene.search.FieldValueHitQueue.<init>(FieldValueHitQueue.java:34)
at 
org.apache.lucene.search.FieldValueHitQueue$OneComparatorFieldValueHitQueue.<init>(FieldValueHitQueue.java:63)
at 
org.apache.lucene.search.FieldValueHitQueue.create(FieldValueHitQueue.java:171)
at 
org.apache.lucene.search.TopFieldCollector.create(TopFieldCollector.java:1123)
at org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:526)
at org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:501)
at org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:378)
at 
org.apache.lucene.queries.function.TestFunctionQuerySort.testSearchAfterWhenSortingByFunctionValues(TestFunctionQuerySort.java:72)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:487)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1559)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.access$600(RandomizedRunner.java:79)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:773)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:787)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.TestRuleFieldCacheSanity$1.evaluate(TestRuleFieldCacheSanity.java:51)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:70)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:358)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:782)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:442)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:746)




Build Log:
[...truncated 7699 lines...]
[junit4:junit4] Suite: org.apache.lucene.queries.function.TestFunctionQuerySort
[junit4:junit4]   2> NOTE: reproduce with: ant test  
-Dtestcase=TestFunctionQuerySort 
-Dtests.method=testSearchAfterWhenSortingByFunctionValues 
-Dtests.seed=663D470D4C55B8C1 -Dtests.multiplier=3 -Dtests.slow=true 
-Dtests.locale=fi_FI -Dtests.timezone=Egypt -Dtests.file.encoding=UTF-8
[junit4:junit4] ERROR   0.62s J0 | 
TestFunctionQuerySort.testSearchAfterWhenSortingByFunctionValues 
[junit4:junit4] Throwable #1: java.lang.OutOfMemoryError: Requested array 
size exceeds VM limit
[junit4:junit4]at 
__randomizedtesting.SeedInfo.seed([663D470D4C55B8C1:985D06C8F4923F71]:0)
[junit4:junit4]at 
org.apache.lucene.util.PriorityQueue.<init>(PriorityQueue.java:64)
[junit4:junit4]at 
org.apache.lucene.util.PriorityQueue.<init>(PriorityQueue.java:37)
[junit4:junit4]at 
org.apache.lucene.search.FieldValueHitQueue.<init>(FieldValueHitQueue.java:138)
[junit4:junit4]at 
org.apache.lucene.search.FieldValueHitQueue.<init>(FieldValueHitQueue.java:34)
[junit4:junit4]at 
org.apache.lucene.search.FieldValueHitQueue$OneComparatorFieldValueHitQueue.<init>(FieldValueHitQueue.java:63)
[junit4:junit4]at 
org.apache.lucene.search.FieldValueHitQueue.create(FieldValueHitQueue.java:171)
[junit4:junit4]at 

[jira] [Commented] (SOLR-4737) Update Guava to 14.01

2013-04-18 Thread Commit Tag Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4737?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13635950#comment-13635950
 ] 

Commit Tag Bot commented on SOLR-4737:
--

[trunk commit] markrmiller
http://svn.apache.org/viewvc?view=revision&revision=1469663

SOLR-4737: Update Guava to 14.01

 Update Guava to 14.01
 -

 Key: SOLR-4737
 URL: https://issues.apache.org/jira/browse/SOLR-4737
 Project: Solr
  Issue Type: Task
Reporter: Mark Miller
Assignee: Mark Miller
Priority: Minor
 Fix For: 5.0, 4.4




--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-4737) Update Guava to 14.01

2013-04-18 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4737?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13635951#comment-13635951
 ] 

Mark Miller commented on SOLR-4737:
---

Bug fixes, keeping up to date - these things tend to get more difficult the 
longer you wait and it's nice to update libs at the start of a release cycle. 
About to update jetty as well.

 Update Guava to 14.01
 -

 Key: SOLR-4737
 URL: https://issues.apache.org/jira/browse/SOLR-4737
 Project: Solr
  Issue Type: Task
Reporter: Mark Miller
Assignee: Mark Miller
Priority: Minor
 Fix For: 5.0, 4.4




--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-4738) Update to latest Jetty bug fix release, 8.1.10

2013-04-18 Thread Mark Miller (JIRA)
Mark Miller created SOLR-4738:
-

 Summary: Update to latest Jetty bug fix release, 8.1.10
 Key: SOLR-4738
 URL: https://issues.apache.org/jira/browse/SOLR-4738
 Project: Solr
  Issue Type: Task
Reporter: Mark Miller
Assignee: Mark Miller
 Fix For: 5.0, 4.4




--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-4738) Update to latest Jetty bug fix release, 8.1.10

2013-04-18 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4738?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13635957#comment-13635957
 ] 

Mark Miller commented on SOLR-4738:
---

jetty-8.1.10.v20130312 - 12 March 2013
 + 376273 Early EOF because of SSL Protocol Error on
   https://api-3t.paypal.com/nvp.
 + 381521 allow compress methods to be configured
 + 392129 fixed handling of timeouts after startAsync
 + 394064 ensure that JarFile instances are closed on JarFileResource.release()
 + 398649 ServletContextListener.contextDestroyed() is not called on
   ContextHandler unregistration
 + 399703 made encoding error handling consistent
 + 399799 do not hold lock while calling invalidation listeners
 + 399967 Shutdown hook calls destroy
 + 400040 NullPointerException in HttpGenerator.prepareBuffers
 + 400142 ConcurrentModificationException in JDBC SessionManger
 + 400144 When loading a session fails the JDBCSessionManger produces duplicate
   session IDs
 + 400312 ServletContextListener.contextInitialized() is not called when added
   in ServletContainerInitializer.onStartup
 + 400457 Thread context classloader hierarchy not searched when finding
   webapp's java:comp/env
 + 400859 limit max size of writes from cached content
 + 401211 Remove requirement for jetty-websocket.jar in WEB-INF/lib
 + 401317 Make Safari 5.x websocket support minVersion level error more clear
 + 401382 Prevent parseAvailable from parsing next chunk when previous has not
   been consumed. Handle no content-type in chunked request.
 + 401474 Performance problem in org.eclipse.jetty.annotation.AnnotationParser
 + 401485 zip file closed exception
 + 401531 StringIndexOutOfBoundsException for /* url-pattern of
   jsp-property-group fix for multiple mappings to *.jsp
 + 401908 Enhance DosFilter to allow dynamic configuration of attributes.
 + 402048 org.eclipse.jetty.server.ShutdownMonitor doesn't stop after the jetty
   server is stopped
 + 402485 reseed secure random
 + 402735 jetty.sh to support status which is == check
 + 402833 Test harness for global error page and hide exception message from
   reason string

jetty-8.1.9.v20130131 - 31 January 2013
 + 362226 HttpConnection wait call causes thread resource exhaustion
 + 367638 throw exception for excess form keys
 + 381521 Only set Vary header when content could be compressed
 + 382237 support non java JSON classes
 + 391248 fixing localhost checking in statistics servlet
 + 391249 fix for invalid XML node dispatchedTimeMean in statistics servlet
 + 391345 fix missing br tag in statistics servlet
 + 391623 Add option to --stop to wait for target jetty to stop
 + 392417 Prevent Cookie parsing interpreting unicode chars
 + 392492 expect headers only examined for requests=HTTP/1.1
 + 393075 1xx 204 and 304 ignore all headers suggesting content
 + 393158 java.lang.IllegalStateException when sending an empty InputStream
 + 393220 remove dead code from ServletHandler and log ServletExceptions in
   warn instead of debug
 + 393947 additional tests
 + 393968 fix typo in javadoc
 + 394294 A web-bundle started before jetty-osgi should be deployed as a webapp
   when jetty-osgi starts
 + 394514 Preserve URI parameters in sendRedirect
 + 394541 remove continuation jar from distro, add as dep to test-jetty-webapp
 + 394719 remove regex from classpath matching
 + 394811 Make JAASLoginService log login failures to DEBUG instead of WARN.
   Same for some other exceptions.
 + 394829 Session can not be restored after SessionManager.setIdleSavePeriod
   has saved the session
 + 394839 Allow multipart mime with no boundary
 + 394870 Make enablement of remote access to test webapp configurable in
   override-web.xml
 + 395215 Multipart mime with just LF and no CRLF
 + 395380 add ValidUrlRule to jetty-rewrite
 + 395394 allow logging from boot classloader
 + 396253 FilterRegistration wrong order
 + 396459 Log specific message for empty request body for multipart mime
   requests
 + 396500 HttpClient Exchange takes forever to complete when less content sent
   than Content-Length
 + 396574 add JETTY_HOME as a location for pid to be found
 + 396886 MultiPartFilter strips bad escaping on filename=...
 + 397110 Accept %u encodings in URIs
 + 397111 Tolerate empty or excessive whitespace preceeding MultiParts
 + 397112 Requests with byte-range throws NPE if requested file has no mimetype
   (eg no file extension)
 + 397130 maxFormContentSize set in jetty.xml is ignored
 + 397190 improve ValidUrlRule to iterate on codepoints
 + 397321 Wrong condition in default start.config for annotations
 + 397535 Support pluggable alias checking to support symbolic links
 + 398337 UTF-16 percent encoding in UTF-16 form content
 + 399132 check parent dir of session store against file to be removed
 + JETTY-1533 handle URL with no path

 Update to latest Jetty bug fix release, 8.1.10
 --

  

[jira] [Commented] (SOLR-4737) Update Guava to 14.01

2013-04-18 Thread Commit Tag Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4737?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13635958#comment-13635958
 ] 

Commit Tag Bot commented on SOLR-4737:
--

[branch_4x commit] markrmiller
http://svn.apache.org/viewvc?view=revision&revision=1469666

SOLR-4737: Update Guava to 14.01

 Update Guava to 14.01
 -

 Key: SOLR-4737
 URL: https://issues.apache.org/jira/browse/SOLR-4737
 Project: Solr
  Issue Type: Task
Reporter: Mark Miller
Assignee: Mark Miller
Priority: Minor
 Fix For: 5.0, 4.4




--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: New JIRA tagging Commit Bot

2013-04-18 Thread Mark Miller
Took me a bit longer than I anticipated to finally just do this, but it's done.

The commit bot is officially back in action in a permanent fashion - it has 
almost no delay, it tags right after you commit and it should tag for all 
branches, not just 4x and 5x.

- Mark

On Apr 10, 2013, at 3:53 PM, Mark Miller markrmil...@gmail.com wrote:

 Okay, the new bot has not murdered any children as it's been running off and 
 on for 6 days or so now. I'll look at setting it up in a more permanent 
 fashion (not just running when I remember to turn it on on my primary dev 
 machine).
 
 - Mark
 
 On Apr 4, 2013, at 9:30 AM, Mark Miller markrmil...@gmail.com wrote:
 
 I'm experimenting with an event driven commit bot today. It should mean much 
 lower latency for tagging, and less room for accidentally tagging old 
 commits. If today goes well, I'll look at setting things up on a more 
 permanent basis.
 
 - Mark
 


-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-4733) Rollback does not work correctly with tlog and optimistic concurrency updates

2013-04-18 Thread Hoss Man (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-4733?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hoss Man updated SOLR-4733:
---

Description: When using the updateLog, attempting to rollback atomic 
updates causes post-rollback atomic updates to still report a conflict 
with the version assigned to the update posted prior to the rollback  (was: I 
wrote a simple test that seems to reproduce the unexpected behaviour. See the 
below test case addBeanThenRollbackThenAddBeanThenRollbackTest().

It seems that on rollback the bean is not written to the Solr system, though I think 
the client remembers the bean, which then creates a version conflict SolrException.


* *The test case:*
{code:java}
@Test
public void addBeanThenRollbackThenAddBeanThenRollbackTest() throws Exception {

MyTestBean myTestBean = createTestBean("addBeanTest");
UpdateResponse updateResponseOne = server.addBean(myTestBean);
Assert.assertEquals(0, updateResponseOne.getStatus());

rollback();
Thread.sleep(1000);

// No Bean Found
{
MyTestBean myTestBeanStored = getTestBean(myTestBean.getId());
Assert.assertNull(myTestBeanStored);
}

UpdateResponse updateResponseTwo = server.addBean(myTestBean);
Assert.assertEquals(0, updateResponseTwo.getStatus());

rollback();
Thread.sleep(1000);

// No Bean Found
{
MyTestBean myTestBeanStored = getTestBean(myTestBean.getId());
Assert.assertNull(myTestBeanStored);
}

}
{code}

* *The stack trace:*
{code}
org.apache.solr.common.SolrException: version conflict for 
154ff2e0-621b-4eb0-a1d3-4bbe7ea01573 expected=-1 actual=1432619355523252224
at 
org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:404)
at 
org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:181)
at 
org.apache.solr.client.solrj.request.AbstractUpdateRequest.process(AbstractUpdateRequest.java:117)
at org.apache.solr.client.solrj.SolrServer.add(SolrServer.java:116)
at org.apache.solr.client.solrj.SolrServer.addBean(SolrServer.java:136)
at org.apache.solr.client.solrj.SolrServer.addBean(SolrServer.java:125)
at 
test.SolrJBeanTest.addBeanThenRollbackThenAddBeanThenRollbackTest(SolrJBeanTest.java:157)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:45)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:42)
at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:20)
at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:28)
at 
org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:30)
at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:263)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:68)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:47)
at org.junit.runners.ParentRunner$3.run(ParentRunner.java:231)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:60)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:229)
at org.junit.runners.ParentRunner.access$000(ParentRunner.java:50)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:222)
at org.junit.runners.ParentRunner.run(ParentRunner.java:300)
at 
org.eclipse.jdt.internal.junit4.runner.JUnit4TestReference.run(JUnit4TestReference.java:50)
at 
org.eclipse.jdt.internal.junit.runner.TestExecution.run(TestExecution.java:38)
at 
org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:467)
at 
org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:683)
at 
org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.run(RemoteTestRunner.java:390)
at 
org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.main(RemoteTestRunner.java:197)
{code}


* *The test class:*
{code:java}
package test;

import java.io.Serializable;
import java.util.Date;
import java.util.List;
import java.util.Locale;
import java.util.UUID;

import junit.framework.Assert;

import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.beans.Field;
import org.apache.solr.client.solrj.impl.BinaryRequestWriter;
import 

[jira] [Resolved] (SOLR-4737) Update Guava to 14.01

2013-04-18 Thread Mark Miller (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-4737?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Miller resolved SOLR-4737.
---

Resolution: Fixed

 Update Guava to 14.01
 -

 Key: SOLR-4737
 URL: https://issues.apache.org/jira/browse/SOLR-4737
 Project: Solr
  Issue Type: Task
Reporter: Mark Miller
Assignee: Mark Miller
Priority: Minor
 Fix For: 5.0, 4.4




--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-4733) Rollback does not work correctly with tlog and optimistic concurrency updates

2013-04-18 Thread Hoss Man (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-4733?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hoss Man updated SOLR-4733:
---

Attachment: SOLR-4733.patch

patch demonstrating the crux of Mark's reported failure as an extension of 
existing tests

* general improvement to BasicFunctionalityTest setup to randomize use of tlog 
for the whole class
* new BasicFunctionalityTest.testRollbackWithOptimisticConcurrency that covers 
the basics of this issue (leveraging the randomized use of tlog)
* new TestUpdate.testRollbackWithOptimisticConcurrency that always uses tlog 
and goes into more depth verifying that the optimistic-concurrency update 
logic is rolled back properly.

As things stand, the new BasicFunctionalityTest test fails 50% of the time (if 
and only if tlog is used) and the new TestUpdate test fails consistently.

 Rollback does not work correctly with tlog and optimistic concurrency updates
 -

 Key: SOLR-4733
 URL: https://issues.apache.org/jira/browse/SOLR-4733
 Project: Solr
  Issue Type: Bug
Affects Versions: 4.2.1
 Environment: Ubuntu 12.04.2 LTS
Reporter: Mark S
  Labels: solrj
 Attachments: SOLR-4733.patch


 When using the updateLog, attempting to roll back atomic updates causes 
 post-rollback atomic updates to still report a conflict with the version 
 assigned to the update posted prior to the rollback

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: [JENKINS] Lucene-Solr-4.x-Linux (64bit/jdk1.8.0-ea-b84) - Build # 5180 - Failure!

2013-04-18 Thread Chris Hostetter

FYI: this seed reproduces the OOM for me using both java6 and java7 on the 
4x branch, so definitely not just a java8 thing...

ant test  -Dtestcase=TestFunctionQuerySort 
-Dtests.method=testSearchAfterWhenSortingByFunctionValues 
-Dtests.seed=663D470D4C55B8C1 -Dtests.multiplier=3 -Dtests.slow=true 
-Dtests.locale=fi_FI -Dtests.timezone=Egypt -Dtests.file.encoding=UTF-8
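
For reference, the allocation that dies here is the up-front heap array 
inside Lucene's PriorityQueue. A sketch of just the failure mode, assuming 
the requested hit count ends up near Integer.MAX_VALUE (this is not the 
actual Lucene source):

    // PriorityQueue sizes its backing array eagerly from maxSize, so a huge
    // requested hit count fails before a single document is collected.
    public class PQAllocSketch {
        public static void main(String[] args) {
            int maxSize = Integer.MAX_VALUE - 1;      // hypothetical hit count
            Object[] heap = new Object[maxSize + 1];  // throws OutOfMemoryError:
                                                      //   Requested array size exceeds VM limit
            System.out.println(heap.length);          // never reached
        }
    }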


: Date: Fri, 19 Apr 2013 01:43:58 + (UTC)
: From: Policeman Jenkins Server jenk...@thetaphi.de
: Reply-To: dev@lucene.apache.org
: To: dev@lucene.apache.org
: Subject: [JENKINS] Lucene-Solr-4.x-Linux (64bit/jdk1.8.0-ea-b84) - Build #
: 5180 - Failure!
: 
: Build: http://jenkins.thetaphi.de/job/Lucene-Solr-4.x-Linux/5180/
: Java: 64bit/jdk1.8.0-ea-b84 -XX:+UseParallelGC
: 
: 1 tests failed.
: REGRESSION:  
org.apache.lucene.queries.function.TestFunctionQuerySort.testSearchAfterWhenSortingByFunctionValues
: 
: Error Message:
: Requested array size exceeds VM limit
: 
: Stack Trace:
: java.lang.OutOfMemoryError: Requested array size exceeds VM limit
:   at 
__randomizedtesting.SeedInfo.seed([663D470D4C55B8C1:985D06C8F4923F71]:0)
:   at org.apache.lucene.util.PriorityQueue.<init>(PriorityQueue.java:64)
:   at org.apache.lucene.util.PriorityQueue.<init>(PriorityQueue.java:37)
:   at 
org.apache.lucene.search.FieldValueHitQueue.<init>(FieldValueHitQueue.java:138)
:   at 
org.apache.lucene.search.FieldValueHitQueue.<init>(FieldValueHitQueue.java:34)
:   at 
org.apache.lucene.search.FieldValueHitQueue$OneComparatorFieldValueHitQueue.<init>(FieldValueHitQueue.java:63)
:   at 
org.apache.lucene.search.FieldValueHitQueue.create(FieldValueHitQueue.java:171)
:   at 
org.apache.lucene.search.TopFieldCollector.create(TopFieldCollector.java:1123)
:   at org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:526)
:   at org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:501)
:   at org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:378)
:   at 
org.apache.lucene.queries.function.TestFunctionQuerySort.testSearchAfterWhenSortingByFunctionValues(TestFunctionQuerySort.java:72)
:   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
:   at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
:   at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
:   at java.lang.reflect.Method.invoke(Method.java:487)
:   at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1559)
:   at 
com.carrotsearch.randomizedtesting.RandomizedRunner.access$600(RandomizedRunner.java:79)
:   at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:737)
:   at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:773)
:   at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:787)
:   at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
:   at 
org.apache.lucene.util.TestRuleFieldCacheSanity$1.evaluate(TestRuleFieldCacheSanity.java:51)
:   at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
:   at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
:   at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
:   at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:70)
:   at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
:   at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
:   at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:358)
:   at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:782)
:   at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:442)
:   at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:746)
: 
: 
: 
: 
: Build Log:
: [...truncated 7699 lines...]
: [junit4:junit4] Suite: 
org.apache.lucene.queries.function.TestFunctionQuerySort
: [junit4:junit4]   2 NOTE: reproduce with: ant test  
-Dtestcase=TestFunctionQuerySort 
-Dtests.method=testSearchAfterWhenSortingByFunctionValues 
-Dtests.seed=663D470D4C55B8C1 -Dtests.multiplier=3 -Dtests.slow=true 
-Dtests.locale=fi_FI -Dtests.timezone=Egypt -Dtests.file.encoding=UTF-8
: [junit4:junit4] ERROR   0.62s J0 | 
TestFunctionQuerySort.testSearchAfterWhenSortingByFunctionValues 
: [junit4:junit4] Throwable #1: java.lang.OutOfMemoryError: Requested 
array size exceeds VM limit
: [junit4:junit4]  at 

[jira] [Commented] (SOLR-4733) Rollback does not work correctly with tlog and optimistic concurrency updates

2013-04-18 Thread Mark S (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4733?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13635974#comment-13635974
 ] 

Mark S commented on SOLR-4733:
--

Thanks for the comments, I definitely appreciate you taking the time.

My test case should be very self-contained; all that is required is the URL 
of your Solr instance (Default value:  http://localhost:8080/solr/collection1) 
and JUnit on the classpath.  I should have mentioned that I am using a vanilla 
Solr deployment running inside a Tomcat instance on Ubuntu.  As far as I 
recall, no schema changes or anything.

I try to keep my tests bare bones to reduce confusion.  The test class I 
provided here is different from the test class I provided in SOLR-4605.  The 
SolrJBeanTest here has only three test methods: addBeanTest(), 
addBeanThenRollbackTest() and 
addBeanThenRollbackThenAddBeanThenRollbackTest().  I included the first two 
test methods as a system and configuration check, with the third test method, 
addBeanThenRollbackThenAddBeanThenRollbackTest(), as a means of highlighting 
the problem.


 Rollback does not work correctly with tlog and optimistic concurrency updates
 -

 Key: SOLR-4733
 URL: https://issues.apache.org/jira/browse/SOLR-4733
 Project: Solr
  Issue Type: Bug
Affects Versions: 4.2.1
 Environment: Ubuntu 12.04.2 LTS
Reporter: Mark S
  Labels: solrj
 Attachments: SOLR-4733.patch


 When using the updateLog, attempting to roll back atomic updates causes 
 post-rollback atomic updates to still report a conflict with the version 
 assigned to the update posted prior to the rollback

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-4733) Rollback does not work correctly with tlog and optimistic concurrency updates

2013-04-18 Thread Mark S (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4733?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13635974#comment-13635974
 ] 

Mark S edited comment on SOLR-4733 at 4/19/13 2:50 AM:
---

Thanks for the comments, I definitely appreciate you taking the time.

My test case should be very self-contained; all that is required is the URL 
of your Solr instance (Default value:  http://localhost:8080/solr/collection1) 
and JUnit on the classpath.  I should have mentioned that I am using a vanilla 
Solr deployment running inside a Tomcat instance on Ubuntu.  As far as I 
recall, no schema changes or anything.

I try to keep my tests bare bones to reduce confusion.  The test class I 
provided here is different from the test class I provided in SOLR-4605.  The 
SolrJBeanTest here has only three test methods: addBeanTest(), 
addBeanThenRollbackTest() and 
addBeanThenRollbackThenAddBeanThenRollbackTest().  I included the first two 
test methods as a system and configuration check, with the third test method, 
addBeanThenRollbackThenAddBeanThenRollbackTest(), as a means of highlighting 
the problem.


  was (Author: marks1900):
Thanks for the comments, I definitely appreciate the you taking the time.

My test case should be very self contained, and all that is required is the url 
of your Solr instance (Default value:  http://localhost:8080/solr/collection1) 
and JUnit on the classpath.  I should have mentioned that I am using a vanilla 
Solr deployment running inside of tomcat instance on Ubuntu.  As far as I 
recall, no schema changes or anything.

I try to keep my test bare bones to reduce confusion.  The test class I 
provided here is different from the test class I provided SOLR-4605.   The 
SolrJBeanTest here has only three test methods: addBeanTest(), 
addBeanThenRollbackTest() and 
addBeanThenRollbackThenAddBeanThenRollbackTest().  I included the first two 
method tests as a system and configuration check, with the 3rd test method 
addBeanThenRollbackThenAddBeanThenRollbackTest() as a means to highlight the 
problem.

  
 Rollback does not work correctly with tlog and optimistic concurrency updates
 -

 Key: SOLR-4733
 URL: https://issues.apache.org/jira/browse/SOLR-4733
 Project: Solr
  Issue Type: Bug
Affects Versions: 4.2.1
 Environment: Ubuntu 12.04.2 LTS
Reporter: Mark S
  Labels: solrj
 Attachments: SOLR-4733.patch


 When using the updateLog, attempting to roll back atomic updates causes 
 post-rollback atomic updates to still report a conflict with the version 
 assigned to the update posted prior to the rollback

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: [JENKINS] Lucene-Solr-4.x-Linux (64bit/jdk1.8.0-ea-b84) - Build # 5180 - Failure!

2013-04-18 Thread Robert Muir
It's this issue: https://issues.apache.org/jira/browse/LUCENE-4938

But my questions on the issue need to be answered before anything is
committed.

On Thu, Apr 18, 2013 at 10:40 PM, Chris Hostetter
hossman_luc...@fucit.orgwrote:


 FYI: this seed reproduces the OOM for me using both java6 and java7 on the
 4x branch, so definitely not just a java8 thing...

 ant test  -Dtestcase=TestFunctionQuerySort
 -Dtests.method=testSearchAfterWhenSortingByFunctionValues
 -Dtests.seed=663D470D4C55B8C1 -Dtests.multiplier=3 -Dtests.slow=true
 -Dtests.locale=fi_FI -Dtests.timezone=Egypt -Dtests.file.encoding=UTF-8


 : Date: Fri, 19 Apr 2013 01:43:58 + (UTC)
 : From: Policeman Jenkins Server jenk...@thetaphi.de
 : Reply-To: dev@lucene.apache.org
 : To: dev@lucene.apache.org
 : Subject: [JENKINS] Lucene-Solr-4.x-Linux (64bit/jdk1.8.0-ea-b84) - Build
 #
 : 5180 - Failure!
 :
 : Build: http://jenkins.thetaphi.de/job/Lucene-Solr-4.x-Linux/5180/
 : Java: 64bit/jdk1.8.0-ea-b84 -XX:+UseParallelGC
 :
 : 1 tests failed.
 : REGRESSION:
  
 org.apache.lucene.queries.function.TestFunctionQuerySort.testSearchAfterWhenSortingByFunctionValues
 :
 : Error Message:
 : Requested array size exceeds VM limit
 :
 : Stack Trace:
 : java.lang.OutOfMemoryError: Requested array size exceeds VM limit
 :   at
 __randomizedtesting.SeedInfo.seed([663D470D4C55B8C1:985D06C8F4923F71]:0)
:   at
 org.apache.lucene.util.PriorityQueue.<init>(PriorityQueue.java:64)
:   at
 org.apache.lucene.util.PriorityQueue.<init>(PriorityQueue.java:37)
:   at
 org.apache.lucene.search.FieldValueHitQueue.<init>(FieldValueHitQueue.java:138)
:   at
 org.apache.lucene.search.FieldValueHitQueue.<init>(FieldValueHitQueue.java:34)
:   at
 org.apache.lucene.search.FieldValueHitQueue$OneComparatorFieldValueHitQueue.<init>(FieldValueHitQueue.java:63)
 :   at
 org.apache.lucene.search.FieldValueHitQueue.create(FieldValueHitQueue.java:171)
 :   at
 org.apache.lucene.search.TopFieldCollector.create(TopFieldCollector.java:1123)
 :   at
 org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:526)
 :   at
 org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:501)
 :   at
 org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:378)
 :   at
 org.apache.lucene.queries.function.TestFunctionQuerySort.testSearchAfterWhenSortingByFunctionValues(TestFunctionQuerySort.java:72)
 :   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 :   at
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
 :   at
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 :   at java.lang.reflect.Method.invoke(Method.java:487)
 :   at
 com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1559)
 :   at
 com.carrotsearch.randomizedtesting.RandomizedRunner.access$600(RandomizedRunner.java:79)
 :   at
 com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:737)
 :   at
 com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:773)
 :   at
 com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:787)
 :   at
 org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
 :   at
 org.apache.lucene.util.TestRuleFieldCacheSanity$1.evaluate(TestRuleFieldCacheSanity.java:51)
 :   at
 org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
 :   at
 com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
 :   at
 org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
 :   at
 org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:70)
 :   at
 org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
 :   at
 com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
 :   at
 com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:358)
 :   at
 com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:782)
 :   at
 com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:442)
 :   at
 com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:746)
 :
 :
 :
 :
 : Build Log:
 : [...truncated 7699 lines...]
 : [junit4:junit4] Suite:
 org.apache.lucene.queries.function.TestFunctionQuerySort
 : [junit4:junit4]   2 NOTE: reproduce with: ant test
  -Dtestcase=TestFunctionQuerySort
 -Dtests.method=testSearchAfterWhenSortingByFunctionValues
 -Dtests.seed=663D470D4C55B8C1 -Dtests.multiplier=3 -Dtests.slow=true
 -Dtests.locale=fi_FI -Dtests.timezone=Egypt 

[JENKINS] Lucene-Solr-4.x-Linux (32bit/jdk1.7.0_21) - Build # 5181 - Still Failing!

2013-04-18 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-4.x-Linux/5181/
Java: 32bit/jdk1.7.0_21 -server -XX:+UseG1GC

All tests passed

Build Log:
[...truncated 21503 lines...]
check-licenses:
 [echo] License check under: 
/mnt/ssd/jenkins/workspace/Lucene-Solr-4.x-Linux/solr
 [licenses] MISSING sha1 checksum file for: 
/mnt/ssd/jenkins/workspace/Lucene-Solr-4.x-Linux/solr/core/lib/guava-14.0.1.jar
 [licenses] Scanned 102 JAR file(s) for licenses (in 0.62s.), 1 error(s).

BUILD FAILED
/mnt/ssd/jenkins/workspace/Lucene-Solr-4.x-Linux/build.xml:381: The following 
error occurred while executing this line:
/mnt/ssd/jenkins/workspace/Lucene-Solr-4.x-Linux/build.xml:67: The following 
error occurred while executing this line:
/mnt/ssd/jenkins/workspace/Lucene-Solr-4.x-Linux/solr/build.xml:232: The 
following error occurred while executing this line:
/mnt/ssd/jenkins/workspace/Lucene-Solr-4.x-Linux/lucene/tools/custom-tasks.xml:43:
 License check failed. Check the logs.

Total time: 41 minutes 15 seconds
Build step 'Invoke Ant' marked build as failure
Description set: Java: 32bit/jdk1.7.0_21 -server -XX:+UseG1GC
Archiving artifacts
Recording test results
Email was triggered for: Failure
Sending email for trigger: Failure



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

[jira] [Commented] (SOLR-4584) Request proxy mechanism not work if rows param is equal to zero

2013-04-18 Thread Commit Tag Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4584?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13635997#comment-13635997
 ] 

Commit Tag Bot commented on SOLR-4584:
--

[trunk commit] markrmiller
http://svn.apache.org/viewvc?view=revision&revision=1469672

SOLR-4716,SOLR-4584: SolrCloud request proxying does not work on Tomcat and 
perhaps other non Jetty containers.

 Request proxy mechanism not work if rows param is equal to zero
 ---

 Key: SOLR-4584
 URL: https://issues.apache.org/jira/browse/SOLR-4584
 Project: Solr
  Issue Type: Bug
Affects Versions: 4.2
 Environment: Linux Centos 6, Tomcat 7
Reporter: Yago Riveiro
Assignee: Mark Miller
 Fix For: 4.3, 5.0

 Attachments: Screen Shot 00.png, Screen Shot 01.png, Screen Shot 
 02.png, Screen Shot 03.png, select


 If I try to do a request like:
 http://192.168.20.47:8983/solr/ST-3A856BBCA3_12/select?q=*:*&rows=0
 The request fails. The Solr UI logging shows this error:
 {code:java} 
 null:org.apache.solr.common.SolrException: Error trying to proxy request for 
 url: http://192.168.20.47:8983/solr/ST-3A856BBCA3_12/select
 {code} 
 Chrome says:
 "This webpage is not available. The webpage at 
 http://192.168.20.47:8983/solr/ST-038412DCC2_0612/query?q=id:*&rows=0 might 
 be temporarily down or it may have moved permanently to a new web address. 
 Error 321 (net::ERR_INVALID_CHUNKED_ENCODING): Unknown error."
 If the rows param is set to 4 or higher, the query returns data as expected.
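
A minimal SolrJ sketch of the failing request (host and core name taken from 
the report above; treat them as placeholders):

{code:java}
import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.impl.HttpSolrServer;
import org.apache.solr.client.solrj.response.QueryResponse;

public class RowsZeroProxySketch {
    public static void main(String[] args) throws Exception {
        // A node that has to proxy the request to a remote core.
        HttpSolrServer server = new HttpSolrServer("http://192.168.20.47:8983/solr/ST-3A856BBCA3_12");
        SolrQuery q = new SolrQuery("*:*");
        q.setRows(0); // rows=0 trips the broken chunked response; rows>=4 works
        QueryResponse rsp = server.query(q);
        System.out.println("numFound=" + rsp.getResults().getNumFound());
        server.shutdown();
    }
}
{code}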

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-4716) this bug for fixed bug SOLR-4210. proxy request for remote core

2013-04-18 Thread Commit Tag Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4716?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13635996#comment-13635996
 ] 

Commit Tag Bot commented on SOLR-4716:
--

[trunk commit] markrmiller
http://svn.apache.org/viewvc?view=revision&revision=1469672

SOLR-4716,SOLR-4584: SolrCloud request proxying does not work on Tomcat and 
perhaps other non Jetty containers.

 this bug for fixed bug SOLR-4210. proxy request for remote core 
 

 Key: SOLR-4716
 URL: https://issues.apache.org/jira/browse/SOLR-4716
 Project: Solr
  Issue Type: Bug
  Components: SolrCloud
Affects Versions: 4.2
Reporter: Po Rui
 Attachments: SOLR-4716.patch


 For bug SOLR-4210: remoteQuery() has an issue on Tomcat. It works in Jetty 
 but not in Tomcat (and maybe some other web servers too) because 
 IOUtils.closeQuietly(os) doesn't flush before closing. This leads to a "Bogus 
 chunk size" error because the transfer-encoding is chunked and the 
 Content-Length was set to a value other than -1, so we should invoke flush() 
 explicitly before close.
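
A hedged sketch of the fix the description calls for (not the actual patch): 
flush the proxied output stream explicitly before handing it to 
IOUtils.closeQuietly(), so containers like Tomcat emit the final chunk 
correctly.

{code:java}
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import org.apache.commons.io.IOUtils;

public final class ProxyCopySketch {
    // Copy a remote core's response to the client, flushing before close.
    static void copyAndClose(InputStream in, OutputStream os) throws IOException {
        try {
            byte[] buf = new byte[8192];
            for (int n; (n = in.read(buf)) != -1; ) {
                os.write(buf, 0, n);
            }
            os.flush(); // the missing step: closing quietly alone won't flush on Tomcat
        } finally {
            IOUtils.closeQuietly(os);
            IOUtils.closeQuietly(in);
        }
    }
}
{code}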

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-4716) this bug for fixed bug SOLR-4210. proxy request for remote core

2013-04-18 Thread Commit Tag Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4716?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13636008#comment-13636008
 ] 

Commit Tag Bot commented on SOLR-4716:
--

[branch_4x commit] markrmiller
http://svn.apache.org/viewvc?view=revision&revision=1469676

SOLR-4716,SOLR-4584: SolrCloud request proxying does not work on Tomcat and 
perhaps other non Jetty containers.

 this bug for fixed bug SOLR-4210. proxy request for remote core 
 

 Key: SOLR-4716
 URL: https://issues.apache.org/jira/browse/SOLR-4716
 Project: Solr
  Issue Type: Bug
  Components: SolrCloud
Affects Versions: 4.2
Reporter: Po Rui
 Attachments: SOLR-4716.patch


 For bug SOLR-4210: remoteQuery() has an issue on Tomcat. It works in Jetty 
 but not in Tomcat (and maybe some other web servers too) because 
 IOUtils.closeQuietly(os) doesn't flush before closing. This leads to a "Bogus 
 chunk size" error because the transfer-encoding is chunked and the 
 Content-Length was set to a value other than -1, so we should invoke flush() 
 explicitly before close.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-4584) Request proxy mechanism not work if rows param is equal to zero

2013-04-18 Thread Commit Tag Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4584?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13636009#comment-13636009
 ] 

Commit Tag Bot commented on SOLR-4584:
--

[branch_4x commit] markrmiller
http://svn.apache.org/viewvc?view=revision&revision=1469676

SOLR-4716,SOLR-4584: SolrCloud request proxying does not work on Tomcat and 
perhaps other non Jetty containers.

 Request proxy mechanism not work if rows param is equal to zero
 ---

 Key: SOLR-4584
 URL: https://issues.apache.org/jira/browse/SOLR-4584
 Project: Solr
  Issue Type: Bug
Affects Versions: 4.2
 Environment: Linux Centos 6, Tomcat 7
Reporter: Yago Riveiro
Assignee: Mark Miller
 Fix For: 4.3, 5.0

 Attachments: Screen Shot 00.png, Screen Shot 01.png, Screen Shot 
 02.png, Screen Shot 03.png, select


 If I try to do a request like:
 http://192.168.20.47:8983/solr/ST-3A856BBCA3_12/select?q=*:*&rows=0
 The request fails. The Solr UI logging shows this error:
 {code:java} 
 null:org.apache.solr.common.SolrException: Error trying to proxy request for 
 url: http://192.168.20.47:8983/solr/ST-3A856BBCA3_12/select
 {code} 
 Chrome says:
 "This webpage is not available. The webpage at 
 http://192.168.20.47:8983/solr/ST-038412DCC2_0612/query?q=id:*&rows=0 might 
 be temporarily down or it may have moved permanently to a new web address. 
 Error 321 (net::ERR_INVALID_CHUNKED_ENCODING): Unknown error."
 If the rows param is set to 4 or higher, the query returns data as expected.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-4716) this bug for fixed bug SOLR-4210. proxy request for remote core

2013-04-18 Thread Commit Tag Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4716?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13636012#comment-13636012
 ] 

Commit Tag Bot commented on SOLR-4716:
--

[lucene_solr_4_3 commit] markrmiller
http://svn.apache.org/viewvc?view=revision&revision=1469677

SOLR-4716,SOLR-4584: SolrCloud request proxying does not work on Tomcat and 
perhaps other non Jetty containers.

 this bug for fixed bug SOLR-4210. proxy request for remote core 
 

 Key: SOLR-4716
 URL: https://issues.apache.org/jira/browse/SOLR-4716
 Project: Solr
  Issue Type: Bug
  Components: SolrCloud
Affects Versions: 4.2
Reporter: Po Rui
 Attachments: SOLR-4716.patch


 For bug SOLR-4210: remoteQuery() has an issue on Tomcat. It works in Jetty 
 but not in Tomcat (and maybe some other web servers too) because 
 IOUtils.closeQuietly(os) doesn't flush before closing. This leads to a "Bogus 
 chunk size" error because the transfer-encoding is chunked and the 
 Content-Length was set to a value other than -1, so we should invoke flush() 
 explicitly before close.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-4584) Request proxy mechanism not work if rows param is equal to zero

2013-04-18 Thread Commit Tag Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4584?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13636013#comment-13636013
 ] 

Commit Tag Bot commented on SOLR-4584:
--

[lucene_solr_4_3 commit] markrmiller
http://svn.apache.org/viewvc?view=revision&revision=1469677

SOLR-4716,SOLR-4584: SolrCloud request proxying does not work on Tomcat and 
perhaps other non Jetty containers.

 Request proxy mechanism not work if rows param is equal to zero
 ---

 Key: SOLR-4584
 URL: https://issues.apache.org/jira/browse/SOLR-4584
 Project: Solr
  Issue Type: Bug
Affects Versions: 4.2
 Environment: Linux Centos 6, Tomcat 7
Reporter: Yago Riveiro
Assignee: Mark Miller
 Fix For: 4.3, 5.0

 Attachments: Screen Shot 00.png, Screen Shot 01.png, Screen Shot 
 02.png, Screen Shot 03.png, select


 If I try to do a request like:
 http://192.168.20.47:8983/solr/ST-3A856BBCA3_12/select?q=*:*&rows=0
 The request fails. The Solr UI logging shows this error:
 {code:java} 
 null:org.apache.solr.common.SolrException: Error trying to proxy request for 
 url: http://192.168.20.47:8983/solr/ST-3A856BBCA3_12/select
 {code} 
 Chrome says:
 "This webpage is not available. The webpage at 
 http://192.168.20.47:8983/solr/ST-038412DCC2_0612/query?q=id:*&rows=0 might 
 be temporarily down or it may have moved permanently to a new web address. 
 Error 321 (net::ERR_INVALID_CHUNKED_ENCODING): Unknown error."
 If the rows param is set to 4 or higher, the query returns data as expected.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-4716) this bug for fixed bug SOLR-4210. proxy request for remote core

2013-04-18 Thread Mark Miller (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-4716?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Miller resolved SOLR-4716.
---

Resolution: Fixed
  Assignee: Mark Miller

Thanks Po!

 this bug for fixed bug SOLR-4210. proxy request for remote core 
 

 Key: SOLR-4716
 URL: https://issues.apache.org/jira/browse/SOLR-4716
 Project: Solr
  Issue Type: Bug
  Components: SolrCloud
Affects Versions: 4.2
Reporter: Po Rui
Assignee: Mark Miller
 Attachments: SOLR-4716.patch


 For bug SOLR-4210: remoteQuery() has an issue on Tomcat. It works in Jetty 
 but not in Tomcat (and maybe some other web servers too) because 
 IOUtils.closeQuietly(os) doesn't flush before closing. This leads to a "Bogus 
 chunk size" error because the transfer-encoding is chunked and the 
 Content-Length was set to a value other than -1, so we should invoke flush() 
 explicitly before close.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-4737) Update Guava to 14.01

2013-04-18 Thread Commit Tag Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4737?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13636016#comment-13636016
 ] 

Commit Tag Bot commented on SOLR-4737:
--

[trunk commit] markrmiller
http://svn.apache.org/viewvc?view=revision&revision=1469678

SOLR-4737: Update sha1 file
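
For reference: the MISSING sha1 checksum failures Jenkins reported for 
guava-14.0.1.jar are what this follow-up commit clears. Regenerating the 
checksum files is typically done with the build's jar-checksums ant target 
(an assumption about the workflow, not part of the commit message):

ant jar-checksums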

 Update Guava to 14.01
 -

 Key: SOLR-4737
 URL: https://issues.apache.org/jira/browse/SOLR-4737
 Project: Solr
  Issue Type: Task
Reporter: Mark Miller
Assignee: Mark Miller
Priority: Minor
 Fix For: 5.0, 4.4




--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-4737) Update Guava to 14.01

2013-04-18 Thread Commit Tag Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4737?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13636017#comment-13636017
 ] 

Commit Tag Bot commented on SOLR-4737:
--

[branch_4x commit] markrmiller
http://svn.apache.org/viewvc?view=revision&revision=1469679

SOLR-4737: Update sha1 file

 Update Guava to 14.01
 -

 Key: SOLR-4737
 URL: https://issues.apache.org/jira/browse/SOLR-4737
 Project: Solr
  Issue Type: Task
Reporter: Mark Miller
Assignee: Mark Miller
Priority: Minor
 Fix For: 5.0, 4.4




--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-Tests-4.x-Java6 - Build # 1529 - Failure

2013-04-18 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-4.x-Java6/1529/

All tests passed

Build Log:
[...truncated 21048 lines...]
check-licenses:
 [echo] License check under: 
/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-Tests-4.x-Java6/solr
 [licenses] MISSING sha1 checksum file for: 
/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-Tests-4.x-Java6/solr/core/lib/guava-14.0.1.jar
 [licenses] Scanned 102 JAR file(s) for licenses (in 1.27s.), 1 error(s).

BUILD FAILED
/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-Tests-4.x-Java6/build.xml:381:
 The following error occurred while executing this line:
/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-Tests-4.x-Java6/build.xml:67:
 The following error occurred while executing this line:
/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-Tests-4.x-Java6/solr/build.xml:232:
 The following error occurred while executing this line:
/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-Tests-4.x-Java6/lucene/tools/custom-tasks.xml:43:
 License check failed. Check the logs.

Total time: 64 minutes 40 seconds
Build step 'Invoke Ant' marked build as failure
Archiving artifacts
Recording test results
Email was triggered for: Failure
Sending email for trigger: Failure



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

[jira] [Resolved] (SOLR-4584) Request proxy mechanism not work if rows param is equal to zero

2013-04-18 Thread Mark Miller (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-4584?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Miller resolved SOLR-4584.
---

Resolution: Fixed

 Request proxy mechanism not work if rows param is equal to zero
 ---

 Key: SOLR-4584
 URL: https://issues.apache.org/jira/browse/SOLR-4584
 Project: Solr
  Issue Type: Bug
Affects Versions: 4.2
 Environment: Linux Centos 6, Tomcat 7
Reporter: Yago Riveiro
Assignee: Mark Miller
 Fix For: 4.3, 5.0

 Attachments: Screen Shot 00.png, Screen Shot 01.png, Screen Shot 
 02.png, Screen Shot 03.png, select


 If I try to do a request like:
 http://192.168.20.47:8983/solr/ST-3A856BBCA3_12/select?q=*:*&rows=0
 The request fails. The Solr UI logging shows this error:
 {code:java} 
 null:org.apache.solr.common.SolrException: Error trying to proxy request for 
 url: http://192.168.20.47:8983/solr/ST-3A856BBCA3_12/select
 {code} 
 Chrome says:
 "This webpage is not available. The webpage at 
 http://192.168.20.47:8983/solr/ST-038412DCC2_0612/query?q=id:*&rows=0 might 
 be temporarily down or it may have moved permanently to a new web address. 
 Error 321 (net::ERR_INVALID_CHUNKED_ENCODING): Unknown error."
 If the rows param is set to 4 or higher, the query returns data as expected.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-4358) SolrJ, by preventing multi-part post, loses key information about file name that Tika needs

2013-04-18 Thread Ryan McKinley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4358?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13636032#comment-13636032
 ] 

Ryan McKinley commented on SOLR-4358:
-

SolrCloud/Distributed search uses HttpSolrServer for internal communication -- 
so something must be fishy.

It is not clear to me how the tests are failing -- just that they are.

 SolrJ, by preventing multi-part post, loses key information about file name 
 that Tika needs
 ---

 Key: SOLR-4358
 URL: https://issues.apache.org/jira/browse/SOLR-4358
 Project: Solr
  Issue Type: Bug
  Components: clients - java
Affects Versions: 4.0
Reporter: Karl Wright
Assignee: Ryan McKinley
 Attachments: additional_changes.diff, SOLR-4358.patch, 
 SOLR-4358.patch, SOLR-4358.patch


 SolrJ accepts a ContentStream, which has a name field.  Within 
 HttpSolrServer.java, if SolrJ makes the decision to use multipart posts, this 
 filename is transmitted as part of the form boundary information.  However, 
 if SolrJ chooses not to use multipart post, the filename information is lost.
 This information is used by SolrCell (Tika) to make decisions about content 
 extraction, so it is very important that it makes it into Solr in one way or 
 another.  Either SolrJ should set appropriate equivalent headers to send the 
 filename automatically, or it should force multipart posts when this 
 information is present.
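
One hedged workaround sketch for keeping the file name visible to Tika even 
when the post is not multipart: pass it explicitly as the extract handler's 
resource.name hint. The URL, file name and handler path below are 
assumptions, not the attached patch:

{code:java}
import java.io.File;
import org.apache.solr.client.solrj.impl.HttpSolrServer;
import org.apache.solr.client.solrj.request.ContentStreamUpdateRequest;

public class ExtractFileNameSketch {
    public static void main(String[] args) throws Exception {
        HttpSolrServer server = new HttpSolrServer("http://localhost:8983/solr/collection1");
        File f = new File("report.pdf"); // hypothetical input file

        ContentStreamUpdateRequest req = new ContentStreamUpdateRequest("/update/extract");
        req.addFile(f, "application/pdf");
        // Send the name out-of-band so Tika sees it even without multipart:
        req.setParam("resource.name", f.getName());
        req.setParam("literal.id", "doc-1");
        server.request(req);
        server.commit();
        server.shutdown();
    }
}
{code}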

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org


