[jira] [Commented] (SOLR-6489) morphlines-cell tests fail after upgrade to TIKA 1.6

2014-09-09 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6489?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14126707#comment-14126707
 ] 

Uwe Schindler commented on SOLR-6489:
-

Hi Mark,

if this is impossible for you to fix, I can revert my commit. I did not want to 
rush anything; the bug was found by Jenkins after my commit (as I had no 
chance to test this locally).

In my opinion, disabling the Morphlines tests until this is fixed is a simpler 
solution than blocking the upgrade of TIKA entirely, especially as more people 
use plain Solr Cell than Morphlines. The main problem here (as said before) is 
that Solr depends on an external library that itself depends on an older 
version of Solr plus the specific TIKA version of that older Solr release. In 
my opinion, to fix the whole thing completely, the morphlines code (at least 
the Solr-relevant part) should also be donated to Solr and maintained in our 
repository, so it always matches the version of Solr.

I was also not expecting a failure, because TIKA did not change its APIs and 
the code compiled perfectly fine. To me, morphlines-cell was just a client of 
the extracting content handler; I was not aware that it uses TIKA on its own, 
bypassing the extracting content handler (which is not clear from looking at 
the tests). Maybe you can give me a short introduction to what it does in 
addition to Solr Cell, and why those features cannot live in Solr Cell, so 
people who want to use them do not depend on Hadoop support.

 morphlines-cell tests fail after upgrade to TIKA 1.6
 

 Key: SOLR-6489
 URL: https://issues.apache.org/jira/browse/SOLR-6489
 Project: Solr
  Issue Type: Bug
  Components: Tests
Affects Versions: 4.11
Reporter: Uwe Schindler
Assignee: Mark Miller
 Fix For: 5.0, 4.11


 After upgrading to Apache TIKA 1.6 (SOLR-6488), solr-morphlines-cell tests 
 fail with scripting error messages.
 Because I don't understand the crazy configuration file format and cannot 
 figure out the test setup, I have to give up and hope that somebody else can 
 take care of it. In addition, on my own machines, none of Hadoop works at 
 all, so I cannot debug (Windows).
 The whole Morphlines setup is not really good, because Solr core depends on 
 another TIKA version than the included morphlines libraries. This is not a 
 good situation for Solr: we should be able to upgrade any of our core 
 components without depending on external libraries that themselves depend on 
 older versions of Solr!



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-NightlyTests-4.x - Build # 615 - Still Failing

2014-09-09 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-4.x/615/

6 tests failed.
FAILED:  org.apache.solr.client.solrj.SolrExampleBinaryTest.testChildDoctransformer

Error Message:
Expected mime type application/octet-stream but got text/html. <html> <head> 
<meta http-equiv="Content-Type" content="text/html;charset=ISO-8859-1"/> 
<title>Error 500 Server Error</title> </head> <body> <h2>HTTP ERROR: 500</h2> 
<p>Problem accessing /solr/collection1/select. Reason: <pre>Server 
Error</pre></p> <hr /><i><small>Powered by Jetty://</small></i> </body> </html>

Stack Trace:
org.apache.solr.client.solrj.impl.HttpSolrServer$RemoteSolrException: Expected 
mime type application/octet-stream but got text/html. <html>
<head>
<meta http-equiv="Content-Type" content="text/html;charset=ISO-8859-1"/>
<title>Error 500 Server Error</title>
</head>
<body>
<h2>HTTP ERROR: 500</h2>
<p>Problem accessing /solr/collection1/select. Reason:
<pre>    Server Error</pre></p>
<hr /><i><small>Powered by Jetty://</small></i>
</body>
</html>

        at __randomizedtesting.SeedInfo.seed([8F45AECB1F59C56D:FC9FB1519341B26B]:0)
        at org.apache.solr.client.solrj.impl.HttpSolrServer.executeMethod(HttpSolrServer.java:512)
        at org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:210)
        at org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:206)
        at org.apache.solr.client.solrj.request.QueryRequest.process(QueryRequest.java:91)
        at org.apache.solr.client.solrj.SolrServer.query(SolrServer.java:301)
        at org.apache.solr.client.solrj.SolrExampleTests.testChildDoctransformer(SolrExampleTests.java:1373)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:606)
        at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
        at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827)
        at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
        at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877)
        at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
        at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
        at org.apache.lucene.util.TestRuleFieldCacheSanity$1.evaluate(TestRuleFieldCacheSanity.java:51)
        at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
        at com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
        at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)

[jira] [Created] (LUCENE-5928) WildcardQuery may has memory leak

2014-09-09 Thread Littlestar (JIRA)
Littlestar created LUCENE-5928:
--

 Summary: WildcardQuery may has memory leak
 Key: LUCENE-5928
 URL: https://issues.apache.org/jira/browse/LUCENE-5928
 Project: Lucene - Core
  Issue Type: Bug
  Components: core/search
Affects Versions: 4.9
 Environment: SSD 1.5T, RAM 256G 10*1
Reporter: Littlestar


data 800G, records 15*1*1.
one search thread.
content:???
content:*
content:*1
content:*2
content:*3

jvm heap=96G, but the JVM memory usage is over 130 GB?
The more wildcard queries I run, the more memory is used.

Does Lucene search/index use a lot of DirectMemory or Native Memory?
I tried -XX:MaxDirectMemorySize=4g, but it did not help.


Thanks.






[jira] [Commented] (LUCENE-5928) WildcardQuery may has memory leak

2014-09-09 Thread Littlestar (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5928?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14126800#comment-14126800
 ] 

Littlestar commented on LUCENE-5928:


When I changed to NIO, it works OK.
Does MMap use a lot of native memory?

 WildcardQuery may has memory leak
 -

 Key: LUCENE-5928
 URL: https://issues.apache.org/jira/browse/LUCENE-5928
 Project: Lucene - Core
  Issue Type: Bug
  Components: core/search
Affects Versions: 4.9
 Environment: SSD 1.5T, RAM 256G 10*1
Reporter: Littlestar

 data 800G, records 15*1*1.
 one search thread.
 content:???
 content:*
 content:*1
 content:*2
 content:*3
 jvm heap=96G, but the jvm memusage over 130g?
 run more wildcard, use memory more
 Does Lucene search/index use a lot of DirectMemory or Native Memory?
 I tried -XX:MaxDirectMemorySize=4g, but it did not help.
 Thanks.






Re: Confused about writing a ZK state.

2014-09-09 Thread Noble Paul
It would be nice if you substantiated this with a use case.

The command could be something like a setreplicaprop collection API.
Yeah, it should be written by the overseer and it should be an overseer
command. (I'm not endorsing the idea, just making an implementation suggestion)


On Mon, Sep 8, 2014 at 10:24 PM, Erick Erickson erickerick...@gmail.com
wrote:

 I'm just not getting it. But then again, it's late and the code is
 unfamiliar.

 Anyway, I'm working on SOLR-6491 for which I want to have a
 preferredLeader property in ZK.

 I _think_ this fits best as a property in the same place as the
 leader prop and it would be a boolean. I.e. the cluster state for
 collection1/shards/shard1/replicas/core_node_2 might have a
 preferred_leader attribute that could be set to true. This would
 be totally independent of whether or not leader was true, although
 they would very often be the same. The preferredLeader is really
 just supposed to be a hint at leader-election time.

 Anyway, all this seems well and good but I don't see a convenient way
 to set/clear a single property in a single node in clusterstate. What
 I think I'm seeing is that the cluster state is only written by the
 Overseer and the Overseer doesn't deal with this case yet. Things like
 updateState seem like they have another purpose.

 So I'm guessing that I need to write another command for Overseer to
 implement, something like setnodeprop that takes a collection, shard,
 node, and one or more (property/propval) pairs. Then, to change the
 clusterstate I'd put together a ZkNodeProps and put it in the queue
 returned from Overseer.getInQueue(zkClient). Then wait for it to be
 processed before declaring victory (actually I'd only have to wait in
 the test I think).
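 The queue-based flow described here can be sketched as a toy single-writer
 model (plain Java, not Solr's actual Overseer/ZkNodeProps API; all class and
 key names below are hypothetical): only one "overseer" thread ever mutates
 the shared state, and clients merely enqueue commands:

```java
import java.util.Map;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.LinkedBlockingQueue;

// Toy model of the single-writer pattern: clients submit commands to a
// queue; only the overseer thread applies them to the cluster state.
public class OverseerSketch {
    // command: [collection/shard/node, propName, propValue]
    static final BlockingQueue<String[]> queue = new LinkedBlockingQueue<>();
    static final Map<String, String> clusterState = new ConcurrentHashMap<>();

    static void overseerLoop() throws InterruptedException {
        while (true) {
            String[] cmd = queue.take();
            if (cmd.length == 0) return;                      // poison pill: shut down
            clusterState.put(cmd[0] + "/" + cmd[1], cmd[2]);  // apply the prop change
        }
    }

    public static void main(String[] args) throws Exception {
        Thread overseer = new Thread(() -> {
            try { overseerLoop(); } catch (InterruptedException ignored) {}
        });
        overseer.start();
        // client: enqueue a "setnodeprop"-style command, then wait for the
        // overseer to process the queue before declaring victory
        queue.put(new String[]{"collection1/shard1/core_node_2", "preferred_leader", "true"});
        queue.put(new String[0]);                             // poison pill
        overseer.join();
        System.out.println(clusterState.get("collection1/shard1/core_node_2/preferred_leader"));
    }
}
```

 The point of the pattern is that no client ever races another writer: the
 queue serializes all state changes through one thread, which is why waiting
 for the command to be processed is necessary in a test.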

 Mostly I'm looking for whether this is on the right track or
 completely off base. Also giving folks a chance to object before I
 invest the time and effort in something totally useless.

 Thanks!
 Erick





-- 
-
Noble Paul


[jira] [Assigned] (SOLR-6485) ReplicationHandler should have an option to throttle the speed of replication

2014-09-09 Thread Noble Paul (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6485?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Noble Paul reassigned SOLR-6485:


Assignee: Noble Paul

 ReplicationHandler should have an option to throttle the speed of replication
 -

 Key: SOLR-6485
 URL: https://issues.apache.org/jira/browse/SOLR-6485
 Project: Solr
  Issue Type: Improvement
Reporter: Varun Thacker
Assignee: Noble Paul
 Attachments: SOLR-6485.patch, SOLR-6485.patch


 The ReplicationHandler should have an option to throttle the speed of 
 replication.
 It is useful for people who want to bring up nodes in their SolrCloud 
 cluster, or who have a backup/restore API, without eating up all their 
 network bandwidth while replicating.
 I am writing a test case and will attach a patch shortly.
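 One common way to implement such throttling (a sketch of the general
 technique only, not necessarily what the attached SOLR-6485.patch does; the
 class name is hypothetical) is to wrap the stream that feeds index files
 over the wire and sleep whenever the observed rate exceeds a configured
 bytes-per-second budget:

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;

// Sleep-based rate limiter: before each read, compare how long the bytes
// read so far *should* have taken at the budget rate with how long has
// actually elapsed, and sleep off the difference.
public class ThrottledInputStream extends InputStream {
    private final InputStream in;
    private final long bytesPerSecond;
    private long bytesRead = 0;
    private final long startNanos = System.nanoTime();

    public ThrottledInputStream(InputStream in, long bytesPerSecond) {
        this.in = in;
        this.bytesPerSecond = bytesPerSecond;
    }

    @Override
    public int read(byte[] b, int off, int len) throws IOException {
        throttle();
        int n = in.read(b, off, len);
        if (n > 0) bytesRead += n;
        return n;
    }

    @Override
    public int read() throws IOException {
        throttle();
        int c = in.read();
        if (c >= 0) bytesRead++;
        return c;
    }

    private void throttle() throws IOException {
        long expectedNanos = bytesRead * 1_000_000_000L / bytesPerSecond;
        long sleepNanos = expectedNanos - (System.nanoTime() - startNanos);
        if (sleepNanos > 0) {
            try {
                Thread.sleep(sleepNanos / 1_000_000L, (int) (sleepNanos % 1_000_000L));
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                throw new IOException("interrupted while throttling", e);
            }
        }
    }

    public static void main(String[] args) throws IOException {
        // copy 50 KB at a 100 KB/s budget: should take roughly half a second
        byte[] data = new byte[50_000];
        long t0 = System.nanoTime();
        InputStream s = new ThrottledInputStream(new ByteArrayInputStream(data), 100_000);
        byte[] buf = new byte[8192];
        long total = 0, n;
        while ((n = s.read(buf, 0, buf.length)) > 0) total += n;
        System.out.println(total + " bytes in " + (System.nanoTime() - t0) / 1e9 + " s");
    }
}
```

 Wrapping at the stream level keeps the throttle orthogonal to the
 replication logic itself, which is also why reusing one stream class (as
 suggested in the comment below) avoids duplicating this code.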






[jira] [Commented] (SOLR-6485) ReplicationHandler should have an option to throttle the speed of replication

2014-09-09 Thread Noble Paul (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6485?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14126813#comment-14126813
 ] 

Noble Paul commented on SOLR-6485:
--

Any reason why you could not reuse the DirectoryFileInputStream class? I see a 
lot of code being duplicated.

 ReplicationHandler should have an option to throttle the speed of replication
 -

 Key: SOLR-6485
 URL: https://issues.apache.org/jira/browse/SOLR-6485
 Project: Solr
  Issue Type: Improvement
Reporter: Varun Thacker
Assignee: Noble Paul
 Attachments: SOLR-6485.patch, SOLR-6485.patch


 The ReplicationHandler should have an option to throttle the speed of 
 replication.
 It is useful for people who want to bring up nodes in their SolrCloud 
 cluster, or who have a backup/restore API, without eating up all their 
 network bandwidth while replicating.
 I am writing a test case and will attach a patch shortly.






[JENKINS-MAVEN] Lucene-Solr-Maven-trunk #1202: POMs out of sync

2014-09-09 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Maven-trunk/1202/

2 tests failed.
FAILED:  org.apache.solr.handler.TestReplicationHandlerBackup.org.apache.solr.handler.TestReplicationHandlerBackup

Error Message:
1 thread leaked from SUITE scope at 
org.apache.solr.handler.TestReplicationHandlerBackup: 
   1) Thread[id=12315, name=Thread-5230, state=RUNNABLE, 
group=TGRP-TestReplicationHandlerBackup]
        at java.net.PlainSocketImpl.socketConnect(Native Method)
        at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339)
        at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200)
        at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182)
        at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
        at java.net.Socket.connect(Socket.java:579)
        at java.net.Socket.connect(Socket.java:528)
        at sun.net.NetworkClient.doConnect(NetworkClient.java:180)
        at sun.net.www.http.HttpClient.openServer(HttpClient.java:432)
        at sun.net.www.http.HttpClient.openServer(HttpClient.java:527)
        at sun.net.www.http.HttpClient.parseHTTP(HttpClient.java:652)
        at sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1323)
        at java.net.URL.openStream(URL.java:1037)
        at org.apache.solr.handler.TestReplicationHandlerBackup$BackupThread.run(TestReplicationHandlerBackup.java:313)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: 1 thread leaked from SUITE 
scope at org.apache.solr.handler.TestReplicationHandlerBackup: 
   1) Thread[id=12315, name=Thread-5230, state=RUNNABLE, 
group=TGRP-TestReplicationHandlerBackup]
        at java.net.PlainSocketImpl.socketConnect(Native Method)
        at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339)
        at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200)
        at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182)
        at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
        at java.net.Socket.connect(Socket.java:579)
        at java.net.Socket.connect(Socket.java:528)
        at sun.net.NetworkClient.doConnect(NetworkClient.java:180)
        at sun.net.www.http.HttpClient.openServer(HttpClient.java:432)
        at sun.net.www.http.HttpClient.openServer(HttpClient.java:527)
        at sun.net.www.http.HttpClient.parseHTTP(HttpClient.java:652)
        at sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1323)
        at java.net.URL.openStream(URL.java:1037)
        at org.apache.solr.handler.TestReplicationHandlerBackup$BackupThread.run(TestReplicationHandlerBackup.java:313)
        at __randomizedtesting.SeedInfo.seed([CB25BA24528B8077]:0)


FAILED:  org.apache.solr.handler.TestReplicationHandlerBackup.org.apache.solr.handler.TestReplicationHandlerBackup

Error Message:
There are still zombie threads that couldn't be terminated:
   1) Thread[id=12315, name=Thread-5230, state=RUNNABLE, 
group=TGRP-TestReplicationHandlerBackup]
        at java.net.PlainSocketImpl.socketConnect(Native Method)
        at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339)
        at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200)
        at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182)
        at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
        at java.net.Socket.connect(Socket.java:579)
        at java.net.Socket.connect(Socket.java:528)
        at sun.net.NetworkClient.doConnect(NetworkClient.java:180)
        at sun.net.www.http.HttpClient.openServer(HttpClient.java:432)
        at sun.net.www.http.HttpClient.openServer(HttpClient.java:527)
        at sun.net.www.http.HttpClient.parseHTTP(HttpClient.java:652)
        at sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1323)
        at java.net.URL.openStream(URL.java:1037)
        at org.apache.solr.handler.TestReplicationHandlerBackup$BackupThread.run(TestReplicationHandlerBackup.java:313)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: There are still zombie 
threads that couldn't be terminated:
   1) Thread[id=12315, name=Thread-5230, state=RUNNABLE, 
group=TGRP-TestReplicationHandlerBackup]
        at java.net.PlainSocketImpl.socketConnect(Native Method)
        at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339)
        at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200)
        at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182)
        at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
        at java.net.Socket.connect(Socket.java:579)
        at java.net.Socket.connect(Socket.java:528)
at 

[jira] [Resolved] (LUCENE-5928) WildcardQuery may has memory leak

2014-09-09 Thread Uwe Schindler (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5928?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Uwe Schindler resolved LUCENE-5928.
---
Resolution: Not a Problem
  Assignee: Uwe Schindler

Hi,
this is not an issue of WildcardQuery, and it is also not related to used heap 
space. The differences you see are in most cases caused by a common 
misunderstanding of these two terms:

- Virtual Memory (VIRT): This is allocated address space, *it is not allocated 
memory*. On 64-bit platforms this is free and is not limited by physical 
memory (the two are not even related). If you use mmap, VIRT is something like 
RES plus up to 2 times the size of all open indexes. Internally the whole 
index is seen like a swap file by the OS kernel.
- Resident Memory (RES): This is the size of heap space plus the size of 
direct memory. This is *allocated* memory, but it may reside in swap, too.

By executing a wildcard like *:*, you access the whole term dictionary and 
all postings lists, so they are read from disk and therefore loaded into the 
file system cache. When using MMap, the space in the file system cache is also 
shown in VIRT of the process, because the Linux/Windows kernel maps the file 
system cache into the address space. But it does not waste memory.

For more information, see: 
http://blog.thetaphi.de/2012/07/use-lucenes-mmapdirectory-on-64bit.html
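The VIRT-vs-heap distinction can also be observed directly from Java: mapping 
a file reserves address space but allocates essentially no Java heap. A 
minimal sketch (illustration only, all names below are mine, not Lucene code):

```java
import java.io.File;
import java.io.RandomAccessFile;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;

// Map a 64 MB file and show that Java heap usage stays essentially flat:
// the mapping lives in the process address space (VIRT), managed by the
// kernel's file system cache, not on the JVM heap.
public class MmapHeapDemo {
    static byte firstByte;
    static long heapGrowth;

    public static void main(String[] args) throws Exception {
        File f = File.createTempFile("mmap-demo", ".bin");
        f.deleteOnExit();
        long size = 64L * 1024 * 1024; // 64 MB, far more than we will touch
        try (RandomAccessFile raf = new RandomAccessFile(f, "rw")) {
            raf.setLength(size);
            raf.seek(0);
            raf.writeByte(42);             // known marker at offset 0
            long before = usedHeap();
            MappedByteBuffer map = raf.getChannel()
                    .map(FileChannel.MapMode.READ_ONLY, 0, size);
            firstByte = map.get(0);        // touch a page; the kernel pages it in
            heapGrowth = usedHeap() - before;
        }
        System.out.println("first byte: " + firstByte
                + ", heap growth: " + heapGrowth + " bytes");
    }

    static long usedHeap() {
        Runtime rt = Runtime.getRuntime();
        return rt.totalMemory() - rt.freeMemory();
    }
}
```

The heap growth reported is tiny compared to the 64 MB mapping, while tools 
like top would show the full mapping size under VIRT.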

 WildcardQuery may has memory leak
 -

 Key: LUCENE-5928
 URL: https://issues.apache.org/jira/browse/LUCENE-5928
 Project: Lucene - Core
  Issue Type: Bug
  Components: core/search
Affects Versions: 4.9
 Environment: SSD 1.5T, RAM 256G 10*1
Reporter: Littlestar
Assignee: Uwe Schindler

 data 800G, records 15*1*1.
 one search thread.
 content:???
 content:*
 content:*1
 content:*2
 content:*3
 jvm heap=96G, but the jvm memusage over 130g?
 run more wildcard, use memory more
 Does Lucene search/index use a lot of DirectMemory or Native Memory?
 I tried -XX:MaxDirectMemorySize=4g, but it did not help.
 Thanks.






[jira] [Commented] (SOLR-6457) LBHttpSolrServer: AIOOBE risk if counter overflows

2014-09-09 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6457?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14126865#comment-14126865
 ] 

ASF subversion and git services commented on SOLR-6457:
---

Commit 1623744 from [~noble.paul] in branch 'dev/trunk'
[ https://svn.apache.org/r1623744 ]

SOLR-6457

 LBHttpSolrServer: AIOOBE risk if counter overflows
 --

 Key: SOLR-6457
 URL: https://issues.apache.org/jira/browse/SOLR-6457
 Project: Solr
  Issue Type: Bug
  Components: clients - java
Affects Versions: 4.0, 4.1, 4.2, 4.2.1, 4.3, 4.3.1, 4.4, 4.5, 4.5.1, 4.6, 
 4.6.1, 4.7, 4.7.1, 4.7.2, 4.8, 4.8.1, 4.9
Reporter: longkeyy
Assignee: Noble Paul
  Labels: patch
 Attachments: SOLR-6457.patch


 org.apache.solr.client.solrj.impl.LBHttpSolrServer
 line 442
   int count = counter.incrementAndGet();  
   ServerWrapper wrapper = serverList[count % serverList.length];
 when the counter overflows, the mod operation 
 count % serverList.length will start trying to use negative numbers as 
 array indexes.
 Suggest fixing it, e.g.:
   // keep count greater than or equal to 0
   int count = counter.incrementAndGet() & 0x7FFFFFFF;
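 The overflow behavior is easy to demonstrate in isolation. The sketch below 
 is mine, not the attached SOLR-6457.patch, and it assumes the intended mask 
 was 0x7FFFFFFF (the '&' in the report appears to have been lost to markup 
 stripping); it shows why the unmasked index goes negative after overflow:

```java
import java.util.concurrent.atomic.AtomicInteger;

// Standalone check of the masking idea: clearing the sign bit with
// 0x7FFFFFFF keeps the round-robin index non-negative even after the
// counter wraps from Integer.MAX_VALUE to Integer.MIN_VALUE.
public class CounterOverflowDemo {
    static int pickIndex(AtomicInteger counter, int serverCount) {
        int count = counter.incrementAndGet() & 0x7FFFFFFF; // drop the sign bit
        return count % serverCount;
    }

    public static void main(String[] args) {
        int servers = 3;
        // start just below overflow so incrementAndGet wraps to Integer.MIN_VALUE
        AtomicInteger counter = new AtomicInteger(Integer.MAX_VALUE);
        int unmasked = Integer.MIN_VALUE % servers;    // what the unmasked code computes: negative
        int masked = pickIndex(counter, servers);      // counter is now MIN_VALUE; index stays >= 0
        System.out.println("unmasked index: " + unmasked + ", masked index: " + masked);
    }
}
```

 In Java the remainder operator takes the sign of the dividend, so a negative 
 counter yields a negative array index and the AIOOBE described above.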






[jira] [Commented] (SOLR-6457) LBHttpSolrServer: AIOOBE risk if counter overflows

2014-09-09 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6457?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14126878#comment-14126878
 ] 

ASF subversion and git services commented on SOLR-6457:
---

Commit 1623752 from [~noble.paul] in branch 'dev/branches/branch_4x'
[ https://svn.apache.org/r1623752 ]

SOLR-6457

 LBHttpSolrServer: AIOOBE risk if counter overflows
 --

 Key: SOLR-6457
 URL: https://issues.apache.org/jira/browse/SOLR-6457
 Project: Solr
  Issue Type: Bug
  Components: clients - java
Affects Versions: 4.0, 4.1, 4.2, 4.2.1, 4.3, 4.3.1, 4.4, 4.5, 4.5.1, 4.6, 
 4.6.1, 4.7, 4.7.1, 4.7.2, 4.8, 4.8.1, 4.9
Reporter: longkeyy
Assignee: Noble Paul
  Labels: patch
 Attachments: SOLR-6457.patch


 org.apache.solr.client.solrj.impl.LBHttpSolrServer
 line 442
   int count = counter.incrementAndGet();  
   ServerWrapper wrapper = serverList[count % serverList.length];
 when the counter overflows, the mod operation 
 count % serverList.length will start trying to use negative numbers as 
 array indexes.
 Suggest fixing it, e.g.:
   // keep count greater than or equal to 0
   int count = counter.incrementAndGet() & 0x7FFFFFFF;






[jira] [Comment Edited] (LUCENE-5928) WildcardQuery may has memory leak

2014-09-09 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5928?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14126867#comment-14126867
 ] 

Uwe Schindler edited comment on LUCENE-5928 at 9/9/14 11:44 AM:


Hi,
this is not an issue of WildcardQuery, and it is also not related to used heap 
space. The differences you see are in most cases caused by a common 
misunderstanding of these two terms:

- Virtual Memory (VIRT): This is allocated address space, *it is not allocated 
memory*. On 64-bit platforms this is free and is not limited by physical 
memory (the two are not even related). If you use mmap, VIRT is something like 
RES plus up to 2 times the size of all open indexes. Internally the whole 
index is seen like a swap file by the OS kernel.
- Resident Memory (RES): This is the size of heap space plus the size of 
direct memory. This is *allocated* memory, but it may reside in swap, too. 
Please keep in mind that some operating systems also count memory mmapped 
from the file system cache into the process here, because it is resident. You 
can see this by looking at SHR (shared), which is memory shared with other 
processes (in this case the kernel). For Lucene this RES memory is also not a 
problem, because the file system cache is managed by the kernel and freed on 
request (SHR/RES then goes down).

By executing a wildcard like *:*, you access the whole term dictionary and 
all postings lists, so they are read from disk and therefore loaded into the 
file system cache. When using MMap, the space in the file system cache is also 
shown in VIRT of the process, because the Linux/Windows kernel maps the file 
system cache into the address space. But it does not waste memory.

For more information, see: 
http://blog.thetaphi.de/2012/07/use-lucenes-mmapdirectory-on-64bit.html


was (Author: thetaphi):
Hi,
this is not an issue of WildcardQuery. This is also not related to used heap 
space. What you see differences in is in most cases a common misunderstanding 
about those 2 terms:

- Virtual Memory (VIRT): This is allocated address space, *it is not allocated 
memory*. On 64 bit platforms this is for free and is not limited by physical 
memory (it is not even related to each other). If you use mmap, VIRT is 
something like RES + up to 2 times the size of all open indexes. Internally the 
whole index is seen like a swap file to the OS kernel.
- Resident Memory (RES): This is size of heap space + size of direct memory. 
This is *allocated* memory, but may reside on swap, too.

By executing a Wildcard like *:* you just access the whole term dictionary 
and all positings lists, so they are accessed on disk and therefore loaded into 
file system cache. When using MMap, the space in file system cache is also 
shown in VIRT of the process, because the linux/windows kernel maps the file 
system memory into the address space. But its does not waste memory.

For more information, see: 
http://blog.thetaphi.de/2012/07/use-lucenes-mmapdirectory-on-64bit.html

 WildcardQuery may has memory leak
 -

 Key: LUCENE-5928
 URL: https://issues.apache.org/jira/browse/LUCENE-5928
 Project: Lucene - Core
  Issue Type: Bug
  Components: core/search
Affects Versions: 4.9
 Environment: SSD 1.5T, RAM 256G 10*1
Reporter: Littlestar
Assignee: Uwe Schindler

 data 800G, records 15*1*1.
 one search thread.
 content:???
 content:*
 content:*1
 content:*2
 content:*3
 jvm heap=96G, but the jvm memusage over 130g?
 run more wildcard, use memory more
 Does Lucene search/index use a lot of DirectMemory or Native Memory?
 I tried -XX:MaxDirectMemorySize=4g, but it did not help.
 Thanks.






[jira] [Commented] (LUCENE-5928) WildcardQuery may has memory leak

2014-09-09 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5928?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14126913#comment-14126913
 ] 

Uwe Schindler commented on LUCENE-5928:
---

You may also look at: 
http://stackoverflow.com/questions/561245/virtual-memory-usage-from-java-under-linux-too-much-memory-used

 WildcardQuery may has memory leak
 -

 Key: LUCENE-5928
 URL: https://issues.apache.org/jira/browse/LUCENE-5928
 Project: Lucene - Core
  Issue Type: Bug
  Components: core/search
Affects Versions: 4.9
 Environment: SSD 1.5T, RAM 256G 10*1
Reporter: Littlestar
Assignee: Uwe Schindler

 data 800G, records 15*1*1.
 one search thread.
 content:???
 content:*
 content:*1
 content:*2
 content:*3
 jvm heap=96G, but the jvm memusage over 130g?
 run more wildcard, use memory more
 Does Lucene search/index use a lot of DirectMemory or Native Memory?
 I tried -XX:MaxDirectMemorySize=4g, but it did not help.
 Thanks.






September 2014 board report.

2014-09-09 Thread Mark Miller
I've committed a draft board report to svn. Please review if you have the
time. https://svn.apache.org/repos/asf/lucene/board-reports

Uwe, since you have been leading the security issue charge this quarter,
would you mind filling in that section?

Thanks,

-- 
- Mark

http://about.me/markrmiller


[jira] [Updated] (SOLR-6491) Add preferredLeader as a ROLE and a collections API command to respect this role

2014-09-09 Thread Erick Erickson (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6491?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erick Erickson updated SOLR-6491:
-
Summary: Add preferredLeader as a ROLE and a collections API command to 
respect this role  (was: Add preferredLeader as a ROLE and a collecitons API 
command to respect this role)

 Add preferredLeader as a ROLE and a collections API command to respect this 
 role
 

 Key: SOLR-6491
 URL: https://issues.apache.org/jira/browse/SOLR-6491
 Project: Solr
  Issue Type: Improvement
Affects Versions: 5.0, 4.11
Reporter: Erick Erickson
Assignee: Erick Erickson

 Leaders can currently get out of balance due to the sequence of how nodes are 
 brought up in a cluster. For very good reasons shard leadership cannot be 
 permanently assigned.
 However, it seems reasonable that a sys admin could optionally specify that a 
 particular node be the _preferred_ leader for a particular collection/shard. 
 During leader election, preference would be given to any node so marked when 
 electing any leader.
 So the proposal here is to add another role for preferredLeader to the 
 collections API, something like
 ADDROLE?role=preferredLeader&collection=collection_name&shard=shardId
 Second, it would be good to have a new collections API call like 
 ELECTPREFERREDLEADERS?collection=collection_name
 (I really hate that name so far, but you see the idea). That command would 
 (asynchronously?) make an attempt to transfer leadership for each shard in a 
 collection to the leader labeled as the preferred leader by the new ADDROLE 
 role.
 I'm going to start working on this, any suggestions welcome!
 This will subsume several other JIRAs, I'll link them momentarily.






Re: [JENKINS] Lucene-Solr-trunk-Windows (64bit/jdk1.8.0_20) - Build # 4296 - Failure!

2014-09-09 Thread Mark Miller
I would revert the commit until it’s ready.

- Mark

http://about.me/markrmiller

 On Sep 5, 2014, at 11:18 PM, Erick Erickson erickerick...@gmail.com wrote:
 
 Crap! This is SOLR-5322 that I just checked in. Looks like the file
 permissions on Windows don't work like I expected. Sh.
 
 I'll have to find a Windows VM to try this nonsense on, I'll try to
 get to it this weekend. Can we live with the noise for 2-3 days?
 
 Can anybody point me at a nice ready to rock-n-roll-with-solr-build
 Windows VM for VMWare?
 
 Erick
 
 On Fri, Sep 5, 2014 at 7:45 PM, Policeman Jenkins Server
 jenk...@thetaphi.de wrote:
 Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Windows/4296/
 Java: 64bit/jdk1.8.0_20 -XX:+UseCompressedOops -XX:+UseG1GC
 
 2 tests failed.
 FAILED:  org.apache.solr.core.TestCoreDiscovery.testCoreDirCantRead
 
 Error Message:
 
 
 Stack Trace:
 java.lang.AssertionError
 at __randomizedtesting.SeedInfo.seed([3FB0AB2FF7FBDDEC:42F47F79887C139E]:0)
 at org.junit.Assert.fail(Assert.java:92)
 at org.junit.Assert.assertTrue(Assert.java:43)
 at org.junit.Assert.assertNull(Assert.java:551)
 at org.junit.Assert.assertNull(Assert.java:562)
 at org.apache.solr.core.TestCoreDiscovery.testCoreDirCantRead(TestCoreDiscovery.java:248)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
 at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke(Method.java:483)
 at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
 at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827)
 at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
 at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877)
 at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
 at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
 at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
 at com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
 at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
 at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
 at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
 at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
 at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
 at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
 at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
 at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836)
 at com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738)
 at com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772)
 at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783)
 at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
 at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
 at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
 at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
 at com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
 com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
 com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
 com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
 com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
 com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
 org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:43)

[jira] [Commented] (LUCENE-5927) 4.9 -> 4.10 change in StandardTokenizer behavior on \u1aa2

2014-09-09 Thread Steve Rowe (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5927?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14127061#comment-14127061
 ] 

Steve Rowe commented on LUCENE-5927:


bq. (again for this particular issue, I think simulating the old bug is 
overkill because it just will not be useful).

Ryan, are you okay with resolving this issue as "won't fix"?


 4.9 -> 4.10 change in StandardTokenizer behavior on \u1aa2
 --

 Key: LUCENE-5927
 URL: https://issues.apache.org/jira/browse/LUCENE-5927
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Ryan Ernst

 In 4.9, this string was broken into 2 tokens by StandardTokenizer:
 \u1aa2\u1a7f\u1a6f\u1a6f\u1a61\u1a72 => \u1aa2, \u1a7f\u1a6f\u1a6f\u1a61\u1a72
 However, in 4.10 this has changed and a single token is now returned.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-3089) Make ResponseBuilder.isDistrib public

2014-09-09 Thread Frank Wesemann (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-3089?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14127067#comment-14127067
 ] 

Frank Wesemann commented on SOLR-3089:
--

It should at least be public readable 

 Make ResponseBuilder.isDistrib public
 -

 Key: SOLR-3089
 URL: https://issues.apache.org/jira/browse/SOLR-3089
 Project: Solr
  Issue Type: Improvement
  Components: Response Writers
Affects Versions: 4.0-ALPHA
Reporter: Rok Rejc
 Fix For: 4.9, 5.0


 Hi,
 i have posted this issue on a mailing list but i didn't get any response.
 I am trying to write distributed search component (a class that extends 
 SearchComponent). I have checked FacetComponent and TermsComponent. If I want 
 that search component works in a distributed environment I have to set 
 ResponseBuilder's isDistrib to true, like this (this is also done in 
 TermsComponent for example):
   public void prepare(ResponseBuilder rb) throws IOException {
   SolrParams params = rb.req.getParams();
   String shards = params.get(ShardParams.SHARDS);
   if (shards != null) {
   List<String> lst = StrUtils.splitSmart(shards, ",", true);
   rb.shards = lst.toArray(new String[lst.size()]);
   rb.isDistrib = true;
   }
   }
 If I have my component outside the package org.apache.solr.handler.component 
 this doesn't work. Is it possible to make isDistrib public (or is this the 
 wrong procedure/behaviour/design)?
 Many thanks,
 Rok
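The workaround above hinges on StrUtils.splitSmart breaking the shards parameter on commas. As a rough illustration of what that helper does, here is a simplified Python sketch (hypothetical and much reduced — Solr's real splitSmart also handles trimming and more escape cases):

```python
def split_smart(value, sep=","):
    """Split value on sep while honoring backslash escapes,
    roughly like Solr's StrUtils.splitSmart (simplified sketch)."""
    parts, buf, escaped = [], [], False
    for ch in value:
        if escaped:
            buf.append(ch)       # escaped char is kept literally
            escaped = False
        elif ch == "\\":
            escaped = True       # next char is literal, even if it is sep
        elif ch == sep:
            parts.append("".join(buf))
            buf = []
        else:
            buf.append(ch)
    parts.append("".join(buf))
    return parts

shards = "host1:8983/solr,host2:8983/solr"
print(split_smart(shards))  # ['host1:8983/solr', 'host2:8983/solr']
```

The escape handling matters because shard addresses may themselves contain characters that would otherwise be treated as separators.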



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6485) ReplicationHandler should have an option to throttle the speed of replication

2014-09-09 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6485?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14127068#comment-14127068
 ] 

Mark Miller commented on SOLR-6485:
---

Good issue overall. I have not looked so closely, but I had the same initial 
worry about some code duplication.

Minor cleanup nits: the rateLimiiter variable name is misspelled, and there is a 
stray System.out left in.

 ReplicationHandler should have an option to throttle the speed of replication
 -

 Key: SOLR-6485
 URL: https://issues.apache.org/jira/browse/SOLR-6485
 Project: Solr
  Issue Type: Improvement
Reporter: Varun Thacker
Assignee: Noble Paul
 Attachments: SOLR-6485.patch, SOLR-6485.patch


 The ReplicationHandler should have an option to throttle the speed of 
 replication.
 It is useful for people who want to bring up nodes in their SolrCloud cluster, 
 or who have a backup-restore API, without eating up all their network bandwidth 
 while replicating.
 I am writing a test case and will attach a patch shortly.
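The throttling being proposed can be pictured as a simple pacing loop: track how many bytes have been sent and sleep whenever the transfer runs ahead of the configured rate. The sketch below is purely illustrative (the class and method names are hypothetical, not Solr's actual rate-limiter API):

```python
import time

class SimpleRateLimiter:
    """Cap throughput at max_bytes_per_sec by sleeping when the bytes
    sent so far run ahead of the elapsed wall-clock time (sketch only)."""
    def __init__(self, max_bytes_per_sec):
        self.max_bytes_per_sec = max_bytes_per_sec
        self.start = time.monotonic()
        self.sent = 0

    def pause(self, num_bytes):
        self.sent += num_bytes
        # Time by which self.sent bytes *should* have taken to send.
        expected_elapsed = self.sent / self.max_bytes_per_sec
        actual_elapsed = time.monotonic() - self.start
        if expected_elapsed > actual_elapsed:
            time.sleep(expected_elapsed - actual_elapsed)

limiter = SimpleRateLimiter(max_bytes_per_sec=1024 * 1024)  # ~1 MB/s
for chunk in [b"x" * 4096] * 4:   # stand-in for replication file packets
    limiter.pause(len(chunk))     # throttle after each write
```

In a real replication loop the pause would sit next to each file-chunk write, so a slow limit simply stretches the copy out over time instead of saturating the network.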



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6491) Add preferredLeader as a ROLE and a collections API command to respect this role

2014-09-09 Thread Erick Erickson (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6491?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14127074#comment-14127074
 ] 

Erick Erickson commented on SOLR-6491:
--

Right, Noble's comment on the dev list that it would be nice to explain _why_ 
this is desirable is well taken. Heck, _I_ know what I'm thinking, don't 
others? ;)

See the referenced JIRAs. But what I've seen in the wild is that, depending on 
how a cluster comes up, all the leaders can wind up on a single machine (or a 
small number of machines). Since updates do put some extra load on the leader, 
this can create an odd load distribution.

There's no really good external method to rebalance leaders without bouncing 
nodes and hoping that leaders come up in the right place. The idea here is to 
allow the sys admin to establish a model leader distribution via the 
preferredLeader attribute, and then be able to trigger something like 
"rebalance leaders for collection X" to bring the actuality close to the model. 
The preferredLeader role would also tend to bring the actual leader nodes for 
particular collections into congruence with the model over time, I'd guess, b/c 
any time leader election takes place for a shard, the preferred leader would 
probably be elected as leader (if it's up).

Nothing about this is set in stone. By that I mean the preferredLeader role is 
a hint, not an absolute requirement. Really a "try me first" operation, not a 
"require that I be the leader" rule.

I'm a bit nervous about the "rebalance leaders for collection X" command; I'm 
not quite sure yet how/whether one needs to throttle this. I mean, if a cluster 
has 150 shards, having them all re-elect leaders at the same time while under 
heavy indexing load seems fraught. I don't think this is insurmountable, 
however; I'll learn more as I get deeper into the code.
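The "hint, not a requirement" election preference described above can be sketched as: among the live replicas of a shard, pick one flagged preferredLeader if there is one, otherwise fall back to any live replica. The data model below is hypothetical, purely to illustrate the selection rule:

```python
def pick_leader(replicas):
    """Prefer a live replica marked preferredLeader; otherwise fall
    back to the first live replica. The flag is a hint: if the
    preferred node is down, election proceeds normally (sketch)."""
    live = [r for r in replicas if r["up"]]
    if not live:
        return None                      # no leader can be elected
    for r in live:
        if r.get("preferredLeader"):
            return r["node"]             # honor the hint when possible
    return live[0]["node"]               # ordinary election fallback

shard = [
    {"node": "node1", "up": True},
    {"node": "node2", "up": True, "preferredLeader": True},
]
print(pick_leader(shard))  # node2 while it is up; node1 if node2 goes down
```

A "rebalance leaders" command would then just re-run this selection per shard and transfer leadership where the current leader differs from the pick.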

 Add preferredLeader as a ROLE and a collections API command to respect this 
 role
 

 Key: SOLR-6491
 URL: https://issues.apache.org/jira/browse/SOLR-6491
 Project: Solr
  Issue Type: Improvement
Affects Versions: 5.0, 4.11
Reporter: Erick Erickson
Assignee: Erick Erickson

 Leaders can currently get out of balance due to the sequence of how nodes are 
 brought up in a cluster. For very good reasons shard leadership cannot be 
 permanently assigned.
 However, it seems reasonable that a sys admin could optionally specify that a 
 particular node be the _preferred_ leader for a particular collection/shard. 
 During leader election, preference would be given to any node so marked when 
 electing any leader.
 So the proposal here is to add another role for preferredLeader to the 
 collections API, something like
 ADDROLE?role=preferredLeader&collection=collection_name&shard=shardId
 Second, it would be good to have a new collections API call like 
 ELECTPREFERREDLEADERS?collection=collection_name
 (I really hate that name so far, but you see the idea). That command would 
 (asynchronously?) make an attempt to transfer leadership for each shard in a 
 collection to the leader labeled as the preferred leader by the new ADDROLE 
 role.
 I'm going to start working on this, any suggestions welcome!
 This will subsume several other JIRAs, I'll link them momentarily.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: [JENKINS] Lucene-Solr-trunk-Windows (64bit/jdk1.8.0_20) - Build # 4296 - Failure!

2014-09-09 Thread Erick Erickson
Already been fixed, Uwe rescued me.

On Tue, Sep 9, 2014 at 7:57 AM, Mark Miller markrmil...@gmail.com wrote:
 I would revert the commit until it’s ready.

 - Mark

 http://about.me/markrmiller

 On Sep 5, 2014, at 11:18 PM, Erick Erickson erickerick...@gmail.com wrote:

 Crap! This is SOLR-5322 that I just checked in. Looks like the file
 permissions on Windows don't work like I expected. Sh.

 I'll have to find a Windows VM to try this nonsense on, I'll try to
 get to it this weekend. Can we live with the noise for 2-3 days?

 Can anybody point me at a nice ready to rock-n-roll-with-solr-build
 Windows VM for VMWare?

 Erick

 On Fri, Sep 5, 2014 at 7:45 PM, Policeman Jenkins Server
 jenk...@thetaphi.de wrote:
 Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Windows/4296/
 Java: 64bit/jdk1.8.0_20 -XX:+UseCompressedOops -XX:+UseG1GC

 2 tests failed.
 FAILED:  org.apache.solr.core.TestCoreDiscovery.testCoreDirCantRead

 Error Message:


 Stack Trace:
 java.lang.AssertionError
at 
 __randomizedtesting.SeedInfo.seed([3FB0AB2FF7FBDDEC:42F47F79887C139E]:0)
at org.junit.Assert.fail(Assert.java:92)
at org.junit.Assert.assertTrue(Assert.java:43)
at org.junit.Assert.assertNull(Assert.java:551)
at org.junit.Assert.assertNull(Assert.java:562)
at 
 org.apache.solr.core.TestCoreDiscovery.testCoreDirCantRead(TestCoreDiscovery.java:248)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:483)
at 
 com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
at 
 com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827)
at 
 com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
 com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877)
at 
 com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
 org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
 org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
 com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
 org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
 org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
 org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
 com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
 com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
 com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
 com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
 com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836)
at 
 com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738)
at 
 com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772)
at 
 com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783)
at 
 com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
 com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
 org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
 org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
 com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
 com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
 com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
 com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
 com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
 com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)

[jira] [Updated] (SOLR-3089) Make ResponseBuilder.isDistrib public

2014-09-09 Thread Frank Wesemann (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-3089?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Frank Wesemann updated SOLR-3089:
-
Attachment: Solr-3089.patch

Patch against trunk. No tests (too simple to fail).

 Make ResponseBuilder.isDistrib public
 -

 Key: SOLR-3089
 URL: https://issues.apache.org/jira/browse/SOLR-3089
 Project: Solr
  Issue Type: Improvement
  Components: Response Writers
Affects Versions: 4.0-ALPHA
Reporter: Rok Rejc
 Fix For: 4.9, 5.0

 Attachments: Solr-3089.patch


 Hi,
 i have posted this issue on a mailing list but i didn't get any response.
 I am trying to write distributed search component (a class that extends 
 SearchComponent). I have checked FacetComponent and TermsComponent. If I want 
 that search component works in a distributed environment I have to set 
 ResponseBuilder's isDistrib to true, like this (this is also done in 
 TermsComponent for example):
   public void prepare(ResponseBuilder rb) throws IOException {
   SolrParams params = rb.req.getParams();
   String shards = params.get(ShardParams.SHARDS);
   if (shards != null) {
   List<String> lst = StrUtils.splitSmart(shards, ",", true);
   rb.shards = lst.toArray(new String[lst.size()]);
   rb.isDistrib = true;
   }
   }
 If I have my component outside the package org.apache.solr.handler.component 
 this doesn't work. Is it possible to make isDistrib public (or is this the 
 wrong procedure/behaviour/design)?
 Many thanks,
 Rok



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (LUCENE-5927) 4.9 -> 4.10 change in StandardTokenizer behavior on \u1aa2

2014-09-09 Thread Ryan Ernst (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5927?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ryan Ernst resolved LUCENE-5927.

Resolution: Won't Fix

Yep, closing.

 4.9 -> 4.10 change in StandardTokenizer behavior on \u1aa2
 --

 Key: LUCENE-5927
 URL: https://issues.apache.org/jira/browse/LUCENE-5927
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Ryan Ernst

 In 4.9, this string was broken into 2 tokens by StandardTokenizer:
 \u1aa2\u1a7f\u1a6f\u1a6f\u1a61\u1a72 => \u1aa2, \u1a7f\u1a6f\u1a6f\u1a61\u1a72
 However, in 4.10 this has changed and a single token is now returned.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



RE: September 2014 board report.

2014-09-09 Thread Uwe Schindler
OK will do that later!

 

Uwe

 

-

Uwe Schindler

H.-H.-Meier-Allee 63, D-28213 Bremen

http://www.thetaphi.de/

eMail: u...@thetaphi.de

 

From: Mark Miller [mailto:markrmil...@gmail.com] 
Sent: Tuesday, September 09, 2014 4:43 PM
To: dev@lucene.apache.org
Subject: September 2014 board report.

 

I've committed a draft board report to svn. Please review if you have the time. 
https://svn.apache.org/repos/asf/lucene/board-reports

 

Uwe, since you have been leading the security issue charge this quarter, would 
you mind filling in that section?

 

Thanks,


 

-- 

- Mark

 

http://about.me/markrmiller



Re: September 2014 board report.

2014-09-09 Thread Steve Rowe
Mark, looks good. One minor nit (copy/paste-o, I assume): it should be two 
releases rather than four:

 In the last quarter we made four releases of both Lucene Core and Solr:
 
  - 4.9.0 on 25 June 2014
  - 4.10.0 on 3 Sept 2014

Steve
www.lucidworks.com


-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-6494) Query filters applied in a wrong order

2014-09-09 Thread Alexander S. (JIRA)
Alexander S. created SOLR-6494:
--

 Summary: Query filters applied in a wrong order
 Key: SOLR-6494
 URL: https://issues.apache.org/jira/browse/SOLR-6494
 Project: Solr
  Issue Type: Bug
Affects Versions: 4.8.1
Reporter: Alexander S.


This query:
{code}
{
  fq: ["type:Award::Nomination"],
  sort: "score desc",
  start: 0,
  rows: 20,
  q: "*:*"
}
{code}
takes just a few milliseconds, but this one:
{code}
{
  fq: [
    "type:Award::Nomination",
    "created_at_d:[* TO 2014-09-08T23:59:59Z]"
  ],
  sort: "score desc",
  start: 0,
  rows: 20,
  q: "*:*"
}
{code}
takes almost 15 seconds.

I have just ≈12k documents of type Award::Nomination, but around half a billion 
with the created_at_d field set. And it seems Solr applies the created_at_d 
filter first, going through all documents where this field is set, which is not 
very smart.

I think, if it can't do anything better than applying filters in alphabetical 
order, it should apply them in the order they were received.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Assigned] (SOLR-6141) Allow removing fields via Schema REST Api.

2014-09-09 Thread Steve Rowe (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6141?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Rowe reassigned SOLR-6141:


Assignee: Steve Rowe

 Allow removing fields via Schema REST Api.
 --

 Key: SOLR-6141
 URL: https://issues.apache.org/jira/browse/SOLR-6141
 Project: Solr
  Issue Type: Sub-task
  Components: Schema and Analysis
Reporter: Christoph Strobl
Assignee: Steve Rowe
  Labels: rest_api

 It would be nice if it was possible to remove fields from the schema by 
 sending
 {{curl -X DELETE /collection/schema/fieldname}}
 This would affect solrj as well as the only available methods are {{GET, 
 POST}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-5473) Split clusterstate.json per collection and watch states selectively

2014-09-09 Thread Noble Paul (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5473?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14127243#comment-14127243
 ] 

Noble Paul commented on SOLR-5473:
--

Anything that stops us from committing this?

 Split clusterstate.json per collection and watch states selectively 
 

 Key: SOLR-5473
 URL: https://issues.apache.org/jira/browse/SOLR-5473
 Project: Solr
  Issue Type: Sub-task
  Components: SolrCloud
Reporter: Noble Paul
Assignee: Noble Paul
  Labels: SolrCloud
 Fix For: 5.0, 4.10

 Attachments: SOLR-5473-74 .patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74_POC.patch, 
 SOLR-5473-configname-fix.patch, SOLR-5473.patch, SOLR-5473.patch, 
 SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, 
 SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, 
 SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, 
 SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, 
 SOLR-5473.patch, SOLR-5473_no_ui.patch, SOLR-5473_undo.patch, 
 ec2-23-20-119-52_solr.log, ec2-50-16-38-73_solr.log


 As defined in the parent issue, store the states of each collection under 
 /collections/collectionname/state.json node and watches state changes 
 selectively.
 https://reviews.apache.org/r/24220/



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-6494) Query filters applied in a wrong order

2014-09-09 Thread Erick Erickson (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6494?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erick Erickson resolved SOLR-6494.
--
Resolution: Invalid

First, please raise issues like this on the user's list first to ensure that 
it's really a bug before raising JIRAs.

Second, I don't think you understand how filter queries work. By design, fq 
clauses like this are calculated for the entire document set and the results 
cached; there is no ordering for that part. Otherwise, how could they be 
re-used for a different query?

You can get around this by specifying non-cached filters, or just pay the 
price the first time and be able to re-use the cache later, perhaps with 
warming queries creating the filter (assuming it's a common one) to hide the 
pain of first-time use.

See: http://searchhub.org/2012/02/10/advanced-filter-caching-in-solr/
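The caching behavior described above can be pictured with a toy model: each fq is evaluated once against the whole document set and cached as a set of matching ids, and the per-query work is then just intersecting the cached sets. This is a deliberately simplified sketch (sets standing in for bitsets, made-up predicates), not Solr's actual filterCache implementation:

```python
filter_cache = {}

def filter_docs(fq, all_docs, predicate):
    """Evaluate fq over the ENTIRE doc set once and cache the result,
    so any later query using the same fq reuses the cached set."""
    if fq not in filter_cache:
        filter_cache[fq] = {d for d in all_docs if predicate(d)}
    return filter_cache[fq]

docs = range(10)
nominations = filter_docs("type:Award::Nomination", docs, lambda d: d % 2 == 0)
recent = filter_docs("created_at_d:[* TO X]", docs, lambda d: d < 6)
result = nominations & recent   # per-query work is just an intersection
print(sorted(result))           # [0, 2, 4]
```

This is why there is no "ordering" among cached filters: each one is computed independently of the others, and the first query that builds an expensive filter pays the full cost that later queries avoid.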

 Query filters applied in a wrong order
 --

 Key: SOLR-6494
 URL: https://issues.apache.org/jira/browse/SOLR-6494
 Project: Solr
  Issue Type: Bug
Affects Versions: 4.8.1
Reporter: Alexander S.

 This query:
 {code}
 {
   fq: ["type:Award::Nomination"],
   sort: "score desc",
   start: 0,
   rows: 20,
   q: "*:*"
 }
 {code}
 takes just a few milliseconds, but this one:
 {code}
 {
   fq: [
     "type:Award::Nomination",
     "created_at_d:[* TO 2014-09-08T23:59:59Z]"
   ],
   sort: "score desc",
   start: 0,
   rows: 20,
   q: "*:*"
 }
 {code}
 takes almost 15 seconds.
 I have just ≈12k of documents with type Award::Nomination, but around half 
 a billion with created_at_d field set. And it seems Solr applies the 
 created_at_d filter first going through all documents where this field is 
 set, which is not very smart.
 I think if it can't do anything better than applying filters in the alphabet 
 order it should apply them in the order they were received.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6494) Query filters applied in a wrong order

2014-09-09 Thread Alexander S. (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6494?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14127304#comment-14127304
 ] 

Alexander S. commented on SOLR-6494:


Hi, thank you for the explanation, but I think sometimes (like in this case) it 
would be much more efficient to run filters one by one. It seems that the cost 
parameter should do what I need, e.g.:
{code}
{!cost=1}type:Award::Nomination
{!cost=10}created_at_d:[* TO 2014-09-08T23:59:59Z]
{code}

 Query filters applied in a wrong order
 --

 Key: SOLR-6494
 URL: https://issues.apache.org/jira/browse/SOLR-6494
 Project: Solr
  Issue Type: Bug
Affects Versions: 4.8.1
Reporter: Alexander S.

 This query:
 {code}
 {
   fq: ["type:Award::Nomination"],
   sort: "score desc",
   start: 0,
   rows: 20,
   q: "*:*"
 }
 {code}
 takes just a few milliseconds, but this one:
 {code}
 {
   fq: [
     "type:Award::Nomination",
     "created_at_d:[* TO 2014-09-08T23:59:59Z]"
   ],
   sort: "score desc",
   start: 0,
   rows: 20,
   q: "*:*"
 }
 {code}
 takes almost 15 seconds.
 I have just ≈12k of documents with type Award::Nomination, but around half 
 a billion with created_at_d field set. And it seems Solr applies the 
 created_at_d filter first going through all documents where this field is 
 set, which is not very smart.
 I think if it can't do anything better than applying filters in the alphabet 
 order it should apply them in the order they were received.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: [jira] [Commented] (SOLR-6494) Query filters applied in a wrong order

2014-09-09 Thread Erick Erickson
I don't think so. Note the comment in the URL I sent you:
"When a filter isn't generated up front and cached, it's executed in
parallel with the main query."

"Isn't generated up front and cached" means, AFAIK, that you've got to
specify cache=false.

On Tue, Sep 9, 2014 at 11:03 AM, Alexander S. (JIRA) j...@apache.org wrote:

 [ 
 https://issues.apache.org/jira/browse/SOLR-6494?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14127304#comment-14127304
  ]

 Alexander S. commented on SOLR-6494:
 

 Hi, thank you for the explanation, but I think sometimes (like in this case) 
 it would be much more efficient to run filters one by one. It seems that the 
 cost parameter should do what I need, e.g.:
 {code}
 {!cost=1}type:Award::Nomination
 {!cost=10}created_at_d:[* TO 2014-09-08T23:59:59Z]
 {code}

 Query filters applied in a wrong order
 --

 Key: SOLR-6494
 URL: https://issues.apache.org/jira/browse/SOLR-6494
 Project: Solr
  Issue Type: Bug
Affects Versions: 4.8.1
Reporter: Alexander S.

 This query:
 {code}
 {
   fq: ["type:Award::Nomination"],
   sort: "score desc",
   start: 0,
   rows: 20,
   q: "*:*"
 }
 {code}
 takes just a few milliseconds, but this one:
 {code}
 {
   fq: [
     "type:Award::Nomination",
     "created_at_d:[* TO 2014-09-08T23:59:59Z]"
   ],
   sort: "score desc",
   start: 0,
   rows: 20,
   q: "*:*"
 }
 {code}
 takes almost 15 seconds.
 I have just ≈12k of documents with type Award::Nomination, but around half 
 a billion with created_at_d field set. And it seems Solr applies the 
 created_at_d filter first going through all documents where this field is 
 set, which is not very smart.
 I think if it can't do anything better than applying filters in the alphabet 
 order it should apply them in the order they were received.



 --
 This message was sent by Atlassian JIRA
 (v6.3.4#6332)

 -
 To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
 For additional commands, e-mail: dev-h...@lucene.apache.org


-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Assigned] (SOLR-6323) ReRankingQParserPlugin should handle paging beyond number reranked

2014-09-09 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6323?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein reassigned SOLR-6323:


Assignee: Joel Bernstein

 ReRankingQParserPlugin should handle paging beyond number reranked
 --

 Key: SOLR-6323
 URL: https://issues.apache.org/jira/browse/SOLR-6323
 Project: Solr
  Issue Type: Improvement
  Components: search
Affects Versions: 4.9
Reporter: Adair Kovac
Assignee: Joel Bernstein

 Discussed in this thread: 
 http://www.mail-archive.com/solr-user@lucene.apache.org/msg100870.html
 Currently the ReRankingQParserPlugin requires the client to drop the re-rank 
 parameter during paging if it only wants the top N documents to be re-ranked 
 and is getting past that N. This also requires that the N must fall on page 
 borders. 
 ReRankingQParserPlugin should provide transparency for the client by 
 returning results beyond N in their regular (non-reranked) order.
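The transparency being asked for amounts to: re-order only the top N hits by the re-rank score, and append every hit beyond N in its original main-query order, so a client can page past N without dropping the re-rank parameter. A hypothetical sketch of that merge (illustrative names, not the actual ReRankingQParserPlugin code):

```python
def rerank_with_fallback(hits, n, rerank_score):
    """Re-order only the first n hits by rerank_score (descending);
    hits beyond n keep their original main-query order, so page
    offsets past n behave transparently for the client."""
    head = sorted(hits[:n], key=rerank_score, reverse=True)
    return head + hits[n:]

# Main-query order a..e; only the top 3 have re-rank scores.
hits = ["a", "b", "c", "d", "e"]
scores = {"a": 0.1, "b": 0.9, "c": 0.5}
ranked = rerank_with_fallback(hits, 3, lambda h: scores[h])
print(ranked)  # ['b', 'c', 'a', 'd', 'e']
```

With this shape, a page starting at offset 3 simply reads the tail of the combined list, and N no longer has to fall on a page border.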



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-6323) ReRankingQParserPlugin should handle paging beyond number reranked

2014-09-09 Thread Joel Bernstein (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6323?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14127337#comment-14127337
 ] 

Joel Bernstein edited comment on SOLR-6323 at 9/9/14 6:30 PM:
--

First crack at a new paging implementation. Still needs more testing, 
especially the integration with QueryElevation


was (Author: joel.bernstein):
First crack at a paging implementation. Still needs more testing, especially 
the integration with QueryElevation

 ReRankingQParserPlugin should handle paging beyond number reranked
 --

 Key: SOLR-6323
 URL: https://issues.apache.org/jira/browse/SOLR-6323
 Project: Solr
  Issue Type: Improvement
  Components: search
Affects Versions: 4.9
Reporter: Adair Kovac
Assignee: Joel Bernstein
 Attachments: SOLR-6323.patch


 Discussed in this thread: 
 http://www.mail-archive.com/solr-user@lucene.apache.org/msg100870.html
 Currently the ReRankingQParserPlugin requires the client to drop the re-rank 
 parameter while paging once it has moved past the top N re-ranked documents. 
 This also requires that N fall on a page boundary. 
 ReRankingQParserPlugin should provide transparency for the client by 
 returning results beyond N in their regular (non-reranked) order.






[jira] [Updated] (SOLR-6323) ReRankingQParserPlugin should handle paging beyond number reranked

2014-09-09 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6323?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-6323:
-
Attachment: SOLR-6323.patch

First crack at a paging implementation. Still needs more testing, especially 
the integration with QueryElevation

 ReRankingQParserPlugin should handle paging beyond number reranked
 --

 Key: SOLR-6323
 URL: https://issues.apache.org/jira/browse/SOLR-6323
 Project: Solr
  Issue Type: Improvement
  Components: search
Affects Versions: 4.9
Reporter: Adair Kovac
Assignee: Joel Bernstein
 Attachments: SOLR-6323.patch


 Discussed in this thread: 
 http://www.mail-archive.com/solr-user@lucene.apache.org/msg100870.html
 Currently the ReRankingQParserPlugin requires the client to drop the re-rank 
 parameter while paging once it has moved past the top N re-ranked documents. 
 This also requires that N fall on a page boundary. 
 ReRankingQParserPlugin should provide transparency for the client by 
 returning results beyond N in their regular (non-reranked) order.






[jira] [Created] (SOLR-6495) Bad type on operand stack related to JavaBinCodec when running in Hadoop

2014-09-09 Thread Brett Hoerner (JIRA)
Brett Hoerner created SOLR-6495:
---

 Summary: Bad type on operand stack related to JavaBinCodec when 
running in Hadoop
 Key: SOLR-6495
 URL: https://issues.apache.org/jira/browse/SOLR-6495
 Project: Solr
  Issue Type: Bug
  Components: contrib - MapReduce
Affects Versions: 4.10
Reporter: Brett Hoerner


This is 4.10-specific. I have been using the MapReduce integration for a while 
now. The only change I make is bumping my project dependencies from 4.9 to 
4.10, and I then receive the following in all of my mappers:

{code}
2014-09-09 18:27:19,150 FATAL [IPC Server handler 7 on 34191] 
org.apache.hadoop.mapred.TaskAttemptListenerImpl: Task: 
attempt_1410286590866_0001_m_00_0 - exited : Bad type on operand stack
Exception Details:
  Location:

org/apache/solr/common/util/JavaBinCodec.unmarshal(Ljava/io/InputStream;)Ljava/lang/Object;
 @71: invokevirtual
  Reason:
Type 'org/apache/solr/common/util/FastInputStream' (current frame, 
stack[1]) is not assignable to 
'org/apache/solr/common/util/DataInputInputStream'
  Current Frame:
bci: @71
flags: { }
locals: { 'org/apache/solr/common/util/JavaBinCodec', 
'java/io/InputStream', 'org/apache/solr/common/util/FastInputStream' }
stack: { 'org/apache/solr/common/util/JavaBinCodec', 
'org/apache/solr/common/util/FastInputStream' }
  Bytecode:
000: 2bb8 000e 4d2a 2cb6 000f b500 102a b400
010: 10b2 000a 9f00 31bb 0011 59bb 0012 59b7
020: 0013 1214 b600 15b2 000a b600 1612 17b6
030: 0015 2ab4 0010 b600 1612 18b6 0015 b600
040: 19b7 001a bf2a 2cb6 001b b0
  Stackmap Table:
append_frame(@69,Object[#326])

2014-09-09 18:27:19,150 INFO [IPC Server handler 7 on 34191] 
org.apache.hadoop.mapred.TaskAttemptListenerImpl: Diagnostics report from 
attempt_1410286590866_0001_m_00_0: Error: Bad type on operand stack
Exception Details:
  Location:

org/apache/solr/common/util/JavaBinCodec.unmarshal(Ljava/io/InputStream;)Ljava/lang/Object;
 @71: invokevirtual
  Reason:
Type 'org/apache/solr/common/util/FastInputStream' (current frame, 
stack[1]) is not assignable to 
'org/apache/solr/common/util/DataInputInputStream'
  Current Frame:
bci: @71
flags: { }
locals: { 'org/apache/solr/common/util/JavaBinCodec', 
'java/io/InputStream', 'org/apache/solr/common/util/FastInputStream' }
stack: { 'org/apache/solr/common/util/JavaBinCodec', 
'org/apache/solr/common/util/FastInputStream' }
  Bytecode:
000: 2bb8 000e 4d2a 2cb6 000f b500 102a b400
010: 10b2 000a 9f00 31bb 0011 59bb 0012 59b7
020: 0013 1214 b600 15b2 000a b600 1612 17b6
030: 0015 2ab4 0010 b600 1612 18b6 0015 b600
040: 19b7 001a bf2a 2cb6 001b b0
  Stackmap Table:
append_frame(@69,Object[#326])

2014-09-09 18:27:19,152 INFO [AsyncDispatcher event handler] 
org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: Diagnostics report 
from attempt_1410286590866_0001_m_00_0: Error: Bad type on operand stack
Exception Details:
  Location:

org/apache/solr/common/util/JavaBinCodec.unmarshal(Ljava/io/InputStream;)Ljava/lang/Object;
 @71: invokevirtual
  Reason:
Type 'org/apache/solr/common/util/FastInputStream' (current frame, 
stack[1]) is not assignable to 
'org/apache/solr/common/util/DataInputInputStream'
  Current Frame:
bci: @71
flags: { }
locals: { 'org/apache/solr/common/util/JavaBinCodec', 
'java/io/InputStream', 'org/apache/solr/common/util/FastInputStream' }
stack: { 'org/apache/solr/common/util/JavaBinCodec', 
'org/apache/solr/common/util/FastInputStream' }
  Bytecode:
000: 2bb8 000e 4d2a 2cb6 000f b500 102a b400
010: 10b2 000a 9f00 31bb 0011 59bb 0012 59b7
020: 0013 1214 b600 15b2 000a b600 1612 17b6
030: 0015 2ab4 0010 b600 1612 18b6 0015 b600
040: 19b7 001a bf2a 2cb6 001b b0
  Stackmap Table:
append_frame(@69,Object[#326])
{code}

There is no further detail available in the logs (no real stacks with this 
error, it seems).






[jira] [Commented] (SOLR-6495) Bad type on operand stack related to JavaBinCodec when running in Hadoop

2014-09-09 Thread Brett Hoerner (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6495?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14127424#comment-14127424
 ] 

Brett Hoerner commented on SOLR-6495:
-

Hoss pointed out in IRC that this is likely due to how I'm building my job 
jars. I'll dig into that, but it is odd that I can take the exact same job 
project, change 4.10 to 4.9 (no other dependency changes, code changes, 
reordering, exclusions, etc.), and it works.

 Bad type on operand stack related to JavaBinCodec when running in Hadoop
 --

 Key: SOLR-6495
 URL: https://issues.apache.org/jira/browse/SOLR-6495
 Project: Solr
  Issue Type: Bug
  Components: contrib - MapReduce
Affects Versions: 4.10
Reporter: Brett Hoerner







[jira] [Commented] (SOLR-5005) JavaScriptRequestHandler

2014-09-09 Thread Walter Underwood (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5005?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14127463#comment-14127463
 ] 

Walter Underwood commented on SOLR-5005:


Here is another use case. We currently do this in client code, but it would be 
nice to move it to Solr.

We run a query in strict mode, with mm=100%. If there are zero results, we 
re-run it in loose mode, with mm=1 and fuzzy matching.
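Client-side, that fallback looks roughly like the sketch below (the `search` callable and `fuzzy` flag are hypothetical stand-ins for a real Solr round trip; `mm` is edismax's actual minimum-match parameter):

```python
def search_with_fallback(search, query):
    """Run a strict query first (mm=100%); if it matches nothing,
    retry in loose mode (mm=1) with fuzzy matching enabled."""
    results = search(query, mm="100%", fuzzy=False)
    if results:
        return results, "strict"
    return search(query, mm="1", fuzzy=True), "loose"

# Stub standing in for an actual Solr request, just to exercise the logic.
def fake_search(query, mm, fuzzy):
    index = {"solr search": ["doc1"]}
    if mm == "100%":
        return index.get(query, [])
    # loose mode: match if any query term overlaps the "vocabulary"
    return ["doc2"] if set(query.split()) & {"solr", "lucene"} else []

print(search_with_fallback(fake_search, "solr search"))   # (['doc1'], 'strict')
print(search_with_fallback(fake_search, "solr serach"))   # (['doc2'], 'loose')
```

Moving this into a server-side script handler would save the second network round trip when the strict pass comes up empty.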

 JavaScriptRequestHandler
 

 Key: SOLR-5005
 URL: https://issues.apache.org/jira/browse/SOLR-5005
 Project: Solr
  Issue Type: New Feature
Reporter: David Smiley
Assignee: Noble Paul
 Attachments: SOLR-5005.patch, SOLR-5005.patch, SOLR-5005.patch, 
 SOLR-5005_ScriptRequestHandler_take3.patch, 
 SOLR-5005_ScriptRequestHandler_take3.patch, patch


 A user-customizable, script-based request handler would be very useful.  It's 
 inspired by the ScriptUpdateRequestProcessor, but on the search end. A user 
 could write a script that submits searches to Solr (in-VM) and can react to 
 the results of one search before making another that is formulated 
 dynamically.  It can also assemble the response data, potentially reducing 
 both the latency and the data that would move over the wire if this feature 
 didn't exist.  It could also be used to easily add a user-specifiable search 
 API at the Solr server, with request parameters governed by what the user 
 wants to advertise -- especially useful within enterprises.  And it could be 
 used to enforce security requirements on allowable parameter values passed to 
 Solr, so a JavaScript-based Solr client could be allowed to talk only to a 
 script-based request handler which enforces the rules.






[jira] [Commented] (SOLR-6495) Bad type on operand stack related to JavaBinCodec when running in Hadoop

2014-09-09 Thread Brett Hoerner (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6495?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14127530#comment-14127530
 ] 

Brett Hoerner commented on SOLR-6495:
-

Sorry, disregard this. It seems that, because of dependency ordering in my 
build tool, I was building my code against one version of Solr and deploying 
it with another. Hopefully this is helpful to someone else in the future. :)
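One way to catch this kind of version skew before deploying is to read the Implementation-Version from each jar's manifest; jars are plain zip files, so this takes only the standard library. This is an illustrative check, not a Solr or Hadoop tool (the in-memory jar below is fabricated for the demo):

```python
import io
import zipfile

def jar_version(jar_bytes):
    """Return the Implementation-Version recorded in a jar's manifest,
    or None if the manifest doesn't carry one."""
    with zipfile.ZipFile(io.BytesIO(jar_bytes)) as jar:
        manifest = jar.read("META-INF/MANIFEST.MF").decode("utf-8")
    for line in manifest.splitlines():
        if line.startswith("Implementation-Version:"):
            return line.split(":", 1)[1].strip()
    return None

# Build a tiny fake jar in memory to demonstrate.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as jar:
    jar.writestr("META-INF/MANIFEST.MF",
                 "Manifest-Version: 1.0\nImplementation-Version: 4.9.0\n")
print(jar_version(buf.getvalue()))  # 4.9.0
```

Running such a check over every solr-*.jar on the job's classpath would have flagged the 4.9/4.10 mix immediately.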

 Bad type on operand stack related to JavaBinCodec when running in Hadoop
 --

 Key: SOLR-6495
 URL: https://issues.apache.org/jira/browse/SOLR-6495
 Project: Solr
  Issue Type: Bug
  Components: contrib - MapReduce
Affects Versions: 4.10
Reporter: Brett Hoerner







[jira] [Closed] (SOLR-6495) Bad type on operand stack related to JavaBinCodec when running in Hadoop

2014-09-09 Thread Brett Hoerner (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6495?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brett Hoerner closed SOLR-6495.
---
Resolution: Invalid

 Bad type on operand stack related to JavaBinCodec when running in Hadoop
 --

 Key: SOLR-6495
 URL: https://issues.apache.org/jira/browse/SOLR-6495
 Project: Solr
  Issue Type: Bug
  Components: contrib - MapReduce
Affects Versions: 4.10
Reporter: Brett Hoerner







[jira] [Commented] (SOLR-6452) StatsComponent missing stat won't work with docValues=true and indexed=false

2014-09-09 Thread JIRA

[ 
https://issues.apache.org/jira/browse/SOLR-6452?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14127542#comment-14127542
 ] 

Tomás Fernández Löbbe commented on SOLR-6452:
-

Thanks for the patch, Xu. As an optimization, why don't you make 
{{missingStats}} a {{Map<Integer, Integer>}} and use the ords as keys instead 
of the terms? That way you don't need to do the lookupOrd for every doc; you 
do it only once per term in the {{accumulateMissing()}} method. 
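The suggested optimization can be sketched like this (hypothetical data shapes; the real code works against Lucene's SortedDocValues): count per ord during the per-document loop, then resolve each distinct ord to its term exactly once at the end:

```python
from collections import defaultdict

def accumulate_missing(doc_ords, lookup_ord):
    """Count docs per ord during the scan, then call lookup_ord
    once per distinct ord instead of once per document."""
    missing_by_ord = defaultdict(int)
    for ord_ in doc_ords:          # one cheap integer op per doc
        missing_by_ord[ord_] += 1
    # term lookup happens only once per distinct ord
    return {lookup_ord(o): n for o, n in missing_by_ord.items()}

terms = ["apple", "banana", "cherry"]
counts = accumulate_missing([0, 2, 0, 1, 0], lambda o: terms[o])
print(counts)  # {'apple': 3, 'cherry': 1, 'banana': 1}
```

With many documents but few distinct terms, this trades O(docs) term lookups for O(distinct ords), which is the point of the comment above.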

 StatsComponent missing stat won't work with docValues=true and indexed=false
 --

 Key: SOLR-6452
 URL: https://issues.apache.org/jira/browse/SOLR-6452
 Project: Solr
  Issue Type: Bug
Affects Versions: 5.0, 4.10
Reporter: Tomás Fernández Löbbe
 Attachments: SOLR-6452-trunk.patch, SOLR-6452-trunk.patch


 StatsComponent can work with DocValues, but it still requires indexed=true 
 for the missing stat to work. Missing values should be obtained from the 
 docValues too.






[jira] [Comment Edited] (SOLR-6452) StatsComponent missing stat won't work with docValues=true and indexed=false

2014-09-09 Thread JIRA

[ 
https://issues.apache.org/jira/browse/SOLR-6452?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14127542#comment-14127542
 ] 

Tomás Fernández Löbbe edited comment on SOLR-6452 at 9/9/14 9:01 PM:
-

Thanks for the patch, [~simpleBread]. As an optimization, why don't you make 
{{missingStats}} a {{Map<Integer, Integer>}} and use the ords as keys instead 
of the terms? That way you don't need to do the lookupOrd for every doc; you 
do it only once per term in the {{accumulateMissing()}} method. 


was (Author: tomasflobbe):
Thanks for the patch, Xu. As an optimization, why don't you make 
{{missingStats}} a {{Map<Integer, Integer>}} and use the ords as keys instead 
of the terms? That way you don't need to do the lookupOrd for every doc; you 
do it only once per term in the {{accumulateMissing()}} method. 

 StatsComponent missing stat won't work with docValues=true and indexed=false
 --

 Key: SOLR-6452
 URL: https://issues.apache.org/jira/browse/SOLR-6452
 Project: Solr
  Issue Type: Bug
Affects Versions: 5.0, 4.10
Reporter: Tomás Fernández Löbbe
 Attachments: SOLR-6452-trunk.patch, SOLR-6452-trunk.patch


 StatsComponent can work with DocValues, but it still requires indexed=true 
 for the missing stat to work. Missing values should be obtained from the 
 docValues too.






[jira] [Updated] (SOLR-6452) StatsComponent missing stat won't work with docValues=true and indexed=false

2014-09-09 Thread Xu Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6452?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xu Zhang updated SOLR-6452:
---
Attachment: SOLR-6452.patch

Update based on the comments.

Thanks Tomas

 StatsComponent missing stat won't work with docValues=true and indexed=false
 --

 Key: SOLR-6452
 URL: https://issues.apache.org/jira/browse/SOLR-6452
 Project: Solr
  Issue Type: Bug
Affects Versions: 5.0, 4.10
Reporter: Tomás Fernández Löbbe
 Attachments: SOLR-6452-trunk.patch, SOLR-6452-trunk.patch, 
 SOLR-6452.patch


 StatsComponent can work with DocValues, but it still requires indexed=true 
 for the missing stat to work. Missing values should be obtained from the 
 docValues too.






[jira] [Created] (LUCENE-5929) Standard highlighting doesn't work for ToParentBlockJoinQuery

2014-09-09 Thread Julie Tibshirani (JIRA)
Julie Tibshirani created LUCENE-5929:


 Summary: Standard highlighting doesn't work for 
ToParentBlockJoinQuery
 Key: LUCENE-5929
 URL: https://issues.apache.org/jira/browse/LUCENE-5929
 Project: Lucene - Core
  Issue Type: Bug
  Components: modules/highlighter
Reporter: Julie Tibshirani


Because WeightedSpanTermExtractor#extract doesn't check for 
ToParentBlockJoinQuery, the Highlighter class fails to produce highlights for 
this type of query.

At first it may seem like there's no issue, because ToParentBlockJoinQuery only 
returns parent documents, while the highlighting applies to children. But if a 
client can directly supply the text from child documents (as Elasticsearch does 
when _source is enabled), then highlighting will unexpectedly fail.

A test case that triggers the bug is attached. The same issue exists for 
ToChildBlockJoinQuery.
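The eventual fix amounts to having the term extractor unwrap the join query and recurse into its child query. A generic sketch of that pattern (hypothetical query classes, not Lucene's WeightedSpanTermExtractor API):

```python
class TermQuery:
    """Leaf query carrying a single term."""
    def __init__(self, term):
        self.term = term

class WrapperQuery:
    """Stands in for ToParent/ToChildBlockJoinQuery: wraps a child query."""
    def __init__(self, child):
        self.child = child

def extract_terms(query, terms=None):
    """Recursively collect highlightable terms, unwrapping wrapper
    queries the extractor would otherwise not recognize (the bug here)."""
    if terms is None:
        terms = set()
    if isinstance(query, WrapperQuery):
        extract_terms(query.child, terms)   # the missing unwrap step
    elif isinstance(query, TermQuery):
        terms.add(query.term)
    return terms

q = WrapperQuery(TermQuery("lucene"))
print(extract_terms(q))  # {'lucene'}
```

Without the `WrapperQuery` branch, extraction returns an empty set for the wrapped query, which is exactly why the Highlighter produces no fragments.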






[jira] [Updated] (LUCENE-5929) Standard highlighting doesn't work for ToParentBlockJoinQuery

2014-09-09 Thread Julie Tibshirani (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5929?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Julie Tibshirani updated LUCENE-5929:
-
Attachment: HighligherTest.patch

 Standard highlighting doesn't work for ToParentBlockJoinQuery
 -

 Key: LUCENE-5929
 URL: https://issues.apache.org/jira/browse/LUCENE-5929
 Project: Lucene - Core
  Issue Type: Bug
  Components: modules/highlighter
Reporter: Julie Tibshirani
 Attachments: HighligherTest.patch


 Because WeightedSpanTermExtractor#extract doesn't check for 
 ToParentBlockJoinQuery, the Highlighter class fails to produce highlights for 
 this type of query.
 At first it may seem like there's no issue, because ToParentBlockJoinQuery 
 only returns parent documents, while the highlighting applies to children. 
 But if a client can directly supply the text from child documents (as 
 elasticsearch does if _source is enabled), then highlighting will 
 unexpectedly fail.
 A test case that triggers the bug is attached. The same issue exists for 
 ToChildBlockJoinQuery.






[jira] [Resolved] (SOLR-6407) SortingResponseWriter String sorting broken on single segment indexes

2014-09-09 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6407?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein resolved SOLR-6407.
--
   Resolution: Fixed
Fix Version/s: 4.10

 SortingResponseWriter String sorting broken on single segment indexes
 -

 Key: SOLR-6407
 URL: https://issues.apache.org/jira/browse/SOLR-6407
 Project: Solr
  Issue Type: Test
Reporter: Mark Miller
Assignee: Joel Bernstein
 Fix For: 4.10

 Attachments: TestSortingResponseWriter.log


 {noformat}
[junit4]   2> NOTE: reproduce with: ant test  
 -Dtestcase=TestSortingResponseWriter -Dtests.method=testSortingOutput 
 -Dtests.seed=9096ECEE5D29523B -Dtests.slow=true -Dtests.locale=en_GB 
 -Dtests.timezone=CTT -Dtests.file.encoding=ISO-8859-1
[junit4] ERROR   0.14s J3  | TestSortingResponseWriter.testSortingOutput <<<
[junit4] Throwable #1: java.lang.ClassCastException: 
 org.apache.lucene.codecs.lucene410.Lucene410DocValuesProducer$6 cannot be 
 cast to org.apache.lucene.index.MultiDocValues$MultiSortedDocValues
[junit4]  at 
 __randomizedtesting.SeedInfo.seed([9096ECEE5D29523B:24667B8F5066589B]:0)
[junit4]  at 
 org.apache.solr.response.SortingResponseWriter$StringValue.<init>(SortingResponseWriter.java:1091)
[junit4]  at 
 org.apache.solr.response.SortingResponseWriter.getSortDoc(SortingResponseWriter.java:322)
[junit4]  at 
 org.apache.solr.response.SortingResponseWriter.write(SortingResponseWriter.java:126)
[junit4]  at 
 org.apache.solr.util.TestHarness.query(TestHarness.java:301)
[junit4]  at 
 org.apache.solr.util.TestHarness.query(TestHarness.java:278)
[junit4]  at 
 org.apache.solr.response.TestSortingResponseWriter.testSortingOutput(TestSortingResponseWriter.java:116)
[junit4]  at java.lang.Thread.run(Thread.java:745)
 {noformat}
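The ClassCastException above arises because, on a single-segment index, the codec hands back a plain per-segment SortedDocValues rather than the MultiDocValues wrapper the writer blindly casts to. A defensive type check avoids the cast (a generic sketch with stand-in classes, not Solr's actual fix):

```python
class SortedDocValues:
    """Stand-in for a per-segment sorted doc-values view."""
    def __init__(self, ords):
        self.ords = ords

class MultiSortedDocValues(SortedDocValues):
    """Stand-in for the multi-segment wrapper carrying an ord map."""
    def __init__(self, ords, ord_map):
        super().__init__(ords)
        self.ord_map = ord_map

def global_ord(dv, segment_ord):
    """Map a segment-local ord to a global ord, tolerating the
    single-segment case where no wrapper (and no ord map) exists."""
    if isinstance(dv, MultiSortedDocValues):
        return dv.ord_map[segment_ord]
    return segment_ord  # single segment: local ords are already global

print(global_ord(SortedDocValues([0, 1]), 1))                     # 1
print(global_ord(MultiSortedDocValues([0, 1], {0: 5, 1: 7}), 1))  # 7
```

The type check replaces the unconditional downcast that blew up in {{StringValue.<init>}} on single-segment indexes.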






[jira] [Resolved] (SOLR-5244) Exporting Full Sorted Result Sets

2014-09-09 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5244?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein resolved SOLR-5244.
--
Resolution: Fixed

 Exporting Full Sorted Result Sets
 -

 Key: SOLR-5244
 URL: https://issues.apache.org/jira/browse/SOLR-5244
 Project: Solr
  Issue Type: New Feature
  Components: search
Affects Versions: 5.0
Reporter: Joel Bernstein
Assignee: Joel Bernstein
Priority: Minor
 Fix For: 5.0, 4.10

 Attachments: 0001-SOLR_5244.patch, SOLR-5244.patch, SOLR-5244.patch, 
 SOLR-5244.patch, SOLR-5244.patch, SOLR-5244.patch, SOLR-5244.patch, 
 SOLR-5244.patch


 This ticket allows Solr to export full sorted result sets. A new export 
 request handler has been created that sets up the default writer type 
 (SortingResponseWriter) and the required rank query (ExportQParserPlugin). 
 The syntax is:
 {code}
 /solr/collection1/export?q=*:*&fl=a,b,c&sort=a desc,b desc
 {code}
 This capability will open up Solr for a whole range of uses that were 
 typically done using aggregation engines like Hadoop. For example:
 *Large Distributed Joins*
 A client outside of Solr calls two different Solr collections and returns the 
 results sorted by a join key. The client iterates through both streams and 
 performs a merge join.
 *Fully Distributed Field Collapsing/Grouping*
 A client outside of Solr makes individual calls to all the servers in a 
 single collection and returns results sorted by the collapse key. The client 
 merge joins the sorted lists on the collapse key to perform the field 
 collapse.
 *High Cardinality Distributed Aggregation*
 A client outside of Solr makes individual calls to all the servers in a 
 single collection and sorts on a high cardinality field. The client then 
 merge joins the sorted lists to perform the high cardinality aggregation.
 *Large Scale Time Series Rollups*
 A client outside Solr makes individual calls to all servers in a collection 
 and sorts on time dimensions. The client merge joins the sorted result sets 
 and rolls up the time dimensions as it iterates through the data.
 In these scenarios Solr is being used as a distributed sorting engine. 
 Developers can write clients that take advantage of this sorting capability 
 in any way they wish.
 *Session Analysis and Aggregation*
 A client outside Solr makes individual calls to all servers in a collection 
 and sorts on the sessionID. The client merge joins the sorted results and 
 aggregates sessions as it iterates through the results.
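The client-side merge join that recurs in these scenarios can be sketched as follows. This is an illustrative sketch, not code from the patch; plain Lists stand in for two /export result streams that are already sorted on the join key, and all names are made up.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// Sketch of a client-side merge join over two streams that are already
// sorted by the join key, as the /export handler returns them.
public class MergeJoinSketch {
    // Emits the keys present in both sorted inputs (unique keys assumed).
    static List<String> mergeJoin(List<String> left, List<String> right) {
        List<String> joined = new ArrayList<>();
        int i = 0, j = 0;
        while (i < left.size() && j < right.size()) {
            int cmp = left.get(i).compareTo(right.get(j));
            if (cmp == 0) {          // keys match: emit, advance both streams
                joined.add(left.get(i));
                i++;
                j++;
            } else if (cmp < 0) {    // advance whichever stream is behind
                i++;
            } else {
                j++;
            }
        }
        return joined;
    }

    public static void main(String[] args) {
        List<String> a = Arrays.asList("a", "c", "d", "f");
        List<String> b = Arrays.asList("b", "c", "f", "g");
        System.out.println(mergeJoin(a, b)); // prints [c, f]
    }
}
```

Because each input is consumed front to back exactly once, the client never needs to buffer a full result set, which is what makes these patterns work at scale.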



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-5929) Standard highlighting doesn't work for ToParentBlockJoinQuery

2014-09-09 Thread Julie Tibshirani (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5929?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14127694#comment-14127694
 ] 

Julie Tibshirani commented on LUCENE-5929:
--

I have a simple fix ready that simply adds a check in WeightedSpanTermExtractor 
for ToParentBlockJoinQuery and ToChildBlockJoinQuery. I'm new to committing to 
Lucene -- is there anything I'm missing or need to watch out for? 

 Standard highlighting doesn't work for ToParentBlockJoinQuery
 -

 Key: LUCENE-5929
 URL: https://issues.apache.org/jira/browse/LUCENE-5929
 Project: Lucene - Core
  Issue Type: Bug
  Components: modules/highlighter
Reporter: Julie Tibshirani
 Attachments: HighligherTest.patch


 Because WeightedSpanTermExtractor#extract doesn't check for 
 ToParentBlockJoinQuery, the Highlighter class fails to produce highlights for 
 this type of query.
 At first it may seem like there's no issue, because ToParentBlockJoinQuery 
 only returns parent documents, while the highlighting applies to children. 
 But if a client can directly supply the text from child documents (as 
 elasticsearch does if _source is enabled), then highlighting will 
 unexpectedly fail.
 A test case that triggers the bug is attached. The same issue exists for 
 ToChildBlockJoinQuery.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (LUCENE-5929) Standard highlighting doesn't work for ToParentBlockJoinQuery

2014-09-09 Thread Julie Tibshirani (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5929?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14127694#comment-14127694
 ] 

Julie Tibshirani edited comment on LUCENE-5929 at 9/9/14 10:38 PM:
---

I have a simple fix ready that simply adds a check in WeightedSpanTermExtractor 
for ToParentBlockJoinQuery and ToChildBlockJoinQuery. I'm new to committing to 
Lucene -- is there anything I'm missing or should watch out for? 


was (Author: jtibs):
I have a simple fix ready that simply adds a check in WeightedSpanTermExtractor 
for ToParentBlockJoinQuery and ToChildBlockJoinQuery. I'm new to committing to 
Lucene -- is there anything I'm missing or need to watch out for? 

 Standard highlighting doesn't work for ToParentBlockJoinQuery
 -

 Key: LUCENE-5929
 URL: https://issues.apache.org/jira/browse/LUCENE-5929
 Project: Lucene - Core
  Issue Type: Bug
  Components: modules/highlighter
Reporter: Julie Tibshirani
 Attachments: HighligherTest.patch


 Because WeightedSpanTermExtractor#extract doesn't check for 
 ToParentBlockJoinQuery, the Highlighter class fails to produce highlights for 
 this type of query.
 At first it may seem like there's no issue, because ToParentBlockJoinQuery 
 only returns parent documents, while the highlighting applies to children. 
 But if a client can directly supply the text from child documents (as 
 elasticsearch does if _source is enabled), then highlighting will 
 unexpectedly fail.
 A test case that triggers the bug is attached. The same issue exists for 
 ToChildBlockJoinQuery.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-6496) LBHttpSolrServer should stop server retries after the timeAllowed threshold is met

2014-09-09 Thread Steve Davids (JIRA)
Steve Davids created SOLR-6496:
--

 Summary: LBHttpSolrServer should stop server retries after the 
timeAllowed threshold is met
 Key: SOLR-6496
 URL: https://issues.apache.org/jira/browse/SOLR-6496
 Project: Solr
  Issue Type: Bug
Affects Versions: 4.9
Reporter: Steve Davids
Priority: Critical
 Fix For: 4.11


The LBHttpSolrServer will continue to perform retries for each server it was 
given without honoring the timeAllowed request parameter. Once the threshold 
has been met, it should stop retrying and allow the exception to bubble up, 
letting the request either error out or return partial results per the 
shards.tolerant request parameter.

For a little more context on how this can be extremely problematic, please 
see the comment here: 
https://issues.apache.org/jira/browse/SOLR-5986?focusedCommentId=14100991&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14100991
 (#2)
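The requested behavior can be sketched like this. This is a hypothetical illustration of stopping retries once a time budget is spent, not the actual LBHttpSolrServer code; the Server interface and method names are stand-ins.

```java
import java.util.List;

// Hypothetical sketch: try each server in turn, but stop retrying as soon
// as the request's time budget (timeAllowed) is exhausted, and let the last
// failure bubble up instead of retrying forever.
public class TimeBoundedRetry {
    interface Server { String request() throws Exception; }

    static String requestWithBudget(List<Server> servers, long timeAllowedMs)
            throws Exception {
        long deadline = System.nanoTime() + timeAllowedMs * 1_000_000L;
        Exception last = null;
        for (Server s : servers) {
            if (System.nanoTime() >= deadline) {
                break;                    // budget spent: stop retrying
            }
            try {
                return s.request();       // success: return immediately
            } catch (Exception e) {
                last = e;                 // remember failure, try next server
            }
        }
        // surface the failure so the caller can error out or fall back
        // to partial results (e.g. per shards.tolerant)
        throw last != null ? last : new Exception("time budget exhausted");
    }
}
```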



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-6493) stats on multivalued fields don't respect excluded filters

2014-09-09 Thread Hoss Man (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6493?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hoss Man updated SOLR-6493:
---
Attachment: SOLR-6493.patch

beefed up the tests

 stats on multivalued fields don't respect excluded filters
 --

 Key: SOLR-6493
 URL: https://issues.apache.org/jira/browse/SOLR-6493
 Project: Solr
  Issue Type: Bug
Affects Versions: 4.8, 4.9, 4.10
Reporter: Hoss Man
Assignee: Hoss Man
 Attachments: SOLR-6493.patch, SOLR-6493.patch


 SOLR-3177 added support to StatsComponent for using the ex local param to 
 exclude tagged filters, but these exclusions have apparently never been 
 correct for multivalued fields
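For context, the exclusion syntax from SOLR-3177 looks like this (the field and tag names here are illustrative only, not from the patch):

```text
fq={!tag=catFilter}category_s:cat1
stats=true
stats.field={!ex=catFilter}price_f
```

With the ex local param, the stats are computed as if the tagged filter were not applied; the bug is that this exclusion was never honored for multivalued stats fields.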



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6493) stats on multivalued fields don't respect excluded filters

2014-09-09 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6493?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14127735#comment-14127735
 ] 

ASF subversion and git services commented on SOLR-6493:
---

Commit 1623884 from hoss...@apache.org in branch 'dev/trunk'
[ https://svn.apache.org/r1623884 ]

SOLR-6493: Fix fq exclusion via ex local param in multivalued stats.field

 stats on multivalued fields don't respect excluded filters
 --

 Key: SOLR-6493
 URL: https://issues.apache.org/jira/browse/SOLR-6493
 Project: Solr
  Issue Type: Bug
Affects Versions: 4.8, 4.9, 4.10
Reporter: Hoss Man
Assignee: Hoss Man
 Attachments: SOLR-6493.patch, SOLR-6493.patch


 SOLR-3177 added support to StatsComponent for using the ex local param to 
 exclude tagged filters, but these exclusions have apparently never been 
 correct for multivalued fields



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6066) CollapsingQParserPlugin + Elevation does not respects fq (filter query)

2014-09-09 Thread David Boychuck (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6066?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14127741#comment-14127741
 ] 

David Boychuck commented on SOLR-6066:
--

I found a bug but I'm not sure if it's caused by my patch. If you elevate a 
product while an index is running that is doing auto soft commits, Solr will 
return an exception until the index is committed.

 CollapsingQParserPlugin + Elevation does not respects fq (filter query) 
 --

 Key: SOLR-6066
 URL: https://issues.apache.org/jira/browse/SOLR-6066
 Project: Solr
  Issue Type: Bug
  Components: query parsers
Affects Versions: 4.8
Reporter: Herb Jiang
Assignee: Joel Bernstein
 Fix For: 4.9

 Attachments: SOLR-6066.patch, SOLR-6066.patch, SOLR-6066.patch, 
 TestCollapseQParserPlugin.java


 QueryElevationComponent respects the fq parameter. But when 
 CollapsingQParserPlugin is used with QueryElevationComponent, an additional fq 
 has no effect.
 The following test case shows this issue. (It will fail.)
 {code:java}
 String[] doc = {"id","1", "term_s", "", "group_s", "group1", 
 "category_s", "cat2", "test_ti", "5", "test_tl", "10", "test_tf", "2000"};
 assertU(adoc(doc));
 assertU(commit());
 String[] doc1 = {"id","2", "term_s","", "group_s", "group1", 
 "category_s", "cat2", "test_ti", "50", "test_tl", "100", "test_tf", "200"};
 assertU(adoc(doc1));
 String[] doc2 = {"id","3", "term_s", "", "test_ti", "5000", 
 "test_tl", "100", "test_tf", "200"};
 assertU(adoc(doc2));
 assertU(commit());
 String[] doc3 = {"id","4", "term_s", "", "test_ti", "500", "test_tl", 
 "1000", "test_tf", "2000"};
 assertU(adoc(doc3));
 String[] doc4 = {"id","5", "term_s", "", "group_s", "group2", 
 "category_s", "cat1", "test_ti", "4", "test_tl", "10", "test_tf", "2000"};
 assertU(adoc(doc4));
 assertU(commit());
 String[] doc5 = {"id","6", "term_s","", "group_s", "group2", 
 "category_s", "cat1", "test_ti", "10", "test_tl", "100", "test_tf", "200"};
 assertU(adoc(doc5));
 assertU(commit());
 //Test additional filter query when using collapse
 params = new ModifiableSolrParams();
 params.add("q", "");
 params.add("fq", "{!collapse field=group_s}");
 params.add("fq", "category_s:cat1");
 params.add("defType", "edismax");
 params.add("bf", "field(test_ti)");
 params.add("qf", "term_s");
 params.add("qt", "/elevate");
 params.add("elevateIds", "2");
 assertQ(req(params), "*[count(//doc)=1]",
 "//result/doc[1]/float[@name='id'][.='6.0']");
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-6496) LBHttpSolrServer should stop server retries after the timeAllowed threshold is met

2014-09-09 Thread Steve Davids (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6496?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Davids updated SOLR-6496:
---
Attachment: SOLR-6496.patch

Initial patch that honors the timeAllowed request parameter. There aren't any 
tests included -- are there any objections to perhaps using a mocking library? 
It would make it much easier to perform unit testing on these negative 
cases. Mockito is my personal preference and is currently being used in 
Morphlines, but it would need to be included in the SolrJ test dependencies.

 LBHttpSolrServer should stop server retries after the timeAllowed threshold 
 is met
 --

 Key: SOLR-6496
 URL: https://issues.apache.org/jira/browse/SOLR-6496
 Project: Solr
  Issue Type: Bug
Affects Versions: 4.9
Reporter: Steve Davids
Priority: Critical
 Fix For: 4.11

 Attachments: SOLR-6496.patch


 The LBHttpSolrServer will continue to perform retries for each server it was 
 given without honoring the timeAllowed request parameter. Once the threshold 
 has been met, it should stop retrying and allow the exception to bubble up, 
 letting the request either error out or return partial results per the 
 shards.tolerant request parameter.
 For a little more context on how this can be extremely problematic, please 
 see the comment here: 
 https://issues.apache.org/jira/browse/SOLR-5986?focusedCommentId=14100991&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14100991
  (#2)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-5986) Don't allow runaway queries from harming Solr cluster health or search performance

2014-09-09 Thread Anshum Gupta (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5986?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anshum Gupta updated SOLR-5986:
---
Attachment: SOLR-5986.patch

Updated patch with more things fixed and optimized. I still need to add more 
tests (working on them).

 Don't allow runaway queries from harming Solr cluster health or search 
 performance
 --

 Key: SOLR-5986
 URL: https://issues.apache.org/jira/browse/SOLR-5986
 Project: Solr
  Issue Type: Improvement
  Components: search
Reporter: Steve Davids
Assignee: Anshum Gupta
Priority: Critical
 Fix For: 4.10

 Attachments: SOLR-5986.patch, SOLR-5986.patch, SOLR-5986.patch


 The intent of this ticket is to have all distributed search requests stop 
 wasting CPU cycles on requests that have already timed out or are so 
 complicated that they won't be able to execute. We have come across a case 
 where a nasty wildcard query within a proximity clause was causing the 
 cluster to enumerate terms for hours even though the query timeout was set to 
 minutes. This caused a noticeable slowdown within the system and forced us to 
 restart the replicas that happened to service that one request; in the worst 
 case, users with a relatively low zk timeout value will see nodes start 
 dropping from the cluster due to long GC pauses.
 [~amccurry] built a mechanism into Apache Blur to help with the issue in 
 BLUR-142 (see commit comment for code, though look at the latest code on the 
 trunk for newer bug fixes).
 Solr should be able to either prevent these problematic queries from running 
 by some heuristic (possibly estimated size of heap usage) or be able to 
 execute a thread interrupt on all query threads once the time threshold is 
 met. This issue mirrors what others have discussed on the mailing list: 
 http://mail-archives.apache.org/mod_mbox/lucene-solr-user/200903.mbox/%3c856ac15f0903272054q2dbdbd19kea3c5ba9e105b...@mail.gmail.com%3E
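The second approach (interrupting work once the time threshold is met) amounts to a cooperative deadline check inside long-running loops. A minimal hypothetical sketch, with all names made up rather than taken from any patch:

```java
// Hypothetical sketch of a cooperative timeout: a long-running loop (e.g.
// term enumeration) periodically compares against a deadline and aborts
// with an exception once the time budget is exceeded.
public class DeadlineCheckSketch {
    static class TimeExceededException extends RuntimeException {
        TimeExceededException(String msg) { super(msg); }
    }

    // Processes `items` units of work, checking the deadline only every
    // `checkEvery` iterations so the check itself stays cheap.
    static int countWithDeadline(int items, long deadlineNanos, int checkEvery) {
        int count = 0;
        for (int i = 0; i < items; i++) {
            if (i % checkEvery == 0 && System.nanoTime() > deadlineNanos) {
                throw new TimeExceededException("timeAllowed exceeded after "
                        + count + " items");
            }
            count++; // stand-in for real per-term work
        }
        return count;
    }
}
```

The periodic check keeps the overhead negligible while still bounding how long a runaway request can hold a query thread.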



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-5986) Don't allow runaway queries from harming Solr cluster health or search performance

2014-09-09 Thread Anshum Gupta (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5986?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14127752#comment-14127752
 ] 

Anshum Gupta edited comment on SOLR-5986 at 9/9/14 11:12 PM:
-

Updated patch with more things fixed and optimized. I still need to add more 
tests (working on them).
* Removed unwanted Overrides
* Changed and fixed class names.
* Initialization of ThreadLocal variable to default value instead of a null 
check to make things easier to understand.
* Setting the log message for the ExitingReaderException().
* Removed unwanted null check in the ExitObject.reset() method.
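The ThreadLocal change in the third bullet can be illustrated like this. The class and field names are stand-ins, not the patch's actual code; on Java 7 the same effect comes from overriding ThreadLocal.initialValue() instead of using the Java 8 withInitial factory.

```java
// Illustrative only: initializing a ThreadLocal with a default value so
// callers never need a null check. "ExitObject" is a stand-in class.
public class ThreadLocalDefault {
    static class ExitObject {
        volatile long timeoutAt = Long.MAX_VALUE;
        void reset() { timeoutAt = Long.MAX_VALUE; }
    }

    // Every thread sees a ready-to-use instance on first access, so the
    // call sites read the value directly instead of branching on null.
    static final ThreadLocal<ExitObject> EXIT =
            ThreadLocal.withInitial(ExitObject::new);

    static long currentTimeout() {
        return EXIT.get().timeoutAt; // no null check needed
    }
}
```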


was (Author: anshumg):
Updated patch with more things fixed and optimized. I still need to add more 
tests (working on them).

 Don't allow runaway queries from harming Solr cluster health or search 
 performance
 --

 Key: SOLR-5986
 URL: https://issues.apache.org/jira/browse/SOLR-5986
 Project: Solr
  Issue Type: Improvement
  Components: search
Reporter: Steve Davids
Assignee: Anshum Gupta
Priority: Critical
 Fix For: 4.10

 Attachments: SOLR-5986.patch, SOLR-5986.patch, SOLR-5986.patch


 The intent of this ticket is to have all distributed search requests stop 
 wasting CPU cycles on requests that have already timed out or are so 
 complicated that they won't be able to execute. We have come across a case 
 where a nasty wildcard query within a proximity clause was causing the 
 cluster to enumerate terms for hours even though the query timeout was set to 
 minutes. This caused a noticeable slowdown within the system and forced us to 
 restart the replicas that happened to service that one request; in the worst 
 case, users with a relatively low zk timeout value will see nodes start 
 dropping from the cluster due to long GC pauses.
 [~amccurry] built a mechanism into Apache Blur to help with the issue in 
 BLUR-142 (see commit comment for code, though look at the latest code on the 
 trunk for newer bug fixes).
 Solr should be able to either prevent these problematic queries from running 
 by some heuristic (possibly estimated size of heap usage) or be able to 
 execute a thread interrupt on all query threads once the time threshold is 
 met. This issue mirrors what others have discussed on the mailing list: 
 http://mail-archives.apache.org/mod_mbox/lucene-solr-user/200903.mbox/%3c856ac15f0903272054q2dbdbd19kea3c5ba9e105b...@mail.gmail.com%3E



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6066) CollapsingQParserPlugin + Elevation does not respects fq (filter query)

2014-09-09 Thread Joel Bernstein (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6066?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14127757#comment-14127757
 ] 

Joel Bernstein commented on SOLR-6066:
--

David,

Can you post this to the users list and include the stacktrace?

thanks,
Joel

 CollapsingQParserPlugin + Elevation does not respects fq (filter query) 
 --

 Key: SOLR-6066
 URL: https://issues.apache.org/jira/browse/SOLR-6066
 Project: Solr
  Issue Type: Bug
  Components: query parsers
Affects Versions: 4.8
Reporter: Herb Jiang
Assignee: Joel Bernstein
 Fix For: 4.9

 Attachments: SOLR-6066.patch, SOLR-6066.patch, SOLR-6066.patch, 
 TestCollapseQParserPlugin.java


 QueryElevationComponent respects the fq parameter. But when 
 CollapsingQParserPlugin is used with QueryElevationComponent, an additional fq 
 has no effect.
 The following test case shows this issue. (It will fail.)
 {code:java}
 String[] doc = {"id","1", "term_s", "", "group_s", "group1", 
 "category_s", "cat2", "test_ti", "5", "test_tl", "10", "test_tf", "2000"};
 assertU(adoc(doc));
 assertU(commit());
 String[] doc1 = {"id","2", "term_s","", "group_s", "group1", 
 "category_s", "cat2", "test_ti", "50", "test_tl", "100", "test_tf", "200"};
 assertU(adoc(doc1));
 String[] doc2 = {"id","3", "term_s", "", "test_ti", "5000", 
 "test_tl", "100", "test_tf", "200"};
 assertU(adoc(doc2));
 assertU(commit());
 String[] doc3 = {"id","4", "term_s", "", "test_ti", "500", "test_tl", 
 "1000", "test_tf", "2000"};
 assertU(adoc(doc3));
 String[] doc4 = {"id","5", "term_s", "", "group_s", "group2", 
 "category_s", "cat1", "test_ti", "4", "test_tl", "10", "test_tf", "2000"};
 assertU(adoc(doc4));
 assertU(commit());
 String[] doc5 = {"id","6", "term_s","", "group_s", "group2", 
 "category_s", "cat1", "test_ti", "10", "test_tl", "100", "test_tf", "200"};
 assertU(adoc(doc5));
 assertU(commit());
 //Test additional filter query when using collapse
 params = new ModifiableSolrParams();
 params.add("q", "");
 params.add("fq", "{!collapse field=group_s}");
 params.add("fq", "category_s:cat1");
 params.add("defType", "edismax");
 params.add("bf", "field(test_ti)");
 params.add("qf", "term_s");
 params.add("qt", "/elevate");
 params.add("elevateIds", "2");
 assertQ(req(params), "*[count(//doc)=1]",
 "//result/doc[1]/float[@name='id'][.='6.0']");
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6066) CollapsingQParserPlugin + Elevation does not respects fq (filter query)

2014-09-09 Thread Joel Bernstein (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6066?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14127761#comment-14127761
 ] 

Joel Bernstein commented on SOLR-6066:
--

On second thought, because it might be related to your patch, post the 
stacktrace here and I'll take a look.

If it turns out that it's not related to your patch, we'll open a ticket for it.

 CollapsingQParserPlugin + Elevation does not respects fq (filter query) 
 --

 Key: SOLR-6066
 URL: https://issues.apache.org/jira/browse/SOLR-6066
 Project: Solr
  Issue Type: Bug
  Components: query parsers
Affects Versions: 4.8
Reporter: Herb Jiang
Assignee: Joel Bernstein
 Fix For: 4.9

 Attachments: SOLR-6066.patch, SOLR-6066.patch, SOLR-6066.patch, 
 TestCollapseQParserPlugin.java


 QueryElevationComponent respects the fq parameter. But when 
 CollapsingQParserPlugin is used with QueryElevationComponent, an additional fq 
 has no effect.
 The following test case shows this issue. (It will fail.)
 {code:java}
 String[] doc = {"id","1", "term_s", "", "group_s", "group1", 
 "category_s", "cat2", "test_ti", "5", "test_tl", "10", "test_tf", "2000"};
 assertU(adoc(doc));
 assertU(commit());
 String[] doc1 = {"id","2", "term_s","", "group_s", "group1", 
 "category_s", "cat2", "test_ti", "50", "test_tl", "100", "test_tf", "200"};
 assertU(adoc(doc1));
 String[] doc2 = {"id","3", "term_s", "", "test_ti", "5000", 
 "test_tl", "100", "test_tf", "200"};
 assertU(adoc(doc2));
 assertU(commit());
 String[] doc3 = {"id","4", "term_s", "", "test_ti", "500", "test_tl", 
 "1000", "test_tf", "2000"};
 assertU(adoc(doc3));
 String[] doc4 = {"id","5", "term_s", "", "group_s", "group2", 
 "category_s", "cat1", "test_ti", "4", "test_tl", "10", "test_tf", "2000"};
 assertU(adoc(doc4));
 assertU(commit());
 String[] doc5 = {"id","6", "term_s","", "group_s", "group2", 
 "category_s", "cat1", "test_ti", "10", "test_tl", "100", "test_tf", "200"};
 assertU(adoc(doc5));
 assertU(commit());
 //Test additional filter query when using collapse
 params = new ModifiableSolrParams();
 params.add("q", "");
 params.add("fq", "{!collapse field=group_s}");
 params.add("fq", "category_s:cat1");
 params.add("defType", "edismax");
 params.add("bf", "field(test_ti)");
 params.add("qf", "term_s");
 params.add("qt", "/elevate");
 params.add("elevateIds", "2");
 assertQ(req(params), "*[count(//doc)=1]",
 "//result/doc[1]/float[@name='id'][.='6.0']");
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-5986) Don't allow runaway queries from harming Solr cluster health or search performance

2014-09-09 Thread Steve Davids (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5986?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14127767#comment-14127767
 ] 

Steve Davids commented on SOLR-5986:


bq. I think this should be ok, specially considering the intention is to make 
sure that the request is killed and doesn't run forever.
+1, this is a good starting point and can be further refined in the future if 
need be.

I went ahead and opened SOLR-6496 to account for the LBHttpSolrServer's 
continual retries. Also, I am a little concerned that the cursorMark doesn't 
honor the timeAllowed request parameter for some strange reason (the cursorMark 
ticket didn't provide any rationale for it); we may want to revisit that 
decision in yet another ticket so people can be confident their cursorMark 
queries won't crash their clusters as well.

Thanks for taking this on Anshum!

 Don't allow runaway queries from harming Solr cluster health or search 
 performance
 --

 Key: SOLR-5986
 URL: https://issues.apache.org/jira/browse/SOLR-5986
 Project: Solr
  Issue Type: Improvement
  Components: search
Reporter: Steve Davids
Assignee: Anshum Gupta
Priority: Critical
 Fix For: 4.10

 Attachments: SOLR-5986.patch, SOLR-5986.patch, SOLR-5986.patch


 The intent of this ticket is to have all distributed search requests stop 
 wasting CPU cycles on requests that have already timed out or are so 
 complicated that they won't be able to execute. We have come across a case 
 where a nasty wildcard query within a proximity clause was causing the 
 cluster to enumerate terms for hours even though the query timeout was set to 
 minutes. This caused a noticeable slowdown within the system and forced us to 
 restart the replicas that happened to service that one request; in the worst 
 case, users with a relatively low zk timeout value will see nodes start 
 dropping from the cluster due to long GC pauses.
 [~amccurry] built a mechanism into Apache Blur to help with the issue in 
 BLUR-142 (see commit comment for code, though look at the latest code on the 
 trunk for newer bug fixes).
 Solr should be able to either prevent these problematic queries from running 
 by some heuristic (possibly estimated size of heap usage) or be able to 
 execute a thread interrupt on all query threads once the time threshold is 
 met. This issue mirrors what others have discussed on the mailing list: 
 http://mail-archives.apache.org/mod_mbox/lucene-solr-user/200903.mbox/%3c856ac15f0903272054q2dbdbd19kea3c5ba9e105b...@mail.gmail.com%3E



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-5986) Don't allow runaway queries from harming Solr cluster health or search performance

2014-09-09 Thread Anshum Gupta (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5986?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14127785#comment-14127785
 ] 

Anshum Gupta commented on SOLR-5986:


I haven't looked at all of the implementation for the cursor mark but if it's 
query rewriting/expansion that takes time, this patch should fix the issue. 
I'll open another issue after I commit this one to use a single timeoutAt 
value. Ideally, it should be a single exitObject for a request that gets used 
by everything that needs to limit the processing time.

 Don't allow runaway queries from harming Solr cluster health or search 
 performance
 --

 Key: SOLR-5986
 URL: https://issues.apache.org/jira/browse/SOLR-5986
 Project: Solr
  Issue Type: Improvement
  Components: search
Reporter: Steve Davids
Assignee: Anshum Gupta
Priority: Critical
 Fix For: 4.10

 Attachments: SOLR-5986.patch, SOLR-5986.patch, SOLR-5986.patch


 The intent of this ticket is to have all distributed search requests stop 
 wasting CPU cycles on requests that have already timed out or are so 
 complicated that they won't be able to execute. We have come across a case 
 where a nasty wildcard query within a proximity clause was causing the 
 cluster to enumerate terms for hours even though the query timeout was set to 
 minutes. This caused a noticeable slowdown within the system and forced us to 
 restart the replicas that happened to service that one request; in the worst 
 case, users with a relatively low zk timeout value will see nodes start 
 dropping from the cluster due to long GC pauses.
 [~amccurry] built a mechanism into Apache Blur to help with the issue in 
 BLUR-142 (see commit comment for code, though look at the latest code on the 
 trunk for newer bug fixes).
 Solr should be able to either prevent these problematic queries from running 
 by some heuristic (possibly estimated size of heap usage) or be able to 
 execute a thread interrupt on all query threads once the time threshold is 
 met. This issue mirrors what others have discussed on the mailing list: 
 http://mail-archives.apache.org/mod_mbox/lucene-solr-user/200903.mbox/%3c856ac15f0903272054q2dbdbd19kea3c5ba9e105b...@mail.gmail.com%3E



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6066) CollapsingQParserPlugin + Elevation does not respects fq (filter query)

2014-09-09 Thread Joel Bernstein (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6066?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14127787#comment-14127787
 ] 

Joel Bernstein commented on SOLR-6066:
--

Also, some things to think about with your application.

Solr provides a consistent view of the index for each query, so softCommits in 
the background should not affect the execution of your query.

Soft commits do open a new searcher though, and the CollapsingQParserPlugin 
relies on the Lucene FieldCache, which needs to be warmed when a new searcher 
is opened.

So adding a static warming query that exercises the CollapsingQParserPlugin 
will ensure that users will not see pauses after softCommits.

If you are softCommitting too frequently, this can lead to overlapping searchers, 
as they take time to open and warm. So be sure to space the softCommits far 
enough apart that you are not opening new searchers faster than they can be 
warmed.

When you post your stack trace, it should tell us what's happening.
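For reference, a static warming query of that kind registers with the newSearcher event in solrconfig.xml; a minimal example might look like this (the collapse field name is only an example):

```xml
<listener event="newSearcher" class="solr.QuerySenderListener">
  <arr name="queries">
    <lst>
      <str name="q">*:*</str>
      <str name="fq">{!collapse field=group_s}</str>
    </lst>
  </arr>
</listener>
```

This runs the collapse query against each new searcher before it serves traffic, so the FieldCache entry is populated ahead of user requests.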

 CollapsingQParserPlugin + Elevation does not respects fq (filter query) 
 --

 Key: SOLR-6066
 URL: https://issues.apache.org/jira/browse/SOLR-6066
 Project: Solr
  Issue Type: Bug
  Components: query parsers
Affects Versions: 4.8
Reporter: Herb Jiang
Assignee: Joel Bernstein
 Fix For: 4.9

 Attachments: SOLR-6066.patch, SOLR-6066.patch, SOLR-6066.patch, 
 TestCollapseQParserPlugin.java


 QueryElevationComponent respects the fq parameter. But when 
 CollapsingQParserPlugin is used with QueryElevationComponent, an additional fq 
 has no effect.
 The following test case shows this issue. (It will fail.)
 {code:java}
 String[] doc = {"id","1", "term_s", "", "group_s", "group1", 
 "category_s", "cat2", "test_ti", "5", "test_tl", "10", "test_tf", "2000"};
 assertU(adoc(doc));
 assertU(commit());
 String[] doc1 = {"id","2", "term_s","", "group_s", "group1", 
 "category_s", "cat2", "test_ti", "50", "test_tl", "100", "test_tf", "200"};
 assertU(adoc(doc1));
 String[] doc2 = {"id","3", "term_s", "", "test_ti", "5000", 
 "test_tl", "100", "test_tf", "200"};
 assertU(adoc(doc2));
 assertU(commit());
 String[] doc3 = {"id","4", "term_s", "", "test_ti", "500", "test_tl", 
 "1000", "test_tf", "2000"};
 assertU(adoc(doc3));
 String[] doc4 = {"id","5", "term_s", "", "group_s", "group2", 
 "category_s", "cat1", "test_ti", "4", "test_tl", "10", "test_tf", "2000"};
 assertU(adoc(doc4));
 assertU(commit());
 String[] doc5 = {"id","6", "term_s","", "group_s", "group2", 
 "category_s", "cat1", "test_ti", "10", "test_tl", "100", "test_tf", "200"};
 assertU(adoc(doc5));
 assertU(commit());
 //Test additional filter query when using collapse
 params = new ModifiableSolrParams();
 params.add("q", "");
 params.add("fq", "{!collapse field=group_s}");
 params.add("fq", "category_s:cat1");
 params.add("defType", "edismax");
 params.add("bf", "field(test_ti)");
 params.add("qf", "term_s");
 params.add("qt", "/elevate");
 params.add("elevateIds", "2");
 assertQ(req(params), "*[count(//doc)=1]",
 "//result/doc[1]/float[@name='id'][.='6.0']");
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-6066) CollapsingQParserPlugin + Elevation does not respects fq (filter query)

2014-09-09 Thread Joel Bernstein (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6066?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14127787#comment-14127787
 ] 

Joel Bernstein edited comment on SOLR-6066 at 9/9/14 11:37 PM:
---

Also, some things to think about with your application.

Solr provides a consistent view of the index for each query. So softCommits in 
the backround should not effect the execution of your query. 

Soft commits do open a new searcher though, and the CollapsingQParserPlugin 
relies on the Lucene FieldCache, which needs to be warmed when a new searcher 
is opened.

So adding a static warming query that exercises the CollapsingQParserPlugin 
will ensure that users will not see pauses after softCommits.

If you are softCommitting too frequently, this can lead to overlapping searchers, 
as they take time to open and warm. So be sure to space the softCommits far 
enough apart that you are not opening new searchers faster than they can be 
warmed.
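
The static warming approach described above is registered in solrconfig.xml via a newSearcher event listener. A minimal sketch (the field name group_s is borrowed from the test case in this thread; adapt the query to your own collapse field):

{code:xml}
<!-- Warm the FieldCache entry used by {!collapse} whenever a new searcher opens -->
<listener event="newSearcher" class="solr.QuerySenderListener">
  <arr name="queries">
    <lst>
      <str name="q">*:*</str>
      <str name="fq">{!collapse field=group_s}</str>
    </lst>
  </arr>
</listener>
{code}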

When you post your stack trace, it should tell us what's happening though. 


was (Author: joel.bernstein):
Also, some things to thing about with your application.

Solr provides a consistent view of the index for each query. So softCommits in 
the backround should not effect the execution of your query. 

Soft commits do open a new searcher though, and the CollapsingQParserPlugin 
relies on the Lucene FieldCache, which needs to be warmed when a new searcher 
is opened.

So adding a static warming query that exercises the CollapsingQParserPlugin 
will ensure that users will not see pauses after softCommits.

If you are softCommitting too frequently this can lead to overlapping searchers 
as they take time to open and warm. So be sure to space the softCommits far 
enough apart that you are not opening new searchers faster then they can be 
warmed.

When you post you're stack trace, it should tell us what's happening though. 

 CollapsingQParserPlugin + Elevation does not respects fq (filter query) 
 --

 Key: SOLR-6066
 URL: https://issues.apache.org/jira/browse/SOLR-6066
 Project: Solr
  Issue Type: Bug
  Components: query parsers
Affects Versions: 4.8
Reporter: Herb Jiang
Assignee: Joel Bernstein
 Fix For: 4.9

 Attachments: SOLR-6066.patch, SOLR-6066.patch, SOLR-6066.patch, 
 TestCollapseQParserPlugin.java


 QueryElevationComponent respects the fq parameter. But when using 
 CollapsingQParserPlugin with QueryElevationComponent, an additional fq has no 
 effect.
 I use the following test case to show this issue. (It will fail.)
 {code:java}
 String[] doc = {"id","1", "term_s", "YYYY", "group_s", "group1",
 "category_s", "cat2", "test_ti", "5", "test_tl", "10", "test_tf", "2000"};
 assertU(adoc(doc));
 assertU(commit());
 String[] doc1 = {"id","2", "term_s", "YYYY", "group_s", "group1",
 "category_s", "cat2", "test_ti", "50", "test_tl", "100", "test_tf", "200"};
 assertU(adoc(doc1));
 String[] doc2 = {"id","3", "term_s", "YYYY", "test_ti", "5000",
 "test_tl", "100", "test_tf", "200"};
 assertU(adoc(doc2));
 assertU(commit());
 String[] doc3 = {"id","4", "term_s", "YYYY", "test_ti", "500", "test_tl",
 "1000", "test_tf", "2000"};
 assertU(adoc(doc3));
 String[] doc4 = {"id","5", "term_s", "YYYY", "group_s", "group2",
 "category_s", "cat1", "test_ti", "4", "test_tl", "10", "test_tf", "2000"};
 assertU(adoc(doc4));
 assertU(commit());
 String[] doc5 = {"id","6", "term_s", "YYYY", "group_s", "group2",
 "category_s", "cat1", "test_ti", "10", "test_tl", "100", "test_tf", "200"};
 assertU(adoc(doc5));
 assertU(commit());
 //Test additional filter query when using collapse
 params = new ModifiableSolrParams();
 params.add("q", "YYYY");
 params.add("fq", "{!collapse field=group_s}");
 params.add("fq", "category_s:cat1");
 params.add("defType", "edismax");
 params.add("bf", "field(test_ti)");
 params.add("qf", "term_s");
 params.add("qt", "/elevate");
 params.add("elevateIds", "2");
 assertQ(req(params), "*[count(//doc)=1]",
 "//result/doc[1]/float[@name='id'][.='6.0']");
 {code}






[jira] [Commented] (SOLR-6452) StatsComponent missing stat won't work with docValues=true and indexed=false

2014-09-09 Thread JIRA

[ 
https://issues.apache.org/jira/browse/SOLR-6452?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14127797#comment-14127797
 ] 

Tomás Fernández Löbbe commented on SOLR-6452:
-

I think the patch looks good, I'll commit it shortly

 StatsComponent missing stat won't work with docValues=true and indexed=false
 --

 Key: SOLR-6452
 URL: https://issues.apache.org/jira/browse/SOLR-6452
 Project: Solr
  Issue Type: Bug
Affects Versions: 5.0, 4.10
Reporter: Tomás Fernández Löbbe
 Attachments: SOLR-6452-trunk.patch, SOLR-6452-trunk.patch, 
 SOLR-6452.patch


 StatsComponent can work with DocValues, but it still requires indexed=true 
 for the missing stat to work. Missing values should be 
 obtained from the docValues too.






[jira] [Assigned] (SOLR-6452) StatsComponent missing stat won't work with docValues=true and indexed=false

2014-09-09 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/SOLR-6452?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tomás Fernández Löbbe reassigned SOLR-6452:
---

Assignee: Tomás Fernández Löbbe

 StatsComponent missing stat won't work with docValues=true and indexed=false
 --

 Key: SOLR-6452
 URL: https://issues.apache.org/jira/browse/SOLR-6452
 Project: Solr
  Issue Type: Bug
Affects Versions: 5.0, 4.10
Reporter: Tomás Fernández Löbbe
Assignee: Tomás Fernández Löbbe
 Attachments: SOLR-6452-trunk.patch, SOLR-6452-trunk.patch, 
 SOLR-6452.patch


 StatsComponent can work with DocValues, but it still requires indexed=true 
 for the missing stat to work. Missing values should be 
 obtained from the docValues too.






[jira] [Commented] (SOLR-6066) CollapsingQParserPlugin + Elevation does not respects fq (filter query)

2014-09-09 Thread David Boychuck (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6066?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14127801#comment-14127801
 ] 

David Boychuck commented on SOLR-6066:
--

null:java.lang.IndexOutOfBoundsException: Index: 2, Size: 2
at java.util.ArrayList.rangeCheck(ArrayList.java:635)
at java.util.ArrayList.get(ArrayList.java:411)
at org.apache.solr.common.util.NamedList.getName(NamedList.java:131)
at 
org.apache.solr.handler.component.QueryComponent.unmarshalSortValues(QueryComponent.java:1058)
at 
org.apache.solr.handler.component.QueryComponent.mergeIds(QueryComponent.java:905)
at 
org.apache.solr.handler.component.QueryComponent.handleRegularResponses(QueryComponent.java:695)
at 
org.apache.solr.handler.component.QueryComponent.handleResponses(QueryComponent.java:674)
at 
org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:323)
at 
org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:135)
at org.apache.solr.core.SolrCore.execute(SolrCore.java:1916)
at 
org.apache.solr.servlet.SolrDispatchFilter.execute(SolrDispatchFilter.java:768)
at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:415)
at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:205)
at 
org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:241)
at 
org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:208)
at 
org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:220)
at 
org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:122)
at 
org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:171)
at 
org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:102)
at 
org.apache.catalina.valves.RequestFilterValve.process(RequestFilterValve.java:304)
at 
org.apache.catalina.valves.RemoteAddrValve.invoke(RemoteAddrValve.java:82)
at 
org.apache.catalina.valves.AccessLogValve.invoke(AccessLogValve.java:950)
at 
org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:116)
at 
org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:408)
at 
org.apache.coyote.http11.AbstractHttp11Processor.process(AbstractHttp11Processor.java:1040)
at 
org.apache.coyote.AbstractProtocol$AbstractConnectionHandler.process(AbstractProtocol.java:607)
at 
org.apache.tomcat.util.net.JIoEndpoint$SocketProcessor.run(JIoEndpoint.java:314)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at 
org.apache.tomcat.util.threads.TaskThread$WrappingRunnable.run(TaskThread.java:61)
at java.lang.Thread.run(Thread.java:745)

 CollapsingQParserPlugin + Elevation does not respects fq (filter query) 
 --

 Key: SOLR-6066
 URL: https://issues.apache.org/jira/browse/SOLR-6066
 Project: Solr
  Issue Type: Bug
  Components: query parsers
Affects Versions: 4.8
Reporter: Herb Jiang
Assignee: Joel Bernstein
 Fix For: 4.9

 Attachments: SOLR-6066.patch, SOLR-6066.patch, SOLR-6066.patch, 
 TestCollapseQParserPlugin.java


 QueryElevationComponent respects the fq parameter. But when using 
 CollapsingQParserPlugin with QueryElevationComponent, an additional fq has no 
 effect.
 I use the following test case to show this issue. (It will fail.)
 {code:java}
 String[] doc = {"id","1", "term_s", "YYYY", "group_s", "group1",
 "category_s", "cat2", "test_ti", "5", "test_tl", "10", "test_tf", "2000"};
 assertU(adoc(doc));
 assertU(commit());
 String[] doc1 = {"id","2", "term_s", "YYYY", "group_s", "group1",
 "category_s", "cat2", "test_ti", "50", "test_tl", "100", "test_tf", "200"};
 assertU(adoc(doc1));
 String[] doc2 = {"id","3", "term_s", "YYYY", "test_ti", "5000",
 "test_tl", "100", "test_tf", "200"};
 assertU(adoc(doc2));
 assertU(commit());
 String[] doc3 = {"id","4", "term_s", "YYYY", "test_ti", "500", "test_tl",
 "1000", "test_tf", "2000"};
 assertU(adoc(doc3));
 String[] doc4 = {"id","5", "term_s", "YYYY", "group_s", "group2",
 "category_s", "cat1", "test_ti", "4", "test_tl", "10", "test_tf", "2000"};
 assertU(adoc(doc4));
 assertU(commit());
 String[] doc5 = {"id","6", "term_s", "YYYY", "group_s", "group2",
 "category_s", "cat1", "test_ti", "10", "test_tl", "100", "test_tf", "200"};
 assertU(adoc(doc5));
 assertU(commit());
 //Test additional filter query when using collapse
 params = new ModifiableSolrParams();
 params.add("q", "YYYY");
 params.add("fq", "{!collapse 

[jira] [Commented] (SOLR-6066) CollapsingQParserPlugin + Elevation does not respects fq (filter query)

2014-09-09 Thread David Boychuck (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6066?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14127815#comment-14127815
 ] 

David Boychuck commented on SOLR-6066:
--

326255 [http-bio-8080-exec-1] ERROR org.apache.solr.servlet.SolrDispatchFilter  
? null:java.lang.ArrayIndexOutOfBoundsException: 2147483645
at 
org.apache.lucene.util.packed.Packed8ThreeBlocks.get(Packed8ThreeBlocks.java:58)
at 
org.apache.lucene.search.FieldCacheImpl$SortedDocValuesImpl.getOrd(FieldCacheImpl.java:1132)
at 
org.apache.solr.search.CollapsingQParserPlugin$CollapsingScoreCollector.finish(CollapsingQParserPlugin.java:525)
at 
org.apache.solr.search.SolrIndexSearcher.getDocListAndSetNC(SolrIndexSearcher.java:1741)
at 
org.apache.solr.search.SolrIndexSearcher.getDocListC(SolrIndexSearcher.java:1391)
at 
org.apache.solr.search.SolrIndexSearcher.search(SolrIndexSearcher.java:476)
at 
org.apache.solr.handler.component.QueryComponent.process(QueryComponent.java:461)
at 
org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:217)
at 
org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:135)
at org.apache.solr.core.SolrCore.execute(SolrCore.java:1916)
at 
org.apache.solr.servlet.SolrDispatchFilter.execute(SolrDispatchFilter.java:768)
at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:415)
at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:205)
at 
org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:241)
at 
org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:208)
at 
org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:220)
at 
org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:122)
at 
org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:171)
at 
org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:102)
at 
org.apache.catalina.valves.RequestFilterValve.process(RequestFilterValve.java:304)
at 
org.apache.catalina.valves.RemoteAddrValve.invoke(RemoteAddrValve.java:82)
at 
org.apache.catalina.valves.AccessLogValve.invoke(AccessLogValve.java:950)
at 
org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:116)
at 
org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:408)
at 
org.apache.coyote.http11.AbstractHttp11Processor.process(AbstractHttp11Processor.java:1040)
at 
org.apache.coyote.AbstractProtocol$AbstractConnectionHandler.process(AbstractProtocol.java:607)
at 
org.apache.tomcat.util.net.JIoEndpoint$SocketProcessor.run(JIoEndpoint.java:314)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at 
org.apache.tomcat.util.threads.TaskThread$WrappingRunnable.run(TaskThread.java:61)
at java.lang.Thread.run(Thread.java:745)

 CollapsingQParserPlugin + Elevation does not respects fq (filter query) 
 --

 Key: SOLR-6066
 URL: https://issues.apache.org/jira/browse/SOLR-6066
 Project: Solr
  Issue Type: Bug
  Components: query parsers
Affects Versions: 4.8
Reporter: Herb Jiang
Assignee: Joel Bernstein
 Fix For: 4.9

 Attachments: SOLR-6066.patch, SOLR-6066.patch, SOLR-6066.patch, 
 TestCollapseQParserPlugin.java


 QueryElevationComponent respects the fq parameter. But when using 
 CollapsingQParserPlugin with QueryElevationComponent, an additional fq has no 
 effect.
 I use the following test case to show this issue. (It will fail.)
 {code:java}
 String[] doc = {"id","1", "term_s", "YYYY", "group_s", "group1",
 "category_s", "cat2", "test_ti", "5", "test_tl", "10", "test_tf", "2000"};
 assertU(adoc(doc));
 assertU(commit());
 String[] doc1 = {"id","2", "term_s", "YYYY", "group_s", "group1",
 "category_s", "cat2", "test_ti", "50", "test_tl", "100", "test_tf", "200"};
 assertU(adoc(doc1));
 String[] doc2 = {"id","3", "term_s", "YYYY", "test_ti", "5000",
 "test_tl", "100", "test_tf", "200"};
 assertU(adoc(doc2));
 assertU(commit());
 String[] doc3 = {"id","4", "term_s", "YYYY", "test_ti", "500", "test_tl",
 "1000", "test_tf", "2000"};
 assertU(adoc(doc3));
 String[] doc4 = {"id","5", "term_s", "YYYY", "group_s", "group2",
 "category_s", "cat1", "test_ti", "4", "test_tl", "10", "test_tf", "2000"};
 assertU(adoc(doc4));
 assertU(commit());
 String[] doc5 = {"id","6", "term_s", "YYYY", "group_s", "group2",
 "category_s", "cat1", "test_ti", "10", "test_tl", "100", "test_tf", "200"};
 assertU(adoc(doc5));
 

[jira] [Comment Edited] (LUCENE-5929) Standard highlighting doesn't work for ToParentBlockJoinQuery

2014-09-09 Thread Julie Tibshirani (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5929?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14127694#comment-14127694
 ] 

Julie Tibshirani edited comment on LUCENE-5929 at 9/10/14 12:17 AM:


I uploaded a patch that simply adds a check in WeightedSpanTermExtractor for 
ToParentBlockJoinQuery and ToChildBlockJoinQuery. I'm new to committing to 
Lucene -- is there anything I'm missing or should watch out for? 


was (Author: jtibs):
I have a simple fix ready that simply adds a check in WeightedSpanTermExtractor 
for ToParentBlockJoinQuery and ToChildBlockJoinQuery. I'm new to committing to 
Lucene -- is there anything I'm missing or should watch out for? 

 Standard highlighting doesn't work for ToParentBlockJoinQuery
 -

 Key: LUCENE-5929
 URL: https://issues.apache.org/jira/browse/LUCENE-5929
 Project: Lucene - Core
  Issue Type: Bug
  Components: modules/highlighter
Reporter: Julie Tibshirani
 Attachments: HighligherTest.patch


 Because WeightedSpanTermExtractor#extract doesn't check for 
 ToParentBlockJoinQuery, the Highlighter class fails to produce highlights for 
 this type of query.
 At first it may seem like there's no issue, because ToParentBlockJoinQuery 
 only returns parent documents, while the highlighting applies to children. 
 But if a client can directly supply the text from child documents (as 
 elasticsearch does if _source is enabled), then highlighting will 
 unexpectedly fail.
 A test case that triggers the bug is attached. The same issue exists for 
 ToChildBlockJoinQuery.






[jira] [Updated] (LUCENE-5929) Standard highlighting doesn't work for ToParentBlockJoinQuery

2014-09-09 Thread Julie Tibshirani (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5929?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Julie Tibshirani updated LUCENE-5929:
-
Lucene Fields: Patch Available  (was: New)

 Standard highlighting doesn't work for ToParentBlockJoinQuery
 -

 Key: LUCENE-5929
 URL: https://issues.apache.org/jira/browse/LUCENE-5929
 Project: Lucene - Core
  Issue Type: Bug
  Components: modules/highlighter
Reporter: Julie Tibshirani
 Attachments: HighligherTest.patch


 Because WeightedSpanTermExtractor#extract doesn't check for 
 ToParentBlockJoinQuery, the Highlighter class fails to produce highlights for 
 this type of query.
 At first it may seem like there's no issue, because ToParentBlockJoinQuery 
 only returns parent documents, while the highlighting applies to children. 
 But if a client can directly supply the text from child documents (as 
 elasticsearch does if _source is enabled), then highlighting will 
 unexpectedly fail.
 A test case that triggers the bug is attached. The same issue exists for 
 ToChildBlockJoinQuery.






[jira] [Commented] (SOLR-6066) CollapsingQParserPlugin + Elevation does not respects fq (filter query)

2014-09-09 Thread Joel Bernstein (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6066?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14127850#comment-14127850
 ] 

Joel Bernstein commented on SOLR-6066:
--

Looks like you're running into this bug which was resolved in Solr 4.8:

https://issues.apache.org/jira/browse/SOLR-6029

 CollapsingQParserPlugin + Elevation does not respects fq (filter query) 
 --

 Key: SOLR-6066
 URL: https://issues.apache.org/jira/browse/SOLR-6066
 Project: Solr
  Issue Type: Bug
  Components: query parsers
Affects Versions: 4.8
Reporter: Herb Jiang
Assignee: Joel Bernstein
 Fix For: 4.9

 Attachments: SOLR-6066.patch, SOLR-6066.patch, SOLR-6066.patch, 
 TestCollapseQParserPlugin.java


 QueryElevationComponent respects the fq parameter. But when using 
 CollapsingQParserPlugin with QueryElevationComponent, an additional fq has no 
 effect.
 I use the following test case to show this issue. (It will fail.)
 {code:java}
 String[] doc = {"id","1", "term_s", "YYYY", "group_s", "group1",
 "category_s", "cat2", "test_ti", "5", "test_tl", "10", "test_tf", "2000"};
 assertU(adoc(doc));
 assertU(commit());
 String[] doc1 = {"id","2", "term_s", "YYYY", "group_s", "group1",
 "category_s", "cat2", "test_ti", "50", "test_tl", "100", "test_tf", "200"};
 assertU(adoc(doc1));
 String[] doc2 = {"id","3", "term_s", "YYYY", "test_ti", "5000",
 "test_tl", "100", "test_tf", "200"};
 assertU(adoc(doc2));
 assertU(commit());
 String[] doc3 = {"id","4", "term_s", "YYYY", "test_ti", "500", "test_tl",
 "1000", "test_tf", "2000"};
 assertU(adoc(doc3));
 String[] doc4 = {"id","5", "term_s", "YYYY", "group_s", "group2",
 "category_s", "cat1", "test_ti", "4", "test_tl", "10", "test_tf", "2000"};
 assertU(adoc(doc4));
 assertU(commit());
 String[] doc5 = {"id","6", "term_s", "YYYY", "group_s", "group2",
 "category_s", "cat1", "test_ti", "10", "test_tl", "100", "test_tf", "200"};
 assertU(adoc(doc5));
 assertU(commit());
 //Test additional filter query when using collapse
 params = new ModifiableSolrParams();
 params.add("q", "YYYY");
 params.add("fq", "{!collapse field=group_s}");
 params.add("fq", "category_s:cat1");
 params.add("defType", "edismax");
 params.add("bf", "field(test_ti)");
 params.add("qf", "term_s");
 params.add("qt", "/elevate");
 params.add("elevateIds", "2");
 assertQ(req(params), "*[count(//doc)=1]",
 "//result/doc[1]/float[@name='id'][.='6.0']");
 {code}






[jira] [Commented] (SOLR-6066) CollapsingQParserPlugin + Elevation does not respects fq (filter query)

2014-09-09 Thread Joel Bernstein (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6066?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14127856#comment-14127856
 ] 

Joel Bernstein commented on SOLR-6066:
--

You can see the changes with this commit:
https://svn.apache.org/viewvc?view=revision&revision=1592880

The patch was put up there by the reporter, but the commit is slightly 
different.



 CollapsingQParserPlugin + Elevation does not respects fq (filter query) 
 --

 Key: SOLR-6066
 URL: https://issues.apache.org/jira/browse/SOLR-6066
 Project: Solr
  Issue Type: Bug
  Components: query parsers
Affects Versions: 4.8
Reporter: Herb Jiang
Assignee: Joel Bernstein
 Fix For: 4.9

 Attachments: SOLR-6066.patch, SOLR-6066.patch, SOLR-6066.patch, 
 TestCollapseQParserPlugin.java


 QueryElevationComponent respects the fq parameter. But when using 
 CollapsingQParserPlugin with QueryElevationComponent, an additional fq has no 
 effect.
 I use the following test case to show this issue. (It will fail.)
 {code:java}
 String[] doc = {"id","1", "term_s", "YYYY", "group_s", "group1",
 "category_s", "cat2", "test_ti", "5", "test_tl", "10", "test_tf", "2000"};
 assertU(adoc(doc));
 assertU(commit());
 String[] doc1 = {"id","2", "term_s", "YYYY", "group_s", "group1",
 "category_s", "cat2", "test_ti", "50", "test_tl", "100", "test_tf", "200"};
 assertU(adoc(doc1));
 String[] doc2 = {"id","3", "term_s", "YYYY", "test_ti", "5000",
 "test_tl", "100", "test_tf", "200"};
 assertU(adoc(doc2));
 assertU(commit());
 String[] doc3 = {"id","4", "term_s", "YYYY", "test_ti", "500", "test_tl",
 "1000", "test_tf", "2000"};
 assertU(adoc(doc3));
 String[] doc4 = {"id","5", "term_s", "YYYY", "group_s", "group2",
 "category_s", "cat1", "test_ti", "4", "test_tl", "10", "test_tf", "2000"};
 assertU(adoc(doc4));
 assertU(commit());
 String[] doc5 = {"id","6", "term_s", "YYYY", "group_s", "group2",
 "category_s", "cat1", "test_ti", "10", "test_tl", "100", "test_tf", "200"};
 assertU(adoc(doc5));
 assertU(commit());
 //Test additional filter query when using collapse
 params = new ModifiableSolrParams();
 params.add("q", "YYYY");
 params.add("fq", "{!collapse field=group_s}");
 params.add("fq", "category_s:cat1");
 params.add("defType", "edismax");
 params.add("bf", "field(test_ti)");
 params.add("qf", "term_s");
 params.add("qt", "/elevate");
 params.add("elevateIds", "2");
 assertQ(req(params), "*[count(//doc)=1]",
 "//result/doc[1]/float[@name='id'][.='6.0']");
 {code}






[jira] [Updated] (SOLR-6452) StatsComponent missing stat won't work with docValues=true and indexed=false

2014-09-09 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/SOLR-6452?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tomás Fernández Löbbe updated SOLR-6452:

Attachment: SOLR-6452.patch

New patch against trunk plus minor changes

 StatsComponent missing stat won't work with docValues=true and indexed=false
 --

 Key: SOLR-6452
 URL: https://issues.apache.org/jira/browse/SOLR-6452
 Project: Solr
  Issue Type: Bug
Affects Versions: 5.0, 4.10
Reporter: Tomás Fernández Löbbe
Assignee: Tomás Fernández Löbbe
 Attachments: SOLR-6452-trunk.patch, SOLR-6452-trunk.patch, 
 SOLR-6452.patch, SOLR-6452.patch


 StatsComponent can work with DocValues, but it still requires indexed=true 
 for the missing stat to work. Missing values should be 
 obtained from the docValues too.






[jira] [Created] (LUCENE-5930) Change intellij setup to have 1 module per lucene module instead of 3

2014-09-09 Thread Ryan Ernst (JIRA)
Ryan Ernst created LUCENE-5930:
--

 Summary: Change intellij setup to have 1 module per lucene module 
instead of 3
 Key: LUCENE-5930
 URL: https://issues.apache.org/jira/browse/LUCENE-5930
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Ryan Ernst


The number of intellij modules is getting out of hand.  Intellij supports 
marking subdirectories within a module as 
source/resources/tests/test-resources.  I think we should consolidate these 
modules so we have just one per lucene module.  Is there some reason I'm 
missing that this was not done in the first place?






[jira] [Commented] (LUCENE-5930) Change intellij setup to have 1 module per lucene module instead of 3

2014-09-09 Thread Ryan Ernst (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5930?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14127865#comment-14127865
 ] 

Ryan Ernst commented on LUCENE-5930:


I should clarify that not all modules are set up this way.  It looks like it is just 
the major ones (which is where the pain is for me, having to switch between 
intellij modules to browse for what I'm looking for).  I see this pattern 
for the following lucene modules:
* lucene core
* codecs
* solr core

...I guess that was it. I thought I had remembered more.  Even so, lucene core is 
what bugs me the most.  Any objections?

 Change intellij setup to have 1 module per lucene module instead of 3
 -

 Key: LUCENE-5930
 URL: https://issues.apache.org/jira/browse/LUCENE-5930
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Ryan Ernst

 The number of intellij modules is getting out of hand.  Intellij supports 
 marking subdirectories within a module as 
 source/resources/tests/test-resources.  I think we should consolidate these 
 modules so we have just one per lucene module.  Is there some reason I'm 
 missing that this was not done in the first place?






[jira] [Commented] (SOLR-6493) stats on multivalued fields don't respect excluded filters

2014-09-09 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6493?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14127864#comment-14127864
 ] 

ASF subversion and git services commented on SOLR-6493:
---

Commit 1623893 from hoss...@apache.org in branch 'dev/branches/branch_4x'
[ https://svn.apache.org/r1623893 ]

SOLR-6493: Fix fq exclusion via ex local param in multivalued stats.field 
(merge r1623884)

 stats on multivalued fields don't respect excluded filters
 --

 Key: SOLR-6493
 URL: https://issues.apache.org/jira/browse/SOLR-6493
 Project: Solr
  Issue Type: Bug
Affects Versions: 4.8, 4.9, 4.10
Reporter: Hoss Man
Assignee: Hoss Man
 Attachments: SOLR-6493.patch, SOLR-6493.patch


 SOLR-3177 added support to StatsComponent for using the ex local param to 
 exclude tagged filters, but these exclusions have apparently never been 
 correct for multivalued fields.






[jira] [Commented] (LUCENE-5930) Change intellij setup to have 1 module per lucene module instead of 3

2014-09-09 Thread Ryan Ernst (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5930?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14127869#comment-14127869
 ] 

Ryan Ernst commented on LUCENE-5930:


I also think we should try grouping modules; that would make it a lot less 
cumbersome to navigate.
http://www.jetbrains.com/idea/webhelp/grouping-modules.html

 Change intellij setup to have 1 module per lucene module instead of 3
 -

 Key: LUCENE-5930
 URL: https://issues.apache.org/jira/browse/LUCENE-5930
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Ryan Ernst

 The number of intellij modules is getting out of hand.  Intellij supports 
 marking subdirectories within a module as 
 source/resources/tests/test-resources.  I think we should consolidate these 
 modules so we have just one per lucene module.  Is there some reason I'm 
 missing that this was not done in the first place?






[jira] [Updated] (SOLR-6496) LBHttpSolrServer should stop server retries after the timeAllowed threshold is met

2014-09-09 Thread Steve Davids (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6496?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Davids updated SOLR-6496:
---
Attachment: SOLR-6496.patch

Fixed patch for null safe SolrParams check.

 LBHttpSolrServer should stop server retries after the timeAllowed threshold 
 is met
 --

 Key: SOLR-6496
 URL: https://issues.apache.org/jira/browse/SOLR-6496
 Project: Solr
  Issue Type: Bug
Affects Versions: 4.9
Reporter: Steve Davids
Priority: Critical
 Fix For: 4.11

 Attachments: SOLR-6496.patch, SOLR-6496.patch


 The LBHttpSolrServer will continue to perform retries for each server it was 
 given without honoring the timeAllowed request parameter. Once the threshold 
 has been met, it should stop retrying and let the exception bubble up, 
 allowing the request to either error out or return partial results per the 
 shards.tolerant request parameter.
 For a little more context on how this can be extremely problematic, please 
 see the comment here: 
 https://issues.apache.org/jira/browse/SOLR-5986?focusedCommentId=14100991&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14100991
  (#2)
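
Stated generically, the requested behavior is a retry loop bounded by a time budget rather than only by the server list. A minimal, Solr-free sketch of that idea (all names here are illustrative, not LBHttpSolrServer's actual API):

```java
import java.util.List;
import java.util.concurrent.TimeUnit;
import java.util.function.Function;

public class DeadlineRetry {

    /**
     * Try each server in turn, but give up as soon as the time budget
     * (analogous to Solr's timeAllowed parameter) is exhausted, letting
     * the last failure bubble up instead of silently retrying.
     */
    public static String request(List<String> servers,
                                 Function<String, String> query,
                                 long timeAllowedMs) {
        long deadline = System.nanoTime() + TimeUnit.MILLISECONDS.toNanos(timeAllowedMs);
        RuntimeException last = null;
        for (String server : servers) {
            if (System.nanoTime() >= deadline) {
                break; // budget spent: stop retrying
            }
            try {
                return query.apply(server);
            } catch (RuntimeException e) {
                last = e; // remember the failure, fall through to the next server
            }
        }
        throw last != null ? last : new RuntimeException("timeAllowed exceeded before any attempt");
    }

    public static void main(String[] args) {
        Function<String, String> query = server -> {
            if (server.startsWith("bad")) throw new RuntimeException("down: " + server);
            return "ok from " + server;
        };
        // The healthy fallback server is reached within the budget.
        System.out.println(request(List.of("bad1", "good1"), query, 1000));
    }
}
```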






[jira] [Resolved] (SOLR-6493) stats on multivalued fields don't respect excluded filters

2014-09-09 Thread Hoss Man (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6493?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hoss Man resolved SOLR-6493.

   Resolution: Fixed
Fix Version/s: 4.11
   5.0

 stats on multivalued fields don't respect excluded filters
 --

 Key: SOLR-6493
 URL: https://issues.apache.org/jira/browse/SOLR-6493
 Project: Solr
  Issue Type: Bug
Affects Versions: 4.8, 4.9, 4.10
Reporter: Hoss Man
Assignee: Hoss Man
 Fix For: 5.0, 4.11

 Attachments: SOLR-6493.patch, SOLR-6493.patch


 SOLR-3177 added support to StatsComponent for using the ex local param to 
 exclude tagged filters, but these exclusions have apparently never been 
 correct for multivalued fields.






[jira] [Created] (LUCENE-5931) DirectoryReader.openIfChanged(oldReader, commit) incorrectly assumes given commit points has deletes/field updates

2014-09-09 Thread Vitaly Funstein (JIRA)
Vitaly Funstein created LUCENE-5931:
---

 Summary: DirectoryReader.openIfChanged(oldReader, commit) 
incorrectly assumes given commit points has deletes/field updates
 Key: LUCENE-5931
 URL: https://issues.apache.org/jira/browse/LUCENE-5931
 Project: Lucene - Core
  Issue Type: Bug
  Components: core/index
Affects Versions: 4.6.1
Reporter: Vitaly Funstein
Priority: Critical


{{StandardDirectoryReader}} assumes that the segments from commit point have 
deletes, when they may not, yet the original SegmentReader for the segment that 
we are trying to reuse does. This is evident when running attached JUnit test 
case with asserts enabled (default): 

{noformat}
java.lang.AssertionError
at 
org.apache.lucene.index.StandardDirectoryReader.open(StandardDirectoryReader.java:188)
at 
org.apache.lucene.index.StandardDirectoryReader.doOpenIfChanged(StandardDirectoryReader.java:326)
at 
org.apache.lucene.index.StandardDirectoryReader$2.doBody(StandardDirectoryReader.java:320)
at 
org.apache.lucene.index.SegmentInfos$FindSegmentsFile.run(SegmentInfos.java:702)
at 
org.apache.lucene.index.StandardDirectoryReader.doOpenFromCommit(StandardDirectoryReader.java:315)
at 
org.apache.lucene.index.StandardDirectoryReader.doOpenNoWriter(StandardDirectoryReader.java:311)
at 
org.apache.lucene.index.StandardDirectoryReader.doOpenIfChanged(StandardDirectoryReader.java:262)
at 
org.apache.lucene.index.DirectoryReader.openIfChanged(DirectoryReader.java:183)
{noformat}

or, if asserts are disabled then it falls through into NPE:

{noformat}
java.lang.NullPointerException
at java.io.File.<init>(File.java:305)
at 
org.apache.lucene.store.NIOFSDirectory.openInput(NIOFSDirectory.java:80)
at 
org.apache.lucene.codecs.lucene40.BitVector.<init>(BitVector.java:327)
at 
org.apache.lucene.codecs.lucene40.Lucene40LiveDocsFormat.readLiveDocs(Lucene40LiveDocsFormat.java:90)
at org.apache.lucene.index.SegmentReader.<init>(SegmentReader.java:131)
at 
org.apache.lucene.index.StandardDirectoryReader.open(StandardDirectoryReader.java:194)
at 
org.apache.lucene.index.StandardDirectoryReader.doOpenIfChanged(StandardDirectoryReader.java:326)
at 
org.apache.lucene.index.StandardDirectoryReader$2.doBody(StandardDirectoryReader.java:320)
at 
org.apache.lucene.index.SegmentInfos$FindSegmentsFile.run(SegmentInfos.java:702)
at 
org.apache.lucene.index.StandardDirectoryReader.doOpenFromCommit(StandardDirectoryReader.java:315)
at 
org.apache.lucene.index.StandardDirectoryReader.doOpenNoWriter(StandardDirectoryReader.java:311)
at 
org.apache.lucene.index.StandardDirectoryReader.doOpenIfChanged(StandardDirectoryReader.java:262)
at 
org.apache.lucene.index.DirectoryReader.openIfChanged(DirectoryReader.java:183)
{noformat}






[jira] [Updated] (LUCENE-5931) DirectoryReader.openIfChanged(oldReader, commit) incorrectly assumes given commit points has deletes/field updates

2014-09-09 Thread Vitaly Funstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5931?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vitaly Funstein updated LUCENE-5931:

Attachment: CommitReuseTest.java

 DirectoryReader.openIfChanged(oldReader, commit) incorrectly assumes given 
 commit points has deletes/field updates
 --

 Key: LUCENE-5931
 URL: https://issues.apache.org/jira/browse/LUCENE-5931
 Project: Lucene - Core
  Issue Type: Bug
  Components: core/index
Affects Versions: 4.6.1
Reporter: Vitaly Funstein
Priority: Critical
 Attachments: CommitReuseTest.java


 {{StandardDirectoryReader}} assumes that the segments from commit point have 
 deletes, when they may not, yet the original SegmentReader for the segment 
 that we are trying to reuse does. This is evident when running attached JUnit 
 test case with asserts enabled (default): 
 {noformat}
 java.lang.AssertionError
   at 
 org.apache.lucene.index.StandardDirectoryReader.open(StandardDirectoryReader.java:188)
   at 
 org.apache.lucene.index.StandardDirectoryReader.doOpenIfChanged(StandardDirectoryReader.java:326)
   at 
 org.apache.lucene.index.StandardDirectoryReader$2.doBody(StandardDirectoryReader.java:320)
   at 
 org.apache.lucene.index.SegmentInfos$FindSegmentsFile.run(SegmentInfos.java:702)
   at 
 org.apache.lucene.index.StandardDirectoryReader.doOpenFromCommit(StandardDirectoryReader.java:315)
   at 
 org.apache.lucene.index.StandardDirectoryReader.doOpenNoWriter(StandardDirectoryReader.java:311)
   at 
 org.apache.lucene.index.StandardDirectoryReader.doOpenIfChanged(StandardDirectoryReader.java:262)
   at 
 org.apache.lucene.index.DirectoryReader.openIfChanged(DirectoryReader.java:183)
 {noformat}
 or, if asserts are disabled then it falls through into NPE:
 {noformat}
 java.lang.NullPointerException
   at java.io.File.<init>(File.java:305)
   at 
 org.apache.lucene.store.NIOFSDirectory.openInput(NIOFSDirectory.java:80)
   at 
 org.apache.lucene.codecs.lucene40.BitVector.<init>(BitVector.java:327)
   at 
 org.apache.lucene.codecs.lucene40.Lucene40LiveDocsFormat.readLiveDocs(Lucene40LiveDocsFormat.java:90)
   at org.apache.lucene.index.SegmentReader.<init>(SegmentReader.java:131)
   at 
 org.apache.lucene.index.StandardDirectoryReader.open(StandardDirectoryReader.java:194)
   at 
 org.apache.lucene.index.StandardDirectoryReader.doOpenIfChanged(StandardDirectoryReader.java:326)
   at 
 org.apache.lucene.index.StandardDirectoryReader$2.doBody(StandardDirectoryReader.java:320)
   at 
 org.apache.lucene.index.SegmentInfos$FindSegmentsFile.run(SegmentInfos.java:702)
   at 
 org.apache.lucene.index.StandardDirectoryReader.doOpenFromCommit(StandardDirectoryReader.java:315)
   at 
 org.apache.lucene.index.StandardDirectoryReader.doOpenNoWriter(StandardDirectoryReader.java:311)
   at 
 org.apache.lucene.index.StandardDirectoryReader.doOpenIfChanged(StandardDirectoryReader.java:262)
   at 
 org.apache.lucene.index.DirectoryReader.openIfChanged(DirectoryReader.java:183)
 {noformat}






[jira] [Updated] (LUCENE-5931) DirectoryReader.openIfChanged(oldReader, commit) incorrectly assumes given commit point has deletes/field updates

2014-09-09 Thread Vitaly Funstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5931?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vitaly Funstein updated LUCENE-5931:

Summary: DirectoryReader.openIfChanged(oldReader, commit) incorrectly 
assumes given commit point has deletes/field updates  (was: 
DirectoryReader.openIfChanged(oldReader, commit) incorrectly assumes given 
commit points has deletes/field updates)

 DirectoryReader.openIfChanged(oldReader, commit) incorrectly assumes given 
 commit point has deletes/field updates
 -

 Key: LUCENE-5931
 URL: https://issues.apache.org/jira/browse/LUCENE-5931
 Project: Lucene - Core
  Issue Type: Bug
  Components: core/index
Affects Versions: 4.6.1
Reporter: Vitaly Funstein
Priority: Critical
 Attachments: CommitReuseTest.java


 {{StandardDirectoryReader}} assumes that the segments from commit point have 
 deletes, when they may not, yet the original SegmentReader for the segment 
 that we are trying to reuse does. This is evident when running attached JUnit 
 test case with asserts enabled (default): 
 {noformat}
 java.lang.AssertionError
   at 
 org.apache.lucene.index.StandardDirectoryReader.open(StandardDirectoryReader.java:188)
   at 
 org.apache.lucene.index.StandardDirectoryReader.doOpenIfChanged(StandardDirectoryReader.java:326)
   at 
 org.apache.lucene.index.StandardDirectoryReader$2.doBody(StandardDirectoryReader.java:320)
   at 
 org.apache.lucene.index.SegmentInfos$FindSegmentsFile.run(SegmentInfos.java:702)
   at 
 org.apache.lucene.index.StandardDirectoryReader.doOpenFromCommit(StandardDirectoryReader.java:315)
   at 
 org.apache.lucene.index.StandardDirectoryReader.doOpenNoWriter(StandardDirectoryReader.java:311)
   at 
 org.apache.lucene.index.StandardDirectoryReader.doOpenIfChanged(StandardDirectoryReader.java:262)
   at 
 org.apache.lucene.index.DirectoryReader.openIfChanged(DirectoryReader.java:183)
 {noformat}
 or, if asserts are disabled then it falls through into NPE:
 {noformat}
 java.lang.NullPointerException
   at java.io.File.<init>(File.java:305)
   at 
 org.apache.lucene.store.NIOFSDirectory.openInput(NIOFSDirectory.java:80)
   at 
 org.apache.lucene.codecs.lucene40.BitVector.<init>(BitVector.java:327)
   at 
 org.apache.lucene.codecs.lucene40.Lucene40LiveDocsFormat.readLiveDocs(Lucene40LiveDocsFormat.java:90)
   at org.apache.lucene.index.SegmentReader.<init>(SegmentReader.java:131)
   at 
 org.apache.lucene.index.StandardDirectoryReader.open(StandardDirectoryReader.java:194)
   at 
 org.apache.lucene.index.StandardDirectoryReader.doOpenIfChanged(StandardDirectoryReader.java:326)
   at 
 org.apache.lucene.index.StandardDirectoryReader$2.doBody(StandardDirectoryReader.java:320)
   at 
 org.apache.lucene.index.SegmentInfos$FindSegmentsFile.run(SegmentInfos.java:702)
   at 
 org.apache.lucene.index.StandardDirectoryReader.doOpenFromCommit(StandardDirectoryReader.java:315)
   at 
 org.apache.lucene.index.StandardDirectoryReader.doOpenNoWriter(StandardDirectoryReader.java:311)
   at 
 org.apache.lucene.index.StandardDirectoryReader.doOpenIfChanged(StandardDirectoryReader.java:262)
   at 
 org.apache.lucene.index.DirectoryReader.openIfChanged(DirectoryReader.java:183)
 {noformat}






[jira] [Comment Edited] (LUCENE-5929) Standard highlighting doesn't work for ToParentBlockJoinQuery

2014-09-09 Thread Julie Tibshirani (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5929?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14127694#comment-14127694
 ] 

Julie Tibshirani edited comment on LUCENE-5929 at 9/10/14 1:27 AM:
---

I uploaded a patch that simply adds a check in WeightedSpanTermExtractor for 
ToParentBlockJoinQuery and ToChildBlockJoinQuery. I'm new to committing to 
Lucene --any suggestions would be much appreciated!


was (Author: jtibs):
I uploaded a patch that simply adds a check in WeightedSpanTermExtractor for 
ToParentBlockJoinQuery and ToChildBlockJoinQuery. I'm new to committing to 
Lucene -- is there anything I'm missing or should watch out for? 

 Standard highlighting doesn't work for ToParentBlockJoinQuery
 -

 Key: LUCENE-5929
 URL: https://issues.apache.org/jira/browse/LUCENE-5929
 Project: Lucene - Core
  Issue Type: Bug
  Components: modules/highlighter
Reporter: Julie Tibshirani
 Attachments: HighligherTest.patch, LUCENE-5929.patch


 Because WeightedSpanTermExtractor#extract doesn't check for 
 ToParentBlockJoinQuery, the Highlighter class fails to produce highlights for 
 this type of query.
 At first it may seem like there's no issue, because ToParentBlockJoinQuery 
 only returns parent documents, while the highlighting applies to children. 
 But if a client can directly supply the text from child documents (as 
 elasticsearch does if _source is enabled), then highlighting will 
 unexpectedly fail.
 A test case that triggers the bug is attached. The same issue exists for 
 ToChildBlockJoinQuery.
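The shape of the missing check can be sketched with stand-in types. The classes below are NOT the Lucene APIs; a real fix would test for ToParentBlockJoinQuery/ToChildBlockJoinQuery inside WeightedSpanTermExtractor#extract and recurse into the wrapped query.

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of the missing check: when term extraction meets a wrapper query it
// does not recognize, it should unwrap and recurse rather than produce no
// highlight terms. All classes here are stand-ins, not Lucene classes.
public class UnwrapSketch {
    interface Query {}

    static final class TermQuery implements Query {
        final String term;
        TermQuery(String term) { this.term = term; }
    }

    // Stands in for ToParentBlockJoinQuery / ToChildBlockJoinQuery.
    static final class WrapperQuery implements Query {
        final Query inner;
        WrapperQuery(Query inner) { this.inner = inner; }
    }

    static void extractTerms(Query q, List<String> out) {
        if (q instanceof TermQuery) {
            out.add(((TermQuery) q).term);
        } else if (q instanceof WrapperQuery) {
            // The added check: unwrap the join query and keep extracting.
            extractTerms(((WrapperQuery) q).inner, out);
        }
        // Unknown query types contribute no highlight terms.
    }

    public static void main(String[] args) {
        List<String> terms = new ArrayList<>();
        extractTerms(new WrapperQuery(new TermQuery("child_text")), terms);
        if (!terms.equals(List.of("child_text"))) throw new AssertionError(terms);
        System.out.println("ok");
    }
}
```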






[jira] [Updated] (LUCENE-5929) Standard highlighting doesn't work for ToParentBlockJoinQuery

2014-09-09 Thread Julie Tibshirani (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5929?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Julie Tibshirani updated LUCENE-5929:
-
Attachment: LUCENE-5929.patch

 Standard highlighting doesn't work for ToParentBlockJoinQuery
 -

 Key: LUCENE-5929
 URL: https://issues.apache.org/jira/browse/LUCENE-5929
 Project: Lucene - Core
  Issue Type: Bug
  Components: modules/highlighter
Reporter: Julie Tibshirani
 Attachments: HighligherTest.patch, LUCENE-5929.patch


 Because WeightedSpanTermExtractor#extract doesn't check for 
 ToParentBlockJoinQuery, the Highlighter class fails to produce highlights for 
 this type of query.
 At first it may seem like there's no issue, because ToParentBlockJoinQuery 
 only returns parent documents, while the highlighting applies to children. 
 But if a client can directly supply the text from child documents (as 
 elasticsearch does if _source is enabled), then highlighting will 
 unexpectedly fail.
 A test case that triggers the bug is attached. The same issue exists for 
 ToChildBlockJoinQuery.






[jira] [Resolved] (SOLR-5814) CoreContainer reports incorrect & missleading path for solrconfig.xml when there are loading problems

2014-09-09 Thread Hoss Man (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5814?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hoss Man resolved SOLR-5814.

   Resolution: Fixed
Fix Version/s: 4.11
   5.0

i could have sworn i resolved this a few days ago

 CoreContainer reports incorrect & missleading path for solrconfig.xml when 
 there are loading problems
 -

 Key: SOLR-5814
 URL: https://issues.apache.org/jira/browse/SOLR-5814
 Project: Solr
  Issue Type: Bug
Reporter: Hoss Man
Assignee: Hoss Man
 Fix For: 5.0, 4.11

 Attachments: SOLR-5814.patch, SOLR-5814.patch


 The error messages thrown by CoreContainer when there is a problem loading 
 solrconfig.xml refer to the wrong path (they leave out conf/).
 This misleads users (who may not notice the root cause) into thinking 
 they need to move their solrconfig.xml from 
 {{collection_name/conf/solrconfig.xml}} to {{collection_name/solrconfig.xml}}, 
 at which point they still get the same error message because Solr still can't 
 find the file in the conf dir






[jira] [Updated] (LUCENE-5928) WildcardQuery may has memory leak

2014-09-09 Thread Littlestar (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5928?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Littlestar updated LUCENE-5928:
---
Attachment: top_java.jpg

hi, 
   when using the default MMapDirectory with jvm heap=96G, the java process RES is over 
130g, not VIRT (VIRT=900G).

see attachment, thanks.


 WildcardQuery may has memory leak
 -

 Key: LUCENE-5928
 URL: https://issues.apache.org/jira/browse/LUCENE-5928
 Project: Lucene - Core
  Issue Type: Bug
  Components: core/search
Affects Versions: 4.9
 Environment: SSD 1.5T, RAM 256G 10*1
Reporter: Littlestar
Assignee: Uwe Schindler
 Attachments: top_java.jpg


 data 800G, records 15*1*1.
 one search thread.
 content:???
 content:*
 content:*1
 content:*2
 content:*3
 jvm heap=96G, but the jvm memory usage is over 130g?
 running more wildcard queries uses more memory
 Does Lucene search/index use a lot of DirectMemory or Native Memory?
 I use -XX:MaxDirectMemorySize=4g, it does nothing better.
 Thanks.






[jira] [Comment Edited] (LUCENE-5928) WildcardQuery may has memory leak

2014-09-09 Thread Littlestar (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5928?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14127909#comment-14127909
 ] 

Littlestar edited comment on LUCENE-5928 at 9/10/14 1:45 AM:
-

hi, 
   when using the default MMapDirectory with jvm heap=96G, the java process RES is over 
130g, not VIRT (VIRT=900G).

see attachment, thanks.
!top_java.jpg!


was (Author: cnstar9988):
hi, 
   when using the default MMapDirectory with jvm heap=96G, the java process RES is over 
130g, not VIRT (VIRT=900G).

see attachment, thanks.


 WildcardQuery may has memory leak
 -

 Key: LUCENE-5928
 URL: https://issues.apache.org/jira/browse/LUCENE-5928
 Project: Lucene - Core
  Issue Type: Bug
  Components: core/search
Affects Versions: 4.9
 Environment: SSD 1.5T, RAM 256G 10*1
Reporter: Littlestar
Assignee: Uwe Schindler
 Attachments: top_java.jpg


 data 800G, records 15*1*1.
 one search thread.
 content:???
 content:*
 content:*1
 content:*2
 content:*3
 jvm heap=96G, but the jvm memory usage is over 130g?
 running more wildcard queries uses more memory
 Does Lucene search/index use a lot of DirectMemory or Native Memory?
 I use -XX:MaxDirectMemorySize=4g, it does nothing better.
 Thanks.






[jira] [Created] (LUCENE-5932) SpanNearUnordered duplicate term counts itself as a match

2014-09-09 Thread Steve Davids (JIRA)
Steve Davids created LUCENE-5932:


 Summary: SpanNearUnordered duplicate term counts itself as a match
 Key: LUCENE-5932
 URL: https://issues.apache.org/jira/browse/LUCENE-5932
 Project: Lucene - Core
  Issue Type: Bug
Affects Versions: 4.9
Reporter: Steve Davids
 Fix For: 4.11


An unordered span near with the exact same term will count the first position 
as a match for the second term.

A document with values: w1 w2 w3 w4 w5

Query hit: spanNear([w1, w1], 1, false) -- SpanNearUnordered
Query miss: spanNear([w1, w1], 1, true) -- SpanNearOrdered (expected)
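The position-overlap issue can be illustrated with a self-contained sketch. Plain integer position lists stand in for Lucene's Spans here; this is not the Lucene implementation, and the slop arithmetic is a simplified assumption.

```java
import java.util.List;

// Sketch of the bug described above: for spanNear([w1, w1], unordered), a
// correct match needs two *distinct* positions of "w1"; counting position 0
// for both clauses is the reported misbehavior.
public class UnorderedNearSketch {
    /** True if the two clauses can match at distinct positions within the slop. */
    static boolean matchesDistinct(List<Integer> a, List<Integer> b, int slop) {
        for (int pa : a) {
            for (int pb : b) {
                if (pa != pb && Math.abs(pa - pb) - 1 <= slop) {
                    return true;
                }
            }
        }
        return false;
    }

    public static void main(String[] args) {
        // Document "w1 w2 w3 w4 w5": "w1" occurs only at position 0.
        List<Integer> w1Positions = List.of(0);
        // With the distinct-position requirement, spanNear([w1, w1], 1, false)
        // is a miss, matching the ordered case's expected behavior.
        if (matchesDistinct(w1Positions, w1Positions, 1)) {
            throw new AssertionError("reused the same position for both clauses");
        }
        System.out.println("ok");
    }
}
```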







[jira] [Updated] (LUCENE-5932) SpanNearUnordered duplicate term counts itself as a match

2014-09-09 Thread Steve Davids (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5932?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Davids updated LUCENE-5932:
-
Attachment: LUCENE-5932.patch

Added patch with test case demonstrating the issue.

 SpanNearUnordered duplicate term counts itself as a match
 -

 Key: LUCENE-5932
 URL: https://issues.apache.org/jira/browse/LUCENE-5932
 Project: Lucene - Core
  Issue Type: Bug
Affects Versions: 4.9
Reporter: Steve Davids
 Fix For: 4.11

 Attachments: LUCENE-5932.patch


 An unordered span near with the exact same term will count the first position 
 as a match for the second term.
 A document with values: w1 w2 w3 w4 w5
 Query hit: spanNear([w1, w1], 1, false) -- SpanNearUnordered
 Query miss: spanNear([w1, w1], 1, true) -- SpanNearOrdered (expected)






[jira] [Updated] (LUCENE-5928) WildcardQuery may has memory leak

2014-09-09 Thread Littlestar (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5928?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Littlestar updated LUCENE-5928:
---
Environment: SSD 1.5T, RAM 256G, records 15*1*1  (was: SSD 1.5T, 
RAM 256G 10*1)

 WildcardQuery may has memory leak
 -

 Key: LUCENE-5928
 URL: https://issues.apache.org/jira/browse/LUCENE-5928
 Project: Lucene - Core
  Issue Type: Bug
  Components: core/search
Affects Versions: 4.9
 Environment: SSD 1.5T, RAM 256G, records 15*1*1
Reporter: Littlestar
Assignee: Uwe Schindler
 Attachments: top_java.jpg


 data 800G, records 15*1*1.
 one search thread.
 content:???
 content:*
 content:*1
 content:*2
 content:*3
 jvm heap=96G, but the jvm memory usage is over 130g?
 running more wildcard queries uses more memory
 Does Lucene search/index use a lot of DirectMemory or Native Memory?
 I use -XX:MaxDirectMemorySize=4g, it does nothing better.
 Thanks.






[jira] [Comment Edited] (LUCENE-5929) Standard highlighting doesn't work for ToParentBlockJoinQuery

2014-09-09 Thread Julie Tibshirani (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5929?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14127694#comment-14127694
 ] 

Julie Tibshirani edited comment on LUCENE-5929 at 9/10/14 4:31 AM:
---

I uploaded a patch that simply adds a check in WeightedSpanTermExtractor for 
ToParentBlockJoinQuery and ToChildBlockJoinQuery. I'm new to committing to 
Lucene -- any suggestions would be much appreciated!


was (Author: jtibs):
I uploaded a patch that simply adds a check in WeightedSpanTermExtractor for 
ToParentBlockJoinQuery and ToChildBlockJoinQuery. I'm new to committing to 
Lucene --any suggestions would be much appreciated!

 Standard highlighting doesn't work for ToParentBlockJoinQuery
 -

 Key: LUCENE-5929
 URL: https://issues.apache.org/jira/browse/LUCENE-5929
 Project: Lucene - Core
  Issue Type: Bug
  Components: modules/highlighter
Reporter: Julie Tibshirani
 Attachments: HighligherTest.patch, LUCENE-5929.patch


 Because WeightedSpanTermExtractor#extract doesn't check for 
 ToParentBlockJoinQuery, the Highlighter class fails to produce highlights for 
 this type of query.
 At first it may seem like there's no issue, because ToParentBlockJoinQuery 
 only returns parent documents, while the highlighting applies to children. 
 But if a client can directly supply the text from child documents (as 
 elasticsearch does if _source is enabled), then highlighting will 
 unexpectedly fail.
 A test case that triggers the bug is attached. The same issue exists for 
 ToChildBlockJoinQuery.






[jira] [Updated] (LUCENE-5929) Standard highlighting doesn't work for ToParentBlockJoinQuery

2014-09-09 Thread Julie Tibshirani (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5929?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Julie Tibshirani updated LUCENE-5929:
-
Priority: Critical  (was: Major)

 Standard highlighting doesn't work for ToParentBlockJoinQuery
 -

 Key: LUCENE-5929
 URL: https://issues.apache.org/jira/browse/LUCENE-5929
 Project: Lucene - Core
  Issue Type: Bug
  Components: modules/highlighter
Reporter: Julie Tibshirani
Priority: Critical
 Attachments: HighligherTest.patch, LUCENE-5929.patch


 Because WeightedSpanTermExtractor#extract doesn't check for 
 ToParentBlockJoinQuery, the Highlighter class fails to produce highlights for 
 this type of query.
 At first it may seem like there's no issue, because ToParentBlockJoinQuery 
 only returns parent documents, while the highlighting applies to children. 
 But if a client can directly supply the text from child documents (as 
 elasticsearch does if _source is enabled), then highlighting will 
 unexpectedly fail.
 A test case that triggers the bug is attached. The same issue exists for 
 ToChildBlockJoinQuery.






[jira] [Created] (SOLR-6497) Solr 4.10 returning SolrDocument instances with empty map when dynamic fields are requested

2014-09-09 Thread Constantin Mitocaru (JIRA)
Constantin Mitocaru created SOLR-6497:
-

 Summary: Solr 4.10 returning SolrDocument instances with empty map 
when dynamic fields are requested
 Key: SOLR-6497
 URL: https://issues.apache.org/jira/browse/SOLR-6497
 Project: Solr
  Issue Type: Bug
  Components: clients - java
Affects Versions: 4.10
 Environment: Windows 7, JDK8u11
Reporter: Constantin Mitocaru




I recently upgraded from Solr 4.9 to 4.10. At some point in the code I want to 
return the values for some dynamic fields. If I do this:
{code}
SolrQuery query = new SolrQuery();
query.addField("code");
query.addField("name");
{code}
it returns the right values in the fields {{code}} and {{name}}.

If I do this:
{code}
SolrQuery query = new SolrQuery();
query.addField("code");
query.addField("name");
query.addField("*_prop");
{code}
all the fields, including {{code}} and {{name}}, are {{null}}.








[jira] [Updated] (SOLR-6497) Solr 4.10 returning SolrDocument instances with empty map when dynamic fields are requested

2014-09-09 Thread Constantin Mitocaru (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6497?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Constantin Mitocaru updated SOLR-6497:
--
Priority: Critical  (was: Major)

 Solr 4.10 returning SolrDocument instances with empty map when dynamic fields 
 are requested
 ---

 Key: SOLR-6497
 URL: https://issues.apache.org/jira/browse/SOLR-6497
 Project: Solr
  Issue Type: Bug
  Components: clients - java
Affects Versions: 4.10
 Environment: Windows 7, JDK8u11
Reporter: Constantin Mitocaru
Priority: Critical
  Labels: patch

 I recently upgraded from Solr 4.9 to 4.10. At some point in the code I want 
 to return the values for some dynamic fields. If I do this:
 {code}
 SolrQuery query = new SolrQuery();
 query.addField("code");
 query.addField("name");
 {code}
 it returns the right values in the fields {{code}} and {{name}}.
 If I do this:
 {code}
 SolrQuery query = new SolrQuery();
 query.addField("code");
 query.addField("name");
 query.addField("*_prop");
 {code}
 all the fields, including {{code}} and {{name}}, are {{null}}.






[JENKINS] Lucene-Solr-trunk-MacOSX (64bit/jdk1.8.0) - Build # 1822 - Failure!

2014-09-09 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-MacOSX/1822/
Java: 64bit/jdk1.8.0 -XX:+UseCompressedOops -XX:+UseParallelGC

1 tests failed.
REGRESSION:  org.apache.solr.cloud.ShardSplitTest.testDistribSearch

Error Message:
Timeout occured while waiting response from server at: http://127.0.0.1:50306/_e

Stack Trace:
org.apache.solr.client.solrj.SolrServerException: Timeout occured while waiting 
response from server at: http://127.0.0.1:50306/_e
at 
__randomizedtesting.SeedInfo.seed([D67BE425CEF4B1AC:579D6A3DB9ABD190]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrServer.executeMethod(HttpSolrServer.java:560)
at 
org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:210)
at 
org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:206)
at 
org.apache.solr.cloud.ShardSplitTest.splitShard(ShardSplitTest.java:532)
at 
org.apache.solr.cloud.ShardSplitTest.incompleteOrOverlappingCustomRangeTest(ShardSplitTest.java:151)
at org.apache.solr.cloud.ShardSplitTest.doTest(ShardSplitTest.java:103)
at 
org.apache.solr.BaseDistributedSearchTestCase.testDistribSearch(BaseDistributedSearchTestCase.java:869)
at sun.reflect.GeneratedMethodAccessor42.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:483)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:43)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 

[jira] [Commented] (LUCENE-5930) Change intellij setup to have 1 module per lucene module instead of 3

2014-09-09 Thread Steve Rowe (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5930?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14128111#comment-14128111
 ] 

Steve Rowe commented on LUCENE-5930:


LUCENE-4367 is where I added the extra test modules for lucene-core, 
lucene-codecs, solr-core and solr-solrj, because any module that uses the 
lucene or solr test-framework module makes (or made, anyway) IntelliJ think 
there was a circular dependency.  See the images I put up on the Maven version 
of that issue (LUCENE-4365) that show the situation 
[before|https://issues.apache.org/jira/secure/attachment/12543942/lucene.solr.dependency.cycles.png.jpg]and
 
[after|https://issues.apache.org/jira/secure/attachment/12543981/lucene.solr.cyclic.dependencies.removed.png].

Maybe IntelliJ is smarter now (IntelliJ v9 or v10 when LUCENE-4367 was put in 
place; v13 now and v14 EAP is available) about the difference between test and 
compile scope dependencies?  It's worth trying.

IIRC, some modules have (test-)resource-only fake modules in order to copy over 
resources or something like that.
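
For illustration, a minimal sketch of what a consolidated per-module {{.iml}} might 
look like, with all four root types marked on one module as the issue proposes 
(paths are hypothetical; attribute names are the ones recent IntelliJ versions 
write, not verified against the Lucene build):

```xml
<module type="JAVA_MODULE" version="4">
  <component name="NewModuleRootManager">
    <content url="file://$MODULE_DIR$">
      <!-- main sources and resources -->
      <sourceFolder url="file://$MODULE_DIR$/src/java" isTestSource="false" />
      <sourceFolder url="file://$MODULE_DIR$/src/resources" type="java-resource" />
      <!-- test sources and test resources in the same module -->
      <sourceFolder url="file://$MODULE_DIR$/src/test" isTestSource="true" />
      <sourceFolder url="file://$MODULE_DIR$/src/test-files" type="java-test-resource" />
    </content>
    <orderEntry type="sourceFolder" forTests="false" />
  </component>
</module>
```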

bq. I also think we should try grouping modules? That would make it a lot less 
cumbersome to navigate.
http://www.jetbrains.com/idea/webhelp/grouping-modules.html

+1


 Change intellij setup to have 1 module per lucene module instead of 3
 -

 Key: LUCENE-5930
 URL: https://issues.apache.org/jira/browse/LUCENE-5930
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Ryan Ernst

 The number of intellij modules is getting out of hand.  Intellij supports 
 marking subdirectories within a module as 
 source/resources/tests/test-resources.  I think we should consolidate these 
 modules so we have just one per lucene module.  Is there some reason I'm 
 missing that this was not done in the first place?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (LUCENE-5930) Change intellij setup to have 1 module per lucene module instead of 3

2014-09-09 Thread Steve Rowe (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5930?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14128111#comment-14128111
 ] 

Steve Rowe edited comment on LUCENE-5930 at 9/10/14 5:56 AM:
-

LUCENE-4367 is where I added the extra test modules for lucene-core, 
lucene-codecs, solr-core and solr-solrj, because any module that uses the 
lucene or solr test-framework module makes (or made, anyway) IntelliJ think 
there was a circular dependency.  See the images I put up on the Maven version 
of that issue (LUCENE-4365) that show the situation 
[before|https://issues.apache.org/jira/secure/attachment/12543942/lucene.solr.dependency.cycles.png.jpg]
 and 
[after|https://issues.apache.org/jira/secure/attachment/12543981/lucene.solr.cyclic.dependencies.removed.png].

Maybe IntelliJ is smarter now (it was at v9 or v10 when LUCENE-4367 was put in 
place; v13 is current and a v14 EAP is available) about the difference between 
test and compile scope dependencies?  It's worth trying.

IIRC, some modules have (test-)resource-only fake modules in order to copy over 
resources or something like that.

bq. I also think we should try grouping modules? That would make it a lot less 
cumbersome to navigate.
http://www.jetbrains.com/idea/webhelp/grouping-modules.html

+1



was (Author: steve_rowe):
LUCENE-4367 is where I added the extra test modules for lucene-core, 
lucene-codecs, solr-core and solr-solrj, because any module that uses the 
lucene or solr test-framework module makes (or made, anyway) IntelliJ think 
there was a circular dependency.  See the images I put up on the Maven version 
of that issue (LUCENE-4365) that show the situation 
[before|https://issues.apache.org/jira/secure/attachment/12543942/lucene.solr.dependency.cycles.png.jpg]and
 
[after|https://issues.apache.org/jira/secure/attachment/12543981/lucene.solr.cyclic.dependencies.removed.png].

Maybe IntelliJ is smarter now (IntelliJ v9 or v10 when LUCENE-4367 was put in 
place; v13 now and v14 EAP is available) about the difference between test and 
compile scope dependencies?  It's worth trying.

IIRC, some modules have (test-)resource-only fake modules in order to copy over 
resources or something like that.

bq. I also think we should try grouping modules? That would make it a lot less 
cumbersome to navigate.
http://www.jetbrains.com/idea/webhelp/grouping-modules.html

+1


 Change intellij setup to have 1 module per lucene module instead of 3
 -

 Key: LUCENE-5930
 URL: https://issues.apache.org/jira/browse/LUCENE-5930
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Ryan Ernst

 The number of intellij modules is getting out of hand.  Intellij supports 
 marking subdirectories within a module as 
 source/resources/tests/test-resources.  I think we should consolidate these 
 modules so we have just one per lucene module.  Is there some reason I'm 
 missing that this was not done in the first place?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org