[jira] [Commented] (SOLR-4613) Move checkDistributed to SearchHandler

2013-03-19 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4613?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13606096#comment-13606096
 ] 

Mark Miller commented on SOLR-4613:
---

I think part of the motivation for this was to allow a single pluggable point 
for controlling shard selection.

Been a while since I've looked closely, but this API has always looked like it 
could use some refactoring.

Hope to be able to take a closer look later.

 Move checkDistributed to SearchHandler
 --

 Key: SOLR-4613
 URL: https://issues.apache.org/jira/browse/SOLR-4613
 Project: Solr
  Issue Type: Improvement
  Components: search
Reporter: Ryan Ernst
 Attachments: SOLR-4613.patch


 Currently a ShardHandler is created for a request even for non-distributed 
 requests.  The checkDistributed function on ShardHandler keeps no special 
 state in the ShardHandler.  Historically it used to be in QueryComponent, but 
 it seems like SearchHandler would be the right place.
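 For illustration, the decision described above amounts to a parameter-only 
 check that needs no ShardHandler state, which is why it could live in the 
 handler. A minimal, self-contained sketch of that idea (the names below are 
 hypothetical stand-ins, not Solr's actual API):

```java
import java.util.Map;

public class DistributedCheckSketch {
    // Hypothetical stand-in for the check SOLR-4613 proposes moving into
    // SearchHandler: it inspects request parameters only, so no ShardHandler
    // needs to be created for non-distributed requests.
    static boolean isDistributed(Map<String, String> params) {
        String shards = params.get("shards");
        // Treat a request as distributed only when an explicit, non-empty
        // shard list is supplied.
        return shards != null && !shards.isEmpty();
    }

    public static void main(String[] args) {
        System.out.println(isDistributed(Map.of("q", "*:*")));                        // false
        System.out.println(isDistributed(Map.of("shards", "host1/solr,host2/solr"))); // true
    }
}
```

 A single check like this in the handler would also give the pluggable 
 shard-selection point mentioned in the comment above.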

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-4597) CachingDirectoryFactory#remove should not attempt to empty/remove the index right away but flag for removal after close.

2013-03-19 Thread Commit Tag Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4597?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13606099#comment-13606099
 ] 

Commit Tag Bot commented on SOLR-4597:
--

[branch_4x commit] Mark Robert Miller
http://svn.apache.org/viewvc?view=revision&revision=1458143

SOLR-4597: Move CHANGES entry.
SOLR-4598: Move CHANGES entry.
SOLR-4599: Move CHANGES entry.


 CachingDirectoryFactory#remove should not attempt to empty/remove the index 
 right away but flag for removal after close.
 

 Key: SOLR-4597
 URL: https://issues.apache.org/jira/browse/SOLR-4597
 Project: Solr
  Issue Type: Bug
Reporter: Mark Miller
Assignee: Mark Miller
 Fix For: 4.3, 5.0, 4.2.1




--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-4598) The Core Admin unload command's option 'deleteDataDir', should use the DirectoryFactory API to remove the data dir.

2013-03-19 Thread Commit Tag Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4598?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13606100#comment-13606100
 ] 

Commit Tag Bot commented on SOLR-4598:
--

[branch_4x commit] Mark Robert Miller
http://svn.apache.org/viewvc?view=revision&revision=1458143

SOLR-4597: Move CHANGES entry.
SOLR-4598: Move CHANGES entry.
SOLR-4599: Move CHANGES entry.


 The Core Admin unload command's option 'deleteDataDir', should use the 
 DirectoryFactory API to remove the data dir.
 ---

 Key: SOLR-4598
 URL: https://issues.apache.org/jira/browse/SOLR-4598
 Project: Solr
  Issue Type: Bug
Reporter: Mark Miller
Assignee: Mark Miller
Priority: Minor
 Fix For: 4.3, 5.0, 4.2.1




--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-4597) CachingDirectoryFactory#remove should not attempt to empty/remove the index right away but flag for removal after close.

2013-03-19 Thread Commit Tag Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4597?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13606102#comment-13606102
 ] 

Commit Tag Bot commented on SOLR-4597:
--

[trunk commit] Mark Robert Miller
http://svn.apache.org/viewvc?view=revision&revision=1458144

SOLR-4597: Move CHANGES entry.
SOLR-4598: Move CHANGES entry.
SOLR-4599: Move CHANGES entry.


 CachingDirectoryFactory#remove should not attempt to empty/remove the index 
 right away but flag for removal after close.
 

 Key: SOLR-4597
 URL: https://issues.apache.org/jira/browse/SOLR-4597
 Project: Solr
  Issue Type: Bug
Reporter: Mark Miller
Assignee: Mark Miller
 Fix For: 4.3, 5.0, 4.2.1




--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-4599) CachingDirectoryFactory calls close(Directory) on forceNew if the Directory has a refCnt of 0, but it should call closeDirectory(CacheValueValue)

2013-03-19 Thread Commit Tag Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4599?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13606101#comment-13606101
 ] 

Commit Tag Bot commented on SOLR-4599:
--

[branch_4x commit] Mark Robert Miller
http://svn.apache.org/viewvc?view=revision&revision=1458143

SOLR-4597: Move CHANGES entry.
SOLR-4598: Move CHANGES entry.
SOLR-4599: Move CHANGES entry.


 CachingDirectoryFactory calls close(Directory) on forceNew if the Directory 
 has a refCnt of 0, but it should call closeDirectory(CacheValueValue) 
 --

 Key: SOLR-4599
 URL: https://issues.apache.org/jira/browse/SOLR-4599
 Project: Solr
  Issue Type: Bug
Reporter: Mark Miller
Assignee: Mark Miller
Priority: Minor
 Fix For: 4.3, 5.0, 4.2.1




--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-4599) CachingDirectoryFactory calls close(Directory) on forceNew if the Directory has a refCnt of 0, but it should call closeDirectory(CacheValueValue)

2013-03-19 Thread Commit Tag Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4599?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13606104#comment-13606104
 ] 

Commit Tag Bot commented on SOLR-4599:
--

[trunk commit] Mark Robert Miller
http://svn.apache.org/viewvc?view=revision&revision=1458144

SOLR-4597: Move CHANGES entry.
SOLR-4598: Move CHANGES entry.
SOLR-4599: Move CHANGES entry.


 CachingDirectoryFactory calls close(Directory) on forceNew if the Directory 
 has a refCnt of 0, but it should call closeDirectory(CacheValueValue) 
 --

 Key: SOLR-4599
 URL: https://issues.apache.org/jira/browse/SOLR-4599
 Project: Solr
  Issue Type: Bug
Reporter: Mark Miller
Assignee: Mark Miller
Priority: Minor
 Fix For: 4.3, 5.0, 4.2.1




--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-4598) The Core Admin unload command's option 'deleteDataDir', should use the DirectoryFactory API to remove the data dir.

2013-03-19 Thread Commit Tag Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4598?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13606103#comment-13606103
 ] 

Commit Tag Bot commented on SOLR-4598:
--

[trunk commit] Mark Robert Miller
http://svn.apache.org/viewvc?view=revision&revision=1458144

SOLR-4597: Move CHANGES entry.
SOLR-4598: Move CHANGES entry.
SOLR-4599: Move CHANGES entry.


 The Core Admin unload command's option 'deleteDataDir', should use the 
 DirectoryFactory API to remove the data dir.
 ---

 Key: SOLR-4598
 URL: https://issues.apache.org/jira/browse/SOLR-4598
 Project: Solr
  Issue Type: Bug
Reporter: Mark Miller
Assignee: Mark Miller
Priority: Minor
 Fix For: 4.3, 5.0, 4.2.1




--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: Welcome Shawn Heisey as Lucene/Solr committer

2013-03-19 Thread Koji Sekiguchi

Welcome Shawn!

(13/03/19 13:31), Steve Rowe wrote:

I'm pleased to announce that Shawn Heisey has accepted the PMC's invitation to 
become a committer.

Shawn, it's tradition that you introduce yourself with a brief bio.

Once your account has been created - could take a few days - you'll be able to add yourself to the 
committers section of the Who We Are page on the website: 
http://lucene.apache.org/whoweare.html (use the ASF CMS bookmarklet at the bottom of the 
page here: https://cms.apache.org/#bookmark - more info here 
http://www.apache.org/dev/cms.html).

Check out the ASF dev page - lots of useful links: http://www.apache.org/dev/.

Congratulations and welcome!

Steve


-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org





--
http://soleami.com/blog/lucene-4-is-super-convenient-for-developing-nlp-tools.html

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-4605) Rollback does not work correctly.

2013-03-19 Thread Mark Miller (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-4605?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Miller updated SOLR-4605:
--

Fix Version/s: 4.2.1

 Rollback does not work correctly.
 -

 Key: SOLR-4605
 URL: https://issues.apache.org/jira/browse/SOLR-4605
 Project: Solr
  Issue Type: Bug
Affects Versions: 4.1, 4.2
 Environment: Ubuntu 12.04.2 LTS
Reporter: Mark S
Assignee: Mark Miller
  Labels: solrj
 Fix For: 4.3, 5.0, 4.2.1

 Attachments: SOLR-4605.patch


 http://lucene.472066.n3.nabble.com/Solr-4-1-4-2-SolrException-Error-opening-new-searcher-td4046543.html
 I wrote a simple test that reproduces a stack trace very similar to the one 
 in the above issue; the only differences are a few line numbers, due to Solr 
 4.1 vs. Solr 4.2.
 *Source of Exception*
 * 
 [http://svn.apache.org/viewvc/lucene/dev/tags/lucene_solr_4_1_0/solr/core/src/java/org/apache/solr/core/SolrCore.java?view=markup]
 * 
 [http://svn.apache.org/viewvc/lucene/dev/tags/lucene_solr_4_2_0/solr/core/src/java/org/apache/solr/core/SolrCore.java?view=markup]
 {code:java} 
 catch (Exception e) {
 throw new SolrException(SolrException.ErrorCode.SERVER_ERROR, "Error 
 opening new searcher", e);
 }
 {code}
 Any ideas as to why the following happens?  Any help would be greatly 
 appreciated.
 * *The test case:*
 {code:java}
 @Test
 public void documentCommitAndRollbackTest() throws Exception {
 // Fix:  SolrException: Error opening new searcher
 server.rollback();
 server.commit();
 }
 {code}
 * *The similar stack trace (which is repeated twice):*
 {quote}
 Mar 15, 2013 3:48:09 PM org.apache.solr.common.SolrException log
 SEVERE: org.apache.solr.common.SolrException: Error opening new searcher
 at org.apache.solr.core.SolrCore.openNewSearcher(SolrCore.java:1415)
 at org.apache.solr.core.SolrCore.getSearcher(SolrCore.java:1527)
 at org.apache.solr.core.SolrCore.getSearcher(SolrCore.java:1304)
 at 
 org.apache.solr.update.DirectUpdateHandler2.commit(DirectUpdateHandler2.java:570)
 at 
 org.apache.solr.update.processor.RunUpdateProcessor.processCommit(RunUpdateProcessorFactory.java:95)
 at 
 org.apache.solr.update.processor.UpdateRequestProcessor.processCommit(UpdateRequestProcessor.java:64)
 at 
 org.apache.solr.update.processor.DistributedUpdateProcessor.processCommit(DistributedUpdateProcessor.java:1055)
 at 
 org.apache.solr.update.processor.LogUpdateProcessor.processCommit(LogUpdateProcessorFactory.java:157)
 at 
 org.apache.solr.handler.RequestHandlerUtils.handleCommit(RequestHandlerUtils.java:69)
 at 
 org.apache.solr.handler.ContentStreamHandlerBase.handleRequestBody(ContentStreamHandlerBase.java:68)
 at 
 org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:135)
 at org.apache.solr.core.SolrCore.execute(SolrCore.java:1797)
 at 
 org.apache.solr.servlet.SolrDispatchFilter.execute(SolrDispatchFilter.java:637)
 at 
 org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:343)
 at 
 org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:141)
 at 
 org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:243)
 at 
 org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:210)
 at 
 org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:224)
 at 
 org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:169)
 at 
 org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:168)
 at 
 org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:98)
 at 
 org.apache.catalina.valves.AccessLogValve.invoke(AccessLogValve.java:927)
 at 
 org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:118)
 at 
 org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:407)
 at 
 org.apache.coyote.http11.AbstractHttp11Processor.process(AbstractHttp11Processor.java:987)
 at 
 org.apache.coyote.AbstractProtocol$AbstractConnectionHandler.process(AbstractProtocol.java:579)
 at 
 org.apache.tomcat.util.net.JIoEndpoint$SocketProcessor.run(JIoEndpoint.java:307)
 at 
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
 at java.lang.Thread.run(Thread.java:722)
 Caused by: org.apache.lucene.store.AlreadyClosedException: this IndexWriter 
 is closed
 at 
 org.apache.lucene.index.IndexWriter.ensureOpen(IndexWriter.java:583)
 at 
 

[jira] [Commented] (SOLR-4609) The Collections API should only send the reload command to ACTIVE cores.

2013-03-19 Thread Commit Tag Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4609?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13606128#comment-13606128
 ] 

Commit Tag Bot commented on SOLR-4609:
--

[branch_4x commit] Mark Robert Miller
http://svn.apache.org/viewvc?view=revision&revision=1458156

SOLR-4604: Move CHANGES entry.
SOLR-4605: Move CHANGES entry.
SOLR-4609: Move CHANGES entry.


 The Collections API should only send the reload command to ACTIVE cores.
 

 Key: SOLR-4609
 URL: https://issues.apache.org/jira/browse/SOLR-4609
 Project: Solr
  Issue Type: Bug
  Components: SolrCloud
Reporter: Mark Miller
Assignee: Mark Miller
Priority: Minor
 Fix For: 4.3, 5.0, 4.2.1


 We don't want to accidentally send the reload command to a node that is 
 recovering, so it seems best to limit this to nodes that are currently seen 
 as ACTIVE.
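 The filtering step described above can be sketched independently of Solr's 
 cluster-state classes; the Replica record below is a hypothetical stand-in 
 for what the real code would read from ZooKeeper, not Solr's actual API:

```java
import java.util.List;

public class ActiveReplicaFilter {
    // Hypothetical stand-in for a cluster-state entry; the real code would
    // read each replica's state from the cluster state in ZooKeeper.
    record Replica(String coreUrl, String state) {}

    // Keep only replicas currently seen as "active" before issuing a reload,
    // so recovering nodes are never sent the command.
    static List<String> reloadTargets(List<Replica> replicas) {
        return replicas.stream()
                .filter(r -> "active".equals(r.state()))
                .map(Replica::coreUrl)
                .toList();
    }

    public static void main(String[] args) {
        List<Replica> replicas = List.of(
                new Replica("http://host1/solr/core1", "active"),
                new Replica("http://host2/solr/core1", "recovering"));
        System.out.println(reloadTargets(replicas)); // only the active core's URL
    }
}
```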

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-4604) UpdateLog#init is over-called on SolrCore#reload

2013-03-19 Thread Commit Tag Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4604?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13606126#comment-13606126
 ] 

Commit Tag Bot commented on SOLR-4604:
--

[branch_4x commit] Mark Robert Miller
http://svn.apache.org/viewvc?view=revision&revision=1458156

SOLR-4604: Move CHANGES entry.
SOLR-4605: Move CHANGES entry.
SOLR-4609: Move CHANGES entry.


 UpdateLog#init is over-called on SolrCore#reload
 

 Key: SOLR-4604
 URL: https://issues.apache.org/jira/browse/SOLR-4604
 Project: Solr
  Issue Type: Bug
Reporter: Mark Miller
Assignee: Mark Miller
 Fix For: 4.3, 5.0, 4.2.1


 I think this is why I have occasionally not been able to remove tlogs on 
 Windows.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-4605) Rollback does not work correctly.

2013-03-19 Thread Commit Tag Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4605?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13606127#comment-13606127
 ] 

Commit Tag Bot commented on SOLR-4605:
--

[branch_4x commit] Mark Robert Miller
http://svn.apache.org/viewvc?view=revision&revision=1458156

SOLR-4604: Move CHANGES entry.
SOLR-4605: Move CHANGES entry.
SOLR-4609: Move CHANGES entry.


 Rollback does not work correctly.
 -

 Key: SOLR-4605
 URL: https://issues.apache.org/jira/browse/SOLR-4605
 Project: Solr
  Issue Type: Bug
Affects Versions: 4.1, 4.2
 Environment: Ubuntu 12.04.2 LTS
Reporter: Mark S
Assignee: Mark Miller
  Labels: solrj
 Fix For: 4.3, 5.0, 4.2.1

 Attachments: SOLR-4605.patch


 http://lucene.472066.n3.nabble.com/Solr-4-1-4-2-SolrException-Error-opening-new-searcher-td4046543.html
 I wrote a simple test that reproduces a stack trace very similar to the one 
 in the above issue; the only differences are a few line numbers, due to Solr 
 4.1 vs. Solr 4.2.
 *Source of Exception*
 * 
 [http://svn.apache.org/viewvc/lucene/dev/tags/lucene_solr_4_1_0/solr/core/src/java/org/apache/solr/core/SolrCore.java?view=markup]
 * 
 [http://svn.apache.org/viewvc/lucene/dev/tags/lucene_solr_4_2_0/solr/core/src/java/org/apache/solr/core/SolrCore.java?view=markup]
 {code:java} 
 catch (Exception e) {
 throw new SolrException(SolrException.ErrorCode.SERVER_ERROR, "Error 
 opening new searcher", e);
 }
 {code}
 Any ideas as to why the following happens?  Any help would be greatly 
 appreciated.
 * *The test case:*
 {code:java}
 @Test
 public void documentCommitAndRollbackTest() throws Exception {
 // Fix:  SolrException: Error opening new searcher
 server.rollback();
 server.commit();
 }
 {code}
 * *The similar stack trace (which is repeated twice):*
 {quote}
 Mar 15, 2013 3:48:09 PM org.apache.solr.common.SolrException log
 SEVERE: org.apache.solr.common.SolrException: Error opening new searcher
 at org.apache.solr.core.SolrCore.openNewSearcher(SolrCore.java:1415)
 at org.apache.solr.core.SolrCore.getSearcher(SolrCore.java:1527)
 at org.apache.solr.core.SolrCore.getSearcher(SolrCore.java:1304)
 at 
 org.apache.solr.update.DirectUpdateHandler2.commit(DirectUpdateHandler2.java:570)
 at 
 org.apache.solr.update.processor.RunUpdateProcessor.processCommit(RunUpdateProcessorFactory.java:95)
 at 
 org.apache.solr.update.processor.UpdateRequestProcessor.processCommit(UpdateRequestProcessor.java:64)
 at 
 org.apache.solr.update.processor.DistributedUpdateProcessor.processCommit(DistributedUpdateProcessor.java:1055)
 at 
 org.apache.solr.update.processor.LogUpdateProcessor.processCommit(LogUpdateProcessorFactory.java:157)
 at 
 org.apache.solr.handler.RequestHandlerUtils.handleCommit(RequestHandlerUtils.java:69)
 at 
 org.apache.solr.handler.ContentStreamHandlerBase.handleRequestBody(ContentStreamHandlerBase.java:68)
 at 
 org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:135)
 at org.apache.solr.core.SolrCore.execute(SolrCore.java:1797)
 at 
 org.apache.solr.servlet.SolrDispatchFilter.execute(SolrDispatchFilter.java:637)
 at 
 org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:343)
 at 
 org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:141)
 at 
 org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:243)
 at 
 org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:210)
 at 
 org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:224)
 at 
 org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:169)
 at 
 org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:168)
 at 
 org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:98)
 at 
 org.apache.catalina.valves.AccessLogValve.invoke(AccessLogValve.java:927)
 at 
 org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:118)
 at 
 org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:407)
 at 
 org.apache.coyote.http11.AbstractHttp11Processor.process(AbstractHttp11Processor.java:987)
 at 
 org.apache.coyote.AbstractProtocol$AbstractConnectionHandler.process(AbstractProtocol.java:579)
 at 
 org.apache.tomcat.util.net.JIoEndpoint$SocketProcessor.run(JIoEndpoint.java:307)
 at 
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
  

[jira] [Commented] (SOLR-4601) A Collection that is only partially created and then deleted will leave pre allocated shard information in ZooKeeper.

2013-03-19 Thread Commit Tag Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4601?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13606129#comment-13606129
 ] 

Commit Tag Bot commented on SOLR-4601:
--

[branch_4x commit] Mark Robert Miller
http://svn.apache.org/viewvc?view=revision&revision=1458151

SOLR-4601: Move CHANGES entry.


 A Collection that is only partially created and then deleted will leave pre 
 allocated shard information in ZooKeeper.
 -

 Key: SOLR-4601
 URL: https://issues.apache.org/jira/browse/SOLR-4601
 Project: Solr
  Issue Type: Bug
  Components: SolrCloud
Reporter: Mark Miller
Assignee: Mark Miller
 Fix For: 4.3, 5.0, 4.2.1


 This means you can't create the collection again, as it will appear to 
 already exist.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-4602) ZkController#unregister should cancel its election participation before asking the Overseer to delete the SolrCore information.

2013-03-19 Thread Commit Tag Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4602?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13606130#comment-13606130
 ] 

Commit Tag Bot commented on SOLR-4602:
--

[branch_4x commit] Mark Robert Miller
http://svn.apache.org/viewvc?view=revision&revision=1458149

SOLR-4602: Move CHANGES entry.


 ZkController#unregister should cancel its election participation before 
 asking the Overseer to delete the SolrCore information.
 

 Key: SOLR-4602
 URL: https://issues.apache.org/jira/browse/SOLR-4602
 Project: Solr
  Issue Type: Bug
  Components: SolrCloud
Reporter: Mark Miller
Assignee: Mark Miller
Priority: Minor
 Fix For: 4.3, 5.0, 4.2.1


 Otherwise, the leader election is likely to publish state updates that race 
 with the removal of the SolrCore from the clusterstate.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-4604) UpdateLog#init is over-called on SolrCore#reload

2013-03-19 Thread Commit Tag Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4604?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13606131#comment-13606131
 ] 

Commit Tag Bot commented on SOLR-4604:
--

[trunk commit] Mark Robert Miller
http://svn.apache.org/viewvc?view=revision&revision=1458155

SOLR-4604: Move CHANGES entry.
SOLR-4605: Move CHANGES entry.
SOLR-4609: Move CHANGES entry.


 UpdateLog#init is over-called on SolrCore#reload
 

 Key: SOLR-4604
 URL: https://issues.apache.org/jira/browse/SOLR-4604
 Project: Solr
  Issue Type: Bug
Reporter: Mark Miller
Assignee: Mark Miller
 Fix For: 4.3, 5.0, 4.2.1


 I think this is why I have occasionally not been able to remove tlogs on 
 Windows.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-4605) Rollback does not work correctly.

2013-03-19 Thread Commit Tag Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4605?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13606132#comment-13606132
 ] 

Commit Tag Bot commented on SOLR-4605:
--

[trunk commit] Mark Robert Miller
http://svn.apache.org/viewvc?view=revision&revision=1458155

SOLR-4604: Move CHANGES entry.
SOLR-4605: Move CHANGES entry.
SOLR-4609: Move CHANGES entry.


 Rollback does not work correctly.
 -

 Key: SOLR-4605
 URL: https://issues.apache.org/jira/browse/SOLR-4605
 Project: Solr
  Issue Type: Bug
Affects Versions: 4.1, 4.2
 Environment: Ubuntu 12.04.2 LTS
Reporter: Mark S
Assignee: Mark Miller
  Labels: solrj
 Fix For: 4.3, 5.0, 4.2.1

 Attachments: SOLR-4605.patch


 http://lucene.472066.n3.nabble.com/Solr-4-1-4-2-SolrException-Error-opening-new-searcher-td4046543.html
 I wrote a simple test that reproduces a stack trace very similar to the one 
 in the above issue; the only differences are a few line numbers, due to Solr 
 4.1 vs. Solr 4.2.
 *Source of Exception*
 * 
 [http://svn.apache.org/viewvc/lucene/dev/tags/lucene_solr_4_1_0/solr/core/src/java/org/apache/solr/core/SolrCore.java?view=markup]
 * 
 [http://svn.apache.org/viewvc/lucene/dev/tags/lucene_solr_4_2_0/solr/core/src/java/org/apache/solr/core/SolrCore.java?view=markup]
 {code:java} 
 catch (Exception e) {
 throw new SolrException(SolrException.ErrorCode.SERVER_ERROR, "Error 
 opening new searcher", e);
 }
 {code}
 Any ideas as to why the following happens?  Any help would be greatly 
 appreciated.
 * *The test case:*
 {code:java}
 @Test
 public void documentCommitAndRollbackTest() throws Exception {
 // Fix:  SolrException: Error opening new searcher
 server.rollback();
 server.commit();
 }
 {code}
 * *The similar stack trace (which is repeated twice):*
 {quote}
 Mar 15, 2013 3:48:09 PM org.apache.solr.common.SolrException log
 SEVERE: org.apache.solr.common.SolrException: Error opening new searcher
 at org.apache.solr.core.SolrCore.openNewSearcher(SolrCore.java:1415)
 at org.apache.solr.core.SolrCore.getSearcher(SolrCore.java:1527)
 at org.apache.solr.core.SolrCore.getSearcher(SolrCore.java:1304)
 at 
 org.apache.solr.update.DirectUpdateHandler2.commit(DirectUpdateHandler2.java:570)
 at 
 org.apache.solr.update.processor.RunUpdateProcessor.processCommit(RunUpdateProcessorFactory.java:95)
 at 
 org.apache.solr.update.processor.UpdateRequestProcessor.processCommit(UpdateRequestProcessor.java:64)
 at 
 org.apache.solr.update.processor.DistributedUpdateProcessor.processCommit(DistributedUpdateProcessor.java:1055)
 at 
 org.apache.solr.update.processor.LogUpdateProcessor.processCommit(LogUpdateProcessorFactory.java:157)
 at 
 org.apache.solr.handler.RequestHandlerUtils.handleCommit(RequestHandlerUtils.java:69)
 at 
 org.apache.solr.handler.ContentStreamHandlerBase.handleRequestBody(ContentStreamHandlerBase.java:68)
 at 
 org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:135)
 at org.apache.solr.core.SolrCore.execute(SolrCore.java:1797)
 at 
 org.apache.solr.servlet.SolrDispatchFilter.execute(SolrDispatchFilter.java:637)
 at 
 org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:343)
 at 
 org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:141)
 at 
 org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:243)
 at 
 org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:210)
 at 
 org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:224)
 at 
 org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:169)
 at 
 org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:168)
 at 
 org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:98)
 at 
 org.apache.catalina.valves.AccessLogValve.invoke(AccessLogValve.java:927)
 at 
 org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:118)
 at 
 org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:407)
 at 
 org.apache.coyote.http11.AbstractHttp11Processor.process(AbstractHttp11Processor.java:987)
 at 
 org.apache.coyote.AbstractProtocol$AbstractConnectionHandler.process(AbstractProtocol.java:579)
 at 
 org.apache.tomcat.util.net.JIoEndpoint$SocketProcessor.run(JIoEndpoint.java:307)
 at 
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
 

[jira] [Commented] (SOLR-4601) A Collection that is only partially created and then deleted will leave pre allocated shard information in ZooKeeper.

2013-03-19 Thread Commit Tag Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4601?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13606134#comment-13606134
 ] 

Commit Tag Bot commented on SOLR-4601:
--

[trunk commit] Mark Robert Miller
http://svn.apache.org/viewvc?view=revision&revision=1458150

SOLR-4601: Move CHANGES entry.


 A Collection that is only partially created and then deleted will leave pre 
 allocated shard information in ZooKeeper.
 -

 Key: SOLR-4601
 URL: https://issues.apache.org/jira/browse/SOLR-4601
 Project: Solr
  Issue Type: Bug
  Components: SolrCloud
Reporter: Mark Miller
Assignee: Mark Miller
 Fix For: 4.3, 5.0, 4.2.1


 This means you can't create the collection again, as it will appear to 
 already exist.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-4602) ZkController#unregister should cancel its election participation before asking the Overseer to delete the SolrCore information.

2013-03-19 Thread Commit Tag Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4602?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13606135#comment-13606135
 ] 

Commit Tag Bot commented on SOLR-4602:
--

[trunk commit] Mark Robert Miller
http://svn.apache.org/viewvc?view=revision&revision=1458148

SOLR-4602: Move CHANGES entry.


 ZkController#unregister should cancel its election participation before 
 asking the Overseer to delete the SolrCore information.
 

 Key: SOLR-4602
 URL: https://issues.apache.org/jira/browse/SOLR-4602
 Project: Solr
  Issue Type: Bug
  Components: SolrCloud
Reporter: Mark Miller
Assignee: Mark Miller
Priority: Minor
 Fix For: 4.3, 5.0, 4.2.1


 Otherwise, the leader election is likely to publish state updates that race 
 with the removal of the SolrCore from the clusterstate.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: Welcome Shawn Heisey as Lucene/Solr committer

2013-03-19 Thread Dawid Weiss
Welcome Shawn!

On Tue, Mar 19, 2013 at 7:17 AM, Koji Sekiguchi k...@r.email.ne.jp wrote:

 Welcome Shawn!


 (13/03/19 13:31), Steve Rowe wrote:

 I'm pleased to announce that Shawn Heisey has accepted the PMC's
 invitation to become a committer.

 Shawn, it's tradition that you introduce yourself with a brief bio.

 Once your account has been created - could take a few days - you'll be
 able to add yourself to committers section of the Who We Are page on the
 website: http://lucene.apache.org/whoweare.html
 (use the ASF CMS bookmarklet at the bottom of the page here: 
 https://cms.apache.org/#bookmark - more info here 
 http://www.apache.org/dev/cms.html).

 Check out the ASF dev page - lots of useful links: 
 http://www.apache.org/dev/.

 Congratulations and welcome!

 Steve


 -
 To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
 For additional commands, e-mail: dev-h...@lucene.apache.org




 --
 http://soleami.com/blog/lucene-4-is-super-convenient-for-developing-nlp-tools.html


 -
 To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
 For additional commands, e-mail: dev-h...@lucene.apache.org




Re: Welcome Shawn Heisey as Lucene/Solr committer

2013-03-19 Thread Tommaso Teofili
Welcome Shawn!

Tommaso


2013/3/19 Steve Rowe sar...@gmail.com

 I'm pleased to announce that Shawn Heisey has accepted the PMC's
 invitation to become a committer.

 Shawn, it's tradition that you introduce yourself with a brief bio.

 Once your account has been created - could take a few days - you'll be
 able to add yourself to committers section of the Who We Are page on the
 website: http://lucene.apache.org/whoweare.html (use the ASF CMS
 bookmarklet at the bottom of the page here: 
 https://cms.apache.org/#bookmark - more info here 
 http://www.apache.org/dev/cms.html).

 Check out the ASF dev page - lots of useful links: 
 http://www.apache.org/dev/.

 Congratulations and welcome!

 Steve


 -
 To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
 For additional commands, e-mail: dev-h...@lucene.apache.org




Re: Welcome Shawn Heisey as Lucene/Solr committer

2013-03-19 Thread Adrien Grand
Welcome aboard Shawn!

--
Adrien

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-1913) QParserPlugin plugin for Search Results Filtering Based on Bitwise Operations on Integer Fields

2013-03-19 Thread Christopher (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-1913?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13606192#comment-13606192
 ] 

Christopher commented on SOLR-1913:
---

Hi,

I have the same problem as Ankur Goyal. Does anyone have a solution, please?

 QParserPlugin plugin for Search Results Filtering Based on Bitwise Operations 
 on Integer Fields
 ---

 Key: SOLR-1913
 URL: https://issues.apache.org/jira/browse/SOLR-1913
 Project: Solr
  Issue Type: New Feature
  Components: search
Reporter: Israel Ekpo
 Fix For: 4.3

 Attachments: bitwise_filter_plugin.jar, SOLR-1913.bitwise.tar.gz

   Original Estimate: 1h
  Remaining Estimate: 1h

 BitwiseQueryParserPlugin is a org.apache.solr.search.QParserPlugin that 
 allows 
 users to filter the documents returned from a query
 by performing bitwise operations between a particular integer field in the 
 index
 and the specified value.
 This Solr plugin is based on the BitwiseFilter in LUCENE-2460
 See https://issues.apache.org/jira/browse/LUCENE-2460 for more details
 This is the syntax for searching in Solr:
 http://localhost:8983/path/to/solr/select/?q={!bitwise field=fieldname 
 op=OPERATION_NAME source=sourcevalue negate=boolean}remainder of query
 Example :
 http://localhost:8983/solr/bitwise/select/?q={!bitwise field=user_permissions 
 op=AND source=3 negate=true}state:FL
 The negate parameter is optional
 The field parameter is the name of the integer field
 The op parameter is the name of the operation; one of {AND, OR, XOR}
 The source parameter is the specified integer value
 The negate parameter is a boolean indicating whether or not to negate the 
 results of the bitwise operation
 To test out this plugin, simply copy the jar file containing the plugin 
 classes into your $SOLR_HOME/lib directory and then
 add the following to your solrconfig.xml file after the dismax request 
 handler:
 <queryParser name="bitwise" 
 class="org.apache.solr.bitwise.BitwiseQueryParserPlugin" basedOn="dismax"/>
 Restart your servlet container.
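To make the matching rule concrete, here is a minimal, self-contained sketch in plain Java. The class, method names, and the exact rule used here ((value op source) == source) are illustrative assumptions, not the plugin's actual API — see LUCENE-2460 for the real semantics:

```java
// Illustrative sketch of a bitwise document filter; names and the exact
// matching rule are assumptions, not the plugin's actual implementation.
public class BitwiseFilterSketch {

    enum Op { AND, OR, XOR }

    // One plausible rule: a document matches when (fieldValue op source)
    // equals source, i.e. the source bits are "covered". negate inverts it.
    static boolean matches(int fieldValue, Op op, int source, boolean negate) {
        int result;
        switch (op) {
            case AND: result = fieldValue & source; break;
            case OR:  result = fieldValue | source; break;
            default:  result = fieldValue ^ source; break;
        }
        boolean match = result == source;
        return negate ? !match : match;
    }

    public static void main(String[] args) {
        // user_permissions=7 (binary 111) covers source=3 (binary 011) under AND
        System.out.println(matches(7, Op.AND, 3, false)); // true
        // negate=true, as in the op=AND source=3 negate=true example above
        System.out.println(matches(7, Op.AND, 3, true));  // false
    }
}
```

The negate=true case mirrors the `state:FL` example above, where matching documents are excluded rather than kept.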

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-4608) Update Log replay should use the default processor chain

2013-03-19 Thread ludovic Boutros (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4608?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13606201#comment-13606201
 ] 

ludovic Boutros commented on SOLR-4608:
---

Thanks Mark and Yonik.

Yonik, could you please post the code of this change? 
I could try to patch the 4.1/4.2 branches and then test it.

 

 Update Log replay should use the default processor chain
 

 Key: SOLR-4608
 URL: https://issues.apache.org/jira/browse/SOLR-4608
 Project: Solr
  Issue Type: Bug
  Components: SolrCloud
Affects Versions: 4.1, 4.2
Reporter: ludovic Boutros
Assignee: Yonik Seeley
 Fix For: 4.3, 5.0, 4.2.1


 If a processor chain is used with custom processors, 
 they are not used in case of node failure during log replay.
 Here is the code:
 {code:title=UpdateLog.java|borderStyle=solid}
 public void doReplay(TransactionLog translog) {
   try {
     loglog.warn("Starting log replay " + translog + " active=" + activeLog + " starting pos=" + recoveryInfo.positionOfStart);
     tlogReader = translog.getReader(recoveryInfo.positionOfStart);
     // NOTE: we don't currently handle a core reload during recovery.  This would cause the core
     // to change underneath us.
     // TODO: use the standard request factory?  We won't get any custom configuration instantiating this way.
     RunUpdateProcessorFactory runFac = new RunUpdateProcessorFactory();
     DistributedUpdateProcessorFactory magicFac = new DistributedUpdateProcessorFactory();
     runFac.init(new NamedList());
     magicFac.init(new NamedList());
     UpdateRequestProcessor proc = magicFac.getInstance(req, rsp, runFac.getInstance(req, rsp, null));
 {code} 
 I think this is a big issue, because most people will only discover it when a 
 node crashes... and by then it's too late.
 To me, it means that processor chains are currently not usable with SolrCloud.
  

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: Welcome Shawn Heisey as Lucene/Solr committer

2013-03-19 Thread Christian Moen
Congratulations!

On Mar 19, 2013, at 5:55 PM, Adrien Grand jpou...@gmail.com wrote:

 Welcome aboard Shawn!
 
 --
 Adrien
 
 -
 To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
 For additional commands, e-mail: dev-h...@lucene.apache.org
 


-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-4465) Configurable Collectors

2013-03-19 Thread Erik Hatcher (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4465?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13606252#comment-13606252
 ] 

Erik Hatcher commented on SOLR-4465:


Joel - nice work!

A few comments:

  * Before trying the exact link you provided in the description to a working 
example, I tried 
http://localhost:8983/solr/select?q=*:*&cl=on&cl.delegating=sum&cl.sum.0.column=price
 without specifying cl.topdocs=default and got an error.  Maybe if there is no 
cl.topdocs specified, it automatically uses the default?

  * The additional info in the response (<lst name="cl.sum.0"> in this example) 
is coming out before/above the results (<result name="response" ...>).  This 
should probably come out after the results to avoid any issues with clients 
that are looking for the results in a particular spot (which of course they 
shouldn't be, but if we can easily move it after the results that would be better)

  * I'm not fond of the ordinals.  Seems like we can do away with them somehow, 
leveraging local params.  I'm not sure how that would look just yet, and maybe 
ordinals is the best way here but it's a new kind of syntax for parameters that 
would be nice to avoid if possible.

 Configurable Collectors
 ---

 Key: SOLR-4465
 URL: https://issues.apache.org/jira/browse/SOLR-4465
 Project: Solr
  Issue Type: New Feature
  Components: search
Affects Versions: 4.1
Reporter: Joel Bernstein
 Fix For: 4.3

 Attachments: SOLR-4465.patch, SOLR-4465.patch, SOLR-4465.patch, 
 SOLR-4465.patch, SOLR-4465.patch, SOLR-4465.patch, SOLR-4465.patch, 
 SOLR-4465.patch, SOLR-4465.patch, SOLR-4465.patch, SOLR-4465.patch, 
 SOLR-4465.patch, SOLR-4465.patch, SOLR-4465.patch, SOLR-4465.patch


 This ticket provides a patch to add pluggable collectors to Solr. This patch 
 was generated and tested with Solr 4.1.
 This is how the patch functions:
 Collectors are plugged into Solr in the solrconfig.xml using the new 
 collectorFactory element. For example:
 <collectorFactory name="default" class="solr.CollectorFactory"/>
 <collectorFactory name="sum" class="solr.SumCollectorFactory"/>
 The elements above define two collector factories. The first one is the 
 default collectorFactory. The class attribute points to 
 org.apache.solr.handler.component.CollectorFactory, which implements logic 
 that returns the default TopScoreDocCollector and TopFieldCollector. 
 To create your own collectorFactory you must subclass the default 
 CollectorFactory and at a minimum override the getCollector method to return 
 your new collector. 
 You can tell Solr which collectorFactory to use at query time using http 
 parameters. All collector parameters start with the prefix cl. 
 The parameter cl turns on pluggable collectors:
 cl=true
 If cl is not in the parameters, Solr will automatically use the default 
 collectorFactory.
 *Pluggable doclist Sorting with Topdocs Collectors*
 You can specify two types of pluggable collectors. The first type is the 
 topdocs collector. For example:
 cl.topdocs=name
 The above param points to the named collectorFactory in the solrconfig.xml to 
 construct the collector. Topdocs collectorFactories must return collectors 
 that extend the TopDocsCollector base class. Topdocs collectors are 
 responsible for collecting the doclist.
 You can pass parameters to the topdocs collectors by adding cl. http 
 parameters. By convention you can pass parameters to the topdocs collector 
 like this:
 cl.topdocs.max=100
 This parameter will be added to the collector spec because of the cl. 
 prefix and passed to the collectorFactory.
 *Pluggable Custom Analytics With Delegating Collectors*
 You can also specify any number of delegating collectors with the 
 cl.delegating parameter. Delegating collectors are designed to collect 
 something else besides the doclist. Typically this would be some type of 
 custom analytic. 
 cl.delegating=sum,ave
 The parameter above specifies two delegating collectors named sum and ave. 
 Like the topdocs collectors these point to named collectorFactories in the 
 solrconfig.xml. 
 Delegating collector factories must return Collector instances that extend 
 DelegatingCollector. 
 A sample delegating collector is provided in the patch through the 
 org.apache.solr.handler.component.SumCollectorFactory.
 This collectorFactory provides a very simple DelegatingCollector that groups 
 by a field and sums a column of floats. The sum collector is not designed to 
 be a fully functional sum function but to be a proof of concept for pluggable 
 analytics through delegating collectors.
 To communicate with delegating collectors you need to reference the name and 
 ordinal of the collector.
 The ordinal refers to the collectors ordinal in the comma separated list.
 For example:
 cl.delegating=sum,ave&cl.sum.0.groupby=field1
 The 
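The "groups by a field and sums a column of floats" behavior described above can be sketched in plain, self-contained Java. The class below is illustrative only — it does not subclass Solr's DelegatingCollector or read values from the index; it just shows the accumulation the delegating collector performs per hit:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Illustrative sketch of what a delegating "sum" collector accumulates:
// for each collected document, group on one field value and sum another.
public class SumCollectorSketch {

    private final Map<String, Double> sums = new LinkedHashMap<>();

    // Called once per collected document with the groupby and sum values.
    void collect(String groupValue, double price) {
        sums.merge(groupValue, price, Double::sum);
    }

    Map<String, Double> result() {
        return sums;
    }

    public static void main(String[] args) {
        SumCollectorSketch c = new SumCollectorSketch();
        c.collect("FL", 10.0);
        c.collect("NY", 5.0);
        c.collect("FL", 2.5);
        System.out.println(c.result()); // {FL=12.5, NY=5.0}
    }
}
```

In the real patch this accumulation happens inside collect(int doc) while the delegate still builds the normal doclist, which is what makes the analytics "free-riding" on the main query.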

[jira] [Created] (LUCENE-4855) Potential exception in TermInfosWriter#initialize() swallowed makes debugging hard

2013-03-19 Thread Chris Gioran (JIRA)
Chris Gioran created LUCENE-4855:


 Summary: Potential exception in TermInfosWriter#initialize() 
swallowed makes debugging hard
 Key: LUCENE-4855
 URL: https://issues.apache.org/jira/browse/LUCENE-4855
 Project: Lucene - Core
  Issue Type: Bug
  Components: core/index
Affects Versions: 3.6.2
 Environment: any
Reporter: Chris Gioran


TermInfosWriter#initialize() can potentially fail with an exception when trying 
to write any of the values in the try block. If that happens, the finally clause 
will be called, and that may also fail during close(). This exception will mask 
the original one, potentially hiding the real cause and making debugging such 
failures difficult.

My particular case involves the first write in initialize() failing and close() 
then failing during the seek. My code receives:

Caused by: java.io.IOException: Illegal seek
at java.io.RandomAccessFile.seek(Native Method) ~[na:1.6.0_31]
at 
org.apache.lucene.store.FSDirectory$FSIndexOutput.seek(FSDirectory.java:479)
at 
org.apache.lucene.index.TermInfosWriter.close(TermInfosWriter.java:244)
at org.apache.lucene.util.IOUtils.close(IOUtils.java:141)

which provides no indication as to why the initialization failed. The above 
stack trace has been created with lucene version 3.5.0 but the exception 
handling is still the same in 3.6.2
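The masking pattern described here, and one way to preserve the primary exception, can be shown in a self-contained Java sketch. The helper names are illustrative; Throwable.addSuppressed requires Java 7 (the 3.x line targets older JVMs, where a helper along the lines of Lucene's IOUtils.closeWhileHandlingException serves the same purpose):

```java
// Demonstrates how an exception thrown in finally/close() masks the
// original failure, and the addSuppressed pattern that preserves it.
public class MaskingDemo {

    static void closeFails() { throw new RuntimeException("close failed"); }

    // Naive try/finally: the close() exception replaces the write() exception.
    static RuntimeException naive() {
        try {
            try {
                throw new RuntimeException("original write failure");
            } finally {
                closeFails();
            }
        } catch (RuntimeException e) {
            return e; // "close failed" -- the real cause is lost
        }
    }

    // Preserving pattern: keep the primary, attach close() failure as suppressed.
    static RuntimeException preserving() {
        RuntimeException primary = null;
        try {
            throw new RuntimeException("original write failure");
        } catch (RuntimeException e) {
            primary = e;
        } finally {
            try {
                closeFails();
            } catch (RuntimeException closeEx) {
                if (primary != null) primary.addSuppressed(closeEx);
            }
        }
        return primary;
    }

    public static void main(String[] args) {
        System.out.println(naive().getMessage());      // close failed
        System.out.println(preserving().getMessage()); // original write failure
    }
}
```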

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-4855) Potential exception in TermInfosWriter#initialize() swallowed makes debugging hard

2013-03-19 Thread Chris Gioran (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-4855?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Gioran updated LUCENE-4855:
-

Priority: Minor  (was: Major)

 Potential exception in TermInfosWriter#initialize() swallowed makes debugging 
 hard
 --

 Key: LUCENE-4855
 URL: https://issues.apache.org/jira/browse/LUCENE-4855
 Project: Lucene - Core
  Issue Type: Bug
  Components: core/index
Affects Versions: 3.6.2
 Environment: any
Reporter: Chris Gioran
Priority: Minor

 TermInfosWriter#initialize() can potentially fail with an exception when 
 trying to write any of the values in the try block. If that happens the 
 finally clause will be called and that may also fail during close(). This 
 exception will mask the original one potentially hiding the real cause and 
 making debugging such failures difficult.
 My particular case involves failing the first write in the initialize() and 
 close() failing the seek. My code receives:
 Caused by: java.io.IOException: Illegal seek
 at java.io.RandomAccessFile.seek(Native Method) ~[na:1.6.0_31]
 at 
 org.apache.lucene.store.FSDirectory$FSIndexOutput.seek(FSDirectory.java:479)
 at 
 org.apache.lucene.index.TermInfosWriter.close(TermInfosWriter.java:244)
 at org.apache.lucene.util.IOUtils.close(IOUtils.java:141)
 which provides no indication as to why the initialization failed. The above 
 stack trace has been created with lucene version 3.5.0 but the exception 
 handling is still the same in 3.6.2

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-4465) Configurable Collectors

2013-03-19 Thread Erik Hatcher (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4465?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13606252#comment-13606252
 ] 

Erik Hatcher edited comment on SOLR-4465 at 3/19/13 12:23 PM:
--

Joel - nice work!

A few comments:

  * Before trying the exact link you provided in the description to a working 
example, I tried 
http://localhost:8983/solr/select?q=*:*&cl=on&cl.delegating=sum&cl.sum.0.column=price
 without specifying cl.topdocs=default and got an error.  Maybe if there is no 
cl.topdocs specified, it should automatically use the default?

  * The additional info in the response (<lst name="cl.sum.0"> in this example) 
is coming out before/above the results (<result name="response" ...>).  This 
should probably come out after the results to avoid any issues with clients 
that are looking for the results in a particular spot (which of course they 
shouldn't be, but if we can easily move it after the results that would be better)

  * I'm not fond of the ordinals.  Seems like we can do away with them somehow, 
leveraging local params.  I'm not sure how that would look just yet, and maybe 
ordinals is the best way here but it's a new kind of syntax for parameters that 
would be nice to avoid if possible.

  was (Author: ehatcher):
Joel - nice work!

A few comments:

  * Before trying the exact link you provided in the description to a working 
example, I tried 
http://localhost:8983/solr/select?q=*:*&cl=on&cl.delegating=sum&cl.sum.0.column=price
 without specifying cl.topdocs=default and got an error.  Maybe if there is no 
cl.topdocs specified, it automatically uses the default?

  * The additional info in the response (<lst name="cl.sum.0"> in this example) 
is coming out before/above the results (<result name="response" ...>).  This 
should probably come out after the results to avoid any issues with clients 
that are looking for the results in a particular spot (which of course they 
shouldn't be, but if we easily move after the results that would be better)

  * I'm not fond of the ordinals.  Seems like we can do away with them somehow, 
leveraging local params.  I'm not sure how that would look just yet, and maybe 
ordinals is the best way here but it's a new kind of syntax for parameters that 
would be nice to avoid if possible.
  
 Configurable Collectors
 ---

 Key: SOLR-4465
 URL: https://issues.apache.org/jira/browse/SOLR-4465
 Project: Solr
  Issue Type: New Feature
  Components: search
Affects Versions: 4.1
Reporter: Joel Bernstein
 Fix For: 4.3

 Attachments: SOLR-4465.patch, SOLR-4465.patch, SOLR-4465.patch, 
 SOLR-4465.patch, SOLR-4465.patch, SOLR-4465.patch, SOLR-4465.patch, 
 SOLR-4465.patch, SOLR-4465.patch, SOLR-4465.patch, SOLR-4465.patch, 
 SOLR-4465.patch, SOLR-4465.patch, SOLR-4465.patch, SOLR-4465.patch


 This ticket provides a patch to add pluggable collectors to Solr. This patch 
 was generated and tested with Solr 4.1.
 This is how the patch functions:
 Collectors are plugged into Solr in the solrconfig.xml using the new 
 collectorFactory element. For example:
 <collectorFactory name="default" class="solr.CollectorFactory"/>
 <collectorFactory name="sum" class="solr.SumCollectorFactory"/>
 The elements above define two collector factories. The first one is the 
 default collectorFactory. The class attribute points to 
 org.apache.solr.handler.component.CollectorFactory, which implements logic 
 that returns the default TopScoreDocCollector and TopFieldCollector. 
 To create your own collectorFactory you must subclass the default 
 CollectorFactory and at a minimum override the getCollector method to return 
 your new collector. 
 You can tell Solr which collectorFactory to use at query time using http 
 parameters. All collector parameters start with the prefix cl. 
 The parameter cl turns on pluggable collectors:
 cl=true
 If cl is not in the parameters, Solr will automatically use the default 
 collectorFactory.
 *Pluggable doclist Sorting with Topdocs Collectors*
 You can specify two types of pluggable collectors. The first type is the 
 topdocs collector. For example:
 cl.topdocs=name
 The above param points to the named collectorFactory in the solrconfig.xml to 
 construct the collector. Topdocs collectorFactories must return collectors 
 that extend the TopDocsCollector base class. Topdocs collectors are 
 responsible for collecting the doclist.
 You can pass parameters to the topdocs collectors by adding cl. http 
 parameters. By convention you can pass parameters to the topdocs collector 
 like this:
 cl.topdocs.max=100
 This parameter will be added to the collector spec because of the cl. 
 prefix and passed to the collectorFactory.
 *Pluggable Custom Analytics With Delegating Collectors*
 You can also specify any number of delegating collectors 

[jira] [Comment Edited] (SOLR-4465) Configurable Collectors

2013-03-19 Thread Erik Hatcher (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4465?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13606252#comment-13606252
 ] 

Erik Hatcher edited comment on SOLR-4465 at 3/19/13 12:24 PM:
--

Joel - nice work!

A few comments:

  * Before trying the exact link you provided in the description to a working 
example, I tried 
http://localhost:8983/solr/select?q=*:*&cl=on&cl.delegating=sum&cl.sum.0.column=price
 without specifying cl.topdocs=default and got an error.  Maybe if there is no 
cl.topdocs specified, it should automatically use the default?

  * The additional info in the response (<lst name="cl.sum.0"> in this example) 
is coming out before/above the results (<result name="response" ...>).  This 
should probably come out after the results to avoid any issues with clients 
that are looking for the results in a particular spot (which of course they 
shouldn't be, but if we can easily move it after the results that would be 
better)

  * I'm not fond of the ordinals.  Seems like we can do away with them somehow, 
leveraging local params.  I'm not sure how that would look just yet, and maybe 
ordinals is the best way here but it's a new kind of syntax for parameters that 
would be nice to avoid if possible.

  was (Author: ehatcher):
Joel - nice work!

A few comments:

  * Before trying the exact link you provided in the description to a working 
example, I tried 
http://localhost:8983/solr/select?q=*:*&cl=on&cl.delegating=sum&cl.sum.0.column=price
 without specifying cl.topdocs=default and got an error.  Maybe if there is no 
cl.topdocs specified, it should automatically use the default?

  * The additional info in the response (<lst name="cl.sum.0"> in this example) 
is coming out before/above the results (<result name="response" ...>).  This 
should probably come out after the results to avoid any issues with clients 
that are looking for the results in a particular spot (which of course they 
shouldn't be, but if we easily move after the results that would be better)

  * I'm not fond of the ordinals.  Seems like we can do away with them somehow, 
leveraging local params.  I'm not sure how that would look just yet, and maybe 
ordinals is the best way here but it's a new kind of syntax for parameters that 
would be nice to avoid if possible.
  
 Configurable Collectors
 ---

 Key: SOLR-4465
 URL: https://issues.apache.org/jira/browse/SOLR-4465
 Project: Solr
  Issue Type: New Feature
  Components: search
Affects Versions: 4.1
Reporter: Joel Bernstein
 Fix For: 4.3

 Attachments: SOLR-4465.patch, SOLR-4465.patch, SOLR-4465.patch, 
 SOLR-4465.patch, SOLR-4465.patch, SOLR-4465.patch, SOLR-4465.patch, 
 SOLR-4465.patch, SOLR-4465.patch, SOLR-4465.patch, SOLR-4465.patch, 
 SOLR-4465.patch, SOLR-4465.patch, SOLR-4465.patch, SOLR-4465.patch


 This ticket provides a patch to add pluggable collectors to Solr. This patch 
 was generated and tested with Solr 4.1.
 This is how the patch functions:
 Collectors are plugged into Solr in the solrconfig.xml using the new 
 collectorFactory element. For example:
 <collectorFactory name="default" class="solr.CollectorFactory"/>
 <collectorFactory name="sum" class="solr.SumCollectorFactory"/>
 The elements above define two collector factories. The first one is the 
 default collectorFactory. The class attribute points to 
 org.apache.solr.handler.component.CollectorFactory, which implements logic 
 that returns the default TopScoreDocCollector and TopFieldCollector. 
 To create your own collectorFactory you must subclass the default 
 CollectorFactory and at a minimum override the getCollector method to return 
 your new collector. 
 You can tell Solr which collectorFactory to use at query time using http 
 parameters. All collector parameters start with the prefix cl. 
 The parameter cl turns on pluggable collectors:
 cl=true
 If cl is not in the parameters, Solr will automatically use the default 
 collectorFactory.
 *Pluggable doclist Sorting with Topdocs Collectors*
 You can specify two types of pluggable collectors. The first type is the 
 topdocs collector. For example:
 cl.topdocs=name
 The above param points to the named collectorFactory in the solrconfig.xml to 
 construct the collector. Topdocs collectorFactories must return collectors 
 that extend the TopDocsCollector base class. Topdocs collectors are 
 responsible for collecting the doclist.
 You can pass parameters to the topdocs collectors by adding cl. http 
 parameters. By convention you can pass parameters to the topdocs collector 
 like this:
 cl.topdocs.max=100
 This parameter will be added to the collector spec because of the cl. 
 prefix and passed to the collectorFactory.
 *Pluggable Custom Analytics With Delegating Collectors*
 You can also specify any number of 

[jira] [Created] (SOLR-4614) ClusterState#getSlices returns null causing NPE in ClientUtils#addSlices

2013-03-19 Thread David Arthur (JIRA)
David Arthur created SOLR-4614:
--

 Summary: ClusterState#getSlices returns null causing NPE in 
ClientUtils#addSlices
 Key: SOLR-4614
 URL: https://issues.apache.org/jira/browse/SOLR-4614
 Project: Solr
  Issue Type: Bug
  Components: clients - java, SolrCloud
Affects Versions: 4.1
Reporter: David Arthur
Priority: Minor


When my program sends an UpdateRequest to a collection that has been deleted, I 
get an NPE:

{code}
java.lang.NullPointerException
at 
org.apache.solr.client.solrj.util.ClientUtils.addSlices(ClientUtils.java:273)
at 
org.apache.solr.client.solrj.impl.CloudSolrServer.request(CloudSolrServer.java:214)
at 
org.apache.solr.client.solrj.request.AbstractUpdateRequest.process(AbstractUpdateRequest.java:117)
{code}

This appears to be caused by the fact that ClusterState#getSlices is returning 
null instead of an empty collection.

ClusterState returning null: 
https://github.com/apache/lucene-solr/blob/lucene_solr_4_1/solr/solrj/src/java/org/apache/solr/common/cloud/ClusterState.java#L123
ClientUtil#addSlices iterating over a null: 
https://github.com/apache/lucene-solr/blob/lucene_solr_4_1/solr/solrj/src/java/org/apache/solr/client/solrj/util/ClientUtils.java#L273

I would attach a patch, but I'm not sure what the preferred style is within the 
project (empty collection vs null checks).
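The empty-collection option can be sketched with simplified stand-in types. The classes and method signatures below are illustrative, not Solr's actual ClusterState/ClientUtils API:

```java
import java.util.Collection;
import java.util.Collections;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Sketch of the "return empty instead of null" choice: callers can then
// iterate unconditionally and never hit the NPE from the stack trace above.
public class SlicesLookup {

    private final Map<String, List<String>> slicesByCollection = new HashMap<>();

    void addCollection(String name, List<String> slices) {
        slicesByCollection.put(name, slices);
    }

    // Never returns null; a deleted/unknown collection yields an empty list.
    Collection<String> getSlices(String collection) {
        List<String> slices = slicesByCollection.get(collection);
        return slices != null ? slices : Collections.<String>emptyList();
    }

    public static void main(String[] args) {
        SlicesLookup lookup = new SlicesLookup();
        lookup.addCollection("c1", java.util.Arrays.asList("shard1", "shard2"));
        System.out.println(lookup.getSlices("c1").size());      // 2
        System.out.println(lookup.getSlices("deleted").size()); // 0
    }
}
```

The alternative is to keep the null return and null-check at every call site; the empty-collection style concentrates the fix in one place, which is usually why it is preferred for lookups like this.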



--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-3167) Make lucene/solr a OSGI bundle through Ant

2013-03-19 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-3167?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13606281#comment-13606281
 ] 

Robert Muir commented on LUCENE-3167:
-

{quote}
Maybe if the Lucene committers don't feel comfortable enough with OSGi, they 
should leave it to some external OSGi packagers, just like there are debian 
packagers. Here is an example of an Ivy repository maintained by 'packagers': 
https://code.google.com/p/ivyroundup/. There was an attempt for OSGi but it 
stalled: https://github.com/glyn/bundlerepo
{quote}

Yes, I think someone downstream should do it, outside of this project.

I said the same thing about maven; that one didn't work out. But this had to do 
more with maven advocates lying about the necessity of it taking place in this 
project: turned out later this wasn't true and pretty much anybody can release 
anybody else's shit on maven central.

I won't make the same mistake twice.


 Make lucene/solr a OSGI bundle through Ant
 --

 Key: LUCENE-3167
 URL: https://issues.apache.org/jira/browse/LUCENE-3167
 Project: Lucene - Core
  Issue Type: New Feature
 Environment: bndtools
Reporter: Luca Stancapiano
 Attachments: LUCENE-3167_20130108.patch, LUCENE-3167.patch, 
 LUCENE-3167.patch, LUCENE-3167.patch, lucene_trunk.patch, lucene_trunk.patch


 We need to make a bundle through Ant, so the binary can be published with no 
 further need to download the sources. Currently, to get an OSGi bundle we need 
 to use Maven tools and build the sources. Here is the reference for the creation 
 of the OSGi bundle through Maven:
 https://issues.apache.org/jira/browse/LUCENE-1344
 Bndtools could be used inside Ant

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-4848) Add Directory implementations using NIO2 APIs

2013-03-19 Thread Uwe Schindler (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-4848?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Uwe Schindler updated LUCENE-4848:
--

Attachment: LUCENE-4848-MMapDirectory.patch

For demonstration purposes, I attached the simple patch for MMapDirectory that 
uses the new StandardOpenOption and FileChannel.open() provided by Java 7. I did 
not yet really test the deletion of open files on Windows, but all tests pass 
(as they should).

It would also be interesting to see whether this patch solves the 
ClosedChannelException problem on interrupt. The time window in MMap where the 
bug can happen is very short (only after opening the channel, while mmap is 
doing its work before the channel is closed).

As you see, the Path API of Java 7 is not yet exposed to the public API. The 
whole code is still working with java.io.File, only when opening the channel it 
calls File.toPath(). 

Michael Poindexter: We should do the same and *no* other changes in NIO. Just 
move away from RAF and use FileChannel.
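As a rough sketch (not the actual patch), the pattern described above keeps java.io.File in the public API and converts to a Path only at the point where the channel is opened; the class and method names here are illustrative:

```java
import java.io.File;
import java.io.IOException;
import java.nio.channels.FileChannel;
import java.nio.file.StandardOpenOption;

public class NioOpenSketch {
    // Keep java.io.File in the public API; use the NIO2 APIs only
    // internally, converting with File.toPath() at open time.
    static FileChannel openReadOnly(File file) throws IOException {
        return FileChannel.open(file.toPath(), StandardOpenOption.READ);
    }

    public static void main(String[] args) throws IOException {
        File f = File.createTempFile("nio-sketch", ".bin");
        f.deleteOnExit();
        try (FileChannel ch = openReadOnly(f)) {
            System.out.println("size=" + ch.size()); // empty temp file: size=0
        }
    }
}
```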

 Add Directory implementations using NIO2 APIs
 -

 Key: LUCENE-4848
 URL: https://issues.apache.org/jira/browse/LUCENE-4848
 Project: Lucene - Core
  Issue Type: Task
Reporter: Michael Poindexter
Assignee: Uwe Schindler
Priority: Minor
 Attachments: jdk7directory.zip, LUCENE-4848-MMapDirectory.patch


 I have implemented 3 Directory subclasses using NIO2 API's (available on 
 JDK7).  These may be suitable for inclusion in a Lucene contrib module.
 See the mailing list at http://lucene.markmail.org/thread/lrv7miivzmjm3ml5 
 for more details about this code and the advantages it provides.
 The code is attached as a zip to this issue.  I'll be happy to make any 
 changes requested.  I've included some minimal smoke tests, but any help in 
 how to use the normal Lucene tests to perform more thorough testing would be 
 appreciated.




Re: Welcome Shawn Heisey as Lucene/Solr committer

2013-03-19 Thread Erik Hatcher
Glad to have you here, Shawn!

Erik

On Mar 18, 2013, at 21:31 , Steve Rowe wrote:

 I'm pleased to announce that Shawn Heisey has accepted the PMC's invitation 
 to become a committer.
 
 Shawn, it's tradition that you introduce yourself with a brief bio.
 
 Once your account has been created - could take a few days - you'll be able 
 to add yourself to committers section of the Who We Are page on the website: 
 http://lucene.apache.org/whoweare.html (use the ASF CMS bookmarklet at the 
 bottom of the page here: https://cms.apache.org/#bookmark - more info here 
 http://www.apache.org/dev/cms.html).
 
 Check out the ASF dev page - lots of useful links: 
 http://www.apache.org/dev/.
 
 Congratulations and welcome!
 
 Steve
 
 



[jira] [Closed] (SOLR-402) JSON response support

2013-03-19 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/SOLR-402?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jan Høydahl closed SOLR-402.


Resolution: Won't Fix

No need for JSON parsing in SolrJ. Closing outdated issue.

SPRING_CLEANING_2013

 JSON response support
 -

 Key: SOLR-402
 URL: https://issues.apache.org/jira/browse/SOLR-402
 Project: Solr
  Issue Type: New Feature
  Components: clients - java
Affects Versions: 1.3
 Environment: all
Reporter: AABones
Priority: Minor
 Attachments: jsonPatch.patch


 The Solrj java client was missing response support for JSON. I added a 
 JSONResponseParser class and the necessary changes elsewhere to support it. 
 I'm attaching the patch file.




[jira] [Comment Edited] (SOLR-1913) QParserPlugin plugin for Search Results Filtering Based on Bitwise Operations on Integer Fields

2013-03-19 Thread Christopher (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-1913?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13606192#comment-13606192
 ] 

Christopher edited comment on SOLR-1913 at 3/19/13 1:13 PM:


Hi,

I have the same problem as Ankur Goyal. Does anyone have a solution, please?

  was (Author: nekudot):
Hi,

I have the same problem as Ankur Goyal, does anyone have a solution please 
please ?
  
 QParserPlugin plugin for Search Results Filtering Based on Bitwise Operations 
 on Integer Fields
 ---

 Key: SOLR-1913
 URL: https://issues.apache.org/jira/browse/SOLR-1913
 Project: Solr
  Issue Type: New Feature
  Components: search
Reporter: Israel Ekpo
 Fix For: 4.3

 Attachments: bitwise_filter_plugin.jar, SOLR-1913.bitwise.tar.gz

   Original Estimate: 1h
  Remaining Estimate: 1h

 BitwiseQueryParserPlugin is an org.apache.solr.search.QParserPlugin that 
 allows users to filter the documents returned from a query by performing 
 bitwise operations between a particular integer field in the index and the 
 specified value.
 This Solr plugin is based on the BitwiseFilter in LUCENE-2460
 See https://issues.apache.org/jira/browse/LUCENE-2460 for more details
 This is the syntax for searching in Solr:
 http://localhost:8983/path/to/solr/select/?q={!bitwise field=fieldname 
 op=OPERATION_NAME source=sourcevalue negate=boolean}remainder of query
 Example :
 http://localhost:8983/solr/bitwise/select/?q={!bitwise field=user_permissions 
 op=AND source=3 negate=true}state:FL
 The negate parameter is optional
 The field parameter is the name of the integer field
 The op parameter is the name of the operation; one of {AND, OR, XOR}
 The source parameter is the specified integer value
 The negate parameter is a boolean indicating whether or not to negate the 
 results of the bitwise operation
 To test out this plugin, simply copy the jar file containing the plugin 
 classes into your $SOLR_HOME/lib directory and then
 add the following to your solrconfig.xml file after the dismax request 
 handler:
 &lt;queryParser name="bitwise" 
 class="org.apache.solr.bitwise.BitwiseQueryParserPlugin" basedOn="dismax"/&gt;
 Restart your servlet container.
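As a hedged illustration of the parameters described above (not the actual BitwiseFilter code from LUCENE-2460, whose exact match rule may differ), the per-document decision could look like this; the class and method names are invented:

```java
public class BitwiseMatchSketch {
    // One plausible reading of the plugin's semantics: apply the named
    // bitwise op between the stored integer field value and the source
    // value, treat a non-zero result as a match, and invert the result
    // when negate=true.
    static boolean matches(int fieldValue, String op, int source, boolean negate) {
        final int result;
        switch (op) {
            case "AND": result = fieldValue & source; break;
            case "OR":  result = fieldValue | source; break;
            case "XOR": result = fieldValue ^ source; break;
            default: throw new IllegalArgumentException("unknown op: " + op);
        }
        boolean match = result != 0;
        return negate ? !match : match;
    }

    public static void main(String[] args) {
        // user_permissions=6, op=AND, source=3, negate=true (as in the example URL)
        System.out.println(matches(6, "AND", 3, true)); // 6 & 3 = 2, negated: false
    }
}
```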




[jira] [Commented] (LUCENE-4854) DocTermsOrd getOrdTermsEnum() buggy, lookupTerm/termsEnum is slow

2013-03-19 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-4854?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13606313#comment-13606313
 ] 

Michael McCandless commented on LUCENE-4854:


+1

 DocTermsOrd getOrdTermsEnum() buggy, lookupTerm/termsEnum is slow 
 --

 Key: LUCENE-4854
 URL: https://issues.apache.org/jira/browse/LUCENE-4854
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Robert Muir
 Fix For: 5.0, 4.3, 4.2.1

 Attachments: LUCENE-4854.patch


 Investigating a test failure in grouping/ I found the current dv api needs 
 help for DocTermsOrds (this facet+grouping collector uses seekExact(BytesRef) 
 on the termsenum):
 * termsenum.seekExact is slow because the default implementation calls 
 lookupTerm, which is slow. But this thing already has an optimal termsenum it 
 can just return directly (since LUCENE-4819).
 * lookupTerm is slow because the default implementation binary-searches 
 ordinal space, calling lookupOrd and comparing to the target. However, 
 lookupOrd is slow for this thing (it must binary-search ordinal space again, 
 then next() at most index_interval times).
 * its getOrdTermsEnum() method is buggy: it doesn't position correctly on an 
 initial next(). Nothing uses this today, but if we want to return this thing 
 directly it needs to work: it's just a trivial check contained within next().
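The cost described in the second bullet comes from a binary search over ordinal space in which every probe pays for a lookupOrd call; a simplified stand-in sketch, where a sorted array plays the role of the ord-to-term lookup and all names are illustrative:

```java
public class LookupTermSketch {
    // Default-style lookupTerm: binary search over ordinals, where each
    // probe does a lookupOrd (here: an array read standing in for the
    // expensive per-ord lookup) and compares against the target term.
    static int lookupTerm(String[] ordToTerm, String target) {
        int lo = 0, hi = ordToTerm.length - 1;
        while (lo <= hi) {
            int mid = (lo + hi) >>> 1;
            int cmp = ordToTerm[mid].compareTo(target); // lookupOrd(mid) + compare
            if (cmp < 0) lo = mid + 1;
            else if (cmp > 0) hi = mid - 1;
            else return mid;              // found: the term's ordinal
        }
        return -(lo + 1);                 // not found: negative insertion point
    }

    public static void main(String[] args) {
        String[] terms = {"apple", "banana", "cherry"}; // must be term-sorted
        System.out.println(lookupTerm(terms, "banana")); // prints 1
    }
}
```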




[jira] [Closed] (SOLR-1287) tests rely on internet connection

2013-03-19 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/SOLR-1287?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jan Høydahl closed SOLR-1287.
-

Resolution: Cannot Reproduce

All tests pass without internet connection, closing

SPRING_CLEANING_2013

 tests rely on internet connection
 -

 Key: SOLR-1287
 URL: https://issues.apache.org/jira/browse/SOLR-1287
 Project: Solr
  Issue Type: Bug
Reporter: Yonik Seeley
Priority: Minor

 Our current unit tests don't work w/o internet connectivity... found out due 
 to ContentStreamTest.testURLStream failing because apache svn is down.




Re: questions about solr.xml/solr.properties

2013-03-19 Thread Erick Erickson
So are you talking about backing all this out of 4x, or just taking the
properties bits out? Backing this all out of the 4x code line will
be...er...challenging, much more challenging than just yanking the
properties file bits out. The latter is much easier, whereas reverting will
be a pain; I'll need some help there. I'm not really willing to re-do from
the ground up all of the bits around preventing load/unload core operations
from occurring at the same time; it was too painful to do the first time.
But I don't think this is really a problem, since that code is only
relevant if you specify the lazy core loading options for your cores.

I don't quite get what you mean by solr.xml and solrconfig.xml being
similar, just both being xml files? And are we talking a discovery/no
discovery attribute in, say, the cores tag? Nor do I quite get the bits
around threading. The stress code does test the threading I think. It also
specifically does both properties and xml versions of solr.###, so it is
part of our normal testing. Whether we can harden it enough for 4.3 is
certainly a legitimate question.

So are we reconsidering what I had a couple of months ago with pluggable
core discovery? The default would be to do things as they're done now, we'd
provide an implementation that walked the directory structure, and dropping
in a different version that, say, queried ZK or a database would be pretty
straight-forward.

but if the solr.properties thing is to be killed, now is the time.




On Mon, Mar 18, 2013 at 8:42 PM, Mark Miller markrmil...@gmail.com wrote:

 I've been thinking about this in the background. I tossed out some of the
 ideas that were implemented here, and I also thought a properties file
 rather than xml might make the most sense for container config. After all,
 99% of the things you set will be simple key-value props that are
 attributes of the cores tag. I thought, wouldn't that just be simpler as a
 properties file? Easy hand-wavy thought at the time.

 I had forgotten that the shard handler configuration had crept into
 solr.xml. That made me think, what else will come creeping this way? Other
 solrconfig.xml stuff that doesn't belong per SolrCore. If my memory is
 right, shard handler was in solrconfig.xml at one point - so was the config
 for zookeeper in the very first integration attempt with Solr.

 Shouldn't the Container and SolrCore config mostly match? Wouldn't it be
 funny if one was yaml and the other xml? One a flat structure and the other
 nested? I think so.

 So I'm changing my mind on the properties files and favoring the use of
 the current solr.xml. It's a really easy switch for current users, it's
 consistent with what we have, and we can get 99% of the benefits of Erick's
 work even retaining solr.xml.

 I also think solr.xml and solrconfig.xml should be as similar as they can
 be.

 The real problem with solr.xml is still solved with Erick's work - the
 fact that cores are define there and Solr has to try and update the config
 file. The file is really not so bad once we remove that little ugly wart.

 I think there would be a few details to work out, but this path is very
 appealing to me.

 I don't think we should hurry this stuff - there is still some thread
 safety concerns I have, and I'm not sure it's gotten a lot of use since
 it's not part of the example or general tests. This is very important stuff
 IMO, and we want to get this change right, even if that means it doesn't
 make 4.3. 5x was almost invented for baking this type of change :)

 The faster we can come to a consensus, the faster we can get started
 baking things though.

 Anyone voting for the properties file?

 - Mark

 On Mar 14, 2013, at 3:46 PM, Robert Muir rcm...@gmail.com wrote:

  I'm late to the game here, so I apologize if a lot of this has already
  been discussed...
 
  I was looking recently and a little confused about the new .properties
  format, a few questions:
  * is the current plan to deprecate the solr.xml support in 4.x and drop
 for 5.0?
  * is there a real advantage to the .properties format over the
  existing .xml? When debugging it seemed I was in unknown territory a
  little bit, and this sorta means going forward that everything in here
  is assumed to be flat: but it isn't really today, and what if more
  stuff needed to be added in the future that wasn't flat? For example
  the shard handler stuff has nested elements, but I kinda had to guess
  at how this mapped to .properties (since xml differentiates between
  attributes and elements, but it's not so clear with .properties).
 
  It seems to me there are two changes involved:
  1. ability to auto-discover cores from the filesystem so you don't
  need to explicitly list them
  2. changing .xml format to .properties
 
  I guess just brainstorming, like what if we just kept the existing
  .xml format? we could add a new autoDiscover=true attribute to the
  cores element for people who don't want to list them explicitly.

Re: Welcome Shawn Heisey as Lucene/Solr committer

2013-03-19 Thread Erick Erickson
Welcome aboard!






Re: Welcome Shawn Heisey as Lucene/Solr committer

2013-03-19 Thread Yonik Seeley
Congrats Shawn!

-Yonik
http://lucidworks.com







Re: Welcome Shawn Heisey as Lucene/Solr committer

2013-03-19 Thread Jan Høydahl
Welcome Shawn!

--
Jan Høydahl, search solution architect
Cominvent AS - www.cominvent.com
Solr Training - www.solrtraining.com






[jira] [Commented] (SOLR-1287) tests rely on internet connection

2013-03-19 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-1287?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13606359#comment-13606359
 ] 

Uwe Schindler commented on SOLR-1287:
-

I can confirm that this was fixed long time ago. The mentioned test with 
ContentStreams now uses a file:-URL :-)





Re: Welcome Shawn Heisey as Lucene/Solr committer

2013-03-19 Thread Michael McCandless
Welcome Shawn!

Mike McCandless

http://blog.mikemccandless.com







[jira] [Commented] (LUCENE-4854) DocTermsOrd getOrdTermsEnum() buggy, lookupTerm/termsEnum is slow

2013-03-19 Thread Commit Tag Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-4854?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13606363#comment-13606363
 ] 

Commit Tag Bot commented on LUCENE-4854:


[trunk commit] Robert Muir
http://svn.apache.org/viewvc?view=revisionrevision=1458303

LUCENE-4854: DocTermsOrd getOrdTermsEnum() buggy, lookupTerm/termsEnum is slow






[jira] [Commented] (LUCENE-4848) Add Directory implementations using NIO2 APIs

2013-03-19 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-4848?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13606368#comment-13606368
 ] 

Robert Muir commented on LUCENE-4848:
-

{quote}
For demonstration purposes, I attached a simple patch for MMapDirectory that 
uses the new StandardOpenOption and FileChannel.open() provided by Java 7. I did 
not yet really test the deletion of open files on Windows, but all tests pass 
(as they should).
{quote}

This patch looks great!





Re: Welcome Shawn Heisey as Lucene/Solr committer

2013-03-19 Thread Shawn Heisey

On 3/18/2013 10:31 PM, Steve Rowe wrote:

I'm pleased to announce that Shawn Heisey has accepted the PMC's invitation to 
become a committer.

Shawn, it's tradition that you introduce yourself with a brief bio.


I'm a system admin by trade, mostly Linux and Cisco.  I've been poking 
around computers since I was tiny ... 35 years ago, my dad brought home 
a TRS-80 Model I.  I learned BASIC on that.


My first experiences with Solr were in late 2009 and early 2010 with 
version 1.4.0.  I have little exposure to Lucene.  I have been employed 
at the same company from 2004 to the present.


I have a small amount of university training in programming (Scheme and 
C/C++), everything since then is self-taught.  The language I am most 
fluent in is Perl.  I got serious about learning Java less than a year ago.


I look after five children, three of whom are legally adults.  Some of 
them were not originally mine, but it's all family.


I am honored by this invitation and I hope that I can do something 
useful with it.


Thanks,
Shawn





[jira] [Commented] (LUCENE-4854) DocTermsOrd getOrdTermsEnum() buggy, lookupTerm/termsEnum is slow

2013-03-19 Thread Commit Tag Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-4854?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13606380#comment-13606380
 ] 

Commit Tag Bot commented on LUCENE-4854:


[branch_4x commit] Robert Muir
http://svn.apache.org/viewvc?view=revisionrevision=1458315

LUCENE-4854: DocTermsOrd getOrdTermsEnum() buggy, lookupTerm/termsEnum is slow






[jira] [Resolved] (LUCENE-4854) DocTermsOrd getOrdTermsEnum() buggy, lookupTerm/termsEnum is slow

2013-03-19 Thread Robert Muir (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-4854?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Muir resolved LUCENE-4854.
-

Resolution: Fixed





Re: questions about solr.xml/solr.properties

2013-03-19 Thread Mark Miller

On Mar 19, 2013, at 9:44 AM, Erick Erickson erickerick...@gmail.com wrote:

 So are you talking about backing all this out of 4x or just taking the 
 properties bits out? Because backing this all out of the 4x code line will 
 be...er...challenging, much more challenging than just yanking the properties 
 file bits out, this latter is much easier whereas reverting will be a pain, 
 I'll need some help there. I'm not really willing to re-do all of the bits 
 around preventing loading/unloading core operations occurring at the same 
 time from the ground up, it was too painful to do the first time. But this 
 isn't really a problem I don't think, since that code is only relevant if you 
 specify the lazy core loading options in your cores.

I'm bringing up both things. The fact that neither our tests nor the example use 
this new format yet scares me. It really feels like this didn't bake in 5x at 
all if it wasn't part of the example or driving tests. The only baking it had 
was its own unit tests, but what it really needed was dev/user face time.

I'm open to driving this out in 4.3, but I'm pointing out where things are at 
and that people should weigh in so we can solidify this and actually get the 
changes into developers and users hands (beyond just the refactoring that has 
gone into the old style back compat) or pull it from 4x until it gets some real 
user time in 5x.

 
 I don't quite get what you mean by solr.xml and solrconfig.xml being similar, 
 just both being xml files?

I think they should both be xml files and with similar style, yes. Or 
eventually both changed to another format, but in my mind they should be 
consistent.

 And are we talking a discovery/no discovery attribute in, say, the cores 
 tag?

Something like that - Robert tossed out some ideas, but we would have to work 
through what is best.

 Nor do I quite get the bits around threading. The stress code does test the 
 threading I think.

There are shared variables that are accessed without locks - at least there 
were a couple of days ago when I took a casual look. That's a no-no. It might 
have to do with some of the random stress test fails we still have, or it might 
not get tickled by the stress tests at all - it's a bug in either case, so I 
won't be happy with it until that code has gotten some more review.

In particular, there is some code that says "don't lock for these, it causes 
deadlock" - that's the biggest one I want to address. We can't sacrifice safety 
in accessing shared vars to avoid deadlock - we have to solve both. Now, I have 
not looked in a few days, perhaps things have shifted, but I'd like that code 
to get a solid review by someone else who knows CoreContainer, if possible.
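A generic illustration of the concern (not CoreContainer code, names invented): every read and write of a shared variable must happen under the same lock, and dropping the lock to dodge a deadlock just trades one bug for another.

```java
import java.util.concurrent.locks.ReentrantLock;

public class GuardedRegistry {
    private final ReentrantLock lock = new ReentrantLock();
    private int openCores; // shared variable: every access goes through the lock

    void increment() {
        lock.lock();
        try { openCores++; } finally { lock.unlock(); }
    }

    int count() {
        lock.lock();
        try { return openCores; } finally { lock.unlock(); }
    }

    public static void main(String[] args) throws InterruptedException {
        GuardedRegistry reg = new GuardedRegistry();
        Runnable task = () -> { for (int i = 0; i < 10_000; i++) reg.increment(); };
        Thread a = new Thread(task), b = new Thread(task);
        a.start(); b.start();
        a.join(); b.join();
        // With locking this is deterministically 20000; with unguarded
        // reads/writes, increments can be lost.
        System.out.println(reg.count());
    }
}
```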

 It also specifically does both properties and xml versions of solr.###, so it 
 is part of our normal testing. Whether we can harden it enough for 4.3 is 
 certainly a legitimate question.

This is my worry. I know I've discussed a 4.3 with Robert within a relatively 
short time. Meanwhile, this large change that we will be locked into with back 
compat is still not part of the example or driving our tests. I'm worried it 
should have been put in 5x first, and made part of the example at least. I 
realize it may be too late for that now, which is why I'm straddling this 
middle path of: I don't know if this is ready for 4.3, so let's either pull it 
or try and make it ready. I don't think you did anything wrong to get us here; 
I'm just stating what I feel given where we are.

The real key to making it ready in my mind is to change the example and force 
more people into facing the new setup.

 
 So are we reconsidering what I had a couple of months ago with pluggable core 
 discovery? The default would be to do things as they're done now, we'd 
 provide an implementation that walked the directory structure, and dropping 
 in a different version that, say, queried ZK or a database would be pretty 
 straight-forward.
 
 but if the solr.properties thing is to be killed, now is the time.

I think we are only talking about the format of the core container 
configuration. I don't think it really changes much of what you have done or 
wanted to do - it's simply the format of the prop file.

And getting more people to use that path so that we know it's the right one 
and shake out more problems/bugs!

I'm less concerned with the stability (although that concerns me, since the new 
way is mostly hidden right now and so not getting hit by devs or trunk users 
unless they really go out of their way) and more concerned about being locked 
into what we do here. So I'm hoping some other committers are forced to look 
closer at this before we release it.

- Mark

 
 
 
 
 On Mon, Mar 18, 2013 at 8:42 PM, Mark Miller markrmil...@gmail.com wrote:
 I've been thinking about this in the background. I tossed out some of the 
 idea's that were implemented here, and I also thought a properties file 
 rather than xml might make the most sense for container config. After all, 
 

[jira] [Commented] (SOLR-4589) 4.x + enableLazyFieldLoading + large multivalued fields + varying fl = pathological CPU load response time

2013-03-19 Thread Yonik Seeley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4589?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13606388#comment-13606388
 ] 

Yonik Seeley commented on SOLR-4589:


Still trying to wrap my head around how all this works... particularly the 
synchronization, and why the old implementation, or the patch, is thread safe.

I'm not sure we should be using weak references though.  They could cause 
problems at the rate they could be generated.

 4.x + enableLazyFieldLoading + large multivalued fields + varying fl = 
 pathological CPU load & response time
 

 Key: SOLR-4589
 URL: https://issues.apache.org/jira/browse/SOLR-4589
 Project: Solr
  Issue Type: Bug
Affects Versions: 4.0, 4.1, 4.2
Reporter: Hoss Man
Assignee: Hoss Man
 Fix For: 4.2.1

 Attachments: SOLR-4589.patch, SOLR-4589.patch, 
 test-just-queries.out__4.0.0_mmap_lazy_using36index.txt, 
 test-just-queries.sh, test.out__3.6.1_mmap_lazy.txt, 
 test.out__3.6.1_mmap_nolazy.txt, test.out__3.6.1_nio_lazy.txt, 
 test.out__3.6.1_nio_nolazy.txt, test.out__4.0.0_mmap_lazy.txt, 
 test.out__4.0.0_mmap_nolazy.txt, test.out__4.0.0_nio_lazy.txt, 
 test.out__4.0.0_nio_nolazy.txt, test.out__4.2.0_mmap_lazy.txt, 
 test.out__4.2.0_mmap_nolazy.txt, test.out__4.2.0_nio_lazy.txt, 
 test.out__4.2.0_nio_nolazy.txt, test.sh


 Following up on a [user report of extreme CPU usage in 
 4.1|http://mail-archives.apache.org/mod_mbox/lucene-solr-user/201302.mbox/%3c1362019882934-4043543.p...@n3.nabble.com%3E],
  I've discovered that the following combination of factors can result in 
 extreme CPU usage and excessively long HTTP response times...
 * Solr 4.x (tested 3.6.1, 4.0.0, and 4.2.0)
 * enableLazyFieldLoading == true (included in example solrconfig.xml)
 * documents with a large number of values in multivalued fields (eg: tested 
 ~10-15K values)
 * multiple requests returning the same doc with different fl lists
 I haven't dug into the root cause yet, but the essential observation is: if 
 lazy loading is used in 4.x, then once a document has been fetched with an 
 initial fl list X, subsequent requests for that document using a different fl 
 list Y can be many orders of magnitude slower (while pegging the CPU) -- even 
 if those same requests using fl Y uncached (or w/o lazy loading) would be 
 extremely fast.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-4608) Update Log replay should use the default processor chain

2013-03-19 Thread Yonik Seeley (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-4608?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yonik Seeley updated SOLR-4608:
---

Attachment: SOLR-4608.patch

Here's a patch that uses the default chain for both log replaying and peer sync 
replaying.
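As a toy illustration of the fix's idea (this is not Solr's actual API, just a hedged sketch with made-up names): replay should ask a registry for the chain the core is configured with, treating a null name as "use the default", instead of hard-wiring specific processor factories.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.UnaryOperator;

// Toy model of "use the default chain": a registry of named chains where
// asking for null resolves to whatever chain was configured as the default.
class ChainRegistry {
    private final Map<String, UnaryOperator<String>> chains = new HashMap<>();
    private final String defaultName;

    ChainRegistry(String defaultName) {
        this.defaultName = defaultName;
    }

    void register(String name, UnaryOperator<String> chain) {
        chains.put(name, chain);
    }

    // null selects the configured default, so replay code never needs to
    // know which concrete processors the user configured.
    UnaryOperator<String> getChain(String name) {
        return chains.get(name == null ? defaultName : name);
    }
}
```

The point is only the lookup shape: replay and peer sync both resolve the chain by name (null = default) rather than instantiating factories themselves.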

 Update Log replay should use the default processor chain
 

 Key: SOLR-4608
 URL: https://issues.apache.org/jira/browse/SOLR-4608
 Project: Solr
  Issue Type: Bug
  Components: SolrCloud
Affects Versions: 4.1, 4.2
Reporter: ludovic Boutros
Assignee: Yonik Seeley
 Fix For: 4.3, 5.0, 4.2.1

 Attachments: SOLR-4608.patch


 If a processor chain is used with custom processors, 
 they are not used in case of node failure during log replay.
 Here is the code:
 {code:title=UpdateLog.java|borderStyle=solid}
 public void doReplay(TransactionLog translog) {
   try {
     loglog.warn("Starting log replay " + translog + " active=" + activeLog + " starting pos=" + recoveryInfo.positionOfStart);
     tlogReader = translog.getReader(recoveryInfo.positionOfStart);
     // NOTE: we don't currently handle a core reload during recovery.
     // This would cause the core to change underneath us.
     // TODO: use the standard request factory? We won't get any custom
     // configuration instantiating this way.
     RunUpdateProcessorFactory runFac = new RunUpdateProcessorFactory();
     DistributedUpdateProcessorFactory magicFac = new DistributedUpdateProcessorFactory();
     runFac.init(new NamedList());
     magicFac.init(new NamedList());
     UpdateRequestProcessor proc = magicFac.getInstance(req, rsp, runFac.getInstance(req, rsp, null));
 {code} 
 I think this is a big issue, because a lot of people will only discover it 
 when a node crashes - in the best case... and by then it's too late.
 It means to me that processor chains are not usable with SolrCloud currently.
  

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-4845) Add AnalyzingInfixSuggester

2013-03-19 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-4845?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13606405#comment-13606405
 ] 

Robert Muir commented on LUCENE-4845:
-

This seems to not blow up for title-like fields:
I did a quick test of geonames (8.3M place names, just using ID as the weight)

{noformat}
AnalyzingSuggester: 117444563 bytes, 74887ms build time
InfixingSuggester: 302127665 bytes, 125895ms build time
{noformat}

I think realistically an N limit can work well here. After such a limit, the 
infixing is pretty crazy anyway, and really infixing should punish the weight 
in some way, since it's a very scary edit operation to do to the user.

Plus you get optional fuzziness and real phrasing works too :)

 Add AnalyzingInfixSuggester
 ---

 Key: LUCENE-4845
 URL: https://issues.apache.org/jira/browse/LUCENE-4845
 Project: Lucene - Core
  Issue Type: Improvement
  Components: modules/spellchecker
Reporter: Michael McCandless
Assignee: Michael McCandless
 Fix For: 5.0, 4.3

 Attachments: infixSuggest.png, LUCENE-4845.patch, LUCENE-4845.patch, 
 LUCENE-4845.patch


 Our current suggester impls do prefix matching of the incoming text
 against all compiled suggestions, but in some cases it's useful to
 allow infix matching.  E.g, Netflix does infix suggestions in their
 search box.
 I did a straightforward impl, just using a normal Lucene index, and
 using PostingsHighlighter to highlight matching tokens in the
 suggestions.
 I think this likely only works well when your suggestions have a
 strong prior ranking (weight input to build), eg Netflix knows
 the popularity of movies.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-4836) SimpleRateLimiter#pause returns target time stamp instead of sleep time.

2013-03-19 Thread Simon Willnauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-4836?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Simon Willnauer updated LUCENE-4836:


Fix Version/s: 4.2.1

 SimpleRateLimiter#pause returns target time stamp instead of sleep time.
 

 Key: LUCENE-4836
 URL: https://issues.apache.org/jira/browse/LUCENE-4836
 Project: Lucene - Core
  Issue Type: Bug
  Components: core/store
Affects Versions: 4.1, 4.2
Reporter: Simon Willnauer
Assignee: Simon Willnauer
Priority: Minor
 Fix For: 5.0, 4.3, 4.2.1

 Attachments: LUCENE-4836.patch


 SimpleRateLimiter#pause is supposed to return the time it actually spent 
 sleeping, but it returns the absolute time in nanos it is supposed to sleep 
 until. This causes some problems in ES due to long overflows; here is the 
 original issue reported by a user: 
 https://github.com/elasticsearch/elasticsearch/issues/2785
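The correct semantics can be sketched with a self-contained toy (names here are illustrative, not Lucene's actual code): pause should return the nanoseconds actually spent sleeping, never the absolute target timestamp, so callers can safely accumulate the return values.

```java
// Sketch of a rate limiter whose pause() returns elapsed sleep time.
// Returning the absolute target timestamp (the bug) makes accumulated
// totals overflow, since nanoTime() values are huge.
class RateLimiterSketch {
    private final double mbPerSec;
    private long lastNS;

    RateLimiterSketch(double mbPerSec) {
        this.mbPerSec = mbPerSec;
        this.lastNS = System.nanoTime();
    }

    long pause(long bytes) {
        long startNS = System.nanoTime();
        double secToPause = (bytes / 1024.0 / 1024.0) / mbPerSec;
        long targetNS = lastNS + (long) (1_000_000_000L * secToPause);
        lastNS = Math.max(targetNS, startNS); // don't accumulate debt (sketch choice)
        if (targetNS <= startNS) {
            return 0; // already behind schedule, no pause needed
        }
        long toSleepNS = targetNS - startNS;
        try {
            Thread.sleep(toSleepNS / 1_000_000L, (int) (toSleepNS % 1_000_000L));
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return System.nanoTime() - startNS; // elapsed time, NOT targetNS
    }
}
```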

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-4608) Update Log replay should use the default processor chain

2013-03-19 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4608?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13606411#comment-13606411
 ] 

Mark Miller commented on SOLR-4608:
---

Report back soon and hopefully we can get this in 4.2.1.

 Update Log replay should use the default processor chain
 

 Key: SOLR-4608
 URL: https://issues.apache.org/jira/browse/SOLR-4608
 Project: Solr
  Issue Type: Bug
  Components: SolrCloud
Affects Versions: 4.1, 4.2
Reporter: ludovic Boutros
Assignee: Yonik Seeley
 Fix For: 4.3, 5.0, 4.2.1

 Attachments: SOLR-4608.patch


 If a processor chain is used with custom processors, 
 they are not used in case of node failure during log replay.
 Here is the code:
 {code:title=UpdateLog.java|borderStyle=solid}
 public void doReplay(TransactionLog translog) {
   try {
     loglog.warn("Starting log replay " + translog + " active=" + activeLog + " starting pos=" + recoveryInfo.positionOfStart);
     tlogReader = translog.getReader(recoveryInfo.positionOfStart);
     // NOTE: we don't currently handle a core reload during recovery.
     // This would cause the core to change underneath us.
     // TODO: use the standard request factory? We won't get any custom
     // configuration instantiating this way.
     RunUpdateProcessorFactory runFac = new RunUpdateProcessorFactory();
     DistributedUpdateProcessorFactory magicFac = new DistributedUpdateProcessorFactory();
     runFac.init(new NamedList());
     magicFac.init(new NamedList());
     UpdateRequestProcessor proc = magicFac.getInstance(req, rsp, runFac.getInstance(req, rsp, null));
 {code} 
 I think this is a big issue, because a lot of people will only discover it 
 when a node crashes - in the best case... and by then it's too late.
 It means to me that processor chains are not usable with SolrCloud currently.
  

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: questions about solr.xml/solr.properties

2013-03-19 Thread Erick Erickson
bq: In particular, there is some code that says, don't lock for these it
causes deadlock

There's no place where I intentionally put in code that didn't lock shared
variables etc. to avoid deadlock and then crossed my fingers hoping all
went well. Which says nothing about whether there are inadvertent places of
course.

There are some notes about why locks are obtained long enough to copy from
shared variables to local variables since locking the shared stuff while,
say, closing cores would (and did) lead to deadlock, e.g.
CoreContainer.clearMaps(). That's a case where the rest of the locking is
supposed to suffice.
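The copy-under-lock pattern described above can be sketched as follows (a minimal illustration with hypothetical names, not Solr's actual CoreContainer code): the lock is held only long enough to snapshot and clear the shared map, and the slow, potentially reentrant work happens on the local copy outside the lock.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Hold the lock only while snapshotting the shared map; close the cores
// from the local copy outside the lock, since close() may itself take
// other locks and closing under this lock is what deadlocked.
class CoreMap {
    private final Map<String, AutoCloseable> cores = new HashMap<>();

    synchronized void put(String name, AutoCloseable core) {
        cores.put(name, core);
    }

    void closeAll() throws Exception {
        List<AutoCloseable> snapshot;
        synchronized (this) {
            snapshot = new ArrayList<>(cores.values());
            cores.clear();
        }
        // Outside the lock: safe even if close() re-enters this class.
        for (AutoCloseable core : snapshot) {
            core.close();
        }
    }

    synchronized int size() {
        return cores.size();
    }
}
```

The trade-off is that a core added between the snapshot and the closes is missed by this pass, which is why the surrounding locking discipline still has to suffice.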

And I remember one spot, although I can't find it right now, where I
intentionally did not obtain a lock but the code is, as I remember,
inaccessible except from code that _does_ have a lock.

So point me at the code you're uncomfortable with and I'll be happy to look
it over again, it's possible that my cryptic comments which were mostly to
myself are misleading you. Of course it's also quite possible that you're
totally right and I screwed up, wouldn't be the first time.

I was kind of sad to see the pluggable core descriptor go, it seemed like
kind of a neat thing. But it didn't really have a compelling use-case over
auto-discovery so there's no good reason to bring it back. I suppose if we
bring it back (not suggesting it now, mind you) that we could use the
extracted manipulation in ConfigSolrXmlBackCompat (which will be renamed if
we pull the solr.properties file) as the template for an interface, but
that's for later.

But the properties way of doing things did seem awkward, so I'm not against
yanking it. Much of the other code is there (I'm thinking about all of the
pending core operations) to address shortcomings that have been there for a
while. We've been able to lazily load/unload cores since 4.1; I believe the 
stress test running against 4.1 would _not_ be pretty, so taking all that 
out seems like a mistake.




On Tue, Mar 19, 2013 at 8:02 AM, Mark Miller markrmil...@gmail.com wrote:


 On Mar 19, 2013, at 9:44 AM, Erick Erickson erickerick...@gmail.com
 wrote:

  So are you talking about backing all this out of 4x or just taking the
 properties bits out? Because backing this all out of the 4x code line will
 be...er...challenging, much more challenging than just yanking the
 properties file bits out, this latter is much easier whereas reverting will
 be a pain, I'll need some help there. I'm not really willing to re-do all
 of the bits around preventing loading/unloading core operations occurring
 at the same time from the ground up, it was too painful to do the first
 time. But this isn't really a problem I don't think, since that code is
 only relevant if you specify the lazy core loading options in your cores.

 I'm bringing up both things. The fact that neither our tests nor the example 
 use this new format yet scares me. It really feels like this didn't bake in 5x 
 at all if it wasn't part of the example or driving tests. The only baking it 
 had was its own unit tests, but what it really needed was dev/user face time.

 I'm open to driving this out in 4.3, but I'm pointing out where things are
 at and that people should weigh in so we can solidify this and actually get
 the changes into developers and users hands (beyond just the refactoring
 that has gone into the old style back compat) or pull it from 4x until it
 gets some real user time in 5x.

 
  I don't quite get what you mean by solr.xml and solrconfig.xml being
 similar, just both being xml files?

 I think they should both be xml files and with similar style, yes. Or
 eventually both changed to another format, but in my mind they should be
 consistent.

  And are we talking a discovery/no discovery attribute in, say, the
 cores tag?

 Something like that - Robert tossed out some ideas, but we would have to
 work through what is best.

  Nor do I quite get the bits around threading. The stress code does test
 the threading I think.

 There are shared variables that are accessed without locks - at least there 
 were a couple of days ago when I took a casual look. That's a no-no. It might 
 have to do with some of the random stress test failures we still have, even if 
 it's not tickled by the stress tests - it's a bug in either case, so I won't 
 be happy with it until that code has gotten some more review.

 In particular, there is some code that says, don't lock for these it
 causes deadlock - that's the biggest one I want to address - we can't
 sacrifice safety in accessing shared vars to avoid deadlock - we have to
 solve both. Now I have not looked in a few days, perhaps things have
 shifted, but I'd like that code to get a solid review by someone else that
 knows CoreContainer if possible.

  It also specifically does both properties and xml versions of solr.###,
 so it is part of our normal testing. Whether we can harden it enough for
 4.3 is certainly a legitimate question.

 This is my worry. I know I've discussed a 4.3 

[jira] [Commented] (LUCENE-4848) Add Directory implementations using NIO2 APIs

2013-03-19 Thread Michael Poindexter (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-4848?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13606427#comment-13606427
 ] 

Michael Poindexter commented on LUCENE-4848:


Thanks for the demonstration Uwe!  It was very helpful as I misunderstood our 
earlier conversation and was attempting to change the internals of FSDirectory 
to use Path (instead of File) while keeping the public interface the same 
(actually, I was done, but waiting for the tests to run before attaching the 
patch, so your timing was perfect :) )

I've attached a patch in the same spirit as your MMapDirectory patch that makes 
some minor changes to FSDirectory to allow different FSIndexInput and 
FSIndexOutput subclasses that use different methods of accessing the file (i.e. 
RandomAccessFile vs. FileChannel).  It updates MMapDirectory, SimpleFSDirectory 
and NIOFSDirectory to use appropriate subclasses, and adds a new 
AsyncFSDirectory class.

 Add Directory implementations using NIO2 APIs
 -

 Key: LUCENE-4848
 URL: https://issues.apache.org/jira/browse/LUCENE-4848
 Project: Lucene - Core
  Issue Type: Task
Reporter: Michael Poindexter
Assignee: Uwe Schindler
Priority: Minor
 Attachments: jdk7directory.zip, LUCENE-4848-MMapDirectory.patch, 
 LUCENE-4848.patch


 I have implemented 3 Directory subclasses using NIO2 API's (available on 
 JDK7).  These may be suitable for inclusion in a Lucene contrib module.
 See the mailing list at http://lucene.markmail.org/thread/lrv7miivzmjm3ml5 
 for more details about this code and the advantages it provides.
 The code is attached as a zip to this issue.  I'll be happy to make any 
 changes requested.  I've included some minimal smoke tests, but any help in 
 how to use the normal Lucene tests to perform more thorough testing would be 
 appreciated.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: questions about solr.xml/solr.properties

2013-03-19 Thread Mark Miller

On Mar 19, 2013, at 11:40 AM, Erick Erickson erickerick...@gmail.com wrote:

 bq: In particular, there is some code that says, don't lock for these it 
 causes deadlock 
 
 There's no place where I intentionally put in code that didn't lock shared 
 variables etc. to avoid deadlock and then crossed my fingers hoping all went 
 well. Which says nothing about whether there are inadvertent places of course.
 
 There are some notes about why locks are obtained long enough to copy from 
 shared variables to local variables since locking the shared stuff while, 
 say, closing cores would (and did) lead to deadlock, e.g. 
 CoreContainer.clearMaps(). That's a case where the rest of the locking is 
 supposed to suffice.
 
 And I remember one spot, although I can't find it right now, where I 
 intentionally did not obtain a lock but the code is, as I remember, 
 inaccessible except from code that _does_ have a lock.
 
 So point me at the code you're uncomfortable with and I'll be happy to look 
 it over again, it's possible that my cryptic comments which were mostly to 
 myself are misleading you. Of course it's also quite possible that you're 
 totally right and I screwed up, wouldn't be the first time...

I haven't had a chance to do a real review, so I'll wait to point out anything 
specifically - my main point is that I think the code could use some review, and 
I think to this point it has not had any, at least that I've picked up on. It's 
been on my list a long time and I still hope to get to it, but the fact that no 
one else has really looked yet makes me less comfortable rushing such a large 
change out. I agree that the stress test is comforting, but it still has some 
random fails - perhaps just test fails, but it all calls out for some more eyes 
because it's a rather large, central change. This is a central part of Solr 
that everyone uses. 

It's not that I don't trust you, I just know how big a change this is and I 
think it deserves a second pair of eyes at least in some spots. I'm mostly 
trying to frame the future when I talk about 5x - something like this seems 
like it should have been in 5x prominently for a while. At this point, it may 
be more work to go backward than forward on that front. I think about this 
because I've been into the idea of releasing fairly often on the 4.x branch - 
and Robert is a big releaser as well - so I'm going to be paying close 
attention to issues that make it a little harder to just release at any time.

 
 I was kind of sad to see the pluggable core descriptor go, it seemed like 
 kind of a neat thing. But it didn't really have a compelling use-case over 
 auto-discovery so there's no good reason to bring it back. I suppose if we 
 bring it back (not suggesting it now, mind you) that we could use the 
 extracted manipulation in ConfigSolrXmlBackCompat (which will be renamed if 
 we pull the solr.properties file) as the template for an interface, but 
 that's for later.
 
 But the properties way of doing things did seem awkward, so I'm not against 
 yanking it. Much of the other code is there (I'm thinking about all of the 
 pending core operations) to address shortcomings that have been there for a 
 while. We've been able to lazily load/unload cores since 4.1, I believe the 
 stress test running against 4.1 would _not_ be pretty so taking all that out 
 seems like a mistake.

If we can come to consensus on the next move, I'm happy to help dig into some 
of this. I'm still hopeful that it might be a somewhat minor change since it's 
really just altering the on disk format of the config file?

- Mark


 
 
 
 
 On Tue, Mar 19, 2013 at 8:02 AM, Mark Miller markrmil...@gmail.com wrote:
 
 On Mar 19, 2013, at 9:44 AM, Erick Erickson erickerick...@gmail.com wrote:
 
  So are you talking about backing all this out of 4x or just taking the 
  properties bits out? Because backing this all out of the 4x code line will 
  be...er...challenging, much more challenging than just yanking the 
  properties file bits out, this latter is much easier whereas reverting will 
  be a pain, I'll need some help there. I'm not really willing to re-do all 
  of the bits around preventing loading/unloading core operations occurring 
  at the same time from the ground up, it was too painful to do the first 
  time. But this isn't really a problem I don't think, since that code is 
  only relevant if you specify the lazy core loading options in your cores.
 
 I'm bringing up both things. The fact that neither our tests nor the example 
 use this new format yet scares me. It really feels like this didn't bake in 5x 
 at all if it wasn't part of the example or driving tests. The only baking it 
 had was its own unit tests, but what it really needed was dev/user face time.
 
 I'm open to driving this out in 4.3, but I'm pointing out where things are at 
 and that people should weigh in so we can solidify this and actually get the 
 changes into developers and users hands (beyond just 

[jira] [Commented] (LUCENE-4848) Add Directory implementations using NIO2 APIs

2013-03-19 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-4848?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13606439#comment-13606439
 ] 

Uwe Schindler commented on LUCENE-4848:
---

bq. Thanks for the demonstration Uwe! It was very helpful as I misunderstood 
our earlier conversation and was attempting to change the internals of 
FSDirectory to use Path (instead of File) while keeping the public interface 
the same (actually, I was done, but waiting for the tests to run before 
attaching the patch, so your timing was perfect :) )

We can move to Path later, but before doing that we should get this in as a 
first step. This issue is unrelated.

I just skimmed your patch; this looks quite good. I have to look closer into 
it and will report back later. I have seen that you almost completely reused my 
patch - thanks! And you used try-with-resources to open, mmap, and close the 
channel - nice!
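The open/mmap/close shape mentioned above looks roughly like this (a minimal sketch, not the actual patch): try-with-resources closes the channel as soon as the mapping exists, and the MappedByteBuffer remains valid after the channel is closed.

```java
import java.io.IOException;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class MmapSketch {
    // Open the file, map the whole thing read-only, and let
    // try-with-resources close the channel; the mapping survives.
    static MappedByteBuffer map(Path path) throws IOException {
        try (FileChannel ch = FileChannel.open(path, StandardOpenOption.READ)) {
            return ch.map(FileChannel.MapMode.READ_ONLY, 0, ch.size());
        }
    }
}
```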

To run all Lucene+Solr tests with a specific directory implementation, use e.g. 
ant test -Dtests.directory=MMapDirectory; otherwise Lucene uses RAMDirectory in 
most cases and only rarely other ones. That way you should also be able to test 
your new directory (it might be necessary to add a hook for instantiating it 
inside LuceneTestCase, where -Dtests.directory is parsed).

 Add Directory implementations using NIO2 APIs
 -

 Key: LUCENE-4848
 URL: https://issues.apache.org/jira/browse/LUCENE-4848
 Project: Lucene - Core
  Issue Type: Task
Reporter: Michael Poindexter
Assignee: Uwe Schindler
Priority: Minor
 Attachments: jdk7directory.zip, LUCENE-4848-MMapDirectory.patch, 
 LUCENE-4848.patch


 I have implemented 3 Directory subclasses using NIO2 API's (available on 
 JDK7).  These may be suitable for inclusion in a Lucene contrib module.
 See the mailing list at http://lucene.markmail.org/thread/lrv7miivzmjm3ml5 
 for more details about this code and the advantages it provides.
 The code is attached as a zip to this issue.  I'll be happy to make any 
 changes requested.  I've included some minimal smoke tests, but any help in 
 how to use the normal Lucene tests to perform more thorough testing would be 
 appreciated.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-4848) Add Directory implementations using NIO2 APIs

2013-03-19 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-4848?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13606443#comment-13606443
 ] 

Uwe Schindler commented on LUCENE-4848:
---

One small thing: the protected method FSIndexInput#length() does not need the 
generic FD - it could be parameterless? The FD is known to the subclass, isn't 
it?

 Add Directory implementations using NIO2 APIs
 -

 Key: LUCENE-4848
 URL: https://issues.apache.org/jira/browse/LUCENE-4848
 Project: Lucene - Core
  Issue Type: Task
Reporter: Michael Poindexter
Assignee: Uwe Schindler
Priority: Minor
 Attachments: jdk7directory.zip, LUCENE-4848-MMapDirectory.patch, 
 LUCENE-4848.patch


 I have implemented 3 Directory subclasses using NIO2 API's (available on 
 JDK7).  These may be suitable for inclusion in a Lucene contrib module.
 See the mailing list at http://lucene.markmail.org/thread/lrv7miivzmjm3ml5 
 for more details about this code and the advantages it provides.
 The code is attached as a zip to this issue.  I'll be happy to make any 
 changes requested.  I've included some minimal smoke tests, but any help in 
 how to use the normal Lucene tests to perform more thorough testing would be 
 appreciated.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-4589) 4.x + enableLazyFieldLoading + large multivalued fields + varying fl = pathological CPU load response time

2013-03-19 Thread Hoss Man (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4589?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13606458#comment-13606458
 ] 

Hoss Man commented on SOLR-4589:


[~yo...@apache.org]: I can remove the weak references easily enough, but can 
you elaborate on what concerns you have about the thread safety?

Can you give me an example of a sequence of (parallel) events that you think 
would be problematic, so I can try to address it?

 4.x + enableLazyFieldLoading + large multivalued fields + varying fl = 
 pathological CPU load & response time
 

 Key: SOLR-4589
 URL: https://issues.apache.org/jira/browse/SOLR-4589
 Project: Solr
  Issue Type: Bug
Affects Versions: 4.0, 4.1, 4.2
Reporter: Hoss Man
Assignee: Hoss Man
 Fix For: 4.2.1

 Attachments: SOLR-4589.patch, SOLR-4589.patch, 
 test-just-queries.out__4.0.0_mmap_lazy_using36index.txt, 
 test-just-queries.sh, test.out__3.6.1_mmap_lazy.txt, 
 test.out__3.6.1_mmap_nolazy.txt, test.out__3.6.1_nio_lazy.txt, 
 test.out__3.6.1_nio_nolazy.txt, test.out__4.0.0_mmap_lazy.txt, 
 test.out__4.0.0_mmap_nolazy.txt, test.out__4.0.0_nio_lazy.txt, 
 test.out__4.0.0_nio_nolazy.txt, test.out__4.2.0_mmap_lazy.txt, 
 test.out__4.2.0_mmap_nolazy.txt, test.out__4.2.0_nio_lazy.txt, 
 test.out__4.2.0_nio_nolazy.txt, test.sh


 Following up on a [user report of extreme CPU usage in 
 4.1|http://mail-archives.apache.org/mod_mbox/lucene-solr-user/201302.mbox/%3c1362019882934-4043543.p...@n3.nabble.com%3E],
  I've discovered that the following combination of factors can result in 
 extreme CPU usage and excessively long HTTP response times...
 * Solr 4.x (tested 3.6.1, 4.0.0, and 4.2.0)
 * enableLazyFieldLoading == true (included in example solrconfig.xml)
 * documents with a large number of values in multivalued fields (eg: tested 
 ~10-15K values)
 * multiple requests returning the same doc with different fl lists
 I haven't dug into the root cause yet, but the essential observation is: if 
 lazy loading is used in 4.x, then once a document has been fetched with an 
 initial fl list X, subsequent requests for that document using a different fl 
 list Y can be many orders of magnitude slower (while pegging the CPU) -- even 
 if those same requests using fl Y uncached (or w/o lazy loading) would be 
 extremely fast.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-4848) Add Directory implementations using NIO2 APIs

2013-03-19 Thread Michael Poindexter (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-4848?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13606462#comment-13606462
 ] 

Michael Poindexter commented on LUCENE-4848:


bq. One small thing: The protected method FSIndexInput#length() does not need the 
generic FD, it should be parameterless? The FD is known to the subclass, isn't 
it?

2 reasons not to:
1.) I think there is already a parameterless length() method that behaves 
slightly differently.  This length(T) is intended to extract the full length 
from the file accessor, while length() returns the configured length of the 
slice.
2.) It is called from the constructor, so it might be considered bad practice 
to access member variables since that can be error prone.
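The constructor concern above can be seen in a self-contained demo (class names are illustrative, not Lucene's): when a superclass constructor calls an overridable method, the override runs before the subclass's fields have been assigned.

```java
// Illustrative demo (hypothetical names, not from Lucene): a superclass
// constructor that calls an overridable method observes the subclass's
// fields before they are initialized.
class Base {
    final long seenDuringConstruction;

    Base() {
        // length() dispatches to the subclass override, whose fields are
        // still at their default values (0 for long) at this point.
        seenDuringConstruction = length();
    }

    long length() {
        return -1;
    }
}

class Sub extends Base {
    private final long len;

    Sub(long len) {
        this.len = len; // assigned only after Base() has already run
    }

    @Override
    long length() {
        return len;
    }
}

public class ConstructorPitfall {
    public static void main(String[] args) {
        Sub s = new Sub(42);
        System.out.println(s.seenDuringConstruction); // 0, not 42
        System.out.println(s.length());               // 42
    }
}
```

So a length(T) that is handed the file accessor explicitly avoids depending on instance state that may not exist yet.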



 Add Directory implementations using NIO2 APIs
 -

 Key: LUCENE-4848
 URL: https://issues.apache.org/jira/browse/LUCENE-4848
 Project: Lucene - Core
  Issue Type: Task
Reporter: Michael Poindexter
Assignee: Uwe Schindler
Priority: Minor
 Attachments: jdk7directory.zip, LUCENE-4848-MMapDirectory.patch, 
 LUCENE-4848.patch


 I have implemented 3 Directory subclasses using NIO2 API's (available on 
 JDK7).  These may be suitable for inclusion in a Lucene contrib module.
 See the mailing list at http://lucene.markmail.org/thread/lrv7miivzmjm3ml5 
 for more details about this code and the advantages it provides.
 The code is attached as a zip to this issue.  I'll be happy to make any 
 changes requested.  I've included some minimal smoke tests, but any help in 
 how to use the normal Lucene tests to perform more thorough testing would be 
 appreciated.




[jira] [Commented] (SOLR-4614) ClusterState#getSlices returns null causing NPE in ClientUtils#addSlices

2013-03-19 Thread Anshum Gupta (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4614?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13606465#comment-13606465
 ] 

Anshum Gupta commented on SOLR-4614:


Personally, I prefer not returning null. That way callers aren't forced to check 
for null after every call.
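The trade-off can be sketched with a toy registry (names are hypothetical, not the actual ClusterState API): the empty-collection style lets callers iterate without a null check.

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Toy sketch of the two API styles under discussion (hypothetical names,
// not the actual ClusterState code).
class SliceRegistry {
    private final Map<String, List<String>> slicesByCollection = new HashMap<>();

    void addSlice(String collection, String slice) {
        slicesByCollection
            .computeIfAbsent(collection, k -> new ArrayList<>())
            .add(slice);
    }

    // Null-returning style: every caller must null-check or risk an NPE.
    List<String> getSlicesOrNull(String collection) {
        return slicesByCollection.get(collection);
    }

    // Empty-collection style: always safe to iterate.
    List<String> getSlices(String collection) {
        return slicesByCollection.getOrDefault(collection, Collections.emptyList());
    }
}
```

With the second style, a caller iterating over the slices of a deleted collection simply sees zero elements instead of throwing.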

 ClusterState#getSlices returns null causing NPE in ClientUtils#addSlices
 

 Key: SOLR-4614
 URL: https://issues.apache.org/jira/browse/SOLR-4614
 Project: Solr
  Issue Type: Bug
  Components: clients - java, SolrCloud
Affects Versions: 4.1
Reporter: David Arthur
Priority: Minor

 When my program sends an UpdateRequest to a collection that has been deleted, 
 I am getting a NPE
 {code}
 java.lang.NullPointerException
 at 
 org.apache.solr.client.solrj.util.ClientUtils.addSlices(ClientUtils.java:273)
 at 
 org.apache.solr.client.solrj.impl.CloudSolrServer.request(CloudSolrServer.java:214)
 at 
 org.apache.solr.client.solrj.request.AbstractUpdateRequest.process(AbstractUpdateRequest.java:117)
 {code}
 This appears to be caused by the fact that ClusterState#getSlices is 
 returning null instead of an empty collection.
 ClusterState returning null: 
 https://github.com/apache/lucene-solr/blob/lucene_solr_4_1/solr/solrj/src/java/org/apache/solr/common/cloud/ClusterState.java#L123
 ClientUtil#addSlices iterating over a null: 
 https://github.com/apache/lucene-solr/blob/lucene_solr_4_1/solr/solrj/src/java/org/apache/solr/client/solrj/util/ClientUtils.java#L273
 I would attach a patch, but I'm not sure what the preferred style is within 
 the project (empty collection vs null checks).




[jira] [Comment Edited] (LUCENE-4848) Add Directory implementations using NIO2 APIs

2013-03-19 Thread Michael Poindexter (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-4848?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13606462#comment-13606462
 ] 

Michael Poindexter edited comment on LUCENE-4848 at 3/19/13 4:25 PM:
-

bq. One small thing: The protected method FSIndexInput#length() does not need 
the generic FD, it should be parameterless? The FD is known to the subclass, 
isn't it?

2 reasons not to:
1.) I think there is already a parameterless length() method that behaves 
slightly differently.  This length(T) is intended to extract the full length 
from the file accessor, while length() returns the configured length of the 
slice.
2.) It is called from the constructor, so it might be considered bad practice 
to access member variables since that can be error prone.

It might be good to rename this to fileLength(T) or something similar.

  was (Author: mpoindexter):
bq. One small thing: The protected method FSIndexInput#length() does not 
need the generic FD, it should be parameterless? The FD is known to the 
subclass, isn't it?

2 reasons not to:
1.) I think there is already a parameterless length() method that behaves 
slightly differently.  This length(T) is intended to extract the full length 
from the file accessor, while length() returns the configured length of the 
slice.
2.) It is called from the constructor, so it might be considered bad practice 
to access member variables since that can be error prone.


  
 Add Directory implementations using NIO2 APIs
 -

 Key: LUCENE-4848
 URL: https://issues.apache.org/jira/browse/LUCENE-4848
 Project: Lucene - Core
  Issue Type: Task
Reporter: Michael Poindexter
Assignee: Uwe Schindler
Priority: Minor
 Attachments: jdk7directory.zip, LUCENE-4848-MMapDirectory.patch, 
 LUCENE-4848.patch


 I have implemented 3 Directory subclasses using NIO2 API's (available on 
 JDK7).  These may be suitable for inclusion in a Lucene contrib module.
 See the mailing list at http://lucene.markmail.org/thread/lrv7miivzmjm3ml5 
 for more details about this code and the advantages it provides.
 The code is attached as a zip to this issue.  I'll be happy to make any 
 changes requested.  I've included some minimal smoke tests, but any help in 
 how to use the normal Lucene tests to perform more thorough testing would be 
 appreciated.




[jira] [Commented] (LUCENE-4848) Add Directory implementations using NIO2 APIs

2013-03-19 Thread Michael Poindexter (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-4848?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13606474#comment-13606474
 ] 

Michael Poindexter commented on LUCENE-4848:


bq. It would also be interesting if this patch maybe solves the 
ClosedChannelException problem on interrupt? The time window in MMap is very 
short that the bug can happen (only after opening the channel, while mmap is 
doing its work before the channel is closed).

I don't think this will change the behavior much at all.  Before the patch the 
channel was only open briefly (just long enough to do the map()), and after the 
change it is the same.

 Add Directory implementations using NIO2 APIs
 -

 Key: LUCENE-4848
 URL: https://issues.apache.org/jira/browse/LUCENE-4848
 Project: Lucene - Core
  Issue Type: Task
Reporter: Michael Poindexter
Assignee: Uwe Schindler
Priority: Minor
 Attachments: jdk7directory.zip, LUCENE-4848-MMapDirectory.patch, 
 LUCENE-4848.patch


 I have implemented 3 Directory subclasses using NIO2 API's (available on 
 JDK7).  These may be suitable for inclusion in a Lucene contrib module.
 See the mailing list at http://lucene.markmail.org/thread/lrv7miivzmjm3ml5 
 for more details about this code and the advantages it provides.
 The code is attached as a zip to this issue.  I'll be happy to make any 
 changes requested.  I've included some minimal smoke tests, but any help in 
 how to use the normal Lucene tests to perform more thorough testing would be 
 appreciated.




[jira] [Commented] (LUCENE-4848) Add Directory implementations using NIO2 APIs

2013-03-19 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-4848?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13606475#comment-13606475
 ] 

Robert Muir commented on LUCENE-4848:
-

I like this patch, thanks Michael!

 Add Directory implementations using NIO2 APIs
 -

 Key: LUCENE-4848
 URL: https://issues.apache.org/jira/browse/LUCENE-4848
 Project: Lucene - Core
  Issue Type: Task
Reporter: Michael Poindexter
Assignee: Uwe Schindler
Priority: Minor
 Attachments: jdk7directory.zip, LUCENE-4848-MMapDirectory.patch, 
 LUCENE-4848.patch


 I have implemented 3 Directory subclasses using NIO2 API's (available on 
 JDK7).  These may be suitable for inclusion in a Lucene contrib module.
 See the mailing list at http://lucene.markmail.org/thread/lrv7miivzmjm3ml5 
 for more details about this code and the advantages it provides.
 The code is attached as a zip to this issue.  I'll be happy to make any 
 changes requested.  I've included some minimal smoke tests, but any help in 
 how to use the normal Lucene tests to perform more thorough testing would be 
 appreciated.




[jira] [Commented] (SOLR-4614) ClusterState#getSlices returns null causing NPE in ClientUtils#addSlices

2013-03-19 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4614?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13606477#comment-13606477
 ] 

Mark Miller commented on SOLR-4614:
---

Perhaps a NullPointer is right? Or illegal argument exception? I have not 
looked into what that affects, but I think I hit this same thing and went with 
a fix higher up of:

{code}
if (colSlices == null) {
  throw new SolrServerException("Could not find collection: " + 
collectionName);
}
{code}

I guess it depends on what adding slices to a null slice means - my first 
thought was that it was an error.

 ClusterState#getSlices returns null causing NPE in ClientUtils#addSlices
 

 Key: SOLR-4614
 URL: https://issues.apache.org/jira/browse/SOLR-4614
 Project: Solr
  Issue Type: Bug
  Components: clients - java, SolrCloud
Affects Versions: 4.1
Reporter: David Arthur
Priority: Minor

 When my program sends an UpdateRequest to a collection that has been deleted, 
 I am getting a NPE
 {code}
 java.lang.NullPointerException
 at 
 org.apache.solr.client.solrj.util.ClientUtils.addSlices(ClientUtils.java:273)
 at 
 org.apache.solr.client.solrj.impl.CloudSolrServer.request(CloudSolrServer.java:214)
 at 
 org.apache.solr.client.solrj.request.AbstractUpdateRequest.process(AbstractUpdateRequest.java:117)
 {code}
 This appears to be caused by the fact that ClusterState#getSlices is 
 returning null instead of an empty collection.
 ClusterState returning null: 
 https://github.com/apache/lucene-solr/blob/lucene_solr_4_1/solr/solrj/src/java/org/apache/solr/common/cloud/ClusterState.java#L123
 ClientUtil#addSlices iterating over a null: 
 https://github.com/apache/lucene-solr/blob/lucene_solr_4_1/solr/solrj/src/java/org/apache/solr/client/solrj/util/ClientUtils.java#L273
 I would attach a patch, but I'm not sure what the preferred style is within 
 the project (empty collection vs null checks).




[jira] [Created] (SOLR-4615) Take out the possibility of having a solr.properties file

2013-03-19 Thread Erick Erickson (JIRA)
Erick Erickson created SOLR-4615:


 Summary: Take out the possibility of having a solr.properties file
 Key: SOLR-4615
 URL: https://issues.apache.org/jira/browse/SOLR-4615
 Project: Solr
  Issue Type: Improvement
Affects Versions: 4.3, 5.0
Reporter: Erick Erickson
Assignee: Erick Erickson


We seem to have re-thought whether deprecating solr.xml is The Right Thing To 
Do or not. The consensus seems to be that we should keep solr.xml, _not_ allow 
specifying solr.properties, but add an attribute to the <cores> tag in 
solr.xml, tentatively called autoDiscover=true|false (assume false for 4.x, 
true for 5.0?)

This really has to be done before 4.3 is cut, as in Real Soon Now.
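A hypothetical shape for the proposal (the attribute name, default, and surrounding layout are all still under discussion):

```xml
<!-- Sketch only: autoDiscover is the tentatively proposed attribute -->
<solr>
  <cores adminPath="/admin/cores" autoDiscover="false">
    <core name="collection1" instanceDir="collection1" />
  </cores>
</solr>
```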




[jira] [Commented] (SOLR-4297) Atomic update including set null=true throws uniqueKey error depending on order

2013-03-19 Thread Commit Tag Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4297?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13606482#comment-13606482
 ] 

Commit Tag Bot commented on SOLR-4297:
--

[trunk commit] Mark Robert Miller
http://svn.apache.org/viewvc?view=revision&revision=1458392

SOLR-4297: Move CHANGES entry.


 Atomic update including set null=true throws uniqueKey error depending on 
 order
 ---

 Key: SOLR-4297
 URL: https://issues.apache.org/jira/browse/SOLR-4297
 Project: Solr
  Issue Type: Bug
  Components: clients - java, update
Affects Versions: 4.0, 4.1
Reporter: Ben Pennell
Assignee: Shalin Shekhar Mangar
 Fix For: 4.3, 5.0

 Attachments: SOLR-4297.patch


 There seems to be a field order issue going on when setting a field to null 
 with a partial update.  I am running the nightly Solr 
 4.1.0.2013.01.11.08.23.02 build.  Ran into this issue using the nightly build 
 version of Solrj, including the null field fix from Solr-4133.
 Null first, unique field second (this is what is being generated by Solrj)
 {code}
 curl 'http://localhost/solr/update?commit=true' -H 'Content-type:text/xml' -d 
 '<add><doc boost="1.0">
 <field name="timestamp" update="set" null="true"/><field 
 name="id">test</field>
 </doc></add>'
 {code}
 {code}
 <?xml version="1.0" encoding="UTF-8"?>
 <response>
 <lst name="responseHeader"><int name="status">400</int><int 
 name="QTime">0</int></lst><lst name="error"><str name="msg">Document is 
 missing mandatory uniqueKey field: id</str><int name="code">400</int></lst>
 </response>
 {code}
 id first, then null field
 {code}
 curl 'http://localhost/solr/update?commit=true' -H 'Content-type:text/xml' -d 
 '<add><doc boost="1.0">
 <field name="id">test</field>
 <field name="timestamp" update="set" null="true"/>
 </doc></add>'
 {code}
 {code}
 <response>
 <lst name="responseHeader"><int name="status">0</int><int 
 name="QTime">30</int></lst>
 </response>
 {code}
 Real value first, then id
 {code}
 curl 'http://localhost/solr/update?commit=true' -H 'Content-type:text/xml' -d 
 '<add><doc boost="1.0">
 <field name="timestamp" update="set">1970-01-01T00:00:00Z</field>
 <field name="id">test</field>
 </doc></add>'
 {code}
 {code}
 <?xml version="1.0" encoding="UTF-8"?>
 <response>
 <lst name="responseHeader"><int name="status">0</int><int 
 name="QTime">28</int></lst>
 </response>
 {code}
 {code}
 Unfortunately Solrj produces this field ordering for every atomic update 
 request I do now.




[jira] [Commented] (SOLR-4297) Atomic update including set null=true throws uniqueKey error depending on order

2013-03-19 Thread Commit Tag Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4297?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13606487#comment-13606487
 ] 

Commit Tag Bot commented on SOLR-4297:
--

[branch_4x commit] Mark Robert Miller
http://svn.apache.org/viewvc?view=revision&revision=1458396

SOLR-4297: Move CHANGES entry.


 Atomic update including set null=true throws uniqueKey error depending on 
 order
 ---

 Key: SOLR-4297
 URL: https://issues.apache.org/jira/browse/SOLR-4297
 Project: Solr
  Issue Type: Bug
  Components: clients - java, update
Affects Versions: 4.0, 4.1
Reporter: Ben Pennell
Assignee: Shalin Shekhar Mangar
 Fix For: 4.3, 5.0

 Attachments: SOLR-4297.patch


 There seems to be a field order issue going on when setting a field to null 
 with a partial update.  I am running the nightly Solr 
 4.1.0.2013.01.11.08.23.02 build.  Ran into this issue using the nightly build 
 version of Solrj, including the null field fix from Solr-4133.
 Null first, unique field second (this is what is being generated by Solrj)
 {code}
 curl 'http://localhost/solr/update?commit=true' -H 'Content-type:text/xml' -d 
 '<add><doc boost="1.0">
 <field name="timestamp" update="set" null="true"/><field 
 name="id">test</field>
 </doc></add>'
 {code}
 {code}
 <?xml version="1.0" encoding="UTF-8"?>
 <response>
 <lst name="responseHeader"><int name="status">400</int><int 
 name="QTime">0</int></lst><lst name="error"><str name="msg">Document is 
 missing mandatory uniqueKey field: id</str><int name="code">400</int></lst>
 </response>
 {code}
 id first, then null field
 {code}
 curl 'http://localhost/solr/update?commit=true' -H 'Content-type:text/xml' -d 
 '<add><doc boost="1.0">
 <field name="id">test</field>
 <field name="timestamp" update="set" null="true"/>
 </doc></add>'
 {code}
 {code}
 <response>
 <lst name="responseHeader"><int name="status">0</int><int 
 name="QTime">30</int></lst>
 </response>
 {code}
 Real value first, then id
 {code}
 curl 'http://localhost/solr/update?commit=true' -H 'Content-type:text/xml' -d 
 '<add><doc boost="1.0">
 <field name="timestamp" update="set">1970-01-01T00:00:00Z</field>
 <field name="id">test</field>
 </doc></add>'
 {code}
 {code}
 <?xml version="1.0" encoding="UTF-8"?>
 <response>
 <lst name="responseHeader"><int name="status">0</int><int 
 name="QTime">28</int></lst>
 </response>
 {code}
 {code}
 Unfortunately Solrj produces this field ordering for every atomic update 
 request I do now.




[jira] [Commented] (SOLR-4196) Untangle XML-specific nature of Config and Container classes

2013-03-19 Thread Erick Erickson (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4196?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13606489#comment-13606489
 ] 

Erick Erickson commented on SOLR-4196:
--

bq: ...there are a variety of non thread safe accesses to shared variables. 
That's not really a valid way to avoid deadlocks.

Hmmm, where? I certainly didn't do this intentionally... but there's a lot of 
code here. Note that quite a bit of CoreContainer historically assumed that it 
was the only thread that was active, so some of this may be leftovers. But 
certainly the more eyes that look at this the better.

 Untangle XML-specific nature of Config and Container classes
 

 Key: SOLR-4196
 URL: https://issues.apache.org/jira/browse/SOLR-4196
 Project: Solr
  Issue Type: Improvement
  Components: Schema and Analysis
Reporter: Erick Erickson
Assignee: Erick Erickson
Priority: Minor
 Fix For: 4.3, 5.0

 Attachments: SOLR-4196.patch, SOLR-4196.patch, SOLR-4196.patch, 
 SOLR-4196.patch, SOLR-4196.patch, SOLR-4196.patch, SOLR-4196.patch, 
 SOLR-4196.patch, SOLR-4196.patch, SOLR-4196.patch, SOLR-4196.patch, 
 SOLR-4196.patch, SOLR-4196.patch, SOLR-4196.patch, StressTest.zip, 
 StressTest.zip, StressTest.zip, StressTest.zip


 sub-task for SOLR-4083. If we're going to try to obsolete solr.xml, we need 
 to pull all of the specific XML processing out of Config and Container. 
 Currently, we refer to xpaths all over the place. This JIRA is about 
 providing a thunking layer to isolate the XML-esque nature of solr.xml and 
 allow a simple properties file to be used instead which will lead, 
 eventually, to solr.xml going away.




Re: questions about solr.xml/solr.properties

2013-03-19 Thread Erick Erickson
Well, I can yank the solr.properties stuff quite quickly and put in a new
cores attribute autoDiscovery=true|false, which seems to be where we're
headed. Off the top of my head, SolrProperties would morph into something
like AutoCoreDiscovery and be called if the autoDiscovery=true property
were found in solr.xml. From there it's just yanking the detection of
solr.xml and renaming the SolrBackCompat class so it stays around,
un-deprecating it, etc. There would be some cleanup, but not much. See:
https://issues.apache.org/jira/browse/SOLR-4615

bq: I think it deserves a second pair of eyes
Three or four. I wound up changing waaay more code than I'd originally
thought I'd need to, you're not the only one with some discomfort about how
big this turned out to be

I think we should move the rest of this over to the JIRA so we have a
better record of what happened.


On Tue, Mar 19, 2013 at 8:54 AM, Mark Miller markrmil...@gmail.com wrote:


 On Mar 19, 2013, at 11:40 AM, Erick Erickson erickerick...@gmail.com
 wrote:

  bq: In particular, there is some code that says, don't lock for these it
 causes deadlock
 
  There's no place where I intentionally put in code that didn't lock
 shared variables etc. to avoid deadlock and then crossed my fingers hoping
 all went well. Which says nothing about whether there are inadvertent
 places of course.
 
  There are some notes about why locks are obtained long enough to copy
 from shared variables to local variables since locking the shared stuff
 while, say, closing cores would (and did) lead to deadlock, e.g.
 CoreContainer.clearMaps(). That's a case where the rest of the locking is
 supposed to suffice.
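The copy-then-release pattern described in the paragraph above can be sketched as follows (hypothetical names, not the actual CoreContainer code):

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Sketch of the copy-under-lock pattern (hypothetical names): hold the lock
// only long enough to snapshot and clear the shared map, then run the
// potentially re-entrant close() calls outside the lock.
class CoreRegistry {
    private final Object lock = new Object();
    private final Map<String, AutoCloseable> cores = new HashMap<>();

    void register(String name, AutoCloseable core) {
        synchronized (lock) {
            cores.put(name, core);
        }
    }

    void closeAll() throws Exception {
        List<AutoCloseable> toClose;
        synchronized (lock) {
            toClose = new ArrayList<>(cores.values()); // copy under lock
            cores.clear();
        }
        // Outside the lock: close() may call back into this registry
        // without deadlocking.
        for (AutoCloseable core : toClose) {
            core.close();
        }
    }

    int size() {
        synchronized (lock) {
            return cores.size();
        }
    }
}
```

The point is that the shared map is never touched outside the lock; only the local snapshot is, so the rest of the locking can stay simple.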
 
  And I remember one spot, although I can't find it right now, where I
 intentionally did not obtain a lock but the code is, as I remember,
 inaccessible except from code that _does_ have a lock.
 
  So point me at the code you're uncomfortable with and I'll be happy to
 look it over again, it's possible that my cryptic comments which were
 mostly to myself are misleading you. Of course it's also quite possible
 that you're totally right and I screwed up, wouldn't be the first time…..

 I haven't had a chance to do a real review, so I'll wait to point out
 anything specifically - my main point is that I think the code could use
 some review and I think to this point it has not had any, at least that
 I've picked up. It's been on my list a long time and I still hope to get to
 it, but the fact that no one else has really looked yet, it makes me less
 comfortable rushing such a large change out. I agree that the stress test
 is comforting, but it still has some random fails - perhaps just test
 fails, but it all just calls out for some more eyes because it's a rather
 large, central change. This is a central part of Solr that everyone uses.

 It's not that I don't trust you, I just know how a big a change this is
 and I think it deserves a second pair of eyes at least in some spots. I'm
 mostly trying to frame the future when I talk about 5x - something like
 this seems like it should have been in 5x prominently for a while. At this
 point, it may be more work to go backward than forward on that front. I
 think about this because I've been into the idea of releasing fairly often
 on the 4.x branch - and Robert is a big releaser as well - so I'm going to
 be paying close attention to issues that make it a little harder to just
 release at any time.

 
  I was kind of sad to see the pluggable core descriptor go, it seemed
 like kind of a neat thing. But it didn't really have a compelling use-case
 over auto-discovery so there's no good reason to bring it back. I suppose
 if we bring it back (not suggesting it now, mind you) that we could use the
 extracted manipulation in ConfigSolrXmlBackCompat (which will be renamed if
 we pull the solr.properties file) as the template for an interface, but
 that's for later.
 
  But the properties way of doing things did seem awkward, so I'm not
 against yanking it. Much of the other code is there (I'm thinking about all
 of the pending core operations) to address shortcomings that have been
 there for a while. We've been able to lazily load/unload cores since 4.1, I
 believe the stress test running against 4.1 would _not_ be pretty so taking
 all that out seems like a mistake.

 If we can come to consensus on the next move, I'm happy to help dig into
 some of this. I'm still hopeful that it might be a somewhat minor change
 since it's really just altering the on disk format of the config file?

 - Mark


 
 
 
 
  On Tue, Mar 19, 2013 at 8:02 AM, Mark Miller markrmil...@gmail.com
 wrote:
 
  On Mar 19, 2013, at 9:44 AM, Erick Erickson erickerick...@gmail.com
 wrote:
 
   So are you talking about backing all this out of 4x or just taking the
 properties bits out? Because backing this all out of the 4x code line will
 be...er...challenging, much more challenging than just yanking the
 properties file bits out, this latter is much 

[jira] [Commented] (SOLR-4615) Take out the possibility of having a solr.properties file

2013-03-19 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4615?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13606492#comment-13606492
 ] 

Mark Miller commented on SOLR-4615:
---

I actually think the current format is not right - almost everything is an 
attribute on the <cores> tag. That seems less than ideal... I wonder if we could 
just design the solr.xml from the ground up - as if it were a properties file. I 
have not donated any sweat to this yet, so just an idea, but I think we can do 
better than the current solr.xml <cores> setup.

 Take out the possibility of having a solr.properties file
 -

 Key: SOLR-4615
 URL: https://issues.apache.org/jira/browse/SOLR-4615
 Project: Solr
  Issue Type: Improvement
Affects Versions: 4.3, 5.0
Reporter: Erick Erickson
Assignee: Erick Erickson

 We seem to have re-thought whether deprecating solr.xml is The Right Thing To 
 Do or not. The consensus seems to be that we should keep solr.xml, _not_ allow 
 specifying solr.properties, but add an attribute to the <cores> tag in 
 solr.xml, tentatively called autoDiscover=true|false (assume false for 4.x, 
 true for 5.0?)
 This really has to be done before 4.3 is cut, as in Real Soon Now.




[jira] [Commented] (SOLR-4196) Untangle XML-specific nature of Config and Container classes

2013-03-19 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4196?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13606497#comment-13606497
 ] 

Mark Miller commented on SOLR-4196:
---

I'm going to do a full review soon and I'll report back.

 Untangle XML-specific nature of Config and Container classes
 

 Key: SOLR-4196
 URL: https://issues.apache.org/jira/browse/SOLR-4196
 Project: Solr
  Issue Type: Improvement
  Components: Schema and Analysis
Reporter: Erick Erickson
Assignee: Erick Erickson
Priority: Minor
 Fix For: 4.3, 5.0

 Attachments: SOLR-4196.patch, SOLR-4196.patch, SOLR-4196.patch, 
 SOLR-4196.patch, SOLR-4196.patch, SOLR-4196.patch, SOLR-4196.patch, 
 SOLR-4196.patch, SOLR-4196.patch, SOLR-4196.patch, SOLR-4196.patch, 
 SOLR-4196.patch, SOLR-4196.patch, SOLR-4196.patch, StressTest.zip, 
 StressTest.zip, StressTest.zip, StressTest.zip


 sub-task for SOLR-4083. If we're going to try to obsolete solr.xml, we need 
 to pull all of the specific XML processing out of Config and Container. 
 Currently, we refer to xpaths all over the place. This JIRA is about 
 providing a thunking layer to isolate the XML-esque nature of solr.xml and 
 allow a simple properties file to be used instead which will lead, 
 eventually, to solr.xml going away.




Re: Ability to specify 2 different query analyzers for same indexed field in Solr

2013-03-19 Thread Chris Hostetter

:   I would still like the ability to specify two different query analysis
: chains with one index, rather than having to write a custom parser for each

I'm not sure if this is a good idea, I certainly haven't thought it  
through very hard, but ...

I wonder if you could create a new FieldType subclassing TextField in 
which you would not only specify an analyzer, but also another field name 
(or prefix or something), and that FieldType would use its analyzer to 
build queries against the other field.

So for example you might configure...

  <fieldType name="no_sym_ft" class="SpoofingTextField" prefix="nosym_">
    <analyzer ... />
  </fieldType>
  <dynamicField type="no_sym_ft" name="nosym_*" indexed="false" />

...and then at query time, any use of a field name like nosym_foo would 
cause the no_sym_ft field type to use its analyzer to build queries 
against the foo field.

I think the implementation could be fairly simple, but I'm not 
certain.
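A minimal, Solr-free sketch of the redirection idea (all names hypothetical): the prefixed field name selects which analyzer runs, while the resulting clauses target the underlying field with the prefix stripped.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Function;

// Solr-free sketch of the idea (all names hypothetical): analyze the query
// text with this type's analyzer, but emit clauses against the target field
// obtained by stripping the configured prefix.
class PrefixRedirectingQueryBuilder {
    private final String prefix;
    private final Function<String, List<String>> queryAnalyzer;

    PrefixRedirectingQueryBuilder(String prefix,
                                  Function<String, List<String>> queryAnalyzer) {
        this.prefix = prefix;
        this.queryAnalyzer = queryAnalyzer;
    }

    List<String> buildClauses(String fieldName, String text) {
        // nosym_foo -> foo: the prefixed name picked this analyzer chain,
        // but the clauses are built against the underlying field.
        String target = fieldName.startsWith(prefix)
                ? fieldName.substring(prefix.length())
                : fieldName;
        List<String> clauses = new ArrayList<>();
        for (String token : queryAnalyzer.apply(text)) {
            clauses.add(target + ":" + token);
        }
        return clauses;
    }
}
```

For example, with a whitespace-lowercasing analyzer, buildClauses("nosym_foo", "Hello World") yields [foo:hello, foo:world].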


-Hoss




[jira] [Commented] (SOLR-4608) Update Log replay should use the default processor chain

2013-03-19 Thread ludovic Boutros (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4608?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13606510#comment-13606510
 ] 

ludovic Boutros commented on SOLR-4608:
---

Anything after the DistributedUpdateProcessor will not be applied, right?

Do I need to create one default processor chain with my custom processor before 
the DistributedUpdateProcessor, and the real one used by the update handler 
with my custom processor after the DistributedUpdateProcessor?

 Update Log replay should use the default processor chain
 

 Key: SOLR-4608
 URL: https://issues.apache.org/jira/browse/SOLR-4608
 Project: Solr
  Issue Type: Bug
  Components: SolrCloud
Affects Versions: 4.1, 4.2
Reporter: ludovic Boutros
Assignee: Yonik Seeley
 Fix For: 4.3, 5.0, 4.2.1

 Attachments: SOLR-4608.patch


 If a processor chain is used with custom processors, 
 they are not used in case of node failure during log replay.
 Here is the code:
 {code:title=UpdateLog.java|borderStyle=solid}
 public void doReplay(TransactionLog translog) {
   try {
     loglog.warn("Starting log replay " + translog + " active=" + activeLog
         + " starting pos=" + recoveryInfo.positionOfStart);
     tlogReader = translog.getReader(recoveryInfo.positionOfStart);
     // NOTE: we don't currently handle a core reload during recovery.
     // This would cause the core to change underneath us.
     // TODO: use the standard request factory?  We won't get any custom
     // configuration instantiating this way.
     RunUpdateProcessorFactory runFac = new RunUpdateProcessorFactory();
     DistributedUpdateProcessorFactory magicFac = new DistributedUpdateProcessorFactory();
     runFac.init(new NamedList());
     magicFac.init(new NamedList());
     UpdateRequestProcessor proc = magicFac.getInstance(req, rsp,
         runFac.getInstance(req, rsp, null));
 {code}
 I think this is a big issue, because a lot of people will discover it when a 
 node will crash in the best case... and I think it's too late.
 It means to me that processor chains are not usable with Solr Cloud currently.
  




[jira] [Created] (SOLR-4616) HitRatio in mbean is of type String instead should be float/double.

2013-03-19 Thread Aditya (JIRA)
Aditya created SOLR-4616:


 Summary: HitRatio in mbean is of type String instead should be 
float/double.
 Key: SOLR-4616
 URL: https://issues.apache.org/jira/browse/SOLR-4616
 Project: Solr
  Issue Type: Bug
  Components: web gui
Affects Versions: 4.2
 Environment: Solr 4.2 on JBoss7.1.1
Reporter: Aditya
Priority: Minor


While using our existing system monitoring tool with Solr over JMX, we noticed 
that the stats values for caches are not consistent w.r.t. data type: 
decimal values are returned as strings but should be of type float/double.

e.g. hitratio




[jira] [Updated] (SOLR-4361) DIH request parameters with dots throws UnsupportedOperationException

2013-03-19 Thread Mark Miller (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-4361?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Miller updated SOLR-4361:
--

Fix Version/s: 4.2.1

 DIH request parameters with dots throws UnsupportedOperationException
 -

 Key: SOLR-4361
 URL: https://issues.apache.org/jira/browse/SOLR-4361
 Project: Solr
  Issue Type: Bug
  Components: contrib - DataImportHandler
Affects Versions: 4.1
Reporter: James Dyer
Assignee: James Dyer
Priority: Minor
 Fix For: 4.3, 5.0, 4.2.1

 Attachments: SOLR-4361.patch


 If the user puts placeholders for request parameters and these contain dots, 
 DIH fails.  Current workaround is to either use no dots or use the 4.0 DIH 
 jar.




[jira] [Updated] (LUCENE-4752) Merge segments to sort them

2013-03-19 Thread Adrien Grand (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-4752?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adrien Grand updated LUCENE-4752:
-

Attachment: LUCENE-4752.patch

bq. I think these are not bad numbers.

Me neither! I'm rather happy with them actually.

bq. As for search, perhaps we can quickly hack up IndexSearcher to allow 
terminating per-segment and then compare two Collectors TopFields and 
TopSortedFields [...] but in order to do that, we must make sure that each 
segment is sorted (i.e. those that are not hit by MP are still in random 
order), or we somehow mark on each segment whether it's sorted or not

The attached patch takes a different approach: the idea is to use 
SortingMergePolicy together with IndexWriterConfig.getMaxBufferedDocs, which 
guarantees that all segments whose size is above maxBufferedDocs are sorted. 
Then there is a new EarlyTerminationIndexSearcher that extends search to 
collect segments that are still in random order normally, and to terminate 
collection early on segments which are sorted.

bq. Accessing close documents together ... we can make an artificial test 
which accesses documents with sort-by-value in a specific range. But that's a 
too artificial test, not sure what it will tell us.

Yes, I think the important thing to validate here is that merging does not get 
exponentially slower as segments grow. Other checks are just bonus.
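
To make the early-termination part concrete, here is a standalone sketch of 
the collection logic; the names and the smaller-is-better sort key are ours, 
not the patch's actual classes. On a segment already sorted by the sort key, 
collection can stop after the first n docs, while an unsorted segment must 
still be scanned fully:

```java
import java.util.ArrayList;
import java.util.List;

// Sketch: per-segment collection with early termination. "Docs" carry
// their sort key directly; smaller means better-ranked.
public class EarlyTermination {
    static List<Integer> collect(int[] segment, boolean segmentIsSorted, int n) {
        List<Integer> hits = new ArrayList<>();
        for (int key : segment) {
            hits.add(key);
            // On a sorted segment the first n docs are already the top n
            // for this segment, so stop scanning. An unsorted segment has
            // to be read fully (a real collector would keep a priority
            // queue of the best n instead of collecting everything).
            if (segmentIsSorted && hits.size() >= n) {
                break;
            }
        }
        return hits;
    }

    public static void main(String[] args) {
        int[] segment = {1, 2, 3, 4, 5, 6};
        System.out.println(collect(segment, true, 2));         // [1, 2]
        System.out.println(collect(segment, false, 2).size()); // 6
    }
}
```

Merging per-segment results then proceeds as usual; the win is that sorted 
segments are only read up to n docs.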

 Merge segments to sort them
 ---

 Key: LUCENE-4752
 URL: https://issues.apache.org/jira/browse/LUCENE-4752
 Project: Lucene - Core
  Issue Type: New Feature
  Components: core/index
Reporter: David Smiley
Assignee: Adrien Grand
 Attachments: LUCENE-4752.patch, LUCENE-4752.patch, LUCENE-4752.patch, 
 LUCENE-4752.patch, LUCENE-4752.patch, LUCENE-4752.patch, 
 natural_10M_ingestion.log, sorting_10M_ingestion.log


 It would be awesome if Lucene could write the documents out in a segment 
 based on a configurable order.  This of course applies to merging segments 
 too. The benefit is increased locality on disk of documents that are likely to 
 be accessed together.  This often applies to documents near each other in 
 time, but also spatially.




[jira] [Commented] (SOLR-4530) DIH: Provide configuration to use Tika's IdentityHtmlMapper

2013-03-19 Thread Hoss Man (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4530?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13606588#comment-13606588
 ] 

Hoss Man commented on SOLR-4530:


Hmmm...

I applied this patch URL to trunk and got a failure in the modified tests...
https://github.com/arafalov/lucene-solr/commit/bef2f84fd6943241c0f720f17011e5e42d919914.patch

{noformat}
[junit4:junit4]   2> 2321 T10 oas.SolrTestCaseJ4.tearDown ###Ending testTikaHTMLMapperIdentity
[junit4:junit4]   2> NOTE: reproduce with: ant test -Dtestcase=TestTikaEntityProcessor -Dtests.method=testTikaHTMLMapperIdentity -Dtests.seed=699D812F169C4A5E -Dtests.slow=true -Dtests.locale=el -Dtests.timezone=America/Noronha -Dtests.file.encoding=UTF-8
[junit4:junit4] ERROR   0.11s J0 | TestTikaEntityProcessor.testTikaHTMLMapperIdentity <<<
[junit4:junit4]    > Throwable #1: java.lang.RuntimeException: Exception during query
[junit4:junit4]    >    at __randomizedtesting.SeedInfo.seed([699D812F169C4A5E:39E205BEDFA8BFA3]:0)
[junit4:junit4]    >    at org.apache.solr.SolrTestCaseJ4.assertQ(SolrTestCaseJ4.java:524)
[junit4:junit4]    >    at org.apache.solr.SolrTestCaseJ4.assertQ(SolrTestCaseJ4.java:491)
[junit4:junit4]    >    at org.apache.solr.handler.dataimport.TestTikaEntityProcessor.testTikaHTMLMapperIdentity(TestTikaEntityProcessor.java:101)
[junit4:junit4]    > ...
[junit4:junit4]    > Caused by: java.lang.RuntimeException: REQUEST FAILED: xpath=//str[@name='text'][contains(.,'<H1>')]
[junit4:junit4]    >    xml response was: <?xml version="1.0" encoding="UTF-8"?>
[junit4:junit4]    > <response>
[junit4:junit4]    > <lst name="responseHeader"><int name="status">0</int><int name="QTime">1</int><lst name="params"><str name="start">0</str><str name="q">*:*</str><str name="qt">standard</str><str name="rows">20</str><str name="version">2.2</str></lst></lst><result name="response" numFound="1" start="0"><doc><str name="text">&lt;?xml version="1.0" encoding="UTF-8"?&gt;&lt;html xmlns="http://www.w3.org/1999/xhtml"&gt;
[junit4:junit4]    > &lt;head&gt;
[junit4:junit4]    > &lt;meta name="Content-Encoding" content="ISO-8859-1"/&gt;
[junit4:junit4]    > &lt;meta name="Content-Type" content="text/html; charset=ISO-8859-1"/&gt;
[junit4:junit4]    > &lt;meta name="dc:title" content="Title in the header"/&gt;
[junit4:junit4]    > &lt;title&gt;Title in the header&lt;/title&gt;
[junit4:junit4]    > &lt;/head&gt;
[junit4:junit4]    > &lt;body&gt;
[junit4:junit4]    > &lt;h1&gt;H1 Header&lt;/h1&gt;
[junit4:junit4]    > 
[junit4:junit4]    > &lt;div&gt;Basic div&lt;/div&gt;
[junit4:junit4]    > 
[junit4:junit4]    > &lt;div class="classAttribute"&gt;Div with attribute&lt;/div&gt;
[junit4:junit4]    > 
[junit4:junit4]    > &lt;/body&gt;&lt;/html&gt;</str></doc></result>
[junit4:junit4]    > </response>
[junit4:junit4]    > 
[junit4:junit4]    >    request was:start=0&q=*:*&qt=standard&rows=20&version=2.2
[junit4:junit4]    >    at org.apache.solr.SolrTestCaseJ4.assertQ(SolrTestCaseJ4.java:517)
[junit4:junit4]    >    ... 42 more
{noformat}

...suggesting maybe the comment about uppercasing/lowercasing tags in Tika 
isn't consistent across platforms?  (Or maybe you previously tested against a 
slightly different version of Tika and the behavior has changed?)

 DIH: Provide configuration to use Tika's IdentityHtmlMapper
 ---

 Key: SOLR-4530
 URL: https://issues.apache.org/jira/browse/SOLR-4530
 Project: Solr
  Issue Type: Improvement
  Components: contrib - DataImportHandler
Affects Versions: 4.1
Reporter: Alexandre Rafalovitch
Priority: Minor
 Fix For: 4.3


 When using TikaEntityProcessor in DIH, the default HTML Mapper strips out 
 most of the HTML. It may make sense when the expectation is just to store the 
 extracted content as a text blob, but DIH allows more fine-tuned content 
 extraction (e.g. with nested XPathEntityProcessor).
 Recent Tika versions allow to set an alternative HTML Mapper implementation 
 that passes all the HTML in. It would be useful to be able to set that 
 implementation from DIH configuration.




[jira] [Commented] (SOLR-4530) DIH: Provide configuration to use Tika's IdentityHtmlMapper

2013-03-19 Thread Alexandre Rafalovitch (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4530?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13606594#comment-13606594
 ] 

Alexandre Rafalovitch commented on SOLR-4530:
-

Could be a different version of Tika, as I tested it against Solr 4.1 originally. 
I will retest. Should I be retesting against trunk or against 4.2 (4.2.1? 4.3?) 
if I want this to make it into a 4.x sub-release?

 DIH: Provide configuration to use Tika's IdentityHtmlMapper
 ---

 Key: SOLR-4530
 URL: https://issues.apache.org/jira/browse/SOLR-4530
 Project: Solr
  Issue Type: Improvement
  Components: contrib - DataImportHandler
Affects Versions: 4.1
Reporter: Alexandre Rafalovitch
Priority: Minor
 Fix For: 4.3


 When using TikaEntityProcessor in DIH, the default HTML Mapper strips out 
 most of the HTML. It may make sense when the expectation is just to store the 
 extracted content as a text blob, but DIH allows more fine-tuned content 
 extraction (e.g. with nested XPathEntityProcessor).
 Recent Tika versions allow to set an alternative HTML Mapper implementation 
 that passes all the HTML in. It would be useful to be able to set that 
 implementation from DIH configuration.




[jira] [Commented] (SOLR-4530) DIH: Provide configuration to use Tika's IdentityHtmlMapper

2013-03-19 Thread Hoss Man (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4530?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13606596#comment-13606596
 ] 

Hoss Man commented on SOLR-4530:


features are _always_ added to trunk first, and then backported to 4.x based on 
feasibility and stability.

 DIH: Provide configuration to use Tika's IdentityHtmlMapper
 ---

 Key: SOLR-4530
 URL: https://issues.apache.org/jira/browse/SOLR-4530
 Project: Solr
  Issue Type: Improvement
  Components: contrib - DataImportHandler
Affects Versions: 4.1
Reporter: Alexandre Rafalovitch
Priority: Minor
 Fix For: 4.3


 When using TikaEntityProcessor in DIH, the default HTML Mapper strips out 
 most of the HTML. It may make sense when the expectation is just to store the 
 extracted content as a text blob, but DIH allows more fine-tuned content 
 extraction (e.g. with nested XPathEntityProcessor).
 Recent Tika versions allow to set an alternative HTML Mapper implementation 
 that passes all the HTML in. It would be useful to be able to set that 
 implementation from DIH configuration.




[jira] [Updated] (SOLR-4295) SolrQuery setFacet*() and getFacet*() should have versions that specify the field

2013-03-19 Thread Colin Bartolome (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-4295?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Bartolome updated SOLR-4295:
--

Affects Version/s: 4.2

 SolrQuery setFacet*() and getFacet*() should have versions that specify the 
 field
 -

 Key: SOLR-4295
 URL: https://issues.apache.org/jira/browse/SOLR-4295
 Project: Solr
  Issue Type: Improvement
  Components: clients - java
Affects Versions: 4.0, 4.2
Reporter: Colin Bartolome
Priority: Minor

 Since the parameter names for field-specific faceting parameters are a little 
 odd, such as f.field_name.facet.prefix, the SolrQuery class should have 
 methods that take a field parameter. The SolrQuery.setFacetPrefix() method 
 already takes such a parameter. It would be great if the rest of the 
 setFacet*() and getFacet*() methods did, too.
 The workaround is trivial, albeit clumsy: just create the parameter names by 
 hand, as necessary.
 Also, as far as I can tell, there isn't a constant for the f. prefix. That 
 would be helpful, too.
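
The by-hand workaround described above amounts to string concatenation; here is 
a sketch of that helper plus the missing constant. PerFieldParam and 
FIELD_PARAM_PREFIX are illustrative names, not SolrJ API:

```java
// Sketch: assemble a per-field faceting parameter name such as
// "f.category.facet.prefix" by hand, as the workaround describes.
public class PerFieldParam {
    // The constant the description asks for: the "f." prefix.
    static final String FIELD_PARAM_PREFIX = "f.";

    static String perField(String field, String param) {
        return FIELD_PARAM_PREFIX + field + "." + param;
    }

    public static void main(String[] args) {
        System.out.println(perField("category", "facet.prefix")); // f.category.facet.prefix
    }
}
```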




[jira] [Commented] (SOLR-4608) Update Log replay should use the default processor chain

2013-03-19 Thread Yonik Seeley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4608?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13606639#comment-13606639
 ] 

Yonik Seeley commented on SOLR-4608:


bq. Anything after the DistributedUpdateProcessor will not be applied, right?

Everything before the distributed update processor will be applied before 
buffering, and everything after should be applied while replaying.
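
That split can be sketched by modeling the chain as a list of processor names 
and cutting it at the distributed update processor; "distrib" and the other 
names here are placeholders, not real chain configuration:

```java
import java.util.Arrays;
import java.util.List;

// Sketch: a processor chain split at the distributed update processor.
// Everything before it runs when the update first arrives (before
// buffering); everything after it runs again during log replay.
public class ChainSplit {
    static List<String> appliedBeforeBuffering(List<String> chain) {
        return chain.subList(0, chain.indexOf("distrib"));
    }

    static List<String> appliedDuringReplay(List<String> chain) {
        return chain.subList(chain.indexOf("distrib") + 1, chain.size());
    }

    public static void main(String[] args) {
        List<String> chain = Arrays.asList("dedupe", "distrib", "custom", "run");
        System.out.println(appliedBeforeBuffering(chain)); // [dedupe]
        System.out.println(appliedDuringReplay(chain));    // [custom, run]
    }
}
```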

 Update Log replay should use the default processor chain
 

 Key: SOLR-4608
 URL: https://issues.apache.org/jira/browse/SOLR-4608
 Project: Solr
  Issue Type: Bug
  Components: SolrCloud
Affects Versions: 4.1, 4.2
Reporter: ludovic Boutros
Assignee: Yonik Seeley
 Fix For: 4.3, 5.0, 4.2.1

 Attachments: SOLR-4608.patch


 If a processor chain is used with custom processors, 
 they are not used in case of node failure during log replay.
 Here is the code:
 {code:title=UpdateLog.java|borderStyle=solid}
 public void doReplay(TransactionLog translog) {
   try {
     loglog.warn("Starting log replay " + translog + " active=" + activeLog
         + " starting pos=" + recoveryInfo.positionOfStart);
     tlogReader = translog.getReader(recoveryInfo.positionOfStart);
     // NOTE: we don't currently handle a core reload during recovery.
     // This would cause the core to change underneath us.
     // TODO: use the standard request factory?  We won't get any custom
     // configuration instantiating this way.
     RunUpdateProcessorFactory runFac = new RunUpdateProcessorFactory();
     DistributedUpdateProcessorFactory magicFac = new DistributedUpdateProcessorFactory();
     runFac.init(new NamedList());
     magicFac.init(new NamedList());
     UpdateRequestProcessor proc = magicFac.getInstance(req, rsp,
         runFac.getInstance(req, rsp, null));
 {code}
 I think this is a big issue, because a lot of people will discover it when a 
 node will crash in the best case... and I think it's too late.
 It means to me that processor chains are not usable with Solr Cloud currently.
  




Lucene/Solr 4.2.1 - Last call for issue back porting.

2013-03-19 Thread Mark Miller
Last call.

- Mark Miller




Re: Lucene/Solr 4.2.1 - Last call for issue back porting.

2013-03-19 Thread Yonik Seeley
I'm looking at https://issues.apache.org/jira/browse/SOLR-4589
which seems to be a pretty serious performance bug.

-Yonik
http://lucidworks.com


On Tue, Mar 19, 2013 at 3:04 PM, Mark Miller markrmil...@gmail.com wrote:
 Last call.

 - Mark Miller




[JENKINS-MAVEN] Lucene-Solr-Maven-4.x #279: POMs out of sync

2013-03-19 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Maven-4.x/279/

1 tests failed.
FAILED:  org.apache.solr.cloud.SyncSliceTest.testDistribSearch

Error Message:
shard1 is not consistent.  Got 305 from 
http://127.0.0.1:52121/_a/er/collection1lastClient and got 5 from 
http://127.0.0.1:43521/_a/er/collection1

Stack Trace:
java.lang.AssertionError: shard1 is not consistent.  Got 305 from 
http://127.0.0.1:52121/_a/er/collection1lastClient and got 5 from 
http://127.0.0.1:43521/_a/er/collection1
at 
__randomizedtesting.SeedInfo.seed([888C5808003E2398:96AD610776143A4]:0)
at org.junit.Assert.fail(Assert.java:93)
at 
org.apache.solr.cloud.AbstractFullDistribZkTestBase.checkShardConsistency(AbstractFullDistribZkTestBase.java:963)
at org.apache.solr.cloud.SyncSliceTest.doTest(SyncSliceTest.java:233)




Build Log:
[...truncated 23049 lines...]




[jira] [Commented] (SOLR-4311) Admin UI - Optimize Caching Behaviour

2013-03-19 Thread David Smiley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4311?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13606702#comment-13606702
 ] 

David Smiley commented on SOLR-4311:


+1 to get this into 4.2.1

 Admin UI - Optimize Caching Behaviour
 -

 Key: SOLR-4311
 URL: https://issues.apache.org/jira/browse/SOLR-4311
 Project: Solr
  Issue Type: Bug
  Components: web gui
Affects Versions: 4.0
Reporter: Chris Bleakley
Assignee: Stefan Matheis (steffkes)
 Fix For: 4.3

 Attachments: SOLR-4311.patch


   Although both the luke and core admin handlers set http caching to false in 
 the response headers** I believe the Cache-Control settings are ignored 
 during ajax requests in certain browsers. This can be a problem if you're 
 refreshing admin to get the latest doc count. It can also be a problem when 
 you compare the count of Num Docs on the main index page (/solr/#/CORE) vs 
 the count on the core admin page (/solr/#/~cores/CORE). Consider that if you 
 first visit the main index page, add and commit 100 docs, and then visit core 
 admin the doc count will be off by 100.
   
   If this is an issue the ajax requests can explicitly set caching to false ( 
 http://api.jquery.com/jQuery.ajax/#jQuery-ajax-settings ) ... for example, 
 inserting 'cache: false,' after line 91 here: 
 https://github.com/apache/lucene-solr/blob/branch_4x/solr/webapp/web/js/scripts/dashboard.js#L91
   
   ** 
 https://github.com/apache/lucene-solr/blob/branch_4x/solr/core/src/java/org/apache/solr/handler/admin/LukeRequestHandler.java#L167
   ** 
 https://github.com/apache/lucene-solr/blob/branch_4x/solr/core/src/java/org/apache/solr/handler/admin/CoreAdminHandler.java#L216
   
   Tested using Chrome Version 24.0.1312.52




[jira] [Updated] (LUCENE-4828) BooleanQuery.extractTerms should not recurse into MUST_NOT clauses

2013-03-19 Thread Michael McCandless (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-4828?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael McCandless updated LUCENE-4828:
---

Fix Version/s: 4.2.1

 BooleanQuery.extractTerms should not recurse into MUST_NOT clauses
 --

 Key: LUCENE-4828
 URL: https://issues.apache.org/jira/browse/LUCENE-4828
 Project: Lucene - Core
  Issue Type: Bug
  Components: core/search
Reporter: Michael McCandless
Assignee: Michael McCandless
 Fix For: 5.0, 4.3, 4.2.1

 Attachments: LUCENE-4828.patch







[jira] [Commented] (LUCENE-4828) BooleanQuery.extractTerms should not recurse into MUST_NOT clauses

2013-03-19 Thread Commit Tag Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-4828?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13606710#comment-13606710
 ] 

Commit Tag Bot commented on LUCENE-4828:


[trunk commit] Michael McCandless
http://svn.apache.org/viewvc?view=revisionrevision=1458472

LUCENE-4828: move CHANGES entry to 4.2.1


 BooleanQuery.extractTerms should not recurse into MUST_NOT clauses
 --

 Key: LUCENE-4828
 URL: https://issues.apache.org/jira/browse/LUCENE-4828
 Project: Lucene - Core
  Issue Type: Bug
  Components: core/search
Reporter: Michael McCandless
Assignee: Michael McCandless
 Fix For: 5.0, 4.3, 4.2.1

 Attachments: LUCENE-4828.patch







[jira] [Updated] (SOLR-4311) Admin UI - Optimize Caching Behaviour

2013-03-19 Thread Mark Miller (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-4311?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Miller updated SOLR-4311:
--

Fix Version/s: 4.2.1
   5.0

 Admin UI - Optimize Caching Behaviour
 -

 Key: SOLR-4311
 URL: https://issues.apache.org/jira/browse/SOLR-4311
 Project: Solr
  Issue Type: Bug
  Components: web gui
Affects Versions: 4.0
Reporter: Chris Bleakley
Assignee: Stefan Matheis (steffkes)
 Fix For: 4.3, 5.0, 4.2.1

 Attachments: SOLR-4311.patch


   Although both the luke and core admin handlers set http caching to false in 
 the response headers** I believe the Cache-Control settings are ignored 
 during ajax requests in certain browsers. This can be a problem if you're 
 refreshing admin to get the latest doc count. It can also be a problem when 
 you compare the count of Num Docs on the main index page (/solr/#/CORE) vs 
 the count on the core admin page (/solr/#/~cores/CORE). Consider that if you 
 first visit the main index page, add and commit 100 docs, and then visit core 
 admin the doc count will be off by 100.
   
   If this is an issue the ajax requests can explicitly set caching to false ( 
 http://api.jquery.com/jQuery.ajax/#jQuery-ajax-settings ) ... for example, 
 inserting 'cache: false,' after line 91 here: 
 https://github.com/apache/lucene-solr/blob/branch_4x/solr/webapp/web/js/scripts/dashboard.js#L91
   
   ** 
 https://github.com/apache/lucene-solr/blob/branch_4x/solr/core/src/java/org/apache/solr/handler/admin/LukeRequestHandler.java#L167
   ** 
 https://github.com/apache/lucene-solr/blob/branch_4x/solr/core/src/java/org/apache/solr/handler/admin/CoreAdminHandler.java#L216
   
   Tested using Chrome Version 24.0.1312.52




[jira] [Commented] (SOLR-1252) A lighter version of Solr for distribution

2013-03-19 Thread JIRA

[ 
https://issues.apache.org/jira/browse/SOLR-1252?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13606787#comment-13606787
 ] 

Jan Høydahl commented on SOLR-1252:
---

So, any new thoughts on this? The original idea of /contrib was a place to put 
non-core things. But there are only 9 contrib modules out there, which tells me 
we may have been putting too much stuff into core. I don't know why; perhaps 
it's just the extra hassle of creating a new contrib module? Should we consider 
a better, more streamlined plugin mechanism?

SPRING_CLEANING_2013

 A lighter version of Solr for distribution
 --

 Key: SOLR-1252
 URL: https://issues.apache.org/jira/browse/SOLR-1252
 Project: Solr
  Issue Type: New Feature
Reporter: Noble Paul

 see the thread
 http://markmail.org/thread/z3ukgcowzdsdp3i3
 Let us decide on what all could be included in the lite version. 
 I guess it should contain:
  *  solr.war 
  * a single core example 
  * a multicore example  




[jira] [Commented] (SOLR-4311) Admin UI - Optimize Caching Behaviour

2013-03-19 Thread Commit Tag Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4311?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13606798#comment-13606798
 ] 

Commit Tag Bot commented on SOLR-4311:
--

[branch_4x commit] Mark Robert Miller
http://svn.apache.org/viewvc?view=revisionrevision=1458510

SOLR-4311: move CHANGES entry.


 Admin UI - Optimize Caching Behaviour
 -

 Key: SOLR-4311
 URL: https://issues.apache.org/jira/browse/SOLR-4311
 Project: Solr
  Issue Type: Bug
  Components: web gui
Affects Versions: 4.0
Reporter: Chris Bleakley
Assignee: Stefan Matheis (steffkes)
 Fix For: 4.3, 5.0, 4.2.1

 Attachments: SOLR-4311.patch


   Although both the luke and core admin handlers set http caching to false in 
 the response headers** I believe the Cache-Control settings are ignored 
 during ajax requests in certain browsers. This can be a problem if you're 
 refreshing admin to get the latest doc count. It can also be a problem when 
 you compare the count of Num Docs on the main index page (/solr/#/CORE) vs 
 the count on the core admin page (/solr/#/~cores/CORE). Consider that if you 
 first visit the main index page, add and commit 100 docs, and then visit core 
 admin the doc count will be off by 100.
   
   If this is an issue the ajax requests can explicitly set caching to false ( 
 http://api.jquery.com/jQuery.ajax/#jQuery-ajax-settings ) ... for example, 
 inserting 'cache: false,' after line 91 here: 
 https://github.com/apache/lucene-solr/blob/branch_4x/solr/webapp/web/js/scripts/dashboard.js#L91
   
   ** 
 https://github.com/apache/lucene-solr/blob/branch_4x/solr/core/src/java/org/apache/solr/handler/admin/LukeRequestHandler.java#L167
   ** 
 https://github.com/apache/lucene-solr/blob/branch_4x/solr/core/src/java/org/apache/solr/handler/admin/CoreAdminHandler.java#L216
   
   Tested using Chrome Version 24.0.1312.52




[jira] [Commented] (SOLR-4311) Admin UI - Optimize Caching Behaviour

2013-03-19 Thread Commit Tag Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4311?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13606799#comment-13606799
 ] 

Commit Tag Bot commented on SOLR-4311:
--

[trunk commit] Mark Robert Miller
http://svn.apache.org/viewvc?view=revisionrevision=1458509

SOLR-4311: move CHANGES entry.


 Admin UI - Optimize Caching Behaviour
 -

 Key: SOLR-4311
 URL: https://issues.apache.org/jira/browse/SOLR-4311
 Project: Solr
  Issue Type: Bug
  Components: web gui
Affects Versions: 4.0
Reporter: Chris Bleakley
Assignee: Stefan Matheis (steffkes)
 Fix For: 4.3, 5.0, 4.2.1

 Attachments: SOLR-4311.patch


   Although both the luke and core admin handlers set http caching to false in 
 the response headers** I believe the Cache-Control settings are ignored 
 during ajax requests in certain browsers. This can be a problem if you're 
 refreshing admin to get the latest doc count. It can also be a problem when 
 you compare the count of Num Docs on the main index page (/solr/#/CORE) vs 
 the count on the core admin page (/solr/#/~cores/CORE). Consider that if you 
 first visit the main index page, add and commit 100 docs, and then visit core 
 admin the doc count will be off by 100.
   
   If this is an issue the ajax requests can explicitly set caching to false ( 
 http://api.jquery.com/jQuery.ajax/#jQuery-ajax-settings ) ... for example, 
 inserting 'cache: false,' after line 91 here: 
 https://github.com/apache/lucene-solr/blob/branch_4x/solr/webapp/web/js/scripts/dashboard.js#L91
   
   ** 
 https://github.com/apache/lucene-solr/blob/branch_4x/solr/core/src/java/org/apache/solr/handler/admin/LukeRequestHandler.java#L167
   ** 
 https://github.com/apache/lucene-solr/blob/branch_4x/solr/core/src/java/org/apache/solr/handler/admin/CoreAdminHandler.java#L216
   
   Tested using Chrome Version 24.0.1312.52

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-247) Allow facet.field=* to facet on all fields (without knowing what they are)

2013-03-19 Thread JIRA

[ 
https://issues.apache.org/jira/browse/SOLR-247?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13606806#comment-13606806
 ] 

Jan Høydahl commented on SOLR-247:
--

Seems like there has not been much demand for this the last 4 years :) Could 
this not be a nice task to do at the same time as SOLR-650 ?

SPRING_CLEANING_2013

 Allow facet.field=* to facet on all fields (without knowing what they are)
 --

 Key: SOLR-247
 URL: https://issues.apache.org/jira/browse/SOLR-247
 Project: Solr
  Issue Type: Improvement
Reporter: Ryan McKinley
Priority: Minor
 Attachments: SOLR-247-FacetAllFields.patch, SOLR-247.patch, 
 SOLR-247.patch, SOLR-247.patch


 I don't know if this is a good idea to include -- it is potentially a bad 
 idea to use it, but that can be ok.
 This came out of trying to use faceting for the LukeRequestHandler top term 
 collecting.
 http://www.nabble.com/Luke-request-handler-issue-tf3762155.html

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-247) Allow facet.field=* to facet on all fields (without knowing what they are)

2013-03-19 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/SOLR-247?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jan Høydahl updated SOLR-247:
-

Labels: beginners  (was: )

 Allow facet.field=* to facet on all fields (without knowing what they are)
 --

 Key: SOLR-247
 URL: https://issues.apache.org/jira/browse/SOLR-247
 Project: Solr
  Issue Type: Improvement
Reporter: Ryan McKinley
Priority: Minor
  Labels: beginners
 Attachments: SOLR-247-FacetAllFields.patch, SOLR-247.patch, 
 SOLR-247.patch, SOLR-247.patch


 I don't know if this is a good idea to include -- it is potentially a bad 
 idea to use it, but that can be ok.
 This came out of trying to use faceting for the LukeRequestHandler top term 
 collecting.
 http://www.nabble.com/Luke-request-handler-issue-tf3762155.html

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-247) Allow facet.field=* to facet on all fields (without knowing what they are)

2013-03-19 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/SOLR-247?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jan Høydahl updated SOLR-247:
-

Labels: beginners newdev  (was: beginners)

 Allow facet.field=* to facet on all fields (without knowing what they are)
 --

 Key: SOLR-247
 URL: https://issues.apache.org/jira/browse/SOLR-247
 Project: Solr
  Issue Type: Improvement
Reporter: Ryan McKinley
Priority: Minor
  Labels: beginners, newdev
 Attachments: SOLR-247-FacetAllFields.patch, SOLR-247.patch, 
 SOLR-247.patch, SOLR-247.patch


 I don't know if this is a good idea to include -- it is potentially a bad 
 idea to use it, but that can be ok.
 This came out of trying to use faceting for the LukeRequestHandler top term 
 collecting.
 http://www.nabble.com/Luke-request-handler-issue-tf3762155.html

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-4589) 4.x + enableLazyFieldLoading + large multivalued fields + varying fl = pathological CPU load & response time

2013-03-19 Thread Mark Miller (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-4589?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Miller updated SOLR-4589:
--

Fix Version/s: 5.0

 4.x + enableLazyFieldLoading + large multivalued fields + varying fl = 
 pathological CPU load & response time
 

 Key: SOLR-4589
 URL: https://issues.apache.org/jira/browse/SOLR-4589
 Project: Solr
  Issue Type: Bug
Affects Versions: 4.0, 4.1, 4.2
Reporter: Hoss Man
Assignee: Hoss Man
 Fix For: 5.0, 4.2.1

 Attachments: SOLR-4589.patch, SOLR-4589.patch, 
 test-just-queries.out__4.0.0_mmap_lazy_using36index.txt, 
 test-just-queries.sh, test.out__3.6.1_mmap_lazy.txt, 
 test.out__3.6.1_mmap_nolazy.txt, test.out__3.6.1_nio_lazy.txt, 
 test.out__3.6.1_nio_nolazy.txt, test.out__4.0.0_mmap_lazy.txt, 
 test.out__4.0.0_mmap_nolazy.txt, test.out__4.0.0_nio_lazy.txt, 
 test.out__4.0.0_nio_nolazy.txt, test.out__4.2.0_mmap_lazy.txt, 
 test.out__4.2.0_mmap_nolazy.txt, test.out__4.2.0_nio_lazy.txt, 
 test.out__4.2.0_nio_nolazy.txt, test.sh


 Following up on a [user report of extreme CPU usage in 
 4.1|http://mail-archives.apache.org/mod_mbox/lucene-solr-user/201302.mbox/%3c1362019882934-4043543.p...@n3.nabble.com%3E],
  I've discovered that the following combination of factors can result in 
 extreme CPU usage and excessive HTTP response times...
 * Solr 4.x (tested 3.6.1, 4.0.0, and 4.2.0)
 * enableLazyFieldLoading == true (included in example solrconfig.xml)
 * documents with a large number of values in multivalued fields (eg: tested 
 ~10-15K values)
 * multiple requests returning the same doc with different fl lists
 I haven't dug into the root cause yet, but the essential observation is: if 
 lazy loading is used in 4.x, then once a document has been fetched with an 
 initial fl list X, subsequent requests for that document using a different fl 
 list Y can be many orders of magnitude slower (while pegging the CPU) -- even 
 if those same requests using fl Y uncached (or w/o lazy loading) would be 
 extremely fast.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-4589) 4.x + enableLazyFieldLoading + large multivalued fields + varying fl = pathological CPU load & response time

2013-03-19 Thread Mark Miller (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-4589?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Miller updated SOLR-4589:
--

Fix Version/s: 4.3

 4.x + enableLazyFieldLoading + large multivalued fields + varying fl = 
 pathological CPU load & response time
 

 Key: SOLR-4589
 URL: https://issues.apache.org/jira/browse/SOLR-4589
 Project: Solr
  Issue Type: Bug
Affects Versions: 4.0, 4.1, 4.2
Reporter: Hoss Man
Assignee: Hoss Man
 Fix For: 4.3, 5.0, 4.2.1

 Attachments: SOLR-4589.patch, SOLR-4589.patch, 
 test-just-queries.out__4.0.0_mmap_lazy_using36index.txt, 
 test-just-queries.sh, test.out__3.6.1_mmap_lazy.txt, 
 test.out__3.6.1_mmap_nolazy.txt, test.out__3.6.1_nio_lazy.txt, 
 test.out__3.6.1_nio_nolazy.txt, test.out__4.0.0_mmap_lazy.txt, 
 test.out__4.0.0_mmap_nolazy.txt, test.out__4.0.0_nio_lazy.txt, 
 test.out__4.0.0_nio_nolazy.txt, test.out__4.2.0_mmap_lazy.txt, 
 test.out__4.2.0_mmap_nolazy.txt, test.out__4.2.0_nio_lazy.txt, 
 test.out__4.2.0_nio_nolazy.txt, test.sh


 Following up on a [user report of extreme CPU usage in 
 4.1|http://mail-archives.apache.org/mod_mbox/lucene-solr-user/201302.mbox/%3c1362019882934-4043543.p...@n3.nabble.com%3E],
  I've discovered that the following combination of factors can result in 
 extreme CPU usage and excessive HTTP response times...
 * Solr 4.x (tested 3.6.1, 4.0.0, and 4.2.0)
 * enableLazyFieldLoading == true (included in example solrconfig.xml)
 * documents with a large number of values in multivalued fields (eg: tested 
 ~10-15K values)
 * multiple requests returning the same doc with different fl lists
 I haven't dug into the root cause yet, but the essential observation is: if 
 lazy loading is used in 4.x, then once a document has been fetched with an 
 initial fl list X, subsequent requests for that document using a different fl 
 list Y can be many orders of magnitude slower (while pegging the CPU) -- even 
 if those same requests using fl Y uncached (or w/o lazy loading) would be 
 extremely fast.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-247) Allow facet.field=* to facet on all fields (without knowing what they are)

2013-03-19 Thread Erick Erickson (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-247?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13606832#comment-13606832
 ] 

Erick Erickson commented on SOLR-247:
-

My first reaction to this is that while it might have some limited use-cases 
with small indexes, as soon as one went to any decent size corpus it'd blow 
memory up. Not sure it's worth the effort, but I could be convinced otherwise...

SOLR-650 seems something of a separate issue; it's much more controlled. That 
said, they're both really about how to specify the list of fields for faceting, 
so you're right that they're part of the same concept.

 Allow facet.field=* to facet on all fields (without knowing what they are)
 --

 Key: SOLR-247
 URL: https://issues.apache.org/jira/browse/SOLR-247
 Project: Solr
  Issue Type: Improvement
Reporter: Ryan McKinley
Priority: Minor
  Labels: beginners, newdev
 Attachments: SOLR-247-FacetAllFields.patch, SOLR-247.patch, 
 SOLR-247.patch, SOLR-247.patch


 I don't know if this is a good idea to include -- it is potentially a bad 
 idea to use it, but that can be ok.
 This came out of trying to use faceting for the LukeRequestHandler top term 
 collecting.
 http://www.nabble.com/Luke-request-handler-issue-tf3762155.html

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-4589) 4.x + enableLazyFieldLoading + large multivalued fields + varying fl = pathological CPU load & response time

2013-03-19 Thread Hoss Man (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-4589?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hoss Man updated SOLR-4589:
---

Attachment: SOLR-4589.patch

updated patch removing the weak refs and fixing a few doc typos

 4.x + enableLazyFieldLoading + large multivalued fields + varying fl = 
 pathological CPU load & response time
 

 Key: SOLR-4589
 URL: https://issues.apache.org/jira/browse/SOLR-4589
 Project: Solr
  Issue Type: Bug
Affects Versions: 4.0, 4.1, 4.2
Reporter: Hoss Man
Assignee: Hoss Man
 Fix For: 4.3, 5.0, 4.2.1

 Attachments: SOLR-4589.patch, SOLR-4589.patch, SOLR-4589.patch, 
 test-just-queries.out__4.0.0_mmap_lazy_using36index.txt, 
 test-just-queries.sh, test.out__3.6.1_mmap_lazy.txt, 
 test.out__3.6.1_mmap_nolazy.txt, test.out__3.6.1_nio_lazy.txt, 
 test.out__3.6.1_nio_nolazy.txt, test.out__4.0.0_mmap_lazy.txt, 
 test.out__4.0.0_mmap_nolazy.txt, test.out__4.0.0_nio_lazy.txt, 
 test.out__4.0.0_nio_nolazy.txt, test.out__4.2.0_mmap_lazy.txt, 
 test.out__4.2.0_mmap_nolazy.txt, test.out__4.2.0_nio_lazy.txt, 
 test.out__4.2.0_nio_nolazy.txt, test.sh


 Following up on a [user report of extreme CPU usage in 
 4.1|http://mail-archives.apache.org/mod_mbox/lucene-solr-user/201302.mbox/%3c1362019882934-4043543.p...@n3.nabble.com%3E],
  I've discovered that the following combination of factors can result in 
 extreme CPU usage and excessive HTTP response times...
 * Solr 4.x (tested 3.6.1, 4.0.0, and 4.2.0)
 * enableLazyFieldLoading == true (included in example solrconfig.xml)
 * documents with a large number of values in multivalued fields (eg: tested 
 ~10-15K values)
 * multiple requests returning the same doc with different fl lists
 I haven't dug into the root cause yet, but the essential observation is: if 
 lazy loading is used in 4.x, then once a document has been fetched with an 
 initial fl list X, subsequent requests for that document using a different fl 
 list Y can be many orders of magnitude slower (while pegging the CPU) -- even 
 if those same requests using fl Y uncached (or w/o lazy loading) would be 
 extremely fast.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org


