[jira] [Updated] (SOLR-6151) Intermittent TestReplicationHandlerBackup failures

2014-06-10 Thread Varun Thacker (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6151?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Thacker updated SOLR-6151:


Attachment: SOLR-6151.patch

There doesn't seem to be any way currently to figure out backup progress 
unless we add hooks to the file copy code to check how many bytes have been 
written, etc.

Looking at SnapShooter#createSnapshot, the only two possible responses are 
success or snapShootException. 

In the patch, CheckBackupStatus#fetchStatus fails fast on snapShootException, 
and the 20-retry limit has been removed. The same approach is applied to 
deleting a snapshot.

Is this the right approach? Maybe we should still keep a hard limit to prevent 
it from running indefinitely?
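For illustration, the polling would then look roughly like this (a sketch only; 
fetchReplicationDetails and the response field names are assumptions, not the 
actual test code):

{code}
import org.apache.solr.common.util.NamedList;

// Sketch: poll /replication?command=details until the backup section reports
// success or snapShootException. No retry cap; the suite-level test timeout
// bounds the loop. fetchReplicationDetails is an assumed helper that issues
// the details command and returns the parsed response.
void waitForBackupToFinish() throws Exception {
  while (true) {
    NamedList<?> details = fetchReplicationDetails();
    NamedList<?> backup = (NamedList<?>) details.get("backup");
    if (backup != null) {
      if ("success".equals(backup.get("status"))) {
        return; // backup completed
      }
      if (backup.get("snapShootException") != null) {
        throw new AssertionError("backup failed: " + backup.get("snapShootException"));
      }
    }
    Thread.sleep(200); // short pause between polls
  }
}
{code}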

 Intermittent TestReplicationHandlerBackup failures
 --

 Key: SOLR-6151
 URL: https://issues.apache.org/jira/browse/SOLR-6151
 Project: Solr
  Issue Type: Bug
Reporter: Dawid Weiss
Assignee: Dawid Weiss
Priority: Minor
 Attachments: SOLR-6151.patch


 {code}
 [junit4]   2> 4236563 T14503 C3403 oasc.SolrCore.execute [collection1] webapp=/solr path=/replication params={command=details} status=0 QTime=0 
 [junit4]   2> 4236567 T14502 C3403 oasc.SolrCore.execute [collection1] webapp=/solr path=/replication params={command=backup&name=cphlpigzwamrxxekj} status=0 QTime=5 
 [junit4]   2> 4236567 T14511 oash.SnapShooter.createSnapshot Creating backup snapshot...
 [junit4]   2> 4236682 T14505 C3403 oasc.SolrCore.execute [collection1] webapp=/solr path=/replication params={command=details} status=0 QTime=0 
 [junit4]   2> 4237270 T14503 C3403 oasc.SolrCore.execute [collection1] webapp=/solr path=/replication params={command=details} status=0 QTime=0 
 [junit4]   2> 4237275 T14502 C3403 oasc.SolrCore.execute [collection1] webapp=/solr path=/replication params={command=backup&name=zviqwpynhbjdbiqofwa} status=0 QTime=4 
 [junit4]   2> 4237277 T14513 oash.SnapShooter.createSnapshot Creating backup snapshot...
 [junit4]   2> 4237390 T14504 C3403 oasc.SolrCore.execute [collection1] webapp=/solr path=/replication params={command=details} status=0 QTime=0 
 [junit4]   2> 4237508 T14500 C3403 oasc.SolrCore.execute [collection1] webapp=/solr path=/replication params={command=details} status=0 QTime=0 
 [junit4]   2> 4237626 T14505 C3403 oasc.SolrCore.execute [collection1] webapp=/solr path=/replication params={command=details} status=0 QTime=1 
 [junit4]   2> 4237743 T14503 C3403 oasc.SolrCore.execute [collection1] webapp=/solr path=/replication params={command=details} status=0 QTime=0 
 [junit4]   2> 4237861 T14502 C3403 oasc.SolrCore.execute [collection1] webapp=/solr path=/replication params={command=details} status=0 QTime=0 
 [junit4]   2> 4237979 T14504 C3403 oasc.SolrCore.execute [collection1] webapp=/solr path=/replication params={command=details} status=0 QTime=0 
 [junit4]   2> 4238097 T14500 C3403 oasc.SolrCore.execute [collection1] webapp=/solr path=/replication params={command=details} status=0 QTime=0 
 [junit4]   2> 4238214 T14505 C3403 oasc.SolrCore.execute [collection1] webapp=/solr path=/replication params={command=details} status=0 QTime=0 
 [junit4]   2> 4238332 T14503 C3403 oasc.SolrCore.execute [collection1] webapp=/solr path=/replication params={command=details} status=0 QTime=0 
 [junit4]   2> 4238450 T14502 C3403 oasc.SolrCore.execute [collection1] webapp=/solr path=/replication params={command=details} status=0 QTime=1 
 [junit4]   2> 4238567 T14504 C3403 oasc.SolrCore.execute [collection1] webapp=/solr path=/replication params={command=details} status=0 QTime=0 
 [junit4]   2> 4238686 T14500 C3403 oasc.SolrCore.execute [collection1] webapp=/solr path=/replication params={command=details} status=0 QTime=0 
 [junit4]   2> 4238804 T14505 C3403 oasc.SolrCore.execute [collection1] webapp=/solr path=/replication params={command=details} status=0 QTime=0 
 [junit4]   2> 4238922 T14503 C3403 oasc.SolrCore.execute [collection1] webapp=/solr path=/replication params={command=details} status=0 QTime=1 
 [junit4]   2> 4239039 T14502 C3403 oasc.SolrCore.execute [collection1] webapp=/solr path=/replication params={command=details} status=0 QTime=0 
 [junit4]   2> 4239158 T14504 C3403 oasc.SolrCore.execute [collection1] webapp=/solr path=/replication params={command=details} status=0 QTime=0 
 [junit4]   2> 4239276 T14500 C3403 oasc.SolrCore.execute [collection1] webapp=/solr path=/replication params={command=details} status=0 QTime=0 
 [junit4]   2> 4239394 T14505 C3403 oasc.SolrCore.execute [collection1] webapp=/solr path=/replication params={command=details} status=0 QTime=1 
 [junit4]   2> 4239511 T14503 C3403 oasc.SolrCore.execute [collection1] webapp=/solr path=/replication 

[jira] [Commented] (SOLR-6151) Intermittent TestReplicationHandlerBackup failures

2014-06-10 Thread Dawid Weiss (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6151?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14026175#comment-14026175
 ] 

Dawid Weiss commented on SOLR-6151:
---

Hi Varun, 

I think we can let it run indefinitely since there is already a timeout for any 
test at the suite level (and this should be way longer than needed to create 
that backup).
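
For reference, that suite-level bound comes from the randomizedtesting 
annotation, roughly like this (the value here is only an example, not the 
project default):

{code}
import com.carrotsearch.randomizedtesting.annotations.TimeoutSuite;
import org.apache.lucene.util.TimeUnits;
import org.apache.solr.SolrTestCaseJ4;

// The runner kills the whole suite after this long, so an uncapped polling
// loop inside a single test still cannot run forever.
@TimeoutSuite(millis = 2 * TimeUnits.HOUR)
public class TestReplicationHandlerBackup extends SolrTestCaseJ4 {
  // ...
}
{code}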

 Intermittent TestReplicationHandlerBackup failures
 --

 Key: SOLR-6151
 URL: https://issues.apache.org/jira/browse/SOLR-6151
 Project: Solr
  Issue Type: Bug
Reporter: Dawid Weiss
Assignee: Dawid Weiss
Priority: Minor
 Attachments: SOLR-6151.patch


 {code}
 [junit4]   2> 4236563 T14503 C3403 oasc.SolrCore.execute [collection1] webapp=/solr path=/replication params={command=details} status=0 QTime=0 
 [junit4]   2> 4236567 T14502 C3403 oasc.SolrCore.execute [collection1] webapp=/solr path=/replication params={command=backup&name=cphlpigzwamrxxekj} status=0 QTime=5 
 [junit4]   2> 4236567 T14511 oash.SnapShooter.createSnapshot Creating backup snapshot...
 [junit4]   2> 4236682 T14505 C3403 oasc.SolrCore.execute [collection1] webapp=/solr path=/replication params={command=details} status=0 QTime=0 
 [junit4]   2> 4237270 T14503 C3403 oasc.SolrCore.execute [collection1] webapp=/solr path=/replication params={command=details} status=0 QTime=0 
 [junit4]   2> 4237275 T14502 C3403 oasc.SolrCore.execute [collection1] webapp=/solr path=/replication params={command=backup&name=zviqwpynhbjdbiqofwa} status=0 QTime=4 
 [junit4]   2> 4237277 T14513 oash.SnapShooter.createSnapshot Creating backup snapshot...
 [junit4]   2> 4237390 T14504 C3403 oasc.SolrCore.execute [collection1] webapp=/solr path=/replication params={command=details} status=0 QTime=0 
 [junit4]   2> 4237508 T14500 C3403 oasc.SolrCore.execute [collection1] webapp=/solr path=/replication params={command=details} status=0 QTime=0 
 [junit4]   2> 4237626 T14505 C3403 oasc.SolrCore.execute [collection1] webapp=/solr path=/replication params={command=details} status=0 QTime=1 
 [junit4]   2> 4237743 T14503 C3403 oasc.SolrCore.execute [collection1] webapp=/solr path=/replication params={command=details} status=0 QTime=0 
 [junit4]   2> 4237861 T14502 C3403 oasc.SolrCore.execute [collection1] webapp=/solr path=/replication params={command=details} status=0 QTime=0 
 [junit4]   2> 4237979 T14504 C3403 oasc.SolrCore.execute [collection1] webapp=/solr path=/replication params={command=details} status=0 QTime=0 
 [junit4]   2> 4238097 T14500 C3403 oasc.SolrCore.execute [collection1] webapp=/solr path=/replication params={command=details} status=0 QTime=0 
 [junit4]   2> 4238214 T14505 C3403 oasc.SolrCore.execute [collection1] webapp=/solr path=/replication params={command=details} status=0 QTime=0 
 [junit4]   2> 4238332 T14503 C3403 oasc.SolrCore.execute [collection1] webapp=/solr path=/replication params={command=details} status=0 QTime=0 
 [junit4]   2> 4238450 T14502 C3403 oasc.SolrCore.execute [collection1] webapp=/solr path=/replication params={command=details} status=0 QTime=1 
 [junit4]   2> 4238567 T14504 C3403 oasc.SolrCore.execute [collection1] webapp=/solr path=/replication params={command=details} status=0 QTime=0 
 [junit4]   2> 4238686 T14500 C3403 oasc.SolrCore.execute [collection1] webapp=/solr path=/replication params={command=details} status=0 QTime=0 
 [junit4]   2> 4238804 T14505 C3403 oasc.SolrCore.execute [collection1] webapp=/solr path=/replication params={command=details} status=0 QTime=0 
 [junit4]   2> 4238922 T14503 C3403 oasc.SolrCore.execute [collection1] webapp=/solr path=/replication params={command=details} status=0 QTime=1 
 [junit4]   2> 4239039 T14502 C3403 oasc.SolrCore.execute [collection1] webapp=/solr path=/replication params={command=details} status=0 QTime=0 
 [junit4]   2> 4239158 T14504 C3403 oasc.SolrCore.execute [collection1] webapp=/solr path=/replication params={command=details} status=0 QTime=0 
 [junit4]   2> 4239276 T14500 C3403 oasc.SolrCore.execute [collection1] webapp=/solr path=/replication params={command=details} status=0 QTime=0 
 [junit4]   2> 4239394 T14505 C3403 oasc.SolrCore.execute [collection1] webapp=/solr path=/replication params={command=details} status=0 QTime=1 
 [junit4]   2> 4239511 T14503 C3403 oasc.SolrCore.execute [collection1] webapp=/solr path=/replication params={command=details} status=0 QTime=0 
 [junit4]   2> 4239629 T14502 C3403 oasc.SolrCore.execute [collection1] webapp=/solr path=/replication params={command=details} status=0 QTime=0 
 [junit4]   2> 4239747 T14496 oas.SolrTestCaseJ4.tearDown ###Ending doTestBackup
 [junit4]   2> 4239756 T14496 oasc.CoreContainer.shutdown Shutting down CoreContainer 

[jira] [Created] (LUCENE-5749) analyzers should be further customizable to allow for better code reuse

2014-06-10 Thread Jamie (JIRA)
Jamie created LUCENE-5749:
-

 Summary: analyzers should be further customizable to allow for 
better code reuse
 Key: LUCENE-5749
 URL: https://issues.apache.org/jira/browse/LUCENE-5749
 Project: Lucene - Core
  Issue Type: Improvement
  Components: modules/analysis
Affects Versions: 4.8.1
 Environment: All
Reporter: Jamie


To promote code reuse, the customizability of the analyzers included with 
Lucene (e.g. EnglishAnalyzer) ought to be further improved. 

To illustrate, it is currently difficult to specify general stemming behavior 
without having to modify each and every analyzer class. In our case, we had to 
change the constructors of every analyzer class to accept an AnalyzerOption 
argument. 

The AnalyzerOption class has a getStemStrategy() method. StemStrategy is 
defined as follows:

public enum StemStrategy { AGGRESSIVE,  LIGHT, NONE }; 

We needed to modify over 20 or so Lucene classes. This is obviously not ideal 
from a code reuse and maintainability standpoint. 
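
As a concrete (hypothetical) sketch of the shape of that change, using the 
AnalyzerOption and StemStrategy described above (our own classes, not Lucene 
API), one modified analyzer would look like:

{code}
import java.io.Reader;
import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.Tokenizer;
import org.apache.lucene.analysis.core.LowerCaseFilter;
import org.apache.lucene.analysis.core.StopFilter;
import org.apache.lucene.analysis.en.EnglishAnalyzer;
import org.apache.lucene.analysis.en.EnglishMinimalStemFilter;
import org.apache.lucene.analysis.en.PorterStemFilter;
import org.apache.lucene.analysis.standard.StandardTokenizer;
import org.apache.lucene.analysis.util.StopwordAnalyzerBase;
import org.apache.lucene.util.Version;

public final class OptionedEnglishAnalyzer extends StopwordAnalyzerBase {
  private final StemStrategy strategy;

  public OptionedEnglishAnalyzer(Version version, AnalyzerOption option) {
    super(version, EnglishAnalyzer.getDefaultStopSet());
    this.strategy = option.getStemStrategy(); // AGGRESSIVE, LIGHT or NONE
  }

  @Override
  protected TokenStreamComponents createComponents(String field, Reader reader) {
    Tokenizer source = new StandardTokenizer(matchVersion, reader);
    TokenStream result = new LowerCaseFilter(matchVersion, source);
    result = new StopFilter(matchVersion, result, stopwords);
    switch (strategy) {
      case AGGRESSIVE: result = new PorterStemFilter(result); break;
      case LIGHT:      result = new EnglishMinimalStemFilter(result); break;
      case NONE:       break;
    }
    return new TokenStreamComponents(source, result);
  }
}
{code}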






[jira] [Updated] (LUCENE-5749) analyzers should be further customizable to allow for better code reuse

2014-06-10 Thread Jamie (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5749?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jamie updated LUCENE-5749:
--

Priority: Minor  (was: Major)

 analyzers should be further customizable to allow for better code reuse
 ---

 Key: LUCENE-5749
 URL: https://issues.apache.org/jira/browse/LUCENE-5749
 Project: Lucene - Core
  Issue Type: Improvement
  Components: modules/analysis
Affects Versions: 4.8.1
 Environment: All
Reporter: Jamie
Priority: Minor
  Labels: analyzers

 To promote code reuse, the customizability of the analyzers included with 
 Lucene (e.g. EnglishAnalyzer) ought to be further improved. 
 To illustrate, it is currently difficult to specify general stemming behavior 
 without having to modify each and every analyzer class. In our case, we had 
 to change the constructors of every analyzer class to accept an 
 AnalyzerOption argument. 
 The AnalyzerOption class has a getStemStrategy() method. StemStrategy is 
 defined as follows:
 public enum StemStrategy { AGGRESSIVE,  LIGHT, NONE }; 
 We needed to modify over 20 or so Lucene classes. This is obviously not ideal 
 from a code reuse and maintainability standpoint. 






[JENKINS] Lucene-trunk-Linux-java7-64-analyzers - Build # 7285 - Failure!

2014-06-10 Thread builder
Build: builds.flonkings.com/job/Lucene-trunk-Linux-java7-64-analyzers/7285/

No tests ran.

Build Log:
[...truncated 14 lines...]
ERROR: Failed to update http://svn.apache.org/repos/asf/lucene/dev/trunk
org.tmatesoft.svn.core.SVNException: svn: E175002: OPTIONS 
/repos/asf/lucene/dev/trunk failed
at 
org.tmatesoft.svn.core.internal.io.dav.http.HTTPConnection.request(HTTPConnection.java:388)
at 
org.tmatesoft.svn.core.internal.io.dav.http.HTTPConnection.request(HTTPConnection.java:373)
at 
org.tmatesoft.svn.core.internal.io.dav.http.HTTPConnection.request(HTTPConnection.java:361)
at 
org.tmatesoft.svn.core.internal.io.dav.DAVConnection.performHttpRequest(DAVConnection.java:707)
at 
org.tmatesoft.svn.core.internal.io.dav.DAVConnection.exchangeCapabilities(DAVConnection.java:627)
at 
org.tmatesoft.svn.core.internal.io.dav.DAVConnection.open(DAVConnection.java:102)
at 
org.tmatesoft.svn.core.internal.io.dav.DAVRepository.openConnection(DAVRepository.java:1020)
at 
org.tmatesoft.svn.core.internal.io.dav.DAVRepository.getRepositoryUUID(DAVRepository.java:148)
at 
org.tmatesoft.svn.core.internal.wc16.SVNBasicDelegate.createRepository(SVNBasicDelegate.java:339)
at 
org.tmatesoft.svn.core.internal.wc16.SVNBasicDelegate.createRepository(SVNBasicDelegate.java:328)
at 
org.tmatesoft.svn.core.internal.wc16.SVNUpdateClient16.update(SVNUpdateClient16.java:482)
at 
org.tmatesoft.svn.core.internal.wc16.SVNUpdateClient16.doUpdate(SVNUpdateClient16.java:364)
at 
org.tmatesoft.svn.core.internal.wc16.SVNUpdateClient16.doUpdate(SVNUpdateClient16.java:274)
at 
org.tmatesoft.svn.core.internal.wc2.old.SvnOldUpdate.run(SvnOldUpdate.java:27)
at 
org.tmatesoft.svn.core.internal.wc2.old.SvnOldUpdate.run(SvnOldUpdate.java:11)
at 
org.tmatesoft.svn.core.internal.wc2.SvnOperationRunner.run(SvnOperationRunner.java:20)
at 
org.tmatesoft.svn.core.wc2.SvnOperationFactory.run(SvnOperationFactory.java:1238)
at org.tmatesoft.svn.core.wc2.SvnOperation.run(SvnOperation.java:294)
at 
org.tmatesoft.svn.core.wc.SVNUpdateClient.doUpdate(SVNUpdateClient.java:311)
at 
org.tmatesoft.svn.core.wc.SVNUpdateClient.doUpdate(SVNUpdateClient.java:291)
at 
org.tmatesoft.svn.core.wc.SVNUpdateClient.doUpdate(SVNUpdateClient.java:387)
at 
hudson.scm.subversion.UpdateUpdater$TaskImpl.perform(UpdateUpdater.java:157)
at 
hudson.scm.subversion.WorkspaceUpdater$UpdateTask.delegateTo(WorkspaceUpdater.java:161)
at hudson.scm.SubversionSCM$CheckOutTask.perform(SubversionSCM.java:910)
at hudson.scm.SubversionSCM$CheckOutTask.invoke(SubversionSCM.java:891)
at hudson.scm.SubversionSCM$CheckOutTask.invoke(SubversionSCM.java:874)
at hudson.FilePath.act(FilePath.java:920)
at hudson.FilePath.act(FilePath.java:893)
at hudson.scm.SubversionSCM.checkout(SubversionSCM.java:850)
at hudson.scm.SubversionSCM.checkout(SubversionSCM.java:788)
at hudson.model.AbstractProject.checkout(AbstractProject.java:1252)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.defaultCheckout(AbstractBuild.java:604)
at jenkins.scm.SCMCheckoutStrategy.checkout(SCMCheckoutStrategy.java:86)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.run(AbstractBuild.java:513)
at hudson.model.Run.execute(Run.java:1710)
at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:43)
at hudson.model.ResourceController.execute(ResourceController.java:88)
at hudson.model.Executor.run(Executor.java:231)
Caused by: svn: E175002: OPTIONS /repos/asf/lucene/dev/trunk failed
at 
org.tmatesoft.svn.core.SVNErrorMessage.create(SVNErrorMessage.java:208)
at 
org.tmatesoft.svn.core.SVNErrorMessage.create(SVNErrorMessage.java:154)
at 
org.tmatesoft.svn.core.SVNErrorMessage.create(SVNErrorMessage.java:97)
... 38 more
Caused by: org.tmatesoft.svn.core.SVNException: svn: E175002: OPTIONS request 
failed on '/repos/asf/lucene/dev/trunk'
svn: E175002: timed out waiting for server
at 
org.tmatesoft.svn.core.internal.wc.SVNErrorManager.error(SVNErrorManager.java:64)
at 
org.tmatesoft.svn.core.internal.wc.SVNErrorManager.error(SVNErrorManager.java:51)
at 
org.tmatesoft.svn.core.internal.io.dav.http.HTTPConnection._request(HTTPConnection.java:777)
at 
org.tmatesoft.svn.core.internal.io.dav.http.HTTPConnection.request(HTTPConnection.java:382)
... 37 more
Caused by: svn: E175002: OPTIONS request failed on '/repos/asf/lucene/dev/trunk'
at 
org.tmatesoft.svn.core.SVNErrorMessage.create(SVNErrorMessage.java:208)
at 
org.tmatesoft.svn.core.internal.io.dav.http.HTTPConnection._request(HTTPConnection.java:775)
... 38 more
Caused by: svn: E175002: timed out waiting for server
at 

[JENKINS] Lucene-trunk-Linux-Java7-64-test-only - Build # 87140 - Failure!

2014-06-10 Thread builder
Build: builds.flonkings.com/job/Lucene-trunk-Linux-Java7-64-test-only/87140/

No tests ran.

Build Log:
[...truncated 13 lines...]
ERROR: Failed to update http://svn.apache.org/repos/asf/lucene/dev/trunk
org.tmatesoft.svn.core.SVNException: svn: E175002: OPTIONS 
/repos/asf/lucene/dev/trunk failed
at 
org.tmatesoft.svn.core.internal.io.dav.http.HTTPConnection.request(HTTPConnection.java:388)
at 
org.tmatesoft.svn.core.internal.io.dav.http.HTTPConnection.request(HTTPConnection.java:373)
at 
org.tmatesoft.svn.core.internal.io.dav.http.HTTPConnection.request(HTTPConnection.java:361)
at 
org.tmatesoft.svn.core.internal.io.dav.DAVConnection.performHttpRequest(DAVConnection.java:707)
at 
org.tmatesoft.svn.core.internal.io.dav.DAVConnection.exchangeCapabilities(DAVConnection.java:627)
at 
org.tmatesoft.svn.core.internal.io.dav.DAVConnection.open(DAVConnection.java:102)
at 
org.tmatesoft.svn.core.internal.io.dav.DAVRepository.openConnection(DAVRepository.java:1020)
at 
org.tmatesoft.svn.core.internal.io.dav.DAVRepository.getRepositoryUUID(DAVRepository.java:148)
at 
org.tmatesoft.svn.core.internal.wc16.SVNBasicDelegate.createRepository(SVNBasicDelegate.java:339)
at 
org.tmatesoft.svn.core.internal.wc16.SVNBasicDelegate.createRepository(SVNBasicDelegate.java:328)
at 
org.tmatesoft.svn.core.internal.wc16.SVNUpdateClient16.update(SVNUpdateClient16.java:482)
at 
org.tmatesoft.svn.core.internal.wc16.SVNUpdateClient16.doUpdate(SVNUpdateClient16.java:364)
at 
org.tmatesoft.svn.core.internal.wc16.SVNUpdateClient16.doUpdate(SVNUpdateClient16.java:274)
at 
org.tmatesoft.svn.core.internal.wc2.old.SvnOldUpdate.run(SvnOldUpdate.java:27)
at 
org.tmatesoft.svn.core.internal.wc2.old.SvnOldUpdate.run(SvnOldUpdate.java:11)
at 
org.tmatesoft.svn.core.internal.wc2.SvnOperationRunner.run(SvnOperationRunner.java:20)
at 
org.tmatesoft.svn.core.wc2.SvnOperationFactory.run(SvnOperationFactory.java:1238)
at org.tmatesoft.svn.core.wc2.SvnOperation.run(SvnOperation.java:294)
at 
org.tmatesoft.svn.core.wc.SVNUpdateClient.doUpdate(SVNUpdateClient.java:311)
at 
org.tmatesoft.svn.core.wc.SVNUpdateClient.doUpdate(SVNUpdateClient.java:291)
at 
org.tmatesoft.svn.core.wc.SVNUpdateClient.doUpdate(SVNUpdateClient.java:387)
at 
hudson.scm.subversion.UpdateUpdater$TaskImpl.perform(UpdateUpdater.java:157)
at 
hudson.scm.subversion.WorkspaceUpdater$UpdateTask.delegateTo(WorkspaceUpdater.java:161)
at hudson.scm.SubversionSCM$CheckOutTask.perform(SubversionSCM.java:910)
at hudson.scm.SubversionSCM$CheckOutTask.invoke(SubversionSCM.java:891)
at hudson.scm.SubversionSCM$CheckOutTask.invoke(SubversionSCM.java:874)
at hudson.FilePath.act(FilePath.java:920)
at hudson.FilePath.act(FilePath.java:893)
at hudson.scm.SubversionSCM.checkout(SubversionSCM.java:850)
at hudson.scm.SubversionSCM.checkout(SubversionSCM.java:788)
at hudson.model.AbstractProject.checkout(AbstractProject.java:1252)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.defaultCheckout(AbstractBuild.java:604)
at jenkins.scm.SCMCheckoutStrategy.checkout(SCMCheckoutStrategy.java:86)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.run(AbstractBuild.java:513)
at hudson.model.Run.execute(Run.java:1710)
at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:43)
at hudson.model.ResourceController.execute(ResourceController.java:88)
at hudson.model.Executor.run(Executor.java:231)
Caused by: svn: E175002: OPTIONS /repos/asf/lucene/dev/trunk failed
at 
org.tmatesoft.svn.core.SVNErrorMessage.create(SVNErrorMessage.java:208)
at 
org.tmatesoft.svn.core.SVNErrorMessage.create(SVNErrorMessage.java:154)
at 
org.tmatesoft.svn.core.SVNErrorMessage.create(SVNErrorMessage.java:97)
... 38 more
Caused by: org.tmatesoft.svn.core.SVNException: svn: E175002: OPTIONS request 
failed on '/repos/asf/lucene/dev/trunk'
svn: E175002: timed out waiting for server
at 
org.tmatesoft.svn.core.internal.wc.SVNErrorManager.error(SVNErrorManager.java:64)
at 
org.tmatesoft.svn.core.internal.wc.SVNErrorManager.error(SVNErrorManager.java:51)
at 
org.tmatesoft.svn.core.internal.io.dav.http.HTTPConnection._request(HTTPConnection.java:777)
at 
org.tmatesoft.svn.core.internal.io.dav.http.HTTPConnection.request(HTTPConnection.java:382)
... 37 more
Caused by: svn: E175002: OPTIONS request failed on '/repos/asf/lucene/dev/trunk'
at 
org.tmatesoft.svn.core.SVNErrorMessage.create(SVNErrorMessage.java:208)
at 
org.tmatesoft.svn.core.internal.io.dav.http.HTTPConnection._request(HTTPConnection.java:775)
... 38 more
Caused by: svn: E175002: timed out waiting for server
at 

[jira] [Created] (SOLR-6156) Exception while using group with timeAllowed on SolrCloud.

2014-06-10 Thread Modassar Ather (JIRA)
Modassar Ather created SOLR-6156:


 Summary: Exception while using group with timeAllowed on SolrCloud.
 Key: SOLR-6156
 URL: https://issues.apache.org/jira/browse/SOLR-6156
 Project: Solr
  Issue Type: Bug
Reporter: Modassar Ather


The following exception is thrown when using grouping with timeAllowed. The 
Solr version used is 4.8.0.
SEVERE: 
null:org.apache.solr.client.solrj.impl.HttpSolrServer$RemoteSolrException: 
java.lang.NullPointerException
at 
org.apache.lucene.search.TimeLimitingCollector.setNextReader(TimeLimitingCollector.java:158)
at 
org.apache.lucene.search.MultiCollector.setNextReader(MultiCollector.java:113)
at org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:612)
at org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:297)
at 
org.apache.solr.search.grouping.CommandHandler.searchWithTimeLimiter(CommandHandler.java:219)
at 
org.apache.solr.search.grouping.CommandHandler.execute(CommandHandler.java:156)
at 
org.apache.solr.handler.component.QueryComponent.process(QueryComponent.java:338)
at 
org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:218)
at 
org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:135)
at org.apache.solr.core.SolrCore.execute(SolrCore.java:1952)
at 
org.apache.solr.servlet.SolrDispatchFilter.execute(SolrDispatchFilter.java:774)
at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:418)
at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:207)
 






[jira] [Commented] (SOLR-6156) Exception while using group with timeAllowed on SolrCloud.

2014-06-10 Thread Christine Poerschke (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6156?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14026280#comment-14026280
 ] 

Christine Poerschke commented on SOLR-6156:
---

Hi [~modassar] - I have come across something similar with a test Solr 
instance; the crucial combination there was 
{noformat}group=true&timeAllowed=...&rows=0{noformat} and non-zero rows values 
were fine. Does your request use rows=0 as well, and/or could you share an 
example request that reproduces the exception?
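
For concreteness, a request of roughly this shape (host, collection and field 
name are placeholders) would exercise that combination:

{noformat}
http://localhost:8983/solr/collection1/select?q=*:*&group=true&group.field=FIELD_NAME&timeAllowed=1000&rows=0
{noformat}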

 Exception while using group with timeAllowed on SolrCloud.
 --

 Key: SOLR-6156
 URL: https://issues.apache.org/jira/browse/SOLR-6156
 Project: Solr
  Issue Type: Bug
Reporter: Modassar Ather

 Following exception is thrown when using grouping with timeAllowed. Solr 
 version used is 4.8.0.
 SEVERE: 
 null:org.apache.solr.client.solrj.impl.HttpSolrServer$RemoteSolrException: 
 java.lang.NullPointerException
 at 
 org.apache.lucene.search.TimeLimitingCollector.setNextReader(TimeLimitingCollector.java:158)
 at 
 org.apache.lucene.search.MultiCollector.setNextReader(MultiCollector.java:113)
 at 
 org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:612)
 at 
 org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:297)
 at 
 org.apache.solr.search.grouping.CommandHandler.searchWithTimeLimiter(CommandHandler.java:219)
 at 
 org.apache.solr.search.grouping.CommandHandler.execute(CommandHandler.java:156)
 at 
 org.apache.solr.handler.component.QueryComponent.process(QueryComponent.java:338)
 at 
 org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:218)
 at 
 org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:135)
 at org.apache.solr.core.SolrCore.execute(SolrCore.java:1952)
 at 
 org.apache.solr.servlet.SolrDispatchFilter.execute(SolrDispatchFilter.java:774)
 at 
 org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:418)
 at 
 org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:207)
  






[jira] [Commented] (LUCENE-5749) analyzers should be further customizable to allow for better code reuse

2014-06-10 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5749?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14026292#comment-14026292
 ] 

Robert Muir commented on LUCENE-5749:
-

Personally I don't think the analyzers should have all the options they have 
today: they should just be simple, practical examples.

It's really like 2 or 3 lines of code to make your own analyzer. 

Maybe the problem is that since Analyzer is Java, people see it as code when 
it's really just a definition. I would rather change the analyzers to just be 
default configurations in a text file or something so that people won't want to 
extend them anymore :)
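
As an illustration, a minimal custom chain against the 4.x API (a sketch, not 
any particular bundled analyzer):

{code}
import java.io.Reader;
import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.Tokenizer;
import org.apache.lucene.analysis.core.LowerCaseFilter;
import org.apache.lucene.analysis.en.PorterStemFilter;
import org.apache.lucene.analysis.standard.StandardTokenizer;
import org.apache.lucene.util.Version;

Analyzer analyzer = new Analyzer() {
  @Override
  protected TokenStreamComponents createComponents(String fieldName, Reader reader) {
    Tokenizer source = new StandardTokenizer(Version.LUCENE_48, reader);
    TokenStream result = new LowerCaseFilter(Version.LUCENE_48, source);
    result = new PorterStemFilter(result); // swap in whatever stemmer you want
    return new TokenStreamComponents(source, result);
  }
};
{code}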

 analyzers should be further customizable to allow for better code reuse
 ---

 Key: LUCENE-5749
 URL: https://issues.apache.org/jira/browse/LUCENE-5749
 Project: Lucene - Core
  Issue Type: Improvement
  Components: modules/analysis
Affects Versions: 4.8.1
 Environment: All
Reporter: Jamie
Priority: Minor
  Labels: analyzers

 To promote code reuse, the customizability of the analyzers included with 
 Lucene (e.g. EnglishAnalyzer) ought to be further improved. 
 To illustrate, it is currently difficult to specify general stemming behavior 
 without having to modify each and every analyzer class. In our case, we had 
 to change the constructors of every analyzer class to accept an 
 AnalyzerOption argument. 
 The AnalyzerOption class has a getStemStrategy() method. StemStrategy is 
 defined as follows:
 public enum StemStrategy { AGGRESSIVE,  LIGHT, NONE }; 
 We needed to modify over 20 or so Lucene classes. This is obviously not ideal 
 from a code reuse and maintainability standpoint. 






[jira] [Commented] (LUCENE-5749) analyzers should be further customizable to allow for better code reuse

2014-06-10 Thread Jamie (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5749?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14026294#comment-14026294
 ] 

Jamie commented on LUCENE-5749:
---

It's not really two lines of code. There are many analyzers.

 analyzers should be further customizable to allow for better code reuse
 ---

 Key: LUCENE-5749
 URL: https://issues.apache.org/jira/browse/LUCENE-5749
 Project: Lucene - Core
  Issue Type: Improvement
  Components: modules/analysis
Affects Versions: 4.8.1
 Environment: All
Reporter: Jamie
Priority: Minor
  Labels: analyzers

 To promote code reuse, the customizability of the analyzers included with 
 Lucene (e.g. EnglishAnalyzer) ought to be further improved. 
 To illustrate, it is currently difficult to specify general stemming behavior 
 without having to modify each and every analyzer class. In our case, we had 
 to change the constructors of every analyzer class to accept an 
 AnalyzerOption argument. 
 The AnalyzerOption class has a getStemStrategy() method. StemStrategy is 
 defined as follows:
 public enum StemStrategy { AGGRESSIVE,  LIGHT, NONE }; 
 We needed to modify over 20 or so Lucene classes. This is obviously not ideal 
 from a code reuse and maintainability standpoint. 






[jira] [Commented] (SOLR-6156) Exception while using group with timeAllowed on SolrCloud.

2014-06-10 Thread Modassar Ather (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6156?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14026297#comment-14026297
 ] 

Modassar Ather commented on SOLR-6156:
--

Hi [~cpoerschke]

With timeAllowed, the following query was used without the rows parameter, and 
it caused the exception shown in the description of this ticket:
group=true&group.field=FIELD_NAME

 Exception while using group with timeAllowed on SolrCloud.
 --

 Key: SOLR-6156
 URL: https://issues.apache.org/jira/browse/SOLR-6156
 Project: Solr
  Issue Type: Bug
Reporter: Modassar Ather

 Following exception is thrown when using grouping with timeAllowed. Solr 
 version used is 4.8.0.
 SEVERE: 
 null:org.apache.solr.client.solrj.impl.HttpSolrServer$RemoteSolrException: 
 java.lang.NullPointerException
 at 
 org.apache.lucene.search.TimeLimitingCollector.setNextReader(TimeLimitingCollector.java:158)
 at 
 org.apache.lucene.search.MultiCollector.setNextReader(MultiCollector.java:113)
 at 
 org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:612)
 at 
 org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:297)
 at 
 org.apache.solr.search.grouping.CommandHandler.searchWithTimeLimiter(CommandHandler.java:219)
 at 
 org.apache.solr.search.grouping.CommandHandler.execute(CommandHandler.java:156)
 at 
 org.apache.solr.handler.component.QueryComponent.process(QueryComponent.java:338)
 at 
 org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:218)
 at 
 org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:135)
 at org.apache.solr.core.SolrCore.execute(SolrCore.java:1952)
 at 
 org.apache.solr.servlet.SolrDispatchFilter.execute(SolrDispatchFilter.java:774)
 at 
 org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:418)
 at 
 org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:207)
  






[jira] [Commented] (LUCENE-5749) analyzers should be further customizable to allow for better code reuse

2014-06-10 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5749?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14026299#comment-14026299
 ] 

Robert Muir commented on LUCENE-5749:
-

Right, it's zero lines of code actually. These analyzers are just 
default/example chains. They aren't code.
It's just not feasible or even wanted to add such options to them; it's too 
difficult to maintain. The current analyzers already look hellacious because of 
the existing constraints like backwards compatibility that we have to lug 
around for years. 

Personally, stuff like back-compat on what are just default definitions, 
Versions, stopword options, etc. totally discourages me from improving any of 
the existing analyzers (I would rather avoid the hassle), even though quite a 
few aren't in great shape and could use better defaults or algorithms. 

If you want to do something expert like change the default stemming algorithm, 
please define your own chain. It's really not that hard.


 analyzers should be further customizable to allow for better code reuse
 ---

 Key: LUCENE-5749
 URL: https://issues.apache.org/jira/browse/LUCENE-5749
 Project: Lucene - Core
  Issue Type: Improvement
  Components: modules/analysis
Affects Versions: 4.8.1
 Environment: All
Reporter: Jamie
Priority: Minor
  Labels: analyzers

 To promote code reuse, the customizability of the analyzers included with 
 Lucene (e.g. EnglishAnalyzer) ought to be further improved. 
 To illustrate, it is currently difficult to specify general stemming behavior 
 without having to modify each and every analyzer class. In our case, we had 
 to change the constructors of every analyzer class to accept an 
 AnalyzerOption argument. 
 The AnalyzerOption class has a getStemStrategy() method. StemStrategy is 
 defined as follows:
 public enum StemStrategy { AGGRESSIVE,  LIGHT, NONE }; 
 We needed to modify over 20 or so Lucene classes. This is obviously not ideal 
 from a code reuse and maintainability standpoint. 






[jira] [Commented] (LUCENE-5749) analyzers should be further customizable to allow for better code reuse

2014-06-10 Thread Jamie (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5749?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14026302#comment-14026302
 ] 

Jamie commented on LUCENE-5749:
---

Robert, I've already completed the exercise. It wasn't hard at all, just 
laborious and time-consuming. There are something like twenty or more classes 
that need to be changed.

 analyzers should be further customizable to allow for better code reuse
 ---

 Key: LUCENE-5749
 URL: https://issues.apache.org/jira/browse/LUCENE-5749
 Project: Lucene - Core
  Issue Type: Improvement
  Components: modules/analysis
Affects Versions: 4.8.1
 Environment: All
Reporter: Jamie
Priority: Minor
  Labels: analyzers

 To promote code reuse, the customizability of the analyzers included with 
 Lucene (e.g. EnglishAnalyzer) ought to be further improved. 
 To illustrate, it is currently difficult to specify general stemming behavior 
 without having to modify each and every analyzer class. In our case, we had 
 to change the constructors of every analyzer class to accept an 
 AnalyzerOption argument. 
 The AnalyzerOption class has a getStemStrategy() method. StemStrategy is 
 defined as follows:
 public enum StemStrategy { AGGRESSIVE,  LIGHT, NONE }; 
 We needed to modify over 20 or so Lucene classes. This is obviously not ideal 
 from a code reuse and maintainability standpoint. 






[GitHub] lucene-solr pull request: SolrIndexSearcher makes no DelegatingCol...

2014-06-10 Thread cpoerschke
GitHub user cpoerschke opened a pull request:

https://github.com/apache/lucene-solr/pull/57

SolrIndexSearcher makes no DelegatingCollector.finish() call when ...

SolrIndexSearcher makes no DelegatingCollector.finish() call when 
IndexSearcher throws an expected exception. This seems like an omission.

This pull request is for https://issues.apache.org/jira/browse/SOLR-6087, 
re-baselined against trunk, which now contains the SOLR-6067 refactor.


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/bloomberg/lucene-solr trunk-solr-6087

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/lucene-solr/pull/57.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #57


commit 6f16304a044307a37af58eb92361f8a0e20a5be1
Author: Christine Poerschke cpoersc...@bloomberg.net
Date:   2014-05-16T13:33:59Z

solr: SolrIndexSearcher makes no DelegatingCollector.finish() call when 
IndexSearcher throws an expected exception. This seems like an omission.




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---




[jira] [Commented] (SOLR-6087) SolrIndexSearcher makes no DelegatingCollector.finish() call when IndexSearcher throws an expected exception.

2014-06-10 Thread Christine Poerschke (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6087?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14026308#comment-14026308
 ] 

Christine Poerschke commented on SOLR-6087:
---

https://github.com/apache/lucene-solr/pull/57 re-baselines against trunk, 
which now contains the SOLR-6067 refactor changes.

An alternative change would be to have a 'finally' block which calls the 
'finish()' method.

However, on the basis that exceptions other than TimeExceededException and 
EarlyTerminatingCollectorException may well have been caused or thrown by one 
of the collectors in the collector chain, I think it's best to call finish() 
only for known, expected code paths, i.e. don't potentially call finish() on a 
collector that is already in trouble. Also, the 'finish()' method itself could 
throw.
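
A sketch of that expected-path pattern (simplified; not the exact patch):

{code}
import java.io.IOException;
import org.apache.lucene.search.Collector;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.Query;
import org.apache.lucene.search.TimeLimitingCollector;
import org.apache.solr.search.DelegatingCollector;
import org.apache.solr.search.EarlyTerminatingCollectorException;

// finish() runs only after the search completed normally or ended via one of
// the two expected exceptions, never after an unexpected failure that may
// have originated inside the collector chain.
void searchAndFinish(IndexSearcher searcher, Query query, Collector collector) throws IOException {
  try {
    searcher.search(query, collector);
  } catch (TimeLimitingCollector.TimeExceededException x) {
    // expected: timeAllowed exceeded, results are partial
  } catch (EarlyTerminatingCollectorException x) {
    // expected: enough documents collected
  }
  if (collector instanceof DelegatingCollector) {
    ((DelegatingCollector) collector).finish(); // note: may itself throw
  }
}
{code}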


 SolrIndexSearcher makes no DelegatingCollector.finish() call when 
 IndexSearcher throws an expected exception.
 -

 Key: SOLR-6087
 URL: https://issues.apache.org/jira/browse/SOLR-6087
 Project: Solr
  Issue Type: Bug
Reporter: Christine Poerschke
Priority: Minor

 This seems like an omission. github pull request with proposed change to 
 follow.






[jira] [Commented] (SOLR-6087) SolrIndexSearcher makes no DelegatingCollector.finish() call when IndexSearcher throws an expected exception.

2014-06-10 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6087?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14026306#comment-14026306
 ] 

ASF GitHub Bot commented on SOLR-6087:
--

GitHub user cpoerschke opened a pull request:

https://github.com/apache/lucene-solr/pull/57

SolrIndexSearcher makes no DelegatingCollector.finish() call when ...

SolrIndexSearcher makes no DelegatingCollector.finish() call when 
IndexSearcher throws an expected exception. This seems like an omission.

This pull request is for https://issues.apache.org/jira/browse/SOLR-6087, 
re-baselined against trunk, which now contains the SOLR-6067 refactor.


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/bloomberg/lucene-solr trunk-solr-6087

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/lucene-solr/pull/57.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #57


commit 6f16304a044307a37af58eb92361f8a0e20a5be1
Author: Christine Poerschke cpoersc...@bloomberg.net
Date:   2014-05-16T13:33:59Z

solr: SolrIndexSearcher makes no DelegatingCollector.finish() call when 
IndexSearcher throws an expected exception. This seems like an omission.




 SolrIndexSearcher makes no DelegatingCollector.finish() call when 
 IndexSearcher throws an expected exception.
 -

 Key: SOLR-6087
 URL: https://issues.apache.org/jira/browse/SOLR-6087
 Project: Solr
  Issue Type: Bug
Reporter: Christine Poerschke
Priority: Minor

 This seems like an omission. github pull request with proposed change to 
 follow.






[jira] [Commented] (SOLR-6067) add buildAndRunCollectorChain methods to reduce code duplication in SolrIndexSearcher

2014-06-10 Thread Christine Poerschke (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6067?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14026311#comment-14026311
 ] 

Christine Poerschke commented on SOLR-6067:
---

Thanks Hoss. Sure, the Grouping.java changes can become a separate issue.

 add buildAndRunCollectorChain methods to reduce code duplication in 
 SolrIndexSearcher
 -

 Key: SOLR-6067
 URL: https://issues.apache.org/jira/browse/SOLR-6067
 Project: Solr
  Issue Type: Improvement
Reporter: Christine Poerschke
Priority: Minor
 Fix For: 4.9, 5.0

 Attachments: SOLR-6067.patch, SOLR-6067.patch


 https://github.com/apache/lucene-solr/pull/48 has the proposed change. 






[jira] [Commented] (LUCENE-5743) new 4.9 norms format

2014-06-10 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5743?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14026350#comment-14026350
 ] 

ASF subversion and git services commented on LUCENE-5743:
-

Commit 1601606 from [~rcmuir] in branch 'dev/trunk'
[ https://svn.apache.org/r1601606 ]

LUCENE-5743: Add Lucene49NormsFormat

 new 4.9 norms format
 

 Key: LUCENE-5743
 URL: https://issues.apache.org/jira/browse/LUCENE-5743
 Project: Lucene - Core
  Issue Type: New Feature
Reporter: Robert Muir
 Attachments: LUCENE-5743.patch


 Norms can eat up a lot of RAM, since by default it's 8 bits per field per 
 document. We rely upon users to omit them to not blow up RAM, but it's a 
 constant trap.
 Previously in 4.2, I tried to compress these by default, but it was too slow. 
 My mistakes were:
 * allowing slow bits per value like bpv=5 that are implemented with expensive 
 operations.
 * trying to wedge norms into the generalized docvalues numeric case
 * not handling simple degraded cases like a constant norm (the same norm 
 value for every document).
 Instead, we can just have a separate norms format that is very careful about 
 what it does, since we understand in general the patterns in the data:
 * uses CONSTANT compression (just writes the single value to metadata) when 
 all values are the same.
 * only compresses to bitsPerValue = 1,2,4 (this also happens often, for very 
 short text fields like person names and other stuff in structured data)
 * otherwise, if you would need 5,6,7,8 bits per value, we just continue to do 
 what we do today, encode as byte[]. Maybe we can improve this later, but this 
 ensures we don't have a performance impact.
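
A sketch of that selection logic (illustration only; the method and names here 
are made up, not the committed codec):

{code}
import org.apache.lucene.util.packed.PackedInts;

// Pick CONSTANT when every value is the same, a packed 1/2/4-bit
// representation when it fits, and fall back to plain byte[] otherwise.
static String chooseNormsEncoding(long minValue, long maxValue) {
  if (minValue == maxValue) {
    return "CONSTANT";                      // single value goes in metadata
  }
  int bpv = PackedInts.bitsRequired(maxValue - minValue);
  if (bpv == 3) {
    bpv = 4;                                // only 1, 2 and 4 bits are fast
  }
  if (bpv <= 4) {
    return "PACKED " + bpv + " bits/value";
  }
  return "BYTE_ARRAY";                      // 5-8 bits: byte[] as today
}
{code}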






[jira] [Commented] (LUCENE-5748) SORTED_NUMERIC dv type

2014-06-10 Thread Adrien Grand (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5748?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14026380#comment-14026380
 ] 

Adrien Grand commented on LUCENE-5748:
--

+1 I like it!

 SORTED_NUMERIC dv type
 --

 Key: LUCENE-5748
 URL: https://issues.apache.org/jira/browse/LUCENE-5748
 Project: Lucene - Core
  Issue Type: New Feature
Reporter: Robert Muir
 Attachments: LUCENE-5748.patch


 Currently for Strings you have SORTED and SORTED_SET, capable of single and 
 multiple values per document respectively.
 For multi-numerics, there are only a few choices:
 * encode with NumericUtils into byte[]'s and store with SORTED_SET.
 * encode yourself per-document into BINARY.
 Both of these techniques have problems: 
 SORTED_SET isn't bad if you just want to do basic sorting (e.g. min/max) or 
 faceting counts: most of the bloat in the terms dict is compressed away, 
 and it optimizes the case where the data is actually single-valued, but it 
 falls apart performance-wise if you want to do more complex stuff like Solr's 
 analytics component or Elasticsearch's aggregations: the ordinals just get in 
 your way and cause additional work, deref'ing each to a byte[] and then 
 decoding that back to a number. Worst of all, any mathematical calculations 
 are off because it discards frequency (deduplicates).
 Using your own custom encoding in BINARY removes the unnecessary ordinal 
 dereferencing, but you trade off bad compression and access: you have no real 
 choice but to do something like vInt within each byte[] for the doc, which 
 means even basic sorting (e.g. max) is slow as it's not constant time. There 
 is no chance for the codec to optimize things like dates with GCD compression 
 or optimize the single-valued case because it's just an opaque byte[].
 So I think it would be good to explore a simple long[] type that solves these 
 problems.
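
A sketch of the first workaround on the 4.x API (the field name is 
illustrative):

{code}
import org.apache.lucene.document.Document;
import org.apache.lucene.document.SortedSetDocValuesField;
import org.apache.lucene.util.BytesRef;
import org.apache.lucene.util.NumericUtils;

// Each numeric value is prefix-coded into a BytesRef and stored as one
// SORTED_SET entry; note the dedup problem described above: adding the same
// value twice keeps only one ordinal, losing the frequency.
static void addValue(Document doc, long value) {
  BytesRef bytes = new BytesRef(NumericUtils.BUF_SIZE_LONG);
  NumericUtils.longToPrefixCoded(value, 0, bytes);
  doc.add(new SortedSetDocValuesField("price", bytes));
}
{code}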






[jira] [Updated] (SOLR-6146) Leak in CloudSolrServer causing Too many open files

2014-06-10 Thread Shalin Shekhar Mangar (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6146?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shalin Shekhar Mangar updated SOLR-6146:


Attachment: SOLR-6146.patch

In this patch, I made sure that the ZkStateReader is closed on all exceptions, 
while ensuring that the exceptions themselves are not wrapped (to preserve 
back-compat). Varun shared a test which reproduces the problem and it is 
included with this patch.

I'll commit this shortly.
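
The shape of the fix, roughly (a simplified sketch of CloudSolrServer.connect(), 
not the patch itself):

{code}
import org.apache.solr.common.cloud.ZkStateReader;

// If initializing the cluster-state watchers throws, close the just-created
// ZkStateReader so its ZooKeeper connection is not leaked, then rethrow the
// original exception unwrapped to preserve back-compat.
ZkStateReader zk = new ZkStateReader(zkHost, zkClientTimeout, zkConnectTimeout);
try {
  zk.createClusterStateWatchersAndUpdate();
  this.zkStateReader = zk;
} catch (Exception e) {
  zk.close();
  throw e;
}
{code}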

 Leak in CloudSolrServer causing Too many open files
 -

 Key: SOLR-6146
 URL: https://issues.apache.org/jira/browse/SOLR-6146
 Project: Solr
  Issue Type: Bug
  Components: clients - java, SolrCloud
Affects Versions: 4.7
Reporter: Jessica Cheng
Assignee: Shalin Shekhar Mangar
  Labels: solrcloud, solrj
 Attachments: SOLR-6146.patch, SOLR-6146.patch, SOLR-6146.patch


 Due to a misconfiguration in one of our QA clusters, we uncovered a leak in 
 CloudSolrServer. If this line throws:
 https://github.com/apache/lucene-solr/blob/branch_4x/solr/solrj/src/java/org/apache/solr/client/solrj/impl/CloudSolrServer.java#L242
 then the instantiated ZkStateReader is leaked.
 Here's the stacktrace of the Exception (we're using a custom build so the 
 line numbers won't quite match up, but it gives the idea):
 at org.apache.solr.client.solrj.impl.CloudSolrServer.connect(CloudSolrServer.java:304)
 at org.apache.solr.client.solrj.impl.CloudSolrServer.requestWithRetryOnStaleState(CloudSolrServer.java:568)
 at org.apache.solr.client.solrj.impl.CloudSolrServer.request(CloudSolrServer.java:557)
 at org.apache.solr.client.solrj.request.AbstractUpdateRequest.process(AbstractUpdateRequest.java:117)
 at org.apache.solr.client.solrj.request.AbstractUpdateRequest.process(AbstractUpdateRequest.java:33)
 at com.apple.cie.search.client.crossdc.MirroredSolrRequestHandler.handleItem(MirroredSolrRequestHandler.java:100)
 at com.apple.cie.search.client.crossdc.MirroredSolrRequestHandler.handleItem(MirroredSolrRequestHandler.java:33)
 at com.apple.coda.queueing.CodaQueueConsumer$StreamProcessor.run(CodaQueueConsumer.java:147)
 at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
 at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
 at java.lang.Thread.run(Thread.java:722)
 Caused by: org.apache.zookeeper.KeeperException$NoNodeException: KeeperErrorCode = NoNode for /live_nodes
 at org.apache.zookeeper.KeeperException.create(KeeperException.java:111)
 at org.apache.zookeeper.KeeperException.create(KeeperException.java:51)
 at org.apache.zookeeper.ZooKeeper.getChildren(ZooKeeper.java:1468)
 at org.apache.solr.common.cloud.SolrZkClient$6.execute(SolrZkClient.java:256)
 at org.apache.solr.common.cloud.SolrZkClient$6.execute(SolrZkClient.java:253)
 at org.apache.solr.common.cloud.ZkCmdExecutor.retryOperation(ZkCmdExecutor.java:73)
 at org.apache.solr.common.cloud.SolrZkClient.getChildren(SolrZkClient.java:253)
 at org.apache.solr.common.cloud.ZkStateReader.createClusterStateWatchersAndUpdate(ZkStateReader.java:305)
 at org.apache.solr.client.solrj.impl.CloudSolrServer.createZkStateReader(CloudSolrServer.java:935)
 at org.apache.solr.client.solrj.impl.CloudSolrServer.connect(CloudSolrServer.java:298)
 ... 10 more






[jira] [Updated] (SOLR-6146) Leak in CloudSolrServer causing Too many open files

2014-06-10 Thread Shalin Shekhar Mangar (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6146?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shalin Shekhar Mangar updated SOLR-6146:


Attachment: SOLR-6146.patch

Added a comment to the test to explain how/where it fails exactly.

 Leak in CloudSolrServer causing Too many open files
 -

 Key: SOLR-6146
 URL: https://issues.apache.org/jira/browse/SOLR-6146
 Project: Solr
  Issue Type: Bug
  Components: clients - java, SolrCloud
Affects Versions: 4.7
Reporter: Jessica Cheng
Assignee: Shalin Shekhar Mangar
  Labels: solrcloud, solrj
 Attachments: SOLR-6146.patch, SOLR-6146.patch, SOLR-6146.patch, 
 SOLR-6146.patch


 Due to a misconfiguration in one of our QA clusters, we uncovered a leak in 
 CloudSolrServer. If this line throws:
 https://github.com/apache/lucene-solr/blob/branch_4x/solr/solrj/src/java/org/apache/solr/client/solrj/impl/CloudSolrServer.java#L242
 then the instantiated ZkStateReader is leaked.
 Here's the stacktrace of the Exception (we're using a custom build so the 
 line numbers won't quite match up, but it gives the idea):
 at org.apache.solr.client.solrj.impl.CloudSolrServer.connect(CloudSolrServer.java:304)
 at org.apache.solr.client.solrj.impl.CloudSolrServer.requestWithRetryOnStaleState(CloudSolrServer.java:568)
 at org.apache.solr.client.solrj.impl.CloudSolrServer.request(CloudSolrServer.java:557)
 at org.apache.solr.client.solrj.request.AbstractUpdateRequest.process(AbstractUpdateRequest.java:117)
 at org.apache.solr.client.solrj.request.AbstractUpdateRequest.process(AbstractUpdateRequest.java:33)
 at com.apple.cie.search.client.crossdc.MirroredSolrRequestHandler.handleItem(MirroredSolrRequestHandler.java:100)
 at com.apple.cie.search.client.crossdc.MirroredSolrRequestHandler.handleItem(MirroredSolrRequestHandler.java:33)
 at com.apple.coda.queueing.CodaQueueConsumer$StreamProcessor.run(CodaQueueConsumer.java:147)
 at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
 at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
 at java.lang.Thread.run(Thread.java:722)
 Caused by: org.apache.zookeeper.KeeperException$NoNodeException: KeeperErrorCode = NoNode for /live_nodes
 at org.apache.zookeeper.KeeperException.create(KeeperException.java:111)
 at org.apache.zookeeper.KeeperException.create(KeeperException.java:51)
 at org.apache.zookeeper.ZooKeeper.getChildren(ZooKeeper.java:1468)
 at org.apache.solr.common.cloud.SolrZkClient$6.execute(SolrZkClient.java:256)
 at org.apache.solr.common.cloud.SolrZkClient$6.execute(SolrZkClient.java:253)
 at org.apache.solr.common.cloud.ZkCmdExecutor.retryOperation(ZkCmdExecutor.java:73)
 at org.apache.solr.common.cloud.SolrZkClient.getChildren(SolrZkClient.java:253)
 at org.apache.solr.common.cloud.ZkStateReader.createClusterStateWatchersAndUpdate(ZkStateReader.java:305)
 at org.apache.solr.client.solrj.impl.CloudSolrServer.createZkStateReader(CloudSolrServer.java:935)
 at org.apache.solr.client.solrj.impl.CloudSolrServer.connect(CloudSolrServer.java:298)
 ... 10 more






[jira] [Commented] (SOLR-6146) Leak in CloudSolrServer causing Too many open files

2014-06-10 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6146?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14026391#comment-14026391
 ] 

ASF subversion and git services commented on SOLR-6146:
---

Commit 1601621 from sha...@apache.org in branch 'dev/trunk'
[ https://svn.apache.org/r1601621 ]

SOLR-6146: Incorrect configuration such as wrong chroot in zk server address 
can cause CloudSolrServer to leak resources

 Leak in CloudSolrServer causing Too many open files
 -

 Key: SOLR-6146
 URL: https://issues.apache.org/jira/browse/SOLR-6146
 Project: Solr
  Issue Type: Bug
  Components: clients - java, SolrCloud
Affects Versions: 4.7
Reporter: Jessica Cheng
Assignee: Shalin Shekhar Mangar
  Labels: solrcloud, solrj
 Attachments: SOLR-6146.patch, SOLR-6146.patch, SOLR-6146.patch, 
 SOLR-6146.patch


 Due to a misconfiguration in one of our QA clusters, we uncovered a leak in 
 CloudSolrServer. If this line throws:
 https://github.com/apache/lucene-solr/blob/branch_4x/solr/solrj/src/java/org/apache/solr/client/solrj/impl/CloudSolrServer.java#L242
 then the instantiated ZkStateReader is leaked.
 Here's the stacktrace of the Exception (we're using a custom build so the 
 line numbers won't quite match up, but it gives the idea):
 at org.apache.solr.client.solrj.impl.CloudSolrServer.connect(CloudSolrServer.java:304)
 at org.apache.solr.client.solrj.impl.CloudSolrServer.requestWithRetryOnStaleState(CloudSolrServer.java:568)
 at org.apache.solr.client.solrj.impl.CloudSolrServer.request(CloudSolrServer.java:557)
 at org.apache.solr.client.solrj.request.AbstractUpdateRequest.process(AbstractUpdateRequest.java:117)
 at org.apache.solr.client.solrj.request.AbstractUpdateRequest.process(AbstractUpdateRequest.java:33)
 at com.apple.cie.search.client.crossdc.MirroredSolrRequestHandler.handleItem(MirroredSolrRequestHandler.java:100)
 at com.apple.cie.search.client.crossdc.MirroredSolrRequestHandler.handleItem(MirroredSolrRequestHandler.java:33)
 at com.apple.coda.queueing.CodaQueueConsumer$StreamProcessor.run(CodaQueueConsumer.java:147)
 at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
 at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
 at java.lang.Thread.run(Thread.java:722)
 Caused by: org.apache.zookeeper.KeeperException$NoNodeException: KeeperErrorCode = NoNode for /live_nodes
 at org.apache.zookeeper.KeeperException.create(KeeperException.java:111)
 at org.apache.zookeeper.KeeperException.create(KeeperException.java:51)
 at org.apache.zookeeper.ZooKeeper.getChildren(ZooKeeper.java:1468)
 at org.apache.solr.common.cloud.SolrZkClient$6.execute(SolrZkClient.java:256)
 at org.apache.solr.common.cloud.SolrZkClient$6.execute(SolrZkClient.java:253)
 at org.apache.solr.common.cloud.ZkCmdExecutor.retryOperation(ZkCmdExecutor.java:73)
 at org.apache.solr.common.cloud.SolrZkClient.getChildren(SolrZkClient.java:253)
 at org.apache.solr.common.cloud.ZkStateReader.createClusterStateWatchersAndUpdate(ZkStateReader.java:305)
 at org.apache.solr.client.solrj.impl.CloudSolrServer.createZkStateReader(CloudSolrServer.java:935)
 at org.apache.solr.client.solrj.impl.CloudSolrServer.connect(CloudSolrServer.java:298)
 ... 10 more






[jira] [Commented] (SOLR-6146) Leak in CloudSolrServer causing Too many open files

2014-06-10 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6146?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14026393#comment-14026393
 ] 

ASF subversion and git services commented on SOLR-6146:
---

Commit 1601622 from sha...@apache.org in branch 'dev/branches/branch_4x'
[ https://svn.apache.org/r1601622 ]

SOLR-6146: Incorrect configuration such as wrong chroot in zk server address 
can cause CloudSolrServer to leak resources

 Leak in CloudSolrServer causing Too many open files
 -

 Key: SOLR-6146
 URL: https://issues.apache.org/jira/browse/SOLR-6146
 Project: Solr
  Issue Type: Bug
  Components: clients - java, SolrCloud
Affects Versions: 4.7
Reporter: Jessica Cheng
Assignee: Shalin Shekhar Mangar
  Labels: solrcloud, solrj
 Attachments: SOLR-6146.patch, SOLR-6146.patch, SOLR-6146.patch, 
 SOLR-6146.patch


 Due to a misconfiguration in one of our QA clusters, we uncovered a leak in 
 CloudSolrServer. If this line throws:
 https://github.com/apache/lucene-solr/blob/branch_4x/solr/solrj/src/java/org/apache/solr/client/solrj/impl/CloudSolrServer.java#L242
 then the instantiated ZkStateReader is leaked.
 Here's the stacktrace of the Exception (we're using a custom build so the 
 line numbers won't quite match up, but it gives the idea):
 at org.apache.solr.client.solrj.impl.CloudSolrServer.connect(CloudSolrServer.java:304)
 at org.apache.solr.client.solrj.impl.CloudSolrServer.requestWithRetryOnStaleState(CloudSolrServer.java:568)
 at org.apache.solr.client.solrj.impl.CloudSolrServer.request(CloudSolrServer.java:557)
 at org.apache.solr.client.solrj.request.AbstractUpdateRequest.process(AbstractUpdateRequest.java:117)
 at org.apache.solr.client.solrj.request.AbstractUpdateRequest.process(AbstractUpdateRequest.java:33)
 at com.apple.cie.search.client.crossdc.MirroredSolrRequestHandler.handleItem(MirroredSolrRequestHandler.java:100)
 at com.apple.cie.search.client.crossdc.MirroredSolrRequestHandler.handleItem(MirroredSolrRequestHandler.java:33)
 at com.apple.coda.queueing.CodaQueueConsumer$StreamProcessor.run(CodaQueueConsumer.java:147)
 at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
 at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
 at java.lang.Thread.run(Thread.java:722)
 Caused by: org.apache.zookeeper.KeeperException$NoNodeException: KeeperErrorCode = NoNode for /live_nodes
 at org.apache.zookeeper.KeeperException.create(KeeperException.java:111)
 at org.apache.zookeeper.KeeperException.create(KeeperException.java:51)
 at org.apache.zookeeper.ZooKeeper.getChildren(ZooKeeper.java:1468)
 at org.apache.solr.common.cloud.SolrZkClient$6.execute(SolrZkClient.java:256)
 at org.apache.solr.common.cloud.SolrZkClient$6.execute(SolrZkClient.java:253)
 at org.apache.solr.common.cloud.ZkCmdExecutor.retryOperation(ZkCmdExecutor.java:73)
 at org.apache.solr.common.cloud.SolrZkClient.getChildren(SolrZkClient.java:253)
 at org.apache.solr.common.cloud.ZkStateReader.createClusterStateWatchersAndUpdate(ZkStateReader.java:305)
 at org.apache.solr.client.solrj.impl.CloudSolrServer.createZkStateReader(CloudSolrServer.java:935)
 at org.apache.solr.client.solrj.impl.CloudSolrServer.connect(CloudSolrServer.java:298)
 ... 10 more



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-6146) Leak in CloudSolrServer causing Too many open files

2014-06-10 Thread Shalin Shekhar Mangar (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6146?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shalin Shekhar Mangar resolved SOLR-6146.
-

   Resolution: Fixed
Fix Version/s: 5.0
   4.9

Thanks Jessica and Varun!

 Leak in CloudSolrServer causing Too many open files
 -

 Key: SOLR-6146
 URL: https://issues.apache.org/jira/browse/SOLR-6146
 Project: Solr
  Issue Type: Bug
  Components: clients - java, SolrCloud
Affects Versions: 4.7
Reporter: Jessica Cheng
Assignee: Shalin Shekhar Mangar
  Labels: solrcloud, solrj
 Fix For: 4.9, 5.0

 Attachments: SOLR-6146.patch, SOLR-6146.patch, SOLR-6146.patch, 
 SOLR-6146.patch


 Due to a misconfiguration in one of our QA clusters, we uncovered a leak in 
 CloudSolrServer. If this line throws:
 https://github.com/apache/lucene-solr/blob/branch_4x/solr/solrj/src/java/org/apache/solr/client/solrj/impl/CloudSolrServer.java#L242
 then the instantiated ZkStateReader is leaked.
 Here's the stacktrace of the Exception (we're using a custom build so the 
 line numbers won't quite match up, but it gives the idea):
 at org.apache.solr.client.solrj.impl.CloudSolrServer.connect(CloudSolrServer.java:304)
 at org.apache.solr.client.solrj.impl.CloudSolrServer.requestWithRetryOnStaleState(CloudSolrServer.java:568)
 at org.apache.solr.client.solrj.impl.CloudSolrServer.request(CloudSolrServer.java:557)
 at org.apache.solr.client.solrj.request.AbstractUpdateRequest.process(AbstractUpdateRequest.java:117)
 at org.apache.solr.client.solrj.request.AbstractUpdateRequest.process(AbstractUpdateRequest.java:33)
 at com.apple.cie.search.client.crossdc.MirroredSolrRequestHandler.handleItem(MirroredSolrRequestHandler.java:100)
 at com.apple.cie.search.client.crossdc.MirroredSolrRequestHandler.handleItem(MirroredSolrRequestHandler.java:33)
 at com.apple.coda.queueing.CodaQueueConsumer$StreamProcessor.run(CodaQueueConsumer.java:147)
 at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
 at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
 at java.lang.Thread.run(Thread.java:722)
 Caused by: org.apache.zookeeper.KeeperException$NoNodeException: KeeperErrorCode = NoNode for /live_nodes
 at org.apache.zookeeper.KeeperException.create(KeeperException.java:111)
 at org.apache.zookeeper.KeeperException.create(KeeperException.java:51)
 at org.apache.zookeeper.ZooKeeper.getChildren(ZooKeeper.java:1468)
 at org.apache.solr.common.cloud.SolrZkClient$6.execute(SolrZkClient.java:256)
 at org.apache.solr.common.cloud.SolrZkClient$6.execute(SolrZkClient.java:253)
 at org.apache.solr.common.cloud.ZkCmdExecutor.retryOperation(ZkCmdExecutor.java:73)
 at org.apache.solr.common.cloud.SolrZkClient.getChildren(SolrZkClient.java:253)
 at org.apache.solr.common.cloud.ZkStateReader.createClusterStateWatchersAndUpdate(ZkStateReader.java:305)
 at org.apache.solr.client.solrj.impl.CloudSolrServer.createZkStateReader(CloudSolrServer.java:935)
 at org.apache.solr.client.solrj.impl.CloudSolrServer.connect(CloudSolrServer.java:298)
 ... 10 more



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (LUCENE-5743) new 4.9 norms format

2014-06-10 Thread Robert Muir (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5743?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Muir resolved LUCENE-5743.
-

   Resolution: Fixed
Fix Version/s: 5.0
   4.9

I added the Arrays.sort(), also a step towards a BaseNormsFormatTestCase. I've 
always been concerned that we didn't have enough stuff testing the norms 
directly...  

 new 4.9 norms format
 

 Key: LUCENE-5743
 URL: https://issues.apache.org/jira/browse/LUCENE-5743
 Project: Lucene - Core
  Issue Type: New Feature
Reporter: Robert Muir
 Fix For: 4.9, 5.0

 Attachments: LUCENE-5743.patch


 Norms can eat up a lot of RAM, since by default it's 8 bits per field per 
 document. We rely upon users to omit them to not blow up RAM, but it's a 
 constant trap.
 Previously in 4.2, I tried to compress these by default, but it was too slow. 
 My mistakes were:
 * allowing slow bits per value like bpv=5 that are implemented with expensive 
 operations.
 * trying to wedge norms into the generalized docvalues numeric case
 * not handling simple degraded cases like a constant norm: the same norm 
 value for every document.
 Instead, we can just have a separate norms format that is very careful about 
 what it does, since we understand in general the patterns in the data:
 * uses CONSTANT compression (just writes the single value to metadata) when 
 all values are the same.
 * only compresses to bitsPerValue = 1,2,4 (this also happens often, for very 
 short text fields like person names and other stuff in structured data)
 * otherwise, if you would need 5,6,7,8 bits per value, we just continue to do 
 what we do today, encode as byte[]. Maybe we can improve this later, but this 
 ensures we don't have a performance impact.
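 The encoding decision described above can be sketched as follows (a hedged 
 illustration only, not the actual Lucene49NormsFormat code; chooseEncoding 
 and its return values are invented for this example):
 {code}
 public class NormsEncodingChooser {
   // Pick an encoding for a norms array: CONSTANT when all values are equal,
   // packed ints at 1/2/4 bits per value when the value range is small,
   // otherwise fall back to a plain byte[] as today.
   static String chooseEncoding(byte[] norms) {
     byte min = norms[0], max = norms[0];
     for (byte b : norms) {
       if (b < min) min = b;
       if (b > max) max = b;
     }
     if (min == max) {
       return "CONSTANT";  // just write the single value to metadata
     }
     int numValues = max - min + 1;
     int bitsRequired = 32 - Integer.numberOfLeadingZeros(numValues - 1);
     if (bitsRequired <= 4) {
       int bpv = bitsRequired <= 1 ? 1 : bitsRequired <= 2 ? 2 : 4;
       return "PACKED(bpv=" + bpv + ")";  // avoid expensive widths like bpv=5
     }
     return "BYTE_ARRAY";  // 5-8 bits per value: keep today's encoding
   }
 }
 {code}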



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-5743) new 4.9 norms format

2014-06-10 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5743?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14026400#comment-14026400
 ] 

ASF subversion and git services commented on LUCENE-5743:
-

Commit 1601625 from [~rcmuir] in branch 'dev/branches/branch_4x'
[ https://svn.apache.org/r1601625 ]

LUCENE-5743: Add Lucene49NormsFormat

 new 4.9 norms format
 

 Key: LUCENE-5743
 URL: https://issues.apache.org/jira/browse/LUCENE-5743
 Project: Lucene - Core
  Issue Type: New Feature
Reporter: Robert Muir
 Fix For: 4.9, 5.0

 Attachments: LUCENE-5743.patch


 Norms can eat up a lot of RAM, since by default it's 8 bits per field per 
 document. We rely upon users to omit them to not blow up RAM, but it's a 
 constant trap.
 Previously in 4.2, I tried to compress these by default, but it was too slow. 
 My mistakes were:
 * allowing slow bits per value like bpv=5 that are implemented with expensive 
 operations.
 * trying to wedge norms into the generalized docvalues numeric case
 * not handling simple degraded cases like a constant norm: the same norm 
 value for every document.
 Instead, we can just have a separate norms format that is very careful about 
 what it does, since we understand in general the patterns in the data:
 * uses CONSTANT compression (just writes the single value to metadata) when 
 all values are the same.
 * only compresses to bitsPerValue = 1,2,4 (this also happens often, for very 
 short text fields like person names and other stuff in structured data)
 * otherwise, if you would need 5,6,7,8 bits per value, we just continue to do 
 what we do today, encode as byte[]. Maybe we can improve this later, but this 
 ensures we don't have a performance impact.



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Solr and Kerberos

2014-06-10 Thread Tom Chen
Hi,

I wonder how to secure Solr with Kerberos.

We can Kerberos-secure Solr by configuring, in web.xml, the AuthenticationFilter
from the hadoop-auth.jar that is packaged in solr.war.

But after we do that,

1) How does a SolrJ client connect to the secured Solr server?
2) In a SolrCloud environment, how does one Solr node connect to another
secured Solr node?

Thanks,
Tom


[jira] [Commented] (SOLR-5940) Make post.jar report back detailed error in case of 400 responses

2014-06-10 Thread Sameer Maggon (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5940?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14026420#comment-14026420
 ] 

Sameer Maggon commented on SOLR-5940:
-

Thanks Shalin for picking it up - let me know of any feedback!

 Make post.jar report back detailed error in case of 400 responses
 -

 Key: SOLR-5940
 URL: https://issues.apache.org/jira/browse/SOLR-5940
 Project: Solr
  Issue Type: Improvement
  Components: scripts and tools
Affects Versions: 4.7
Reporter: Sameer Maggon
Assignee: Shalin Shekhar Mangar
 Attachments: solr-5940.patch


 Currently post.jar does not print detailed error message that is encountered 
 during indexing. In certain use cases, it's helpful to see the error message 
 so that clients can take appropriate actions.
 In 4.7, here's what gets shown if there is an error during indexing:
 SimplePostTool: WARNING: Solr returned an error #400 Bad Request
 SimplePostTool: WARNING: IOException while reading response: 
 java.io.IOException: Server returned HTTP response code: 400 for URL: 
 http://localhost:8983/solr/update
 It would be helpful to print out the msg that is returned from Solr.



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-6150) Add new AnalyticsQuery to support pluggable analytics.

2014-06-10 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6150?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-6150:
-

Attachment: SOLR-6150.patch

Fixed NPE during autowarming.

 Add new AnalyticsQuery to support pluggable analytics.
 --

 Key: SOLR-6150
 URL: https://issues.apache.org/jira/browse/SOLR-6150
 Project: Solr
  Issue Type: New Feature
Reporter: Joel Bernstein
Assignee: Joel Bernstein
Priority: Minor
 Fix For: 4.9

 Attachments: SOLR-6150.patch, SOLR-6150.patch, SOLR-6150.patch, 
 SOLR-6150.patch


 It would be great if there was a clean, simple approach to plug custom 
 analytics into Solr.
 This ticket introduces the AnalyticsQuery class which makes this possible.
 To add a custom analytic query you extend AnalyticsQuery and implement:
 {code}
   public abstract DelegatingCollector getAnalyticsCollector(ResponseBuilder 
 rb, IndexSearcher searcher);
 {code}
 This method returns a custom DelegatingCollector which handles the collection 
 of the analytics.
 The DelegatingCollector.finish() method can be used to conveniently finish 
 your analytics and place the output onto the response.
 The AnalyticsQuery also has a nifty constructor that allows you to pass in a 
 MergeStrategy (see SOLR-5973). So, when you extend AnalyticsQuery you can 
 pass in a custom MergeStrategy to handle merging of analytic output from the 
 shards during a distributed search.
 This design is a natural extension of the PostFilter framework. So you can 
 plug in your AnalyticsQuery with a custom QParserPlugin, for example:
 {code}
 q=*:*&fq={!myanalytic param1=p1}
 {code}
 Just like PostFilters, AnalyticsQueries can be ordered using the cost 
 parameter. This allows for analytic pipe-lining, where the result of one 
 AnalyticsQuery can be pipe-lined to another AnalyticsQuery. 
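 As a concrete illustration, a minimal custom AnalyticsQuery might look like 
 the hedged sketch below (the class name, imports and the "myCount" response 
 key are invented for this example; the exact signatures are whatever the 
 attached patch defines):
 {code}
 import java.io.IOException;
 import org.apache.lucene.search.IndexSearcher;
 import org.apache.solr.handler.component.ResponseBuilder;
 import org.apache.solr.search.AnalyticsQuery;
 import org.apache.solr.search.DelegatingCollector;

 public class CountingAnalyticsQuery extends AnalyticsQuery {
   @Override
   public DelegatingCollector getAnalyticsCollector(final ResponseBuilder rb,
                                                    IndexSearcher searcher) {
     return new DelegatingCollector() {
       private int count;
       @Override
       public void collect(int doc) throws IOException {
         count++;             // gather the analytic
         super.collect(doc);  // pass the doc down the collector chain
       }
       @Override
       public void finish() throws IOException {
         rb.rsp.add("myCount", count);  // place the output onto the response
         super.finish();
       }
     };
   }
 }
 {code}
 Registered behind a custom QParserPlugin, it would then be invoked exactly as 
 in the fq example above.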



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-6157) ReplicationFactorTest hangs

2014-06-10 Thread Uwe Schindler (JIRA)
Uwe Schindler created SOLR-6157:
---

 Summary: ReplicationFactorTest hangs
 Key: SOLR-6157
 URL: https://issues.apache.org/jira/browse/SOLR-6157
 Project: Solr
  Issue Type: Bug
  Components: replication (java)
Reporter: Uwe Schindler


See: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/10517/
You can download all logs from there.



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-4396) BooleanScorer should sometimes be used for MUST clauses

2014-06-10 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-4396?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14026508#comment-14026508
 ] 

Michael McCandless commented on LUCENE-4396:


bq. Hmm. I mean ConjunctionScorer does not use PQ

Ah right. Only disjunctions, OK.

bq. As for .advance, I'm not sure whether its cost can exceed .next much 
enough, so that using .advance will be slower than using .next in this case.

OK, let's not explore this ...

 BooleanScorer should sometimes be used for MUST clauses
 ---

 Key: LUCENE-4396
 URL: https://issues.apache.org/jira/browse/LUCENE-4396
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Michael McCandless
 Attachments: And.tasks, AndOr.tasks, AndOr.tasks, LUCENE-4396.patch, 
 LUCENE-4396.patch, LUCENE-4396.patch, LUCENE-4396.patch, LUCENE-4396.patch, 
 LUCENE-4396.patch, luceneutil-score-equal.patch, luceneutil-score-equal.patch


 Today we only use BooleanScorer if the query consists of SHOULD and MUST_NOT.
 If there is one or more MUST clauses we always use BooleanScorer2.
 But I suspect that unless the MUST clauses have a very low hit count compared 
 to the other clauses, BooleanScorer would perform better than 
 BooleanScorer2.  BooleanScorer still has some vestiges from when it used to 
 handle MUST so it shouldn't be hard to bring back this capability ... I think 
 the challenging part might be the heuristics on when to use which (likely we 
 would have to use firstDocID as proxy for total hit count).
 Likely we should also have BooleanScorer sometimes use .advance() on the subs 
 in this case, e.g. if the MUST clause suddenly skips 100 docs then you want 
 to .advance() all the SHOULD clauses.
 I won't have near term time to work on this so feel free to take it if you 
 are inspired!
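 To make the skipping idea concrete, here is a hedged sketch (the DocIt 
 interface is invented for illustration and is not Lucene's 
 Scorer/DocIdSetIterator API):
 {code}
 import java.util.List;

 public class MustDrivenScoring {
   interface DocIt {
     int docID();
     int nextDoc();            // returns Integer.MAX_VALUE when exhausted
     int advance(int target);  // skips to the first doc >= target
   }

   // Let the MUST clause drive iteration; when it skips ahead, advance()
   // the SHOULD clauses to its doc instead of stepping them one doc at a time.
   static void score(DocIt must, List<DocIt> shoulds) {
     for (int doc = must.nextDoc(); doc != Integer.MAX_VALUE; doc = must.nextDoc()) {
       for (DocIt s : shoulds) {
         if (s.docID() < doc) {
           s.advance(doc);  // leapfrog over the docs the MUST clause skipped
         }
         // if s.docID() == doc, this SHOULD clause contributes to the score
       }
     }
   }
 }
 {code}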



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6150) Add new AnalyticsQuery to support pluggable analytics.

2014-06-10 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6150?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14026544#comment-14026544
 ] 

ASF subversion and git services commented on SOLR-6150:
---

Commit 1601664 from [~joel.bernstein] in branch 'dev/trunk'
[ https://svn.apache.org/r1601664 ]

SOLR-6150: Add new AnalyticsQuery to support pluggable analytics

 Add new AnalyticsQuery to support pluggable analytics.
 --

 Key: SOLR-6150
 URL: https://issues.apache.org/jira/browse/SOLR-6150
 Project: Solr
  Issue Type: New Feature
Reporter: Joel Bernstein
Assignee: Joel Bernstein
Priority: Minor
 Fix For: 4.9

 Attachments: SOLR-6150.patch, SOLR-6150.patch, SOLR-6150.patch, 
 SOLR-6150.patch


 It would be great if there was a clean, simple approach to plug custom 
 analytics into Solr.
 This ticket introduces the AnalyticsQuery class which makes this possible.
 To add a custom analytic query you extend AnalyticsQuery and implement:
 {code}
   public abstract DelegatingCollector getAnalyticsCollector(ResponseBuilder 
 rb, IndexSearcher searcher);
 {code}
 This method returns a custom DelegatingCollector which handles the collection 
 of the analytics.
 The DelegatingCollector.finish() method can be used to conveniently finish 
 your analytics and place the output onto the response.
 The AnalyticsQuery also has a nifty constructor that allows you to pass in a 
 MergeStrategy (see SOLR-5973). So, when you extend AnalyticsQuery you can 
 pass in a custom MergeStrategy to handle merging of analytic output from the 
 shards during a distributed search.
 This design is a natural extension of the PostFilter framework. So you can 
 plug in your AnalyticsQuery with a custom QParserPlugin, for example:
 {code}
 q=*:*&fq={!myanalytic param1=p1}
 {code}
 Just like PostFilters, AnalyticsQueries can be ordered using the cost 
 parameter. This allows for analytic pipe-lining, where the result of one 
 AnalyticsQuery can be pipe-lined to another AnalyticsQuery. 



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-5379) Query-time multi-word synonym expansion

2014-06-10 Thread Jeremy Anderson (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5379?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeremy Anderson updated SOLR-5379:
--

Attachment: quoted-4_8_1.patch
conf-test-files-4_8_1.patch

 Query-time multi-word synonym expansion
 ---

 Key: SOLR-5379
 URL: https://issues.apache.org/jira/browse/SOLR-5379
 Project: Solr
  Issue Type: Improvement
  Components: query parsers
Reporter: Tien Nguyen Manh
  Labels: multi-word, queryparser, synonym
 Fix For: 4.9, 5.0

 Attachments: conf-test-files-4_8_1.patch, quoted-4_8_1.patch, 
 quoted.patch, synonym-expander-4_8_1.patch, synonym-expander.patch


 While dealing with synonyms at query time, Solr fails to work with multi-word 
 synonyms for two reasons:
 - First, the Lucene query parser tokenizes the user query by spaces, so it 
 splits a multi-word term into separate terms before feeding them to the 
 synonym filter; the synonym filter therefore can't recognize the multi-word 
 term and expand it.
 - Second, if the synonym filter expands into multiple terms which contain a 
 multi-word synonym, SolrQueryParserBase currently uses MultiPhraseQuery to 
 handle synonyms, but MultiPhraseQuery doesn't work with terms that have 
 different numbers of words.
 For the first problem, we can quote all multi-word synonyms in the user query 
 so that the Lucene query parser doesn't split them. There is a related JIRA 
 task: https://issues.apache.org/jira/browse/LUCENE-2605.
 For the second, we can replace MultiPhraseQuery with an appropriate 
 BooleanQuery of SHOULD clauses containing multiple PhraseQuery instances when 
 the token stream has a multi-word synonym.
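 For the second fix, a hedged sketch in Lucene 4.x terms (the synonymPhrases 
 helper is invented for illustration; this shows the idea, not the attached 
 patch):
 {code}
 import org.apache.lucene.index.Term;
 import org.apache.lucene.search.BooleanClause.Occur;
 import org.apache.lucene.search.BooleanQuery;
 import org.apache.lucene.search.PhraseQuery;
 import org.apache.lucene.search.Query;

 public class SynonymQueries {
   // Build one PhraseQuery per synonym variant and OR them together, so
   // variants with different word counts ("tv" vs "television set") all match.
   public static Query synonymPhrases(String field, String[][] variants) {
     BooleanQuery bq = new BooleanQuery();
     for (String[] words : variants) {
       PhraseQuery pq = new PhraseQuery();
       for (String word : words) {
         pq.add(new Term(field, word));
       }
       bq.add(pq, Occur.SHOULD);  // any one variant matching is enough
     }
     return bq;
   }
 }
 {code}
 For example, synonymPhrases("body", new String[][] {{"tv"}, {"television", 
 "set"}}) matches either variant, where a single MultiPhraseQuery could not.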



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-5379) Query-time multi-word synonym expansion

2014-06-10 Thread Jeremy Anderson (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5379?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeremy Anderson updated SOLR-5379:
--

Attachment: (was: quoted-4_8_1.patch)

 Query-time multi-word synonym expansion
 ---

 Key: SOLR-5379
 URL: https://issues.apache.org/jira/browse/SOLR-5379
 Project: Solr
  Issue Type: Improvement
  Components: query parsers
Reporter: Tien Nguyen Manh
  Labels: multi-word, queryparser, synonym
 Fix For: 4.9, 5.0

 Attachments: conf-test-files-4_8_1.patch, quoted-4_8_1.patch, 
 quoted.patch, synonym-expander-4_8_1.patch, synonym-expander.patch


 While dealing with synonyms at query time, Solr fails to work with multi-word 
 synonyms for two reasons:
 - First, the Lucene query parser tokenizes the user query by spaces, so it 
 splits a multi-word term into separate terms before feeding them to the 
 synonym filter; the synonym filter therefore can't recognize the multi-word 
 term and expand it.
 - Second, if the synonym filter expands into multiple terms which contain a 
 multi-word synonym, SolrQueryParserBase currently uses MultiPhraseQuery to 
 handle synonyms, but MultiPhraseQuery doesn't work with terms that have 
 different numbers of words.
 For the first problem, we can quote all multi-word synonyms in the user query 
 so that the Lucene query parser doesn't split them. There is a related JIRA 
 task: https://issues.apache.org/jira/browse/LUCENE-2605.
 For the second, we can replace MultiPhraseQuery with an appropriate 
 BooleanQuery of SHOULD clauses containing multiple PhraseQuery instances when 
 the token stream has a multi-word synonym.



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-5379) Query-time multi-word synonym expansion

2014-06-10 Thread Jeremy Anderson (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5379?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeremy Anderson updated SOLR-5379:
--

Attachment: (was: conf-test-files-4_8_1.patch)

 Query-time multi-word synonym expansion
 ---

 Key: SOLR-5379
 URL: https://issues.apache.org/jira/browse/SOLR-5379
 Project: Solr
  Issue Type: Improvement
  Components: query parsers
Reporter: Tien Nguyen Manh
  Labels: multi-word, queryparser, synonym
 Fix For: 4.9, 5.0

 Attachments: conf-test-files-4_8_1.patch, quoted-4_8_1.patch, 
 quoted.patch, synonym-expander-4_8_1.patch, synonym-expander.patch


 While dealing with synonyms at query time, Solr fails to work with multi-word 
 synonyms for two reasons:
 - First, the Lucene query parser tokenizes the user query by spaces, so it 
 splits a multi-word term into separate terms before feeding them to the 
 synonym filter; the synonym filter therefore can't recognize the multi-word 
 term and expand it.
 - Second, if the synonym filter expands into multiple terms which contain a 
 multi-word synonym, SolrQueryParserBase currently uses MultiPhraseQuery to 
 handle synonyms, but MultiPhraseQuery doesn't work with terms that have 
 different numbers of words.
 For the first problem, we can quote all multi-word synonyms in the user query 
 so that the Lucene query parser doesn't split them. There is a related JIRA 
 task: https://issues.apache.org/jira/browse/LUCENE-2605.
 For the second, we can replace MultiPhraseQuery with an appropriate 
 BooleanQuery of SHOULD clauses containing multiple PhraseQuery instances when 
 the token stream has a multi-word synonym.



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-6151) Intermittent TestReplicationHandlerBackup failures

2014-06-10 Thread Varun Thacker (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6151?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Thacker updated SOLR-6151:


Attachment: SOLR-6151.patch

Patch which removes the retry logic and depends on the error response/test suite 
timeout for failure. Also, in my manual testing I realised that the check in 
CheckDeleteBackupStatus#fetchStatus {code}if (response.contains("<str 
name=\"status\">success</str>")) {code} is not enough, as that is the response 
left behind from the 2nd backup's output. Fixed that by adding 
'snapshotDeletedAt' and checking against that too.
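For clarity, the strengthened check amounts to something like this hedged 
sketch (the method and marker names are illustrative, not the exact patch):
{code}
// Treat the delete as complete only when the response both reports success
// and names the snapshot we deleted, rather than matching a stale "success"
// left behind by the earlier backup command.
boolean deleteConfirmed(String response, String snapshotName) {
  return response.contains("<str name=\"status\">success</str>")
      && response.contains("snapshotDeletedAt")
      && response.contains(snapshotName);
}
{code}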

 Intermittent TestReplicationHandlerBackup failures
 --

 Key: SOLR-6151
 URL: https://issues.apache.org/jira/browse/SOLR-6151
 Project: Solr
  Issue Type: Bug
Reporter: Dawid Weiss
Assignee: Dawid Weiss
Priority: Minor
 Attachments: SOLR-6151.patch, SOLR-6151.patch


 {code}
 [junit4]   2 4236563 T14503 C3403 oasc.SolrCore.execute [collection1] 
 webapp=/solr path=/replication params={command=details} status=0 QTime=0 
[junit4]   2 4236567 T14502 C3403 oasc.SolrCore.execute [collection1] 
 webapp=/solr path=/replication params={command=backup&name=cphlpigzwamrxxekj} 
 status=0 QTime=5 
[junit4]   2 4236567 T14511 oash.SnapShooter.createSnapshot Creating 
 backup snapshot...
[junit4]   2 4236682 T14505 C3403 oasc.SolrCore.execute [collection1] 
 webapp=/solr path=/replication params={command=details} status=0 QTime=0 
[junit4]   2 4237270 T14503 C3403 oasc.SolrCore.execute [collection1] 
 webapp=/solr path=/replication params={command=details} status=0 QTime=0 
[junit4]   2 4237275 T14502 C3403 oasc.SolrCore.execute [collection1] 
 webapp=/solr path=/replication 
 params={command=backup&name=zviqwpynhbjdbiqofwa} status=0 QTime=4 
[junit4]   2 4237277 T14513 oash.SnapShooter.createSnapshot Creating 
 backup snapshot...
[junit4]   2 4237390 T14504 C3403 oasc.SolrCore.execute [collection1] 
 webapp=/solr path=/replication params={command=details} status=0 QTime=0 
[junit4]   2 4237508 T14500 C3403 oasc.SolrCore.execute [collection1] 
 webapp=/solr path=/replication params={command=details} status=0 QTime=0 
[junit4]   2 4237626 T14505 C3403 oasc.SolrCore.execute [collection1] 
 webapp=/solr path=/replication params={command=details} status=0 QTime=1 
[junit4]   2 4237743 T14503 C3403 oasc.SolrCore.execute [collection1] 
 webapp=/solr path=/replication params={command=details} status=0 QTime=0 
[junit4]   2 4237861 T14502 C3403 oasc.SolrCore.execute [collection1] 
 webapp=/solr path=/replication params={command=details} status=0 QTime=0 
[junit4]   2 4237979 T14504 C3403 oasc.SolrCore.execute [collection1] 
 webapp=/solr path=/replication params={command=details} status=0 QTime=0 
[junit4]   2 4238097 T14500 C3403 oasc.SolrCore.execute [collection1] 
 webapp=/solr path=/replication params={command=details} status=0 QTime=0 
[junit4]   2 4238214 T14505 C3403 oasc.SolrCore.execute [collection1] 
 webapp=/solr path=/replication params={command=details} status=0 QTime=0 
[junit4]   2 4238332 T14503 C3403 oasc.SolrCore.execute [collection1] 
 webapp=/solr path=/replication params={command=details} status=0 QTime=0 
[junit4]   2 4238450 T14502 C3403 oasc.SolrCore.execute [collection1] 
 webapp=/solr path=/replication params={command=details} status=0 QTime=1 
[junit4]   2 4238567 T14504 C3403 oasc.SolrCore.execute [collection1] 
 webapp=/solr path=/replication params={command=details} status=0 QTime=0 
[junit4]   2 4238686 T14500 C3403 oasc.SolrCore.execute [collection1] 
 webapp=/solr path=/replication params={command=details} status=0 QTime=0 
[junit4]   2 4238804 T14505 C3403 oasc.SolrCore.execute [collection1] 
 webapp=/solr path=/replication params={command=details} status=0 QTime=0 
[junit4]   2 4238922 T14503 C3403 oasc.SolrCore.execute [collection1] 
 webapp=/solr path=/replication params={command=details} status=0 QTime=1 
[junit4]   2 4239039 T14502 C3403 oasc.SolrCore.execute [collection1] 
 webapp=/solr path=/replication params={command=details} status=0 QTime=0 
[junit4]   2 4239158 T14504 C3403 oasc.SolrCore.execute [collection1] 
 webapp=/solr path=/replication params={command=details} status=0 QTime=0 
[junit4]   2 4239276 T14500 C3403 oasc.SolrCore.execute [collection1] 
 webapp=/solr path=/replication params={command=details} status=0 QTime=0 
[junit4]   2 4239394 T14505 C3403 oasc.SolrCore.execute [collection1] 
 webapp=/solr path=/replication params={command=details} status=0 QTime=1 
[junit4]   2 4239511 T14503 C3403 oasc.SolrCore.execute [collection1] 
 webapp=/solr path=/replication params={command=details} status=0 QTime=0 
[junit4]   2 4239629 T14502 C3403 oasc.SolrCore.execute [collection1] 
 webapp=/solr path=/replication 

Re: Possible regression in 8u20 build 15

2014-06-10 Thread dalibor topic



On 07.06.2014 13:09, Uwe Schindler wrote:

Hi Dalibor,
It might be good to put a hint in the release notes that there might be a 
different, but more correct behavior, in compiling code that uses final fields 
in ctors


Hi Uwe,

That's a good suggestion. 
https://bugs.openjdk.java.net/browse/JDK-8039026 now carries a 
release-notes=yes label.


cheers,
dalibor topic
--
http://www.oracle.com Dalibor Topic | Principal Product Manager
Phone: +494089091214 | Mobile: +491737185961

ORACLE Deutschland B.V. & Co. KG | Kühnehöfe 5 | 22761 Hamburg

ORACLE Deutschland B.V. & Co. KG
Hauptverwaltung: Riesstr. 25, D-80992 München
Registergericht: Amtsgericht München, HRA 95603
Geschäftsführer: Jürgen Kunz

Komplementärin: ORACLE Deutschland Verwaltung B.V.
Hertogswetering 163/167, 3543 AS Utrecht, Niederlande
Handelsregister der Handelskammer Midden-Niederlande, Nr. 30143697
Geschäftsführer: Alexander van der Ven, Astrid Kepper, Val Maher

http://www.oracle.com/commitment Oracle is committed to developing
practices and products that help protect the environment

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-6158) Solr looks up configSets in the wrong directory

2014-06-10 Thread Simon Endele (JIRA)
Simon Endele created SOLR-6158:
--

 Summary: Solr looks up configSets in the wrong directory
 Key: SOLR-6158
 URL: https://issues.apache.org/jira/browse/SOLR-6158
 Project: Solr
  Issue Type: Bug
Affects Versions: 4.8.1, 4.8
Reporter: Simon Endele


I tried the small tutorial on http://heliosearch.org/solr-4-8-features/ to 
create Named Config Sets based on the Solr example shipped with Solr 4.8.1 
(like it's done in the tutorial, same problem with 4.8.0).
Creating a new core with a configSet seems to work (directory 'books' and 
'books/core.properties' are created correctly).

But loading the new core does not work:
{code:none}67446 [qtp25155085-11] INFO  
org.apache.solr.handler.admin.CoreAdminHandler  core create command 
configSet=generic&name=books&action=CREATE
67452 [qtp25155085-11] ERROR org.apache.solr.core.CoreContainer  Unable to 
create core: books
org.apache.solr.common.SolrException: Could not load configuration from 
directory C:\dev\solr-4.8.1\example\configsets\generic
at 
org.apache.solr.core.ConfigSetService$Default.locateInstanceDir(ConfigSetService.java:145)
at 
org.apache.solr.core.ConfigSetService$Default.createCoreResourceLoader(ConfigSetService.java:130)
at 
org.apache.solr.core.ConfigSetService.getConfig(ConfigSetService.java:58)
at org.apache.solr.core.CoreContainer.create(CoreContainer.java:554)
...
{code}

It seems like Solr looks up the config sets in the wrong directory:
C:\dev\solr-4.8.1\example\configsets\generic (in the log above) instead of
C:\dev\solr-4.8.1\example\solr\configsets\generic (as stated in the tutorial 
and the documentation on 
https://cwiki.apache.org/confluence/display/solr/Config+Sets)

Moving the configsets directory one level up (into 'example') will work.
But according to the documentation (and the tutorial) it should be located in 
the solr home directory.

In case I'm completely wrong and everything works as expected, how can the 
configsets directory be configured?
The documentation on 
https://cwiki.apache.org/confluence/display/solr/Config+Sets mentions a 
configurable configset base directory, but I can't find any information on 
the web.

Another thing: If it would work as I expect, the references <lib 
dir="../../../contrib/extraction/lib" regex=".*\.jar" /> etc. in 
solr-4.8.1/example/solr/configsets/generic/conf/solrconfig.xml should get one 
more ../ added, I guess (missing in the tutorial).



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-6158) Solr looks up configSets in the wrong directory

2014-06-10 Thread Simon Endele (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6158?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Simon Endele updated SOLR-6158:
---

Description: 
I tried the small tutorial on http://heliosearch.org/solr-4-8-features/ to 
create Named Config Sets based on the Solr example shipped with Solr 4.8.1 
(like it's done in the tutorial, same problem with 4.8.0).
Creating a new core with a configSet seems to work (directory 'books' and 
'books/core.properties' are created correctly).

But loading the new core does not work:
{code:none}67446 [qtp25155085-11] INFO  
org.apache.solr.handler.admin.CoreAdminHandler  core create command 
configSet=generic&name=books&action=CREATE
67452 [qtp25155085-11] ERROR org.apache.solr.core.CoreContainer  Unable to 
create core: books
org.apache.solr.common.SolrException: Could not load configuration from 
directory C:\dev\solr-4.8.1\example\configsets\generic
at 
org.apache.solr.core.ConfigSetService$Default.locateInstanceDir(ConfigSetService.java:145)
at 
org.apache.solr.core.ConfigSetService$Default.createCoreResourceLoader(ConfigSetService.java:130)
at 
org.apache.solr.core.ConfigSetService.getConfig(ConfigSetService.java:58)
at org.apache.solr.core.CoreContainer.create(CoreContainer.java:554)
...
{code}

It seems like Solr looks up the config sets in the wrong directory:
C:\dev\solr-4.8.1\example\configsets\generic (in the log above) instead of
C:\dev\solr-4.8.1\example\solr\configsets\generic (as stated in the tutorial 
and the documentation on 
https://cwiki.apache.org/confluence/display/solr/Config+Sets)

Moving the configsets directory one level up (into 'example') will work.
But according to the documentation (and the tutorial) it should be located in 
the solr home directory.

In case I'm completely wrong and everythings works as expected, how can the 
configsets directory be configured?
The documentation on 
https://cwiki.apache.org/confluence/display/solr/Config+Sets mentions a 
configurable configset base directory, but I can't find any information on 
the web.

Another thing: If it would work as I expect, the references <lib 
dir="../../../contrib/extraction/lib" regex=".*\.jar" /> etc. in 
solr-4.8.1/example/solr/configsets/generic/conf/solrconfig.xml should get one 
more ../ added, I guess (missing in the tutorial).

  was:
I tried the small tutorial on http://heliosearch.org/solr-4-8-features/ to 
create Named Config Sets based on the Solr example shipped with Solr 4.8.1 
(like it's done in the tutorial, same problem with 4.8.0).
Creating a new core with a configSet seems to work (directory 'books' and 
'books/core.properties' are created correctly).

But loading the new core does not work:
{code:none}67446 [qtp25155085-11] INFO  
org.apache.solr.handler.admin.CoreAdminHandler  core create command 
configSet=generic&name=books&action=CREATE
67452 [qtp25155085-11] ERROR org.apache.solr.core.CoreContainer  Unable to 
create core: books
org.apache.solr.common.SolrException: Could not load configuration from 
directory C:\dev\solr-4.8.1\example\configsets\generic
at 
org.apache.solr.core.ConfigSetService$Default.locateInstanceDir(ConfigSetService.java:145)
at 
org.apache.solr.core.ConfigSetService$Default.createCoreResourceLoader(ConfigSetService.java:130)
at 
org.apache.solr.core.ConfigSetService.getConfig(ConfigSetService.java:58)
at org.apache.solr.core.CoreContainer.create(CoreContainer.java:554)
...
{code}

It seems like Solr looks up the config sets in the wrong directory:
C:\dev\solr-4.8.1\example\configsets\generic (in the log above) instead of
C:\dev\solr-4.8.1\example\solr\configsets\generic (like stated in the tutorial 
and the documentation on 
https://cwiki.apache.org/confluence/display/solr/Config+Sets)

Moving the configsets directory one level up (into 'example') will work.
But as of the documentation (and the tutorial) it should be located in the solr 
home directory.

In case I'm completely wrong and everythings works as expected, how can one 
configure the configsets directory be configured?
The documentation on 
https://cwiki.apache.org/confluence/display/solr/Config+Sets mentions a 
configurable configset base directory, but I can't find any information on 
the web.

Another thing: If it would work as I expect, the references <lib 
dir="../../../contrib/extraction/lib" regex=".*\.jar" /> etc. in 
solr-4.8.1/example/solr/configsets/generic/conf/solrconfig.xml should get one 
more ../ added, I guess (missing in the tutorial).


 Solr looks up configSets in the wrong directory
 ---

 Key: SOLR-6158
 URL: https://issues.apache.org/jira/browse/SOLR-6158
 Project: Solr
  Issue Type: Bug
Affects Versions: 4.8, 4.8.1
Reporter: Simon Endele

 I tried the small tutorial on http://heliosearch.org/solr-4-8-features/ to 
 create 

[jira] [Commented] (SOLR-6157) ReplicationFactorTest hangs

2014-06-10 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6157?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14026643#comment-14026643
 ] 

Uwe Schindler commented on SOLR-6157:
-

Next one is also hanging: 
http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/10518/console

 ReplicationFactorTest hangs
 ---

 Key: SOLR-6157
 URL: https://issues.apache.org/jira/browse/SOLR-6157
 Project: Solr
  Issue Type: Bug
  Components: replication (java)
Reporter: Uwe Schindler

 See: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/10517/
 You can download all logs from there.



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6157) ReplicationFactorTest hangs

2014-06-10 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6157?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14026650#comment-14026650
 ] 

ASF subversion and git services commented on SOLR-6157:
---

Commit 1601679 from [~thetaphi] in branch 'dev/branches/branch_4x'
[ https://svn.apache.org/r1601679 ]

SOLR-6157: Disable test that hangs indefinitely

 ReplicationFactorTest hangs
 ---

 Key: SOLR-6157
 URL: https://issues.apache.org/jira/browse/SOLR-6157
 Project: Solr
  Issue Type: Bug
  Components: replication (java)
Reporter: Uwe Schindler

 See: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/10517/
 You can download all logs from there.



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6157) ReplicationFactorTest hangs

2014-06-10 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6157?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14026648#comment-14026648
 ] 

ASF subversion and git services commented on SOLR-6157:
---

Commit 1601678 from [~thetaphi] in branch 'dev/trunk'
[ https://svn.apache.org/r1601678 ]

SOLR-6157: Disable test that hangs indefinitely

 ReplicationFactorTest hangs
 ---

 Key: SOLR-6157
 URL: https://issues.apache.org/jira/browse/SOLR-6157
 Project: Solr
  Issue Type: Bug
  Components: replication (java)
Reporter: Uwe Schindler

 See: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/10517/
 You can download all logs from there.



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6150) Add new AnalyticsQuery to support pluggable analytics.

2014-06-10 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6150?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14026671#comment-14026671
 ] 

ASF subversion and git services commented on SOLR-6150:
---

Commit 1601681 from [~joel.bernstein] in branch 'dev/branches/branch_4x'
[ https://svn.apache.org/r1601681 ]

SOLR-6150: Add new AnalyticsQuery to support pluggable analytics

 Add new AnalyticsQuery to support pluggable analytics.
 --

 Key: SOLR-6150
 URL: https://issues.apache.org/jira/browse/SOLR-6150
 Project: Solr
  Issue Type: New Feature
Reporter: Joel Bernstein
Assignee: Joel Bernstein
Priority: Minor
 Fix For: 4.9

 Attachments: SOLR-6150.patch, SOLR-6150.patch, SOLR-6150.patch, 
 SOLR-6150.patch


 It would be great if there was a clean, simple approach to plug custom 
 analytics into Solr.
 This ticket introduces the AnalyticsQuery class which makes this possible.
 To add a custom analytic query you extend AnalyticsQuery and implement:
 {code}
   public abstract DelegatingCollector getAnalyticsCollector(ResponseBuilder 
 rb, IndexSearcher searcher);
 {code}
 This method returns a custom DelegatingCollector which handles the collection 
 of the analytics.
 The DelegatingCollector.finish() method can be used to conveniently finish 
 your analytics and place the output onto the response.
 The AnalyticsQuery also has a nifty constructor that allows you to pass in a 
 MergeStrategy (see SOLR-5973). So, when you extend AnalyticsQuery you can 
 pass in a custom MergeStrategy to handle merging of analytic output from the 
 shards during a distributed search.
 This design is a natural extension of the PostFilter framework. So you can 
 plug in your AnalyticsQuery with a custom QParserPlugin, for example:
 {code}
 q=*:*&fq={!myanalytic param1=p1}
 {code}
 Just like PostFilters, AnalyticsQueries can be ordered using the cost 
 parameter. This allows for analytic pipe-lining, where the result of one 
 AnalyticsQuery can be pipe-lined to another AnalyticsQuery. 



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-6150) Add new AnalyticsQuery to support pluggable analytics.

2014-06-10 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6150?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein resolved SOLR-6150.
--

Resolution: Fixed

 Add new AnalyticsQuery to support pluggable analytics.
 --

 Key: SOLR-6150
 URL: https://issues.apache.org/jira/browse/SOLR-6150
 Project: Solr
  Issue Type: New Feature
Reporter: Joel Bernstein
Assignee: Joel Bernstein
Priority: Minor
 Fix For: 4.9

 Attachments: SOLR-6150.patch, SOLR-6150.patch, SOLR-6150.patch, 
 SOLR-6150.patch


 It would be great if there was a clean, simple approach to plug custom 
 analytics into Solr.
 This ticket introduces the AnalyticsQuery class which makes this possible.
 To add a custom analytic query you extend AnalyticsQuery and implement:
 {code}
   public abstract DelegatingCollector getAnalyticsCollector(ResponseBuilder 
 rb, IndexSearcher searcher);
 {code}
 This method returns a custom DelegatingCollector which handles the collection 
 of the analytics.
 The DelegatingCollector.finish() method can be used to conveniently finish 
 your analytics and place the output onto the response.
 The AnalyticsQuery also has a nifty constructor that allows you to pass in a 
 MergeStrategy (see SOLR-5973). So, when you extend AnalyticsQuery you can 
 pass in a custom MergeStrategy to handle merging of analytic output from the 
 shards during a distributed search.
 This design is a natural extension of the PostFilter framework. So you can 
 plug in your AnalyticsQuery with a custom QParserPlugin, for example:
 {code}
 q=*:*&fq={!myanalytic param1=p1}
 {code}
 Just like PostFilters, AnalyticsQueries can be ordered using the cost 
 parameter. This allows for analytic pipe-lining, where the result of one 
 AnalyticsQuery can be pipe-lined to another AnalyticsQuery. 



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Assigned] (SOLR-6157) ReplicationFactorTest hangs

2014-06-10 Thread Timothy Potter (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6157?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Timothy Potter reassigned SOLR-6157:


Assignee: Timothy Potter

 ReplicationFactorTest hangs
 ---

 Key: SOLR-6157
 URL: https://issues.apache.org/jira/browse/SOLR-6157
 Project: Solr
  Issue Type: Bug
  Components: replication (java)
Reporter: Uwe Schindler
Assignee: Timothy Potter

 See: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/10517/
 You can download all logs from there.



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6157) ReplicationFactorTest hangs

2014-06-10 Thread Timothy Potter (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6157?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14026690#comment-14026690
 ] 

Timothy Potter commented on SOLR-6157:
--

Nothing special about this test so not sure why it would hang ... seems like a 
problem in the test framework itself.

 ReplicationFactorTest hangs
 ---

 Key: SOLR-6157
 URL: https://issues.apache.org/jira/browse/SOLR-6157
 Project: Solr
  Issue Type: Bug
  Components: replication (java)
Reporter: Uwe Schindler
Assignee: Timothy Potter

 See: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/10517/
 You can download all logs from there.



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (LUCENE-5750) Speed up monotonic address access in BINARY/SORTED_SET

2014-06-10 Thread Robert Muir (JIRA)
Robert Muir created LUCENE-5750:
---

 Summary: Speed up monotonic address access in BINARY/SORTED_SET
 Key: LUCENE-5750
 URL: https://issues.apache.org/jira/browse/LUCENE-5750
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Robert Muir
 Attachments: LUCENE-5750.patch

I found this while exploring LUCENE-5748, but it currently applies to both 
variable length BINARY and SORTED_SET, so I think it's worth doing here first.

I think it's just a holdover from before MonotonicBlockPackedWriter that to 
access element N we currently do:
{code}
startOffset = (docID == 0 ? 0 : ordIndex.get(docID-1));
endOffset = ordIndex.get(docID);
{code}

That's because previously we didn't have packed ints that supported > 
Integer.MAX_VALUE elements. But that's been fixed for a long time. If we just 
write a 0 first and do this:
{code}
startOffset = ordIndex.get(docID);
endOffset = ordIndex.get(docID+1);
{code}

The access is then much faster. For sorting I see around a 20% improvement. We 
don't lose any compression, because we should assume the delta from 0 .. 1 is 
similar to any other gap N .. N+1



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-5750) Speed up monotonic address access in BINARY/SORTED_SET

2014-06-10 Thread Robert Muir (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5750?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Muir updated LUCENE-5750:


Attachment: LUCENE-5750.patch

patch (we have a new DV format for 4.9 so it's a good time to fix it)

 Speed up monotonic address access in BINARY/SORTED_SET
 --

 Key: LUCENE-5750
 URL: https://issues.apache.org/jira/browse/LUCENE-5750
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Robert Muir
 Attachments: LUCENE-5750.patch


 I found this while exploring LUCENE-5748, but it currently applies to both 
 variable length BINARY and SORTED_SET, so I think it's worth doing here 
 first.
 I think it's just a holdover from before MonotonicBlockPackedWriter that to 
 access element N we currently do:
 {code}
 startOffset = (docID == 0 ? 0 : ordIndex.get(docID-1));
 endOffset = ordIndex.get(docID);
 {code}
 That's because previously we didn't have packed ints that supported > 
 Integer.MAX_VALUE elements. But that's been fixed for a long time. If we just 
 write a 0 first and do this:
 {code}
 startOffset = ordIndex.get(docID);
 endOffset = ordIndex.get(docID+1);
 {code}
 The access is then much faster. For sorting I see around a 20% improvement. We 
 don't lose any compression, because we should assume the delta from 0 .. 1 is 
 similar to any other gap N .. N+1



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-2894) Implement distributed pivot faceting

2014-06-10 Thread Hoss Man (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-2894?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hoss Man updated SOLR-2894:
---

Attachment: SOLR-2894.patch

Started getting back into this yesterday (I should have several large blocks of 
time for this issue this week & next week)...

bq. Me and Brett discovered several bugs with our mincount and the changes I 
made to our refinement requests that resulted in the odd behavior you were 
seeing. 

Awesome! ... glad to see the test was useful.

bq. Not everything is super happy. I get what look like solrcloud errors when 
running certain seeds

Hmmm... that is a weird error.  People sometimes see errors in solr tests that 
use threads related to timing and/or assertions of things that haven't happened 
yet, but I don't remember ever seeing anything like this type of problem with 
initialization of the cores.

Do these failures reproduce for you with the same seeds? Can you post the full 
reproduce line that you get with these failures?

bq. I forgot this patch also comments out the randomUsableUnicodeString to just 
be a simple string, BUT I've changed it back on my box and it seems to be fine.

Yep -- it also still had one of my nocommits so that it was _only_ pivoting on 
string fields, but even w/o that it's worked great for me on many iterations.



Revised patch - mostly cleaning up the lingering issues in TestCloudPivotFacet, 
but with a few other minor fixes of stuff I noticed.

Detailed changes compared to previous patch...

* removed TestDistributedSearch.java.orig that seems to have been included in 
the patch by mistake
* cleanup TestCloudPivotFacet
** fixed randomUsableUnicodeString()
** fix nocommit about testing pivot on non-string fields
** fixed the depth checking (we can assert the *max* depth, but that's it)
** removed weird (unused) int ss = 2 that got added to assertNumFound
*** was also in some dead code in PivotFacetProcessor?
** refactored cut/paste methods from the Cursor test into the base class
* I removed the NullGoesLastComparator class and replaced it with a 
compareWithNullLast helper method in PivotFacetField (and added a unit test for 
it; a sketch of such a helper follows this list)
** the Comparator contract is pretty explicit about null, and this class 
violated that
** it was only being used for simple method calls, not passed to anything that 
explicitly needed a Comparator, so there wasn't a strong need for a standalone 
class
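A helper along these lines would look roughly like the following (hedged 
sketch; the real method lives in PivotFacetField and its exact signature may 
differ):
{code}
// Compare two values, sorting nulls after all non-null values. Unlike a
// Comparator, this plain helper method is free to define null handling.
static <T extends Comparable<T>> int compareWithNullLast(T a, T b) {
  if (a == null) {
    return (b == null) ? 0 : 1;  // null sorts last
  }
  if (b == null) {
    return -1;
  }
  return a.compareTo(b);
}
{code}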



My next step plans...

* review DistributedFacetPivotTest in more depth - add stronger assertions
** at first glance, it looks like a lot of the test is following the example of 
most existing distrib tests of relying on comparisons between the controlClient 
and the distrib client -- in my opinion that's a bad pattern, and I'd like to 
add some explicit assertions on the results of all the {{this.query(...)}} calls
* re-review the new pivot code (and the changes to facet code) in general
** it's been a while since my last skim, and I know you've tweaked a bunch 
based on my previous comments
** I'll take a stab at adding more javadocs to some of the new methods as I 
make sense of them
** where possible, I'm going to try to add unit tests for some of the new low 
level methods you've introduced -- largely as a way to help ensure I understand 
what they do


 Implement distributed pivot faceting
 

 Key: SOLR-2894
 URL: https://issues.apache.org/jira/browse/SOLR-2894
 Project: Solr
  Issue Type: Improvement
Reporter: Erik Hatcher
 Fix For: 4.9, 5.0

 Attachments: SOLR-2894-mincount-minification.patch, 
 SOLR-2894-reworked.patch, SOLR-2894.patch, SOLR-2894.patch, SOLR-2894.patch, 
 SOLR-2894.patch, SOLR-2894.patch, SOLR-2894.patch, SOLR-2894.patch, 
 SOLR-2894.patch, SOLR-2894.patch, SOLR-2894.patch, SOLR-2894.patch, 
 SOLR-2894.patch, SOLR-2894.patch, SOLR-2894.patch, SOLR-2894.patch, 
 SOLR-2894.patch, SOLR-2894.patch, SOLR-2894.patch, SOLR-2894.patch, 
 SOLR-2894.patch, SOLR-2894.patch, SOLR-2894.patch, SOLR-2894.patch, 
 SOLR-2894.patch, SOLR-2894.patch, SOLR-2894_cloud_test.patch, 
 dateToObject.patch, pivot_mincount_problem.sh


 Following up on SOLR-792, pivot faceting currently only supports 
 undistributed mode.  Distributed pivot faceting needs to be implemented.



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-5750) Speed up monotonic address access in BINARY/SORTED_SET

2014-06-10 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5750?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14026708#comment-14026708
 ] 

Michael McCandless commented on LUCENE-5750:


+1

 Speed up monotonic address access in BINARY/SORTED_SET
 --

 Key: LUCENE-5750
 URL: https://issues.apache.org/jira/browse/LUCENE-5750
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Robert Muir
 Attachments: LUCENE-5750.patch


 I found this while exploring LUCENE-5748, but it currently applies to both 
 variable length BINARY and SORTED_SET, so I think it's worth it to do here 
 first.
 I think it's just a holdover from before MonotonicBlockPackedWriter that to 
 access element N we currently do:
 {code}
 startOffset = (docID == 0 ? 0 : ordIndex.get(docID-1));
 endOffset = ordIndex.get(docID);
 {code}
 That's because previously we didn't have packed ints that supported > 
 Integer.MAX_VALUE elements. But that's been fixed for a long time. If we just 
 write a 0 first and do this:
 {code}
 startOffset = ordIndex.get(docID);
 endOffset = ordIndex.get(docID+1);
 {code}
 The access is then much faster. For sorting I see around 20% improvement. We 
 don't lose any compression because we should assume the delta from 0 .. 1 is 
 similar to any other gap N .. N+1.
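
A standalone illustration of the addressing change described above (a sketch: 
a plain long array stands in for the packed monotonic reader, so only the 
indexing scheme is shown, not the actual codec):

{code:java}
// Both layouts over element lengths {3, 5, 2, 7}. With the leading 0,
// start/end for element docID become two plain reads with no branch.
public class MonotonicAddressingDemo {
  public static void main(String[] args) {
    long[] lengths = {3, 5, 2, 7};
    int docID = 2;

    // Old layout: oldIndex[i] = end offset of element i; start needs a branch.
    long[] oldIndex = new long[lengths.length];
    long sum = 0;
    for (int i = 0; i < lengths.length; i++) {
      sum += lengths[i];
      oldIndex[i] = sum;
    }
    long oldStart = (docID == 0 ? 0 : oldIndex[docID - 1]);
    long oldEnd = oldIndex[docID];

    // New layout: write a 0 first, so newIndex[i]..newIndex[i+1] brackets
    // element i directly.
    long[] newIndex = new long[lengths.length + 1];
    for (int i = 0; i < lengths.length; i++) {
      newIndex[i + 1] = newIndex[i] + lengths[i];
    }
    long newStart = newIndex[docID];
    long newEnd = newIndex[docID + 1];

    System.out.println(oldStart + ".." + oldEnd);  // 8..10
    System.out.println(newStart + ".." + newEnd);  // 8..10
  }
}
{code}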



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-5750) Speed up monotonic address access in BINARY/SORTED_SET

2014-06-10 Thread Robert Muir (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5750?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Muir updated LUCENE-5750:


Attachment: LUCENE-5750.patch

add +1L to the SORTED_SET case (it's special and takes 'int' docid versus 
BINARY, which already uses long addressing)

 Speed up monotonic address access in BINARY/SORTED_SET
 --

 Key: LUCENE-5750
 URL: https://issues.apache.org/jira/browse/LUCENE-5750
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Robert Muir
 Attachments: LUCENE-5750.patch, LUCENE-5750.patch


 I found this while exploring LUCENE-5748, but it currently applies to both 
 variable length BINARY and SORTED_SET, so I think it's worth it to do here 
 first.
 I think it's just a holdover from before MonotonicBlockPackedWriter that to 
 access element N we currently do:
 {code}
 startOffset = (docID == 0 ? 0 : ordIndex.get(docID-1));
 endOffset = ordIndex.get(docID);
 {code}
 That's because previously we didn't have packed ints that supported > 
 Integer.MAX_VALUE elements. But that's been fixed for a long time. If we just 
 write a 0 first and do this:
 {code}
 startOffset = ordIndex.get(docID);
 endOffset = ordIndex.get(docID+1);
 {code}
 The access is then much faster. For sorting I see around 20% improvement. We 
 don't lose any compression because we should assume the delta from 0 .. 1 is 
 similar to any other gap N .. N+1.



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-5750) Speed up monotonic address access in BINARY/SORTED_SET

2014-06-10 Thread Adrien Grand (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5750?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14026733#comment-14026733
 ] 

Adrien Grand commented on LUCENE-5750:
--

+1

 Speed up monotonic address access in BINARY/SORTED_SET
 --

 Key: LUCENE-5750
 URL: https://issues.apache.org/jira/browse/LUCENE-5750
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Robert Muir
 Attachments: LUCENE-5750.patch, LUCENE-5750.patch


 I found this while exploring LUCENE-5748, but it currently applies to both 
 variable length BINARY and SORTED_SET, so I think it's worth it to do here 
 first.
 I think it's just a holdover from before MonotonicBlockPackedWriter that to 
 access element N we currently do:
 {code}
 startOffset = (docID == 0 ? 0 : ordIndex.get(docID-1));
 endOffset = ordIndex.get(docID);
 {code}
 That's because previously we didn't have packed ints that supported > 
 Integer.MAX_VALUE elements. But that's been fixed for a long time. If we just 
 write a 0 first and do this:
 {code}
 startOffset = ordIndex.get(docID);
 endOffset = ordIndex.get(docID+1);
 {code}
 The access is then much faster. For sorting I see around 20% improvement. We 
 don't lose any compression because we should assume the delta from 0 .. 1 is 
 similar to any other gap N .. N+1.



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Assigned] (SOLR-6158) Solr looks up configSets in the wrong directory

2014-06-10 Thread Alan Woodward (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6158?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alan Woodward reassigned SOLR-6158:
---

Assignee: Alan Woodward

 Solr looks up configSets in the wrong directory
 ---

 Key: SOLR-6158
 URL: https://issues.apache.org/jira/browse/SOLR-6158
 Project: Solr
  Issue Type: Bug
Affects Versions: 4.8, 4.8.1
Reporter: Simon Endele
Assignee: Alan Woodward

 I tried the small tutorial on http://heliosearch.org/solr-4-8-features/ to 
 create Named Config Sets based on the Solr example shipped with Solr 4.8.1 
 (like it's done in the tutorial, same problem with 4.8.0).
 Creating a new core with a configSet seems to work (directory 'books' and 
 'books/core.properties' are created correctly).
 But loading the new core does not work:
 {code:none}67446 [qtp25155085-11] INFO  
 org.apache.solr.handler.admin.CoreAdminHandler  core create command 
 configSet=generic&name=books&action=CREATE
 67452 [qtp25155085-11] ERROR org.apache.solr.core.CoreContainer  Unable to 
 create core: books
 org.apache.solr.common.SolrException: Could not load configuration from 
 directory C:\dev\solr-4.8.1\example\configsets\generic
 at 
 org.apache.solr.core.ConfigSetService$Default.locateInstanceDir(ConfigSetService.java:145)
 at 
 org.apache.solr.core.ConfigSetService$Default.createCoreResourceLoader(ConfigSetService.java:130)
 at 
 org.apache.solr.core.ConfigSetService.getConfig(ConfigSetService.java:58)
 at org.apache.solr.core.CoreContainer.create(CoreContainer.java:554)
 ...
 {code}
 It seems like Solr looks up the config sets in the wrong directory:
 C:\dev\solr-4.8.1\example\configsets\generic (in the log above) instead of
 C:\dev\solr-4.8.1\example\solr\configsets\generic (like stated in the 
 tutorial and the documentation on 
 https://cwiki.apache.org/confluence/display/solr/Config+Sets)
 Moving the configsets directory one level up (into 'example') will work.
 But according to the documentation (and the tutorial) it should be located in 
 the solr home directory.
 In case I'm completely wrong and everything works as expected, how can the 
 configsets directory be configured?
 The documentation on 
 https://cwiki.apache.org/confluence/display/solr/Config+Sets mentions a 
 configurable configset base directory, but I can't find any information on 
 the web.
 Another thing: If it would work as I expect, the references <lib 
 dir="../../../contrib/extraction/lib" regex=".*\.jar" /> etc. in 
 solr-4.8.1/example/solr/configsets/generic/conf/solrconfig.xml should get one 
 more ../ added, I guess (missing in the tutorial).



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6158) Solr looks up configSets in the wrong directory

2014-06-10 Thread Alan Woodward (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6158?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14026776#comment-14026776
 ] 

Alan Woodward commented on SOLR-6158:
-

Ah, looks like if the configSetBaseDir isn't specified it defaults to 
{{configsets}} underneath the CWD, rather than under solr home.  Should be an 
easy fix.

As a workaround, you can set {{configSetBaseDir}} in solr.xml, see 
https://cwiki.apache.org/confluence/display/solr/Format+of+solr.xml.
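
Something like this in solr.xml should work in the meantime (the path value 
here is illustrative, not the default):

{code:xml}
<solr>
  <!-- workaround: point configSetBaseDir at solr home explicitly -->
  <str name="configSetBaseDir">${solr.solr.home}/configsets</str>
</solr>
{code}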

 Solr looks up configSets in the wrong directory
 ---

 Key: SOLR-6158
 URL: https://issues.apache.org/jira/browse/SOLR-6158
 Project: Solr
  Issue Type: Bug
Affects Versions: 4.8, 4.8.1
Reporter: Simon Endele
Assignee: Alan Woodward

 I tried the small tutorial on http://heliosearch.org/solr-4-8-features/ to 
 create Named Config Sets based on the Solr example shipped with Solr 4.8.1 
 (like it's done in the tutorial, same problem with 4.8.0).
 Creating a new core with a configSet seems to work (directory 'books' and 
 'books/core.properties' are created correctly).
 But loading the new core does not work:
 {code:none}67446 [qtp25155085-11] INFO  
 org.apache.solr.handler.admin.CoreAdminHandler  core create command 
 configSet=generic&name=books&action=CREATE
 67452 [qtp25155085-11] ERROR org.apache.solr.core.CoreContainer  Unable to 
 create core: books
 org.apache.solr.common.SolrException: Could not load configuration from 
 directory C:\dev\solr-4.8.1\example\configsets\generic
 at 
 org.apache.solr.core.ConfigSetService$Default.locateInstanceDir(ConfigSetService.java:145)
 at 
 org.apache.solr.core.ConfigSetService$Default.createCoreResourceLoader(ConfigSetService.java:130)
 at 
 org.apache.solr.core.ConfigSetService.getConfig(ConfigSetService.java:58)
 at org.apache.solr.core.CoreContainer.create(CoreContainer.java:554)
 ...
 {code}
 It seems like Solr looks up the config sets in the wrong directory:
 C:\dev\solr-4.8.1\example\configsets\generic (in the log above) instead of
 C:\dev\solr-4.8.1\example\solr\configsets\generic (like stated in the 
 tutorial and the documentation on 
 https://cwiki.apache.org/confluence/display/solr/Config+Sets)
 Moving the configsets directory one level up (into 'example') will work.
 But according to the documentation (and the tutorial) it should be located in 
 the solr home directory.
 In case I'm completely wrong and everything works as expected, how can the 
 configsets directory be configured?
 The documentation on 
 https://cwiki.apache.org/confluence/display/solr/Config+Sets mentions a 
 configurable configset base directory, but I can't find any information on 
 the web.
 Another thing: If it would work as I expect, the references <lib 
 dir="../../../contrib/extraction/lib" regex=".*\.jar" /> etc. in 
 solr-4.8.1/example/solr/configsets/generic/conf/solrconfig.xml should get one 
 more ../ added, I guess (missing in the tutorial).



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-6158) Solr looks up configSets in the wrong directory

2014-06-10 Thread Alan Woodward (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6158?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alan Woodward updated SOLR-6158:


Attachment: SOLR-6158.patch

Fix, with a couple of tests.

 Solr looks up configSets in the wrong directory
 ---

 Key: SOLR-6158
 URL: https://issues.apache.org/jira/browse/SOLR-6158
 Project: Solr
  Issue Type: Bug
Affects Versions: 4.8, 4.8.1
Reporter: Simon Endele
Assignee: Alan Woodward
 Attachments: SOLR-6158.patch


 I tried the small tutorial on http://heliosearch.org/solr-4-8-features/ to 
 create Named Config Sets based on the Solr example shipped with Solr 4.8.1 
 (like it's done in the tutorial, same problem with 4.8.0).
 Creating a new core with a configSet seems to work (directory 'books' and 
 'books/core.properties' are created correctly).
 But loading the new core does not work:
 {code:none}67446 [qtp25155085-11] INFO  
 org.apache.solr.handler.admin.CoreAdminHandler  core create command 
 configSet=generic&name=books&action=CREATE
 67452 [qtp25155085-11] ERROR org.apache.solr.core.CoreContainer  Unable to 
 create core: books
 org.apache.solr.common.SolrException: Could not load configuration from 
 directory C:\dev\solr-4.8.1\example\configsets\generic
 at 
 org.apache.solr.core.ConfigSetService$Default.locateInstanceDir(ConfigSetService.java:145)
 at 
 org.apache.solr.core.ConfigSetService$Default.createCoreResourceLoader(ConfigSetService.java:130)
 at 
 org.apache.solr.core.ConfigSetService.getConfig(ConfigSetService.java:58)
 at org.apache.solr.core.CoreContainer.create(CoreContainer.java:554)
 ...
 {code}
 It seems like Solr looks up the config sets in the wrong directory:
 C:\dev\solr-4.8.1\example\configsets\generic (in the log above) instead of
 C:\dev\solr-4.8.1\example\solr\configsets\generic (like stated in the 
 tutorial and the documentation on 
 https://cwiki.apache.org/confluence/display/solr/Config+Sets)
 Moving the configsets directory one level up (into 'example') will work.
 But according to the documentation (and the tutorial) it should be located in 
 the solr home directory.
 In case I'm completely wrong and everything works as expected, how can the 
 configsets directory be configured?
 The documentation on 
 https://cwiki.apache.org/confluence/display/solr/Config+Sets mentions a 
 configurable configset base directory, but I can't find any information on 
 the web.
 Another thing: If it would work as I expect, the references <lib 
 dir="../../../contrib/extraction/lib" regex=".*\.jar" /> etc. in 
 solr-4.8.1/example/solr/configsets/generic/conf/solrconfig.xml should get one 
 more ../ added, I guess (missing in the tutorial).



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-5648) Index/search multi-valued time durations

2014-06-10 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5648?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14026822#comment-14026822
 ] 

ASF subversion and git services commented on LUCENE-5648:
-

Commit 1601734 from [~dsmiley] in branch 'dev/trunk'
[ https://svn.apache.org/r1601734 ]

LUCENE-5648: Bug fix for detecting Contains relation when on the edge.

 Index/search multi-valued time durations
 

 Key: LUCENE-5648
 URL: https://issues.apache.org/jira/browse/LUCENE-5648
 Project: Lucene - Core
  Issue Type: New Feature
  Components: modules/spatial
Reporter: David Smiley
Assignee: David Smiley
 Fix For: 5.0

 Attachments: LUCENE-5648.patch, LUCENE-5648.patch, LUCENE-5648.patch, 
 LUCENE-5648.patch


 If you need to index a date/time duration, then the way to do that is to have 
 a pair of date fields; one for the start and one for the end -- pretty 
 straight-forward. But if you need to index a variable number of durations per 
 document, then the options aren't pretty, ranging from denormalization, to 
 joins, to using Lucene spatial with 2D as described 
 [here|http://wiki.apache.org/solr/SpatialForTimeDurations].  Ideally it would 
 be easier to index durations, and work in a more optimal way.
 This issue implements the aforementioned feature using Lucene-spatial with a 
 new single-dimensional SpatialPrefixTree implementation. Unlike the other two 
 SPT implementations, it's not based on floating point numbers. It will have a 
 Date based customization that indexes levels at meaningful quantities like 
 seconds, minutes, hours, etc.  The point of that alignment is to make it 
 faster to query across meaningful ranges (i.e. [2000 TO 2014]) and to enable 
 a follow-on issue to facet on the data in a really fast way.
 I expect to have a working patch up this week.



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-5750) Speed up monotonic address access in BINARY/SORTED_SET

2014-06-10 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5750?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14026868#comment-14026868
 ] 

ASF subversion and git services commented on LUCENE-5750:
-

Commit 1601750 from [~rcmuir] in branch 'dev/trunk'
[ https://svn.apache.org/r1601750 ]

LUCENE-5750: speed up monotonic address in BINARY/SORTED_SET

 Speed up monotonic address access in BINARY/SORTED_SET
 --

 Key: LUCENE-5750
 URL: https://issues.apache.org/jira/browse/LUCENE-5750
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Robert Muir
 Attachments: LUCENE-5750.patch, LUCENE-5750.patch


 I found this while exploring LUCENE-5748, but it currently applies to both 
 variable length BINARY and SORTED_SET, so I think it's worth it to do here 
 first.
 I think it's just a holdover from before MonotonicBlockPackedWriter that to 
 access element N we currently do:
 {code}
 startOffset = (docID == 0 ? 0 : ordIndex.get(docID-1));
 endOffset = ordIndex.get(docID);
 {code}
 That's because previously we didn't have packed ints that supported > 
 Integer.MAX_VALUE elements. But that's been fixed for a long time. If we just 
 write a 0 first and do this:
 {code}
 startOffset = ordIndex.get(docID);
 endOffset = ordIndex.get(docID+1);
 {code}
 The access is then much faster. For sorting I see around 20% improvement. We 
 don't lose any compression because we should assume the delta from 0 .. 1 is 
 similar to any other gap N .. N+1.



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (LUCENE-5750) Speed up monotonic address access in BINARY/SORTED_SET

2014-06-10 Thread Robert Muir (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5750?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Muir resolved LUCENE-5750.
-

   Resolution: Fixed
Fix Version/s: 5.0
   4.9

 Speed up monotonic address access in BINARY/SORTED_SET
 --

 Key: LUCENE-5750
 URL: https://issues.apache.org/jira/browse/LUCENE-5750
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Robert Muir
 Fix For: 4.9, 5.0

 Attachments: LUCENE-5750.patch, LUCENE-5750.patch


 I found this while exploring LUCENE-5748, but it currently applies to both 
 variable length BINARY and SORTED_SET, so I think it's worth it to do here 
 first.
 I think it's just a holdover from before MonotonicBlockPackedWriter that to 
 access element N we currently do:
 {code}
 startOffset = (docID == 0 ? 0 : ordIndex.get(docID-1));
 endOffset = ordIndex.get(docID);
 {code}
 That's because previously we didn't have packed ints that supported > 
 Integer.MAX_VALUE elements. But that's been fixed for a long time. If we just 
 write a 0 first and do this:
 {code}
 startOffset = ordIndex.get(docID);
 endOffset = ordIndex.get(docID+1);
 {code}
 The access is then much faster. For sorting I see around 20% improvement. We 
 don't lose any compression because we should assume the delta from 0 .. 1 is 
 similar to any other gap N .. N+1.



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-5750) Speed up monotonic address access in BINARY/SORTED_SET

2014-06-10 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5750?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14026882#comment-14026882
 ] 

ASF subversion and git services commented on LUCENE-5750:
-

Commit 1601755 from [~rcmuir] in branch 'dev/branches/branch_4x'
[ https://svn.apache.org/r1601755 ]

LUCENE-5750: speed up monotonic address in BINARY/SORTED_SET

 Speed up monotonic address access in BINARY/SORTED_SET
 --

 Key: LUCENE-5750
 URL: https://issues.apache.org/jira/browse/LUCENE-5750
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Robert Muir
 Fix For: 4.9, 5.0

 Attachments: LUCENE-5750.patch, LUCENE-5750.patch


 I found this while exploring LUCENE-5748, but it currently applies to both 
 variable length BINARY and SORTED_SET, so I think it's worth it to do here 
 first.
 I think it's just a holdover from before MonotonicBlockPackedWriter that to 
 access element N we currently do:
 {code}
 startOffset = (docID == 0 ? 0 : ordIndex.get(docID-1));
 endOffset = ordIndex.get(docID);
 {code}
 That's because previously we didn't have packed ints that supported > 
 Integer.MAX_VALUE elements. But that's been fixed for a long time. If we just 
 write a 0 first and do this:
 {code}
 startOffset = ordIndex.get(docID);
 endOffset = ordIndex.get(docID+1);
 {code}
 The access is then much faster. For sorting I see around 20% improvement. We 
 don't lose any compression because we should assume the delta from 0 .. 1 is 
 similar to any other gap N .. N+1.



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-6145) Concurrent Schema API field additions can result in endless loop

2014-06-10 Thread Steve Rowe (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6145?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Rowe updated SOLR-6145:
-

Attachment: SOLR-6145.patch

Patch, removes the useless optimistic concurrency loops from 
ManagedIndexSchema.add(Copy)Fields(), and also modifies the consumers of those 
methods to handle failures that won't benefit from retrying.

Committing shortly. 

 Concurrent Schema API field additions can result in endless loop
 

 Key: SOLR-6145
 URL: https://issues.apache.org/jira/browse/SOLR-6145
 Project: Solr
  Issue Type: Bug
  Components: Schema and Analysis
Reporter: Steve Rowe
Assignee: Steve Rowe
Priority: Critical
 Attachments: SOLR-6145-tests.patch, SOLR-6145.patch, SOLR-6145.patch, 
 SOLR-6145.patch, SOLR-6145v2.patch, concurrent_updates_and_schema_api.patch


 The optimistic concurrency loop in {{ManagedIndexSchema.addFields()}} is the 
 likely culprit.



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6158) Solr looks up configSets in the wrong directory

2014-06-10 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6158?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14026898#comment-14026898
 ] 

ASF subversion and git services commented on SOLR-6158:
---

Commit 1601758 from [~romseygeek] in branch 'dev/trunk'
[ https://svn.apache.org/r1601758 ]

SOLR-6158: Fix configSetBaseDir path resolution

 Solr looks up configSets in the wrong directory
 ---

 Key: SOLR-6158
 URL: https://issues.apache.org/jira/browse/SOLR-6158
 Project: Solr
  Issue Type: Bug
Affects Versions: 4.8, 4.8.1
Reporter: Simon Endele
Assignee: Alan Woodward
 Attachments: SOLR-6158.patch


 I tried the small tutorial on http://heliosearch.org/solr-4-8-features/ to 
 create Named Config Sets based on the Solr example shipped with Solr 4.8.1 
 (like it's done in the tutorial, same problem with 4.8.0).
 Creating a new core with a configSet seems to work (directory 'books' and 
 'books/core.properties' are created correctly).
 But loading the new core does not work:
 {code:none}67446 [qtp25155085-11] INFO  
 org.apache.solr.handler.admin.CoreAdminHandler  core create command 
 configSet=generic&name=books&action=CREATE
 67452 [qtp25155085-11] ERROR org.apache.solr.core.CoreContainer  Unable to 
 create core: books
 org.apache.solr.common.SolrException: Could not load configuration from 
 directory C:\dev\solr-4.8.1\example\configsets\generic
 at 
 org.apache.solr.core.ConfigSetService$Default.locateInstanceDir(ConfigSetService.java:145)
 at 
 org.apache.solr.core.ConfigSetService$Default.createCoreResourceLoader(ConfigSetService.java:130)
 at 
 org.apache.solr.core.ConfigSetService.getConfig(ConfigSetService.java:58)
 at org.apache.solr.core.CoreContainer.create(CoreContainer.java:554)
 ...
 {code}
 It seems like Solr looks up the config sets in the wrong directory:
 C:\dev\solr-4.8.1\example\configsets\generic (in the log above) instead of
 C:\dev\solr-4.8.1\example\solr\configsets\generic (like stated in the 
 tutorial and the documentation on 
 https://cwiki.apache.org/confluence/display/solr/Config+Sets)
 Moving the configsets directory one level up (into 'example') will work.
 But according to the documentation (and the tutorial) it should be located in 
 the solr home directory.
 In case I'm completely wrong and everything works as expected, how can the 
 configsets directory be configured?
 The documentation on 
 https://cwiki.apache.org/confluence/display/solr/Config+Sets mentions a 
 configurable configset base directory, but I can't find any information on 
 the web.
 Another thing: If it would work as I expect, the references <lib 
 dir="../../../contrib/extraction/lib" regex=".*\.jar" /> etc. in 
 solr-4.8.1/example/solr/configsets/generic/conf/solrconfig.xml should get one 
 more ../ added, I guess (missing in the tutorial).



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6158) Solr looks up configSets in the wrong directory

2014-06-10 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6158?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14026899#comment-14026899
 ] 

ASF subversion and git services commented on SOLR-6158:
---

Commit 1601759 from [~romseygeek] in branch 'dev/branches/branch_4x'
[ https://svn.apache.org/r1601759 ]

SOLR-6158: Fix configSetBaseDir path resolution

 Solr looks up configSets in the wrong directory
 ---

 Key: SOLR-6158
 URL: https://issues.apache.org/jira/browse/SOLR-6158
 Project: Solr
  Issue Type: Bug
Affects Versions: 4.8, 4.8.1
Reporter: Simon Endele
Assignee: Alan Woodward
 Attachments: SOLR-6158.patch


 I tried the small tutorial on http://heliosearch.org/solr-4-8-features/ to 
 create Named Config Sets based on the Solr example shipped with Solr 4.8.1 
 (like it's done in the tutorial, same problem with 4.8.0).
 Creating a new core with a configSet seems to work (directory 'books' and 
 'books/core.properties' are created correctly).
 But loading the new core does not work:
 {code:none}67446 [qtp25155085-11] INFO  
 org.apache.solr.handler.admin.CoreAdminHandler  core create command 
 configSet=generic&name=books&action=CREATE
 67452 [qtp25155085-11] ERROR org.apache.solr.core.CoreContainer  Unable to 
 create core: books
 org.apache.solr.common.SolrException: Could not load configuration from 
 directory C:\dev\solr-4.8.1\example\configsets\generic
 at 
 org.apache.solr.core.ConfigSetService$Default.locateInstanceDir(ConfigSetService.java:145)
 at 
 org.apache.solr.core.ConfigSetService$Default.createCoreResourceLoader(ConfigSetService.java:130)
 at 
 org.apache.solr.core.ConfigSetService.getConfig(ConfigSetService.java:58)
 at org.apache.solr.core.CoreContainer.create(CoreContainer.java:554)
 ...
 {code}
 It seems like Solr looks up the config sets in the wrong directory:
 C:\dev\solr-4.8.1\example\configsets\generic (in the log above) instead of
 C:\dev\solr-4.8.1\example\solr\configsets\generic (like stated in the 
 tutorial and the documentation on 
 https://cwiki.apache.org/confluence/display/solr/Config+Sets)
 Moving the configsets directory one level up (into 'example') will work.
 But according to the documentation (and the tutorial) it should be located in 
 the solr home directory.
 In case I'm completely wrong and everything works as expected, how can the 
 configsets directory be configured?
 The documentation on 
 https://cwiki.apache.org/confluence/display/solr/Config+Sets mentions a 
 configurable configset base directory, but I can't find any information on 
 the web.
 Another thing: If it would work as I expect, the references <lib 
 dir="../../../contrib/extraction/lib" regex=".*\.jar" /> etc. in 
 solr-4.8.1/example/solr/configsets/generic/conf/solrconfig.xml should get one 
 more ../ added, I guess (missing in the tutorial).



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-6158) Solr looks up configSets in the wrong directory

2014-06-10 Thread Alan Woodward (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6158?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alan Woodward resolved SOLR-6158.
-

Resolution: Fixed

Thanks for reporting, Simon!

 Solr looks up configSets in the wrong directory
 ---

 Key: SOLR-6158
 URL: https://issues.apache.org/jira/browse/SOLR-6158
 Project: Solr
  Issue Type: Bug
Affects Versions: 4.8, 4.8.1
Reporter: Simon Endele
Assignee: Alan Woodward
 Attachments: SOLR-6158.patch


 I tried the small tutorial on http://heliosearch.org/solr-4-8-features/ to 
 create Named Config Sets based on the Solr example shipped with Solr 4.8.1 
 (like it's done in the tutorial, same problem with 4.8.0).
 Creating a new core with a configSet seems to work (directory 'books' and 
 'books/core.properties' are created correctly).
 But loading the new core does not work:
 {code:none}67446 [qtp25155085-11] INFO  
 org.apache.solr.handler.admin.CoreAdminHandler  core create command 
 configSet=generic&name=books&action=CREATE
 67452 [qtp25155085-11] ERROR org.apache.solr.core.CoreContainer  Unable to 
 create core: books
 org.apache.solr.common.SolrException: Could not load configuration from 
 directory C:\dev\solr-4.8.1\example\configsets\generic
 at 
 org.apache.solr.core.ConfigSetService$Default.locateInstanceDir(ConfigSetService.java:145)
 at 
 org.apache.solr.core.ConfigSetService$Default.createCoreResourceLoader(ConfigSetService.java:130)
 at 
 org.apache.solr.core.ConfigSetService.getConfig(ConfigSetService.java:58)
 at org.apache.solr.core.CoreContainer.create(CoreContainer.java:554)
 ...
 {code}
 It seems like Solr looks up the config sets in the wrong directory:
 C:\dev\solr-4.8.1\example\configsets\generic (in the log above) instead of
 C:\dev\solr-4.8.1\example\solr\configsets\generic (like stated in the 
 tutorial and the documentation on 
 https://cwiki.apache.org/confluence/display/solr/Config+Sets)
 Moving the configsets directory one level up (into 'example') will work.
 But according to the documentation (and the tutorial) it should be located in 
 the solr home directory.
 In case I'm completely wrong and everything works as expected, how can the 
 configsets directory be configured?
 The documentation on 
 https://cwiki.apache.org/confluence/display/solr/Config+Sets mentions a 
 configurable configset base directory, but I can't find any information on 
 the web.
 Another thing: If it would work as I expect, the references <lib 
 dir="../../../contrib/extraction/lib" regex=".*\.jar" /> etc. in 
 solr-4.8.1/example/solr/configsets/generic/conf/solrconfig.xml should get one 
 more ../ added, I guess (missing in the tutorial).



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-5205) [PATCH] SpanQueryParser with recursion, analysis and syntax very similar to classic QueryParser

2014-06-10 Thread Paul Elschot (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5205?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14026947#comment-14026947
 ] 

Paul Elschot commented on LUCENE-5205:
--

I'd like to extend this parser so it can be used to query positional joins, 
LUCENE-5627. This parser is a good fit for that because it provides span 
queries, and the positional joins are based on span queries, so integration 
should be doable.

This needs two changes in the parser here, one for the label-fragment join, and 
one for the label tree operations.

For the label-fragment join it would be necessary to allow a field in 
AbstractSpanQueryParser._parsePureSpanClause, basically at the point where it 
currently throws a Can't process field ... exception. At that point a 
positional join query can be inserted to join from the new field to the 
original field of the span clause. This will have to be based on a field 
schema that has the relations between the fields. This schema might also be 
used for indexing the documents.

The label tree will need an extension here to provide span queries in a label 
field that are based on the label tree info.
This is much the same as using the axes in XPath. I'd like to add {{/}} for a 
named child, {{..}} for parent, {{//}} for descendant-or-self, and maybe some 
form of child indexing.

It would be easier for me to try this from trunk rather than from the 
lucene5205 branch.


 [PATCH] SpanQueryParser with recursion, analysis and syntax very similar to 
 classic QueryParser
 ---

 Key: LUCENE-5205
 URL: https://issues.apache.org/jira/browse/LUCENE-5205
 Project: Lucene - Core
  Issue Type: Improvement
  Components: core/queryparser
Reporter: Tim Allison
  Labels: patch
 Fix For: 4.9

 Attachments: LUCENE-5205-cleanup-tests.patch, 
 LUCENE-5205-date-pkg-prvt.patch, LUCENE-5205.patch.gz, LUCENE-5205.patch.gz, 
 LUCENE-5205_dateTestReInitPkgPrvt.patch, 
 LUCENE-5205_improve_stop_word_handling.patch, 
 LUCENE-5205_smallTestMods.patch, LUCENE_5205.patch, 
 SpanQueryParser_v1.patch.gz, patch.txt


 This parser extends QueryParserBase and includes functionality from:
 * Classic QueryParser: most of its syntax
 * SurroundQueryParser: recursive parsing for near and not clauses.
 * ComplexPhraseQueryParser: can handle near queries that include multiterms 
 (wildcard, fuzzy, regex, prefix),
 * AnalyzingQueryParser: has an option to analyze multiterms.
 At a high level, there's a first pass BooleanQuery/field parser and then a 
 span query parser handles all terminal nodes and phrases.
 Same as classic syntax:
 * term: test 
 * fuzzy: roam~0.8, roam~2
 * wildcard: te?t, test*, t*st
 * regex: /\[mb\]oat/
 * phrase: jakarta apache
 * phrase with slop: jakarta apache~3
 * default or clause: jakarta apache
 * grouping or clause: (jakarta apache)
 * boolean and +/-: (lucene OR apache) NOT jakarta; +lucene +apache -jakarta
 * multiple fields: title:lucene author:hatcher
  
 Main additions in SpanQueryParser syntax vs. classic syntax:
 * Can require "in order" for phrases with slop with the \~> operator: 
 "jakarta apache"\~>3
 * Can specify not near: fever bieber!\~3,10 ::
 find fever but not if bieber appears within 3 words before or 10 
 words after it.
 * Fully recursive phrasal queries with \[ and \]; as in: \[\[jakarta 
 apache\]~3 lucene\]\~4 :: 
 find jakarta within 3 words of apache, and that hit has to be within 
 four words before lucene
 * Can also use \[\] for single level phrasal queries instead of "" as in: 
 \[jakarta apache\]
 * Can use or grouping clauses in phrasal queries: apache (lucene solr)\~3 
 :: find apache and then either lucene or solr within three words.
 * Can use multiterms in phrasal queries: jakarta\~1 ap*che\~2
 * Did I mention full recursion: \[\[jakarta\~1 ap*che\]\~2 (solr~ 
 /l\[ou\]\+\[cs\]\[en\]\+/)]\~10 :: Find something like jakarta within two 
 words of ap*che and that hit has to be within ten words of something like 
 solr or that lucene regex.
 * Can require at least x number of hits at boolean level: apache AND (lucene 
 solr tika)~2
 * Can use negative only query: -jakarta :: Find all docs that don't contain 
 jakarta
 * Can use an edit distance > 2 for fuzzy query via SlowFuzzyQuery (beware of 
 potential performance issues!).
 Trivial additions:
 * Can specify prefix length in fuzzy queries: jakarta~1,2 (edit distance =1, 
 prefix =2)
 * Can specify Optimal String Alignment (OSA) vs Levenshtein for distance 
 <= 2: (jakarta~1 (OSA) vs jakarta~1 (Levenshtein))
 This parser can be very useful for concordance tasks (see also LUCENE-5317 
 and LUCENE-5318) and for analytical search.  
 Until LUCENE-2878 is closed, this might have a use for fans of SpanQuery.
 Most of the documentation is in the javadoc 

[jira] [Commented] (LUCENE-5205) [PATCH] SpanQueryParser with recursion, analysis and syntax very similar to classic QueryParser

2014-06-10 Thread Tim Allison (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5205?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14026958#comment-14026958
 ] 

Tim Allison commented on LUCENE-5205:
-

Interesting.  Sounds great to me.  How can I help?  What would the syntax of a 
query look like?

 [PATCH] SpanQueryParser with recursion, analysis and syntax very similar to 
 classic QueryParser
 ---

 Key: LUCENE-5205
 URL: https://issues.apache.org/jira/browse/LUCENE-5205
 Project: Lucene - Core
  Issue Type: Improvement
  Components: core/queryparser
Reporter: Tim Allison
  Labels: patch
 Fix For: 4.9

 Attachments: LUCENE-5205-cleanup-tests.patch, 
 LUCENE-5205-date-pkg-prvt.patch, LUCENE-5205.patch.gz, LUCENE-5205.patch.gz, 
 LUCENE-5205_dateTestReInitPkgPrvt.patch, 
 LUCENE-5205_improve_stop_word_handling.patch, 
 LUCENE-5205_smallTestMods.patch, LUCENE_5205.patch, 
 SpanQueryParser_v1.patch.gz, patch.txt


 This parser extends QueryParserBase and includes functionality from:
 * Classic QueryParser: most of its syntax
 * SurroundQueryParser: recursive parsing for near and not clauses.
 * ComplexPhraseQueryParser: can handle near queries that include multiterms 
 (wildcard, fuzzy, regex, prefix),
 * AnalyzingQueryParser: has an option to analyze multiterms.
 At a high level, there's a first pass BooleanQuery/field parser and then a 
 span query parser handles all terminal nodes and phrases.
 Same as classic syntax:
 * term: test 
 * fuzzy: roam~0.8, roam~2
 * wildcard: te?t, test*, t*st
 * regex: /\[mb\]oat/
 * phrase: jakarta apache
 * phrase with slop: jakarta apache~3
 * default or clause: jakarta apache
 * grouping or clause: (jakarta apache)
 * boolean and +/-: (lucene OR apache) NOT jakarta; +lucene +apache -jakarta
 * multiple fields: title:lucene author:hatcher
  
 Main additions in SpanQueryParser syntax vs. classic syntax:
 * Can require "in order" for phrases with slop with the \~> operator: 
 "jakarta apache"\~>3
 * Can specify not near: fever bieber!\~3,10 ::
 find fever but not if bieber appears within 3 words before or 10 
 words after it.
 * Fully recursive phrasal queries with \[ and \]; as in: \[\[jakarta 
 apache\]~3 lucene\]\~4 :: 
 find jakarta within 3 words of apache, and that hit has to be within 
 four words before lucene
 * Can also use \[\] for single level phrasal queries instead of "" as in: 
 \[jakarta apache\]
 * Can use or grouping clauses in phrasal queries: apache (lucene solr)\~3 
 :: find apache and then either lucene or solr within three words.
 * Can use multiterms in phrasal queries: jakarta\~1 ap*che\~2
 * Did I mention full recursion: \[\[jakarta\~1 ap*che\]\~2 (solr~ 
 /l\[ou\]\+\[cs\]\[en\]\+/)]\~10 :: Find something like jakarta within two 
 words of ap*che and that hit has to be within ten words of something like 
 solr or that lucene regex.
 * Can require at least x number of hits at boolean level: apache AND (lucene 
 solr tika)~2
 * Can use negative only query: -jakarta :: Find all docs that don't contain 
 jakarta
 * Can use an edit distance > 2 for fuzzy query via SlowFuzzyQuery (beware of 
 potential performance issues!).
 Trivial additions:
 * Can specify prefix length in fuzzy queries: jakarta~1,2 (edit distance =1, 
 prefix =2)
 * Can specify Optimal String Alignment (OSA) vs Levenshtein for distance 
 <= 2: (jakarta~1 (OSA) vs jakarta~1 (Levenshtein))
 This parser can be very useful for concordance tasks (see also LUCENE-5317 
 and LUCENE-5318) and for analytical search.  
 Until LUCENE-2878 is closed, this might have a use for fans of SpanQuery.
 Most of the documentation is in the javadoc for SpanQueryParser.
 Any and all feedback is welcome.  Thank you.



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6157) ReplicationFactorTest hangs

2014-06-10 Thread Dawid Weiss (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6157?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14026969#comment-14026969
 ] 

Dawid Weiss commented on SOLR-6157:
---

The test framework has been pretty well tested and seems to be working fine. 
The timeout is set to an incredibly large value because Solr tests take so 
long. If you let it run until the timeout expires, you will get a stack trace 
of where each thread was. 

Uwe, could you send a signal to the hung process next time you see one? Then 
JVM logs will contain it and I can recover relevant stack traces.
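
(For reference: {{kill -3 <pid>}} / {{kill -QUIT <pid>}}, or {{jstack <pid>}}, 
will make a HotSpot JVM dump all thread stack traces.)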

 ReplicationFactorTest hangs
 ---

 Key: SOLR-6157
 URL: https://issues.apache.org/jira/browse/SOLR-6157
 Project: Solr
  Issue Type: Bug
  Components: replication (java)
Reporter: Uwe Schindler
Assignee: Timothy Potter

 See: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/10517/
 You can download all logs from there.



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6157) ReplicationFactorTest hangs

2014-06-10 Thread Dawid Weiss (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6157?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14026971#comment-14026971
 ] 

Dawid Weiss commented on SOLR-6157:
---

{code}
@TimeoutSuite(millis = 2 * TimeUnits.HOUR)
{code}

So those tests went beyond the timeout...? Looks like JVM problems with halt(), 
regardless of what actually caused the stall. Uwe, if you see it next time, try 
to capture the stack trace (see if the JVM is responding to it at all).

 ReplicationFactorTest hangs
 ---

 Key: SOLR-6157
 URL: https://issues.apache.org/jira/browse/SOLR-6157
 Project: Solr
  Issue Type: Bug
  Components: replication (java)
Reporter: Uwe Schindler
Assignee: Timothy Potter

 See: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/10517/
 You can download all logs from there.



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-6145) Concurrent Schema API field additions can result in endless loop

2014-06-10 Thread Steve Rowe (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6145?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Rowe updated SOLR-6145:
-

Attachment: SOLR-6145.patch

Final patch, added no-op catch clause for {{SchemaChangedInZkException}} to the 
optimistic concurrency loop in {{AddSchemaFieldsUpdateProcessor.processAdd()}}.

Committing now.
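
Roughly, the resulting retry shape looks like this -- a sketch reconstructed 
from the description above, not the actual patch; only 
{{SchemaChangedInZkException}} and the schema method names are taken from this 
issue, and I'm assuming the exception is visible as a nested class:

{code:java}
// Sketch only (imports omitted): optimistic concurrency with a no-op catch.
IndexSchema addFieldsWithRetry(SolrCore core, List<SchemaField> newFields) {
  while (true) {
    ManagedIndexSchema oldSchema = (ManagedIndexSchema) core.getLatestSchema();
    try {
      IndexSchema newSchema = oldSchema.addFields(newFields);
      core.setLatestSchema(newSchema);
      return newSchema;  // our update won the race
    } catch (ManagedIndexSchema.SchemaChangedInZkException e) {
      // no-op: another node updated the schema first; loop around and retry
      // against the freshly fetched latest schema
    }
  }
}
{code}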

 Concurrent Schema API field additions can result in endless loop
 

 Key: SOLR-6145
 URL: https://issues.apache.org/jira/browse/SOLR-6145
 Project: Solr
  Issue Type: Bug
  Components: Schema and Analysis
Reporter: Steve Rowe
Assignee: Steve Rowe
Priority: Critical
 Attachments: SOLR-6145-tests.patch, SOLR-6145.patch, SOLR-6145.patch, 
 SOLR-6145.patch, SOLR-6145.patch, SOLR-6145v2.patch, 
 concurrent_updates_and_schema_api.patch


 The optimistic concurrency loop in {{ManagedIndexSchema.addFields()}} is the 
 likely culprit.



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6145) Concurrent Schema API field additions can result in endless loop

2014-06-10 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6145?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14026995#comment-14026995
 ] 

ASF subversion and git services commented on SOLR-6145:
---

Commit 1601770 from [~steve_rowe] in branch 'dev/trunk'
[ https://svn.apache.org/r1601770 ]

SOLR-6145: Fix Schema API optimistic concurrency by moving it out of 
ManagedIndexSchema.add(Copy)Fields() into the consumers of those methods: 
CopyFieldCollectionResource, FieldCollectionResource, FieldResource, and 
AddSchemaFieldsUpdateProcessorFactory.

 Concurrent Schema API field additions can result in endless loop
 

 Key: SOLR-6145
 URL: https://issues.apache.org/jira/browse/SOLR-6145
 Project: Solr
  Issue Type: Bug
  Components: Schema and Analysis
Reporter: Steve Rowe
Assignee: Steve Rowe
Priority: Critical
 Attachments: SOLR-6145-tests.patch, SOLR-6145.patch, SOLR-6145.patch, 
 SOLR-6145.patch, SOLR-6145.patch, SOLR-6145v2.patch, 
 concurrent_updates_and_schema_api.patch


 The optimistic concurrency loop in {{ManagedIndexSchema.addFields()}} is the 
 likely culprit.



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-3583) Percentiles for facets, pivot facets, and distributed pivot facets

2014-06-10 Thread Andrew Muldowney (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-3583?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Muldowney updated SOLR-3583:
---

Attachment: SOLR-3583.patch

I've updated 3583 to work with the latest 2894 it's based on.

 Percentiles for facets, pivot facets, and distributed pivot facets
 --

 Key: SOLR-3583
 URL: https://issues.apache.org/jira/browse/SOLR-3583
 Project: Solr
  Issue Type: Improvement
Reporter: Chris Russell
Priority: Minor
  Labels: newbie, patch
 Fix For: 4.9, 5.0

 Attachments: SOLR-3583.patch, SOLR-3583.patch, SOLR-3583.patch, 
 SOLR-3583.patch, SOLR-3583.patch, SOLR-3583.patch, SOLR-3583.patch


 Built on top of SOLR-2894, this patch adds percentiles and averages to 
 facets, pivot facets, and distributed pivot facets by making use of range 
 facet internals.  



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-trunk-Linux (64bit/jdk1.8.0_20-ea-b15) - Build # 10521 - Still Failing!

2014-06-10 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/10521/
Java: 64bit/jdk1.8.0_20-ea-b15 -XX:+UseCompressedOops -XX:+UseParallelGC

12 tests failed.
FAILED:  org.apache.lucene.spatial.prefix.DateNRStrategyTest.testIntersects {#0 
seed=[9294A2A51FA82B75:E4AF183611E62404]}

Error Message:
-1

Stack Trace:
java.lang.ArrayIndexOutOfBoundsException: -1
at 
__randomizedtesting.SeedInfo.seed([9294A2A51FA82B75:E4AF183611E62404]:0)
at 
org.apache.lucene.spatial.prefix.tree.NumberRangePrefixTree$NRCell.getLVAtLevel(NumberRangePrefixTree.java:697)
at 
org.apache.lucene.spatial.prefix.tree.NumberRangePrefixTree$NRCell.relate(NumberRangePrefixTree.java:747)
at 
org.apache.lucene.spatial.prefix.tree.NumberRangePrefixTree$NRCell.relate(NumberRangePrefixTree.java:708)
at 
org.apache.lucene.spatial.prefix.tree.NumberRangePrefixTree$NRShape.relate(NumberRangePrefixTree.java:116)
at 
org.apache.lucene.spatial.prefix.IntersectsPrefixTreeFilter$1.visitScanned(IntersectsPrefixTreeFilter.java:93)
at 
org.apache.lucene.spatial.prefix.AbstractVisitingPrefixTreeFilter$VisitorTemplate.scan(AbstractVisitingPrefixTreeFilter.java:286)
at 
org.apache.lucene.spatial.prefix.AbstractVisitingPrefixTreeFilter$VisitorTemplate.addIntersectingChildren(AbstractVisitingPrefixTreeFilter.java:255)
at 
org.apache.lucene.spatial.prefix.AbstractVisitingPrefixTreeFilter$VisitorTemplate.getDocIdSet(AbstractVisitingPrefixTreeFilter.java:206)
at 
org.apache.lucene.spatial.prefix.IntersectsPrefixTreeFilter.getDocIdSet(IntersectsPrefixTreeFilter.java:97)
at 
org.apache.lucene.search.ConstantScoreQuery$ConstantWeight.scorer(ConstantScoreQuery.java:157)
at org.apache.lucene.search.Weight.bulkScorer(Weight.java:131)
at 
org.apache.lucene.search.ConstantScoreQuery$ConstantWeight.bulkScorer(ConstantScoreQuery.java:141)
at 
org.apache.lucene.search.AssertingWeight.bulkScorer(AssertingWeight.java:74)
at org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:611)
at 
org.apache.lucene.search.AssertingIndexSearcher.search(AssertingIndexSearcher.java:94)
at org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:483)
at org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:440)
at org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:273)
at org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:261)
at 
org.apache.lucene.spatial.SpatialTestCase.executeQuery(SpatialTestCase.java:142)
at 
org.apache.lucene.spatial.prefix.BaseNonFuzzySpatialOpStrategyTest.testOperation(BaseNonFuzzySpatialOpStrategyTest.java:112)
at 
org.apache.lucene.spatial.prefix.BaseNonFuzzySpatialOpStrategyTest.testOperationRandomShapes(BaseNonFuzzySpatialOpStrategyTest.java:64)
at 
org.apache.lucene.spatial.prefix.DateNRStrategyTest.testIntersects(DateNRStrategyTest.java:53)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:483)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:360)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:793)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:453)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836)
at 

[jira] [Commented] (LUCENE-5648) Index/search multi-valued time durations

2014-06-10 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5648?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14027069#comment-14027069
 ] 

ASF subversion and git services commented on LUCENE-5648:
-

Commit 1601777 from [~dsmiley] in branch 'dev/trunk'
[ https://svn.apache.org/r1601777 ]

LUCENE-5648: Bug fix for detecting Contains relation when on the edge.

 Index/search multi-valued time durations
 

 Key: LUCENE-5648
 URL: https://issues.apache.org/jira/browse/LUCENE-5648
 Project: Lucene - Core
  Issue Type: New Feature
  Components: modules/spatial
Reporter: David Smiley
Assignee: David Smiley
 Fix For: 5.0

 Attachments: LUCENE-5648.patch, LUCENE-5648.patch, LUCENE-5648.patch, 
 LUCENE-5648.patch


 If you need to index a date/time duration, then the way to do that is to have 
 a pair of date fields; one for the start and one for the end -- pretty 
 straight-forward. But if you need to index a variable number of durations per 
 document, then the options aren't pretty, ranging from denormalization, to 
 joins, to using Lucene spatial with 2D as described 
 [here|http://wiki.apache.org/solr/SpatialForTimeDurations].  Ideally it would 
 be easier to index durations, and work in a more optimal way.
 This issue implements the aforementioned feature using Lucene-spatial with a 
 new single-dimensional SpatialPrefixTree implementation. Unlike the other two 
 SPT implementations, it's not based on floating point numbers. It will have a 
 Date based customization that indexes levels at meaningful quantities like 
 seconds, minutes, hours, etc.  The point of that alignment is to make it 
 faster to query across meaningful ranges (i.e. [2000 TO 2014]) and to enable 
 a follow-on issue to facet on the data in a really fast way.
 I expect to have a working patch up this week.



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-6145) Concurrent Schema API field additions can result in endless loop

2014-06-10 Thread Steve Rowe (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6145?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Rowe resolved SOLR-6145.
--

   Resolution: Fixed
Fix Version/s: 5.0
   4.9

Committed to trunk and branch_4x.

Thanks Gregory and Alexey!

 Concurrent Schema API field additions can result in endless loop
 

 Key: SOLR-6145
 URL: https://issues.apache.org/jira/browse/SOLR-6145
 Project: Solr
  Issue Type: Bug
  Components: Schema and Analysis
Reporter: Steve Rowe
Assignee: Steve Rowe
Priority: Critical
 Fix For: 4.9, 5.0

 Attachments: SOLR-6145-tests.patch, SOLR-6145.patch, SOLR-6145.patch, 
 SOLR-6145.patch, SOLR-6145.patch, SOLR-6145v2.patch, 
 concurrent_updates_and_schema_api.patch


 The optimistic concurrency loop in {{ManagedIndexSchema.addFields()}} is the 
 likely culprit.



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6145) Concurrent Schema API field additions can result in endless loop

2014-06-10 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6145?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14027063#comment-14027063
 ] 

ASF subversion and git services commented on SOLR-6145:
---

Commit 1601776 from [~steve_rowe] in branch 'dev/branches/branch_4x'
[ https://svn.apache.org/r1601776 ]

SOLR-6145: Fix Schema API optimistic concurrency by moving it out of 
ManagedIndexSchema.add(Copy)Fields() into the consumers of those methods: 
CopyFieldCollectionResource, FieldCollectionResource, FieldResource, and 
AddSchemaFieldsUpdateProcessorFactory. (merged trunk r1601770 and r1601775)

 Concurrent Schema API field additions can result in endless loop
 

 Key: SOLR-6145
 URL: https://issues.apache.org/jira/browse/SOLR-6145
 Project: Solr
  Issue Type: Bug
  Components: Schema and Analysis
Reporter: Steve Rowe
Assignee: Steve Rowe
Priority: Critical
 Fix For: 4.9, 5.0

 Attachments: SOLR-6145-tests.patch, SOLR-6145.patch, SOLR-6145.patch, 
 SOLR-6145.patch, SOLR-6145.patch, SOLR-6145v2.patch, 
 concurrent_updates_and_schema_api.patch


 The optimistic concurrency loop in {{ManagedIndexSchema.addFields()}} is the 
 likely culprit.



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-5745) Refactoring AbstractVisitingPrefixTreeFilter code using cellIterator.

2014-06-10 Thread Varun V Shenoy (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5745?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14027188#comment-14027188
 ] 

Varun  V Shenoy commented on LUCENE-5745:
-

https://github.com/shenoyvvarun/lucene-solr/tree/lucene-5745
My interim work. It does not work.

 Refactoring AbstractVisitingPrefixTreeFilter code using cellIterator.
 -

 Key: LUCENE-5745
 URL: https://issues.apache.org/jira/browse/LUCENE-5745
 Project: Lucene - Core
  Issue Type: Improvement
  Components: modules/spatial
Reporter: Varun  V Shenoy
Priority: Minor
 Fix For: 5.0

   Original Estimate: 48h
  Remaining Estimate: 48h

 The AbstractVisitingPrefixTreeFilter (used by RPT's Intersects, Within, 
 Disjoint) really should be refactored to use the new CellIterator API as it 
 will reduce the amount of code and should make the code easier to follow 
 since it would be based on a well-known design-pattern (an iterator). It 
 currently uses a VNode and VNode Iterator.
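
 For illustration, a rough sketch of the traversal shape the refactoring is 
 after, assuming CellIterator behaves like a standard Java iterator whose 
 remove() prunes descent into the current cell's subtree (visit() is a 
 placeholder for the filter's per-cell logic):
 {code}
 // Rough sketch only; the remove()-as-prune semantics are an assumption
 // about the CellIterator contract, not confirmed API behavior.
 void traverse(CellIterator cells) throws IOException {
   while (cells.hasNext()) {
     Cell cell = cells.next();
     if (!visit(cell)) {   // e.g. the cell is disjoint from the query shape
       cells.remove();     // prune: do not descend into this cell's subtree
     }
   }
 }
 {code}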



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-trunk-Windows (32bit/jdk1.8.0_20-ea-b15) - Build # 4104 - Failure!

2014-06-10 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Windows/4104/
Java: 32bit/jdk1.8.0_20-ea-b15 -client -XX:+UseConcMarkSweepGC

1 tests failed.
FAILED:  
org.apache.solr.core.TestConfigSets.testDefaultConfigSetBasePathResolution

Error Message:
 Expected: is /path/to/solr/home/configsets  got: 
C:\path\to\solr\home\configsets 
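
The mismatch is a path-separator issue: the test hard-codes a Unix-style 
expected path, while Windows resolves it with backslashes and a drive letter. 
A hedged sketch of a platform-neutral form of the assertion (configSets below 
is a hypothetical java.io.File standing in for whatever value the test reads):

{code}
// Build the expected value through java.io.File so it carries the OS's own
// separator and drive resolution instead of a hard-coded Unix-style string.
String expected = new File("/path/to/solr/home/configsets").getAbsolutePath();
String actual   = configSets.getAbsolutePath();
assertEquals(expected, actual);  // passes on both Unix and Windows
{code}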

Stack Trace:
java.lang.AssertionError: 
Expected: is /path/to/solr/home/configsets
 got: C:\path\to\solr\home\configsets

at 
__randomizedtesting.SeedInfo.seed([AC5C09D8931AE423:F87AFB14EE220D2F]:0)
at org.junit.Assert.assertThat(Assert.java:780)
at org.junit.Assert.assertThat(Assert.java:738)
at 
org.apache.solr.core.TestConfigSets.testDefaultConfigSetBasePathResolution(TestConfigSets.java:59)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:483)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:360)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:793)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:453)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:43)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[jira] [Commented] (SOLR-6155) Multiple copy field directives are created in a mutable managed schema when identical copy field directives are added

2014-06-10 Thread Steve Rowe (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6155?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14027243#comment-14027243
 ] 

Steve Rowe commented on SOLR-6155:
--

Hmm, so I guess this is a feature then?  I can see people unintentionally 
getting bitten by it though - maybe the REST API should add an 
allowDuplicates param, with a default of false?

 Multiple copy field directives are created in a mutable managed schema when 
 identical copy field directives are added
 -

 Key: SOLR-6155
 URL: https://issues.apache.org/jira/browse/SOLR-6155
 Project: Solr
  Issue Type: Bug
  Components: Schema and Analysis
Reporter: Steve Rowe
Assignee: Steve Rowe

 If I add the same copy field directive more than once, e.g. source=sku1 , 
 dest=sku2, then this directive will appear in the schema as many times as it 
 was added.
 It should only appear once.  I guess we could keep the current behavior of 
 not throwing an error when a copy field directive is added that already 
 exists in the schema, but rather than adding a duplicate directive, just have 
 a no-op.
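
 A minimal sketch of the suggested no-op behavior, keyed on the (source, dest) 
 pair (registerCopyField is a placeholder, not the real schema code):
 {code}
 // Sketch: skip identical copy-field directives instead of storing duplicates.
 Set<String> existingDirectives = new HashSet<>();
 
 void addCopyField(String source, String dest) {
   if (!existingDirectives.add(source + "=>" + dest)) {
     return;                          // identical directive already present: no-op
   }
   registerCopyField(source, dest);   // placeholder for the real registration
 }
 {code}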



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-2894) Implement distributed pivot faceting

2014-06-10 Thread Hoss Man (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-2894?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hoss Man updated SOLR-2894:
---

Attachment: SOLR-2894.patch

bq. review DistributedFacetPivotTest in depth more - add more strong assertions

Attaching updated patch with progress along this line: in addition to some new 
explicit assertions, it also includes some refactoring and simplification of 
setupDistributedPivotFacetDocuments.

One thing that jumped out at me when reviewing this: even though the test does 
some queries with large overrequest params as well as with overrequest 
disabled, there don't seem to be any assertions about how the overrequesting 
affects the results -- in fact, because of how the controlClient is compared 
with the distributed client, it seems that with this sample data disabling 
overrequest doesn't change the results at all.

I definitely want to add some test logic around that -- if for no other reason 
than to prove that *when* overrequesting is used, it can help with finding 
constraints in the long tail.
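
For reference, a hedged illustration (not necessarily this patch's exact 
formula) of how overrequesting typically expands the per-shard facet limit, so 
that values just below one shard's local cutoff still get counted globally:

{code}
// Each shard is asked for more constraints than the final limit; ratio=1.0
// and count=0 would disable overrequest, matching the "disabled" test case.
int effectiveShardLimit(int limit, double ratio, int count) {
  return (int) Math.ceil(limit * ratio) + count;  // e.g. limit=10, ratio=1.5, count=10 -> 25
}
{code}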



 Implement distributed pivot faceting
 

 Key: SOLR-2894
 URL: https://issues.apache.org/jira/browse/SOLR-2894
 Project: Solr
  Issue Type: Improvement
Reporter: Erik Hatcher
 Fix For: 4.9, 5.0

 Attachments: SOLR-2894-mincount-minification.patch, 
 SOLR-2894-reworked.patch, SOLR-2894.patch, SOLR-2894.patch, SOLR-2894.patch, 
 SOLR-2894.patch, SOLR-2894.patch, SOLR-2894.patch, SOLR-2894.patch, 
 SOLR-2894.patch, SOLR-2894.patch, SOLR-2894.patch, SOLR-2894.patch, 
 SOLR-2894.patch, SOLR-2894.patch, SOLR-2894.patch, SOLR-2894.patch, 
 SOLR-2894.patch, SOLR-2894.patch, SOLR-2894.patch, SOLR-2894.patch, 
 SOLR-2894.patch, SOLR-2894.patch, SOLR-2894.patch, SOLR-2894.patch, 
 SOLR-2894.patch, SOLR-2894.patch, SOLR-2894.patch, 
 SOLR-2894_cloud_test.patch, dateToObject.patch, pivot_mincount_problem.sh


 Following up on SOLR-792, pivot faceting currently only supports 
 undistributed mode.  Distributed pivot faceting needs to be implemented.



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (LUCENE-5751) Bring MemoryDocValues up to speed

2014-06-10 Thread Robert Muir (JIRA)
Robert Muir created LUCENE-5751:
---

 Summary: Bring MemoryDocValues up to speed
 Key: LUCENE-5751
 URL: https://issues.apache.org/jira/browse/LUCENE-5751
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Robert Muir
 Attachments: LUCENE-5751.patch

This one has fallen behind...
It picks TABLE/GCD even when it won't actually save space or help, writes with 
BlockPackedWriter even when it won't save space, etc.

Instead of comparing PackedInts.bitsRequired, factor in acceptableOverheadRatio 
too to determine whether the encoding will save space. Check whether blocking 
will save space along the same lines (otherwise use regular packed ints).

Fix a similar bug in the Lucene49 codec along these same lines (it compares 
PackedInts.bitsRequired instead of what would actually be written).
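
A hedged sketch of that decision using the public PackedInts API (the byte 
cost comparison is illustrative, not the patch itself):

{code}
import org.apache.lucene.util.packed.PackedInts;

// Only pick the table encoding if it would actually shrink what gets written,
// factoring acceptableOverheadRatio into the format/bits selection.
boolean tableSavesSpace(int valueCount, int tableBits, int rawBits, float overhead) {
  PackedInts.FormatAndBits table = PackedInts.fastestFormatAndBits(valueCount, tableBits, overhead);
  PackedInts.FormatAndBits raw   = PackedInts.fastestFormatAndBits(valueCount, rawBits, overhead);
  long tableBytes = table.format.byteCount(PackedInts.VERSION_CURRENT, valueCount, table.bitsPerValue);
  long rawBytes   = raw.format.byteCount(PackedInts.VERSION_CURRENT, valueCount, raw.bitsPerValue);
  return tableBytes < rawBytes;
}
{code}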



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-5751) Bring MemoryDocValues up to speed

2014-06-10 Thread Robert Muir (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5751?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Muir updated LUCENE-5751:


Attachment: LUCENE-5751.patch

Patch: I see significant performance improvements with this codec, sometimes 
> 50% for numerics/strings.


 Bring MemoryDocValues up to speed
 -

 Key: LUCENE-5751
 URL: https://issues.apache.org/jira/browse/LUCENE-5751
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Robert Muir
 Attachments: LUCENE-5751.patch


 This one has fallen behind...
 It picks TABLE/GCD even when it won't actually save space or help, writes 
 with BlockPackedWriter even when it won't save space, etc.
 Instead of comparing PackedInts.bitsRequired, factor in 
 acceptableOverheadRatio too to determine whether the encoding will save 
 space. Check whether blocking will save space along the same lines (otherwise 
 use regular packed ints).
 Fix a similar bug in the Lucene49 codec along these same lines (it compares 
 PackedInts.bitsRequired instead of what would actually be written).



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-4.x-Windows (32bit/jdk1.8.0_05) - Build # 4017 - Still Failing!

2014-06-10 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-4.x-Windows/4017/
Java: 32bit/jdk1.8.0_05 -server -XX:+UseG1GC

1 tests failed.
FAILED:  
org.apache.solr.core.TestConfigSets.testDefaultConfigSetBasePathResolution

Error Message:
 Expected: is /path/to/solr/home/configsets  got: 
C:\path\to\solr\home\configsets 

Stack Trace:
java.lang.AssertionError: 
Expected: is /path/to/solr/home/configsets
 got: C:\path\to\solr\home\configsets

at 
__randomizedtesting.SeedInfo.seed([F3EE3F679A1F9DC7:A7C8CDABE72774CB]:0)
at org.junit.Assert.assertThat(Assert.java:780)
at org.junit.Assert.assertThat(Assert.java:738)
at 
org.apache.solr.core.TestConfigSets.testDefaultConfigSetBasePathResolution(TestConfigSets.java:59)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:483)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.TestRuleFieldCacheSanity$1.evaluate(TestRuleFieldCacheSanity.java:51)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:360)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:793)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:453)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:43)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at 

Re: Adding Morphline support to DIH - worth the effort?

2014-06-10 Thread Alexandre Rafalovitch
Ripples in the pond again. Spreading and dying. Understandable, but
still somewhat annoying.

So, what would be the minimal viable next step to move this
conversation forward? Something for 4.11 as opposed to 5.0?

Does anyone with commit status have a feeling for what minimal deliverable
they would put their own weight behind?

Regards,
   Alex.
Personal website: http://www.outerthoughts.com/
Current project: http://www.solr-start.com/ - Accelerating your Solr proficiency


On Mon, Jun 9, 2014 at 10:50 AM, david.w.smi...@gmail.com
david.w.smi...@gmail.com wrote:
 One of the ideas around DIH discussed earlier is making it standalone.

 Yeah; my beef with the DIH is that it’s tied to Solr.  But I’d rather see
 something other than the DIH outside Solr; it’s not worthy IMO.  Why have
 something Solr-specific at all?  A great pipeline shouldn’t tie itself to any
 end-point.  There are a variety of solutions out there that I tried.  There
 are the big 3 open-source ETLs (Kettle, Clover, Talend), and they aren’t
 quite ideal in one way or another.  And Spring Integration.  And some
 half-baked data pipelines like OpenPipe & Open Pipeline.  I never got around
 to taking a good look at Findwise’s open-sourced Hydra, but I learned enough
 to know, to my surprise, that it was configured in code versus a config file
 (like all the others), and that's a big turn-off to me.  Today I read through
 most of the Morphlines docs and a few choice source files and I’m
 super-impressed.  But as you note, it’s missing a lot of other stuff.  I
 think something great could be built using it as a core piece.

 ~ David Smiley
 Freelance Apache Lucene/Solr Search Consultant/Developer
 http://www.linkedin.com/in/davidwsmiley


 On Sun, Jun 8, 2014 at 5:51 PM, Mikhail Khludnev
 mkhlud...@griddynamics.com wrote:

 Jack,
 I found your considerations quite reasonable.
 One of the ideas around DIH discussed earlier is making it standalone. So,
 if we start from a simple Morphline UI, we can do this extraction. Then such
 an externalized ETL will work better with SolrCloud than DIH does now.
 Presumably we can reuse DIH JDBC DataSources as a source for Morphline
 records.
 Still open questions in this approach are:
 - joins/caching - these seem possible with Morphlines, but there is no such
 command yet
 - delta import - a scenario we must not forget to handle
 - threads (completely outside Morphline's concerns)
 - distributed processing - it would be great if we could partition the
 datasource, e.g. something like what's done by Sqoop
 ... what else?


 On Sun, Jun 8, 2014 at 6:54 PM, Jack Krupansky j...@basetechnology.com
 wrote:

 I've avoided DIH like the plague since it really doesn't fit well in
 Solr, so I'm still baffled as to why you think we need to use DIH as the
 foundation for a Solr Morphlines project. That shouldn't stop you, but
 what's the big impediment to taking a clean slate approach to Morphlines -
 learn what we can from DIH, but do a fresh, clean Solr 5.0 implementation
 that is not burdened from the get-go with all of DIH's baggage?

 Configuring DIH is one of its main problems, so blending Morphlines
 config into DIH config would seem to just make Morphlines less attractive
 than it actually is when viewed by itself.

 You might also consider how ManifoldCF (another Apache project) would
 integrate with DIH and Morphlines as well. I mean, the core use case is ETL
 from external data sources. And how all of this relates to Apache Flume as
 well.

 But back to the original, still unanswered, question: Why use DIH as the
 starting point for integrating Morphlines with Solr - unless the goal is to
 make Morphlines unpalatable and less approachable than even DIH itself?!

 Another question: What does Elasticsearch have in this area (besides
 rivers)? Are they headed in the Morphlines direction as well?


 -- Jack Krupansky

 -Original Message- From: Alexandre Rafalovitch
 Sent: Sunday, June 8, 2014 10:16 AM

 To: dev@lucene.apache.org
 Subject: Re: Adding Morphline support to DIH - worth the effort?

 I see DIH as something that offers a quick way to get things done, as
 long as they fit into DIH's couple of basic scenarios. Going even a
 little beyond hits bugs, bad documentation, inconsistencies and lack
 of ongoing support (e.g. SOLR-4383).

 So, if it works for you - great. If it does not - too bad, use SolrJ.
 And given what I observe, I believe the next round of improvements
 might be easier to achieve by moving to a different open-source pipe
 project than trying to keep reinventing and bandaging one of our own.
 Go where the strongest community is, etc.

 Morphline can be seen as a replacement for DIH's EntityProcessors and
 Transformers (Flume adds other bits). The reasons I think it is worth
 looking at are as follows:
 1) DIH is not really being maintained or further improved. So, the
 list of EPs and Transformers is the same and does not account for new
 requests (which we see periodically on the mailing list); even the new
 

[jira] [Created] (SOLR-6159) cancelElection fails on uninitialized ElectionContext

2014-06-10 Thread Steven Bower (JIRA)
Steven Bower created SOLR-6159:
--

 Summary: cancelElection fails on uninitialized ElectionContext
 Key: SOLR-6159
 URL: https://issues.apache.org/jira/browse/SOLR-6159
 Project: Solr
  Issue Type: Bug
  Components: SolrCloud
Affects Versions: 4.8.1
Reporter: Steven Bower
Priority: Critical


I had a Solr collection that was basically out of memory (no exception, just 
continuous 80-90 second full GCs). This of course is not a good state, but when 
in this state, every time you come out of a GC your ZooKeeper session has 
expired, causing all kinds of havoc. Anyway, I found a bug in the condition 
where, during LeaderElector.setup(), if you get a ZooKeeper error, 
LeaderElector.context gets set to a context that is not fully initialized (i.e. 
it hasn't called joinElection).

Once this happens, the node can no longer join elections, because every attempt 
fails when the LeaderElector calls cancelElection() on the previous, 
half-initialized ElectionContext.

Some logs are below, and I've attached a patch that does four things:

* Moves the setting of LeaderElector.context to the end of the setup call, so 
it is only set if setup completes.
* Adds a check for leaderSeqPath being null in ElectionContext.cancelElection.
* Makes leaderSeqPath volatile, as it is accessed directly by multiple threads.
* Sets LeaderElector.context = null when joinElection fails.

There may be other issues; the patch is focused on breaking the failure loop 
that occurs when initialization of the ElectionContext fails.
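
A minimal sketch of the null guard described in the second bullet above 
(simplified, not the attached patch verbatim; log and zkClient are the 
context's usual collaborators):

{code}
// Tolerate a context whose joinElection() never ran, i.e. leaderSeqPath
// was never assigned, instead of failing on every subsequent attempt.
void cancelElection() throws InterruptedException, KeeperException {
  if (leaderSeqPath == null) {
    log.warn("cancelElection skipped: context was never fully initialized");
    return;  // nothing was registered in ZooKeeper, so nothing to cancel
  }
  zkClient.delete(leaderSeqPath, -1, true);  // normal cancellation path
}
{code}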

{noformat}
2014-06-08 23:14:57.805 INFO  ClientCnxn [main-SendThread(host1:1234)] - 
Opening socket connection to server host1/10.122.142.31:1234. Will not attempt 
to authenticate using SASL (unknown error)
2014-06-08 23:14:57.806 INFO  ClientCnxn [main-SendThread(host1:1234)] - Socket 
connection established to host1/10.122.142.31:1234, initiating session
2014-06-08 23:14:57.810 INFO  ClientCnxn [main-SendThread(host1:1234)] - Unable 
to reconnect to ZooKeeper service, session 0x2467d956c8d0446 has expired, 
closing socket connection
2014-06-08 23:14:57.816 INFO  ConnectionManager [main-EventThread] - Watcher 
org.apache.solr.common.cloud.ConnectionManager@7fe8e0f1 
name:ZooKeeperConnection 
Watcher:host4:1234,host1:1234,host3:1234,host2:1234/engines/solr/collections/XX
 got event WatchedEvent state:Expired type:None path:null path:null type:None
2014-06-08 23:14:57.817 INFO  ConnectionManager [main-EventThread] - Our 
previous ZooKeeper session was expired. Attempting to reconnect to recover 
relationship with ZooKeeper...
2014-06-08 23:14:57.817 INFO  DefaultConnectionStrategy [main-EventThread] - 
Connection expired - starting a new one...
2014-06-08 23:14:57.817 INFO  ZooKeeper [main-EventThread] - Initiating client 
connection, 
connectString=host4:1234,host1:1234,host3:1234,host2:1234/engines/solr/collections/XX
 sessionTimeout=15000 
watcher=org.apache.solr.common.cloud.ConnectionManager@7fe8e0f1
2014-06-08 23:14:57.857 INFO  ConnectionManager [main-EventThread] - Waiting 
for client to connect to ZooKeeper
2014-06-08 23:14:57.859 INFO  ClientCnxn [main-SendThread(host4:1234)] - 
Opening socket connection to server host4/172.17.14.107:1234. Will not attempt 
to authenticate using SASL (unknown error)
2014-06-08 23:14:57.891 INFO  ClientCnxn [main-SendThread(host4:1234)] - Socket 
connection established to host4/172.17.14.107:1234, initiating session
2014-06-08 23:14:57.906 INFO  ClientCnxn [main-SendThread(host4:1234)] - 
Session establishment complete on server host4/172.17.14.107:1234, sessionid = 
0x4467d8d79260486, negotiated timeout = 15000
2014-06-08 23:14:57.907 INFO  ConnectionManager [main-EventThread] - Watcher 
org.apache.solr.common.cloud.ConnectionManager@7fe8e0f1 
name:ZooKeeperConnection 
Watcher:host4:1234,host1:1234,host3:1234,host2:1234/engines/solr/collections/XX
 got event WatchedEvent state:SyncConnected type:None path:null path:null 
type:None
2014-06-08 23:14:57.909 INFO  ConnectionManager [main-EventThread] - Client is 
connected to ZooKeeper
2014-06-08 23:14:57.909 INFO  ConnectionManager [main-EventThread] - Connection 
with ZooKeeper reestablished.
2014-06-08 23:14:57.911 ERROR ZkController [Thread-203] - 
:org.apache.zookeeper.KeeperException$SessionExpiredException: KeeperErrorCode 
= Session expired for /overseer_elect/election
  at org.apache.zookeeper.KeeperException.create(KeeperException.java:127)
  at org.apache.zookeeper.KeeperException.create(KeeperException.java:51)
  at org.apache.zookeeper.ZooKeeper.exists(ZooKeeper.java:1045)
  at org.apache.solr.common.cloud.SolrZkClient$4.execute(SolrZkClient.java:226)
  at org.apache.solr.common.cloud.SolrZkClient$4.execute(SolrZkClient.java:223)
  at 
org.apache.solr.common.cloud.ZkCmdExecutor.retryOperation(ZkCmdExecutor.java:73)
  at 

[jira] [Updated] (SOLR-6159) cancelElection fails on uninitialized ElectionContext

2014-06-10 Thread Steven Bower (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6159?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steven Bower updated SOLR-6159:
---

Attachment: SOLR-6159.patch

 cancelElection fails on uninitialized ElectionContext
 -

 Key: SOLR-6159
 URL: https://issues.apache.org/jira/browse/SOLR-6159
 Project: Solr
  Issue Type: Bug
  Components: SolrCloud
Affects Versions: 4.8.1
Reporter: Steven Bower
Priority: Critical
 Attachments: SOLR-6159.patch


 I had a Solr collection that was basically out of memory (no exception, just 
 continuous 80-90 second full GCs). This of course is not a good state, but 
 when in this state, every time you come out of a GC your ZooKeeper session 
 has expired, causing all kinds of havoc. Anyway, I found a bug in the 
 condition where, during LeaderElector.setup(), if you get a ZooKeeper error, 
 LeaderElector.context gets set to a context that is not fully initialized 
 (i.e. it hasn't called joinElection).
 Once this happens, the node can no longer join elections, because every 
 attempt fails when the LeaderElector calls cancelElection() on the previous, 
 half-initialized ElectionContext.
 Some logs are below, and I've attached a patch that does four things:
 * Moves the setting of LeaderElector.context to the end of the setup call, so 
 it is only set if setup completes.
 * Adds a check for leaderSeqPath being null in ElectionContext.cancelElection.
 * Makes leaderSeqPath volatile, as it is accessed directly by multiple 
 threads.
 * Sets LeaderElector.context = null when joinElection fails.
 There may be other issues; the patch is focused on breaking the failure loop 
 that occurs when initialization of the ElectionContext fails.
 {noformat}
 2014-06-08 23:14:57.805 INFO  ClientCnxn [main-SendThread(host1:1234)] - 
 Opening socket connection to server host1/10.122.142.31:1234. Will not 
 attempt to authenticate using SASL (unknown error)
 2014-06-08 23:14:57.806 INFO  ClientCnxn [main-SendThread(host1:1234)] - 
 Socket connection established to host1/10.122.142.31:1234, initiating session
 2014-06-08 23:14:57.810 INFO  ClientCnxn [main-SendThread(host1:1234)] - 
 Unable to reconnect to ZooKeeper service, session 0x2467d956c8d0446 has 
 expired, closing socket connection
 2014-06-08 23:14:57.816 INFO  ConnectionManager [main-EventThread] - Watcher 
 org.apache.solr.common.cloud.ConnectionManager@7fe8e0f1 
 name:ZooKeeperConnection 
 Watcher:host4:1234,host1:1234,host3:1234,host2:1234/engines/solr/collections/XX
  got event WatchedEvent state:Expired type:None path:null path:null type:None
 2014-06-08 23:14:57.817 INFO  ConnectionManager [main-EventThread] - Our 
 previous ZooKeeper session was expired. Attempting to reconnect to recover 
 relationship with ZooKeeper...
 2014-06-08 23:14:57.817 INFO  DefaultConnectionStrategy [main-EventThread] - 
 Connection expired - starting a new one...
 2014-06-08 23:14:57.817 INFO  ZooKeeper [main-EventThread] - Initiating 
 client connection, 
 connectString=host4:1234,host1:1234,host3:1234,host2:1234/engines/solr/collections/XX
  sessionTimeout=15000 
 watcher=org.apache.solr.common.cloud.ConnectionManager@7fe8e0f1
 2014-06-08 23:14:57.857 INFO  ConnectionManager [main-EventThread] - Waiting 
 for client to connect to ZooKeeper
 2014-06-08 23:14:57.859 INFO  ClientCnxn [main-SendThread(host4:1234)] - 
 Opening socket connection to server host4/172.17.14.107:1234. Will not 
 attempt to authenticate using SASL (unknown error)
 2014-06-08 23:14:57.891 INFO  ClientCnxn [main-SendThread(host4:1234)] - 
 Socket connection established to host4/172.17.14.107:1234, initiating session
 2014-06-08 23:14:57.906 INFO  ClientCnxn [main-SendThread(host4:1234)] - 
 Session establishment complete on server host4/172.17.14.107:1234, sessionid 
 = 0x4467d8d79260486, negotiated timeout = 15000
 2014-06-08 23:14:57.907 INFO  ConnectionManager [main-EventThread] - Watcher 
 org.apache.solr.common.cloud.ConnectionManager@7fe8e0f1 
 name:ZooKeeperConnection 
 Watcher:host4:1234,host1:1234,host3:1234,host2:1234/engines/solr/collections/XX
  got event WatchedEvent state:SyncConnected type:None path:null path:null 
 type:None
 2014-06-08 23:14:57.909 INFO  ConnectionManager [main-EventThread] - Client 
 is connected to ZooKeeper
 2014-06-08 23:14:57.909 INFO  ConnectionManager [main-EventThread] - 
 Connection with ZooKeeper reestablished.
 2014-06-08 23:14:57.911 ERROR ZkController [Thread-203] - 
 :org.apache.zookeeper.KeeperException$SessionExpiredException: 
 KeeperErrorCode = Session expired for /overseer_elect/election
   at org.apache.zookeeper.KeeperException.create(KeeperException.java:127)
   at org.apache.zookeeper.KeeperException.create(KeeperException.java:51)
   at org.apache.zookeeper.ZooKeeper.exists(ZooKeeper.java:1045)
   at 
 

[JENKINS] Lucene-trunk-Linux-Java7-64-test-only - Build # 87215 - Failure!

2014-06-10 Thread builder
Build: builds.flonkings.com/job/Lucene-trunk-Linux-Java7-64-test-only/87215/

1 tests failed.
REGRESSION:  org.apache.lucene.index.TestNormsFormat.testMergeStability

Error Message:
expected:{null=101, doc=176, fnm=139, fdt=96, sd=8, tbk=109, nvd=42, nvm=61, 
tix=70, gen=36, pos=101, fdx=64} but was:{null=101, doc=176, fnm=139, fdt=96, 
sd=8, nvd=42, smy=32, nvm=61, gen=36, pos=101, fdx=64, tmp=159}

Stack Trace:
java.lang.AssertionError: expected:{null=101, doc=176, fnm=139, fdt=96, sd=8, 
tbk=109, nvd=42, nvm=61, tix=70, gen=36, pos=101, fdx=64} but was:{null=101, 
doc=176, fnm=139, fdt=96, sd=8, nvd=42, smy=32, nvm=61, gen=36, pos=101, 
fdx=64, tmp=159}
at 
__randomizedtesting.SeedInfo.seed([FEECB2A187FA1F97:8AA0F48E8A101D21]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.failNotEquals(Assert.java:647)
at org.junit.Assert.assertEquals(Assert.java:128)
at org.junit.Assert.assertEquals(Assert.java:147)
at 
org.apache.lucene.index.BaseIndexFileFormatTestCase.testMergeStability(BaseIndexFileFormatTestCase.java:114)
at 
org.apache.lucene.index.BaseNormsFormatTestCase.testMergeStability(BaseNormsFormatTestCase.java:44)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:360)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:793)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:453)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:43)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

Re: [JENKINS] Lucene-trunk-Linux-Java7-64-test-only - Build # 87215 - Failure!

2014-06-10 Thread Robert Muir
I'll fix this. Looks like it's because I added a field foobar (just for
its norms) and it got MockRandomPostings :)

On Wed, Jun 11, 2014 at 1:52 AM,  buil...@flonkings.com wrote:
 Build: builds.flonkings.com/job/Lucene-trunk-Linux-Java7-64-test-only/87215/

 1 tests failed.
 REGRESSION:  org.apache.lucene.index.TestNormsFormat.testMergeStability

 Error Message:
 expected:{null=101, doc=176, fnm=139, fdt=96, sd=8, tbk=109, nvd=42, nvm=61, 
 tix=70, gen=36, pos=101, fdx=64} but was:{null=101, doc=176, fnm=139, 
 fdt=96, sd=8, nvd=42, smy=32, nvm=61, gen=36, pos=101, fdx=64, tmp=159}

 Stack Trace:
 java.lang.AssertionError: expected:{null=101, doc=176, fnm=139, fdt=96, 
 sd=8, tbk=109, nvd=42, nvm=61, tix=70, gen=36, pos=101, fdx=64} but 
 was:{null=101, doc=176, fnm=139, fdt=96, sd=8, nvd=42, smy=32, nvm=61, 
 gen=36, pos=101, fdx=64, tmp=159}
 at 
 __randomizedtesting.SeedInfo.seed([FEECB2A187FA1F97:8AA0F48E8A101D21]:0)
 at org.junit.Assert.fail(Assert.java:93)
 at org.junit.Assert.failNotEquals(Assert.java:647)
 at org.junit.Assert.assertEquals(Assert.java:128)
 at org.junit.Assert.assertEquals(Assert.java:147)
 at 
 org.apache.lucene.index.BaseIndexFileFormatTestCase.testMergeStability(BaseIndexFileFormatTestCase.java:114)
 at 
 org.apache.lucene.index.BaseNormsFormatTestCase.testMergeStability(BaseNormsFormatTestCase.java:44)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
 at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke(Method.java:606)
 at 
 com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
 at 
 com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827)
 at 
 com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
 at 
 com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877)
 at 
 org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
 at 
 org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
 at 
 com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
 at 
 org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
 at 
 org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
 at 
 org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
 at 
 com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
 at 
 com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:360)
 at 
 com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:793)
 at 
 com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:453)
 at 
 com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836)
 at 
 com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738)
 at 
 com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772)
 at 
 com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783)
 at 
 org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
 at 
 org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
 at 
 com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
 at 
 com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
 at 
 com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
 at 
 com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
 at 
 com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
 at 
 com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
 at 
 org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:43)
 at 
 org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
 at 
 org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
 at