[jira] [Created] (SOLR-7639) Bring MLTQParser at par with the MLT Handler w.r.t supported options

2015-06-04 Thread Anshum Gupta (JIRA)
Anshum Gupta created SOLR-7639:
--

 Summary: Bring MLTQParser at par with the MLT Handler w.r.t 
supported options
 Key: SOLR-7639
 URL: https://issues.apache.org/jira/browse/SOLR-7639
 Project: Solr
  Issue Type: Improvement
Reporter: Anshum Gupta
Assignee: Anshum Gupta


As of now, there are options that the MLT Handler supports which the QParser 
doesn't. It would be good to have the QParser tap into everything that's 
supported.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: VOTE: RC0 release apache-solr-ref-guide-5.2.pdf

2015-06-04 Thread Đạt Cao Mạnh
+1 for this reference.

On Fri, Jun 5, 2015 at 3:31 AM, Steve Rowe sar...@gmail.com wrote:

 +1

 Steve

  On Jun 3, 2015, at 1:30 PM, Chris Hostetter hossman_luc...@fucit.org
 wrote:
 
 
  Please VOTE to release these files as the Solr Ref Guide 5.2...
 
 
 https://dist.apache.org/repos/dist/dev/lucene/solr/ref-guide/apache-solr-ref-guide-5.2-RC0/
 
 
  NOTE: this vote will be open for a minimum of 72 hours, but I will not
 call this (ref guide) vote to a close until the 5.2.0 code release is also
 successful -- just in case there are any last-minute bugs found that
 warrant an update to the ref guide as well.
 
 
 
  -Hoss
  http://www.lucidworks.com/
 
 






-- 
*Best regards,*
*Cao Mạnh Đạt*






*D.O.B: 31-07-1991*
*Cell: (+84) 946.328.329*
*E-mail: caomanhdat...@gmail.com*
*Hanoi University of Science and Technology*
*School of Information & Communication Technology*
*Class: Computer Science K54*


Re: [jira] [Commented] (SOLR-7613) solrcore.properties file should be loaded if it resides in ZooKeeper

2015-06-04 Thread Noble Paul
Replying here because JIRA is down.

Let's get rid of solrcore.properties in cloud mode. We don't need it. It's
not just about reading the file; we would also need to manage its lifecycle
(editing, refreshing, etc.).


This is the right way to do properties in SolrCloud:

https://cwiki.apache.org/confluence/display/solr/Config+API#ConfigAPI-CommandsforUser-DefinedProperties

On Fri, Jun 5, 2015 at 3:25 AM, Hoss Man (JIRA) j...@apache.org wrote:

 [ 
 https://issues.apache.org/jira/browse/SOLR-7613?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14573660#comment-14573660
  ]

 Hoss Man commented on SOLR-7613:
 

 Some relevant comments on this from the original mailing list discussion...

 Hoss..

 bq. IIUC CoreDescriptor.loadExtraProperties is the relevant method ... it 
 would need to build up the path including the core name and get the system 
 level resource loader (CoreContainer.getResourceLoader()) to access it, since 
 the core doesn't exist yet so there is no core-level ResourceLoader to use.

 Alan...

 bq. I think this is an oversight, rather than intentional (at least, I 
 certainly didn't intend to write it like this!). The problem here will be 
 that CoreDescriptors are currently built entirely from core.properties files, 
 and the CoreLocators that construct them don't have any access to zookeeper.

 bq. Maybe the way forward is to move properties out of CoreDescriptor and 
 have an entirely separate CoreProperties object that is built and returned by 
 the ConfigSetService, and that is read via the ResourceLoader.  This would 
 fit in quite nicely with the changes I put up on SOLR-7570, in that you could 
 have properties specified on the collection config overriding properties from 
 the configset, and then local core-specific properties overriding both.

 Hoss...

 bq. But they do have access to the CoreContainer which is passed to the 
 CoreDescriptor constructor -- it has all the ZK access you'd need at the time 
 when loadExtraProperties() is called.

 Alan...

 bq. Yeah, you could do it like that.  But looking at it further, I think 
 solrcore.properties is actually being loaded in entirely the wrong place - it 
 should be done by whatever is creating the CoreDescriptor, and then passed in 
 as a Properties object to the CD constructor.  At the moment, you can't refer 
 to a property defined in solrcore.properties within your core.properties file.

 Hoss...

 bq. but if you look at it from a historical context, that doesn't really 
 matter for the purpose that solrcore.properties was intended for -- it 
 predates core discovery, and was only intended as a way to specify user 
 level properties that could then be substituted in the solrconfig.xml or 
 dih.xml or schema.xml

 bq. i.e., making it possible to use a solrcore.prop value to set a core.prop 
 value might be a nice-to-have, but it's definitely not what it was intended 
 for, so it shouldn't really be a blocker to getting the same (original) basic 
 functionality working in SolrCloud.

 

 Honestly, even ignoring the historical context, it seems like a 
 chicken-and-egg problem to me -- should it be possible to use a 
 solrcore.properties variable to set the value of another variable in 
 core.properties? Or should it be possible to use a core.properties variable 
 to set the value of another variable in solrcore.properties?

 The simplest thing for people to understand would probably be to just say 
 that they are independent, loaded separately, and cause an error if you try 
 to define the same value in both (I doubt that's currently enforced, but it 
 probably should be).
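The "error if you try to define the same value in both" idea above can be sketched as a simple pre-flight check. This is a hypothetical helper (`checkNoOverlap` is not actual Solr code), shown only to illustrate the proposed enforcement:

```java
import java.util.Properties;

public class PropertyOverlapCheck {

    // Hypothetical guard: fail fast when a key appears in both property files,
    // so the two files stay independent as suggested above.
    static void checkNoOverlap(Properties coreProps, Properties solrcoreProps) {
        for (String key : coreProps.stringPropertyNames()) {
            if (solrcoreProps.containsKey(key)) {
                throw new IllegalStateException("Property '" + key
                    + "' is defined in both core.properties and solrcore.properties");
            }
        }
    }

    public static void main(String[] args) {
        Properties core = new Properties();
        core.setProperty("name", "collection1");
        Properties solrcore = new Properties();
        solrcore.setProperty("data.dir", "/var/data");
        checkNoOverlap(core, solrcore); // disjoint keys: no exception thrown
        System.out.println("ok");
    }
}
```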

 solrcore.properties file should be loaded if it resides in ZooKeeper
 

 Key: SOLR-7613
 URL: https://issues.apache.org/jira/browse/SOLR-7613
 Project: Solr
  Issue Type: Bug
Reporter: Steve Davids
 Fix For: 5.3


 The solrcore.properties file is used to load user-defined properties, 
 primarily for use in the solrconfig.xml file. However, this properties file 
 only loads if it resides in the core/conf directory on the physical disk; it 
 will not load if it is in ZK's core/conf directory. There should be a 
 mechanism to allow a core properties file to be specified in ZK, updated 
 appropriately, and reloaded when the file changes (or via a core reload).







-- 
-
Noble Paul


[JENKINS] Lucene-Solr-5.x-Linux (64bit/jdk1.7.0_80) - Build # 12767 - Failure!

2015-06-04 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-Linux/12767/
Java: 64bit/jdk1.7.0_80 -XX:+UseCompressedOops -XX:+UseG1GC

All tests passed

Build Log:
[...truncated 46225 lines...]
BUILD FAILED
/home/jenkins/workspace/Lucene-Solr-5.x-Linux/build.xml:536: The following 
error occurred while executing this line:
/home/jenkins/workspace/Lucene-Solr-5.x-Linux/build.xml:90: The following error 
occurred while executing this line:
/home/jenkins/workspace/Lucene-Solr-5.x-Linux/lucene/build.xml:135: The 
following error occurred while executing this line:
/home/jenkins/workspace/Lucene-Solr-5.x-Linux/lucene/build.xml:470: The 
following error occurred while executing this line:
/home/jenkins/workspace/Lucene-Solr-5.x-Linux/lucene/common-build.xml:2584: 
Can't get https://issues.apache.org/jira/rest/api/2/project/LUCENE to 
/home/jenkins/workspace/Lucene-Solr-5.x-Linux/lucene/build/docs/changes/jiraVersionList.json

Total time: 60 minutes 12 seconds
Build step 'Invoke Ant' marked build as failure
Archiving artifacts
Recording test results
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any




[JENKINS] Lucene-Solr-trunk-Windows (32bit/jdk1.8.0_45) - Build # 4893 - Failure!

2015-06-04 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Windows/4893/
Java: 32bit/jdk1.8.0_45 -client -XX:+UseConcMarkSweepGC

2 tests failed.
FAILED:  org.apache.solr.handler.TestReplicationHandlerBackup.doTestBackup

Error Message:
Test abandoned because suite timeout was reached.

Stack Trace:
java.lang.Exception: Test abandoned because suite timeout was reached.
at __randomizedtesting.SeedInfo.seed([2D36CC422510AB74]:0)


FAILED:  
junit.framework.TestSuite.org.apache.solr.handler.TestReplicationHandlerBackup

Error Message:
Suite timeout exceeded (>= 720 msec).

Stack Trace:
java.lang.Exception: Suite timeout exceeded (>= 720 msec).
at __randomizedtesting.SeedInfo.seed([2D36CC422510AB74]:0)




Build Log:
[...truncated 10913 lines...]
   [junit4] Suite: org.apache.solr.handler.TestReplicationHandlerBackup
   [junit4]   2 Creating dataDir: 
C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\solr\build\solr-core\test\J1\temp\solr.handler.TestReplicationHandlerBackup
 2D36CC422510AB74-001\init-core-data-001
   [junit4]   2 742442 INFO  
(TEST-TestReplicationHandlerBackup.testBackupOnCommit-seed#[2D36CC422510AB74]) 
[] o.a.s.SolrTestCaseJ4 ###Starting testBackupOnCommit
   [junit4]   2 742443 INFO  
(TEST-TestReplicationHandlerBackup.testBackupOnCommit-seed#[2D36CC422510AB74]) 
[] o.a.s.SolrTestCaseJ4 Writing core.properties file to 
C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\solr\build\solr-core\test\J1\temp\solr.handler.TestReplicationHandlerBackup
 2D36CC422510AB74-001\solr-instance-001\collection1
   [junit4]   2 742457 INFO  
(TEST-TestReplicationHandlerBackup.testBackupOnCommit-seed#[2D36CC422510AB74]) 
[] o.e.j.s.Server jetty-9.2.10.v20150310
   [junit4]   2 742461 INFO  
(TEST-TestReplicationHandlerBackup.testBackupOnCommit-seed#[2D36CC422510AB74]) 
[] o.e.j.s.h.ContextHandler Started 
o.e.j.s.ServletContextHandler@1769ef3{/solr,null,AVAILABLE}
   [junit4]   2 742466 INFO  
(TEST-TestReplicationHandlerBackup.testBackupOnCommit-seed#[2D36CC422510AB74]) 
[] o.e.j.s.ServerConnector Started 
ServerConnector@1444c59{HTTP/1.1}{127.0.0.1:50431}
   [junit4]   2 742466 INFO  
(TEST-TestReplicationHandlerBackup.testBackupOnCommit-seed#[2D36CC422510AB74]) 
[] o.e.j.s.Server Started @745581ms
   [junit4]   2 742467 INFO  
(TEST-TestReplicationHandlerBackup.testBackupOnCommit-seed#[2D36CC422510AB74]) 
[] o.a.s.c.s.e.JettySolrRunner Jetty properties: 
{solr.data.dir=C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\solr\build\solr-core\test\J1\temp\solr.handler.TestReplicationHandlerBackup
 2D36CC422510AB74-001\solr-instance-001\collection1\data, hostContext=/solr, 
hostPort=50431}
   [junit4]   2 742467 INFO  
(TEST-TestReplicationHandlerBackup.testBackupOnCommit-seed#[2D36CC422510AB74]) 
[] o.a.s.s.SolrDispatchFilter 
SolrDispatchFilter.init()sun.misc.Launcher$AppClassLoader@e2f2a
   [junit4]   2 742467 INFO  
(TEST-TestReplicationHandlerBackup.testBackupOnCommit-seed#[2D36CC422510AB74]) 
[] o.a.s.c.SolrResourceLoader new SolrResourceLoader for directory: 
'C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\solr\build\solr-core\test\J1\temp\solr.handler.TestReplicationHandlerBackup
 2D36CC422510AB74-001\solr-instance-001\'
   [junit4]   2 742491 INFO  
(TEST-TestReplicationHandlerBackup.testBackupOnCommit-seed#[2D36CC422510AB74]) 
[] o.a.s.c.SolrXmlConfig Loading container configuration from 
C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\solr\build\solr-core\test\J1\temp\solr.handler.TestReplicationHandlerBackup
 2D36CC422510AB74-001\solr-instance-001\solr.xml
   [junit4]   2 742503 INFO  
(TEST-TestReplicationHandlerBackup.testBackupOnCommit-seed#[2D36CC422510AB74]) 
[] o.a.s.c.CoresLocator Config-defined core root directory: 
C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\solr\build\solr-core\test\J1\temp\solr.handler.TestReplicationHandlerBackup
 2D36CC422510AB74-001\solr-instance-001\.
   [junit4]   2 742503 INFO  
(TEST-TestReplicationHandlerBackup.testBackupOnCommit-seed#[2D36CC422510AB74]) 
[] o.a.s.c.CoreContainer New CoreContainer 22905125
   [junit4]   2 742503 INFO  
(TEST-TestReplicationHandlerBackup.testBackupOnCommit-seed#[2D36CC422510AB74]) 
[] o.a.s.c.CoreContainer Loading cores into CoreContainer 
[instanceDir=C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\solr\build\solr-core\test\J1\temp\solr.handler.TestReplicationHandlerBackup
 2D36CC422510AB74-001\solr-instance-001\]
   [junit4]   2 742503 INFO  
(TEST-TestReplicationHandlerBackup.testBackupOnCommit-seed#[2D36CC422510AB74]) 
[] o.a.s.c.CoreContainer loading shared library: 
C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\solr\build\solr-core\test\J1\temp\solr.handler.TestReplicationHandlerBackup
 2D36CC422510AB74-001\solr-instance-001\lib
   [junit4]   2 742503 WARN  
(TEST-TestReplicationHandlerBackup.testBackupOnCommit-seed#[2D36CC422510AB74]) 
[] o.a.s.c.SolrResourceLoader 

[jira] [Commented] (LUCENE-5805) QueryNodeImpl.removeFromParent does a lot of work without any effect

2015-06-04 Thread Cao Manh Dat (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5805?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14573892#comment-14573892
 ] 

Cao Manh Dat commented on LUCENE-5805:
--

Just bumping this up so committers can see it.

 QueryNodeImpl.removeFromParent does a lot of work without any effect
 

 Key: LUCENE-5805
 URL: https://issues.apache.org/jira/browse/LUCENE-5805
 Project: Lucene - Core
  Issue Type: Bug
  Components: modules/queryparser
Affects Versions: 4.7.2, 4.9
Reporter: Christoph Kaser
 Attachments: LUCENE-5805.patch


 The method _removeFromParent_ of _QueryNodeImpl_ calls _getChildren_ on the 
 parent and removes any occurrence of _this_ from the result.
 However, for a few releases now, _getChildren_ has returned a *copy* of the 
 children list, so the code has no effect (except creating a copy of the 
 children list which is then thrown away). 
 Even worse, since _setChildren_ calls _removeFromParent_ on every previous 
 child, _setChildren_ now has a complexity of O(n^2) and creates a lot of 
 throw-away copies of the children list (for nodes with many children):
 {code}
 public void removeFromParent() {
   if (this.parent != null) {
     List<QueryNode> parentChildren = this.parent.getChildren();
     Iterator<QueryNode> it = parentChildren.iterator();

     while (it.hasNext()) {
       if (it.next() == this) {
         it.remove();
       }
     }

     this.parent = null;
   }
 }
 {code}
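The no-op described above is easy to reproduce in isolation: removing an element from a defensive copy leaves the original list untouched. A minimal sketch using plain Java collections (a stand-in `Parent` class, not the actual Lucene `QueryNodeImpl`):

```java
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;

public class CopyRemovalDemo {

    // Stand-in for a node whose getChildren(), like QueryNodeImpl's,
    // returns a *copy* of the internal children list.
    static class Parent {
        private final List<String> children = new ArrayList<>();
        void add(String child) { children.add(child); }
        List<String> getChildren() { return new ArrayList<>(children); } // defensive copy
        int childCount() { return children.size(); }
    }

    public static void main(String[] args) {
        Parent parent = new Parent();
        parent.add("a");
        parent.add("b");

        // Mirrors removeFromParent(): iterate the copy and remove a match.
        List<String> copy = parent.getChildren();
        Iterator<String> it = copy.iterator();
        while (it.hasNext()) {
            if (it.next().equals("a")) {
                it.remove();
            }
        }

        // Only the throw-away copy changed; the parent still has both children.
        System.out.println(parent.childCount()); // prints 2
    }
}
```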






[jira] [Reopened] (SOLR-7632) Change the ExtractingRequestHandler to use Tika-Server

2015-06-04 Thread Chris A. Mattmann (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7632?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris A. Mattmann reopened SOLR-7632:
-

 Change the ExtractingRequestHandler to use Tika-Server
 --

 Key: SOLR-7632
 URL: https://issues.apache.org/jira/browse/SOLR-7632
 Project: Solr
  Issue Type: Improvement
  Components: contrib - Solr Cell (Tika extraction)
Reporter: Chris A. Mattmann
  Labels: memex

 It's a pain to upgrade Tika's jars every time we release, and if Tika fails 
 it breaks the ExtractingRequestHandler (e.g., a document type causes Tika to 
 fail, etc.). A more reliable, separately deployable, and easier-to-maintain 
 version of the ExtractingRequestHandler would make a network call to the Tika 
 JAX-RS server from the Solr side, get the results, and then index the 
 information that way. I have a patch in the works from the DARPA Memex 
 project and I hope to post it soon.






[jira] [Commented] (SOLR-7632) Change the ExtractingRequestHandler to use Tika-Server

2015-06-04 Thread Chris A. Mattmann (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7632?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14573972#comment-14573972
 ] 

Chris A. Mattmann commented on SOLR-7632:
-

Hi [~ehatcher], thanks. On the forwarding end, good question. We did implement 
CORS in Tika Server, so that may allow it to do that, but I'm not totally 
positive. I think having this as an option in Solr would be useful too, as part 
of /update/extract of course. I'll post what I have soon.

 Change the ExtractingRequestHandler to use Tika-Server
 --

 Key: SOLR-7632
 URL: https://issues.apache.org/jira/browse/SOLR-7632
 Project: Solr
  Issue Type: Improvement
  Components: contrib - Solr Cell (Tika extraction)
Reporter: Chris A. Mattmann
  Labels: memex

 It's a pain to upgrade Tika's jars every time we release, and if Tika fails 
 it breaks the ExtractingRequestHandler (e.g., a document type causes Tika to 
 fail, etc.). A more reliable, separately deployable, and easier-to-maintain 
 version of the ExtractingRequestHandler would make a network call to the Tika 
 JAX-RS server from the Solr side, get the results, and then index the 
 information that way. I have a patch in the works from the DARPA Memex 
 project and I hope to post it soon.






[jira] [Updated] (SOLR-7632) Change the ExtractingRequestHandler to use Tika-Server

2015-06-04 Thread Chris A. Mattmann (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7632?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris A. Mattmann updated SOLR-7632:

Labels: memex  (was: )

 Change the ExtractingRequestHandler to use Tika-Server
 --

 Key: SOLR-7632
 URL: https://issues.apache.org/jira/browse/SOLR-7632
 Project: Solr
  Issue Type: Improvement
  Components: contrib - Solr Cell (Tika extraction)
Reporter: Chris A. Mattmann
  Labels: memex

 It's a pain to upgrade Tika's jars every time we release, and if Tika fails 
 it breaks the ExtractingRequestHandler (e.g., a document type causes Tika to 
 fail, etc.). A more reliable, separately deployable, and easier-to-maintain 
 version of the ExtractingRequestHandler would make a network call to the Tika 
 JAX-RS server from the Solr side, get the results, and then index the 
 information that way. I have a patch in the works from the DARPA Memex 
 project and I hope to post it soon.






[jira] [Updated] (LUCENE-6527) TermWeight should not load norms when needsScores is false

2015-06-04 Thread Adrien Grand (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6527?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adrien Grand updated LUCENE-6527:
-
Attachment: LUCENE-6527.patch

Here is a patch.

 TermWeight should not load norms when needsScores is false
 --

 Key: LUCENE-6527
 URL: https://issues.apache.org/jira/browse/LUCENE-6527
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Adrien Grand
Assignee: Adrien Grand
 Attachments: LUCENE-6527.patch


 TermWeight currently loads norms all the time, even when needsScores is false.






[JENKINS] Lucene-Solr-Tests-5.x-Java7 - Build # 3191 - Still Failing

2015-06-04 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-5.x-Java7/3191/

No tests ran.

Build Log:
[...truncated 234 lines...]
BUILD FAILED
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-5.x-Java7/build.xml:536: 
The following error occurred while executing this line:
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-5.x-Java7/build.xml:484: 
The following error occurred while executing this line:
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-5.x-Java7/build.xml:61: 
The following error occurred while executing this line:
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-5.x-Java7/extra-targets.xml:39:
 The following error occurred while executing this line:
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-5.x-Java7/lucene/build.xml:50:
 The following error occurred while executing this line:
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-5.x-Java7/lucene/common-build.xml:1436:
 The following error occurred while executing this line:
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-5.x-Java7/lucene/common-build.xml:991:
 Could not read or create hints file: 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-5.x-Java7/.caches/test-stats/core/timehints.txt

Total time: 16 seconds
Build step 'Invoke Ant' marked build as failure
Archiving artifacts
Sending artifact delta relative to Lucene-Solr-Tests-5.x-Java7 #3186
Archived 1 artifacts
Archive block size is 32768
Received 0 blocks and 464 bytes
Compression is 0.0%
Took 0.17 sec
Recording test results
Email was triggered for: Failure
Sending email for trigger: Failure




[jira] [Updated] (LUCENE-6526) Make AssertingWeight check that scores are not computed when needsScores is false

2015-06-04 Thread Adrien Grand (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6526?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adrien Grand updated LUCENE-6526:
-
Attachment: LUCENE-6526.patch

Here is a patch.

 Make AssertingWeight check that scores are not computed when needsScores is 
 false
 -

 Key: LUCENE-6526
 URL: https://issues.apache.org/jira/browse/LUCENE-6526
 Project: Lucene - Core
  Issue Type: Test
Reporter: Adrien Grand
Assignee: Adrien Grand
 Attachments: LUCENE-6526.patch


 Today nothing prevents you from calling score() if you don't need scores. But 
 we could make AssertingWeight check it in order to make sure that we do not 
 waste resources computing something we don't need.






[jira] [Updated] (SOLR-7631) Faceting on multivalued Trie fields with precisionStep != 0 can produce bogus value=0 in some situations

2015-06-04 Thread Hoss Man (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7631?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hoss Man updated SOLR-7631:
---
Attachment: SOLR-7631_test.patch

Updated patch...

* tests some precisionStep=0 fields as well, to demonstrate that they never 
exhibit the failure
* tests all possible facet.method values, to demonstrate that the multivalued 
precisionStep=8 fields fail regardless of which method is requested
* fixes the NUM_DOCS and MergePolicy used, to reduce the number of variables
** NOTE: some observation indicated that a low number of docs in the index was 
less likely to fail -- suggesting that the bug is related to either the number 
of segments, segment size, or posting-list size .. but with NUM_DOCS == 1000 
there are still plenty of seeds that fail reliably.

With these changes, the only pattern I'm seeing is that all of the failures 
seem to involve the RandomCodec -- which reports itself in the test params 
output as...

bq. NOTE: test params are: codec=Asserting(Lucene50): { ... randomized posting 
formats here ...}, docValues:{ ... randomized docValues here ...}, sim=etc, 
locale=etc, timezone=etc

...but I haven't found any pattern in the PostingFormat reported for the field 
in question (foo_ti) -- and spot checks using -Dtests.codec=AssertingCodec and 
-Dtests.codec=Lucene50 directly haven't failed, leading me to believe it 
must either be some other aspect of how RandomCodec does its wrapping, or some 
nuance in the PostingFormat selected.

I'm currently beasting this test using every possible -Dtests.codec option to 
sanity check that it only ever fails with random ... once that's done, I 
guess I'll start doing the same thing with -Dtests.postingformat unless anyone 
spots the problem first. 


 Faceting on multivalued Trie fields with precisionStep != 0 can produce bogus 
 value=0 in some situations
 --

 Key: SOLR-7631
 URL: https://issues.apache.org/jira/browse/SOLR-7631
 Project: Solr
  Issue Type: Bug
Reporter: Hoss Man
 Attachments: SOLR-7631_test.patch, SOLR-7631_test.patch, log.tgz


 Working through SOLR-7605, I've confirmed that the underlying problem exists 
 for regular {{facet.field}} situations, regardless of distrib mode, for Trie 
 fields that have a non-zero precisionStep -- there's still some other missing 
 piece of the puzzle I haven't figured out, but it relates in some way to some 
 of the randomized factors we use in our tests (Codec? PostingFormat? ... no 
 idea).
 The problem, when it manifests, is that faceting on a TrieIntField, using 
 {{facet.mincount=0}}, causes the facet results to include three instances of 
 the facet value 0, listed with a count of 0 -- even though no document in 
 the index contains this value at all...
 {noformat}
[junit4]   <lst name="facet_fields">
[junit4]     <lst name="foo_ti">
[junit4]       <int name="20">32</int>
 ...
[junit4]       <int name="50">21</int>
[junit4]       <int name="0">0</int>
[junit4]       <int name="0">0</int>
[junit4]       <int name="0">0</int>
 {noformat}
 This is concerning for a few reasons:
 * In the case of pivot faceting, getting duplicate values back from a single 
 shard like this triggers an assert in distributed queries and the request 
 fails -- even if asserts aren't enabled, the bogus 0 value can be 
 propagated to clients if they ask for facet.pivot.mincount=0
 * Client code expecting a single (value, count) pair for each value may 
 equally be confused/broken by this response, where the same value is 
 returned multiple times
 * Without knowing the root cause, it seems very possible that other nonsense 
 values may be getting returned -- i.e., if the error only happens with fields 
 utilizing precisionStep, then it's likely related to the synthetic values 
 used for faster range queries, and other synthetic values may be getting 
 included with bogus counts
 A patch with a simple test that can demonstrate the bug fairly easily will be 
 attached shortly.
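The duplicate-value symptom described above can also be guarded against on the client side. A hedged sketch (hypothetical helper, not a Solr API) that scans a flat (value, count) facet listing for values appearing more than once:

```java
import java.util.AbstractMap.SimpleEntry;
import java.util.ArrayList;
import java.util.HashSet;
import java.util.LinkedHashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;

public class DuplicateFacetCheck {

    // Hypothetical client-side guard: return facet values that occur more than
    // once in a (value, count) listing, preserving first-seen order.
    static Set<String> duplicateFacetValues(List<Map.Entry<String, Integer>> facets) {
        Set<String> seen = new HashSet<>();
        Set<String> dups = new LinkedHashSet<>();
        for (Map.Entry<String, Integer> entry : facets) {
            if (!seen.add(entry.getKey())) {
                dups.add(entry.getKey());
            }
        }
        return dups;
    }

    public static void main(String[] args) {
        List<Map.Entry<String, Integer>> facets = new ArrayList<>();
        facets.add(new SimpleEntry<>("20", 32));
        facets.add(new SimpleEntry<>("50", 21));
        facets.add(new SimpleEntry<>("0", 0));
        facets.add(new SimpleEntry<>("0", 0)); // duplicated value, as in the bug
        System.out.println(duplicateFacetValues(facets)); // prints [0]
    }
}
```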






[jira] [Updated] (SOLR-7639) Bring MLTQParser at par with the MLT Handler w.r.t supported options

2015-06-04 Thread Anshum Gupta (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7639?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anshum Gupta updated SOLR-7639:
---
Attachment: SOLR-7639.patch

Patch, without any tests. Will add some tests.

 Bring MLTQParser at par with the MLT Handler w.r.t supported options
 

 Key: SOLR-7639
 URL: https://issues.apache.org/jira/browse/SOLR-7639
 Project: Solr
  Issue Type: Improvement
Reporter: Anshum Gupta
Assignee: Anshum Gupta
 Attachments: SOLR-7639.patch


 As of now, there are options that the MLT Handler supports which the QParser 
 doesn't. It would be good to have the QParser tap into everything that's 
 supported.






[jira] [Commented] (SOLR-7573) Inconsistent numbers of docs between leader and replica

2015-06-04 Thread Erick Erickson (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7573?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14573708#comment-14573708
 ] 

Erick Erickson commented on SOLR-7573:
--

Additional data: I have a test harness with which I can cause things to go 
wrong fairly regularly, but not on demand. It works like this:

For (some configurable number of cycles):
  Spawn a bunch of threads that create really simple documents and send them 
  to the collection
  Wait for all of the threads to terminate
  commit(true, true)
  For each shard:
    Check that q=*:* returns the same number of docs found on every replica
    If there is a discrepancy, report and exit
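The per-shard verification step can be sketched as a pure function over the counts collected from each replica (queried directly with distrib=false). This is a hypothetical illustration of the check, not the actual harness code:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.HashSet;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class ShardConsistencyCheck {

    // Given numFound from each replica of each shard, return the names of
    // shards whose replicas disagree on the document count.
    static List<String> findInconsistentShards(Map<String, List<Long>> countsByShard) {
        List<String> inconsistent = new ArrayList<>();
        for (Map.Entry<String, List<Long>> entry : countsByShard.entrySet()) {
            if (new HashSet<>(entry.getValue()).size() > 1) {
                inconsistent.add(entry.getKey());
            }
        }
        return inconsistent;
    }

    public static void main(String[] args) {
        Map<String, List<Long>> counts = new LinkedHashMap<>();
        counts.put("shard1", Arrays.asList(1000L, 1000L)); // replicas agree
        counts.put("shard2", Arrays.asList(1000L, 997L));  // one replica lagging
        System.out.println(findInconsistentShards(counts)); // prints [shard2]
    }
}
```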


The interesting thing here is that I saw this error, but by the time I could 
investigate via the admin UI, the counts were identical. However, the replica 
that had a smaller count was _also_ forced into leader-initiated recovery, which 
is a symptom I saw onsite. So the working hypothesis is that the node was in 
LIR for some period but managed to respond to a query; after LIR was over it 
had re-synced and was OK. I'm not clear at all on how the replica managed to 
respond; I'll add more logging to see what I can see. I am using HttpSolrClient 
to do the verification with distrib=false, so I'm not sure whether the active 
state in ZK matters at all. When I was onsite, the replica didn't recover, but 
we didn't wait very long and restarted it, at which point it did a full sync 
from the leader -- so it's consistent with what I just saw.

This seems like correct (eventual-consistency) behavior; the problem is that 
the replica goes into LIR in the first place, and that it manages to respond to 
a direct ping via HttpSolrClient.

This gives me some hope that if we do SOLR-7571 and have the client(s) keep 
from overwhelming Solr, we have a mechanism to at least avoid the situation 
arising in the first place. And if I incorporate that into this test harness 
and the problem goes away, it'll give me confidence that we're getting to the 
root causes.

 Inconsistent numbers of docs between leader and replica
 ---

 Key: SOLR-7573
 URL: https://issues.apache.org/jira/browse/SOLR-7573
 Project: Solr
  Issue Type: Bug
Affects Versions: 4.10.3
Reporter: Erick Erickson
Assignee: Erick Erickson

 Once again assigning this to myself to keep track. And once again it is not 
 reproducible at will, and possibly related to firehosing updates to Solr.
 I saw a situation where things seemed to be indexed normally, but the numbers 
 of docs on a leader and follower were not the same. The leader had, as I 
 remember, a 4.5G index and the follower a 1.9G index. No errors in the logs, 
 no recovery initiated, etc. All nodes green.
 The very curious thing was that when the follower was bounced, it did a full 
 index replication from the leader. How that could happen without the 
 follower ever going into a recovery state I have no idea.
 Again, if I can get this to reproduce locally I can put more diagnostics into 
 the process and see what I can see. I also have some logs to explore further.






[jira] [Commented] (SOLR-7613) solrcore.properties file should be loaded if it resides in ZooKeeper

2015-06-04 Thread Hoss Man (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7613?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14573660#comment-14573660
 ] 

Hoss Man commented on SOLR-7613:


Some relevant comments on this from the original mailing list discussion...

Hoss..

bq. IIUC CoreDescriptor.loadExtraProperties is the relevant method ... it would 
need to build up the path including the core name and get the system level 
resource loader (CoreContainer.getResourceLoader()) to access it, since the core 
doesn't exist yet so there is no core-level ResourceLoader to use.

Alan...

bq. I think this is an oversight, rather than intentional (at least, I 
certainly didn't intend to write it like this!). The problem here will be that 
CoreDescriptors are currently built entirely from core.properties files, and 
the CoreLocators that construct them don't have any access to zookeeper.

bq. Maybe the way forward is to move properties out of CoreDescriptor and have 
an entirely separate CoreProperties object that is built and returned by the 
ConfigSetService, and that is read via the ResourceLoader.  This would fit in 
quite nicely with the changes I put up on SOLR-7570, in that you could have 
properties specified on the collection config overriding properties from the 
configset, and then local core-specific properties overriding both.

Hoss...

bq. But they do have access to the CoreContainer which is passed to the 
CoreDescriptor constructor -- it has all the ZK access you'd need at the time 
when loadExtraProperties() is called.

Alan...

bq. Yeah, you could do it like that.  But looking at it further, I think 
solrcore.properties is actually being loaded in entirely the wrong place - it 
should be done by whatever is creating the CoreDescriptor, and then passed in 
as a Properties object to the CD constructor.  At the moment, you can't refer 
to a property defined in solrcore.properties within your core.properties file.

Hoss...

bq. but if you look at it from a historical context, that doesn't really 
matter for the purpose that solrcore.properties was intended for -- it 
predates core discovery, and was only intended as a way to specify user level 
properties that could then be substituted in the solrconfig.xml or dih.xml or 
schema.xml

bq. ie: making it possible to use a solrcore.prop value to set a core.prop 
value might be a nice to have, but it's definitely not what it was intended 
for, so it shouldn't really be a blocker to getting the same (original) basic 
functionality working in SolrCloud.



Honestly, even ignoring the historical context, it seems like a chicken and egg 
problem to me -- should it be possible to use a solrcore.properties variable 
to set the value of another variable in core.properties? or should it be 
possible to use a core.properties variable to set the value of another variable 
in solrcore.properties?

the simplest thing for people to understand would probably be to just say that 
they are independent, loaded separately, and cause an error if you try to 
define the same value in both (i doubt that's currently enforced, but it 
probably should be)
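The "independent, loaded separately, error on overlap" policy suggested above could be sketched roughly as follows. This is a hypothetical helper (the class and method names are made up for illustration, not Solr code):

```java
import java.util.Properties;

public class IndependentProps {
    // Combine two independently loaded property sets; a key defined in
    // both files is treated as a configuration error, as proposed above.
    static Properties merge(Properties coreProps, Properties solrcoreProps) {
        Properties merged = new Properties();
        merged.putAll(coreProps);
        for (String key : solrcoreProps.stringPropertyNames()) {
            if (merged.containsKey(key)) {
                throw new IllegalStateException("Property '" + key
                    + "' defined in both core.properties and solrcore.properties");
            }
            merged.setProperty(key, solrcoreProps.getProperty(key));
        }
        return merged;
    }

    public static void main(String[] args) {
        Properties core = new Properties();
        core.setProperty("name", "collection1");
        Properties solrcore = new Properties();
        solrcore.setProperty("solr.data.dir", "/var/data");
        System.out.println(merge(core, solrcore).getProperty("solr.data.dir")); // prints /var/data

        solrcore.setProperty("name", "other");   // now both files define "name"
        try {
            merge(core, solrcore);
        } catch (IllegalStateException e) {
            System.out.println("conflict: " + e.getMessage());
        }
    }
}
```

Neither file can substitute into the other here, which sidesteps the chicken-and-egg question entirely.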

 solrcore.properties file should be loaded if it resides in ZooKeeper
 

 Key: SOLR-7613
 URL: https://issues.apache.org/jira/browse/SOLR-7613
 Project: Solr
  Issue Type: Bug
Reporter: Steve Davids
 Fix For: 5.3


 The solrcore.properties file is used to load user defined properties for use 
 primarily in the solrconfig.xml file, though this properties file will only 
 load if it is resident in the core/conf directory on the physical disk, it 
 will not load if it is in ZK's core/conf directory. There should be a 
 mechanism to allow a core properties file to be specified in ZK and can be 
 updated appropriately along with being able to reload the properties when the 
 file changes (or via a core reload).






[jira] [Created] (LUCENE-6527) TermWeight should not load norms when needsScores is false

2015-06-04 Thread Adrien Grand (JIRA)
Adrien Grand created LUCENE-6527:


 Summary: TermWeight should not load norms when needsScores is false
 Key: LUCENE-6527
 URL: https://issues.apache.org/jira/browse/LUCENE-6527
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Adrien Grand
Assignee: Adrien Grand


TermWeight currently loads norms all the time, even when needsScores is false.
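The fix amounts to guarding the norms lookup with the needsScores flag. A sketch with simplified stand-ins (FakeReader and normsIfNeeded are hypothetical, not Lucene's actual TermWeight code):

```java
public class NormsLoading {
    interface NumericDocValues { long get(int docID); }

    // Stand-in for a leaf reader that records whether norms were requested.
    static class FakeReader {
        boolean normsLoaded = false;
        NumericDocValues getNormValues(String field) {
            normsLoaded = true;   // loading norms has a cost we want to avoid
            return doc -> 1L;
        }
    }

    // The gist of the fix: skip the norms lookup entirely when the
    // caller declared it does not need scores (e.g. pure filtering).
    static NumericDocValues normsIfNeeded(FakeReader reader, String field, boolean needsScores) {
        return needsScores ? reader.getNormValues(field) : null;
    }

    public static void main(String[] args) {
        FakeReader r = new FakeReader();
        normsIfNeeded(r, "body", false);
        System.out.println(r.normsLoaded); // false: nothing loaded for filtering
        normsIfNeeded(r, "body", true);
        System.out.println(r.normsLoaded); // true: scoring still gets norms
    }
}
```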






[jira] [Resolved] (LUCENE-6458) MultiTermQuery's FILTER rewrite method should support skipping whenever possible

2015-06-04 Thread Adrien Grand (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6458?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adrien Grand resolved LUCENE-6458.
--
Resolution: Fixed

 MultiTermQuery's FILTER rewrite method should support skipping whenever 
 possible
 

 Key: LUCENE-6458
 URL: https://issues.apache.org/jira/browse/LUCENE-6458
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Adrien Grand
Assignee: Adrien Grand
Priority: Minor
 Fix For: Trunk, 5.2

 Attachments: LUCENE-6458-2.patch, LUCENE-6458.patch, 
 LUCENE-6458.patch, wikimedium.10M.nostopwords.tasks


 Today MultiTermQuery's FILTER rewrite always builds a bit set from all 
 matching terms. This means that we need to consume the entire postings lists 
 of all matching terms. Instead we should try to execute like regular 
 disjunctions when there are few terms.
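The improvement described above reduces to a strategy choice on the number of matching terms. A toy sketch of that decision (the threshold constant is invented for illustration; it is not necessarily the value the patch uses):

```java
public class FilterRewrite {
    // Hypothetical cutoff: below it, a plain boolean disjunction is used.
    static final int DISJUNCTION_MAX_TERMS = 16;

    static String chooseStrategy(int matchingTermCount) {
        // Few terms: a regular disjunction keeps per-clause postings
        // iterators alive and can skip over non-competitive documents.
        // Many terms: collecting all matches into a bit set amortizes
        // better, at the cost of consuming every postings list fully.
        return matchingTermCount <= DISJUNCTION_MAX_TERMS ? "disjunction" : "bitset";
    }

    public static void main(String[] args) {
        System.out.println(chooseStrategy(3));    // disjunction
        System.out.println(chooseStrategy(5000)); // bitset
    }
}
```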






[jira] [Created] (LUCENE-6526) Make AssertingWeight check that scores are not computed when needsScores is false

2015-06-04 Thread Adrien Grand (JIRA)
Adrien Grand created LUCENE-6526:


 Summary: Make AssertingWeight check that scores are not computed 
when needsScores is false
 Key: LUCENE-6526
 URL: https://issues.apache.org/jira/browse/LUCENE-6526
 Project: Lucene - Core
  Issue Type: Test
Reporter: Adrien Grand
Assignee: Adrien Grand


Today nothing prevents you from calling score() if you don't need scores. But 
we could make AssertingWeight check it in order to make sure that we do not 
waste resources computing something we don't need.
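A minimal sketch of the proposed assertion, using a simplified Scorer interface rather than Lucene's actual AssertingWeight machinery:

```java
public class AssertingScorerSketch {
    interface Scorer { float score(); }

    // Wrap a scorer so that calling score() when the search declared
    // needsScores=false trips an AssertionError instead of silently
    // computing (and wasting) a score.
    static Scorer asserting(Scorer in, boolean needsScores) {
        return () -> {
            if (!needsScores) {
                throw new AssertionError("score() called although needsScores is false");
            }
            return in.score();
        };
    }

    public static void main(String[] args) {
        Scorer real = () -> 1.5f;
        System.out.println(asserting(real, true).score()); // 1.5
        try {
            asserting(real, false).score();
        } catch (AssertionError e) {
            System.out.println("caught: " + e.getMessage());
        }
    }
}
```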






[jira] [Commented] (SOLR-7555) Display total space and available space in Admin

2015-06-04 Thread Erik Hatcher (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7555?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14573219#comment-14573219
 ] 

Erik Hatcher commented on SOLR-7555:


[~epugh] I tinkered with this a little more though still didn't have full 
success, but a call to DirectoryFactory#release(directory) gets the test case 
passing.  I still had issues with /admin/system working properly though, but 
maybe you can add in the #release and get it working?

 Display total space and available space in Admin
 

 Key: SOLR-7555
 URL: https://issues.apache.org/jira/browse/SOLR-7555
 Project: Solr
  Issue Type: Improvement
  Components: web gui
Affects Versions: 5.1
Reporter: Eric Pugh
Assignee: Erik Hatcher
Priority: Minor
 Fix For: 5.3

 Attachments: DiskSpaceAwareDirectory.java, 
 SOLR-7555-display_disk_space.patch, SOLR-7555-display_disk_space_v2.patch, 
 SOLR-7555-display_disk_space_v3.patch, SOLR-7555-display_disk_space_v4.patch, 
 SOLR-7555.patch, SOLR-7555.patch, SOLR-7555.patch


 Frequently I have access to the Solr Admin console, but not the underlying 
 server, and I'm curious how much space remains available.   This little patch 
 exposes total Volume size as well as the usable space remaining:
 !https://monosnap.com/file/VqlReekCFwpK6utI3lP18fbPqrGI4b.png!
 I'm not sure if this is the best place to put this, as every shard will share 
 the same data, so maybe it should be on the top level Dashboard?  Also not 
 sure what to call the fields! 






[jira] [Updated] (SOLR-7637) Improve error logging in the zkcli CLUSTERPROP command

2015-06-04 Thread Hrishikesh Gadre (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7637?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hrishikesh Gadre updated SOLR-7637:
---
Attachment: SOLR-7637.patch

Here is the patch. I have also removed redundant code handling 
NodeExistsException/BadVersionException since we are retrying internally in the 
ZkStateReader::setClusterProperty(...) API.

 Improve error logging in the zkcli CLUSTERPROP command
 --

 Key: SOLR-7637
 URL: https://issues.apache.org/jira/browse/SOLR-7637
 Project: Solr
  Issue Type: Improvement
Reporter: Hrishikesh Gadre
Priority: Trivial
 Attachments: SOLR-7637.patch


 SOLR-7176 introduced capability to update Solr cluster properties via ZK CLI. 
 The error logging implemented as part of that fix was not proper (i.e. the 
 actual error was getting masked). We should improve the error logging to 
 explicitly state the root cause.






[jira] [Commented] (SOLR-4506) [solr4.0.0] many index.{date} dir in replication node

2015-06-04 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4506?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14573214#comment-14573214
 ] 

ASF subversion and git services commented on SOLR-4506:
---

Commit 1683601 from [~thelabdude] in branch 'dev/trunk'
[ https://svn.apache.org/r1683601 ]

SOLR-4506: Clean-up old (unused) index directories in the background after 
initializing a new index
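The idea behind the committed clean-up, sketched outside Solr (the index.&lt;date&gt; directory naming follows the report in this issue; the real fix runs the deletion in a background thread inside Solr, not inline like this):

```java
import java.io.File;

public class OldIndexDirCleanup {
    // After a new index directory is in use, remove stale sibling
    // "index.<timestamp>" directories that are no longer current.
    static int deleteOldIndexDirs(File dataDir, String currentIndexDirName) {
        int deleted = 0;
        File[] children = dataDir.listFiles();
        if (children == null) return 0;
        for (File child : children) {
            String name = child.getName();
            if (child.isDirectory() && name.startsWith("index.")
                    && !name.equals(currentIndexDirName)) {
                if (deleteRecursively(child)) deleted++;
            }
        }
        return deleted;
    }

    static boolean deleteRecursively(File f) {
        File[] kids = f.listFiles();
        if (kids != null) for (File k : kids) deleteRecursively(k);
        return f.delete();
    }

    public static void main(String[] args) {
        File dataDir = new File(System.getProperty("java.io.tmpdir"),
                "cleanup-demo-" + System.nanoTime());
        new File(dataDir, "index.20130218012211880").mkdirs();
        new File(dataDir, "index.20130218015714713").mkdirs();
        new File(dataDir, "tlog").mkdirs();  // non-index dirs are untouched
        int n = deleteOldIndexDirs(dataDir, "index.20130218015714713");
        System.out.println(n); // 1: only the stale index dir is removed
    }
}
```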

 [solr4.0.0] many index.{date} dir in replication node 
 --

 Key: SOLR-4506
 URL: https://issues.apache.org/jira/browse/SOLR-4506
 Project: Solr
  Issue Type: Bug
  Components: SolrCloud
Affects Versions: 4.0
 Environment: the solr4.0 runs on suse11.
 mem:32G
 cpu:16 cores
Reporter: zhuojunjian
Assignee: Timothy Potter
Priority: Minor
 Fix For: Trunk, 5.3

 Attachments: SOLR-4506.patch

   Original Estimate: 12h
  Remaining Estimate: 12h

 in our test,we used solrcloud feature in solr4.0(version detail 
 :4.0.0.2012.10.06.03.04.33).
 the solrcloud configuration is 3 shards and 2 replications each shard.
 we found that there are over than 25 dirs which named index.{date} in one 
 replication node belonging to shard 3. 
 for example:
 index.2013021725864  index.20130218012211880  index.20130218015714713  
 index.20130218023101958  index.20130218030424083  tlog
 index.20130218005648324  index.20130218012751078  index.20130218020141293  
 the issue seems like SOLR-1781. but it is fixed in 4.0-BETA,5.0. 
 so is solr4.0 ? if it is fixed too in solr4.0, why we find the issue again ?
 what can I do?   






[jira] [Created] (SOLR-7638) Angular UI cloud pane broken

2015-06-04 Thread Upayavira (JIRA)
Upayavira created SOLR-7638:
---

 Summary: Angular UI cloud pane broken
 Key: SOLR-7638
 URL: https://issues.apache.org/jira/browse/SOLR-7638
 Project: Solr
  Issue Type: Bug
Affects Versions: 5.2
Reporter: Upayavira
Priority: Minor


I suspect the backend behind the Cloud pane changed, meaning the cloud tab in 
Angular doesn't work. A patch will come soon.






[jira] [Created] (SOLR-7637) Improve the error logging in the zkcli CLUSTERPROP command

2015-06-04 Thread Hrishikesh Gadre (JIRA)
Hrishikesh Gadre created SOLR-7637:
--

 Summary: Improve the error logging in the zkcli CLUSTERPROP command
 Key: SOLR-7637
 URL: https://issues.apache.org/jira/browse/SOLR-7637
 Project: Solr
  Issue Type: Improvement
Reporter: Hrishikesh Gadre
Priority: Trivial


SOLR-7176 introduced capability to update Solr cluster properties via ZK CLI. 
The error logging implemented as part of that fix was not proper (i.e. the 
actual error was getting masked). We should improve the error logging to 
explicitly state the root cause.






Re: [VOTE] 5.2.0 RC4

2015-06-04 Thread Yonik Seeley
+1

-Yonik

On Tue, Jun 2, 2015 at 11:12 PM, Anshum Gupta ans...@anshumgupta.net wrote:
 Please vote for the fourth (and hopefully final) release candidate for
 Apache Lucene/Solr 5.2.0.

 The artifacts can be downloaded from:
 https://dist.apache.org/repos/dist/dev/lucene/lucene-solr-5.2.0-RC4-rev1683206/

 You can run the smoke tester directly with this command:

 python3 -u dev-tools/scripts/smokeTestRelease.py
 https://dist.apache.org/repos/dist/dev/lucene/lucene-solr-5.2.0-RC4-rev1683206/

 Here's my +1

 SUCCESS! [0:32:56.564985]

 --
 Anshum Gupta




[jira] [Updated] (SOLR-7637) Improve error logging in the zkcli CLUSTERPROP command

2015-06-04 Thread Hrishikesh Gadre (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7637?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hrishikesh Gadre updated SOLR-7637:
---
Summary: Improve error logging in the zkcli CLUSTERPROP command  (was: 
Improve the error logging in the zkcli CLUSTERPROP command)

 Improve error logging in the zkcli CLUSTERPROP command
 --

 Key: SOLR-7637
 URL: https://issues.apache.org/jira/browse/SOLR-7637
 Project: Solr
  Issue Type: Improvement
Reporter: Hrishikesh Gadre
Priority: Trivial

 SOLR-7176 introduced capability to update Solr cluster properties via ZK CLI. 
 The error logging implemented as part of that fix was not proper (i.e. the 
 actual error was getting masked). We should improve the error logging to 
 explicitly state the root cause.






[jira] [Comment Edited] (SOLR-3719) Add instant search capability to example/files /browse

2015-06-04 Thread Erik Hatcher (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-3719?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14572819#comment-14572819
 ] 

Erik Hatcher edited comment on SOLR-3719 at 6/4/15 6:18 PM:


Another tweak needed [~esther.quansah] is to add filters/sort to the instant 
search request.  I think adding lensNoQ to the URL should do the trick.


was (Author: ehatcher):
Another tweak needed [~esther.quansah] is to add filters/sort to the instant 
search request.  I think added lensNoQ to the URL should do the trick.

 Add instant search capability to example/files /browse
 

 Key: SOLR-3719
 URL: https://issues.apache.org/jira/browse/SOLR-3719
 Project: Solr
  Issue Type: New Feature
Reporter: Erik Hatcher
Assignee: Erik Hatcher
Priority: Minor
 Fix For: Trunk, 5.3

 Attachments: SOLR-3719.patch, SOLR-3719.patch


 Once upon a time I tinkered with this in a personal github fork 
 https://github.com/erikhatcher/lucene-solr/commits/instant_search/






[jira] [Resolved] (SOLR-4506) [solr4.0.0] many index.{date} dir in replication node

2015-06-04 Thread Timothy Potter (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-4506?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Timothy Potter resolved SOLR-4506.
--
Resolution: Fixed

 [solr4.0.0] many index.{date} dir in replication node 
 --

 Key: SOLR-4506
 URL: https://issues.apache.org/jira/browse/SOLR-4506
 Project: Solr
  Issue Type: Bug
  Components: SolrCloud
Affects Versions: 4.0
 Environment: the solr4.0 runs on suse11.
 mem:32G
 cpu:16 cores
Reporter: zhuojunjian
Assignee: Timothy Potter
Priority: Minor
 Fix For: Trunk, 5.3

 Attachments: SOLR-4506.patch

   Original Estimate: 12h
  Remaining Estimate: 12h

 in our test,we used solrcloud feature in solr4.0(version detail 
 :4.0.0.2012.10.06.03.04.33).
 the solrcloud configuration is 3 shards and 2 replications each shard.
 we found that there are over than 25 dirs which named index.{date} in one 
 replication node belonging to shard 3. 
 for example:
 index.2013021725864  index.20130218012211880  index.20130218015714713  
 index.20130218023101958  index.20130218030424083  tlog
 index.20130218005648324  index.20130218012751078  index.20130218020141293  
 the issue seems like SOLR-1781. but it is fixed in 4.0-BETA,5.0. 
 so is solr4.0 ? if it is fixed too in solr4.0, why we find the issue again ?
 what can I do?   






[jira] [Commented] (SOLR-4506) [solr4.0.0] many index.{date} dir in replication node

2015-06-04 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4506?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14573233#comment-14573233
 ] 

ASF subversion and git services commented on SOLR-4506:
---

Commit 1683604 from [~thelabdude] in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1683604 ]

SOLR-4506: Clean-up old (unused) index directories in the background after 
initializing a new index

 [solr4.0.0] many index.{date} dir in replication node 
 --

 Key: SOLR-4506
 URL: https://issues.apache.org/jira/browse/SOLR-4506
 Project: Solr
  Issue Type: Bug
  Components: SolrCloud
Affects Versions: 4.0
 Environment: the solr4.0 runs on suse11.
 mem:32G
 cpu:16 cores
Reporter: zhuojunjian
Assignee: Timothy Potter
Priority: Minor
 Fix For: Trunk, 5.3

 Attachments: SOLR-4506.patch

   Original Estimate: 12h
  Remaining Estimate: 12h

 in our test,we used solrcloud feature in solr4.0(version detail 
 :4.0.0.2012.10.06.03.04.33).
 the solrcloud configuration is 3 shards and 2 replications each shard.
 we found that there are over than 25 dirs which named index.{date} in one 
 replication node belonging to shard 3. 
 for example:
 index.2013021725864  index.20130218012211880  index.20130218015714713  
 index.20130218023101958  index.20130218030424083  tlog
 index.20130218005648324  index.20130218012751078  index.20130218020141293  
 the issue seems like SOLR-1781. but it is fixed in 4.0-BETA,5.0. 
 so is solr4.0 ? if it is fixed too in solr4.0, why we find the issue again ?
 what can I do?   






Re: VOTE: RC0 release apache-solr-ref-guide-5.2.pdf

2015-06-04 Thread Yonik Seeley
+1

Oh, and as far as the new faceting stuff is concerned, this release
snuck up faster than expected and there's still some stuff to be
ironed out, so I think I'll continue considering it experimental for
5.2 as well.

-Yonik

On Wed, Jun 3, 2015 at 1:30 PM, Chris Hostetter
hossman_luc...@fucit.org wrote:

 Please VOTE to release these files as the Solr Ref Guide 5.2...

 https://dist.apache.org/repos/dist/dev/lucene/solr/ref-guide/apache-solr-ref-guide-5.2-RC0/


 NOTE: this vote will be open for a minimum of 72 hours, but i will not call
 this (ref guide) vote to a close until the 5.2.0 code release is also
 successful -- just in case there are any last minute bugs found that warrant
 an update to the ref guide as well.



 -Hoss
 http://www.lucidworks.com/






[jira] [Commented] (LUCENE-6508) Simplify Directory/lock api

2015-06-04 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6508?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14573369#comment-14573369
 ] 

ASF subversion and git services commented on LUCENE-6508:
-

Commit 1683609 from [~rcmuir] in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1683609 ]

LUCENE-6508: Simplify directory/lock API

 Simplify Directory/lock api
 ---

 Key: LUCENE-6508
 URL: https://issues.apache.org/jira/browse/LUCENE-6508
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Robert Muir
Assignee: Uwe Schindler
 Attachments: LUCENE-6508-deadcode1.patch, LUCENE-6508.patch, 
 LUCENE-6508.patch, LUCENE-6508.patch, LUCENE-6508.patch


 See LUCENE-6507 for some background. In general it would be great if you can 
 just acquire an immutable lock (or you get a failure) and then you close that 
 to release it.
 Today the API might be too much for what is needed by IW.






[jira] [Commented] (LUCENE-6508) Simplify Directory/lock api

2015-06-04 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6508?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14573314#comment-14573314
 ] 

ASF subversion and git services commented on LUCENE-6508:
-

Commit 1683606 from [~rcmuir] in branch 'dev/trunk'
[ https://svn.apache.org/r1683606 ]

LUCENE-6508: Simplify directory/lock API

 Simplify Directory/lock api
 ---

 Key: LUCENE-6508
 URL: https://issues.apache.org/jira/browse/LUCENE-6508
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Robert Muir
Assignee: Uwe Schindler
 Attachments: LUCENE-6508-deadcode1.patch, LUCENE-6508.patch, 
 LUCENE-6508.patch, LUCENE-6508.patch, LUCENE-6508.patch


 See LUCENE-6507 for some background. In general it would be great if you can 
 just acquire an immutable lock (or you get a failure) and then you close that 
 to release it.
 Today the API might be too much for what is needed by IW.






[jira] [Resolved] (LUCENE-6508) Simplify Directory/lock api

2015-06-04 Thread Robert Muir (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6508?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Muir resolved LUCENE-6508.
-
   Resolution: Fixed
Fix Version/s: 5.3
   Trunk

 Simplify Directory/lock api
 ---

 Key: LUCENE-6508
 URL: https://issues.apache.org/jira/browse/LUCENE-6508
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Robert Muir
Assignee: Uwe Schindler
 Fix For: Trunk, 5.3

 Attachments: LUCENE-6508-deadcode1.patch, LUCENE-6508.patch, 
 LUCENE-6508.patch, LUCENE-6508.patch, LUCENE-6508.patch


 See LUCENE-6507 for some background. In general it would be great if you can 
 just acquire an immutable lock (or you get a failure) and then you close that 
 to release it.
 Today the API might be too much for what is needed by IW.






[jira] [Commented] (LUCENE-6525) Deprecate IndexWriterConfig's write lock timeout

2015-06-04 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6525?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14573469#comment-14573469
 ] 

Michael McCandless commented on LUCENE-6525:


+1

 Deprecate IndexWriterConfig's write lock timeout
 

 Key: LUCENE-6525
 URL: https://issues.apache.org/jira/browse/LUCENE-6525
 Project: Lucene - Core
  Issue Type: Task
Reporter: Robert Muir

 Followup from LUCENE-6508
 We should ultimately remove this parameter, it is just sugar over a sleeping 
 lock factory today that sleeps and retries until timeout, like the old code.
 But really if you want a lock that blocks until its obtained, you can simply 
 specify the sleeping lock factory yourself (and have more control over what 
 it does!), or maybe an NIO implementation based on the blocking 
 FileChannel.lock() or something else.
 So this stuff should be out of indexwriter and not baked into our APIs.
 I would like to:
 1) deprecate this, mentioning to use the sleeping factory instead
 2) change default of deprecated timeout to 0, so you only sleep if you ask. I 
 am not really sure if matchVersion can be used, because today the default 
 itself is also settable with a static setter -- OVERENGINEERED
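The "sleeping lock factory" Robert refers to can be sketched generically. Interfaces are simplified stand-ins here, not Lucene's actual LockFactory API:

```java
public class SleepingLockSketch {
    interface LockFactory { AutoCloseable obtain() throws Exception; }

    // The "sugar" described above: wrap any lock factory so obtain()
    // retries with a sleep until a timeout, instead of baking the retry
    // loop into IndexWriter itself.
    static LockFactory sleeping(LockFactory in, long timeoutMs, long pollMs) {
        return () -> {
            long deadline = System.currentTimeMillis() + timeoutMs;
            while (true) {
                try {
                    return in.obtain();
                } catch (Exception e) {
                    if (System.currentTimeMillis() >= deadline) throw e;
                    Thread.sleep(pollMs);   // wait and retry, as the old code did
                }
            }
        };
    }

    public static void main(String[] args) throws Exception {
        // A lock that is "held elsewhere" twice, then becomes free.
        final int[] attempts = {0};
        LockFactory flaky = () -> {
            if (attempts[0]++ < 2) throw new Exception("held elsewhere");
            return () -> {};
        };
        try (AutoCloseable lock = sleeping(flaky, 1000, 10).obtain()) {
            System.out.println("obtained after " + attempts[0] + " attempts"); // 3
        }
    }
}
```

With this shape, callers who want blocking behavior opt in explicitly, and IndexWriter's default can simply fail fast.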






Re: VOTE: RC0 release apache-solr-ref-guide-5.2.pdf

2015-06-04 Thread Steve Rowe
+1

Steve

 On Jun 3, 2015, at 1:30 PM, Chris Hostetter hossman_luc...@fucit.org wrote:
 
 
 Please VOTE to release these files as the Solr Ref Guide 5.2...
 
 https://dist.apache.org/repos/dist/dev/lucene/solr/ref-guide/apache-solr-ref-guide-5.2-RC0/
 
 
 NOTE: this vote will be open for a minimum of 72 hours, but i will not call 
 this (ref guide) vote to a close until the 5.2.0 code release is also 
 successful -- just in case there are any last minute bugs found that warrant 
 an update to the ref guide as well.
 
 
 
 -Hoss
 http://www.lucidworks.com/
 
 





Re: TokenOrderingFilter

2015-06-04 Thread david.w.smi...@gmail.com
Hi Dmitry,

Ideally, the token stream produces tokens that have a startOffset >= the
startOffset of the previous token from the stream.  Sometime in the past
year or so, this was enforced at the indexing layer, I think.  There used
to be TokenFilters that violated this contract; I think earlier versions of
WordDelimiterFilter could.  If my assumption that this is asserted at the
indexing layer is correct, then I think TokenOrderingFilter is obsolete.

~ David
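The invariant described above — each token's startOffset must be >= its predecessor's — can be checked with a few lines. This is a hypothetical helper over (start, end) offset pairs, not Lucene code:

```java
import java.util.List;

public class OffsetOrderCheck {
    // Verify the ordering contract that recent Lucene versions enforce
    // at indexing time (which is what would make TokenOrderingFilter
    // obsolete): startOffsets must never go backwards.
    static boolean offsetsOrdered(List<int[]> tokens) { // each int[] = {start, end}
        int lastStart = -1;
        for (int[] t : tokens) {
            if (t[0] < lastStart) return false;
            lastStart = t[0];
        }
        return true;
    }

    public static void main(String[] args) {
        System.out.println(offsetsOrdered(List.of(new int[]{0, 3}, new int[]{4, 8})));  // true
        System.out.println(offsetsOrdered(List.of(new int[]{4, 8}, new int[]{0, 3})));  // false
    }
}
```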

On Thu, Jun 4, 2015 at 7:48 AM Dmitry Kan dmitry.luc...@gmail.com wrote:

Hi guys,

 Sorry for sending questions to the dev list and not to the user one.
 Somehow I'm getting more luck here.

 We have found the class o.a.solr.highlight.TokenOrderingFilter
 with the following comment:


 /**
  * Orders Tokens in a window first by their startOffset ascending.
  * endOffset is currently ignored.
  * This is meant to work around fickleness in the highlighter only.  It
  * can mess up token positions and should not be used for indexing or querying.
  */
 final class TokenOrderingFilter extends TokenFilter {

 In fact, removing this class didn't change the behaviour of the highlighter.

 Could anybody shed light on its necessity?

 Thanks,

 Dmitry Kan




Re: [VOTE] 5.2.0 RC4

2015-06-04 Thread Erik Hatcher
I've had issues running the smoke tester but I ran through a typical Solr 
workflow on RC4 and all was fine. 

+1

Erik

 On Jun 2, 2015, at 23:12, Anshum Gupta ans...@anshumgupta.net wrote:
 
 Please vote for the fourth (and hopefully final) release candidate for Apache 
 Lucene/Solr 5.2.0.
 
 The artifacts can be downloaded from:
 https://dist.apache.org/repos/dist/dev/lucene/lucene-solr-5.2.0-RC4-rev1683206/
 
 You can run the smoke tester directly with this command:
 
 python3 -u dev-tools/scripts/smokeTestRelease.py 
 https://dist.apache.org/repos/dist/dev/lucene/lucene-solr-5.2.0-RC4-rev1683206/
 
 Here's my +1
 
 SUCCESS! [0:32:56.564985]
 
 -- 
 Anshum Gupta


Re: TokenOrderingFilter

2015-06-04 Thread Dmitry Kan
Hi David,

Thanks for your quick reply.

In fact, we do use WDF in 4.10.2. It very much looks as you explain, that
the offsets are preserved in the monotonically increasing order. Here is
the list of filters we use on the indexing side:

solr.MappingCharFilterFactory

solr.StandardTokenizerFactory

solr.StandardFilterFactory

solr.WordDelimiterFilterFactory

solr.LowerCaseFilterFactory

custom filters that do not interfere with the order of the offsets.




On 4 June 2015 at 18:35, david.w.smi...@gmail.com david.w.smi...@gmail.com
wrote:

 Hi Dmitry,

 Ideally, the token stream produces tokens that have a startOffset >= the
 startOffset of the previous token from the stream.  Sometime in the past
 year or so, this was enforced at the indexing layer, I think.  There used
 to be TokenFilters that violated this contract; I think earlier versions of
 WordDelimiterFilter could.  If my assumption that this is asserted at the
 indexing layer is correct, then I think TokenOrderingFilter is obsolete.

 ~ David

 On Thu, Jun 4, 2015 at 7:48 AM Dmitry Kan dmitry.luc...@gmail.com wrote:

Hi guys,

 Sorry for sending questions to the dev list and not to the user one.
 Somehow I'm getting more luck here.

 We have found the class o.a.solr.highlight.TokenOrderingFilter
 with the following comment:


 /**
  * Orders Tokens in a window first by their startOffset ascending.
  * endOffset is currently ignored.
  * This is meant to work around fickleness in the highlighter only.  It
  * can mess up token positions and should not be used for indexing or querying.
  */
 final class TokenOrderingFilter extends TokenFilter {

 In fact, removing this class didn't change the behaviour of the highlighter.

 Could anybody shed light on its necessity?

 Thanks,

 Dmitry Kan




Re: VOTE: RC0 release apache-solr-ref-guide-5.2.pdf

2015-06-04 Thread Chris Hostetter

: Please VOTE to release these files as the Solr Ref Guide 5.2...
: 
: 
https://dist.apache.org/repos/dist/dev/lucene/solr/ref-guide/apache-solr-ref-guide-5.2-RC0/

My vote...

+1 to releasing this artifact...

$ sha1sum apache-solr-ref-guide-5.2.pdf
e1d7d658a0233dc4a46bc6e4951051d4c3935541  apache-solr-ref-guide-5.2.pdf



-Hoss
http://www.lucidworks.com/




Re: VOTE: RC0 release apache-solr-ref-guide-5.2.pdf

2015-06-04 Thread Anshum Gupta
+1, LGTM!

On Wed, Jun 3, 2015 at 10:30 AM, Chris Hostetter hossman_luc...@fucit.org
wrote:


 Please VOTE to release these files as the Solr Ref Guide 5.2...


 https://dist.apache.org/repos/dist/dev/lucene/solr/ref-guide/apache-solr-ref-guide-5.2-RC0/


 NOTE: this vote will be open for a minimum of 72 hours, but i will not
 call this (ref guide) vote to a close until the 5.2.0 code release is also
 successful -- just in case there are any last minute bugs found that
 warrant an update to the ref guide as well.



 -Hoss
 http://www.lucidworks.com/





-- 
Anshum Gupta


[jira] [Commented] (SOLR-7636) CLUSTERSTATUS Api should not go to OCP to fetch data

2015-06-04 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7636?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14572878#comment-14572878
 ] 

ASF subversion and git services commented on SOLR-7636:
---

Commit 1683560 from [~noble.paul] in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1683560 ]

SOLR-7636: Update from ZK before returning the status

 CLUSTERSTATUS Api should not go to OCP to fetch data
 

 Key: SOLR-7636
 URL: https://issues.apache.org/jira/browse/SOLR-7636
 Project: Solr
  Issue Type: Improvement
Reporter: Noble Paul
Assignee: Noble Paul
Priority: Minor
 Attachments: SOLR-7636.patch


 Currently it does  multiple ZK operations which is not required. It should 
 just read the status from ZK and return from the CollectionsHandler 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)




[jira] [Commented] (SOLR-7518) Facet Module should respect shards.tolerant

2015-06-04 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7518?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14572932#comment-14572932
 ] 

ASF subversion and git services commented on SOLR-7518:
---

Commit 1683569 from [~yo...@apache.org] in branch 'dev/trunk'
[ https://svn.apache.org/r1683569 ]

SOLR-7518: make facet module support shards.tolerant

 Facet Module should respect shards.tolerant
 ---

 Key: SOLR-7518
 URL: https://issues.apache.org/jira/browse/SOLR-7518
 Project: Solr
  Issue Type: Bug
  Components: Facet Module
Affects Versions: 5.1
Reporter: Yonik Seeley
 Fix For: 5.2

 Attachments: SOLR-7518.patch


 Merging node currently gets a NPE if one of the shards doesn't return facets






[jira] [Commented] (SOLR-7518) Facet Module should respect shards.tolerant

2015-06-04 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7518?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14572935#comment-14572935
 ] 

ASF subversion and git services commented on SOLR-7518:
---

Commit 1683570 from [~yo...@apache.org] in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1683570 ]

SOLR-7518: make facet module support shards.tolerant

 Facet Module should respect shards.tolerant
 ---

 Key: SOLR-7518
 URL: https://issues.apache.org/jira/browse/SOLR-7518
 Project: Solr
  Issue Type: Bug
  Components: Facet Module
Affects Versions: 5.1
Reporter: Yonik Seeley
Assignee: Yonik Seeley
 Fix For: 5.3

 Attachments: SOLR-7518.patch


 Merging node currently gets a NPE if one of the shards doesn't return facets






[jira] [Updated] (SOLR-7518) Facet Module should respect shards.tolerant

2015-06-04 Thread Yonik Seeley (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7518?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yonik Seeley updated SOLR-7518:
---
Attachment: SOLR-7518.patch

OK, here's a patch that seems to work...
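For context, the failure mode being fixed: the merging node iterates per-shard facet responses and hits an NPE when a tolerated shard failure means one shard returned no facet section. A minimal sketch of the tolerant merge, using plain maps instead of Solr's actual response types — the class and method names below are illustrative, not Solr API:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class TolerantFacetMerge {
    // Merge per-shard facet counts, skipping shards that returned no facet
    // section (a tolerated failure) instead of dereferencing null.
    static Map<String, Integer> merge(List<Map<String, Integer>> shardFacets) {
        Map<String, Integer> merged = new HashMap<>();
        for (Map<String, Integer> facets : shardFacets) {
            if (facets == null) {
                continue; // shards.tolerant=true: a failed shard contributes nothing
            }
            for (Map.Entry<String, Integer> e : facets.entrySet()) {
                merged.merge(e.getKey(), e.getValue(), Integer::sum);
            }
        }
        return merged;
    }

    public static void main(String[] args) {
        List<Map<String, Integer>> shards = new ArrayList<>();
        shards.add(Map.of("red", 3, "blue", 1));
        shards.add(null); // this shard failed and returned no facets
        shards.add(Map.of("red", 2));
        System.out.println(merge(shards));
    }
}
```

The essential change is the null guard before merging, so partial results still come back instead of a 500.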

 Facet Module should respect shards.tolerant
 ---

 Key: SOLR-7518
 URL: https://issues.apache.org/jira/browse/SOLR-7518
 Project: Solr
  Issue Type: Bug
  Components: Facet Module
Affects Versions: 5.1
Reporter: Yonik Seeley
 Fix For: 5.2

 Attachments: SOLR-7518.patch


 Merging node currently gets a NPE if one of the shards doesn't return facets






[JENKINS] Lucene-Solr-trunk-Linux (32bit/jdk1.8.0_45) - Build # 12935 - Failure!

2015-06-04 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/12935/
Java: 32bit/jdk1.8.0_45 -client -XX:+UseG1GC

1 tests failed.
FAILED:  junit.framework.TestSuite.org.apache.solr.cloud.HttpPartitionTest

Error Message:
ObjectTracker found 1 object(s) that were not released!!! [TransactionLog]

Stack Trace:
java.lang.AssertionError: ObjectTracker found 1 object(s) that were not 
released!!! [TransactionLog]
at __randomizedtesting.SeedInfo.seed([DA3EBF3664796F4E]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at org.junit.Assert.assertNull(Assert.java:551)
at org.apache.solr.SolrTestCaseJ4.afterClass(SolrTestCaseJ4.java:235)
at sun.reflect.GeneratedMethodAccessor35.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1627)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:799)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at java.lang.Thread.run(Thread.java:745)




Build Log:
[...truncated 9925 lines...]
   [junit4] Suite: org.apache.solr.cloud.HttpPartitionTest
   [junit4]   2 Creating dataDir: 
/home/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/build/solr-core/test/J2/temp/solr.cloud.HttpPartitionTest
 DA3EBF3664796F4E-001/init-core-data-001
   [junit4]   2 420460 INFO  
(SUITE-HttpPartitionTest-seed#[DA3EBF3664796F4E]-worker) [] 
o.a.s.BaseDistributedSearchTestCase Setting hostContext system property: 
/jyoz/ea
   [junit4]   2 420461 INFO  
(TEST-HttpPartitionTest.test-seed#[DA3EBF3664796F4E]) [] 
o.a.s.c.ZkTestServer STARTING ZK TEST SERVER
   [junit4]   2 420461 INFO  (Thread-947) [] o.a.s.c.ZkTestServer client 
port:0.0.0.0/0.0.0.0:0
   [junit4]   2 420462 INFO  (Thread-947) [] o.a.s.c.ZkTestServer Starting 
server
   [junit4]   2 420562 INFO  
(TEST-HttpPartitionTest.test-seed#[DA3EBF3664796F4E]) [] 
o.a.s.c.ZkTestServer start zk server on port:39246
   [junit4]   2 420562 INFO  
(TEST-HttpPartitionTest.test-seed#[DA3EBF3664796F4E]) [] 
o.a.s.c.c.SolrZkClient Using default ZkCredentialsProvider
   [junit4]   2 420563 INFO  
(TEST-HttpPartitionTest.test-seed#[DA3EBF3664796F4E]) [] 
o.a.s.c.c.ConnectionManager Waiting for client to connect to ZooKeeper
   [junit4]   2 420565 INFO  (zkCallback-758-thread-1) [] 
o.a.s.c.c.ConnectionManager Watcher 
org.apache.solr.common.cloud.ConnectionManager@c83673 name:ZooKeeperConnection 
Watcher:127.0.0.1:39246 got event WatchedEvent state:SyncConnected type:None 
path:null path:null type:None
   [junit4]   2 420565 INFO  
(TEST-HttpPartitionTest.test-seed#[DA3EBF3664796F4E]) [] 
o.a.s.c.c.ConnectionManager Client is connected to ZooKeeper
   [junit4]   2 420565 INFO  
(TEST-HttpPartitionTest.test-seed#[DA3EBF3664796F4E]) [] 
o.a.s.c.c.SolrZkClient Using default ZkACLProvider
   [junit4]   2 420565 INFO  
(TEST-HttpPartitionTest.test-seed#[DA3EBF3664796F4E]) [] 
o.a.s.c.c.SolrZkClient makePath: /solr
   [junit4]   2 420567 INFO  

[jira] [Resolved] (SOLR-7518) Facet Module should respect shards.tolerant

2015-06-04 Thread Yonik Seeley (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7518?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yonik Seeley resolved SOLR-7518.

   Resolution: Fixed
Fix Version/s: (was: 5.2)
   5.3

 Facet Module should respect shards.tolerant
 ---

 Key: SOLR-7518
 URL: https://issues.apache.org/jira/browse/SOLR-7518
 Project: Solr
  Issue Type: Bug
  Components: Facet Module
Affects Versions: 5.1
Reporter: Yonik Seeley
Assignee: Yonik Seeley
 Fix For: 5.3

 Attachments: SOLR-7518.patch


 Merging node currently gets a NPE if one of the shards doesn't return facets






[jira] [Commented] (SOLR-7636) CLUSTERSTATUS Api should not go to OCP to fetch data

2015-06-04 Thread Shalin Shekhar Mangar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7636?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14572776#comment-14572776
 ] 

Shalin Shekhar Mangar commented on SOLR-7636:
-

bq. In the previous model if the node cannot connect to zk how do you expect it 
to send a message to the OCP ?

That's right. It does not go to OCP but errors out. This commit breaks that 
guarantee. After this change, when I issue a clusterstatus command I have no 
idea whether I am getting stale state or not.

 CLUSTERSTATUS Api should not go to OCP to fetch data
 

 Key: SOLR-7636
 URL: https://issues.apache.org/jira/browse/SOLR-7636
 Project: Solr
  Issue Type: Improvement
Reporter: Noble Paul
Assignee: Noble Paul
Priority: Minor
 Attachments: SOLR-7636.patch


 Currently it does  multiple ZK operations which is not required. It should 
 just read the status from ZK and return from the CollectionsHandler 






[jira] [Commented] (SOLR-3719) Add instant search capability to example/files /browse

2015-06-04 Thread Erik Hatcher (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-3719?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14572819#comment-14572819
 ] 

Erik Hatcher commented on SOLR-3719:


Another tweak needed, [~esther.quansah], is to add filters/sort to the instant 
search request. I think adding lensNoQ to the URL should do the trick.

 Add instant search capability to example/files /browse
 

 Key: SOLR-3719
 URL: https://issues.apache.org/jira/browse/SOLR-3719
 Project: Solr
  Issue Type: New Feature
Reporter: Erik Hatcher
Assignee: Erik Hatcher
Priority: Minor
 Fix For: Trunk, 5.3

 Attachments: SOLR-3719.patch, SOLR-3719.patch


 Once upon a time I tinkered with this in a personal github fork 
 https://github.com/erikhatcher/lucene-solr/commits/instant_search/






Re: [VOTE] 5.2.0 RC4

2015-06-04 Thread Adrien Grand
+1

SUCCESS! [0:58:23.506603]

On Wed, Jun 3, 2015 at 5:20 PM, Mark Miller markrmil...@gmail.com wrote:
 +1

 SUCCESS! [0:42:59.405453]

 - Mark

 On Wed, Jun 3, 2015 at 11:12 AM Shalin Shekhar Mangar
 shalinman...@gmail.com wrote:

 +1

 Java7: SUCCESS! [1:06:22.348351]

 Java8: SUCCESS! [1:26:28.496238]

 On Wed, Jun 3, 2015 at 8:42 AM, Anshum Gupta ans...@anshumgupta.net
 wrote:

 Please vote for the fourth (and hopefully final) release candidate for
 Apache Lucene/Solr 5.2.0.

 The artifacts can be downloaded from:

 https://dist.apache.org/repos/dist/dev/lucene/lucene-solr-5.2.0-RC4-rev1683206/

 You can run the smoke tester directly with this command:

 python3 -u dev-tools/scripts/smokeTestRelease.py
 https://dist.apache.org/repos/dist/dev/lucene/lucene-solr-5.2.0-RC4-rev1683206/

 Here's my +1

 SUCCESS! [0:32:56.564985]

 --
 Anshum Gupta




 --
 Regards,
 Shalin Shekhar Mangar.

 --
 - Mark
 about.me/markrmiller



-- 
Adrien




Re: [JENKINS] Lucene-Solr-trunk-Linux (32bit/jdk1.8.0_45) - Build # 12935 - Failure!

2015-06-04 Thread Timothy Potter
Taking a look ...


On Thu, Jun 4, 2015 at 8:50 AM, Policeman Jenkins Server
jenk...@thetaphi.de wrote:
 Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/12935/
 Java: 32bit/jdk1.8.0_45 -client -XX:+UseG1GC

 1 tests failed.
 FAILED:  junit.framework.TestSuite.org.apache.solr.cloud.HttpPartitionTest

 Error Message:
 ObjectTracker found 1 object(s) that were not released!!! [TransactionLog]

 Stack Trace:
 java.lang.AssertionError: ObjectTracker found 1 object(s) that were not 
 released!!! [TransactionLog]
 at __randomizedtesting.SeedInfo.seed([DA3EBF3664796F4E]:0)
 at org.junit.Assert.fail(Assert.java:93)
 at org.junit.Assert.assertTrue(Assert.java:43)
 at org.junit.Assert.assertNull(Assert.java:551)
 at org.apache.solr.SolrTestCaseJ4.afterClass(SolrTestCaseJ4.java:235)
 at sun.reflect.GeneratedMethodAccessor35.invoke(Unknown Source)
 at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke(Method.java:497)
 at 
 com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1627)
 at 
 com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:799)
 at 
 com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
 at 
 com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
 at 
 org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
 at 
 com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
 at 
 org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
 at 
 com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
 at 
 com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
 at 
 com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
 at 
 com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
 at 
 com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
 at 
 org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
 at 
 org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
 at 
 org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
 at 
 org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
 at 
 com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
 at 
 com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
 at java.lang.Thread.run(Thread.java:745)




 Build Log:
 [...truncated 9925 lines...]
[junit4] Suite: org.apache.solr.cloud.HttpPartitionTest
[junit4]   2 Creating dataDir: 
 /home/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/build/solr-core/test/J2/temp/solr.cloud.HttpPartitionTest
  DA3EBF3664796F4E-001/init-core-data-001
[junit4]   2 420460 INFO  
 (SUITE-HttpPartitionTest-seed#[DA3EBF3664796F4E]-worker) [] 
 o.a.s.BaseDistributedSearchTestCase Setting hostContext system property: 
 /jyoz/ea
[junit4]   2 420461 INFO  
 (TEST-HttpPartitionTest.test-seed#[DA3EBF3664796F4E]) [] 
 o.a.s.c.ZkTestServer STARTING ZK TEST SERVER
[junit4]   2 420461 INFO  (Thread-947) [] o.a.s.c.ZkTestServer client 
 port:0.0.0.0/0.0.0.0:0
[junit4]   2 420462 INFO  (Thread-947) [] o.a.s.c.ZkTestServer 
 Starting server
[junit4]   2 420562 INFO  
 (TEST-HttpPartitionTest.test-seed#[DA3EBF3664796F4E]) [] 
 o.a.s.c.ZkTestServer start zk server on port:39246
[junit4]   2 420562 INFO  
 (TEST-HttpPartitionTest.test-seed#[DA3EBF3664796F4E]) [] 
 o.a.s.c.c.SolrZkClient Using default ZkCredentialsProvider
[junit4]   2 420563 INFO  
 (TEST-HttpPartitionTest.test-seed#[DA3EBF3664796F4E]) [] 
 o.a.s.c.c.ConnectionManager Waiting for client to connect to ZooKeeper
[junit4]   2 420565 INFO  (zkCallback-758-thread-1) [] 
 o.a.s.c.c.ConnectionManager Watcher 
 org.apache.solr.common.cloud.ConnectionManager@c83673 
 name:ZooKeeperConnection Watcher:127.0.0.1:39246 got event WatchedEvent 
 state:SyncConnected type:None path:null path:null type:None
[junit4]   2 420565 INFO  
 (TEST-HttpPartitionTest.test-seed#[DA3EBF3664796F4E]) [] 
 o.a.s.c.c.ConnectionManager Client is connected to ZooKeeper
[junit4]   2 420565 INFO  
 (TEST-HttpPartitionTest.test-seed#[DA3EBF3664796F4E]) [] 
 o.a.s.c.c.SolrZkClient Using 

[JENKINS] Lucene-Solr-5.x-Linux (32bit/ibm-j9-jdk7) - Build # 12759 - Failure!

2015-06-04 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-Linux/12759/
Java: 32bit/ibm-j9-jdk7 
-Xjit:exclude={org/apache/lucene/util/fst/FST.pack(IIF)Lorg/apache/lucene/util/fst/FST;}

No tests ran.

Build Log:
[...truncated 311 lines...]
ERROR: Publisher 'Publish JUnit test result report' failed: No test report 
files were found. Configuration error?
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any




[jira] [Commented] (SOLR-7636) CLUSTERSTATUS Api should not go to OCP to fetch data

2015-06-04 Thread Noble Paul (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7636?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14572797#comment-14572797
 ] 

Noble Paul commented on SOLR-7636:
--

The node should try to fetch the latest state and if it is not able to connect 
just fail
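The behaviour being proposed can be sketched in a few lines: refresh the cluster state directly from ZooKeeper when CLUSTERSTATUS is called, and fail fast when ZK is unreachable rather than serving a possibly stale cached copy. The interface and class names below are illustrative stand-ins, not Solr's actual ZkStateReader API:

```java
import java.util.Map;

public class ClusterStatusSketch {
    // Illustrative stand-in for the ZK-backed state reader.
    interface StateSource {
        boolean isConnected();
        Map<String, Object> readClusterState(); // fresh read from ZooKeeper
    }

    // Answer CLUSTERSTATUS straight from ZK: no round-trip through the
    // Overseer queue, and an explicit error instead of stale state.
    static Map<String, Object> clusterStatus(StateSource zk) {
        if (!zk.isConnected()) {
            throw new IllegalStateException(
                "Cannot reach ZooKeeper; refusing to serve possibly stale state");
        }
        return zk.readClusterState();
    }

    public static void main(String[] args) {
        StateSource up = new StateSource() {
            public boolean isConnected() { return true; }
            public Map<String, Object> readClusterState() { return Map.of("live_nodes", 3); }
        };
        System.out.println(clusterStatus(up));
    }
}
```

This preserves the guarantee Shalin raises: a successful response always reflects a fresh ZK read, and a disconnected node errors out just as it would have when trying to reach the OCP.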

 CLUSTERSTATUS Api should not go to OCP to fetch data
 

 Key: SOLR-7636
 URL: https://issues.apache.org/jira/browse/SOLR-7636
 Project: Solr
  Issue Type: Improvement
Reporter: Noble Paul
Assignee: Noble Paul
Priority: Minor
 Attachments: SOLR-7636.patch


 Currently it does  multiple ZK operations which is not required. It should 
 just read the status from ZK and return from the CollectionsHandler 






[jira] [Commented] (LUCENE-5954) Store lucene version in segment_N

2015-06-04 Thread Ryan Ernst (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5954?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14572843#comment-14572843
 ] 

Ryan Ernst commented on LUCENE-5954:


Thanks Mike! This looks great.

A couple questions:
* Do we really need to add compareTo? Couldn't we use the existing onOrAfter? 
It seems weird to have two ways of comparing versions.
* Is there somewhere we could have a more direct test than deletion policy 
tests? I took a quick look but couldn't find anything unit testing the segment 
infos reading/writing...

 Store lucene version in segment_N
 -

 Key: LUCENE-5954
 URL: https://issues.apache.org/jira/browse/LUCENE-5954
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Ryan Ernst
Assignee: Michael McCandless
 Attachments: LUCENE-5954.patch


 It would be nice to have the version of lucene that wrote segments_N, so that 
 we can use this to determine which major version an index was written with 
 (for upgrading across major versions).  I think this could be squeezed in 
 just after the segments_N header.  






[jira] [Commented] (LUCENE-5954) Store lucene version in segment_N

2015-06-04 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5954?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14572876#comment-14572876
 ] 

Robert Muir commented on LUCENE-5954:
-

{code}
if (format = VERSION_53) {
  // TODO: in the future (7.0?  sigh) we can use this to throw 
IndexFormatTooOldException ... or just rely on the
  // minSegmentLuceneVersion check instead:
  infos.luceneVersion = Version.fromBits(input.readVInt(), input.readVInt(), 
input.readVInt());
} else {
  // else leave null
}
{code}

I guess I was hoping we could take it further. We don't technically need to 
change file formats to implement this; it could be computed from the segments 
on read in the 4.0-5.2 case? It's just the min() that it finds there. Or does 
this become too hairy?
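The fallback Robert describes — deriving the commit's version as the minimum of the per-segment versions when segments_N carries no explicit version — can be sketched as follows. `Ver` is a simplified stand-in for Lucene's real `org.apache.lucene.util.Version`; this is not the actual implementation:

```java
import java.util.Arrays;
import java.util.List;

public class MinSegmentVersion {
    // Simplified stand-in for Lucene's Version (major.minor.bugfix).
    record Ver(int major, int minor, int bugfix) implements Comparable<Ver> {
        public int compareTo(Ver o) {
            if (major != o.major) return Integer.compare(major, o.major);
            if (minor != o.minor) return Integer.compare(minor, o.minor);
            return Integer.compare(bugfix, o.bugfix);
        }
    }

    // For pre-5.3 commits with no version recorded in segments_N, compute
    // the minimum across the per-segment versions on read.
    static Ver minVersion(List<Ver> segmentVersions) {
        return segmentVersions.stream().min(Ver::compareTo).orElse(null);
    }

    public static void main(String[] args) {
        List<Ver> segs = Arrays.asList(new Ver(5, 2, 0), new Ver(4, 10, 3), new Ver(5, 0, 0));
        System.out.println(minVersion(segs)); // oldest segment's version wins
    }
}
```

The hairy part is presumably segments written before per-segment versions existed at all, where even the min() has nothing to read.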



 Store lucene version in segment_N
 -

 Key: LUCENE-5954
 URL: https://issues.apache.org/jira/browse/LUCENE-5954
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Ryan Ernst
Assignee: Michael McCandless
 Attachments: LUCENE-5954.patch


 It would be nice to have the version of lucene that wrote segments_N, so that 
 we can use this to determine which major version an index was written with 
 (for upgrading across major versions).  I think this could be squeezed in 
 just after the segments_N header.  






[jira] [Assigned] (SOLR-7518) Facet Module should respect shards.tolerant

2015-06-04 Thread Yonik Seeley (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7518?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yonik Seeley reassigned SOLR-7518:
--

Assignee: Yonik Seeley

 Facet Module should respect shards.tolerant
 ---

 Key: SOLR-7518
 URL: https://issues.apache.org/jira/browse/SOLR-7518
 Project: Solr
  Issue Type: Bug
  Components: Facet Module
Affects Versions: 5.1
Reporter: Yonik Seeley
Assignee: Yonik Seeley
 Fix For: 5.2

 Attachments: SOLR-7518.patch


 Merging node currently gets a NPE if one of the shards doesn't return facets






[jira] [Commented] (SOLR-7636) CLUSTERSTATUS Api should not go to OCP to fetch data

2015-06-04 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7636?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14572874#comment-14572874
 ] 

ASF subversion and git services commented on SOLR-7636:
---

Commit 1683558 from [~noble.paul] in branch 'dev/trunk'
[ https://svn.apache.org/r1683558 ]

SOLR-7636: Update from ZK before returning the status

 CLUSTERSTATUS Api should not go to OCP to fetch data
 

 Key: SOLR-7636
 URL: https://issues.apache.org/jira/browse/SOLR-7636
 Project: Solr
  Issue Type: Improvement
Reporter: Noble Paul
Assignee: Noble Paul
Priority: Minor
 Attachments: SOLR-7636.patch


 Currently it does  multiple ZK operations which is not required. It should 
 just read the status from ZK and return from the CollectionsHandler 






[jira] [Commented] (SOLR-7062) CLUSTERSTATUS returns a collection with state=active, even though the collection could not be created due to a missing configSet

2015-06-04 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7062?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14572227#comment-14572227
 ] 

ASF subversion and git services commented on SOLR-7062:
---

Commit 1683466 from [~noble.paul] in branch 'dev/trunk'
[ https://svn.apache.org/r1683466 ]

SOLR-7062 testcase to reproduce the bug. Looks like it is not reproducible

 CLUSTERSTATUS returns a collection with state=active, even though the 
 collection could not be created due to a missing configSet
 

 Key: SOLR-7062
 URL: https://issues.apache.org/jira/browse/SOLR-7062
 Project: Solr
  Issue Type: Bug
  Components: SolrCloud
Affects Versions: 4.10.3
Reporter: Ng Agi
Assignee: Noble Paul
  Labels: solrcloud
 Attachments: SOLR-7062.patch


 A collection can not be created, if its configSet does not exist. 
 Nevertheless, a subsequent CLUSTERSTATUS CollectionAdminRequest returns this 
 collection with a state=active.
 See log below.
 {noformat}
 [INFO] Overseer Collection Processor: Get the message 
 id:/overseer/collection-queue-work/qn-000110 message:{
   "operation":"createcollection",
   "fromApi":"true",
   "name":"blueprint_media_comments",
   "collection.configName":"elastic",
   "numShards":"1",
   "property.dataDir":"data",
   "property.instanceDir":"cores/blueprint_media_comments"}
 [WARNING] OverseerCollectionProcessor.processMessage : createcollection , {
   "operation":"createcollection",
   "fromApi":"true",
   "name":"blueprint_media_comments",
   "collection.configName":"elastic",
   "numShards":"1",
   "property.dataDir":"data",
   "property.instanceDir":"cores/blueprint_media_comments"}
 [INFO] creating collections conf node /collections/blueprint_media_comments 
 [INFO] makePath: /collections/blueprint_media_comments
 [INFO] Got user-level KeeperException when processing 
 sessionid:0x14b315b0f4a000e type:create cxid:0x2f2e zxid:0x2f4 txntype:-1 
 reqpath:n/a Error Path:/overseer Error:KeeperErrorCode = NodeExists for 
 /overseer
 [INFO] LatchChildWatcher fired on path: /overseer/queue state: SyncConnected 
 type NodeChildrenChanged
 [INFO] building a new collection: blueprint_media_comments
 [INFO] Create collection blueprint_media_comments with shards [shard1]
 [INFO] A cluster state change: WatchedEvent state:SyncConnected 
 type:NodeDataChanged path:/clusterstate.json, has occurred - updating... 
 (live nodes size: 1)
 [INFO] Creating SolrCores for new collection blueprint_media_comments, 
 shardNames [shard1] , replicationFactor : 1
 [INFO] Creating shard blueprint_media_comments_shard1_replica1 as part of 
 slice shard1 of collection blueprint_media_comments on localhost:44080_solr
 [INFO] core create command 
 qt=/admin/cores&property.dataDir=data&collection.configName=elastic&name=blueprint_media_comments_shard1_replica1&action=CREATE&numShards=1&collection=blueprint_media_comments&shard=shard1&wt=javabin&version=2&property.instanceDir=cores/blueprint_media_comments
 [INFO] publishing core=blueprint_media_comments_shard1_replica1 state=down 
 collection=blueprint_media_comments
 [INFO] LatchChildWatcher fired on path: /overseer/queue state: SyncConnected 
 type NodeChildrenChanged
 [INFO] look for our core node name
 [INFO] Update state numShards=1 message={
   "core":"blueprint_media_comments_shard1_replica1",
   "roles":null,
   "base_url":"http://localhost:44080/solr",
   "node_name":"localhost:44080_solr",
   "numShards":"1",
   "state":"down",
   "shard":"shard1",
   "collection":"blueprint_media_comments",
   "operation":"state"}
 [INFO] A cluster state change: WatchedEvent state:SyncConnected 
 type:NodeDataChanged path:/clusterstate.json, has occurred - updating... 
 (live nodes size: 1)
 [INFO] waiting to find shard id in clusterstate for 
 blueprint_media_comments_shard1_replica1
 [INFO] Check for collection zkNode:blueprint_media_comments
 [INFO] Collection zkNode exists
 [INFO] Load collection config from:/collections/blueprint_media_comments
 [ERROR] Specified config does not exist in ZooKeeper:elastic
 [ERROR] Error creating core [blueprint_media_comments_shard1_replica1]: 
 Specified config does not exist in ZooKeeper:elastic
 org.apache.solr.common.cloud.ZooKeeperException: Specified config does not 
 exist in ZooKeeper:elastic
   at 
 org.apache.solr.common.cloud.ZkStateReader.readConfigName(ZkStateReader.java:160)
   at 
 org.apache.solr.cloud.CloudConfigSetService.createCoreResourceLoader(CloudConfigSetService.java:37)
   at 
 org.apache.solr.core.ConfigSetService.getConfig(ConfigSetService.java:58)
   at org.apache.solr.core.CoreContainer.create(CoreContainer.java:489)
   at org.apache.solr.core.CoreContainer.create(CoreContainer.java:466)
   at 
 

[jira] [Updated] (SOLR-7635) bin/solr -e cloud can fail on MacOS

2015-06-04 Thread Upayavira (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7635?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Upayavira updated SOLR-7635:

Attachment: SOLR-7635.patch

A patch that covers non-cloud mode case too.

 bin/solr -e cloud can fail on MacOS
 ---

 Key: SOLR-7635
 URL: https://issues.apache.org/jira/browse/SOLR-7635
 Project: Solr
  Issue Type: Bug
  Components: scripts and tools
Affects Versions: 5.2
 Environment: Unix
Reporter: Upayavira
Priority: Minor
 Attachments: SOLR-7635.patch, SOLR-7635.patch


 On MacOS:
 bin/solr -e cloud 
 said:
 Please enter the port for node1 [8983]
 Oops! Looks like port 8983 is already being used by another process. Please 
 choose a different port.
 Looking at the script, it uses:
 PORT_IN_USE=`lsof -Pni:$CLOUD_PORT`
 which gave the output:
 {{
 COMMAND PID  USER   FD   TYPE DEVICE SIZE/OFF NODE NAME
 Google  365 upayavira  130u  IPv6 0xab1d227df2e5a7db  0t0  TCP 
 [::1]:49889->[::1]:8983 (ESTABLISHED)
 java  10889 upayavira  118u  IPv6 0xab1d227df2e73ddb  0t0  TCP *:8983 
 (LISTEN)
 java  10889 upayavira  134u  IPv6 0xab1d227df2e756db  0t0  TCP 
 [::1]:8983->[::1]:49889 (ESTABLISHED)
 }}
 This was connections Google Chrome was attempting to make to Solr. 
 Replacing the above line with this:
 PORT_IN_USE=`lsof -Pni:$CLOUD_PORT | grep LISTEN`
 resolved the issue. Very simple patch attached.
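 The difference between the two checks can be shown on reconstructed sample data resembling the lsof output above (not live output):

```shell
# Reconstructed sample resembling the report's lsof rows (illustrative only).
sample='Google    365 upayavira 130u IPv6 TCP [::1]:49889->[::1]:8983 (ESTABLISHED)
java    10889 upayavira 118u IPv6 TCP *:8983 (LISTEN)'

# Old check: PORT_IN_USE=`lsof -Pni:$CLOUD_PORT` is non-empty for ANY row,
# so a browser's outbound ESTABLISHED connection triggers the false alarm.
old_check="$sample"

# Patched check: only sockets actually LISTENing on the port count as in-use.
new_check=$(printf '%s\n' "$sample" | grep LISTEN)

echo "$new_check"
```

 Filtering on the LISTEN state is the key: only a bound server process, not a client connection, should make the script report the port as taken.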






[jira] [Updated] (SOLR-7635) bin/solr -e cloud can fail on MacOS

2015-06-04 Thread Upayavira (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7635?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Upayavira updated SOLR-7635:

Attachment: SOLR-7635.patch

 bin/solr -e cloud can fail on MacOS
 ---

 Key: SOLR-7635
 URL: https://issues.apache.org/jira/browse/SOLR-7635
 Project: Solr
  Issue Type: Bug
  Components: scripts and tools
Affects Versions: 5.2
 Environment: Unix
Reporter: Upayavira
Priority: Minor
 Attachments: SOLR-7635.patch


 On MacOS:
 bin/solr -e cloud 
 said:
 Please enter the port for node1 [8983]
 Oops! Looks like port 8983 is already being used by another process. Please 
 choose a different port.
 Looking at the script, it uses:
 PORT_IN_USE=`lsof -Pni:$CLOUD_PORT`
 which gave the output:
 {{
 COMMAND PID  USER   FD   TYPE DEVICE SIZE/OFF NODE NAME
 Google  365 upayavira  130u  IPv6 0xab1d227df2e5a7db  0t0  TCP 
 [::1]:49889->[::1]:8983 (ESTABLISHED)
 java  10889 upayavira  118u  IPv6 0xab1d227df2e73ddb  0t0  TCP *:8983 
 (LISTEN)
 java  10889 upayavira  134u  IPv6 0xab1d227df2e756db  0t0  TCP 
 [::1]:8983->[::1]:49889 (ESTABLISHED)
 }}
 These were connections Google Chrome was attempting to make to Solr. 
 Replacing the above line with this:
 PORT_IN_USE=`lsof -Pni:$CLOUD_PORT | grep LISTEN`
 resolved the issue. Very simple patch attached.






[jira] [Commented] (SOLR-7062) CLUSTERSTATUS returns a collection with state=active, even though the collection could not be created due to a missing configSet

2015-06-04 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7062?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14572236#comment-14572236
 ] 

ASF subversion and git services commented on SOLR-7062:
---

Commit 1683467 from [~noble.paul] in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1683467 ]

SOLR-7062 testcase to reproduce the bug. Looks like it is not reproducible

 CLUSTERSTATUS returns a collection with state=active, even though the 
 collection could not be created due to a missing configSet
 

 Key: SOLR-7062
 URL: https://issues.apache.org/jira/browse/SOLR-7062
 Project: Solr
  Issue Type: Bug
  Components: SolrCloud
Affects Versions: 4.10.3
Reporter: Ng Agi
Assignee: Noble Paul
  Labels: solrcloud
 Attachments: SOLR-7062.patch


 A collection cannot be created if its configSet does not exist. 
 Nevertheless, a subsequent CLUSTERSTATUS CollectionAdminRequest returns this 
 collection with state=active.
 See log below.
 {noformat}
 [INFO] Overseer Collection Processor: Get the message 
 id:/overseer/collection-queue-work/qn-000110 message:{
   "operation":"createcollection",
   "fromApi":"true",
   "name":"blueprint_media_comments",
   "collection.configName":"elastic",
   "numShards":"1",
   "property.dataDir":"data",
   "property.instanceDir":"cores/blueprint_media_comments"}
 [WARNING] OverseerCollectionProcessor.processMessage : createcollection , {
   "operation":"createcollection",
   "fromApi":"true",
   "name":"blueprint_media_comments",
   "collection.configName":"elastic",
   "numShards":"1",
   "property.dataDir":"data",
   "property.instanceDir":"cores/blueprint_media_comments"}
 [INFO] creating collections conf node /collections/blueprint_media_comments 
 [INFO] makePath: /collections/blueprint_media_comments
 [INFO] Got user-level KeeperException when processing 
 sessionid:0x14b315b0f4a000e type:create cxid:0x2f2e zxid:0x2f4 txntype:-1 
 reqpath:n/a Error Path:/overseer Error:KeeperErrorCode = NodeExists for 
 /overseer
 [INFO] LatchChildWatcher fired on path: /overseer/queue state: SyncConnected 
 type NodeChildrenChanged
 [INFO] building a new collection: blueprint_media_comments
 [INFO] Create collection blueprint_media_comments with shards [shard1]
 [INFO] A cluster state change: WatchedEvent state:SyncConnected 
 type:NodeDataChanged path:/clusterstate.json, has occurred - updating... 
 (live nodes size: 1)
 [INFO] Creating SolrCores for new collection blueprint_media_comments, 
 shardNames [shard1] , replicationFactor : 1
 [INFO] Creating shard blueprint_media_comments_shard1_replica1 as part of 
 slice shard1 of collection blueprint_media_comments on localhost:44080_solr
 [INFO] core create command 
qt=/admin/cores&property.dataDir=data&collection.configName=elastic&name=blueprint_media_comments_shard1_replica1&action=CREATE&numShards=1&collection=blueprint_media_comments&shard=shard1&wt=javabin&version=2&property.instanceDir=cores/blueprint_media_comments
 [INFO] publishing core=blueprint_media_comments_shard1_replica1 state=down 
 collection=blueprint_media_comments
 [INFO] LatchChildWatcher fired on path: /overseer/queue state: SyncConnected 
 type NodeChildrenChanged
 [INFO] look for our core node name
 [INFO] Update state numShards=1 message={
   "core":"blueprint_media_comments_shard1_replica1",
   "roles":null,
   "base_url":"http://localhost:44080/solr",
   "node_name":"localhost:44080_solr",
   "numShards":"1",
   "state":"down",
   "shard":"shard1",
   "collection":"blueprint_media_comments",
   "operation":"state"}
 [INFO] A cluster state change: WatchedEvent state:SyncConnected 
 type:NodeDataChanged path:/clusterstate.json, has occurred - updating... 
 (live nodes size: 1)
 [INFO] waiting to find shard id in clusterstate for 
 blueprint_media_comments_shard1_replica1
 [INFO] Check for collection zkNode:blueprint_media_comments
 [INFO] Collection zkNode exists
 [INFO] Load collection config from:/collections/blueprint_media_comments
 [ERROR] Specified config does not exist in ZooKeeper:elastic
 [ERROR] Error creating core [blueprint_media_comments_shard1_replica1]: 
 Specified config does not exist in ZooKeeper:elastic
 org.apache.solr.common.cloud.ZooKeeperException: Specified config does not 
 exist in ZooKeeper:elastic
   at 
 org.apache.solr.common.cloud.ZkStateReader.readConfigName(ZkStateReader.java:160)
   at 
 org.apache.solr.cloud.CloudConfigSetService.createCoreResourceLoader(CloudConfigSetService.java:37)
   at 
 org.apache.solr.core.ConfigSetService.getConfig(ConfigSetService.java:58)
   at org.apache.solr.core.CoreContainer.create(CoreContainer.java:489)
   at org.apache.solr.core.CoreContainer.create(CoreContainer.java:466)
   at 
 

[jira] [Created] (SOLR-7635) bin/solr -e cloud can fail on MacOS

2015-06-04 Thread Upayavira (JIRA)
Upayavira created SOLR-7635:
---

 Summary: bin/solr -e cloud can fail on MacOS
 Key: SOLR-7635
 URL: https://issues.apache.org/jira/browse/SOLR-7635
 Project: Solr
  Issue Type: Bug
  Components: scripts and tools
Affects Versions: 5.2
 Environment: Unix
Reporter: Upayavira
Priority: Minor


On MacOS:

bin/solr -e cloud 
said:
Please enter the port for node1 [8983]
Oops! Looks like port 8983 is already being used by another process. Please 
choose a different port.

Looking at the script, it uses:
PORT_IN_USE=`lsof -Pni:$CLOUD_PORT`
which gave the output:
{{
COMMAND PID  USER   FD   TYPE DEVICE SIZE/OFF NODE NAME
Google  365 upayavira  130u  IPv6 0xab1d227df2e5a7db  0t0  TCP 
[::1]:49889->[::1]:8983 (ESTABLISHED)
java  10889 upayavira  118u  IPv6 0xab1d227df2e73ddb  0t0  TCP *:8983 
(LISTEN)
java  10889 upayavira  134u  IPv6 0xab1d227df2e756db  0t0  TCP 
[::1]:8983->[::1]:49889 (ESTABLISHED)
}}
These were connections Google Chrome was attempting to make to Solr. 

Replacing the above line with this:

PORT_IN_USE=`lsof -Pni:$CLOUD_PORT | grep LISTEN`

resolved the issue. Very simple patch attached.
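The effect of the one-line fix can be sketched against sample output modeled on the lsof listing above (fields shortened, no live lsof call; variable names are illustrative only):

```shell
# Two sockets on port 8983, as in the report: Chrome's ESTABLISHED
# connection and the java server actually LISTENing on the port.
LSOF_OUTPUT='Google   365 upayavira TCP [::1]:49889->[::1]:8983 (ESTABLISHED)
java   10889 upayavira TCP *:8983 (LISTEN)'

# Old check: every socket touching the port counts, so a mere browser
# connection makes the script claim the port is taken.
OLD_MATCHES=$(printf '%s\n' "$LSOF_OUTPUT" | grep -c ':8983')

# Fixed check: only LISTEN sockets count, i.e. a process actually bound.
NEW_MATCHES=$(printf '%s\n' "$LSOF_OUTPUT" | grep -c 'LISTEN')

echo "old=$OLD_MATCHES new=$NEW_MATCHES"
```

With the grep LISTEN filter, a port that only has stray client connections is no longer reported as in use.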








[jira] [Updated] (SOLR-7636) CLUSTERSTATUS Api should not go to OCP to fetch data

2015-06-04 Thread Noble Paul (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7636?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Noble Paul updated SOLR-7636:
-
Attachment: SOLR-7636.patch

 CLUSTERSTATUS Api should not go to OCP to fetch data
 

 Key: SOLR-7636
 URL: https://issues.apache.org/jira/browse/SOLR-7636
 Project: Solr
  Issue Type: Improvement
Reporter: Noble Paul
Assignee: Noble Paul
Priority: Minor
 Attachments: SOLR-7636.patch


 Currently it does multiple ZK operations, which is not required. It should 
 just read the status from ZK and return it from the CollectionsHandler. 
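For context, the request itself does not change; a typical invocation looks like this (local server address assumed, sketch only):

```shell
# Hypothetical CLUSTERSTATUS request. Before this change the handler
# queued the request for the Overseer Collection Processor; with it,
# CollectionsHandler builds the response directly from the cluster
# state it already watches in ZooKeeper.
SOLR_URL="http://localhost:8983/solr"
REQUEST="$SOLR_URL/admin/collections?action=CLUSTERSTATUS&wt=json"
echo "GET $REQUEST"
```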






[jira] [Commented] (SOLR-7636) CLUSTERSTATUS Api should not go to OCP to fetch data

2015-06-04 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7636?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14572531#comment-14572531
 ] 

ASF subversion and git services commented on SOLR-7636:
---

Commit 1683514 from [~noble.paul] in branch 'dev/trunk'
[ https://svn.apache.org/r1683514 ]

SOLR-7636: CLUSTERSTATUS API is executed at CollectionsHandler

 CLUSTERSTATUS Api should not go to OCP to fetch data
 

 Key: SOLR-7636
 URL: https://issues.apache.org/jira/browse/SOLR-7636
 Project: Solr
  Issue Type: Improvement
Reporter: Noble Paul
Assignee: Noble Paul
Priority: Minor
 Attachments: SOLR-7636.patch


 Currently it does multiple ZK operations, which is not required. It should 
 just read the status from ZK and return it from the CollectionsHandler. 






[jira] [Commented] (SOLR-7636) CLUSTERSTATUS Api should not go to OCP to fetch data

2015-06-04 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7636?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14572533#comment-14572533
 ] 

ASF subversion and git services commented on SOLR-7636:
---

Commit 1683515 from [~noble.paul] in branch 'dev/trunk'
[ https://svn.apache.org/r1683515 ]

SOLR-7636: set svn eol style

 CLUSTERSTATUS Api should not go to OCP to fetch data
 

 Key: SOLR-7636
 URL: https://issues.apache.org/jira/browse/SOLR-7636
 Project: Solr
  Issue Type: Improvement
Reporter: Noble Paul
Assignee: Noble Paul
Priority: Minor
 Attachments: SOLR-7636.patch


 Currently it does multiple ZK operations, which is not required. It should 
 just read the status from ZK and return it from the CollectionsHandler. 






[JENKINS] Lucene-Solr-trunk-MacOSX (64bit/jdk1.8.0) - Build # 2382 - Failure!

2015-06-04 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-MacOSX/2382/
Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseG1GC

1 tests failed.
FAILED:  org.apache.solr.cloud.CollectionsAPIDistributedZkTest.test

Error Message:
Captured an uncaught exception in thread: Thread[id=1698, name=collection0, 
state=RUNNABLE, group=TGRP-CollectionsAPIDistributedZkTest]

Stack Trace:
com.carrotsearch.randomizedtesting.UncaughtExceptionError: Captured an uncaught 
exception in thread: Thread[id=1698, name=collection0, state=RUNNABLE, 
group=TGRP-CollectionsAPIDistributedZkTest]
at 
__randomizedtesting.SeedInfo.seed([D1112070731C8049:59451FAADDE0EDB1]:0)
Caused by: 
org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error 
from server at https://127.0.0.1:64717/ed/br: Could not find collection : 
awholynewstresscollection_collection0_0
at __randomizedtesting.SeedInfo.seed([D1112070731C8049]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:560)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:235)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:227)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.doRequest(LBHttpSolrClient.java:376)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:328)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1086)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:856)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:799)
at org.apache.solr.client.solrj.SolrClient.request(SolrClient.java:1220)
at 
org.apache.solr.cloud.CollectionsAPIDistributedZkTest$1CollectionThread.run(CollectionsAPIDistributedZkTest.java:902)




Build Log:
[...truncated 9561 lines...]
   [junit4] Suite: org.apache.solr.cloud.CollectionsAPIDistributedZkTest
   [junit4]   2 Creating dataDir: 
/Users/jenkins/workspace/Lucene-Solr-trunk-MacOSX/solr/build/solr-core/test/J1/temp/solr.cloud.CollectionsAPIDistributedZkTest
 D1112070731C8049-001/init-core-data-001
   [junit4]   2 289094 INFO  
(SUITE-CollectionsAPIDistributedZkTest-seed#[D1112070731C8049]-worker) [] 
o.a.s.SolrTestCaseJ4 Randomized ssl (true) and clientAuth (false)
   [junit4]   2 289095 INFO  
(SUITE-CollectionsAPIDistributedZkTest-seed#[D1112070731C8049]-worker) [] 
o.a.s.BaseDistributedSearchTestCase Setting hostContext system property: /ed/br
   [junit4]   2 289099 INFO  
(TEST-CollectionsAPIDistributedZkTest.test-seed#[D1112070731C8049]) [] 
o.a.s.c.ZkTestServer STARTING ZK TEST SERVER
   [junit4]   2 289100 INFO  (Thread-586) [] o.a.s.c.ZkTestServer client 
port:0.0.0.0/0.0.0.0:0
   [junit4]   2 289100 INFO  (Thread-586) [] o.a.s.c.ZkTestServer Starting 
server
   [junit4]   2 289201 INFO  
(TEST-CollectionsAPIDistributedZkTest.test-seed#[D1112070731C8049]) [] 
o.a.s.c.ZkTestServer start zk server on port:64707
   [junit4]   2 289201 INFO  
(TEST-CollectionsAPIDistributedZkTest.test-seed#[D1112070731C8049]) [] 
o.a.s.c.c.SolrZkClient Using default ZkCredentialsProvider
   [junit4]   2 289205 INFO  
(TEST-CollectionsAPIDistributedZkTest.test-seed#[D1112070731C8049]) [] 
o.a.s.c.c.ConnectionManager Waiting for client to connect to ZooKeeper
   [junit4]   2 289223 INFO  (zkCallback-122-thread-1) [] 
o.a.s.c.c.ConnectionManager Watcher 
org.apache.solr.common.cloud.ConnectionManager@59ad8950 
name:ZooKeeperConnection Watcher:127.0.0.1:64707 got event WatchedEvent 
state:SyncConnected type:None path:null path:null type:None
   [junit4]   2 289227 INFO  
(TEST-CollectionsAPIDistributedZkTest.test-seed#[D1112070731C8049]) [] 
o.a.s.c.c.ConnectionManager Client is connected to ZooKeeper
   [junit4]   2 289231 INFO  
(TEST-CollectionsAPIDistributedZkTest.test-seed#[D1112070731C8049]) [] 
o.a.s.c.c.SolrZkClient Using default ZkACLProvider
   [junit4]   2 289232 INFO  
(TEST-CollectionsAPIDistributedZkTest.test-seed#[D1112070731C8049]) [] 
o.a.s.c.c.SolrZkClient makePath: /solr
   [junit4]   2 289243 INFO  
(TEST-CollectionsAPIDistributedZkTest.test-seed#[D1112070731C8049]) [] 
o.a.s.c.c.SolrZkClient Using default ZkCredentialsProvider
   [junit4]   2 289243 WARN  (NIOServerCxn.Factory:0.0.0.0/0.0.0.0:0) [] 
o.a.z.s.NIOServerCnxn caught end of stream exception
   [junit4]   2 EndOfStreamException: Unable to read additional data from 
client sessionid 0x14dbdee57ff, likely client has closed socket
   [junit4]   2at 
org.apache.zookeeper.server.NIOServerCnxn.doIO(NIOServerCnxn.java:228)
   [junit4]   2at 
org.apache.zookeeper.server.NIOServerCnxnFactory.run(NIOServerCnxnFactory.java:208)
   [junit4]   2at java.lang.Thread.run(Thread.java:745)
   [junit4]   2 289245 INFO  

[jira] [Commented] (SOLR-4506) [solr4.0.0] many index.{date} dir in replication node

2015-06-04 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4506?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14572526#comment-14572526
 ] 

Mark Miller commented on SOLR-4506:
---

+1, LGTM.

 [solr4.0.0] many index.{date} dir in replication node 
 --

 Key: SOLR-4506
 URL: https://issues.apache.org/jira/browse/SOLR-4506
 Project: Solr
  Issue Type: Bug
  Components: SolrCloud
Affects Versions: 4.0
 Environment: the solr4.0 runs on suse11.
 mem:32G
 cpu:16 cores
Reporter: zhuojunjian
Assignee: Timothy Potter
Priority: Minor
 Fix For: Trunk, 5.3

 Attachments: SOLR-4506.patch

   Original Estimate: 12h
  Remaining Estimate: 12h

 In our test, we used the solrcloud feature in solr4.0 (version detail: 
 4.0.0.2012.10.06.03.04.33).
 The solrcloud configuration is 3 shards with 2 replicas per shard.
 We found that there are more than 25 dirs named index.{date} in one 
 replica node belonging to shard 3. 
 For example:
 index.2013021725864  index.20130218012211880  index.20130218015714713  
 index.20130218023101958  index.20130218030424083  tlog
 index.20130218005648324  index.20130218012751078  index.20130218020141293  
 The issue seems like SOLR-1781, but that was fixed in 4.0-BETA and 5.0. 
 Is it fixed in solr4.0 too? If so, why do we see the issue again?
 What can I do?
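The symptom can be counted mechanically; a minimal sketch using a temp directory (directory names modeled on the listing above, nothing touches a real Solr install):

```shell
# Create a fake replica data dir containing leftover index.<timestamp>
# snapshot directories, then count them the way an operator might.
DATA_DIR=$(mktemp -d)
mkdir "$DATA_DIR/index.20130218012211880" \
      "$DATA_DIR/index.20130218015714713" \
      "$DATA_DIR/tlog"
# Each failed/abandoned replication attempt leaves one index.* dir behind.
STALE=$(ls -d "$DATA_DIR"/index.* | grep -c 'index\.')
echo "stale index dirs: $STALE"
rm -rf "$DATA_DIR"
```

On a healthy replica this count should be at most one besides the live index directory; dozens of them, as reported, indicate the cleanup bug.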






[JENKINS] Lucene-Solr-NightlyTests-trunk - Build # 702 - Still Failing

2015-06-04 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-trunk/702/

2 tests failed.
FAILED:  org.apache.solr.cloud.CollectionsAPIDistributedZkTest.test

Error Message:
Captured an uncaught exception in thread: Thread[id=6293, name=collection0, 
state=RUNNABLE, group=TGRP-CollectionsAPIDistributedZkTest]

Stack Trace:
com.carrotsearch.randomizedtesting.UncaughtExceptionError: Captured an uncaught 
exception in thread: Thread[id=6293, name=collection0, state=RUNNABLE, 
group=TGRP-CollectionsAPIDistributedZkTest]
Caused by: java.lang.RuntimeException: 
org.apache.solr.client.solrj.SolrServerException: No live SolrServers available 
to handle this request:[http://127.0.0.1:54972, http://127.0.0.1:56951, 
http://127.0.0.1:55465, http://127.0.0.1:49733, http://127.0.0.1:48700]
at __randomizedtesting.SeedInfo.seed([96917935C3A4C81C]:0)
at 
org.apache.solr.cloud.CollectionsAPIDistributedZkTest$1CollectionThread.run(CollectionsAPIDistributedZkTest.java:870)
Caused by: org.apache.solr.client.solrj.SolrServerException: No live 
SolrServers available to handle this request:[http://127.0.0.1:54972, 
http://127.0.0.1:56951, http://127.0.0.1:55465, http://127.0.0.1:49733, 
http://127.0.0.1:48700]
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:355)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1086)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:856)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:799)
at org.apache.solr.client.solrj.SolrClient.request(SolrClient.java:1220)
at 
org.apache.solr.cloud.CollectionsAPIDistributedZkTest$1CollectionThread.run(CollectionsAPIDistributedZkTest.java:867)
Caused by: 
org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error 
from server at http://127.0.0.1:56951: KeeperErrorCode = Session expired for 
/overseer/collection-queue-work/qn-
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:560)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:235)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:227)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.doRequest(LBHttpSolrClient.java:376)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:328)
... 5 more


FAILED:  org.apache.solr.search.TestSearcherReuse.test

Error Message:
expected same:<Searcher@56eb927d[collection1] 
main{ExitableDirectoryReader(UninvertingDirectoryReader(Uninverting(_0(6.0.0):C1) 
Uninverting(_1(6.0.0):C1) Uninverting(_2(6.0.0):C3) 
Uninverting(_3(6.0.0):C1)))}> was not:<Searcher@7315e650[collection1] 
main{ExitableDirectoryReader(UninvertingDirectoryReader(Uninverting(_0(6.0.0):C1) 
Uninverting(_1(6.0.0):C1) Uninverting(_2(6.0.0):C3) 
Uninverting(_3(6.0.0):C1)))}>

Stack Trace:
java.lang.AssertionError: expected same:<Searcher@56eb927d[collection1] 
main{ExitableDirectoryReader(UninvertingDirectoryReader(Uninverting(_0(6.0.0):C1) 
Uninverting(_1(6.0.0):C1) Uninverting(_2(6.0.0):C3) 
Uninverting(_3(6.0.0):C1)))}> was not:<Searcher@7315e650[collection1] 
main{ExitableDirectoryReader(UninvertingDirectoryReader(Uninverting(_0(6.0.0):C1) 
Uninverting(_1(6.0.0):C1) Uninverting(_2(6.0.0):C3) 
Uninverting(_3(6.0.0):C1)))}>
at 
__randomizedtesting.SeedInfo.seed([96917935C3A4C81C:1EC546EF6D58A5E4]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.failNotSame(Assert.java:641)
at org.junit.Assert.assertSame(Assert.java:580)
at org.junit.Assert.assertSame(Assert.java:593)
at 
org.apache.solr.search.TestSearcherReuse.assertSearcherHasNotChanged(TestSearcherReuse.java:247)
at 
org.apache.solr.search.TestSearcherReuse.test(TestSearcherReuse.java:117)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:483)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1627)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:872)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:886)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 

[JENKINS] Lucene-Solr-NightlyTests-5.x - Build # 866 - Still Failing

2015-06-04 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-5.x/866/

1 tests failed.
FAILED:  org.apache.solr.cloud.CollectionsAPIDistributedZkTest.test

Error Message:
Captured an uncaught exception in thread: Thread[id=2961, name=collection0, 
state=RUNNABLE, group=TGRP-CollectionsAPIDistributedZkTest]

Stack Trace:
com.carrotsearch.randomizedtesting.UncaughtExceptionError: Captured an uncaught 
exception in thread: Thread[id=2961, name=collection0, state=RUNNABLE, 
group=TGRP-CollectionsAPIDistributedZkTest]
Caused by: 
org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error 
from server at http://127.0.0.1:58135/p_bj: collection already exists: 
awholynewstresscollection_collection0_0
at __randomizedtesting.SeedInfo.seed([6D19F8D76D8494DD]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:560)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:235)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:227)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.doRequest(LBHttpSolrClient.java:376)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:328)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1086)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:856)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:799)
at org.apache.solr.client.solrj.SolrClient.request(SolrClient.java:1220)
at 
org.apache.solr.cloud.AbstractFullDistribZkTestBase.createCollection(AbstractFullDistribZkTestBase.java:1607)
at 
org.apache.solr.cloud.AbstractFullDistribZkTestBase.createCollection(AbstractFullDistribZkTestBase.java:1628)
at 
org.apache.solr.cloud.CollectionsAPIDistributedZkTest$1CollectionThread.run(CollectionsAPIDistributedZkTest.java:910)




Build Log:
[...truncated 10198 lines...]
   [junit4] Suite: org.apache.solr.cloud.CollectionsAPIDistributedZkTest
   [junit4]   2 Creating dataDir: 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-5.x/solr/build/solr-core/test/J1/temp/solr.cloud.CollectionsAPIDistributedZkTest
 6D19F8D76D8494DD-001/init-core-data-001
   [junit4]   2 281369 INFO  
(SUITE-CollectionsAPIDistributedZkTest-seed#[6D19F8D76D8494DD]-worker) [] 
o.a.s.SolrTestCaseJ4 Randomized ssl (false) and clientAuth (false)
   [junit4]   2 281369 INFO  
(SUITE-CollectionsAPIDistributedZkTest-seed#[6D19F8D76D8494DD]-worker) [] 
o.a.s.BaseDistributedSearchTestCase Setting hostContext system property: /p_bj/
   [junit4]   2 281374 INFO  
(TEST-CollectionsAPIDistributedZkTest.test-seed#[6D19F8D76D8494DD]) [] 
o.a.s.c.ZkTestServer STARTING ZK TEST SERVER
   [junit4]   2 281374 INFO  (Thread-644) [] o.a.s.c.ZkTestServer client 
port:0.0.0.0/0.0.0.0:0
   [junit4]   2 281374 INFO  (Thread-644) [] o.a.s.c.ZkTestServer Starting 
server
   [junit4]   2 281474 INFO  
(TEST-CollectionsAPIDistributedZkTest.test-seed#[6D19F8D76D8494DD]) [] 
o.a.s.c.ZkTestServer start zk server on port:49560
   [junit4]   2 281488 INFO  
(TEST-CollectionsAPIDistributedZkTest.test-seed#[6D19F8D76D8494DD]) [] 
o.a.s.c.AbstractZkTestCase put 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-5.x/solr/core/src/test-files/solr/collection1/conf/solrconfig-tlog.xml
 to /configs/conf1/solrconfig.xml
   [junit4]   2 281491 INFO  
(TEST-CollectionsAPIDistributedZkTest.test-seed#[6D19F8D76D8494DD]) [] 
o.a.s.c.AbstractZkTestCase put 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-5.x/solr/core/src/test-files/solr/collection1/conf/schema.xml
 to /configs/conf1/schema.xml
   [junit4]   2 281493 INFO  
(TEST-CollectionsAPIDistributedZkTest.test-seed#[6D19F8D76D8494DD]) [] 
o.a.s.c.AbstractZkTestCase put 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-5.x/solr/core/src/test-files/solr/collection1/conf/solrconfig.snippet.randomindexconfig.xml
 to /configs/conf1/solrconfig.snippet.randomindexconfig.xml
   [junit4]   2 281494 INFO  
(TEST-CollectionsAPIDistributedZkTest.test-seed#[6D19F8D76D8494DD]) [] 
o.a.s.c.AbstractZkTestCase put 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-5.x/solr/core/src/test-files/solr/collection1/conf/stopwords.txt
 to /configs/conf1/stopwords.txt
   [junit4]   2 281496 INFO  
(TEST-CollectionsAPIDistributedZkTest.test-seed#[6D19F8D76D8494DD]) [] 
o.a.s.c.AbstractZkTestCase put 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-5.x/solr/core/src/test-files/solr/collection1/conf/protwords.txt
 to /configs/conf1/protwords.txt
   [junit4]   2 281497 INFO  
(TEST-CollectionsAPIDistributedZkTest.test-seed#[6D19F8D76D8494DD]) [] 
o.a.s.c.AbstractZkTestCase put 

[jira] [Commented] (LUCENE-6481) Improve GeoPointField type to only visit high precision boundary terms

2015-06-04 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6481?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14573429#comment-14573429
 ] 

ASF subversion and git services commented on LUCENE-6481:
-

Commit 1683615 from [~mikemccand] in branch 'dev/branches/LUCENE-6481'
[ https://svn.apache.org/r1683615 ]

LUCENE-6481: merge trunk

 Improve GeoPointField type to only visit high precision boundary terms 
 ---

 Key: LUCENE-6481
 URL: https://issues.apache.org/jira/browse/LUCENE-6481
 Project: Lucene - Core
  Issue Type: Improvement
  Components: core/index
Reporter: Nicholas Knize
 Attachments: LUCENE-6481.patch, LUCENE-6481.patch, LUCENE-6481.patch, 
 LUCENE-6481.patch, LUCENE-6481.patch, LUCENE-6481.patch, LUCENE-6481.patch, 
 LUCENE-6481_WIP.patch


 Current GeoPointField [LUCENE-6450 | 
 https://issues.apache.org/jira/browse/LUCENE-6450] computes a set of ranges 
 along the space-filling curve that represent a provided bounding box.  This 
 determines which terms to visit in the terms dictionary and which to skip. 
 This is suboptimal for large bounding boxes as we may end up visiting all 
 terms (which could be quite large). 
 This incremental improvement is to improve GeoPointField to only visit high 
 precision terms in boundary ranges and use the postings list for ranges that 
 are completely within the target bounding box.
 A separate improvement is to switch over to auto-prefix and build an 
 Automaton representing the bounding box.  That can be tracked in a separate 
 issue.  






[jira] [Created] (LUCENE-6524) Create an IndexWriter from an already opened NRT or non-NRT reader

2015-06-04 Thread Michael McCandless (JIRA)
Michael McCandless created LUCENE-6524:
--

 Summary: Create an IndexWriter from an already opened NRT or 
non-NRT reader
 Key: LUCENE-6524
 URL: https://issues.apache.org/jira/browse/LUCENE-6524
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Michael McCandless
Assignee: Michael McCandless
 Fix For: Trunk, 5.3


I'd like to add a new ctor to IndexWriter, letting you start from an already
opened NRT or non-NRT DirectoryReader.  I think this is a long missing
API in Lucene today, and we've talked in the past about different ways
to fix it e.g. factoring out a shared reader pool between writer and reader.

One use-case, which I hit in LUCENE-5376: if you have a read-only
index, so you've opened a non-NRT DirectoryReader to search it, and
then you want to upgrade to a read/write index, we don't handle that
very gracefully now because you are forced to open 2X the
SegmentReaders.

But with this API, IW populates its reader pool with the incoming
SegmentReaders so they are shared on any subsequent NRT reopens /
segment merging / deletes applying, etc.

Another (more expert) use case is allowing rollback to an NRT-point.
Today, you can only rollback to a commit point (segments_N).  But an
NRT reader also reflects a valid point in time view of the index (it
just doesn't have a segments_N file, and its ref'd files are not
fsync'd), so with this change you can close your old writer, open a
new one from this NRT point, and revert all changes that had been done
after the NRT reader was opened from the old writer.







[jira] [Updated] (LUCENE-6523) IW commit without commit user-data changes should also be reflected in NRT reopen

2015-06-04 Thread Michael McCandless (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6523?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael McCandless updated LUCENE-6523:
---
Attachment: LUCENE-6523.patch

Patch w/ test & fix.

 IW commit without commit user-data changes should also be reflected in NRT 
 reopen
 -

 Key: LUCENE-6523
 URL: https://issues.apache.org/jira/browse/LUCENE-6523
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Michael McCandless
Assignee: Michael McCandless
 Fix For: Trunk, 5.3

 Attachments: LUCENE-6523.patch


 In LUCENE-6505 we fixed NRT readers to properly reflect changes from
 the last commit (new segments_N filename, new commit user-data), but I
 missed the case where a commit is done immediately after opening an
 NRT reader with no changes to the commit user-data.






[jira] [Created] (LUCENE-6523) IW commit without commit user-data changes should also be reflected in NRT reopen

2015-06-04 Thread Michael McCandless (JIRA)
Michael McCandless created LUCENE-6523:
--

 Summary: IW commit without commit user-data changes should also be 
reflected in NRT reopen
 Key: LUCENE-6523
 URL: https://issues.apache.org/jira/browse/LUCENE-6523
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Michael McCandless
Assignee: Michael McCandless
 Fix For: Trunk, 5.3
 Attachments: LUCENE-6523.patch

In LUCENE-6505 we fixed NRT readers to properly reflect changes from
the last commit (new segments_N filename, new commit user-data), but I
missed the case where a commit is done immediately after opening an
NRT reader with no changes to the commit user-data.







[jira] [Updated] (LUCENE-6524) Create an IndexWriter from an already opened NRT or non-NRT reader

2015-06-04 Thread Michael McCandless (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6524?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael McCandless updated LUCENE-6524:
---
Attachment: LUCENE-6524.patch

Initial work-in-progress patch ... a few nocommits, but the core
idea seems to work!

This patch also includes the fixes from LUCENE-6523.

I had to add a restriction for the open from NRT reader case: you
can't do this if the last commit this NRT reader sees has been
deleted, e.g. if the old IndexWriter had done a commit after the NRT
reader was opened.  In this case there is no starting commit for
the new writer to fall back on, which makes things tricky ...


 Create an IndexWriter from an already opened NRT or non-NRT reader
 --

 Key: LUCENE-6524
 URL: https://issues.apache.org/jira/browse/LUCENE-6524
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Michael McCandless
Assignee: Michael McCandless
 Fix For: Trunk, 5.3

 Attachments: LUCENE-6524.patch


 I'd like to add a new ctor to IndexWriter, letting you start from an already
 opened NRT or non-NRT DirectoryReader.  I think this is a long missing
 API in Lucene today, and we've talked in the past about different ways
 to fix it e.g. factoring out a shared reader pool between writer and reader.
 One use-case, which I hit in LUCENE-5376: if you have a read-only
 index, so you've opened a non-NRT DirectoryReader to search it, and
 then you want to upgrade to a read/write index, we don't handle that
 very gracefully now because you are forced to open 2X the
 SegmentReaders.
 But with this API, IW populates its reader pool with the incoming
 SegmentReaders so they are shared on any subsequent NRT reopens /
 segment merging / deletes applying, etc.
 Another (more expert) use case is allowing rollback to an NRT-point.
 Today, you can only rollback to a commit point (segments_N).  But an
 NRT reader also reflects a valid point in time view of the index (it
 just doesn't have a segments_N file, and its ref'd files are not
 fsync'd), so with this change you can close your old writer, open a
 new one from this NRT point, and revert all changes that had been done
 after the NRT reader was opened from the old writer.






[jira] [Created] (SOLR-7636) CLUSTERSTATUS Api should not go to OCP to fetch data

2015-06-04 Thread Noble Paul (JIRA)
Noble Paul created SOLR-7636:


 Summary: CLUSTERSTATUS Api should not go to OCP to fetch data
 Key: SOLR-7636
 URL: https://issues.apache.org/jira/browse/SOLR-7636
 Project: Solr
  Issue Type: Improvement
Reporter: Noble Paul
Assignee: Noble Paul
Priority: Minor


Currently it does multiple ZK operations, which is not required. It should just 
read the status from ZK and return it from the CollectionsHandler 






[jira] [Commented] (SOLR-7636) CLUSTERSTATUS Api should not go to OCP to fetch data

2015-06-04 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7636?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14572549#comment-14572549
 ] 

ASF subversion and git services commented on SOLR-7636:
---

Commit 1683519 from [~noble.paul] in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1683519 ]

SOLR-7636: CLUSTERSTATUS API is executed at CollectionsHandler

 CLUSTERSTATUS Api should not go to OCP to fetch data
 

 Key: SOLR-7636
 URL: https://issues.apache.org/jira/browse/SOLR-7636
 Project: Solr
  Issue Type: Improvement
Reporter: Noble Paul
Assignee: Noble Paul
Priority: Minor
 Attachments: SOLR-7636.patch


 Currently it does  multiple ZK operations which is not required. It should 
 just read the status from ZK and return from the CollectionsHandler 






TokenOrderingFilter

2015-06-04 Thread Dmitry Kan
Hi guys,

Sorry for sending questions to the dev list and not to the user one.
Somehow I'm getting more luck here.

We have found the class o.a.solr.highlight.TokenOrderingFilter
with the following comment:


/**
 * Orders Tokens in a window first by their startOffset ascending.
 * endOffset is currently ignored.
 * This is meant to work around fickleness in the highlighter only.  It
 * can mess up token positions and should not be used for indexing
 * or querying.
 */
final class TokenOrderingFilter extends TokenFilter {

In fact, removing this class didn't change the behaviour of the highlighter.

Could anybody shed light on its necessity?

Thanks,

Dmitry Kan


[jira] [Commented] (LUCENE-6524) Create an IndexWriter from an already opened NRT or non-NRT reader

2015-06-04 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6524?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14572676#comment-14572676
 ] 

Robert Muir commented on LUCENE-6524:
-

{quote}
Another (more expert) use case is allowing rollback to an NRT-point.
{quote}

This really needs to be a new method if added at all.

We should not mess with the semantics of commit/rollback. 


 Create an IndexWriter from an already opened NRT or non-NRT reader
 --

 Key: LUCENE-6524
 URL: https://issues.apache.org/jira/browse/LUCENE-6524
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Michael McCandless
Assignee: Michael McCandless
 Fix For: Trunk, 5.3

 Attachments: LUCENE-6524.patch


 I'd like to add a new ctor to IndexWriter, letting you start from an already
 opened NRT or non-NRT DirectoryReader.  I think this is a long missing
 API in Lucene today, and we've talked in the past about different ways
 to fix it e.g. factoring out a shared reader pool between writer and reader.
 One use-case, which I hit in LUCENE-5376: if you have a read-only
 index, so you've opened a non-NRT DirectoryReader to search it, and
 then you want to upgrade to a read/write index, we don't handle that
 very gracefully now because you are forced to open 2X the
 SegmentReaders.
 But with this API, IW populates its reader pool with the incoming
 SegmentReaders so they are shared on any subsequent NRT reopens /
 segment merging / deletes applying, etc.
 Another (more expert) use case is allowing rollback to an NRT-point.
 Today, you can only rollback to a commit point (segments_N).  But an
 NRT reader also reflects a valid point in time view of the index (it
 just doesn't have a segments_N file, and its ref'd files are not
 fsync'd), so with this change you can close your old writer, open a
 new one from this NRT point, and revert all changes that had been done
 after the NRT reader was opened from the old writer.






[jira] [Commented] (LUCENE-6520) Geo3D GeoPath: co-linear end-points result in NPE

2015-06-04 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6520?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14572685#comment-14572685
 ] 

ASF subversion and git services commented on LUCENE-6520:
-

Commit 1683532 from [~dsmiley] in branch 'dev/trunk'
[ https://svn.apache.org/r1683532 ]

LUCENE-6520: Geo3D GeoPath.done() would throw an NPE if adjacent path segments 
were co-linear

 Geo3D GeoPath: co-linear end-points result in NPE
 -

 Key: LUCENE-6520
 URL: https://issues.apache.org/jira/browse/LUCENE-6520
 Project: Lucene - Core
  Issue Type: Bug
  Components: modules/spatial
Affects Versions: 5.2
Reporter: David Smiley
Assignee: David Smiley
 Attachments: LUCENE-6520.patch


 FAILED:  org.apache.lucene.spatial.spatial4j.Geo3dRptTest.testOperations {#2 
 seed=[4AB0FA45EF43F0C3:2240DF3E6EDF83C]}
 {noformat}
 Stack Trace:
 java.lang.NullPointerException
 at 
 __randomizedtesting.SeedInfo.seed([4AB0FA45EF43F0C3:2240DF3E6EDF83C]:0)
 at 
 org.apache.lucene.spatial.spatial4j.geo3d.GeoPath$SegmentEndpoint.init(GeoPath.java:480)
 at 
 org.apache.lucene.spatial.spatial4j.geo3d.GeoPath.done(GeoPath.java:121)
 at 
 org.apache.lucene.spatial.spatial4j.Geo3dRptTest.randomQueryShape(Geo3dRptTest.java:195)
 at 
 org.apache.lucene.spatial.prefix.RandomSpatialOpStrategyTestCase.testOperationRandomShapes(RandomSpatialOpStrategyTestCase.java:53)
 at 
 org.apache.lucene.spatial.spatial4j.Geo3dRptTest.testOperations(Geo3dRptTest.java:100)
 {noformat}
 [~daddywri] says:
 bq. This is happening because the endpoints that define two path segments are 
 co-linear.  There's a check for that too, but clearly it's not firing 
 properly in this case for some reason.






[jira] [Created] (LUCENE-6525) Deprecate IndexWriterConfig's write lock timeout

2015-06-04 Thread Robert Muir (JIRA)
Robert Muir created LUCENE-6525:
---

 Summary: Deprecate IndexWriterConfig's write lock timeout
 Key: LUCENE-6525
 URL: https://issues.apache.org/jira/browse/LUCENE-6525
 Project: Lucene - Core
  Issue Type: Task
Reporter: Robert Muir


Followup from LUCENE-6508

We should ultimately remove this parameter, it is just sugar over a sleeping 
lock factory today that sleeps and retries until timeout, like the old code.

But really if you want a lock that blocks until its obtained, you can simply 
specify the sleeping lock factory yourself (and have more control over what it 
does!), or maybe an NIO implementation based on the blocking FileChannel.lock() 
or something else.

So this stuff should be out of indexwriter and not baked into our APIs.

I would like to:
1) deprecate this, mentioning to use the sleeping factory instead
2) change default of deprecated timeout to 0, so you only sleep if you ask. I 
am not really sure if matchVersion can be used, because today the default 
itself is also settable with a static setter -- OVERENGINEERED






[jira] [Commented] (LUCENE-6520) Geo3D GeoPath: co-linear end-points result in NPE

2015-06-04 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6520?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14572688#comment-14572688
 ] 

ASF subversion and git services commented on LUCENE-6520:
-

Commit 1683533 from [~dsmiley] in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1683533 ]

LUCENE-6520: Geo3D GeoPath.done() would throw an NPE if adjacent path segments 
were co-linear

 Geo3D GeoPath: co-linear end-points result in NPE
 -

 Key: LUCENE-6520
 URL: https://issues.apache.org/jira/browse/LUCENE-6520
 Project: Lucene - Core
  Issue Type: Bug
  Components: modules/spatial
Affects Versions: 5.2
Reporter: David Smiley
Assignee: David Smiley
 Attachments: LUCENE-6520.patch


 FAILED:  org.apache.lucene.spatial.spatial4j.Geo3dRptTest.testOperations {#2 
 seed=[4AB0FA45EF43F0C3:2240DF3E6EDF83C]}
 {noformat}
 Stack Trace:
 java.lang.NullPointerException
 at 
 __randomizedtesting.SeedInfo.seed([4AB0FA45EF43F0C3:2240DF3E6EDF83C]:0)
 at 
 org.apache.lucene.spatial.spatial4j.geo3d.GeoPath$SegmentEndpoint.init(GeoPath.java:480)
 at 
 org.apache.lucene.spatial.spatial4j.geo3d.GeoPath.done(GeoPath.java:121)
 at 
 org.apache.lucene.spatial.spatial4j.Geo3dRptTest.randomQueryShape(Geo3dRptTest.java:195)
 at 
 org.apache.lucene.spatial.prefix.RandomSpatialOpStrategyTestCase.testOperationRandomShapes(RandomSpatialOpStrategyTestCase.java:53)
 at 
 org.apache.lucene.spatial.spatial4j.Geo3dRptTest.testOperations(Geo3dRptTest.java:100)
 {noformat}
 [~daddywri] says:
 bq. This is happening because the endpoints that define two path segments are 
 co-linear.  There's a check for that too, but clearly it's not firing 
 properly in this case for some reason.






[JENKINS] Lucene-Solr-Tests-5.x-Java7 - Build # 3190 - Still Failing

2015-06-04 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-5.x-Java7/3190/

No tests ran.

Build Log:
[...truncated 177 lines...]
BUILD FAILED
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-5.x-Java7/build.xml:536: 
The following error occurred while executing this line:
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-5.x-Java7/build.xml:484: 
The following error occurred while executing this line:
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-5.x-Java7/build.xml:61: 
The following error occurred while executing this line:
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-5.x-Java7/extra-targets.xml:39:
 The following error occurred while executing this line:
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-5.x-Java7/lucene/build.xml:50:
 The following error occurred while executing this line:
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-5.x-Java7/lucene/common-build.xml:1436:
 The following error occurred while executing this line:
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-5.x-Java7/lucene/common-build.xml:991:
 Could not read or create hints file: 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-5.x-Java7/.caches/test-stats/core/timehints.txt

Total time: 17 seconds
Build step 'Invoke Ant' marked build as failure
Archiving artifacts
Sending artifact delta relative to Lucene-Solr-Tests-5.x-Java7 #3186
Archived 1 artifacts
Archive block size is 32768
Received 0 blocks and 464 bytes
Compression is 0.0%
Took 93 ms
Recording test results
Email was triggered for: Failure
Sending email for trigger: Failure




[jira] [Resolved] (LUCENE-6520) Geo3D GeoPath: co-linear end-points result in NPE

2015-06-04 Thread David Smiley (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6520?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Smiley resolved LUCENE-6520.
--
   Resolution: Fixed
Fix Version/s: 5.3

 Geo3D GeoPath: co-linear end-points result in NPE
 -

 Key: LUCENE-6520
 URL: https://issues.apache.org/jira/browse/LUCENE-6520
 Project: Lucene - Core
  Issue Type: Bug
  Components: modules/spatial
Affects Versions: 5.2
Reporter: David Smiley
Assignee: David Smiley
 Fix For: 5.3

 Attachments: LUCENE-6520.patch


 FAILED:  org.apache.lucene.spatial.spatial4j.Geo3dRptTest.testOperations {#2 
 seed=[4AB0FA45EF43F0C3:2240DF3E6EDF83C]}
 {noformat}
 Stack Trace:
 java.lang.NullPointerException
 at 
 __randomizedtesting.SeedInfo.seed([4AB0FA45EF43F0C3:2240DF3E6EDF83C]:0)
 at 
 org.apache.lucene.spatial.spatial4j.geo3d.GeoPath$SegmentEndpoint.init(GeoPath.java:480)
 at 
 org.apache.lucene.spatial.spatial4j.geo3d.GeoPath.done(GeoPath.java:121)
 at 
 org.apache.lucene.spatial.spatial4j.Geo3dRptTest.randomQueryShape(Geo3dRptTest.java:195)
 at 
 org.apache.lucene.spatial.prefix.RandomSpatialOpStrategyTestCase.testOperationRandomShapes(RandomSpatialOpStrategyTestCase.java:53)
 at 
 org.apache.lucene.spatial.spatial4j.Geo3dRptTest.testOperations(Geo3dRptTest.java:100)
 {noformat}
 [~daddywri] says:
 bq. This is happening because the endpoints that define two path segments are 
 co-linear.  There's a check for that too, but clearly it's not firing 
 properly in this case for some reason.






[jira] [Commented] (SOLR-7632) Change the ExtractingRequestHandler to use Tika-Server

2015-06-04 Thread Erik Hatcher (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7632?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14572729#comment-14572729
 ] 

Erik Hatcher commented on SOLR-7632:


[~chrismattmann] wait, don't close as Won't Fix so fast!   I think this would be 
a nice addition, but as an option: /update/extract would use embedded Tika by 
default or, when configured to do so, send the documents to Tika-Server. 

Out of curiosity, can Tika-Server forward its processed output instead of 
sending it back to the posting client?   If so, one could put Tika-Server 
between a client and Solr without the client having to send to Tika-Server, get 
the results, package them up, and send them to Solr.

 Change the ExtractingRequestHandler to use Tika-Server
 --

 Key: SOLR-7632
 URL: https://issues.apache.org/jira/browse/SOLR-7632
 Project: Solr
  Issue Type: Improvement
  Components: contrib - Solr Cell (Tika extraction)
Reporter: Chris A. Mattmann

 It's a pain to upgrade Tika's jars all the times when we release, and if Tika 
 fails it messes up the ExtractingRequestHandler (e.g., the document type 
 caused Tika to fail, etc). A more reliable way and also separated, and easier 
 to deploy version of the ExtractingRequestHandler would make a network call 
 to the Tika JAXRS server, and then call Tika on the Solr server side, get the 
 results and then index the information that way. I have a patch in the works 
 from the DARPA Memex project and I hope to post it soon.






[jira] [Commented] (SOLR-7635) bin/solr -e cloud can fail on MacOS

2015-06-04 Thread Shawn Heisey (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7635?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14572738#comment-14572738
 ] 

Shawn Heisey commented on SOLR-7635:


This is not exactly related to this specific issue, but I'm wondering ... what 
do we want to do in the Solr script if lsof is not installed on the machine?
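
One possible answer, sketched below: degrade gracefully and warn rather than silently skip the check. This is a hypothetical guard for illustration, not what bin/solr currently does; the `PORT_CHECKER` variable name is made up.

```shell
# Sketch only: detect whether lsof is available before relying on it
# for the port-in-use check, and warn instead of failing silently.
# (Hypothetical; not the current bin/solr behavior.)
if command -v lsof >/dev/null 2>&1; then
  PORT_CHECKER="lsof"
else
  PORT_CHECKER="none"
  echo "WARN: lsof not found; cannot verify port availability before startup" >&2
fi
echo "port checker: $PORT_CHECKER"
```

With a guard like this, the user at least gets a warning explaining why startup may later fail with a bind error.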


 bin/solr -e cloud can fail on MacOS
 ---

 Key: SOLR-7635
 URL: https://issues.apache.org/jira/browse/SOLR-7635
 Project: Solr
  Issue Type: Bug
  Components: scripts and tools
Affects Versions: 5.2
 Environment: Unix
Reporter: Upayavira
Priority: Minor
 Attachments: SOLR-7635.patch, SOLR-7635.patch


 On MacOS:
 bin/solr -e cloud 
 said:
 Please enter the port for node1 [8983]
 Oops! Looks like port 8983 is already being used by another process. Please 
 choose a different port.
 Looking at the script, it uses:
 PORT_IN_USE=`lsof -Pni:$CLOUD_PORT`
 which gave the output:
 {{
 COMMAND PID  USER   FD   TYPE DEVICE SIZE/OFF NODE NAME
 Google  365 upayavira  130u  IPv6 0xab1d227df2e5a7db  0t0  TCP 
 [::1]:49889->[::1]:8983 (ESTABLISHED)
 java  10889 upayavira  118u  IPv6 0xab1d227df2e73ddb  0t0  TCP *:8983 
 (LISTEN)
 java  10889 upayavira  134u  IPv6 0xab1d227df2e756db  0t0  TCP 
 [::1]:8983->[::1]:49889 (ESTABLISHED)
 }}
 This was connections Google Chrome was attempting to make to Solr. 
 Replacing the above line with this:
 PORT_IN_USE=`lsof -Pni:$CLOUD_PORT | grep LISTEN`
 resolved the issue. Very simple patch attached.
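
The effect of the `grep LISTEN` filter can be sketched against sample lsof output shaped like the report's (the device ids below are made up):

```shell
# Without the grep, any connection touching the port -- even an outbound
# ESTABLISHED one from a browser -- makes PORT_IN_USE non-empty.
# Filtering on LISTEN keeps only a process actually bound to the port.
sample='COMMAND PID  USER   FD   TYPE DEVICE SIZE/OFF NODE NAME
Google  365 upayavira  130u  IPv6 0xdeadbeef  0t0  TCP [::1]:49889->[::1]:8983 (ESTABLISHED)
java  10889 upayavira  118u  IPv6 0xcafebabe  0t0  TCP *:8983 (LISTEN)'

PORT_IN_USE=$(printf '%s\n' "$sample" | grep LISTEN)
echo "$PORT_IN_USE"
```

Only the java line in LISTEN state survives the filter, which is exactly the "is something bound to this port" question the script is asking.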






[jira] [Comment Edited] (SOLR-7635) bin/solr -e cloud can fail on MacOS

2015-06-04 Thread Shawn Heisey (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7635?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14572738#comment-14572738
 ] 

Shawn Heisey edited comment on SOLR-7635 at 6/4/15 1:23 PM:


This is not exactly related to this specific issue, but I'm wondering ... what 
do we want to do in the Solr script if lsof is not installed on the machine?  I 
would imagine that currently if lsof is not installed but the port IS already 
in use, that the script may try to start Solr anyway, and I'm not sure that the 
user would know why it doesn't work.



was (Author: elyograg):
This is not exactly related to this specific issue, but I'm wondering ... what 
do we want to do in the Solr script if lsof is not installed on the machine?


 bin/solr -e cloud can fail on MacOS
 ---

 Key: SOLR-7635
 URL: https://issues.apache.org/jira/browse/SOLR-7635
 Project: Solr
  Issue Type: Bug
  Components: scripts and tools
Affects Versions: 5.2
 Environment: Unix
Reporter: Upayavira
Priority: Minor
 Attachments: SOLR-7635.patch, SOLR-7635.patch


 On MacOS:
 bin/solr -e cloud 
 said:
 Please enter the port for node1 [8983]
 Oops! Looks like port 8983 is already being used by another process. Please 
 choose a different port.
 Looking at the script, it uses:
 PORT_IN_USE=`lsof -Pni:$CLOUD_PORT`
 which gave the output:
 {{
 COMMAND PID  USER   FD   TYPE DEVICE SIZE/OFF NODE NAME
 Google  365 upayavira  130u  IPv6 0xab1d227df2e5a7db  0t0  TCP 
 [::1]:49889->[::1]:8983 (ESTABLISHED)
 java  10889 upayavira  118u  IPv6 0xab1d227df2e73ddb  0t0  TCP *:8983 
 (LISTEN)
 java  10889 upayavira  134u  IPv6 0xab1d227df2e756db  0t0  TCP 
 [::1]:8983->[::1]:49889 (ESTABLISHED)
 }}
 This was connections Google Chrome was attempting to make to Solr. 
 Replacing the above line with this:
 PORT_IN_USE=`lsof -Pni:$CLOUD_PORT | grep LISTEN`
 resolved the issue. Very simple patch attached.






[jira] [Updated] (LUCENE-6508) Simplify Directory/lock api

2015-06-04 Thread Robert Muir (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6508?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Muir updated LUCENE-6508:

Attachment: LUCENE-6508.patch

Updated patch with Uwe's changes and making SleepingLockFactory package-private.

 Simplify Directory/lock api
 ---

 Key: LUCENE-6508
 URL: https://issues.apache.org/jira/browse/LUCENE-6508
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Robert Muir
Assignee: Uwe Schindler
 Attachments: LUCENE-6508-deadcode1.patch, LUCENE-6508.patch, 
 LUCENE-6508.patch, LUCENE-6508.patch, LUCENE-6508.patch


 See LUCENE-6507 for some background. In general it would be great if you can 
 just acquire an immutable lock (or you get a failure) and then you close that 
 to release it.
 Today the API might be too much for what is needed by IW.






[jira] [Updated] (SOLR-3719) Add instant search capability to /browse

2015-06-04 Thread Erik Hatcher (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-3719?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erik Hatcher updated SOLR-3719:
---
Attachment: SOLR-3719.patch

Thanks [~esther.quansah] for that initial patch.   I've tweaked it a bit to use 
a separate template that includes the facets, so that as the instant search 
results appear the facets (top phrases by default) adjust as well.  I also 
added escaping of the query parameter on the instant search URL (otherwise 
multi-word results with spaces didn't work).   The one issue with my patch is 
that the top phrases tag cloud loses its styling - can you fix that somehow?

 Add instant search capability to /browse
 --

 Key: SOLR-3719
 URL: https://issues.apache.org/jira/browse/SOLR-3719
 Project: Solr
  Issue Type: New Feature
Reporter: Erik Hatcher
Assignee: Erik Hatcher
Priority: Minor
 Fix For: Trunk, 5.3

 Attachments: SOLR-3719.patch, SOLR-3719.patch


 Once upon a time I tinkered with this in a personal github fork 
 https://github.com/erikhatcher/lucene-solr/commits/instant_search/






[jira] [Commented] (SOLR-7636) CLUSTERSTATUS Api should not go to OCP to fetch data

2015-06-04 Thread Shalin Shekhar Mangar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7636?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14572758#comment-14572758
 ] 

Shalin Shekhar Mangar commented on SOLR-7636:
-

I don't think this change is right. Earlier, cluster status was guaranteed to 
return the latest, most up-to-date cluster state, but with this change a node 
which is not connected to ZooKeeper can return stale data. This change should 
either be reverted or fixed.

 CLUSTERSTATUS Api should not go to OCP to fetch data
 

 Key: SOLR-7636
 URL: https://issues.apache.org/jira/browse/SOLR-7636
 Project: Solr
  Issue Type: Improvement
Reporter: Noble Paul
Assignee: Noble Paul
Priority: Minor
 Attachments: SOLR-7636.patch


 Currently it does  multiple ZK operations which is not required. It should 
 just read the status from ZK and return from the CollectionsHandler 






[jira] [Commented] (SOLR-3719) Add instant search capability to /browse

2015-06-04 Thread Erik Hatcher (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-3719?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14572756#comment-14572756
 ] 

Erik Hatcher commented on SOLR-3719:


I wonder if this feature should adapt the technique that, say, google.com uses 
with its instant search feature, whereby hitting return or tabbing out of the 
search field adjusts the URL with the hash-q trick, so that the page doesn't 
have to fully refresh (when pressing enter) or retain the old state (when 
tabbing out).  Food for thought.

 Add instant search capability to /browse
 --

 Key: SOLR-3719
 URL: https://issues.apache.org/jira/browse/SOLR-3719
 Project: Solr
  Issue Type: New Feature
Reporter: Erik Hatcher
Assignee: Erik Hatcher
Priority: Minor
 Fix For: Trunk, 5.3

 Attachments: SOLR-3719.patch, SOLR-3719.patch


 Once upon a time I tinkered with this in a personal github fork 
 https://github.com/erikhatcher/lucene-solr/commits/instant_search/






[jira] [Commented] (SOLR-3719) Add instant search capability to /browse

2015-06-04 Thread Erik Hatcher (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-3719?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14572764#comment-14572764
 ] 

Erik Hatcher commented on SOLR-3719:


But in general - this feature as-is works pretty nicely!   Getting the suggest 
in there will make it even sweeter.

 Add instant search capability to /browse
 --

 Key: SOLR-3719
 URL: https://issues.apache.org/jira/browse/SOLR-3719
 Project: Solr
  Issue Type: New Feature
Reporter: Erik Hatcher
Assignee: Erik Hatcher
Priority: Minor
 Fix For: Trunk, 5.3

 Attachments: SOLR-3719.patch, SOLR-3719.patch


 Once upon a time I tinkered with this in a personal github fork 
 https://github.com/erikhatcher/lucene-solr/commits/instant_search/






[jira] [Updated] (SOLR-3719) Add instant search capability to example/files /browse

2015-06-04 Thread Erik Hatcher (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-3719?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erik Hatcher updated SOLR-3719:
---
Summary: Add instant search capability to example/files /browse  (was: 
Add instant search capability to /browse)

 Add instant search capability to example/files /browse
 

 Key: SOLR-3719
 URL: https://issues.apache.org/jira/browse/SOLR-3719
 Project: Solr
  Issue Type: New Feature
Reporter: Erik Hatcher
Assignee: Erik Hatcher
Priority: Minor
 Fix For: Trunk, 5.3

 Attachments: SOLR-3719.patch, SOLR-3719.patch


 Once upon a time I tinkered with this in a personal github fork 
 https://github.com/erikhatcher/lucene-solr/commits/instant_search/






[jira] [Commented] (SOLR-7636) CLUSTERSTATUS Api should not go to OCP to fetch data

2015-06-04 Thread Noble Paul (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7636?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14572769#comment-14572769
 ] 

Noble Paul commented on SOLR-7636:
--

In the previous model, if the node cannot connect to ZK, how do you expect it 
to send a message to the OCP? 

 CLUSTERSTATUS Api should not go to OCP to fetch data
 

 Key: SOLR-7636
 URL: https://issues.apache.org/jira/browse/SOLR-7636
 Project: Solr
  Issue Type: Improvement
Reporter: Noble Paul
Assignee: Noble Paul
Priority: Minor
 Attachments: SOLR-7636.patch


 Currently it does  multiple ZK operations which is not required. It should 
 just read the status from ZK and return from the CollectionsHandler 






[jira] [Commented] (LUCENE-6508) Simplify Directory/lock api

2015-06-04 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6508?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14572774#comment-14572774
 ] 

Uwe Schindler commented on LUCENE-6508:
---

+1 LGTM

 Simplify Directory/lock api
 ---

 Key: LUCENE-6508
 URL: https://issues.apache.org/jira/browse/LUCENE-6508
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Robert Muir
Assignee: Uwe Schindler
 Attachments: LUCENE-6508-deadcode1.patch, LUCENE-6508.patch, 
 LUCENE-6508.patch, LUCENE-6508.patch, LUCENE-6508.patch


 See LUCENE-6507 for some background. In general it would be great if you can 
 just acquire an immutable lock (or you get a failure) and then you close that 
 to release it.
 Today the API might be too much for what is needed by IW.


