[JENKINS] Lucene-Solr-trunk-Linux (64bit/ibm-j9-jdk7) - Build # 10699 - Still Failing!

2014-07-02 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/10699/
Java: 64bit/ibm-j9-jdk7 
-Xjit:exclude={org/apache/lucene/util/fst/FST.pack(IIF)Lorg/apache/lucene/util/fst/FST;}

All tests passed

Build Log:
[...truncated 29154 lines...]
check-licenses:
 [echo] License check under: 
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr
 [licenses] CHECKSUM FAILED for 
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/contrib/dataimporthandler-extras/test-lib/greenmail-1.3.1b.jar
 (expected: 1e2727e8cae768b8f91dfc44cc5e8e1c4802c5c0  
/home/maven/repository-staging/to-ibiblio/maven2/com/icegreen/greenmail/1.3.1b/greenmail-1.3.1b.jar
 was: 1e2727e8cae768b8f91dfc44cc5e8e1c4802c5c0)
 [licenses] MISSING LICENSE for the following file:
 [licenses]   
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/contrib/dataimporthandler/lib/gimap-1.5.1.jar
 [licenses]   Expected locations below:
 [licenses]   = 
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/licenses/gimap-LICENSE-ASL.txt
 [licenses]   = 
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/licenses/gimap-LICENSE-BSD.txt
 [licenses]   = 
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/licenses/gimap-LICENSE-BSD_LIKE.txt
 [licenses]   = 
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/licenses/gimap-LICENSE-CDDL.txt
 [licenses]   = 
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/licenses/gimap-LICENSE-CPL.txt
 [licenses]   = 
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/licenses/gimap-LICENSE-EPL.txt
 [licenses]   = 
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/licenses/gimap-LICENSE-MIT.txt
 [licenses]   = 
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/licenses/gimap-LICENSE-MPL.txt
 [licenses]   = 
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/licenses/gimap-LICENSE-PD.txt
 [licenses]   = 
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/licenses/gimap-LICENSE-SUN.txt
 [licenses]   = 
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/licenses/gimap-LICENSE-COMPOUND.txt
 [licenses]   = 
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/licenses/gimap-LICENSE-FAKE.txt
 [licenses] MISSING LICENSE for the following file:
 [licenses]   
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/contrib/dataimporthandler/lib/javax.mail-1.5.1.jar
 [licenses]   Expected locations below:
 [licenses]   = 
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/licenses/javax.mail-LICENSE-ASL.txt
 [licenses]   = 
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/licenses/javax.mail-LICENSE-BSD.txt
 [licenses]   = 
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/licenses/javax.mail-LICENSE-BSD_LIKE.txt
 [licenses]   = 
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/licenses/javax.mail-LICENSE-CDDL.txt
 [licenses]   = 
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/licenses/javax.mail-LICENSE-CPL.txt
 [licenses]   = 
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/licenses/javax.mail-LICENSE-EPL.txt
 [licenses]   = 
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/licenses/javax.mail-LICENSE-MIT.txt
 [licenses]   = 
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/licenses/javax.mail-LICENSE-MPL.txt
 [licenses]   = 
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/licenses/javax.mail-LICENSE-PD.txt
 [licenses]   = 
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/licenses/javax.mail-LICENSE-SUN.txt
 [licenses]   = 
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/licenses/javax.mail-LICENSE-COMPOUND.txt
 [licenses]   = 
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/licenses/javax.mail-LICENSE-FAKE.txt
 [licenses] Scanned 208 JAR file(s) for licenses (in 1.39s.), 3 error(s).

BUILD FAILED
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/build.xml:467: The following 
error occurred while executing this line:
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/build.xml:70: The following 
error occurred while executing this line:
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/build.xml:254: The 
following error occurred while executing this line:
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/lucene/tools/custom-tasks.xml:62:
 License check failed. Check the logs.

Total time: 59 minutes 7 seconds
Build step 'Invoke Ant' marked build as failure
[description-setter] Description set: Java: 64bit/ibm-j9-jdk7 
-Xjit:exclude={org/apache/lucene/util/fst/FST.pack(IIF)Lorg/apache/lucene/util/fst/FST;}
Archiving artifacts
Recording test results
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any




[jira] [Updated] (SOLR-5768) Add a distrib.singlePass parameter to make EXECUTE_QUERY phase fetch all fields and skip GET_FIELDS

2014-07-02 Thread Shalin Shekhar Mangar (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5768?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shalin Shekhar Mangar updated SOLR-5768:


Attachment: SOLR-5768.diff

The last patch had a test failure because of an incorrect merge. This patch 
fixes that problem. I am going to run all tests and do some manual testing before 
committing this.

 Add a distrib.singlePass parameter to make EXECUTE_QUERY phase fetch all 
 fields and skip GET_FIELDS
 ---

 Key: SOLR-5768
 URL: https://issues.apache.org/jira/browse/SOLR-5768
 Project: Solr
  Issue Type: Improvement
Reporter: Shalin Shekhar Mangar
Assignee: Shalin Shekhar Mangar
Priority: Minor
 Fix For: 4.9, 5.0

 Attachments: SOLR-5768.diff, SOLR-5768.diff, SOLR-5768.diff, 
 SOLR-5768.diff


 Suggested by Yonik on solr-user:
 http://www.mail-archive.com/solr-user@lucene.apache.org/msg95045.html
 {quote}
 Although it seems like it should be relatively simple to make it work
 with other fields as well, by passing down the complete fl requested
 if some optional parameter is set (distrib.singlePass?)
 {quote}






[jira] [Comment Edited] (LUCENE-5795) More Like This: ensures selection of best terms is indeed O(n)

2014-07-02 Thread Simon Willnauer (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5795?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14049701#comment-14049701
 ] 

Simon Willnauer edited comment on LUCENE-5795 at 7/2/14 7:37 AM:
-

I think making the priority queue bounded makes perfect sense. Yet, I think 
when we touch this queue we should also make sure we get rid of the crazy 
Object[] that it holds. We should really add a struct-like object that holds 
the individual values rather than an Object array that requires Float rather 
than float (we should prefer the latter). 
Further, I think if we really want to make this more efficient (not sure 
about the O(n) though, but that's a different story) we should not use 

{code}
  res.insertWithOverflow(new Object[]{word, // the word
  topField, // the top field
  score,// overall score
  idf, // idf
  docFreq, // freq in all docs
  tf
  });
{code}

it will add the element whether or not we exceed the size of the queue.

we should rather do something like:

{code}
final int limit = Math.min(maxQueryTerms, words.size());
//...

if (queue.size() < limit) {
  // there is still space in the queue
  queue.add(new ScoreTerm(word, topField, score, ...))
} else {
  ScoreTerm term = queue.top();
  if (term.score() < score) { // update the smallest in the queue in place and 
update the queue.
term.update(word, topField, score, ...);
queue.updateTop();
  }
} 
{code}

I hope that makes sense?


was (Author: simonw):
I think making the priority queue bounded makes perfect sense. Yet, I think 
when we touch this queue we should also make sure we get rid of the crazy 
Object[] that it holds. We should really add a struct-like object that holds 
the individual values rather than an Object array that requires Float rather 
than float (we should prefer the latter). 
Further, I think if we really want to make this more efficient (not sure 
about the O(n) though, but that's a different story) we should not use 

{code}
  res.insertWithOverflow(new Object[]{word, // the word
  topField, // the top field
  score,// overall score
  idf, // idf
  docFreq, // freq in all docs
  tf
  });
{code}

it will add the element whether or not we exceed the size of the queue.

we should rather do something like:

{code}
final int limit = Math.min(maxQueryTerms, words.size());
//...

if (queue.size() < limit) {
  // there is still space in the queue
  queue.add(new ScoreTerm(word, topField, score, ...))
} else {
  ScoreTerm term = queue.top();
  if (term.score() < score) { // update the smallest in the queue in place and 
update the queue.
term.update(word, topField, score, ...);
queue.updateTop();
  }
} 
{code}

I hope that makes sense?

 More Like This: ensures selection of best terms is indeed O(n)
 --

 Key: LUCENE-5795
 URL: https://issues.apache.org/jira/browse/LUCENE-5795
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Alex Ksikes
Priority: Minor
 Attachments: LUCENE-5795









[jira] [Commented] (LUCENE-5795) More Like This: ensures selection of best terms is indeed O(n)

2014-07-02 Thread Simon Willnauer (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5795?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14049701#comment-14049701
 ] 

Simon Willnauer commented on LUCENE-5795:
-

I think making the priority queue bounded makes perfect sense. Yet, I think 
when we touch this queue we should also make sure we get rid of the crazy 
Object[] that it holds. We should really add a struct-like object that holds 
the individual values rather than an Object array that requires Float rather 
than float (we should prefer the latter). 
Further, I think if we really want to make this more efficient (not sure 
about the O(n) though, but that's a different story) we should not use 

{code}
  res.insertWithOverflow(new Object[]{word, // the word
  topField, // the top field
  score,// overall score
  idf, // idf
  docFreq, // freq in all docs
  tf
  });
{code}

it will add the element whether or not we exceed the size of the queue.

we should rather do something like:

{code}
final int limit = Math.min(maxQueryTerms, words.size());
//...

if (queue.size() < limit) {
  // there is still space in the queue
  queue.add(new ScoreTerm(word, topField, score, ...))
} else {
  ScoreTerm term = queue.top();
  if (term.score() < score) { // update the smallest in the queue in place and 
update the queue.
term.update(word, topField, score, ...);
queue.updateTop();
  }
} 
{code}

I hope that makes sense?
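
As a rough illustration of the struct-like holder and bounded queue described above (the field names and the queue class are assumptions for the example, not the attached patch):

{code}
import org.apache.lucene.util.PriorityQueue;

// Struct-like holder for one candidate term (plain primitives, no boxing).
final class ScoreTerm {
  String word;       // the term text
  String topField;   // field where this term scored highest
  float score;       // overall score
  float idf;         // inverse document frequency
  int docFreq;       // frequency across all documents
  int tf;            // term frequency in the source document

  void update(String word, String topField, float score, float idf, int docFreq, int tf) {
    this.word = word;
    this.topField = topField;
    this.score = score;
    this.idf = idf;
    this.docFreq = docFreq;
    this.tf = tf;
  }
}

// Bounded queue: the lowest-scoring entry stays on top so it can be updated
// in place (term.update(...); queue.updateTop();) once the queue is full.
final class ScoreTermQueue extends PriorityQueue<ScoreTerm> {
  ScoreTermQueue(int maxSize) {
    super(maxSize);
  }

  @Override
  protected boolean lessThan(ScoreTerm a, ScoreTerm b) {
    return a.score < b.score;
  }
}
{code}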

 More Like This: ensures selection of best terms is indeed O(n)
 --

 Key: LUCENE-5795
 URL: https://issues.apache.org/jira/browse/LUCENE-5795
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Alex Ksikes
Priority: Minor
 Attachments: LUCENE-5795









[jira] [Updated] (SOLR-5768) Add a distrib.singlePass parameter to make EXECUTE_QUERY phase fetch all fields and skip GET_FIELDS

2014-07-02 Thread Shalin Shekhar Mangar (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5768?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shalin Shekhar Mangar updated SOLR-5768:


Attachment: SOLR-5768.diff

It's probably better to use rb.rsp.getReturnFields() because that is set in the 
prepare method and it takes care of a missing fl parameter as well. Otherwise 
we get an NPE in the updateFl method. This patch fixes that problem.

I also added a test to assert that the responses returned with and without 
distrib.singlePass are the same. I think this is ready.
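
For illustration, a minimal SolrJ sketch of turning the new behaviour on (only the distrib.singlePass parameter name comes from this issue; the collection URL and fields are made up):

{code}
import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.SolrServerException;
import org.apache.solr.client.solrj.impl.HttpSolrServer;
import org.apache.solr.client.solrj.response.QueryResponse;

public class SinglePassQueryExample {
  public static void main(String[] args) throws SolrServerException {
    HttpSolrServer server = new HttpSolrServer("http://localhost:8983/solr/collection1");
    SolrQuery q = new SolrQuery("*:*");
    q.setFields("id", "name");          // the full fl is passed down to the shards
    q.set("distrib.singlePass", true);  // fetch fields in EXECUTE_QUERY, skip GET_FIELDS
    QueryResponse rsp = server.query(q);
    System.out.println(rsp.getResults().getNumFound() + " docs found");
    server.shutdown();
  }
}
{code}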

 Add a distrib.singlePass parameter to make EXECUTE_QUERY phase fetch all 
 fields and skip GET_FIELDS
 ---

 Key: SOLR-5768
 URL: https://issues.apache.org/jira/browse/SOLR-5768
 Project: Solr
  Issue Type: Improvement
Reporter: Shalin Shekhar Mangar
Assignee: Shalin Shekhar Mangar
Priority: Minor
 Fix For: 4.9, 5.0

 Attachments: SOLR-5768.diff, SOLR-5768.diff, SOLR-5768.diff, 
 SOLR-5768.diff, SOLR-5768.diff


 Suggested by Yonik on solr-user:
 http://www.mail-archive.com/solr-user@lucene.apache.org/msg95045.html
 {quote}
 Although it seems like it should be relatively simple to make it work
 with other fields as well, by passing down the complete fl requested
 if some optional parameter is set (distrib.singlePass?)
 {quote}






[JENKINS] Lucene-Solr-trunk-MacOSX (64bit/jdk1.7.0) - Build # 1683 - Still Failing!

2014-07-02 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-MacOSX/1683/
Java: 64bit/jdk1.7.0 -XX:-UseCompressedOops -XX:+UseG1GC

All tests passed

Build Log:
[...truncated 28855 lines...]
check-licenses:
 [echo] License check under: 
/Users/jenkins/workspace/Lucene-Solr-trunk-MacOSX/solr
 [licenses] CHECKSUM FAILED for 
/Users/jenkins/workspace/Lucene-Solr-trunk-MacOSX/solr/contrib/dataimporthandler-extras/test-lib/greenmail-1.3.1b.jar
 (expected: 1e2727e8cae768b8f91dfc44cc5e8e1c4802c5c0  
/home/maven/repository-staging/to-ibiblio/maven2/com/icegreen/greenmail/1.3.1b/greenmail-1.3.1b.jar
 was: 1e2727e8cae768b8f91dfc44cc5e8e1c4802c5c0)
 [licenses] MISSING LICENSE for the following file:
 [licenses]   
/Users/jenkins/workspace/Lucene-Solr-trunk-MacOSX/solr/contrib/dataimporthandler/lib/gimap-1.5.1.jar
 [licenses]   Expected locations below:
 [licenses]   = 
/Users/jenkins/workspace/Lucene-Solr-trunk-MacOSX/solr/licenses/gimap-LICENSE-ASL.txt
 [licenses]   = 
/Users/jenkins/workspace/Lucene-Solr-trunk-MacOSX/solr/licenses/gimap-LICENSE-BSD.txt
 [licenses]   = 
/Users/jenkins/workspace/Lucene-Solr-trunk-MacOSX/solr/licenses/gimap-LICENSE-BSD_LIKE.txt
 [licenses]   = 
/Users/jenkins/workspace/Lucene-Solr-trunk-MacOSX/solr/licenses/gimap-LICENSE-CDDL.txt
 [licenses]   = 
/Users/jenkins/workspace/Lucene-Solr-trunk-MacOSX/solr/licenses/gimap-LICENSE-CPL.txt
 [licenses]   = 
/Users/jenkins/workspace/Lucene-Solr-trunk-MacOSX/solr/licenses/gimap-LICENSE-EPL.txt
 [licenses]   = 
/Users/jenkins/workspace/Lucene-Solr-trunk-MacOSX/solr/licenses/gimap-LICENSE-MIT.txt
 [licenses]   = 
/Users/jenkins/workspace/Lucene-Solr-trunk-MacOSX/solr/licenses/gimap-LICENSE-MPL.txt
 [licenses]   = 
/Users/jenkins/workspace/Lucene-Solr-trunk-MacOSX/solr/licenses/gimap-LICENSE-PD.txt
 [licenses]   = 
/Users/jenkins/workspace/Lucene-Solr-trunk-MacOSX/solr/licenses/gimap-LICENSE-SUN.txt
 [licenses]   = 
/Users/jenkins/workspace/Lucene-Solr-trunk-MacOSX/solr/licenses/gimap-LICENSE-COMPOUND.txt
 [licenses]   = 
/Users/jenkins/workspace/Lucene-Solr-trunk-MacOSX/solr/licenses/gimap-LICENSE-FAKE.txt
 [licenses] MISSING LICENSE for the following file:
 [licenses]   
/Users/jenkins/workspace/Lucene-Solr-trunk-MacOSX/solr/contrib/dataimporthandler/lib/javax.mail-1.5.1.jar
 [licenses]   Expected locations below:
 [licenses]   = 
/Users/jenkins/workspace/Lucene-Solr-trunk-MacOSX/solr/licenses/javax.mail-LICENSE-ASL.txt
 [licenses]   = 
/Users/jenkins/workspace/Lucene-Solr-trunk-MacOSX/solr/licenses/javax.mail-LICENSE-BSD.txt
 [licenses]   = 
/Users/jenkins/workspace/Lucene-Solr-trunk-MacOSX/solr/licenses/javax.mail-LICENSE-BSD_LIKE.txt
 [licenses]   = 
/Users/jenkins/workspace/Lucene-Solr-trunk-MacOSX/solr/licenses/javax.mail-LICENSE-CDDL.txt
 [licenses]   = 
/Users/jenkins/workspace/Lucene-Solr-trunk-MacOSX/solr/licenses/javax.mail-LICENSE-CPL.txt
 [licenses]   = 
/Users/jenkins/workspace/Lucene-Solr-trunk-MacOSX/solr/licenses/javax.mail-LICENSE-EPL.txt
 [licenses]   = 
/Users/jenkins/workspace/Lucene-Solr-trunk-MacOSX/solr/licenses/javax.mail-LICENSE-MIT.txt
 [licenses]   = 
/Users/jenkins/workspace/Lucene-Solr-trunk-MacOSX/solr/licenses/javax.mail-LICENSE-MPL.txt
 [licenses]   = 
/Users/jenkins/workspace/Lucene-Solr-trunk-MacOSX/solr/licenses/javax.mail-LICENSE-PD.txt
 [licenses]   = 
/Users/jenkins/workspace/Lucene-Solr-trunk-MacOSX/solr/licenses/javax.mail-LICENSE-SUN.txt
 [licenses]   = 
/Users/jenkins/workspace/Lucene-Solr-trunk-MacOSX/solr/licenses/javax.mail-LICENSE-COMPOUND.txt
 [licenses]   = 
/Users/jenkins/workspace/Lucene-Solr-trunk-MacOSX/solr/licenses/javax.mail-LICENSE-FAKE.txt
 [licenses] Scanned 208 JAR file(s) for licenses (in 2.85s.), 3 error(s).

BUILD FAILED
/Users/jenkins/workspace/Lucene-Solr-trunk-MacOSX/build.xml:467: The following 
error occurred while executing this line:
/Users/jenkins/workspace/Lucene-Solr-trunk-MacOSX/build.xml:70: The following 
error occurred while executing this line:
/Users/jenkins/workspace/Lucene-Solr-trunk-MacOSX/solr/build.xml:254: The 
following error occurred while executing this line:
/Users/jenkins/workspace/Lucene-Solr-trunk-MacOSX/lucene/tools/custom-tasks.xml:62:
 License check failed. Check the logs.

Total time: 145 minutes 44 seconds
Build step 'Invoke Ant' marked build as failure
[description-setter] Description set: Java: 64bit/jdk1.7.0 
-XX:-UseCompressedOops -XX:+UseG1GC
Archiving artifacts
Recording test results
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any




[JENKINS] Lucene-Solr-trunk-Linux (32bit/jdk1.8.0_05) - Build # 10700 - Still Failing!

2014-07-02 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/10700/
Java: 32bit/jdk1.8.0_05 -client -XX:+UseSerialGC

All tests passed

Build Log:
[...truncated 29269 lines...]
check-licenses:
 [echo] License check under: 
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr
 [licenses] CHECKSUM FAILED for 
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/contrib/dataimporthandler-extras/test-lib/greenmail-1.3.1b.jar
 (expected: 1e2727e8cae768b8f91dfc44cc5e8e1c4802c5c0  
/home/maven/repository-staging/to-ibiblio/maven2/com/icegreen/greenmail/1.3.1b/greenmail-1.3.1b.jar
 was: 1e2727e8cae768b8f91dfc44cc5e8e1c4802c5c0)
 [licenses] MISSING LICENSE for the following file:
 [licenses]   
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/contrib/dataimporthandler/lib/gimap-1.5.1.jar
 [licenses]   Expected locations below:
 [licenses]   = 
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/licenses/gimap-LICENSE-ASL.txt
 [licenses]   = 
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/licenses/gimap-LICENSE-BSD.txt
 [licenses]   = 
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/licenses/gimap-LICENSE-BSD_LIKE.txt
 [licenses]   = 
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/licenses/gimap-LICENSE-CDDL.txt
 [licenses]   = 
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/licenses/gimap-LICENSE-CPL.txt
 [licenses]   = 
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/licenses/gimap-LICENSE-EPL.txt
 [licenses]   = 
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/licenses/gimap-LICENSE-MIT.txt
 [licenses]   = 
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/licenses/gimap-LICENSE-MPL.txt
 [licenses]   = 
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/licenses/gimap-LICENSE-PD.txt
 [licenses]   = 
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/licenses/gimap-LICENSE-SUN.txt
 [licenses]   = 
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/licenses/gimap-LICENSE-COMPOUND.txt
 [licenses]   = 
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/licenses/gimap-LICENSE-FAKE.txt
 [licenses] MISSING LICENSE for the following file:
 [licenses]   
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/contrib/dataimporthandler/lib/javax.mail-1.5.1.jar
 [licenses]   Expected locations below:
 [licenses]   = 
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/licenses/javax.mail-LICENSE-ASL.txt
 [licenses]   = 
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/licenses/javax.mail-LICENSE-BSD.txt
 [licenses]   = 
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/licenses/javax.mail-LICENSE-BSD_LIKE.txt
 [licenses]   = 
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/licenses/javax.mail-LICENSE-CDDL.txt
 [licenses]   = 
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/licenses/javax.mail-LICENSE-CPL.txt
 [licenses]   = 
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/licenses/javax.mail-LICENSE-EPL.txt
 [licenses]   = 
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/licenses/javax.mail-LICENSE-MIT.txt
 [licenses]   = 
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/licenses/javax.mail-LICENSE-MPL.txt
 [licenses]   = 
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/licenses/javax.mail-LICENSE-PD.txt
 [licenses]   = 
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/licenses/javax.mail-LICENSE-SUN.txt
 [licenses]   = 
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/licenses/javax.mail-LICENSE-COMPOUND.txt
 [licenses]   = 
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/licenses/javax.mail-LICENSE-FAKE.txt
 [licenses] Scanned 208 JAR file(s) for licenses (in 1.22s.), 3 error(s).

BUILD FAILED
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/build.xml:467: The following 
error occurred while executing this line:
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/build.xml:70: The following 
error occurred while executing this line:
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/build.xml:254: The 
following error occurred while executing this line:
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/lucene/tools/custom-tasks.xml:62:
 License check failed. Check the logs.

Total time: 72 minutes 5 seconds
Build step 'Invoke Ant' marked build as failure
[description-setter] Description set: Java: 32bit/jdk1.8.0_05 -client 
-XX:+UseSerialGC
Archiving artifacts
Recording test results
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any




[jira] [Updated] (LUCENE-5795) More Like This: ensures selection of best terms is indeed O(n)

2014-07-02 Thread Alex Ksikes (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5795?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alex Ksikes updated LUCENE-5795:


Attachment: LUCENE-5795

 More Like This: ensures selection of best terms is indeed O(n)
 --

 Key: LUCENE-5795
 URL: https://issues.apache.org/jira/browse/LUCENE-5795
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Alex Ksikes
Priority: Minor
 Attachments: LUCENE-5795, LUCENE-5795









[jira] [Commented] (LUCENE-5795) More Like This: ensures selection of best terms is indeed O(n)

2014-07-02 Thread Alex Ksikes (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5795?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14049777#comment-14049777
 ] 

Alex Ksikes commented on LUCENE-5795:
-

Thanks, I've updated the patch as per your suggestions.

 More Like This: ensures selection of best terms is indeed O(n)
 --

 Key: LUCENE-5795
 URL: https://issues.apache.org/jira/browse/LUCENE-5795
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Alex Ksikes
Priority: Minor
 Attachments: LUCENE-5795, LUCENE-5795









[jira] [Updated] (SOLR-6220) Replica placement startegy for solrcloud

2014-07-02 Thread Noble Paul (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6220?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Noble Paul updated SOLR-6220:
-

Description: 
h1.Objective
Most cloud-based systems allow specifying rules on how the replicas/nodes of a 
cluster are allocated. Solr should have a flexible mechanism through which we 
should be able to control the allocation of replicas, or later change it to suit the 
needs of the system.

All configurations are on a per-collection basis. The rules are applied whenever a 
replica is created in any of the shards of a given collection during

 * collection creation
 * shard splitting
 * add replica
 * createshard

There are two aspects to how replicas are placed: snitch and placement. 

h2.snitch 
How to identify the tags of nodes. Snitches are configured through the collection 
create command with the snitch prefix, e.g. snitch.type=EC2Snitch.

The system provides the following implicit tag names, which cannot be used by 
other snitches:
 * node : The solr nodename
 * host : The hostname
 * ip : The ip address of the host
 * cores : This is a dynamic variable which gives the core count at any given 
point 
 * disk : This is a dynamic variable which gives the available disk space at 
any given point


There will be a few snitches provided by the system, such as 

h3.EC2Snitch
Provides two tags called dc, rack from the region and zone values in EC2

h3.IPSnitch 
Use the IP to infer the “dc” and “rack” values

h3.NodePropertySnitch 
This lets users provide system properties to each node with a tag name and value.

Example: -Dsolrcloud.snitch.vals=tag-x:val-a,tag-y:val-b. This means this 
particular node will have two tags, “tag-x” and “tag-y”.
 
h3.RestSnitch 
Lets the user configure a URL which the server can invoke to get all 
the tags for a given node. 

This takes extra parameters in the create command, 
example:  
{{snitch.type=RestSnitch&snitch.url=http://snitchserverhost:port?nodename={}}}
The response of the rest call 
{{http://snitchserverhost:port/?nodename=192.168.1:8080_solr}}

must be in either json format or properties format. 
eg: 
{code:JavaScript}
{
  "tag-x": "x-val",
  "tag-y": "y-val"
}
{code}
or

{noformat}
tag-x=x-val
tag-y=y-val
{noformat}
h3.ManagedSnitch
This snitch keeps a list of nodes and their tag value pairs in Zookeeper. The 
user should be able to manage the tags and values of each node through a 
collection API 


h2.Placement 

This tells how many replicas for a given shard need to be assigned to nodes 
with the given key/value pairs. These parameters will be passed on to the 
collection CREATE api as a parameter called placement. The values will be saved in 
the state of the collection as follows
{code:JavaScript}
{
  "mycollection": {
    "snitch": {
      "type": "EC2Snitch"
    },
    "placement": {
      "key1": "value1",
      "key2": "value2"
    }
  }
}
{code}

A rule consists of 2 parts:

 * LHS, or the qualifier. The format is 
\{shardname}.\{replicacount}\{quantifier}. Use the wild card “*” to 
qualify all. The quantifiers are
 ** no value means exactly equal. e.g: 2 means exactly 2
 ** + means greater than or equal. e.g: 2+ means 2 or more
 ** \- means less than. e.g: 2- means less than 2 

 * RHS, or conditions: The format is \{tagname}\{operand}\{value}. The tag 
names and values are provided by the snitch. The supported operands are
 ** - : equals
 ** > : greater than. Only applicable to numeric tags
 ** < : less than. Only applicable to numeric tags
 ** ! : NOT or not equals

Each collection can have any number of rules. As long as the rules do not 
conflict with each other it should be OK; otherwise an error is thrown.


Example rules:
 * “shard1.1”:“dc-dc1,rack-168” : This would assign exactly 1 replica of 
shard1 to nodes having tags “dc=dc1,rack=168”.
 * “shard1.1+”:“dc-dc1,rack-168” : Same as above but assigns at least one 
replica to the tag/value combination
 * “*.1”:“dc-dc1” : For all shards keep exactly one replica in dc:dc1
 * “*.1+”:”dc-dc2” : At least one replica needs to be in dc:dc2
 * “*.2-”:”dc-dc3” : Keep a maximum of 2 replicas in dc:dc3 for all shards
 * “shard1.*”:”rack-730” : All replicas of shard1 will go to rack 730
 * “shard1.1”:“node-192.167.1.2:8983_solr” : 1 replica of shard1 must go to 
the node 192.167.1.2:8983_solr
 * “shard1.*” : “rack!738” : No replica of shard1 should go to rack 738 
 * “shard1.*” : “host!192.168.89.91” : No replica of shard1 should go to host 
192.168.89.91
 * “\*.*”: “cores<5”: All replicas should be created in nodes with less than 5 
cores  
 * “\*.*”:”disk>20gb” : All replicas must be created in nodes with disk space 
greater than 20gb

In the collection create API all the placement rules are provided as a 
parameter called placement, and multiple rules are separated with |. 
Example:
{noformat}
snitch.type=EC2Snitch&placement=*.1:dc-dc1|*.2-:dc-dc3|shard1.*:rack!738 
{noformat}
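
For illustration, a complete collection CREATE call combining the proposed parameters could look like the following (collection name, shard/replica counts and host are made up; only the snitch and placement parameters come from this proposal):
{noformat}
http://localhost:8983/solr/admin/collections?action=CREATE&name=mycollection
  &numShards=2&replicationFactor=3
  &snitch.type=EC2Snitch
  &placement=*.1:dc-dc1|*.2-:dc-dc3|shard1.*:rack!738
{noformat}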

  was:
h1.Objective
Most cloud based systems allow to 

[jira] [Commented] (LUCENE-5795) More Like This: ensures selection of best terms is indeed O(n)

2014-07-02 Thread Simon Willnauer (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5795?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14049805#comment-14049805
 ] 

Simon Willnauer commented on LUCENE-5795:
-

Looks pretty good though. Can you make all the members and methods in ScoreTerm 
package-private or private? I also think we should make retrieveTerms() private 
or package-private; it seems not to be used elsewhere, no?

 More Like This: ensures selection of best terms is indeed O(n)
 --

 Key: LUCENE-5795
 URL: https://issues.apache.org/jira/browse/LUCENE-5795
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Alex Ksikes
Priority: Minor
 Attachments: LUCENE-5795, LUCENE-5795









[jira] [Commented] (LUCENE-5796) Scorer.getChildren() can throw or hide a subscorer for some boolean queries

2014-07-02 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5796?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14049850#comment-14049850
 ] 

Robert Muir commented on LUCENE-5796:
-

I don't think we should remove the default implementation for FilterScorer, as 
the scorer is not really changed when using this abstract class, it's just 
wrapped. For the same reason, I think the BoostingScorer (since it's just an 
implementation detail of how the current BS2 stuff solves this case) should be 
transparent.
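
For reference, a minimal sketch of the kind of transparent wrapper discussed here (an assumption-laden illustration, not Lucene's FilterScorer itself): every call, including getChildren(), delegates to the wrapped scorer, so the wrapper never shows up in the child tree.

{code}
import java.io.IOException;
import java.util.Collection;

import org.apache.lucene.search.Scorer;
import org.apache.lucene.search.Weight;

// Hypothetical wrapper: delegates everything to the wrapped scorer.
class TransparentScorer extends Scorer {
  private final Scorer in;

  TransparentScorer(Weight weight, Scorer in) {
    super(weight);
    this.in = in;
  }

  @Override public float score() throws IOException { return in.score(); }
  @Override public int freq() throws IOException { return in.freq(); }
  @Override public int docID() { return in.docID(); }
  @Override public int nextDoc() throws IOException { return in.nextDoc(); }
  @Override public int advance(int target) throws IOException { return in.advance(target); }
  @Override public long cost() { return in.cost(); }

  @Override
  public Collection<ChildScorer> getChildren() {
    return in.getChildren(); // expose the wrapped scorer's children unchanged
  }
}
{code}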

 Scorer.getChildren() can throw or hide a subscorer for some boolean queries
 ---

 Key: LUCENE-5796
 URL: https://issues.apache.org/jira/browse/LUCENE-5796
 Project: Lucene - Core
  Issue Type: Bug
  Components: core/search
Affects Versions: 4.9
Reporter: Terry Smith
Priority: Minor
 Attachments: LUCENE-5796.patch


 I've isolated two example boolean queries that don't behave with release 4.9 
 of Lucene.
 # A BooleanQuery with three SHOULD clauses and a minimumNumberShouldMatch of 
 2 will throw an ArrayIndexOutOfBoundsException.
 {noformat}
 java.lang.ArrayIndexOutOfBoundsException: 2
   at 
 __randomizedtesting.SeedInfo.seed([2F79B3DF917D071B:2539E6DBC4DF793C]:0)
   at 
 org.apache.lucene.search.MinShouldMatchSumScorer.getChildren(MinShouldMatchSumScorer.java:119)
   at 
 org.apache.lucene.search.TestBooleanQueryVisitSubscorers$ScorerSummarizingCollector.summarizeScorer(TestBooleanQueryVisitSubscorers.java:261)
   at 
 org.apache.lucene.search.TestBooleanQueryVisitSubscorers$ScorerSummarizingCollector.setScorer(TestBooleanQueryVisitSubscorers.java:238)
   at 
 org.apache.lucene.search.Weight$DefaultBulkScorer.score(Weight.java:161)
   at 
 org.apache.lucene.search.AssertingBulkScorer.score(AssertingBulkScorer.java:64)
   at org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:621)
   at 
 org.apache.lucene.search.AssertingIndexSearcher.search(AssertingIndexSearcher.java:94)
   at org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:309)
   at 
 org.apache.lucene.search.TestBooleanQueryVisitSubscorers.testGetChildrenMinShouldMatchSumScorer(TestBooleanQueryVisitSubscorers.java:196)
 {noformat}
 # A BooleanQuery with two should clauses, one of which is a miss for all 
 documents in the current segment will accidentally mask the scorer that was a 
 hit.
 Unit tests and patch based on {{branch_4x}} are available and will be 
 attached as soon as this ticket has a number.
 They are immediately available on GitHub on branch 
 [shebiki/bqgetchildren|https://github.com/shebiki/lucene-solr/commits/bqgetchildren]
  as commit 
 [c64bb6f|https://github.com/shebiki/lucene-solr/commit/c64bb6f2df8f33dd8daafc953d9c27b5cbf29fa3].
 I took the liberty of naming the relationship in BoostingScorer.getChildren() 
 {{BOOSTING}}. Suspect someone will offer a better name for this. Here is a 
 summary of the various relationships in play for all Scorer.getChildren() 
 implementations on {{branch_4x}} to help choose.
 || class || relationships ||
 | org.apache.lucene.search.AssertingScorer | SHOULD |
 | org.apache.lucene.search.join.ToParentBlockJoinQuery.BlockJoinScorer | BLOCK_JOIN |
 | org.apache.lucene.search.ConjunctionScorer | MUST |
 | org.apache.lucene.search.ConstantScoreQuery.ConstantScorer | constant |
 | org.apache.lucene.queries.function.BoostedQuery.CustomScorer | CUSTOM |
 | org.apache.lucene.queries.CustomScoreQuery.CustomScorer | CUSTOM |
 | org.apache.lucene.search.DisjunctionScorer | SHOULD |
 | org.apache.lucene.facet.DrillSidewaysScorer.FakeScorer | MUST |
 | org.apache.lucene.search.FilterScorer | calls in.getChildren() |
 | org.apache.lucene.search.ScoreCachingWrappingScorer | CACHED |
 | org.apache.lucene.search.FilteredQuery.LeapFrogScorer | FILTERED |
 | org.apache.lucene.search.MinShouldMatchSumScorer | SHOULD |
 | org.apache.lucene.search.FilteredQuery | FILTERED |
 | org.apache.lucene.search.ReqExclScorer | MUST |
 | org.apache.lucene.search.ReqOptSumScorer | MUST, SHOULD |
 | org.apache.lucene.search.join.ToChildBlockJoinQuery | BLOCK_JOIN |
 I also removed FilterScorer.getChildren() to prevent mistakes and force 
 subclasses to provide a correct implementation.




[JENKINS] Lucene-Solr-Tests-4.x-Java7 - Build # 2018 - Still Failing

2014-07-02 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-4.x-Java7/2018/

All tests passed

Build Log:
[...truncated 29242 lines...]
check-licenses:
 [echo] License check under: 
/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-Tests-4.x-Java7/solr
 [licenses] MISSING sha1 checksum file for: 
/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-Tests-4.x-Java7/solr/example/lib/ext/log4j-1.2.16.jar
 [licenses] EXPECTED sha1 checksum file : 
/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-Tests-4.x-Java7/solr/licenses/log4j-1.2.16.jar.sha1

[...truncated 1 lines...]
BUILD FAILED
/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-Tests-4.x-Java7/build.xml:467:
 The following error occurred while executing this line:
/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-Tests-4.x-Java7/build.xml:70:
 The following error occurred while executing this line:
/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-Tests-4.x-Java7/solr/build.xml:254:
 The following error occurred while executing this line:
/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-Tests-4.x-Java7/lucene/tools/custom-tasks.xml:62:
 License check failed. Check the logs.

Total time: 135 minutes 34 seconds
Build step 'Invoke Ant' marked build as failure
Archiving artifacts
Sending artifact delta relative to Lucene-Solr-Tests-4.x-Java7 #2008
Archived 1 artifacts
Archive block size is 32768
Received 0 blocks and 464 bytes
Compression is 0.0%
Took 14 ms
Recording test results
Email was triggered for: Failure
Sending email for trigger: Failure




[JENKINS] Lucene-Solr-4.x-Windows (32bit/jdk1.8.0_20-ea-b15) - Build # 4066 - Failure!

2014-07-02 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-4.x-Windows/4066/
Java: 32bit/jdk1.8.0_20-ea-b15 -server -XX:+UseSerialGC

1 tests failed.
REGRESSION:  
org.apache.lucene.analysis.core.TestRandomChains.testRandomChainsWithLargeStrings

Error Message:
startOffset must be non-negative, and endOffset must be >= startOffset, 
startOffset=8,endOffset=7

Stack Trace:
java.lang.IllegalArgumentException: startOffset must be non-negative, and 
endOffset must be >= startOffset, startOffset=8,endOffset=7
at 
__randomizedtesting.SeedInfo.seed([46D577F11B0088ED:2C8EC8E0424EA81E]:0)
at 
org.apache.lucene.analysis.tokenattributes.OffsetAttributeImpl.setOffset(OffsetAttributeImpl.java:45)
at 
org.apache.lucene.analysis.shingle.ShingleFilter.incrementToken(ShingleFilter.java:345)
at 
org.apache.lucene.analysis.ValidatingTokenFilter.incrementToken(ValidatingTokenFilter.java:68)
at 
org.apache.lucene.analysis.sv.SwedishLightStemFilter.incrementToken(SwedishLightStemFilter.java:48)
at 
org.apache.lucene.analysis.ValidatingTokenFilter.incrementToken(ValidatingTokenFilter.java:68)
at 
org.apache.lucene.analysis.BaseTokenStreamTestCase.checkAnalysisConsistency(BaseTokenStreamTestCase.java:703)
at 
org.apache.lucene.analysis.BaseTokenStreamTestCase.checkRandomData(BaseTokenStreamTestCase.java:614)
at 
org.apache.lucene.analysis.BaseTokenStreamTestCase.checkRandomData(BaseTokenStreamTestCase.java:513)
at 
org.apache.lucene.analysis.core.TestRandomChains.testRandomChainsWithLargeStrings(TestRandomChains.java:946)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:483)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.TestRuleFieldCacheSanity$1.evaluate(TestRuleFieldCacheSanity.java:51)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:43)
at 

[jira] [Commented] (SOLR-5855) Increasing solr highlight performance with caching

2014-07-02 Thread Daniel Debray (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5855?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14049872#comment-14049872
 ] 

Daniel Debray commented on SOLR-5855:
-

I did a fork on GitHub and added the changes + tests. The cache has been used in our 
environment for ~3 months now without problems. The only thing is that this 
cache has the same limitations as the document cache, so no autowarming is 
available. 

If you think it's fine I would like to create a pull request or update the 
appended patch.

https://github.com/DDebray/lucene-solr
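
For context, a user-defined cache like the one looked up via searcher.getCache("termVectorCache") would be declared in the query section of solrconfig.xml, roughly as follows (name, class and sizes are only an example, and autowarming is left off as noted above):

{code:xml}
<cache name="termVectorCache"
       class="solr.LRUCache"
       size="512"
       initialSize="512"
       autowarmCount="0"/>
{code}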

 Increasing solr highlight performance with caching
 --

 Key: SOLR-5855
 URL: https://issues.apache.org/jira/browse/SOLR-5855
 Project: Solr
  Issue Type: Improvement
  Components: highlighter
Affects Versions: 5.0
Reporter: Daniel Debray
 Fix For: 5.0

 Attachments: highlight.patch


 Hi folks,
 while investigating possible performance bottlenecks in the highlight 
 component i discovered two places where we can save some cpu cylces.
 Both are in the class org.apache.solr.highlight.DefaultSolrHighlighter
 First in method doHighlighting (lines 411-417):
 In the loop we try to highlight every field that has been resolved from the 
 params on each document. Ok, but why not skip those fields that are not 
 present on the current document? 
 So i changed the code from:
 for (String fieldName : fieldNames) {
   fieldName = fieldName.trim();
   if( useFastVectorHighlighter( params, schema, fieldName ) )
 doHighlightingByFastVectorHighlighter( fvh, fieldQuery, req, 
 docSummaries, docId, doc, fieldName );
   else
 doHighlightingByHighlighter( query, req, docSummaries, docId, doc, 
 fieldName );
 }
 to:
 for (String fieldName : fieldNames) {
   fieldName = fieldName.trim();
   if (doc.get(fieldName) != null) {
 if( useFastVectorHighlighter( params, schema, fieldName ) )
   doHighlightingByFastVectorHighlighter( fvh, fieldQuery, req, 
 docSummaries, docId, doc, fieldName );
 else
   doHighlightingByHighlighter( query, req, docSummaries, docId, doc, 
 fieldName );
   }
 }
 The second place is where we try to retrieve the TokenStream from the 
 document for a specific field.
 line 472:
 TokenStream tvStream = 
 TokenSources.getTokenStreamWithOffsets(searcher.getIndexReader(), docId, 
 fieldName);
 where..
 public static TokenStream getTokenStreamWithOffsets(IndexReader reader, int 
 docId, String field) throws IOException {
   Fields vectors = reader.getTermVectors(docId);
   if (vectors == null) {
 return null;
   }
   Terms vector = vectors.terms(field);
   if (vector == null) {
 return null;
   }
   if (!vector.hasPositions() || !vector.hasOffsets()) {
 return null;
   }
   return getTokenStream(vector);
 }
 keep in mind that we currently hit the IndexReader n times where n = 
 requested rows(documents) * requested amount of highlight fields.
 in my usecase reader.getTermVectors(docId) takes around 150.000~250.000ns on 
 a warm solr and 1.100.000ns on a cold solr.
 If we store the returning Fields vectors in a cache, this lookups only take 
 25000ns.
 I would suggest something like the following code in the 
 doHighlightingByHighlighter method in the DefaultSolrHighlighter class (line 
 472):
 Fields vectors = null;
 SolrCache termVectorCache = searcher.getCache("termVectorCache");
 if (termVectorCache != null) {
   vectors = (Fields) termVectorCache.get(Integer.valueOf(docId));
   if (vectors == null) {
 vectors = searcher.getIndexReader().getTermVectors(docId);
 if (vectors != null) termVectorCache.put(Integer.valueOf(docId), vectors);
   } 
 } else {
   vectors = searcher.getIndexReader().getTermVectors(docId);
 }
 TokenStream tvStream = TokenSources.getTokenStreamWithOffsets(vectors, 
 fieldName);
 and TokenSources class:
 public static TokenStream getTokenStreamWithOffsets(Fields vectors, String 
 field) throws IOException {
   if (vectors == null) {
 return null;
   }
   Terms vector = vectors.terms(field);
   if (vector == null) {
 return null;
   }
   if (!vector.hasPositions() || !vector.hasOffsets()) {
 return null;
   }
   return getTokenStream(vector);
 }
 4000ms on 1000 docs without cache
 639ms on 1000 docs with cache
 102ms on 30 docs without cache
 22ms on 30 docs with cache
 on an index with 190.000 docs with a numFound of 32000 and 80 different 
 highlight fields.
 I think queries with only one field to highlight per document do not 
 benefit that much from a cache like this; that's why I think an optional cache 
 would be the best solution here. 
 As far as I saw, the FastVectorHighlighter uses more or less the same approach and 
 could also benefit from this cache.





[jira] [Updated] (SOLR-5768) Add a distrib.singlePass parameter to make EXECUTE_QUERY phase fetch all fields and skip GET_FIELDS

2014-07-02 Thread Shalin Shekhar Mangar (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5768?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shalin Shekhar Mangar updated SOLR-5768:


Attachment: SOLR-5768.diff

The last patch had a bug where, if the uniqueKey was omitted from the fl param, 
you'd get an NPE in mergeIds.

This patch adds the uniqueKey field, if not already requested, to the individual 
shard requests so that we can always merge the shard responses.
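
A rough sketch of the idea (the helper name and wiring are made up; the real change lives in Solr's distributed query path): make sure the uniqueKey field always reaches each shard's fl so the responses can be merged.

{code}
import org.apache.solr.common.params.CommonParams;
import org.apache.solr.common.params.ModifiableSolrParams;

final class SinglePassFlHelper {

  /** Ensure the uniqueKey field is part of the fl sent to a shard. */
  static void addUniqueKeyToFl(ModifiableSolrParams shardParams, String uniqueKey) {
    String fl = shardParams.get(CommonParams.FL);
    if (fl == null || fl.trim().isEmpty()) {
      return; // no fl means all fields, which already includes the uniqueKey
    }
    // naive containment check, good enough for an illustration
    if (!("," + fl.replace(" ", "") + ",").contains("," + uniqueKey + ",")) {
      shardParams.set(CommonParams.FL, fl + "," + uniqueKey);
    }
  }
}
{code}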

 Add a distrib.singlePass parameter to make EXECUTE_QUERY phase fetch all 
 fields and skip GET_FIELDS
 ---

 Key: SOLR-5768
 URL: https://issues.apache.org/jira/browse/SOLR-5768
 Project: Solr
  Issue Type: Improvement
Reporter: Shalin Shekhar Mangar
Assignee: Shalin Shekhar Mangar
Priority: Minor
 Fix For: 4.9, 5.0

 Attachments: SOLR-5768.diff, SOLR-5768.diff, SOLR-5768.diff, 
 SOLR-5768.diff, SOLR-5768.diff, SOLR-5768.diff


 Suggested by Yonik on solr-user:
 http://www.mail-archive.com/solr-user@lucene.apache.org/msg95045.html
 {quote}
 Although it seems like it should be relatively simple to make it work
 with other fields as well, by passing down the complete fl requested
 if some optional parameter is set (distrib.singlePass?)
 {quote}






[jira] [Comment Edited] (SOLR-5768) Add a distrib.singlePass parameter to make EXECUTE_QUERY phase fetch all fields and skip GET_FIELDS

2014-07-02 Thread Shalin Shekhar Mangar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5768?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14049881#comment-14049881
 ] 

Shalin Shekhar Mangar edited comment on SOLR-5768 at 7/2/14 1:02 PM:
-

The last patch had a bug where, if the uniqueKey was omitted from the fl param, 
you'd get an NPE in mergeIds.

This patch adds the uniqueKey field, if not already requested, to the individual 
shard requests so that we can always merge the shard responses.

Edit - I have added tests using debug=track which assert that this optimization 
indeed works and that no shard requests are sent in the GET_FIELDS stage 
when using this param and the automatic optimization added in SOLR-1880.


was (Author: shalinmangar):
The last patch had a bug where, if the uniqueKey was omitted from the fl param, 
you'd get an NPE in mergeIds.

This patch adds the uniqueKey field, if not already requested, to the individual 
shard requests so that we can always merge the shard responses.

 Add a distrib.singlePass parameter to make EXECUTE_QUERY phase fetch all 
 fields and skip GET_FIELDS
 ---

 Key: SOLR-5768
 URL: https://issues.apache.org/jira/browse/SOLR-5768
 Project: Solr
  Issue Type: Improvement
Reporter: Shalin Shekhar Mangar
Assignee: Shalin Shekhar Mangar
Priority: Minor
 Fix For: 4.9, 5.0

 Attachments: SOLR-5768.diff, SOLR-5768.diff, SOLR-5768.diff, 
 SOLR-5768.diff, SOLR-5768.diff, SOLR-5768.diff


 Suggested by Yonik on solr-user:
 http://www.mail-archive.com/solr-user@lucene.apache.org/msg95045.html
 {quote}
 Although it seems like it should be relatively simple to make it work
 with other fields as well, by passing down the complete fl requested
 if some optional parameter is set (distrib.singlePass?)
 {quote}






[jira] [Commented] (SOLR-5768) Add a distrib.singlePass parameter to make EXECUTE_QUERY phase fetch all fields and skip GET_FIELDS

2014-07-02 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5768?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14049893#comment-14049893
 ] 

ASF subversion and git services commented on SOLR-5768:
---

Commit 1607360 from sha...@apache.org in branch 'dev/trunk'
[ https://svn.apache.org/r1607360 ]

SOLR-5768: Add a distrib.singlePass parameter to make EXECUTE_QUERY phase fetch 
all fields and skip GET_FIELDS

 Add a distrib.singlePass parameter to make EXECUTE_QUERY phase fetch all 
 fields and skip GET_FIELDS
 ---

 Key: SOLR-5768
 URL: https://issues.apache.org/jira/browse/SOLR-5768
 Project: Solr
  Issue Type: Improvement
Reporter: Shalin Shekhar Mangar
Assignee: Shalin Shekhar Mangar
Priority: Minor
 Fix For: 4.9, 5.0

 Attachments: SOLR-5768.diff, SOLR-5768.diff, SOLR-5768.diff, 
 SOLR-5768.diff, SOLR-5768.diff, SOLR-5768.diff


 Suggested by Yonik on solr-user:
 http://www.mail-archive.com/solr-user@lucene.apache.org/msg95045.html
 {quote}
 Although it seems like it should be relatively simple to make it work
 with other fields as well, by passing down the complete fl requested
 if some optional parameter is set (distrib.singlePass?)
 {quote}






[jira] [Commented] (SOLR-5768) Add a distrib.singlePass parameter to make EXECUTE_QUERY phase fetch all fields and skip GET_FIELDS

2014-07-02 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5768?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14049896#comment-14049896
 ] 

ASF subversion and git services commented on SOLR-5768:
---

Commit 1607361 from sha...@apache.org in branch 'dev/branches/branch_4x'
[ https://svn.apache.org/r1607361 ]

SOLR-5768: Add a distrib.singlePass parameter to make EXECUTE_QUERY phase fetch 
all fields and skip GET_FIELDS

 Add a distrib.singlePass parameter to make EXECUTE_QUERY phase fetch all 
 fields and skip GET_FIELDS
 ---

 Key: SOLR-5768
 URL: https://issues.apache.org/jira/browse/SOLR-5768
 Project: Solr
  Issue Type: Improvement
Reporter: Shalin Shekhar Mangar
Assignee: Shalin Shekhar Mangar
Priority: Minor
 Fix For: 4.9, 5.0

 Attachments: SOLR-5768.diff, SOLR-5768.diff, SOLR-5768.diff, 
 SOLR-5768.diff, SOLR-5768.diff, SOLR-5768.diff


 Suggested by Yonik on solr-user:
 http://www.mail-archive.com/solr-user@lucene.apache.org/msg95045.html
 {quote}
 Although it seems like it should be relatively simple to make it work
 with other fields as well, by passing down the complete fl requested
 if some optional parameter is set (distrib.singlePass?)
 {quote}






[JENKINS] Lucene-Solr-trunk-Linux (32bit/jdk1.8.0_05) - Build # 10702 - Still Failing!

2014-07-02 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/10702/
Java: 32bit/jdk1.8.0_05 -server -XX:+UseSerialGC

All tests passed

Build Log:
[...truncated 29021 lines...]
check-licenses:
 [echo] License check under: 
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr
 [licenses] CHECKSUM FAILED for 
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/contrib/dataimporthandler-extras/test-lib/greenmail-1.3.1b.jar
 (expected: 1e2727e8cae768b8f91dfc44cc5e8e1c4802c5c0  
/home/maven/repository-staging/to-ibiblio/maven2/com/icegreen/greenmail/1.3.1b/greenmail-1.3.1b.jar
 was: 1e2727e8cae768b8f91dfc44cc5e8e1c4802c5c0)
 [licenses] MISSING LICENSE for the following file:
 [licenses]   
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/contrib/dataimporthandler/lib/gimap-1.5.1.jar
 [licenses]   Expected locations below:
 [licenses]   = 
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/licenses/gimap-LICENSE-ASL.txt
 [licenses]   = 
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/licenses/gimap-LICENSE-BSD.txt
 [licenses]   = 
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/licenses/gimap-LICENSE-BSD_LIKE.txt
 [licenses]   = 
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/licenses/gimap-LICENSE-CDDL.txt
 [licenses]   = 
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/licenses/gimap-LICENSE-CPL.txt
 [licenses]   = 
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/licenses/gimap-LICENSE-EPL.txt
 [licenses]   = 
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/licenses/gimap-LICENSE-MIT.txt
 [licenses]   = 
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/licenses/gimap-LICENSE-MPL.txt
 [licenses]   = 
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/licenses/gimap-LICENSE-PD.txt
 [licenses]   = 
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/licenses/gimap-LICENSE-SUN.txt
 [licenses]   = 
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/licenses/gimap-LICENSE-COMPOUND.txt
 [licenses]   = 
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/licenses/gimap-LICENSE-FAKE.txt
 [licenses] MISSING LICENSE for the following file:
 [licenses]   
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/contrib/dataimporthandler/lib/javax.mail-1.5.1.jar
 [licenses]   Expected locations below:
 [licenses]   = 
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/licenses/javax.mail-LICENSE-ASL.txt
 [licenses]   = 
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/licenses/javax.mail-LICENSE-BSD.txt
 [licenses]   = 
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/licenses/javax.mail-LICENSE-BSD_LIKE.txt
 [licenses]   = 
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/licenses/javax.mail-LICENSE-CDDL.txt
 [licenses]   = 
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/licenses/javax.mail-LICENSE-CPL.txt
 [licenses]   = 
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/licenses/javax.mail-LICENSE-EPL.txt
 [licenses]   = 
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/licenses/javax.mail-LICENSE-MIT.txt
 [licenses]   = 
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/licenses/javax.mail-LICENSE-MPL.txt
 [licenses]   = 
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/licenses/javax.mail-LICENSE-PD.txt
 [licenses]   = 
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/licenses/javax.mail-LICENSE-SUN.txt
 [licenses]   = 
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/licenses/javax.mail-LICENSE-COMPOUND.txt
 [licenses]   = 
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/licenses/javax.mail-LICENSE-FAKE.txt
 [licenses] Scanned 208 JAR file(s) for licenses (in 1.35s.), 3 error(s).

BUILD FAILED
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/build.xml:467: The following 
error occurred while executing this line:
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/build.xml:70: The following 
error occurred while executing this line:
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/build.xml:254: The 
following error occurred while executing this line:
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/lucene/tools/custom-tasks.xml:62:
 License check failed. Check the logs.

Total time: 73 minutes 34 seconds
Build step 'Invoke Ant' marked build as failure
[description-setter] Description set: Java: 32bit/jdk1.8.0_05 -server 
-XX:+UseSerialGC
Archiving artifacts
Recording test results
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

[jira] [Resolved] (SOLR-5768) Add a distrib.singlePass parameter to make EXECUTE_QUERY phase fetch all fields and skip GET_FIELDS

2014-07-02 Thread Shalin Shekhar Mangar (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5768?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shalin Shekhar Mangar resolved SOLR-5768.
-

   Resolution: Fixed
Fix Version/s: (was: 4.9)
   4.10

Thanks Gregg!

 Add a distrib.singlePass parameter to make EXECUTE_QUERY phase fetch all 
 fields and skip GET_FIELDS
 ---

 Key: SOLR-5768
 URL: https://issues.apache.org/jira/browse/SOLR-5768
 Project: Solr
  Issue Type: Improvement
Reporter: Shalin Shekhar Mangar
Assignee: Shalin Shekhar Mangar
Priority: Minor
 Fix For: 5.0, 4.10

 Attachments: SOLR-5768.diff, SOLR-5768.diff, SOLR-5768.diff, 
 SOLR-5768.diff, SOLR-5768.diff, SOLR-5768.diff


 Suggested by Yonik on solr-user:
 http://www.mail-archive.com/solr-user@lucene.apache.org/msg95045.html
 {quote}
 Although it seems like it should be relatively simple to make it work
 with other fields as well, by passing down the complete fl requested
 if some optional parameter is set (distrib.singlePass?)
 {quote}
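 For illustration, a minimal SolrJ sketch of what enabling the parameter could look like on the client side (the collection name, fields, and ZooKeeper address are made up; this is not code from the patch):
 {code:java}
 import org.apache.solr.client.solrj.SolrQuery;
 import org.apache.solr.client.solrj.impl.CloudSolrServer;
 import org.apache.solr.client.solrj.response.QueryResponse;

 public class SinglePassExample {
   public static void main(String[] args) throws Exception {
     CloudSolrServer server = new CloudSolrServer("localhost:2181"); // hypothetical ZK host
     server.setDefaultCollection("xcoll");                           // hypothetical collection

     SolrQuery q = new SolrQuery("*:*");
     q.setFields("id", "title");            // the full fl is passed down to the shards
     q.set("distrib.singlePass", true);     // EXECUTE_QUERY returns all requested fields,
                                            // so the GET_FIELDS phase can be skipped
     QueryResponse rsp = server.query(q);
     System.out.println(rsp.getResults().getNumFound());
     server.shutdown();
   }
 }
 {code}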



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-5801) Resurrect org.apache.lucene.facet.util.OrdinalMappingAtomicReader

2014-07-02 Thread Nicola Buso (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5801?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nicola Buso updated LUCENE-5801:


Attachment: LUCENE-5801_1.patch

public static FacetsConfig.dedupAndEncode(...)
Added methods to override in case of a different encoding.

 Resurrect org.apache.lucene.facet.util.OrdinalMappingAtomicReader
 -

 Key: LUCENE-5801
 URL: https://issues.apache.org/jira/browse/LUCENE-5801
 Project: Lucene - Core
  Issue Type: Bug
Affects Versions: 4.7
Reporter: Nicola Buso
 Attachments: LUCENE-5801.patch, LUCENE-5801_1.patch


 From Lucene 4.6.1 the class
 org.apache.lucene.facet.util.OrdinalMappingAtomicReader
 was removed; resurrect it because it is used when merging indexes related to merged 
 taxonomies.



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-5801) Resurrect org.apache.lucene.facet.util.OrdinalMappingAtomicReader

2014-07-02 Thread Shai Erera (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5801?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14049928#comment-14049928
 ] 

Shai Erera commented on LUCENE-5801:


I will review the patch later, but a static method cannot be overridden by 
sub-classes, so we still need the protected method, with the default impl 
delegating to a static utility method...
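
As a generic illustration of that pattern (plain Java, not the actual Lucene code): the default behaviour lives in a static utility, while a protected instance method delegates to it so subclasses can still override it.

{code:java}
import java.util.Arrays;

class EncodingUtil {
  // static utility holding the default behaviour: sort and de-duplicate the ordinals
  static int[] dedup(int[] ordinals) {
    int[] sorted = ordinals.clone();
    Arrays.sort(sorted);
    return Arrays.stream(sorted).distinct().toArray();
  }
}

class OrdinalWriter {
  // protected hook: the default impl just delegates to the static utility,
  // but subclasses can override it to use a different encoding
  protected int[] encode(int[] ordinals) {
    return EncodingUtil.dedup(ordinals);
  }
}
{code}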

 Resurrect org.apache.lucene.facet.util.OrdinalMappingAtomicReader
 -

 Key: LUCENE-5801
 URL: https://issues.apache.org/jira/browse/LUCENE-5801
 Project: Lucene - Core
  Issue Type: Bug
Affects Versions: 4.7
Reporter: Nicola Buso
 Attachments: LUCENE-5801.patch, LUCENE-5801_1.patch


 From Lucene 4.6.1 the class
 org.apache.lucene.facet.util.OrdinalMappingAtomicReader
 was removed; resurrect it because it is used when merging indexes related to merged 
 taxonomies.



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (LUCENE-5801) Resurrect org.apache.lucene.facet.util.OrdinalMappingAtomicReader

2014-07-02 Thread Shai Erera (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5801?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14049928#comment-14049928
 ] 

Shai Erera edited comment on LUCENE-5801 at 7/2/14 1:54 PM:


I will review the patch later, but a static method cannot be overridden by 
sub-classes, so we still need the protected method, with the default impl 
delegating to a static utility method...


was (Author: shaie):
I will review the patch later, but a static method cannot be overridable by 
sub-classes, so we still need the protected method, with the default implt 
delegating to a static utility method...

 Resurrect org.apache.lucene.facet.util.OrdinalMappingAtomicReader
 -

 Key: LUCENE-5801
 URL: https://issues.apache.org/jira/browse/LUCENE-5801
 Project: Lucene - Core
  Issue Type: Bug
Affects Versions: 4.7
Reporter: Nicola Buso
 Attachments: LUCENE-5801.patch, LUCENE-5801_1.patch


 From Lucene 4.6.1 the class
 org.apache.lucene.facet.util.OrdinalMappingAtomicReader
 was removed; resurrect it because it is used when merging indexes related to merged 
 taxonomies.



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-5796) Scorer.getChildren() can throw or hide a subscorer for some boolean queries

2014-07-02 Thread Terry Smith (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5796?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14049941#comment-14049941
 ] 

Terry Smith commented on LUCENE-5796:
-

Thanks for taking the time to review my patch and comment on the approach.

The reason that I advocated changing FilterScorer and BoostedScorer is to allow 
some of my custom Query implementations to use a regular BooleanQuery for 
recall (and optionally scoring) while taking advantage of the actual Scorers used 
on a per-document, per-clause basis.

This has been working great across quite a few Lucene releases but failed when 
I upgraded to 4.9 due to the two regressions in behavior for 
Scorer.getChildren() as described in this ticket.

In this scenario, a BooleanQuery containing two TermQueries (one a miss and the 
other a hit) returns the following from BooleanWeight.scorer():

* BoostedScorer
** TermScorer (hit)

Calling getChildren() on this returns an empty list because the BoostedScorer 
just returns in.getChildren() and thus you are unable to navigate to the actual 
TermScorer in play. This would impact any classes that extend FilterScorer and 
don't override getChildren(). In other words, the current wiring does make the 
BoostedScorer transparent but with the disadvantage of hiding the actual scorer 
that performs the work.

If this is an unsupported workflow, I'm happy to move the discussion over to 
the user mailing list.
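
For context, a rough sketch of the kind of override being argued for in a FilterScorer-style wrapper (a fragment only, written against the 4.x Scorer/ChildScorer API; the BOOSTING label is just the name proposed in the patch, and the enclosing wrapper class is assumed):

{code:java}
// inside a wrapper that extends FilterScorer, where `in` is the wrapped Scorer
@Override
public Collection<ChildScorer> getChildren() {
  // surface the wrapped scorer instead of delegating to in.getChildren(),
  // so callers can still navigate to the TermScorer doing the actual work
  return Collections.singletonList(new ChildScorer(in, "BOOSTING"));
}
{code}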

 Scorer.getChildren() can throw or hide a subscorer for some boolean queries
 ---

 Key: LUCENE-5796
 URL: https://issues.apache.org/jira/browse/LUCENE-5796
 Project: Lucene - Core
  Issue Type: Bug
  Components: core/search
Affects Versions: 4.9
Reporter: Terry Smith
Priority: Minor
 Attachments: LUCENE-5796.patch


 I've isolated two example boolean queries that don't behave with release 4.9 
 of Lucene.
 # A BooleanQuery with three SHOULD clauses and a minimumNumberShouldMatch of 
 2 will throw an ArrayIndexOutOfBoundsException.
 {noformat}
 java.lang.ArrayIndexOutOfBoundsException: 2
   at 
 __randomizedtesting.SeedInfo.seed([2F79B3DF917D071B:2539E6DBC4DF793C]:0)
   at 
 org.apache.lucene.search.MinShouldMatchSumScorer.getChildren(MinShouldMatchSumScorer.java:119)
   at 
 org.apache.lucene.search.TestBooleanQueryVisitSubscorers$ScorerSummarizingCollector.summarizeScorer(TestBooleanQueryVisitSubscorers.java:261)
   at 
 org.apache.lucene.search.TestBooleanQueryVisitSubscorers$ScorerSummarizingCollector.setScorer(TestBooleanQueryVisitSubscorers.java:238)
   at 
 org.apache.lucene.search.Weight$DefaultBulkScorer.score(Weight.java:161)
   at 
 org.apache.lucene.search.AssertingBulkScorer.score(AssertingBulkScorer.java:64)
   at org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:621)
   at 
 org.apache.lucene.search.AssertingIndexSearcher.search(AssertingIndexSearcher.java:94)
   at org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:309)
   at 
 org.apache.lucene.search.TestBooleanQueryVisitSubscorers.testGetChildrenMinShouldMatchSumScorer(TestBooleanQueryVisitSubscorers.java:196)
 {noformat}
 # A BooleanQuery with two SHOULD clauses, one of which is a miss for all 
 documents in the current segment, will accidentally mask the scorer that was a 
 hit.
 Unit tests and patch based on {{branch_4x}} are available and will be 
 attached as soon as this ticket has a number.
 They are immediately available on GitHub on branch 
 [shebiki/bqgetchildren|https://github.com/shebiki/lucene-solr/commits/bqgetchildren]
  as commit 
 [c64bb6f|https://github.com/shebiki/lucene-solr/commit/c64bb6f2df8f33dd8daafc953d9c27b5cbf29fa3].
 I took the liberty of naming the relationship in BoostingScorer.getChildren() 
 {{BOOSTING}}. Suspect someone will offer a better name for this. Here is a 
 summary of the various relationships in play for all Scorer.getChildren() 
 implementations on {{branch_4x}} to help choose.
 || class || relationships
 | org.apache.lucene.search.AssertingScorer | SHOULD
 | org.apache.lucene.search.join.ToParentBlockJoinQuery.BlockJoinScorer | BLOCK_JOIN
 | org.apache.lucene.search.ConjunctionScorer | MUST
 | org.apache.lucene.search.ConstantScoreQuery.ConstantScorer | constant
 | org.apache.lucene.queries.function.BoostedQuery.CustomScorer | CUSTOM
 | org.apache.lucene.queries.CustomScoreQuery.CustomScorer | CUSTOM
 | org.apache.lucene.search.DisjunctionScorer | SHOULD
 | org.apache.lucene.facet.DrillSidewaysScorer.FakeScorer | MUST
 | org.apache.lucene.search.FilterScorer

[jira] [Commented] (LUCENE-5801) Resurrect org.apache.lucene.facet.util.OrdinalMappingAtomicReader

2014-07-02 Thread Nicola Buso (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5801?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14049948#comment-14049948
 ] 

Nicola Buso commented on LUCENE-5801:
-

You are right, Shai. I think that while there is an abstraction for the decode part 
of the values, the encode abstraction is missing.

 Resurrect org.apache.lucene.facet.util.OrdinalMappingAtomicReader
 -

 Key: LUCENE-5801
 URL: https://issues.apache.org/jira/browse/LUCENE-5801
 Project: Lucene - Core
  Issue Type: Bug
Affects Versions: 4.7
Reporter: Nicola Buso
 Attachments: LUCENE-5801.patch, LUCENE-5801_1.patch


 From Lucene 4.6.1 the class
 org.apache.lucene.facet.util.OrdinalMappingAtomicReader
 was removed; resurrect it because it is used when merging indexes related to merged 
 taxonomies.



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-5801) Resurrect org.apache.lucene.facet.util.OrdinalMappingAtomicReader

2014-07-02 Thread Shai Erera (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5801?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14049952#comment-14049952
 ] 

Shai Erera commented on LUCENE-5801:


FacetsConfig.dedupAndEncode is the encode abstraction. The decode abstraction 
is in the OrdinalsReader. Previously these were encapsulated in 
Encoder/Decoder interfaces, but since this added another API (sometimes 
confusing) and custom encoding of category ordinals is an extremely expert use case, 
I think the current abstractions are fine.

 Resurrect org.apache.lucene.facet.util.OrdinalMappingAtomicReader
 -

 Key: LUCENE-5801
 URL: https://issues.apache.org/jira/browse/LUCENE-5801
 Project: Lucene - Core
  Issue Type: Bug
Affects Versions: 4.7
Reporter: Nicola Buso
 Attachments: LUCENE-5801.patch, LUCENE-5801_1.patch


 From Lucene 4.6.1 the class
 org.apache.lucene.facet.util.OrdinalMappingAtomicReader
 was removed; resurrect it because it is used when merging indexes related to merged 
 taxonomies.



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-5768) Add a distrib.singlePass parameter to make EXECUTE_QUERY phase fetch all fields and skip GET_FIELDS

2014-07-02 Thread Gregg Donovan (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5768?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14049958#comment-14049958
 ] 

Gregg Donovan commented on SOLR-5768:
-

Thank you, Shalin!

 Add a distrib.singlePass parameter to make EXECUTE_QUERY phase fetch all 
 fields and skip GET_FIELDS
 ---

 Key: SOLR-5768
 URL: https://issues.apache.org/jira/browse/SOLR-5768
 Project: Solr
  Issue Type: Improvement
Reporter: Shalin Shekhar Mangar
Assignee: Shalin Shekhar Mangar
Priority: Minor
 Fix For: 5.0, 4.10

 Attachments: SOLR-5768.diff, SOLR-5768.diff, SOLR-5768.diff, 
 SOLR-5768.diff, SOLR-5768.diff, SOLR-5768.diff


 Suggested by Yonik on solr-user:
 http://www.mail-archive.com/solr-user@lucene.apache.org/msg95045.html
 {quote}
 Although it seems like it should be relatively simple to make it work
 with other fields as well, by passing down the complete fl requested
 if some optional parameter is set (distrib.singlePass?)
 {quote}



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-5796) Scorer.getChildren() can throw or hide a subscorer for some boolean queries

2014-07-02 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5796?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14049969#comment-14049969
 ] 

Robert Muir commented on LUCENE-5796:
-

I see: this makes sense. If you have a custom scorer you may need access to the 
raw one, so it makes sense to remove the transparency... I'll look at the 
patch again and reply if I have more questions.

 Scorer.getChildren() can throw or hide a subscorer for some boolean queries
 ---

 Key: LUCENE-5796
 URL: https://issues.apache.org/jira/browse/LUCENE-5796
 Project: Lucene - Core
  Issue Type: Bug
  Components: core/search
Affects Versions: 4.9
Reporter: Terry Smith
Priority: Minor
 Attachments: LUCENE-5796.patch


 I've isolated two example boolean queries that don't behave with release 4.9 
 of Lucene.
 # A BooleanQuery with three SHOULD clauses and a minimumNumberShouldMatch of 
 2 will throw an ArrayIndexOutOfBoundsException.
 {noformat}
 java.lang.ArrayIndexOutOfBoundsException: 2
   at 
 __randomizedtesting.SeedInfo.seed([2F79B3DF917D071B:2539E6DBC4DF793C]:0)
   at 
 org.apache.lucene.search.MinShouldMatchSumScorer.getChildren(MinShouldMatchSumScorer.java:119)
   at 
 org.apache.lucene.search.TestBooleanQueryVisitSubscorers$ScorerSummarizingCollector.summarizeScorer(TestBooleanQueryVisitSubscorers.java:261)
   at 
 org.apache.lucene.search.TestBooleanQueryVisitSubscorers$ScorerSummarizingCollector.setScorer(TestBooleanQueryVisitSubscorers.java:238)
   at 
 org.apache.lucene.search.Weight$DefaultBulkScorer.score(Weight.java:161)
   at 
 org.apache.lucene.search.AssertingBulkScorer.score(AssertingBulkScorer.java:64)
   at org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:621)
   at 
 org.apache.lucene.search.AssertingIndexSearcher.search(AssertingIndexSearcher.java:94)
   at org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:309)
   at 
 org.apache.lucene.search.TestBooleanQueryVisitSubscorers.testGetChildrenMinShouldMatchSumScorer(TestBooleanQueryVisitSubscorers.java:196)
 {noformat}
 # A BooleanQuery with two SHOULD clauses, one of which is a miss for all 
 documents in the current segment, will accidentally mask the scorer that was a 
 hit.
 Unit tests and patch based on {{branch_4x}} are available and will be 
 attached as soon as this ticket has a number.
 They are immediately available on GitHub on branch 
 [shebiki/bqgetchildren|https://github.com/shebiki/lucene-solr/commits/bqgetchildren]
  as commit 
 [c64bb6f|https://github.com/shebiki/lucene-solr/commit/c64bb6f2df8f33dd8daafc953d9c27b5cbf29fa3].
 I took the liberty of naming the relationship in BoostingScorer.getChildren() 
 {{BOOSTING}}. Suspect someone will offer a better name for this. Here is a 
 summary of the various relationships in play for all Scorer.getChildren() 
 implementations on {{branch_4x}} to help choose.
 || class || relationships
 | org.apache.lucene.search.AssertingScorer | SHOULD
 | org.apache.lucene.search.join.ToParentBlockJoinQuery.BlockJoinScorer | BLOCK_JOIN
 | org.apache.lucene.search.ConjunctionScorer | MUST
 | org.apache.lucene.search.ConstantScoreQuery.ConstantScorer | constant
 | org.apache.lucene.queries.function.BoostedQuery.CustomScorer | CUSTOM
 | org.apache.lucene.queries.CustomScoreQuery.CustomScorer | CUSTOM
 | org.apache.lucene.search.DisjunctionScorer | SHOULD
 | org.apache.lucene.facet.DrillSidewaysScorer.FakeScorer | MUST
 | org.apache.lucene.search.FilterScorer | calls in.getChildren()
 | org.apache.lucene.search.ScoreCachingWrappingScorer | CACHED
 | org.apache.lucene.search.FilteredQuery.LeapFrogScorer | FILTERED
 | org.apache.lucene.search.MinShouldMatchSumScorer | SHOULD
 | org.apache.lucene.search.FilteredQuery | FILTERED
 | org.apache.lucene.search.ReqExclScorer | MUST
 | org.apache.lucene.search.ReqOptSumScorer | MUST, SHOULD
 | org.apache.lucene.search.join.ToChildBlockJoinQuery | BLOCK_JOIN
 I also removed FilterScorer.getChildren() to prevent mistakes and force 
 subclasses to provide a correct implementation.



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-trunk-Linux (32bit/jdk1.7.0_60) - Build # 10703 - Still Failing!

2014-07-02 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/10703/
Java: 32bit/jdk1.7.0_60 -client -XX:+UseParallelGC

All tests passed

Build Log:
[...truncated 29143 lines...]
check-licenses:
 [echo] License check under: 
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr
 [licenses] CHECKSUM FAILED for 
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/contrib/dataimporthandler-extras/test-lib/greenmail-1.3.1b.jar
 (expected: 1e2727e8cae768b8f91dfc44cc5e8e1c4802c5c0  
/home/maven/repository-staging/to-ibiblio/maven2/com/icegreen/greenmail/1.3.1b/greenmail-1.3.1b.jar
 was: 1e2727e8cae768b8f91dfc44cc5e8e1c4802c5c0)
 [licenses] MISSING LICENSE for the following file:
 [licenses]   
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/contrib/dataimporthandler/lib/gimap-1.5.1.jar
 [licenses]   Expected locations below:
 [licenses]   = 
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/licenses/gimap-LICENSE-ASL.txt
 [licenses]   = 
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/licenses/gimap-LICENSE-BSD.txt
 [licenses]   = 
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/licenses/gimap-LICENSE-BSD_LIKE.txt
 [licenses]   = 
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/licenses/gimap-LICENSE-CDDL.txt
 [licenses]   = 
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/licenses/gimap-LICENSE-CPL.txt
 [licenses]   = 
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/licenses/gimap-LICENSE-EPL.txt
 [licenses]   = 
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/licenses/gimap-LICENSE-MIT.txt
 [licenses]   = 
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/licenses/gimap-LICENSE-MPL.txt
 [licenses]   = 
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/licenses/gimap-LICENSE-PD.txt
 [licenses]   = 
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/licenses/gimap-LICENSE-SUN.txt
 [licenses]   = 
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/licenses/gimap-LICENSE-COMPOUND.txt
 [licenses]   = 
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/licenses/gimap-LICENSE-FAKE.txt
 [licenses] MISSING LICENSE for the following file:
 [licenses]   
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/contrib/dataimporthandler/lib/javax.mail-1.5.1.jar
 [licenses]   Expected locations below:
 [licenses]   = 
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/licenses/javax.mail-LICENSE-ASL.txt
 [licenses]   = 
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/licenses/javax.mail-LICENSE-BSD.txt
 [licenses]   = 
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/licenses/javax.mail-LICENSE-BSD_LIKE.txt
 [licenses]   = 
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/licenses/javax.mail-LICENSE-CDDL.txt
 [licenses]   = 
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/licenses/javax.mail-LICENSE-CPL.txt
 [licenses]   = 
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/licenses/javax.mail-LICENSE-EPL.txt
 [licenses]   = 
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/licenses/javax.mail-LICENSE-MIT.txt
 [licenses]   = 
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/licenses/javax.mail-LICENSE-MPL.txt
 [licenses]   = 
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/licenses/javax.mail-LICENSE-PD.txt
 [licenses]   = 
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/licenses/javax.mail-LICENSE-SUN.txt
 [licenses]   = 
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/licenses/javax.mail-LICENSE-COMPOUND.txt
 [licenses]   = 
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/licenses/javax.mail-LICENSE-FAKE.txt
 [licenses] Scanned 208 JAR file(s) for licenses (in 1.27s.), 3 error(s).

BUILD FAILED
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/build.xml:467: The following 
error occurred while executing this line:
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/build.xml:70: The following 
error occurred while executing this line:
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/build.xml:254: The 
following error occurred while executing this line:
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/lucene/tools/custom-tasks.xml:62:
 License check failed. Check the logs.

Total time: 77 minutes 8 seconds
Build step 'Invoke Ant' marked build as failure
[description-setter] Description set: Java: 32bit/jdk1.7.0_60 -client 
-XX:+UseParallelGC
Archiving artifacts
Recording test results
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

[jira] [Commented] (LUCENE-5801) Resurrect org.apache.lucene.facet.util.OrdinalMappingAtomicReader

2014-07-02 Thread Nicola Buso (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5801?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14050278#comment-14050278
 ] 

Nicola Buso commented on LUCENE-5801:
-

Then would you prefer to have an o.a.l.f.t.utils.DefaultEncoding with a static 
method called by both FacetsConfig.dedup.. and OrdinalMappingAtomicReader?

 Resurrect org.apache.lucene.facet.util.OrdinalMappingAtomicReader
 -

 Key: LUCENE-5801
 URL: https://issues.apache.org/jira/browse/LUCENE-5801
 Project: Lucene - Core
  Issue Type: Bug
Affects Versions: 4.7
Reporter: Nicola Buso
 Attachments: LUCENE-5801.patch, LUCENE-5801_1.patch


 From Lucene 4.6.1 the class
 org.apache.lucene.facet.util.OrdinalMappingAtomicReader
 was removed; resurrect it because it is used when merging indexes related to merged 
 taxonomies.



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-6220) Replica placement strategy for solrcloud

2014-07-02 Thread Steve Rowe (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6220?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Rowe updated SOLR-6220:
-

Summary: Replica placement strategy for solrcloud  (was: Replica placement 
startegy for solrcloud)

 Replica placement strategy for solrcloud
 

 Key: SOLR-6220
 URL: https://issues.apache.org/jira/browse/SOLR-6220
 Project: Solr
  Issue Type: Bug
  Components: SolrCloud
Reporter: Noble Paul
Assignee: Noble Paul

 h1.Objective
 Most cloud-based systems allow specifying rules on how the replicas/nodes of 
 a cluster are allocated. Solr should have a flexible mechanism through which 
 we are able to control the allocation of replicas, or later change it to 
 suit the needs of the system.
 All configurations are on a per-collection basis. The rules are applied whenever a 
 replica is created in any of the shards in a given collection during
  * collection creation
  * shard splitting
  * add replica
  * createsshard
 There are two aspects to how replicas are placed: snitch and placement. 
 h2.snitch 
 How to identify the tags of nodes. Snitches are configured through the collection 
 create command with the snitch prefix, e.g. snitch.type=EC2Snitch.
 The system provides the following implicit tag names which cannot be used by 
 other snitches
  * node : The solr nodename
  * host : The hostname
  * ip : The ip address of the host
  * cores : This is a dynamic variable which gives the core count at any given 
 point 
  * disk : This is a dynamic variable which gives the available disk space at 
 any given point
 There will be a few snitches provided by the system, such as 
 h3.EC2Snitch
 Provides two tags called dc, rack from the region and zone values in EC2
 h3.IPSnitch 
 Use the IP to infer the “dc” and “rack” values
 h3.NodePropertySnitch 
 This lets users provide system properties to each node with a tag name and value.
 Example: -Dsolrcloud.snitch.vals=tag-x:val-a,tag-y:val-b. This means this 
 particular node will have two tags “tag-x” and “tag-y”.
  
 h3.RestSnitch 
  Which lets the user configure a URL which the server can invoke to get all 
 the tags for a given node. 
 This takes extra parameters in the create command,
 example:  
 {{snitch.type=RestSnitch&snitch.url=http://snitchserverhost:port?nodename={}}}
 The response of the REST call 
 {{http://snitchserverhost:port/?nodename=192.168.1:8080_solr}}
 must be in either JSON format or properties format. 
 eg: 
 {code:JavaScript}
 {
   "tag-x": "x-val",
   "tag-y": "y-val"
 }
 {code}
 or
 {noformat}
 tag-x=x-val
 tag-y=y-val
 {noformat}
 h3.ManagedSnitch
 This snitch keeps a list of nodes and their tag value pairs in Zookeeper. The 
 user should be able to manage the tags and values of each node through a 
 collection API 
 h2.Placement 
 This tells how many replicas for a given shard need to be assigned to nodes 
 with the given key value pairs. These parameters will be passed on to the 
 collection CREATE API as a parameter placement. The values will be saved 
 in the state of the collection as follows
 {code:Javascript}
 {
   "mycollection": {
     "snitch": {
       "type": "EC2Snitch"
     },
     "placement": {
       "key1": "value1",
       "key2": "value2"
     }
   }
 }
 {code}
 A rule consists of 2 parts:
  * LHS or the qualifier. The format is 
 \{shardname}.\{replicacount}\{quantifier}. Use the wild card “*” for 
 qualifying all. Quantifiers are
  ** no value means exactly equal. e.g.: 2 means exactly 2
  ** + means greater than or equal. e.g.: 2+ means 2 or more
  ** \- means less than. e.g.: 2- means less than 2
  * RHS or conditions: The format is \{tagname}\{operand}\{value}. The tag 
 name and values are provided by the snitch. The supported operands are
  ** - : equals
  ** > : greater than. Only applicable to numeric tags
  ** < : less than. Only applicable to numeric tags
  ** ! : NOT, or not equals
 Each collection can have any number of rules. As long as the rules do not 
 conflict with each other it should be OK; otherwise an error is thrown.
 Example rules:
  * “shard1.1”:“dc-dc1,rack-168” : This would assign exactly 1 replica for 
 shard1 to nodes having tags “dc=dc1,rack=168”.
  * “shard1.1+”:“dc-dc1,rack-168” : Same as above but assigns at least one 
 replica to the tag/value combination
  * “*.1”:“dc-dc1” : For all shards keep exactly one replica in dc:dc1
  * “*.1+”:”dc-dc2” : At least one replica needs to be in dc:dc2
  * “*.2-”:”dc-dc3” : Keep a maximum of 2 replicas in dc:dc3 for all shards
  * “shard1.*”:”rack-730” : All replicas of shard1 will go to rack 730
  * “shard1.1”:“node-192.167.1.2:8983_solr” : 1 replica of shard1 must go to 
 the node 192.167.1.2:8983_solr
  * “shard1.* : “rack!738” : No replica of shard1 should go to rack 738 
  * “shard1.* : “host!192.168.89.91” : No replica of shard1 should go 

[jira] [Commented] (SOLR-5473) Make one state.json per collection

2014-07-02 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5473?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14050310#comment-14050310
 ] 

ASF subversion and git services commented on SOLR-5473:
---

Commit 1607418 from [~noble.paul] in branch 'dev/trunk'
[ https://svn.apache.org/r1607418 ]

SOLR-5473 one state.json per collection , SOLR-5474 support for one json per 
collection

 Make one state.json per collection
 --

 Key: SOLR-5473
 URL: https://issues.apache.org/jira/browse/SOLR-5473
 Project: Solr
  Issue Type: Sub-task
  Components: SolrCloud
Reporter: Noble Paul
Assignee: Noble Paul
 Fix For: 5.0

 Attachments: SOLR-5473-74 .patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-configname-fix.patch, SOLR-5473.patch, 
 SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, 
 SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, 
 SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, 
 SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, 
 SOLR-5473.patch, SOLR-5473_undo.patch, ec2-23-20-119-52_solr.log, 
 ec2-50-16-38-73_solr.log


 As defined in the parent issue, store the states of each collection under 
 /collections/collectionname/state.json node



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-5474) Add stateFormat=2 support to CloudSolrServer

2014-07-02 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5474?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14050311#comment-14050311
 ] 

ASF subversion and git services commented on SOLR-5474:
---

Commit 1607418 from [~noble.paul] in branch 'dev/trunk'
[ https://svn.apache.org/r1607418 ]

SOLR-5473 one state.json per collection , SOLR-5474 support for one json per 
collection

 Add  stateFormat=2 support to CloudSolrServer
 -

 Key: SOLR-5474
 URL: https://issues.apache.org/jira/browse/SOLR-5474
 Project: Solr
  Issue Type: Sub-task
  Components: SolrCloud
Reporter: Noble Paul
Assignee: Noble Paul
 Fix For: 5.0

 Attachments: SOLR-5474.patch, SOLR-5474.patch, SOLR-5474.patch, 
 fail.logs


 In this mode SolrJ would not watch any ZK node.
 It fetches the state on demand and caches the most recently used n 
 collections in memory.
 SolrJ would not listen to any ZK node. When a request comes for a collection 
 ‘xcoll’,
 it would first check if such a collection exists.
 If yes, it first looks up the details in the local cache for that collection.
 If not found in the cache, it fetches the node /collections/xcoll/state.json and 
 caches the information.
 Any query/update will be sent with an extra query param specifying the 
 collection name and version (example \_stateVer=xcoll:34). A node would throw 
 an error (INVALID_NODE) if it does not have the right version.
 If SolrJ gets an INVALID_NODE error it would invalidate the cache and fetch 
 fresh state information for that collection (and cache it again).
 If there is a connection timeout, SolrJ assumes the node is down and re-fetches 
 the state for the collection and tries again.
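
 A hand-wavy, self-contained sketch of that client-side flow (all class and method names below are made up for illustration; this is not the actual SolrJ/CloudSolrServer code):

 {code:java}
 import java.util.LinkedHashMap;
 import java.util.Map;

 class StateCacheSketch {
   // most-recently-used cache of collection name -> {state json, version}
   private final Map<String, String[]> cache = new LinkedHashMap<>(16, 0.75f, true);

   String[] getState(String coll) {
     String[] cached = cache.get(coll);
     if (cached != null) return cached;               // cache hit: no ZK read
     String[] fresh = fetchStateJson(coll);           // read /collections/<coll>/state.json
     cache.put(coll, fresh);
     return fresh;
   }

   String query(String coll, String q) {
     String[] state = getState(coll);
     // every request carries the cached version, e.g. _stateVer=xcoll:34
     String withVersion = q + "&_stateVer=" + coll + ":" + state[1];
     try {
       return send(withVersion);
     } catch (IllegalStateException invalidNode) {    // stand-in for an INVALID_NODE error
       cache.remove(coll);                            // invalidate, re-fetch, retry once
       String[] fresh = getState(coll);
       return send(q + "&_stateVer=" + coll + ":" + fresh[1]);
     }
   }

   // stubs standing in for the real ZooKeeper read and HTTP request
   private String[] fetchStateJson(String coll) { return new String[] {"{}", "0"}; }
   private String send(String params) { return "ok"; }
 }
 {code}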



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-5473) Make one state.json per collection

2014-07-02 Thread Noble Paul (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5473?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Noble Paul updated SOLR-5473:
-

Attachment: SOLR-5473-74.patch

The latest patch that is committed

 Make one state.json per collection
 --

 Key: SOLR-5473
 URL: https://issues.apache.org/jira/browse/SOLR-5473
 Project: Solr
  Issue Type: Sub-task
  Components: SolrCloud
Reporter: Noble Paul
Assignee: Noble Paul
 Fix For: 5.0

 Attachments: SOLR-5473-74 .patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-configname-fix.patch, 
 SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, 
 SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, 
 SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, 
 SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, 
 SOLR-5473.patch, SOLR-5473.patch, SOLR-5473_undo.patch, 
 ec2-23-20-119-52_solr.log, ec2-50-16-38-73_solr.log


 As defined in the parent issue, store the states of each collection under 
 /collections/collectionname/state.json node



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6157) ReplicationFactorTest hangs

2014-07-02 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6157?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14050321#comment-14050321
 ] 

ASF subversion and git services commented on SOLR-6157:
---

Commit 1607420 from [~thelabdude] in branch 'dev/trunk'
[ https://svn.apache.org/r1607420 ]

SOLR-6157: Disable this test for now as it is still hanging on Jenkins.

 ReplicationFactorTest hangs
 ---

 Key: SOLR-6157
 URL: https://issues.apache.org/jira/browse/SOLR-6157
 Project: Solr
  Issue Type: Bug
  Components: replication (java)
Reporter: Uwe Schindler
Assignee: Timothy Potter
 Fix For: 4.10


 See: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/10517/
 You can download all logs from there.



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-5473) Make one state.json per collection

2014-07-02 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5473?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14050330#comment-14050330
 ] 

Mark Miller commented on SOLR-5473:
---

I see you addressed very little of what has come up. Not even the basic issues 
like a test named *External* and the horrendous ZkStateReader in 
clusterstate.json. This is also still not Make one state.json per collection, 
but a bunch of issues all connected to that.

I maintain my -1 on this commit and again urge that this gets moved to a branch.

There are so many ugly changes in here, I wish some others would review it 
closely as well.

I apologize I was not able to provide feedback sooner. Personal life and work 
have been a zoo lately.

Given that I can't contribute much in the near term, I can only ask so much I 
guess. But at a minimum, I have a firm -1 on the ZkStateReader being 
part of the cluster state.

 Make one state.json per collection
 --

 Key: SOLR-5473
 URL: https://issues.apache.org/jira/browse/SOLR-5473
 Project: Solr
  Issue Type: Sub-task
  Components: SolrCloud
Reporter: Noble Paul
Assignee: Noble Paul
 Fix For: 5.0

 Attachments: SOLR-5473-74 .patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-configname-fix.patch, 
 SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, 
 SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, 
 SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, 
 SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, 
 SOLR-5473.patch, SOLR-5473.patch, SOLR-5473_undo.patch, 
 ec2-23-20-119-52_solr.log, ec2-50-16-38-73_solr.log


 As defined in the parent issue, store the states of each collection under 
 /collections/collectionname/state.json node



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-5473) Make one state.json per collection

2014-07-02 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5473?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14050346#comment-14050346
 ] 

Mark Miller commented on SOLR-5473:
---

I already talked about a lot of the problems that still exist, so I won't go 
over all of them again. But there are lots of odd things, for example:

{code}
+ /** 
+ * this is only set by Overseer not to be set by others and only set inside 
the Overseer node. If Overseer has 
+ unfinished external collections which are yet to be persisted to ZK 
+ this map is populated and this class can use that information 
+ @param map The map reference 
+ */ 
+ public void setEphemeralCollectionData(Map map){ 
+ ephemeralCollectionData = map; 
+ } 
{code}

That is pretty ugly, it still talks about external collections and it's pretty 
cryptic in general. It appears to basically say: this is a hack for the 
overseer, on the very user-centric class. We can't just keep shoving in 
shortcuts and ugliness. You keep doing that and all the APIs will fall under 
the weight. This patch adds so much complexity and API pain IMO. I'm fine with 
getting the end result done, but I'm not fine with not taking the time to get 
the code and APIs and tests right for it.

Most of the changes you listed above do not cover the majority of the problems 
I've talked about above.

 Make one state.json per collection
 --

 Key: SOLR-5473
 URL: https://issues.apache.org/jira/browse/SOLR-5473
 Project: Solr
  Issue Type: Sub-task
  Components: SolrCloud
Reporter: Noble Paul
Assignee: Noble Paul
 Fix For: 5.0

 Attachments: SOLR-5473-74 .patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-configname-fix.patch, 
 SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, 
 SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, 
 SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, 
 SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, 
 SOLR-5473.patch, SOLR-5473.patch, SOLR-5473_undo.patch, 
 ec2-23-20-119-52_solr.log, ec2-50-16-38-73_solr.log


 As defined in the parent issue, store the states of each collection under 
 /collections/collectionname/state.json node



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-5473) Make one state.json per collection

2014-07-02 Thread Noble Paul (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5473?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14050364#comment-14050364
 ] 

Noble Paul commented on SOLR-5473:
--

bq. But at a minimum, I have a firm -1 on the ZkStateReader being part of 
the cluster state.

As I said, the objective has to be to eliminate ClusterState.java altogether 
because clusterstate.json will not exist. But till it exists, I admit, I 
did not have a better choice. 

bq. it still talks about external collections and it's pretty cryptic in general

I did miss that; I shall rectify it.



 Make one state.json per collection
 --

 Key: SOLR-5473
 URL: https://issues.apache.org/jira/browse/SOLR-5473
 Project: Solr
  Issue Type: Sub-task
  Components: SolrCloud
Reporter: Noble Paul
Assignee: Noble Paul
 Fix For: 5.0

 Attachments: SOLR-5473-74 .patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-configname-fix.patch, 
 SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, 
 SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, 
 SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, 
 SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, 
 SOLR-5473.patch, SOLR-5473.patch, SOLR-5473_undo.patch, 
 ec2-23-20-119-52_solr.log, ec2-50-16-38-73_solr.log


 As defined in the parent issue, store the states of each collection under 
 /collections/collectionname/state.json node



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-5147) Support Block Join documents in DIH

2014-07-02 Thread Ted Sullivan (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5147?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14050363#comment-14050363
 ] 

Ted Sullivan commented on SOLR-5147:


Agree with [~rkesamre...@gmail.com] - I also have an ongoing project that could 
use this functionality, which is even more compelling given the recent 
enhancements in 4.9 (SOLR-5285). I too will volunteer to get it up to date and 
complete if it can be assigned to someone.

 Support Block Join documents in DIH
 ---

 Key: SOLR-5147
 URL: https://issues.apache.org/jira/browse/SOLR-5147
 Project: Solr
  Issue Type: Sub-task
Reporter: Vadim Kirilchuk
 Fix For: 4.9, 5.0


 DIH should be able to index hierarchical documents, i.e. it should be able to 
 work with SolrInputDocuments#addChildDocument.
 There was a patch in SOLR-3076: 
 https://issues.apache.org/jira/secure/attachment/12576960/dih-3076.patch
 But it is not up to date and far from complete.



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-5473) Make one state.json per collection

2014-07-02 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5473?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14050372#comment-14050372
 ] 

Mark Miller commented on SOLR-5473:
---

bq. But till it exists, I admit, I did not have a better choice.

IMO, there is no consensus on this patch, so the current commit is not a choice 
- I maintain my official -1 veto and would like you to revert the commit.

 Make one state.json per collection
 --

 Key: SOLR-5473
 URL: https://issues.apache.org/jira/browse/SOLR-5473
 Project: Solr
  Issue Type: Sub-task
  Components: SolrCloud
Reporter: Noble Paul
Assignee: Noble Paul
 Fix For: 5.0

 Attachments: SOLR-5473-74 .patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-configname-fix.patch, 
 SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, 
 SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, 
 SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, 
 SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, 
 SOLR-5473.patch, SOLR-5473.patch, SOLR-5473_undo.patch, 
 ec2-23-20-119-52_solr.log, ec2-50-16-38-73_solr.log


 As defined in the parent issue, store the states of each collection under 
 /collections/collectionname/state.json node



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-6216) Better faceting for multiple intervals on DV fields

2014-07-02 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/SOLR-6216?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tomás Fernández Löbbe updated SOLR-6216:


Attachment: SOLR-6216.patch

Added some more javadocs and cleaned up tests.

 Better faceting for multiple intervals on DV fields
 ---

 Key: SOLR-6216
 URL: https://issues.apache.org/jira/browse/SOLR-6216
 Project: Solr
  Issue Type: Improvement
Reporter: Tomás Fernández Löbbe
 Attachments: SOLR-6216.patch, SOLR-6216.patch, SOLR-6216.patch, 
 SOLR-6216.patch, SOLR-6216.patch


 There are two ways to have faceting on value ranges in Solr right now: 
 “Range Faceting” and “Query Faceting” (doing range queries). They both end up 
 doing something similar:
 {code:java}
 searcher.numDocs(rangeQ, docs)
 {code}
 The good thing about this implementation is that it can benefit from caching. 
 The bad thing is that it may be slow with cold caches, and that there will be 
 a query for each of the ranges.
 A different implementation would be one that works similarly to regular field 
 faceting, using doc values and validating ranges for each value of the 
 matching documents. This implementation would sometimes be faster than Range 
 Faceting / Query Faceting, especially in cases where caches are not very 
 effective, such as with a high update rate, or where ranges change frequently.
 Functionally, the result should be exactly the same as the one obtained by 
 doing a facet query for every interval.
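
 A rough sketch of the doc-values approach being described (written against the 4.x NumericDocValues API; the field name, interval bounds, matching-docs bitset, and the collectMatches() helper are illustrative, and `reader` is assumed to be an open IndexReader):

 {code:java}
 // one pass over the matching documents, testing each value against every interval,
 // instead of issuing one range query per interval
 NumericDocValues dv = MultiDocValues.getNumericValues(reader, "price"); // example field
 long[] starts = {0, 100, 500};      // illustrative interval bounds, [start, end)
 long[] ends   = {100, 500, 1000};
 int[] counts  = new int[starts.length];

 java.util.BitSet matchingDocs = collectMatches(); // assumed: doc ids matched by the base query
 for (int doc = matchingDocs.nextSetBit(0); doc >= 0; doc = matchingDocs.nextSetBit(doc + 1)) {
   long v = dv.get(doc);                           // doc value for this matching document
   for (int i = 0; i < starts.length; i++) {       // validate against each configured interval
     if (v >= starts[i] && v < ends[i]) {
       counts[i]++;
     }
   }
 }
 {code}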



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



TestLeaderElectionZkExpiry test failure

2014-07-02 Thread Steve Molloy
I'm having test failures when running the TestLeaderElectionZkExpiry test 
locally after applying the SOLR-6159 patch on 4.9. Basically, it seems like 
path is lost when retrying delete, ending up in IllegalArgumentException. I got 
around it locally by adding:

  if (path == null) {
    throw new NoNodeException("Path is null");
  }

at the start of the ZkOperation's execute method. The exception is then logged 
and deemed fine by ElectionContext trying to cancel election. Has anyone else 
run into the issue? Is it really safe to ignore or could it have impacts I 
haven't seen? 

Thanks,
Steve
-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: TestLeaderElectionZkExpiry test failure

2014-07-02 Thread Shalin Shekhar Mangar
I'll take a look, Steve.


On Wed, Jul 2, 2014 at 11:08 PM, Steve Molloy smol...@opentext.com wrote:

 I'm having test failures when running the TestLeaderElectionZkExpiry test
 locally after applying the SOLR-6159 patch on 4.9. Basically, it seems like
 path is lost when retrying delete, ending up in IllegalArgumentException. I
 got around it locally by adding:

   if (path == null) {
     throw new NoNodeException("Path is null");
   }

 at the start of the ZkOperation's execute method. The exception is then
 logged and deemed fine by ElectionContext trying to cancel election. Has
 anyone else run into the issue? Is it really safe to ignore or could it
 have impacts I haven't seen?

 Thanks,
 Steve
 -
 To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
 For additional commands, e-mail: dev-h...@lucene.apache.org




-- 
Regards,
Shalin Shekhar Mangar.


RE: TestLeaderElectionZkExpiry test failure

2014-07-02 Thread Steve Molloy



Thanks, in case it might help, I was running this on my laptop with java 1.8.0_05 on Ubuntu 14.04.

Steve


From: Shalin Shekhar Mangar [shalinman...@gmail.com]
Sent: July 2, 2014 1:42 PM
To: dev@lucene.apache.org
Subject: Re: TestLeaderElectionZkExpiry test failure




I'll take a look, Steve.


On Wed, Jul 2, 2014 at 11:08 PM, Steve Molloy 
smol...@opentext.com wrote:

I'm having test failures when running the TestLeaderElectionZkExpiry test locally after applying the SOLR-6159 patch on 4.9. Basically, it seems like path is lost when retrying delete, ending up in IllegalArgumentException. I got around it locally by adding:

 if (path == null) {
   throw new NoNodeException("Path is null");
 }

at the start of the ZkOperation's execute method. The exception is then logged and deemed fine by ElectionContext trying to cancel election. Has anyone else run into the issue? Is it really safe to ignore or could it have impacts I haven't seen?

Thanks,
Steve
-
To unsubscribe, e-mail: 
dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: 
dev-h...@lucene.apache.org







-- 
Regards,
Shalin Shekhar Mangar. 






-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-NightlyTests-trunk - Build # 576 - Failure

2014-07-02 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-trunk/576/

All tests passed

Build Log:
[...truncated 29665 lines...]
check-licenses:
 [echo] License check under: 
/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-NightlyTests-trunk/solr
 [licenses] CHECKSUM FAILED for 
/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-NightlyTests-trunk/solr/contrib/dataimporthandler-extras/test-lib/greenmail-1.3.1b.jar
 (expected: 1e2727e8cae768b8f91dfc44cc5e8e1c4802c5c0  
/home/maven/repository-staging/to-ibiblio/maven2/com/icegreen/greenmail/1.3.1b/greenmail-1.3.1b.jar
 was: 1e2727e8cae768b8f91dfc44cc5e8e1c4802c5c0)
 [licenses] MISSING LICENSE for the following file:
 [licenses]   
/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-NightlyTests-trunk/solr/contrib/dataimporthandler/lib/gimap-1.5.1.jar
 [licenses]   Expected locations below:
 [licenses]   = 
/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-NightlyTests-trunk/solr/licenses/gimap-LICENSE-ASL.txt
 [licenses]   = 
/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-NightlyTests-trunk/solr/licenses/gimap-LICENSE-BSD.txt
 [licenses]   = 
/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-NightlyTests-trunk/solr/licenses/gimap-LICENSE-BSD_LIKE.txt
 [licenses]   = 
/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-NightlyTests-trunk/solr/licenses/gimap-LICENSE-CDDL.txt
 [licenses]   = 
/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-NightlyTests-trunk/solr/licenses/gimap-LICENSE-CPL.txt
 [licenses]   = 
/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-NightlyTests-trunk/solr/licenses/gimap-LICENSE-EPL.txt
 [licenses]   = 
/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-NightlyTests-trunk/solr/licenses/gimap-LICENSE-MIT.txt
 [licenses]   = 
/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-NightlyTests-trunk/solr/licenses/gimap-LICENSE-MPL.txt
 [licenses]   = 
/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-NightlyTests-trunk/solr/licenses/gimap-LICENSE-PD.txt
 [licenses]   = 
/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-NightlyTests-trunk/solr/licenses/gimap-LICENSE-SUN.txt
 [licenses]   = 
/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-NightlyTests-trunk/solr/licenses/gimap-LICENSE-COMPOUND.txt
 [licenses]   = 
/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-NightlyTests-trunk/solr/licenses/gimap-LICENSE-FAKE.txt
 [licenses] MISSING LICENSE for the following file:
 [licenses]   
/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-NightlyTests-trunk/solr/contrib/dataimporthandler/lib/javax.mail-1.5.1.jar
 [licenses]   Expected locations below:
 [licenses]   = 
/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-NightlyTests-trunk/solr/licenses/javax.mail-LICENSE-ASL.txt
 [licenses]   = 
/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-NightlyTests-trunk/solr/licenses/javax.mail-LICENSE-BSD.txt
 [licenses]   = 
/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-NightlyTests-trunk/solr/licenses/javax.mail-LICENSE-BSD_LIKE.txt
 [licenses]   = 
/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-NightlyTests-trunk/solr/licenses/javax.mail-LICENSE-CDDL.txt
 [licenses]   = 
/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-NightlyTests-trunk/solr/licenses/javax.mail-LICENSE-CPL.txt
 [licenses]   = 
/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-NightlyTests-trunk/solr/licenses/javax.mail-LICENSE-EPL.txt
 [licenses]   = 
/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-NightlyTests-trunk/solr/licenses/javax.mail-LICENSE-MIT.txt
 [licenses]   = 
/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-NightlyTests-trunk/solr/licenses/javax.mail-LICENSE-MPL.txt
 [licenses]   = 
/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-NightlyTests-trunk/solr/licenses/javax.mail-LICENSE-PD.txt
 [licenses]   = 
/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-NightlyTests-trunk/solr/licenses/javax.mail-LICENSE-SUN.txt
 [licenses]   = 
/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-NightlyTests-trunk/solr/licenses/javax.mail-LICENSE-COMPOUND.txt
 [licenses]   = 
/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-NightlyTests-trunk/solr/licenses/javax.mail-LICENSE-FAKE.txt
 [licenses] Scanned 208 JAR file(s) for licenses (in 1.51s.), 3 error(s).

BUILD FAILED
/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-NightlyTests-trunk/build.xml:474:
 The following error occurred while executing this line:
/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-NightlyTests-trunk/build.xml:70:
 The following error occurred while executing this line:
/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-NightlyTests-trunk/solr/build.xml:254:
 The following error occurred while executing this line:
/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-NightlyTests-trunk/lucene/tools/custom-tasks.xml:62:
 License check failed. Check the logs.

Total time: 225 minutes 3 seconds
Build step 'Invoke Ant' marked build as failure
Archiving artifacts
Sending artifact delta relative to Lucene-Solr-NightlyTests-trunk #564

Re: TestLeaderElectionZkExpiry test failure

2014-07-02 Thread Shalin Shekhar Mangar
Can you please attach the test logs from your machine? I haven't seen this
test fail locally or on jenkins yet.


On Wed, Jul 2, 2014 at 11:18 PM, Steve Molloy smol...@opentext.com wrote:

  Thanks, in case it might help, I was running this on my laptop with java
 1.8.0_05 on Ubuntu 14.04.

 Steve
  --
 *From:* Shalin Shekhar Mangar [shalinman...@gmail.com]
 *Sent:* July 2, 2014 1:42 PM
 *To:* dev@lucene.apache.org
 *Subject:* Re: TestLeaderElectionZkExpiry test failure

   I'll take a look, Steve.


 On Wed, Jul 2, 2014 at 11:08 PM, Steve Molloy smol...@opentext.com
 wrote:

 I'm having test failures when running the TestLeaderElectionZkExpiry test
 locally after applying the SOLR-6159 patch on 4.9. Basically, it seems like the
 path is lost when the delete is retried, ending up in an
 IllegalArgumentException. I got around it locally by adding:

   if (path == null) {
     throw new NoNodeException("Path is null");
   }

 at the start of the ZkOperation's execute method. The exception is then
 logged and treated as harmless by ElectionContext when it tries to cancel the
 election. Has anyone else run into this issue? Is it really safe to ignore, or
 could it have impacts I haven't seen?
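 For context, a minimal sketch of how that guard might sit at the top of an
 execute method (the class shape, field, and helper below are assumptions for
 illustration, not the actual ZkOperation in the Solr source):

   import org.apache.zookeeper.KeeperException.NoNodeException;

   // Hypothetical sketch only: the surrounding class and members are assumed.
   abstract class ZkOperation {

     /** Path being operated on; may be null after a retry, per the report above. */
     protected String path;

     public Object execute() throws NoNodeException {
       // Workaround guard: turn a lost path into a NoNodeException, which
       // ElectionContext logs and tolerates while cancelling the election.
       if (path == null) {
         throw new NoNodeException("Path is null");
       }
       return doExecute();
     }

     protected abstract Object doExecute() throws NoNodeException;
   }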

 Thanks,
 Steve
 -
 To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
 For additional commands, e-mail: dev-h...@lucene.apache.org




  --
 Regards,
 Shalin Shekhar Mangar.
 -
 To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
 For additional commands, e-mail: dev-h...@lucene.apache.org




-- 
Regards,
Shalin Shekhar Mangar.


[jira] [Commented] (SOLR-5473) Make one state.json per collection

2014-07-02 Thread Noble Paul (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5473?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14050488#comment-14050488
 ] 

Noble Paul commented on SOLR-5473:
--

 is ClusterState referring to ZkStateReader the only problem that is not 
addressed?

 Make one state.json per collection
 --

 Key: SOLR-5473
 URL: https://issues.apache.org/jira/browse/SOLR-5473
 Project: Solr
  Issue Type: Sub-task
  Components: SolrCloud
Reporter: Noble Paul
Assignee: Noble Paul
 Fix For: 5.0

 Attachments: SOLR-5473-74 .patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-configname-fix.patch, 
 SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, 
 SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, 
 SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, 
 SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, 
 SOLR-5473.patch, SOLR-5473.patch, SOLR-5473_undo.patch, 
 ec2-23-20-119-52_solr.log, ec2-50-16-38-73_solr.log


 As defined in the parent issue, store the states of each collection under 
 /collections/collectionname/state.json node



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Issue Comment Deleted] (SOLR-5473) Make one state.json per collection

2014-07-02 Thread Noble Paul (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5473?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Noble Paul updated SOLR-5473:
-

Comment: was deleted

(was:  is ClusterState referring to ZkStateReader the only problem that is not 
addressed?)

 Make one state.json per collection
 --

 Key: SOLR-5473
 URL: https://issues.apache.org/jira/browse/SOLR-5473
 Project: Solr
  Issue Type: Sub-task
  Components: SolrCloud
Reporter: Noble Paul
Assignee: Noble Paul
 Fix For: 5.0

 Attachments: SOLR-5473-74 .patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-configname-fix.patch, 
 SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, 
 SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, 
 SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, 
 SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, 
 SOLR-5473.patch, SOLR-5473.patch, SOLR-5473_undo.patch, 
 ec2-23-20-119-52_solr.log, ec2-50-16-38-73_solr.log


 As defined in the parent issue, store the states of each collection under 
 /collections/collectionname/state.json node



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-5801) Resurrect org.apache.lucene.facet.util.OrdinalMappingAtomicReader

2014-07-02 Thread Shai Erera (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5801?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14050529#comment-14050529
 ] 

Shai Erera commented on LUCENE-5801:


Hmm, I thought about it; maybe we don't need the static method at all. 
OrdinalMappingAtomicReader.dedupAndEncode() will use a FacetsConfig private 
extension which exposes its dedupAndEncode()? Yes, this isn't what FacetsConfig 
is for, but I think it's better than exposing a public static method. Would 
you mind giving it a try?
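For illustration, a rough sketch of that private-extension idea (only
FacetsConfig and its protected dedupAndEncode() are taken from the comment
above; the helper class and method names are assumptions):

  import org.apache.lucene.facet.FacetsConfig;
  import org.apache.lucene.util.BytesRef;
  import org.apache.lucene.util.IntsRef;

  // Sketch only: widen FacetsConfig#dedupAndEncode via a private subclass
  // instead of exposing a public static method.
  class OrdinalMappingSupport {

    // Private FacetsConfig extension whose only job is to expose the
    // protected dedupAndEncode() to the enclosing class.
    private static final class InnerFacetsConfig extends FacetsConfig {
      BytesRef encode(IntsRef ordinals) {
        return dedupAndEncode(ordinals);
      }
    }

    private final InnerFacetsConfig encoder = new InnerFacetsConfig();

    BytesRef remapAndEncode(IntsRef mappedOrdinals) {
      // Re-encode the already-mapped ordinals for the target index.
      return encoder.encode(mappedOrdinals);
    }
  }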

 Resurrect org.apache.lucene.facet.util.OrdinalMappingAtomicReader
 -

 Key: LUCENE-5801
 URL: https://issues.apache.org/jira/browse/LUCENE-5801
 Project: Lucene - Core
  Issue Type: Bug
Affects Versions: 4.7
Reporter: Nicola Buso
 Attachments: LUCENE-5801.patch, LUCENE-5801_1.patch


 from lucene  4.6.1 the class:
 org.apache.lucene.facet.util.OrdinalMappingAtomicReader
 was removed; resurrect it because used merging indexes related to merged 
 taxonomies.



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-trunk-Windows (32bit/jdk1.8.0_05) - Build # 4159 - Still Failing!

2014-07-02 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Windows/4159/
Java: 32bit/jdk1.8.0_05 -client -XX:+UseG1GC

All tests passed

Build Log:
[...truncated 29119 lines...]
check-licenses:
 [echo] License check under: 
C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\solr
 [licenses] CHECKSUM FAILED for 
C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\solr\contrib\dataimporthandler-extras\test-lib\greenmail-1.3.1b.jar
 (expected: 1e2727e8cae768b8f91dfc44cc5e8e1c4802c5c0  
/home/maven/repository-staging/to-ibiblio/maven2/com/icegreen/greenmail/1.3.1b/greenmail-1.3.1b.jar
 was: 1e2727e8cae768b8f91dfc44cc5e8e1c4802c5c0)
 [licenses] MISSING LICENSE for the following file:
 [licenses]   
C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\solr\contrib\dataimporthandler\lib\gimap-1.5.1.jar
 [licenses]   Expected locations below:
 [licenses]   = 
C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\solr\licenses\gimap-LICENSE-ASL.txt
 [licenses]   = 
C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\solr\licenses\gimap-LICENSE-BSD.txt
 [licenses]   = 
C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\solr\licenses\gimap-LICENSE-BSD_LIKE.txt
 [licenses]   = 
C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\solr\licenses\gimap-LICENSE-CDDL.txt
 [licenses]   = 
C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\solr\licenses\gimap-LICENSE-CPL.txt
 [licenses]   = 
C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\solr\licenses\gimap-LICENSE-EPL.txt
 [licenses]   = 
C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\solr\licenses\gimap-LICENSE-MIT.txt
 [licenses]   = 
C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\solr\licenses\gimap-LICENSE-MPL.txt
 [licenses]   = 
C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\solr\licenses\gimap-LICENSE-PD.txt
 [licenses]   = 
C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\solr\licenses\gimap-LICENSE-SUN.txt
 [licenses]   = 
C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\solr\licenses\gimap-LICENSE-COMPOUND.txt
 [licenses]   = 
C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\solr\licenses\gimap-LICENSE-FAKE.txt
 [licenses] MISSING LICENSE for the following file:
 [licenses]   
C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\solr\contrib\dataimporthandler\lib\javax.mail-1.5.1.jar
 [licenses]   Expected locations below:
 [licenses]   = 
C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\solr\licenses\javax.mail-LICENSE-ASL.txt
 [licenses]   = 
C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\solr\licenses\javax.mail-LICENSE-BSD.txt
 [licenses]   = 
C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\solr\licenses\javax.mail-LICENSE-BSD_LIKE.txt
 [licenses]   = 
C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\solr\licenses\javax.mail-LICENSE-CDDL.txt
 [licenses]   = 
C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\solr\licenses\javax.mail-LICENSE-CPL.txt
 [licenses]   = 
C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\solr\licenses\javax.mail-LICENSE-EPL.txt
 [licenses]   = 
C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\solr\licenses\javax.mail-LICENSE-MIT.txt
 [licenses]   = 
C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\solr\licenses\javax.mail-LICENSE-MPL.txt
 [licenses]   = 
C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\solr\licenses\javax.mail-LICENSE-PD.txt
 [licenses]   = 
C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\solr\licenses\javax.mail-LICENSE-SUN.txt
 [licenses]   = 
C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\solr\licenses\javax.mail-LICENSE-COMPOUND.txt
 [licenses]   = 
C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\solr\licenses\javax.mail-LICENSE-FAKE.txt
 [licenses] Scanned 208 JAR file(s) for licenses (in 1.99s.), 3 error(s).

BUILD FAILED
C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\build.xml:467: The 
following error occurred while executing this line:
C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\build.xml:70: The 
following error occurred while executing this line:
C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\solr\build.xml:254: 
The following error occurred while executing this line:
C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\lucene\tools\custom-tasks.xml:62:
 License check failed. Check the logs.

Total time: 118 minutes 15 seconds
Build step 'Invoke Ant' marked build as failure
[description-setter] Description set: Java: 32bit/jdk1.8.0_05 -client 
-XX:+UseG1GC
Archiving artifacts
Recording test results
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

[jira] [Commented] (SOLR-5473) Make one state.json per collection

2014-07-02 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5473?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14050583#comment-14050583
 ] 

Mark Miller commented on SOLR-5473:
---

Some things are better now.

Some of the most important things are not though:

* Still a bunch of references to external collections.

* The ClusterState/ZkStateReader API/Impl Changes - I think this has to be done 
cleanly - almost nothing has changed from the first patch. These API changes 
are my largest objection.

* Since we don't plan on supporting both modes forever (it's horribly limiting 
and confusing), we should implement with that in mind. What is getting 
deprecated, what are the new APIs? The new APIs should be good.

* Tests seem minimal for such a large change. Sounds like you guys have done a 
lot of external testing, but it would have been nice to spend some of that time 
beefing up the current tests in the areas that changed.

* By tying the other stuff you are doing into "Make one state.json per 
collection", you are making it hard to think about and implement the latter 
cleanly IMO.

* What was committed just seems like a slightly modified version of the 
original vetoed commit. There are some improvements to some of the minor 
issues, but the overall code and approach and changes remain.

 Make one state.json per collection
 --

 Key: SOLR-5473
 URL: https://issues.apache.org/jira/browse/SOLR-5473
 Project: Solr
  Issue Type: Sub-task
  Components: SolrCloud
Reporter: Noble Paul
Assignee: Noble Paul
 Fix For: 5.0

 Attachments: SOLR-5473-74 .patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-configname-fix.patch, 
 SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, 
 SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, 
 SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, 
 SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, 
 SOLR-5473.patch, SOLR-5473.patch, SOLR-5473_undo.patch, 
 ec2-23-20-119-52_solr.log, ec2-50-16-38-73_solr.log


 As defined in the parent issue, store the states of each collection under 
 /collections/collectionname/state.json node



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



RE: TestLeaderElectionZkExpiry test failure

2014-07-02 Thread Steve Molloy



Sorry, I lost the logs while running other tests and now I can no longer reproduce the issue (which seemed to be constantly happening before). Only change is that I had to reboot
 to apply latest kernel updates.

I'll keep an eye out for it and will make sure to keep the logs next time I see it. Sorry about this.

Steve


From: Shalin Shekhar Mangar [shalinman...@gmail.com]
Sent: July 2, 2014 1:59 PM
To: dev@lucene.apache.org
Subject: Re: TestLeaderElectionZkExpiry test failure




Can you please attach the test logs from your machine? I haven't seen this test fail locally or on jenkins yet.


On Wed, Jul 2, 2014 at 11:18 PM, Steve Molloy 
smol...@opentext.com wrote:


Thanks, in case it might help, I was running this on my laptop with java 1.8.0_05 on Ubuntu 14.04.

Steve


From: Shalin Shekhar Mangar [shalinman...@gmail.com]
Sent: July 2, 2014 1:42 PM
To: dev@lucene.apache.org
Subject: Re: TestLeaderElectionZkExpiry test failure






I'll take a look, Steve.


On Wed, Jul 2, 2014 at 11:08 PM, Steve Molloy 
smol...@opentext.com wrote:

I'm having test failures when running the TestLeaderElectionZkExpiry test locally after applying the SOLR-6159 patch on 4.9. Basically, it seems like the path is lost when the delete is retried, ending up in an IllegalArgumentException. I got around it locally by adding:

 if (path == null) {
   throw new NoNodeException("Path is null");
 }

at the start of the ZkOperation's execute method. The exception is then logged and treated as harmless by ElectionContext when it tries to cancel the election. Has anyone else run into this issue? Is it really safe to ignore, or could it have impacts I haven't seen?

Thanks,
Steve
-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org







-- 
Regards,
Shalin Shekhar Mangar. 







-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org








-- 
Regards,
Shalin Shekhar Mangar. 






-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-5473) Make one state.json per collection

2014-07-02 Thread Noble Paul (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5473?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14050597#comment-14050597
 ] 

Noble Paul commented on SOLR-5473:
--

bq.Since we don't plan on supporting both modes forever (it's horribly limiting 
and confusing), we should implement with that in mind. 

As I said earlier, we need to get rid of ClusterState. But the problem is that 
for at least a couple of releases we need both to co-exist. The first step is to 
deprecate ClusterState. We can mark it as deprecated and refer to methods in 
ZkStateReader.
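A rough sketch of what that first deprecation step could look like (the 
delegate method on ZkStateReader is an assumption for illustration, not the 
actual 4.x API):

  import org.apache.solr.common.cloud.DocCollection;
  import org.apache.solr.common.cloud.ZkStateReader;

  // Hypothetical sketch only: shows the deprecate-and-delegate idea, not the
  // real ClusterState class.
  public class ClusterState {

    private final ZkStateReader zkStateReader; // back-reference discussed here

    public ClusterState(ZkStateReader zkStateReader) {
      this.zkStateReader = zkStateReader;
    }

    /** @deprecated collection state now lives in per-collection state.json;
     *  read it through ZkStateReader instead. */
    @Deprecated
    public DocCollection getCollection(String collection) {
      // Delegate so callers of the old API still see the lazily loaded state.
      return zkStateReader.getCollection(collection); // assumed delegate method
    }
  }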

bq.Tests seem minimal for such a large change

Almost every SolrCloud test randomly switches between the new and old modes. We 
have done extensive long-running tests with very large clusters (1000 
collections, 120 nodes) and very long-running performance tests.

bq.By tying the other stuff you are doing into Make one state.json per 
collection,

The point is, I started with one state.json and soon realized that without 
all these other changes it is impossible to do.


 Make one state.json per collection
 --

 Key: SOLR-5473
 URL: https://issues.apache.org/jira/browse/SOLR-5473
 Project: Solr
  Issue Type: Sub-task
  Components: SolrCloud
Reporter: Noble Paul
Assignee: Noble Paul
 Fix For: 5.0

 Attachments: SOLR-5473-74 .patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-configname-fix.patch, 
 SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, 
 SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, 
 SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, 
 SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, 
 SOLR-5473.patch, SOLR-5473.patch, SOLR-5473_undo.patch, 
 ec2-23-20-119-52_solr.log, ec2-50-16-38-73_solr.log


 As defined in the parent issue, store the states of each collection under 
 /collections/collectionname/state.json node



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-6137) Managed Schema / Schemaless and SolrCloud concurrency issues

2014-07-02 Thread Steve Rowe (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6137?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Rowe updated SOLR-6137:
-

Attachment: AddSchemaFieldsUpdateProcessorFactory.java.svnpatch.rej
SOLR-6137.patch

Patch against current trunk, along with 
{{AddSchemaFieldsUpdateProcessorFactory.java.svnpatch.rej}}, the rejected 
elements of [~gchanan]'s v4 patch - it looks like the v4 patch was against an 
earlier version of the patch or something, since the removed lines don't exist 
on trunk.  The patch is my attempt at merging current trunk with the additions 
from the v4 patch.

One problem blocking compilation: {{oldSchema}}'s update lock is pulled from 
{{IndexSchema.getSchemaUpdateLock()}}, which doesn't exist (that method is on 
{{ManagedIndexSchema}}) - in my patch, I've cast {{oldSchema}} to 
{{ManagedIndexSchema}} to make that compile, but maybe the intent was to move 
{{getSchemaUpdateLock()}} to {{IndexSchema}}?  (That's not part of the v4 
patch, maybe {{git add}} is needed?)
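For reference, the compile workaround amounts to something like this (variable 
names and the surrounding code are assumptions, not the actual patch; only 
getSchemaUpdateLock() living on ManagedIndexSchema is taken from the comment 
above):

  // Hypothetical fragment: cast to ManagedIndexSchema to reach the update lock.
  IndexSchema oldSchema = core.getLatestSchema();
  Object updateLock = ((ManagedIndexSchema) oldSchema).getSchemaUpdateLock();
  synchronized (updateLock) {
    // apply the new fields against oldSchema while holding the update lock,
    // then swap the resulting schema back into the core
  }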

Also, Gregory, when you post patches on JIRA issues, please use the same name 
for each iteration of your patch, rather than adding vN, where N increases 
with each patch version - JIRA will gray out older versions.

 Managed Schema / Schemaless and SolrCloud concurrency issues
 

 Key: SOLR-6137
 URL: https://issues.apache.org/jira/browse/SOLR-6137
 Project: Solr
  Issue Type: Bug
  Components: Schema and Analysis, SolrCloud
Reporter: Gregory Chanan
 Attachments: AddSchemaFieldsUpdateProcessorFactory.java.svnpatch.rej, 
 SOLR-6137.patch, SOLR-6137.patch, SOLR-6137v2.patch, SOLR-6137v3.patch, 
 SOLR-6137v4.patch


 This is a follow up to a message on the mailing list, linked here: 
 http://mail-archives.apache.org/mod_mbox/lucene-dev/201406.mbox/%3CCAKfebOOcMeVEb010SsdcH8nta%3DyonMK5R7dSFOsbJ_tnre0O7w%40mail.gmail.com%3E
 The Managed Schema integration with SolrCloud seems pretty limited.
 The issue I'm running into is variants of the issue that schema changes are 
 not pushed to all shards/replicas synchronously.  So, for example, I can make 
 the following two requests:
 1) add a field to the collection on server1 using the Schema API
 2) add a document with the new field, the document is routed to a core on 
 server2
 Then, there appears to be a race between when the document is processed by 
 the core on server2 and when the core on server2, via the 
 ZkIndexSchemaReader, gets the new schema.  If the document is processed 
 first, I get a 400 error because the field doesn't exist.  This is easily 
 reproducible by adding a sleep to the ZkIndexSchemaReader's processing.
 I hit a similar issue with Schemaless: the distributed request handler sends 
 out the document updates, but there is no guarantee that the other 
 shards/replicas see the schema changes made by the update.chain.
 Another issue I noticed today: making multiple schema API calls concurrently 
 can block; that is, one may get through and the other may infinite loop.
 So, for reference, the issues include:
 1) Schema API changes return success before all cores are updated; subsequent 
 calls attempting to use new schema may fail
 2) Schemaless changes may fail on replicas/other shards for the same reason
 3) Concurrent Schema API changes may block
 From Steve Rowe on the mailing list:
 {quote}
 For Schema API users, delaying a couple of seconds after adding fields before 
 using them should workaround this problem.  While not ideal, I think schema 
 field additions are rare enough in the Solr collection lifecycle that this is 
 not a huge problem.
 For schemaless users, the picture is worse, as you noted.  Immediate 
 distribution of documents triggering schema field addition could easily prove 
 problematic.  Maybe we need a schema update blocking mode, where after the ZK 
 schema node watch is triggered, all new request processing is halted until 
 the schema is finished downloading/parsing/swapping out? (Such a mode should 
 help Schema API users too.)
 {quote}
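As a concrete illustration of the delay-then-index workaround quoted above, a 
client-side sketch might look like the following (the /schema endpoint, the 
add-field command shape, and the field names are assumptions and depend on the 
Solr version; this is not the patch under discussion):

  import java.io.OutputStream;
  import java.net.HttpURLConnection;
  import java.net.URL;
  import java.nio.charset.StandardCharsets;

  public class SchemaThenIndex {
    static void post(String url, String json) throws Exception {
      HttpURLConnection con = (HttpURLConnection) new URL(url).openConnection();
      con.setRequestMethod("POST");
      con.setRequestProperty("Content-Type", "application/json");
      con.setDoOutput(true);
      try (OutputStream out = con.getOutputStream()) {
        out.write(json.getBytes(StandardCharsets.UTF_8));
      }
      if (con.getResponseCode() >= 400) {
        throw new IllegalStateException("Request failed: " + con.getResponseCode());
      }
    }

    public static void main(String[] args) throws Exception {
      String solr = "http://localhost:8983/solr/collection1";
      // 1) add the field via the Schema API
      post(solr + "/schema",
           "{\"add-field\":{\"name\":\"new_field_s\",\"type\":\"string\",\"stored\":true}}");
      // 2) wait a couple of seconds for replicas to pick up the schema via their ZK watches
      Thread.sleep(2000);
      // 3) only then index a document that uses the new field
      post(solr + "/update?commit=true", "[{\"id\":\"1\",\"new_field_s\":\"value\"}]");
    }
  }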



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6137) Managed Schema / Schemaless and SolrCloud concurrency issues

2014-07-02 Thread Steve Rowe (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6137?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14050630#comment-14050630
 ] 

Steve Rowe commented on SOLR-6137:
--

bq.  it looks like the v4 patch was against an earlier version of the patch or 
something

Crap, re-reading the comments I see that [~gchanan]'s patch assumes that the 
patch on SOLR-6180 is applied first, I'll start looking there now - I'm 
guessing that this is the source of the patch problems I noted above.

 Managed Schema / Schemaless and SolrCloud concurrency issues
 

 Key: SOLR-6137
 URL: https://issues.apache.org/jira/browse/SOLR-6137
 Project: Solr
  Issue Type: Bug
  Components: Schema and Analysis, SolrCloud
Reporter: Gregory Chanan
 Attachments: AddSchemaFieldsUpdateProcessorFactory.java.svnpatch.rej, 
 SOLR-6137.patch, SOLR-6137.patch, SOLR-6137v2.patch, SOLR-6137v3.patch, 
 SOLR-6137v4.patch


 This is a follow up to a message on the mailing list, linked here: 
 http://mail-archives.apache.org/mod_mbox/lucene-dev/201406.mbox/%3CCAKfebOOcMeVEb010SsdcH8nta%3DyonMK5R7dSFOsbJ_tnre0O7w%40mail.gmail.com%3E
 The Managed Schema integration with SolrCloud seems pretty limited.
 The issue I'm running into is variants of the issue that schema changes are 
 not pushed to all shards/replicas synchronously.  So, for example, I can make 
 the following two requests:
 1) add a field to the collection on server1 using the Schema API
 2) add a document with the new field, the document is routed to a core on 
 server2
 Then, there appears to be a race between when the document is processed by 
 the core on server2 and when the core on server2, via the 
 ZkIndexSchemaReader, gets the new schema.  If the document is processed 
 first, I get a 400 error because the field doesn't exist.  This is easily 
 reproducible by adding a sleep to the ZkIndexSchemaReader's processing.
 I hit a similar issue with Schemaless: the distributed request handler sends 
 out the document updates, but there is no guarantee that the other 
 shards/replicas see the schema changes made by the update.chain.
 Another issue I noticed today: making multiple schema API calls concurrently 
 can block; that is, one may get through and the other may infinite loop.
 So, for reference, the issues include:
 1) Schema API changes return success before all cores are updated; subsequent 
 calls attempting to use new schema may fail
 2) Schemaless changes may fail on replicas/other shards for the same reason
 3) Concurrent Schema API changes may block
 From Steve Rowe on the mailing list:
 {quote}
 For Schema API users, delaying a couple of seconds after adding fields before 
 using them should workaround this problem.  While not ideal, I think schema 
 field additions are rare enough in the Solr collection lifecycle that this is 
 not a huge problem.
 For schemaless users, the picture is worse, as you noted.  Immediate 
 distribution of documents triggering schema field addition could easily prove 
 problematic.  Maybe we need a schema update blocking mode, where after the ZK 
 schema node watch is triggered, all new request processing is halted until 
 the schema is finished downloading/parsing/swapping out? (Such a mode should 
 help Schema API users too.)
 {quote}



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-Tests-trunk-Java7 - Build # 4760 - Still Failing

2014-07-02 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-trunk-Java7/4760/

All tests passed

Build Log:
[...truncated 29240 lines...]
check-licenses:
 [echo] License check under: 
/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-Tests-trunk-Java7/solr
 [licenses] CHECKSUM FAILED for 
/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-Tests-trunk-Java7/solr/contrib/dataimporthandler-extras/test-lib/greenmail-1.3.1b.jar
 (expected: 1e2727e8cae768b8f91dfc44cc5e8e1c4802c5c0  
/home/maven/repository-staging/to-ibiblio/maven2/com/icegreen/greenmail/1.3.1b/greenmail-1.3.1b.jar
 was: 1e2727e8cae768b8f91dfc44cc5e8e1c4802c5c0)
 [licenses] MISSING LICENSE for the following file:
 [licenses]   
/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-Tests-trunk-Java7/solr/contrib/dataimporthandler/lib/gimap-1.5.1.jar
 [licenses]   Expected locations below:
 [licenses]   = 
/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-Tests-trunk-Java7/solr/licenses/gimap-LICENSE-ASL.txt
 [licenses]   = 
/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-Tests-trunk-Java7/solr/licenses/gimap-LICENSE-BSD.txt
 [licenses]   = 
/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-Tests-trunk-Java7/solr/licenses/gimap-LICENSE-BSD_LIKE.txt
 [licenses]   = 
/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-Tests-trunk-Java7/solr/licenses/gimap-LICENSE-CDDL.txt
 [licenses]   = 
/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-Tests-trunk-Java7/solr/licenses/gimap-LICENSE-CPL.txt
 [licenses]   = 
/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-Tests-trunk-Java7/solr/licenses/gimap-LICENSE-EPL.txt
 [licenses]   = 
/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-Tests-trunk-Java7/solr/licenses/gimap-LICENSE-MIT.txt
 [licenses]   = 
/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-Tests-trunk-Java7/solr/licenses/gimap-LICENSE-MPL.txt
 [licenses]   = 
/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-Tests-trunk-Java7/solr/licenses/gimap-LICENSE-PD.txt
 [licenses]   = 
/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-Tests-trunk-Java7/solr/licenses/gimap-LICENSE-SUN.txt
 [licenses]   = 
/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-Tests-trunk-Java7/solr/licenses/gimap-LICENSE-COMPOUND.txt
 [licenses]   = 
/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-Tests-trunk-Java7/solr/licenses/gimap-LICENSE-FAKE.txt
 [licenses] MISSING LICENSE for the following file:
 [licenses]   
/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-Tests-trunk-Java7/solr/contrib/dataimporthandler/lib/javax.mail-1.5.1.jar
 [licenses]   Expected locations below:
 [licenses]   = 
/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-Tests-trunk-Java7/solr/licenses/javax.mail-LICENSE-ASL.txt
 [licenses]   = 
/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-Tests-trunk-Java7/solr/licenses/javax.mail-LICENSE-BSD.txt
 [licenses]   = 
/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-Tests-trunk-Java7/solr/licenses/javax.mail-LICENSE-BSD_LIKE.txt
 [licenses]   = 
/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-Tests-trunk-Java7/solr/licenses/javax.mail-LICENSE-CDDL.txt
 [licenses]   = 
/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-Tests-trunk-Java7/solr/licenses/javax.mail-LICENSE-CPL.txt
 [licenses]   = 
/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-Tests-trunk-Java7/solr/licenses/javax.mail-LICENSE-EPL.txt
 [licenses]   = 
/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-Tests-trunk-Java7/solr/licenses/javax.mail-LICENSE-MIT.txt
 [licenses]   = 
/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-Tests-trunk-Java7/solr/licenses/javax.mail-LICENSE-MPL.txt
 [licenses]   = 
/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-Tests-trunk-Java7/solr/licenses/javax.mail-LICENSE-PD.txt
 [licenses]   = 
/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-Tests-trunk-Java7/solr/licenses/javax.mail-LICENSE-SUN.txt
 [licenses]   = 
/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-Tests-trunk-Java7/solr/licenses/javax.mail-LICENSE-COMPOUND.txt
 [licenses]   = 
/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-Tests-trunk-Java7/solr/licenses/javax.mail-LICENSE-FAKE.txt
 [licenses] Scanned 208 JAR file(s) for licenses (in 1.63s.), 3 error(s).

BUILD FAILED
/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-Tests-trunk-Java7/build.xml:467:
 The following error occurred while executing this line:
/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-Tests-trunk-Java7/build.xml:70:
 The following error occurred while executing this line:
/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-Tests-trunk-Java7/solr/build.xml:254:
 The following error occurred while executing this line:
/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-Tests-trunk-Java7/lucene/tools/custom-tasks.xml:62:
 License check failed. Check the logs.

Total time: 120 minutes 7 seconds
Build step 'Invoke Ant' marked build as failure
Archiving artifacts
Sending artifact delta relative to Lucene-Solr-Tests-trunk-Java7 #4753
Archived 1 artifacts
Archive block 

[jira] [Commented] (SOLR-5473) Make one state.json per collection

2014-07-02 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5473?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14050645#comment-14050645
 ] 

Mark Miller commented on SOLR-5473:
---

bq. We have done extensive long running tests with very large clusters

That only ensures things work today and not tomorrow. Those kinds of tests are 
important, but just as important are tests that can run again and again over 
time as we make changes, extend, etc. When you change large core behaviors, it 
seems crazy there would not be some additional tests added or existing tests 
expanded more. It's just one point of many I guess. I'm always going to push 
back on a core change this large if tests are minimally added or expanded. 
Manual tests ensure things worked that day. 

bq. Almost every solrcloud tests randomly switches between new/old mode 

There is value there, but not as much as you seem to be banking on. SolrCloud 
is heavily under-tested, and just because the current tests pass does not mean 
many things were not broken. That's part of why it's nice to add to the tests 
when adding large core changes.

bq. The first step is to deprecate ClusterState.

If we could just deprecate clusterstate now, why would we have to pass a 
zkstatereader to it?

It's going to take some significant improvements before I can withdraw my veto.

 Make one state.json per collection
 --

 Key: SOLR-5473
 URL: https://issues.apache.org/jira/browse/SOLR-5473
 Project: Solr
  Issue Type: Sub-task
  Components: SolrCloud
Reporter: Noble Paul
Assignee: Noble Paul
 Fix For: 5.0

 Attachments: SOLR-5473-74 .patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-configname-fix.patch, 
 SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, 
 SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, 
 SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, 
 SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, 
 SOLR-5473.patch, SOLR-5473.patch, SOLR-5473_undo.patch, 
 ec2-23-20-119-52_solr.log, ec2-50-16-38-73_solr.log


 As defined in the parent issue, store the states of each collection under 
 /collections/collectionname/state.json node



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-5473) Make one state.json per collection

2014-07-02 Thread Noble Paul (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5473?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14050655#comment-14050655
 ] 

Noble Paul commented on SOLR-5473:
--

bq.If we could just deprecate clusterstate now, why would we have to pass a 
zkstatereader to it?

Even if we deprecate ClusterState, it will still exist. If it exists, and if the 
APIs should work as expected, it needs a reference to the ZkStateReader.

 Make one state.json per collection
 --

 Key: SOLR-5473
 URL: https://issues.apache.org/jira/browse/SOLR-5473
 Project: Solr
  Issue Type: Sub-task
  Components: SolrCloud
Reporter: Noble Paul
Assignee: Noble Paul
 Fix For: 5.0

 Attachments: SOLR-5473-74 .patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-configname-fix.patch, 
 SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, 
 SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, 
 SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, 
 SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, 
 SOLR-5473.patch, SOLR-5473.patch, SOLR-5473_undo.patch, 
 ec2-23-20-119-52_solr.log, ec2-50-16-38-73_solr.log


 As defined in the parent issue, store the states of each collection under 
 /collections/collectionname/state.json node



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-5473) Make one state.json per collection

2014-07-02 Thread Shalin Shekhar Mangar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5473?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14050656#comment-14050656
 ] 

Shalin Shekhar Mangar commented on SOLR-5473:
-

Mark, we have to revert it if your -1 stands. But we have already done that once 
and it hasn't been very productive, because when it comes to your biggest 
objection, the coupling of ZkStateReader and ClusterState, neither you, Noble, 
nor I had a suggestion. If we move it to a branch then we will again be in the 
same place a month down the line since, as you say, you don't have time to help 
with it.

bq. This is also still not Make one state.json per collection, but a bunch of 
issues all connected to that.

Yes, but there's no point in having a state per collection without those other 
changes. Tim wrote very eloquently about what's changed in terms of nodes 
watching the state and why we think it is necessary.

Noble said earlier that these hacks in the API were put in place to support 
back-compat. Having looked at this patch in depth more times than I can 
remember over the past few months, I agree with Noble that it is difficult to 
do without it. This API is definitely not the API we want for Solr 5 and I 
completely agree with you on that. We can refactor and do away with the 
ClusterState completely on trunk (and we intend to do that in the future) but 
before we do that, a back-compatible version of this change needs to land on 
branch_4x.

It is crazy that it takes a commit to get your attention (and veto!) when 
things can be resolved via discussion and collaboration. Tim and I have been 
reviewing this patch and we shall continue to work with Noble on improving it 
but I am afraid that it might be unproductive again because after three 
committers are comfortable with the approach and commit it to trunk, you veto 
it without any constructive suggestions on actually improving the APIs.

IMO, we should continue to iterate on trunk for a while (at least we have 
jenkins there) and get the APIs right as we want them for Solr 5 and then 
figure out how to move it to branch_4x in a back-compatible and hopefully 
non-ugly way.

 Make one state.json per collection
 --

 Key: SOLR-5473
 URL: https://issues.apache.org/jira/browse/SOLR-5473
 Project: Solr
  Issue Type: Sub-task
  Components: SolrCloud
Reporter: Noble Paul
Assignee: Noble Paul
 Fix For: 5.0

 Attachments: SOLR-5473-74 .patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-configname-fix.patch, 
 SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, 
 SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, 
 SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, 
 SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, 
 SOLR-5473.patch, SOLR-5473.patch, SOLR-5473_undo.patch, 
 ec2-23-20-119-52_solr.log, ec2-50-16-38-73_solr.log


 As defined in the parent issue, store the states of each collection under 
 /collections/collectionname/state.json node



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: [JENKINS] Lucene-Solr-Tests-trunk-Java7 - Build # 4760 - Still Failing

2014-07-02 Thread Shalin Shekhar Mangar
Looks like mail has a mail-LICENSE-CDDL.txt license file but the name of
the artifact was changed from mail-1.4.1.jar to javax.mail-1.5.1 so it is
looking for a matching license file. I'll rename it.


On Thu, Jul 3, 2014 at 1:21 AM, Apache Jenkins Server 
jenk...@builds.apache.org wrote:

 Build: https://builds.apache.org/job/Lucene-Solr-Tests-trunk-Java7/4760/

 All tests passed

 Build Log:
 [...truncated 29240 lines...]
 check-licenses:
  [echo] License check under:
 /usr/home/hudson/hudson-slave/workspace/Lucene-Solr-Tests-trunk-Java7/solr
  [licenses] CHECKSUM FAILED for
 /usr/home/hudson/hudson-slave/workspace/Lucene-Solr-Tests-trunk-Java7/solr/contrib/dataimporthandler-extras/test-lib/greenmail-1.3.1b.jar
 (expected: 1e2727e8cae768b8f91dfc44cc5e8e1c4802c5c0
  
 /home/maven/repository-staging/to-ibiblio/maven2/com/icegreen/greenmail/1.3.1b/greenmail-1.3.1b.jar
 was: 1e2727e8cae768b8f91dfc44cc5e8e1c4802c5c0)
  [licenses] MISSING LICENSE for the following file:
  [licenses]
 /usr/home/hudson/hudson-slave/workspace/Lucene-Solr-Tests-trunk-Java7/solr/contrib/dataimporthandler/lib/gimap-1.5.1.jar
  [licenses]   Expected locations below:
  [licenses]   =
 /usr/home/hudson/hudson-slave/workspace/Lucene-Solr-Tests-trunk-Java7/solr/licenses/gimap-LICENSE-ASL.txt
  [licenses]   =
 /usr/home/hudson/hudson-slave/workspace/Lucene-Solr-Tests-trunk-Java7/solr/licenses/gimap-LICENSE-BSD.txt
  [licenses]   =
 /usr/home/hudson/hudson-slave/workspace/Lucene-Solr-Tests-trunk-Java7/solr/licenses/gimap-LICENSE-BSD_LIKE.txt
  [licenses]   =
 /usr/home/hudson/hudson-slave/workspace/Lucene-Solr-Tests-trunk-Java7/solr/licenses/gimap-LICENSE-CDDL.txt
  [licenses]   =
 /usr/home/hudson/hudson-slave/workspace/Lucene-Solr-Tests-trunk-Java7/solr/licenses/gimap-LICENSE-CPL.txt
  [licenses]   =
 /usr/home/hudson/hudson-slave/workspace/Lucene-Solr-Tests-trunk-Java7/solr/licenses/gimap-LICENSE-EPL.txt
  [licenses]   =
 /usr/home/hudson/hudson-slave/workspace/Lucene-Solr-Tests-trunk-Java7/solr/licenses/gimap-LICENSE-MIT.txt
  [licenses]   =
 /usr/home/hudson/hudson-slave/workspace/Lucene-Solr-Tests-trunk-Java7/solr/licenses/gimap-LICENSE-MPL.txt
  [licenses]   =
 /usr/home/hudson/hudson-slave/workspace/Lucene-Solr-Tests-trunk-Java7/solr/licenses/gimap-LICENSE-PD.txt
  [licenses]   =
 /usr/home/hudson/hudson-slave/workspace/Lucene-Solr-Tests-trunk-Java7/solr/licenses/gimap-LICENSE-SUN.txt
  [licenses]   =
 /usr/home/hudson/hudson-slave/workspace/Lucene-Solr-Tests-trunk-Java7/solr/licenses/gimap-LICENSE-COMPOUND.txt
  [licenses]   =
 /usr/home/hudson/hudson-slave/workspace/Lucene-Solr-Tests-trunk-Java7/solr/licenses/gimap-LICENSE-FAKE.txt
  [licenses] MISSING LICENSE for the following file:
  [licenses]
 /usr/home/hudson/hudson-slave/workspace/Lucene-Solr-Tests-trunk-Java7/solr/contrib/dataimporthandler/lib/javax.mail-1.5.1.jar
  [licenses]   Expected locations below:
  [licenses]   =
 /usr/home/hudson/hudson-slave/workspace/Lucene-Solr-Tests-trunk-Java7/solr/licenses/javax.mail-LICENSE-ASL.txt
  [licenses]   =
 /usr/home/hudson/hudson-slave/workspace/Lucene-Solr-Tests-trunk-Java7/solr/licenses/javax.mail-LICENSE-BSD.txt
  [licenses]   =
 /usr/home/hudson/hudson-slave/workspace/Lucene-Solr-Tests-trunk-Java7/solr/licenses/javax.mail-LICENSE-BSD_LIKE.txt
  [licenses]   =
 /usr/home/hudson/hudson-slave/workspace/Lucene-Solr-Tests-trunk-Java7/solr/licenses/javax.mail-LICENSE-CDDL.txt
  [licenses]   =
 /usr/home/hudson/hudson-slave/workspace/Lucene-Solr-Tests-trunk-Java7/solr/licenses/javax.mail-LICENSE-CPL.txt
  [licenses]   =
 /usr/home/hudson/hudson-slave/workspace/Lucene-Solr-Tests-trunk-Java7/solr/licenses/javax.mail-LICENSE-EPL.txt
  [licenses]   =
 /usr/home/hudson/hudson-slave/workspace/Lucene-Solr-Tests-trunk-Java7/solr/licenses/javax.mail-LICENSE-MIT.txt
  [licenses]   =
 /usr/home/hudson/hudson-slave/workspace/Lucene-Solr-Tests-trunk-Java7/solr/licenses/javax.mail-LICENSE-MPL.txt
  [licenses]   =
 /usr/home/hudson/hudson-slave/workspace/Lucene-Solr-Tests-trunk-Java7/solr/licenses/javax.mail-LICENSE-PD.txt
  [licenses]   =
 /usr/home/hudson/hudson-slave/workspace/Lucene-Solr-Tests-trunk-Java7/solr/licenses/javax.mail-LICENSE-SUN.txt
  [licenses]   =
 /usr/home/hudson/hudson-slave/workspace/Lucene-Solr-Tests-trunk-Java7/solr/licenses/javax.mail-LICENSE-COMPOUND.txt
  [licenses]   =
 /usr/home/hudson/hudson-slave/workspace/Lucene-Solr-Tests-trunk-Java7/solr/licenses/javax.mail-LICENSE-FAKE.txt
  [licenses] Scanned 208 JAR file(s) for licenses (in 1.63s.), 3 error(s).

 BUILD FAILED
 /usr/home/hudson/hudson-slave/workspace/Lucene-Solr-Tests-trunk-Java7/build.xml:467:
 The following error occurred while executing this line:
 /usr/home/hudson/hudson-slave/workspace/Lucene-Solr-Tests-trunk-Java7/build.xml:70:
 The following error occurred while executing this line:
 /usr/home/hudson/hudson-slave/workspace/Lucene-Solr-Tests-trunk-Java7/solr/build.xml:254:
 The following error occurred while executing this line:
 

[jira] [Commented] (SOLR-5473) Make one state.json per collection

2014-07-02 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5473?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14050665#comment-14050665
 ] 

Mark Miller commented on SOLR-5473:
---

bq. It is crazy that it takes a commit to get your attention

It's simply when I could push it off no longer. As you have seen, I have not 
been active in Solr for a while now. It's unfortunate, but it is what it is. 
Now is when I was able to take a look.

My point is not simply the ZkStateReader - that's one of the more terrible 
changes, I think, but much worse is what it does to the APIs in general, how 
messy the change is in general, and how it complicates and weakens the code IMO.

I have given constructive thoughts on why the APIs are bad and what needs to 
be done, or what might be an idea to try.

I can't hold your hand down that entire path though - I have my own work 
schedule and priorities unfortunately.

I can honestly say these look like really bad changes to me though, and I have 
to try and preserve SolrCloud where I can. I see more and more tests failing 
since I left, Noble has a track record of introducing tests that fail a lot 
with SolrCloud, and a *ton* of what I brought up was not addressed at all.

A lot of SolrCloud issues go in that I don't fully support or don't think are 
ready 100%. I'm not trying to be a dick to work with. I do think this work is 
not very good yet and that putting it in trunk will be a huge pain down the 
line, so my position is that I can't let it happen.

 Make one state.json per collection
 --

 Key: SOLR-5473
 URL: https://issues.apache.org/jira/browse/SOLR-5473
 Project: Solr
  Issue Type: Sub-task
  Components: SolrCloud
Reporter: Noble Paul
Assignee: Noble Paul
 Fix For: 5.0

 Attachments: SOLR-5473-74 .patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-configname-fix.patch, 
 SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, 
 SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, 
 SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, 
 SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, 
 SOLR-5473.patch, SOLR-5473.patch, SOLR-5473_undo.patch, 
 ec2-23-20-119-52_solr.log, ec2-50-16-38-73_solr.log


 As defined in the parent issue, store the states of each collection under 
 /collections/collectionname/state.json node



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-5302) Analytics Component

2014-07-02 Thread Anirudha (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5302?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14050668#comment-14050668
 ] 

Anirudha edited comment on SOLR-5302 at 7/2/14 8:27 PM:


Yes, in solrCloud mode; Currently, you can use this component when talking to 
individual shards or if you have only one shard.


was (Author: anirudha):
Yes, i solrCloud mode. Currently, you can use this component when talking to 
individual shards or if you have only one shard.

 Analytics Component
 ---

 Key: SOLR-5302
 URL: https://issues.apache.org/jira/browse/SOLR-5302
 Project: Solr
  Issue Type: New Feature
Reporter: Steven Bower
Assignee: Erick Erickson
 Fix For: 5.0

 Attachments: SOLR-5302.patch, SOLR-5302.patch, SOLR-5302.patch, 
 SOLR-5302.patch, Search Analytics Component.pdf, Statistical Expressions.pdf, 
 solr_analytics-2013.10.04-2.patch


 This ticket is to track a replacement for the StatsComponent. The 
 AnalyticsComponent supports the following features:
 * All functionality of StatsComponent (SOLR-4499)
 * Field Faceting (SOLR-3435)
 ** Support for limit
 ** Sorting (bucket name or any stat in the bucket)
 ** Support for offset
 * Range Faceting
 ** Supports all options of standard range faceting
 * Query Faceting (SOLR-2925)
 * Ability to use overall/field facet statistics as input to range/query 
 faceting (i.e. calc min/max date and then facet over that range)
 * Support for more complex aggregate/mapping operations (SOLR-1622)
 ** Aggregations: min, max, sum, sum-of-square, count, missing, stddev, mean, 
 median, percentiles
 ** Operations: negation, abs, add, multiply, divide, power, log, date math, 
 string reversal, string concat
 ** Easily pluggable framework to add additional operations
 * New / cleaner output format
 Outstanding Issues:
 * Multi-value field support for stats (supported for faceting)
 * Multi-shard support (may not be possible for some operations, eg median)



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-5302) Analytics Component

2014-07-02 Thread Anirudha (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5302?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14050668#comment-14050668
 ] 

Anirudha commented on SOLR-5302:


Yes, in solrCloud mode. Currently, you can use this component when talking to 
individual shards or if you have only one shard.

 Analytics Component
 ---

 Key: SOLR-5302
 URL: https://issues.apache.org/jira/browse/SOLR-5302
 Project: Solr
  Issue Type: New Feature
Reporter: Steven Bower
Assignee: Erick Erickson
 Fix For: 5.0

 Attachments: SOLR-5302.patch, SOLR-5302.patch, SOLR-5302.patch, 
 SOLR-5302.patch, Search Analytics Component.pdf, Statistical Expressions.pdf, 
 solr_analytics-2013.10.04-2.patch


 This ticket is to track a replacement for the StatsComponent. The 
 AnalyticsComponent supports the following features:
 * All functionality of StatsComponent (SOLR-4499)
 * Field Faceting (SOLR-3435)
 ** Support for limit
 ** Sorting (bucket name or any stat in the bucket)
 ** Support for offset
 * Range Faceting
 ** Supports all options of standard range faceting
 * Query Faceting (SOLR-2925)
 * Ability to use overall/field facet statistics as input to range/query 
 faceting (i.e. calc min/max date and then facet over that range)
 * Support for more complex aggregate/mapping operations (SOLR-1622)
 ** Aggregations: min, max, sum, sum-of-square, count, missing, stddev, mean, 
 median, percentiles
 ** Operations: negation, abs, add, multiply, divide, power, log, date math, 
 string reversal, string concat
 ** Easily pluggable framework to add additional operations
 * New / cleaner output format
 Outstanding Issues:
 * Multi-value field support for stats (supported for faceting)
 * Multi-shard support (may not be possible for some operations, eg median)



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: [JENKINS] Lucene-Solr-Tests-trunk-Java7 - Build # 4760 - Still Failing

2014-07-02 Thread Timothy Potter
Ok, great catch Shalin - who would have thought changing a dependency
version would be so much trouble :-( Thanks for fixing.

On Wed, Jul 2, 2014 at 1:24 PM, Shalin Shekhar Mangar
shalinman...@gmail.com wrote:
 Looks like mail has a mail-LICENSE-CDDL.txt license file but the name of the
 artifact was changed from mail-1.4.1.jar to javax.mail-1.5.1 so it is
 looking for a matching license file. I'll rename it.


 On Thu, Jul 3, 2014 at 1:21 AM, Apache Jenkins Server
 jenk...@builds.apache.org wrote:

 Build: https://builds.apache.org/job/Lucene-Solr-Tests-trunk-Java7/4760/

 All tests passed

 Build Log:
 [...truncated 29240 lines...]
 check-licenses:
  [echo] License check under:
 /usr/home/hudson/hudson-slave/workspace/Lucene-Solr-Tests-trunk-Java7/solr
  [licenses] CHECKSUM FAILED for
 /usr/home/hudson/hudson-slave/workspace/Lucene-Solr-Tests-trunk-Java7/solr/contrib/dataimporthandler-extras/test-lib/greenmail-1.3.1b.jar
 (expected: 1e2727e8cae768b8f91dfc44cc5e8e1c4802c5c0
 /home/maven/repository-staging/to-ibiblio/maven2/com/icegreen/greenmail/1.3.1b/greenmail-1.3.1b.jar
 was: 1e2727e8cae768b8f91dfc44cc5e8e1c4802c5c0)
  [licenses] MISSING LICENSE for the following file:
  [licenses]
 /usr/home/hudson/hudson-slave/workspace/Lucene-Solr-Tests-trunk-Java7/solr/contrib/dataimporthandler/lib/gimap-1.5.1.jar
  [licenses]   Expected locations below:
  [licenses]   =
 /usr/home/hudson/hudson-slave/workspace/Lucene-Solr-Tests-trunk-Java7/solr/licenses/gimap-LICENSE-ASL.txt
  [licenses]   =
 /usr/home/hudson/hudson-slave/workspace/Lucene-Solr-Tests-trunk-Java7/solr/licenses/gimap-LICENSE-BSD.txt
  [licenses]   =
 /usr/home/hudson/hudson-slave/workspace/Lucene-Solr-Tests-trunk-Java7/solr/licenses/gimap-LICENSE-BSD_LIKE.txt
  [licenses]   =
 /usr/home/hudson/hudson-slave/workspace/Lucene-Solr-Tests-trunk-Java7/solr/licenses/gimap-LICENSE-CDDL.txt
  [licenses]   =
 /usr/home/hudson/hudson-slave/workspace/Lucene-Solr-Tests-trunk-Java7/solr/licenses/gimap-LICENSE-CPL.txt
  [licenses]   =
 /usr/home/hudson/hudson-slave/workspace/Lucene-Solr-Tests-trunk-Java7/solr/licenses/gimap-LICENSE-EPL.txt
  [licenses]   =
 /usr/home/hudson/hudson-slave/workspace/Lucene-Solr-Tests-trunk-Java7/solr/licenses/gimap-LICENSE-MIT.txt
  [licenses]   =
 /usr/home/hudson/hudson-slave/workspace/Lucene-Solr-Tests-trunk-Java7/solr/licenses/gimap-LICENSE-MPL.txt
  [licenses]   =
 /usr/home/hudson/hudson-slave/workspace/Lucene-Solr-Tests-trunk-Java7/solr/licenses/gimap-LICENSE-PD.txt
  [licenses]   =
 /usr/home/hudson/hudson-slave/workspace/Lucene-Solr-Tests-trunk-Java7/solr/licenses/gimap-LICENSE-SUN.txt
  [licenses]   =
 /usr/home/hudson/hudson-slave/workspace/Lucene-Solr-Tests-trunk-Java7/solr/licenses/gimap-LICENSE-COMPOUND.txt
  [licenses]   =
 /usr/home/hudson/hudson-slave/workspace/Lucene-Solr-Tests-trunk-Java7/solr/licenses/gimap-LICENSE-FAKE.txt
  [licenses] MISSING LICENSE for the following file:
  [licenses]
 /usr/home/hudson/hudson-slave/workspace/Lucene-Solr-Tests-trunk-Java7/solr/contrib/dataimporthandler/lib/javax.mail-1.5.1.jar
  [licenses]   Expected locations below:
  [licenses]   =
 /usr/home/hudson/hudson-slave/workspace/Lucene-Solr-Tests-trunk-Java7/solr/licenses/javax.mail-LICENSE-ASL.txt
  [licenses]   =
 /usr/home/hudson/hudson-slave/workspace/Lucene-Solr-Tests-trunk-Java7/solr/licenses/javax.mail-LICENSE-BSD.txt
  [licenses]   =
 /usr/home/hudson/hudson-slave/workspace/Lucene-Solr-Tests-trunk-Java7/solr/licenses/javax.mail-LICENSE-BSD_LIKE.txt
  [licenses]   =
 /usr/home/hudson/hudson-slave/workspace/Lucene-Solr-Tests-trunk-Java7/solr/licenses/javax.mail-LICENSE-CDDL.txt
  [licenses]   =
 /usr/home/hudson/hudson-slave/workspace/Lucene-Solr-Tests-trunk-Java7/solr/licenses/javax.mail-LICENSE-CPL.txt
  [licenses]   =
 /usr/home/hudson/hudson-slave/workspace/Lucene-Solr-Tests-trunk-Java7/solr/licenses/javax.mail-LICENSE-EPL.txt
  [licenses]   =
 /usr/home/hudson/hudson-slave/workspace/Lucene-Solr-Tests-trunk-Java7/solr/licenses/javax.mail-LICENSE-MIT.txt
  [licenses]   =
 /usr/home/hudson/hudson-slave/workspace/Lucene-Solr-Tests-trunk-Java7/solr/licenses/javax.mail-LICENSE-MPL.txt
  [licenses]   =
 /usr/home/hudson/hudson-slave/workspace/Lucene-Solr-Tests-trunk-Java7/solr/licenses/javax.mail-LICENSE-PD.txt
  [licenses]   =
 /usr/home/hudson/hudson-slave/workspace/Lucene-Solr-Tests-trunk-Java7/solr/licenses/javax.mail-LICENSE-SUN.txt
  [licenses]   =
 /usr/home/hudson/hudson-slave/workspace/Lucene-Solr-Tests-trunk-Java7/solr/licenses/javax.mail-LICENSE-COMPOUND.txt
  [licenses]   =
 /usr/home/hudson/hudson-slave/workspace/Lucene-Solr-Tests-trunk-Java7/solr/licenses/javax.mail-LICENSE-FAKE.txt
  [licenses] Scanned 208 JAR file(s) for licenses (in 1.63s.), 3 error(s).

 BUILD FAILED

 /usr/home/hudson/hudson-slave/workspace/Lucene-Solr-Tests-trunk-Java7/build.xml:467:
 The following error occurred while executing this line:

 /usr/home/hudson/hudson-slave/workspace/Lucene-Solr-Tests-trunk-Java7/build.xml:70:
 The 

[jira] [Commented] (SOLR-5473) Make one state.json per collection

2014-07-02 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5473?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14050680#comment-14050680
 ] 

Mark Miller commented on SOLR-5473:
---

I'll also reiterate: splitting the clusterstate.json is a large, nice, 
self-contained issue. If we could do that, I think it would be a lot easier to see 
how it *should* be done in a nice way, where it's easy to deprecate and remove 
an impl later.

I think because you are trying to tie that issue into this collections-scaling 
issue, you are okay saying that, oh, it just has to be ugly and hacky, because 
it's this whole pile of issues we are solving. I don't think it does have to be 
ugly or hacky, and I think if we let that stay in trunk, it will haunt us like 
a lot of other code we have sometimes let in too easily.

Finally, splitting the clusterstate by collection has not been a contentious 
issue. Some of the other things you are doing around watchers and caching are 
more contentious. We don't have full agreement on them, we never have, and it 
really feels like they are coming in as extra pork on a bill with the JIRA 
title "make one state.json per collection".



 Make one state.json per collection
 --

 Key: SOLR-5473
 URL: https://issues.apache.org/jira/browse/SOLR-5473
 Project: Solr
  Issue Type: Sub-task
  Components: SolrCloud
Reporter: Noble Paul
Assignee: Noble Paul
 Fix For: 5.0

 Attachments: SOLR-5473-74 .patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-configname-fix.patch, 
 SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, 
 SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, 
 SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, 
 SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, 
 SOLR-5473.patch, SOLR-5473.patch, SOLR-5473_undo.patch, 
 ec2-23-20-119-52_solr.log, ec2-50-16-38-73_solr.log


 As defined in the parent issue, store the states of each collection under 
 /collections/collectionname/state.json node



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-5473) Make one state.json per collection

2014-07-02 Thread Noble Paul (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5473?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14050683#comment-14050683
 ] 

Noble Paul commented on SOLR-5473:
--

bq.My point is not simply the ZkStateReader - that's one of the more terrible 
changes I think, but much worse is what it does to the APIs in general, how 
messy the change is in general, and how it complicates and weakens the code IMO


I have been trying to explain why I am doing it this way. I fully know it is 
not the most elegant way. But not breaking back compat was a real challenge. 

bq.I have given constructive thoughts on why the API's are bad and what needs 
to be done or what might be an idea to try.

I'm willing to explore any suggestions. All other reviewers have been of a 
similar opinion that there are limitations in the way the system works. 
Introducing this feature in a back-compat way has been quite a challenge. When 
we are ready to break that, we will have the liberty to clean it up 
completely. But we have to take the first baby steps somewhere.

bq.and a ton of what I brought up was not addressed at all.

It is not that I don't want to address them. Despite my best attempts I could not 
find a better way. I have asked others and they could not suggest solutions 
either.


 Make one state.json per collection
 --

 Key: SOLR-5473
 URL: https://issues.apache.org/jira/browse/SOLR-5473
 Project: Solr
  Issue Type: Sub-task
  Components: SolrCloud
Reporter: Noble Paul
Assignee: Noble Paul
 Fix For: 5.0

 Attachments: SOLR-5473-74 .patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-configname-fix.patch, 
 SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, 
 SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, 
 SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, 
 SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, 
 SOLR-5473.patch, SOLR-5473.patch, SOLR-5473_undo.patch, 
 ec2-23-20-119-52_solr.log, ec2-50-16-38-73_solr.log


 As defined in the parent issue, store the states of each collection under 
 /collections/collectionname/state.json node



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-5473) Make one state.json per collection

2014-07-02 Thread Noble Paul (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5473?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14050691#comment-14050691
 ] 

Noble Paul commented on SOLR-5473:
--

bq.Finally, splitting the clusterstate by collection has not been a contentious 
issue. Some of the other things you are doing around watchers and caching are 
more contentious

I would like to hear a solution where you split the clusterstate without 
getting performance that is worse than the single clusterstate.json. It is not 
worthwhile to split if we are degrading performance.

I'm doing it because our users are really facing scalability problems with a 
large number of collections. 

 Make one state.json per collection
 --

 Key: SOLR-5473
 URL: https://issues.apache.org/jira/browse/SOLR-5473
 Project: Solr
  Issue Type: Sub-task
  Components: SolrCloud
Reporter: Noble Paul
Assignee: Noble Paul
 Fix For: 5.0

 Attachments: SOLR-5473-74 .patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-configname-fix.patch, 
 SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, 
 SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, 
 SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, 
 SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, 
 SOLR-5473.patch, SOLR-5473.patch, SOLR-5473_undo.patch, 
 ec2-23-20-119-52_solr.log, ec2-50-16-38-73_solr.log


 As defined in the parent issue, store the states of each collection under 
 /collections/collectionname/state.json node



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-5473) Make one state.json per collection

2014-07-02 Thread Noble Paul (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5473?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14050691#comment-14050691
 ] 

Noble Paul edited comment on SOLR-5473 at 7/2/14 8:46 PM:
--

bq.Finally, splitting the clusterstate by collection has not been a contentious 
issue. Some of the other things you are doing around watchers and caching are 
more contentious

I would like to hear a solution where you just split the clusterstate without 
getting performance that is worse than the single clusterstate.json. It is not 
worthwhile to split if we are degrading performance.

I'm doing it because our users are really facing scalability problems with a 
large number of collections. 


was (Author: noble.paul):
bq.Finally, splitting the clusterstate by collection has not been a contentious 
issue. Some of the other things you are doing around watchers and caching are 
more contentious

I would like to hear a solution where you split the clusterstate without 
getting performance that is worse than the single clusterstate.json. It is not 
worthwhile to split if we are degrading performance.

I'm doing it because our users are really facing scalability problems with a 
large number of collections. 

 Make one state.json per collection
 --

 Key: SOLR-5473
 URL: https://issues.apache.org/jira/browse/SOLR-5473
 Project: Solr
  Issue Type: Sub-task
  Components: SolrCloud
Reporter: Noble Paul
Assignee: Noble Paul
 Fix For: 5.0

 Attachments: SOLR-5473-74 .patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-configname-fix.patch, 
 SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, 
 SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, 
 SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, 
 SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, 
 SOLR-5473.patch, SOLR-5473.patch, SOLR-5473_undo.patch, 
 ec2-23-20-119-52_solr.log, ec2-50-16-38-73_solr.log


 As defined in the parent issue, store the states of each collection under 
 /collections/collectionname/state.json node



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-trunk-Linux (64bit/jdk1.7.0_60) - Build # 10705 - Still Failing!

2014-07-02 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/10705/
Java: 64bit/jdk1.7.0_60 -XX:+UseCompressedOops -XX:+UseSerialGC

All tests passed

Build Log:
[...truncated 29052 lines...]
check-licenses:
 [echo] License check under: 
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr
 [licenses] CHECKSUM FAILED for 
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/contrib/dataimporthandler-extras/test-lib/greenmail-1.3.1b.jar
 (expected: 1e2727e8cae768b8f91dfc44cc5e8e1c4802c5c0  
/home/maven/repository-staging/to-ibiblio/maven2/com/icegreen/greenmail/1.3.1b/greenmail-1.3.1b.jar
 was: 1e2727e8cae768b8f91dfc44cc5e8e1c4802c5c0)
 [licenses] MISSING LICENSE for the following file:
 [licenses]   
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/contrib/dataimporthandler/lib/gimap-1.5.1.jar
 [licenses]   Expected locations below:
 [licenses]   = 
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/licenses/gimap-LICENSE-ASL.txt
 [licenses]   = 
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/licenses/gimap-LICENSE-BSD.txt
 [licenses]   = 
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/licenses/gimap-LICENSE-BSD_LIKE.txt
 [licenses]   = 
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/licenses/gimap-LICENSE-CDDL.txt
 [licenses]   = 
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/licenses/gimap-LICENSE-CPL.txt
 [licenses]   = 
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/licenses/gimap-LICENSE-EPL.txt
 [licenses]   = 
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/licenses/gimap-LICENSE-MIT.txt
 [licenses]   = 
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/licenses/gimap-LICENSE-MPL.txt
 [licenses]   = 
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/licenses/gimap-LICENSE-PD.txt
 [licenses]   = 
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/licenses/gimap-LICENSE-SUN.txt
 [licenses]   = 
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/licenses/gimap-LICENSE-COMPOUND.txt
 [licenses]   = 
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/licenses/gimap-LICENSE-FAKE.txt
 [licenses] MISSING LICENSE for the following file:
 [licenses]   
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/contrib/dataimporthandler/lib/javax.mail-1.5.1.jar
 [licenses]   Expected locations below:
 [licenses]   = 
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/licenses/javax.mail-LICENSE-ASL.txt
 [licenses]   = 
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/licenses/javax.mail-LICENSE-BSD.txt
 [licenses]   = 
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/licenses/javax.mail-LICENSE-BSD_LIKE.txt
 [licenses]   = 
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/licenses/javax.mail-LICENSE-CDDL.txt
 [licenses]   = 
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/licenses/javax.mail-LICENSE-CPL.txt
 [licenses]   = 
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/licenses/javax.mail-LICENSE-EPL.txt
 [licenses]   = 
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/licenses/javax.mail-LICENSE-MIT.txt
 [licenses]   = 
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/licenses/javax.mail-LICENSE-MPL.txt
 [licenses]   = 
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/licenses/javax.mail-LICENSE-PD.txt
 [licenses]   = 
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/licenses/javax.mail-LICENSE-SUN.txt
 [licenses]   = 
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/licenses/javax.mail-LICENSE-COMPOUND.txt
 [licenses]   = 
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/licenses/javax.mail-LICENSE-FAKE.txt
 [licenses] Scanned 208 JAR file(s) for licenses (in 1.19s.), 3 error(s).

BUILD FAILED
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/build.xml:467: The following 
error occurred while executing this line:
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/build.xml:70: The following 
error occurred while executing this line:
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/build.xml:254: The 
following error occurred while executing this line:
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/lucene/tools/custom-tasks.xml:62:
 License check failed. Check the logs.

Total time: 69 minutes 21 seconds
Build step 'Invoke Ant' marked build as failure
[description-setter] Description set: Java: 64bit/jdk1.7.0_60 
-XX:+UseCompressedOops -XX:+UseSerialGC
Archiving artifacts
Recording test results
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

[jira] [Commented] (SOLR-5473) Make one state.json per collection

2014-07-02 Thread Shalin Shekhar Mangar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5473?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14050698#comment-14050698
 ] 

Shalin Shekhar Mangar commented on SOLR-5473:
-

bq. I think because you are trying to tie that issue into this collections-scaling 
issue, you are okay saying that, oh, it just has to be ugly and hacky, 
because it's this whole pile of issues we are solving. I don't think it does 
have to be ugly or hacky, and I think if we let that stay in trunk, it will 
haunt us like a lot of other code we have sometimes let in too easily.

bq. Finally, splitting the clusterstate by collection has not been a 
contentious issue. Some of the other things you are doing around watchers and 
caching are more contentious. We don't have full agreement on them, we never 
have, and it really feels like they are coming in as extra pork on a bill with 
the JIRA title "make one state.json per collection".

As Noble explained earlier, there's really no point in splitting the cluster state 
by collection if we were not trying to scale to a large number of collections. 
They are the same issue. We aren't doing this because we want to work around ZK 
size limits for clusterstate.json. We are trying to make large clusters 
possible which have thousands of collections. There is really no point in 
splitting the cluster state per collection and then multiplying the number of 
watchers in the system by the number of collections. Even if ZK and SolrCloud 
scale to that limit, and I don't know if they would, it is just wasteful and not 
required at all.


 Make one state.json per collection
 --

 Key: SOLR-5473
 URL: https://issues.apache.org/jira/browse/SOLR-5473
 Project: Solr
  Issue Type: Sub-task
  Components: SolrCloud
Reporter: Noble Paul
Assignee: Noble Paul
 Fix For: 5.0

 Attachments: SOLR-5473-74 .patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-configname-fix.patch, 
 SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, 
 SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, 
 SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, 
 SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, 
 SOLR-5473.patch, SOLR-5473.patch, SOLR-5473_undo.patch, 
 ec2-23-20-119-52_solr.log, ec2-50-16-38-73_solr.log


 As defined in the parent issue, store the states of each collection under 
 /collections/collectionname/state.json node



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS-MAVEN] Lucene-Solr-Maven-4.x #649: POMs out of sync

2014-07-02 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Maven-4.x/649/

No tests ran.

Build Log:
[...truncated 40362 lines...]
BUILD FAILED
/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-Maven-4.x/build.xml:490: 
The following error occurred while executing this line:
/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-Maven-4.x/build.xml:182: 
The following error occurred while executing this line:
/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-Maven-4.x/extra-targets.xml:77:
 Java returned: 1

Total time: 31 minutes 23 seconds
Build step 'Invoke Ant' marked build as failure
Recording test results
Email was triggered for: Failure
Sending email for trigger: Failure



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

[jira] [Commented] (SOLR-5473) Make one state.json per collection

2014-07-02 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5473?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14050706#comment-14050706
 ] 

Mark Miller commented on SOLR-5473:
---

bq. There is really no point in splitting the cluster state per collection and 
multiplying the number of watchers in the system by the number of collections.

It would be a step along the path you are seeking, it seems, for starters - a 
step with consensus.

Also, I think of course there is a point, and there are users who were not 
very happy when we switched from having a zk node per shard, actually. You can 
have a huge number of nodes and still read changes very quickly. As someone 
already mentioned, one of the more interesting options would be to be able to 
define yourself how many nodes that state was split across, or to what level. 
One site I know had a pretty crazy amount of watchers back when we had a design 
closer to this, and it all seemed to work just fine with thousands of shard 
entries and many more thousands of watchers. All of these different ideas have 
different tradeoffs. Breaking up by collection is a small improvement and is 
something that can be built upon. We talked about it way before you guys 
started working on scaling to many collections. It's not some sort of mega 
improvement, but it is a step forward, and other things you are looking to do 
can be built on it.





 Make one state.json per collection
 --

 Key: SOLR-5473
 URL: https://issues.apache.org/jira/browse/SOLR-5473
 Project: Solr
  Issue Type: Sub-task
  Components: SolrCloud
Reporter: Noble Paul
Assignee: Noble Paul
 Fix For: 5.0

 Attachments: SOLR-5473-74 .patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-configname-fix.patch, 
 SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, 
 SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, 
 SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, 
 SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, 
 SOLR-5473.patch, SOLR-5473.patch, SOLR-5473_undo.patch, 
 ec2-23-20-119-52_solr.log, ec2-50-16-38-73_solr.log


 As defined in the parent issue, store the states of each collection under 
 /collections/collectionname/state.json node



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-5473) Make one state.json per collection

2014-07-02 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5473?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14050717#comment-14050717
 ] 

Mark Miller commented on SOLR-5473:
---

Anyway, no one is forcing you to take it a step at a time. It just feels like 
that's the only way to try and force an approach that is API sane. People are 
going to begin developing against what goes in right away - building on it, 
tying it together and up even more. I think the faster we revert it, the 
easier that is. There is no way I can withdraw my veto. This core part of the 
code has to be *more* sensible, not less. The new and old APIs have to be 
cleanly defined. A user of the APIs can't be so screwed. What to use, what to 
avoid, what the hell are these two different modes? What APIs are jacked and 
why? 

How do the APIs need to change to get that zkStateReader out of ClusterState? 
You're telling me that's just impossible? I simply don't believe that.

 Make one state.json per collection
 --

 Key: SOLR-5473
 URL: https://issues.apache.org/jira/browse/SOLR-5473
 Project: Solr
  Issue Type: Sub-task
  Components: SolrCloud
Reporter: Noble Paul
Assignee: Noble Paul
 Fix For: 5.0

 Attachments: SOLR-5473-74 .patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-configname-fix.patch, 
 SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, 
 SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, 
 SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, 
 SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, 
 SOLR-5473.patch, SOLR-5473.patch, SOLR-5473_undo.patch, 
 ec2-23-20-119-52_solr.log, ec2-50-16-38-73_solr.log


 As defined in the parent issue, store the states of each collection under 
 /collections/collectionname/state.json node



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-5473) Make one state.json per collection

2014-07-02 Thread Noble Paul (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5473?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14050718#comment-14050718
 ] 

Noble Paul commented on SOLR-5473:
--

bq.It would be a step along the path you are seeking it seems for starters - a 
step with consensus.

When we clearly know that selective watching is more scalable, and it is 
already implemented, why are we trying a less scalable solution? Just because 
of the fear of the unknown? The point is, I raised this question before the 
first rollback in this ticket about the 3 options we have: 1) watch all, 
2) watch selectively, 3) watch none. I didn't get any response to that. And after 
4 months of development we are in the same place again.  
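
To make option 2 concrete, here is a minimal, hypothetical sketch of selective 
watching with the plain ZooKeeper client: each node watches only the state.json 
of the collections it actually hosts, instead of one watch on a global 
clusterstate.json. The /collections/<name>/state.json path follows this issue's 
description; everything else (timeouts, the hosted-collections set, error 
handling) is an assumption, and this is not taken from any SOLR-5473 patch.

// Illustration only; not code from the SOLR-5473 patches.
import java.util.Set;
import org.apache.zookeeper.KeeperException;
import org.apache.zookeeper.WatchedEvent;
import org.apache.zookeeper.Watcher;
import org.apache.zookeeper.ZooKeeper;
import org.apache.zookeeper.data.Stat;

public class SelectiveStateWatcher implements Watcher {

    private final ZooKeeper zk;

    public SelectiveStateWatcher(String zkHost, Set<String> hostedCollections) throws Exception {
        this.zk = new ZooKeeper(zkHost, 30000, this);
        // "Watch selectively": only the collections this node hosts,
        // not every collection in the cluster.
        for (String collection : hostedCollections) {
            readAndWatch(collection);
        }
    }

    private void readAndWatch(String collection) throws KeeperException, InterruptedException {
        // Per-collection path as defined in the parent issue.
        String path = "/collections/" + collection + "/state.json";
        Stat stat = new Stat();
        byte[] data = zk.getData(path, this, stat); // also re-arms the watch
        // A real implementation would parse the JSON and update a local cache here.
        System.out.println(path + " version=" + stat.getVersion() + " bytes=" + data.length);
    }

    @Override
    public void process(WatchedEvent event) {
        if (event.getType() == Event.EventType.NodeDataChanged && event.getPath() != null) {
            String collection = event.getPath().split("/")[2];
            try {
                readAndWatch(collection); // ZooKeeper watches are one-shot, so re-register
            } catch (Exception e) {
                // A real implementation would retry and handle session expiry.
            }
        }
    }
}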

 Make one state.json per collection
 --

 Key: SOLR-5473
 URL: https://issues.apache.org/jira/browse/SOLR-5473
 Project: Solr
  Issue Type: Sub-task
  Components: SolrCloud
Reporter: Noble Paul
Assignee: Noble Paul
 Fix For: 5.0

 Attachments: SOLR-5473-74 .patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-configname-fix.patch, 
 SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, 
 SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, 
 SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, 
 SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, 
 SOLR-5473.patch, SOLR-5473.patch, SOLR-5473_undo.patch, 
 ec2-23-20-119-52_solr.log, ec2-50-16-38-73_solr.log


 As defined in the parent issue, store the states of each collection under 
 /collections/collectionname/state.json node



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-5473) Make one state.json per collection

2014-07-02 Thread Noble Paul (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5473?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14050722#comment-14050722
 ] 

Noble Paul commented on SOLR-5473:
--

bq.How do the APIs need to change to get that zkStateReader out of ClusterState? 
You're telling me that's just impossible? I simply don't believe that.

OK, we can change the clusterstate often, and it won't be realtime anymore. Is 
that OK? Are you OK with selective watching?

 Make one state.json per collection
 --

 Key: SOLR-5473
 URL: https://issues.apache.org/jira/browse/SOLR-5473
 Project: Solr
  Issue Type: Sub-task
  Components: SolrCloud
Reporter: Noble Paul
Assignee: Noble Paul
 Fix For: 5.0

 Attachments: SOLR-5473-74 .patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-configname-fix.patch, 
 SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, 
 SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, 
 SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, 
 SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, 
 SOLR-5473.patch, SOLR-5473.patch, SOLR-5473_undo.patch, 
 ec2-23-20-119-52_solr.log, ec2-50-16-38-73_solr.log


 As defined in the parent issue, store the states of each collection under 
 /collections/collectionname/state.json node



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-5473) Make one state.json per collection

2014-07-02 Thread Noble Paul (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5473?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14050722#comment-14050722
 ] 

Noble Paul edited comment on SOLR-5473 at 7/2/14 9:23 PM:
--

bq.How do the APIs need to change to get that zkStateReader out of ClusterState? 
You're telling me that's just impossible? I simply don't believe that.

OK, we can change the clusterstate often, and it won't be realtime anymore. Is 
that OK? I can make that change easily. Are you OK with selective watching?


was (Author: noble.paul):
bq.How do the APIs need to change to get that zkStateReader out of ClusterState? 
You're telling me that's just impossible? I simply don't believe that.

OK, we can change the clusterstate often, and it won't be realtime anymore. Is 
that OK? Are you OK with selective watching?

 Make one state.json per collection
 --

 Key: SOLR-5473
 URL: https://issues.apache.org/jira/browse/SOLR-5473
 Project: Solr
  Issue Type: Sub-task
  Components: SolrCloud
Reporter: Noble Paul
Assignee: Noble Paul
 Fix For: 5.0

 Attachments: SOLR-5473-74 .patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-configname-fix.patch, 
 SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, 
 SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, 
 SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, 
 SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, 
 SOLR-5473.patch, SOLR-5473.patch, SOLR-5473_undo.patch, 
 ec2-23-20-119-52_solr.log, ec2-50-16-38-73_solr.log


 As defined in the parent issue, store the states of each collection under 
 /collections/collectionname/state.json node



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-5963) Finalize interface and backport analytics component to 4x

2014-07-02 Thread Gopal Patwa (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5963?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14050764#comment-14050764
 ] 

Gopal Patwa commented on SOLR-5963:
---

Hi Erick, could you add this patch as a contrib to 4.x so other folks can use it? 
I tried applying this patch to 4.9 but it did not work; maybe it was created 
before 4.9.

 Finalize interface and backport analytics component to 4x
 -

 Key: SOLR-5963
 URL: https://issues.apache.org/jira/browse/SOLR-5963
 Project: Solr
  Issue Type: Improvement
Affects Versions: 4.9, 5.0
Reporter: Erick Erickson
Assignee: Erick Erickson
 Attachments: SOLR-5963.patch, SOLR-5963.patch


 Now that we seem to have fixed up the test failures for trunk for the 
 analytics component, we need to solidify the API and back-port it to 4x. For 
 history, see SOLR-5302 and SOLR-5488.
 As far as I know, these are the merges that need to occur to do this (plus 
 any that this JIRA brings up)
 svn merge -c 1543651 https://svn.apache.org/repos/asf/lucene/dev/trunk
 svn merge -c 1545009 https://svn.apache.org/repos/asf/lucene/dev/trunk
 svn merge -c 1545053 https://svn.apache.org/repos/asf/lucene/dev/trunk
 svn merge -c 1545054 https://svn.apache.org/repos/asf/lucene/dev/trunk
 svn merge -c 1545080 https://svn.apache.org/repos/asf/lucene/dev/trunk
 svn merge -c 1545143 https://svn.apache.org/repos/asf/lucene/dev/trunk
 svn merge -c 1545417 https://svn.apache.org/repos/asf/lucene/dev/trunk
 svn merge -c 1545514 https://svn.apache.org/repos/asf/lucene/dev/trunk
 svn merge -c 1545650 https://svn.apache.org/repos/asf/lucene/dev/trunk
 svn merge -c 1546074 https://svn.apache.org/repos/asf/lucene/dev/trunk
 svn merge -c 1546263 https://svn.apache.org/repos/asf/lucene/dev/trunk
 svn merge -c 1559770 https://svn.apache.org/repos/asf/lucene/dev/trunk
 svn merge -c 1583636 https://svn.apache.org/repos/asf/lucene/dev/trunk
 The only remaining thing I think needs to be done is to solidify the 
 interface, see comments from [~yo...@apache.org] on the two JIRAs mentioned, 
 although SOLR-5488 is the most relevant one.
 [~sbower], [~houstonputman] and [~yo...@apache.org] might be particularly 
 interested here.
 I really want to put this to bed, so if we can get agreement on this soon I 
 can make it march.



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-2245) MailEntityProcessor Update

2014-07-02 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-2245?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14050776#comment-14050776
 ] 

ASF subversion and git services commented on SOLR-2245:
---

Commit 1607489 from sha...@apache.org in branch 'dev/trunk'
[ https://svn.apache.org/r1607489 ]

SOLR-2245: Reverting due to unclear license for greenmail

 MailEntityProcessor Update
 --

 Key: SOLR-2245
 URL: https://issues.apache.org/jira/browse/SOLR-2245
 Project: Solr
  Issue Type: Improvement
  Components: contrib - DataImportHandler
Affects Versions: 1.4, 1.4.1
Reporter: Peter Sturge
Assignee: Timothy Potter
Priority: Minor
 Fix For: 4.9, 5.0

 Attachments: SOLR-2245.patch, SOLR-2245.patch, SOLR-2245.patch, 
 SOLR-2245.patch, SOLR-2245.patch, SOLR-2245.zip


 This patch addresses a number of issues in the MailEntityProcessor 
 contrib-extras module.
 The changes are outlined here:
 * Added an 'includeContent' entity attribute to allow specifying content to 
 be included independently of processing attachments
  e.g. <entity includeContent="true" processAttachments="false" . . . /> 
 would include message content, but not attachment content
 * Added a synonym called 'processAttachments', which is synonymous to the 
 mis-spelled (and singular) 'processAttachement' property. This property 
 functions the same as processAttachement. Default= 'true' - if either is 
 false, then attachments are not processed. Note that only one of these should 
 really be specified in a given entity tag.
 * Added a FLAGS.NONE value, so that if an email has no flags (i.e. it is 
 unread, not deleted etc.), there is still a property value stored in the 
 'flags' field (the value is the string none)
 Note: there is a potential backward compat issue with FLAGS.NONE for clients 
 that expect the absence of the 'flags' field to mean 'Not read'. I'm 
 calculating this would be extremely rare, and is inadvisable in any case as 
 user flags can be arbitrarily set, so fixing it up now will ensure future 
 client access will be consistent.
 * The folder name of an email is now included as a field called 'folder' 
 (e.g. folder=INBOX.Sent). This is quite handy in search/post-indexing 
 processing
 * The addPartToDocument() method that processes attachments is significantly 
 re-written, as there looked to be no real way the existing code would ever 
 actually process attachment content and add it to the row data
 Tested on the 3.x trunk with a number of popular imap servers.



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-2245) MailEntityProcessor Update

2014-07-02 Thread Shalin Shekhar Mangar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-2245?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14050785#comment-14050785
 ] 

Shalin Shekhar Mangar commented on SOLR-2245:
-

Tim, I have reverted your commit because the licensing terms for greenmail 
aren't clear. Their website says ASL 2.0 but I peeked into some of their source 
files and all of them have a header saying that they are licensed according to 
LGPL. This is a red flag and we need to tread carefully. There are plenty of 
ASL projects using greenmail and maybe I am just being paranoid but after 
consulting with Steve Rowe, I thought it safer to just revert the commit and 
get more clarity on the licensing issue.

Example:
http://grepcode.com/file/repo1.maven.org/maven2/com.icegreen/greenmail/1.3.1b/com/icegreen/greenmail/store/MailMessageAttributes.java

 MailEntityProcessor Update
 --

 Key: SOLR-2245
 URL: https://issues.apache.org/jira/browse/SOLR-2245
 Project: Solr
  Issue Type: Improvement
  Components: contrib - DataImportHandler
Affects Versions: 1.4, 1.4.1
Reporter: Peter Sturge
Assignee: Timothy Potter
Priority: Minor
 Fix For: 4.9, 5.0

 Attachments: SOLR-2245.patch, SOLR-2245.patch, SOLR-2245.patch, 
 SOLR-2245.patch, SOLR-2245.patch, SOLR-2245.zip


 This patch addresses a number of issues in the MailEntityProcessor 
 contrib-extras module.
 The changes are outlined here:
 * Added an 'includeContent' entity attribute to allow specifying content to 
 be included independently of processing attachments
  e.g. <entity includeContent="true" processAttachments="false" . . . /> 
 would include message content, but not attachment content
 * Added a synonym called 'processAttachments', which is synonymous to the 
 mis-spelled (and singular) 'processAttachement' property. This property 
 functions the same as processAttachement. Default= 'true' - if either is 
 false, then attachments are not processed. Note that only one of these should 
 really be specified in a given entity tag.
 * Added a FLAGS.NONE value, so that if an email has no flags (i.e. it is 
 unread, not deleted etc.), there is still a property value stored in the 
 'flags' field (the value is the string none)
 Note: there is a potential backward compat issue with FLAGS.NONE for clients 
 that expect the absence of the 'flags' field to mean 'Not read'. I'm 
 calculating this would be extremely rare, and is inadvisable in any case as 
 user flags can be arbitrarily set, so fixing it up now will ensure future 
 client access will be consistent.
 * The folder name of an email is now included as a field called 'folder' 
 (e.g. folder=INBOX.Sent). This is quite handy in search/post-indexing 
 processing
 * The addPartToDocument() method that processes attachments is significantly 
 re-written, as there looked to be no real way the existing code would ever 
 actually process attachment content and add it to the row data
 Tested on the 3.x trunk with a number of popular imap servers.



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-5473) Make one state.json per collection

2014-07-02 Thread Noble Paul (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5473?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14050722#comment-14050722
 ] 

Noble Paul edited comment on SOLR-5473 at 7/2/14 10:10 PM:
---

bq.How do the APIs need to change to get that zkStateReader out of ClusterState? 
You're telling me that's just impossible? I simply don't believe that.

OK, we can change the clusterstate often, and it won't be realtime anymore. Is 
that OK? I can make that change easily. Are you OK with selective watching?
I'm posting a patch soon with zkStateReader removed from ClusterState. 


was (Author: noble.paul):
bq.How do the APIs need to change to get that zkStateReader out of ClusterState? 
You're telling me that's just impossible? I simply don't believe that.

OK, we can change the clusterstate often, and it won't be realtime anymore. Is 
that OK? I can make that change easily. Are you OK with selective watching?

 Make one state.json per collection
 --

 Key: SOLR-5473
 URL: https://issues.apache.org/jira/browse/SOLR-5473
 Project: Solr
  Issue Type: Sub-task
  Components: SolrCloud
Reporter: Noble Paul
Assignee: Noble Paul
 Fix For: 5.0

 Attachments: SOLR-5473-74 .patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-configname-fix.patch, 
 SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, 
 SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, 
 SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, 
 SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, 
 SOLR-5473.patch, SOLR-5473.patch, SOLR-5473_undo.patch, 
 ec2-23-20-119-52_solr.log, ec2-50-16-38-73_solr.log


 As defined in the parent issue, store the states of each collection under 
 /collections/collectionname/state.json node



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-5302) Analytics Component

2014-07-02 Thread Shalin Shekhar Mangar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5302?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14050796#comment-14050796
 ] 

Shalin Shekhar Mangar commented on SOLR-5302:
-

Has anybody given any thought to how this might use the new AnalyticsQuery? Is 
the AnalyticsQuery framework powerful enough to make this component cloud-aware?

 Analytics Component
 ---

 Key: SOLR-5302
 URL: https://issues.apache.org/jira/browse/SOLR-5302
 Project: Solr
  Issue Type: New Feature
Reporter: Steven Bower
Assignee: Erick Erickson
 Fix For: 5.0

 Attachments: SOLR-5302.patch, SOLR-5302.patch, SOLR-5302.patch, 
 SOLR-5302.patch, Search Analytics Component.pdf, Statistical Expressions.pdf, 
 solr_analytics-2013.10.04-2.patch


 This ticket is to track a replacement for the StatsComponent. The 
 AnalyticsComponent supports the following features:
 * All functionality of StatsComponent (SOLR-4499)
 * Field Faceting (SOLR-3435)
 ** Support for limit
 ** Sorting (bucket name or any stat in the bucket)
 ** Support for offset
 * Range Faceting
 ** Supports all options of standard range faceting
 * Query Faceting (SOLR-2925)
 * Ability to use overall/field facet statistics as input to range/query 
 faceting (i.e. calc min/max date and then facet over that range)
 * Support for more complex aggregate/mapping operations (SOLR-1622)
 ** Aggregations: min, max, sum, sum-of-square, count, missing, stddev, mean, 
 median, percentiles
 ** Operations: negation, abs, add, multiply, divide, power, log, date math, 
 string reversal, string concat
 ** Easily pluggable framework to add additional operations
 * New / cleaner output format
 Outstanding Issues:
 * Multi-value field support for stats (supported for faceting)
 * Multi-shard support (may not be possible for some operations, e.g. median)



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-2245) MailEntityProcessor Update

2014-07-02 Thread Steve Rowe (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-2245?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14050810#comment-14050810
 ] 

Steve Rowe commented on SOLR-2245:
--

Redhat apparently had the same licensing realization as you, Shalin, though 
from a different perspective (AFAIK they aren't bound by ASL compatibility):

# the bug they filed with the greenmail project (no response): 
https://sourceforge.net/p/greenmail/bugs/8/
# the Redhat bugzilla issue: https://bugzilla.redhat.com/show_bug.cgi?id=1059805
# the Fedora hosted tracker issue: https://fedorahosted.org/fpc/ticket/392

The third link has lots of info, though mostly related to issues around whether 
it was okay to accept an effective partial fork of Apache James.

 MailEntityProcessor Update
 --

 Key: SOLR-2245
 URL: https://issues.apache.org/jira/browse/SOLR-2245
 Project: Solr
  Issue Type: Improvement
  Components: contrib - DataImportHandler
Affects Versions: 1.4, 1.4.1
Reporter: Peter Sturge
Assignee: Timothy Potter
Priority: Minor
 Fix For: 4.9, 5.0

 Attachments: SOLR-2245.patch, SOLR-2245.patch, SOLR-2245.patch, 
 SOLR-2245.patch, SOLR-2245.patch, SOLR-2245.zip


 This patch addresses a number of issues in the MailEntityProcessor 
 contrib-extras module.
 The changes are outlined here:
 * Added an 'includeContent' entity attribute to allow specifying content to 
 be included independently of processing attachments
  e.g. entity includeContent=true processAttachments=false . . . / 
 would include message content, but not attachment content
 * Added a synonym called 'processAttachments', which is synonymous to the 
 mis-spelled (and singular) 'processAttachement' property. This property 
 functions the same as processAttachement. Default= 'true' - if either is 
 false, then attachments are not processed. Note that only one of these should 
 really be specified in a given entity tag.
 * Added a FLAGS.NONE value, so that if an email has no flags (i.e. it is 
 unread, not deleted etc.), there is still a property value stored in the 
 'flags' field (the value is the string none)
 Note: there is a potential backward compat issue with FLAGS.NONE for clients 
 that expect the absence of the 'flags' field to mean 'Not read'. I'm 
 calculating this would be extremely rare, and is inadviasable in any case as 
 user flags can be arbitrarily set, so fixing it up now will ensure future 
 client access will be consistent.
 * The folder name of an email is now included as a field called 'folder' 
 (e.g. folder=INBOX.Sent). This is quite handy in search/post-indexing 
 processing
 * The addPartToDocument() method that processes attachments is significantly 
 re-written, as there looked to be no real way the existing code would ever 
 actually process attachment content and add it to the row data
 Tested on the 3.x trunk with a number of popular imap servers.



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-6216) Better faceting for multiple intervals on DV fields

2014-07-02 Thread JIRA

[ 
https://issues.apache.org/jira/browse/SOLR-6216?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14048153#comment-14048153
 ] 

Tomás Fernández Löbbe edited comment on SOLR-6216 at 7/2/14 10:35 PM:
--

I did some very basic performance testing to compare interval faceting vs facet 
queries: 
Dataset: Geonames.org dataset (added 4 times to make it a 33M-doc index)
Query Set: 4960 boolean queries using terms from the dataset
1 document updated every second
autoSoftCommit every second
HW: MacBook Pro Core i7, 2.7 GHz with 8 GB of RAM with spinning disk (5400 RPM)
All times are in milliseconds
Repeated the test with different numbers of intervals (on the “population” field 
of the geonames dataset)

|| ||   Num Intervals ||1 ||2 ||3 ||4 ||5 ||10 ||
| Min   |Intervals |25 |23 |26 |23 |24 |26 |
| | Facet Query |   2 | 2 | 3 | 4 | 4 | 6 |
|Max |  Intervals | 1885 |  2254 |  2508 |  2800 |  2749 |  3031 |
| | Facet Query |   2199 |  2414 |  3957 |  2766 |  1869 |  5975 |
| Average   | Intervals |   181 |   177 |   191 |   183 |   148 |   174 |
| |Facet Query| 156|277|359|299|216|408|
|P10|Intervals  |53 |54 |54 |54 |54 |56|
| |Facet Query  |26 |30 |33 |31 |29 |35|
|P50|Intervals  |96 |95 |98 |97 |88 |96|
| |Facet Query  |54 |211|293|188|58 |74|
|P90|Intervals  |453|940|467|458|350|438|
| |Facet Query  |432|656|794|749|660|1066|
|P99|Intervals  |809|884|968|877|857|897|
| |Facet Query  |867|1041   |1354   |1219   |1116   |1784|

There is some variation between the tests with different numbers of intervals 
(with the same method) that I don’t understand very well. For each test, I’d 
restart the jetty (index files are probably cached between tests though).

In general what I see is that the average is similar to or lower than facet 
query, the p10 and p50 are similar to or higher than facet query (these are 
probably the cases where the facet queries hit the cache), and p90 and p99 are 
lower for the Intervals impl. This is probably because of facet queries missing 
the cache. 

“Max” varies a lot; I don’t think it’s a very representative number, I just 
left it for completeness. Min is very similar for all cases; it’s obvious that 
in the best case (all cache hits), facet query is much faster than intervals. 

I also did a quick test on an internal collection with around 100M docs in a 
single shard, and ran around 6000 queries with around 40 intervals each. For 
this test I got: 

|Min|Intervals  |122|
| |Facet Query  |124|
|Max |Intervals |6626|
| |Facet Query  |61009|
|Average|Intervals  |238|
| |Facet Query  |620|
|P10|Intervals  |155|
| |Facet Query  |151|
|P50|Intervals  |201|
| |Facet Query  |202|
|P90|Intervals  |324|
| |Facet Query  |461|
|P99|Intervals  |836|
| |Facet Query  |23662|
 
This collection has updates and soft commits. 
I don’t have numbers for distributed tests, but from what I could see, the 
result was even better on wide collections, presumably because of the lower 
p90/p99. 
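
For reference, a rough SolrJ sketch of the two request shapes being compared. 
The parameter names follow interval faceting as it eventually shipped 
(facet.interval / facet.interval.set); the interval bounds on the “population” 
field are invented and are not the ones used in this benchmark:

import org.apache.solr.client.solrj.SolrQuery;

public class IntervalVsFacetQuery {
  public static void main(String[] args) {
    // Interval faceting: buckets computed in one pass over the field's DocValues.
    SolrQuery intervals = new SolrQuery("*:*");
    intervals.setFacet(true);
    intervals.add("facet.interval", "population");
    intervals.add("facet.interval.set", "[0,1000)");
    intervals.add("facet.interval.set", "[1000,100000)");
    intervals.add("facet.interval.set", "[100000,*]");

    // facet.query equivalent: one range query (and potential filterCache entry) per bucket.
    SolrQuery facetQueries = new SolrQuery("*:*");
    facetQueries.setFacet(true);
    facetQueries.addFacetQuery("population:[0 TO 999]");
    facetQueries.addFacetQuery("population:[1000 TO 99999]");
    facetQueries.addFacetQuery("population:[100000 TO *]");

    System.out.println(intervals);
    System.out.println(facetQueries);
  }
}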



[jira] [Commented] (SOLR-5302) Analytics Component

2014-07-02 Thread Joel Bernstein (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5302?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14050830#comment-14050830
 ] 

Joel Bernstein commented on SOLR-5302:
--

I'm fairly certain all the functionality in the AnalyticsComponent could be 
implemented as an AnalyticsQuery. Any functions that could be distributed would 
have MergeStrategy implementations as well. 
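
As a rough illustration of that shape (the class below is a made-up example, 
not Solr code, and the signatures are recalled from the 4.9-era AnalyticsQuery 
and DelegatingCollector API, so treat them as approximate), a trivial analytics 
query that counts matches and adds the count to the response:

import java.io.IOException;
import org.apache.lucene.search.IndexSearcher;
import org.apache.solr.handler.component.ResponseBuilder;
import org.apache.solr.search.AnalyticsQuery;
import org.apache.solr.search.DelegatingCollector;

// Hypothetical example class, not an existing Solr class.
public class MatchCountQuery extends AnalyticsQuery {
  @Override
  public DelegatingCollector getAnalyticsCollector(final ResponseBuilder rb,
                                                   IndexSearcher searcher) {
    return new DelegatingCollector() {
      private int count;
      @Override
      public void collect(int doc) throws IOException {
        count++;
        super.collect(doc);              // keep passing docs down the collector chain
      }
      @Override
      public void finish() throws IOException {
        rb.rsp.add("matchCount", count); // surface the stat in the response
        super.finish();
      }
    };
  }
}
// A distributed version would additionally supply a MergeStrategy (via the
// AnalyticsQuery(MergeStrategy) constructor) to combine the per-shard values.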

 Analytics Component
 ---

 Key: SOLR-5302
 URL: https://issues.apache.org/jira/browse/SOLR-5302
 Project: Solr
  Issue Type: New Feature
Reporter: Steven Bower
Assignee: Erick Erickson
 Fix For: 5.0

 Attachments: SOLR-5302.patch, SOLR-5302.patch, SOLR-5302.patch, 
 SOLR-5302.patch, Search Analytics Component.pdf, Statistical Expressions.pdf, 
 solr_analytics-2013.10.04-2.patch


 This ticket is to track a replacement for the StatsComponent. The 
 AnalyticsComponent supports the following features:
 * All functionality of StatsComponent (SOLR-4499)
 * Field Faceting (SOLR-3435)
 ** Support for limit
 ** Sorting (bucket name or any stat in the bucket)
 ** Support for offset
 * Range Faceting
 ** Supports all options of standard range faceting
 * Query Faceting (SOLR-2925)
 * Ability to use overall/field facet statistics as input to range/query 
 faceting (i.e. calc min/max date and then facet over that range)
 * Support for more complex aggregate/mapping operations (SOLR-1622)
 ** Aggregations: min, max, sum, sum-of-square, count, missing, stddev, mean, 
 median, percentiles
 ** Operations: negation, abs, add, multiply, divide, power, log, date math, 
 string reversal, string concat
 ** Easily pluggable framework to add additional operations
 * New / cleaner output format
 Outstanding Issues:
 * Multi-value field support for stats (supported for faceting)
 * Multi-shard support (may not be possible for some operations, eg median)



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-5473) Make one state.json per collection

2014-07-02 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5473?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14050829#comment-14050829
 ] 

Mark Miller commented on SOLR-5473:
---

I'm still calling for a revert. People will start committing issues on top of 
this and make taking it out difficult and error prone. I've already had to 
convert some of my own work to this state and then back from it; the last time, 
it took a couple of days to take it out.

Removing zkStateReader is a key step, but one of many issues I'm trying to get 
across. The APIs in general are the problem. No user or developer that did not 
work on this can sanely deal with these changes IMO. The APIs are intermingled, 
there is no indication that there will be a deprecation and evolution path; 
it's still, as I've said more than once above, implemented simply like an 
option. There are awkward APIs, it's not clear what the modes are, where the 
code that separates the modes is divided, etc.

I still think this looks early. 

 Make one state.json per collection
 --

 Key: SOLR-5473
 URL: https://issues.apache.org/jira/browse/SOLR-5473
 Project: Solr
  Issue Type: Sub-task
  Components: SolrCloud
Reporter: Noble Paul
Assignee: Noble Paul
 Fix For: 5.0

 Attachments: SOLR-5473-74 .patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-configname-fix.patch, 
 SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, 
 SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, 
 SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, 
 SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, 
 SOLR-5473.patch, SOLR-5473.patch, SOLR-5473_undo.patch, 
 ec2-23-20-119-52_solr.log, ec2-50-16-38-73_solr.log


 As defined in the parent issue, store the states of each collection under 
 /collections/collectionname/state.json node



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-5963) Finalize interface and backport analytics component to 4x

2014-07-02 Thread Erick Erickson (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5963?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14050833#comment-14050833
 ] 

Erick Erickson commented on SOLR-5963:
--

Not at this time. This is in considerable flux at the moment; I'm not entirely 
sure whether it will be backported or whether we'll instead start over with the 
pluggable analytics.

Applying the patch won't really help either. You need to merge all the 
revisions as I've outlined.

Best,
Erick

 Finalize interface and backport analytics component to 4x
 -

 Key: SOLR-5963
 URL: https://issues.apache.org/jira/browse/SOLR-5963
 Project: Solr
  Issue Type: Improvement
Affects Versions: 4.9, 5.0
Reporter: Erick Erickson
Assignee: Erick Erickson
 Attachments: SOLR-5963.patch, SOLR-5963.patch


 Now that we seem to have fixed up the test failures for trunk for the 
 analytics component, we need to solidify the API and back-port it to 4x. For 
 history, see SOLR-5302 and SOLR-5488.
 As far as I know, these are the merges that need to occur to do this (plus 
 any that this JIRA brings up)
 svn merge -c 1543651 https://svn.apache.org/repos/asf/lucene/dev/trunk
 svn merge -c 1545009 https://svn.apache.org/repos/asf/lucene/dev/trunk
 svn merge -c 1545053 https://svn.apache.org/repos/asf/lucene/dev/trunk
 svn merge -c 1545054 https://svn.apache.org/repos/asf/lucene/dev/trunk
 svn merge -c 1545080 https://svn.apache.org/repos/asf/lucene/dev/trunk
 svn merge -c 1545143 https://svn.apache.org/repos/asf/lucene/dev/trunk
 svn merge -c 1545417 https://svn.apache.org/repos/asf/lucene/dev/trunk
 svn merge -c 1545514 https://svn.apache.org/repos/asf/lucene/dev/trunk
 svn merge -c 1545650 https://svn.apache.org/repos/asf/lucene/dev/trunk
 svn merge -c 1546074 https://svn.apache.org/repos/asf/lucene/dev/trunk
 svn merge -c 1546263 https://svn.apache.org/repos/asf/lucene/dev/trunk
 svn merge -c 1559770 https://svn.apache.org/repos/asf/lucene/dev/trunk
 svn merge -c 1583636 https://svn.apache.org/repos/asf/lucene/dev/trunk
 The only remaining thing I think needs to be done is to solidify the 
 interface, see comments from [~yo...@apache.org] on the two JIRAs mentioned, 
 although SOLR-5488 is the most relevant one.
 [~sbower], [~houstonputman] and [~yo...@apache.org] might be particularly 
 interested here.
 I really want to put this to bed, so if we can get agreement on this soon I 
 can make it march.



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-5302) Analytics Component

2014-07-02 Thread Erick Erickson (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5302?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14050839#comment-14050839
 ] 

Erick Erickson commented on SOLR-5302:
--

Shalin:

It's actually a somewhat different problem, I think. We're thinking of pulling 
this out of 5.x and going with the analytics framework instead, but haven't 
quite reached consensus on that. The big consideration here is that making this 
work distributed seems like a big task. Using the pluggable framework seems 
like it would be easier to build up as necessary.

We really need to figure it out soon.


 Analytics Component
 ---

 Key: SOLR-5302
 URL: https://issues.apache.org/jira/browse/SOLR-5302
 Project: Solr
  Issue Type: New Feature
Reporter: Steven Bower
Assignee: Erick Erickson
 Fix For: 5.0

 Attachments: SOLR-5302.patch, SOLR-5302.patch, SOLR-5302.patch, 
 SOLR-5302.patch, Search Analytics Component.pdf, Statistical Expressions.pdf, 
 solr_analytics-2013.10.04-2.patch


 This ticket is to track a replacement for the StatsComponent. The 
 AnalyticsComponent supports the following features:
 * All functionality of StatsComponent (SOLR-4499)
 * Field Faceting (SOLR-3435)
 ** Support for limit
 ** Sorting (bucket name or any stat in the bucket)
 ** Support for offset
 * Range Faceting
 ** Supports all options of standard range faceting
 * Query Faceting (SOLR-2925)
 * Ability to use overall/field facet statistics as input to range/query 
 faceting (i.e. calc min/max date and then facet over that range)
 * Support for more complex aggregate/mapping operations (SOLR-1622)
 ** Aggregations: min, max, sum, sum-of-square, count, missing, stddev, mean, 
 median, percentiles
 ** Operations: negation, abs, add, multiply, divide, power, log, date math, 
 string reversal, string concat
 ** Easily pluggable framework to add additional operations
 * New / cleaner output format
 Outstanding Issues:
 * Multi-value field support for stats (supported for faceting)
 * Multi-shard support (may not be possible for some operations, eg median)



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-trunk-MacOSX (64bit/jdk1.8.0) - Build # 1685 - Still Failing!

2014-07-02 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-MacOSX/1685/
Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseParallelGC

All tests passed

Build Log:
[...truncated 29014 lines...]
check-licenses:
 [echo] License check under: 
/Users/jenkins/workspace/Lucene-Solr-trunk-MacOSX/solr
 [licenses] CHECKSUM FAILED for 
/Users/jenkins/workspace/Lucene-Solr-trunk-MacOSX/solr/contrib/dataimporthandler-extras/test-lib/greenmail-1.3.1b.jar
 (expected: 1e2727e8cae768b8f91dfc44cc5e8e1c4802c5c0  
/home/maven/repository-staging/to-ibiblio/maven2/com/icegreen/greenmail/1.3.1b/greenmail-1.3.1b.jar
 was: 1e2727e8cae768b8f91dfc44cc5e8e1c4802c5c0)
 [licenses] MISSING LICENSE for the following file:
 [licenses]   
/Users/jenkins/workspace/Lucene-Solr-trunk-MacOSX/solr/contrib/dataimporthandler/lib/gimap-1.5.1.jar
 [licenses]   Expected locations below:
 [licenses]   = 
/Users/jenkins/workspace/Lucene-Solr-trunk-MacOSX/solr/licenses/gimap-LICENSE-ASL.txt
 [licenses]   = 
/Users/jenkins/workspace/Lucene-Solr-trunk-MacOSX/solr/licenses/gimap-LICENSE-BSD.txt
 [licenses]   = 
/Users/jenkins/workspace/Lucene-Solr-trunk-MacOSX/solr/licenses/gimap-LICENSE-BSD_LIKE.txt
 [licenses]   = 
/Users/jenkins/workspace/Lucene-Solr-trunk-MacOSX/solr/licenses/gimap-LICENSE-CDDL.txt
 [licenses]   = 
/Users/jenkins/workspace/Lucene-Solr-trunk-MacOSX/solr/licenses/gimap-LICENSE-CPL.txt
 [licenses]   = 
/Users/jenkins/workspace/Lucene-Solr-trunk-MacOSX/solr/licenses/gimap-LICENSE-EPL.txt
 [licenses]   = 
/Users/jenkins/workspace/Lucene-Solr-trunk-MacOSX/solr/licenses/gimap-LICENSE-MIT.txt
 [licenses]   = 
/Users/jenkins/workspace/Lucene-Solr-trunk-MacOSX/solr/licenses/gimap-LICENSE-MPL.txt
 [licenses]   = 
/Users/jenkins/workspace/Lucene-Solr-trunk-MacOSX/solr/licenses/gimap-LICENSE-PD.txt
 [licenses]   = 
/Users/jenkins/workspace/Lucene-Solr-trunk-MacOSX/solr/licenses/gimap-LICENSE-SUN.txt
 [licenses]   = 
/Users/jenkins/workspace/Lucene-Solr-trunk-MacOSX/solr/licenses/gimap-LICENSE-COMPOUND.txt
 [licenses]   = 
/Users/jenkins/workspace/Lucene-Solr-trunk-MacOSX/solr/licenses/gimap-LICENSE-FAKE.txt
 [licenses] MISSING LICENSE for the following file:
 [licenses]   
/Users/jenkins/workspace/Lucene-Solr-trunk-MacOSX/solr/contrib/dataimporthandler/lib/javax.mail-1.5.1.jar
 [licenses]   Expected locations below:
 [licenses]   = 
/Users/jenkins/workspace/Lucene-Solr-trunk-MacOSX/solr/licenses/javax.mail-LICENSE-ASL.txt
 [licenses]   = 
/Users/jenkins/workspace/Lucene-Solr-trunk-MacOSX/solr/licenses/javax.mail-LICENSE-BSD.txt
 [licenses]   = 
/Users/jenkins/workspace/Lucene-Solr-trunk-MacOSX/solr/licenses/javax.mail-LICENSE-BSD_LIKE.txt
 [licenses]   = 
/Users/jenkins/workspace/Lucene-Solr-trunk-MacOSX/solr/licenses/javax.mail-LICENSE-CDDL.txt
 [licenses]   = 
/Users/jenkins/workspace/Lucene-Solr-trunk-MacOSX/solr/licenses/javax.mail-LICENSE-CPL.txt
 [licenses]   = 
/Users/jenkins/workspace/Lucene-Solr-trunk-MacOSX/solr/licenses/javax.mail-LICENSE-EPL.txt
 [licenses]   = 
/Users/jenkins/workspace/Lucene-Solr-trunk-MacOSX/solr/licenses/javax.mail-LICENSE-MIT.txt
 [licenses]   = 
/Users/jenkins/workspace/Lucene-Solr-trunk-MacOSX/solr/licenses/javax.mail-LICENSE-MPL.txt
 [licenses]   = 
/Users/jenkins/workspace/Lucene-Solr-trunk-MacOSX/solr/licenses/javax.mail-LICENSE-PD.txt
 [licenses]   = 
/Users/jenkins/workspace/Lucene-Solr-trunk-MacOSX/solr/licenses/javax.mail-LICENSE-SUN.txt
 [licenses]   = 
/Users/jenkins/workspace/Lucene-Solr-trunk-MacOSX/solr/licenses/javax.mail-LICENSE-COMPOUND.txt
 [licenses]   = 
/Users/jenkins/workspace/Lucene-Solr-trunk-MacOSX/solr/licenses/javax.mail-LICENSE-FAKE.txt
 [licenses] Scanned 208 JAR file(s) for licenses (in 2.28s.), 3 error(s).

BUILD FAILED
/Users/jenkins/workspace/Lucene-Solr-trunk-MacOSX/build.xml:467: The following 
error occurred while executing this line:
/Users/jenkins/workspace/Lucene-Solr-trunk-MacOSX/build.xml:70: The following 
error occurred while executing this line:
/Users/jenkins/workspace/Lucene-Solr-trunk-MacOSX/solr/build.xml:254: The 
following error occurred while executing this line:
/Users/jenkins/workspace/Lucene-Solr-trunk-MacOSX/lucene/tools/custom-tasks.xml:62:
 License check failed. Check the logs.

Total time: 112 minutes 45 seconds
Build step 'Invoke Ant' marked build as failure
[description-setter] Description set: Java: 64bit/jdk1.8.0 
-XX:-UseCompressedOops -XX:+UseParallelGC
Archiving artifacts
Recording test results
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

[JENKINS] Lucene-Solr-trunk-Linux (64bit/jdk1.8.0_05) - Build # 10706 - Still Failing!

2014-07-02 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/10706/
Java: 64bit/jdk1.8.0_05 -XX:+UseCompressedOops -XX:+UseParallelGC

All tests passed

Build Log:
[...truncated 58923 lines...]
BUILD FAILED
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/build.xml:467: The following 
error occurred while executing this line:
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/build.xml:406: The following 
error occurred while executing this line:
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/extra-targets.xml:87: The 
following error occurred while executing this line:
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/extra-targets.xml:179: The 
following files are missing svn:eol-style (or binary svn:mime-type):
* ./solr/example/solr/collection1/conf/_rest_managed.json

Total time: 77 minutes 16 seconds
Build step 'Invoke Ant' marked build as failure
[description-setter] Description set: Java: 64bit/jdk1.8.0_05 
-XX:+UseCompressedOops -XX:+UseParallelGC
Archiving artifacts
Recording test results
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

[jira] [Commented] (SOLR-2245) MailEntityProcessor Update

2014-07-02 Thread Hoss Man (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-2245?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14050875#comment-14050875
 ] 

Hoss Man commented on SOLR-2245:


FWIW: 

* everything _but_ the source headers seems to agree that greenmail is ASL 
licensed...
** SF.net license label added by project maintainer: 
http://sourceforge.net/projects/greenmail/ - 
http://sourceforge.net/directory/license:apache2/
** project website: http://www.icegreen.com/greenmail/ - 
http://www.apache.org/licenses/LICENSE-2.0.txt
** maven pom.xml: 
http://sourceforge.net/p/greenmail/code/HEAD/tree/trunk/pom.xml#l18
** license.txt shipped with project: 
http://sourceforge.net/p/greenmail/code/HEAD/tree/trunk/license.txt
* from what i can tell, even files copied verbatim from Apache James, w/o any 
modifications, have the exact same LGPL copyright header - which smells like a 
straight up IDE generated header mistake to me.
* there is a feedback page on the icegreen.com domain that links to contact 
details on another site -- i suppose you could try reaching out to the dev that 
way: http://www.icegreen.com/greenmail/feedback.html - 
http://waelchatila.com/pages/consulting.html
* greenmail appears to be the epitome of a completely dead project -- any 
resolution of this issue that involves waiting for developer response / 
action is probably a bad idea...
** Latest code commit: 2010-06-03 
http://sourceforge.net/p/greenmail/code/HEAD/tree/
** issue tracker contains 8 bug reports going back to 2009-03-15, none of which 
have ever received a comment from any project developer: 
http://sourceforge.net/p/greenmail/bugs/
** most recent mailing list postings from project dev:
*** latest reply to user list from a dev: Jan 2010 - 
http://sourceforge.net/p/greenmail/mailman/message/24321407/ 
*** latest release announcement list message: Dec 2007 - 
http://sourceforge.net/p/greenmail/mailman/greenmail-announcement/
** project website has a blog link that redirects to another domain (same as 
contact details) where most recent greenmail blog is 1.3 release announcement 
from 2007: http://www.icegreen.com/articles - http://waelchatila.com/ - 
http://waelchatila.com/tags/greenmail/


 MailEntityProcessor Update
 --

 Key: SOLR-2245
 URL: https://issues.apache.org/jira/browse/SOLR-2245
 Project: Solr
  Issue Type: Improvement
  Components: contrib - DataImportHandler
Affects Versions: 1.4, 1.4.1
Reporter: Peter Sturge
Assignee: Timothy Potter
Priority: Minor
 Fix For: 4.9, 5.0

 Attachments: SOLR-2245.patch, SOLR-2245.patch, SOLR-2245.patch, 
 SOLR-2245.patch, SOLR-2245.patch, SOLR-2245.zip


 This patch addresses a number of issues in the MailEntityProcessor 
 contrib-extras module.
 The changes are outlined here:
 * Added an 'includeContent' entity attribute to allow specifying content to 
 be included independently of processing attachments
  e.g. <entity includeContent="true" processAttachments="false" . . . /> 
 would include message content, but not attachment content
 * Added a synonym called 'processAttachments', which is synonymous with the 
 mis-spelled (and singular) 'processAttachement' property. This property 
 functions the same as processAttachement. Default = 'true' - if either is 
 false, then attachments are not processed. Note that only one of these should 
 really be specified in a given entity tag.
 * Added a FLAGS.NONE value, so that if an email has no flags (i.e. it is 
 unread, not deleted etc.), there is still a property value stored in the 
 'flags' field (the value is the string 'none')
 Note: there is a potential backward compat issue with FLAGS.NONE for clients 
 that expect the absence of the 'flags' field to mean 'Not read'. I'm 
 calculating this would be extremely rare, and it is inadvisable in any case as 
 user flags can be arbitrarily set, so fixing it up now will ensure future 
 client access will be consistent.
 * The folder name of an email is now included as a field called 'folder' 
 (e.g. folder=INBOX.Sent). This is quite handy in search/post-indexing 
 processing
 * The addPartToDocument() method that processes attachments is significantly 
 re-written, as there looked to be no real way the existing code would ever 
 actually process attachment content and add it to the row data
 Tested on the 3.x trunk with a number of popular imap servers.



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-2245) MailEntityProcessor Update

2014-07-02 Thread Steve Rowe (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-2245?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14050897#comment-14050897
 ] 

Steve Rowe commented on SOLR-2245:
--

ASF projects I found that depend on greenmail for testing purposes - there are 
probably others - here are the pom.xml files that have this dependency:

# Apache Syncope: http://svn.apache.org/repos/asf/syncope/trunk/pom.xml
# Apache Geronimo: 
https://svn.apache.org/repos/asf/geronimo/javamail/trunk/geronimo-javamail_1.4/geronimo-javamail_1.4_provider/pom.xml
# Apache OODT: http://svn.apache.org/repos/asf/oodt/trunk/protocol/imaps/pom.xml
# Apache Oozie: 
https://git-wip-us.apache.org/repos/asf?p=oozie.git;a=blob;f=pom.xml;h=bad1e0fbee619f2e5020733792c4a09256b69dcf;hb=master
# Apache Axis2: 
http://svn.apache.org/repos/asf/axis/axis2/java/core/trunk/modules/transport/mail/pom.xml



 MailEntityProcessor Update
 --

 Key: SOLR-2245
 URL: https://issues.apache.org/jira/browse/SOLR-2245
 Project: Solr
  Issue Type: Improvement
  Components: contrib - DataImportHandler
Affects Versions: 1.4, 1.4.1
Reporter: Peter Sturge
Assignee: Timothy Potter
Priority: Minor
 Fix For: 4.9, 5.0

 Attachments: SOLR-2245.patch, SOLR-2245.patch, SOLR-2245.patch, 
 SOLR-2245.patch, SOLR-2245.patch, SOLR-2245.zip


 This patch addresses a number of issues in the MailEntityProcessor 
 contrib-extras module.
 The changes are outlined here:
 * Added an 'includeContent' entity attribute to allow specifying content to 
 be included independently of processing attachments
  e.g. <entity includeContent="true" processAttachments="false" . . . /> 
 would include message content, but not attachment content
 * Added a synonym called 'processAttachments', which is synonymous with the 
 mis-spelled (and singular) 'processAttachement' property. This property 
 functions the same as processAttachement. Default = 'true' - if either is 
 false, then attachments are not processed. Note that only one of these should 
 really be specified in a given entity tag.
 * Added a FLAGS.NONE value, so that if an email has no flags (i.e. it is 
 unread, not deleted etc.), there is still a property value stored in the 
 'flags' field (the value is the string 'none')
 Note: there is a potential backward compat issue with FLAGS.NONE for clients 
 that expect the absence of the 'flags' field to mean 'Not read'. I'm 
 calculating this would be extremely rare, and it is inadvisable in any case as 
 user flags can be arbitrarily set, so fixing it up now will ensure future 
 client access will be consistent.
 * The folder name of an email is now included as a field called 'folder' 
 (e.g. folder=INBOX.Sent). This is quite handy in search/post-indexing 
 processing
 * The addPartToDocument() method that processes attachments is significantly 
 re-written, as there looked to be no real way the existing code would ever 
 actually process attachment content and add it to the row data
 Tested on the 3.x trunk with a number of popular imap servers.



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-2894) Implement distributed pivot faceting

2014-07-02 Thread Hoss Man (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-2894?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hoss Man updated SOLR-2894:
---

Attachment: SOLR-2894.patch


bq. It looks like we need to rethink how the values are encoded into a path for 
the purpose of refinement so we can account for and differentiate between 
missing values, the empty string (0 chars), and the literal string null (4 
chars)

I've been working on this for the last few days - cleaning up how we deal with 
the refinement strings so that facet.missing and/or empty strings ("") in 
fields won't be problematic.

It's been slow going as i tried to be systematic about refactoring & 
documenting methods as i went along and started understanding more and more of 
the code.

The bulk of the changes i made can be summarized as (a standalone sketch of one 
such encoding scheme appears at the end of this comment):
# make the valuePath tracking more structured via List<String> instead of 
building up a single comma-separated refinement string right off the bat
# refactor the encoding/decoding of the refinement strings into a utility 
method that can handle null and empty string.
# refactor the refinement count & subset computation so that it can actually 
handle facet.missing correctly (before, attempts at refining facet.missing were 
just looking for the term "null" (ie: 4 characters))

Full details on how this patch differs from the last one are listed below -- 
but as things stand right now there is still a nasty bug somewhere in the 
facet.missing processing that i can't wrap my head around...  

In short: when facet.missing is enabled in the SPECIAL test i mentioned in my 
last comment, it's somehow causing the refined counts of the non-missing 
SPECIAL value to be wrong (even if the SPECIAL value is a regular string, and 
not ""). 

I can't really wrap my head around how that's happening -- it's going to 
involve some more manual testing & some more unit tests to get to the bottom of 
it, but in the meantime I wanted to get this patch posted.

If folks could review it & sanity check that i'm not doing something stupid 
with the refinement, that would be appreciated.



Detailed changes in this patch iteration...

* PivotFacetHelper
** add new encodeRefinementValuePath & decodeRefinementValuePath methods
*** special encoding to handle empty strings (should be valid when pivoting) 
and null values (needed for facet.missing refinement)
** add tests in TestPivotHelperCode
* PivotFacetValue & PivotFacetField
** in general, make these a bit more structured
** eliminate fieldPath since it's unused
** replace PivotFacetValue.field (String) with a ref to the actual parentPivot 
(PivotFacetField)
** add PivotFacetField.parentValue (PivotFacetValue) to ref the value this 
pivot field is nested under (if any)
** replace valuePath with getValuePath() (List<String>) to track the full 
structure
* FacetComponent
** prune some big chunks of commented out code (alt approaches no longer needed 
it looks like?)
** use new PivotFacetValue.getValuePath() + 
PivotFacetHelper.encodeRefinementValuePath instead of PivotFacetValue.valuePath
* SimpleFacets
** make getListedTermCounts(String,String) private again & add javadocs 
clarifying that it smart-splits the list of terms
** convert getListedTermCounts(String,String,DocSet) -> 
getListedTermCounts(String,DocSet,List<String>)
*** ie: pull the split logic out of this method, since it's confusing, and some 
callers don't need it.
*** add javadocs
*** updated SimpleFacets callers to do the split themselves
* PivotFacetProcessor
** refactor subset logic (that dealt with missing values via a negated range 
query) into a getSubset helper method
*** add complementary getSubsetSize method as well
** update previous callers of getListedTermCounts(String,String,DocSet) to use 
getSubsetSize instead in order to correctly handle the refinements of null (ie: 
facet.missing)
** refactor & clean up processSingle:
*** have caller do the field splitting & validation (eliminates redundancy when 
refining many values)
*** stop treating empty string as a special case, switch conditionals that were 
looking at the first value to look at list size directly
* misc new javadocs on various methods throughout the above mentioned files



Misc notes for the future:

* even if/when we get the refinement logic fixed, we really need some safety 
check to ensure we've completely eliminated the possibility of an infinite 
loop on refinement:
** coordinator should assert that if it asks a shard for a refinement, that 
refinement is returned
** shard should assert that if it's asked to refine, the #vals makes sense for 
the #fields in the pivot
* we need to include more testing of facet.missing:
** randomized testing in TestCloudPivotFacet
** more usage of it in the Small & Large tests.
* in general, we need more testing that we know triggers refinement
** ie: the Small test already does a bunch with facet.missing, but I guess 
that never caught any of these bugs, because refinement was never needed?
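
To make the encoding problem concrete, here is a small standalone sketch (not 
the actual PivotFacetHelper code, and not claiming to match its format) of an 
encoding that keeps null (facet.missing), the empty string, and the literal 
string "null" distinguishable:

import java.util.ArrayList;
import java.util.List;

// Standalone illustration only: "^" marks a missing (null) value, "~" prefixes
// every real value, and "," separates path segments (escaped inside values).
public final class RefinementPathCodec {

  static String encode(List<String> path) {
    StringBuilder sb = new StringBuilder();
    for (String v : path) {
      if (sb.length() > 0) sb.append(',');
      if (v == null) {
        sb.append('^');
      } else {
        sb.append('~').append(v.replace("\\", "\\\\").replace(",", "\\,"));
      }
    }
    return sb.toString();
  }

  static List<String> decode(String encoded) {
    // first split on unescaped commas, un-escaping as we go
    List<String> tokens = new ArrayList<String>();
    StringBuilder cur = new StringBuilder();
    boolean escaped = false;
    for (int i = 0; i < encoded.length(); i++) {
      char c = encoded.charAt(i);
      if (escaped)        { cur.append(c); escaped = false; }
      else if (c == '\\') { escaped = true; }
      else if (c == ',')  { tokens.add(cur.toString()); cur.setLength(0); }
      else                { cur.append(c); }
    }
    if (encoded.length() > 0) tokens.add(cur.toString());

    // then strip the marker: "^" becomes null, "~..." becomes the real value
    List<String> path = new ArrayList<String>();
    for (String t : tokens) {
      path.add(t.startsWith("^") ? null : t.substring(1));
    }
    return path;
  }
}

With a scheme like this, refining facet.missing becomes a matter of decoding a 
null segment rather than matching the 4-character term "null".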

[jira] [Commented] (SOLR-5473) Make one state.json per collection

2014-07-02 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5473?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14050936#comment-14050936
 ] 

Mark Miller commented on SOLR-5473:
---

bq. Just because of the fear of the unknown? 

For fear of the hellish APIs that devs and users will be stuck with. I'm sorry 
if I was not clear about that. And for fear of what you are affecting that you 
don't understand. To do this kind of work and not add or mess with many tests 
looks very scary from where I am sitting. We need more effort on stabilizing 
and investigating existing test failures, more work on adding tests for missing 
and changed areas - much more than we need this kind of destabilizing change 
when the APIs are half baked and detrimental to the code base if people start 
building on them.

bq. The point is I raised this question before

And I've talked about it before - probably more than once. There are tradeoffs 
in the approaches. I told you at a minimum that I would certainly be open to 
having the option to do selective. And that if you wanted to change to 
selective by default, it seems that you should have to specifically investigate 
and argue that the change is better by spelling out the nuances of the change, 
the benefits and the losses, etc. I didn't say it could not be done, I said it did 
not seem like we came to a consensus.

It feels like everything we have discussed from the start and I had issues 
with, you have essentially done it all as you planned anyway, and anytime I 
object, you go back and do some cosmetic changes and roll back the same thing. 
You tell me the big stuff you want changed is too hard, impossible; the small 
stuff, done and done (with half not done, like all the external mentions still 
there).

There is no consensus IMO, not until I'm convinced by more voices that I'm 
smoking crack, and a vetoed commit cannot be committed again until a veto is 
withdrawn.





 Make one state.json per collection
 --

 Key: SOLR-5473
 URL: https://issues.apache.org/jira/browse/SOLR-5473
 Project: Solr
  Issue Type: Sub-task
  Components: SolrCloud
Reporter: Noble Paul
Assignee: Noble Paul
 Fix For: 5.0

 Attachments: SOLR-5473-74 .patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-configname-fix.patch, 
 SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, 
 SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, 
 SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, 
 SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, 
 SOLR-5473.patch, SOLR-5473.patch, SOLR-5473_undo.patch, 
 ec2-23-20-119-52_solr.log, ec2-50-16-38-73_solr.log


 As defined in the parent issue, store the states of each collection under 
 /collections/collectionname/state.json node



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-5473) Make one state.json per collection

2014-07-02 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5473?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14050936#comment-14050936
 ] 

Mark Miller edited comment on SOLR-5473 at 7/3/14 12:59 AM:


bq. Just because of the fear of the unknown? 

For fear of the hellish APIs that devs and users will be stuck with. I'm sorry 
if I was not clear about that. And for fear of what you are affecting that you 
don't understand. To do this kind of work and not add or mess with many tests 
looks very scary from where I am sitting. We need more effort on stabilizing 
and investigating existing test failures, more work on adding tests for missing 
and changed areas - much more than we need this kind of destabilizing change 
when the APIs are half baked and detrimental to the code base if people start 
building on them.

bq. The point is I raised this question before

And I've talked about it before - probably more than once. There are tradeoffs 
in the approaches. I told you at a minimum that I would certainly be open to 
having the option to do selective. And that if you wanted to change to 
selective by default, it seems that you should have to specifically investigate 
and argue that the change is better by spelling out the nuances of the change, 
the benefits and the losses, etc. I didn't say it could not be done, I said it did 
not seem like we came to a consensus.

It feels like everything we have discussed from the start and I had issues 
with, you have essentially done it all as you planned anyway, and anytime I 
object, you go back and do some cosmetic changes and roll back the same thing. 
You tell me the big stuff you want changed is too hard, impossible; the small 
stuff, done and done (with half not done, like all the external mentions still 
there).

There is no consensus IMO, not until I'm convinced by more voices that I'm 
smoking crack, and a vetoed commit cannot be committed again until a veto is 
withdrawn.











 Make one state.json per collection
 --

 Key: SOLR-5473
 URL: https://issues.apache.org/jira/browse/SOLR-5473
 Project: Solr
  Issue Type: Sub-task
  Components: SolrCloud
Reporter: Noble Paul
Assignee: Noble Paul
 Fix For: 5.0

 Attachments: SOLR-5473-74 .patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-configname-fix.patch, 
 SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, 
 SOLR-5473.patch, SOLR-5473.patch, 

[jira] [Commented] (LUCENE-5755) Explore alternative build systems

2014-07-02 Thread Matt Davis (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5755?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14050971#comment-14050971
 ] 

Matt Davis commented on LUCENE-5755:


Sample of the forbidden-apis Ant task called from Gradle; not sure it is good 
form, but it does work for a trivial example: 
https://gist.github.com/mdavis95/94217e972c879d430028


 Explore alternative build systems
 -

 Key: LUCENE-5755
 URL: https://issues.apache.org/jira/browse/LUCENE-5755
 Project: Lucene - Core
  Issue Type: Task
Reporter: Dawid Weiss
Assignee: Dawid Weiss
Priority: Minor

 I am dissatisfied with how ANT and submodules currently work in Lucene/ Solr. 
 It's not even the tool's fault; it seems Lucene builds just hit the borders 
 of what it can do, especially in terms of submodule dependencies etc.
 I don't think Maven will help much either, given certain things I'd like to have 
 in the build (for example collect all tests globally for a single execution 
 phase at the end of the build, to support better load-balancing).
 I'd like to explore Gradle as an alternative. This task is a notepad for 
 thoughts and experiments.
 An example of a complex (?) gradle build is javafx, for example.
 http://hg.openjdk.java.net/openjfx/8/master/rt/file/f89b7dc932af/build.gradle



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-5755) Explore alternative build systems

2014-07-02 Thread Hoss Man (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5755?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14050985#comment-14050985
 ] 

Hoss Man commented on LUCENE-5755:
--

FWIW:

* i was recently made aware of the existence of the pants BUILD system: 
https://pantsbuild.github.io/
* i have not looked into it in depth, but some of the design goals it is stated 
to optimize for seem in line with the stated objective of this issue: 
https://pantsbuild.github.io/first_concepts.html
** building multiple, dependent things from source
** building code in a variety of languages
** speed of build execution
* for Java based compilation:
** it uses ivy: https://pantsbuild.github.io/3rdparty_jvm.html
** it exposes a simple mechanism to depend on set versions and prevent 
transitive dependencies
** docs suggest that its default way of tracking 3rd party deps is designed 
around preventing multiple sub-modules from depending on conflicting versions 
of the same 3rd party dep: https://pantsbuild.github.io/3rdparty.html
* the build files are implemented via python code, so scripting certain rules 
(ie: call intransitive() on all jars()) seems like it would probably be easy.

Note: that is the sum total of my knowledge on Pants ... if folks think it 
looks promising, then the next step would be to read more about it -- 
asking me follow up questions is probably a waste of time.

 Explore alternative build systems
 -

 Key: LUCENE-5755
 URL: https://issues.apache.org/jira/browse/LUCENE-5755
 Project: Lucene - Core
  Issue Type: Task
Reporter: Dawid Weiss
Assignee: Dawid Weiss
Priority: Minor

 I am dissatisfied with how ANT and submodules currently work in Lucene/ Solr. 
 It's not even the tool's fault; it seems Lucene builds just hit the borders 
 of what it can do, especially in terms of submodule dependencies etc.
 I don't think Maven will help much either, given certain things I'd like to have 
 in the build (for example collect all tests globally for a single execution 
 phase at the end of the build, to support better load-balancing).
 I'd like to explore Gradle as an alternative. This task is a notepad for 
 thoughts and experiments.
 An example of a complex (?) gradle build is javafx, for example.
 http://hg.openjdk.java.net/openjfx/8/master/rt/file/f89b7dc932af/build.gradle



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: Filtering

2014-07-02 Thread Chris Hostetter


https://people.apache.org/~hossman/#java-user

...or...

https://people.apache.org/~hossman/#solr-user


(it's not clear if you are specifically asking about lucene's Filter 
class to use in java code, or if you are asking a more general question 
about indexing and whether solr would be a good fit ... either way dev@lucene 
is not the appropriate list)
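
(FWIW, a minimal Lucene 4.x-style sketch of what is being asked about -- 
restricting a phrase search to a known list of file ids -- could look roughly 
like the following; the field names and ids are invented for illustration.)

import java.io.IOException;
import java.util.Arrays;
import org.apache.lucene.index.Term;
import org.apache.lucene.queries.TermsFilter;
import org.apache.lucene.search.FilteredQuery;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.PhraseQuery;
import org.apache.lucene.search.TopDocs;

public class PhraseInFileListExample {
  static TopDocs search(IndexSearcher searcher) throws IOException {
    // the phrase to look for in the indexed file content
    PhraseQuery phrase = new PhraseQuery();
    phrase.add(new Term("content", "hello"));
    phrase.add(new Term("content", "world"));

    // limit matches to the documents whose id (file name) came back from the db
    TermsFilter onlyTheseFiles = new TermsFilter(Arrays.asList(
        new Term("id", "fileA.txt"),
        new Term("id", "fileB.txt")));

    return searcher.search(new FilteredQuery(phrase, onlyTheseFiles), 10);
  }
}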




: Date: Mon, 30 Jun 2014 07:30:13 -0700 (PDT)
: From: Venkata krishna venkat1...@gmail.com
: Reply-To: dev@lucene.apache.org
: To: dev@lucene.apache.org
: Subject: Filtering
: 
: Hi,
: 
: While indexing i am using file name as id.
: 
: In my project we get a list of files from the db; after that we need to search for
: a particular phrase in those files that we got from the db.
: 
:  How to write a filter to search for a phrase in the list of files mentioned?
: 
: 
: So could you please provide any solution.
: 
: 
: 
: Thanks,
: 
: Venkata Krishna Tolusuri.
: 
: 
: 
: --
: View this message in context: 
http://lucene.472066.n3.nabble.com/Filtering-tp4144773.html
: Sent from the Lucene - Java Developer mailing list archive at Nabble.com.
: 
: -
: To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
: For additional commands, e-mail: dev-h...@lucene.apache.org
: 
: 

-Hoss
http://www.lucidworks.com/

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-6222) CollapsingQParserPlugin throws Exception when useFilterForSortedQuery=true

2014-07-02 Thread Umesh Prasad (JIRA)
Umesh Prasad created SOLR-6222:
--

 Summary: CollapsingQParserPlugin throws Exception when 
useFilterForSortedQuery=true
 Key: SOLR-6222
 URL: https://issues.apache.org/jira/browse/SOLR-6222
 Project: Solr
  Issue Type: Bug
Affects Versions: 4.9, 4.8
Reporter: Umesh Prasad
Priority: Minor


CollapsingQParserPlugin throws Exception when useFilterForSortedQuery=true

It throws an exception when used with

<useFilterForSortedQuery>true</useFilterForSortedQuery>

Patch attached (against 4.8.1 but reproducible in other branches also)
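
A rough SolrJ sketch of the kind of request that hits this combination (the 
collapse field name here is invented; the reporter's schema is not shown, and 
the solrconfig.xml side is the useFilterForSortedQuery setting quoted above):

import org.apache.solr.client.solrj.SolrQuery;

public class CollapseWithFilterForSortedQuery {
  public static void main(String[] args) {
    // With <useFilterForSortedQuery>true</useFilterForSortedQuery> in solrconfig.xml,
    // a sorted query combined with a collapse filter reportedly throws the exception.
    SolrQuery q = new SolrQuery("*:*");
    q.addFilterQuery("{!collapse field=group_s}");
    q.setSort("id", SolrQuery.ORDER.asc);
    System.out.println(q);
  }
}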





--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-trunk-Linux (32bit/jdk1.8.0_20-ea-b15) - Build # 10707 - Still Failing!

2014-07-02 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/10707/
Java: 32bit/jdk1.8.0_20-ea-b15 -client -XX:+UseG1GC

All tests passed

Build Log:
[...truncated 51687 lines...]
BUILD FAILED
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/build.xml:467: The following 
error occurred while executing this line:
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/build.xml:406: The following 
error occurred while executing this line:
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/extra-targets.xml:87: The 
following error occurred while executing this line:
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/extra-targets.xml:179: The 
following files are missing svn:eol-style (or binary svn:mime-type):
* ./solr/example/solr/collection1/conf/_rest_managed.json

Total time: 80 minutes 19 seconds
Build step 'Invoke Ant' marked build as failure
[description-setter] Description set: Java: 32bit/jdk1.8.0_20-ea-b15 -client 
-XX:+UseG1GC
Archiving artifacts
Recording test results
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org
