[jira] [Updated] (LUCENE-4186) Overhaul Lucene spatial's "distErrPct"

2012-09-06 Thread David Smiley (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-4186?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Smiley updated LUCENE-4186:
-

Attachment: LUCENE-4186_distErrPct_upgrade.patch

I updated the patch to add a 
PrefixTreeStrategy.createIndexedFields(shape,distErr) specialization, and 
accounted for the Spatial4j 0.3 update.
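For context, a rough sketch of how the new specialization might be used (illustrative only; everything except the createIndexedFields(shape, distErr) signature mentioned above is an assumption, not the committed API):

{code:java}
// Sketch only -- not the committed patch. Illustrates passing an explicit
// distErr per shape instead of relying on the strategy-wide distErrPct.
// The types below are stand-ins; only createIndexedFields(shape, distErr)
// comes from the comment above.
class SpatialIndexingSketch {
  interface Shape {}                        // stand-in for a Spatial4j shape
  interface IndexedField {}                 // stand-in for the produced Lucene fields

  interface PrefixTreeStrategyLike {
    IndexedField[] createIndexedFields(Shape shape);                 // uses default distErrPct
    IndexedField[] createIndexedFields(Shape shape, double distErr); // new specialization
  }

  static IndexedField[] indexPrecisely(PrefixTreeStrategyLike strategy, Shape shape) {
    return strategy.createIndexedFields(shape, 0.001); // explicit error for this shape
  }
}
{code}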

I plan to commit this sometime tomorrow.

> Overhaul Lucene spatial's "distErrPct"
> --
>
> Key: LUCENE-4186
> URL: https://issues.apache.org/jira/browse/LUCENE-4186
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: modules/spatial
>Reporter: David Smiley
>Assignee: David Smiley
>Priority: Critical
> Fix For: 4.0
>
> Attachments: LUCENE-4186_distErrPct_upgrade.patch, 
> LUCENE-4186_distErrPct_upgrade.patch
>
>
> The distance-error-percent of a query shape in Lucene spatial is, in a 
> nutshell, the percent of the shape's area that is an error epsilon when 
> considering search detail at its edges.  The default is 2.5%, for reference.  
> However, as configured, it is read in as a fraction:
> {code:xml}
> <fieldType ... class="solr.SpatialRecursivePrefixTreeFieldType"
>            distErrPct="0.025" maxDetailDist="0.001" />
> {code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (LUCENE-4365) The Maven build can't directly handle complex inter-module dependencies involving the test-framework modules

2012-09-06 Thread Steven Rowe (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-4365?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steven Rowe resolved LUCENE-4365.
-

   Resolution: Fixed
Fix Version/s: 4.0
   5.0
Lucene Fields: New,Patch Available  (was: New)

Committed:

- [r1381779|https://svn.apache.org/viewvc?view=rev&rev=1381779]: trunk
- [r1381854|https://svn.apache.org/viewvc?view=rev&rev=1381854]: branch_4x

> The Maven build can't directly handle complex inter-module dependencies 
> involving the test-framework modules
> 
>
> Key: LUCENE-4365
> URL: https://issues.apache.org/jira/browse/LUCENE-4365
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: general/build
>Reporter: Steven Rowe
>Assignee: Steven Rowe
>Priority: Minor
> Fix For: 5.0, 4.0
>
> Attachments: LUCENE-4365.patch, LUCENE-4365.patch, 
> lucene.solr.cyclic.dependencies.removed.png, 
> lucene.solr.dependency.cycles.png.jpg
>
>
> The Maven dependency model disallows cyclic dependencies, of which there are 
> now several in the Ant build (considering test and compile dependencies 
> together, as Maven does).  All of these cycles involve either the Lucene 
> test-framework or the Solr test-framework.
> The current Maven build works around this problem by incorporating 
> dependencies' sources into dependent modules' test sources, rather than 
> literally declaring the problematic dependencies as such. (See SOLR-3780 for 
> a recent example of putting this workaround in place for the Solrj module.)  
> But with the factoring out of the Lucene Codecs module, upon which Lucene 
> test-framework has a compile-time dependency, the complexity of the 
> workarounds required to make it all hang together is great enough that I want 
> to attempt a (Maven-build-only) module refactoring.  It should require fewer 
> contortions and be more maintainable.
> The Maven build is currently broken, as of the addition of the Codecs module 
> (LUCENE-4340).

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-4186) Overhaul Lucene spatial's "distErrPct"

2012-09-06 Thread David Smiley (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-4186?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Smiley updated LUCENE-4186:
-

Summary: Overhaul Lucene spatial's "distErrPct"  (was: Lucene spatial's 
"distErrPct" is treated as a fraction, not a percent.)

> Overhaul Lucene spatial's "distErrPct"
> --
>
> Key: LUCENE-4186
> URL: https://issues.apache.org/jira/browse/LUCENE-4186
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: modules/spatial
>Reporter: David Smiley
>Assignee: David Smiley
>Priority: Critical
> Fix For: 4.0
>
> Attachments: LUCENE-4186_distErrPct_upgrade.patch
>
>
> The distance-error-percent of a query shape in Lucene spatial is, in a 
> nutshell, the percent of the shape's area that is an error epsilon when 
> considering search detail at its edges.  The default is 2.5%, for reference.  
> However, as configured, it is read in as a fraction:
> {code:xml}
> <fieldType ... class="solr.SpatialRecursivePrefixTreeFieldType"
>            distErrPct="0.025" maxDetailDist="0.001" />
> {code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-4186) Lucene spatial's "distErrPct" is treated as a fraction, not a percent.

2012-09-06 Thread David Smiley (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-4186?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13450317#comment-13450317
 ] 

David Smiley commented on LUCENE-4186:
--

After some interesting discussion in chat, proliferating SpatialArgs here is 
not the way to go.  Instead, PrefixTreeStrategy can be overloaded to take the 
distErr.

The subject of this issue is also now wrong, and I'll change it.  Even though 
distErrPct is technically input as a fraction instead of a percent, it'll stay 
that way.

> Lucene spatial's "distErrPct" is treated as a fraction, not a percent.
> --
>
> Key: LUCENE-4186
> URL: https://issues.apache.org/jira/browse/LUCENE-4186
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: modules/spatial
>Reporter: David Smiley
>Assignee: David Smiley
>Priority: Critical
> Fix For: 4.0
>
> Attachments: LUCENE-4186_distErrPct_upgrade.patch
>
>
> The distance-error-percent of a query shape in Lucene spatial is, in a 
> nutshell, the percent of the shape's area that is an error epsilon when 
> considering search detail at its edges.  The default is 2.5%, for reference.  
> However, as configured, it is read in as a fraction:
> {code:xml}
> <fieldType ... class="solr.SpatialRecursivePrefixTreeFieldType"
>            distErrPct="0.025" maxDetailDist="0.001" />
> {code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



RE: svn commit: r1381812 - in /lucene/dev/trunk/dev-tools/maven: lucene/codecs/src/test/pom.xml.template lucene/core/src/test/pom.xml.template solr/core/src/test/pom.xml.template solr/solrj/src/test/p

2012-09-06 Thread Steven A Rowe
Thanks Robert.

-Original Message-
From: rm...@apache.org [mailto:rm...@apache.org] 
Sent: Thursday, September 06, 2012 8:10 PM
To: comm...@lucene.apache.org
Subject: svn commit: r1381812 - in /lucene/dev/trunk/dev-tools/maven: 
lucene/codecs/src/test/pom.xml.template lucene/core/src/test/pom.xml.template 
solr/core/src/test/pom.xml.template solr/solrj/src/test/pom.xml.template

Author: rmuir
Date: Fri Sep  7 00:10:20 2012
New Revision: 1381812

URL: http://svn.apache.org/viewvc?rev=1381812&view=rev
Log:
eol-style=native

Modified:
lucene/dev/trunk/dev-tools/maven/lucene/codecs/src/test/pom.xml.template   
(contents, props changed)
lucene/dev/trunk/dev-tools/maven/lucene/core/src/test/pom.xml.template   
(contents, props changed)
lucene/dev/trunk/dev-tools/maven/solr/core/src/test/pom.xml.template   
(contents, props changed)
lucene/dev/trunk/dev-tools/maven/solr/solrj/src/test/pom.xml.template   
(contents, props changed)

Modified: 
lucene/dev/trunk/dev-tools/maven/lucene/codecs/src/test/pom.xml.template
URL: 
http://svn.apache.org/viewvc/lucene/dev/trunk/dev-tools/maven/lucene/codecs/src/test/pom.xml.template?rev=1381812&r1=1381811&r2=1381812&view=diff
==
--- lucene/dev/trunk/dev-tools/maven/lucene/codecs/src/test/pom.xml.template 
(original)
+++ lucene/dev/trunk/dev-tools/maven/lucene/codecs/src/test/pom.xml.template 
Fri Sep  7 00:10:20 2012
@@ -1,74 +1,74 @@
[...diff body truncated: the removed (-) and added (+) lines of this template are identical; the commit only changes svn:eol-style to native...]

Modified: lucene/dev/trunk/dev-tools/maven/lucene/core/src/test/pom.xml.template
URL: 
http://svn.apache.org/viewvc/lucene/dev/trunk/dev-tools/maven/lucene/core/src/test/pom.xml.template?rev=1381812&r1=1381811&r2=1381812&view=diff
==
--- lucene/dev/trunk/dev-tools/maven/lucene/core/src/test/pom.xml.template 
(original)
+++ lucene/dev/trunk/dev-tools/maven/lucene/core/src/test/pom.xml.template Fri 
Sep  7 00:10:20 2012
@@ -1,98 +1,98 @@
[...diff body truncated: as above, the template content is unchanged apart from line-ending normalization (svn:eol-style=native)...]

[jira] [Created] (SOLR-3807) Currently during recovery we pause for a number of seconds after waiting for the leader to see a recovering state so that any previous updates will have finished before ou

2012-09-06 Thread Mark Miller (JIRA)
Mark Miller created SOLR-3807:
-

 Summary: Currently during recovery we pause for a number of 
seconds after waiting for the leader to see a recovering state so that any 
previous updates will have finished before our commit on the leader - we don't 
need this wait for peersync.
 Key: SOLR-3807
 URL: https://issues.apache.org/jira/browse/SOLR-3807
 Project: Solr
  Issue Type: Improvement
  Components: SolrCloud
Reporter: Mark Miller
Priority: Minor
 Fix For: 4.0, 5.0




--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-3686) fix solr/core and solr/solrj not to share a lib/ directory

2012-09-06 Thread Robert Muir (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-3686?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Muir resolved SOLR-3686.
---

   Resolution: Fixed
Fix Version/s: 5.0
   4.0

This is cleaned up now. Add something like IndexWriter in a solrj file and you 
get a compile error.

> fix solr/core and solr/solrj not to share a lib/ directory
> --
>
> Key: SOLR-3686
> URL: https://issues.apache.org/jira/browse/SOLR-3686
> Project: Solr
>  Issue Type: Bug
>Reporter: Robert Muir
> Fix For: 4.0, 5.0
>
> Attachments: SOLR-3686.patch, SOLR-3686.patch
>
>
> This makes the build system hairy.
> it also prevents us from using ivy's sync=true (LUCENE-4262) 
> which totally prevents the issue of outdated jars.
> We should fix this so each has its own lib/

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-3686) fix solr/core and solr/solrj not to share a lib/ directory

2012-09-06 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-3686?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13450208#comment-13450208
 ] 

Robert Muir commented on SOLR-3686:
---

I committed the first patch (separately: smokeTester is working again).

Will proceed in a bit with looking at fixing solrj to compile with just its 
own classpath so that we get compile-time checking there.


> fix solr/core and solr/solrj not to share a lib/ directory
> --
>
> Key: SOLR-3686
> URL: https://issues.apache.org/jira/browse/SOLR-3686
> Project: Solr
>  Issue Type: Bug
>Reporter: Robert Muir
> Attachments: SOLR-3686.patch, SOLR-3686.patch
>
>
> This makes the build system hairy.
> it also prevents us from using ivy's sync=true (LUCENE-4262) 
> which totally prevents the issue of outdated jars.
> We should fix this so each has its own lib/

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-trunk-Linux (32bit/jdk1.8.0-ea-b51) - Build # 928 - Failure!

2012-09-06 Thread Policeman Jenkins Server
Build: http://jenkins.sd-datasolutions.de/job/Lucene-Solr-trunk-Linux/928/
Java: 32bit/jdk1.8.0-ea-b51 -server -XX:+UseConcMarkSweepGC

All tests passed

Build Log:
[...truncated 20986 lines...]
-jenkins-javadocs-lint:

[...truncated 24 lines...]
BUILD FAILED
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/build.xml:268: The following 
error occurred while executing this line:
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/extra-targets.xml:110: The 
following files are missing svn:eol-style (or binary svn:mime-type):
* dev-tools/maven/lucene/codecs/src/test/pom.xml.template
* dev-tools/maven/lucene/core/src/test/pom.xml.template
* dev-tools/maven/solr/core/src/test/pom.xml.template
* dev-tools/maven/solr/solrj/src/test/pom.xml.template

Total time: 28 minutes 33 seconds
Build step 'Invoke Ant' marked build as failure
Recording test results
Description set: Java: 32bit/jdk1.8.0-ea-b51 -server -XX:+UseConcMarkSweepGC
Email was triggered for: Failure
Sending email for trigger: Failure



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

[JENKINS] Lucene-Solr-trunk-Windows (64bit/jdk1.6.0_35) - Build # 650 - Failure!

2012-09-06 Thread Policeman Jenkins Server
Build: http://jenkins.sd-datasolutions.de/job/Lucene-Solr-trunk-Windows/650/
Java: 64bit/jdk1.6.0_35 -XX:+UseParallelGC

All tests passed

Build Log:
[...truncated 20280 lines...]
-jenkins-javadocs-lint:

[...truncated 24 lines...]
BUILD FAILED
C:\Jenkins\workspace\Lucene-Solr-trunk-Windows\build.xml:268: The following 
error occurred while executing this line:
C:\Jenkins\workspace\Lucene-Solr-trunk-Windows\extra-targets.xml:110: The 
following files are missing svn:eol-style (or binary svn:mime-type):
* dev-tools/maven/lucene/codecs/src/test/pom.xml.template
* dev-tools/maven/lucene/core/src/test/pom.xml.template
* dev-tools/maven/solr/core/src/test/pom.xml.template
* dev-tools/maven/solr/solrj/src/test/pom.xml.template

Total time: 44 minutes 27 seconds
Build step 'Invoke Ant' marked build as failure
Recording test results
Description set: Java: 64bit/jdk1.6.0_35 -XX:+UseParallelGC
Email was triggered for: Failure
Sending email for trigger: Failure



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

[JENKINS] Lucene-Solr-Tests-trunk-Java6 - Build # 15192 - Failure

2012-09-06 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-trunk-Java6/15192/

All tests passed

Build Log:
[...truncated 20302 lines...]
-jenkins-javadocs-lint:

javadocs-lint:

[...truncated 1605 lines...]
javadocs-lint:
 [exec] 
 [exec] Crawl/parse...
 [exec] 
 [exec]   build/docs/core/org/apache/lucene/store/package-use.html
 [exec] WARNING: anchor 
"../../../../org/apache/lucene/store/subclasses" appears more than once
 [exec] 
 [exec] Verify...

[...truncated 594 lines...]
javadocs-lint:
 [exec] 
 [exec] Crawl/parse...
 [exec] 
 [exec] Verify...

[...truncated 24 lines...]
BUILD FAILED
/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-Tests-trunk-Java6/build.xml:268:
 The following error occurred while executing this line:
/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-Tests-trunk-Java6/extra-targets.xml:110:
 The following files are missing svn:eol-style (or binary svn:mime-type):
* dev-tools/maven/lucene/codecs/src/test/pom.xml.template
* dev-tools/maven/lucene/core/src/test/pom.xml.template
* dev-tools/maven/solr/core/src/test/pom.xml.template
* dev-tools/maven/solr/solrj/src/test/pom.xml.template

Total time: 44 minutes 58 seconds
Build step 'Invoke Ant' marked build as failure
Recording test results
Email was triggered for: Failure
Sending email for trigger: Failure



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

[jira] [Commented] (SOLR-3393) Implement an optimized LFUCache

2012-09-06 Thread Shawn Heisey (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-3393?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13450114#comment-13450114
 ] 

Shawn Heisey commented on SOLR-3393:


Adrien,

I've been looking at your patch, especially the warming code.  I can't see 
anything in there that carries the frequency values over from the old cache to 
the new cache.

With a maxFreq of 10 and a much larger cache size (200, 1000, etc.), there's no 
difference from the cache's perspective between something that has been 
requested 50 times and something that has been requested 100 times.  How did 
tying maxFreq to the cache size make it slower?
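To make the question concrete, here is a minimal sketch of the frequency cap (assumed semantics; not the actual SOLR-3393 patch): with maxFreq=10, an entry hit 50 times and one hit 100 times both end up stored as 10, so eviction cannot tell them apart.

{code:java}
import java.util.HashMap;
import java.util.Map;

// Minimal illustration of an access-frequency counter capped at maxFreq.
class CappedFrequencyCounter<K> {
  private final int maxFreq;
  private final Map<K, Integer> freq = new HashMap<K, Integer>();

  CappedFrequencyCounter(int maxFreq) { this.maxFreq = maxFreq; }

  void recordHit(K key) {
    Integer old = freq.get(key);
    int updated = (old == null) ? 1 : Math.min(old + 1, maxFreq); // never exceeds maxFreq
    freq.put(key, updated);
  }

  int frequencyOf(K key) {
    Integer f = freq.get(key);
    return f == null ? 0 : f;
  }
}
{code}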


> Implement an optimized LFUCache
> ---
>
> Key: SOLR-3393
> URL: https://issues.apache.org/jira/browse/SOLR-3393
> Project: Solr
>  Issue Type: Improvement
>  Components: search
>Affects Versions: 3.6, 4.0-ALPHA
>Reporter: Shawn Heisey
>Priority: Minor
> Fix For: 4.0
>
> Attachments: SOLR-3393.patch, SOLR-3393.patch, SOLR-3393.patch, 
> SOLR-3393.patch, SOLR-3393.patch, SOLR-3393.patch
>
>
> SOLR-2906 gave us an inefficient LFU cache modeled on 
> FastLRUCache/ConcurrentLRUCache.  It could use some serious improvement.  The 
> following project includes an Apache 2.0 licensed O(1) implementation.  The 
> second link is the paper (PDF warning) it was based on:
> https://github.com/chirino/hawtdb
> http://dhruvbird.com/lfu.pdf
> Using this project and paper, I will attempt to make a new O(1) cache called 
> FastLFUCache that is modeled on LRUCache.java.  This will (for now) leave the 
> existing LFUCache/ConcurrentLFUCache implementation in place.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-3393) Implement an optimized LFUCache

2012-09-06 Thread Shawn Heisey (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-3393?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13450096#comment-13450096
 ] 

Shawn Heisey commented on SOLR-3393:


Adrien, thanks for looking at it and making it better.  This is early in my 
experience with Java - I can still count the number of projects I've built 
myself on one hand.  Also, there have been a number of changes to the entire 
cache system since I wrote the first patch, changes that I have not had a 
chance to review.

I definitely like doing the decay only at warm time.  I'm perfectly happy to 
have evictDecay yanked out.  I didn't think of the decay at all, that was Yonik 
on SOLR-2906.  I agreed with his reasons.  I wonder if there might be a way to 
have the decay happen much less often -- say after a configurable number of 
commits rather than for every commit.  Also, I can't remember whether I kept 
the bitshift decay (dividing by two) or changed it to subtract one from the 
frequency.  IMHO subtracting one would be better.
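For reference, the two decay variants being compared, as a small sketch (illustrative only; not the actual ConcurrentLFUCache code):

{code:java}
// Warm-time decay options discussed above (sketch only):
class FrequencyDecay {
  static int halve(int freq)       { return freq >>> 1; }            // bitshift: divide by two
  static int subtractOne(int freq) { return Math.max(0, freq - 1); } // gentler alternative
}
{code}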

I don't understand your first note about the put, and I can't take the time to 
re-read the code right now.  On whether things should be volatile or not -- I 
based all that on SOLR-2906, and I based SOLR-2906 on existing stuff.  I don't 
completely understand what the implications are.  If you do, awesome.

On the default for maxFreq and how it might affect performance -- again, I 
expect you've got more experience and can make a better determination.

Hoss, I would be very surprised to learn that anyone was actually using the 
current implementation in 3.6.0 or the 4.0 alpha/beta.  I still haven't had a 
chance to give it a serious trial in my own setup, and I wrote it!  I think 
about that first attempt as similar to the first sort algorithm you ever get 
introduced to in a programming class, before they introduce recursion and tell 
you about quicksort.  


> Implement an optimized LFUCache
> ---
>
> Key: SOLR-3393
> URL: https://issues.apache.org/jira/browse/SOLR-3393
> Project: Solr
>  Issue Type: Improvement
>  Components: search
>Affects Versions: 3.6, 4.0-ALPHA
>Reporter: Shawn Heisey
>Priority: Minor
> Fix For: 4.0
>
> Attachments: SOLR-3393.patch, SOLR-3393.patch, SOLR-3393.patch, 
> SOLR-3393.patch, SOLR-3393.patch, SOLR-3393.patch
>
>
> SOLR-2906 gave us an inefficient LFU cache modeled on 
> FastLRUCache/ConcurrentLRUCache.  It could use some serious improvement.  The 
> following project includes an Apache 2.0 licensed O(1) implementation.  The 
> second link is the paper (PDF warning) it was based on:
> https://github.com/chirino/hawtdb
> http://dhruvbird.com/lfu.pdf
> Using this project and paper, I will attempt to make a new O(1) cache called 
> FastLFUCache that is modeled on LRUCache.java.  This will (for now) leave the 
> existing LFUCache/ConcurrentLFUCache implementation in place.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-3686) fix solr/core and solr/solrj not to share a lib/ directory

2012-09-06 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-3686?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13450086#comment-13450086
 ] 

Robert Muir commented on SOLR-3686:
---

smoke tests have been broken for a while: 
https://builds.apache.org/job/Lucene-Solr-SmokeRelease-4.x/

I'll fix the smoke tester here too; I think it's checking the wrong javadocs 
path when testing Solr. 

Somehow we need to run this Jenkins job more often.

Separately, I think this patch fixes a few bugs:
* previously solrj-lib included an unnecessary commons-codec, which solrj doesn't 
depend on.
* but it didn't include necessary things like ZooKeeper, which solrj does depend on.

> fix solr/core and solr/solrj not to share a lib/ directory
> --
>
> Key: SOLR-3686
> URL: https://issues.apache.org/jira/browse/SOLR-3686
> Project: Solr
>  Issue Type: Bug
>Reporter: Robert Muir
> Attachments: SOLR-3686.patch, SOLR-3686.patch
>
>
> This makes the build system hairy.
> it also prevents us from using ivy's sync=true (LUCENE-4262) 
> which totally prevents the issue of outdated jars.
> We should fix this so each has its own lib/

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-4365) The Maven build can't directly handle complex inter-module dependencies involving the test-framework modules

2012-09-06 Thread Steven Rowe (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-4365?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steven Rowe updated LUCENE-4365:


Attachment: LUCENE-4365.patch

{{ant generate-maven-artifacts}} was not using the correct POMs for the four 
modules split into main/test modules ({{lucene-core}}, {{lucene-codecs}}, 
{{solr-core}}, and {{solr-solrj}}).  Each of these modules now has three POMs - 
a main POM, a test POM, and an aggregator POM that calls the other two in a 
recursive build.  The previous patch was causing the aggregator POMs from the 
base module directory to be used, rather than the POM for the main module, 
which will be located at {{/src/java/}}.

This patch fixes the problem by using a {{dist-maven}} specialization called 
{{dist-maven-src-java}} in each of the four affected modules' {{build.xml}} 
files.

Committing shortly.

> The Maven build can't directly handle complex inter-module dependencies 
> involving the test-framework modules
> 
>
> Key: LUCENE-4365
> URL: https://issues.apache.org/jira/browse/LUCENE-4365
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: general/build
>Reporter: Steven Rowe
>Assignee: Steven Rowe
>Priority: Minor
> Attachments: LUCENE-4365.patch, LUCENE-4365.patch, 
> lucene.solr.cyclic.dependencies.removed.png, 
> lucene.solr.dependency.cycles.png.jpg
>
>
> The Maven dependency model disallows cyclic dependencies, of which there are 
> now several in the Ant build (considering test and compile dependencies 
> together, as Maven does).  All of these cycles involve either the Lucene 
> test-framework or the Solr test-framework.
> The current Maven build works around this problem by incorporating 
> dependencies' sources into dependent modules' test sources, rather than 
> literally declaring the problematic dependencies as such. (See SOLR-3780 for 
> a recent example of putting this workaround in place for the Solrj module.)  
> But with the factoring out of the Lucene Codecs module, upon which Lucene 
> test-framework has a compile-time dependency, the complexity of the 
> workarounds required to make it all hang together is great enough that I want 
> to attempt a (Maven-build-only) module refactoring.  It should require fewer 
> contortions and be more maintainable.
> The Maven build is currently broken, as of the addition of the Codecs module 
> (LUCENE-4340).

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-3755) shard splitting

2012-09-06 Thread Yonik Seeley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-3755?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13450033#comment-13450033
 ] 

Yonik Seeley commented on SOLR-3755:


I've run into a few impedance mismatch issues implementing the JSON above.
Internally we seem to use ZkNodeProps, which accepts Map<String,String>... but a 
JSON Map is better represented as a Map<String,Object>.

I think I'll try going in the following direction:
- Make ZkNodeProps accept Map<String,Object> as properties, so it can 
represent integers and more complex types.  This will be just like a Map, but 
add some convenience methods
- Make Slice subclass ZkNodeProps
- Make a new Replica class (instead of just representing it as a generic 
ZkNodeProps)

In general, to construct these classes from JSON, it seems like we should just 
pass the Map<String,Object> generated from the JSON parser, and then the 
constructor can pull out key elements and construct sub-elements.

Thoughts?
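A rough sketch of that direction (class shapes and names here are illustrative, not the actual SolrCloud code):

{code:java}
import java.util.Collections;
import java.util.Map;

// Properties backed by Map<String,Object> so integers and nested structures
// parsed from JSON can be held directly.
class ZkNodePropsSketch {
  protected final Map<String, Object> propMap;

  ZkNodePropsSketch(Map<String, Object> propMap) {
    this.propMap = Collections.unmodifiableMap(propMap);
  }

  String getStr(String key) {
    Object v = propMap.get(key);
    return v == null ? null : v.toString();
  }
}

// Slice subclasses the generic properties holder; its constructor can pull out
// key elements (e.g. a replicas sub-map) and build typed sub-objects from them.
class SliceSketch extends ZkNodePropsSketch {
  SliceSketch(Map<String, Object> propMap) {
    super(propMap);
    // e.g. read propMap.get("replicas") and construct Replica objects here
  }
}
{code}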

> shard splitting
> ---
>
> Key: SOLR-3755
> URL: https://issues.apache.org/jira/browse/SOLR-3755
> Project: Solr
>  Issue Type: New Feature
>  Components: SolrCloud
>Reporter: Yonik Seeley
> Attachments: SOLR-3755.patch, SOLR-3755.patch
>
>
> We can currently easily add replicas to handle increases in query volume, but 
> we should also add a way to add additional shards dynamically by splitting 
> existing shards.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-3686) fix solr/core and solr/solrj not to share a lib/ directory

2012-09-06 Thread Robert Muir (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-3686?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Muir updated SOLR-3686:
--

Attachment: SOLR-3686.patch

Fix a typo in the previous patch.

Running 'ant nightly-smoke'; if it passes I'll commit.

> fix solr/core and solr/solrj not to share a lib/ directory
> --
>
> Key: SOLR-3686
> URL: https://issues.apache.org/jira/browse/SOLR-3686
> Project: Solr
>  Issue Type: Bug
>Reporter: Robert Muir
> Attachments: SOLR-3686.patch, SOLR-3686.patch
>
>
> This makes the build system hairy.
> it also prevents us from using ivy's sync=true (LUCENE-4262) 
> which totally prevents the issue of outdated jars.
> We should fix this so each has its own lib/

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-3686) fix solr/core and solr/solrj not to share a lib/ directory

2012-09-06 Thread Robert Muir (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-3686?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Muir updated SOLR-3686:
--

Attachment: SOLR-3686.patch

1st patch for iteration.

This changes no classpaths etc (which we should separately do), just separates 
the lib folders:

* turns on ivy sync here to end clean-jar hell
* copies all jar files in solrj/lib for packaging, instead of maintaining a 
separate list in the build.xml as today, which is only bound to cause bugs

Tests pass, but I'll inspect artifacts and packaging before committing.

Then we can iterate separately on fixing e.g. solrj's compile classpath so it 
won't accidentally depend on Lucene and things like that.
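As an illustration of the ivy sync mentioned above, the retrieve would look roughly like this (a sketch, not the actual build.xml change):

{code:xml}
<!-- Sketch only: once a module owns its own lib/, resolve can use sync="true"
     so jars that are no longer declared get deleted automatically. -->
<ivy:retrieve conf="default" type="jar" sync="true"
              pattern="lib/[artifact]-[revision].[ext]"/>
{code}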


> fix solr/core and solr/solrj not to share a lib/ directory
> --
>
> Key: SOLR-3686
> URL: https://issues.apache.org/jira/browse/SOLR-3686
> Project: Solr
>  Issue Type: Bug
>Reporter: Robert Muir
> Attachments: SOLR-3686.patch
>
>
> This makes the build system hairy.
> it also prevents us from using ivy's sync=true (LUCENE-4262) 
> which totally prevents the issue of outdated jars.
> We should fix this so each has its own lib/

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-3806) solrcloud node addresses

2012-09-06 Thread Yonik Seeley (JIRA)
Yonik Seeley created SOLR-3806:
--

 Summary: solrcloud node addresses
 Key: SOLR-3806
 URL: https://issues.apache.org/jira/browse/SOLR-3806
 Project: Solr
  Issue Type: Bug
Affects Versions: 4.0
Reporter: Yonik Seeley
 Fix For: 4.0


On my mac (OS-X Lion), addresses such as http://rogue:8983/solr do not work 
outside of Java.  This means that when the cloud UI displays clickable nodes, 
they don't actually work.

This worked at some point in the far past, when my address was detected and 
published as "Rogue.local" as opposed to "Rogue".


--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-4123) Add CachingRAMDirectory

2012-09-06 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-4123?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13449931#comment-13449931
 ] 

Michael McCandless commented on LUCENE-4123:


bq. I am not sure if we really need that directory. With my changes in 
LUCENE-3659 we can handle that easily (also for files > 2 GiB). LUCENE-3659 
makes the buf size of RAMDir configureable (depending on IOContext while 
writing) and when you do new RAMDirectory(otherDir) - to cache the whole dir in 
RAM - it will use the maximum possible buffer size for the underlying file (2 
GiB) - as we dont write and need no smaller buf size.

Actually I think the two dirs have different use cases.

So I think we should do both: 1) fix RAMDir to do better buffering
(LUCENE-3659) and 2) add this new dir.

RAMDir is good for pure in-memory indices (for testing, or transient
usage, etc.) or for pulling in a read-only index from disk, while
CachingRAMDir (I think we should rename it to CachingDirWrapper) is
good if you want to write to the index but also want persistence,
since all writes go straight to the wrapped directory.

I don't think the limitations of this dir (max 2.1 GB file size) need
to block committing ... the javadocs call this out, and we can improve
it later.  It could be that wrapping the byte[] in a ByteBuffer and using
ByteBufferII doesn't lose any perf: that would be great. But we can
explore that after committing.

But definitely +1 to get LUCENE-3659 in...
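A rough sketch of the wrapping behavior described above, using hypothetical names (illustrative only, not the LUCENE-4123 patch):

{code:java}
import java.io.IOException;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Writes go straight to the wrapped store; reads pull each file fully into a
// single byte[] and are then served from that array (one array per file).
// Locking, error handling and files > 2 GB are ignored, as the caveat above notes.
class ByteArrayFileCacheSketch {
  interface RawStore {                          // stand-in for the wrapped Directory
    byte[] readFully(String name) throws IOException;
    void write(String name, byte[] data) throws IOException;
  }

  private final RawStore delegate;
  private final Map<String, byte[]> cache = new ConcurrentHashMap<String, byte[]>();

  ByteArrayFileCacheSketch(RawStore delegate) { this.delegate = delegate; }

  void write(String name, byte[] data) throws IOException {
    delegate.write(name, data);                 // persistence comes from the delegate
    cache.remove(name);                         // drop any stale cached copy
  }

  byte[] read(String name) throws IOException {
    byte[] buf = cache.get(name);
    if (buf == null) {
      buf = delegate.readFully(name);
      cache.put(name, buf);
    }
    return buf;
  }
}
{code}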


> Add CachingRAMDirectory
> ---
>
> Key: LUCENE-4123
> URL: https://issues.apache.org/jira/browse/LUCENE-4123
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/store
>Reporter: Michael McCandless
>Assignee: Michael McCandless
> Attachments: LUCENE-4123.patch, LUCENE-4123.patch, LUCENE-4123.patch, 
> LUCENE-4123.patch
>
>
> The directory is very simple and useful if you have an index that you
> know fully fits into available RAM.  You could also use FileSwitchDir if
> you want to leave some files (eg stored fields or term vectors) on disk.
> It wraps any other Directory and delegates all writing (IndexOutput) to
> it, but for reading (IndexInput), it allocates a single byte[] and fully
> reads the file in and then serves requests off that single byte[].  It's
> more GC friendly than RAMDir since it only allocates a single array per
> file.
> It has a few nocommits still, but all tests pass if I wrap the delegate
> inside MockDirectoryWrapper using this.
> I tested with 1M Wikipedia english index (would like to test w/ 10M docs
> but I don't have enough RAM...); it seems to give a nice speedup:
> {noformat}
> Task                  QPS base  StdDev base  QPS cached  StdDev cached     Pct diff
>              Respell    197.00         7.27      203.19           8.17   -4% -  11%
>             PKLookup    121.12         2.80      125.46           3.20   -1% -   8%
>               Fuzzy2     66.62         2.62       69.91           2.85   -3% -  13%
>               Fuzzy1    206.20         6.47      222.21           6.52    1% -  14%
>        TermGroup100K    160.14         6.62      175.71           3.79    3% -  16%
>               Phrase     34.85         0.40       38.75           0.61    8% -  14%
>       TermBGroup100K    363.75        15.74      406.98          13.23    3% -  20%
>             SpanNear     53.08         1.11       59.53           2.94    4% -  20%
>     TermBGroup100K1P    222.53         9.78      252.86           5.96    6% -  21%
>         SloppyPhrase     70.36         2.05       79.95           4.48    4% -  23%
>             Wildcard    238.10         4.29      272.78           4.97   10% -  18%
>            OrHighMed    123.49         4.85      149.32           4.66   12% -  29%
>              Prefix3    288.46         8.10      350.40           5.38   16% -  26%
>           OrHighHigh     76.46         3.27       93.13           2.96   13% -  31%
>               IntNRQ     92.25         2.12      113.47           5.74   14% -  32%
>                 Term    757.12        39.03      958.62          22.68   17% -  36%
>          AndHighHigh    103.03         4.48      133.89           3.76   21% -  39%
>           AndHighMed    376.36        16.58      493.99          10.00   23% -  40%
> {noformat}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-4366) Small speedups for BooleanScorer

2012-09-06 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-4366?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13449920#comment-13449920
 ] 

Michael McCandless commented on LUCENE-4366:


10M Wikipedia results:
{noformat}
Task                  QPS base  StdDev base   QPS bs  StdDev bs    Pct diff
     MedSloppyPhrase      6.17         0.26     5.98       0.23   -10% -   5%
    HighSloppyPhrase      0.88         0.05     0.86       0.05   -12% -   8%
        HighSpanNear      2.29         0.02     2.25       0.03    -3% -   0%
             Respell     71.95         1.53    70.98       2.87    -7% -   4%
            PKLookup    199.32         1.69   196.73       4.17    -4% -   1%
         MedSpanNear      3.91         0.07     3.89       0.07    -3% -   2%
              Fuzzy1     81.83         1.27    81.59       1.04    -3% -   2%
            HighTerm     30.77         2.15    30.80       2.08   -12% -  14%
         LowSpanNear     10.73         0.09    10.76       0.13    -1% -   2%
     LowSloppyPhrase     21.67         0.63    21.71       0.42    -4% -   5%
             MedTerm    138.51        10.57   138.83      10.26   -13% -  16%
              Fuzzy2     39.98         0.65    40.11       0.88    -3% -   4%
             Prefix3     30.64         0.44    30.92       0.30    -1% -   3%
            Wildcard     29.91         1.04    30.27       0.77    -4% -   7%
             LowTerm    467.62        18.17   478.39      12.75    -4% -   9%
           MedPhrase     13.82         0.40    14.17       0.32    -2% -   7%
           LowPhrase     16.12         0.74    16.59       0.59    -5% -  11%
         AndHighHigh     10.15         0.44    10.54       0.16    -2% -  10%
          HighPhrase      1.79         0.07     1.86       0.05    -2% -  11%
          AndHighLow   1046.18        38.67  1106.29      20.15     0% -  11%
          AndHighMed     51.65         3.07    55.96       0.86     0% -  16%
           OrHighMed     32.86         2.33    35.92       0.62     0% -  19%
           OrHighLow     13.21         1.04    14.86       0.11     3% -  23%
              IntNRQ     11.79         0.88    13.31       0.29     2% -  24%
          OrHighHigh      6.68         0.47     7.61       0.11     4% -  24%
{noformat}

> Small speedups for BooleanScorer
> 
>
> Key: LUCENE-4366
> URL: https://issues.apache.org/jira/browse/LUCENE-4366
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Michael McCandless
>Assignee: Michael McCandless
> Attachments: LUCENE-4366.patch
>
>


--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-4366) Small speedups for BooleanScorer

2012-09-06 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-4366?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13449914#comment-13449914
 ] 

Michael McCandless commented on LUCENE-4366:


The patch specializes the [private] collector BS uses to add docs to the bucket 
list, e.g. the first clause need not check whether the bucket is stale because it 
always is.  It sorts the clauses so that prohibited clauses come first, and then 
by smallest firstDocID (a proxy for highest docFreq).  It uses an int[] instead 
of a linked list to track live buckets ...
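Roughly that clause ordering, as a sketch (the field names are assumptions; this is not the actual BooleanScorer code):

{code:java}
import java.util.Arrays;
import java.util.Comparator;

class ClauseOrderingSketch {
  static final class SubScorer {
    final boolean prohibited;
    final int firstDocID;
    SubScorer(boolean prohibited, int firstDocID) {
      this.prohibited = prohibited;
      this.firstDocID = firstDocID;
    }
  }

  // Prohibited clauses first, then smallest firstDocID (proxy for highest docFreq).
  static void order(SubScorer[] subScorers) {
    Arrays.sort(subScorers, new Comparator<SubScorer>() {
      public int compare(SubScorer a, SubScorer b) {
        if (a.prohibited != b.prohibited) {
          return a.prohibited ? -1 : 1;
        }
        return a.firstDocID < b.firstDocID ? -1 : (a.firstDocID == b.firstDocID ? 0 : 1);
      }
    });
  }
}
{code}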

> Small speedups for BooleanScorer
> 
>
> Key: LUCENE-4366
> URL: https://issues.apache.org/jira/browse/LUCENE-4366
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Michael McCandless
>Assignee: Michael McCandless
> Attachments: LUCENE-4366.patch
>
>


--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-4366) Small speedups for BooleanScorer

2012-09-06 Thread Michael McCandless (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-4366?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael McCandless updated LUCENE-4366:
---

Attachment: LUCENE-4366.patch

Initial patch...

> Small speedups for BooleanScorer
> 
>
> Key: LUCENE-4366
> URL: https://issues.apache.org/jira/browse/LUCENE-4366
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Michael McCandless
>Assignee: Michael McCandless
> Attachments: LUCENE-4366.patch
>
>


--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (LUCENE-4366) Small speedups for BooleanScorer

2012-09-06 Thread Michael McCandless (JIRA)
Michael McCandless created LUCENE-4366:
--

 Summary: Small speedups for BooleanScorer
 Key: LUCENE-4366
 URL: https://issues.apache.org/jira/browse/LUCENE-4366
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Michael McCandless
Assignee: Michael McCandless




--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-3393) Implement an optimized LFUCache

2012-09-06 Thread Adrien Grand (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-3393?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adrien Grand updated SOLR-3393:
---

Attachment: SOLR-3393.patch

Chris, I tried to rework your patch in order to remove the decay options, make 
maxFreq configurable (with 10 as a default value) and share the statistics code 
with LRUCache (by adding a common MapCacheBase superclass).

What do you think?

> Implement an optimized LFUCache
> ---
>
> Key: SOLR-3393
> URL: https://issues.apache.org/jira/browse/SOLR-3393
> Project: Solr
>  Issue Type: Improvement
>  Components: search
>Affects Versions: 3.6, 4.0-ALPHA
>Reporter: Shawn Heisey
>Priority: Minor
> Fix For: 4.0
>
> Attachments: SOLR-3393.patch, SOLR-3393.patch, SOLR-3393.patch, 
> SOLR-3393.patch, SOLR-3393.patch, SOLR-3393.patch
>
>
> SOLR-2906 gave us an inefficient LFU cache modeled on 
> FastLRUCache/ConcurrentLRUCache.  It could use some serious improvement.  The 
> following project includes an Apache 2.0 licensed O(1) implementation.  The 
> second link is the paper (PDF warning) it was based on:
> https://github.com/chirino/hawtdb
> http://dhruvbird.com/lfu.pdf
> Using this project and paper, I will attempt to make a new O(1) cache called 
> FastLFUCache that is modeled on LRUCache.java.  This will (for now) leave the 
> existing LFUCache/ConcurrentLFUCache implementation in place.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-3796) I am getting 404 when accessing http://localhost:7101/wcoe-solr/admin

2012-09-06 Thread Sridharan (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-3796?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sridharan updated SOLR-3796:


Component/s: web gui

> I am getting 404 when accessing http://localhost:7101/wcoe-solr/admin
> -
>
> Key: SOLR-3796
> URL: https://issues.apache.org/jira/browse/SOLR-3796
> Project: Solr
>  Issue Type: Bug
>  Components: Build, web gui
>Affects Versions: 3.6.1
> Environment: windows XP/Weblogic
>Reporter: Sridharan
>
> I deployed solr.war successfully in WebLogic 9.
> I got the welcome page when I access http://localhost:7101/wcoe-solr/
> But it gives a 404 error when I access the admin page 
> "http://localhost:7101/wcoe-solr/admin"
> Please help

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-3795) /admin/luke?show=schema is returning raw toString of SchemaField and CopyField objects for "copyDests" and "copySources"

2012-09-06 Thread Hoss Man (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-3795?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hoss Man resolved SOLR-3795.


   Resolution: Fixed
Fix Version/s: 5.0

Committed revision 1381685. 4x
Committed revision 1381691. trunk


> /admin/luke?show=schema is returning raw toString of SchemaField and 
> CopyField objects for "copyDests" and "copySources"
> 
>
> Key: SOLR-3795
> URL: https://issues.apache.org/jira/browse/SOLR-3795
> Project: Solr
>  Issue Type: Bug
>  Components: web gui
>Reporter: Hoss Man
>Assignee: Hoss Man
> Fix For: 4.0, 5.0
>
> Attachments: SOLR-3795.patch
>
>
> While looking into SOLR-3734 i noticed that the LukeRequestHandler is blindly 
> putting arrays of CopyField and SchemaField objects in the response, when 
> returning copy from/to info, which are then getting written out using their 
> toString.
> steffkes seems to have done a great job of parsing the field name out of the 
> SchemaField.toString, but the CopyField.toString info is useless -- and 
> clients shouldn't have to do special string parsing to pull out this info.
> I think we should just fix both of these arrays to be the simple string 
> values that they were most likely intended to be.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: [JENKINS] Lucene-Solr-NightlyTests-4.x - Build # 26 - Failure

2012-09-06 Thread Robert Muir
On Thu, Sep 6, 2012 at 1:32 PM, Uwe Schindler  wrote:
> This job was the only one with wrong SVN checkout option. I went through all 
> jobs and corrected this one.
>

Thanks Uwe: I still think it's important to fix this solr/lib stuff so
that it doesn't happen locally for developers, and so that we avoid
packaging bugs or wrong dependencies for solrj.

I'll try to take a look at splitting these (unless someone beats me to
it, or has strong opinions on how this should be done).

-- 
lucidworks.com

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



RE: [JENKINS] Lucene-Solr-NightlyTests-4.x - Build # 26 - Failure

2012-09-06 Thread Uwe Schindler
This job was the only one with wrong SVN checkout option. I went through all 
jobs and corrected this one.

-
Uwe Schindler
H.-H.-Meier-Allee 63, D-28213 Bremen
http://www.thetaphi.de
eMail: u...@thetaphi.de


> -Original Message-
> From: Robert Muir [mailto:rcm...@gmail.com]
> Sent: Thursday, September 06, 2012 5:15 PM
> To: dev@lucene.apache.org
> Subject: Re: [JENKINS] Lucene-Solr-NightlyTests-4.x - Build # 26 - Failure
> 
> I think this jenkins job has the wrong clean option? its just svn-upping but 
> not
> nuking svn-ignored stuff like the other jenkins jobs.
> 
> the real problem is the solr/lib which is "shared" by solr and solrj:
> https://issues.apache.org/jira/browse/SOLR-3686
> 
> This means that these tests have a bogus classpath (e.g. every so often
> someone accidentally makes solrj depend on lucene or something like that,
> nothing fails).
> 
> This also means we must disable ivy's sync=true in solr/core and
> solr/solrj: which means solr developers have to put up with "clean-jars hell"
> whenever anyone updates a jar.
> no other part of the build has this problem: when jars are resolved anything
> thats not supposed to be there is automatically deleted.
> 
>   
>   
> 
> 
> On Thu, Sep 6, 2012 at 11:02 AM, Apache Jenkins Server
>  wrote:
> > Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-4.x/26/
> >
> > All tests passed
> >
> > Build Log:
> > [...truncated 6701 lines...]
> > [javac] Compiling 476 source files to /usr/home/hudson/hudson-
> slave/workspace/Lucene-Solr-NightlyTests-4.x/solr/build/solr-core/classes/java
> > [javac] /usr/home/hudson/hudson-slave/workspace/Lucene-Solr-
> NightlyTests-
> 4.x/solr/core/src/java/org/apache/solr/search/function/distance/HaversineCon
> stFunction.java:42: cannot find symbol
> > [javac] symbol  : static DEGREES_TO_RADIANS
> > [javac] location: class com.spatial4j.core.distance.DistanceUtils
> > [javac] import static
> com.spatial4j.core.distance.DistanceUtils.DEGREES_TO_RADIANS;
> > [javac] ^
> > [javac] /usr/home/hudson/hudson-slave/workspace/Lucene-Solr-
> NightlyTests-
> 4.x/solr/core/src/java/org/apache/solr/search/ValueSourceParser.java:402:
> cannot find symbol
> > [javac] symbol  : variable DEGREES_TO_RADIANS
> > [javac] location: class com.spatial4j.core.distance.DistanceUtils
> > [javac] return vals.doubleVal(doc) *
> DistanceUtils.DEGREES_TO_RADIANS;
> > [javac]   ^
> > [javac] /usr/home/hudson/hudson-slave/workspace/Lucene-Solr-
> NightlyTests-
> 4.x/solr/core/src/java/org/apache/solr/search/ValueSourceParser.java:408:
> cannot find symbol
> > [javac] symbol  : variable RADIANS_TO_DEGREES
> > [javac] location: class com.spatial4j.core.distance.DistanceUtils
> > [javac] return vals.doubleVal(doc) *
> DistanceUtils.RADIANS_TO_DEGREES;
> > [javac]   ^
> > [javac] /usr/home/hudson/hudson-slave/workspace/Lucene-Solr-
> NightlyTests-
> 4.x/solr/core/src/java/org/apache/solr/schema/GeoHashField.java:51: cannot
> find symbol
> > [javac] symbol  : variable GEO
> > [javac] location: class com.spatial4j.core.context.SpatialContext
> > [javac]   private final SpatialContext ctx = SpatialContext.GEO;
> > [javac]^
> > [javac] /usr/home/hudson/hudson-slave/workspace/Lucene-Solr-
> NightlyTests-
> 4.x/solr/core/src/java/org/apache/solr/schema/LatLonType.java:148: cannot
> find symbol
> > [javac] symbol  : variable GEO
> > [javac] location: class com.spatial4j.core.context.SpatialContext
> > [javac] Rectangle bbox =
> DistanceUtils.calcBoxByDistFromPtDEG(latCenter, lonCenter, distDeg,
> SpatialContext.GEO, null);
> > [javac] 
> >^
> > [javac] /usr/home/hudson/hudson-slave/workspace/Lucene-Solr-
> NightlyTests-
> 4.x/solr/core/src/java/org/apache/solr/schema/LatLonType.java:400: cannot
> find symbol
> > [javac] symbol  : variable DEGREES_TO_RADIANS
> > [javac] location: class com.spatial4j.core.distance.DistanceUtils
> > [javac]   this.latCenterRad = SpatialDistanceQuery.this.latCenter *
> DistanceUtils.DEGREES_TO_RADIANS;
> > [javac] 
> >  ^
> > [javac] /usr/home/hudson/hudson-slave/workspace/Lucene-Solr-
> NightlyTests-
> 4.x/solr/core/src/java/org/apache/solr/schema/LatLonType.java:401: cannot
> find symbol
> > [javac] symbol  : variable DEGREES_TO_RADIANS
> > [javac] location: class com.spatial4j.core.distance.DistanceUtils
> > [javac]   this.lonCenterRad = SpatialDistanceQuery.this.lonCenter *
> DistanceUtils.DEGREES_TO_RADIANS;
> > [javac] 
> >  ^
> > [java

Re: solrconfig issue: wiki doc says update log feature is not enabled by default

2012-09-06 Thread Chris Hostetter

: The wiki doc for updateLog says that this feature is “not enabled by default”
: when in fact it is enabled by default in the example solrconfig.xml for Solr
: 4.0-BETA.
: 
: The question is whether the example is wrong or the doc is wrong. All I know
: is that they do not match.

The wiki is correct, the example is also correct.

There is a difference between Solr's "default" behavior and the example 
configs.  Some things are listed in the example configs because, as of the 
time they are released, they are recommended or suggested.  But they may 
not be the "default" behavior (for people who upgrade from previous 
configs, or who use "minimal" configs), either because of backcompat 
concerns or because some explicit choices are needed when configuring.

i.e.: there is no default autoCommit hardcoded in Solr, but the example 
includes a suggested autoCommit configuration.

If your solrconfig.xml has no <updateLog/> tag, then you will not have 
updateLog enabled.  That is what "not enabled by default" means.
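For reference, this is roughly the stanza in the 4.0 example solrconfig.xml that enables it (shown as an illustration; check the shipped example for the exact form):

{code:xml}
<updateHandler class="solr.DirectUpdateHandler2">
  <!-- Remove or comment out updateLog and the transaction log stays disabled. -->
  <updateLog>
    <str name="dir">${solr.ulog.dir:}</str>
  </updateLog>
</updateHandler>
{code}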


-Hoss

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

[jira] [Commented] (LUCENE-2039) Regex support and beyond in JavaCC QueryParser

2012-09-06 Thread Hoss Man (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-2039?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13449822#comment-13449822
 ] 

Hoss Man commented on LUCENE-2039:
--

Yonik: I strongly suggest you open a new issue to address this, since LUCENE-2039 
is already listed as a feature added in 4.0-ALPHA.

If you "fix" this mid-string slash issue, you're going to want a unique JIRA id 
to refer to when citing the bug fix in a CHANGES.txt entry for 4.0-final, or no 
one will have any clear idea what works where.

> Regex support and beyond in JavaCC QueryParser
> --
>
> Key: LUCENE-2039
> URL: https://issues.apache.org/jira/browse/LUCENE-2039
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: core/queryparser
>Reporter: Simon Willnauer
>Assignee: Simon Willnauer
>Priority: Critical
> Fix For: 4.0-ALPHA, 4.0
>
> Attachments: LUCENE-2039_field_ext.patch, 
> LUCENE-2039_field_ext.patch, LUCENE-2039_field_ext.patch, 
> LUCENE-2039_field_ext.patch, LUCENE-2039_field_ext.patch, LUCENE-2039.patch, 
> LUCENE-2039_wrap_master_parser.patch
>
>
> Since the early days the standard query parser was limited to the queries 
> living in core, adding other queries or extending the parser in any way 
> always forced people to change the grammar file and regenerate. Even if you 
> change the grammar you have to be extremely careful how you modify the parser 
> so that other parts of the standard parser are not affected by customisation 
> changes. Eventually you had to live with all the limitations the current 
> parser has, like tokenizing on whitespace before a tokenizer / analyzer has 
> the chance to look at the tokens. 
> I was thinking about how to overcome the limitation and add regex support to 
> the query parser without introducing any dependency to core. I added a new 
> special character that basically prevents the parser from interpreting any of 
> the characters enclosed in the new special characters. I chose the forward 
> slash  '/' as the delimiter so that everything in between two forward slashes 
> is basically escaped and ignored by the parser. All chars embedded within 
> forward slashes are treated as one token even if it contains other special 
> chars like * []?{} or whitespaces. This token is subsequently passed to a 
> pluggable "parser extension" with builds a query from the embedded string. I 
> do not interpret the embedded string in any way but leave all the subsequent 
> work to the parser extension. Such an extension could be another full 
> featured query parser itself or simply a ctor call for regex query. The 
> interface remains quite simple but makes the parser extendible in an easy way 
> compared to modifying the javaCC sources.
> The downside of this patch is clearly that I introduce a new special char 
> into the syntax but I guess that would not be that much of a deal as it is 
> reflected in the escape method though. It would truly be nice to have more 
> than one extension and have this even more flexible, so treat this patch as a 
> kickoff though.
> Another way of solving the problem with RegexQuery would be to move the JDK 
> version of regex into the core and simply have another method like:
> {code}
> protected Query newRegexQuery(Term t) {
>   ... 
> }
> {code}
> which I would like better as it would be more consistent with the idea of the 
> query parser to be a very strict and defined parser.
> I will upload a patch in a second which implements the extension based 
> approach I guess I will add a second patch with regex in core soon too.
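
For illustration only, here is a minimal sketch of what the "regex in core" hook mentioned in the quoted description could look like, assuming a Lucene 4.x classic QueryParser subclass and the automaton-based RegexpQuery; the newRegexQuery name is taken from the proposal above, and the hook actually committed may be named or wired differently.

{code}
// Hypothetical sketch of the "regex in core" variant described above.
// Assumes the Lucene 4.x classic QueryParser and RegexpQuery; the real
// hook name/signature may differ from the proposal's newRegexQuery(Term).
import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.index.Term;
import org.apache.lucene.queryparser.classic.QueryParser;
import org.apache.lucene.search.Query;
import org.apache.lucene.search.RegexpQuery;
import org.apache.lucene.util.Version;

public class RegexAwareQueryParser extends QueryParser {

  public RegexAwareQueryParser(Version matchVersion, String field, Analyzer analyzer) {
    super(matchVersion, field, analyzer);
  }

  // Called with the text found between the '/' delimiters.
  protected Query newRegexQuery(Term regexTerm) {
    return new RegexpQuery(regexTerm);
  }
}
{code}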

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-3789) Cannot use "internal" compression in replication handler

2012-09-06 Thread Joseph Lamoree (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-3789?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13449816#comment-13449816
 ] 

Joseph Lamoree commented on SOLR-3789:
--

I downloaded the build with this fix included:
https://builds.apache.org/job/Solr-Artifacts-4.x/84/artifact/solr/package/

I can confirm that internal compression works correctly between slave and 
master now. Thanks for the very fast turnaround!

> Cannot use "internal" compression in replication handler
> 
>
> Key: SOLR-3789
> URL: https://issues.apache.org/jira/browse/SOLR-3789
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 4.0-BETA
>Reporter: Sami Siren
>Assignee: Sami Siren
> Fix For: 4.0
>
> Attachments: SOLR-3789.patch
>
>
> The implementation for internal compression is currently broken. A wrong 
> parameter value is used to activate the feature in SnapPuller.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: [JENKINS] Lucene-Solr-NightlyTests-4.x - Build # 26 - Failure

2012-09-06 Thread Robert Muir
On Thu, Sep 6, 2012 at 12:28 PM, Chris Hostetter
 wrote:
>
> : the real problem is the solr/lib which is "shared" by solr and solrj:
> : https://issues.apache.org/jira/browse/SOLR-3686
>
> I suspect the only reason it was ever setup that way, was to
> prevent having duplicate copies of jars in svn and when we ship releases.
> svn & src releasese are no longer an issue now that we use ivy, which just
> leaves the binary releases...

Actually I think even this might be a bit suspect, because there is
explicit duplication (a solrj-lib/) in the binary packaging.

the way this is created today leads to bugs (e.g.
https://issues.apache.org/jira/browse/SOLR-3541)

If we separate the libs, then it's a straightforward copy (and
compile-time checked that we have the right stuff in there).

-- 
lucidworks.com

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: [JENKINS] Lucene-Solr-NightlyTests-4.x - Build # 26 - Failure

2012-09-06 Thread Robert Muir
On Thu, Sep 6, 2012 at 12:28 PM, Chris Hostetter
 wrote:
>
> : the real problem is the solr/lib which is "shared" by solr and solrj:
> : https://issues.apache.org/jira/browse/SOLR-3686
>
> I suspect the only reason it was ever setup that way, was to
> prevent having duplicate copies of jars in svn and when we ship releases.
> svn & src releases are no longer an issue now that we use ivy, which just
> leaves the binary releases...
>
> but honestly i'm not sure why that was ever even an issue -- core has a
> dependency on solrj, so why can't we just tell non-war users who want to
> use core to just copy the dependencies directly from solrj/lib?

I'm +1 for either that, or moving the hack to the binary-packaging, or
whatever we can do.

it would be great to compile solrj's src with only its dependencies
(it's fine if it runs tests with solr-core and its dependencies), and it would
also be great to remove the "ivy-sync-disable" so that developers don't
have clean-jars issues anymore.

-- 
lucidworks.com

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-3779) LineEntityProcessor processes only one document

2012-09-06 Thread Simon Boyle (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-3779?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13449779#comment-13449779
 ] 

Simon Boyle edited comment on SOLR-3779 at 9/7/12 3:27 AM:
---

We've noticed similar issues in 3.6.1 after upgrading from 3.5:
only the first file is processed in a multi-file 
FileListEntityProcessor/LineEntityProcessor combination, 
and only the first value of a multi-valued entry is listed in a nested 
SqlEntityProcessor.


  was (Author: beinmysolo):
We've noticed similar issues in 3.6.1 after upgrading from 3.6.1
Only the first file processed in a multi-file 
FileListEntityProcessor/LineEntityProcessor combination, 
and with only the first value of a multi-valued entry listed in a nested 
SqlEntityProcessor.

  
> LineEntityProcessor processes only one document
> ---
>
> Key: SOLR-3779
> URL: https://issues.apache.org/jira/browse/SOLR-3779
> Project: Solr
>  Issue Type: Bug
>  Components: contrib - DataImportHandler
>Affects Versions: 4.0-BETA
>Reporter: Ahmet Arslan
>Assignee: James Dyer
> Fix For: 4.0
>
> Attachments: SOLR-3779.patch
>
>
> LineEntityProcessor processes only one document when combined with 
> FileListEntityProcessor.
> {code:xml}
> 
> 
> 
> baseDir="/Volumes/data/Documents" recursive="false" rootEntity="false" 
> dataSource="null" transformer="TemplateTransformer" >
>   processor="LineEntityProcessor" url="${f.fileAbsolutePath}" dataSource="fds"  
> rootEntity="true" transformer="TemplateTransformer">
>  template="hello${f.fileAbsolutePath},${jc.rawLine}" />
> 
>
> 
> 
> 
> {code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-3779) LineEntityProcessor processes only one document

2012-09-06 Thread Simon Boyle (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-3779?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13449779#comment-13449779
 ] 

Simon Boyle commented on SOLR-3779:
---

We've noticed similar issues in 3.6.1 after upgrading from 3.6.1
Only the first file processed in a multi-file 
FileListEntityProcessor/LineEntityProcessor combination, 
and with only the first value of a multi-valued entry listed in a nested 
SqlEntityProcessor.


> LineEntityProcessor processes only one document
> ---
>
> Key: SOLR-3779
> URL: https://issues.apache.org/jira/browse/SOLR-3779
> Project: Solr
>  Issue Type: Bug
>  Components: contrib - DataImportHandler
>Affects Versions: 4.0-BETA
>Reporter: Ahmet Arslan
>Assignee: James Dyer
> Fix For: 4.0
>
> Attachments: SOLR-3779.patch
>
>
> LineEntityProcessor processes only one document when combined with 
> FileListEntityProcessor.
> {code:xml}
> 
> 
> 
> baseDir="/Volumes/data/Documents" recursive="false" rootEntity="false" 
> dataSource="null" transformer="TemplateTransformer" >
>   processor="LineEntityProcessor" url="${f.fileAbsolutePath}" dataSource="fds"  
> rootEntity="true" transformer="TemplateTransformer">
>  template="hello${f.fileAbsolutePath},${jc.rawLine}" />
> 
>
> 
> 
> 
> {code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: [JENKINS] Lucene-Solr-NightlyTests-4.x - Build # 26 - Failure

2012-09-06 Thread Chris Hostetter

: the real problem is the solr/lib which is "shared" by solr and solrj:
: https://issues.apache.org/jira/browse/SOLR-3686

I suspect the only reason it was ever setup that way, was to 
prevent having duplicate copies of jars in svn and when we ship releases.  
svn & src releases are no longer an issue now that we use ivy, which just 
leaves the binary releases... 

but honestly i'm not sure why that was ever even an issue -- core has a 
dependency on solrj, so why can't we just tell non-war users who want to 
use core to just copy the dependencies directly from solrj/lib?

: This also means we must disable ivy's sync=true in solr/core and
: solr/solrj: which means solr developers have to put up with
: "clean-jars hell" whenever anyone updates a jar.

Ah ... that explains so much -- McCandless and i were confused by this 
yesterday, but didn't realize that there was a special hack for this.



-Hoss

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: [jira] [Commented] (LUCENE-4362) ban tab-indented source

2012-09-06 Thread Robert Muir
On Thu, Sep 6, 2012 at 12:18 PM, Erick Erickson  wrote:
> Java files matching anything ending in .jj or .jflex, right?
>
> Yep, I changed one of the java files once, I think McCandless rescued me
>
> I'll make it so, but just to verify that I'm not going off the deep
> end again, the process will be to modify the .jj or .jflex files to
> remove tabs, regenerate the java and check _both_ back in?
>

I don't think so. I think just ignore the problems in the jflex/javacc
generated code for now, so this means we cannot yet enable the ant
check.

I think the generators themselves are making these tabs, we need to
deal with that separately.

-- 
lucidworks.com

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: [jira] [Commented] (LUCENE-4362) ban tab-indented source

2012-09-06 Thread Erick Erickson
Java files matching anything ending in .jj or .jflex, right?

Yep, I changed one of the java files once, I think McCandless rescued me

I'll make it so, but just to verify that I'm not going off the deep
end again, the process will be to modify the .jj or .jflex files to
remove tabs, regenerate the java and check _both_ back in?

Erick

On Thu, Sep 6, 2012 at 9:05 AM, Erick Erickson (JIRA)  wrote:
>
> [ 
> https://issues.apache.org/jira/browse/LUCENE-4362?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13449766#comment-13449766
>  ]
>
> Erick Erickson commented on LUCENE-4362:
> 
>
> OK, my test failures apparently aren't related to this patch, I'm getting the same 
> thing after rolling it all back. I'll post on the dev list.
>
>> ban tab-indented source
>> ---
>>
>> Key: LUCENE-4362
>> URL: https://issues.apache.org/jira/browse/LUCENE-4362
>> Project: Lucene - Core
>>  Issue Type: Task
>>Reporter: Robert Muir
>>Assignee: Erick Erickson
>> Attachments: LUCENE-4362_core.patch, LUCENE-4362.patch, 
>> LUCENE-4362.patch
>>
>>
>> This makes code really difficult to read and work with.
>> Its easy enough to prevent.
>> {noformat}
>> Index: build.xml
>> ===
>> --- build.xml (revision 1380979)
>> +++ build.xml (working copy)
>> @@ -77,11 +77,12 @@
>>  
>>
>>
>> +  
>>  
>>
>>
>>  
>> -The following files contain @author 
>> tags or nocommits:${line.separator}${validate.patternsFound}
>> +The following files contain @author 
>> tags, tabs or nocommits:${line.separator}${validate.patternsFound}
>>
>> {noformat}
>
> --
> This message is automatically generated by JIRA.
> If you think it was sent incorrectly, please contact your JIRA administrators
> For more information on JIRA, see: http://www.atlassian.com/software/jira
>
> -
> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> For additional commands, e-mail: dev-h...@lucene.apache.org
>

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-3679) Core Admin UI gives no feedback if "Add Core" fails

2012-09-06 Thread Stefan Matheis (steffkes) (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-3679?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stefan Matheis (steffkes) resolved SOLR-3679.
-

Resolution: Fixed

Committed revision 1381655. trunk
Committed revision 1381656. 4x

> Core Admin UI gives no feedback if "Add Core" fails
> ---
>
> Key: SOLR-3679
> URL: https://issues.apache.org/jira/browse/SOLR-3679
> Project: Solr
>  Issue Type: Bug
>  Components: web gui
>Affects Versions: 4.0-ALPHA
>Reporter: Hoss Man
>Assignee: Stefan Matheis (steffkes)
> Attachments: SOLR-3679.patch, SOLR-3679.patch
>
>
> * start the example
> * load the admin ui, click on core admin
> * click on "Add Core"
> * fill the form out with gibberish and submit.
> The form stays on the screen w/o any feedback that an error occurred

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-4362) ban tab-indented source

2012-09-06 Thread Erick Erickson (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-4362?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13449766#comment-13449766
 ] 

Erick Erickson commented on LUCENE-4362:


OK, my test failures apparently aren't related to this patch, I'm getting the same 
thing after rolling it all back. I'll post on the dev list.

> ban tab-indented source
> ---
>
> Key: LUCENE-4362
> URL: https://issues.apache.org/jira/browse/LUCENE-4362
> Project: Lucene - Core
>  Issue Type: Task
>Reporter: Robert Muir
>Assignee: Erick Erickson
> Attachments: LUCENE-4362_core.patch, LUCENE-4362.patch, 
> LUCENE-4362.patch
>
>
> This makes code really difficult to read and work with.
> Its easy enough to prevent.
> {noformat}
> Index: build.xml
> ===
> --- build.xml (revision 1380979)
> +++ build.xml (working copy)
> @@ -77,11 +77,12 @@
>  
>
>
> +  
>  
>
>
>  
> -The following files contain @author 
> tags or nocommits:${line.separator}${validate.patternsFound}
> +The following files contain @author 
> tags, tabs or nocommits:${line.separator}${validate.patternsFound}
>
> {noformat}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-3679) Core Admin UI gives no feedback if "Add Core" fails

2012-09-06 Thread Hoss Man (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-3679?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13449759#comment-13449759
 ] 

Hoss Man commented on SOLR-3679:


Bah .. no, must have been from manual testing with the example - ignore any 
solr.xml changes.

> Core Admin UI gives no feedback if "Add Core" fails
> ---
>
> Key: SOLR-3679
> URL: https://issues.apache.org/jira/browse/SOLR-3679
> Project: Solr
>  Issue Type: Bug
>  Components: web gui
>Affects Versions: 4.0-ALPHA
>Reporter: Hoss Man
>Assignee: Stefan Matheis (steffkes)
> Attachments: SOLR-3679.patch, SOLR-3679.patch
>
>
> * start the example
> * load the admin ui, click on core admin
> * click on "Add Core"
> * fill the form out with gibberish and submit.
> The form stays on the screen w/o any feedback that an error occurred

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-4365) The Maven build can't directly handle complex inter-module dependencies involving the test-framework modules

2012-09-06 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-4365?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13449739#comment-13449739
 ] 

Uwe Schindler commented on LUCENE-4365:
---

bq. Yeah some of the crazy analyzers tests (TestRandomChains etc) have this 
same logic. would be nice if it was done better.

Unfortunately the Java ClassLoaders do not support listing all classes in a 
package. To solve this, the tests use a trick: they ask the classloader for the 
resource URI of the base package path, and then standard recursive 
directory inspection is used. This requires the classloader to return a file:// 
URL; if that is not the case, we throw an exception - is that the one you get?

But those tests are not the only ones doing this: a lot of tests that access zip files 
directly on the classpath (when random access is needed, because ClassLoaders only 
allow streams) do the same - they get the resource URI and then open the ZIP 
file. I think this is not a problem, as the tests are accessing only their own 
resources, not those of a foreign module - so JAR files are not involved.

Maybe Java 8 has a solution to list all classes in a package...
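
Purely as an illustration of the trick described above (not the actual test-framework code), a minimal, non-recursive sketch could look like this; the class and method names are made up:

{code}
// Hedged sketch of the classloader trick described above: resolve the package
// to a resource URL and, if it is a file:// URL, inspect the directory.
// Non-recursive for brevity; the real tests also walk subdirectories.
import java.io.File;
import java.net.URL;
import java.util.ArrayList;
import java.util.List;

public class PackageClassLister {
  public static List<String> listClassNames(ClassLoader loader, String pkg) throws Exception {
    URL url = loader.getResource(pkg.replace('.', '/'));
    if (url == null || !"file".equals(url.getProtocol())) {
      // The case mentioned above: a non-file URL (e.g. classes inside a JAR)
      // cannot be listed this way, so an exception is thrown.
      throw new IllegalStateException("cannot list classes of package " + pkg + ": " + url);
    }
    List<String> names = new ArrayList<String>();
    File[] files = new File(url.toURI()).listFiles();
    if (files != null) {
      for (File f : files) {
        if (f.getName().endsWith(".class")) {
          names.add(pkg + '.' + f.getName().substring(0, f.getName().length() - ".class".length()));
        }
      }
    }
    return names;
  }
}
{code}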

> The Maven build can't directly handle complex inter-module dependencies 
> involving the test-framework modules
> 
>
> Key: LUCENE-4365
> URL: https://issues.apache.org/jira/browse/LUCENE-4365
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: general/build
>Reporter: Steven Rowe
>Assignee: Steven Rowe
>Priority: Minor
> Attachments: LUCENE-4365.patch, 
> lucene.solr.cyclic.dependencies.removed.png, 
> lucene.solr.dependency.cycles.png.jpg
>
>
> The Maven dependency model disallows cyclic dependencies, of which there are 
> now several in the Ant build (considering test and compile dependencies 
> together, as Maven does).  All of these cycles involve either the Lucene 
> test-framework or the Solr test-framework.
> The current Maven build works around this problem by incorporating 
> dependencies' sources into dependent modules' test sources, rather than 
> literally declaring the problematic dependencies as such. (See SOLR-3780 for 
> a recent example of putting this workaround in place for the Solrj module.)  
> But with the factoring out of the Lucene Codecs module, upon which Lucene 
> test-framework has a compile-time dependency, the complexity of the 
> workarounds required to make it all hang together is great enough that I want 
> to attempt a (Maven-build-only) module refactoring.  It should require fewer 
> contortions and be more maintainable.
> The Maven build is currently broken, as of the addition of the Codecs module 
> (LUCENE-4340).

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: [JENKINS] Lucene-Solr-NightlyTests-4.x - Build # 26 - Failure

2012-09-06 Thread Robert Muir
I think this jenkins job has the wrong clean option? it's just
svn-upping but not nuking svn-ignored stuff like the other jenkins
jobs.

the real problem is the solr/lib which is "shared" by solr and solrj:
https://issues.apache.org/jira/browse/SOLR-3686

This means that these tests have a bogus classpath (e.g. every so
often someone accidentally makes solrj depend on lucene or something
like that, nothing fails).

This also means we must disable ivy's sync=true in solr/core and
solr/solrj: which means solr developers have to put up with
"clean-jars hell" whenever anyone updates a jar.
no other part of the build has this problem: when jars are resolved
anything that's not supposed to be there is automatically deleted.

  
  


On Thu, Sep 6, 2012 at 11:02 AM, Apache Jenkins Server
 wrote:
> Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-4.x/26/
>
> All tests passed
>
> Build Log:
> [...truncated 6701 lines...]
> [javac] Compiling 476 source files to 
> /usr/home/hudson/hudson-slave/workspace/Lucene-Solr-NightlyTests-4.x/solr/build/solr-core/classes/java
> [javac] 
> /usr/home/hudson/hudson-slave/workspace/Lucene-Solr-NightlyTests-4.x/solr/core/src/java/org/apache/solr/search/function/distance/HaversineConstFunction.java:42:
>  cannot find symbol
> [javac] symbol  : static DEGREES_TO_RADIANS
> [javac] location: class com.spatial4j.core.distance.DistanceUtils
> [javac] import static 
> com.spatial4j.core.distance.DistanceUtils.DEGREES_TO_RADIANS;
> [javac] ^
> [javac] 
> /usr/home/hudson/hudson-slave/workspace/Lucene-Solr-NightlyTests-4.x/solr/core/src/java/org/apache/solr/search/ValueSourceParser.java:402:
>  cannot find symbol
> [javac] symbol  : variable DEGREES_TO_RADIANS
> [javac] location: class com.spatial4j.core.distance.DistanceUtils
> [javac] return vals.doubleVal(doc) * 
> DistanceUtils.DEGREES_TO_RADIANS;
> [javac]   ^
> [javac] 
> /usr/home/hudson/hudson-slave/workspace/Lucene-Solr-NightlyTests-4.x/solr/core/src/java/org/apache/solr/search/ValueSourceParser.java:408:
>  cannot find symbol
> [javac] symbol  : variable RADIANS_TO_DEGREES
> [javac] location: class com.spatial4j.core.distance.DistanceUtils
> [javac] return vals.doubleVal(doc) * 
> DistanceUtils.RADIANS_TO_DEGREES;
> [javac]   ^
> [javac] 
> /usr/home/hudson/hudson-slave/workspace/Lucene-Solr-NightlyTests-4.x/solr/core/src/java/org/apache/solr/schema/GeoHashField.java:51:
>  cannot find symbol
> [javac] symbol  : variable GEO
> [javac] location: class com.spatial4j.core.context.SpatialContext
> [javac]   private final SpatialContext ctx = SpatialContext.GEO;
> [javac]^
> [javac] 
> /usr/home/hudson/hudson-slave/workspace/Lucene-Solr-NightlyTests-4.x/solr/core/src/java/org/apache/solr/schema/LatLonType.java:148:
>  cannot find symbol
> [javac] symbol  : variable GEO
> [javac] location: class com.spatial4j.core.context.SpatialContext
> [javac] Rectangle bbox = 
> DistanceUtils.calcBoxByDistFromPtDEG(latCenter, lonCenter, distDeg, 
> SpatialContext.GEO, null);
> [javac]   
>  ^
> [javac] 
> /usr/home/hudson/hudson-slave/workspace/Lucene-Solr-NightlyTests-4.x/solr/core/src/java/org/apache/solr/schema/LatLonType.java:400:
>  cannot find symbol
> [javac] symbol  : variable DEGREES_TO_RADIANS
> [javac] location: class com.spatial4j.core.distance.DistanceUtils
> [javac]   this.latCenterRad = SpatialDistanceQuery.this.latCenter * 
> DistanceUtils.DEGREES_TO_RADIANS;
> [javac]   
>^
> [javac] 
> /usr/home/hudson/hudson-slave/workspace/Lucene-Solr-NightlyTests-4.x/solr/core/src/java/org/apache/solr/schema/LatLonType.java:401:
>  cannot find symbol
> [javac] symbol  : variable DEGREES_TO_RADIANS
> [javac] location: class com.spatial4j.core.distance.DistanceUtils
> [javac]   this.lonCenterRad = SpatialDistanceQuery.this.lonCenter * 
> DistanceUtils.DEGREES_TO_RADIANS;
> [javac]   
>^
> [javac] 
> /usr/home/hudson/hudson-slave/workspace/Lucene-Solr-NightlyTests-4.x/solr/core/src/java/org/apache/solr/schema/LatLonType.java:430:
>  cannot find symbol
> [javac] symbol  : variable DEGREES_TO_RADIANS
> [javac] location: class com.spatial4j.core.distance.DistanceUtils
> [javac]   double latRad = lat * DistanceUtils.DEGREES_TO_RADIANS;
> [javac]  ^
> [javac] 
> /usr/home/hudson/hudson-slave/workspace/Lucene-Solr-NightlyTests-4.x/solr/core/src/java/org/apache/solr/schema/LatLonType.java:431:
>  cannot find symbol
>

[jira] [Commented] (LUCENE-4364) MMapDirectory makes too many maps for CFS

2012-09-06 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-4364?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13449729#comment-13449729
 ] 

Michael McCandless commented on LUCENE-4364:


This patch looks great!  MMapDir is so simple now ... and I love how slice is a 
method on BBII.  Very nice to nuke openFullSlice too.

> MMapDirectory makes too many maps for CFS
> -
>
> Key: LUCENE-4364
> URL: https://issues.apache.org/jira/browse/LUCENE-4364
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Robert Muir
> Attachments: LUCENE-4364.patch, LUCENE-4364.patch, LUCENE-4364.patch, 
> LUCENE-4364.patch, LUCENE-4364.patch, LUCENE-4364.patch, LUCENE-4364.patch
>
>
> While looking at LUCENE-4123, i thought about this:
> I don't like how mmap creates a separate mapping for each CFS slice, to me 
> this is way too many mmappings.
> Instead I think its slicer should map the .CFS file, and then when asked for 
> an offset+length slice of that, it should be using .duplicate()d buffers of 
> that single master mapping.
> then when you close the .CFS it closes that one mapping.
> this is probably too scary for 4.0, we should take our time, but I think we 
> should do it.
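
As a rough illustration of the slicing idea in the quoted description (not the actual MMapDirectory code from the patch), handing out offset+length views of one master mapping via duplicate()/slice() could look roughly like this; all names here are made up:

{code}
// Hedged sketch: one master mapping of the .cfs file, per-entry views created
// with duplicate()/slice(). Real code must also chunk mappings larger than
// Integer.MAX_VALUE; this sketch ignores that for brevity.
import java.io.RandomAccessFile;
import java.nio.ByteBuffer;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;

public class CfsSliceSketch {

  static ByteBuffer slice(MappedByteBuffer master, long offset, long length) {
    ByteBuffer dup = master.duplicate(); // shares the mapping, independent position/limit
    dup.position((int) offset);
    dup.limit((int) (offset + length));
    return dup.slice();                  // a view covering just this CFS entry
  }

  public static void main(String[] args) throws Exception {
    RandomAccessFile raf = new RandomAccessFile(args[0], "r");
    try {
      FileChannel channel = raf.getChannel();
      MappedByteBuffer master = channel.map(FileChannel.MapMode.READ_ONLY, 0, channel.size());
      ByteBuffer entry = slice(master, 0, Math.min(16, channel.size()));
      System.out.println("first slice holds " + entry.remaining() + " bytes");
    } finally {
      raf.close();
    }
  }
}
{code}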

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS-MAVEN] Lucene-Solr-Maven-trunk #602: POMs out of sync

2012-09-06 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Maven-trunk/602/

No tests ran.

Build Log:
[...truncated 438 lines...]



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

[JENKINS] Lucene-Solr-NightlyTests-4.x - Build # 26 - Failure

2012-09-06 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-4.x/26/

All tests passed

Build Log:
[...truncated 6701 lines...]
[javac] Compiling 476 source files to 
/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-NightlyTests-4.x/solr/build/solr-core/classes/java
[javac] 
/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-NightlyTests-4.x/solr/core/src/java/org/apache/solr/search/function/distance/HaversineConstFunction.java:42:
 cannot find symbol
[javac] symbol  : static DEGREES_TO_RADIANS
[javac] location: class com.spatial4j.core.distance.DistanceUtils
[javac] import static 
com.spatial4j.core.distance.DistanceUtils.DEGREES_TO_RADIANS;
[javac] ^
[javac] 
/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-NightlyTests-4.x/solr/core/src/java/org/apache/solr/search/ValueSourceParser.java:402:
 cannot find symbol
[javac] symbol  : variable DEGREES_TO_RADIANS
[javac] location: class com.spatial4j.core.distance.DistanceUtils
[javac] return vals.doubleVal(doc) * 
DistanceUtils.DEGREES_TO_RADIANS;
[javac]   ^
[javac] 
/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-NightlyTests-4.x/solr/core/src/java/org/apache/solr/search/ValueSourceParser.java:408:
 cannot find symbol
[javac] symbol  : variable RADIANS_TO_DEGREES
[javac] location: class com.spatial4j.core.distance.DistanceUtils
[javac] return vals.doubleVal(doc) * 
DistanceUtils.RADIANS_TO_DEGREES;
[javac]   ^
[javac] 
/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-NightlyTests-4.x/solr/core/src/java/org/apache/solr/schema/GeoHashField.java:51:
 cannot find symbol
[javac] symbol  : variable GEO
[javac] location: class com.spatial4j.core.context.SpatialContext
[javac]   private final SpatialContext ctx = SpatialContext.GEO;
[javac]^
[javac] 
/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-NightlyTests-4.x/solr/core/src/java/org/apache/solr/schema/LatLonType.java:148:
 cannot find symbol
[javac] symbol  : variable GEO
[javac] location: class com.spatial4j.core.context.SpatialContext
[javac] Rectangle bbox = 
DistanceUtils.calcBoxByDistFromPtDEG(latCenter, lonCenter, distDeg, 
SpatialContext.GEO, null);
[javac] 
   ^
[javac] 
/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-NightlyTests-4.x/solr/core/src/java/org/apache/solr/schema/LatLonType.java:400:
 cannot find symbol
[javac] symbol  : variable DEGREES_TO_RADIANS
[javac] location: class com.spatial4j.core.distance.DistanceUtils
[javac]   this.latCenterRad = SpatialDistanceQuery.this.latCenter * 
DistanceUtils.DEGREES_TO_RADIANS;
[javac] 
 ^
[javac] 
/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-NightlyTests-4.x/solr/core/src/java/org/apache/solr/schema/LatLonType.java:401:
 cannot find symbol
[javac] symbol  : variable DEGREES_TO_RADIANS
[javac] location: class com.spatial4j.core.distance.DistanceUtils
[javac]   this.lonCenterRad = SpatialDistanceQuery.this.lonCenter * 
DistanceUtils.DEGREES_TO_RADIANS;
[javac] 
 ^
[javac] 
/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-NightlyTests-4.x/solr/core/src/java/org/apache/solr/schema/LatLonType.java:430:
 cannot find symbol
[javac] symbol  : variable DEGREES_TO_RADIANS
[javac] location: class com.spatial4j.core.distance.DistanceUtils
[javac]   double latRad = lat * DistanceUtils.DEGREES_TO_RADIANS;
[javac]  ^
[javac] 
/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-NightlyTests-4.x/solr/core/src/java/org/apache/solr/schema/LatLonType.java:431:
 cannot find symbol
[javac] symbol  : variable DEGREES_TO_RADIANS
[javac] location: class com.spatial4j.core.distance.DistanceUtils
[javac]   double lonRad = lon * DistanceUtils.DEGREES_TO_RADIANS;
[javac]  ^
[javac] 
/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-NightlyTests-4.x/solr/core/src/java/org/apache/solr/search/function/distance/GeohashHaversineFunction.java:54:
 cannot find symbol
[javac] symbol  : method degrees2Dist(int,double)
[javac] location: class com.spatial4j.core.distance.DistanceUtils
[javac] this.degreesToDist = DistanceUtils.degrees2Dist(1, radius);
[javac]   ^
[javac] 
/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-NightlyTests-4.x/solr/core/src/java/org/apache/solr/search/function/distance/GeohashHaversineFunction.java:55:
 cannot find symbol
[javac] symbol  : variable GEO
[javac] location: c

[jira] [Commented] (LUCENE-4362) ban tab-indented source

2012-09-06 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-4362?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13449694#comment-13449694
 ] 

Robert Muir commented on LUCENE-4362:
-

as mentioned on the list, i don't think we should change the generated files 
(e.g. standardtokenizerimpl etc).

We should fix those separately as part of the generation process / get jflex 
fixed for the one line it generates that causes this, etc :)

> ban tab-indented source
> ---
>
> Key: LUCENE-4362
> URL: https://issues.apache.org/jira/browse/LUCENE-4362
> Project: Lucene - Core
>  Issue Type: Task
>Reporter: Robert Muir
>Assignee: Erick Erickson
> Attachments: LUCENE-4362_core.patch, LUCENE-4362.patch, 
> LUCENE-4362.patch
>
>
> This makes code really difficult to read and work with.
> Its easy enough to prevent.
> {noformat}
> Index: build.xml
> ===
> --- build.xml (revision 1380979)
> +++ build.xml (working copy)
> @@ -77,11 +77,12 @@
>  
>
>
> +  
>  
>
>
>  
> -The following files contain @author 
> tags or nocommits:${line.separator}${validate.patternsFound}
> +The following files contain @author 
> tags, tabs or nocommits:${line.separator}${validate.patternsFound}
>
> {noformat}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-3795) /admin/luke?show=schema is returning raw toString of SchemaField and CopyField objects for "copyDests" and "copySources"

2012-09-06 Thread Stefan Matheis (steffkes) (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-3795?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13449691#comment-13449691
 ] 

Stefan Matheis (steffkes) commented on SOLR-3795:
-

Perfectly fine Hoss, go ahead!

> /admin/luke?show=schema is returning raw toString of SchemaField and 
> CopyField objects for "copyDests" and "copySources"
> 
>
> Key: SOLR-3795
> URL: https://issues.apache.org/jira/browse/SOLR-3795
> Project: Solr
>  Issue Type: Bug
>  Components: web gui
>Reporter: Hoss Man
>Assignee: Hoss Man
> Fix For: 4.0
>
> Attachments: SOLR-3795.patch
>
>
> While looking into SOLR-3734 i noticed that the LukeRequestHandler is blindly 
> putting arrays of CopyField and SchemaField objects in the response, when 
> returning copy from/to info, which are then getting written out using their 
> toString.
> steffkes seems to have done a great job of parsing the field name out of the 
> SchemaField.toString, but the CopyField.toString info is useless -- and 
> clients shouldn't have to do special string parsing to pull out this info.
> I think we should just fix both of these arrays to just be the simple string 
> values that they were most likely intended to be

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-4362) ban tab-indented source

2012-09-06 Thread Erick Erickson (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-4362?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erick Erickson updated LUCENE-4362:
---

Attachment: LUCENE-4362.patch

Rolls up both the other patches and removes the rest of the tabs from the java 
files

> ban tab-indented source
> ---
>
> Key: LUCENE-4362
> URL: https://issues.apache.org/jira/browse/LUCENE-4362
> Project: Lucene - Core
>  Issue Type: Task
>Reporter: Robert Muir
>Assignee: Erick Erickson
> Attachments: LUCENE-4362_core.patch, LUCENE-4362.patch, 
> LUCENE-4362.patch
>
>
> This makes code really difficult to read and work with.
> Its easy enough to prevent.
> {noformat}
> Index: build.xml
> ===
> --- build.xml (revision 1380979)
> +++ build.xml (working copy)
> @@ -77,11 +77,12 @@
>  
>
>
> +  
>  
>
>
>  
> -The following files contain @author 
> tags or nocommits:${line.separator}${validate.patternsFound}
> +The following files contain @author 
> tags, tabs or nocommits:${line.separator}${validate.patternsFound}
>
> {noformat}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-4362) ban tab-indented source

2012-09-06 Thread Erick Erickson (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-4362?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13449687#comment-13449687
 ] 

Erick Erickson commented on LUCENE-4362:


I took Robert's patch, applied it and spent some time on a flight playing with 
tabs. I went through the diffs on a fast scan and fixed some egregious 
indentation that jumped out.

Honest, I tried to restrain myself when reformatting _code_ rather than just 
indenting some stray lines, but in a few cases I just couldn't stand it and 
reformatted a couple of files (almost all the lines had tabs anyway) and a few 
complete methods that also had almost all tabbed lines so there shouldn't be 
very many gratuitous changes...

However, a curious thing happens when I try "ant test". There's some kind of 
never-ending process that I'm seeing occasionally on my machine. I won't have a 
chance to really look at it for a bit, I'll report more detail when I do. I'll 
need to roll back my changes and see if it occurs without them, the usual

> ban tab-indented source
> ---
>
> Key: LUCENE-4362
> URL: https://issues.apache.org/jira/browse/LUCENE-4362
> Project: Lucene - Core
>  Issue Type: Task
>Reporter: Robert Muir
>Assignee: Erick Erickson
> Attachments: LUCENE-4362_core.patch, LUCENE-4362.patch, 
> LUCENE-4362.patch
>
>
> This makes code really difficult to read and work with.
> Its easy enough to prevent.
> {noformat}
> Index: build.xml
> ===
> --- build.xml (revision 1380979)
> +++ build.xml (working copy)
> @@ -77,11 +77,12 @@
>  
>
>
> +  
>  
>
>
>  
> -The following files contain @author 
> tags or nocommits:${line.separator}${validate.patternsFound}
> +The following files contain @author 
> tags, tabs or nocommits:${line.separator}${validate.patternsFound}
>
> {noformat}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-4354) add validate-maven task to check maven dependencies

2012-09-06 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-4354?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13449684#comment-13449684
 ] 

Robert Muir commented on LUCENE-4354:
-

I committed this: but it's not yet linked into any jenkins job, until this 
randomizedtesting jar can actually be downloaded from maven reliably.


> add validate-maven task to check maven dependencies
> ---
>
> Key: LUCENE-4354
> URL: https://issues.apache.org/jira/browse/LUCENE-4354
> Project: Lucene - Core
>  Issue Type: Task
>Reporter: Robert Muir
> Attachments: LUCENE-4354-dep-fix.patch, 
> LUCENE-4354_hacked_lucene_only.patch, LUCENE-4354.patch, LUCENE-4354.patch, 
> LUCENE-4354.patch, LUCENE-4354.patch, LUCENE-4354.patch, LUCENE-4354.patch, 
> LUCENE-4354.patch, LUCENE-4354.patch, LUCENE-4354.patch, LUCENE-4354.patch, 
> LUCENE-4354.patch
>
>
> We had a situation where the maven artifacts depended on the wrong version of 
> tika: we should test that the maven dependencies are correct.
> An easy way to do this is to force it to download all of its dependencies, 
> and then run our existing license checks over that.
> This currently fails: maven is bringing in some extra 3rd party libraries.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-3680) Core Admin UI "Add Core" screen has confusing "ghost" defaults

2012-09-06 Thread Stefan Matheis (steffkes) (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-3680?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stefan Matheis (steffkes) resolved SOLR-3680.
-

Resolution: Duplicate

Will be fixed in conjunction with SOLR-3679

> Core Admin UI "Add Core" screen has confusing "ghost" defaults
> --
>
> Key: SOLR-3680
> URL: https://issues.apache.org/jira/browse/SOLR-3680
> Project: Solr
>  Issue Type: Bug
>  Components: web gui
>Affects Versions: 4.0-ALPHA
>Reporter: Hoss Man
>Assignee: Stefan Matheis (steffkes)
>
> Currently the "Add Core" screen in the admin UI has all of the form fields 
> filled in with what i can only describe as "ghost values":
> * if you click on them they vanish
> * if you click away w/o typing anything they re-appear
> * if you submit the form with a ghost value left in place, no value is sent
> i think we should consider:
> * removing these ghost values completely
> * using some visual indicator of what blanks are required

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-3679) Core Admin UI gives no feedback if "Add Core" fails

2012-09-06 Thread Stefan Matheis (steffkes) (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-3679?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13449676#comment-13449676
 ] 

Stefan Matheis (steffkes) commented on SOLR-3679:
-

Hoss, not sure if the included change to {{solr/example/solr/solr.xml}} is 
really wanted? If so, just let me know and i'll commit this.

> Core Admin UI gives no feedback if "Add Core" fails
> ---
>
> Key: SOLR-3679
> URL: https://issues.apache.org/jira/browse/SOLR-3679
> Project: Solr
>  Issue Type: Bug
>  Components: web gui
>Affects Versions: 4.0-ALPHA
>Reporter: Hoss Man
>Assignee: Stefan Matheis (steffkes)
> Attachments: SOLR-3679.patch, SOLR-3679.patch
>
>
> * start the example
> * load the admin ui, click on core admin
> * click on "Add Core"
> * fill the form out with gibberish and submit.
> The form stays on the screen w/o any feedback that an error occurred

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-2039) Regex support and beyond in JavaCC QueryParser

2012-09-06 Thread Yonik Seeley (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-2039?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yonik Seeley updated LUCENE-2039:
-

 Priority: Critical  (was: Minor)
Fix Version/s: 4.0

> Regex support and beyond in JavaCC QueryParser
> --
>
> Key: LUCENE-2039
> URL: https://issues.apache.org/jira/browse/LUCENE-2039
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: core/queryparser
>Reporter: Simon Willnauer
>Assignee: Simon Willnauer
>Priority: Critical
> Fix For: 4.0-ALPHA, 4.0
>
> Attachments: LUCENE-2039_field_ext.patch, 
> LUCENE-2039_field_ext.patch, LUCENE-2039_field_ext.patch, 
> LUCENE-2039_field_ext.patch, LUCENE-2039_field_ext.patch, LUCENE-2039.patch, 
> LUCENE-2039_wrap_master_parser.patch
>
>
> Since the early days the standard query parser was limited to the queries 
> living in core, adding other queries or extending the parser in any way 
> always forced people to change the grammar file and regenerate. Even if you 
> change the grammar you have to be extremely careful how you modify the parser 
> so that other parts of the standard parser are not affected by customisation 
> changes. Eventually you had to live with all the limitations the current 
> parser has, like tokenizing on whitespace before a tokenizer / analyzer has 
> the chance to look at the tokens. 
> I was thinking about how to overcome the limitation and add regex support to 
> the query parser without introducing any dependency to core. I added a new 
> special character that basically prevents the parser from interpreting any of 
> the characters enclosed in the new special characters. I chose the forward 
> slash  '/' as the delimiter so that everything in between two forward slashes 
> is basically escaped and ignored by the parser. All chars embedded within 
> forward slashes are treated as one token even if it contains other special 
> chars like * []?{} or whitespaces. This token is subsequently passed to a 
> pluggable "parser extension" with builds a query from the embedded string. I 
> do not interpret the embedded string in any way but leave all the subsequent 
> work to the parser extension. Such an extension could be another full 
> featured query parser itself or simply a ctor call for regex query. The 
> interface remains quite simple but makes the parser extendible in an easy way 
> compared to modifying the javaCC sources.
> The downside of this patch is clearly that I introduce a new special char 
> into the syntax but I guess that would not be that much of a deal as it is 
> reflected in the escape method though. It would truly be nice to have more 
> than one extension and have this even more flexible, so treat this patch as a 
> kickoff though.
> Another way of solving the problem with RegexQuery would be to move the JDK 
> version of regex into the core and simply have another method like:
> {code}
> protected Query newRegexQuery(Term t) {
>   ... 
> }
> {code}
> which I would like better as it would be more consistent with the idea of the 
> query parser to be a very strict and defined parser.
> I will upload a patch in a second which implements the extension based 
> approach I guess I will add a second patch with regex in core soon too.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Reopened] (LUCENE-2039) Regex support and beyond in JavaCC QueryParser

2012-09-06 Thread Yonik Seeley (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-2039?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yonik Seeley reopened LUCENE-2039:
--


Reopening, as I believe the grammar as implemented is a bit flawed.

A simple query of foo/bar will now fail since the slash in the middle of the 
term is seen as the start of a regex.
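
To make the flaw concrete, here is a hedged sketch (not from the issue itself), assuming the 4.0 classic QueryParser, where backslash-escaping the slash should keep it inside the term; the exact output is only indicative:

{code}
// Hedged sketch of the behaviour described above. A bare foo/bar now trips
// over the '/' regex delimiter, while a backslash-escaped slash stays part
// of the term.
import org.apache.lucene.analysis.core.WhitespaceAnalyzer;
import org.apache.lucene.queryparser.classic.QueryParser;
import org.apache.lucene.util.Version;

public class MidTermSlashSketch {
  public static void main(String[] args) throws Exception {
    QueryParser qp = new QueryParser(Version.LUCENE_40, "f",
        new WhitespaceAnalyzer(Version.LUCENE_40));
    System.out.println(qp.parse("foo\\/bar"));   // escaped: a single term containing the slash
    System.out.println(qp.parse("/foo.*bar/"));  // delimited: handed to the regex hook
    // qp.parse("foo/bar") is the problematic case: the mid-term slash is taken
    // as the start of a regex and the query no longer parses as one term.
  }
}
{code}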

> Regex support and beyond in JavaCC QueryParser
> --
>
> Key: LUCENE-2039
> URL: https://issues.apache.org/jira/browse/LUCENE-2039
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: core/queryparser
>Reporter: Simon Willnauer
>Assignee: Simon Willnauer
>Priority: Minor
> Fix For: 4.0-ALPHA, 4.0
>
> Attachments: LUCENE-2039_field_ext.patch, 
> LUCENE-2039_field_ext.patch, LUCENE-2039_field_ext.patch, 
> LUCENE-2039_field_ext.patch, LUCENE-2039_field_ext.patch, LUCENE-2039.patch, 
> LUCENE-2039_wrap_master_parser.patch
>
>
> Since the early days the standard query parser was limited to the queries 
> living in core, adding other queries or extending the parser in any way 
> always forced people to change the grammar file and regenerate. Even if you 
> change the grammar you have to be extremely careful how you modify the parser 
> so that other parts of the standard parser are not affected by customisation 
> changes. Eventually you had to live with all the limitations the current 
> parser has, like tokenizing on whitespace before a tokenizer / analyzer has 
> the chance to look at the tokens. 
> I was thinking about how to overcome the limitation and add regex support to 
> the query parser without introducing any dependency to core. I added a new 
> special character that basically prevents the parser from interpreting any of 
> the characters enclosed in the new special characters. I chose the forward 
> slash  '/' as the delimiter so that everything in between two forward slashes 
> is basically escaped and ignored by the parser. All chars embedded within 
> forward slashes are treated as one token even if it contains other special 
> chars like * []?{} or whitespaces. This token is subsequently passed to a 
> pluggable "parser extension" with builds a query from the embedded string. I 
> do not interpret the embedded string in any way but leave all the subsequent 
> work to the parser extension. Such an extension could be another full 
> featured query parser itself or simply a ctor call for regex query. The 
> interface remains quite simple but makes the parser extendible in an easy way 
> compared to modifying the javaCC sources.
> The downside of this patch is clearly that I introduce a new special char 
> into the syntax but I guess that would not be that much of a deal as it is 
> reflected in the escape method though. It would truly be nice to have more 
> than one extension and have this even more flexible, so treat this patch as a 
> kickoff though.
> Another way of solving the problem with RegexQuery would be to move the JDK 
> version of regex into the core and simply have another method like:
> {code}
> protected Query newRegexQuery(Term t) {
>   ... 
> }
> {code}
> which I would like better as it would be more consistent with the idea of the 
> query parser to be a very strict and defined parser.
> I will upload a patch in a second which implements the extension based 
> approach I guess I will add a second patch with regex in core soon too.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Assigned] (LUCENE-4362) ban tab-indented source

2012-09-06 Thread Erick Erickson (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-4362?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erick Erickson reassigned LUCENE-4362:
--

Assignee: Erick Erickson

> ban tab-indented source
> ---
>
> Key: LUCENE-4362
> URL: https://issues.apache.org/jira/browse/LUCENE-4362
> Project: Lucene - Core
>  Issue Type: Task
>Reporter: Robert Muir
>Assignee: Erick Erickson
> Attachments: LUCENE-4362_core.patch, LUCENE-4362.patch
>
>
> This makes code really difficult to read and work with.
> Its easy enough to prevent.
> {noformat}
> Index: build.xml
> ===
> --- build.xml (revision 1380979)
> +++ build.xml (working copy)
> @@ -77,11 +77,12 @@
>  
>
>
> +  
>  
>
>
>  
> -The following files contain @author 
> tags or nocommits:${line.separator}${validate.patternsFound}
> +The following files contain @author 
> tags, tabs or nocommits:${line.separator}${validate.patternsFound}
>
> {noformat}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: [jira] [Commented] (LUCENE-4362) ban tab-indented source

2012-09-06 Thread Erick Erickson
Nah, here's my alter ego: http://www.redstate.com/

Anyway, the address I try to use for all this is
erick.erick...@gmail.com, thanks
for straightening this out, I just assigned this one to myself. I'll
attach a patch shortly.

Erick



On Wed, Sep 5, 2012 at 11:39 AM, Dawid Weiss
 wrote:
>> erick.erick...@gmail.com (that one had permissions), but that either
>
> It's probably Erick's dark side alter ego:
> http://goo.gl/orGMC
>
> Dawid
>
> -
> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> For additional commands, e-mail: dev-h...@lucene.apache.org
>

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-3393) Implement an optimized LFUCache

2012-09-06 Thread Adrien Grand (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-3393?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13449665#comment-13449665
 ] 

Adrien Grand commented on SOLR-3393:


Hi hoss, thanks for bringing this up!

 - {{put}} should return the previous value instead of returning the value that 
has been put into the cache (see the sketch below).
 - I don't like the {{evictDecay}} option, I assume it is here to prevent the 
most frequently used entry from being evicted in case all entries have a 
frequency >= maxFreq, but on the other hand it makes every put operation run in 
O( n ), so maybe we should just remove it and add a message in the class 
javadocs to warn users that entries that have a frequency >= maxFreq are 
evicted according to an LRU policy.
 - Maybe we should remove warmDecay as well, I understand it is here to try to 
prevent cache pollution but it makes the cache behave differently in case there 
are commits: if an entry is retrieved 5 times before a commit and 5 times 
after, it will be considered less frequently used than an entry that has been 
retrieved 8 times after the commit, this is counterintuitive.
 - I think Entry.value and Entry.frequency don't need to be volatile?
 - maxCacheSize - 1 is probably too high a default value for maxFreq. It can 
make doEviction (and consequently put) run in O( n ). Maybe we should make it 
configurable with something like log( n ) or 10 as a default value? Moreover, 
lower values of maxFreq are less prone to cache pollution.

Regarding your (2), I personally don't mind if it is a little slower on 
average. I would expect the get method to be slower with this impl but on the 
other hand ConcurrentLFUCache seems to provide no guarantee that it will be able 
to evict entries as fast as they get inserted into the cache, so I think this 
new impl is better.
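
As a toy illustration of the {{put}} contract and the maxFreq cap discussed above (an assumption-laden sketch, not the SOLR-3393 code; names and structure are made up, and eviction itself is omitted):

{code}
// Hedged sketch: Map-like put() that returns the previous value, plus a
// frequency counter capped at maxFreq.
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import java.util.concurrent.atomic.AtomicLong;

public class TinyLfuSketch<K, V> {

  static final class CacheEntry<V> {
    final V value;
    final AtomicLong freq = new AtomicLong();
    CacheEntry(V value) { this.value = value; }
  }

  private final ConcurrentMap<K, CacheEntry<V>> map = new ConcurrentHashMap<K, CacheEntry<V>>();
  private final long maxFreq;

  public TinyLfuSketch(long maxFreq) { this.maxFreq = maxFreq; }

  /** Returns the value previously associated with key, or null - like java.util.Map.put. */
  public V put(K key, V value) {
    CacheEntry<V> previous = map.put(key, new CacheEntry<V>(value));
    return previous == null ? null : previous.value;
  }

  public V get(K key) {
    CacheEntry<V> e = map.get(key);
    if (e == null) return null;
    // Cap the counter: entries stuck at maxFreq compete on an LRU-like basis
    // at eviction time, which is the trade-off discussed in the comment above.
    if (e.freq.get() < maxFreq) e.freq.incrementAndGet();
    return e.value;
  }
}
{code}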

> Implement an optimized LFUCache
> ---
>
> Key: SOLR-3393
> URL: https://issues.apache.org/jira/browse/SOLR-3393
> Project: Solr
>  Issue Type: Improvement
>  Components: search
>Affects Versions: 3.6, 4.0-ALPHA
>Reporter: Shawn Heisey
>Priority: Minor
> Fix For: 4.0
>
> Attachments: SOLR-3393.patch, SOLR-3393.patch, SOLR-3393.patch, 
> SOLR-3393.patch, SOLR-3393.patch
>
>
> SOLR-2906 gave us an inefficient LFU cache modeled on 
> FastLRUCache/ConcurrentLRUCache.  It could use some serious improvement.  The 
> following project includes an Apache 2.0 licensed O(1) implementation.  The 
> second link is the paper (PDF warning) it was based on:
> https://github.com/chirino/hawtdb
> http://dhruvbird.com/lfu.pdf
> Using this project and paper, I will attempt to make a new O(1) cache called 
> FastLFUCache that is modeled on LRUCache.java.  This will (for now) leave the 
> existing LFUCache/ConcurrentLFUCache implementation in place.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



solrconfig issue: wiki doc says update log feature is not enabled by default

2012-09-06 Thread Jack Krupansky
The wiki doc for updateLog says that this feature is “not enabled by 
default” when in fact it is enabled by default in the example 
solrconfig.xml for Solr 4.0-BETA.


The question is whether the example is wrong or the doc is wrong. All I know 
is that they do not match.


See:
http://wiki.apache.org/solr/RealTimeGet

Also, it probably makes sense to add at least a reference to updateLog on the 
main solrconfig wiki doc page. It currently says: "TODO: Document /get - see 
[RealTimeGet]"


See:
http://wiki.apache.org/solr/SolrConfigXml

-- Jack Krupansky 



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-4364) MMapDirectory makes too many maps for CFS

2012-09-06 Thread Robert Muir (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-4364?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Muir updated LUCENE-4364:


Attachment: LUCENE-4364.patch

Re-synced to trunk after changing the assert to a hard check.

> MMapDirectory makes too many maps for CFS
> -
>
> Key: LUCENE-4364
> URL: https://issues.apache.org/jira/browse/LUCENE-4364
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Robert Muir
> Attachments: LUCENE-4364.patch, LUCENE-4364.patch, LUCENE-4364.patch, 
> LUCENE-4364.patch, LUCENE-4364.patch, LUCENE-4364.patch, LUCENE-4364.patch
>
>
> While looking at LUCENE-4123, i thought about this:
> I don't like how mmap creates a separate mapping for each CFS slice, to me 
> this is way too many mmapings.
> Instead I think its slicer should map the .CFS file, and then when asked for 
> an offset+length slice of that, it should be using .duplicate()d buffers of 
> that single master mapping.
> then when you close the .CFS it closes that one mapping.
> this is probably too scary for 4.0, we should take our time, but I think we 
> should do it.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-4364) MMapDirectory makes too many maps for CFS

2012-09-06 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-4364?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13449615#comment-13449615
 ] 

Robert Muir commented on LUCENE-4364:
-

While reviewing the CFS reading code I see the classic case of an assert that 
should be a check on the put return value:
{noformat}
assert !mapping.containsKey(id): "id=" + id + " was written multiple times in 
the CFS";
mapping.put(id, fileEntry);
{noformat}

I'll fix this and re-sync the patch.
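
Something along these lines, using the return value of {{put}} as the check (a 
sketch only; the committed patch may differ in the exact message and exception):

{code}
FileEntry previous = mapping.put(id, fileEntry);
if (previous != null) {
  throw new CorruptIndexException("id=" + id + " was written multiple times in the CFS");
}
{code}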

> MMapDirectory makes too many maps for CFS
> -
>
> Key: LUCENE-4364
> URL: https://issues.apache.org/jira/browse/LUCENE-4364
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Robert Muir
> Attachments: LUCENE-4364.patch, LUCENE-4364.patch, LUCENE-4364.patch, 
> LUCENE-4364.patch, LUCENE-4364.patch, LUCENE-4364.patch
>
>
> While looking at LUCENE-4123, i thought about this:
> I don't like how mmap creates a separate mapping for each CFS slice, to me 
> this is way too many mmapings.
> Instead I think its slicer should map the .CFS file, and then when asked for 
> an offset+length slice of that, it should be using .duplicate()d buffers of 
> that single master mapping.
> then when you close the .CFS it closes that one mapping.
> this is probably too scary for 4.0, we should take our time, but I think we 
> should do it.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-3793) duplicate (deleted) documents included in result set when using field faceting with fq

2012-09-06 Thread Guenter Hipler (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-3793?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13449609#comment-13449609
 ] 

Guenter Hipler commented on SOLR-3793:
--

Thanks Yonik,
I'm going to test it with the upcoming nightly build.
Günter

> duplicate (deleted) documents included in result set when using field 
> faceting with fq
> --
>
> Key: SOLR-3793
> URL: https://issues.apache.org/jira/browse/SOLR-3793
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 4.0-BETA
>Reporter: Hoss Man
>Assignee: Yonik Seeley
>Priority: Blocker
> Fix For: 4.0
>
> Attachments: SOLR-3793.patch
>
>
> Günter Hipler reported on the solr-user mailing list that he was seeing 
> inconsistencies in facet counts compared to the numFound when drilling down 
> onto those facets (using "fq") - in particular: when adding an "fq" such as 
> `fq={!term+f%3DnavNetwork}nebis`, the resulting numFound was higher then the 
> number of docs reported by the facet constraint for nebis in the base request.
> I've been able to trivially reproduce this using the example data from Solr 
> 4.0-BETA, trunk@r1381400, and branch_4x@r1381400 (details in comment to 
> follow)
> Important things to note from Günter's email thread with his assessment of 
> the problem...
> https://mail-archives.apache.org/mod_mbox/lucene-solr-user/201208.mbox/%3ccam_u7jfdpnrgfmwmntnachcdcjw4yb-rlbbvrw_wp_jdob_...@mail.gmail.com%3E
> bq. The behaviour is not consistent. Some of the facets provide the correct 
> result, some not.  What I can't say for sure: The behaviour was correct (if 
> I'm not wrong) once the whole index was newly created. After running some 
> updates I got these results.
> bq. I'm going to setup a new index with the Lucene 4.0 version from March (to 
> be more exactly: it's version 4.0-2012-03-09_11-29-20) to see what are the 
> results even in case of frequent updates ... the version deployed in march 
> doesn't contain the error I now come across in Beta4.0 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-4364) MMapDirectory makes too many maps for CFS

2012-09-06 Thread Robert Muir (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-4364?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Muir updated LUCENE-4364:


Attachment: LUCENE-4364.patch

Updated patch: I removed the getFilePointer() == 0L assert (it's a contract of 
the slice method) from the openSlice method, and added back the IAE if you try 
to slice from a clone.

As far as openFullSlice goes, the assert there is correct (for 4.x maybe), but 
in trunk we don't even need this method: it's a back-compat shim for reading 
older 3.x-formatted .cfs files, which trunk no longer supports. This is just 
dead code, so I removed it; we should deprecate it in 4.x.
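
For readers following along, a tiny self-contained sketch of the .duplicate()-based 
slicing idea from the issue description (illustrative only; the real code lives in 
ByteBufferIndexInput and also handles chunking, cloning and unmapping):

{code}
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Paths;
import java.nio.file.StandardOpenOption;

public class DuplicateSliceSketch {
  // One master mapping per .cfs file; each slice is just a duplicate view with its
  // own position/limit, so no extra mmap is created per compound-file entry.
  static ByteBuffer slice(ByteBuffer master, int offset, int length) {
    ByteBuffer dup = master.duplicate();   // shares the mapped memory, independent position/limit
    dup.position(offset);
    dup.limit(offset + length);
    return dup.slice();
  }

  public static void main(String[] args) throws IOException {
    try (FileChannel ch = FileChannel.open(Paths.get(args[0]), StandardOpenOption.READ)) {
      ByteBuffer master = ch.map(FileChannel.MapMode.READ_ONLY, 0, ch.size());
      ByteBuffer entry = slice(master, 0, 16);   // e.g. the first 16 bytes of the file
      System.out.println("slice length = " + entry.remaining());
    }
  }
}
{code}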

> MMapDirectory makes too many maps for CFS
> -
>
> Key: LUCENE-4364
> URL: https://issues.apache.org/jira/browse/LUCENE-4364
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Robert Muir
> Attachments: LUCENE-4364.patch, LUCENE-4364.patch, LUCENE-4364.patch, 
> LUCENE-4364.patch, LUCENE-4364.patch, LUCENE-4364.patch
>
>
> While looking at LUCENE-4123, i thought about this:
> I don't like how mmap creates a separate mapping for each CFS slice, to me 
> this is way too many mmapings.
> Instead I think its slicer should map the .CFS file, and then when asked for 
> an offset+length slice of that, it should be using .duplicate()d buffers of 
> that single master mapping.
> then when you close the .CFS it closes that one mapping.
> this is probably too scary for 4.0, we should take our time, but I think we 
> should do it.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-3793) duplicate (deleted) documents included in result set when using field faceting with fq

2012-09-06 Thread Yonik Seeley (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-3793?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yonik Seeley resolved SOLR-3793.


Resolution: Fixed

Committed to trunk and 4x
http://svn.apache.org/viewvc?rev=1381568&view=rev
http://svn.apache.org/viewvc?rev=1381569&view=rev

> duplicate (deleted) documents included in result set when using field 
> faceting with fq
> --
>
> Key: SOLR-3793
> URL: https://issues.apache.org/jira/browse/SOLR-3793
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 4.0-BETA
>Reporter: Hoss Man
>Assignee: Yonik Seeley
>Priority: Blocker
> Fix For: 4.0
>
> Attachments: SOLR-3793.patch
>
>
> Günter Hipler reported on the solr-user mailing list that he was seeing 
> inconsistencies in facet counts compared to the numFound when drilling down 
> onto those facets (using "fq") - in particular: when adding an "fq" such as 
> `fq={!term+f%3DnavNetwork}nebis`, the resulting numFound was higher then the 
> number of docs reported by the facet constraint for nebis in the base request.
> I've been able to trivially reproduce this using the example data from Solr 
> 4.0-BETA, trunk@r1381400, and branch_4x@r1381400 (details in comment to 
> follow)
> Important things to note from Günter's email thread with his assessment of 
> the problem...
> https://mail-archives.apache.org/mod_mbox/lucene-solr-user/201208.mbox/%3ccam_u7jfdpnrgfmwmntnachcdcjw4yb-rlbbvrw_wp_jdob_...@mail.gmail.com%3E
> bq. The behaviour is not consistent. Some of the facets provide the correct 
> result, some not.  What I can't say for sure: The behaviour was correct (if 
> I'm not wrong) once the whole index was newly created. After running some 
> updates I got these results.
> bq. I'm going to setup a new index with the Lucene 4.0 version from March (to 
> be more exactly: it's version 4.0-2012-03-09_11-29-20) to see what are the 
> results even in case of frequent updates ... the version deployed in march 
> doesn't contain the error I now come across in Beta4.0 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



RE: JIRA assign issue

2012-09-06 Thread Steven A Rowe
Hi Jacek,

I've just added you to the Contributors group for the Solr project on JIRA.  
This should allow you to assign issues to yourself.

Steve

-Original Message-
From: Jacek Plebanek [mailto:pleba...@gmail.com] 
Sent: Thursday, September 06, 2012 8:32 AM
To: dev@lucene.apache.org
Subject: JIRA assign issue

Hi,

I would like to assign some JIRA issues to me
(https://issues.apache.org/jira/browse/SOLR-3799).

Can I get privileges to do that?


Jacek


-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org


-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



JIRA assign issue

2012-09-06 Thread Jacek Plebanek
Hi,

I would like to assign some JIRA issues to me
(https://issues.apache.org/jira/browse/SOLR-3799).

Can I get privileges to do that?


Jacek


-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-3805) Federated Search: Collections representation

2012-09-06 Thread Jacek Plebanek (JIRA)
Jacek Plebanek created SOLR-3805:


 Summary: Federated Search: Collections representation
 Key: SOLR-3805
 URL: https://issues.apache.org/jira/browse/SOLR-3805
 Project: Solr
  Issue Type: Sub-task
  Components: SearchComponents - other
Affects Versions: 5.0
Reporter: Jacek Plebanek
 Fix For: 5.0


Different federated search algorithms can use different kinds of information 
about collections and different ways to store them. The interface and data model 
should therefore be as general as possible.

A simple implementation could be based on a set of keywords describing each 
collection.

Any suggestions welcome!

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-3804) Federated Search: External collections adapters

2012-09-06 Thread Jacek Plebanek (JIRA)
Jacek Plebanek created SOLR-3804:


 Summary: Federated Search: External collections adapters
 Key: SOLR-3804
 URL: https://issues.apache.org/jira/browse/SOLR-3804
 Project: Solr
  Issue Type: Sub-task
  Components: SearchComponents - other
Affects Versions: 5.0
Reporter: Jacek Plebanek
 Fix For: 5.0


In order to send requests to external collections, we need a mechanism for 
pluggable custom adapters (connectors).

Responsibilities:
- translate the Solr query to a native query
- send the request
- get the response and translate it into the Solr model

TODO:
- interfaces (see the rough sketch below)
- data model
- simple example implementation(s)

Any suggestions welcome!
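
To make the discussion more concrete, a rough sketch of what such an adapter 
interface could look like (all names and signatures here are assumptions for 
illustration only, not a proposed API):

{code}
import org.apache.solr.common.SolrDocumentList;
import org.apache.solr.common.params.SolrParams;

// Illustrative sketch only: one possible shape for an external-collection adapter.
public interface ExternalCollectionAdapter<Q, R> {
  /** Translate the Solr query into the external engine's native query. */
  Q translateQuery(SolrParams solrQuery);

  /** Send the native query to the external collection. */
  R sendRequest(Q nativeQuery);

  /** Translate the native response back into the Solr result model. */
  SolrDocumentList toSolrModel(R nativeResponse);
}
{code}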

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-3803) Federated Search: Results merging

2012-09-06 Thread Jacek Plebanek (JIRA)
Jacek Plebanek created SOLR-3803:


 Summary: Federated Search: Results merging
 Key: SOLR-3803
 URL: https://issues.apache.org/jira/browse/SOLR-3803
 Project: Solr
  Issue Type: Sub-task
  Components: SearchComponents - other
Affects Versions: 5.0
Reporter: Jacek Plebanek
 Fix For: 5.0


The results merging stage is closely related to search query handling in 
QueryComponent. In a federated environment it is a fairly complex part (it could 
require sending requests to shards or to collection representations) and can 
implement various algorithms. I think it should be implemented as a separate 
module pluggable into QueryComponent, or even as a whole separate Component.

A simple implementation could use the CollectionSelectionComponent results.

Any suggestions welcome!

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-3802) Federated Search: QueryComponent

2012-09-06 Thread Jacek Plebanek (JIRA)
Jacek Plebanek created SOLR-3802:


 Summary: Federated Search: QueryComponent
 Key: SOLR-3802
 URL: https://issues.apache.org/jira/browse/SOLR-3802
 Project: Solr
  Issue Type: Sub-task
  Components: SearchComponents - other
Affects Versions: 5.0
Reporter: Jacek Plebanek
 Fix For: 5.0


Federated query support in QueryComponent could look similar to the current 
distributed query support. If possible, it should not implement any algorithms 
(such as the results merging algorithm) - just the skeleton of the query 
handling process.

Any suggestions welcome!

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-3801) Federated Search: CollectionSelectionComponent

2012-09-06 Thread Jacek Plebanek (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-3801?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jacek Plebanek updated SOLR-3801:
-

Description: 
Component used to rank collections (shards) according to the search query string 
(and/or other factors). Typically it will use the collections representation or 
special queries against the indexes to predict which of them will return the 
most relevant results.

This component would also be useful standalone (outside of the Federated Search 
process).

Any suggestions welcome!

  was:
Component used to rank collections (shards) according to search query string 
(or/and other factors). Typically it will use collections representation or 
special queries to indexes to predict which of them will return most relevant 
results.

This component would be usefull also standalone (outside of Federated Search 
process).


> Federated Search: CollectionSelectionComponent
> --
>
> Key: SOLR-3801
> URL: https://issues.apache.org/jira/browse/SOLR-3801
> Project: Solr
>  Issue Type: Sub-task
>  Components: SearchComponents - other
>Affects Versions: 5.0
>Reporter: Jacek Plebanek
>  Labels: federated_search
> Fix For: 5.0
>
>
> Component used to rank collections (shards) according to search query string 
> (or/and other factors). Typically it will use collections representation or 
> special queries to indexes to predict which of them will return most relevant 
> results.
> This component would be usefull also standalone (outside of Federated Search 
> process).
> Any suggestions welcome!

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-3801) Federated Search: CollectionSelectionComponent

2012-09-06 Thread Jacek Plebanek (JIRA)
Jacek Plebanek created SOLR-3801:


 Summary: Federated Search: CollectionSelectionComponent
 Key: SOLR-3801
 URL: https://issues.apache.org/jira/browse/SOLR-3801
 Project: Solr
  Issue Type: Sub-task
  Components: SearchComponents - other
Affects Versions: 5.0
Reporter: Jacek Plebanek
 Fix For: 5.0


Component used to rank collections (shards) according to the search query string 
(and/or other factors). Typically it will use the collections representation or 
special queries against the indexes to predict which of them will return the 
most relevant results.

This component would also be useful standalone (outside of the Federated Search 
process).

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-3800) Federated Search: SearchHandler

2012-09-06 Thread Jacek Plebanek (JIRA)
Jacek Plebanek created SOLR-3800:


 Summary: Federated Search: SearchHandler
 Key: SOLR-3800
 URL: https://issues.apache.org/jira/browse/SOLR-3800
 Project: Solr
  Issue Type: Sub-task
  Components: SearchComponents - other
Affects Versions: 5.0
Reporter: Jacek Plebanek
 Fix For: 5.0


Federated request support in SearchHandler could look similar to the current 
DistributedSearch support: a design based on STAGE flags and optional requests 
sent to collections (shards). The main difference would be in the supported 
stages and in the additional request available (a request to the collections 
representation module). Something like:

available stages:
- COLLECTION SELECTION
- PREPARE MERGE RULES
- GET DOCUMENTS

available actions:
- send search requests to shards (to adapters and Solr instances)
- send requests to the indexes representation

invoked Component methods (see the rough sketch below):
- federatedProcess()
- handleFederatedResponses()
- handleCollectionRepresentationResponses()


Any suggestions welcome!
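
Purely as an illustration of the method list above (the method names are taken 
from that list; the parameter types and everything else are assumptions modelled 
on the existing distributedProcess()/handleResponses() hooks, not a proposed API):

{code}
import java.io.IOException;

import org.apache.solr.handler.component.ResponseBuilder;
import org.apache.solr.handler.component.ShardRequest;

// Illustrative sketch only: possible federated-stage hooks for a SearchComponent.
public interface FederatedSearchHooks {
  /** Runs once per federated stage (COLLECTION SELECTION, PREPARE MERGE RULES, GET DOCUMENTS). */
  int federatedProcess(ResponseBuilder rb) throws IOException;

  /** Called when responses from shards / external adapters arrive. */
  void handleFederatedResponses(ResponseBuilder rb, ShardRequest sreq);

  /** Called when responses from the collections-representation module arrive. */
  void handleCollectionRepresentationResponses(ResponseBuilder rb, ShardRequest sreq);
}
{code}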

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-4364) MMapDirectory makes too many maps for CFS

2012-09-06 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-4364?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13449593#comment-13449593
 ] 

Robert Muir commented on LUCENE-4364:
-

It's completely untested except with offset=0 and length=length, that's my 
problem (that doesn't count).

It's also totally unnecessary to support: there is absolutely, positively, zero 
use-case.
I'll change the patch to add it back. I'll also remove the assert 
getFilePointer() == 0L (nothing depends on that).


> MMapDirectory makes too many maps for CFS
> -
>
> Key: LUCENE-4364
> URL: https://issues.apache.org/jira/browse/LUCENE-4364
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Robert Muir
> Attachments: LUCENE-4364.patch, LUCENE-4364.patch, LUCENE-4364.patch, 
> LUCENE-4364.patch, LUCENE-4364.patch
>
>
> While looking at LUCENE-4123, i thought about this:
> I don't like how mmap creates a separate mapping for each CFS slice, to me 
> this is way too many mmapings.
> Instead I think its slicer should map the .CFS file, and then when asked for 
> an offset+length slice of that, it should be using .duplicate()d buffers of 
> that single master mapping.
> then when you close the .CFS it closes that one mapping.
> this is probably too scary for 4.0, we should take our time, but I think we 
> should do it.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-3799) Federated search support - include documents from external collections in Solr search results.

2012-09-06 Thread Jacek Plebanek (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-3799?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jacek Plebanek updated SOLR-3799:
-

Description: 
Following discussion on dev@lucene.apache.org 
(http://mail-archives.apache.org/mod_mbox/lucene-dev/201205.mbox/%3caba41fbe-72a8-467e-bf33-3d0ca1ed8...@cominvent.com%3E
 , 
http://mail-archives.apache.org/mod_mbox/lucene-dev/201208.mbox/%3C1345800198.2303.68.camel@oo%3E)
 I would like to introduce the idea of Federated Search in Solr.

It would be nice to have support for real Federated Search in Solr - very 
helpful for people who would like to include some external search results in 
their Solr-based system.

By Federated Search I mean searching across not only distributed Solr instances 
(the existing DistributedSearch in Solr) but also other kinds of external search 
services.

A typical federated search process includes:
- a collection selection step
- results merging
- adapters for connecting to external collections
- collections representations (used in collection selection and/or result 
merging)


I'm thinking about creating a full solution with a basic example implementation 
of each module.

Things to do that come to mind are:
1. federated request support in SearchHandler: the place where everything is 
tied together.
2. CollectionSelectionComponent: which should be independent, so one can use it 
separately.
3. federated search support in QueryComponent: with no hard-coded algorithms if 
possible.
4. Results merging rules module: as a pluggable part of QueryComponent or as a 
separate MergingComponent.
5. Adapter (connector) to an external collection: interface and example 
implementation.
6. Collections representation: interface and default implementation, used to 
store information about indexes/collections.


The typical use case would look like this:
- the user sends a search request
- Solr decides to which indexes to delegate the request (collection selection): 
for example by comparing the user's query with the collection representations.
- Solr decides how many and which documents to get from each collection (merge 
rules): for example by using the previous step's results.
- Solr sends the user's query to the collections (Solr instances and/or external 
collections through dedicated adapters)
- Solr merges and returns the results.


Design requirements:
- lightweight implementation
- designed as a Solr feature, not as something on top of Solr or as a Solr 
extension
- easy to use and customize out of the box
- allow for extension/reimplementation by users

Any suggestions/discussions welcome!

  was:
Following discussion on dev@lucene.apache.org 
(http://mail-archives.apache.org/mod_mbox/lucene-dev/201205.mbox/%3caba41fbe-72a8-467e-bf33-3d0ca1ed8...@cominvent.com%3E
 , 
http://mail-archives.apache.org/mod_mbox/lucene-dev/201208.mbox/%3C1345800198.2303.68.camel@oo%3E)
 i would like to introduce the idea of Federated Search in Solr.

It would be nice to have support for real Federated Search in Solr - very 
helpfull for people who would like to include some external search results in 
their Solr-based system.

By Federated Search i mean searching across not only distributed Solr instances 
(existing DistributedSearch in Solr) but also other kind of external search 
services.

Typical federated search process includes:
- collection selection step
- results merging
- adapters for external collections connection
- collections representations (used in collection selection and/or result 
merging)


I'm thinking about creating full solution with basic example implementation of 
each module.

Things to do that comes to my mind are:
1. federated request support in SearchHandler: the place where everything is 
tight up.
2. CollectionSelectionComponent: which should be independent, so one can use it 
separately.
3. federated search support in QueryComponent: with no hard-coded agorithms if 
it's possible.
4. Results merging rules module: as pluggable part of QueryComponent or as 
separate MergingComponent.
5. Adapter (connector) to external collection: interface and example 
implementation.
6. Collections representation: interface and default implementation: Used to 
store informations about indexes/collections.


The typical use case would look like this:
- user sends search request
- Solr decides to which indexes delegate the request (collection selection): 
for example by comparing user's query with collection representations.
- Solr decides how many and which documents get from each collection (merge 
rules): for example by using previous step results.
- Solr sends user's query to collections (Solr instances and/or external 
collections through dedicated adapters)
- Solr merges and retuns the results.


Design requirements:
- lightweight implementation
- designed as Solr feature, not as something on top of Solr or as Solr extension
- easy to use and customize out of the box
- allow for extension/reimplementation by users


> Federated sea

[jira] [Updated] (SOLR-3799) Federated search support - include documents from external collections in Solr search results.

2012-09-06 Thread Jacek Plebanek (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-3799?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jacek Plebanek updated SOLR-3799:
-

Description: 
Following discussion on dev@lucene.apache.org 
(http://mail-archives.apache.org/mod_mbox/lucene-dev/201205.mbox/%3caba41fbe-72a8-467e-bf33-3d0ca1ed8...@cominvent.com%3E
 , 
http://mail-archives.apache.org/mod_mbox/lucene-dev/201208.mbox/%3C1345800198.2303.68.camel@oo%3E)
 I would like to introduce the idea of Federated Search in Solr.

It would be nice to have support for real Federated Search in Solr - very 
helpful for people who would like to include some external search results in 
their Solr-based system.

By Federated Search I mean searching across not only distributed Solr instances 
(the existing DistributedSearch in Solr) but also other kinds of external search 
services.

A typical federated search process includes:
- a collection selection step
- results merging
- adapters for connecting to external collections
- collections representations (used in collection selection and/or result 
merging)


I'm thinking about creating a full solution with a basic example implementation 
of each module.

Things to do that come to mind are:
1. federated request support in SearchHandler: the place where everything is 
tied together.
2. CollectionSelectionComponent: which should be independent, so one can use it 
separately.
3. federated search support in QueryComponent: with no hard-coded algorithms if 
possible.
4. Results merging rules module: as a pluggable part of QueryComponent or as a 
separate MergingComponent.
5. Adapter (connector) to an external collection: interface and example 
implementation.
6. Collections representation: interface and default implementation, used to 
store information about indexes/collections.


The typical use case would look like this:
- the user sends a search request
- Solr decides to which indexes to delegate the request (collection selection): 
for example by comparing the user's query with the collection representations.
- Solr decides how many and which documents to get from each collection (merge 
rules): for example by using the previous step's results.
- Solr sends the user's query to the collections (Solr instances and/or external 
collections through dedicated adapters)
- Solr merges and returns the results.


Design requirements:
- lightweight implementation
- designed as a Solr feature, not as something on top of Solr or as a Solr 
extension
- easy to use and customize out of the box
- allow for extension/reimplementation by users

  was:
Following discussion on dev@lucene.apache.org ([May 2012] 
http://mail-archives.apache.org/mod_mbox/lucene-dev/201205.mbox/%3caba41fbe-72a8-467e-bf33-3d0ca1ed8...@cominvent.com%3E
 , [August 2012] 
http://mail-archives.apache.org/mod_mbox/lucene-dev/201208.mbox/%3C1345800198.2303.68.camel@oo%3E)
 i would like to introduce the idea of Federated Search in Solr.

It would be nice to have support for real Federated Search in Solr - very 
helpfull for people who would like to include some external search results in 
their Solr-based system.

By Federated Search i mean searching across not only distributed Solr instances 
(existing DistributedSearch in Solr) but also other kind of external search 
services.

Typical federated search process includes:
- collection selection step
- results merging
- adapters for external collections connection
- collections representations (used in collection selection and/or result 
merging)


I'm thinking about creating full solution with basic example implementation of 
each module.

Things to do that comes to my mind are:
1. federated request support in SearchHandler: the place where everything is 
tight up.
2. CollectionSelectionComponent: which should be independent, so one can use it 
separately.
3. federated search support in QueryComponent: with no hard-coded agorithms if 
it's possible.
4. Results merging rules module: as pluggable part of QueryComponent or as 
separate MergingComponent.
5. Adapter (connector) to external collection: interface and example 
implementation.
6. Collections representation: interface and default implementation: Used to 
store informations about indexes/collections.


The typical use case would look like this:
- user sends search request
- Solr decides to which indexes delegate the request [collection selection]: 
for example by comparing user's query with collection representations.
- Solr decides how many and which documents get from each collection [merge 
rules]: for example by using previous step results.
- Solr sends user's query to collections (Solr instances and/or external 
collections through dedicated adapters)
- Solr merges and retuns the results.


Design requirements:
- lightweight implementation
- designed as Solr feature, not as something on top of Solr or as Solr extension
- easy to use and customize out of the box
- allow for extension/reimplementation by users


> Federated search support -

[jira] [Created] (SOLR-3799) Federated search support - include documents from external collections in Solr search results.

2012-09-06 Thread Jacek Plebanek (JIRA)
Jacek Plebanek created SOLR-3799:


 Summary: Federated search support - include documents from 
external collections in Solr search results.
 Key: SOLR-3799
 URL: https://issues.apache.org/jira/browse/SOLR-3799
 Project: Solr
  Issue Type: New Feature
  Components: SearchComponents - other
Affects Versions: 5.0
Reporter: Jacek Plebanek
 Fix For: 5.0


Following discussion on dev@lucene.apache.org ([May 2012] 
http://mail-archives.apache.org/mod_mbox/lucene-dev/201205.mbox/%3caba41fbe-72a8-467e-bf33-3d0ca1ed8...@cominvent.com%3E
 , [August 2012] 
http://mail-archives.apache.org/mod_mbox/lucene-dev/201208.mbox/%3C1345800198.2303.68.camel@oo%3E)
 I would like to introduce the idea of Federated Search in Solr.

It would be nice to have support for real Federated Search in Solr - very 
helpful for people who would like to include some external search results in 
their Solr-based system.

By Federated Search I mean searching across not only distributed Solr instances 
(the existing DistributedSearch in Solr) but also other kinds of external search 
services.

A typical federated search process includes:
- a collection selection step
- results merging
- adapters for connecting to external collections
- collections representations (used in collection selection and/or result 
merging)


I'm thinking about creating a full solution with a basic example implementation 
of each module.

Things to do that come to mind are:
1. federated request support in SearchHandler: the place where everything is 
tied together.
2. CollectionSelectionComponent: which should be independent, so one can use it 
separately.
3. federated search support in QueryComponent: with no hard-coded algorithms if 
possible.
4. Results merging rules module: as a pluggable part of QueryComponent or as a 
separate MergingComponent.
5. Adapter (connector) to an external collection: interface and example 
implementation.
6. Collections representation: interface and default implementation, used to 
store information about indexes/collections.


The typical use case would look like this:
- the user sends a search request
- Solr decides to which indexes to delegate the request [collection selection]: 
for example by comparing the user's query with the collection representations.
- Solr decides how many and which documents to get from each collection [merge 
rules]: for example by using the previous step's results.
- Solr sends the user's query to the collections (Solr instances and/or external 
collections through dedicated adapters)
- Solr merges and returns the results.


Design requirements:
- lightweight implementation
- designed as a Solr feature, not as something on top of Solr or as a Solr 
extension
- easy to use and customize out of the box
- allow for extension/reimplementation by users

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-4364) MMapDirectory makes too many maps for CFS

2012-09-06 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-4364?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13449591#comment-13449591
 ] 

Uwe Schindler commented on LUCENE-4364:
---

I forgot to mention: my patch also fixes a "small bug" in MMapIndexInput: if 
you closed a clone, this did not change its state at all; the close() method 
of clones was completely ignored. I changed that to at least unset the buffers 
for this clone/slice, so later calls throw AlreadyClosed. Clones of that one 
(previously created) are not affected. The top-level cleanup of course unsets 
all clones again, but that's fine. Theoretically, "closed" clones could be 
removed from the map, but that is unneeded, because they will disappear 
automatically (as soon as they are unreferenced).
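
A much simplified sketch of the clone-close behaviour described above (purely 
illustrative; the real code lives in ByteBufferIndexInput and throws 
AlreadyClosedException):

{code}
import java.nio.ByteBuffer;

// Simplified sketch: closing a clone only drops that clone's buffer references,
// so later reads on it fail, while other clones and the master keep working.
class CloneCloseSketch {
  private ByteBuffer[] buffers;

  CloneCloseSketch(ByteBuffer[] buffers) { this.buffers = buffers; }

  void close() {
    buffers = null;                      // per-clone: just unset the references
  }

  byte readByte(int bufIndex, int pos) {
    if (buffers == null) {
      throw new IllegalStateException("Already closed");  // stand-in for AlreadyClosedException
    }
    return buffers[bufIndex].get(pos);
  }
}
{code}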

> MMapDirectory makes too many maps for CFS
> -
>
> Key: LUCENE-4364
> URL: https://issues.apache.org/jira/browse/LUCENE-4364
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Robert Muir
> Attachments: LUCENE-4364.patch, LUCENE-4364.patch, LUCENE-4364.patch, 
> LUCENE-4364.patch, LUCENE-4364.patch
>
>
> While looking at LUCENE-4123, i thought about this:
> I don't like how mmap creates a separate mapping for each CFS slice, to me 
> this is way too many mmapings.
> Instead I think its slicer should map the .CFS file, and then when asked for 
> an offset+length slice of that, it should be using .duplicate()d buffers of 
> that single master mapping.
> then when you close the .CFS it closes that one mapping.
> this is probably too scary for 4.0, we should take our time, but I think we 
> should do it.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (LUCENE-4340) Move all codecs but Lucene40 to a module

2012-09-06 Thread Adrien Grand (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-4340?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adrien Grand resolved LUCENE-4340.
--

Resolution: Fixed
  Assignee: Adrien Grand

> Move all codecs but Lucene40 to a module
> 
>
> Key: LUCENE-4340
> URL: https://issues.apache.org/jira/browse/LUCENE-4340
> Project: Lucene - Core
>  Issue Type: Task
>  Components: core/codecs, general/build
>Reporter: Adrien Grand
>Assignee: Adrien Grand
> Fix For: 5.0, 4.0
>
> Attachments: LUCENE-4340-bloom.patch, LUCENE-4340.patch, 
> LUCENE-4340.patch, LUCENE-4340.sh
>
>
> We should move all concrete postings formats and codecs but Lucene40 to a 
> module.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-4340) Move all codecs but Lucene40 to a module

2012-09-06 Thread Adrien Grand (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-4340?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13449590#comment-13449590
 ] 

Adrien Grand commented on LUCENE-4340:
--

Thanks for your comments, Uwe. I just committed (r1381504 and r1381512 on 
trunk, r1381541 on branch 4.x).

> Move all codecs but Lucene40 to a module
> 
>
> Key: LUCENE-4340
> URL: https://issues.apache.org/jira/browse/LUCENE-4340
> Project: Lucene - Core
>  Issue Type: Task
>  Components: core/codecs, general/build
>Reporter: Adrien Grand
> Fix For: 5.0, 4.0
>
> Attachments: LUCENE-4340-bloom.patch, LUCENE-4340.patch, 
> LUCENE-4340.patch, LUCENE-4340.sh
>
>
> We should move all concrete postings formats and codecs but Lucene40 to a 
> module.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (LUCENE-4364) MMapDirectory makes too many maps for CFS

2012-09-06 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-4364?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13449583#comment-13449583
 ] 

Uwe Schindler edited comment on LUCENE-4364 at 9/6/12 10:51 PM:


I have nothing against that, but it is *not* completely untested, look at the 
code :-) When you clone a slice, the offset of the original must be applied by 
the code when calling the private slice method - and that is done now. This was 
necessary to unify the clone and slice methods into one common code path; 
cloning is now just a slice of 0..length.

But I agree we should not support this until we remove the IndexInputSlicer 
class altogether and make every IndexInput sliceable (in a later issue). There 
is no chance that user code can call this method, as it is private to 
ByteBufferIndexInput.

  was (Author: thetaphi):
I have nothing against that, but it is *not* completely untested, look at 
the code :-) When you clone a slice, the offset of the original must be applied 
by the code when calling the private slice method - and that is done now. This 
was necessary to unify the clone and slice method to one common code.

But I agree we should not support this until we remove the IndexInputSlicer 
class at all and make every IndexInput sliceable (in a later issue). There is 
no chance that user code can call this method, as it is private to 
ByteBufferIndexInput.
  
> MMapDirectory makes too many maps for CFS
> -
>
> Key: LUCENE-4364
> URL: https://issues.apache.org/jira/browse/LUCENE-4364
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Robert Muir
> Attachments: LUCENE-4364.patch, LUCENE-4364.patch, LUCENE-4364.patch, 
> LUCENE-4364.patch, LUCENE-4364.patch
>
>
> While looking at LUCENE-4123, i thought about this:
> I don't like how mmap creates a separate mapping for each CFS slice, to me 
> this is way too many mmapings.
> Instead I think its slicer should map the .CFS file, and then when asked for 
> an offset+length slice of that, it should be using .duplicate()d buffers of 
> that single master mapping.
> then when you close the .CFS it closes that one mapping.
> this is probably too scary for 4.0, we should take our time, but I think we 
> should do it.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-4364) MMapDirectory makes too many maps for CFS

2012-09-06 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-4364?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13449583#comment-13449583
 ] 

Uwe Schindler commented on LUCENE-4364:
---

I have nothing against that, but it is *not* completely untested, look at the 
code :-) When you clone a slice, the offset of the original must be applied by 
the code when calling the private slice method - and that is done now. This was 
necessary to unify the clone and slice methods into one common code path.

But I agree we should not support this until we remove the IndexInputSlicer 
class altogether and make every IndexInput sliceable (in a later issue). There 
is no chance that user code can call this method, as it is private to 
ByteBufferIndexInput.

> MMapDirectory makes too many maps for CFS
> -
>
> Key: LUCENE-4364
> URL: https://issues.apache.org/jira/browse/LUCENE-4364
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Robert Muir
> Attachments: LUCENE-4364.patch, LUCENE-4364.patch, LUCENE-4364.patch, 
> LUCENE-4364.patch, LUCENE-4364.patch
>
>
> While looking at LUCENE-4123, i thought about this:
> I don't like how mmap creates a separate mapping for each CFS slice, to me 
> this is way too many mmapings.
> Instead I think its slicer should map the .CFS file, and then when asked for 
> an offset+length slice of that, it should be using .duplicate()d buffers of 
> that single master mapping.
> then when you close the .CFS it closes that one mapping.
> this is probably too scary for 4.0, we should take our time, but I think we 
> should do it.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-3793) duplicate (deleted) documents included in result set when using field faceting with fq

2012-09-06 Thread Yonik Seeley (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-3793?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yonik Seeley updated SOLR-3793:
---

Attachment: SOLR-3793.patch

Here's a patch to fix the problem.

The issue was that when UnInvertedField faceting cached big terms as filters, it 
failed to set/use liveDocs. Later, an "fq" retrieved that filter from the cache 
and used it as liveDocs, bringing deleted docs back from the dead.
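
Conceptually the fix amounts to something like the following (a sketch only, 
with java.util.BitSet standing in for Solr's DocSet/Bits types; this is not the 
actual patch):

{code}
import java.util.BitSet;

class BigTermFilterSketch {
  // Intersect the matching docs with liveDocs *before* the filter goes into the
  // cache, so reusing the cached filter later cannot resurrect deleted documents.
  static BitSet cacheableFilter(BitSet matchingDocs, BitSet liveDocs) {
    BitSet filter = (BitSet) matchingDocs.clone();
    filter.and(liveDocs);
    return filter;
  }
}
{code}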

> duplicate (deleted) documents included in result set when using field 
> faceting with fq
> --
>
> Key: SOLR-3793
> URL: https://issues.apache.org/jira/browse/SOLR-3793
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 4.0-BETA
>Reporter: Hoss Man
>Assignee: Yonik Seeley
>Priority: Blocker
> Fix For: 4.0
>
> Attachments: SOLR-3793.patch
>
>
> Günter Hipler reported on the solr-user mailing list that he was seeing 
> inconsistencies in facet counts compared to the numFound when drilling down 
> onto those facets (using "fq") - in particular: when adding an "fq" such as 
> `fq={!term+f%3DnavNetwork}nebis`, the resulting numFound was higher then the 
> number of docs reported by the facet constraint for nebis in the base request.
> I've been able to trivially reproduce this using the example data from Solr 
> 4.0-BETA, trunk@r1381400, and branch_4x@r1381400 (details in comment to 
> follow)
> Important things to note from Günter's email thread with his assessment of 
> the problem...
> https://mail-archives.apache.org/mod_mbox/lucene-solr-user/201208.mbox/%3ccam_u7jfdpnrgfmwmntnachcdcjw4yb-rlbbvrw_wp_jdob_...@mail.gmail.com%3E
> bq. The behaviour is not consistent. Some of the facets provide the correct 
> result, some not.  What I can't say for sure: The behaviour was correct (if 
> I'm not wrong) once the whole index was newly created. After running some 
> updates I got these results.
> bq. I'm going to setup a new index with the Lucene 4.0 version from March (to 
> be more exactly: it's version 4.0-2012-03-09_11-29-20) to see what are the 
> results even in case of frequent updates ... the version deployed in march 
> doesn't contain the error I now come across in Beta4.0 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-4364) MMapDirectory makes too many maps for CFS

2012-09-06 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-4364?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13449557#comment-13449557
 ] 

Robert Muir commented on LUCENE-4364:
-

We should put the IllegalStateException back. There is no sense in pretending 
to support slices from clones; it's totally untested and not useful.

If it's useful somehow in the future and we have good tests, that's different, 
but currently it's useless and untested.

> MMapDirectory makes too many maps for CFS
> -
>
> Key: LUCENE-4364
> URL: https://issues.apache.org/jira/browse/LUCENE-4364
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Robert Muir
> Attachments: LUCENE-4364.patch, LUCENE-4364.patch, LUCENE-4364.patch, 
> LUCENE-4364.patch, LUCENE-4364.patch
>
>
> While looking at LUCENE-4123, i thought about this:
> I don't like how mmap creates a separate mapping for each CFS slice, to me 
> this is way too many mmapings.
> Instead I think its slicer should map the .CFS file, and then when asked for 
> an offset+length slice of that, it should be using .duplicate()d buffers of 
> that single master mapping.
> then when you close the .CFS it closes that one mapping.
> this is probably too scary for 4.0, we should take our time, but I think we 
> should do it.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-4364) MMapDirectory makes too many maps for CFS

2012-09-06 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-4364?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13449555#comment-13449555
 ] 

Robert Muir commented on LUCENE-4364:
-

It was an intermediate step. Unlike the unmap hack, it's a little more broken 
for chunkSize to be a setter.

With one of the previous patches, I noticed that if you were to change this at 
certain times it could be bad news, so I wanted to prevent the possibility of 
problems... probably not an issue now.

> MMapDirectory makes too many maps for CFS
> -
>
> Key: LUCENE-4364
> URL: https://issues.apache.org/jira/browse/LUCENE-4364
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Robert Muir
> Attachments: LUCENE-4364.patch, LUCENE-4364.patch, LUCENE-4364.patch, 
> LUCENE-4364.patch, LUCENE-4364.patch
>
>
> While looking at LUCENE-4123, i thought about this:
> I don't like how mmap creates a separate mapping for each CFS slice, to me 
> this is way too many mmapings.
> Instead I think its slicer should map the .CFS file, and then when asked for 
> an offset+length slice of that, it should be using .duplicate()d buffers of 
> that single master mapping.
> then when you close the .CFS it closes that one mapping.
> this is probably too scary for 4.0, we should take our time, but I think we 
> should do it.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-3734) Forever loop in schema browser

2012-09-06 Thread Stefan Matheis (steffkes) (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-3734?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13449544#comment-13449544
 ] 

Stefan Matheis (steffkes) commented on SOLR-3734:
-

bq. It seems, if i'm understanding correctly, that the root problem is that the 
UI expects all copySources returned by show=schema to be real field names that 
were already returned by the previous numTerms=0 request - but this is not 
guaranteed, nor is it abnormal to have these copyFields based on dynamicFields.

steffkes 0 : hoss 1 :/ Exactly what you said - I didn't look closely enough .. 
I'll rework the patch and try to get your two points into it!

> Forever loop in schema browser
> --
>
> Key: SOLR-3734
> URL: https://issues.apache.org/jira/browse/SOLR-3734
> Project: Solr
>  Issue Type: Bug
>  Components: Schema and Analysis, web gui
>Reporter: Lance Norskog
> Attachments: SOLR-3734.patch, 
> SOLR-3734_schema_browser_blocks_solr_conf_dir.zip
>
>
> When I start Solr with the attached conf directory, and hit the Schema 
> Browser, the loading circle spins permanently. 
> I don't know if the problem is in the UI or in Solr. The UI does not display 
> the Ajax solr calls, and I don't have a debugging proxy.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-3784) Schema-Browser hangs because of similarity

2012-09-06 Thread Stefan Matheis (steffkes) (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-3784?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13449542#comment-13449542
 ] 

Stefan Matheis (steffkes) commented on SOLR-3784:
-

That's fine, go ahead Greg

> Schema-Browser hangs because of similarity
> --
>
> Key: SOLR-3784
> URL: https://issues.apache.org/jira/browse/SOLR-3784
> Project: Solr
>  Issue Type: Bug
>  Components: web gui
>Reporter: Stefan Matheis (steffkes)
>Assignee: Greg Bowyer
> Attachments: SOLR-3784.patch, SOLR-3784.patch
>
>
> Opening the Schema-Browser with the Default Configuration, switching the 
> selection to type=int throws an error:
> {code}Uncaught TypeError: Cannot call method 'esc' of undefined // 
> schema-browser.js:893{code}
> On the first Look, this was introduced by SOLR-3572 .. right? 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-3791) SortedMapBackedCache.java throws NullPointerException

2012-09-06 Thread Steffen Moelter (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-3791?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steffen Moelter updated SOLR-3791:
--

Affects Version/s: 4.0-BETA

> SortedMapBackedCache.java throws NullPointerException
> -
>
> Key: SOLR-3791
> URL: https://issues.apache.org/jira/browse/SOLR-3791
> Project: Solr
>  Issue Type: Bug
>  Components: contrib - DataImportHandler
>Affects Versions: 3.6.1, 4.0-BETA
> Environment: Debian
>Reporter: Steffen Moelter
> Attachments: SOLR-3791.patch
>
>
> As datasource is a mysql via jdbc configured in the DIH. There are some sql 
> statements in entities, that delivers NULL values in some fields.
> When this entities do have processor="CachedSqlEntityProcessor", a 
> NullPointerException will be thrown. I Tried it on different machines, with 
> different copies of the index.
> e.g.:
> <entity name="locator_ids"
> query="
>   SELECT l1.id AS l1id, l2.id AS l2id, l3.id AS l3id, l4.id AS l4id, 
> l5.id AS l5id, l6.id AS l6id, l7.id AS l7id FROM locators l1
> LEFT JOIN locators l2 ON l1.parent_id = l2.id
>  and so on delivers a result like:
> +---+--+--+--+--+--+--+
> | l1id  | l2id | l3id | l4id | l5id | l6id | l7id |
> +---+--+--+--+--+--+--+
> | 21843 |  855 |  223 |   66 |   12 |1 | NULL |
> +---+--+--+--+--+--+--+
> The SortedMapBackedCache throws the NullPointer.
> Staktrace:
> java.lang.NullPointerException
>   at java.util.TreeMap.getEntry(TreeMap.java:341)
>   at java.util.TreeMap.get(TreeMap.java:272)
>   at 
> org.apache.solr.handler.dataimport.SortedMapBackedCache.add(SortedMapBackedCache.java:57)
>   at 
> org.apache.solr.handler.dataimport.DIHCacheSupport.populateCache(DIHCacheSupport.java:124)
>   at 
> org.apache.solr.handler.dataimport.DIHCacheSupport.getSimpleCacheData(DIHCacheSupport.java:199)
>   at 
> org.apache.solr.handler.dataimport.DIHCacheSupport.getCacheData(DIHCacheSupport.java:147)
>   at 
> org.apache.solr.handler.dataimport.EntityProcessorBase.getNext(EntityProcessorBase.java:132)
>   at 
> org.apache.solr.handler.dataimport.SqlEntityProcessor.nextRow(SqlEntityProcessor.java:75)
>   at 
> org.apache.solr.handler.dataimport.EntityProcessorWrapper.pullRow(EntityProcessorWrapper.java:330)
>   at 
> org.apache.solr.handler.dataimport.EntityProcessorWrapper.nextRow(EntityProcessorWrapper.java:296)
>   at 
> org.apache.solr.handler.dataimport.DocBuilder.buildDocument(DocBuilder.java:683)
> I had to give an onError="continue" to the entity, in order to get the proper 
> stacktrace. Leaving out the onError property (default=Abort) suppresses the 
> stacktrace in EntityProcessorWrapper.java line 332   
> The DIH version 3.5.0 is not affected, works fine for me

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-3791) SortedMapBackedCache.java throws NullPointerException

2012-09-06 Thread Steffen Moelter (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-3791?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13449538#comment-13449538
 ] 

Steffen Moelter commented on SOLR-3791:
---

Hi James,
your patch didn't work as-is for me, so I changed it to the following

(starting at line 57 in SortedMapBackedCache):

  // silently ignore if this row has a null key...
  if (pk == null) {
    return;
  }
  List<Map<String, Object>> thisKeysRecs = theMap.get(pk);
  if (thisKeysRecs == null) {

This seems to work fine on my machines.
I don't know how to generate a patch, so I can't attach one here.
Solr 4.0.0-BETA seems to be affected as well; I started the tests yesterday.

Thanks for your patch, it helped me a lot.
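
For readers following along, here is a minimal, self-contained sketch of where such a null-key guard would sit in a SortedMap-backed DIH row cache. The class and field names below are illustrative stand-ins, not the actual SortedMapBackedCache source; only the guard mirrors the change discussed above:

{code:java}
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.SortedMap;
import java.util.TreeMap;

/** Illustrative stand-in for a DIH row cache keyed by a primary-key column. */
public class NullKeyTolerantRowCache {
  private final SortedMap<Object, List<Map<String, Object>>> theMap =
      new TreeMap<Object, List<Map<String, Object>>>();
  private final String primaryKeyName;

  public NullKeyTolerantRowCache(String primaryKeyName) {
    this.primaryKeyName = primaryKeyName;
  }

  public void add(Map<String, Object> rec) {
    Object pk = rec.get(primaryKeyName);
    // The guard from the comment above: TreeMap.get(null) throws
    // NullPointerException, so rows whose key column is NULL (e.g. produced
    // by a LEFT JOIN) are silently skipped instead of aborting the import.
    if (pk == null) {
      return;
    }
    List<Map<String, Object>> thisKeysRecs = theMap.get(pk);
    if (thisKeysRecs == null) {
      thisKeysRecs = new ArrayList<Map<String, Object>>();
      theMap.put(pk, thisKeysRecs);
    }
    thisKeysRecs.add(rec);
  }
}
{code}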



> SortedMapBackedCache.java throws NullPointerException
> -
>
> Key: SOLR-3791
> URL: https://issues.apache.org/jira/browse/SOLR-3791
> Project: Solr
>  Issue Type: Bug
>  Components: contrib - DataImportHandler
>Affects Versions: 3.6.1
> Environment: Debian
>Reporter: Steffen Moelter
> Attachments: SOLR-3791.patch
>
>
> The datasource is a MySQL database accessed via JDBC, configured in the DIH. Some SQL 
> statements in entities deliver NULL values in some fields.
> When these entities have processor="CachedSqlEntityProcessor", a 
> NullPointerException is thrown. I tried it on different machines, with 
> different copies of the index.
> e.g.:
> <entity name="locator_ids"
>   query="
>     SELECT l1.id AS l1id, l2.id AS l2id, l3.id AS l3id, l4.id AS l4id, 
>     l5.id AS l5id, l6.id AS l6id, l7.id AS l7id FROM locators l1
>     LEFT JOIN locators l2 ON l1.parent_id = l2.id
> ... and so on, which delivers a result like:
> +-------+------+------+------+------+------+------+
> | l1id  | l2id | l3id | l4id | l5id | l6id | l7id |
> +-------+------+------+------+------+------+------+
> | 21843 |  855 |  223 |   66 |   12 |    1 | NULL |
> +-------+------+------+------+------+------+------+
> The SortedMapBackedCache throws the NullPointerException.
> Stacktrace:
> java.lang.NullPointerException
>   at java.util.TreeMap.getEntry(TreeMap.java:341)
>   at java.util.TreeMap.get(TreeMap.java:272)
>   at 
> org.apache.solr.handler.dataimport.SortedMapBackedCache.add(SortedMapBackedCache.java:57)
>   at 
> org.apache.solr.handler.dataimport.DIHCacheSupport.populateCache(DIHCacheSupport.java:124)
>   at 
> org.apache.solr.handler.dataimport.DIHCacheSupport.getSimpleCacheData(DIHCacheSupport.java:199)
>   at 
> org.apache.solr.handler.dataimport.DIHCacheSupport.getCacheData(DIHCacheSupport.java:147)
>   at 
> org.apache.solr.handler.dataimport.EntityProcessorBase.getNext(EntityProcessorBase.java:132)
>   at 
> org.apache.solr.handler.dataimport.SqlEntityProcessor.nextRow(SqlEntityProcessor.java:75)
>   at 
> org.apache.solr.handler.dataimport.EntityProcessorWrapper.pullRow(EntityProcessorWrapper.java:330)
>   at 
> org.apache.solr.handler.dataimport.EntityProcessorWrapper.nextRow(EntityProcessorWrapper.java:296)
>   at 
> org.apache.solr.handler.dataimport.DocBuilder.buildDocument(DocBuilder.java:683)
> I had to add onError="continue" to the entity in order to get the proper 
> stacktrace. Leaving out the onError property (default=Abort) suppresses the 
> stacktrace in EntityProcessorWrapper.java line 332.
> DIH version 3.5.0 is not affected; it works fine for me.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-4364) MMapDirectory makes too many maps for CFS

2012-09-06 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-4364?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13449529#comment-13449529
 ] 

Uwe Schindler commented on LUCENE-4364:
---

One question: why did you change chunkSize to be passed in the ctor? It seems 
unrelated to this patch, or am I missing something? Since we now pass everything through 
the ctor, it is inconsistent to keep useUnmapHack as a setter (unless 
we make it a static default for all new MMapDirectories)?

> MMapDirectory makes too many maps for CFS
> -
>
> Key: LUCENE-4364
> URL: https://issues.apache.org/jira/browse/LUCENE-4364
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Robert Muir
> Attachments: LUCENE-4364.patch, LUCENE-4364.patch, LUCENE-4364.patch, 
> LUCENE-4364.patch, LUCENE-4364.patch
>
>
> While looking at LUCENE-4123, I thought about this:
> I don't like how mmap creates a separate mapping for each CFS slice; to me 
> this is way too many mmappings.
> Instead I think its slicer should map the .CFS file once, and when asked for 
> an offset+length slice of it, it should hand out .duplicate()d buffers of 
> that single master mapping.
> Then, when you close the .CFS, it closes that one mapping.
> This is probably too scary for 4.0, we should take our time, but I think we 
> should do it.
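
As a rough illustration only, here is a minimal sketch of the single-mapping-plus-duplicate() idea described above. It is not the attached patch: it ignores the chunking MMapDirectory needs for files larger than 2 GB (the chunkSize discussed in the comments), the unmap hack, and bounds checking, and every name in it (SingleMappingSlicer, etc.) is hypothetical:

{code:java}
import java.io.IOException;
import java.io.RandomAccessFile;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;

/** Hypothetical sketch: map a .cfs file once, serve slices as duplicate() views. */
public class SingleMappingSlicer {
  private final RandomAccessFile raf;
  private final ByteBuffer master;   // the one and only mapping of the whole file

  public SingleMappingSlicer(String path) throws IOException {
    raf = new RandomAccessFile(path, "r");
    master = raf.getChannel().map(FileChannel.MapMode.READ_ONLY, 0, raf.length());
  }

  /** duplicate() shares the underlying mapping; only position/limit are per-slice. */
  public ByteBuffer slice(long offset, long length) {
    ByteBuffer dup = master.duplicate();
    dup.position((int) offset);          // int cast is why real code must chunk >2 GB files
    dup.limit((int) (offset + length));
    return dup.slice();
  }

  /** Closing releases the single mapping (subject to GC, or an explicit unmap hack). */
  public void close() throws IOException {
    raf.close();
  }
}
{code}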

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-4364) MMapDirectory makes too many maps for CFS

2012-09-06 Thread Uwe Schindler (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-4364?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Uwe Schindler updated LUCENE-4364:
--

Attachment: LUCENE-4364.patch

I combined the slicing and clone methods into one. With this change it now also 
supports slicing clones or slicing slices (the offsets are added). I commented out 
the IllegalStateException, but we can re-enable it to disallow that.
I also made all methods that read from buffers final in the abstract class, to 
help Hotspot (MMapIndexInput is already final, but this makes it more 
reusable).
I also removed the class name from some exception messages like AlreadyClosed, 
because the toString() method already contains the type.

> MMapDirectory makes too many maps for CFS
> -
>
> Key: LUCENE-4364
> URL: https://issues.apache.org/jira/browse/LUCENE-4364
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Robert Muir
> Attachments: LUCENE-4364.patch, LUCENE-4364.patch, LUCENE-4364.patch, 
> LUCENE-4364.patch, LUCENE-4364.patch
>
>
> While looking at LUCENE-4123, I thought about this:
> I don't like how mmap creates a separate mapping for each CFS slice; to me 
> this is way too many mmappings.
> Instead I think its slicer should map the .CFS file once, and when asked for 
> an offset+length slice of it, it should hand out .duplicate()d buffers of 
> that single master mapping.
> Then, when you close the .CFS, it closes that one mapping.
> This is probably too scary for 4.0, we should take our time, but I think we 
> should do it.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-3668) New Admin : DataImport : Specifying Custom Parameters

2012-09-06 Thread Stefan Matheis (steffkes) (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-3668?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stefan Matheis (steffkes) resolved SOLR-3668.
-

Resolution: Fixed

Committed revision 1381518. trunk
Committed revision 1381520. 4x

> New Admin : DataImport : Specifying Custom Parameters
> -
>
> Key: SOLR-3668
> URL: https://issues.apache.org/jira/browse/SOLR-3668
> Project: Solr
>  Issue Type: Bug
>  Components: web gui
>Affects Versions: 4.0-ALPHA
> Environment: MacOS X 10.7.4, Safari 5.1.7
>Reporter: Chantal Ackermann
>Assignee: Stefan Matheis (steffkes)
> Attachments: SOLR-3668.patch
>
>
> I'm trying to run the following direct call via the WebGUI:
> http://localhost:9090/solr/issues/dataimport?command=full-import&importfile=/absolute/path/to/file.xml
> The above direct call produces this log output:
> 24.07.2012 15:18:40 org.apache.solr.handler.dataimport.XPathEntityProcessor 
> initQuery
> WARNUNG: Failed for url : /absolute/path/to/file.xml
> When giving an existing file, DIH works. But this is enough to show the 
> difference between direct call and call via WebGUI.
> Steps I do in the WebGUI:
> 0. In a multicore environment where one core is called "issues"
> 1. Open the tab of core "issues", and there the sub-item "Dataimport":
> http://localhost:9090/solr/#/issues/dataimport//dataimport
> 2. Specify a custom parameter in the text field labeled "Custom Parameters", 
> e.g. "importfile=/absolute/path/to/file.xml"
> Resulting log output:
> 24.07.2012 15:22:47 org.apache.solr.handler.dataimport.XPathEntityProcessor 
> initQuery
> WARNUNG: Failed for url : 
> java.lang.RuntimeException: java.io.FileNotFoundException: Could not find 
> file: 
> (no filename specified)
> When trying with an existing file, the same output (no filename) is logged.
> I've tried to find out how to specify the custom parameters by looking into 
> dataimport.js, but it did not help me (I did not dwell on it, though). If it 
> works by specifying the parameter in a different way, it would be great 
> if a little note were added right next to the field.
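
For what it's worth, a small sketch of issuing the same full-import with the custom parameter programmatically via SolrJ, bypassing the admin UI (assuming Solr/SolrJ 4.x and a data-config that reads the value through ${dataimporter.request.importfile}; the class name here is made up):

{code:java}
import org.apache.solr.client.solrj.SolrServer;
import org.apache.solr.client.solrj.impl.HttpSolrServer;
import org.apache.solr.client.solrj.request.QueryRequest;
import org.apache.solr.common.params.ModifiableSolrParams;

/** Hypothetical helper: trigger a DIH full-import and pass a custom request parameter. */
public class DihImportWithCustomParam {
  public static void main(String[] args) throws Exception {
    SolrServer solr = new HttpSolrServer("http://localhost:9090/solr/issues");

    ModifiableSolrParams params = new ModifiableSolrParams();
    params.set("command", "full-import");
    // Custom parameter; the data-config would reference it as
    // ${dataimporter.request.importfile}
    params.set("importfile", "/absolute/path/to/file.xml");

    QueryRequest req = new QueryRequest(params);
    req.setPath("/dataimport");   // DIH request handler registered in solrconfig.xml
    solr.request(req);
  }
}
{code}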

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-trunk-Linux-Java6-64-test-only - Build # 5060 - Still Failing!

2012-09-06 Thread builder
Build: builds.flonkings.com/job/Lucene-trunk-Linux-Java6-64-test-only/5060/

No tests ran.

Build Log:
[...truncated 14 lines...]
ERROR: Failed to update http://svn.apache.org/repos/asf/lucene/dev/nightly
org.tmatesoft.svn.core.SVNException: svn: Target path '/lucene/dev/nightly' 
does not exist
svn: REPORT of /repos/asf/!svn/vcc/default: 500 Internal Server Error 
(http://svn.apache.org)
at 
org.tmatesoft.svn.core.internal.wc.SVNErrorManager.error(SVNErrorManager.java:64)
at 
org.tmatesoft.svn.core.internal.wc.SVNErrorManager.error(SVNErrorManager.java:51)
at 
org.tmatesoft.svn.core.internal.io.dav.DAVRepository.runReport(DAVRepository.java:1284)
at 
org.tmatesoft.svn.core.internal.io.dav.DAVRepository.update(DAVRepository.java:830)
at 
org.tmatesoft.svn.core.wc.SVNUpdateClient.update(SVNUpdateClient.java:564)
at 
org.tmatesoft.svn.core.wc.SVNUpdateClient.doUpdate(SVNUpdateClient.java:401)
at 
hudson.scm.subversion.UpdateUpdater$TaskImpl.perform(UpdateUpdater.java:136)
at 
hudson.scm.subversion.WorkspaceUpdater$UpdateTask.delegateTo(WorkspaceUpdater.java:136)
at hudson.scm.SubversionSCM$CheckOutTask.perform(SubversionSCM.java:788)
at hudson.scm.SubversionSCM$CheckOutTask.invoke(SubversionSCM.java:769)
at hudson.scm.SubversionSCM$CheckOutTask.invoke(SubversionSCM.java:753)
at hudson.FilePath.act(FilePath.java:842)
at hudson.FilePath.act(FilePath.java:824)
at hudson.scm.SubversionSCM.checkout(SubversionSCM.java:743)
at hudson.scm.SubversionSCM.checkout(SubversionSCM.java:685)
at hudson.model.AbstractProject.checkout(AbstractProject.java:1249)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.defaultCheckout(AbstractBuild.java:589)
at jenkins.scm.SCMCheckoutStrategy.checkout(SCMCheckoutStrategy.java:88)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.run(AbstractBuild.java:494)
at hudson.model.Run.execute(Run.java:1488)
at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:46)
at hudson.model.ResourceController.execute(ResourceController.java:88)
at hudson.model.Executor.run(Executor.java:236)
Caused by: org.tmatesoft.svn.core.SVNErrorMessage: svn: Target path 
'/lucene/dev/nightly' does not exist
at 
org.tmatesoft.svn.core.SVNErrorMessage.create(SVNErrorMessage.java:200)
at 
org.tmatesoft.svn.core.SVNErrorMessage.create(SVNErrorMessage.java:146)
at 
org.tmatesoft.svn.core.SVNErrorMessage.create(SVNErrorMessage.java:89)
at 
org.tmatesoft.svn.core.internal.io.dav.handlers.DAVErrorHandler.endElement(DAVErrorHandler.java:72)
at 
org.tmatesoft.svn.core.internal.io.dav.handlers.BasicDAVHandler.endElement(BasicDAVHandler.java:99)
at 
com.sun.org.apache.xerces.internal.parsers.AbstractSAXParser.endElement(AbstractSAXParser.java:606)
at 
com.sun.org.apache.xerces.internal.impl.XMLDocumentFragmentScannerImpl.scanEndElement(XMLDocumentFragmentScannerImpl.java:1741)
at 
com.sun.org.apache.xerces.internal.impl.XMLDocumentFragmentScannerImpl$FragmentContentDriver.next(XMLDocumentFragmentScannerImpl.java:2898)
at 
com.sun.org.apache.xerces.internal.impl.XMLDocumentScannerImpl.next(XMLDocumentScannerImpl.java:607)
at 
com.sun.org.apache.xerces.internal.impl.XMLNSDocumentScannerImpl.next(XMLNSDocumentScannerImpl.java:116)
at 
com.sun.org.apache.xerces.internal.impl.XMLDocumentFragmentScannerImpl.scanDocument(XMLDocumentFragmentScannerImpl.java:488)
at 
com.sun.org.apache.xerces.internal.parsers.XML11Configuration.parse(XML11Configuration.java:835)
at 
com.sun.org.apache.xerces.internal.parsers.XML11Configuration.parse(XML11Configuration.java:764)
at 
com.sun.org.apache.xerces.internal.parsers.XMLParser.parse(XMLParser.java:123)
at 
com.sun.org.apache.xerces.internal.parsers.AbstractSAXParser.parse(AbstractSAXParser.java:1210)
at 
com.sun.org.apache.xerces.internal.jaxp.SAXParserImpl$JAXPSAXParser.parse(SAXParserImpl.java:568)
at 
org.tmatesoft.svn.core.internal.io.dav.http.HTTPConnection.readData(HTTPConnection.java:776)
at 
org.tmatesoft.svn.core.internal.io.dav.http.HTTPConnection.readData(HTTPConnection.java:741)
at 
org.tmatesoft.svn.core.internal.io.dav.http.HTTPConnection.readError(HTTPConnection.java:222)
at 
org.tmatesoft.svn.core.internal.io.dav.http.HTTPRequest.readError(HTTPRequest.java:290)
at 
org.tmatesoft.svn.core.internal.io.dav.http.HTTPRequest.dispatch(HTTPRequest.java:213)
at 
org.tmatesoft.svn.core.internal.io.dav.http.HTTPConnection._request(HTTPConnection.java:379)
at 
org.tmatesoft.svn.core.internal.io.dav.http.HTTPConnection.request(HTTPConnection.java:292)
at 
org.tmatesoft.svn.core.internal.io.dav.http.HTTPConnection.request(HTTPConnection.java:283)
at 
org.tmatesoft.svn.core.internal.io.dav.http.HTTPConnection

[JENKINS] Lucene-trunk-Linux-Java7-64-test-only - Build # 5053 - Still Failing!

2012-09-06 Thread builder
Build: builds.flonkings.com/job/Lucene-trunk-Linux-Java7-64-test-only/5053/

No tests ran.

Build Log:
[...truncated 14 lines...]
ERROR: Failed to update http://svn.apache.org/repos/asf/lucene/dev/nightly
org.tmatesoft.svn.core.SVNException: svn: Target path '/lucene/dev/nightly' 
does not exist
svn: REPORT of /repos/asf/!svn/vcc/default: 500 Internal Server Error 
(http://svn.apache.org)

[JENKINS] Lucene-trunk-Linux-Java6-64-test-only - Build # 5059 - Still Failing!

2012-09-06 Thread builder
Build: builds.flonkings.com/job/Lucene-trunk-Linux-Java6-64-test-only/5059/

No tests ran.

Build Log:
[...truncated 14 lines...]
ERROR: Failed to update http://svn.apache.org/repos/asf/lucene/dev/nightly
org.tmatesoft.svn.core.SVNException: svn: Target path '/lucene/dev/nightly' 
does not exist
svn: REPORT of /repos/asf/!svn/vcc/default: 500 Internal Server Error 
(http://svn.apache.org)

[JENKINS] Lucene-trunk-Linux-Java7-64-test-only - Build # 5052 - Still Failing!

2012-09-06 Thread builder
Build: builds.flonkings.com/job/Lucene-trunk-Linux-Java7-64-test-only/5052/

No tests ran.

Build Log:
[...truncated 14 lines...]
ERROR: Failed to update http://svn.apache.org/repos/asf/lucene/dev/nightly
org.tmatesoft.svn.core.SVNException: svn: Target path '/lucene/dev/nightly' 
does not exist
svn: REPORT of /repos/asf/!svn/vcc/default: 500 Internal Server Error 
(http://svn.apache.org)

[JENKINS] Lucene-trunk-Linux-Java6-64-test-only - Build # 5058 - Still Failing!

2012-09-06 Thread builder
Build: builds.flonkings.com/job/Lucene-trunk-Linux-Java6-64-test-only/5058/

No tests ran.

Build Log:
[...truncated 14 lines...]
ERROR: Failed to update http://svn.apache.org/repos/asf/lucene/dev/nightly
org.tmatesoft.svn.core.SVNException: svn: Target path '/lucene/dev/nightly' 
does not exist
svn: REPORT of /repos/asf/!svn/vcc/default: 500 Internal Server Error 
(http://svn.apache.org)

[JENKINS] Lucene-trunk-Linux-Java7-64-test-only - Build # 5051 - Failure!

2012-09-06 Thread builder
Build: builds.flonkings.com/job/Lucene-trunk-Linux-Java7-64-test-only/5051/

No tests ran.

Build Log:
[...truncated 25 lines...]
ERROR: Failed to update http://svn.apache.org/repos/asf/lucene/dev/nightly
org.tmatesoft.svn.core.SVNException: svn: Target path '/lucene/dev/nightly' 
does not exist
svn: REPORT of /repos/asf/!svn/vcc/default: 500 Internal Server Error 
(http://svn.apache.org)

[JENKINS] Lucene-trunk-Linux-Java6-64-test-only - Build # 5057 - Failure!

2012-09-06 Thread builder
Build: builds.flonkings.com/job/Lucene-trunk-Linux-Java6-64-test-only/5057/

No tests ran.

Build Log:
[...truncated 26 lines...]
ERROR: Failed to update http://svn.apache.org/repos/asf/lucene/dev/nightly
org.tmatesoft.svn.core.SVNException: svn: Target path '/lucene/dev/nightly' 
does not exist
svn: REPORT of /repos/asf/!svn/vcc/default: 500 Internal Server Error 
(http://svn.apache.org)
