Re: Outstanding issues for 3.0.3

2012-07-27 Thread Christopher Currens
Itamar,

Where do we stand on the clean up now?  Is there anything in particular
that you're doing that you'd like help with?  I have some free time today
and am eager to get this version released.


Thanks,
Christopher


On Sat, Jul 21, 2012 at 1:02 PM, Prescott Nasser geobmx...@hotmail.com wrote:


 Alright, I'll hold off a bit.

 
  Date: Sat, 21 Jul 2012 22:59:32 +0300
  Subject: Re: Outstanding issues for 3.0.3
  From: ita...@code972.com
  To: lucene-net-u...@lucene.apache.org
  CC: lucene-net-dev@lucene.apache.org
 
  Actually there was some clean up work I started doing and would want to
  complete, and also sign off on the suspected corruption issue we raised.
  I'm afraid I won't have much time this week to properly do all that, but
  I'll keep you posted.
 
   On Sat, Jul 21, 2012 at 10:20 PM, Prescott Nasser geobmx...@hotmail.com wrote:

    Alright, latest patch fixed what could be done with the CLS issues at
    present. With that, I think we are ready to roll with a release. If people
    could please take some time to run all the tests as well as whatever other
    tests they might run. We've had some issues with tests only happening on
    some systems, so I want to make sure we have those bases covered. Unless
    there is anything else that should be done, I'll leave everyone a week to
    run their tests. Next Saturday I will tag the trunk and cut a release with
    both 3.5 and 4.0 binaries. Great work everyone. ~P
Date: Mon, 9 Jul 2012 18:02:30 -0700
Subject: Re: Outstanding issues for 3.0.3
From: currens.ch...@gmail.com
To: lucene-net-dev@lucene.apache.org

I can set a different build target, but I can't set the actual framework to
3.5 without doing it for all build configurations. On top of that, 3.5 needs
System.Core to be referenced, which is done automatically in .NET 4 (I'm not
sure if MSBuild v4 does it automatically?). I did kind of get it working by
putting a TargetFrameworkVersion tag of 4.0 in the Debug and Release
configurations and 3.5 in the Debug 3.5 and Release 3.5 configurations, but
that's a little... well, difficult to maintain by hand, since Visual Studio
doesn't allow you to set different framework versions per configuration, and
Visual Studio seemed to be having trouble with references, since both
frameworks were being referenced.
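
A minimal sketch of the hand-edited multi-targeting described above. The
configuration names follow the thread; the property groups and the explicit
System.Core reference are illustrative, not taken from the actual Lucene.Net
project files:

  <PropertyGroup Condition=" '$(Configuration)' == 'Debug' Or '$(Configuration)' == 'Release' ">
    <TargetFrameworkVersion>v4.0</TargetFrameworkVersion>
  </PropertyGroup>
  <PropertyGroup Condition=" '$(Configuration)' == 'Debug 3.5' Or '$(Configuration)' == 'Release 3.5' ">
    <TargetFrameworkVersion>v3.5</TargetFrameworkVersion>
  </PropertyGroup>
  <!-- Per the thread, the 3.5 build needs System.Core referenced explicitly -->
  <ItemGroup Condition=" '$(TargetFrameworkVersion)' == 'v3.5' ">
    <Reference Include="System.Core" />
  </ItemGroup>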
   
 On Mon, Jul 9, 2012 at 5:57 PM, Prescott Nasser geobmx...@hotmail.com wrote:

  What do you mean it doesn't work at the project level? I created a different
  build target NET35 and then we had Debug and Release still, and that seemed
  to work for me. But I feel like I'm missing something in your explanation.
  Good work though!
   Date: Mon, 9 Jul 2012 17:51:36 -0700
   Subject: Re: Outstanding issues for 3.0.3
   From: currens.ch...@gmail.com
   To: lucene-net-dev@lucene.apache.org

   I've got it working, compiling and all tests passing... The only caveat is
   that I'm not sure of the best way to multi-target. It doesn't really work
   on a project level, so you'd have to create two separate projects, one for
   .NET 4 and the other for 3.5. To aid me, I wrote a small tool that creates
   copies of all of the 4.0 projects and solutions to work against the 3.5
   framework. Anyone have experience with multi-targeting like this?
 
 
  Thanks,
  Christopher
 
   On Mon, Jul 9, 2012 at 11:29 AM, Prescott Nasser geobmx...@hotmail.com wrote:
 
  
   Have at it.
  
   
Date: Mon, 9 Jul 2012 11:20:06 -0700
Subject: Re: Outstanding issues for 3.0.3
From: currens.ch...@gmail.com
To: lucene-net-dev@lucene.apache.org
   
 If it's alright with you, I'll work on it a little bit in that branch, and
 see what kind of progress I can make, since I have some time right now.

 On Mon, Jul 9, 2012 at 11:06 AM, Prescott Nasser geobmx...@hotmail.com wrote:
   

  I made some progress on 480 - it's checked into the 3.5 branch. There is
  more work to be done; we could potentially move it to 3.0.3, but I put it
  into 3.5 because I felt that we were closer to having this released, and
  adding those changes would mean a fair amount of change so close to the
  release. I can add it back to the schedule, though I'm mostly just doing
  administrative work for the next two weeks - I have a few things I have to
  take care of.

 
  Date: Mon, 9 Jul 2012 10:21:42 -0700
  Subject: Re: Outstanding issues for 3.0.3
  From: currens.ch...@gmail.com
  To: lucene-net-dev@lucene.apache.org
 
  

Re: Outstanding issues for 3.0.3

2012-07-27 Thread Christopher Currens
I've got ReSharper and can help with that if you'd like to coordinate it.
I can take one or some of the contrib projects, or part of the main library,
or *shudder* any of the test libraries.  The code has needed some cleaning up
for a while, and some of the clean up work is an optimization at some levels,
so I'm definitely okay with spending some time doing that.  I'm okay with
waiting longer as long as something is getting done.


Thanks,
Christopher

 On Fri, Jul 27, 2012 at 9:00 AM, Itamar Syn-Hershko ita...@code972.com wrote:

 The cleanup consists mainly of going file by file with ReSharper and trying
 to get them as green as possible. Making a lot of fields readonly, removing
 unused vars and stuff like that. There are still loads of files left.

 I was also hoping to get to updating the spatial module with some recent
 updates, and to also support polygon searches. But that may take a bit more
 time, so it's really up to you guys (or we can open a vote for it).

 

[jira] [Resolved] (LUCENE-3747) Support Unicode 6.1.0

2012-07-27 Thread Steven Rowe (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-3747?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steven Rowe resolved LUCENE-3747.
-

   Resolution: Fixed
Fix Version/s: 5.0
   4.0
 Assignee: Steven Rowe
Lucene Fields: New,Patch Available  (was: New)

Committed to branch_4x: 
[r1366298|http://svn.apache.org/viewvc?view=revision&revision=1366298].

 Support Unicode 6.1.0
 -

 Key: LUCENE-3747
 URL: https://issues.apache.org/jira/browse/LUCENE-3747
 Project: Lucene - Core
  Issue Type: Improvement
  Components: modules/analysis
Affects Versions: 3.5, 4.0-ALPHA
Reporter: Steven Rowe
Assignee: Steven Rowe
Priority: Minor
 Fix For: 4.0, 5.0

 Attachments: LUCENE-3747.patch, LUCENE-3747.patch


 Now that Unicode 6.1.0 has been released, Lucene/Solr should support it.
 JFlex trunk now supports Unicode 6.1.0.
 Tasks include:
 * Upgrade ICU4J to v49 (after it's released, on 2012-03-21, according to 
 http://icu-project.org).
 * Use {{icu}} module tools to regenerate the supplementary character 
 additions to JFlex grammars.
 * Version the JFlex grammars: copy the current implementations to 
 {{*Impl3X}}; cause the versioning tokenizer wrappers to instantiate this 
 version when the {{Version}} c-tor param is in the range 3.1 to the version 
 in which these changes are released (excluding the range endpoints); then 
 change the specified Unicode version in the non-versioned JFlex grammars from 
 6.0 to 6.1.
 * Regenerate JFlex scanners, including {{StandardTokenizerImpl}}, 
 {{UAX29URLEmailTokenizerImpl}}, and {{HTMLStripCharFilter}}.
 * Using {{generateJavaUnicodeWordBreakTest.pl}}, generate and then run 
 {{WordBreakTestUnicode_6_1_0.java}}  under 
 {{modules/analysis/common/src/test/org/apache/lucene/analysis/core/}}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-4.x - Build # 49 - Failure

2012-07-27 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-4.x/49/

No tests ran.

Build Log:
[...truncated 15823 lines...]
  [javadoc] Loading source files for package 
org.apache.lucene.analysis.tokenattributes...
  [javadoc] Loading source files for package org.apache.lucene.codecs...
  [javadoc] Loading source files for package 
org.apache.lucene.codecs.appending...
  [javadoc] Loading source files for package 
org.apache.lucene.codecs.intblock...
  [javadoc] Loading source files for package 
org.apache.lucene.codecs.lucene3x...
  [javadoc] Loading source files for package 
org.apache.lucene.codecs.lucene40...
  [javadoc] Loading source files for package 
org.apache.lucene.codecs.lucene40.values...
  [javadoc] Loading source files for package org.apache.lucene.codecs.memory...
  [javadoc] Loading source files for package 
org.apache.lucene.codecs.perfield...
  [javadoc] Loading source files for package org.apache.lucene.codecs.pulsing...
  [javadoc] Loading source files for package org.apache.lucene.codecs.sep...
  [javadoc] Loading source files for package 
org.apache.lucene.codecs.simpletext...
  [javadoc] Loading source files for package org.apache.lucene.document...
  [javadoc] Loading source files for package org.apache.lucene.index...
  [javadoc] Loading source files for package org.apache.lucene.search...
  [javadoc] Loading source files for package 
org.apache.lucene.search.payloads...
  [javadoc] Loading source files for package 
org.apache.lucene.search.similarities...
  [javadoc] Loading source files for package org.apache.lucene.search.spans...
  [javadoc] Loading source files for package org.apache.lucene.store...
  [javadoc] Loading source files for package org.apache.lucene.util...
  [javadoc] Loading source files for package org.apache.lucene.util.automaton...
  [javadoc] Loading source files for package org.apache.lucene.util.fst...
  [javadoc] Loading source files for package org.apache.lucene.util.mutable...
  [javadoc] Loading source files for package org.apache.lucene.util.packed...
  [javadoc] Constructing Javadoc information...
  [javadoc] Standard Doclet version 1.6.0_32
  [javadoc] Building tree for all the packages and classes...
  [javadoc] Building index for all the packages and classes...
  [javadoc] Building index for all classes...
  [jar] Building jar: 
/usr/home/hudson/hudson-slave/workspace/Lucene-4.x/checkout/lucene/build/core/lucene-core-4.0-2012-07-27_07-15-27-javadoc.jar

javadocs-modules:

javadocs:
 [echo] Building analyzers-common...

jflex-uptodate-check:

jflex-notice:
 [echo] 
 [echo]   One or more of the JFlex .jflex files is newer than its 
corresponding
 [echo]   .java file.  Run the jflex target to regenerate the 
artifacts.
 [echo] 

ivy-availability-check:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/usr/home/hudson/hudson-slave/workspace/Lucene-4.x/checkout/lucene/ivy-settings.xml

resolve:

common.init:

compile-lucene-core:

init:

-clover.disable:

-clover.setup:

clover:

common.compile-core:

compile-core:

check-lucene-core-javadocs-uptodate:

javadocs-lucene-core:

module-build.javadocs:
[mkdir] Created dir: 
/usr/home/hudson/hudson-slave/workspace/Lucene-4.x/checkout/lucene/build/docs/analyzers-common
 [echo] Building analyzers-common...

download-java6-javadoc-packagelist:
 [copy] Copying 1 file to 
/usr/home/hudson/hudson-slave/workspace/Lucene-4.x/checkout/lucene/build/docs/analyzers-common
  [javadoc] Generating Javadoc
  [javadoc] Javadoc execution
  [javadoc] Loading source files for package org.apache.lucene.analysis.ar...
  [javadoc] Loading source files for package org.apache.lucene.analysis.bg...
  [javadoc] Loading source files for package org.apache.lucene.analysis.br...
  [javadoc] Loading source files for package org.apache.lucene.analysis.ca...
  [javadoc] Loading source files for package 
org.apache.lucene.analysis.charfilter...
  [javadoc] Loading source files for package org.apache.lucene.analysis.cjk...
  [javadoc] Loading source files for package org.apache.lucene.analysis.cn...
  [javadoc] Loading source files for package 
org.apache.lucene.analysis.commongrams...
  [javadoc] Loading source files for package 
org.apache.lucene.analysis.compound...
  [javadoc] Loading source files for package 
org.apache.lucene.analysis.compound.hyphenation...
  [javadoc] Loading source files for package org.apache.lucene.analysis.core...
  [javadoc] Loading source files for package org.apache.lucene.analysis.cz...
  [javadoc] Loading source files for package org.apache.lucene.analysis.da...
  [javadoc] Loading source files for package org.apache.lucene.analysis.de...
  [javadoc] Loading source files for package org.apache.lucene.analysis.el...
  [javadoc] Loading source files for package org.apache.lucene.analysis.en...
  [javadoc] Loading source files for package org.apache.lucene.analysis.es...
  [javadoc] Loading source files for package org.apache.lucene.analysis.eu...
  

RE: [JENKINS] Lucene-4.x - Build # 49 - Failure

2012-07-27 Thread Steven A Rowe
Committed javadoc fix in r1366300. - Steve

-Original Message-
  [javadoc] Constructing Javadoc information...
  [javadoc] Standard Doclet version 1.6.0_32
  [javadoc] Building tree for all the packages and classes...
  [javadoc] 
/usr/home/hudson/hudson-slave/workspace/Lucene-4.x/checkout/lucene/analysis/common/src/java/org/apache/lucene/analysis/standard/UAX29URLEmailTokenizer.java:123:
 warning - Tag @link: reference not found: AttributeFactory
  [javadoc] 
/usr/home/hudson/hudson-slave/workspace/Lucene-4.x/checkout/lucene/analysis/common/src/java/org/apache/lucene/analysis/standard/UAX29URLEmailTokenizer.java:123:
 warning - Tag @link: reference not found: AttributeFactory
  [javadoc] Building index for all the packages and classes...
  [javadoc] Building index for all classes...
  [javadoc] Generating 
/usr/home/hudson/hudson-slave/workspace/Lucene-4.x/checkout/lucene/build/docs/analyzers-common/stylesheet.css...
  [javadoc] 2 warnings


[JENKINS] Solr-4.x - Build # 50 - Failure

2012-07-27 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Solr-4.x/50/

All tests passed

Build Log:
[...truncated 28349 lines...]
  [jar] Building jar: 
/usr/home/hudson/hudson-slave/workspace/Solr-4.x/checkout/lucene/build/core/lucene-core-4.0-2012-07-27_07-21-43-javadoc.jar

check-analyzers-common-javadocs-uptodate:

javadocs-analyzers-common:
 [echo] Building analyzers-common...

jflex-uptodate-check:

jflex-notice:
 [echo] 
 [echo]   One or more of the JFlex .jflex files is newer than its 
corresponding
 [echo]   .java file.  Run the jflex target to regenerate the 
artifacts.
 [echo] 

ivy-availability-check:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/usr/home/hudson/hudson-slave/workspace/Solr-4.x/checkout/lucene/ivy-settings.xml

resolve:

common.init:

compile-lucene-core:

jflex-uptodate-check:

jflex-notice:

javacc-uptodate-check:

javacc-notice:

ivy-availability-check:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/usr/home/hudson/hudson-slave/workspace/Solr-4.x/checkout/lucene/ivy-settings.xml

resolve:

init:

-clover.disable:

-clover.setup:

clover:

common.compile-core:

compile-core:

init:

-clover.disable:

-clover.setup:

clover:

common.compile-core:

compile-core:

check-lucene-core-javadocs-uptodate:

javadocs-lucene-core:

module-build.javadocs:
[mkdir] Created dir: 
/usr/home/hudson/hudson-slave/workspace/Solr-4.x/checkout/lucene/build/docs/analyzers-common
 [echo] Building analyzers-common...

download-java6-javadoc-packagelist:
 [copy] Copying 1 file to 
/usr/home/hudson/hudson-slave/workspace/Solr-4.x/checkout/lucene/build/docs/analyzers-common
  [javadoc] Generating Javadoc
  [javadoc] Javadoc execution
  [javadoc] Loading source files for package org.apache.lucene.analysis.ar...
  [javadoc] Loading source files for package org.apache.lucene.analysis.bg...
  [javadoc] Loading source files for package org.apache.lucene.analysis.br...
  [javadoc] Loading source files for package org.apache.lucene.analysis.ca...
  [javadoc] Loading source files for package 
org.apache.lucene.analysis.charfilter...
  [javadoc] Loading source files for package org.apache.lucene.analysis.cjk...
  [javadoc] Loading source files for package org.apache.lucene.analysis.cn...
  [javadoc] Loading source files for package 
org.apache.lucene.analysis.commongrams...
  [javadoc] Loading source files for package 
org.apache.lucene.analysis.compound...
  [javadoc] Loading source files for package 
org.apache.lucene.analysis.compound.hyphenation...
  [javadoc] Loading source files for package org.apache.lucene.analysis.core...
  [javadoc] Loading source files for package org.apache.lucene.analysis.cz...
  [javadoc] Loading source files for package org.apache.lucene.analysis.da...
  [javadoc] Loading source files for package org.apache.lucene.analysis.de...
  [javadoc] Loading source files for package org.apache.lucene.analysis.el...
  [javadoc] Loading source files for package org.apache.lucene.analysis.en...
  [javadoc] Loading source files for package org.apache.lucene.analysis.es...
  [javadoc] Loading source files for package org.apache.lucene.analysis.eu...
  [javadoc] Loading source files for package org.apache.lucene.analysis.fa...
  [javadoc] Loading source files for package org.apache.lucene.analysis.fi...
  [javadoc] Loading source files for package org.apache.lucene.analysis.fr...
  [javadoc] Loading source files for package org.apache.lucene.analysis.ga...
  [javadoc] Loading source files for package org.apache.lucene.analysis.gl...
  [javadoc] Loading source files for package org.apache.lucene.analysis.hi...
  [javadoc] Loading source files for package org.apache.lucene.analysis.hu...
  [javadoc] Loading source files for package 
org.apache.lucene.analysis.hunspell...
  [javadoc] Loading source files for package org.apache.lucene.analysis.hy...
  [javadoc] Loading source files for package org.apache.lucene.analysis.id...
  [javadoc] Loading source files for package org.apache.lucene.analysis.in...
  [javadoc] Loading source files for package org.apache.lucene.analysis.it...
  [javadoc] Loading source files for package org.apache.lucene.analysis.lv...
  [javadoc] Loading source files for package 
org.apache.lucene.analysis.miscellaneous...
  [javadoc] Loading source files for package org.apache.lucene.analysis.ngram...
  [javadoc] Loading source files for package org.apache.lucene.analysis.nl...
  [javadoc] Loading source files for package org.apache.lucene.analysis.no...
  [javadoc] Loading source files for package org.apache.lucene.analysis.path...
  [javadoc] Loading source files for package 
org.apache.lucene.analysis.pattern...
  [javadoc] Loading source files for package 
org.apache.lucene.analysis.payloads...
  [javadoc] Loading source files for package 
org.apache.lucene.analysis.position...
  [javadoc] Loading source files for package org.apache.lucene.analysis.pt...
  [javadoc] Loading source files for 

[jira] [Created] (SOLR-3684) Frequently full gc while do pressure index

2012-07-27 Thread Raintung Li (JIRA)
Raintung Li created SOLR-3684:
-

 Summary: Frequently full gc while do pressure index
 Key: SOLR-3684
 URL: https://issues.apache.org/jira/browse/SOLR-3684
 Project: Solr
  Issue Type: Improvement
  Components: multicore
Affects Versions: 4.0-ALPHA
 Environment: System: Linux
Java process: 4G memory
Jetty: 1000 threads 
Index: 20 field
Core: 5

Reporter: Raintung Li
Priority: Critical


Recently we tested Solr indexing throughput and performance: 20 fields of the 
normal text_general type, 1000 threads for Jetty, and 5 cores.

After the test had run for some time, the Solr process's throughput dropped very 
quickly. Looking for the root cause, we found the Java process constantly doing 
full GCs.
Checking the heap dump, the main object is StandardTokenizer; it is kept in a 
CloseableThreadLocal by IndexSchema.SolrIndexAnalyzer.

Solr uses PerFieldReuseStrategy as the default component reuse strategy, which 
means every field gets its own StandardTokenizer if it uses the standard 
analyzer, and each StandardTokenizer takes about 32KB of memory because of its 
zzBuffer char array.

The worst case: total memory = live threads * cores * fields * 32KB

In this test, that is 1000 * 5 * 20 * 32KB = 3.2G for StandardTokenizer alone, 
and those objects can only be released when their thread dies.

Suggestion:
Every request is handled by exactly one thread, which means one document is only 
analyzed by one thread. Since a thread parses the document's fields one after 
another, fields of the same type can share the same reused components: when the 
thread switches to another field of the same type, only the component's input 
stream needs to be reset. That can save a lot of memory for fields of the same 
type.

Total memory then becomes = live threads * cores * (distinct field types) * 32KB

The source code change is simple; I can provide a modification patch for 
IndexSchema.java: 
private class SolrIndexAnalyzer extends AnalyzerWrapper {

  /**
   * Implementation of {@link ReuseStrategy} that reuses components per field
   * type by maintaining a Map of TokenStreamComponents keyed by analyzer.
   */
  private class SolrFieldReuseStrategy extends ReuseStrategy {

    /**
     * {@inheritDoc}
     */
    @SuppressWarnings("unchecked")
    public TokenStreamComponents getReusableComponents(String fieldName) {
      Map<Analyzer, TokenStreamComponents> componentsPerField =
          (Map<Analyzer, TokenStreamComponents>) getStoredValue();
      return componentsPerField != null ? componentsPerField.get(analyzers.get(fieldName)) : null;
    }

    /**
     * {@inheritDoc}
     */
    @SuppressWarnings("unchecked")
    public void setReusableComponents(String fieldName, TokenStreamComponents components) {
      Map<Analyzer, TokenStreamComponents> componentsPerField =
          (Map<Analyzer, TokenStreamComponents>) getStoredValue();
      if (componentsPerField == null) {
        componentsPerField = new HashMap<Analyzer, TokenStreamComponents>();
        setStoredValue(componentsPerField);
      }
      componentsPerField.put(analyzers.get(fieldName), components);
    }
  }

  protected final HashMap<String, Analyzer> analyzers;

  SolrIndexAnalyzer() {
    super(new SolrFieldReuseStrategy());
    analyzers = analyzerCache();
  }

  protected HashMap<String, Analyzer> analyzerCache() {
    HashMap<String, Analyzer> cache = new HashMap<String, Analyzer>();
    for (SchemaField f : getFields().values()) {
      Analyzer analyzer = f.getType().getAnalyzer();
      cache.put(f.getName(), analyzer);
    }
    return cache;
  }

  @Override
  protected Analyzer getWrappedAnalyzer(String fieldName) {
    Analyzer analyzer = analyzers.get(fieldName);
    return analyzer != null ? analyzer : getDynamicFieldType(fieldName).getAnalyzer();
  }

  @Override
  protected TokenStreamComponents wrapComponents(String fieldName, TokenStreamComponents components) {
    return components;
  }
}

private class SolrQueryAnalyzer extends SolrIndexAnalyzer {
  @Override
  protected HashMap<String, Analyzer> analyzerCache() {
    HashMap<String, Analyzer> cache = new HashMap<String, Analyzer>();
    for (SchemaField f : getFields().values()) {
      Analyzer analyzer = f.getType().getQueryAnalyzer();
      cache.put(f.getName(), analyzer);
    }
    return cache;
  }

  @Override
  protected Analyzer getWrappedAnalyzer(String fieldName) {
    Analyzer analyzer = analyzers.get(fieldName);
    return analyzer != null ? analyzer : getDynamicFieldType(fieldName).getQueryAnalyzer();
  }
}


--
This message is automatically generated by JIRA.
If you think it was 

[jira] [Commented] (LUCENE-4218) contrary to documentation Document.get(field) on numeric field returns null

2012-07-27 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-4218?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13423729#comment-13423729
 ] 

Uwe Schindler commented on LUCENE-4218:
---

Before 4.0, this worked exactly as documented. So we should either fix 
the docs or let it return the string representation. I prefer the latter, as 
Document.get() is a very generic name.

 contrary to documentation Document.get(field) on numeric field returns null
 ---

 Key: LUCENE-4218
 URL: https://issues.apache.org/jira/browse/LUCENE-4218
 Project: Lucene - Core
  Issue Type: Bug
  Components: core/search
Affects Versions: 4.0-ALPHA
 Environment: Darwin e4-ce-8f-0f-c2-b0.dummy.porta.siemens.net 10.8.0 
 Darwin Kernel Version 10.8.0: Tue Jun  7 16:32:41 PDT 2011; 
 root:xnu-1504.15.3~1/RELEASE_X86_64 x86_64
Reporter: Jamie
Priority: Critical
  Labels: Document
 Fix For: 4.0


 A call to Number num = indexableField.numericValue() comes up with the correct 
 value, whereas Document.get(field) yields null.
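
 For clarity, a minimal sketch of the reported behaviour; the field name and the
 surrounding method are illustrative, not taken from the issue:

   import java.io.IOException;
   import org.apache.lucene.document.Document;
   import org.apache.lucene.index.IndexableField;
   import org.apache.lucene.search.IndexSearcher;

   class NumericFieldGetSketch {
     // Shows the reported difference between numericValue() and Document.get()
     // for a stored numeric field ("price" is an illustrative field name).
     static void show(IndexSearcher searcher, int docId) throws IOException {
       Document doc = searcher.doc(docId);
       IndexableField field = doc.getField("price");
       Number value = field.numericValue();  // returns the correct numeric value
       String text = doc.get("price");       // reported to return null in 4.0-ALPHA,
                                             // contrary to the javadocs
       System.out.println(value + " / " + text);
     }
   }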

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-trunk-Linux-Java7-64 - Build # 719 - Failure!

2012-07-27 Thread Policeman Jenkins Server
Build: 
http://jenkins.sd-datasolutions.de/job/Lucene-Solr-trunk-Linux-Java7-64/719/

1 tests failed.
REGRESSION:  
org.apache.solr.core.CoreContainerCoreInitFailuresTest.testFlowBadFromStart

Error Message:
reload exception doesn't mention bad prolog: El contenido no está permitido en 
el prólogo.

Stack Trace:
java.lang.AssertionError: reload exception doesn't mention bad prolog: El 
contenido no está permitido en el prólogo.
at 
__randomizedtesting.SeedInfo.seed([D9E83A9C662F9249:4312F16482D01C4D]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at 
org.apache.solr.core.CoreContainerCoreInitFailuresTest.testFlowBadFromStart(CoreContainerCoreInitFailuresTest.java:255)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:601)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1995)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.access$1100(RandomizedRunner.java:132)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:818)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:877)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:891)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.TestRuleFieldCacheSanity$1.evaluate(TestRuleFieldCacheSanity.java:32)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
org.apache.lucene.util.TestRuleReportUncaughtExceptions$1.evaluate(TestRuleReportUncaughtExceptions.java:68)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:70)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:825)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.access$700(RandomizedRunner.java:132)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3$1.run(RandomizedRunner.java:671)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:697)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:736)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:747)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleReportUncaughtExceptions$1.evaluate(TestRuleReportUncaughtExceptions.java:68)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:38)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
org.apache.lucene.util.TestRuleNoInstanceHooksOverrides$1.evaluate(TestRuleNoInstanceHooksOverrides.java:53)
at 
org.apache.lucene.util.TestRuleNoStaticHooksShadowing$1.evaluate(TestRuleNoStaticHooksShadowing.java:52)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:36)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:70)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSuite(RandomizedRunner.java:605)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.access$400(RandomizedRunner.java:132)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$2.run(RandomizedRunner.java:551)




Build Log:
[...truncated 7722 lines...]
[junit4:junit4] Suite: org.apache.solr.core.CoreContainerCoreInitFailuresTest
[junit4:junit4] FAILURE 0.39s J1 | 
CoreContainerCoreInitFailuresTest.testFlowBadFromStart
[junit4:junit4] 

[jira] [Commented] (SOLR-3683) fail with error if analyzer has both a class and nested factories

2012-07-27 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-3683?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13423732#comment-13423732
 ] 

Uwe Schindler commented on SOLR-3683:
-

+1

 fail with error if analyzer has both a class and nested factories
 -

 Key: SOLR-3683
 URL: https://issues.apache.org/jira/browse/SOLR-3683
 Project: Solr
  Issue Type: Improvement
Reporter: Hoss Man
Assignee: Hoss Man
 Fix For: 4.0, 5.0


 IndexSchema should complain if someone configures an analyzer that has both 
 an explicit class as well as nested factories.
 Example of confusion...
 https://mail-archives.apache.org/mod_mbox/lucene-solr-user/201207.mbox/%3c1343218757.79608.yahoomailclas...@web121705.mail.ne1.yahoo.com%3E

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-3668) New Admin : DataImport : Specifying Custom Parameters

2012-07-27 Thread Chantal Ackermann (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-3668?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13423752#comment-13423752
 ] 

Chantal Ackermann commented on SOLR-3668:
-

Hi Stefan,

thanks for fixing! I had misconfigured my DIH at that time, and it did not 
output the INFO messages you posted.
I'm importing directly via /dataimport - so no hurry on my part.

C

 New Admin : DataImport : Specifying Custom Parameters
 -

 Key: SOLR-3668
 URL: https://issues.apache.org/jira/browse/SOLR-3668
 Project: Solr
  Issue Type: Bug
  Components: web gui
Affects Versions: 4.0-ALPHA
 Environment: MacOS X 10.7.4, Safari 5.1.7
Reporter: Chantal Ackermann
Assignee: Stefan Matheis (steffkes)

 I'm trying to run the following direct call via the WebGUI:
 http://localhost:9090/solr/issues/dataimport?command=full-import&importfile=/absolute/path/to/file.xml
 The above direct call produces this log output:
 24.07.2012 15:18:40 org.apache.solr.handler.dataimport.XPathEntityProcessor 
 initQuery
 WARNUNG: Failed for url : /absolute/path/to/file.xml
 When giving an existing file, DIH works. But this is enough to show the 
 difference between direct call and call via WebGUI.
 Steps I do in the WebGUI:
 0. In a multicore environment where one core is called issues
 1. Open the tab of core issues, and there the sub-item Dataimport:
 http://localhost:9090/solr/#/issues/dataimport//dataimport
 2. Specify a custom parameter in the text field labeled Custom Parameters, 
 like importfile=/absolute/path/to/file.xml
 Resulting log output:
 24.07.2012 15:22:47 org.apache.solr.handler.dataimport.XPathEntityProcessor 
 initQuery
 WARNUNG: Failed for url : 
 java.lang.RuntimeException: java.io.FileNotFoundException: Could not find 
 file: 
 (no filename specified)
 When trying with an existing file, the same output (no filename) is logged.
 I've tried to find out how to specify the custom parameters by looking into 
 dataimport.js but it did not help me (I did not dwell on it, though). If it 
 would work by specifying the parameter in a different way it would be great 
 if a little info would be added right next to the field.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-4.x-Linux-Java7-64 - Build # 552 - Failure!

2012-07-27 Thread Policeman Jenkins Server
Build: 
http://jenkins.sd-datasolutions.de/job/Lucene-Solr-4.x-Linux-Java7-64/552/

1 tests failed.
REGRESSION:  
org.apache.solr.core.CoreContainerCoreInitFailuresTest.testFlowBadFromStart

Error Message:
reload exception doesn't mention bad prolog: El contenido no está permitido en 
el prólogo.

Stack Trace:
java.lang.AssertionError: reload exception doesn't mention bad prolog: El 
contenido no está permitido en el prólogo.
at 
__randomizedtesting.SeedInfo.seed([309F02A8311E5A1B:AA65C950D5E1D41F]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at 
org.apache.solr.core.CoreContainerCoreInitFailuresTest.testFlowBadFromStart(CoreContainerCoreInitFailuresTest.java:255)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:601)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1995)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.access$1100(RandomizedRunner.java:132)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:818)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:877)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:891)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.TestRuleFieldCacheSanity$1.evaluate(TestRuleFieldCacheSanity.java:32)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
org.apache.lucene.util.TestRuleReportUncaughtExceptions$1.evaluate(TestRuleReportUncaughtExceptions.java:68)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:70)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:825)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.access$700(RandomizedRunner.java:132)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3$1.run(RandomizedRunner.java:671)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:697)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:736)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:747)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleReportUncaughtExceptions$1.evaluate(TestRuleReportUncaughtExceptions.java:68)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:38)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
org.apache.lucene.util.TestRuleNoInstanceHooksOverrides$1.evaluate(TestRuleNoInstanceHooksOverrides.java:53)
at 
org.apache.lucene.util.TestRuleNoStaticHooksShadowing$1.evaluate(TestRuleNoStaticHooksShadowing.java:52)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:36)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:70)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSuite(RandomizedRunner.java:605)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.access$400(RandomizedRunner.java:132)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$2.run(RandomizedRunner.java:551)




Build Log:
[...truncated 7315 lines...]
[junit4:junit4] Suite: org.apache.solr.core.CoreContainerCoreInitFailuresTest
[junit4:junit4] FAILURE 0.31s J0 | 
CoreContainerCoreInitFailuresTest.testFlowBadFromStart
[junit4:junit4] 

[JENKINS] Lucene-Solr-trunk-Linux-Java7-64 - Build # 720 - Still Failing!

2012-07-27 Thread Policeman Jenkins Server
Build: 
http://jenkins.sd-datasolutions.de/job/Lucene-Solr-trunk-Linux-Java7-64/720/

1 tests failed.
FAILED:  
org.apache.solr.core.CoreContainerCoreInitFailuresTest.testFlowBadFromStart

Error Message:
reload exception doesn't mention bad prolog: El contenido no está permitido en 
el prólogo.

Stack Trace:
java.lang.AssertionError: reload exception doesn't mention bad prolog: El 
contenido no está permitido en el prólogo.
at 
__randomizedtesting.SeedInfo.seed([25BE60F9F99BB6F0:BF44AB011D6438F4]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at 
org.apache.solr.core.CoreContainerCoreInitFailuresTest.testFlowBadFromStart(CoreContainerCoreInitFailuresTest.java:255)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:601)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1995)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.access$1100(RandomizedRunner.java:132)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:818)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:877)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:891)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.TestRuleFieldCacheSanity$1.evaluate(TestRuleFieldCacheSanity.java:32)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
org.apache.lucene.util.TestRuleReportUncaughtExceptions$1.evaluate(TestRuleReportUncaughtExceptions.java:68)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:70)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:825)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.access$700(RandomizedRunner.java:132)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3$1.run(RandomizedRunner.java:671)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:697)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:736)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:747)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleReportUncaughtExceptions$1.evaluate(TestRuleReportUncaughtExceptions.java:68)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:38)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
org.apache.lucene.util.TestRuleNoInstanceHooksOverrides$1.evaluate(TestRuleNoInstanceHooksOverrides.java:53)
at 
org.apache.lucene.util.TestRuleNoStaticHooksShadowing$1.evaluate(TestRuleNoStaticHooksShadowing.java:52)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:36)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:70)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSuite(RandomizedRunner.java:605)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.access$400(RandomizedRunner.java:132)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$2.run(RandomizedRunner.java:551)




Build Log:
[...truncated 7665 lines...]
[junit4:junit4] Suite: org.apache.solr.core.CoreContainerCoreInitFailuresTest
[junit4:junit4] FAILURE 0.36s J0 | 
CoreContainerCoreInitFailuresTest.testFlowBadFromStart
[junit4:junit4] 

[jira] [Commented] (SOLR-1781) Replication index directories not always cleaned up

2012-07-27 Thread Markus Jelsma (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-1781?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13423779#comment-13423779
 ] 

Markus Jelsma commented on SOLR-1781:
-

There's still a problem with old index directories not being cleaned up and 
strange replication on start-up. I'll write to the ML about this; the problem is 
likely larger than just cleaning up.

 Replication index directories not always cleaned up
 ---

 Key: SOLR-1781
 URL: https://issues.apache.org/jira/browse/SOLR-1781
 Project: Solr
  Issue Type: Bug
  Components: replication (java), SolrCloud
Affects Versions: 1.4
 Environment: Windows Server 2003 R2, Java 6b18
Reporter: Terje Sten Bjerkseth
Assignee: Mark Miller
 Fix For: 4.0, 5.0

 Attachments: 
 0001-Replication-does-not-always-clean-up-old-directories.patch, 
 SOLR-1781.patch, SOLR-1781.patch


 We had the same problem as someone described in 
 http://mail-archives.apache.org/mod_mbox/lucene-solr-user/201001.mbox/%3c222a518d-ddf5-4fc8-a02a-74d4f232b...@snooth.com%3e.
  A partial copy of that message:
 We're using the new replication and it's working pretty well. There's  
 one detail I'd like to get some more information about.
 As the replication works, it creates versions of the index in the data  
 directory. Originally we had index/, but now there are dated versions  
 such as index.20100127044500/, which are the replicated versions.
 Each copy is sized in the vicinity of 65G. With our current hard drive  
 it's fine to have two around, but 3 gets a little dicey. Sometimes  
 we're finding that the replication doesn't always clean up after  
 itself. I would like to understand this better, or to not have this  
 happen. It could be a configuration issue.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: [JENKINS] Lucene-Solr-tests-only-4.x-java7 - Build # 204 - Failure

2012-07-27 Thread Robert Muir
This test is broken for locales other than 'en' !
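
The assertion pattern that breaks, roughly (a minimal sketch, not the actual test
code; the JVM localizes the XML parser's message, so under a Spanish locale it
reads "El contenido no está permitido en el prólogo" and the English substring
check fails):

  import org.xml.sax.SAXParseException;
  import static org.junit.Assert.assertTrue;

  class LocaleSensitiveAssertionSketch {
    // Locale-sensitive check: SAXParseException messages are localized by the JVM,
    // so matching on the English wording only passes under an English locale.
    static void checkBadProlog(SAXParseException cause) {
      assertTrue("reload exception doesn't mention bad prolog: " + cause.getMessage(),
                 cause.getMessage().contains("Content is not allowed in prolog"));
    }
  }

A locale-independent check could assert on the exception type instead, or build
the expected message under the same default locale.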

On Fri, Jul 27, 2012 at 1:13 AM, Apache Jenkins Server
jenk...@builds.apache.org wrote:
 Build: https://builds.apache.org/job/Lucene-Solr-tests-only-4.x-java7/204/

 1 tests failed.
 FAILED:  
 org.apache.solr.core.CoreContainerCoreInitFailuresTest.testFlowBadFromStart

 Error Message:
 reload exception doesn't mention bad prolog: El contenido no está permitido 
 en el prólogo.

 Stack Trace:
 java.lang.AssertionError: reload exception doesn't mention bad prolog: El 
 contenido no está permitido en el prólogo.
 at 
 __randomizedtesting.SeedInfo.seed([769CD3367D6B0633:EC6618CE99948837]:0)
 at org.junit.Assert.fail(Assert.java:93)
 at org.junit.Assert.assertTrue(Assert.java:43)
 at 
 org.apache.solr.core.CoreContainerCoreInitFailuresTest.testFlowBadFromStart(CoreContainerCoreInitFailuresTest.java:255)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
 at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke(Method.java:601)
 at 
 com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1995)
 at 
 com.carrotsearch.randomizedtesting.RandomizedRunner.access$1100(RandomizedRunner.java:132)
 at 
 com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:818)
 at 
 com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:877)
 at 
 com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:891)
 at 
 com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
 at 
 org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
 at 
 org.apache.lucene.util.TestRuleFieldCacheSanity$1.evaluate(TestRuleFieldCacheSanity.java:32)
 at 
 org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
 at 
 com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
 at 
 org.apache.lucene.util.TestRuleReportUncaughtExceptions$1.evaluate(TestRuleReportUncaughtExceptions.java:68)
 at 
 org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
 at 
 org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:70)
 at 
 org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
 at 
 com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:825)
 at 
 com.carrotsearch.randomizedtesting.RandomizedRunner.access$700(RandomizedRunner.java:132)
 at 
 com.carrotsearch.randomizedtesting.RandomizedRunner$3$1.run(RandomizedRunner.java:671)
 at 
 com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:697)
 at 
 com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:736)
 at 
 com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:747)
 at 
 com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
 at 
 org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
 at 
 org.apache.lucene.util.TestRuleReportUncaughtExceptions$1.evaluate(TestRuleReportUncaughtExceptions.java:68)
 at 
 org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:38)
 at 
 org.apache.lucene.util.TestRuleIcuHack$1.evaluate(TestRuleIcuHack.java:51)
 at 
 com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
 at 
 org.apache.lucene.util.TestRuleNoInstanceHooksOverrides$1.evaluate(TestRuleNoInstanceHooksOverrides.java:53)
 at 
 org.apache.lucene.util.TestRuleNoStaticHooksShadowing$1.evaluate(TestRuleNoStaticHooksShadowing.java:52)
 at 
 org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:36)
 at 
 org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
 at 
 org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:70)
 at 
 org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
 at 
 com.carrotsearch.randomizedtesting.RandomizedRunner.runSuite(RandomizedRunner.java:605)
 at 
 com.carrotsearch.randomizedtesting.RandomizedRunner.access$400(RandomizedRunner.java:132)
 at 
 

[jira] [Created] (SOLR-3685) Solr

2012-07-27 Thread Markus Jelsma (JIRA)
Markus Jelsma created SOLR-3685:
---

 Summary: Solr 
 Key: SOLR-3685
 URL: https://issues.apache.org/jira/browse/SOLR-3685
 Project: Solr
  Issue Type: Bug
  Components: replication (java), SolrCloud
Affects Versions: 4.0-ALPHA
 Environment: Debian GNU/Linux Squeeze 64bit
Solr 5.0-SNAPSHOT 1365667M - markus - 2012-07-25 19:09:43
Reporter: Markus Jelsma
Priority: Critical
 Fix For: 4.1


There's a serious problem with restarting nodes: old or unused index directories 
are not cleaned up, replication kicks in unexpectedly, and Java gets killed by the 
OS due to excessive memory allocation. Since SOLR-1781 was fixed, index directories 
get cleaned up when a node is restarted cleanly; however, old or unused index 
directories still pile up if Solr crashes or is killed by the OS, which is what is 
happening here.

We have a six-node 64-bit Linux test cluster with each node holding two shards. 
There's 512MB RAM available and no swap. Each index is roughly 27MB, so about 50MB 
per node; this fits easily and works fine. However, if a node is restarted, Solr 
will consistently crash because it immediately eats up all RAM. If swap is enabled, 
Solr will eat an additional few hundred MB right after start-up.

This cannot be solved by restarting Solr; it will just crash again and leave index 
directories in place until the disk is full. The only way I can restart a node 
safely is to delete the index directories and have it replicate from another node. 
If I then restart the node, it will crash almost consistently.

I'll attach a log of one of the nodes.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-3685) solrcloud crashes on startup due to excessive memory consumption

2012-07-27 Thread Markus Jelsma (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-3685?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Markus Jelsma updated SOLR-3685:


Summary: solrcloud crashes on startup due to excessive memory consumption  
(was: Solr )

 solrcloud crashes on startup due to excessive memory consumption
 

 Key: SOLR-3685
 URL: https://issues.apache.org/jira/browse/SOLR-3685
 Project: Solr
  Issue Type: Bug
  Components: replication (java), SolrCloud
Affects Versions: 4.0-ALPHA
 Environment: Debian GNU/Linux Squeeze 64bit
 Solr 5.0-SNAPSHOT 1365667M - markus - 2012-07-25 19:09:43
Reporter: Markus Jelsma
Priority: Critical
 Fix For: 4.1


 There's a serious problem with restarting nodes: old or unused index directories 
 are not cleaned up, replication kicks in unexpectedly, and Java gets killed by the 
 OS due to excessive memory allocation. Since SOLR-1781 was fixed, index directories 
 get cleaned up when a node is restarted cleanly; however, old or unused index 
 directories still pile up if Solr crashes or is killed by the OS, which is what is 
 happening here.
 We have a six-node 64-bit Linux test cluster with each node holding two shards. 
 There's 512MB RAM available and no swap. Each index is roughly 27MB, so about 50MB 
 per node; this fits easily and works fine. However, if a node is restarted, Solr 
 will consistently crash because it immediately eats up all RAM. If swap is enabled, 
 Solr will eat an additional few hundred MB right after start-up.
 This cannot be solved by restarting Solr; it will just crash again and leave index 
 directories in place until the disk is full. The only way I can restart a node 
 safely is to delete the index directories and have it replicate from another node. 
 If I then restart the node, it will crash almost consistently.
 I'll attach a log of one of the nodes.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-3685) solrcloud crashes on startup due to excessive memory consumption

2012-07-27 Thread Markus Jelsma (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-3685?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Markus Jelsma updated SOLR-3685:


Attachment: info.log

Here's a log for a node where the Java process is being killed by the OS. I can 
reproduce this consistently.

 solrcloud crashes on startup due to excessive memory consumption
 

 Key: SOLR-3685
 URL: https://issues.apache.org/jira/browse/SOLR-3685
 Project: Solr
  Issue Type: Bug
  Components: replication (java), SolrCloud
Affects Versions: 4.0-ALPHA
 Environment: Debian GNU/Linux Squeeze 64bit
 Solr 5.0-SNAPSHOT 1365667M - markus - 2012-07-25 19:09:43
Reporter: Markus Jelsma
Priority: Critical
 Fix For: 4.1

 Attachments: info.log


 There's a serious problem with restarting nodes, not cleaning old or unused 
 index directories and sudden replication and Java being killed by the OS due 
 to excessive memory allocation. Since SOLR-1781 was fixed index directories 
 get cleaned up when a node is being restarted cleanly, however, old or unused 
 index directories still pile up if Solr crashes or is being killed by the OS, 
 happening here.
 We have a six-node 64-bit Linux test cluster with each node having two 
 shards. There's 512MB RAM available and no swap. Each index is roughly 27MB 
 so about 50MB per node, this fits easily and works fine. However, if a node 
 is being restarted, Solr will consistently crash because it immediately eats 
 up all RAM. If swap is enabled Solr will eat an additional few 100MB's right 
 after start up.
 This cannot be solved by restarting Solr, it will just crash again and leave 
 index directories in place until the disk is full. The only way i can restart 
 a node safely is to delete the index directories and have it replicate from 
 another node. If i then restart the node it will crash almost consistently.
 I'll attach a log of one of the nodes.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-3685) solrcloud crashes on startup due to excessive memory consumption

2012-07-27 Thread Markus Jelsma (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-3685?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13423782#comment-13423782
 ] 

Markus Jelsma commented on SOLR-3685:
-

I forgot to add that it doesn't matter whether updates are sent to the cluster. A 
node that is already up to date will also start to replicate on startup and 
subsequently crash.

 solrcloud crashes on startup due to excessive memory consumption
 

 Key: SOLR-3685
 URL: https://issues.apache.org/jira/browse/SOLR-3685
 Project: Solr
  Issue Type: Bug
  Components: replication (java), SolrCloud
Affects Versions: 4.0-ALPHA
 Environment: Debian GNU/Linux Squeeze 64bit
 Solr 5.0-SNAPSHOT 1365667M - markus - 2012-07-25 19:09:43
Reporter: Markus Jelsma
Priority: Critical
 Fix For: 4.1

 Attachments: info.log


 There's a serious problem with restarting nodes, not cleaning old or unused 
 index directories and sudden replication and Java being killed by the OS due 
 to excessive memory allocation. Since SOLR-1781 was fixed index directories 
 get cleaned up when a node is being restarted cleanly, however, old or unused 
 index directories still pile up if Solr crashes or is being killed by the OS, 
 happening here.
 We have a six-node 64-bit Linux test cluster with each node having two 
 shards. There's 512MB RAM available and no swap. Each index is roughly 27MB 
 so about 50MB per node, this fits easily and works fine. However, if a node 
 is being restarted, Solr will consistently crash because it immediately eats 
 up all RAM. If swap is enabled Solr will eat an additional few 100MB's right 
 after start up.
 This cannot be solved by restarting Solr, it will just crash again and leave 
 index directories in place until the disk is full. The only way i can restart 
 a node safely is to delete the index directories and have it replicate from 
 another node. If i then restart the node it will crash almost consistently.
 I'll attach a log of one of the nodes.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-3685) solrcloud crashes on startup due to excessive memory consumption

2012-07-27 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-3685?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13423785#comment-13423785
 ] 

Uwe Schindler commented on SOLR-3685:
-

How much heap do you assign to Solr's Java process (-Xmx)? 512 MB of physical RAM 
is very little, and the Jetty default is, as far as I remember, larger. If the OS 
kills processes via its OOM killer, there is not much we can do, as those processes 
are killed with a hard SIGKILL (-9), not SIGTERM.
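
(For reference, a minimal standalone sketch - class name and output text are 
illustrative only - showing why SIGKILL is out of the JVM's hands: a shutdown hook 
runs on a normal kill/SIGTERM but never on kill -9, which is what the kernel OOM 
killer sends.)

public class ShutdownProbe {
    public static void main(String[] args) throws Exception {
        // Registered hooks run on normal exit or SIGTERM (plain "kill"),
        // but never on SIGKILL ("kill -9"), which the kernel OOM killer uses.
        Runtime.getRuntime().addShutdownHook(new Thread() {
            @Override
            public void run() {
                System.out.println("shutdown hook ran - the JVM was asked to exit");
            }
        });
        Thread.sleep(Long.MAX_VALUE); // park until the process is killed
    }
}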

 solrcloud crashes on startup due to excessive memory consumption
 

 Key: SOLR-3685
 URL: https://issues.apache.org/jira/browse/SOLR-3685
 Project: Solr
  Issue Type: Bug
  Components: replication (java), SolrCloud
Affects Versions: 4.0-ALPHA
 Environment: Debian GNU/Linux Squeeze 64bit
 Solr 5.0-SNAPSHOT 1365667M - markus - 2012-07-25 19:09:43
Reporter: Markus Jelsma
Priority: Critical
 Fix For: 4.1

 Attachments: info.log


 There's a serious problem with restarting nodes, not cleaning old or unused 
 index directories and sudden replication and Java being killed by the OS due 
 to excessive memory allocation. Since SOLR-1781 was fixed index directories 
 get cleaned up when a node is being restarted cleanly, however, old or unused 
 index directories still pile up if Solr crashes or is being killed by the OS, 
 happening here.
 We have a six-node 64-bit Linux test cluster with each node having two 
 shards. There's 512MB RAM available and no swap. Each index is roughly 27MB 
 so about 50MB per node, this fits easily and works fine. However, if a node 
 is being restarted, Solr will consistently crash because it immediately eats 
 up all RAM. If swap is enabled Solr will eat an additional few 100MB's right 
 after start up.
 This cannot be solved by restarting Solr, it will just crash again and leave 
 index directories in place until the disk is full. The only way i can restart 
 a node safely is to delete the index directories and have it replicate from 
 another node. If i then restart the node it will crash almost consistently.
 I'll attach a log of one of the nodes.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-3685) solrcloud crashes on startup due to excessive memory consumption

2012-07-27 Thread Markus Jelsma (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-3685?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13423786#comment-13423786
 ] 

Markus Jelsma commented on SOLR-3685:
-

I should have added this. I allocate just 98MB to the heap and 32MB to the 
permgen, so only about 130MB is allocated in total.
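
(As a sanity check - a small sketch, not part of Solr; the flag values are just 
the ones mentioned above - the effective heap ceiling for a JVM started with 
-Xmx98m -XX:MaxPermSize=32m can be printed from inside the process; permgen is 
tracked separately and is not included in this number.)

public class HeapCheck {
    public static void main(String[] args) {
        // With -Xmx98m this reports roughly 98 MB (minus a little JVM overhead).
        long maxHeapBytes = Runtime.getRuntime().maxMemory();
        System.out.println("max heap = " + (maxHeapBytes / (1024 * 1024)) + " MB");
    }
}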

 solrcloud crashes on startup due to excessive memory consumption
 

 Key: SOLR-3685
 URL: https://issues.apache.org/jira/browse/SOLR-3685
 Project: Solr
  Issue Type: Bug
  Components: replication (java), SolrCloud
Affects Versions: 4.0-ALPHA
 Environment: Debian GNU/Linux Squeeze 64bit
 Solr 5.0-SNAPSHOT 1365667M - markus - 2012-07-25 19:09:43
Reporter: Markus Jelsma
Priority: Critical
 Fix For: 4.1

 Attachments: info.log


 There's a serious problem with restarting nodes, not cleaning old or unused 
 index directories and sudden replication and Java being killed by the OS due 
 to excessive memory allocation. Since SOLR-1781 was fixed index directories 
 get cleaned up when a node is being restarted cleanly, however, old or unused 
 index directories still pile up if Solr crashes or is being killed by the OS, 
 happening here.
 We have a six-node 64-bit Linux test cluster with each node having two 
 shards. There's 512MB RAM available and no swap. Each index is roughly 27MB 
 so about 50MB per node, this fits easily and works fine. However, if a node 
 is being restarted, Solr will consistently crash because it immediately eats 
 up all RAM. If swap is enabled Solr will eat an additional few 100MB's right 
 after start up.
 This cannot be solved by restarting Solr, it will just crash again and leave 
 index directories in place until the disk is full. The only way i can restart 
 a node safely is to delete the index directories and have it replicate from 
 another node. If i then restart the node it will crash almost consistently.
 I'll attach a log of one of the nodes.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-3685) solrcloud crashes on startup due to excessive memory consumption

2012-07-27 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-3685?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13423787#comment-13423787
 ] 

Uwe Schindler commented on SOLR-3685:
-

Is it a 32-bit or a 64-bit JVM?

 solrcloud crashes on startup due to excessive memory consumption
 

 Key: SOLR-3685
 URL: https://issues.apache.org/jira/browse/SOLR-3685
 Project: Solr
  Issue Type: Bug
  Components: replication (java), SolrCloud
Affects Versions: 4.0-ALPHA
 Environment: Debian GNU/Linux Squeeze 64bit
 Solr 5.0-SNAPSHOT 1365667M - markus - 2012-07-25 19:09:43
Reporter: Markus Jelsma
Priority: Critical
 Fix For: 4.1

 Attachments: info.log


 There's a serious problem with restarting nodes, not cleaning old or unused 
 index directories and sudden replication and Java being killed by the OS due 
 to excessive memory allocation. Since SOLR-1781 was fixed index directories 
 get cleaned up when a node is being restarted cleanly, however, old or unused 
 index directories still pile up if Solr crashes or is being killed by the OS, 
 happening here.
 We have a six-node 64-bit Linux test cluster with each node having two 
 shards. There's 512MB RAM available and no swap. Each index is roughly 27MB 
 so about 50MB per node, this fits easily and works fine. However, if a node 
 is being restarted, Solr will consistently crash because it immediately eats 
 up all RAM. If swap is enabled Solr will eat an additional few 100MB's right 
 after start up.
 This cannot be solved by restarting Solr, it will just crash again and leave 
 index directories in place until the disk is full. The only way i can restart 
 a node safely is to delete the index directories and have it replicate from 
 another node. If i then restart the node it will crash almost consistently.
 I'll attach a log of one of the nodes.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-3539) rethink softCommit=true|false param on commits?

2012-07-27 Thread Yonik Seeley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-3539?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13423788#comment-13423788
 ] 

Yonik Seeley commented on SOLR-3539:


Anyone else have thoughts on this?
One performance concern of mine revolves around commit: the vast majority of 
people use it for visibility of documents, not for persistence at a specific 
point in time.

I'm warming to the idea of a flush param instead of softCommit, and it seems 
like it should perhaps default to false for 4.0.
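
(For context, roughly how the two commit flavours look from SolrJ in 4.x - a 
sketch only; the URL and core name are placeholders, and the three-argument 
commit(waitFlush, waitSearcher, softCommit) overload is assumed to be available.)

import org.apache.solr.client.solrj.SolrServer;
import org.apache.solr.client.solrj.impl.HttpSolrServer;

public class CommitFlavours {
    public static void main(String[] args) throws Exception {
        SolrServer solr = new HttpSolrServer("http://localhost:8983/solr/collection1");
        // Soft commit: opens a new searcher for visibility, no durable flush.
        solr.commit(true, true, true);
        // Hard commit: also flushes segments to stable storage.
        solr.commit(true, true, false);
        solr.shutdown();
    }
}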

 rethink softCommit=true|false param on commits?
 ---

 Key: SOLR-3539
 URL: https://issues.apache.org/jira/browse/SOLR-3539
 Project: Solr
  Issue Type: Bug
Reporter: Hoss Man
 Fix For: 4.0


 I think the current NRT-related options when doing a commit, particularly 
 openSearcher=true|false and softCommit=true|false, are confusing, and we 
 should rethink them before they get baked into the user API in 4.0.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-3685) solrcloud crashes on startup due to excessive memory consumption

2012-07-27 Thread Markus Jelsma (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-3685?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13423794#comment-13423794
 ] 

Markus Jelsma commented on SOLR-3685:
-

Java 1.6.0-26 64-bit, same as Linux.

I should also note that I made an error in the configuration: I thought I had 
reduced the DocumentCache size to 64, but the node I was testing on had a size of 
1024 configured, and that config had been redistributed over the cluster via 
config bootstrap.

This still leaves the problem that Solr itself, not the OS, should be the one 
running out of memory, since the cache is part of the heap. It should also clean 
up old index directories. So this issue may consist of multiple problems.

 solrcloud crashes on startup due to excessive memory consumption
 

 Key: SOLR-3685
 URL: https://issues.apache.org/jira/browse/SOLR-3685
 Project: Solr
  Issue Type: Bug
  Components: replication (java), SolrCloud
Affects Versions: 4.0-ALPHA
 Environment: Debian GNU/Linux Squeeze 64bit
 Solr 5.0-SNAPSHOT 1365667M - markus - 2012-07-25 19:09:43
Reporter: Markus Jelsma
Priority: Critical
 Fix For: 4.1

 Attachments: info.log


 There's a serious problem with restarting nodes, not cleaning old or unused 
 index directories and sudden replication and Java being killed by the OS due 
 to excessive memory allocation. Since SOLR-1781 was fixed index directories 
 get cleaned up when a node is being restarted cleanly, however, old or unused 
 index directories still pile up if Solr crashes or is being killed by the OS, 
 happening here.
 We have a six-node 64-bit Linux test cluster with each node having two 
 shards. There's 512MB RAM available and no swap. Each index is roughly 27MB 
 so about 50MB per node, this fits easily and works fine. However, if a node 
 is being restarted, Solr will consistently crash because it immediately eats 
 up all RAM. If swap is enabled Solr will eat an additional few 100MB's right 
 after start up.
 This cannot be solved by restarting Solr, it will just crash again and leave 
 index directories in place until the disk is full. The only way i can restart 
 a node safely is to delete the index directories and have it replicate from 
 another node. If i then restart the node it will crash almost consistently.
 I'll attach a log of one of the nodes.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-3685) solrcloud crashes on startup due to excessive memory consumption

2012-07-27 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-3685?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13423796#comment-13423796
 ] 

Uwe Schindler commented on SOLR-3685:
-

OK, I wanted to come back to this:
From what I see, 96 MB of heap is very little for Solr. The tests are running with 
-Xmx512M. But regarding memory consumption (Java heap OOMs), Mark Miller might 
know better.

Solr will not use all available RAM: since you are on 64-bit Java, Solr 
defaults to MMapDirectory - I recommend reading: 
http://blog.thetaphi.de/2012/07/use-lucenes-mmapdirectory-on-64bit.html

It will allocate from the system only the heap plus what Java itself needs. 
Everything else is only allocated as address space to directly access the file 
system cache. So the real memory usage of Solr is not what top reports in the 
VIRT column but what it reports in the RES column (resident memory). VIRT can be 
much higher (multiples of system RAM) with MMapDirectory, as it only shows the 
virtual address space allocated. This by itself cannot cause the kernel OOM killer 
to become active and kill processes; if that happens, you have too little RAM for 
the kernel, Solr and the other tools, sorry.
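
(To make the VIRT vs. RES point concrete, a minimal Lucene 4.x sketch - the index 
path is a placeholder. Mapping the index only reserves virtual address space; 
physical pages are faulted in by the OS as they are read and can be dropped again 
at any time.)

import java.io.File;
import org.apache.lucene.store.Directory;
import org.apache.lucene.store.MMapDirectory;

public class MMapExample {
    public static void main(String[] args) throws Exception {
        // The mapping counts toward VIRT (address space) only; RES grows as
        // pages are actually touched, and the OS may reclaim them under pressure.
        Directory dir = new MMapDirectory(new File("/path/to/index"));
        System.out.println("opened " + dir);
        dir.close();
    }
}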

 solrcloud crashes on startup due to excessive memory consumption
 

 Key: SOLR-3685
 URL: https://issues.apache.org/jira/browse/SOLR-3685
 Project: Solr
  Issue Type: Bug
  Components: replication (java), SolrCloud
Affects Versions: 4.0-ALPHA
 Environment: Debian GNU/Linux Squeeze 64bit
 Solr 5.0-SNAPSHOT 1365667M - markus - 2012-07-25 19:09:43
Reporter: Markus Jelsma
Priority: Critical
 Fix For: 4.1

 Attachments: info.log


 There's a serious problem with restarting nodes, not cleaning old or unused 
 index directories and sudden replication and Java being killed by the OS due 
 to excessive memory allocation. Since SOLR-1781 was fixed index directories 
 get cleaned up when a node is being restarted cleanly, however, old or unused 
 index directories still pile up if Solr crashes or is being killed by the OS, 
 happening here.
 We have a six-node 64-bit Linux test cluster with each node having two 
 shards. There's 512MB RAM available and no swap. Each index is roughly 27MB 
 so about 50MB per node, this fits easily and works fine. However, if a node 
 is being restarted, Solr will consistently crash because it immediately eats 
 up all RAM. If swap is enabled Solr will eat an additional few 100MB's right 
 after start up.
 This cannot be solved by restarting Solr, it will just crash again and leave 
 index directories in place until the disk is full. The only way i can restart 
 a node safely is to delete the index directories and have it replicate from 
 another node. If i then restart the node it will crash almost consistently.
 I'll attach a log of one of the nodes.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-4258) Incremental Field Updates through Stacked Segments

2012-07-27 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-4258?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13423798#comment-13423798
 ] 

Robert Muir commented on LUCENE-4258:
-

I don't think it's progress if we add a design that *can only work with omitTFAP 
and no norms*, and can only update individual terms, but not fields.

It means that to support these things we would have to totally clear out what's 
there and then introduce a new design.

In fact, this issue shouldn't be called incremental field updates: it's not. It's 
term updates, or something else entirely different.


 Incremental Field Updates through Stacked Segments
 --

 Key: LUCENE-4258
 URL: https://issues.apache.org/jira/browse/LUCENE-4258
 Project: Lucene - Core
  Issue Type: Improvement
  Components: core/index
Reporter: Sivan Yogev
   Original Estimate: 2,520h
  Remaining Estimate: 2,520h

 Shai and I would like to start working on the proposal to Incremental Field 
 Updates outlined here (http://markmail.org/message/zhrdxxpfk6qvdaex).

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-3685) solrcloud crashes on startup due to excessive memory consumption

2012-07-27 Thread Markus Jelsma (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-3685?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13423801#comment-13423801
 ] 

Markus Jelsma commented on SOLR-3685:
-

Hi - I'm looking not at virtual memory but at RESident memory. My Solr install 
here will eat up to 512MB of resident memory and is then killed by the OS. The 
virtual memory will by then be almost 800MB, while both indexes are just 27MB in 
size. That sounds like a lot of VIRT and RES for a tiny index and a tiny heap.

Also, Solr normally runs fine and fast with just 100MB of memory; the index is 
still very small.

Thanks

 solrcloud crashes on startup due to excessive memory consumption
 

 Key: SOLR-3685
 URL: https://issues.apache.org/jira/browse/SOLR-3685
 Project: Solr
  Issue Type: Bug
  Components: replication (java), SolrCloud
Affects Versions: 4.0-ALPHA
 Environment: Debian GNU/Linux Squeeze 64bit
 Solr 5.0-SNAPSHOT 1365667M - markus - 2012-07-25 19:09:43
Reporter: Markus Jelsma
Priority: Critical
 Fix For: 4.1

 Attachments: info.log


 There's a serious problem with restarting nodes, not cleaning old or unused 
 index directories and sudden replication and Java being killed by the OS due 
 to excessive memory allocation. Since SOLR-1781 was fixed index directories 
 get cleaned up when a node is being restarted cleanly, however, old or unused 
 index directories still pile up if Solr crashes or is being killed by the OS, 
 happening here.
 We have a six-node 64-bit Linux test cluster with each node having two 
 shards. There's 512MB RAM available and no swap. Each index is roughly 27MB 
 so about 50MB per node, this fits easily and works fine. However, if a node 
 is being restarted, Solr will consistently crash because it immediately eats 
 up all RAM. If swap is enabled Solr will eat an additional few 100MB's right 
 after start up.
 This cannot be solved by restarting Solr, it will just crash again and leave 
 index directories in place until the disk is full. The only way i can restart 
 a node safely is to delete the index directories and have it replicate from 
 another node. If i then restart the node it will crash almost consistently.
 I'll attach a log of one of the nodes.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-4263) remove conflicting versions of the same jar dependencies

2012-07-27 Thread Robert Muir (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-4263?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Muir updated LUCENE-4263:


Attachment: LUCENE-4263.patch

 remove conflicting versions of the same jar dependencies
 

 Key: LUCENE-4263
 URL: https://issues.apache.org/jira/browse/LUCENE-4263
 Project: Lucene - Core
  Issue Type: Sub-task
Reporter: Robert Muir
 Attachments: LUCENE-4263.patch


 There are 3 instances of this in our build:
 asm-3.1.jar
 asm-debug-all-4.0.jar
 commons-compress-1.2.jar
 commons-compress-1.3.jar
 servlet-api-2.4.jar
 servlet-api-3.0.jar

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-4263) remove conflicting versions of the same jar dependencies

2012-07-27 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-4263?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13423806#comment-13423806
 ] 

Robert Muir commented on LUCENE-4263:
-

I left the servlet-api alone (I don't want to deal with that), but fixed all the 
others.

 remove conflicting versions of the same jar dependencies
 

 Key: LUCENE-4263
 URL: https://issues.apache.org/jira/browse/LUCENE-4263
 Project: Lucene - Core
  Issue Type: Sub-task
Reporter: Robert Muir
 Attachments: LUCENE-4263.patch


 There are 3 instances of this in our build:
 asm-3.1.jar
 asm-debug-all-4.0.jar
 commons-compress-1.2.jar
 commons-compress-1.3.jar
 servlet-api-2.4.jar
 servlet-api-3.0.jar

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-4260) Factor subPackages out of resourceloader interface

2012-07-27 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-4260?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13423807#comment-13423807
 ] 

Uwe Schindler commented on LUCENE-4260:
---

+1, commit this. Solr can use its subpackage stuff internally for advanced use 
cases; SolrResourceLoader will automatically apply the default subpackages to 
subpackage-less calls to newInstance().
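
(A sketch of what that looks like on the Solr side - assuming SolrResourceLoader's 
two-argument newInstance(name, expectedType); the handler class is just an 
example. The "solr." shorthand keeps being expanded inside SolrResourceLoader, 
while the shared ResourceLoader interface no longer carries a subPackages 
parameter.)

import org.apache.solr.core.SolrResourceLoader;
import org.apache.solr.request.SolrRequestHandler;

public class LoaderSketch {
    // The "solr." fake package is resolved against Solr's default subpackages
    // internally by SolrResourceLoader, so callers never pass subPackages.
    static SolrRequestHandler load(SolrResourceLoader loader) {
        return loader.newInstance("solr.StandardRequestHandler", SolrRequestHandler.class);
    }
}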

 Factor subPackages out of resourceloader interface
 --

 Key: LUCENE-4260
 URL: https://issues.apache.org/jira/browse/LUCENE-4260
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Robert Muir
 Attachments: LUCENE-4260.patch


 From Uwe on LUCENE-4257:
 The comment about the subpackages: This should in reality not be in 
 ResourceLoader, its too Solr-specific. It is used internally by Solr, to 
 resolve those solr. fake packages depending on the context. We should 
 remove that from the general interface and only handle it internally in 
 SolrResourceLoader.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-4260) Factor subPackages out of resourceloader interface

2012-07-27 Thread Chris Male (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-4260?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13423811#comment-13423811
 ] 

Chris Male commented on LUCENE-4260:


+1

 Factor subPackages out of resourceloader interface
 --

 Key: LUCENE-4260
 URL: https://issues.apache.org/jira/browse/LUCENE-4260
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Robert Muir
 Attachments: LUCENE-4260.patch


 From Uwe on LUCENE-4257:
 The comment about the subpackages: This should in reality not be in 
 ResourceLoader, its too Solr-specific. It is used internally by Solr, to 
 resolve those solr. fake packages depending on the context. We should 
 remove that from the general interface and only handle it internally in 
 SolrResourceLoader.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-4258) Incremental Field Updates through Stacked Segments

2012-07-27 Thread Shai Erera (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-4258?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13423813#comment-13423813
 ] 

Shai Erera commented on LUCENE-4258:


bq. can only update individual terms, but not fields

Who said that? One of the update operations I listed is REPLACE_FIELD, which 
means replace the field's content entirely with the new content.

bq. I don't think its progress if we add a design that can only work with 
omitTFAP and no norms

I never said that will be the design. What I said is that in order to update a 
field at the term level, we'll start with such fields only. The rest of the 
fields (i.e., those with norms, payloads and whatnot) will be updated through 
REPLACE_FIELD. The way I see it, we still address all the issues; it's just that 
for some fields we require a whole-field replace rather than an optimized 
term-based update. That can be improved further down the line, or not.

bq. In fact this issue shouldnt be called incremental field updates: its not. 
its term updates or something else entirely different.

That is my idea of incremental field updates, and I'm not sure it isn't your idea 
as well :). You seem to want to support only REPLACE_FIELD, while I say that for 
some field types we can also support UPDATE_FIELD (i.e., at the term level). 
That's it!

 Incremental Field Updates through Stacked Segments
 --

 Key: LUCENE-4258
 URL: https://issues.apache.org/jira/browse/LUCENE-4258
 Project: Lucene - Core
  Issue Type: Improvement
  Components: core/index
Reporter: Sivan Yogev
   Original Estimate: 2,520h
  Remaining Estimate: 2,520h

 Shai and I would like to start working on the proposal to Incremental Field 
 Updates outlined here (http://markmail.org/message/zhrdxxpfk6qvdaex).

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-4263) remove conflicting versions of the same jar dependencies

2012-07-27 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-4263?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13423815#comment-13423815
 ] 

Uwe Schindler commented on LUCENE-4263:
---

+1

 remove conflicting versions of the same jar dependencies
 

 Key: LUCENE-4263
 URL: https://issues.apache.org/jira/browse/LUCENE-4263
 Project: Lucene - Core
  Issue Type: Sub-task
Reporter: Robert Muir
 Attachments: LUCENE-4263.patch


 There are 3 instances of this in our build:
 asm-3.1.jar
 asm-debug-all-4.0.jar
 commons-compress-1.2.jar
 commons-compress-1.3.jar
 servlet-api-2.4.jar
 servlet-api-3.0.jar

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-4253) ThaiAnalyzer fail to tokenize word.

2012-07-27 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-4253?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13423816#comment-13423816
 ] 

Robert Muir commented on LUCENE-4253:
-

{quote}
This makes retrieval and proper recognition of the documents which contain the 
phrase SOME THAI PHRASE almost impossible.
{quote}

Right, I just mean that the problem is general: 'a' is an English stopword, but it 
can screw up things like 'L.A.' (Los Angeles) and other terms because of how they 
are tokenized. This is just a common tradeoff.

It's just that with the current Thai tokenizer and the overly aggressive list, 
it's much more pronounced.

{quote}
I don't have a chance to view the thesis that research over those stop words. 
In my own opinion, the only set of words that shall not cause a truncated is a 
set of conjunction words.
{quote}

Yes, this paper seems difficult to get hold of.

But I definitely think it's a good idea to try to reduce the current list so that 
it is not so large. It should be less aggressive.
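
(As a stop-gap until the list is trimmed, a sketch - assuming the usual 
StopwordAnalyzerBase-style constructor that accepts a custom stopword set - that 
builds the analyzer with no stopwords at all, which matches the reporter's 
experiment quoted below.)

import org.apache.lucene.analysis.th.ThaiAnalyzer;
import org.apache.lucene.analysis.util.CharArraySet;
import org.apache.lucene.util.Version;

public class ThaiWithoutStopwords {
    public static void main(String[] args) {
        // An empty stopword set effectively disables the StopFilter stage,
        // so phrases are not truncated by the overly aggressive default list.
        ThaiAnalyzer analyzer = new ThaiAnalyzer(Version.LUCENE_40, CharArraySet.EMPTY_SET);
        System.out.println("created " + analyzer);
    }
}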



 ThaiAnalyzer fail to tokenize word.
 ---

 Key: LUCENE-4253
 URL: https://issues.apache.org/jira/browse/LUCENE-4253
 Project: Lucene - Core
  Issue Type: Bug
  Components: modules/analysis
Affects Versions: Realtime Branch
 Environment: Windows 7 SP1.
 Java 1.7.0-b147
Reporter: Nattapong Sirilappanich

 The method
 protected TokenStreamComponents createComponents(String, Reader)
 returns a component that is unable to tokenize Thai words.
 The current return statement is:
 return new TokenStreamComponents(source, new StopFilter(matchVersion,
 result, stopwords));
 My experiment was to change the return statement to:
 return new TokenStreamComponents(source, result);
 That gives me the correct result.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-4263) remove conflicting versions of the same jar dependencies

2012-07-27 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-4263?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13423818#comment-13423818
 ] 

Uwe Schindler commented on LUCENE-4263:
---

servlet-api is special and should be handled separately: servlet-api-3.0 is needed 
as a runtime lib for Jetty, but for compiling Solr we should use servlet-api-2.4 
(see previous discussions). I am not sure whether this is currently implemented 
correctly in Solr, but we should do it; otherwise we might add calls to servlet 
API methods in SolrDispatchFilter that break running the WAR file with JBoss, ...

 remove conflicting versions of the same jar dependencies
 

 Key: LUCENE-4263
 URL: https://issues.apache.org/jira/browse/LUCENE-4263
 Project: Lucene - Core
  Issue Type: Sub-task
Reporter: Robert Muir
 Attachments: LUCENE-4263.patch


 There are 3 instances of this in our build:
 asm-3.1.jar
 asm-debug-all-4.0.jar
 commons-compress-1.2.jar
 commons-compress-1.3.jar
 servlet-api-2.4.jar
 servlet-api-3.0.jar

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-4263) remove conflicting versions of the same jar dependencies

2012-07-27 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-4263?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13423819#comment-13423819
 ] 

Robert Muir commented on LUCENE-4263:
-

Yeah, I didn't want to deal with that one (it really is its own issue). But I 
think we can fix these other conflicts?

 remove conflicting versions of the same jar dependencies
 

 Key: LUCENE-4263
 URL: https://issues.apache.org/jira/browse/LUCENE-4263
 Project: Lucene - Core
  Issue Type: Sub-task
Reporter: Robert Muir
 Attachments: LUCENE-4263.patch


 There are 3 instances of this in our build:
 asm-3.1.jar
 asm-debug-all-4.0.jar
 commons-compress-1.2.jar
 commons-compress-1.3.jar
 servlet-api-2.4.jar
 servlet-api-3.0.jar

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-3685) solrcloud crashes on startup due to excessive memory consumption

2012-07-27 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-3685?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13423821#comment-13423821
 ] 

Uwe Schindler commented on SOLR-3685:
-

I have no idea what the libraries bundled with Solr do beyond that, but as your 
problem seems to be related to cloud, it might be another JVM thing: direct memory 
(allocated by ByteBuffer.allocateDirect()). By default the JVM allows up to the 
heap size to be allocated in this space external to the heap, so your -Xmx is only 
half of the truth. Solr by itself does not use direct memory (only mmapped memory, 
but that is not resident), but I am not sure about ZooKeeper and all that cloud 
stuff (and maybe plugins like Tika extraction).

You can limit direct memory with: -XX:MaxDirectMemorySize=<size>

The VIRT column can additionally contain 2-3 times your index size, depending on 
pending commits, merges, ...

Please report back whether this changes anything!
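
(For illustration, a tiny sketch of what direct allocation looks like; this 
memory lives outside the Java heap and is capped by -XX:MaxDirectMemorySize 
rather than by -Xmx.)

import java.nio.ByteBuffer;

public class DirectMemoryExample {
    public static void main(String[] args) {
        // Allocated off-heap: with -Xmx98m this can still succeed, up to the
        // -XX:MaxDirectMemorySize limit (default: roughly the heap size).
        ByteBuffer buf = ByteBuffer.allocateDirect(64 * 1024 * 1024); // 64 MB off-heap
        System.out.println("direct buffer capacity = " + buf.capacity() + " bytes");
    }
}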

 solrcloud crashes on startup due to excessive memory consumption
 

 Key: SOLR-3685
 URL: https://issues.apache.org/jira/browse/SOLR-3685
 Project: Solr
  Issue Type: Bug
  Components: replication (java), SolrCloud
Affects Versions: 4.0-ALPHA
 Environment: Debian GNU/Linux Squeeze 64bit
 Solr 5.0-SNAPSHOT 1365667M - markus - 2012-07-25 19:09:43
Reporter: Markus Jelsma
Priority: Critical
 Fix For: 4.1

 Attachments: info.log


 There's a serious problem with restarting nodes, not cleaning old or unused 
 index directories and sudden replication and Java being killed by the OS due 
 to excessive memory allocation. Since SOLR-1781 was fixed index directories 
 get cleaned up when a node is being restarted cleanly, however, old or unused 
 index directories still pile up if Solr crashes or is being killed by the OS, 
 happening here.
 We have a six-node 64-bit Linux test cluster with each node having two 
 shards. There's 512MB RAM available and no swap. Each index is roughly 27MB 
 so about 50MB per node, this fits easily and works fine. However, if a node 
 is being restarted, Solr will consistently crash because it immediately eats 
 up all RAM. If swap is enabled Solr will eat an additional few 100MB's right 
 after start up.
 This cannot be solved by restarting Solr, it will just crash again and leave 
 index directories in place until the disk is full. The only way i can restart 
 a node safely is to delete the index directories and have it replicate from 
 another node. If i then restart the node it will crash almost consistently.
 I'll attach a log of one of the nodes.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-4263) remove conflicting versions of the same jar dependencies

2012-07-27 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-4263?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13423822#comment-13423822
 ] 

Uwe Schindler commented on LUCENE-4263:
---

As I said in the comment before: +1

 remove conflicting versions of the same jar dependencies
 

 Key: LUCENE-4263
 URL: https://issues.apache.org/jira/browse/LUCENE-4263
 Project: Lucene - Core
  Issue Type: Sub-task
Reporter: Robert Muir
 Attachments: LUCENE-4263.patch


 There are 3 instances of this in our build:
 asm-3.1.jar
 asm-debug-all-4.0.jar
 commons-compress-1.2.jar
 commons-compress-1.3.jar
 servlet-api-2.4.jar
 servlet-api-3.0.jar

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (LUCENE-4260) Factor subPackages out of resourceloader interface

2012-07-27 Thread Robert Muir (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-4260?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Muir resolved LUCENE-4260.
-

   Resolution: Fixed
Fix Version/s: 5.0
   4.0

 Factor subPackages out of resourceloader interface
 --

 Key: LUCENE-4260
 URL: https://issues.apache.org/jira/browse/LUCENE-4260
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Robert Muir
 Fix For: 4.0, 5.0

 Attachments: LUCENE-4260.patch


 From Uwe on LUCENE-4257:
 The comment about the subpackages: This should in reality not be in 
 ResourceLoader, its too Solr-specific. It is used internally by Solr, to 
 resolve those solr. fake packages depending on the context. We should 
 remove that from the general interface and only handle it internally in 
 SolrResourceLoader.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-4.x-Windows-Java7-64 - Build # 394 - Failure!

2012-07-27 Thread Policeman Jenkins Server
Build: 
http://jenkins.sd-datasolutions.de/job/Lucene-Solr-4.x-Windows-Java7-64/394/

1 tests failed.
REGRESSION:  
org.apache.lucene.benchmark.byTask.TestPerfTasksLogic.testBGSearchTaskThreads

Error Message:


Stack Trace:
java.lang.AssertionError
at 
__randomizedtesting.SeedInfo.seed([EB61B30EE4A6CD8F:365476D25C651C2D]:0)
at org.junit.Assert.fail(Assert.java:92)
at org.junit.Assert.assertTrue(Assert.java:43)
at org.junit.Assert.assertTrue(Assert.java:54)
at 
org.apache.lucene.benchmark.byTask.TestPerfTasksLogic.testBGSearchTaskThreads(TestPerfTasksLogic.java:159)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:601)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1995)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.access$1100(RandomizedRunner.java:132)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:818)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:877)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:891)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.TestRuleFieldCacheSanity$1.evaluate(TestRuleFieldCacheSanity.java:32)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
org.apache.lucene.util.TestRuleReportUncaughtExceptions$1.evaluate(TestRuleReportUncaughtExceptions.java:68)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:70)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:825)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.access$700(RandomizedRunner.java:132)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3$1.run(RandomizedRunner.java:671)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:697)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:736)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:747)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleReportUncaughtExceptions$1.evaluate(TestRuleReportUncaughtExceptions.java:68)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:38)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
org.apache.lucene.util.TestRuleNoInstanceHooksOverrides$1.evaluate(TestRuleNoInstanceHooksOverrides.java:53)
at 
org.apache.lucene.util.TestRuleNoStaticHooksShadowing$1.evaluate(TestRuleNoStaticHooksShadowing.java:52)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:36)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:70)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSuite(RandomizedRunner.java:605)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.access$400(RandomizedRunner.java:132)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$2.run(RandomizedRunner.java:551)




Build Log:
[...truncated 5225 lines...]
[junit4:junit4] Suite: org.apache.lucene.benchmark.byTask.TestPerfTasksLogic
[junit4:junit4] FAILURE 2.28s | TestPerfTasksLogic.testBGSearchTaskThreads
[junit4:junit4] Throwable #1: java.lang.AssertionError
[junit4:junit4]at 
__randomizedtesting.SeedInfo.seed([EB61B30EE4A6CD8F:365476D25C651C2D]:0)
[junit4:junit4]at org.junit.Assert.fail(Assert.java:92)
[junit4:junit4]at org.junit.Assert.assertTrue(Assert.java:43)
[junit4:junit4]at org.junit.Assert.assertTrue(Assert.java:54)
[junit4:junit4]at 

[jira] [Updated] (SOLR-3668) New Admin : DataImport : Specifying Custom Parameters

2012-07-27 Thread Stefan Matheis (steffkes) (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-3668?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stefan Matheis (steffkes) updated SOLR-3668:


Attachment: SOLR-3668.patch

Chantal, would you mind applying this one and having another look at it?

 New Admin : DataImport : Specifying Custom Parameters
 -

 Key: SOLR-3668
 URL: https://issues.apache.org/jira/browse/SOLR-3668
 Project: Solr
  Issue Type: Bug
  Components: web gui
Affects Versions: 4.0-ALPHA
 Environment: MacOS X 10.7.4, Safari 5.1.7
Reporter: Chantal Ackermann
Assignee: Stefan Matheis (steffkes)
 Attachments: SOLR-3668.patch


 I'm trying to run the following direct call via the WebGUI:
 http://localhost:9090/solr/issues/dataimport?command=full-import&importfile=/absolute/path/to/file.xml
 The above direct call produces this log output:
 24.07.2012 15:18:40 org.apache.solr.handler.dataimport.XPathEntityProcessor 
 initQuery
 WARNUNG: Failed for url : /absolute/path/to/file.xml
 When given an existing file, DIH works. But this is enough to show the 
 difference between the direct call and the call via the WebGUI.
 Steps I do in the WebGUI:
 0. In a multicore environment where one core is called issues
 1. Open the tab of core issues, and there the sub-item Dataimport:
 http://localhost:9090/solr/#/issues/dataimport//dataimport
 2. Specify a custom parameter in the text field labeled Custom Parameters, 
 like importfile=/absolute/path/to/file.xml
 Resulting log output:
 24.07.2012 15:22:47 org.apache.solr.handler.dataimport.XPathEntityProcessor 
 initQuery
 WARNUNG: Failed for url : 
 java.lang.RuntimeException: java.io.FileNotFoundException: Could not find 
 file: 
 (no filename specified)
 When trying with an existing file, the same output (no filename) is logged.
 I've tried to find out how to specify the custom parameters by looking into 
 dataimport.js, but it did not help me (I did not dwell on it, though). If the 
 parameter has to be specified in a different way, it would be great if a little 
 info could be added right next to the field.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-3685) solrcloud crashes on startup due to excessive memory consumption

2012-07-27 Thread Markus Jelsma (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-3685?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13423841#comment-13423841
 ] 

Markus Jelsma commented on SOLR-3685:
-

OK, I have increased my DocumentCache again to reproduce the problem and changed 
-XX:MaxDirectMemorySize from 100m to 10m, but RES is still climbing at the same 
rate as before, so no change. We don't use Tika, only ZooKeeper.

About virtual memory: that also climbs to ~800MB, which is many times the index 
size. There are no pending commits or merges right after start-up.

There may be some cloud-replication-related process that eats the RAM.

Thanks
Thanks

 solrcloud crashes on startup due to excessive memory consumption
 

 Key: SOLR-3685
 URL: https://issues.apache.org/jira/browse/SOLR-3685
 Project: Solr
  Issue Type: Bug
  Components: replication (java), SolrCloud
Affects Versions: 4.0-ALPHA
 Environment: Debian GNU/Linux Squeeze 64bit
 Solr 5.0-SNAPSHOT 1365667M - markus - 2012-07-25 19:09:43
Reporter: Markus Jelsma
Priority: Critical
 Fix For: 4.1

 Attachments: info.log


 There's a serious problem with restarting nodes, not cleaning old or unused 
 index directories and sudden replication and Java being killed by the OS due 
 to excessive memory allocation. Since SOLR-1781 was fixed index directories 
 get cleaned up when a node is being restarted cleanly, however, old or unused 
 index directories still pile up if Solr crashes or is being killed by the OS, 
 happening here.
 We have a six-node 64-bit Linux test cluster with each node having two 
 shards. There's 512MB RAM available and no swap. Each index is roughly 27MB 
 so about 50MB per node, this fits easily and works fine. However, if a node 
 is being restarted, Solr will consistently crash because it immediately eats 
 up all RAM. If swap is enabled Solr will eat an additional few 100MB's right 
 after start up.
 This cannot be solved by restarting Solr, it will just crash again and leave 
 index directories in place until the disk is full. The only way i can restart 
 a node safely is to delete the index directories and have it replicate from 
 another node. If i then restart the node it will crash almost consistently.
 I'll attach a log of one of the nodes.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-4208) Spatial distance relevancy should use score of 1/distance

2012-07-27 Thread Chris Male (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-4208?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13423851#comment-13423851
 ] 

Chris Male commented on LUCENE-4208:


I'm considered this obfuscates the actual distance too much, making it 
difficult to retrieve x again.  It's not impossible but suddenly anybody 
wanting to retrieve the actual distance must calculate c again.
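
(For concreteness, a few lines of arithmetic - a sketch only; c stands in for the 
"different constant than 1" mentioned in the issue. The raw distance can be 
recovered from a 1/distance-style score, but only by whoever still knows c.)

public class ScoreVsDistance {
    public static void main(String[] args) {
        double c = 1.0;           // the constant under discussion
        double distance = 42.0;   // some raw distance from the query point

        double score = c / distance;   // larger score == more relevant
        double recovered = c / score;  // getting the distance back requires c

        System.out.println("score = " + score + ", recovered distance = " + recovered);
    }
}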

 Spatial distance relevancy should use score of 1/distance
 -

 Key: LUCENE-4208
 URL: https://issues.apache.org/jira/browse/LUCENE-4208
 Project: Lucene - Core
  Issue Type: New Feature
  Components: modules/spatial
Reporter: David Smiley
 Fix For: 4.0


 The SpatialStrategy.makeQuery() at the moment uses the distance as the score 
 (although some strategies -- TwoDoubles if I recall might not do anything 
 which would be a bug).  The distance is a poor value to use as the score 
 because the score should be related to relevancy, and the distance itself is 
 inversely related to that.  A score of 1/distance would be nice.  Another 
 alternative is earthCircumference/2 - distance, although I like 1/distance 
 better.  Maybe use a different constant than 1.
 Credit: this is Chris Male's idea.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (LUCENE-4208) Spatial distance relevancy should use score of 1/distance

2012-07-27 Thread Chris Male (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-4208?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13423851#comment-13423851
 ] 

Chris Male edited comment on LUCENE-4208 at 7/27/12 1:02 PM:
-

I'm concerned this obfuscates the actual distance too much, making it difficult 
to retrieve x again.  It's not impossible but suddenly anybody wanting to 
retrieve the actual distance must calculate c again.

  was (Author: cmale):
I'm considered this obfuscates the actual distance too much, making it 
difficult to retrieve x again.  It's not impossible but suddenly anybody 
wanting to retrieve the actual distance must calculate c again.
  
 Spatial distance relevancy should use score of 1/distance
 -

 Key: LUCENE-4208
 URL: https://issues.apache.org/jira/browse/LUCENE-4208
 Project: Lucene - Core
  Issue Type: New Feature
  Components: modules/spatial
Reporter: David Smiley
 Fix For: 4.0


 The SpatialStrategy.makeQuery() at the moment uses the distance as the score 
 (although some strategies -- TwoDoubles if I recall might not do anything 
 which would be a bug).  The distance is a poor value to use as the score 
 because the score should be related to relevancy, and the distance itself is 
 inversely related to that.  A score of 1/distance would be nice.  Another 
 alternative is earthCircumference/2 - distance, although I like 1/distance 
 better.  Maybe use a different constant than 1.
 Credit: this is Chris Male's idea.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-3312) Break out StorableField from IndexableField

2012-07-27 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-3312?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13423853#comment-13423853
 ] 

Uwe Schindler commented on LUCENE-3312:
---

Hi Nikola, should I merge up the branch to trunk? Do you have anything to 
commit?

 Break out StorableField from IndexableField
 ---

 Key: LUCENE-3312
 URL: https://issues.apache.org/jira/browse/LUCENE-3312
 Project: Lucene - Core
  Issue Type: Improvement
  Components: core/index
Reporter: Michael McCandless
Assignee: Nikola Tankovic
  Labels: gsoc2012, lucene-gsoc-12
 Fix For: Field Type branch

 Attachments: lucene-3312-patch-01.patch, lucene-3312-patch-02.patch, 
 lucene-3312-patch-03.patch, lucene-3312-patch-04.patch, 
 lucene-3312-patch-05.patch, lucene-3312-patch-06.patch, 
 lucene-3312-patch-07.patch


 In the field type branch we have strongly decoupled
 Document/Field/FieldType impl from the indexer, by having only a
 narrow API (IndexableField) passed to IndexWriter.  This frees apps up to
 use their own documents instead of the user-space impls we provide
 in oal.document.
 Similarly, with LUCENE-3309, we've done the same thing on the
 doc/field retrieval side (from IndexReader), with the
 StoredFieldsVisitor.
 But, maybe we should break out StorableField from IndexableField,
 such that when you index a doc you provide two Iterables -- one for the
 IndexableFields and one for the StorableFields.  Either can be null.
 One downside is a possible perf hit for fields that are both indexed and
 stored (ie, we visit them twice, look up their name in a hash twice,
 etc.).  But the upside is a cleaner separation of concerns in the API.
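
As a rough sketch of the proposed split (the types and the two-Iterable
addDocument() below are stand-ins invented for illustration, not the
field-type branch API):

{code}
import java.util.Arrays;
import java.util.List;

// Stand-in types invented for illustration; not the field-type branch code.
public class FieldSplitSketch {

  interface IndexableFieldSketch { String name(); }
  interface StorableFieldSketch  { String name(); }

  // a field that is both indexed and stored exposes both views -- the
  // "visit it twice" downside mentioned in the issue
  static class TextField implements IndexableFieldSketch, StorableFieldSketch {
    private final String name;
    TextField(String name) { this.name = name; }
    public String name() { return name; }
  }

  // either iterable may be null: documents can be index-only or store-only
  static void addDocument(Iterable<? extends IndexableFieldSketch> indexed,
                          Iterable<? extends StorableFieldSketch> stored) {
    if (indexed != null) {
      for (IndexableFieldSketch f : indexed) System.out.println("invert " + f.name());
    }
    if (stored != null) {
      for (StorableFieldSketch f : stored) System.out.println("store  " + f.name());
    }
  }

  public static void main(String[] args) {
    List<TextField> fields = Arrays.asList(new TextField("title"), new TextField("body"));
    addDocument(fields, fields); // both indexed and stored: walked twice
  }
}
{code}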

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



RE: svn commit: r1366336 - in /lucene/dev/branches/branch_4x: ./ lucene/ lucene/analysis/ lucene/analysis/common/src/java/org/apache/lucene/analysis/standard/ lucene/analysis/common/src/java/org/apach

2012-07-27 Thread Steven A Rowe
Thanks Uwe. - Steve

-Original Message-
From: uschind...@apache.org [mailto:uschind...@apache.org] 
Sent: Friday, July 27, 2012 7:03 AM
To: comm...@lucene.apache.org
Subject: svn commit: r1366336 - in /lucene/dev/branches/branch_4x: ./ lucene/ 
lucene/analysis/ 
lucene/analysis/common/src/java/org/apache/lucene/analysis/standard/ 
lucene/analysis/common/src/java/org/apache/lucene/analysis/standard/std31/ 
lucene/analysis/common...

Author: uschindler
Date: Fri Jul 27 11:02:39 2012
New Revision: 1366336

URL: http://svn.apache.org/viewvc?rev=1366336&view=rev
Log:
Merged revision(s) 1366335 from lucene/dev/trunk:
Fix license

Modified:
lucene/dev/branches/branch_4x/   (props changed)
lucene/dev/branches/branch_4x/lucene/   (props changed)
lucene/dev/branches/branch_4x/lucene/analysis/   (props changed)

lucene/dev/branches/branch_4x/lucene/analysis/common/src/java/org/apache/lucene/analysis/standard/ClassicTokenizerImpl.java

lucene/dev/branches/branch_4x/lucene/analysis/common/src/java/org/apache/lucene/analysis/standard/ClassicTokenizerImpl.jflex

lucene/dev/branches/branch_4x/lucene/analysis/common/src/java/org/apache/lucene/analysis/standard/StandardTokenizerImpl.java

lucene/dev/branches/branch_4x/lucene/analysis/common/src/java/org/apache/lucene/analysis/standard/StandardTokenizerImpl.jflex

lucene/dev/branches/branch_4x/lucene/analysis/common/src/java/org/apache/lucene/analysis/standard/std31/StandardTokenizerImpl31.java

lucene/dev/branches/branch_4x/lucene/analysis/common/src/java/org/apache/lucene/analysis/standard/std31/StandardTokenizerImpl31.jflex

lucene/dev/branches/branch_4x/lucene/analysis/common/src/java/org/apache/lucene/analysis/standard/std31/UAX29URLEmailTokenizerImpl31.java

lucene/dev/branches/branch_4x/lucene/analysis/common/src/java/org/apache/lucene/analysis/standard/std31/UAX29URLEmailTokenizerImpl31.jflex

lucene/dev/branches/branch_4x/lucene/analysis/common/src/java/org/apache/lucene/analysis/standard/std34/StandardTokenizerImpl34.java

lucene/dev/branches/branch_4x/lucene/analysis/common/src/java/org/apache/lucene/analysis/standard/std34/StandardTokenizerImpl34.jflex

lucene/dev/branches/branch_4x/lucene/analysis/common/src/java/org/apache/lucene/analysis/standard/std34/UAX29URLEmailTokenizerImpl34.java

lucene/dev/branches/branch_4x/lucene/analysis/common/src/java/org/apache/lucene/analysis/standard/std34/UAX29URLEmailTokenizerImpl34.jflex

Modified: 
lucene/dev/branches/branch_4x/lucene/analysis/common/src/java/org/apache/lucene/analysis/standard/ClassicTokenizerImpl.java
URL: 
http://svn.apache.org/viewvc/lucene/dev/branches/branch_4x/lucene/analysis/common/src/java/org/apache/lucene/analysis/standard/ClassicTokenizerImpl.java?rev=1366336&r1=1366335&r2=1366336&view=diff
==
--- 
lucene/dev/branches/branch_4x/lucene/analysis/common/src/java/org/apache/lucene/analysis/standard/ClassicTokenizerImpl.java
 (original)
+++ 
lucene/dev/branches/branch_4x/lucene/analysis/common/src/java/org/apache/lucene/analysis/standard/ClassicTokenizerImpl.java
 Fri Jul 27 11:02:39 2012
@@ -2,7 +2,7 @@
 
 package org.apache.lucene.analysis.standard;
 
-/**
+/*
  * Licensed to the Apache Software Foundation (ASF) under one or more
  * contributor license agreements.  See the NOTICE file distributed with
  * this work for additional information regarding copyright ownership.

Modified: 
lucene/dev/branches/branch_4x/lucene/analysis/common/src/java/org/apache/lucene/analysis/standard/ClassicTokenizerImpl.jflex
URL: 
http://svn.apache.org/viewvc/lucene/dev/branches/branch_4x/lucene/analysis/common/src/java/org/apache/lucene/analysis/standard/ClassicTokenizerImpl.jflex?rev=1366336&r1=1366335&r2=1366336&view=diff
==
--- 
lucene/dev/branches/branch_4x/lucene/analysis/common/src/java/org/apache/lucene/analysis/standard/ClassicTokenizerImpl.jflex
 (original)
+++ 
lucene/dev/branches/branch_4x/lucene/analysis/common/src/java/org/apache/lucene/analysis/standard/ClassicTokenizerImpl.jflex
 Fri Jul 27 11:02:39 2012
@@ -1,6 +1,6 @@
 package org.apache.lucene.analysis.standard;
 
-/**
+/*
  * Licensed to the Apache Software Foundation (ASF) under one or more
  * contributor license agreements.  See the NOTICE file distributed with
  * this work for additional information regarding copyright ownership.

Modified: 
lucene/dev/branches/branch_4x/lucene/analysis/common/src/java/org/apache/lucene/analysis/standard/StandardTokenizerImpl.java
URL: 
http://svn.apache.org/viewvc/lucene/dev/branches/branch_4x/lucene/analysis/common/src/java/org/apache/lucene/analysis/standard/StandardTokenizerImpl.java?rev=1366336&r1=1366335&r2=1366336&view=diff
==
--- 

[jira] [Resolved] (LUCENE-4263) remove conflicting versions of the same jar dependencies

2012-07-27 Thread Robert Muir (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-4263?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Muir resolved LUCENE-4263.
-

   Resolution: Fixed
Fix Version/s: 5.0
   4.0

I will leave all servlet-api jars up to Uwe. I'll figure out how to add an 
exception for it in LUCENE-4262.

 remove conflicting versions of the same jar dependencies
 

 Key: LUCENE-4263
 URL: https://issues.apache.org/jira/browse/LUCENE-4263
 Project: Lucene - Core
  Issue Type: Sub-task
Reporter: Robert Muir
 Fix For: 4.0, 5.0

 Attachments: LUCENE-4263.patch


 There are 3 instances of this in our build:
 asm-3.1.jar
 asm-debug-all-4.0.jar
 commons-compress-1.2.jar
 commons-compress-1.3.jar
 servlet-api-2.4.jar
 servlet-api-3.0.jar

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-trunk-Linux-Java7-64 - Build # 722 - Failure!

2012-07-27 Thread Policeman Jenkins Server
Build: 
http://jenkins.sd-datasolutions.de/job/Lucene-Solr-trunk-Linux-Java7-64/722/

1 tests failed.
REGRESSION:  
org.apache.solr.core.CoreContainerCoreInitFailuresTest.testFlowBadFromStart

Error Message:
reload exception doesn't mention bad prolog: El contenido no está permitido en 
el prólogo.

Stack Trace:
java.lang.AssertionError: reload exception doesn't mention bad prolog: El 
contenido no está permitido en el prólogo.
at 
__randomizedtesting.SeedInfo.seed([3B0A7CB954F1A5B:994A6C3371B0945F]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at 
org.apache.solr.core.CoreContainerCoreInitFailuresTest.testFlowBadFromStart(CoreContainerCoreInitFailuresTest.java:255)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:601)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1995)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.access$1100(RandomizedRunner.java:132)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:818)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:877)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:891)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.TestRuleFieldCacheSanity$1.evaluate(TestRuleFieldCacheSanity.java:32)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
org.apache.lucene.util.TestRuleReportUncaughtExceptions$1.evaluate(TestRuleReportUncaughtExceptions.java:68)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:70)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:825)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.access$700(RandomizedRunner.java:132)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3$1.run(RandomizedRunner.java:671)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:697)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:736)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:747)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleReportUncaughtExceptions$1.evaluate(TestRuleReportUncaughtExceptions.java:68)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:38)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
org.apache.lucene.util.TestRuleNoInstanceHooksOverrides$1.evaluate(TestRuleNoInstanceHooksOverrides.java:53)
at 
org.apache.lucene.util.TestRuleNoStaticHooksShadowing$1.evaluate(TestRuleNoStaticHooksShadowing.java:52)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:36)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:70)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSuite(RandomizedRunner.java:605)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.access$400(RandomizedRunner.java:132)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$2.run(RandomizedRunner.java:551)




Build Log:
[...truncated 7115 lines...]
[junit4:junit4] Suite: org.apache.solr.core.CoreContainerCoreInitFailuresTest
[junit4:junit4] FAILURE 0.41s J1 | 
CoreContainerCoreInitFailuresTest.testFlowBadFromStart
[junit4:junit4] 

Re: [JENKINS] Lucene-Solr-tests-only-4.x-java7 - Build # 204 - Failure

2012-07-27 Thread Chris Hostetter

: This test is broken for locales other than 'en' !

Shit ... i didn't realize those error messages were localized.

I'll disable and circle back when i have some time


: On Fri, Jul 27, 2012 at 1:13 AM, Apache Jenkins Server
: jenk...@builds.apache.org wrote:
:  Build: https://builds.apache.org/job/Lucene-Solr-tests-only-4.x-java7/204/
: 
:  1 tests failed.
:  FAILED:  
org.apache.solr.core.CoreContainerCoreInitFailuresTest.testFlowBadFromStart
: 
:  Error Message:
:  reload exception doesn't mention bad prolog: El contenido no está permitido 
en el prólogo.
: 
:  Stack Trace:
:  java.lang.AssertionError: reload exception doesn't mention bad prolog: El 
contenido no está permitido en el prólogo.
:  at 
__randomizedtesting.SeedInfo.seed([769CD3367D6B0633:EC6618CE99948837]:0)
:  at org.junit.Assert.fail(Assert.java:93)
:  at org.junit.Assert.assertTrue(Assert.java:43)
:  at 
org.apache.solr.core.CoreContainerCoreInitFailuresTest.testFlowBadFromStart(CoreContainerCoreInitFailuresTest.java:255)
:  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
:  at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
:  at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
:  at java.lang.reflect.Method.invoke(Method.java:601)
:  at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1995)
:  at 
com.carrotsearch.randomizedtesting.RandomizedRunner.access$1100(RandomizedRunner.java:132)
:  at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:818)
:  at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:877)
:  at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:891)
:  at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
:  at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
:  at 
org.apache.lucene.util.TestRuleFieldCacheSanity$1.evaluate(TestRuleFieldCacheSanity.java:32)
:  at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
:  at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
:  at 
org.apache.lucene.util.TestRuleReportUncaughtExceptions$1.evaluate(TestRuleReportUncaughtExceptions.java:68)
:  at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
:  at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:70)
:  at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
:  at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:825)
:  at 
com.carrotsearch.randomizedtesting.RandomizedRunner.access$700(RandomizedRunner.java:132)
:  at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3$1.run(RandomizedRunner.java:671)
:  at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:697)
:  at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:736)
:  at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:747)
:  at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
:  at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
:  at 
org.apache.lucene.util.TestRuleReportUncaughtExceptions$1.evaluate(TestRuleReportUncaughtExceptions.java:68)
:  at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:38)
:  at 
org.apache.lucene.util.TestRuleIcuHack$1.evaluate(TestRuleIcuHack.java:51)
:  at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
:  at 
org.apache.lucene.util.TestRuleNoInstanceHooksOverrides$1.evaluate(TestRuleNoInstanceHooksOverrides.java:53)
:  at 
org.apache.lucene.util.TestRuleNoStaticHooksShadowing$1.evaluate(TestRuleNoStaticHooksShadowing.java:52)
:  at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:36)
:  at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
:  at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:70)
:  at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
:  at 

[jira] [Updated] (LUCENE-4262) add ivy sync=true, consolidate licensing, disallow conflicting dependencies

2012-07-27 Thread Robert Muir (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-4262?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Muir updated LUCENE-4262:


Attachment: LUCENE-4262.patch

I tried this out: seems to work well.

I moved all license .txt/.sha1 to lucene/licenses and solr/licenses 
respectively.

I added sync=true (it's a property though) so that old jars get cleaned during 
resolve (eliminating the clean-jars hassle).

I disabled this sync for solr-core and solrj since they share the same lib/ 
directory, but it's working everywhere else.

Everything works, but I have no idea how to fix the license checker. The sha1 
checker also needs to be fixed (or is it part of the license checker?). 

 add ivy sync=true, consolidate licensing, disallow conflicting dependencies
 ---

 Key: LUCENE-4262
 URL: https://issues.apache.org/jira/browse/LUCENE-4262
 Project: Lucene - Core
  Issue Type: Task
Reporter: Robert Muir
 Attachments: LUCENE-4262.patch


 From a discussion on IRC:
 * Hoss had the typical clean-jars problem
 * I mentioned we should instead use the ivy option of sync=true, which is 
 just like retrieve except nukes any stuff (e.g. old jars) that shouldn't be 
 there.
 * the problem with this is it would also nuke the .sha1 and licensing stuff 
 (its easy to try and see).
 So the idea came about we could just have a licenses/ folder for each 
 product (lucene, solr) and this has the .sha1's and licensing stuff.
 Steve asked what if different modules depended on different versions of the 
 same jar: this is bogus and we should just disallow it! 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Reopened] (SOLR-3634) make CoreAdminHandler STATUS action report back info about core initialization failures

2012-07-27 Thread Hoss Man (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-3634?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hoss Man reopened SOLR-3634:



@Ignore("SOLR-3634: some errors are localized and tests fail in non en, 
disabling temporarily")

Committed revision 1366460. - trunk
Committed revision 1366464. - 4x


 make CoreAdminHandler STATUS action report back info about core 
 initialization failures 
 

 Key: SOLR-3634
 URL: https://issues.apache.org/jira/browse/SOLR-3634
 Project: Solr
  Issue Type: Sub-task
Reporter: Hoss Man
Assignee: Hoss Man
 Fix For: 4.0, 5.0

 Attachments: SOLR-3634.patch, SOLR-3634.patch


 see parent issue for context.
 * CoreContainer should keep a record of failures related to core 
 initialization, keyed off of the core name
 failures should be recorded for both solr.xml initialization, as well as 
 explicit CREATE/RELOAD requests that fail
 * record of failures should be tracked indefinitely until some valid core 
 begins using that name
 ** ie: actions which register a SolrCore with a name (RELOAD, CREATE, SWAP, 
 ALIAS) should reset any existing failures associated with that name
 * failures should be available to clients via CoreAdminHandler STATUS actions

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-4262) add ivy sync=true, consolidate licensing, disallow conflicting dependencies

2012-07-27 Thread Robert Muir (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-4262?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Muir updated LUCENE-4262:


Attachment: LUCENE-4262.patch

hacked up the license checker for lucene.

solr not yet working

 add ivy sync=true, consolidate licensing, disallow conflicting dependencies
 ---

 Key: LUCENE-4262
 URL: https://issues.apache.org/jira/browse/LUCENE-4262
 Project: Lucene - Core
  Issue Type: Task
Reporter: Robert Muir
 Attachments: LUCENE-4262.patch, LUCENE-4262.patch


 From a discussion on IRC:
 * Hoss had the typical clean-jars problem
 * I mentioned we should instead use the ivy option of sync=true, which is 
 just like retrieve except nukes any stuff (e.g. old jars) that shouldn't be 
 there.
 * the problem with this is it would also nuke the .sha1 and licensing stuff 
 (its easy to try and see).
 So the idea came about we could just have a licenses/ folder for each 
 product (lucene, solr) and this has the .sha1's and licensing stuff.
 Steve asked what if different modules depended on different versions of the 
 same jar: this is bogus and we should just disallow it! 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-3685) solrcloud crashes on startup due to excessive memory consumption

2012-07-27 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-3685?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13424026#comment-13424026
 ] 

Mark Miller commented on SOLR-3685:
---

Seems like there are perhaps two or three issues here.

1. The resource usage. This may just be because replication causes two 
searchers to be open at the same time briefly? I really don't have any guesses 
at the moment. 

2. On a non-graceful shutdown, old index dirs may end up left behind. We could 
look at cleaning them up on startup, but that should be its own issue.

3. You claim you are replicating on startup even though the shards should be in 
sync. You should not be replicating in that case.

 solrcloud crashes on startup due to excessive memory consumption
 

 Key: SOLR-3685
 URL: https://issues.apache.org/jira/browse/SOLR-3685
 Project: Solr
  Issue Type: Bug
  Components: replication (java), SolrCloud
Affects Versions: 4.0-ALPHA
 Environment: Debian GNU/Linux Squeeze 64bit
 Solr 5.0-SNAPSHOT 1365667M - markus - 2012-07-25 19:09:43
Reporter: Markus Jelsma
Priority: Critical
 Fix For: 4.1

 Attachments: info.log


 There's a serious problem with restarting nodes: old or unused index 
 directories are not cleaned up, replication kicks in unexpectedly, and Java 
 gets killed by the OS due to excessive memory allocation. Since SOLR-1781 was 
 fixed, index directories get cleaned up when a node is restarted cleanly; 
 however, old or unused index directories still pile up if Solr crashes or is 
 killed by the OS, which is what happens here.
 We have a six-node 64-bit Linux test cluster with each node having two 
 shards. There's 512MB RAM available and no swap. Each index is roughly 27MB, 
 so about 50MB per node; this fits easily and works fine. However, if a node 
 is restarted, Solr will consistently crash because it immediately eats up 
 all RAM. If swap is enabled, Solr will eat an additional few hundred MB right 
 after start-up.
 This cannot be solved by restarting Solr; it will just crash again and leave 
 index directories in place until the disk is full. The only way I can restart 
 a node safely is to delete the index directories and have it replicate from 
 another node. If I then restart the node it will crash almost consistently.
 I'll attach a log of one of the nodes.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (LUCENE-3925) Spatial field types should not store doc frequencies or positions

2012-07-27 Thread David Smiley (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-3925?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Smiley resolved LUCENE-3925.
--

   Resolution: Fixed
Fix Version/s: 5.0

Committed 1-liner to 4x & trunk just now

 Spatial field types should not store doc frequencies or positions
 -

 Key: LUCENE-3925
 URL: https://issues.apache.org/jira/browse/LUCENE-3925
 Project: Lucene - Core
  Issue Type: Improvement
  Components: modules/spatial
Reporter: Simon Willnauer
Assignee: David Smiley
Priority: Minor
 Fix For: 4.0, 5.0

   Original Estimate: 0.5h
  Remaining Estimate: 0.5h

 It appears the correction is simply to supply IndexOptions.DOCS_ONLY.
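
As a sketch of what supplying DOCS_ONLY looks like against the 4.0 FieldType
API -- illustrative only; the actual one-liner lives in the spatial strategy
code and is not shown in this thread:

{code}
import org.apache.lucene.document.FieldType;
import org.apache.lucene.index.FieldInfo.IndexOptions;

// Illustrative only; the committed change sets this inside the spatial module.
public final class SpatialFieldTypeSketch {
  public static FieldType docsOnlyType() {
    FieldType ft = new FieldType();
    ft.setIndexed(true);
    ft.setTokenized(true);
    ft.setOmitNorms(true);
    ft.setIndexOptions(IndexOptions.DOCS_ONLY); // drop term frequencies and positions
    ft.freeze();
    return ft;
  }
}
{code}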

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Closed] (SOLR-3051) Solr 3.1 ant eclipse doesn't create .project.

2012-07-27 Thread Eric Pugh (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-3051?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Pugh closed SOLR-3051.
---

Resolution: Implemented

This bug has been dealt with already at some point!

 Solr 3.1 ant eclipse doesn't create .project.
 ---

 Key: SOLR-3051
 URL: https://issues.apache.org/jira/browse/SOLR-3051
 Project: Solr
  Issue Type: Bug
  Components: Build
Affects Versions: 3.1
Reporter: Eric Pugh
Priority: Minor
 Attachments: dot.project, fix_eclipse.patch


 I know this is pretty small potatoes, but I went to apply a patch to some 3.1 
 code, and ran ant eclipse, but my project didn't import cleanly.  I had 
 checked it out from the 3.1 branch in SVN.  I checked and the eclipse 
 .project file is missing.  
 Not sure what the policy of updates to already released versions is when it's 
 this minor?  I'm tossing a patch file out here for 3.1. 
 One thought I had is: can we version the project name in the eclipse file 
 when we branch?   I typically have a couple of versions of Solr checked out, 
 and I have to manually tweak them to have them all loaded in Eclipse.  If 
 this sounds like something that we want to add to the build, I can attach a 
 ticket.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: [JENKINS] Lucene-Solr-tests-only-4.x-java7 - Build # 204 - Failure

2012-07-27 Thread Chris Hostetter

: Shit ... i didn't realize those error messages were localized.

I'm still going to make the test not care if the message is localized, 
but...

FWIW: I just tried the reproduce line from this failure and confirmed that 
in my JVM, these SAX exceptions are *not* localized (I knew I had tested 
enough locale iters to have seen plenty of non 'en' locales!) ... so 
there's definitely still some variability in the reproducibility of our 
tests and how various JVMs work.

ant test  -Dtestcase=CoreContainerCoreInitFailuresTest 
-Dtests.method=testFlowBadFromStart -Dtests.seed=769CD3367D6B0633 
-Dtests.multiplier=3 -Dtests.slow=true -Dtests.locale=es_CO 
-Dtests.timezone=Asia/Bangkok -Dtests.file.encoding=US-ASCII

...anyone know what controls if/when things like SAXParseException 
messages get localized?


-Hoss

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (LUCENE-4265) Name eclipse project after the branch: lucene_solr_branch_3x, lucene_solr_branch_4x, lucene_solr_trunk instead of all the same.

2012-07-27 Thread Eric Pugh (JIRA)
Eric Pugh created LUCENE-4265:
-

 Summary: Name eclipse project after the branch:  
lucene_solr_branch_3x, lucene_solr_branch_4x, lucene_solr_trunk instead of all 
the same.
 Key: LUCENE-4265
 URL: https://issues.apache.org/jira/browse/LUCENE-4265
 Project: Lucene - Core
  Issue Type: Improvement
Affects Versions: 3.6.1, 4.0-ALPHA
Reporter: Eric Pugh
Priority: Minor
 Attachments: 3x_name.patch

Right now when you run ant eclipse all the solr/lucene projects are called 
lucene_solr, but what if I have both 4x and 3x, and now trunk?  I have to 
manually edit the classpath.  why not name them after the branch they are on.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-4265) Name eclipse project after the branch: lucene_solr_branch_3x, lucene_solr_branch_4x, lucene_solr_trunk instead of all the same.

2012-07-27 Thread Eric Pugh (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-4265?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Pugh updated LUCENE-4265:
--

Attachment: 3x_name.patch

Patch for 3x branch

 Name eclipse project after the branch:  lucene_solr_branch_3x, 
 lucene_solr_branch_4x, lucene_solr_trunk instead of all the same.
 

 Key: LUCENE-4265
 URL: https://issues.apache.org/jira/browse/LUCENE-4265
 Project: Lucene - Core
  Issue Type: Improvement
Affects Versions: 4.0-ALPHA, 3.6.1
Reporter: Eric Pugh
Priority: Minor
 Attachments: 3x_name.patch


 Right now when you run ant eclipse all the solr/lucene projects are called 
 lucene_solr, but what if I have both 4x and 3x, and now trunk?  I have to 
 manually edit the classpath.  why not name them after the branch they are on.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-4265) Name eclipse project after the branch: lucene_solr_branch_3x, lucene_solr_branch_4x, lucene_solr_trunk instead of all the same.

2012-07-27 Thread Eric Pugh (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-4265?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Pugh updated LUCENE-4265:
--

Attachment: dot.project

Patch for 4x branch

 Name eclipse project after the branch:  lucene_solr_branch_3x, 
 lucene_solr_branch_4x, lucene_solr_trunk instead of all the same.
 

 Key: LUCENE-4265
 URL: https://issues.apache.org/jira/browse/LUCENE-4265
 Project: Lucene - Core
  Issue Type: Improvement
Affects Versions: 4.0-ALPHA, 3.6.1
Reporter: Eric Pugh
Priority: Minor
 Attachments: 3x_name.patch, dot.project


 Right now when you run ant eclipse all the solr/lucene projects are called 
 lucene_solr, but what if I have both 4x and 3x, and now trunk?  I have to 
 manually edit the classpath.  why not name them after the branch they are on.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-4262) add ivy sync=true, consolidate licensing, disallow conflicting dependencies

2012-07-27 Thread Robert Muir (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-4262?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Muir updated LUCENE-4262:


Attachment: LUCENE-4262.patch

Updated patch; now everything works.

I will commit soon and end this clean-jars hell (for the most part; I'll open a 
separate issue for solr/core and solr/solrj sharing the same lib directory, we 
need to fix that).

 add ivy sync=true, consolidate licensing, disallow conflicting dependencies
 ---

 Key: LUCENE-4262
 URL: https://issues.apache.org/jira/browse/LUCENE-4262
 Project: Lucene - Core
  Issue Type: Task
Reporter: Robert Muir
 Attachments: LUCENE-4262.patch, LUCENE-4262.patch, LUCENE-4262.patch


 From a discussion on IRC:
 * Hoss had the typical clean-jars problem
 * I mentioned we should instead use the ivy option of sync=true, which is 
 just like retrieve except nukes any stuff (e.g. old jars) that shouldn't be 
 there.
 * the problem with this is it would also nuke the .sha1 and licensing stuff 
 (its easy to try and see).
 So the idea came about we could just have a licenses/ folder for each 
 product (lucene, solr) and this has the .sha1's and licensing stuff.
 Steve asked what if different modules depended on different versions of the 
 same jar: this is bogus and we should just disallow it! 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: [JENKINS] Lucene-Solr-tests-only-4.x-java7 - Build # 204 - Failure

2012-07-27 Thread Robert Muir
On Fri, Jul 27, 2012 at 2:29 PM, Chris Hostetter
hossman_luc...@fucit.org wrote:

 FWIW: I just tried the reproduce line from this failure and confirmed that
 in my JVM, these SAX exceptions are *not* localized (I knew I had tested
 enough locale iters to have seen plenty of non 'en' locales!) ... so
 there's definitely still some variability in the reproducibility of our
 tests and how various JVMs work.

 ant test  -Dtestcase=CoreContainerCoreInitFailuresTest 
 -Dtests.method=testFlowBadFromStart -Dtests.seed=769CD3367D6B0633 
 -Dtests.multiplier=3 -Dtests.slow=true -Dtests.locale=es_CO 
 -Dtests.timezone=Asia/Bangkok -Dtests.file.encoding=US-ASCII

 ...anyone know what controls if/when things like SAXParseException
 messages get localized?


Hoss and I debugged this. This is a bug in java7.

In java6 it works correctly. In java7, Exception.getMessage returns a
localized message. But it should not do this; it should only do this
in getLocalizedMessage.
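
A sketch of the contract being described -- not JDK code; the exception class
and the ParserMessages bundle name are made up: getMessage() stays
locale-independent, and only getLocalizedMessage() consults the default locale.

{code}
import java.util.Locale;
import java.util.MissingResourceException;
import java.util.ResourceBundle;

// Sketch of the intended Throwable contract, not JDK source. The bundle name
// "ParserMessages" and this exception class are hypothetical.
public class LocalizableParseException extends Exception {
  private final String messageKey;

  public LocalizableParseException(String messageKey, String englishMessage) {
    super(englishMessage);           // getMessage() stays untranslated
    this.messageKey = messageKey;
  }

  @Override
  public String getLocalizedMessage() {
    try {
      // only the localized accessor consults the default locale
      return ResourceBundle.getBundle("ParserMessages", Locale.getDefault())
          .getString(messageKey);
    } catch (MissingResourceException e) {
      return getMessage();           // fall back to the untranslated text
    }
  }
}
{code}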

-- 
lucidimagination.com

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-4262) add ivy sync=true, consolidate licensing, disallow conflicting dependencies

2012-07-27 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-4262?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13424087#comment-13424087
 ] 

Robert Muir commented on LUCENE-4262:
-

I committed this. I'll wait to see if Jenkins takes it before backporting.

 add ivy sync=true, consolidate licensing, disallow conflicting dependencies
 ---

 Key: LUCENE-4262
 URL: https://issues.apache.org/jira/browse/LUCENE-4262
 Project: Lucene - Core
  Issue Type: Task
Reporter: Robert Muir
 Attachments: LUCENE-4262.patch, LUCENE-4262.patch, LUCENE-4262.patch


 From a discussion on IRC:
 * Hoss had the typical clean-jars problem
 * I mentioned we should instead use the ivy option of sync=true, which is 
 just like retrieve except nukes any stuff (e.g. old jars) that shouldn't be 
 there.
 * the problem with this is it would also nuke the .sha1 and licensing stuff 
 (its easy to try and see).
 So the idea came about we could just have a licenses/ folder for each 
 product (lucene, solr) and this has the .sha1's and licensing stuff.
 Steve asked what if different modules depended on different versions of the 
 same jar: this is bogus and we should just disallow it! 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (LUCENE-4266) Update eclipse setting to match ivy configuration changes

2012-07-27 Thread Jessica Bonnie (JIRA)
Jessica Bonnie created LUCENE-4266:
--

 Summary: Update eclipse setting to match ivy configuration changes
 Key: LUCENE-4266
 URL: https://issues.apache.org/jira/browse/LUCENE-4266
 Project: Lucene - Core
  Issue Type: Bug
  Components: general/build
Affects Versions: 4.0-ALPHA
Reporter: Jessica Bonnie
Priority: Minor


Fixes dependencies for building code in eclipse.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-4266) Update eclipse setting to match ivy configuration changes

2012-07-27 Thread Jessica Bonnie (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-4266?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jessica Bonnie updated LUCENE-4266:
---

Attachment: eclipse_classpath.patch

 Update eclipse setting to match ivy configuration changes
 -

 Key: LUCENE-4266
 URL: https://issues.apache.org/jira/browse/LUCENE-4266
 Project: Lucene - Core
  Issue Type: Bug
  Components: general/build
Affects Versions: 4.0-ALPHA
Reporter: Jessica Bonnie
Priority: Minor
 Attachments: eclipse_classpath.patch


 Fixes dependencies for building code in eclipse.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (LUCENE-4262) add ivy sync=true, consolidate licensing, disallow conflicting dependencies

2012-07-27 Thread Robert Muir (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-4262?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Muir resolved LUCENE-4262.
-

   Resolution: Fixed
Fix Version/s: 5.0
   4.0

We should fix solr/core and solr/solrj sharing the same lib/ so that we can 
end this clean-jars hassle forever :)

 add ivy sync=true, consolidate licensing, disallow conflicting dependencies
 ---

 Key: LUCENE-4262
 URL: https://issues.apache.org/jira/browse/LUCENE-4262
 Project: Lucene - Core
  Issue Type: Task
Reporter: Robert Muir
 Fix For: 4.0, 5.0

 Attachments: LUCENE-4262.patch, LUCENE-4262.patch, LUCENE-4262.patch


 From a discussion on IRC:
 * Hoss had the typical clean-jars problem
 * I mentioned we should instead use the ivy option of sync=true, which is 
 just like retrieve except nukes any stuff (e.g. old jars) that shouldn't be 
 there.
 * the problem with this is it would also nuke the .sha1 and licensing stuff 
 (its easy to try and see).
 So the idea came about we could just have a licenses/ folder for each 
 product (lucene, solr) and this has the .sha1's and licensing stuff.
 Steve asked what if different modules depended on different versions of the 
 same jar: this is bogus and we should just disallow it! 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (LUCENE-4266) Update eclipse setting to match ivy configuration changes

2012-07-27 Thread Robert Muir (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-4266?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Muir resolved LUCENE-4266.
-

   Resolution: Fixed
Fix Version/s: 5.0
   4.0

Thanks!

 Update eclipse setting to match ivy configuration changes
 -

 Key: LUCENE-4266
 URL: https://issues.apache.org/jira/browse/LUCENE-4266
 Project: Lucene - Core
  Issue Type: Bug
  Components: general/build
Affects Versions: 4.0-ALPHA
Reporter: Jessica Bonnie
Priority: Minor
 Fix For: 4.0, 5.0

 Attachments: eclipse_classpath.patch


 Fixes dependencies for building code in eclipse.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (LUCENE-4265) Name eclipse project after the branch: lucene_solr_branch_3x, lucene_solr_branch_4x, lucene_solr_trunk instead of all the same.

2012-07-27 Thread Robert Muir (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-4265?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Muir resolved LUCENE-4265.
-

   Resolution: Fixed
Fix Version/s: 3.6.2
   5.0
   4.0

Thanks!

 Name eclipse project after the branch:  lucene_solr_branch_3x, 
 lucene_solr_branch_4x, lucene_solr_trunk instead of all the same.
 

 Key: LUCENE-4265
 URL: https://issues.apache.org/jira/browse/LUCENE-4265
 Project: Lucene - Core
  Issue Type: Improvement
Affects Versions: 4.0-ALPHA, 3.6.1
Reporter: Eric Pugh
Priority: Minor
 Fix For: 4.0, 5.0, 3.6.2

 Attachments: 3x_name.patch, dot.project


 Right now when you run ant eclipse all the solr/lucene projects are called 
 lucene_solr, but what if I have both 4x and 3x, and now trunk?  I have to 
 manually edit the classpath.  why not name them after the branch they are on.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-3686) fix solr/core and solr/solrj not to share a lib/ directory

2012-07-27 Thread Robert Muir (JIRA)
Robert Muir created SOLR-3686:
-

 Summary: fix solr/core and solr/solrj not to share a lib/ directory
 Key: SOLR-3686
 URL: https://issues.apache.org/jira/browse/SOLR-3686
 Project: Solr
  Issue Type: Bug
Reporter: Robert Muir


This makes the build system hairy.
It also prevents us from using ivy's sync=true (LUCENE-4262), 
which completely eliminates the issue of outdated jars.

We should fix this so each has its own lib/.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (LUCENE-4267) binary packaging should include licenses/

2012-07-27 Thread Robert Muir (JIRA)
Robert Muir created LUCENE-4267:
---

 Summary: binary packaging should include licenses/
 Key: LUCENE-4267
 URL: https://issues.apache.org/jira/browse/LUCENE-4267
 Project: Lucene - Core
  Issue Type: Task
Reporter: Robert Muir


I heavy-committed LUCENE-4262 to enable ivy's sync=true (which means not just 
get the right jars, but delete shit that shouldn't be there) to end the whole 
clean-jars issue.

It's working except for the solr/lib (SOLR-3686), which we must fix for a number 
of reasons.

Anyway, because of this I made lucene/licenses and solr/licenses directories, 
respectively, that contain all the .sha1/license/notice files for 3rd party 
jars, so ivy wouldn't delete them.

we should update build patterns so these directories are in the binary release, 
its valuable information on our 3rd party licensing and additional verification 
for consumers.


--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-4267) binary packaging should include licenses/

2012-07-27 Thread Hoss Man (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-4267?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13424219#comment-13424219
 ] 

Hoss Man commented on LUCENE-4267:
--

bq. we should update build patterns so these directories are in the binary 
release, its valuable information on our 3rd party licensing and additional 
verification for consumers.

"valuable" seems like an understatement ... Unless I'm missing something, it's 
absolutely critical that we include these licenses.  If we include some third 
party jar in our binary packages, we damn well better include the 
license/NOTICE for those jars, or we are probably violating those licenses.

Personally: I think we should revamp the licenses/sha checker code in our 
build so that it not only compares the jars with what it finds in /licenses, it 
should also copy what it finds in /licenses into the same directory as those 
jars -- that way our binary packaging logic in ant should continue to work the 
same as it did in 4.0.0-ALPHA, and users can always see at a glance what the 
license/NOTICE is for any third-party jar, because it will be sitting right 
there next to it in the same directory.

If we don't do this, if they have to hunt in a special directory in the package 
to find these license files, they might not realize that's where they are, and 
could assume they are ASL2.0 since they are included in an ASF release.

 binary packaging should include licenses/
 -

 Key: LUCENE-4267
 URL: https://issues.apache.org/jira/browse/LUCENE-4267
 Project: Lucene - Core
  Issue Type: Task
Reporter: Robert Muir

 I heavy-committed LUCENE-4262 to enable ivy's sync=true (which means not just 
 get the right jars, but delete shit that shouldnt be there) to end the whole 
 clean-jars issue.
 Its working except for the solr/lib (SOLR-3686) which we must fix for a 
 number of reasons.
 Anyway, because of this I made a lucene/licenses and solr/licenses 
 directories respectively that contain all the .sha1/license/notice for 3rd 
 party jars, so ivy wouldnt delete them.
 we should update build patterns so these directories are in the binary 
 release, its valuable information on our 3rd party licensing and additional 
 verification for consumers.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-3634) make CoreAdminHandler STATUS action report back info about core initialization failures

2012-07-27 Thread Hoss Man (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-3634?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hoss Man resolved SOLR-3634.


Resolution: Fixed

Changed the test to work even on java7, where SAXParseException.getMessage() 
returns a localized msg even though that defeats the whole point of 
Throwable.getLocalizedMessage().

Committed revision 1366568. - trunk
Committed revision 1366569. - 4x
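
For reference, one locale-independent way a test can check this kind of
failure is to walk the cause chain for a SAXParseException instead of matching
the message text. This is only a sketch of that idea, not the committed change:

{code}
import org.xml.sax.SAXParseException;

// Sketch only -- not the committed fix. Walk the cause chain instead of
// asserting on the (possibly localized) exception message.
final class BadXmlAsserts {
  static void assertCausedByBadXml(Throwable failure) {
    for (Throwable t = failure; t != null; t = t.getCause()) {
      if (t instanceof SAXParseException) {
        return; // the init failure really came from the XML parser
      }
    }
    throw new AssertionError("failure does not wrap a SAXParseException: " + failure);
  }
}
{code}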


 make CoreAdminHandler STATUS action report back info about core 
 initialization failures 
 

 Key: SOLR-3634
 URL: https://issues.apache.org/jira/browse/SOLR-3634
 Project: Solr
  Issue Type: Sub-task
Reporter: Hoss Man
Assignee: Hoss Man
 Fix For: 4.0, 5.0

 Attachments: SOLR-3634.patch, SOLR-3634.patch


 see parent issue for context.
 * CoreContainer should keep a record of failures related to core 
 initialization, keyed off of the core name
 failures should be recorded for both solr.xml initialization, as well as 
 explicit CREATE/RELOAD requests that fail
 * record of failures should be tracked indefinitely until some valid core 
 begins using that name
 ** ie: actions which register a SolrCore with a name (RELOAD, CREATE, SWAP, 
 ALIAS) should reset any existing failures associated with that name
 * failures should be available to clients via CoreAdminHandler STATUS actions

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-4.x-Linux-Java6-64 - Build # 658 - Failure!

2012-07-27 Thread Policeman Jenkins Server
Build: 
http://jenkins.sd-datasolutions.de/job/Lucene-Solr-4.x-Linux-Java6-64/658/

1 tests failed.
FAILED:  
org.apache.solr.core.CoreContainerCoreInitFailuresTest.testFlowBadFromStart

Error Message:
init failure doesn't mention problem: org.xml.sax.SAXParseException: Content is 
not allowed in prolog.

Stack Trace:
java.lang.AssertionError: init failure doesn't mention problem: 
org.xml.sax.SAXParseException: Content is not allowed in prolog.
at 
__randomizedtesting.SeedInfo.seed([3077B16E40EBB0A8:AA8D7A96A4143EAC]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at 
org.apache.solr.core.CoreContainerCoreInitFailuresTest.testFlowBadFromStart(CoreContainerCoreInitFailuresTest.java:279)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1995)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.access$1100(RandomizedRunner.java:132)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:818)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:877)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:891)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.TestRuleFieldCacheSanity$1.evaluate(TestRuleFieldCacheSanity.java:32)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
org.apache.lucene.util.TestRuleReportUncaughtExceptions$1.evaluate(TestRuleReportUncaughtExceptions.java:68)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:70)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:825)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.access$700(RandomizedRunner.java:132)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3$1.run(RandomizedRunner.java:671)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:697)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:736)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:747)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleReportUncaughtExceptions$1.evaluate(TestRuleReportUncaughtExceptions.java:68)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:38)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
org.apache.lucene.util.TestRuleNoInstanceHooksOverrides$1.evaluate(TestRuleNoInstanceHooksOverrides.java:53)
at 
org.apache.lucene.util.TestRuleNoStaticHooksShadowing$1.evaluate(TestRuleNoStaticHooksShadowing.java:52)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:36)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:70)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSuite(RandomizedRunner.java:605)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.access$400(RandomizedRunner.java:132)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$2.run(RandomizedRunner.java:551)




Build Log:
[...truncated 6508 lines...]
[junit4:junit4] Suite: org.apache.solr.core.CoreContainerCoreInitFailuresTest
[junit4:junit4] FAILURE 0.34s J0 | 
CoreContainerCoreInitFailuresTest.testFlowBadFromStart

[jira] [Commented] (SOLR-3595) Currency types do not support range queries when multiValued

2012-07-27 Thread Hoss Man (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-3595?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13424237#comment-13424237
 ] 

Hoss Man commented on SOLR-3595:


bq. I'd prefer to fail early, fail often and catch this when the schema file 
was being parsed, but I don't see a convenient place to do that. Am I 
overlooking the obvious again?

can't we just put the logic in CurrencyField.init(), after the call to 
super.init() ... 

{code}
if (this.isMultiValued()) {
  // reject multiValued CurrencyFields up front, when the schema is parsed
  throw new SolrException(SolrException.ErrorCode.SERVER_ERROR,
      "CurrencyField's can not be multivalued: " + this.typeName);
}
{code}

?

(see BadIndexSchemaTest for an example of how to test that it fails properly)

 Currency types do not support range queries when multiValued
 

 Key: SOLR-3595
 URL: https://issues.apache.org/jira/browse/SOLR-3595
 Project: Solr
  Issue Type: Bug
  Components: Schema and Analysis
Affects Versions: 4.0-ALPHA
Reporter: Erick Erickson
Priority: Minor
  Labels: CurrencyField
 Fix For: 4.0


 You can define the currency type as multiValued. However, if you do (and have 
 more than one value), range queries, at least, do not work. See the thread 
 titled "Filtering a query by range returning unexpected results".
 I'm not at all sure that the currency type _should_ support multiValued. For 
 instance, how would one handle storing multiple values for a currency type in 
 different currencies (e.g. USD and EUR)? I don't know enough about the 
 internals to understand if it's possible; this JIRA is the result of a 
 question on the users list.
 If we decide that currency should _not_ support multiValued, it seems a check 
 at startup is in order, on the "fail early, fail loudly" principle.







Re: [JENKINS] Lucene-Solr-4.x-Linux-Java6-64 - Build # 658 - Failure!

2012-07-27 Thread Chris Hostetter

: Error Message:
: init failure doesn't mention problem: org.xml.sax.SAXParseException: Content 
is not allowed in prolog.

Grrr... another JVM variation. In *my* java6 JVM, 
SAXParseException.toString() includes the systemId and line numbers of the 
errors, but apparently not in this java6 JVM.

I'll tighten the assertion down more.
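
For context, one way the test could be made less dependent on how a given JVM 
formats SAXParseException.toString() is to assert only on the stable part of 
the parser message. A hypothetical sketch, not the actual fix that was 
committed (core name and accessor are illustrative):

{code}
// Hypothetical sketch: match the stable fragment of the parser error instead
// of relying on SAXParseException.toString(), whose format varies between JVMs.
Exception fail = cc.getCoreInitFailures().get("col_bad");  // names illustrative
assertNotNull("no init failure was recorded", fail);
assertTrue("init failure doesn't mention problem: " + fail.getMessage(),
           fail.getMessage().contains("Content is not allowed in prolog"));
{code}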


-Hoss




[JENKINS] Lucene-Solr-trunk-Linux-Java6-64 - Build # 1438 - Failure!

2012-07-27 Thread Policeman Jenkins Server
Build: 
http://jenkins.sd-datasolutions.de/job/Lucene-Solr-trunk-Linux-Java6-64/1438/

1 tests failed.
FAILED:  
org.apache.solr.core.CoreContainerCoreInitFailuresTest.testFlowBadFromStart

Error Message:
init failure doesn't mention problem: org.xml.sax.SAXParseException: Content is 
not allowed in prolog.

Stack Trace:
java.lang.AssertionError: init failure doesn't mention problem: 
org.xml.sax.SAXParseException: Content is not allowed in prolog.
at 
__randomizedtesting.SeedInfo.seed([65A8134820C03FF:9CA04ACC66F38DFB]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at 
org.apache.solr.core.CoreContainerCoreInitFailuresTest.testFlowBadFromStart(CoreContainerCoreInitFailuresTest.java:279)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1995)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.access$1100(RandomizedRunner.java:132)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:818)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:877)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:891)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.TestRuleFieldCacheSanity$1.evaluate(TestRuleFieldCacheSanity.java:32)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
org.apache.lucene.util.TestRuleReportUncaughtExceptions$1.evaluate(TestRuleReportUncaughtExceptions.java:68)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:70)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:825)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.access$700(RandomizedRunner.java:132)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3$1.run(RandomizedRunner.java:671)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:697)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:736)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:747)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleReportUncaughtExceptions$1.evaluate(TestRuleReportUncaughtExceptions.java:68)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:38)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
org.apache.lucene.util.TestRuleNoInstanceHooksOverrides$1.evaluate(TestRuleNoInstanceHooksOverrides.java:53)
at 
org.apache.lucene.util.TestRuleNoStaticHooksShadowing$1.evaluate(TestRuleNoStaticHooksShadowing.java:52)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:36)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:70)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSuite(RandomizedRunner.java:605)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.access$400(RandomizedRunner.java:132)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$2.run(RandomizedRunner.java:551)




Build Log:
[...truncated 6903 lines...]
[junit4:junit4] Suite: org.apache.solr.core.CoreContainerCoreInitFailuresTest
[junit4:junit4] FAILURE 0.30s J0 | 
CoreContainerCoreInitFailuresTest.testFlowBadFromStart

[jira] [Updated] (SOLR-3683) fail with error if analyzer has both a class and nested factories

2012-07-27 Thread Hoss Man (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-3683?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hoss Man updated SOLR-3683:
---

Attachment: SOLR-3683.patch

 fail with error if analyzer has both a class and nested factories
 -

 Key: SOLR-3683
 URL: https://issues.apache.org/jira/browse/SOLR-3683
 Project: Solr
  Issue Type: Improvement
Reporter: Hoss Man
Assignee: Hoss Man
 Fix For: 4.0, 5.0

 Attachments: SOLR-3683.patch


 IndexSchema should complain if someone configures an analyzer that has both 
 an explicit class as well as nested factories.
 Example of confusion...
 https://mail-archives.apache.org/mod_mbox/lucene-solr-user/201207.mbox/%3c1343218757.79608.yahoomailclas...@web121705.mail.ne1.yahoo.com%3E
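
The check itself could be as small as something along these lines. This is only 
a sketch: analyzerClassName and childFactoryNodes are hypothetical stand-ins 
for whatever IndexSchema already has in hand while parsing an analyzer element, 
not the contents of the attached patch.

{code}
// Sketch only: variable names are hypothetical.
if (analyzerClassName != null && childFactoryNodes.getLength() != 0) {
  throw new SolrException(SolrException.ErrorCode.SERVER_ERROR,
      "Configuration Error: Analyzer class='" + analyzerClassName +
      "' can not be combined with nested charFilter/tokenizer/filter factories");
}
{code}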







[jira] [Updated] (SOLR-1280) Fields used update processor

2012-07-27 Thread Erik Hatcher (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-1280?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erik Hatcher updated SOLR-1280:
---

Attachment: SOLR-1280.patch

Patch implementing the "fields used" technique using a JavaScript update 
processor.

 Fields used update processor
 

 Key: SOLR-1280
 URL: https://issues.apache.org/jira/browse/SOLR-1280
 Project: Solr
  Issue Type: New Feature
  Components: update
Reporter: Erik Hatcher
Assignee: Erik Hatcher
Priority: Trivial
 Attachments: FieldsUsedUpdateProcessorFactory.java, 
 FieldsUsedUpdateProcessorFactory.java, SOLR-1280.patch


 When dealing with highly heterogeneous documents with different fields per 
 document, it can be very useful to know what fields are present on the result 
 documents from a search.  For example, this could be used to determine which 
 fields make the best facets for a given query.







[jira] [Commented] (SOLR-1280) Fields used update processor

2012-07-27 Thread Erik Hatcher (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-1280?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13424257#comment-13424257
 ] 

Erik Hatcher commented on SOLR-1280:


I've added a patch that I'll commit to trunk and 4_x (with the script update 
processor chain commented out, just like dedupe and langid are now) that 
implements this "fields used" trick as a few (not so elegant, but 
straightforward standard JavaScript) lines using a regex pattern (/attr_.*/).
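
For readers who prefer Java, the same trick could be written as a custom update 
processor. This is only a hypothetical rendering of the idea, not the 
JavaScript that was committed, and the attr_fields_used field name is made up 
for the example:

{code}
import java.io.IOException;
import java.util.ArrayList;
import java.util.regex.Pattern;
import org.apache.solr.common.SolrInputDocument;
import org.apache.solr.update.AddUpdateCommand;
import org.apache.solr.update.processor.UpdateRequestProcessor;

// Hypothetical Java version of the "fields used" trick: copy the names of all
// attr_* fields on each incoming document into a catch-all field, so a simple
// facet on that field shows which fields are actually used.
public class FieldsUsedUpdateProcessor extends UpdateRequestProcessor {
  private static final Pattern ATTR = Pattern.compile("attr_.*");

  public FieldsUsedUpdateProcessor(UpdateRequestProcessor next) {
    super(next);
  }

  @Override
  public void processAdd(AddUpdateCommand cmd) throws IOException {
    SolrInputDocument doc = cmd.getSolrInputDocument();
    for (String name : new ArrayList<String>(doc.getFieldNames())) {
      if (ATTR.matcher(name).matches()) {
        doc.addField("attr_fields_used", name);  // illustrative field name
      }
    }
    super.processAdd(cmd);
  }
}
{code}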

 Fields used update processor
 

 Key: SOLR-1280
 URL: https://issues.apache.org/jira/browse/SOLR-1280
 Project: Solr
  Issue Type: New Feature
  Components: update
Reporter: Erik Hatcher
Assignee: Erik Hatcher
Priority: Trivial
 Attachments: FieldsUsedUpdateProcessorFactory.java, 
 FieldsUsedUpdateProcessorFactory.java, SOLR-1280.patch


 When dealing with highly heterogeneous documents with different fields per 
 document, it can be very useful to know what fields are present on the result 
 documents from a search.  For example, this could be used to determine which 
 fields make the best facets for a given query.







[jira] [Commented] (LUCENE-4267) binary packaging should include licenses/

2012-07-27 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-4267?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13424260#comment-13424260
 ] 

Robert Muir commented on LUCENE-4267:
-

{quote}
valuable seems like an understatement ... Unless I'm missing something it's 
absolutely critical that we include these licenses. If we include some third 
party jar in our binary packages, we damn well better include the 
license/NOTICE for those jars, or we are probably violating those licenses.
{quote}

Nope, that's the responsibility of LICENSE.txt and NOTICE.txt. Of course the 
hilarious thing is that nobody in apache can actually agree on the format of 
these files, so they could be ascii-art of my ass and still be legit :)

{quote}
Personally: I think we should revamp the licenses/sha checker code in our 
build so that it not only compares the jars with what it finds in /licenses, but 
also copies what it finds in /licenses into the same directory as those 
jars – that way our binary packaging logic in ant should continue to work the 
same as it did in 4.0.0-ALPHA, and users can always see at a glance what the 
license/NOTICE is for any third-party jar, because it will be sitting right 
there next to it in the same directory.

If we don't do this, if they have to hunt in a special directory in the package 
to find these license files, they might not realize that's where they are, and 
could assume they are ASL2.0 since they are included in an ASF release.
{quote}

I don't think we should copy anything; it makes our packaging more confusing. 
It's fine to have a licenses/ folder; this is actually extra, on top of 
apache's minimum NOTICE.txt (which can contain a picture of my ass, again, 
nobody truly knows the format for sure; see the incufuckubator list for more 
details on that).

The licenses/ stuff is extra that we, as a PMC, maintain so we can properly 
manage our 100+ third party licenses, since we know apache is totally fucked 
up on this.



 binary packaging should include licenses/
 -

 Key: LUCENE-4267
 URL: https://issues.apache.org/jira/browse/LUCENE-4267
 Project: Lucene - Core
  Issue Type: Task
Reporter: Robert Muir

 I heavy-committed LUCENE-4262 to enable ivy's sync=true (which means not just 
 get the right jars, but delete shit that shouldn't be there) to end the whole 
 clean-jars issue.
 It's working except for solr/lib (SOLR-3686), which we must fix for a 
 number of reasons.
 Anyway, because of this I made lucene/licenses and solr/licenses 
 directories, respectively, that contain all the .sha1/license/notice files for 
 3rd party jars, so ivy wouldn't delete them.
 We should update the build patterns so these directories are in the binary 
 release; it's valuable information on our 3rd party licensing and additional 
 verification for consumers.







[jira] [Commented] (SOLR-1280) Fields used update processor

2012-07-27 Thread Erik Hatcher (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-1280?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13424261#comment-13424261
 ] 

Erik Hatcher commented on SOLR-1280:


Committed a slightly updated (with CHANGES entry and more comments) version to 
trunk (r1366588) and 4_x (r1366589).

 Fields used update processor
 

 Key: SOLR-1280
 URL: https://issues.apache.org/jira/browse/SOLR-1280
 Project: Solr
  Issue Type: New Feature
  Components: update
Reporter: Erik Hatcher
Assignee: Erik Hatcher
Priority: Trivial
 Attachments: FieldsUsedUpdateProcessorFactory.java, 
 FieldsUsedUpdateProcessorFactory.java, SOLR-1280.patch


 When dealing with highly heterogeneous documents with different fields per 
 document, it can be very useful to know what fields are present on the result 
 documents from a search.  For example, this could be used to determine which 
 fields make the best facets for a given query.







[jira] [Resolved] (SOLR-1280) Fields used update processor

2012-07-27 Thread Erik Hatcher (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-1280?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erik Hatcher resolved SOLR-1280.


   Resolution: Implemented
Fix Version/s: 5.0
   4.0

Added this as commented-out pieces of a JavaScript update processor.

 Fields used update processor
 

 Key: SOLR-1280
 URL: https://issues.apache.org/jira/browse/SOLR-1280
 Project: Solr
  Issue Type: New Feature
  Components: update
Reporter: Erik Hatcher
Assignee: Erik Hatcher
Priority: Trivial
 Fix For: 4.0, 5.0

 Attachments: FieldsUsedUpdateProcessorFactory.java, 
 FieldsUsedUpdateProcessorFactory.java, SOLR-1280.patch


 When dealing with highly heterogeneous documents with different fields per 
 document, it can be very useful to know what fields are present on the result 
 documents from a search.  For example, this could be used to determine which 
 fields make the best facets for a given query.







[jira] [Commented] (SOLR-1725) Script based UpdateRequestProcessorFactory

2012-07-27 Thread Erik Hatcher (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-1725?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13424263#comment-13424263
 ] 

Erik Hatcher commented on SOLR-1725:


Committed an example script/configuration to trunk (r1366588) and 4_x 
(r1366589).


 Script based UpdateRequestProcessorFactory
 --

 Key: SOLR-1725
 URL: https://issues.apache.org/jira/browse/SOLR-1725
 Project: Solr
  Issue Type: New Feature
  Components: update
Affects Versions: 1.4
Reporter: Uri Boness
Assignee: Erik Hatcher
  Labels: UpdateProcessor
 Fix For: 4.0

 Attachments: SOLR-1725-rev1.patch, SOLR-1725.patch, SOLR-1725.patch, 
 SOLR-1725.patch, SOLR-1725.patch, SOLR-1725.patch, SOLR-1725.patch, 
 SOLR-1725.patch, SOLR-1725.patch, SOLR-1725.patch, SOLR-1725.patch, 
 SOLR-1725.patch, SOLR-1725.patch, SOLR-1725.patch, SOLR-1725.patch


 A script based UpdateRequestProcessorFactory (uses JDK6 script engine 
 support). The main goal of this plugin is to be able to configure/write 
 update processors without the need to write and package Java code.
 The update request processor factory enables writing update processors in 
 scripts located in the {{solr.solr.home}} directory. The factory accepts one 
 (mandatory) configuration parameter named {{scripts}} which accepts a 
 comma-separated list of file names. It will look for these files under the 
 {{conf}} directory in solr home. When multiple scripts are defined, their 
 execution order is defined by the lexicographical order of the script file 
 names (so {{scriptA.js}} will be executed before {{scriptB.js}}).
 The script language is resolved based on the script file extension (that is, 
 a *.js file will be treated as a JavaScript script), therefore an extension 
 is mandatory (a simplified sketch of this resolution follows below).
 Each script file is expected to have one or more methods with the same 
 signature as the methods in the {{UpdateRequestProcessor}} interface. It is 
 *not* required to define all methods, only those that are required by the 
 processing logic.
 The following variables are defined as global variables for each script:
  * {{req}} - The SolrQueryRequest
  * {{rsp}} - The SolrQueryResponse
  * {{logger}} - A logger that can be used for logging purposes in the script
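
The JDK6 script-engine plumbing the description relies on boils down to roughly 
the following. This is a simplified, self-contained sketch, not the factory's 
actual code; the file name and the bound globals are only examples.

{code}
// Simplified sketch of the javax.script plumbing described above; file name
// and bindings are examples, not the factory's actual implementation.
import java.io.FileReader;
import javax.script.Invocable;
import javax.script.ScriptEngine;
import javax.script.ScriptEngineManager;

public class ScriptProcessorSketch {
  public static void main(String[] args) throws Exception {
    String fileName = "conf/scriptA.js";

    // the script language is resolved from the file extension ("js" -> JavaScript)
    String ext = fileName.substring(fileName.lastIndexOf('.') + 1);
    ScriptEngine engine = new ScriptEngineManager().getEngineByExtension(ext);

    // globals such as req, rsp and logger would be bound into the engine here
    engine.put("logger", java.util.logging.Logger.getLogger("script"));

    // evaluate the script, then call whichever processor methods it defines
    engine.eval(new FileReader(fileName));
    ((Invocable) engine).invokeFunction("processAdd", new Object());
  }
}
{code}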







[jira] [Updated] (SOLR-3288) audit tutorial before 4.0 release

2012-07-27 Thread Erik Hatcher (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-3288?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erik Hatcher updated SOLR-3288:
---

Priority: Blocker  (was: Major)

Let's go ahead and make this a blocker. I'll be taking a sweep through this 
stuff myself too.

 audit tutorial before 4.0 release
 -

 Key: SOLR-3288
 URL: https://issues.apache.org/jira/browse/SOLR-3288
 Project: Solr
  Issue Type: Task
Reporter: Hoss Man
Assignee: Hoss Man
Priority: Blocker
 Fix For: 4.0


 Prior to the 4.0 release, audit the tutorial and verify...
 * command line output looks reasonable
 * analysis examples/discussion matches field types used
 * links to admin UI are correct for the new UI.







[jira] [Updated] (SOLR-3674) PathHierarchyTokenizerFactory needs better docs, tests, and example config

2012-07-27 Thread Hoss Man (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-3674?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hoss Man updated SOLR-3674:
---

Attachment: SOLR-3674.patch

 PathHierarchyTokenizerFactory needs better docs, tests, and example config
 --

 Key: SOLR-3674
 URL: https://issues.apache.org/jira/browse/SOLR-3674
 Project: Solr
  Issue Type: Improvement
Affects Versions: 3.1, 4.0-ALPHA
Reporter: Hoss Man
Assignee: Hoss Man
 Fix For: 4.0

 Attachments: SOLR-3674.patch


 As noted by Alok Bhandari on the solr-user mailing list, the example 
 configuration of PathHierarchyTokenizerFactory (in both the example 
 schema.xml and in the factory javadocs) is not very useful to most people 
 because it uses the tokenizer both at index and at query time.
 So a query like {{q=text_path_field:/aaa/bbb/ccc}} will not only match 
 /aaa/bbb/ccc and /aaa/bbb/ccc/ddd but also prefix values like /aaa and 
 /aaa/bbb, which is not likely what people expect.



