RE: [Lucene.Net] Roadmap

2011-11-21 Thread Digy

Chris,

Sorry if you took my comments about the pain of porting personally. That
wasn't my intention.

+1 for all your changes/divergences. I made/could have made them too.

DIGY

-Original Message-
From: Christopher Currens [mailto:currens.ch...@gmail.com] 
Sent: Monday, November 21, 2011 11:45 PM
To: lucene-net-dev@lucene.apache.org
Subject: Re: [Lucene.Net] Roadmap

Digy,

I used 2.9.4 trunk as the base for the 3.0.3 branch, but I looked to the
code in 2.9.4g as a reference for many things, particularly the Support
classes.  We hit many of the same issues, I'm sure.  I moved some of the
anonymous classes into a base class where you could inject functions,
though not all could be replaced, nor did I replace all that could have
been.  Some of our code is different: I went for the option of making
WeakDictionary completely generic, as in wrapping a generic dictionary
with WeakKey<T> instead of wrapping the already existing WeakHashTable in
Support.  In hindsight, it may have just been easier to convert the
WeakHashTable to generic, but alas, I'm only realizing that now.  There is
a problem with my WeakDictionary, specifically the function that determines
when to clean/compact the dictionary and remove the dead keys.  I need a
better heuristic for deciding when to run the clean.  That's
a performance issue, though.
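
Roughly, the shape I'm describing is something like this; a simplified
sketch, not the actual Support code, and the every-N-operations clean-up
policy is just a placeholder for the heuristic I still need:

using System;
using System.Collections.Generic;
using System.Linq;

// Illustrative WeakKey<T>: holds a weak reference but caches the original
// hash code so the entry can still be located after the target is collected.
internal sealed class WeakKey<T> where T : class
{
    private readonly WeakReference reference;
    private readonly int hashCode;

    public WeakKey(T target)
    {
        reference = new WeakReference(target);
        hashCode = target.GetHashCode();
    }

    public T Target { get { return (T)reference.Target; } }
    public bool IsAlive { get { return reference.IsAlive; } }

    public override int GetHashCode() { return hashCode; }

    public override bool Equals(object obj)
    {
        var other = obj as WeakKey<T>;
        if (other == null) return false;
        T a = Target, b = other.Target;
        // Fall back to reference equality once a target has been collected.
        return a != null && b != null ? a.Equals(b) : ReferenceEquals(this, other);
    }
}

// Illustrative WeakDictionary: wraps a plain generic dictionary and
// occasionally sweeps out entries whose keys have been collected.
public class WeakDictionary<TKey, TValue> where TKey : class
{
    private readonly Dictionary<WeakKey<TKey>, TValue> inner =
        new Dictionary<WeakKey<TKey>, TValue>();
    private int mutationsSinceClean;

    public TValue this[TKey key]
    {
        get { return inner[new WeakKey<TKey>(key)]; }
        set
        {
            MaybeClean();
            inner[new WeakKey<TKey>(key)] = value;
        }
    }

    public bool TryGetValue(TKey key, out TValue value)
    {
        return inner.TryGetValue(new WeakKey<TKey>(key), out value);
    }

    // Naive heuristic: sweep after every N mutations. This is exactly the
    // part that needs a smarter policy.
    private void MaybeClean()
    {
        if (++mutationsSinceClean < 1000) return;
        mutationsSinceClean = 0;
        foreach (var dead in inner.Keys.Where(k => !k.IsAlive).ToList())
            inner.Remove(dead);
    }
}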

Regarding the pain of porting, I am a changed man.  It's nice, in a sad
way, to know that I'm not the only one who experienced those difficulties.
 I used to be in the camp that porting code that differed from Java
wouldn't be difficult at all.  However, now I stand corrected!  It threw me
a curve-ball, for sure.  I DO think a line-by-line port can definitely
include the things talked about below, i.e. the changes to Dispose and the
changes to IEnumerable<T>.  Those changes, I think, can be made without a
heavy impact on the porting process.

There was one fairly large change I opted for that differed quite a bit
from Java, however, and that was the use of the TPL (Task Parallel Library)
in ParallelMultiSearcher.  It was far easier to port this way, and I don't
think it affects the porting process too much.  Java uses a helper class
defined at the bottom of the source file that handles it; I'm simply using
a built-in one instead.  I just need to be careful about it, as it would be
really easy to get carried away with it.
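
For reference, the fan-out looks roughly like this; a heavily simplified
sketch where the searchable and result types are stand-ins, not the real
Lucene.Net classes:

using System.Linq;
using System.Threading.Tasks;

// Simplified stand-ins for the real Lucene.Net types, for illustration only.
public interface ISimpleSearchable
{
    SimpleTopDocs Search(string query, int n);
}

public class SimpleTopDocs
{
    public int TotalHits;
}

public static class ParallelSearchSketch
{
    // Where Java defines a small helper class per sub-search, the TPL lets
    // each sub-search become a Task, with the merge happening after WaitAll.
    public static SimpleTopDocs SearchAll(ISimpleSearchable[] searchables, string query, int n)
    {
        Task<SimpleTopDocs>[] tasks = searchables
            .Select(s => Task.Factory.StartNew(() => s.Search(query, n)))
            .ToArray();

        Task.WaitAll(tasks);

        // Trivial "merge": sum hit counts. The real searcher merges score docs.
        return new SimpleTopDocs { TotalHits = tasks.Sum(t => t.Result.TotalHits) };
    }
}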


Thanks,
Christopher

On Mon, Nov 21, 2011 at 1:20 PM, Digy digyd...@gmail.com wrote:

 Hi Chris,

 First of all, thank you for your great work on 3.0.3 branch.
 I suppose you took 2.9.4 as the code base to make the 3.0.3 port, since some of
 your problems are the same as those I faced in the 2.9.4g branch.
 (e.g,
Support/MemoryMappedDirectory.cs (but never used in core),
IDisposable,
introduction of some Action<T>s, Func<T>s (see the sketch after this list),
foreach instead of GetEnumerator/MoveNext,
IEquatable<T>,
WeakDictionary<T>,
Set<T>
etc.
 )
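
To make the Action<T>/Func<T> item concrete, the pattern is roughly the
following; the names are invented, purely to illustrate replacing a
Java-style anonymous subclass with an injected delegate:

using System;

// Java-style approach: an abstract hook method that callers override,
// typically with an anonymous subclass.
public abstract class CacheLoaderJavaStyle<T>
{
    protected abstract T CreateValue(string key);

    public T Get(string key)
    {
        // Cache lookup omitted; fall through to creation on a miss.
        return CreateValue(key);
    }
}

// .NET-style approach: inject the hook as a Func<> instead of subclassing.
public class CacheLoader<T>
{
    private readonly Func<string, T> createValue;

    public CacheLoader(Func<string, T> createValue)
    {
        this.createValue = createValue;
    }

    public T Get(string key)
    {
        // Cache lookup omitted; fall through to creation on a miss.
        return createValue(key);
    }
}

public static class CacheLoaderUsage
{
    public static void Example()
    {
        // No anonymous subclass needed; the behavior is passed in directly.
        var loader = new CacheLoader<int>(key => key.Length);
        Console.WriteLine(loader.Get("lucene")); // 6
    }
}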

 Since I also used 3.0.3 as a reference, maybe we can use some of 2.9.4g's
 code in 3.0.3 when necessary (I haven't had time to look into 3.0.3 deeply).

 Just to ensure coordination, maybe you should create a new issue in
 JIRA, so that people send patches to that issue instead of directly
 committing.


 @Prescott,
 2.9.4g is not behind 2.9.4 at the level of bug fixes & features. So, it is (I
 think) ready for another release. (I have been using it in all my projects for a long time.)


 PS: Hearing about the pain of porting code that greatly differs from Java made
 me just smile (sorry for that :( ). Be ready for responses that go beyond
 criticism, wrapped between "With all due respect" & "Just my $0.02"
parentheses.

 DIGY

 -Original Message-
 From: Christopher Currens [mailto:currens.ch...@gmail.com]
 Sent: Monday, November 21, 2011 10:19 PM
 To: lucene-net-dev@lucene.apache.org; casper...@caspershouse.com
 Subject: Re: [Lucene.Net] Roadmap

 Some of the Lucene classes have Dispose methods, well, ones that call Close
 (and that Close method may or may not call base.Close(), as needed).
 Virtual dispose methods can be dangerous only in that they're easy to
 implement wrong.  However, it shouldn't be too bad, at least with a
 line-by-line port, as we would make the call to the base class whenever
 Lucene does, and that would (should) give us the same behavior, implemented
 properly.  I'm not aware of differences in the JVM regarding inheritance
 and base methods being called automatically, particularly Close methods.
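
A minimal sketch of the pattern being described, with hypothetical class
names rather than actual Lucene types:

using System;

public class IndexResource : IDisposable
{
    private bool isDisposed;

    public void Dispose()
    {
        Dispose(true);
        GC.SuppressFinalize(this);
    }

    // Virtual so subclasses can add their own cleanup; easy to get wrong if
    // a subclass forgets to call base.Dispose(disposing).
    protected virtual void Dispose(bool disposing)
    {
        if (isDisposed) return;
        if (disposing)
        {
            // Release managed resources here (what Close() did in the Java code).
        }
        isDisposed = true;
    }
}

public class CachingIndexResource : IndexResource
{
    protected override void Dispose(bool disposing)
    {
        if (disposing)
        {
            // Subclass-specific cleanup, mirroring the subclass's close().
        }
        // Forward to the base wherever Lucene's close() calls super.close().
        base.Dispose(disposing);
    }
}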

 Slightly unrelated, another annoyance is the use of Java Iterators vs C#
 Enumerables.  A lot of our code is there simply because there are
 Iterators, but it could be converted to Enumerables.  The whole
 hasNext()/next() vs C#'s MoveNext()/Current is annoying, but it's used all
 over in the base code, and would have to be changed there as well.  Either
 way, I would like to push for that before 3.0.3 is released.  IMO, small
 changes like this still keep the code similar to the line-by-line port, in
 that it doesn't add any difficulties in 
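
For reference, the conversion being proposed looks roughly like this; the
TermList type and its contents are invented for the example:

using System.Collections.Generic;

public class TermList
{
    private readonly string[] terms = { "apache", "lucene", "net" };
    private int position = -1;

    // Java-iterator style, as it tends to look after a line-by-line port.
    public bool HasNext() { return position + 1 < terms.Length; }
    public string Next() { return terms[++position]; }

    // Idiomatic C#: expose the same data as IEnumerable<T> via an iterator
    // block, so callers can simply foreach over it.
    public IEnumerable<string> Terms()
    {
        foreach (string term in terms)
            yield return term;
    }
}

public static class TermListUsage
{
    public static void Example()
    {
        // Ported Java style:
        var list = new TermList();
        while (list.HasNext())
            System.Console.WriteLine(list.Next());

        // C# style:
        foreach (string term in new TermList().Terms())
            System.Console.WriteLine(term);
    }
}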

Re: [Lucene.Net] Roadmap

2011-11-21 Thread Christopher Currens
Digy,

No worries.  I wasn't taking them personally.  You've been doing this for a
lot longer than I have, but I didn't understand your pain until I had to go
through it personally. :P

Have you looked at Contrib in a while?  There are a lot of projects in
Java's Contrib that are not in Lucene.Net.  Is this because there are some
that can't easily (if at all) be ported over to .NET, or just because
they've been neglected?  I'm trying to get a handle on what's important to
port and what isn't.  I figured someone with experience could give me a
starting point for deciding where to begin with everything that's missing.


Thanks,
Christopher


RE: [Lucene.Net] Roadmap

2011-11-21 Thread Digy
My English isn't good enough to understand this answer. I hope it is not
related to an employee-employer relationship, as in the past.

DIGY

-Original Message-
From: Christopher Currens [mailto:currens.ch...@gmail.com] 
Sent: Tuesday, November 22, 2011 1:08 AM
To: lucene-net-dev@lucene.apache.org
Subject: Re: [Lucene.Net] Roadmap

To clarify, it wasn't so much *difficult* as it was *painful*.  Above, I was
implying that it was more difficult than the rest of the code, which by
comparison was easier.  It wasn't painless to try to map where code changes
were from the Java classes into the .Net version.  I prefer that style more
for its readability and the niceties of working with a .Net style of Lucene;
however, as I said before, it slowed down the porting process significantly.
I hope it didn't come across that I thought it was bad code, because it's
probably the most readable code we have in Contrib at the moment.

I want to make it clear that my intention right now is to get Lucene.Net up
to date with Java.  When I read the Java code, I understand its intent, and
I make sure the ported code represents it.  That takes enough time as it is;
having to also figure out where the code went in Lucene.Net, since it wasn't
a 1-1 map, was a MINOR annoyance, especially when you compare it to the
issues I had dealing with the differences between the two languages,
generics especially.  That being said, I don't have a problem with code
being converted in a .Net idiomatic way; in fact, I welcome it, if it still
allows the changes to be ported with minimal effort.  I feel at this point
in the project, there are some limitations to how far I'd like it to
diverge.

Anyway, my opinion, which may not be in agreement with the group as a
whole, is that it would be better to bring the codebase up to date, or at
least more up to date with Java's, and then maintain a version with a
complete .NET-centric API.  I feel this would be easier, as porting
Java's Lucene SVN commits by the week would be a relatively small workload.

On Mon, Nov 21, 2011 at 2:41 PM, Troy Howard thowar...@gmail.com wrote:

 So, if we're getting back to the line-by-line port discussion... I
 think either side of this discussion is too extreme. For the case in
 point Chris just mentioned (though I'm not really sure what part was so
 difficult, as I ported that library in about 30 minutes from
 scratch)... anything is a pain if it sticks out in the middle of doing
 something completely different.

 The only reason we are able to do this line by line is due to the
 general similarity between Java and C#'s language syntax. If we were
 porting Lucene to a completely different language, that had a totally
 different syntax, the process would go like this:

 - Look at the original code, understand its intent
 - Create similar code in the new language that expresses the same intent

 When applying changes:

 - Look at the original code diffs, understanding the intent of the change
 - Look at the ported code, and apply the changed logic's meaning in
 that language

 So, it's just a different thought process. In my opinion, it's a better
 process because it forces the developer to actually think about the
 code instead of blindly converting syntax (possibly slightly
 incorrectly and introducing regressions). While there is a large
 volume of unit tests in Lucene, they are unfortunately not really the
 right tests, and they make porting much more difficult: it's hard to
 verify that your ported code behaves the same, because you can't just
 rely on the unit tests to verify your port. Therefore, it's safer to
 follow a process that requires the developer to delve deeply into the
 meaning of the code. Following a line-by-line process is convenient,
 but it doesn't focus on meaning, which I think is more important.

 Thanks,
 Troy


RE: [Lucene.Net] Roadmap

2011-11-21 Thread Scott Lombard
Chris,

Now that you have spent some time dealing with the porting, what is your view
on creating a fully automated porting tool?

Scott  


Re: [Lucene.Net] Roadmap

2011-11-21 Thread Christopher Currens
Next to impossible/really, really hard.  There are just some things that
don't map quite right.  Sharpen is great, but it seems you need the code
written in a way that makes it easily convertible, and I don't see the
folks at Lucene changing their coding style to do that.

An example: 3.0.3 changes classes that inherited from util.Parameter to
Java enums.  Java enums are more similar to classes than enums are in C#.
 They can have methods, fields, etc.  I wound up converting them into enums
with extension methods and/or static classes (usually to generate the
enum).  The way the code was written in Java, there's no way an automated
tool could figure that out on its own, unless you had some sort of way to
tell it what to do beforehand.
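
As a rough illustration of that conversion (the names here are in the
spirit of BooleanClause.Occur, but the details are simplified, not
necessarily the shipped code):

using System;

// In Java this is an enum whose constants can carry fields and methods.
// C# enums cannot, so the per-constant behavior moves to an extension class.
public enum Occur
{
    Must,
    Should,
    MustNot
}

public static class OccurExtensions
{
    // Plays the role the Java enum's instance method would have played.
    public static string ToQuerySyntax(this Occur occur)
    {
        switch (occur)
        {
            case Occur.Must: return "+";
            case Occur.MustNot: return "-";
            case Occur.Should: return string.Empty;
            default: throw new ArgumentOutOfRangeException("occur");
        }
    }
}

public static class OccurUsage
{
    public static void Example()
    {
        Console.WriteLine(Occur.Must.ToQuerySyntax());    // "+"
        Console.WriteLine(Occur.MustNot.ToQuerySyntax()); // "-"
    }
}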

I imagine porting it by hand is probably easier, though it would be nice if
there were a tool that would at least convert the syntax from Java to C#, as
well as change the naming scheme to a .NET-compatible one.  However, that
only really helps if you're porting classes from scratch.  It could also
hide bugs, since it's possible, however unlikely, that something could port
cleanly but not behave the same way.

A class with many calls to Java's String.substring is a good example of this.
 If the name of the function is changed to the .Net version (.substring to
.Substring), it would compile with no problems, but the two are very different.
 C#'s signature is Substring(int startIndex, int length), while Java's is
substring(int beginIndex, int endIndex).  It may appear to work while hiding
issues, or it may throw an exception, depending on the data.  A porting tool
would probably know many of the differences like this, so it's sort of a moot
point, in that this relies on the skills of the developer anyway.
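
Concretely, here is the translation a port (or a tool) has to get right; a
small made-up example:

public static class SubstringPortExample
{
    public static void Example()
    {
        string s = "lucene.net";

        // Java:  s.substring(2, 6)  -> "cene"    (beginIndex, endIndex)
        // C#:    s.Substring(2, 6)  -> "cene.n"  (startIndex, length) - not the same!
        string wrongPort = s.Substring(2, 6);

        // Correct translation: length = endIndex - beginIndex.
        string correctPort = s.Substring(2, 6 - 2);   // "cene"

        System.Console.WriteLine(wrongPort);
        System.Console.WriteLine(correctPort);
    }
}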

I may be wrong, but I just don't see this ever being a fully automated
process.  I would love to have something automated that at least fixed syntax
errors, though this would only work on a line-by-line port.  (Slightly off
topic: I think we should always have a line-by-line port, even if our
primary goal becomes focusing on a fully .NET-style port.)  Either way, any
sort of manual or partly-automated process would still require a lot of
work to make sure things are ported correctly.  I also think it's most
manageable if it were a tool that worked on a file-per-file basis (instead
of at the project level like Sharpen), for easy review and testing.


Thanks,
Christopher


[Lucene.Net] [jira] [Created] (LUCENENET-457) Lucene locks directory with index after network related problems

2011-11-21 Thread Pavel Belousov (Created) (JIRA)
Lucene locks directory with index after network related problems


 Key: LUCENENET-457
 URL: https://issues.apache.org/jira/browse/LUCENENET-457
 Project: Lucene.Net
  Issue Type: Bug
  Components: Lucene.Net Core
 Environment: Windows Server 2008
Reporter: Pavel Belousov


I have a directory for my index in a shared folder on another computer in the
network. My service writes data to the index. Sometimes the service gets
network-related exceptions like "The specified network name is no longer
available." After that the service cannot write anything to the index because
of the lock, even if I delete the write.lock file manually.

I have done some research and found that the Lucene API has an
IndexWriter.Unlock() method, but in my case it does not work.
I use the NativeFSLockFactory class. The NativeFSLock class has a private field
LOCK_HELD with the list of current locks, but in my case (after the network
related issues) it still has the record for the lock (NativeFSLock uses it in
its Obtain() method) and I can't delete it through the API. I suppose that the
method NativeFSLock.Release() (which is called from IndexWriter.Unlock())
should delete the record from the field LOCK_HELD.

Maybe I'm wrong and there is an approach to handle such problems?

At the moment I have implemented a method which deletes the record from
LOCK_HELD through reflection.
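
Presumably the workaround looks something like the following; the LOCK_HELD
field name comes from the report above, but the declaring type name and the
element type are assumptions about Lucene.Net internals and may need adjusting:

using System.Collections.Generic;
using System.Reflection;
using Lucene.Net.Store;

public static class LockHeldWorkaround
{
    // Best-effort removal of a stale entry from NativeFSLock's static
    // LOCK_HELD collection via reflection. Assumes LOCK_HELD is a static,
    // non-public collection of lock path strings.
    public static bool TryForgetLock(string lockPath)
    {
        // NativeFSLock is not public, so look it up by name in the same assembly.
        var lockType = typeof(NativeFSLockFactory).Assembly
            .GetType("Lucene.Net.Store.NativeFSLock");
        if (lockType == null) return false;

        var field = lockType.GetField("LOCK_HELD",
            BindingFlags.Static | BindingFlags.NonPublic | BindingFlags.Public);
        if (field == null) return false;

        var held = field.GetValue(null) as ICollection<string>;
        if (held == null) return false;

        lock (held)
        {
            return held.Remove(lockPath);
        }
    }
}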

Thanks a lot. 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[JENKINS] Lucene-Solr-tests-only-trunk-java7 - Build # 1014 - Failure

2011-11-21 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-tests-only-trunk-java7/1014/

1 tests failed.
REGRESSION:  
org.apache.solr.client.solrj.embedded.LargeVolumeJettyTest.testMultiThreaded

Error Message:
java.lang.AssertionError: Some threads threw uncaught exceptions!

Stack Trace:
java.lang.RuntimeException: java.lang.AssertionError: Some threads threw 
uncaught exceptions!
at 
org.apache.lucene.util.LuceneTestCase.tearDown(LuceneTestCase.java:650)
at org.apache.solr.SolrTestCaseJ4.tearDown(SolrTestCaseJ4.java:86)
at 
org.apache.lucene.util.LuceneTestCaseRunner.runChild(LuceneTestCaseRunner.java:149)
at 
org.apache.lucene.util.LuceneTestCaseRunner.runChild(LuceneTestCaseRunner.java:51)
at 
org.apache.lucene.util.LuceneTestCase.checkUncaughtExceptionsAfter(LuceneTestCase.java:678)
at 
org.apache.lucene.util.LuceneTestCase.tearDown(LuceneTestCase.java:622)




Build Log (for compile errors):
[...truncated 11859 lines...]



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-3585) Integrate the JdbcDirectory-Code from COMPASS-Framework

2011-11-21 Thread fpientka (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-3585?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

fpientka updated LUCENE-3585:
-

Attachment: org.apache.lucene.store.jdbc.JdbcDirectory.zip

from COMPASS VERSION 2.2.0 GA (6 April, 2009)

 Integrate the JdbcDirectory-Code from COMPASS-Framework
 ---

 Key: LUCENE-3585
 URL: https://issues.apache.org/jira/browse/LUCENE-3585
 Project: Lucene - Java
  Issue Type: New Feature
  Components: core/index, core/store
Affects Versions: 3.4
 Environment: Source-code under 
 http://sourceforge.net/projects/compass/
Reporter: fpientka
Priority: Minor
  Labels: features
 Attachments: org.apache.lucene.store.jdbc.JdbcDirectory.zip


 The Compass Java Search Engine Framework 2.3.0 beta 1, built on top of Lucene
 2.4.1, is not maintained any more.
 But the JdbcDirectory code from the COMPASS framework is under the Apache
 License, Version 2.0.
 I use this code with the recent Lucene version 3.4 and I suggest integrating it
 into Lucene core as an alternative index store:
 Lucene Jdbc Directory org.apache.lucene.store.jdbc.JdbcDirectory
 http://static.compassframework.org/docs/latest/jdbcdirectory.html
 It works only partially with Lucene 3.4, but that can be fixed.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-3448) Add FixedBitSet.and(other/DISI), andNot(other/DISI)

2011-11-21 Thread Uwe Schindler (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-3448?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Uwe Schindler updated LUCENE-3448:
--

Attachment: LUCENE-3448-fix-optimization.patch

This patch fixes the optimized OpenBitSetIterator and/or/andNot methods to 
correctly exhaust the iterator after applying the underlying bits.

 Add FixedBitSet.and(other/DISI), andNot(other/DISI)
 ---

 Key: LUCENE-3448
 URL: https://issues.apache.org/jira/browse/LUCENE-3448
 Project: Lucene - Java
  Issue Type: Sub-task
  Components: core/other
Affects Versions: 3.4, 4.0
Reporter: Uwe Schindler
Assignee: Uwe Schindler
 Fix For: 3.5, 4.0

 Attachments: LUCENE-3448-fix-optimization.patch, LUCENE-3448.patch, 
 LUCENE-3448.patch


 For the parent issue, and() and andNot() on DISIs and other FixedBitSets are 
 missing. This issue will add those methods.
 The DISI methods (and also the already existing or(DISI) method) will check for
 OpenBitSetIterator and do an in-place operation using the bits as an optimization.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-2904) BinaryUpdateRequestHandler should be able to accept multiple update requests from a stream

2011-11-21 Thread Shalin Shekhar Mangar (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-2904?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13154131#comment-13154131
 ] 

Shalin Shekhar Mangar commented on SOLR-2904:
-

Committed revision 1204453 on trunk.

 BinaryUpdateRequestHandler should be able to accept multiple update requests 
 from a stream
 --

 Key: SOLR-2904
 URL: https://issues.apache.org/jira/browse/SOLR-2904
 Project: Solr
  Issue Type: Improvement
Reporter: Shalin Shekhar Mangar
Assignee: Shalin Shekhar Mangar
 Fix For: 3.5, 4.0

 Attachments: SOLR-2904.patch


 BinaryUpdateRequestHandler should accept multiple update requests from a 
 single HTTP request's input stream. Currently it does not and that makes it 
 very difficult for StreamingUpdateSolrServer to use Javabin format.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[VOTE] Release Lucene/Solr 3.5.0, RC1

2011-11-21 Thread Simon Willnauer
Please vote to release the RC1 artifacts at:

https://people.apache.org/~simonw/staging_area/lucene-solr-3.5.0-RC1-rev1204425/

as Lucene 3.5.0 and Solr 3.5.0.

Simon Willnauer

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-2438) Case Insensitive Search for Wildcard Queries

2011-11-21 Thread Robert Muir (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-2438?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13154161#comment-13154161
 ] 

Robert Muir commented on SOLR-2438:
---

3.5 is branched. But just my opinion: we should never worry about this stuff.

I don't think we should ever freeze trunk or our stable branch.

If someone is working on a release candidate and hasn't branched, they can 
always branch
from a specific revision that they were working with before.

By the way Erick: nice work on the patch. I just took a quick glance (didn't 
test it),
but only have one question.

If the backwards compatibility path is to have legacyMultiTerm, can't we just
control its default based upon the schema version (and bump that)? It seems
awkward to have two booleans that control the backwards compatibility: both
legacyMultiTerm and luceneMatchVersion.

I guess at the end of the day I think the schema variable you added is a better
approach, because it's not really a behavior of the Lucene query parser that
changed, but a change to the schema.


 Case Insensitive Search for Wildcard Queries
 

 Key: SOLR-2438
 URL: https://issues.apache.org/jira/browse/SOLR-2438
 Project: Solr
  Issue Type: Improvement
Reporter: Peter Sturge
Assignee: Erick Erickson
 Attachments: SOLR-2438.patch, SOLR-2438.patch, SOLR-2438.patch, 
 SOLR-2438.patch, SOLR-2438.patch


 This patch adds support to allow case-insensitive queries on wildcard 
 searches for configured TextField field types.
 This patch extends the excellent work done by Yonik and Michael in SOLR-219.
 The approach here is different enough (imho) to warrant a separate JIRA issue.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: [VOTE] Release Lucene/Solr 3.5.0, RC1

2011-11-21 Thread Robert Muir
On Mon, Nov 21, 2011 at 7:47 AM, Simon Willnauer
simon.willna...@googlemail.com wrote:
 Please vote to release the RC1 artifacts at:

 https://people.apache.org/~simonw/staging_area/lucene-solr-3.5.0-RC1-rev1204425/

 as Lucene 3.5.0 and Solr 3.5.0.

 Simon Willnauer


Thanks for building this release candidate Simon.

+1, I ran smokeTestRelease.py and it passed.


-- 
lucidimagination.com

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-2438) Case Insensitive Search for Wildcard Queries

2011-11-21 Thread Simon Willnauer (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-2438?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13154173#comment-13154173
 ] 

Simon Willnauer commented on SOLR-2438:
---

bq. 3.5 is branched. But just my opinion: we should never worry about this 
stuff.

+1 

 Case Insensitive Search for Wildcard Queries
 

 Key: SOLR-2438
 URL: https://issues.apache.org/jira/browse/SOLR-2438
 Project: Solr
  Issue Type: Improvement
Reporter: Peter Sturge
Assignee: Erick Erickson
 Attachments: SOLR-2438.patch, SOLR-2438.patch, SOLR-2438.patch, 
 SOLR-2438.patch, SOLR-2438.patch


 This patch adds support to allow case-insensitive queries on wildcard 
 searches for configured TextField field types.
 This patch extends the excellent work done by Yonik and Michael in SOLR-219.
 The approach here is different enough (imho) to warrant a separate JIRA issue.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: [VOTE] Release Lucene/Solr 3.5.0, RC1

2011-11-21 Thread Michael McCandless
+1, smoke tester is happy for me too.

Mike McCandless

http://blog.mikemccandless.com

On Mon, Nov 21, 2011 at 7:47 AM, Simon Willnauer
simon.willna...@googlemail.com wrote:
 Please vote to release the RC1 artifacts at:

 https://people.apache.org/~simonw/staging_area/lucene-solr-3.5.0-RC1-rev1204425/

 as Lucene 3.5.0 and Solr 3.5.0.

 Simon Willnauer

 -
 To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
 For additional commands, e-mail: dev-h...@lucene.apache.org



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-2438) Case Insensitive Search for Wildcard Queries

2011-11-21 Thread Erick Erickson (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-2438?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13154210#comment-13154210
 ] 

Erick Erickson commented on SOLR-2438:
--

bq: I guess at the end of the day I think the schema variable you added is a 
better approach

I was just thinking about this, since someone on the user's list asked whether
this could be applied to 3.4 and I couldn't make that all work without
headaches, precisely because there were two variables to contend with. Alright,
I'll make matchVersion determine the default value of legacyMultiTerm, which
should allow this patch to be applied to pre-3.6 code lines at the user's risk.

bq: 3.5 is branched. But just my opinion: we should never worry about this 
stuff.

Right, if it had been more than a couple of days it'd have been another story,
but I ran into a few surprises when running tests, so delaying for a couple of
days to ensure no chance of screwing up seemed prudent... thanks.

 Case Insensitive Search for Wildcard Queries
 

 Key: SOLR-2438
 URL: https://issues.apache.org/jira/browse/SOLR-2438
 Project: Solr
  Issue Type: Improvement
Reporter: Peter Sturge
Assignee: Erick Erickson
 Attachments: SOLR-2438.patch, SOLR-2438.patch, SOLR-2438.patch, 
 SOLR-2438.patch, SOLR-2438.patch


 This patch adds support to allow case-insensitive queries on wildcard 
 searches for configured TextField field types.
 This patch extends the excellent work done by Yonik and Michael in SOLR-219.
 The approach here is different enough (imho) to warrant a separate JIRA issue.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS-MAVEN] Lucene-Solr-Maven-3.x #307: POMs out of sync

2011-11-21 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Maven-3.x/307/

No tests ran.

Build Log (for compile errors):
[...truncated 13437 lines...]



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-2382) DIH Cache Improvements

2011-11-21 Thread James Dyer (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-2382?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13154257#comment-13154257
 ] 

James Dyer commented on SOLR-2382:
--

Noble,

I can't speak for every use case, but these were necessary for one of our
applications.  The whole idea is that it lets you load your caches in advance of
indexing (DIHCacheWriter), then read your caches back at a later time when
you're ready to index (DIHCacheProcessor).

- This is especially helpful if you have a lot of different data sources that
each contribute a few data elements to each Solr record.  (We have at least 40
data sources.)

- If you have slow data sources, you can run multiple DIH scripts at the same
time and build your caches simultaneously (my app builds 12 DIH caches at a
time, as we have some slow legacy databases to contend with).

- If you have some data sources that change infrequently and others that are
changing all the time, you can build caches for the infrequently-changing data
sources, making it unnecessary to re-acquire this data every time you do a
delta update (this is actually a very common case.  Imagine having Solr loaded
with product metadata.  Most of the data would seldom change, but things like
prices, availability flags, stock numbers, etc., might change all the time.)

- The fact that you can do delta imports on caches allows users to optimize the 
indexing process further.  If you have multiple child-entity caches with data 
that mostly stays the same, but each has churn on a small percentage of the 
data, being able to just go in and delta update the cache lets you only 
re-acquire what changed.  Otherwise, you have to take every record that had a 
change in even 1 data source and re-acquire all of the data sources for every 
record.

- These last two points relate to the fact that Lucene cannot do an "update",
only a "replace".  Being able to store your system-of-record data in caches
alleviates the need to re-acquire all of your data sources every time you need
to do an update on a few fields.

- Some systems do not have a separate system-of-record as the data being 
indexed to Solr is ephemeral or changes frequently.  Having the data in caches 
gives you the freedom to delta update the information or easily re-index all 
data at system upgrades, etc.  I could see for some users these caches 
factoring into their disaster recovery strategy.

- There is also a feature to partition the data into multiple caches, which 
would make it easier to subsequently index the data to separate shards.  We use 
this feature to index the data in parallel to the same core (we're using Solr 
1.4, which did not have a threads parameter), but this would apply to using 
multiple shards also.

Is this convincing enough to go ahead and work towards commit?

 DIH Cache Improvements
 --

 Key: SOLR-2382
 URL: https://issues.apache.org/jira/browse/SOLR-2382
 Project: Solr
  Issue Type: New Feature
  Components: contrib - DataImportHandler
Reporter: James Dyer
Priority: Minor
 Attachments: SOLR-2382-dihwriter.patch, SOLR-2382-dihwriter.patch, 
 SOLR-2382-dihwriter.patch, SOLR-2382-dihwriter.patch, 
 SOLR-2382-entities.patch, SOLR-2382-entities.patch, SOLR-2382-entities.patch, 
 SOLR-2382-entities.patch, SOLR-2382-entities.patch, SOLR-2382-entities.patch, 
 SOLR-2382-entities.patch, SOLR-2382-entities.patch, 
 SOLR-2382-properties.patch, SOLR-2382-properties.patch, 
 SOLR-2382-solrwriter-verbose-fix.patch, SOLR-2382-solrwriter.patch, 
 SOLR-2382-solrwriter.patch, SOLR-2382-solrwriter.patch, SOLR-2382.patch, 
 SOLR-2382.patch, SOLR-2382.patch, SOLR-2382.patch, SOLR-2382.patch, 
 SOLR-2382.patch, SOLR-2382.patch, SOLR-2382.patch


 Functionality:
  1. Provide a pluggable caching framework for DIH so that users can choose a 
 cache implementation that best suits their data and application.
  
  2. Provide a means to temporarily cache a child Entity's data without 
 needing to create a special cached implementation of the Entity Processor 
 (such as CachedSqlEntityProcessor).
  
  3. Provide a means to write the final (root entity) DIH output to a cache 
 rather than to Solr.  Then provide a way for a subsequent DIH call to use the 
 cache as an Entity input.  Also provide the ability to do delta updates on 
 such persistent caches.
  
  4. Provide the ability to partition data across multiple caches that can 
 then be fed back into DIH and indexed either to varying Solr Shards, or to 
 the same Core in parallel.
 Use Cases:
  1. We needed a flexible & scalable way to temporarily cache child-entity 
 data prior to joining to parent entities.
   - Using SqlEntityProcessor with Child Entities can cause an n+1 select 
 problem.
   - CachedSqlEntityProcessor only supports an in-memory HashMap as a Caching 
 

[JENKINS-MAVEN] Lucene-Solr-Maven-trunk #303: POMs out of sync

2011-11-21 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Maven-trunk/303/

1 tests failed.
REGRESSION:  org.apache.lucene.facet.taxonomy.directory.TestIndexClose.testLeaks

Error Message:
java.lang.AssertionError: Some threads threw uncaught exceptions!

Stack Trace:
java.lang.RuntimeException: java.lang.AssertionError: Some threads threw 
uncaught exceptions!
at 
org.apache.lucene.util.LuceneTestCase.tearDown(LuceneTestCase.java:650)
at sun.reflect.GeneratedMethodAccessor2.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:616)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:44)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:41)
at 
org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:37)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:76)
at 
org.apache.lucene.util.LuceneTestCaseRunner.runChild(LuceneTestCaseRunner.java:149)
at 
org.apache.lucene.util.LuceneTestCaseRunner.runChild(LuceneTestCaseRunner.java:51)
at org.junit.runners.ParentRunner$3.run(ParentRunner.java:193)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:52)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:191)
at org.junit.runners.ParentRunner.access$000(ParentRunner.java:42)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:184)
at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:28)
at 
org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:31)
at org.junit.runners.ParentRunner.run(ParentRunner.java:236)
at 
org.apache.maven.surefire.junit4.JUnit4TestSet.execute(JUnit4TestSet.java:53)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:123)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:104)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:616)
at 
org.apache.maven.surefire.util.ReflectionUtils.invokeMethodWithArray(ReflectionUtils.java:164)
at 
org.apache.maven.surefire.booter.ProviderFactory$ProviderProxy.invoke(ProviderFactory.java:110)
at 
org.apache.maven.surefire.booter.SurefireStarter.invokeProvider(SurefireStarter.java:175)
at 
org.apache.maven.surefire.booter.SurefireStarter.runSuitesInProcessWhenForked(SurefireStarter.java:107)
at 
org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:68)
Caused by: java.lang.AssertionError: Some threads threw uncaught exceptions!
at org.junit.Assert.fail(Assert.java:91)
at 
org.apache.lucene.util.LuceneTestCase.checkUncaughtExceptionsAfter(LuceneTestCase.java:678)
at 
org.apache.lucene.util.LuceneTestCase.tearDown(LuceneTestCase.java:622)
... 30 more




Build Log (for compile errors):
[...truncated 24882 lines...]



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: [VOTE] Release Lucene/Solr 3.5.0, RC1

2011-11-21 Thread Stanislaw Osinski
+1, search results clustering works, both on Java 5 and 6.

Staszek

On Mon, Nov 21, 2011 at 13:47, Simon Willnauer 
simon.willna...@googlemail.com wrote:

 Please vote to release the RC1 artifacts at:


 https://people.apache.org/~simonw/staging_area/lucene-solr-3.5.0-RC1-rev1204425/

 as Lucene 3.5.0 and Solr 3.5.0.

 Simon Willnauer

 -
 To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
 For additional commands, e-mail: dev-h...@lucene.apache.org




[jira] [Commented] (SOLR-2667) Finish Solr Admin UI

2011-11-21 Thread Mark Miller (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-2667?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13154268#comment-13154268
 ] 

Mark Miller commented on SOLR-2667:
---

bq. I think the files admin-extra.html and admin-extra.menu-top.html are 
intended to be optional. If that's the case, it might be nicer to not log a 
stack trace when they are not present. Especially at a SEVERE priority.

We should look at this in another JIRA issue.

 Finish Solr Admin UI
 

 Key: SOLR-2667
 URL: https://issues.apache.org/jira/browse/SOLR-2667
 Project: Solr
  Issue Type: Improvement
Reporter: Ryan McKinley
Assignee: Ryan McKinley
 Fix For: 4.0

 Attachments: SOLR-2667-110722.patch


 In SOLR-2399, we added a new admin UI. The issue has gotten too long to 
 follow, so this is a new issue to track remaining tasks.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: [JENKINS-MAVEN] Lucene-Solr-Maven-trunk #303: POMs out of sync

2011-11-21 Thread Shai Erera
Perhaps it's a build error (i.e. not cleaning after the test)? This is the
exception that appears in the log:

The following exceptions were thrown by threads:
*** Thread: Lucene Merge Thread #0 ***
org.apache.lucene.index.MergePolicy$MergeException:
java.lang.IllegalStateException: sub-readers have different
BytesRef.Comparators; cannot merge
at 
org.apache.lucene.index.ConcurrentMergeScheduler.handleMergeException(ConcurrentMergeScheduler.java:513)
at 
org.apache.lucene.index.ConcurrentMergeScheduler$MergeThread.run(ConcurrentMergeScheduler.java:478)
Caused by: java.lang.IllegalStateException: sub-readers have different
BytesRef.Comparators; cannot merge
at org.apache.lucene.index.MultiTerms.<init>(MultiTerms.java:55)
at org.apache.lucene.index.MultiFields.terms(MultiFields.java:232)
at 
org.apache.lucene.index.MultiFieldsEnum.terms(MultiFieldsEnum.java:114)
at 
org.apache.lucene.index.codecs.FieldsConsumer.merge(FieldsConsumer.java:51)
at 
org.apache.lucene.index.SegmentMerger.mergeTerms(SegmentMerger.java:293)
at org.apache.lucene.index.SegmentMerger.merge(SegmentMerger.java:122)
at 
org.apache.lucene.index.IndexWriter.mergeMiddle(IndexWriter.java:3581)
at org.apache.lucene.index.IndexWriter.merge(IndexWriter.java:3215)
at 
org.apache.lucene.index.ConcurrentMergeScheduler.doMerge(ConcurrentMergeScheduler.java:383)
at 
org.apache.lucene.index.ConcurrentMergeScheduler$MergeThread.run(ConcurrentMergeScheduler.java:451)
*** Thread: Lucene Merge Thread #1 ***
org.apache.lucene.index.MergePolicy$MergeException:
java.lang.IllegalStateException: sub-readers have different
BytesRef.Comparators; cannot merge
at 
org.apache.lucene.index.ConcurrentMergeScheduler.handleMergeException(ConcurrentMergeScheduler.java:513)
at 
org.apache.lucene.index.ConcurrentMergeScheduler$MergeThread.run(ConcurrentMergeScheduler.java:478)
Caused by: java.lang.IllegalStateException: sub-readers have different
BytesRef.Comparators; cannot merge
at org.apache.lucene.index.MultiTerms.<init>(MultiTerms.java:55)
at org.apache.lucene.index.MultiFields.terms(MultiFields.java:232)
at 
org.apache.lucene.index.MultiFieldsEnum.terms(MultiFieldsEnum.java:114)
at 
org.apache.lucene.index.codecs.FieldsConsumer.merge(FieldsConsumer.java:51)
at 
org.apache.lucene.index.SegmentMerger.mergeTerms(SegmentMerger.java:293)
at org.apache.lucene.index.SegmentMerger.merge(SegmentMerger.java:122)
at 
org.apache.lucene.index.IndexWriter.mergeMiddle(IndexWriter.java:3581)
at org.apache.lucene.index.IndexWriter.merge(IndexWriter.java:3215)
at 
org.apache.lucene.index.ConcurrentMergeScheduler.doMerge(ConcurrentMergeScheduler.java:383)
at 
org.apache.lucene.index.ConcurrentMergeScheduler$MergeThread.run(ConcurrentMergeScheduler.java:451)
*** Thread: Lucene Merge Thread #2 ***
org.apache.lucene.index.MergePolicy$MergeException:
java.lang.IllegalStateException: sub-readers have different
BytesRef.Comparators; cannot merge
at 
org.apache.lucene.index.ConcurrentMergeScheduler.handleMergeException(ConcurrentMergeScheduler.java:513)
at 
org.apache.lucene.index.ConcurrentMergeScheduler$MergeThread.run(ConcurrentMergeScheduler.java:478)
Caused by: java.lang.IllegalStateException: sub-readers have different
BytesRef.Comparators; cannot merge
at org.apache.lucene.index.MultiTerms.<init>(MultiTerms.java:55)
at org.apache.lucene.index.MultiFields.terms(MultiFields.java:232)
at 
org.apache.lucene.index.MultiFieldsEnum.terms(MultiFieldsEnum.java:114)
at 
org.apache.lucene.index.codecs.FieldsConsumer.merge(FieldsConsumer.java:51)
at 
org.apache.lucene.index.SegmentMerger.mergeTerms(SegmentMerger.java:293)
at org.apache.lucene.index.SegmentMerger.merge(SegmentMerger.java:122)
at 
org.apache.lucene.index.IndexWriter.mergeMiddle(IndexWriter.java:3581)
at org.apache.lucene.index.IndexWriter.merge(IndexWriter.java:3215)
at 
org.apache.lucene.index.ConcurrentMergeScheduler.doMerge(ConcurrentMergeScheduler.java:383)
at 
org.apache.lucene.index.ConcurrentMergeScheduler$MergeThread.run(ConcurrentMergeScheduler.java:451)
NOTE: reproduce with: ant test -Dtestcase=TestIndexClose
-Dtestmethod=testLeaks
-Dtests.seed=78e524cead5893e3:4edf9eedd4ed0a36:1912c3654ff9f2e1
-Dargs=-Dfile.encoding=ISO8859-1
NOTE: test params are: codec=Lucene3x,
sim=RandomSimilarityProvider(queryNorm=false,coord=false):
{$full_path$=DFR I(ne)Z(0.3), $payloads$=IB LL-DZ(0.3)}, locale=et_EE,
timezone=America/Argentina/Mendoza
NOTE: all tests run in this JVM:
[TestAdaptiveExample, TestSimpleExample, TestAssociationExample,
TestMultiCLExample, TestScoredDocIDsUtils,
CategoryListPayloadStreamTest, CategoryAttributesIterableTest,
CategoryAttributeImplTest, DefaultFacetIndexingParamsTest,
CategoryListParamsTest, PerDimensionIndexingParamsTest,

[jira] [Updated] (SOLR-2906) Implement LFU Cache

2011-11-21 Thread Shawn Heisey (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-2906?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shawn Heisey updated SOLR-2906:
---

Attachment: TestLFUCache.java
LFUCache.java
ConcurrentLFUCache.java

I've renamed the user-facing class to LFUCache and created a test program based 
on the LRU version.  The tests are failing, though.  So far I can't figure out 
why.


 Implement LFU Cache
 ---

 Key: SOLR-2906
 URL: https://issues.apache.org/jira/browse/SOLR-2906
 Project: Solr
  Issue Type: Sub-task
  Components: search
Affects Versions: 3.4
Reporter: Shawn Heisey
Priority: Minor
 Attachments: ConcurrentLFUCache.java, ConcurrentLFUCache.java, 
 ConcurrentLFUCache.java, FastLFUCache.java, FastLFUCache.java, LFUCache.java, 
 TestLFUCache.java


 Implement an LFU (Least Frequently Used) cache as the first step towards a 
 full ARC cache

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-2906) Implement LFU Cache

2011-11-21 Thread Shawn Heisey (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-2906?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13154344#comment-13154344
 ] 

Shawn Heisey commented on SOLR-2906:


I've re-added lastAccessed to the class, as a tiebreaker when hitcount is equal.

The test method prints out leastUsedItems and mostUsedItems.  Somehow, item 
number 50 is included in both.
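
For illustration only (the actual Solr classes attached here are Java), a minimal C# sketch of the eviction ordering described above: least-used entries come first by hit count, with lastAccessed breaking ties. All type and member names below are hypothetical.

{code}
using System.Collections.Generic;
using System.Linq;

public class CacheEntry<TKey>   // hypothetical entry shape, not Solr code
{
    public TKey Key;
    public long Hits;           // how often the entry was hit
    public long LastAccessed;   // e.g. a monotonically increasing access counter
}

public static class LfuOrdering
{
    // Least frequently used first; ties broken by least recently accessed.
    public static IEnumerable<CacheEntry<TKey>> LeastUsedFirst<TKey>(
        IEnumerable<CacheEntry<TKey>> entries)
    {
        return entries.OrderBy(e => e.Hits).ThenBy(e => e.LastAccessed);
    }
}
{code}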


 Implement LFU Cache
 ---

 Key: SOLR-2906
 URL: https://issues.apache.org/jira/browse/SOLR-2906
 Project: Solr
  Issue Type: Sub-task
  Components: search
Affects Versions: 3.4
Reporter: Shawn Heisey
Priority: Minor
 Attachments: ConcurrentLFUCache.java, ConcurrentLFUCache.java, 
 ConcurrentLFUCache.java, FastLFUCache.java, FastLFUCache.java, LFUCache.java, 
 TestLFUCache.java


 Implement an LFU (Least Frequently Used) cache as the first step towards a 
 full ARC cache

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-2894) Implement distributed pivot faceting

2011-11-21 Thread Ben Roubicek (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-2894?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13154368#comment-13154368
 ] 

Ben Roubicek commented on SOLR-2894:


Based on SOLR-792, it looked like there was some traction in getting 
distributed pivoting in the trunk codebase beyond the functional prototype.  
This feature has a lot of value within my company where we perform 50 separate 
queries where one would suffice if we had distributed pivot support.

 Implement distributed pivot faceting
 

 Key: SOLR-2894
 URL: https://issues.apache.org/jira/browse/SOLR-2894
 Project: Solr
  Issue Type: Improvement
Affects Versions: 4.0
Reporter: Erik Hatcher

 Following up on SOLR-792, pivot faceting currently only supports 
 undistributed mode.  Distributed pivot faceting needs to be implemented.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: [Lucene.Net] Roadmap

2011-11-21 Thread Christopher Currens
Regarding the 3.0.3 branch I started last week, I've put in a lot of late
nights and gotten far more done in a week and a half than I expected.  The
list of changes is very large, and fortunately, I've documented it in some
files that are in the branch's root of certain projects.  I'll list what
changes have been made so far, and some of the concerns I have about them,
as well as what still needs to be done.  You can read them all in detail in
the files that are in the branch.

All changes in 3.0.3 have been ported to the Lucene.Net and
Lucene.Net.Test projects, except for BooleanClause, LockStressTest, MMapDirectory,
NIOFSDirectory, DummyConcurrentLock, NamedThreadFactory, and
ThreadInterruptedException.

MMapDirectory and NIOFSDirectory were never ported in the first place
for 2.9.4, so I'm not worried about those.  LockStressTest is a
command-line tool; porting it should be easy, but it's not essential to a 3.0.3
release, IMO.  DummyConcurrentLock also seems unnecessary (and
non-portable) for .NET, since it's based around Java's Lock class and is
only used to bypass locking, which can be done by passing new Object() to
the method.
NamedThreadFactory I'm unsure about.  It's used in ParallelMultiSearcher
(in which I've opted to use the TPL), and seems to be used only for
debugging, possibly testing.  Either way, I'm not sure it's necessary.
 Also, named threads would mean we would probably have to move the class
away from the TPL, which greatly simplified the code and its parallelization,
as I can't see a way to set names for a Task.  I suppose it might be
possible, as Tasks have unique Ids, and you could use a Dictionary to map
the thread's name to the ID in the factory, but you'd have to create a
helper function that would allow you to find a task by its name, which
seems like more work than the resulting benefit.  VS2010 already has better
support for debugging tasks than threads (I used it when writing the
class); frankly, it's amazing how easy it was to debug.
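
A minimal sketch (not Lucene.Net code) of the name-to-task bookkeeping idea above, assuming .NET 4's TPL: since the TPL has no public lookup from a Task.Id back to its Task, this version simply keys a dictionary by the caller-chosen name and stores the Task itself. All names here are hypothetical.

{code}
using System;
using System.Collections.Concurrent;
using System.Threading.Tasks;

public static class NamedTasks
{
    // Caller-chosen name -> the Task running under that name.
    private static readonly ConcurrentDictionary<string, Task> Tasks =
        new ConcurrentDictionary<string, Task>();

    public static Task StartNamed(string name, Action work)
    {
        Task task = Task.Factory.StartNew(work);
        Tasks[name] = task;
        return task;
    }

    // The helper lookup mentioned above: find a running task by its name.
    public static Task FindByName(string name)
    {
        Task task;
        return Tasks.TryGetValue(name, out task) ? task : null;
    }
}
{code}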

Other than the above, the entire code base in the core dlls is at 3.0.3,
which is exciting, as I'm really hoping we can get Lucene.Net up to the
current version of Java's 3.x branch, and start working on a line-by-line
port of 4.0.  Tests need to be written for some of the collections I've
made that emulate Java's, to make sure they're even behaving the same way.
 The good news is that all of the existing tests pass as a whole, so it
seems to be working, though I'd like the peace of mind of having tests for
them (those being HashMap<TKey, TValue>, WeakDictionary<TKey, TValue> and
IdentityCollection<TKey, TValue>; it's quite possible any one of them could
be completely wrong in how they were put together).
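
As an example of the kind of behavioral test meant here, a minimal sketch assuming a HashMap<TKey, TValue> emulation whose indexer returns default(TValue) for a missing key (Java's HashMap.get semantics) rather than throwing like Dictionary<TKey, TValue>; the class below is a hypothetical stand-in, not the actual support collection.

{code}
using System;
using System.Collections.Generic;

// Hypothetical stand-in for the support-class behavior under test.
public class HashMap<TKey, TValue>
{
    private readonly Dictionary<TKey, TValue> inner = new Dictionary<TKey, TValue>();

    public TValue this[TKey key]
    {
        get { TValue v; return inner.TryGetValue(key, out v) ? v : default(TValue); }
        set { inner[key] = value; }
    }
}

public static class HashMapBehaviorCheck
{
    public static void Main()
    {
        var map = new HashMap<string, string>();
        map["present"] = "value";

        // Java's HashMap.get returns null for a missing key; the emulation
        // should do the same rather than throw KeyNotFoundException.
        Console.WriteLine(map["present"]);          // value
        Console.WriteLine(map["missing"] == null);  // True
    }
}
{code}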

I'd also like to finally formalize the way we use IDisposable in
Lucene.Net, by marking the Close functions as obsolete, moving the code
into Dispose, and eventually (or immediately) removing the Close functions.
 There's so much change to the API that now would be a good time to make
that change if we wanted to.  I'm hesitant to move away from a line-by-line
port of Lucene.Net completely; I'd rather have it be as close as possible.  The
main reason I feel this way is that when I was porting the Shingle namespace of
Contrib.Analyzers, Troy had written it in a .NET way which differed
GREATLY from Java Lucene, and it did make porting it considerably more
difficult; to keep the language to a minimum, I'm just going to say it was
a pain, a huge pain in fact.  I love the idea of moving to a more .NET
design, but I'd like to maintain a line-by-line port anyway, as I think
porting changes is far easier and quicker that way.  At this point, I'm
more interested in getting Lucene.Net to 4.0 and caught up to Java than I
am in anything else, hence the extra amount of time I've put into this project
over the past week and a half.  Though this isn't really the place for this
discussion.
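
A minimal sketch, on a hypothetical class, of the formalization described above: Dispose becomes the real cleanup path and Close survives temporarily as an [Obsolete] alias, which also lets callers wrap instances in using blocks.

{code}
using System;

// Hypothetical example class, not actual Lucene.Net code.
public class ExampleSearcher : IDisposable
{
    private bool disposed;

    public void Dispose()
    {
        if (disposed) return;
        // Real cleanup lives here now, instead of in Close().
        disposed = true;
    }

    [Obsolete("Use Dispose() instead; Close() will be removed.")]
    public void Close()
    {
        Dispose();   // Close is only a thin alias during the transition.
    }
}
{code}

Callers can then write using (var searcher = new ExampleSearcher()) { ... } instead of having to remember to call Close.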

The larger area of difficulty for the port, however, is the Contrib section.
 There are a few major problems with it that are slowing me down.  First,
there are a lot of classes that are outdated.  I've found versions of code
that still have the Apache 1.1 License attached to them, which makes the code
quite old.  Also, it was almost impossible for me to port a lot of changes
in Contrib.Analyzers, since the code was so old and different from Java's
2.9.4.

Second, we had almost no unit tests ported for any of the classes, which
means they have to be ported from scratch.

Third, there are a lot of contrib projects that have never been ported over
from java.  That list includes: smartcn (I believe this is an intelligent
Chinese analyzer), benchmark, collation, db, lucli, memory, misc,
queryparser, remote, surround, swing, wikipedia, xml-query-parser.
 However, it should be noted that I'm not even sure which, if any, SHOULD
be ported or even CAN be ported.

Progress on 3.0.3 Contrib is steady, however.  The entire
Analyzers project (except for smartcn) has been ported, as well as the tests
for them, which all 

[jira] [Commented] (SOLR-2724) Deprecate defaultSearchField and defaultOperator defined in schema.xml

2011-11-21 Thread Yury Kats (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-2724?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13154394#comment-13154394
 ] 

Yury Kats commented on SOLR-2724:
-

FWIW, PingRequest fails if there is no defaultSearchField

CommonsHttpSolrServer#ping produces:

org.apache.solr.common.SolrException: Ping query caused exception: no field 
name specified in query and no defaultSearchField defined in schema.xml 
at 
org.apache.solr.handler.PingRequestHandler.handleRequestBody(PingRequestHandler.java:77)
 
at 
org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:129)

at org.apache.solr.core.SolrCore.execute(SolrCore.java:1407)
at 
org.apache.solr.servlet.SolrDispatchFilter.execute(SolrDispatchFilter.java:353) 
 
...
request: http://127.0.0.1:8983/solr/a1/admin/ping?wt=javabin&version=2


 Deprecate defaultSearchField and defaultOperator defined in schema.xml
 --

 Key: SOLR-2724
 URL: https://issues.apache.org/jira/browse/SOLR-2724
 Project: Solr
  Issue Type: Improvement
  Components: Schema and Analysis, search
Reporter: David Smiley
Assignee: David Smiley
Priority: Minor
 Fix For: 3.5, 4.0

   Original Estimate: 2h
  Remaining Estimate: 2h

 I've always been surprised to see the <defaultSearchField> element and 
 <solrQueryParser defaultOperator="OR"/> defined in the schema.xml file since 
 the first time I saw them.  They just seem out of place to me since they are 
 more query parser related than schema related. But not only are they 
 misplaced, I feel they shouldn't exist. For query parsers, we already have a 
 df parameter that works just fine, and explicit field references. And the 
 default Lucene query operator should stay at OR -- if a particular query 
 wants different behavior then use q.op or simply use OR.
 <similarity> seems like something better placed in solrconfig.xml than in the 
 schema. 
 In my opinion, defaultSearchField and defaultOperator configuration elements 
 should be deprecated in Solr 3.x and removed in Solr 4.  And <similarity> 
 should move to solrconfig.xml. I am willing to do it, provided there is 
 consensus on it of course.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



test failure under Windows: trunk benchmark module: LineDocSourceTest: could not remove temp dir

2011-11-21 Thread Steven A Rowe
Under Windows 7, this test fails 100% of the time for me - no other Lucene/Solr 
test does this:

   cd modules/benchmark
   ant clean test -Dtestcase=LineDocSourceTest

Test output follows:

junit-sequential:
[junit] Testsuite: 
org.apache.lucene.benchmark.byTask.feeds.LineDocSourceTest
[junit] Tests run: 8, Failures: 0, Errors: 1, Time elapsed: 1.472 sec
[junit] 
[junit] - Standard Error -
[junit] java.io.IOException: could not delete 
C:\svn\lucene\dev\trunk\modules\benchmark\build\test\1\benchmark3213632136tmp\one-line
[junit] at org.apache.lucene.util._TestUtil.rmDir(_TestUtil.java:85)
[junit] at 
org.apache.lucene.util.LuceneTestCase.clearTempDirectoriesAfterClass(LuceneTestCase.java:447)
[junit] at 
org.apache.lucene.util.LuceneTestCase.afterClassLuceneTestCaseJ4(LuceneTestCase.java:377)
[junit] at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
[junit] at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
[junit] at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
[junit] at java.lang.reflect.Method.invoke(Method.java:597)
[junit] at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:44)
[junit] at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15)
[junit] at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:41)
[junit] at 
org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:37)
[junit] at org.junit.runners.ParentRunner.run(ParentRunner.java:236)
[junit] at 
junit.framework.JUnit4TestAdapter.run(JUnit4TestAdapter.java:39)
[junit] at 
org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.run(JUnitTestRunner.java:420)
[junit] at 
org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.launch(JUnitTestRunner.java:911)
[junit] at 
org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.main(JUnitTestRunner.java:768)
[junit] path 
C:\svn\lucene\dev\trunk\modules\benchmark\build\test\1\benchmark3213632136tmp 
allocated from
[junit] org.apache.lucene.util._TestUtil.getTempDir(_TestUtil.java:65)
[junit] 
org.apache.lucene.benchmark.BenchmarkTestCase.beforeClassBenchmarkTestCase(BenchmarkTestCase.java:39)
[junit] NOTE: reproduce with (hopefully): ant test 
-Dtestcase=LineDocSourceTest -Dtests.seed=-10f2bc7ed2006d82:0:-285ae758d1d45d49 
-Dargs=-Dfile.encoding=Cp1252
[junit] NOTE: test params are: codec=Lucene40: 
{docid=PostingsFormat(name=Memory), body=PostingsFormat(name=MockRandom), 
docdatenum=PostingsFormat(name=Lucene40WithOrds), 
docdate=PostingsFormat(name=MockRandom), 
doctitle=MockFixedIntBlock(blockSize=897), 
doctimesecnum=PostingsFormat(name=NestedPulsing), 
mySpecialField=PostingsFormat(name=Memory), docname=Pulsing40(freqCutoff=18 
minBlockSize=98 maxBlockSize=231)}, 
sim=RandomSimilarityProvider(queryNorm=false,coord=false): {}, locale=da, 
timezone=Atlantic/Reykjavik
[junit] NOTE: all tests run in this JVM:
[junit] [LineDocSourceTest]
[junit] NOTE: Windows 7 6.1 amd64/Sun Microsystems Inc. 1.6.0_21 
(64-bit)/cpus=4,threads=1,free=124797192,total=128057344
[junit] -  ---
[junit] Testcase: 
org.apache.lucene.benchmark.byTask.feeds.LineDocSourceTest:   Caused an 
ERROR
[junit] java.lang.AssertionError: could not remove temp dir: 
C:\svn\lucene\dev\trunk\modules\benchmark\build\test\1\benchmark3213632136tmp
[junit] java.lang.RuntimeException: java.lang.AssertionError: could not 
remove temp dir: 
C:\svn\lucene\dev\trunk\modules\benchmark\build\test\1\benchmark3213632136tmp
[junit] at 
org.apache.lucene.util.LuceneTestCase.afterClassLuceneTestCaseJ4(LuceneTestCase.java:398)
[junit] at 
org.apache.lucene.util.LuceneTestCase.clearTempDirectoriesAfterClass(LuceneTestCase.java:459)
[junit] at 
org.apache.lucene.util.LuceneTestCase.afterClassLuceneTestCaseJ4(LuceneTestCase.java:377)
[junit] 
[junit] 
[junit] Test org.apache.lucene.benchmark.byTask.feeds.LineDocSourceTest 
FAILED


RE: [JENKINS-MAVEN] Lucene-Solr-Maven-trunk #303: POMs out of sync

2011-11-21 Thread Steven A Rowe
I wasn’t able to reproduce using Ant.

But I can consistently reproduce the failure with both of these Maven cmdlines 
under modules/facet/:

mvn 
-Dtests.seed=78e524cead5893e3:4edf9eedd4ed0a36:1912c3654ff9f2e1 
-Dtest=TestIndexClose clean test

mvn -Dtests.codec=Lucene3x -Dtest=TestIndexClose clean test

Without specifying -Dtests.codec or -Dtests.seed, “mvn clean test” usually 
succeeds; it looks like this is specific to the Lucene3x codec under Maven.

I ran all Lucene/Solr tests under Maven with “-Dtests.codec=Lucene3x” (using 
mvn’s --fail-at-end option), and the facet module’s TestIndexClose is the only 
failing test.

Maybe there is a Maven-specific test classpath ordering problem here?

Steve

From: Shai Erera [mailto:ser...@gmail.com]
Sent: Monday, November 21, 2011 11:13 AM
To: dev@lucene.apache.org
Subject: Re: [JENKINS-MAVEN] Lucene-Solr-Maven-trunk #303: POMs out of sync

Perhaps it's a build error (i.e. not cleaning after the test)? This is the 
exception that appears in the log:



The following exceptions were thrown by threads:

*** Thread: Lucene Merge Thread #0 ***

org.apache.lucene.index.MergePolicy$MergeException: 
java.lang.IllegalStateException: sub-readers have different 
BytesRef.Comparators; cannot merge

  at 
org.apache.lucene.index.ConcurrentMergeScheduler.handleMergeException(ConcurrentMergeScheduler.java:513)

  at 
org.apache.lucene.index.ConcurrentMergeScheduler$MergeThread.run(ConcurrentMergeScheduler.java:478)

Caused by: java.lang.IllegalStateException: sub-readers have different 
BytesRef.Comparators; cannot merge

  at org.apache.lucene.index.MultiTerms.<init>(MultiTerms.java:55)

  at org.apache.lucene.index.MultiFields.terms(MultiFields.java:232)

  at 
org.apache.lucene.index.MultiFieldsEnum.terms(MultiFieldsEnum.java:114)

  at 
org.apache.lucene.index.codecs.FieldsConsumer.merge(FieldsConsumer.java:51)

  at 
org.apache.lucene.index.SegmentMerger.mergeTerms(SegmentMerger.java:293)

  at 
org.apache.lucene.index.SegmentMerger.merge(SegmentMerger.java:122)

  at 
org.apache.lucene.index.IndexWriter.mergeMiddle(IndexWriter.java:3581)

  at 
org.apache.lucene.index.IndexWriter.merge(IndexWriter.java:3215)

  at 
org.apache.lucene.index.ConcurrentMergeScheduler.doMerge(ConcurrentMergeScheduler.java:383)

  at 
org.apache.lucene.index.ConcurrentMergeScheduler$MergeThread.run(ConcurrentMergeScheduler.java:451)

*** Thread: Lucene Merge Thread #1 ***

org.apache.lucene.index.MergePolicy$MergeException: 
java.lang.IllegalStateException: sub-readers have different 
BytesRef.Comparators; cannot merge

  at 
org.apache.lucene.index.ConcurrentMergeScheduler.handleMergeException(ConcurrentMergeScheduler.java:513)

  at 
org.apache.lucene.index.ConcurrentMergeScheduler$MergeThread.run(ConcurrentMergeScheduler.java:478)

Caused by: java.lang.IllegalStateException: sub-readers have different 
BytesRef.Comparators; cannot merge

  at org.apache.lucene.index.MultiTerms.<init>(MultiTerms.java:55)

  at org.apache.lucene.index.MultiFields.terms(MultiFields.java:232)

  at 
org.apache.lucene.index.MultiFieldsEnum.terms(MultiFieldsEnum.java:114)

  at 
org.apache.lucene.index.codecs.FieldsConsumer.merge(FieldsConsumer.java:51)

  at 
org.apache.lucene.index.SegmentMerger.mergeTerms(SegmentMerger.java:293)

  at 
org.apache.lucene.index.SegmentMerger.merge(SegmentMerger.java:122)

  at 
org.apache.lucene.index.IndexWriter.mergeMiddle(IndexWriter.java:3581)

  at 
org.apache.lucene.index.IndexWriter.merge(IndexWriter.java:3215)

  at 
org.apache.lucene.index.ConcurrentMergeScheduler.doMerge(ConcurrentMergeScheduler.java:383)

  at 
org.apache.lucene.index.ConcurrentMergeScheduler$MergeThread.run(ConcurrentMergeScheduler.java:451)

*** Thread: Lucene Merge Thread #2 ***

org.apache.lucene.index.MergePolicy$MergeException: 
java.lang.IllegalStateException: sub-readers have different 
BytesRef.Comparators; cannot merge

  at 
org.apache.lucene.index.ConcurrentMergeScheduler.handleMergeException(ConcurrentMergeScheduler.java:513)

  at 
org.apache.lucene.index.ConcurrentMergeScheduler$MergeThread.run(ConcurrentMergeScheduler.java:478)

Caused by: java.lang.IllegalStateException: sub-readers have different 
BytesRef.Comparators; cannot merge

  at org.apache.lucene.index.MultiTerms.<init>(MultiTerms.java:55)

  at org.apache.lucene.index.MultiFields.terms(MultiFields.java:232)

  at 
org.apache.lucene.index.MultiFieldsEnum.terms(MultiFieldsEnum.java:114)

  at 
org.apache.lucene.index.codecs.FieldsConsumer.merge(FieldsConsumer.java:51)

  at 
org.apache.lucene.index.SegmentMerger.mergeTerms(SegmentMerger.java:293)

  

[jira] [Created] (SOLR-2910) PingRequest fails if there is no defaultSearchField in the schema.xml

2011-11-21 Thread Yury Kats (Created) (JIRA)
PingRequest fails if there is no defaultSearchField in the schema.xml
-

 Key: SOLR-2910
 URL: https://issues.apache.org/jira/browse/SOLR-2910
 Project: Solr
  Issue Type: Bug
Affects Versions: 4.0
Reporter: Yury Kats


PingRequest fails if there is no defaultSearchField in the schema.xml

CommonsHttpSolrServer#ping produces:

org.apache.solr.common.SolrException: Ping query caused exception: no field 
name specified in query and no defaultSearchField defined in schema.xml
at 
org.apache.solr.handler.PingRequestHandler.handleRequestBody(PingRequestHandler.java:77)
at 
org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:129)
at org.apache.solr.core.SolrCore.execute(SolrCore.java:1407)
at 
org.apache.solr.servlet.SolrDispatchFilter.execute(SolrDispatchFilter.java:353)
...
request: http://127.0.0.1:8983/solr/a1/admin/ping?wt=javabin&version=2

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Reopened] (LUCENE-3464) Rename IndexReader.reopen to make it clear that reopen may not happen

2011-11-21 Thread Uwe Schindler (Reopened) (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-3464?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Uwe Schindler reopened LUCENE-3464:
---


I am reopening this issue as there is an API problem which makes 
openIfChanged not consistently usable with FilterIndexReader:

If you have a FilterIndexReader that did not implement reopen(...) in the past, 
the base class IndexReader threw UOE. This was fine, as a FilterIndexReader 
cannot support reopen unless it is specifically supported. A FilterIndexReader of 
course must reopen the delegate reader and then wrap it again to filter it. This 
was done by overriding the reopen() methods, checking if the delegate returned 
another reader and, if yes, wrapping it.

I tried to transform code that implements this pattern to Lucene 3.5RC1 but 
failed to do it in a clean way: reopen was replaced by a static 
IR.openIfChanged(IR oldReader) that delegates to the specific IndexReader's 
implementation of doOpenIfChanged (which is protected).

To implement the above pattern, doOpenIfChanged must be overridden in 
FilterIndexReader (again, the default *must* throw UOE, otherwise reopening a 
filtered reader returns a non-filtered one). This method must call the delegate's 
doOpenIfChanged and, if it returns != null, wrap the result in our FilterIndexReader 
implementation. The problem: this cannot be implemented if the custom filter is 
in a 3rd-party package, as it cannot call the protected doOpenIfChanged. The 
workaround is to use IndexReader.openIfChanged(delegate), but this looks broken 
and violates the pattern.

The good thing about the workaround is that the sophisticated VirtualMethod 
backwards-compatibility layer works correctly. We must at least document this 
behaviour in FilterIndexReader or fix the API.

 Rename IndexReader.reopen to make it clear that reopen may not happen
 -

 Key: LUCENE-3464
 URL: https://issues.apache.org/jira/browse/LUCENE-3464
 Project: Lucene - Java
  Issue Type: Bug
Reporter: Michael McCandless
Assignee: Michael McCandless
 Fix For: 3.5, 4.0

 Attachments: LUCENE-3464.3x.patch, LUCENE-3464.patch, 
 LUCENE-3464.patch


 Spinoff from LUCENE-3454 where Shai noted this inconsistency.
 IR.reopen sounds like an unconditional operation, which has trapped users in 
 the past into always closing the old reader instead of only closing it if the 
 returned reader is new.
 I think this hidden maybe-ness is trappy and we should rename it 
 (maybeReopen?  reopenIfNeeded?).
 In addition, instead of returning this when the reopen didn't happen, I 
 think we should return null to enforce proper usage of the maybe-ness of this 
 API.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Issue Comment Edited] (LUCENE-3464) Rename IndexReader.reopen to make it clear that reopen may not happen

2011-11-21 Thread Uwe Schindler (Issue Comment Edited) (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-3464?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13154430#comment-13154430
 ] 

Uwe Schindler edited comment on LUCENE-3464 at 11/21/11 8:03 PM:
-

I am reopening this issue as there is an API problem which makes 
openIfChanged not consistently usable with FilterIndexReader:

If you have a FilterIndexReader that did not implement reopen(...) in the past, 
the base class IndexReader threw UOE. This was fine, as a FilterIndexReader 
cannot support reopen unless it is specifically supported. A FilterIndexReader of 
course must reopen the delegate reader and then wrap it again to filter it. This 
was done by overriding the reopen() methods, checking if the delegate returned 
another reader and, if yes, wrapping it.

I tried to transform code that implements this pattern to Lucene 3.5RC1 but 
failed to do it in a clean way: reopen was replaced by a static 
IR.openIfChanged(IR oldReader) that delegates to the specific IndexReader's 
implementation of doOpenIfChanged (which is protected).

To implement the above pattern, doOpenIfChanged must be overridden in 
FilterIndexReader (again, the default *must* throw UOE, otherwise reopening a 
filtered reader returns a non-filtered one). This method must call the delegate's 
doOpenIfChanged and, if it returns != null, wrap the result in our FilterIndexReader 
implementation. The problem: this cannot be implemented if the custom filter is 
in a 3rd-party package, as it cannot call the protected doOpenIfChanged. The 
workaround is to use IndexReader.openIfChanged(delegate), but this looks broken 
and violates the pattern.

The good thing about the workaround is that the sophisticated VirtualMethod 
backwards-compatibility layer works correctly. We must at least document this 
behaviour in FilterIndexReader or fix the API.

 Rename IndexReader.reopen to make it clear that reopen may not happen
 -

 Key: LUCENE-3464
 URL: https://issues.apache.org/jira/browse/LUCENE-3464
 Project: Lucene - Java
  Issue Type: Bug
Reporter: Michael McCandless
Assignee: Michael McCandless
 Fix For: 3.5, 4.0

 Attachments: LUCENE-3464.3x.patch, LUCENE-3464.patch, 
 LUCENE-3464.patch


 Spinoff from LUCENE-3454 where Shai noted this inconsistency.
 IR.reopen sounds like an unconditional operation, which has trapped users in 
 the past into always closing the old reader instead of only closing it if the 
 returned reader is new.
 I think this hidden maybe-ness is trappy and we should rename it 
 (maybeReopen?  reopenIfNeeded?).
 In addition, instead of returning this when the reopen didn't happen, I 
 think we should return null to enforce proper usage of the maybe-ness of this 
 API.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For 

[jira] [Commented] (LUCENE-3464) Rename IndexReader.reopen to make it clear that reopen may not happen

2011-11-21 Thread Robert Muir (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-3464?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13154434#comment-13154434
 ] 

Robert Muir commented on LUCENE-3464:
-

{quote}
The problem: This cannot be implemented if the custom Filter is in a 3rd party 
package, as it cannot call the protected doOpenIfChanged.
{quote}

I don't understand this statement, because FilterReader extends IndexReader so 
it should be able to call protected IR methods.

 Rename IndexReader.reopen to make it clear that reopen may not happen
 -

 Key: LUCENE-3464
 URL: https://issues.apache.org/jira/browse/LUCENE-3464
 Project: Lucene - Java
  Issue Type: Bug
Reporter: Michael McCandless
Assignee: Michael McCandless
 Fix For: 3.5, 4.0

 Attachments: LUCENE-3464.3x.patch, LUCENE-3464.patch, 
 LUCENE-3464.patch


 Spinoff from LUCENE-3454 where Shai noted this inconsistency.
 IR.reopen sounds like an unconditional operation, which has trapped users in 
 the past into always closing the old reader instead of only closing it if the 
 returned reader is new.
 I think this hidden maybe-ness is trappy and we should rename it 
 (maybeReopen?  reopenIfNeeded?).
 In addition, instead of returning this when the reopen didn't happen, I 
 think we should return null to enforce proper usage of the maybe-ness of this 
 API.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-3464) Rename IndexReader.reopen to make it clear that reopen may not happen

2011-11-21 Thread Simon Willnauer (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-3464?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13154437#comment-13154437
 ] 

Simon Willnauer commented on LUCENE-3464:
-

bq. I don't understand this statement, because FilterReader extends IndexReader 
so it should be able to call protected IR methods.


but not the delegate's method, though.

 Rename IndexReader.reopen to make it clear that reopen may not happen
 -

 Key: LUCENE-3464
 URL: https://issues.apache.org/jira/browse/LUCENE-3464
 Project: Lucene - Java
  Issue Type: Bug
Reporter: Michael McCandless
Assignee: Michael McCandless
 Fix For: 3.5, 4.0

 Attachments: LUCENE-3464.3x.patch, LUCENE-3464.patch, 
 LUCENE-3464.patch


 Spinoff from LUCENE-3454 where Shai noted this inconsistency.
 IR.reopen sounds like an unconditional operation, which has trapped users in 
 the past into always closing the old reader instead of only closing it if the 
 returned reader is new.
 I think this hidden maybe-ness is trappy and we should rename it 
 (maybeReopen?  reopenIfNeeded?).
 In addition, instead of returning this when the reopen didn't happen, I 
 think we should return null to enforce proper usage of the maybe-ness of this 
 API.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: [Lucene.Net] Roadmap

2011-11-21 Thread casper...@caspershouse.com
+1 on the suggestion to move Close -> IDisposable; not being able to use 
'using' is such a pain, and an eyesore on the code.


Although it will have to be done properly, and not just have Dispose call 
Close (you should have proper protected virtual Dispose methods to take 
inheritance into account, etc).
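
A minimal sketch of the pattern described here, with hypothetical class names: the public Dispose stays non-virtual and delegates to a protected virtual Dispose(bool), so subclasses can add their own cleanup and still call the base implementation.

{code}
using System;

public class BaseResource : IDisposable   // hypothetical names, not Lucene.Net code
{
    private bool disposed;

    public void Dispose()
    {
        Dispose(true);
        GC.SuppressFinalize(this);
    }

    protected virtual void Dispose(bool disposing)
    {
        if (disposed) return;
        if (disposing)
        {
            // Release managed resources owned by the base class.
        }
        disposed = true;
    }
}

public class DerivedResource : BaseResource
{
    protected override void Dispose(bool disposing)
    {
        if (disposing)
        {
            // Release resources owned by the derived class.
        }
        base.Dispose(disposing);   // always let the base class clean up too
    }
}
{code}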


- Nick




RE: [VOTE] Release Lucene/Solr 3.5.0, RC1

2011-11-21 Thread Uwe Schindler
Hi,

the Lucene-Core package works for PANGAEA as drop-in replacement, so this is 
all fine.

I found a problem (an inconsistency in the new openIfChanged API) when porting 
the code to not use deprecated stuff: 
https://issues.apache.org/jira/browse/LUCENE-3464 (reopened). The deprecated 
stuff successfully used the sophisticated backwards-compatibility layer, cool, Mike!

Another problem with the release candidate: when running contrib tests on 
Windows, the benchmark tests fail on every run, since the benchmark classes 
don’t correctly close all files/readers/whatever after Robert's commit 1203997. 
I don't like this for an official release, sorry. The tests should pass (somehow, 
most of the time - lol@solr), but consistently failing is bad (see 
https://issues.apache.org/jira/browse/LUCENE-3583).

I checked CheckIndex on pre-3.5 indexes, all worked fine for the huge PANGAEA 
index, no Hotspot bugs (1.6.0_24), will later try 1.6.0_29. 
Optimizing/Upgrading indexes worked, too. As said before, backwards 
compatibility is fine, no recompilation was needed here.

I will try to run smoke tests later, but I don't like the above 2 issues, so my 
+/-0 (as there is a workaround for the first issue) :(

Uwe

-
Uwe Schindler
H.-H.-Meier-Allee 63, D-28213 Bremen
http://www.thetaphi.de
eMail: u...@thetaphi.de


 -Original Message-
 From: Simon Willnauer [mailto:simon.willna...@googlemail.com]
 Sent: Monday, November 21, 2011 1:47 PM
 To: dev@lucene.apache.org
 Subject: [VOTE] Release Lucene/Solr 3.5.0, RC1
 
 Please vote to release the RC1 artifacts at:
 
 https://people.apache.org/~simonw/staging_area/lucene-solr-3.5.0-RC1-
 rev1204425/
 
 as Lucene 3.5.0 and Solr 3.5.0.
 
 Simon Willnauer
 
 -
 To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional
 commands, e-mail: dev-h...@lucene.apache.org


-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-3464) Rename IndexReader.reopen to make it clear that reopen may not happen

2011-11-21 Thread Robert Muir (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-3464?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Muir updated LUCENE-3464:


Attachment: LUCENE-3464_see_its_just_fine.patch

That's no problem, it just calls super.doOpenIfChanged; see my proof of concept.

 Rename IndexReader.reopen to make it clear that reopen may not happen
 -

 Key: LUCENE-3464
 URL: https://issues.apache.org/jira/browse/LUCENE-3464
 Project: Lucene - Java
  Issue Type: Bug
Reporter: Michael McCandless
Assignee: Michael McCandless
 Fix For: 3.5, 4.0

 Attachments: LUCENE-3464.3x.patch, LUCENE-3464.patch, 
 LUCENE-3464.patch, LUCENE-3464_see_its_just_fine.patch


 Spinoff from LUCENE-3454 where Shai noted this inconsistency.
 IR.reopen sounds like an unconditional operation, which has trapped users in 
 the past into always closing the old reader instead of only closing it if the 
 returned reader is new.
 I think this hidden maybe-ness is trappy and we should rename it 
 (maybeReopen?  reopenIfNeeded?).
 In addition, instead of returning this when the reopen didn't happen, I 
 think we should return null to enforce proper usage of the maybe-ness of this 
 API.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-3464) Rename IndexReader.reopen to make it clear that reopen may not happen

2011-11-21 Thread Uwe Schindler (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-3464?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13154440#comment-13154440
 ] 

Uwe Schindler commented on LUCENE-3464:
---

This code does not compile (fails by saying IndexReader has no accessible 
method doOpenIfChanged()):

{code:java}
package my.sophisticated.package;

@Override
protected IndexReader doOpenIfChanged() throws CorruptIndexException, 
IOException {
  final IndexReader n=in.doOpenIfChanged();
  return (n==null) ? null : new MySophisticatedIndexReader(n);
}
{code}

This is the working workaround but looks wrong (and works around the 
VirtualMethod issues):

{code:java}
@Override
protected IndexReader doOpenIfChanged() throws CorruptIndexException, 
IOException {
  final IndexReader n=IndexReader.openIfChanged(in);
  return (n==null) ? null : new MySophisticatedIndexReader(n);
}
{code}


 Rename IndexReader.reopen to make it clear that reopen may not happen
 -

 Key: LUCENE-3464
 URL: https://issues.apache.org/jira/browse/LUCENE-3464
 Project: Lucene - Java
  Issue Type: Bug
Reporter: Michael McCandless
Assignee: Michael McCandless
 Fix For: 3.5, 4.0

 Attachments: LUCENE-3464.3x.patch, LUCENE-3464.patch, 
 LUCENE-3464.patch, LUCENE-3464_see_its_just_fine.patch


 Spinoff from LUCENE-3454 where Shai noted this inconsistency.
 IR.reopen sounds like an unconditional operation, which has trapped users in 
 the past into always closing the old reader instead of only closing it if the 
 returned reader is new.
 I think this hidden maybe-ness is trappy and we should rename it 
 (maybeReopen?  reopenIfNeeded?).
 In addition, instead of returning this when the reopen didn't happen, I 
 think we should return null to enforce proper usage of the maybe-ness of this 
 API.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: [Lucene.Net] Roadmap

2011-11-21 Thread Christopher Currens
Some of the Lucene classes have Dispose methods, well, ones that call Close
(and that Close method may or may not call base.Close(), whether needed or not).
 Virtual Dispose methods can be dangerous only in that they're easy to
implement wrong.  However, it shouldn't be too bad, at least with a
line-by-line port, as we would make the call to the base class whenever
Lucene does, and that would (should) give us the same behavior, implemented
properly.  I'm not aware of any differences in the JVM regarding inheritance
and base methods being called automatically, particularly Close methods.

Slightly unrelated, another annoyance is the use of Java Iterators vs C#
Enumerables.  A lot of our code is the way it is simply because there are
Iterators, but it could be converted to Enumerables.  The whole HasNext/Next
vs C#'s MoveNext()/Current thing is annoying, but it's used all over in the
base code, and would have to be changed there as well.  Either way, I would
like to push for that before 3.0.3 is released.  IMO, small changes like
this still keep the code similar to the line-by-line port, in that it
doesn't add any difficulties in the porting process, but provides great
benefits to the users of the code by having a .NET-centric API.  I don't
think it would violate the project description we have listed on our
Incubator page, either.
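
A sketch of the conversion being discussed, assuming a Java-style iterator shape (the interface below is hypothetical, not an actual Lucene.Net type); one small adapter lets call sites use foreach instead of hand-written HasNext/Next loops.

{code}
using System.Collections.Generic;

// Assumed Java-style iterator shape; not an actual Lucene.Net interface.
public interface IJavaStyleIterator<T>
{
    bool HasNext();
    T Next();
}

public static class IteratorExtensions
{
    // Lazily adapts HasNext/Next to IEnumerable<T>, i.e. MoveNext()/Current.
    public static IEnumerable<T> AsEnumerable<T>(this IJavaStyleIterator<T> it)
    {
        while (it.HasNext())
            yield return it.Next();
    }
}
{code}

With an adapter like this, foreach (var term in terms.AsEnumerable()) { ... } works without touching the ported iterator classes themselves.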


Thanks,
Christopher

On Mon, Nov 21, 2011 at 12:03 PM, casper...@caspershouse.com 
casper...@caspershouse.com wrote:

 +1 on the suggestion to move Close -> IDisposable; not being able to use
 'using' is such a pain, and an eyesore on the code.


 Although it will have to be done properly, and not just have Dispose call
 Close (you should have proper protected virtual Dispose methods to take
 inheritance into account, etc).


 - Nick

 


[jira] [Commented] (LUCENE-3464) Rename IndexReader.reopen to make it clear that reopen may not happen

2011-11-21 Thread Uwe Schindler (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-3464?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13154442#comment-13154442
 ] 

Uwe Schindler commented on LUCENE-3464:
---

bq. Thats no problem, it just calls super.doOpenIfChanged, see my proof of 
concept.

This throws UOE, as FilterIndexReader (correctly!!!) does not implement 
doOpenIfChanged. If it did implement it, a FilterIndexReader without its own 
doOpenIfChanged impl would be horribly broken, as it would return an unfiltered 
reader!!!

 Rename IndexReader.reopen to make it clear that reopen may not happen
 -

 Key: LUCENE-3464
 URL: https://issues.apache.org/jira/browse/LUCENE-3464
 Project: Lucene - Java
  Issue Type: Bug
Reporter: Michael McCandless
Assignee: Michael McCandless
 Fix For: 3.5, 4.0

 Attachments: LUCENE-3464.3x.patch, LUCENE-3464.patch, 
 LUCENE-3464.patch, LUCENE-3464_see_its_just_fine.patch


 Spinoff from LUCENE-3454 where Shai noted this inconsistency.
 IR.reopen sounds like an unconditional operation, which has trapped users in 
 the past into always closing the old reader instead of only closing it if the 
 returned reader is new.
 I think this hidden maybe-ness is trappy and we should rename it 
 (maybeReopen?  reopenIfNeeded?).
 In addition, instead of returning this when the reopen didn't happen, I 
 think we should return null to enforce proper usage of the maybe-ness of this 
 API.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-3464) Rename IndexReader.reopen to make it clear that reopen may not happen

2011-11-21 Thread Robert Muir (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-3464?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13154441#comment-13154441
 ] 

Robert Muir commented on LUCENE-3464:
-

Uwe, see my patch, it uses its own sophisticated package and works fine.

 Rename IndexReader.reopen to make it clear that reopen may not happen
 -

 Key: LUCENE-3464
 URL: https://issues.apache.org/jira/browse/LUCENE-3464
 Project: Lucene - Java
  Issue Type: Bug
Reporter: Michael McCandless
Assignee: Michael McCandless
 Fix For: 3.5, 4.0

 Attachments: LUCENE-3464.3x.patch, LUCENE-3464.patch, 
 LUCENE-3464.patch, LUCENE-3464_see_its_just_fine.patch


 Spinoff from LUCENE-3454 where Shai noted this inconsistency.
 IR.reopen sounds like an unconditional operation, which has trapped users in 
 the past into always closing the old reader instead of only closing it if the 
 returned reader is new.
 I think this hidden maybe-ness is trappy and we should rename it 
 (maybeReopen?  reopenIfNeeded?).
 In addition, instead of returning this when the reopen didn't happen, I 
 think we should return null to enforce proper usage of the maybe-ness of this 
 API.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-3464) Rename IndexReader.reopen to make it clear that reopen may not happen

2011-11-21 Thread Robert Muir (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-3464?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=1315#comment-1315
 ] 

Robert Muir commented on LUCENE-3464:
-

In my patch FIR returns whatever the underlying reader does.

If you want this to be UOE *if doReopen is not overridden*, well, you know how 
to do that :)

 Rename IndexReader.reopen to make it clear that reopen may not happen
 -

 Key: LUCENE-3464
 URL: https://issues.apache.org/jira/browse/LUCENE-3464
 Project: Lucene - Java
  Issue Type: Bug
Reporter: Michael McCandless
Assignee: Michael McCandless
 Fix For: 3.5, 4.0

 Attachments: LUCENE-3464.3x.patch, LUCENE-3464.patch, 
 LUCENE-3464.patch, LUCENE-3464_see_its_just_fine.patch


 Spinoff from LUCENE-3454 where Shai noted this inconsistency.
 IR.reopen sounds like an unconditional operation, which has trapped users in 
 the past into always closing the old reader instead of only closing it if the 
 returned reader is new.
 I think this hidden maybe-ness is trappy and we should rename it 
 (maybeReopen?  reopenIfNeeded?).
 In addition, instead of returning this when the reopen didn't happen, I 
 think we should return null to enforce proper usage of the maybe-ness of this 
 API.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-3464) Rename IndexReader.reopen to make it clear that reopen may not happen

2011-11-21 Thread Uwe Schindler (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-3464?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13154445#comment-13154445
 ] 

Uwe Schindler commented on LUCENE-3464:
---

Robert, read my comment about the reason why FilterIndexReader should never 
delegate to doOpenIfChanged(): it would return an unfiltered reader on 
reopening, which would be a hidden bug for an implementor!

 Rename IndexReader.reopen to make it clear that reopen may not happen
 -

 Key: LUCENE-3464
 URL: https://issues.apache.org/jira/browse/LUCENE-3464
 Project: Lucene - Java
  Issue Type: Bug
Reporter: Michael McCandless
Assignee: Michael McCandless
 Fix For: 3.5, 4.0

 Attachments: LUCENE-3464.3x.patch, LUCENE-3464.patch, 
 LUCENE-3464.patch, LUCENE-3464_see_its_just_fine.patch


 Spinoff from LUCENE-3454 where Shai noted this inconsistency.
 IR.reopen sounds like an unconditional operation, which has trapped users in 
 the past into always closing the old reader instead of only closing it if the 
 returned reader is new.
 I think this hidden maybe-ness is trappy and we should rename it 
 (maybeReopen?  reopenIfNeeded?).
 In addition, instead of returning this when the reopen didn't happen, I 
 think we should return null to enforce proper usage of the maybe-ness of this 
 API.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-3464) Rename IndexReader.reopen to make it clear that reopen may not happen

2011-11-21 Thread Uwe Schindler (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-3464?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13154448#comment-13154448
 ] 

Uwe Schindler commented on LUCENE-3464:
---

Robert: If you wrap a standard SegmentReader, it supports doOpenIfChanged. If I 
wrap it with my own custom FilterIndexReader that does not implement 
doOpenIfChanged, it will silently pass through the return value of 
SegmentReader.doOpenIfChanged(), which is no longer filtered. By throwing UOE in 
the default FilterIndexReader, the user will see this when he tries to reopen.

 Rename IndexReader.reopen to make it clear that reopen may not happen
 -

 Key: LUCENE-3464
 URL: https://issues.apache.org/jira/browse/LUCENE-3464
 Project: Lucene - Java
  Issue Type: Bug
Reporter: Michael McCandless
Assignee: Michael McCandless
 Fix For: 3.5, 4.0

 Attachments: LUCENE-3464.3x.patch, LUCENE-3464.patch, 
 LUCENE-3464.patch, LUCENE-3464_see_its_just_fine.patch


 Spinoff from LUCENE-3454 where Shai noted this inconsistency.
 IR.reopen sounds like an unconditional operation, which has trapped users in 
 the past into always closing the old reader instead of only closing it if the 
 returned reader is new.
 I think this hidden maybe-ness is trappy and we should rename it 
 (maybeReopen?  reopenIfNeeded?).
 In addition, instead of returning this when the reopen didn't happen, I 
 think we should return null to enforce proper usage of the maybe-ness of this 
 API.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-3464) Rename IndexReader.reopen to make it clear that reopen may not happen

2011-11-21 Thread Robert Muir (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-3464?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13154449#comment-13154449
 ] 

Robert Muir commented on LUCENE-3464:
-

As I said before, this is simple: FIR's impl only delegates if it's overridden; 
otherwise it throws UOE.

 Rename IndexReader.reopen to make it clear that reopen may not happen
 -

 Key: LUCENE-3464
 URL: https://issues.apache.org/jira/browse/LUCENE-3464
 Project: Lucene - Java
  Issue Type: Bug
Reporter: Michael McCandless
Assignee: Michael McCandless
 Fix For: 3.5, 4.0

 Attachments: LUCENE-3464.3x.patch, LUCENE-3464.patch, 
 LUCENE-3464.patch, LUCENE-3464_see_its_just_fine.patch


 Spinoff from LUCENE-3454 where Shai noted this inconsistency.
 IR.reopen sounds like an unconditional operation, which has trapped users in 
 the past into always closing the old reader instead of only closing it if the 
 returned reader is new.
 I think this hidden maybe-ness is trappy and we should rename it 
 (maybeReopen?  reopenIfNeeded?).
 In addition, instead of returning this when the reopen didn't happen, I 
 think we should return null to enforce proper usage of the maybe-ness of this 
 API.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-3464) Rename IndexReader.reopen to make it clear that reopen may not happen

2011-11-21 Thread Uwe Schindler (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-3464?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13154456#comment-13154456
 ] 

Uwe Schindler commented on LUCENE-3464:
---

 bq. As I said before, this is simple: FIR's impl only delegates if it's 
overridden; otherwise it throws UOE.

Not in your proof of concept. And this proof of concept is no proof, as it 
modifies FilterIndexReader, so it would not work with 3.5.0RC1.

Please note: for this exact reason, FilterIndexReader in the past never 
delegated reopen() (there was an issue open about why delegating is wrong - I 
have to look it up).

 Rename IndexReader.reopen to make it clear that reopen may not happen
 -

 Key: LUCENE-3464
 URL: https://issues.apache.org/jira/browse/LUCENE-3464
 Project: Lucene - Java
  Issue Type: Bug
Reporter: Michael McCandless
Assignee: Michael McCandless
 Fix For: 3.5, 4.0

 Attachments: LUCENE-3464.3x.patch, LUCENE-3464.patch, 
 LUCENE-3464.patch, LUCENE-3464_see_its_just_fine.patch


 Spinoff from LUCENE-3454 where Shai noted this inconsistency.
 IR.reopen sounds like an unconditional operation, which has trapped users in 
 the past into always closing the old reader instead of only closing it if the 
 returned reader is new.
 I think this hidden maybe-ness is trappy and we should rename it 
 (maybeReopen?  reopenIfNeeded?).
 In addition, instead of returning this when the reopen didn't happen, I 
 think we should return null to enforce proper usage of the maybe-ness of this 
 API.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-2906) Implement LFU Cache

2011-11-21 Thread Shawn Heisey (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-2906?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shawn Heisey updated SOLR-2906:
---

Attachment: TestLFUCache.java
LFUCache.java
ConcurrentLFUCache.java

 Implement LFU Cache
 ---

 Key: SOLR-2906
 URL: https://issues.apache.org/jira/browse/SOLR-2906
 Project: Solr
  Issue Type: Sub-task
  Components: search
Affects Versions: 3.4
Reporter: Shawn Heisey
Priority: Minor
 Attachments: ConcurrentLFUCache.java, FastLFUCache.java, 
 LFUCache.java, TestLFUCache.java


 Implement an LFU (Least Frequently Used) cache as the first step towards a 
 full ARC cache

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-2906) Implement LFU Cache

2011-11-21 Thread Shawn Heisey (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-2906?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shawn Heisey updated SOLR-2906:
---

Attachment: (was: ConcurrentLFUCache.java)

 Implement LFU Cache
 ---

 Key: SOLR-2906
 URL: https://issues.apache.org/jira/browse/SOLR-2906
 Project: Solr
  Issue Type: Sub-task
  Components: search
Affects Versions: 3.4
Reporter: Shawn Heisey
Priority: Minor
 Attachments: ConcurrentLFUCache.java, FastLFUCache.java, 
 LFUCache.java, TestLFUCache.java


 Implement an LFU (Least Frequently Used) cache as the first step towards a 
 full ARC cache

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-2906) Implement LFU Cache

2011-11-21 Thread Shawn Heisey (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-2906?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shawn Heisey updated SOLR-2906:
---

Attachment: (was: TestLFUCache.java)

 Implement LFU Cache
 ---

 Key: SOLR-2906
 URL: https://issues.apache.org/jira/browse/SOLR-2906
 Project: Solr
  Issue Type: Sub-task
  Components: search
Affects Versions: 3.4
Reporter: Shawn Heisey
Priority: Minor
 Attachments: ConcurrentLFUCache.java, FastLFUCache.java, 
 LFUCache.java, TestLFUCache.java


 Implement an LFU (Least Frequently Used) cache as the first step towards a 
 full ARC cache

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-2906) Implement LFU Cache

2011-11-21 Thread Shawn Heisey (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-2906?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shawn Heisey updated SOLR-2906:
---

Attachment: (was: LFUCache.java)

 Implement LFU Cache
 ---

 Key: SOLR-2906
 URL: https://issues.apache.org/jira/browse/SOLR-2906
 Project: Solr
  Issue Type: Sub-task
  Components: search
Affects Versions: 3.4
Reporter: Shawn Heisey
Priority: Minor
 Attachments: ConcurrentLFUCache.java, FastLFUCache.java, 
 LFUCache.java, TestLFUCache.java


 Implement an LFU (Least Frequently Used) cache as the first step towards a 
 full ARC cache

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-2906) Implement LFU Cache

2011-11-21 Thread Shawn Heisey (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-2906?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shawn Heisey updated SOLR-2906:
---

Attachment: (was: ConcurrentLFUCache.java)

 Implement LFU Cache
 ---

 Key: SOLR-2906
 URL: https://issues.apache.org/jira/browse/SOLR-2906
 Project: Solr
  Issue Type: Sub-task
  Components: search
Affects Versions: 3.4
Reporter: Shawn Heisey
Priority: Minor
 Attachments: ConcurrentLFUCache.java, FastLFUCache.java, 
 LFUCache.java, TestLFUCache.java


 Implement an LFU (Least Frequently Used) cache as the first step towards a 
 full ARC cache

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-2906) Implement LFU Cache

2011-11-21 Thread Shawn Heisey (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-2906?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shawn Heisey updated SOLR-2906:
---

Attachment: (was: ConcurrentLFUCache.java)

 Implement LFU Cache
 ---

 Key: SOLR-2906
 URL: https://issues.apache.org/jira/browse/SOLR-2906
 Project: Solr
  Issue Type: Sub-task
  Components: search
Affects Versions: 3.4
Reporter: Shawn Heisey
Priority: Minor
 Attachments: ConcurrentLFUCache.java, FastLFUCache.java, 
 LFUCache.java, TestLFUCache.java


 Implement an LFU (Least Frequently Used) cache as the first step towards a 
 full ARC cache

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-2906) Implement LFU Cache

2011-11-21 Thread Shawn Heisey (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-2906?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shawn Heisey updated SOLR-2906:
---

Attachment: (was: FastLFUCache.java)

 Implement LFU Cache
 ---

 Key: SOLR-2906
 URL: https://issues.apache.org/jira/browse/SOLR-2906
 Project: Solr
  Issue Type: Sub-task
  Components: search
Affects Versions: 3.4
Reporter: Shawn Heisey
Priority: Minor
 Attachments: ConcurrentLFUCache.java, LFUCache.java, TestLFUCache.java


 Implement an LFU (Least Frequently Used) cache as the first step towards a 
 full ARC cache

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-2906) Implement LFU Cache

2011-11-21 Thread Shawn Heisey (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-2906?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shawn Heisey updated SOLR-2906:
---

Attachment: (was: FastLFUCache.java)

 Implement LFU Cache
 ---

 Key: SOLR-2906
 URL: https://issues.apache.org/jira/browse/SOLR-2906
 Project: Solr
  Issue Type: Sub-task
  Components: search
Affects Versions: 3.4
Reporter: Shawn Heisey
Priority: Minor
 Attachments: ConcurrentLFUCache.java, FastLFUCache.java, 
 LFUCache.java, TestLFUCache.java


 Implement an LFU (Least Frequently Used) cache as the first step towards a 
 full ARC cache

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-2906) Implement LFU Cache

2011-11-21 Thread Shawn Heisey (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-2906?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13154465#comment-13154465
 ] 

Shawn Heisey commented on SOLR-2906:


All known bugs found and fixed, unit test looks correct and passes.  This was 
created against branch_3x, but trunk probably won't be much different.

IMHO, ready for review and possible inclusion.  The javadoc and other comments 
were reviewed and modified, but not closely.


 Implement LFU Cache
 ---

 Key: SOLR-2906
 URL: https://issues.apache.org/jira/browse/SOLR-2906
 Project: Solr
  Issue Type: Sub-task
  Components: search
Affects Versions: 3.4
Reporter: Shawn Heisey
Priority: Minor
 Attachments: ConcurrentLFUCache.java, LFUCache.java, TestLFUCache.java


 Implement an LFU (Least Frequently Used) cache as the first step towards a 
 full ARC cache

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-3464) Rename IndexReader.reopen to make it clear that reopen may not happen

2011-11-21 Thread Robert Muir (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-3464?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13154463#comment-13154463
 ] 

Robert Muir commented on LUCENE-3464:
-

{quote}
Not in your proof of concept. And this proof of concept is no proof, as it 
modifies FilterIndexReader, so it would not work with 3.5.0RC1.
{quote}

I'm not arguing that nothing needs to be done: I'm just saying that the fact 
it's protected isn't really a problem.

For 3.5.0 there is already a workaround.

 Rename IndexReader.reopen to make it clear that reopen may not happen
 -

 Key: LUCENE-3464
 URL: https://issues.apache.org/jira/browse/LUCENE-3464
 Project: Lucene - Java
  Issue Type: Bug
Reporter: Michael McCandless
Assignee: Michael McCandless
 Fix For: 3.5, 4.0

 Attachments: LUCENE-3464.3x.patch, LUCENE-3464.patch, 
 LUCENE-3464.patch, LUCENE-3464_see_its_just_fine.patch


 Spinoff from LUCENE-3454 where Shai noted this inconsistency.
 IR.reopen sounds like an unconditional operation, which has trapped users in 
 the past into always closing the old reader instead of only closing it if the 
 returned reader is new.
 I think this hidden maybe-ness is trappy and we should rename it 
 (maybeReopen?  reopenIfNeeded?).
 In addition, instead of returning this when the reopen didn't happen, I 
 think we should return null to enforce proper usage of the maybe-ness of this 
 API.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: [Lucene.Net] Roadmap

2011-11-21 Thread casper...@caspershouse.com


Christopher,


I'd say they're not that hard to get right; the pattern for correctly 
implementing the IDisposable interface is well-established and has been 
common practice since .NET 1.0:


http://msdn.microsoft.com/en-us/library/b1yfkh5e(v=VS.100).aspx


Additionally, I said "protected virtual" (as per the recommendation in the 
link above).
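
For reference, a minimal sketch of that pattern layered onto a class that currently exposes Close() (the class and field names are illustrative, not actual Lucene.Net types):

    using System;

    public class SearcherLike : IDisposable   // illustrative stand-in, not a real Lucene.Net class
    {
        private bool disposed;

        public void Close()
        {
            Dispose();               // the existing public API just routes through Dispose
        }

        public void Dispose()
        {
            Dispose(true);
            GC.SuppressFinalize(this);
        }

        protected virtual void Dispose(bool disposing)
        {
            if (disposed) return;
            if (disposing)
            {
                // release managed resources (underlying directory, streams, ...) here
            }
            disposed = true;
        }
    }

Subclasses then override Dispose(bool) and call base.Dispose(disposing), and callers finally get the using statement: using (var s = new SearcherLike()) { /* ... */ }.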


Also agreed on the use of iterators everywhere.  Foreach is your friend.


What would be even better in some cases is using yield return, as I'm sure 
result sets don't need to be materialized everywhere the way they are now.


- Nick



From: Christopher Currens currens.ch...@gmail.com

Sent: Monday, November 21, 2011 3:18 PM

To: lucene-net-...@lucene.apache.org, casper...@caspershouse.com

Subject: Re: [Lucene.Net] Roadmap


Some of the Lucene classes have Dispose methods, well, ones that call Close 
(and that Close method may or may not call base.Close(), if needed or not). 
 Virtual dispose methods can be dangerous only in that they're easy to 
implement wrong.  However, it shouldn't be too bad, at least with a 
line-by-line port, as we would make the call to the base class whenever 
Lucene does, and that would (should) give us the same behavior, implemented 
properly.  I'm not aware of differences in the JVM, regarding inheritance 
and base methods being called automatically, particularly Close methods.

Slightly unrelated, another annoyance is the use of Java Iterators vs C# 
Enumerables.  A lot of our code is there simply because there are 
Iterators, but it could be converted to Enumerables. The whole HasNext, 
Next vs C#'s MoveNext(), Current is annoying, but it's used all over in the 
base code, and would have to be changed there as well.  Either way, I would 
like to push for that before 3.0.3 is released.  IMO, small changes like 
this still keep the code similar to the line-by-line port, in that it 
doesn't add any difficulties in the porting process, but provides great 
benefits to the users of the code, to have a .NET-centric API.  I don't 
think it would violate our project description we have listed on our 
Incubator page, either.
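
As a rough sketch of the kind of conversion meant here (IJavaStyleIterator below is a stand-in for the ported HasNext/Next shape, not an actual Lucene.Net type), an adapter plus yield return gives callers foreach without materializing anything up front:

    using System.Collections.Generic;

    // Stand-in for the Java-style iterator shape the ported code exposes today.
    public interface IJavaStyleIterator<T>
    {
        bool HasNext();
        T Next();
    }

    public static class IteratorAdapter
    {
        // Wraps HasNext/Next as IEnumerable<T>; items are produced lazily via yield return.
        public static IEnumerable<T> AsEnumerable<T>(this IJavaStyleIterator<T> it)
        {
            while (it.HasNext())
            {
                yield return it.Next();
            }
        }
    }

A caller can then write foreach (var item in someIterator.AsEnumerable()) { ... } while the port itself stays close to the Java source.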

Thanks,
Christopher


On Mon, Nov 21, 2011 at 12:03 PM, casper...@caspershouse.com 
casper...@caspershouse.com wrote:

+1 on the suggestion to move Close -> IDisposable; not being able to use

"using" is such a pain, and an eyesore on the code.


Although it will have to be done properly, and not just have Dispose call

Close (you should have proper protected virtual Dispose methods to take

inheritance into account, etc).


- Nick





From: Christopher Currens currens.ch...@gmail.com


Sent: Monday, November 21, 2011 2:56 PM


To: lucene-net-...@lucene.apache.org


Subject: Re: [Lucene.Net] Roadmap


Regarding the 3.0.3 branch I started last week, I've put in a lot of late
nights and gotten far more done in a week and a half than I expected.  The
list of changes is very large, and fortunately, I've documented it in some
files that are in the branches root of certain projects.  I'll list what
changes have been made so far, and some of the concerns I have about them,
as well as what still needs to be done.  You can read them all in detail in
the files that are in the branch.

All changes in 3.0.3 have been ported to the Lucene.Net and
Lucene.Net.Test, except BooleanClause, LockStressTest, MMapDirectory,
NIOFSDirectory, DummyConcurrentLock, NamedThreadFactory, and
ThreadInterruptedException.

MMapDirectory and NIOFSDirectory have never been ported in the first place
for 2.9.4, so I'm not worried about those.  LockStressTest is a
command-line tool, porting it should be easy, but not essential to a 3.0.3
release, IMO.  DummyConcurrentLock also seems unnecessary (and
non-portable) for .NET, since it's based around Java's Lock class and is
only used to bypass locking, which can be done by passing new Object() to
the method.

NamedThreadFactory I'm unsure about.  It's used in ParallelMultiSearcher
(in which I've opted to use the TPL), and seems to be only used for
debugging, possibly testing.  Either way, I'm not sure it's necessary.
Also, named threads would mean we probably would have to move the class
from the TPL, which greatly simplified the code and parallelization of it
all, as I can't see a way to set names for a Task.  I suppose it might be
possible, as Tasks have unique Ids, and you could use a Dictionary to map
the thread's name to the ID in the factory, but you'd have to create a
helper function that would allow you to find a task by its name, which
seems more work than the resulting benefits.  VS2010 already has better
support for debugging tasks over threads (I used it when writing the
class), frankly, it's amazing how easy it was to debug.
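
A hedged sketch of the Dictionary idea above (TaskNames is invented purely for illustration; nothing like it is in the branch):

    using System;
    using System.Collections.Concurrent;
    using System.Threading.Tasks;

    // Illustration only: a Task has no Name, so map Task.Id -> name at creation time.
    public static class TaskNames
    {
        private static readonly ConcurrentDictionary<int, string> Names =
            new ConcurrentDictionary<int, string>();

        public static Task StartNamed(string name, Action body)
        {
            Task task = Task.Factory.StartNew(body);
            Names[task.Id] = name;   // registered right after StartNew; good enough for debugging
            return task;
        }

        public static string NameOf(Task task)
        {
            string name;
            return Names.TryGetValue(task.Id, out name) ? name : "Task-" + task.Id;
        }
    }

Finding a task *by* name would need a reverse lookup on top of this, which is exactly the extra helper work described above as probably not worth the benefit.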


Other than the above, the entire code base in the core dlls is at 3.0.3,
which is exciting, as I'm really hoping we can get Lucene.Net up to the



[jira] [Commented] (LUCENE-3464) Rename IndexReader.reopen to make it clear that reopen may not happen

2011-11-21 Thread Robert Muir (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-3464?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13154468#comment-13154468
 ] 

Robert Muir commented on LUCENE-3464:
-

And just again: I want to point out this is the same fundamental problem as 
LUCENE-2828.

This is why abstract classes are NOT the solution to backwards compatibility, 
because
delegators over abstract classes get all jacked up.


 Rename IndexReader.reopen to make it clear that reopen may not happen
 -

 Key: LUCENE-3464
 URL: https://issues.apache.org/jira/browse/LUCENE-3464
 Project: Lucene - Java
  Issue Type: Bug
Reporter: Michael McCandless
Assignee: Michael McCandless
 Fix For: 3.5, 4.0

 Attachments: LUCENE-3464.3x.patch, LUCENE-3464.patch, 
 LUCENE-3464.patch, LUCENE-3464_see_its_just_fine.patch


 Spinoff from LUCENE-3454 where Shai noted this inconsistency.
 IR.reopen sounds like an unconditional operation, which has trapped users in 
 the past into always closing the old reader instead of only closing it if the 
 returned reader is new.
 I think this hidden maybe-ness is trappy and we should rename it 
 (maybeReopen?  reopenIfNeeded?).
 In addition, instead of returning this when the reopen didn't happen, I 
 think we should return null to enforce proper usage of the maybe-ness of this 
 API.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-2772) DateField parsing/formatting issues of years 0001-1000

2011-11-21 Thread Hoss Man (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-2772?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13154478#comment-13154478
 ] 

Hoss Man commented on SOLR-2772:


1) The _original_ code was more broken than what we have now, so saying it's 4 
times as fast is kind of meaningless.

2) The only issue is not just (positive) years less than 1000; as noted in the 
parent issue, there is confusion about how to handle years less than 0001 
because of ambiguity in the xml dateTime format spec about year 0.

...as I said in the parent issue, unless someone has a better idea, I'm open to 
moving towards (correct) hand rolled code -- but we have to figure out what is 
correct as far as year 0.  This particular child issue was just to move 
forward with something that worked for year 0001 and above.

 DateField parsing/formatting issues of years 0001-1000
 --

 Key: SOLR-2772
 URL: https://issues.apache.org/jira/browse/SOLR-2772
 Project: Solr
  Issue Type: Sub-task
Reporter: Hoss Man
Assignee: Hoss Man
 Fix For: 3.5, 4.0


 see parent task for background

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-2906) Implement LFU Cache

2011-11-21 Thread Simon Willnauer (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-2906?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13154482#comment-13154482
 ] 

Simon Willnauer commented on SOLR-2906:
---

bq. IMHO, ready for review and possible inclusion. The javadoc and other 
comments were reviewed and modified, but not closely.

Shawn, is it possible to upload a diff file (patch)? If you are using a svn 
checkout, make sure to add all new files (svn add) and then run 'svn diff > 
SOLR-2906.patch' from the top level directory. This makes it easier to see what 
changed, and we only have to apply a single file to test & review your changes. 
There is also a wiki page for this: 
http://wiki.apache.org/lucene-java/HowToContribute (see "How to create a patch")

 Implement LFU Cache
 ---

 Key: SOLR-2906
 URL: https://issues.apache.org/jira/browse/SOLR-2906
 Project: Solr
  Issue Type: Sub-task
  Components: search
Affects Versions: 3.4
Reporter: Shawn Heisey
Priority: Minor
 Attachments: ConcurrentLFUCache.java, LFUCache.java, TestLFUCache.java


 Implement an LFU (Least Frequently Used) cache as the first step towards a 
 full ARC cache

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-2772) DateField parsing/formatting issues of years 0001-1000

2011-11-21 Thread Simon Willnauer (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-2772?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13154484#comment-13154484
 ] 

Simon Willnauer commented on SOLR-2772:
---

I don't know if that will help, but maybe we should look at 
http://joda-time.sourceforge.net - it seems they worked around the perf issues 
with the java impl., and it's ASL-2, so maybe we can include or get some code 
from them? Ignore me if I'm completely off here.

 DateField parsing/formatting issues of years 0001-1000
 --

 Key: SOLR-2772
 URL: https://issues.apache.org/jira/browse/SOLR-2772
 Project: Solr
  Issue Type: Sub-task
Reporter: Hoss Man
Assignee: Hoss Man
 Fix For: 3.5, 4.0


 see parent task for background

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



RE: [JENKINS-MAVEN] Lucene-Solr-Maven-trunk #303: POMs out of sync

2011-11-21 Thread Steven A Rowe
I tried this, and the test fails under Maven; codec's class is 
o.a.l.index.codecs.lucene3x.Lucene3xCodec instead of 
o.a.l.index.codecs.preflexrw.PreFlexRWCodec.

So, it's a classpath ordering issue.  I'll see if I can do some re-ordering.

Steve

 -Original Message-
 From: Steven A Rowe [mailto:sar...@syr.edu]
 Sent: Monday, November 21, 2011 2:40 PM
 To: dev@lucene.apache.org
 Subject: RE: [JENKINS-MAVEN] Lucene-Solr-Maven-trunk #303: POMs out of
 sync
 
 Thanks, I'll give it a try. - Steve
 
  -Original Message-
  From: Robert Muir [mailto:rcm...@gmail.com]
  Sent: Monday, November 21, 2011 2:38 PM
  To: dev@lucene.apache.org
  Subject: Re: [JENKINS-MAVEN] Lucene-Solr-Maven-trunk #303: POMs out of
  sync
 
  If you want to confirm that's the bug, just add a test to this contrib
  module that looks like TestImpersonation:
 
  public class TestImpersonation extends LuceneTestCase {
    public void test() throws Exception {
      Codec codec = Codec.forName("Lucene3x");
      assertTrue(codec instanceof PreFlexRWCodec);
    }
  }
 
  On Mon, Nov 21, 2011 at 2:35 PM, Uwe Schindler u...@thetaphi.de wrote:
   Exactly, test-framework must come before core.
  
   The reason here is that 3x impersonation does not work.
  
   Uwe
   --
   Uwe Schindler
   H.-H.-Meier-Allee 63, 28213 Bremen
   http://www.thetaphi.de
  
  
  
   Steven A Rowe sar...@syr.edu schrieb:
  
   I wasn’t able to reproduce using Ant.
  
  
  
    But I can consistently reproduce the failure with both of these Maven
    cmdlines under modules/facet/:
   
        mvn -Dtests.seed=78e524cead5893e3:4edf9eedd4ed0a36:1912c3654ff9f2e1 -Dtest=TestIndexClose clean test
   
        mvn -Dtests.codec=Lucene3x -Dtest=TestIndexClose clean test
   
    Without specifying -Dtests.codec or -Dtests.seed, “mvn clean test” usually
    succeeds; it looks like this is specific to the Lucene3x codec under Maven.
   
    I ran all Lucene/Solr tests under Maven with “-Dtests.codec=Lucene3x”
    (using mvn’s --fail-at-end option), and the facet module’s TestIndexClose is
    the only failing test.
   
    Maybe there is a Maven-specific test classpath ordering problem here?
   
    Steve
  
  
  
   From: Shai Erera [mailto:ser...@gmail.com]
   Sent: Monday, November 21, 2011 11:13 AM
   To: dev@lucene.apache.org
   Subject: Re: [JENKINS-MAVEN] Lucene-Solr-Maven-trunk #303: POMs out
 of
   sync
  
  
  
   Perhaps it's a build error (i.e. not cleaning after the test)? This is the
   exception that appears in the log:
  
  
  
   The following exceptions were thrown by threads:
   
   *** Thread: Lucene Merge Thread #0 ***
   
   org.apache.lucene.index.MergePolicy$MergeException:
   java.lang.IllegalStateException: sub-readers have different
   BytesRef.Comparators; cannot merge
     at org.apache.lucene.index.ConcurrentMergeScheduler.handleMergeException(ConcurrentMergeScheduler.java:513)
     at org.apache.lucene.index.ConcurrentMergeScheduler$MergeThread.run(ConcurrentMergeScheduler.java:478)
   Caused by: java.lang.IllegalStateException: sub-readers have different
   BytesRef.Comparators; cannot merge
     at org.apache.lucene.index.MultiTerms.<init>(MultiTerms.java:55)
     at org.apache.lucene.index.MultiFields.terms(MultiFields.java:232)
     at org.apache.lucene.index.MultiFieldsEnum.terms(MultiFieldsEnum.java:114)
     at org.apache.lucene.index.codecs.FieldsConsumer.merge(FieldsConsumer.java:51)
     at org.apache.lucene.index.SegmentMerger.mergeTerms(SegmentMerger.java:293)
     at org.apache.lucene.index.SegmentMerger.merge(SegmentMerger.java:122)
     at org.apache.lucene.index.IndexWriter.mergeMiddle(IndexWriter.java:3581)
     at org.apache.lucene.index.IndexWriter.merge(IndexWriter.java:3215)
     at org.apache.lucene.index.ConcurrentMergeScheduler.doMerge(ConcurrentMergeScheduler.java:383)
     at org.apache.lucene.index.ConcurrentMergeScheduler$MergeThread.run(ConcurrentMergeScheduler.java:451)
   
   *** Thread: Lucene Merge Thread #1 ***
   
   org.apache.lucene.index.MergePolicy$MergeException:
   java.lang.IllegalStateException: sub-readers have different
   BytesRef.Comparators; cannot merge
     at org.apache.lucene.index.ConcurrentMergeScheduler.handleMergeException(ConcurrentMergeScheduler.java:513)
     at org.apache.lucene.index.ConcurrentMergeScheduler$MergeThread.run(ConcurrentMergeScheduler.java:478)
   Caused by: java.lang.IllegalStateException: sub-readers have different
   BytesRef.Comparators; cannot merge
     at org.apache.lucene.index.MultiTerms.<init>(MultiTerms.java:55)
     at org.apache.lucene.index.MultiFields.terms(MultiFields.java:232)
 

[jira] [Commented] (SOLR-2772) DateField parsing/formatting issues of years 0001-1000

2011-11-21 Thread Hoss Man (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-2772?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13154505#comment-13154505
 ] 

Hoss Man commented on SOLR-2772:


Simon: see the parent issue with a link to the JodaTime thread where I asked about 
using JodaTime to deal with some of the issues faced by DateField and the XML 
dateTime spec (including stub code using JodaTime) and got a response that was 
not very promising.

 DateField parsing/formatting issues of years 0001-1000
 --

 Key: SOLR-2772
 URL: https://issues.apache.org/jira/browse/SOLR-2772
 Project: Solr
  Issue Type: Sub-task
Reporter: Hoss Man
Assignee: Hoss Man
 Fix For: 3.5, 4.0


 see parent task for background

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-2772) DateField parsing/formatting issues of years 0001-1000

2011-11-21 Thread Yonik Seeley (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-2772?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13154509#comment-13154509
 ] 

Yonik Seeley commented on SOLR-2772:


bq. 1) The original code was more broken than what we have now, so saying it's 
4 times as fast is kind of meaningless.

Seems relevant that if the only difference in behavior between the old code and 
the new code is years < 1000, then I could fix the old code and get a 4x 
speedup.


 DateField parsing/formatting issues of years 0001-1000
 --

 Key: SOLR-2772
 URL: https://issues.apache.org/jira/browse/SOLR-2772
 Project: Solr
  Issue Type: Sub-task
Reporter: Hoss Man
Assignee: Hoss Man
 Fix For: 3.5, 4.0


 see parent task for background

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-2906) Implement LFU Cache

2011-11-21 Thread Shawn Heisey (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-2906?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13154514#comment-13154514
 ] 

Shawn Heisey commented on SOLR-2906:


bq. shawn, is it possible to upload a diff file (patch).

These are all new files, no files changed.  svn diff returns nothing.

 Implement LFU Cache
 ---

 Key: SOLR-2906
 URL: https://issues.apache.org/jira/browse/SOLR-2906
 Project: Solr
  Issue Type: Sub-task
  Components: search
Affects Versions: 3.4
Reporter: Shawn Heisey
Priority: Minor
 Attachments: ConcurrentLFUCache.java, LFUCache.java, TestLFUCache.java


 Implement an LFU (Least Frequently Used) cache as the first step towards a 
 full ARC cache

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-2906) Implement LFU Cache

2011-11-21 Thread Shawn Heisey (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-2906?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shawn Heisey updated SOLR-2906:
---

Attachment: SOLR-2906.patch

I figured out what I did wrong.  You have to 'svn add' the files before you can 
'svn diff' :)


 Implement LFU Cache
 ---

 Key: SOLR-2906
 URL: https://issues.apache.org/jira/browse/SOLR-2906
 Project: Solr
  Issue Type: Sub-task
  Components: search
Affects Versions: 3.4
Reporter: Shawn Heisey
Priority: Minor
 Attachments: ConcurrentLFUCache.java, LFUCache.java, SOLR-2906.patch, 
 TestLFUCache.java


 Implement an LFU (Least Frequently Used) cache as the first step towards a 
 full ARC cache

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



RE: [Lucene.Net] Roadmap

2011-11-21 Thread Digy
Hi Chris,

First of all, thank you for your great work on 3.0.3 branch. 
I suppose you took 2.9.4 as a code base to make 3.0.3 port since some of
your problems are the same with those I faced in 2.9.4g branch. 
(e.g., 
Support/MemoryMappedDirectory.cs (but never used in core), 
IDisposable, 
introduction of some Action<T>s, Func<T>s, 
foreach instead of GetEnumerator/MoveNext,
IEquatable<T>,
WeakDictionary<T>,
Set<T>,
etc.
)

Since I also used 3.0.3 as a reference, maybe we can use some of 2.9.4g's
code in 3.0.3 when necessary(I haven't had time to look into 3.0.3 deeply)

Just to ensure coordination, maybe you should create a new issue in
JIRA, so that people send patches to that issue instead of directly
committing.


@Prescott,
2.9.4g is not behind 2.9.4 at the bug fixes & features level. So, it is (I
think) ready for another release. (I have been using it in all my projects for a long time.)


PS: Hearing about the pain of porting code that greatly differs from Java made
me just smile (sorry for that :( ). Be ready for responses that get beyond
the criticism between the "With all due respect" & "Just my $0.02" parentheses.

DIGY

-Original Message-
From: Christopher Currens [mailto:currens.ch...@gmail.com] 
Sent: Monday, November 21, 2011 10:19 PM
To: lucene-net-...@lucene.apache.org; casper...@caspershouse.com
Subject: Re: [Lucene.Net] Roadmap

Some of the Lucene classes have Dispose methods, well, ones that call Close
(and that Close method may or may not call base.Close(), if needed or not).
 Virtual dispose methods can be dangerous only in that they're easy to
implement wrong.  However, it shouldn't be too bad, at least with a
line-by-line port, as we would make the call to the base class whenever
Lucene does, and that would (should) give us the same behavior, implemented
properly.  I'm not aware of differences in the JVM, regarding inheritance
and base methods being called automatically, particularly Close methods.

Slightly unrelated, another annoyance is the use of Java Iterators vs C#
Enumerables.  A lot of our code is there simply because there are
Iterators, but it could be converted to Enumerables. The whole HasNext,
Next vs C#'s MoveNext(), Current is annoying, but it's used all over in the
base code, and would have to be changed there as well.  Either way, I would
like to push for that before 3.0.3 is released.  IMO, small changes like
this still keep the code similar to the line-by-line port, in that it
doesn't add any difficulties in the porting process, but provides great
benefits to the users of the code, to have a .NET centric API.  I don't
think it would violate our project description we have listed on our
Incubator page, either.


Thanks,
Christopher

On Mon, Nov 21, 2011 at 12:03 PM, casper...@caspershouse.com 
casper...@caspershouse.com wrote:

 +1 on the suggestion to move Close -> IDisposable; not being able to use
 "using" is such a pain, and an eyesore on the code.


 Although it will have to be done properly, and not just have Dispose call
 Close (you should have proper protected virtual Dispose methods to take
 inheritance into account, etc).


 - Nick

 

 From: Christopher Currens currens.ch...@gmail.com

 Sent: Monday, November 21, 2011 2:56 PM

 To: lucene-net-...@lucene.apache.org

 Subject: Re: [Lucene.Net] Roadmap


 Regarding the 3.0.3 branch I started last week, I've put in a lot of late

 nights and gotten far more done in a week and a half than I expected.  The

 list of changes is very large, and fortunately, I've documented it in some

 files that are in the branches root of certain projects.  I'll list what

 changes have been made so far, and some of the concerns I have about them,

 as well as what still needs to be done.  You can read them all in detail
 in

 the files that are in the branch.


 All changes in 3.0.3 have been ported to the Lucene.Net and

 Lucene.Net.Test, except BooleanClause, LockStressTest, MMapDirectory,

 NIOFSDirectory, DummyConcurrentLock, NamedThreadFactory, and

 ThreadInterruptedException.


 MMapDirectory and NIOFSDirectory have never been ported in the first place

 for 2.9.4, so I'm not worried about those.  LockStressTest is a

 command-line tool, porting it should be easy, but not essential to a 3.0.3

 release, IMO.  DummyConcurrentLock also seems unnecessary (and

 non-portable) for .NET, since it's based around Java's Lock class and is

 only used to bypass locking, which can be done by passing new Object() to

 the method.

 NamedThreadFactory I'm unsure about.  It's used in ParallelMultiSearcher

 (in which I've opted to use the TPL), and seems to be only used for

 debugging, possibly testing.  Either way, I'm not sure it's necessary.

 Also, named threads would mean we probably would have to move the class

 from the TPL, which greatly simplified the code and parallelization of it

 all, as I can't see a way to Set names for a Task.  I suppose 

[jira] [Commented] (SOLR-2772) DateField parsing/formatting issues of years 0001-1000

2011-11-21 Thread Hoss Man (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-2772?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13154551#comment-13154551
 ] 

Hoss Man commented on SOLR-2772:


I guess I don't understand what exactly you mean by "original". 

The *original* circa Solr 1.1 hand rolled code for dealing with dates had all 
sorts of bugs, fixed in various ways over the years -- this particular patch 
fixed two bugs in DateField (related to years 0001-1000 and millis on years 
before the epoch) and corrected the fact that TextResponseWriter was 
erroneously not using the well tested formatting code in DateField -- so it was 
still susceptible to some of those old bugs.  (Do I have the full list of those 
bugs handy? No, but if you're interested you can search Jira for them.)

If you want to re-write this to be hand rolled, go ahead, but please do so in 
the DateField code (and don't just revert the changes to TextResponseWriter), 
since tests directly against DateField are where most of the unit testing of 
parsing/formatting esoteric dates lives -- if you keep all those tests happy 
(w/o writing yet another new set of parse/format methods) then I'll happily 
welcome whatever high-performance hand rolled solution you have.

 DateField parsing/formatting issues of years 0001-1000
 --

 Key: SOLR-2772
 URL: https://issues.apache.org/jira/browse/SOLR-2772
 Project: Solr
  Issue Type: Sub-task
Reporter: Hoss Man
Assignee: Hoss Man
 Fix For: 3.5, 4.0


 see parent task for background

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



RE: [JENKINS-MAVEN] Lucene-Solr-Maven-trunk #303: POMs out of sync

2011-11-21 Thread Steven A Rowe
I committed a fix for all modules' POMs in r1204732.  All tests pass for me 
locally (except for benchmark module's LineDocSourceTest: LUCENE-3583).

 -Original Message-
 From: Steven A Rowe [mailto:sar...@syr.edu]
 Sent: Monday, November 21, 2011 3:47 PM
 To: dev@lucene.apache.org
 Subject: RE: [JENKINS-MAVEN] Lucene-Solr-Maven-trunk #303: POMs out of
 sync
 
 I tried this, and the test fails under Maven; codec's class is
 o.a.l.index.codecs.lucene3x.Lucene3xCodec instead of
 o.a.l.index.codecs.preflexrw.PreFlexRWCodec.
 
 So, it's a classpath ordering issue.  I'll see if I can do some re-
 ordering.
 
 Steve
 
  -Original Message-
  From: Steven A Rowe [mailto:sar...@syr.edu]
  Sent: Monday, November 21, 2011 2:40 PM
  To: dev@lucene.apache.org
  Subject: RE: [JENKINS-MAVEN] Lucene-Solr-Maven-trunk #303: POMs out of
  sync
 
  Thanks, I'll give it a try. - Steve
 
   -Original Message-
   From: Robert Muir [mailto:rcm...@gmail.com]
   Sent: Monday, November 21, 2011 2:38 PM
   To: dev@lucene.apache.org
   Subject: Re: [JENKINS-MAVEN] Lucene-Solr-Maven-trunk #303: POMs out of
   sync
  
    If you want to confirm that's the bug, just add a test to this contrib
    module that looks like TestImpersonation:
   
    public class TestImpersonation extends LuceneTestCase {
      public void test() throws Exception {
        Codec codec = Codec.forName("Lucene3x");
        assertTrue(codec instanceof PreFlexRWCodec);
      }
    }
  
   On Mon, Nov 21, 2011 at 2:35 PM, Uwe Schindler u...@thetaphi.de
 wrote:
Exactly, test-framework must come before core.
   
The reason here is that 3x impersonation does not work.
   
Uwe
--
Uwe Schindler
H.-H.-Meier-Allee 63, 28213 Bremen
http://www.thetaphi.de
   
   
   
Steven A Rowe sar...@syr.edu schrieb:
   
I wasn’t able to reproduce using Ant.
   
   
   
But I can consistently reproduce the failure with both of these
 Maven
cmdlines under modules/facet/:
   
   
   
    mvn
-Dtests.seed=78e524cead5893e3:4edf9eedd4ed0a36:1912c3654ff9f2e1
-Dtest=TestIndexClose clean test
   
   
   
mvn -Dtests.codec=Lucene3x -Dtest=TestIndexClose clean test
   
   
   
Without specifying -Dtests.codec or -Dtests.seed, “mvn clean test”
   usually
succeeds; it looks like this is specific to the Lucene3x codec
 under
   Maven.
   
   
   
I ran all Lucene/Solr tests under Maven with “-
 Dtests.codec=Lucene3x”
(using mvn’s --fail-at-end option), and the facet module’s
   TextIndexClose is
the only failing test.
   
   
   
Maybe there is a Maven-specific test classpath ordering problem
 here?
   
   
   
Steve
   
   
   
From: Shai Erera [mailto:ser...@gmail.com]
Sent: Monday, November 21, 2011 11:13 AM
To: dev@lucene.apache.org
Subject: Re: [JENKINS-MAVEN] Lucene-Solr-Maven-trunk #303: POMs out
  of
sync
   
   
   
Perhaps it's a build error (i.e. not cleaning after the test)? This
  is
   the
exception that appears in the log:
   
   
   
The following exceptions were thrown by threads:
   
*** Thread: Lucene Merge Thread #0 ***
   
org.apache.lucene.index.MergePolicy$MergeException:
java.lang.IllegalStateException: sub-readers have different
BytesRef.Comparators; cannot merge
   
  at
   
  
 
 org.apache.lucene.index.ConcurrentMergeScheduler.handleMergeException(Conc
   urrentMergeScheduler.java:513)
   
  at
   
  
 
 org.apache.lucene.index.ConcurrentMergeScheduler$MergeThread.run(Concurren
   tMergeScheduler.java:478)
   
Caused by: java.lang.IllegalStateException: sub-readers have
  different
BytesRef.Comparators; cannot merge
   
  at
org.apache.lucene.index.MultiTerms.init(MultiTerms.java:55)
   
  at
org.apache.lucene.index.MultiFields.terms(MultiFields.java:232)
   
  at
   
  org.apache.lucene.index.MultiFieldsEnum.terms(MultiFieldsEnum.java:114)
   
  at
   
  
 
 org.apache.lucene.index.codecs.FieldsConsumer.merge(FieldsConsumer.java:51
   )
   
  at
   
  
 org.apache.lucene.index.SegmentMerger.mergeTerms(SegmentMerger.java:293)
   
  at
org.apache.lucene.index.SegmentMerger.merge(SegmentMerger.java:122)
   
  at
   
  org.apache.lucene.index.IndexWriter.mergeMiddle(IndexWriter.java:3581)
   
  at
org.apache.lucene.index.IndexWriter.merge(IndexWriter.java:3215)
   
  at
   
  
 
 org.apache.lucene.index.ConcurrentMergeScheduler.doMerge(ConcurrentMergeSc
   heduler.java:383)
   
  at
   
  
 
 org.apache.lucene.index.ConcurrentMergeScheduler$MergeThread.run(Concurren
   tMergeScheduler.java:451)
   
*** Thread: Lucene Merge Thread #1 ***
   
org.apache.lucene.index.MergePolicy$MergeException:
java.lang.IllegalStateException: sub-readers have different
BytesRef.Comparators; cannot merge
   

[jira] [Commented] (SOLR-2910) PingRequest fails if there is no defaultSearchField in the schema.xml

2011-11-21 Thread Hoss Man (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-2910?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13154691#comment-13154691
 ] 

Hoss Man commented on SOLR-2910:


This sounds like a user configuration error.

The ping request handler just executes whatever query it's configured to 
execute (in solrconfig.xml) ... whatever query you have configured for the 
/admin/ping handler needs to be something that actually works with your 
schema.xml (i.e.: if you don't have a default search field, then your ping query 
needs to have an explicit field, or use *:*, etc...)

 PingRequest fails if there is no defaultSearchField in the schema.xml
 -

 Key: SOLR-2910
 URL: https://issues.apache.org/jira/browse/SOLR-2910
 Project: Solr
  Issue Type: Bug
Affects Versions: 4.0
Reporter: Yury Kats

 PingRequest fails if there is no defaultSearchField in the schema.xml
 CommonsHttpSolrServer#ping produces:
 org.apache.solr.common.SolrException: Ping query caused exception: no field 
 name specified in query and no defaultSearchField defined in schema.xml
 at 
 org.apache.solr.handler.PingRequestHandler.handleRequestBody(PingRequestHandler.java:77)
 at 
 org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:129)
 at org.apache.solr.core.SolrCore.execute(SolrCore.java:1407)
 at 
 org.apache.solr.servlet.SolrDispatchFilter.execute(SolrDispatchFilter.java:353)
 ...
 request: http://127.0.0.1:8983/solr/a1/admin/ping?wt=javabinversion=2'

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: [Lucene.Net] Roadmap

2011-11-21 Thread Troy Howard
So, if we're getting back to the line by line port discussion... I
think either side of this discussion is too extreme. For the case in
point Chris just mentioned (which I'm not really sure what part was so
difficult, as I ported that library in about 30 minutes from
scratch)... anything is a pain if it sticks out in the middle of doing
something completely different.

The only reason we are able to do this line by line is due to the
general similarity between Java and C#'s language syntax. If we were
porting Lucene to a completely different language, that had a totally
different syntax, the process would go like this:

- Look at the original code, understand its intent
- Create similar code in the new language that expresses the same intent

When applying changes:

- Look at the original code diffs, understanding the intent of the change
- Look at the ported code, and apply the changed logic's meaning in
that language

So, it's just a different thought process. In my opinion, it's a better
process because it forces the developer to actually think about the
code instead of blindly converting syntax (possibly slightly
incorrectly and introducing regressions). While there is a large
volume of unit tests in Lucene, they are unfortunately not really the
right tests and make porting much more difficult, because it's hard to
verify that your ported code behaves the same because you can't just
rely on the unit tests to verify your port. Therefore, it's safer to
follow a process that requires the developer to delve deeply into the
meaning of the code. Following a line-by-line process is convenient,
but doesn't focus on meaning, which I think is more important.

Thanks,
Troy

On Mon, Nov 21, 2011 at 2:23 PM, Christopher Currens
currens.ch...@gmail.com wrote:
 Digy,

 No worries.  I wasn't taking them personally.  You've been doing this for a
 lot longer than I have, but I didn't understand your pain until I had to go
 through it personally. :P

 Have you looked at Contrib in a while?  There are a lot of projects that are
 in Java's Contrib that are not in Lucene.Net.  Is this because there are
 some that can't easily (if at all) be ported over to .NET, or just because
 they've been neglected?  I'm trying to get a handle on what's important to
 port and what isn't.  Figured someone with experience could help me with a
 starting point for deciding where to start with everything that's missing.


 Thanks,
 Christopher

 On Mon, Nov 21, 2011 at 2:13 PM, Digy digyd...@gmail.com wrote:


 Chris,

 Sorry, if you took my comments about pain of porting personally. That
 wasn't my intension.

 +1 for all your changes/divergences. I made/could have made them too.

 DIGY

 -Original Message-
 From: Christopher Currens [mailto:currens.ch...@gmail.com]
 Sent: Monday, November 21, 2011 11:45 PM
 To: lucene-net-...@lucene.apache.org
 Subject: Re: [Lucene.Net] Roadmap

 Digy,

 I used 2.9.4 trunk as the base for the 3.0.3 branch, but I looked to the
 code in 2.9.4g as a reference for many things, particularly the Support
 classes.  We hit many of the same issues I'm sure, I moved some of the
 anonymous classes into a base class where you could inject functions,
 though not all could be replaced, nor did I replace all that could have
 been.  Some of our code is different, I went for the option for
 WeakDictionary to be completely generic, as in wrapping a generic
 dictionary with WeakKey<T> instead of wrapping the already existing
 WeakHashTable in support.  In hindsight, it may have just been easier to
 convert the WeakHashTable to generic, but alas, I'm only realizing that
 now.  There is a problem with my WeakDictionary, specifically the function
 that determines when to clean/compact the dictionary and remove the dead
 keys.  I need a better heuristic of deciding when to run the clean.  That's
 a performance issue though.

 Regarding the pain of porting, I am a changed man.  It's nice, in a sad
 way, to know that I'm not the only one who experienced those difficulties.
  I used to be in the camp that porting code that differed from java
 wouldn't be difficult at all.  However, now I stand corrected!  It threw me
 a curve-ball, for sure.  I DO think a line-by-line port can definitely
 include the things talked about below, ie the changes to Dispose and the
 changes to IEnumerable<T>.  Those changes, I think, can be made without a
 heavy impact on the porting process.

 There was one fairly large change I opted to use that differed quite a bit
 from Java, however, and that was the use of the TPL in
 ParallelMultiSearcher.  It was far easier to port this way, and I don't
 think it affects the porting process too much.  Java uses a helper class
 defined at the bottom of the source file that handles it; I'm simply using
 a built-in one instead.  I just need to be careful about it; it would be
 really easy to get carried away with it.
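
A minimal sketch of that idea (not the actual ParallelMultiSearcher; the searcher interface and the naive merge are assumptions): each sub-search runs as a Task, and the results are combined once all tasks finish.

    using System.Linq;
    using System.Threading.Tasks;

    // Hypothetical stand-in for the real searcher type.
    public interface ISubSearcher
    {
        int[] Search(string query, int topN);
    }

    public static class ParallelSearchSketch
    {
        // One Task per sub-searcher; the TPL does the scheduling that the
        // Java original handles with its own helper class and thread pool.
        public static int[] SearchAll(ISubSearcher[] searchers, string query, int topN)
        {
            Task<int[]>[] tasks = searchers
                .Select(s => Task.Factory.StartNew(() => s.Search(query, topN)))
                .ToArray();

            Task.WaitAll(tasks);  // also surfaces any exceptions from the sub-searches

            // Naive merge for the sketch: concatenate and keep the first topN ids.
            return tasks.SelectMany(t => t.Result).Take(topN).ToArray();
        }
    }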


 Thanks,
 Christopher

 On Mon, Nov 21, 2011 at 1:20 PM, Digy digyd...@gmail.com wrote:

  Hi 

RE: [Lucene.Net] Roadmap

2011-11-21 Thread Digy
Troy,
I am not against it if you can continue to understand and port so easily.
No one here -I think- wants Java-flavored code.

DIGY

-Original Message-
From: Troy Howard [mailto:thowar...@gmail.com] 
Sent: Tuesday, November 22, 2011 12:42 AM
To: lucene-net-...@lucene.apache.org
Subject: Re: [Lucene.Net] Roadmap

So, if we're getting back to the line by line port discussion... I
think either side of this discussion is too extreme. For the case in
point Chris just mentioned (which I'm not really sure what part was so
difficult, as I ported that library in about 30 minutes from
scratch)... anything is a pain if it sticks out in the middle of doing
something completely different.

The only reason we are able to do this line by line is due to the
general similarity between Java and C#'s language syntax. If we were
porting Lucene to a completely different language, that had a totally
different syntax, the process would go like this:

 - Look at the original code, understand its intent
- Create similar code in the new language that expresses the same intent

When applying changes:

- Look at the original code diffs, understanding the intent of the change
- Look at the ported code, and apply the changed logic's meaning in
that language
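
To make the contrast concrete, here is a small, made-up example (the types and the Java snippet in the comment are hypothetical, not Lucene code): the intent "sort terms by descending document frequency" ports to a C# lambda rather than a transliterated anonymous class.

    using System.Collections.Generic;

    public class TermStats
    {
        public string Term { get; set; }
        public int DocFreq { get; set; }
    }

    public static class IntentPortingExample
    {
        // Hypothetical Java original, using an anonymous Comparator:
        //
        //   Collections.sort(terms, new Comparator<TermStats>() {
        //       public int compare(TermStats a, TermStats b) {
        //           return b.docFreq - a.docFreq;
        //       }
        //   });
        //
        // A line-by-line port would reproduce the anonymous class; porting
        // the intent instead uses the language's own idiom, a lambda:
        public static void SortByDocFreqDescending(List<TermStats> terms)
        {
            terms.Sort((a, b) => b.DocFreq.CompareTo(a.DocFreq));
        }
    }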

 So, it's just a different thought process. In my opinion, it's a better
 process because it forces the developer to actually think about the
 code instead of blindly converting syntax (possibly slightly
 incorrectly and introducing regressions). While Lucene has a large
 volume of unit tests, they are unfortunately not really the right
 tests for this purpose, which makes porting harder: you can't just
 rely on them to verify that your ported code behaves the same.
 Therefore, it's safer to follow a process that requires the developer
 to delve deeply into the meaning of the code. Following a
 line-by-line process is convenient, but it doesn't focus on meaning,
 which I think is more important.

Thanks,
Troy

On Mon, Nov 21, 2011 at 2:23 PM, Christopher Currens
currens.ch...@gmail.com wrote:
 Digy,

 No worries.  I wasn't taking them personally.  You've been doing this for a
 lot longer than I have, but I didn't understand your pain until I had to go
 through it personally. :P

 Have you looked at Contrib in a while?  There's a lot of projects that are
 in Java's Contrib that are not in Lucene.Net.  Is this because there are
 some that can't easily (if at all) be ported over to .NET or just because
 they've been neglected?  I'm trying to get a handle on what's important to
 port and what isn't.  Figured someone with experience could help me decide
 where to start with everything that's missing.


 Thanks,
 Christopher

 On Mon, Nov 21, 2011 at 2:13 PM, Digy digyd...@gmail.com wrote:


 Chris,

 Sorry if you took my comments about the pain of porting personally. That
 wasn't my intention.

 +1 for all your changes/divergences. I made/could have made them too.

 DIGY

 -Original Message-
 From: Christopher Currens [mailto:currens.ch...@gmail.com]
 Sent: Monday, November 21, 2011 11:45 PM
 To: lucene-net-...@lucene.apache.org
 Subject: Re: [Lucene.Net] Roadmap

 Digy,

 I used 2.9.4 trunk as the base for the 3.0.3 branch, but I looked to the
 code in 2.9.4g as a reference for many things, particularly the Support
 classes.  We hit many of the same issues I'm sure, I moved some of the
 anonymous classes into a base class where you could inject functions,
 though not all could be replaced, nor did I replace all that could have
 been.  Some of our code is different: I went for the option of making
 WeakDictionary completely generic, as in wrapping a generic
 dictionary with WeakKey<T> instead of wrapping the already existing
 WeakHashTable in support.  In hindsight, it may have just been easier to
 convert the WeakHashTable to generic, but alas, I'm only realizing that
 now.  There is a problem with my WeakDictionary, specifically the function
 that determines when to clean/compact the dictionary and remove the dead
 keys.  I need a better heuristic for deciding when to run the clean.  That's
 a performance issue though.

 Regarding the pain of porting, I am a changed man.  It's nice, in a sad
 way, to know that I'm not the only one who experienced those difficulties.
  I used to be in the camp that porting code that differed from Java
 wouldn't be difficult at all.  However, now I stand corrected!  It threw me
 a curve-ball, for sure.  I DO think a line-by-line port can definitely
 include the things talked about below, i.e. the changes to Dispose and the
 changes to IEnumerable<T>.  Those changes, I think, can be made without a
 heavy impact on the porting process.

 There was one fairly large change I opted to use that differed quite a bit
 from Java, however, and that was the use of the TPL in
 ParallelMultiSearcher.  It was far easier to port this way, and I don't
 think it affects the porting process too 

[jira] [Commented] (SOLR-1972) Need additional query stats in admin interface - median, 95th and 99th percentile

2011-11-21 Thread Shawn Heisey (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-1972?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13154710#comment-13154710
 ] 

Shawn Heisey commented on SOLR-1972:


Based on filenames, I couldn't find an existing unit test that checks handler 
statistics, so I couldn't figure out how to make a test for this patch.

I am very interested in getting this included in branch_3x.  If you have some 
example code I can look at to create unit tests, I can look into making one.

 Need additional query stats in admin interface - median, 95th and 99th 
 percentile
 -

 Key: SOLR-1972
 URL: https://issues.apache.org/jira/browse/SOLR-1972
 Project: Solr
  Issue Type: Improvement
Affects Versions: 1.4
Reporter: Shawn Heisey
Priority: Minor
 Attachments: SOLR-1972.patch, SOLR-1972.patch, SOLR-1972.patch, 
 SOLR-1972.patch, elyograg-1972-3.2.patch, elyograg-1972-3.2.patch, 
 elyograg-1972-trunk.patch, elyograg-1972-trunk.patch


 I would like to see more detailed query statistics from the admin GUI.  This 
 is what you can get now:
 requests : 809
 errors : 0
 timeouts : 0
 totalTime : 70053
 avgTimePerRequest : 86.59209
 avgRequestsPerSecond : 0.8148785 
 I'd like to see more data on the time per request - median, 95th percentile, 
 99th percentile, and any other statistical function that makes sense to 
 include.  In my environment, the first bunch of queries after startup tend to 
 take several seconds each.  I find that the average value tends to be useless 
 until it has several thousand queries under its belt and the caches are 
 thoroughly warmed.  The statistical functions I have mentioned would quickly 
 eliminate the influence of those initial slow queries.
 The system will have to store individual data about each query.  I don't know 
 if this is something Solr does already.  It would be nice to have a 
 configurable count of how many of the most recent data points are kept, to 
 control the amount of memory the feature uses.  The default value could be 
 something like 1024 or 4096.
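
For illustration only (this is not Solr code; the class, its members, and the nearest-rank percentile choice are assumptions), the bookkeeping described above could amount to a fixed-size ring buffer of recent request times plus a percentile function over a sorted copy:

    using System;
    using System.Linq;

    // Illustrative sketch: keeps the most recent N request times and reports
    // median / 95th / 99th percentiles over them.
    public class RequestTimeStats
    {
        private readonly double[] _recent;   // ring buffer of elapsed times (ms)
        private int _next;
        private int _count;

        public RequestTimeStats(int capacity = 1024)
        {
            _recent = new double[capacity];
        }

        public void Record(double elapsedMs)
        {
            _recent[_next] = elapsedMs;
            _next = (_next + 1) % _recent.Length;
            if (_count < _recent.Length) _count++;
        }

        // Nearest-rank percentile over the stored samples (p in 0..100).
        public double Percentile(double p)
        {
            if (_count == 0) return 0.0;
            var sorted = _recent.Take(_count).OrderBy(x => x).ToArray();
            int rank = (int)Math.Ceiling(p / 100.0 * _count);
            return sorted[Math.Max(0, rank - 1)];
        }

        public double Median { get { return Percentile(50); } }
        public double P95 { get { return Percentile(95); } }
        public double P99 { get { return Percentile(99); } }
    }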

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: [Lucene.Net] Roadmap

2011-11-21 Thread Christopher Currens
To clarify, it wasn't so much *difficult* as it was *painful*.  Above,
I was implying that it was more difficult than the rest of the code, which
by comparison was easier.  It wasn't painless to try and map where code
changes were from the Java classes into the .NET version.  I prefer that
style more for its readability and the niceties of working with a .NET
style of Lucene; however, as I said before, it significantly slowed down the
porting process.  I hope it didn't come across that I thought that it was
bad code, because it's probably the most readable code we have in the
Contrib at the moment.

I want to make it clear that my intention right now is to get Lucene.Net up
to date with Java.  When I read the Java code, I understand its intent, and
I make sure the ported code represents it.  That takes enough time as it
is; having to also figure out where the code went in Lucene.Net, since
it wasn't a 1-1 map, was a MINOR annoyance, especially when you compare it
to the issues I had dealing with the differences between the two languages,
generics especially.  That being said, I don't have a problem with code
being converted in a .NET-idiomatic way; in fact, I welcome it, if it still
allows the changes to be ported with minimal effort.  I feel at this point
in the project, there are some limitations to how far I'd like it to
diverge.

Anyway, my opinion, which may not be in agreement with the group as a
whole, is that it would be better to bring the codebase up to date, or at
least more up to date with Java's, and then maintain a version with a
complete, .NET-centric API.  I feel this would be easier, as porting
Java's Lucene SVN commits by the week would be a relatively small workload.

On Mon, Nov 21, 2011 at 2:41 PM, Troy Howard thowar...@gmail.com wrote:

 So, if we're getting back to the line by line port discussion... I
 think either side of this discussion is too extreme. For the case in
 point Chris just mentioned (which I'm not really sure what part was so
 difficult, as I ported that library in about 30 minutes from
 scratch)... anything is a pain if it sticks out in the middle of doing
 something completely different.

 The only reason we are able to do this line by line is due to the
 general similarity between Java and C#'s language syntax. If we were
 porting Lucene to a completely different language, that had a totally
 different syntax, the process would go like this:

 - Look at the original code, understand its intent
 - Create similar code in the new language that expresses the same intent

 When applying changes:

 - Look at the original code diffs, understanding the intent of the change
 - Look at the ported code, and apply the changed logic's meaning in
 that language

 So, it's just a different thought process. In my opinion, it's a better
 process because it forces the developer to actually think about the
 code instead of blindly converting syntax (possibly slightly
 incorrectly and introducing regressions). While Lucene has a large
 volume of unit tests, they are unfortunately not really the right
 tests for this purpose, which makes porting harder: you can't just
 rely on them to verify that your ported code behaves the same.
 Therefore, it's safer to follow a process that requires the developer
 to delve deeply into the meaning of the code. Following a
 line-by-line process is convenient, but it doesn't focus on meaning,
 which I think is more important.

 Thanks,
 Troy

 On Mon, Nov 21, 2011 at 2:23 PM, Christopher Currens
 currens.ch...@gmail.com wrote:
  Digy,
 
  No worries.  I wasn't taking them personally.  You've been doing this
 for a
  lot longer than I have, but I didn't understand your pain until I had to
 go
  through it personally. :P
 
  Have you looked at Contrib in a while?  There's a lot of projects that
 are
  in Java's Contrib that are not in Lucene.Net.  Is this because there are
  some that can't easily (if at all) be ported over to .NET or just because
  they've been neglected?  I'm trying to get a handle on what's important
 to
  port and what isn't.  Figured someone with experience could help me decide
  where to start with everything that's missing.
 
 
  Thanks,
  Christopher
 
  On Mon, Nov 21, 2011 at 2:13 PM, Digy digyd...@gmail.com wrote:
 
 
  Chris,
 
  Sorry if you took my comments about the pain of porting personally. That
  wasn't my intention.
 
  +1 for all your changes/divergences. I made/could have made them too.
 
  DIGY
 
  -Original Message-
  From: Christopher Currens [mailto:currens.ch...@gmail.com]
  Sent: Monday, November 21, 2011 11:45 PM
  To: lucene-net-...@lucene.apache.org
  Subject: Re: [Lucene.Net] Roadmap
 
  Digy,
 
  I used 2.9.4 trunk as the base for the 3.0.3 branch, but I looked to the
  code in 2.9.4g as a reference for many things, particularly the Support
  classes.  We hit many of the same issues I'm sure, I moved some 

[jira] [Updated] (SOLR-2438) Case Insensitive Search for Wildcard Queries

2011-11-21 Thread Erick Erickson (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-2438?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erick Erickson updated SOLR-2438:
-

Attachment: SOLR-2438.patch

OK, this patch does a better job with the matchVersion as per Muir. If nobody 
objects, I'll commit it this week, probably not before Wednesday though. Then I 
should be able to do the backport to 3.6 shortly thereafter.

I still have to run all the tests yet again, but I don't really expect much of 
a problem.

Should SOLR 218, 219 and 757 all be closed as part of 2438?

 Case Insensitive Search for Wildcard Queries
 

 Key: SOLR-2438
 URL: https://issues.apache.org/jira/browse/SOLR-2438
 Project: Solr
  Issue Type: Improvement
Reporter: Peter Sturge
Assignee: Erick Erickson
 Attachments: SOLR-2438.patch, SOLR-2438.patch, SOLR-2438.patch, 
 SOLR-2438.patch, SOLR-2438.patch, SOLR-2438.patch


 This patch adds support to allow case-insensitive queries on wildcard 
 searches for configured TextField field types.
 This patch extends the excellent work done by Yonik and Michael in SOLR-219.
 The approach here is different enough (imho) to warrant a separate JIRA issue.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS-MAVEN] Lucene-Solr-Maven-trunk #304: POMs out of sync

2011-11-21 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Maven-trunk/304/

All tests passed

Build Log (for compile errors):
[...truncated 25151 lines...]



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



What would be needed to get SOLR-1351 implemented

2011-11-21 Thread Greg Bowyer
Hi all, I have need of the functionality proposed in SOLR-1351, and I 
would like the chance to dip my toes in the water to implement this.


What would be needed to resurrect this patch?

-- Greg

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: test failure under Windows: trunk benchmark module: LineDocSourceTest: could not remove temp dir

2011-11-21 Thread Shai Erera
I posted a patch in LUCENE-3583 - simple fix. I will commit it shortly.

Shai

On Mon, Nov 21, 2011 at 9:40 PM, Robert Muir rcm...@gmail.com wrote:

 Yes, but that's not my fault: the test has been broken for a long
 time... I just exposed its brokenness.

 I'd rather it fail than see confusing corrupt index errors because it
 tries to open a test index from a previous test run.

 On Mon, Nov 21, 2011 at 2:38 PM, Uwe Schindler u...@thetaphi.de wrote:
  Since you committed the benchmark working dir patch to 3.x, this test also
  fails here on the src jar of 3.5.
  --
  Uwe Schindler
  H.-H.-Meier-Allee 63, 28213 Bremen
  http://www.thetaphi.de
 
 
 
  Robert Muir rcm...@gmail.com schrieb:
 
  Hi, there is an issue already open:
  https://issues.apache.org/jira/browse/LUCENE-3583
 
  I made a first stab at a patch, the problem is it doesn't fix it all
  the time, only some of the time.
 
  On Mon, Nov 21, 2011 at 2:32 PM, Steven A Rowe sar...@syr.edu wrote:
   Under Windows 7, this test fails 100% of the time for me - no other
   Lucene/Solr test does this:
  
 cd modules/benchmark
 ant clean test -Dtestcase=LineDocSourceTest
  
   Test output follows:
  
   junit-sequential:
  [junit] Testsuite:
   org.apache.lucene.benchmark.byTask.feeds.LineDocSourceTest
  [junit] Tests run: 8, Failures: 0, Errors: 1, Time elapsed: 1.472
 sec
  [junit]
  [junit] - Standard Error
  -
  [junit] java.io.IOException: could not delete
  
 C:\svn\lucene\dev\trunk\modules\benchmark\build\test\1\benchmark3213632136tmp\one-line
  [junit] at
   org.apache.lucene.util._TestUtil.rmDir(_TestUtil.java:85)
  [junit] at
  
 org.apache.lucene.util.LuceneTestCase.clearTempDirectoriesAfterClass(LuceneTestCase.java:447)
  [junit] at
  
 org.apache.lucene.util.LuceneTestCase.afterClassLuceneTestCaseJ4(LuceneTestCase.java:377)
  [junit] at sun.reflect.NativeMethodAccessorImpl.invoke0(Native
   Method)
  [junit] at
  
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
  [junit] at
  
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
  [junit] at java.lang.reflect.Method.invoke(Method.java:597)
  [junit] at
 
 
 org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:44)
  [junit] at
  
 org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15)
  [junit] at
  
 org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:41)
  [junit] at
  
 org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:37)
  [junit] at
   org.junit.runners.ParentRunner.run(ParentRunner.java:236)
  [junit] at
   junit.framework.JUnit4TestAdapter.run(JUnit4TestAdapter.java:39)
  [junit] at
  
 org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.run(JUnitTestRunner.java:420)
  [junit] at
  
 org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.launch(JUnitTestRunner.java:911)
  [junit] at
  
 org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.main(JUnitTestRunner.java:768)
  
   [junit] path
 
 C:\svn\lucene\dev\trunk\modules\benchmark\build\test\1\benchmark3213632136tmp
  allocated from
  [junit]
   org.apache.lucene.util._TestUtil.getTempDir(_TestUtil.java:65)
  [junit]
  
 org.apache.lucene.benchmark.BenchmarkTestCase.beforeClassBenchmarkTestCase(BenchmarkTestCase.java:39)
  [junit] NOTE: reproduce with (hopefully): ant test
   -Dtestcase=LineDocSourceTest
   -Dtests.seed=-10f2bc7ed2006d82:0:-285ae758d1d45d49
   -Dargs=-Dfile.encoding=Cp1252
  [junit] NOTE: test params are: codec=Lucene40:
   {docid=PostingsFormat(name=Memory),
 body=PostingsFormat(name=MockRandom),
   docdatenum=PostingsFormat(name=Lucene40WithOrds),
   docdate=PostingsFormat(name=MockRandom),
   doctitle=MockFixedIntBlock(blockSize=897),
   doctimesecnum=PostingsFormat(name=NestedPulsing),
   mySpecialField=PostingsFormat(name=Memory),
 docname=Pulsing40(freqCutoff=18
   minBlockSize=98 maxBlockSize=231)},
  sim=RandomSimilarityProvider(queryNorm=false,coord=false): {},
 locale=da,
  timezone=Atlantic/Reykjavik
  [junit] NOTE: all tests run in this JVM:
  [junit] [LineDocSourceTest]
  [junit] NOTE: Windows 7 6.1 amd64/Sun Microsystems Inc. 1.6.0_21
   (64-bit)/cpus=4,threads=1,free=124797192,total=128057344
  [junit] -  ---
  [junit] Testcase:
   org.apache.lucene.benchmark.byTask.feeds.LineDocSourceTest:
 Caused an
   ERROR
  [junit] java.lang.AssertionError: could not remove temp dir:
  
 C:\svn\lucene\dev\trunk\modules\benchmark\build\test\1\benchmark3213632136tmp
  [junit] java.lang.RuntimeException: java.lang.AssertionError: could
   not remove temp dir:
  
 C:\svn\lucene\dev\trunk\modules\benchmark\build\test\1\benchmark3213632136tmp
  [junit] at
  
 

[jira] [Updated] (LUCENE-3583) benchmark tests always fail on windows because directory cannot be removed

2011-11-21 Thread Shai Erera (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-3583?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shai Erera updated LUCENE-3583:
---

Attachment: LUCENE-3583.patch

Patch fixes the problem in LineDocSourceTest - add tasks.close() (otherwise LDS 
keeps a reader open on the file).
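
The underlying Windows behavior, shown here as a small .NET sketch rather than the actual Java test change (the path is illustrative): a file with an open reader cannot be deleted until the reader is closed, which mirrors why the leftover reader kept the temp dir from being removed.

    using System;
    using System.IO;

    // Illustration in .NET of the same Windows behavior (not the Java fix itself).
    class OpenHandleDemo
    {
        static void Main()
        {
            string path = Path.Combine(Path.GetTempPath(), "one-line.txt");  // illustrative path
            File.WriteAllText(path, "doc1\n");

            var reader = new StreamReader(path);      // handle left open
            try
            {
                File.Delete(path);                    // fails on Windows: file is in use
            }
            catch (IOException e)
            {
                Console.WriteLine("delete failed while open: " + e.Message);
            }

            reader.Dispose();                         // analogous to adding tasks.close()
            File.Delete(path);                        // now succeeds
            Console.WriteLine("deleted after Dispose()");
        }
    }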

I intend to commit shortly, after verifying all tests pass and no other such 
changes are required.

 benchmark tests always fail on windows because directory cannot be removed
 --

 Key: LUCENE-3583
 URL: https://issues.apache.org/jira/browse/LUCENE-3583
 Project: Lucene - Java
  Issue Type: Bug
Affects Versions: 3.5, 4.0
 Environment: Only fails for Lucene trunk
Reporter: Uwe Schindler
 Attachments: LUCENE-3583.patch, LUCENE-3583.patch, 
 benchmark-test-output.txt, io-event-log.txt


 This seems to be a bug recently introduced. I have no idea what's wrong. 
 Attached is a log file; it reproduces every time.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (LUCENE-3583) benchmark tests always fail on windows because directory cannot be removed

2011-11-21 Thread Shai Erera (Resolved) (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-3583?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shai Erera resolved LUCENE-3583.


   Resolution: Fixed
Fix Version/s: 4.0, 3.6
     Assignee: Shai Erera
Lucene Fields: New, Patch Available  (was: New)

Committed rev 1204826 (3x).
Ported changes to trunk in rev 1204828.

 benchmark tests always fail on windows because directory cannot be removed
 --

 Key: LUCENE-3583
 URL: https://issues.apache.org/jira/browse/LUCENE-3583
 Project: Lucene - Java
  Issue Type: Bug
Affects Versions: 3.5, 4.0
 Environment: Only fails for Lucene trunk
Reporter: Uwe Schindler
Assignee: Shai Erera
 Fix For: 3.6, 4.0

 Attachments: LUCENE-3583.patch, LUCENE-3583.patch, 
 benchmark-test-output.txt, io-event-log.txt


 This seems to be a bug recently introduced. I have no idea what's wrong. 
 Attached is a log file; it reproduces every time.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: [VOTE] Release Lucene/Solr 3.5.0, RC1

2011-11-21 Thread Shai Erera
I fixed LUCENE-3583. It was a test issue, so I'm not sure if it's worth a
respin. But if you do, then we should mark the issue as Fix Version: 3.5
(not 3.6) and merge the changes to the 3.5 branch.

Shai

On Mon, Nov 21, 2011 at 10:12 PM, Uwe Schindler u...@thetaphi.de wrote:

 Hi,

 the Lucene-Core package works for PANGAEA as a drop-in replacement, so this
 is all fine.

 I found a problem (an inconsistency in the new openIfChanged API) when
 porting the code to not use deprecated stuff:
 https://issues.apache.org/jira/browse/LUCENE-3464 (reopened). The
 deprecated stuff successfully used the sophisticated backwards compatibility, cool, Mike!

 Another problem with the release candidate: when running contrib tests on
 Windows, the benchmark tests fail on every run, since the benchmark classes
 don't correctly close all files/readers/whatever after Robert's commit
 1203997. I don't like this for an official release,
 sorry. The test should pass (somehow most of the time - lol@solr), but
 consistently failing is bad (see
 https://issues.apache.org/jira/browse/LUCENE-3583).

 I checked CheckIndex on pre-3.5 indexes, all worked fine for the huge
 PANGAEA index, no hotspot bugs (1.6.0_24), will later try 1.6.0_29.
 Optimizing/Upgrading indexes worked, too. As said before, backwards
 compatibility is fine, no recompilation was needed here.

 I will try to run smoke tests later, but I don't like the above 2 issues,
 so my +/-0 (as there is a workaround for the first issue) :(

 Uwe

 -
 Uwe Schindler
 H.-H.-Meier-Allee 63, D-28213 Bremen
 http://www.thetaphi.de
 eMail: u...@thetaphi.de


  -Original Message-
  From: Simon Willnauer [mailto:simon.willna...@googlemail.com]
  Sent: Monday, November 21, 2011 1:47 PM
  To: dev@lucene.apache.org
  Subject: [VOTE] Release Lucene/Solr 3.5.0, RC1
 
  Please vote to release the RC1 artifacts at:
 
  https://people.apache.org/~simonw/staging_area/lucene-solr-3.5.0-RC1-
  rev1204425/
 
  as Lucene 3.5.0 and Solr 3.5.0.
 
  Simon Willnauer
 
  -
  To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional
  commands, e-mail: dev-h...@lucene.apache.org


 -
 To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
 For additional commands, e-mail: dev-h...@lucene.apache.org




[jira] [Commented] (LUCENE-3583) benchmark tests always fail on windows because directory cannot be removed

2011-11-21 Thread Uwe Schindler (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-3583?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13154930#comment-13154930
 ] 

Uwe Schindler commented on LUCENE-3583:
---

I still think we should also commit Robert's patch.

 benchmark tests always fail on windows because directory cannot be removed
 --

 Key: LUCENE-3583
 URL: https://issues.apache.org/jira/browse/LUCENE-3583
 Project: Lucene - Java
  Issue Type: Bug
Affects Versions: 3.5, 4.0
 Environment: Only fails for Lucene trunk
Reporter: Uwe Schindler
Assignee: Shai Erera
 Fix For: 3.6, 4.0

 Attachments: LUCENE-3583.patch, LUCENE-3583.patch, 
 benchmark-test-output.txt, io-event-log.txt


 This seems to be a bug recently introduced. I have no idea what's wrong. 
 Attached is a log file; it reproduces every time.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org