[jira] [Resolved] (LUCENENET-484) Some possibly major tests intermittently fail

2012-06-22 Thread Christopher Currens (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENENET-484?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Christopher Currens resolved LUCENENET-484.
---

Resolution: Fixed

Thanks for your help with all of these, Luc.  Thanks to your hard work, this 
issue is finally closed, and for the first time in a long time, the whole test 
suite should consistently pass!

 Some possibly major tests intermittently fail 
 --

 Key: LUCENENET-484
 URL: https://issues.apache.org/jira/browse/LUCENENET-484
 Project: Lucene.Net
  Issue Type: Bug
  Components: Lucene.Net Core, Lucene.Net Test
Affects Versions: Lucene.Net 3.0.3
 Environment: All
Reporter: Christopher Currens
 Fix For: Lucene.Net 3.0.3

 Attachments: Lucenenet-484-FieldCacheImpl.patch, 
 Lucenenet-484-NativeFSLockFactory.patch, Lucenenet-484-WeakDictionary.patch, 
 Lucenenet-484-WeakDictionaryTests.patch


 These tests will fail intermittently in Debug or Release mode, in the core 
 test suite:
 # Lucene.Net.Index: 
 #- TestConcurrentMergeScheduler.TestFlushExceptions -- *FIXED*
 # Lucene.Net.Store: 
 #- TestLockFactory.TestStressLocks -- *FIXED*
 # Lucene.Net.Search: 
 #- TestSort.TestParallelMultiSort -- *FIXED*
 # Lucene.Net.Util: 
 #- TestFieldCacheSanityChecker.TestInsanity1 -- *FIXED*
 #- TestFieldCacheSanityChecker.TestInsanity2 -- *FIXED*
 # Lucene.Net.Support 
 #- TestWeakHashTableMultiThreadAccess.Test -- *FIXED*
 TestWeakHashTableMultiThreadAccess should be fine to remove along with the 
 WeakHashTable in the Support namespace, since it's been replaced with 
 WeakDictionary.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (LUCENENET-484) Some possibly major tests intermittently fail

2012-06-22 Thread Christopher Currens (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENENET-484?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Christopher Currens updated LUCENENET-484:
--

Description: 
These tests will fail intermittently in Debug or Release mode, in the core test 
suite:

# Lucene.Net.Index: 
#- TestConcurrentMergeScheduler.TestFlushExceptions -- *FIXED*
# Lucene.Net.Store: 
#- TestLockFactory.TestStressLocks -- *FIXED*
# Lucene.Net.Search: 
#- TestSort.TestParallelMultiSort -- *FIXED*
# Lucene.Net.Util: 
#- TestFieldCacheSanityChecker.TestInsanity1 -- *FIXED*
#- TestFieldCacheSanityChecker.TestInsanity2 -- *FIXED*
# Lucene.Net.Support 
#- TestWeakHashTableMultiThreadAccess.Test -- *FIXED*

TestWeakHashTableMultiThreadAccess should be fine to remove along with the 
WeakHashTable in the Support namespace, since it's been replaced with 
WeakDictionary.

  was:
These tests will fail intermittently in Debug or Release mode, in the core test 
suite:

# -Lucene.Net.Index:-
#- -TestConcurrentMergeScheduler.TestFlushExceptions-
# Lucene.Net.Store:
#- TestLockFactory.TestStressLocks
# Lucene.Net.Search:
#- TestSort.TestParallelMultiSort
# -Lucene.Net.Util:-
#- -TestFieldCacheSanityChecker.TestInsanity1-
#- -TestFieldCacheSanityChecker.TestInsanity2-
#- -(It's possible all of the insanity tests fail at one point or another)-
# -Lucene.Net.Support-
#- -TestWeakHashTableMultiThreadAccess.Test-

TestWeakHashTableMultiThreadAccess should be fine to remove along with the 
WeakHashTable in the Support namespace, since it's been replaced with 
WeakDictionary.


 Some possibly major tests intermittently fail 
 --

 Key: LUCENENET-484
 URL: https://issues.apache.org/jira/browse/LUCENENET-484
 Project: Lucene.Net
  Issue Type: Bug
  Components: Lucene.Net Core, Lucene.Net Test
Affects Versions: Lucene.Net 3.0.3
 Environment: All
Reporter: Christopher Currens
 Fix For: Lucene.Net 3.0.3

 Attachments: Lucenenet-484-FieldCacheImpl.patch, 
 Lucenenet-484-NativeFSLockFactory.patch, Lucenenet-484-WeakDictionary.patch, 
 Lucenenet-484-WeakDictionaryTests.patch


 These tests will fail intermittently in Debug or Release mode, in the core 
 test suite:
 # Lucene.Net.Index: 
 #- TestConcurrentMergeScheduler.TestFlushExceptions -- *FIXED*
 # Lucene.Net.Store: 
 #- TestLockFactory.TestStressLocks -- *FIXED*
 # Lucene.Net.Search: 
 #- TestSort.TestParallelMultiSort -- *FIXED*
 # Lucene.Net.Util: 
 #- TestFieldCacheSanityChecker.TestInsanity1 -- *FIXED*
 #- TestFieldCacheSanityChecker.TestInsanity2 -- *FIXED*
 # Lucene.Net.Support 
 #- TestWeakHashTableMultiThreadAccess.Test -- *FIXED*
 TestWeakHashTableMultiThreadAccess should be fine to remove along with the 
 WeakHashTable in the Support namespace, since it's been replaced with 
 WeakDictionary.





[jira] [Resolved] (LUCENENET-493) Make lucene.net culture insensitive (like the java version)

2012-06-22 Thread Christopher Currens (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENENET-493?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Christopher Currens resolved LUCENENET-493.
---

Resolution: Fixed

This should be fixed now.  Added a test to LocalizedTestCase (different from 
the patch above) so that it will run some or all tests (if no specific tests 
were selected) under all installed cultures on the machine.  This should be the 
same behavior as Java, except that it runs all cultures in one test run instead 
of individually.  However, when a test fails, it will output which test failed 
and which culture it failed in.  I discovered issues in DateTools.cs in the 
ar culture, where DateTimeToString was returning culture-specific formatting.

I think we can resolve this now that we can confirm that the tests pass.  If 
any future culture-sensitivity bugs appear, new issues can be created, and then 
specific tests can be written to check for those issues.
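The run-under-all-cultures behavior described above can be sketched in Java roughly like this (class and method names here are illustrative, not the actual Lucene test code): run a check under every installed locale, restore the default afterwards, and report which locales failed.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Locale;

// Sketch of the LocalizedTestCase idea: run a check under every installed
// locale, restoring the default afterwards, and report which locales failed.
public class LocalizedCheck {
    /** Runs the given check under each available locale; returns the failing locales. */
    public static List<Locale> runUnderAllLocales(Runnable check) {
        Locale original = Locale.getDefault();
        List<Locale> failures = new ArrayList<>();
        try {
            for (Locale locale : Locale.getAvailableLocales()) {
                Locale.setDefault(locale);
                try {
                    check.run();
                } catch (AssertionError | RuntimeException e) {
                    failures.add(locale); // record which culture the test failed in
                }
            }
        } finally {
            Locale.setDefault(original); // never leak a changed default locale
        }
        return failures;
    }
}
```

A locale-sensitive check (say, one that formats a double and expects a "." separator) will show up as failing under cultures such as German, which matches the DateTools.cs symptom described above.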


 Make lucene.net culture insensitive (like the java version)
 ---

 Key: LUCENENET-493
 URL: https://issues.apache.org/jira/browse/LUCENENET-493
 Project: Lucene.Net
  Issue Type: Bug
  Components: Lucene.Net Core, Lucene.Net Test
Affects Versions: Lucene.Net 3.0.3
Reporter: Luc Vanlerberghe
  Labels: patch
 Fix For: Lucene.Net 3.0.3

 Attachments: Lucenenet-493.patch, UpdatedLocalizedTestCase.patch


 In Java, conversion of the basic types to and from strings is locale 
 (culture) independent. For localized input/output one needs to use the 
 classes in the java.text package.
 In .NET, conversion of the basic types to and from strings depends on the 
 default Culture, unless you specify CultureInfo.InvariantCulture explicitly.
 Some of the test cases in lucene.net fail if they are not run on a machine 
 with the culture set to US.
 In the current version of lucene.net there are patches here and there that 
 try to correct for some specific cases by using string replacement (like 
 System.Double.Parse(s.Replace(".", 
 CultureInfo.CurrentCulture.NumberFormat.NumberDecimalSeparator))), but that 
 seems really ugly.
 I submit a patch here that removes the old workarounds and replaces them with 
 calls to classes in the Lucene.Net.Support namespace that try to handle the 
 conversions in a compatible way.
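The Java behavior the description refers to is easy to demonstrate with a small standalone sketch (not code from the patch): the basic-type conversions ignore the default locale, while the java.text classes honor it.

```java
import java.text.NumberFormat;
import java.text.ParseException;
import java.util.Locale;

// Demonstrates the Java behavior described above: basic-type conversions
// ignore the default locale; only java.text classes are locale-sensitive.
public class LocaleConversions {
    public static void main(String[] args) throws ParseException {
        Locale.setDefault(Locale.GERMANY); // German uses ',' as decimal separator

        // Basic-type conversion: unaffected by the default locale.
        double d = Double.parseDouble("3.14");
        System.out.println(Double.toString(d));                 // 3.14

        // java.text: locale-sensitive, as documented.
        NumberFormat german = NumberFormat.getInstance(Locale.GERMANY);
        System.out.println(german.format(3.14));                // 3,14
        System.out.println(german.parse("3,14").doubleValue()); // 3.14
    }
}
```

The .NET port has the opposite default (Double.Parse honors the current culture), which is exactly why the patch routes conversions through Lucene.Net.Support helpers.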





[jira] [Updated] (LUCENENET-436) Refactor Deprecated Code inside of tests

2012-06-22 Thread Christopher Currens (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENENET-436?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Christopher Currens updated LUCENENET-436:
--

Affects Version/s: Lucene.Net 3.0.3
Fix Version/s: (was: Lucene.Net 3.0.3)
   Lucene.Net 3.5

 Refactor Deprecated Code inside of tests 
 -

 Key: LUCENENET-436
 URL: https://issues.apache.org/jira/browse/LUCENENET-436
 Project: Lucene.Net
  Issue Type: Sub-task
  Components: Lucene.Net Test
Affects Versions: Lucene.Net 2.9.4g, Lucene.Net 3.0.3
Reporter: michael herndon
  Labels: refactoring, testing,
 Fix For: Lucene.Net 3.5


 * We should still be testing deprecated methods, but we need to use #pragma 
 warning disable/restore 0618 around those tests; otherwise compiler warnings 
 are too numerous to be anywhere near helpful.
 * We should only be using deprecated methods in places where they are being 
 explicitly tested; other tests that need that functionality in 
 order to validate their assertions should be refactored to use methods that are 
 not deprecated.
 This is one place we should probably deviate from the parent project and make 
 sure that any deprecated code gets isolated to the tests designed only for 
 the deprecated methods, and then use the newer API throughout the test suite.
 This should help move the project forward and make it easier to remove 
 deprecated APIs when the time comes.
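The isolation idea above, sketched in Java for illustration (Java's analogue of the C# #pragma is @SuppressWarnings("deprecation")): only the test that deliberately exercises the deprecated member suppresses the warning, so the rest of the suite compiles cleanly. The Legacy class here is hypothetical.

```java
// Illustrative sketch (not from the issue): isolate deprecated-API usage to
// the one test that exercises it, analogous to wrapping C# code in
// #pragma warning disable/restore 0618.
public class DeprecationIsolation {
    static class Legacy {
        @Deprecated
        static int oldAdd(int a, int b) { return a + b; } // the deprecated API
        static int add(int a, int b) { return a + b; }    // its replacement
    }

    // Only this method touches the deprecated API, and the warning is
    // suppressed here alone.
    @SuppressWarnings("deprecation")
    static int exerciseDeprecatedPath() {
        return Legacy.oldAdd(2, 3);
    }

    // All other tests use the supported API.
    static int exerciseCurrentPath() {
        return Legacy.add(2, 3);
    }
}
```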





[jira] [Updated] (LUCENENET-434) Remove AnonymousXXXX classes to increase readability

2012-06-22 Thread Christopher Currens (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENENET-434?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Christopher Currens updated LUCENENET-434:
--

Fix Version/s: (was: Lucene.Net 3.0.3)
   Lucene.Net 3.5

Moving to 3.5.  As ugly as they are, leaving them in doesn't hurt anything 
except our eyes.  As we port to 3.5, we can remove as many of these as we 
can.

 Remove Anonymous classes to increase readability
 ---

 Key: LUCENENET-434
 URL: https://issues.apache.org/jira/browse/LUCENENET-434
 Project: Lucene.Net
  Issue Type: Improvement
  Components: Lucene.Net Core
Affects Versions: Lucene.Net 2.9.4g, Lucene.Net 3.0.3
Reporter: Scott Lombard
Assignee: Scott Lombard
Priority: Minor
 Fix For: Lucene.Net 3.5

 Attachments: TeeSinkTokenFilter.patch

   Original Estimate: 168h
  Time Spent: 13h
  Remaining Estimate: 155h

 Replace the Anonymous classes inherited from the JLCA, which make the code 
 impossible to read.
 Follow Digy's template to replace the single abstract method with Func or 
 Action,
  
 like in FilterCache<T>, from:
 protected abstract object MergeDeletes(IndexReader reader, object value);
 to:
 Func<IndexReader, object, object> MergeDeletes;
  
 Determine a solution for classes with more than one abstract method without 
 diverging much from Java.
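A rough Java sketch of Digy's template as described above (the C# original injects a Func<IndexReader, object, object>; the IndexReader stand-in and generic signature here are illustrative):

```java
import java.util.function.BiFunction;

// Sketch of the template described above: a single abstract method replaced
// by a function-typed field. The IndexReader stand-in is hypothetical.
public class FilterCache<T> {
    static class IndexReader {} // stand-in for the real Lucene class

    // Instead of "protected abstract object MergeDeletes(IndexReader, object)",
    // the behavior is injected as a field:
    final BiFunction<IndexReader, T, T> mergeDeletes;

    FilterCache(BiFunction<IndexReader, T, T> mergeDeletes) {
        this.mergeDeletes = mergeDeletes;
    }
}
```

Each former anonymous subclass then becomes a lambda passed to the constructor, e.g. `new FilterCache<String>((reader, value) -> value)`, which is what makes the call sites readable again.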





[jira] [Updated] (LUCENENET-467) .NETify the public API where appropriate

2012-06-22 Thread Christopher Currens (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENENET-467?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Christopher Currens updated LUCENENET-467:
--

Fix Version/s: (was: Lucene.Net 3.0.3)
   Lucene.Net 3.5

Moving to 3.5

 .NETify the public API where appropriate
 

 Key: LUCENENET-467
 URL: https://issues.apache.org/jira/browse/LUCENENET-467
 Project: Lucene.Net
  Issue Type: Improvement
  Components: Lucene.Net Contrib, Lucene.Net Core
Affects Versions: Lucene.Net 2.9.2, Lucene.Net 2.9.4, Lucene.Net 2.9.4g, 
 Lucene.Net 3.0.3
 Environment: all
Reporter: Christopher Currens
  Labels: refactoring
 Fix For: Lucene.Net 3.5

 Attachments: Lucenenet-467-create.patch


 Although we haven't abandoned the line-by-line port of Java Lucene, there are 
 many idioms in Java that make little to no sense in a .NET assembly.  The API 
 can change to allow for a conventional .NET experience, while still 
 maintaining the ability to port Java logic easily.
 * Change Getxxx() and Setxxx() methods to .NET properties
 * Implement the [dispose 
 pattern|http://msdn.microsoft.com/en-us/library/fs2xkftw.aspx] properly.  
 Try, at all costs, to use finalizers only *when necessary*.  They are 
 expensive, and most of the classes involved already have finalizers that will 
 be called.
 * Convert Java Iterator-style classes (see TermEnum, TermDocs and others) to 
 implement IEnumerable<T>
 * When catching and rethrowing exceptions, use *throw;* instead of *throw ex;* 
 to maintain the stack trace





Outstanding issues for 3.0.3

2012-06-22 Thread Christopher Currens
I want to discuss the outstanding issues blocking a 3.0.3 release.  I've
already moved quite a few bugs to 3.5 that are too time-consuming or
require too many changes to the code.  It's fairly obvious that we wouldn't
be able to finish them.  Other more serious bugs, like the
culture-sensitivity issues we've been seeing and the failing tests, were
closed thanks to changes checked in by committers and contributors.

-- Task 470, a non-serious one, is listed only because it's mostly done and
just needs a few loose ends tied up.  I'll hopefully have time to take care
of that this weekend.
-- Task 446 (CLS Compliance) is important, but there's no way we can get
this done quickly.  The current state of this issue is that all of the
names of public members are now compliant.  There are a few things that
aren't: the use of sbyte (particularly those related to the FieldCache) and
some conflicts with *protected or internal* fields (some with public
members).  Opinions on this one will be appreciated the most.  My opinion
is that we should draw a line on the amount of CLS compliance to have in
this release, and push the rest into 3.5.
-- Improvement 337 - Are we going to add this code (not present in Java) to
the core library?
-- Improvement 456 - This is related to builds being output in Apache's
release format.  Do we want to do this for this release?



Thanks,
Christopher


RE: Outstanding issues for 3.0.3

2012-06-22 Thread Prescott Nasser


 -- Task 470, a non-serious one, is listed only because it's mostly done and
 just need a few loose ends tied up. I'll hopefully have time to take care
 of that this weekend.


How many GetX/SetX are left? I did a quick search for 'public * Get*()'.  Most 
of them looked to be actual methods - perhaps a few to replace.


 -- Task 446 (CLS Compliance), is important, but there's no way we can get
 this done quickly. The current state of this issue is that all of the
 names of public members are now compliant. There are a few things that
 aren't, the use of sbyte (particularly those related to the FieldCache) and
 some conflicts with *protected or internal* fields (some with public
 members). Opinions on this one will be appreciated the most. My opinion
 is that we should draw a line on the amount of CLS compliance to have in
 this release, and push the rest into 3.5.

 

I count roughly 53 CLS compliance issues.  The sbyte stuff will run into 
trouble when you do bit shifting (I ran into this issue when trying to do this 
for 2.9.4).  I'd like to see if we can't get rid of the easier stuff 
(the internal/protected conflicts).  I would not try getting rid of sbyte or 
volatile for this release; it's going to take some serious consideration to get 
rid of those.


 -- Improvement 337 - Are we going to add this code (not present in java) to
 the core library?

 

I'd skip it and re-evaluate the community desire for this in 3.5.


 -- Improvement 456 - This is related to builds being output in Apache's
 release format. Do we want to do this for this release?
 


I looked into this last weekend - I'm terrible with NAnt, so I didn't get 
anywhere.  It would be nice to have, but I don't think I'll figure it out.  If 
Michael has some time, maybe he can make the adjustment; he knows these scripts 
best.  If not, I'm going to look into it, but I don't call this a show stopper - 
either we have it or we don't when the rest is done.

 

 

~P

[jira] [Commented] (SOLR-3509) Print support in Web UI

2012-06-22 Thread Stefan Matheis (steffkes) (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-3509?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13399163#comment-13399163
 ] 

Stefan Matheis (steffkes) commented on SOLR-3509:
-

I know, Lance - perhaps I didn't state this clearly enough: the first step was 
only meant to be a reset, to avoid having screen styles printed.  Styling every 
screen for a nice print layout is very time-consuming, therefore I'll add them 
only in little steps.

 Print support in Web UI
 ---

 Key: SOLR-3509
 URL: https://issues.apache.org/jira/browse/SOLR-3509
 Project: Solr
  Issue Type: Improvement
  Components: web gui
Affects Versions: 4.0
Reporter: Lance Norskog
Priority: Minor
 Attachments: NewUI_Print_Problems.pdf, SOLR-3509.patch, Solr Admin 
 (172.16.10.99).pdf


 Please try printing the various pages and decide if they need different 
 layouts or formatting. For example, it would help if all characters are in 
 black, not grey.
 I have attached a sample printout of the UI. 




-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (LUCENE-4163) Improve concurrency in MMapIndexInput.clone()

2012-06-22 Thread Uwe Schindler (JIRA)
Uwe Schindler created LUCENE-4163:
-

 Summary: Improve concurrency in MMapIndexInput.clone()
 Key: LUCENE-4163
 URL: https://issues.apache.org/jira/browse/LUCENE-4163
 Project: Lucene - Java
  Issue Type: Improvement
  Components: core/store
Affects Versions: 3.6, 4.0, 5.0
Reporter: Uwe Schindler
Assignee: Uwe Schindler
 Fix For: 4.0, 3.6.1, 5.0


Followup issue from SOLR-3566:

Whenever you clone the TermIndex, it also creates a clone of the underlying 
IndexInputs. In highly concurrent environments, the clone method of 
MMapIndexInput is a bottleneck (it has heavy work to do to manage the weak 
references in a synchronized block).

Everywhere else in Lucene we use my new WeakIdentityMap for managing concurrent 
weak maps. For this case I did not do this, as the WeakIdentityMap has no 
iterators (it does not implement the Map interface). This issue will add key 
and value iterators (the key iterator will not return GC'ed keys), so 
MMapIndexInput can use a WeakIdentityMap backed by ConcurrentHashMap and needs 
no synchronization. ConcurrentHashMap has better concurrency because it 
distributes the hash keys across separately locked buckets.
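A minimal sketch of the data structure the issue describes, under stated assumptions (identity-hashed weak keys in a ConcurrentHashMap, no synchronized block, and a key iterator that skips garbage-collected keys); the real WeakIdentityMap also purges stale entries via a ReferenceQueue, which is omitted here for brevity.

```java
import java.lang.ref.WeakReference;
import java.util.Iterator;
import java.util.NoSuchElementException;
import java.util.concurrent.ConcurrentHashMap;

// Sketch only: weak identity keys stored in a ConcurrentHashMap, with a
// key iterator that never returns keys that were garbage collected.
public class WeakIdentityConcurrentMap<K, V> {
    private static final class IdentityWeakRef extends WeakReference<Object> {
        final int hash;
        IdentityWeakRef(Object key) { super(key); this.hash = System.identityHashCode(key); }
        @Override public int hashCode() { return hash; }
        @Override public boolean equals(Object o) {
            if (this == o) return true;
            if (!(o instanceof IdentityWeakRef)) return false;
            Object k = get();
            return k != null && k == ((IdentityWeakRef) o).get(); // identity, not equals()
        }
    }

    private final ConcurrentHashMap<IdentityWeakRef, V> backing = new ConcurrentHashMap<>();

    public V put(K key, V value) { return backing.put(new IdentityWeakRef(key), value); }
    public V get(K key) { return backing.get(new IdentityWeakRef(key)); }

    /** Key iterator that skips keys that have been garbage collected. */
    public Iterator<K> keyIterator() {
        final Iterator<IdentityWeakRef> refs = backing.keySet().iterator();
        return new Iterator<K>() {
            private K next;
            @SuppressWarnings("unchecked")
            private boolean advance() {
                while (next == null && refs.hasNext()) {
                    next = (K) refs.next().get(); // null if the key was GC'ed
                }
                return next != null;
            }
            @Override public boolean hasNext() { return advance(); }
            @Override public K next() {
                if (!advance()) throw new NoSuchElementException();
                K k = next; next = null; return k;
            }
        };
    }
}
```

Because ConcurrentHashMap's iterators are weakly consistent, the clone bookkeeping never needs a global lock, which is the concurrency win the issue is after.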







[jira] [Updated] (LUCENE-4163) Improve concurrency in MMapIndexInput.clone()

2012-06-22 Thread Uwe Schindler (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-4163?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Uwe Schindler updated LUCENE-4163:
--

Attachment: LUCENE-4163.patch

Patch.

This patch also refactors the close() method of MMapIndexInput a little bit to 
work around the fact that we have no synchronization anymore. As a first step 
it marks the IndexInput as closed (buffers = null), so later clone() or other 
access fails with AlreadyClosedException. After unsetting its own buffers it 
unsets all clone buffers and finally unmaps them.
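The close() ordering described above can be sketched as follows (field and method names are illustrative, not the actual MMapIndexInput source):

```java
import java.nio.ByteBuffer;
import java.util.ArrayList;
import java.util.List;

// Sketch of the close() ordering: mark this input closed first so concurrent
// access fails fast, then invalidate the clones, then release the mapping.
class MMapInputSketch {
    volatile ByteBuffer[] buffers = new ByteBuffer[] { ByteBuffer.allocate(16) };
    final List<MMapInputSketch> clones = new ArrayList<>();

    void close() {
        ByteBuffer[] bufs = buffers;
        if (bufs == null) return;     // already closed
        buffers = null;               // step 1: later clone()/read throws
        for (MMapInputSketch c : clones) {
            c.buffers = null;         // step 2: invalidate the clones' buffers
        }
        unmap(bufs);                  // step 3: finally unmap
    }

    void unmap(ByteBuffer[] bufs) { /* platform-specific unmapping elided */ }

    byte readByte(int pos) {
        ByteBuffer[] bufs = buffers;
        if (bufs == null) throw new IllegalStateException("already closed (sketch)");
        return bufs[0].get(pos);
    }
}
```

Reading through a local copy of the volatile `buffers` field is what lets concurrent readers either see a valid buffer array or fail cleanly, without any synchronized block.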

 Improve concurrency in MMapIndexInput.clone()
 -

 Key: LUCENE-4163
 URL: https://issues.apache.org/jira/browse/LUCENE-4163
 Project: Lucene - Java
  Issue Type: Improvement
  Components: core/store
Affects Versions: 3.6, 4.0, 5.0
Reporter: Uwe Schindler
Assignee: Uwe Schindler
 Fix For: 4.0, 3.6.1, 5.0

 Attachments: LUCENE-4163.patch


 Followup issue from SOLR-3566:
 Whenever you clone the TermIndex, it also creates a clone of the underlying 
 IndexInputs. In highly concurrent environments, the clone method of 
 MMapIndexInput is a bottleneck (it has heavy work to do to manage the weak 
 references in a synchronized block).
 Everywhere else in Lucene we use my new WeakIdentityMap for managing 
 concurrent weak maps. For this case I did not do this, as the WeakIdentityMap 
 has no iterators (it does not implement the Map interface). This issue will 
 add key and value iterators (the key iterator will not return GC'ed keys), so 
 MMapIndexInput can use a WeakIdentityMap backed by ConcurrentHashMap and needs 
 no synchronization. ConcurrentHashMap has better concurrency because it 
 distributes the hash keys across separately locked buckets.







Fwd: Re: suggester/autocomplete locks file preventing replication

2012-06-22 Thread tom
cross-posting this issue to the dev list in the hope of getting a response 
here...



 Original Message 
Subject:Re: suggester/autocomplete locks file preventing replication
Date:   Thu, 21 Jun 2012 17:11:40 +0200
From:   tom dev.tom.men...@gmx.net
Reply-To:   solr-u...@lucene.apache.org
To: solr-u...@lucene.apache.org



Poking into the code, I think the FileDictionary class is the culprit:
it takes an InputStream as a ctor argument but never releases the
stream. What puzzles me is that the class seems to allow a one-time
iteration, and then the stream is useless, unless I'm missing smth. here.

Is there a good reason for this, or is it rather a bug?
Should I move the topic to the dev list?


On 21.06.2012 14:49, tom wrote:

BTW: a core unload doesn't release the lock either ;(


On 21.06.2012 14:39, tom wrote:

hi,

i'm using the suggester with a file like so:

  <searchComponent class="solr.SpellCheckComponent" name="suggest">
    <lst name="spellchecker">
      <str name="name">suggest</str>
      <str name="classname">org.apache.solr.spelling.suggest.Suggester</str>
      <str name="lookupImpl">org.apache.solr.spelling.suggest.fst.FSTLookup</str>
      <!-- Alternatives to lookupImpl:
           org.apache.solr.spelling.suggest.fst.FSTLookup [finite state automaton]
           org.apache.solr.spelling.suggest.jaspell.JaspellLookup [default, jaspell-based]
           org.apache.solr.spelling.suggest.tst.TSTLookup [ternary trees]
      -->
      <!-- the indexed field to derive suggestions from -->
      <!-- TODO must change this to spell or smth alike later -->
      <str name="field">content</str>
      <float name="threshold">0.05</float>
      <str name="buildOnCommit">true</str>
      <str name="weightBuckets">100</str>
      <str name="sourceLocation">autocomplete.dictionary</str>
    </lst>
  </searchComponent>

when trying to replicate i get the following error message on the
slave side:

 2012-06-21 14:34:50,781 ERROR
[pool-3-thread-1  ]
handler.ReplicationHandler- SnapPull failed
org.apache.solr.common.SolrException: Unable to rename: path
autocomplete.dictionary.20120620120611
at
org.apache.solr.handler.SnapPuller.copyTmpConfFiles2Conf(SnapPuller.java:642)
at
org.apache.solr.handler.SnapPuller.downloadConfFiles(SnapPuller.java:526)
at
org.apache.solr.handler.SnapPuller.fetchLatestIndex(SnapPuller.java:299)
at
org.apache.solr.handler.ReplicationHandler.doFetch(ReplicationHandler.java:268)
at org.apache.solr.handler.SnapPuller$1.run(SnapPuller.java:159)
at
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:441)
at
java.util.concurrent.FutureTask$Sync.innerRunAndReset(FutureTask.java:317)
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:150)
at
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$101(ScheduledThreadPoolExecutor.java:98)
at
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.runPeriodic(ScheduledThreadPoolExecutor.java:181)
at
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:205)
at
java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:885)
at
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:907)
at java.lang.Thread.run(Thread.java:619)

so I dug around and found out that Solr's Java process holds a
lock on the autocomplete.dictionary file. Any reason why this is so?

thx,

running:
solr 3.5
win7














[jira] [Updated] (LUCENE-4163) Improve concurrency in MMapIndexInput.clone()

2012-06-22 Thread Uwe Schindler (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-4163?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Uwe Schindler updated LUCENE-4163:
--

Attachment: LUCENE-4163.patch

Slightly improved test (null keys and iterator conformance).

I think that's ready to commit and brings a big improvement in concurrency. We 
should backport this to 3.6.1!

 Improve concurrency in MMapIndexInput.clone()
 -

 Key: LUCENE-4163
 URL: https://issues.apache.org/jira/browse/LUCENE-4163
 Project: Lucene - Java
  Issue Type: Improvement
  Components: core/store
Affects Versions: 3.6, 4.0, 5.0
Reporter: Uwe Schindler
Assignee: Uwe Schindler
 Fix For: 4.0, 3.6.1, 5.0

 Attachments: LUCENE-4163.patch, LUCENE-4163.patch


 Followup issue from SOLR-3566:
 Whenever you clone the TermIndex, it also creates a clone of the underlying 
 IndexInputs. In highly concurrent environments, the clone method of 
 MMapIndexInput is a bottleneck (it has heavy work to do to manage the weak 
 references in a synchronized block).
 Everywhere else in Lucene we use my new WeakIdentityMap for managing 
 concurrent weak maps. For this case I did not do this, as the WeakIdentityMap 
 has no iterators (it does not implement the Map interface). This issue will 
 add key and value iterators (the key iterator will not return GC'ed keys), so 
 MMapIndexInput can use a WeakIdentityMap backed by ConcurrentHashMap and needs 
 no synchronization. ConcurrentHashMap has better concurrency because it 
 distributes the hash keys across separately locked buckets.







[jira] [Commented] (LUCENE-4163) Improve concurrency in MMapIndexInput.clone()

2012-06-22 Thread Adrien Grand (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-4163?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13399229#comment-13399229
 ] 

Adrien Grand commented on LUCENE-4163:
--

+1

Maybe we should also update {{WeakIdentityMap}} documentation now that it has 
key and value iterators:

bq. This implementation was forked from <a href="http://cxf.apache.org/">Apache 
CXF</a> but modified to <b>not</b> implement the {@link java.util.Map} 
interface and without any set/iterator views on it, as those are error-prone 
and inefficient

 Improve concurrency in MMapIndexInput.clone()
 -

 Key: LUCENE-4163
 URL: https://issues.apache.org/jira/browse/LUCENE-4163
 Project: Lucene - Java
  Issue Type: Improvement
  Components: core/store
Affects Versions: 3.6, 4.0, 5.0
Reporter: Uwe Schindler
Assignee: Uwe Schindler
 Fix For: 4.0, 3.6.1, 5.0

 Attachments: LUCENE-4163.patch, LUCENE-4163.patch


 Followup issue from SOLR-3566:
 Whenever you clone the TermIndex, it also creates a clone of the underlying 
 IndexInputs. In highly concurrent environments, the clone method of 
 MMapIndexInput is a bottleneck (it has heavy work to do to manage the weak 
 references in a synchronized block).
 Everywhere else in Lucene we use my new WeakIdentityMap for managing 
 concurrent weak maps. For this case I did not do this, as the WeakIdentityMap 
 has no iterators (it does not implement the Map interface). This issue will 
 add key and value iterators (the key iterator will not return GC'ed keys), so 
 MMapIndexInput can use a WeakIdentityMap backed by ConcurrentHashMap and needs 
 no synchronization. ConcurrentHashMap has better concurrency because it 
 distributes the hash keys across separately locked buckets.







[jira] [Commented] (LUCENE-4163) Improve concurrency in MMapIndexInput.clone()

2012-06-22 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-4163?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13399231#comment-13399231
 ] 

Uwe Schindler commented on LUCENE-4163:
---

Adrien: right, will do!

 Improve concurrency in MMapIndexInput.clone()
 -

 Key: LUCENE-4163
 URL: https://issues.apache.org/jira/browse/LUCENE-4163
 Project: Lucene - Java
  Issue Type: Improvement
  Components: core/store
Affects Versions: 3.6, 4.0, 5.0
Reporter: Uwe Schindler
Assignee: Uwe Schindler
 Fix For: 4.0, 3.6.1, 5.0

 Attachments: LUCENE-4163.patch, LUCENE-4163.patch


 Followup issue from SOLR-3566:
 Whenever you clone the TermIndex, it also creates a clone of the underlying 
 IndexInputs. In highly concurrent environments, the clone method of 
 MMapIndexInput is a bottleneck (it has heavy work to do to manage the weak 
 references in a synchronized block).
 Everywhere else in Lucene we use my new WeakIdentityMap for managing 
 concurrent weak maps. For this case I did not do this, as the WeakIdentityMap 
 has no iterators (it does not implement the Map interface). This issue will 
 add key and value iterators (the key iterator will not return GC'ed keys), so 
 MMapIndexInput can use a WeakIdentityMap backed by ConcurrentHashMap and needs 
 no synchronization. ConcurrentHashMap has better concurrency because it 
 distributes the hash keys across separately locked buckets.







Re: Re: suggester/autocomplete locks file preventing replication

2012-06-22 Thread Simon Willnauer
On Fri, Jun 22, 2012 at 10:37 AM, tom dev.tom.men...@gmx.net wrote:

  cross posting this issue to the dev list in the hope to get a response
 here...


I think you are right. Closing the Stream / Reader is the responsibility of
the caller, not the FileDictionary, IMO; but Solr doesn't close it, so that
might be causing your problems. Are you running on Windows by any chance?
I will create an issue and fix it.

simon



  Original Message   Subject: Re: suggester/autocomplete
 locks file preventing replication  Date: Thu, 21 Jun 2012 17:11:40 +0200  
 From:
 tom dev.tom.men...@gmx.net  Reply-To:
 solr-u...@lucene.apache.org  To: solr-u...@lucene.apache.org


 poking into the code i think the FileDictionary class is the culprit:
 It takes an InputStream as a ctor argument but never releases the
 stream. what puzzles me is that the class seems to allow a one-time
 iteration and then the stream is useless, unless i'm missing smth. here.

 is there a good reason for this or rather a bug?
 should i move the topic to the dev list?


 On 21.06.2012 14:49, tom wrote:
  BTW: a core unload doesnt release the lock either ;(
 
 
  On 21.06.2012 14:39, tom wrote:
  hi,
 
  i'm using the suggester with a file like so:
 
   <searchComponent class="solr.SpellCheckComponent" name="suggest">
     <lst name="spellchecker">
       <str name="name">suggest</str>
       <str name="classname">org.apache.solr.spelling.suggest.Suggester</str>
       <str name="lookupImpl">org.apache.solr.spelling.suggest.fst.FSTLookup</str>
       <!-- Alternatives to lookupImpl:
            org.apache.solr.spelling.suggest.fst.FSTLookup [finite state automaton]
            org.apache.solr.spelling.suggest.jaspell.JaspellLookup [default, jaspell-based]
            org.apache.solr.spelling.suggest.tst.TSTLookup [ternary trees]
       -->
       <!-- the indexed field to derive suggestions from -->
       <!-- TODO must change this to spell or smth alike later -->
       <str name="field">content</str>
       <float name="threshold">0.05</float>
       <str name="buildOnCommit">true</str>
       <str name="weightBuckets">100</str>
       <str name="sourceLocation">autocomplete.dictionary</str>
     </lst>
   </searchComponent>
 
  when trying to replicate i get the following error message on the
  slave side:
 
   2012-06-21 14:34:50,781 ERROR
  [pool-3-thread-1  ]
  handler.ReplicationHandler- SnapPull failed
  org.apache.solr.common.SolrException: Unable to rename: path
  autocomplete.dictionary.20120620120611
  at
  org.apache.solr.handler.SnapPuller.copyTmpConfFiles2Conf(SnapPuller.java:642)
  at
  org.apache.solr.handler.SnapPuller.downloadConfFiles(SnapPuller.java:526)
  at
  org.apache.solr.handler.SnapPuller.fetchLatestIndex(SnapPuller.java:299)
  at
  org.apache.solr.handler.ReplicationHandler.doFetch(ReplicationHandler.java:268)
  at org.apache.solr.handler.SnapPuller$1.run(SnapPuller.java:159)
  at
  java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:441)
  at
  java.util.concurrent.FutureTask$Sync.innerRunAndReset(FutureTask.java:317)
  at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:150)
  at
  java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$101(ScheduledThreadPoolExecutor.java:98)
  at
  java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.runPeriodic(ScheduledThreadPoolExecutor.java:181)
  at
  java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:205)
  at
  java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:885)
  at
  java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:907)
  at java.lang.Thread.run(Thread.java:619)
 
  so i dug around and found out that solr's java process holds a
  lock on the autocomplete.dictionary file. any reason why this is so?
 
  thx,
 
  running:
  solr 3.5
  win7
 
 
 
 










[jira] [Updated] (LUCENE-4163) Improve concurrency in MMapIndexInput.clone()

2012-06-22 Thread Uwe Schindler (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-4163?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Uwe Schindler updated LUCENE-4163:
--

Attachment: LUCENE-4163.patch

Patch with updated Javadocs.

 Improve concurrency in MMapIndexInput.clone()
 -

 Key: LUCENE-4163
 URL: https://issues.apache.org/jira/browse/LUCENE-4163
 Project: Lucene - Java
  Issue Type: Improvement
  Components: core/store
Affects Versions: 3.6, 4.0, 5.0
Reporter: Uwe Schindler
Assignee: Uwe Schindler
 Fix For: 4.0, 3.6.1, 5.0

 Attachments: LUCENE-4163.patch, LUCENE-4163.patch, LUCENE-4163.patch


 Followup issue from SOLR-3566:
 Whenever you clone the TermIndex, it also creates a clone of the underlying 
 IndexInputs. In highly concurrent environments, the clone method of 
 MMapIndexInput is a bottleneck (it has heavy work to do to manage the weak 
 references in a synchronized block).
 Everywhere else in Lucene we use my new WeakIdentityMap for managing 
 concurrent weak maps. For this case I did not do this, as the WeakIdentityMap 
 has no iterators (it does not implement the Map interface). This issue will add 
 a key and a values iterator (the key iterator will not return GC'ed keys), so 
 MMapIndexInput can use a WeakIdentityMap backed by ConcurrentHashMap and needs 
 no synchronization. ConcurrentHashMap has better concurrency because it 
 distributes the hash keys into different buckets per thread.




Re: Re: suggester/autocomplete locks file preventing replication

2012-06-22 Thread Simon Willnauer
On Fri, Jun 22, 2012 at 11:47 AM, Simon Willnauer 
simon.willna...@googlemail.com wrote:



 On Fri, Jun 22, 2012 at 10:37 AM, tom dev.tom.men...@gmx.net wrote:

  cross posting this issue to the dev list in the hope to get a response
 here...


 I think you are right. Closing the Stream / Reader is the responsibility
 of the caller not the FileDictionary IMO but solr doesn't close it so that
 might cause your problems. Are you running on windows by any chance?
 I will create an issue and fix it.


hmm I just looked at it and I see an IOUtils.close call in FileDictionary

https://svn.apache.org/repos/asf/lucene/dev/branches/lucene_solr_3_6/lucene/contrib/spellchecker/src/java/org/apache/lucene/search/suggest/FileDictionary.java

are you using solr 3.6?


 simon





[jira] [Created] (SOLR-3570) Release Resource Stream in Suggestor

2012-06-22 Thread Simon Willnauer (JIRA)
Simon Willnauer created SOLR-3570:
-

 Summary: Release Resource Stream in Suggestor 
 Key: SOLR-3570
 URL: https://issues.apache.org/jira/browse/SOLR-3570
 Project: Solr
  Issue Type: Improvement
  Components: spellchecker
Affects Versions: 3.6, 4.0, 5.0
Reporter: Simon Willnauer
Assignee: Simon Willnauer
 Fix For: 4.0, 3.6.1, 5.0


Currently the Suggestor doesn't release the resource stream if a file 
dictionary is loaded. The FileDictionary implementation closes the stream, but 
only if we exhaust the stream entirely. Still, we should close it on the 
consumer side too; a double close does no harm.
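The fix described here can be sketched from the consumer side. DictionaryLoader and its tab-separated term/weight format are hypothetical stand-ins, not Solr's actual Suggester code; the point is that the caller closes the stream itself, regardless of whether the dictionary implementation already closed it on full exhaustion:

```java
import java.io.BufferedReader;
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.InputStreamReader;
import java.io.UncheckedIOException;
import java.nio.charset.StandardCharsets;
import java.util.ArrayList;
import java.util.List;

// Hypothetical consumer of a dictionary stream. The try-with-resources block
// guarantees a consumer-side close even if parsing fails part-way; closing a
// stream the dictionary implementation already closed is a harmless no-op.
final class DictionaryLoader {
    static List<String> loadTerms(InputStream in) {
        try (BufferedReader reader =
                 new BufferedReader(new InputStreamReader(in, StandardCharsets.UTF_8))) {
            List<String> terms = new ArrayList<>();
            for (String line; (line = reader.readLine()) != null; ) {
                terms.add(line.split("\t", 2)[0]); // term [tab] weight
            }
            return terms;
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }
}
```

On Windows this matters most: an InputStream left open keeps a file-system lock on the dictionary file, which is exactly what made the replication rename fail in the thread above.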






[jira] [Commented] (LUCENE-2899) Add OpenNLP Analysis capabilities as a module

2012-06-22 Thread Lance Norskog (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-2899?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13399243#comment-13399243
 ] 

Lance Norskog commented on LUCENE-2899:
---

bq. For NER you should try the perceptron and a cutoff of zero. 
Thanks! This patch generates all the models needed by the tests, and the tests are 
rewritten to use the poor-quality data from those models. To make the models, go 
to {{solr/contrib/opennlp/src/test-files/training}} and run 
{{bin/training.sh}}. This populates 
{{solr/contrib/opennlp/src/test-files/opennlp/conf/opennlp}}. I don't have 
Windows anymore, so I can't make a .bat version.



 Add OpenNLP Analysis capabilities as a module
 -

 Key: LUCENE-2899
 URL: https://issues.apache.org/jira/browse/LUCENE-2899
 Project: Lucene - Java
  Issue Type: New Feature
  Components: modules/analysis
Reporter: Grant Ingersoll
Assignee: Grant Ingersoll
Priority: Minor
 Attachments: LUCENE-2899.patch, opennlp_trunk.patch


 Now that OpenNLP is an ASF project and has a nice license, it would be nice 
 to have a submodule (under analysis) that exposed capabilities for it. Drew 
 Farris, Tom Morton and I have code that does:
 * Sentence Detection as a Tokenizer (could also be a TokenFilter, although it 
 would have to change slightly to buffer tokens)
 * NamedEntity recognition as a TokenFilter
 We are also planning a Tokenizer/TokenFilter that can put parts of speech as 
 either payloads (PartOfSpeechAttribute?) on a token or at the same position.
 I'd propose it go under:
 modules/analysis/opennlp




[jira] [Commented] (LUCENE-2899) Add OpenNLP Analysis capabilities as a module

2012-06-22 Thread Lance Norskog (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-2899?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13399245#comment-13399245
 ] 

Lance Norskog commented on LUCENE-2899:
---

General status:
* At this point you have to download one library (jwnl) and run a script to make 
the unit tests work.
* You have to download several model files from SourceForge to do real work. 
There is no script to help with that.
* The tokenizer and filter are in solr/, not lucene/.

What is missing to make this a full package:
* Payload handling
** TokenFilter to parse TAG/term or term_TAG into term/payload.
** Output code in Solr for the reverse.
** Payload query for tags.
** Similarity scoring algorithms for tags.
* Tag handling
** There is a universal set of 12 parts-of-speech tags, with mappings for many 
language tagsets (Treebank etc.) into 12 common tags. Multi-language sites 
would benefit from this. I persuaded the authors to switch from GNU to Apache 
licensing.
*** [A Universal Part-of-Speech Tagset|http://arxiv.org/abs/1104.2086]

What NLP apps would be useful for search? Coordinate expansion, for example.
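For the first payload item above, a small splitter illustrates the intended direction: turn a combined term_TAG token into the bare term plus the tag destined for the payload. TagSplitter and the underscore convention are assumptions for illustration, not code from the patch:

```java
// Hypothetical helper for the "term_TAG into term/payload" step: splits on the
// last underscore so terms containing underscores keep everything but the tag.
final class TagSplitter {
    // Returns { term, tag }; tag is null when the token carries no tag.
    static String[] split(String token) {
        int i = token.lastIndexOf('_');
        if (i < 0) {
            return new String[] { token, null };
        }
        return new String[] { token.substring(0, i), token.substring(i + 1) };
    }
}
```

In a real TokenFilter the tag half would be stored as a payload (or a PartOfSpeechAttribute) rather than returned in an array.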


 Add OpenNLP Analysis capabilities as a module
 -

 Key: LUCENE-2899
 URL: https://issues.apache.org/jira/browse/LUCENE-2899
 Project: Lucene - Java
  Issue Type: New Feature
  Components: modules/analysis
Reporter: Grant Ingersoll
Assignee: Grant Ingersoll
Priority: Minor
 Attachments: LUCENE-2899.patch, opennlp_trunk.patch


 Now that OpenNLP is an ASF project and has a nice license, it would be nice 
 to have a submodule (under analysis) that exposed capabilities for it. Drew 
 Farris, Tom Morton and I have code that does:
 * Sentence Detection as a Tokenizer (could also be a TokenFilter, although it 
 would have to change slightly to buffer tokens)
 * NamedEntity recognition as a TokenFilter
 We are also planning a Tokenizer/TokenFilter that can put parts of speech as 
 either payloads (PartOfSpeechAttribute?) on a token or at the same position.
 I'd propose it go under:
 modules/analysis/opennlp




[jira] [Updated] (LUCENE-2899) Add OpenNLP Analysis capabilities as a module

2012-06-22 Thread Lance Norskog (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-2899?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lance Norskog updated LUCENE-2899:
--

Attachment: LUCENE-2899.patch

 Add OpenNLP Analysis capabilities as a module
 -

 Key: LUCENE-2899
 URL: https://issues.apache.org/jira/browse/LUCENE-2899
 Project: Lucene - Java
  Issue Type: New Feature
  Components: modules/analysis
Reporter: Grant Ingersoll
Assignee: Grant Ingersoll
Priority: Minor
 Attachments: LUCENE-2899.patch, LUCENE-2899.patch, opennlp_trunk.patch


 Now that OpenNLP is an ASF project and has a nice license, it would be nice 
 to have a submodule (under analysis) that exposed capabilities for it. Drew 
 Farris, Tom Morton and I have code that does:
 * Sentence Detection as a Tokenizer (could also be a TokenFilter, although it 
 would have to change slightly to buffer tokens)
 * NamedEntity recognition as a TokenFilter
 We are also planning a Tokenizer/TokenFilter that can put parts of speech as 
 either payloads (PartOfSpeechAttribute?) on a token or at the same position.
 I'd propose it go under:
 modules/analysis/opennlp




Re: Re: suggester/autocomplete locks file preventing replication

2012-06-22 Thread Simon Willnauer
here is the issue https://issues.apache.org/jira/browse/SOLR-3570



[jira] [Updated] (SOLR-3570) Release Resource Stream in Suggestor

2012-06-22 Thread Simon Willnauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-3570?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Simon Willnauer updated SOLR-3570:
--

Attachment: SOLR-3570.patch

Patch closing the input stream in the Suggestor. It also adds additional safety to 
FileDictionary: in the case of an error, we close the stream internally.

 Release Resource Stream in Suggestor 
 -

 Key: SOLR-3570
 URL: https://issues.apache.org/jira/browse/SOLR-3570
 Project: Solr
  Issue Type: Improvement
  Components: spellchecker
Affects Versions: 3.6, 4.0, 5.0
Reporter: Simon Willnauer
Assignee: Simon Willnauer
 Fix For: 4.0, 3.6.1, 5.0

 Attachments: SOLR-3570.patch


 Currently the Suggestor doesn't release the resource stream if a file 
 dictionary is loaded. The FileDictionary implementation closes the stream, but 
 only if we exhaust the stream entirely. Still, we should close it on the 
 consumer side too; a double close does no harm.




Build failed in Jenkins: Lucene-Core-4x-Beasting #6762

2012-06-22 Thread hudsonseviltwin
See http://sierranevada.servebeer.com/job/Lucene-Core-4x-Beasting/6762/

--
[...truncated 894 lines...]
   [junit4] Suite: org.apache.lucene.util.TestFixedBitSet
   [junit4] Completed on J1 in 1.15s, 5 tests
   [junit4]  
   [junit4] Suite: org.apache.lucene.search.TestFieldCacheRangeFilter
   [junit4] Completed on J0 in 0.12s, 9 tests
   [junit4]  
   [junit4] Suite: org.apache.lucene.util.automaton.TestDeterminizeLexicon
   [junit4] Completed on J3 in 0.37s, 1 test
   [junit4]  
   [junit4] Suite: org.apache.lucene.search.TestMultiTermConstantScore
   [junit4] Completed on J1 in 0.06s, 7 tests
   [junit4]  
   [junit4] Suite: org.apache.lucene.codecs.lucene40.values.TestDocValues
   [junit4] Completed on J0 in 0.25s, 14 tests
   [junit4]  
   [junit4] Suite: org.apache.lucene.search.TestAutomatonQuery
   [junit4] Completed on J3 in 0.04s, 6 tests
   [junit4]  
   [junit4] Suite: org.apache.lucene.index.TestMultiFields
   [junit4] Completed on J1 in 0.20s, 2 tests
   [junit4]  
   [junit4] Suite: org.apache.lucene.index.TestPayloads
   [junit4] Completed on J0 in 0.17s, 5 tests
   [junit4]  
   [junit4] Suite: org.apache.lucene.index.TestTransactionRollback
   [junit4] Completed on J3 in 0.07s, 2 tests
   [junit4]  
   [junit4] Suite: org.apache.lucene.index.TestSizeBoundedForceMerge
   [junit4] Completed on J1 in 0.05s, 11 tests
   [junit4]  
   [junit4] Suite: org.apache.lucene.search.TestFuzzyQuery
   [junit4] Completed on J0 in 0.05s, 5 tests
   [junit4]  
   [junit4] Suite: org.apache.lucene.search.spans.TestSpansAdvanced2
   [junit4] Completed on J3 in 0.06s, 4 tests
   [junit4]  
   [junit4] Suite: org.apache.lucene.index.TestFilterAtomicReader
   [junit4] Completed on J1 in 0.02s, 2 tests
   [junit4]  
   [junit4] Suite: org.apache.lucene.search.TestBooleanQuery
   [junit4] Completed on J0 in 0.10s, 5 tests
   [junit4]  
   [junit4] Suite: org.apache.lucene.util.TestIdentityHashSet
   [junit4] Completed on J3 in 0.05s, 1 test
   [junit4]  
   [junit4] Suite: org.apache.lucene.util.TestSentinelIntSet
   [junit4] Completed on J1 in 0.12s, 2 tests
   [junit4]  
   [junit4] Suite: org.apache.lucene.util.TestRollingBuffer
   [junit4] Completed on J0 in 0.05s, 1 test
   [junit4]  
   [junit4] Suite: org.apache.lucene.search.spans.TestSpanSearchEquivalence
   [junit4] Completed on J3 in 0.08s, 8 tests
   [junit4]  
   [junit4] Suite: org.apache.lucene.search.TestRegexpQuery
   [junit4] Completed on J1 in 0.03s, 7 tests
   [junit4]  
   [junit4] Suite: org.apache.lucene.search.TestCachingWrapperFilter
   [junit4] Completed on J0 in 0.03s, 5 tests
   [junit4]  
   [junit4] Suite: 
org.apache.lucene.search.spans.TestSpanExplanationsOfNonMatches
   [junit4] Completed on J3 in 0.04s, 31 tests
   [junit4]  
   [junit4] Suite: org.apache.lucene.search.spans.TestSpansAdvanced
   [junit4] Completed on J1 in 0.02s, 1 test
   [junit4]  
   [junit4] Suite: org.apache.lucene.document.TestDocument
   [junit4] Completed on J0 in 0.03s, 10 tests
   [junit4]  
   [junit4] Suite: org.apache.lucene.store.TestFileSwitchDirectory
   [junit4] Completed on J1 in 0.02s, 4 tests
   [junit4]  
   [junit4] Suite: org.apache.lucene.util.junitcompat.TestJUnitRuleOrder
   [junit4] Completed on J0 in 0.01s, 1 test
   [junit4]  
   [junit4] Suite: org.apache.lucene.index.TestDocCount
   [junit4] Completed on J3 in 0.25s, 1 test
   [junit4]  
   [junit4] Suite: org.apache.lucene.index.TestParallelTermEnum
   [junit4] Completed on J1 in 0.01s, 1 test
   [junit4]  
   [junit4] Suite: org.apache.lucene.search.TestElevationComparator
   [junit4] Completed on J0 in 0.02s, 1 test
   [junit4]  
   [junit4] Suite: org.apache.lucene.search.TestFieldValueFilter
   [junit4] Completed on J3 in 0.15s, 2 tests
   [junit4]  
   [junit4] Suite: org.apache.lucene.search.TestBooleanScorer
   [junit4] Completed on J1 in 0.03s, 3 tests
   [junit4]  
   [junit4] Suite: org.apache.lucene.util.TestRecyclingByteBlockAllocator
   [junit4] Completed on J0 in 0.02s, 3 tests
   [junit4]  
   [junit4] Suite: org.apache.lucene.util.TestSortedVIntList
   [junit4] Completed on J3 in 0.04s, 19 tests
   [junit4]  
   [junit4] Suite: org.apache.lucene.search.TestPositionIncrement
   [junit4] Completed on J1 in 0.02s, 2 tests
   [junit4]  
   [junit4] Suite: org.apache.lucene.document.TestDateTools
   [junit4] Completed on J0 in 0.01s, 5 tests
   [junit4]  
   [junit4] Suite: org.apache.lucene.search.TestAutomatonQueryUnicode
   [junit4] Completed on J3 in 0.01s, 1 test
   [junit4]  
   [junit4] Suite: org.apache.lucene.search.TestPrefixFilter
   [junit4] Completed on J1 in 0.01s, 1 test
   [junit4]  
   [junit4] Suite: org.apache.lucene.document.TestBinaryDocument
   [junit4] Completed on J0 in 0.02s, 2 tests
   [junit4]  
   [junit4] Suite: org.apache.lucene.search.spans.TestSpanFirstQuery
   [junit4] Completed on J3 in 0.01s, 1 test
   [junit4]  
   [junit4] Suite: org.apache.lucene.index.TestNoMergePolicy
   [junit4] Completed 

Jenkins build is back to normal : Lucene-Core-4x-Beasting #6763

2012-06-22 Thread hudsonseviltwin
See http://sierranevada.servebeer.com/job/Lucene-Core-4x-Beasting/6763/





[jira] [Updated] (SOLR-1856) In Solr Cell, literals should override Tika-parsed values

2012-06-22 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/SOLR-1856?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jan Høydahl updated SOLR-1856:
--

Fix Version/s: 5.0

 In Solr Cell, literals should override Tika-parsed values
 -

 Key: SOLR-1856
 URL: https://issues.apache.org/jira/browse/SOLR-1856
 Project: Solr
  Issue Type: Improvement
  Components: contrib - Solr Cell (Tika extraction)
Reporter: Chris Harris
Assignee: Jan Høydahl
 Fix For: 4.0, 5.0

 Attachments: SOLR-1856.patch


 I propose that ExtractingRequestHandler / SolrCell literals should take 
 precedence over Tika-parsed metadata in all situations, including where 
 multiValued=true. (Compare SOLR-1633?)
 My personal motivation is that I have several fields (e.g. title, date) 
 where my own metadata is much superior to what Tika offers, and I want to 
 throw those Tika values away. (I actually wouldn't mind throwing away _all_ 
 Tika-parsed values, but let's set that aside.) SOLR-1634 is one potential 
 approach to this, but the fix here might be simpler.
 I'll attach a patch shortly.




Re: suggester/autocomplete locks file preventing replication

2012-06-22 Thread tom

ah thx and to answer ur question:

no we are still on 3.5 and i had tested it on win7 (see the very end of 
this email thread)
and after having briefly looked @ the 3.6 code the class has changed 
quite a bit since 3.5...


On 22.06.2012 12:16, Simon Willnauer wrote:

here is the issue https://issues.apache.org/jira/browse/SOLR-3570

On Fri, Jun 22, 2012 at 11:55 AM, Simon Willnauer 
simon.willna...@googlemail.com 
mailto:simon.willna...@googlemail.com wrote:




On Fri, Jun 22, 2012 at 11:47 AM, Simon Willnauer
simon.willna...@googlemail.com
mailto:simon.willna...@googlemail.com wrote:



On Fri, Jun 22, 2012 at 10:37 AM, tom dev.tom.men...@gmx.net
mailto:dev.tom.men...@gmx.net wrote:

cross posting this issue to the dev list in the hope to
get a response here...


I think you are right. Closing the Stream / Reader is the
responsibility of the caller not the FileDictionary IMO but
solr doesn't close it so that might cause your problems. Are
you running on windows by any chance?
I will create an issue and fix it.


hmm I just looked at it and I see a IOUtils.close call in
FileDictionary


https://svn.apache.org/repos/asf/lucene/dev/branches/lucene_solr_3_6/lucene/contrib/spellchecker/src/java/org/apache/lucene/search/suggest/FileDictionary.java

are you using solr 3.6?


simon



 Original Message 
Subject:Re: suggester/autocomplete locks file preventing
replication
Date:   Thu, 21 Jun 2012 17:11:40 +0200
From:   tom dev.tom.men...@gmx.net
mailto:dev.tom.men...@gmx.net
Reply-To:   solr-u...@lucene.apache.org
mailto:solr-u...@lucene.apache.org
To: solr-u...@lucene.apache.org
mailto:solr-u...@lucene.apache.org



Poking into the code, I think the FileDictionary class is the
culprit: it takes an InputStream as a ctor argument but never
releases the stream. What puzzles me is that the class seems to
allow a one-time iteration and then the stream is useless, unless
I'm missing something here.

is there a good reason for this or rather a bug?
should i move the topic to the dev list?


On 21.06.2012 14:49, tom wrote:
 BTW: a core unload doesn't release the lock either ;(


 On 21.06.2012 14:39, tom wrote:
 hi,

 i'm using the suggester with a file like so:

   <searchComponent class="solr.SpellCheckComponent" name="suggest">
     <lst name="spellchecker">
       <str name="name">suggest</str>
       <str name="classname">org.apache.solr.spelling.suggest.Suggester</str>
       <str name="lookupImpl">org.apache.solr.spelling.suggest.fst.FSTLookup</str>
       <!-- Alternatives to lookupImpl:
            org.apache.solr.spelling.suggest.fst.FSTLookup [finite state automaton]
            org.apache.solr.spelling.suggest.jaspell.JaspellLookup [default, jaspell-based]
            org.apache.solr.spelling.suggest.tst.TSTLookup [ternary trees]
       -->
       <!-- the indexed field to derive suggestions from -->
       <!-- TODO must change this to spell or smth alike later -->
       <str name="field">content</str>
       <float name="threshold">0.05</float>
       <str name="buildOnCommit">true</str>
       <str name="weightBuckets">100</str>
       <str name="sourceLocation">autocomplete.dictionary</str>
     </lst>
   </searchComponent>

 when trying to replicate i get the following error message on the
 slave side:

  2012-06-21 14:34:50,781 ERROR
 [pool-3-thread-1  ]
 handler.ReplicationHandler- SnapPull failed
 org.apache.solr.common.SolrException: Unable to rename: path
 autocomplete.dictionary.20120620120611
 at
 
org.apache.solr.handler.SnapPuller.copyTmpConfFiles2Conf(SnapPuller.java:642)
 at
 
org.apache.solr.handler.SnapPuller.downloadConfFiles(SnapPuller.java:526)
 at
 
org.apache.solr.handler.SnapPuller.fetchLatestIndex(SnapPuller.java:299)
 at
 
org.apache.solr.handler.ReplicationHandler.doFetch(ReplicationHandler.java:268)
 at 
org.apache.solr.handler.SnapPuller$1.run(SnapPuller.java:159)
 at
 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:441)
 at
 

Re: suggester/autocomplete locks file preventing replication

2012-06-22 Thread Simon Willnauer
On Fri, Jun 22, 2012 at 1:06 PM, tom dev.tom.men...@gmx.net wrote:

  ah thx and to answer ur question:

 no we are still on 3.5 and i had tested it on win7 (see the very end of
 this email thread)
 and after having briefly looked @ the 3.6 code the class has changed quite
 a bit since 3.5...


yeah, that is right! I rewrote most of the parts in the suggest code; I think
this should be fine now. So you might want to upgrade to 3.6 to get this
fixed.

simon



 On 22.06.2012 12:16, Simon Willnauer wrote:

 here is the issue https://issues.apache.org/jira/browse/SOLR-3570

 On Fri, Jun 22, 2012 at 11:55 AM, Simon Willnauer 
 simon.willna...@googlemail.com wrote:



  On Fri, Jun 22, 2012 at 11:47 AM, Simon Willnauer 
 simon.willna...@googlemail.com wrote:



  On Fri, Jun 22, 2012 at 10:37 AM, tom dev.tom.men...@gmx.net wrote:

  cross posting this issue to the dev list in the hope to get a response
 here...


  I think you are right. Closing the Stream / Reader is the
 responsibility of the caller, not the FileDictionary, IMO, but Solr doesn't
 close it, so that might cause your problems. Are you running on Windows by
 any chance?
 I will create an issue and fix it.


  hmm, I just looked at it and I see an IOUtils.close call in FileDictionary


 https://svn.apache.org/repos/asf/lucene/dev/branches/lucene_solr_3_6/lucene/contrib/spellchecker/src/java/org/apache/lucene/search/suggest/FileDictionary.java

  are you using solr 3.6?


  simon



  Original Message   Subject: Re:
 suggester/autocomplete locks file preventing replication  Date: Thu,
 21 Jun 2012 17:11:40 +0200  From: tom 
 dev.tom.men...@gmx.netdev.tom.men...@gmx.net  Reply-To:
 solr-u...@lucene.apache.org  To: solr-u...@lucene.apache.org


 Poking into the code, I think the FileDictionary class is the culprit:
 it takes an InputStream as a ctor argument but never releases the
 stream. What puzzles me is that the class seems to allow a one-time
 iteration and then the stream is useless, unless I'm missing something here.

 is there a good reason for this or rather a bug?
 should i move the topic to the dev list?


 On 21.06.2012 14:49, tom wrote:
  BTW: a core unload doesn't release the lock either ;(
 
 
  On 21.06.2012 14:39, tom wrote:
  hi,
 
  i'm using the suggester with a file like so:
 
    <searchComponent class="solr.SpellCheckComponent" name="suggest">
      <lst name="spellchecker">
        <str name="name">suggest</str>
        <str name="classname">org.apache.solr.spelling.suggest.Suggester</str>
        <str name="lookupImpl">org.apache.solr.spelling.suggest.fst.FSTLookup</str>
        <!-- Alternatives to lookupImpl:
             org.apache.solr.spelling.suggest.fst.FSTLookup [finite state automaton]
             org.apache.solr.spelling.suggest.jaspell.JaspellLookup [default, jaspell-based]
             org.apache.solr.spelling.suggest.tst.TSTLookup [ternary trees]
        -->
        <!-- the indexed field to derive suggestions from -->
        <!-- TODO must change this to spell or smth alike later -->
        <str name="field">content</str>
        <float name="threshold">0.05</float>
        <str name="buildOnCommit">true</str>
        <str name="weightBuckets">100</str>
        <str name="sourceLocation">autocomplete.dictionary</str>
      </lst>
    </searchComponent>
 
  when trying to replicate i get the following error message on the
  slave side:
 
   2012-06-21 14:34:50,781 ERROR
  [pool-3-thread-1  ]
  handler.ReplicationHandler- SnapPull failed
  org.apache.solr.common.SolrException: Unable to rename: path
  autocomplete.dictionary.20120620120611
  at
  org.apache.solr.handler.SnapPuller.copyTmpConfFiles2Conf(SnapPuller.java:642)
  at
  org.apache.solr.handler.SnapPuller.downloadConfFiles(SnapPuller.java:526)
  at
  org.apache.solr.handler.SnapPuller.fetchLatestIndex(SnapPuller.java:299)
  at
  org.apache.solr.handler.ReplicationHandler.doFetch(ReplicationHandler.java:268)
  at org.apache.solr.handler.SnapPuller$1.run(SnapPuller.java:159)
  at
  java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:441)
  at
  java.util.concurrent.FutureTask$Sync.innerRunAndReset(FutureTask.java:317)
  at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:150)
  at
  java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$101(ScheduledThreadPoolExecutor.java:98)
  at
  java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.runPeriodic(ScheduledThreadPoolExecutor.java:181)
  at
  java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:205)
  at
  java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:885)
  at
  java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:907)
  at java.lang.Thread.run(Thread.java:619)
 
  so i dug around it and found out that the solr's java process holds a
  lock on the autocomplete.dictionary file. any reason why this 

[jira] [Updated] (SOLR-3561) Error during deletion of shard/core

2012-06-22 Thread Mark Miller (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-3561?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Miller updated SOLR-3561:
--

Fix Version/s: 4.0

 Error during deletion of shard/core
 ---

 Key: SOLR-3561
 URL: https://issues.apache.org/jira/browse/SOLR-3561
 Project: Solr
  Issue Type: Bug
  Components: multicore, replication (java), SolrCloud
Affects Versions: 4.0
 Environment: Solr trunk (4.0-SNAPSHOT) from 29/2-2012
Reporter: Per Steffensen
 Fix For: 4.0


 Running several Solr servers in Cloud-cluster (zkHost set on the Solr 
 servers).
 Several collections with several slices and one replica for each slice (each 
 slice has two shards)
 Basically we want let our system delete an entire collection. We do this by 
 trying to delete each and every shard under the collection. Each shard is 
 deleted one by one, by doing CoreAdmin-UNLOAD-requests against the relevant 
 Solr
 {code}
 CoreAdminRequest request = new CoreAdminRequest();
 request.setAction(CoreAdminAction.UNLOAD);
 request.setCoreName(shardName);
 CoreAdminResponse resp = request.process(new CommonsHttpSolrServer(solrUrl));
 {code}
 The delete/unload succeeds, but in like 10% of the cases we get errors on 
 involved Solr servers, right around the time the shards/cores are deleted, 
 and we end up in a situation where ZK still claims (forever) that the deleted 
 shard is still present and active.
 From here the issue is more easily explained by a concrete example:
 - 7 Solr servers involved
 - Several collections, among them one called collection_2012_04, consisting of 28 
 slices, 56 shards (remember 1 replica for each slice) named 
 collection_2012_04_sliceX_shardY for all pairs in {X:1..28}x{Y:1,2}
 - Each Solr server running 8 shards, e.g Solr server #1 is running shard 
 collection_2012_04_slice1_shard1 and Solr server #7 is running shard 
 collection_2012_04_slice1_shard2 belonging to the same slice slice1.
 When we decide to delete the collection collection_2012_04, we go through 
 all 56 shards and delete/unload them one by one - including 
 collection_2012_04_slice1_shard1 and collection_2012_04_slice1_shard2. At 
 some point during or shortly after all this deletion, we see the following 
 exceptions in solr.log on Solr server #7:
 {code}
 Aug 1, 2012 12:02:50 AM org.apache.solr.common.SolrException log
 SEVERE: Error while trying to recover:org.apache.solr.common.SolrException: 
 core not found:collection_2012_04_slice1_shard1
 request: 
 http://solr_server_1:8983/solr/admin/cores?action=PREPRECOVERY&core=collection_2012_04_slice1_shard1&nodeName=solr_server_7%3A8983_solr&coreNodeName=solr_server_7%3A8983_solr_collection_2012_04_slice1_shard2&state=recovering&checkLive=true&pauseFor=6000&wt=javabin&version=2
 at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
 at 
 sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
 at 
 sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
 at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
 at 
 org.apache.solr.common.SolrExceptionPropagationHelper.decodeFromMsg(SolrExceptionPropagationHelper.java:29)
 at 
 org.apache.solr.client.solrj.impl.CommonsHttpSolrServer.request(CommonsHttpSolrServer.java:445)
 at 
 org.apache.solr.client.solrj.impl.CommonsHttpSolrServer.request(CommonsHttpSolrServer.java:264)
 at 
 org.apache.solr.cloud.RecoveryStrategy.sendPrepRecoveryCmd(RecoveryStrategy.java:188)
 at 
 org.apache.solr.cloud.RecoveryStrategy.doRecovery(RecoveryStrategy.java:285)
 at org.apache.solr.cloud.RecoveryStrategy.run(RecoveryStrategy.java:206)
 Aug 1, 2012 12:02:50 AM org.apache.solr.common.SolrException log
 SEVERE: Recovery failed - trying again...
 Aug 1, 2012 12:02:51 AM org.apache.solr.cloud.LeaderElector$1 process
 WARNING:
 java.lang.IndexOutOfBoundsException: Index: 0, Size: 0
 at java.util.ArrayList.RangeCheck(ArrayList.java:547)
 at java.util.ArrayList.get(ArrayList.java:322)
 at org.apache.solr.cloud.LeaderElector.checkIfIamLeader(LeaderElector.java:96)
 at org.apache.solr.cloud.LeaderElector.access$000(LeaderElector.java:57)
 at org.apache.solr.cloud.LeaderElector$1.process(LeaderElector.java:121)
 at 
 org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:531)
 at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:507)
 Aug 1, 2012 12:02:51 AM org.apache.solr.cloud.LeaderElector$1 process
 {code}
 I'm not sure exactly how to interpret this, but it seems to me that some 
 recovery job tries to recover collection_2012_04_slice1_shard2 on Solr server 
 #7 from collection_2012_04_slice1_shard1 on Solr server #1, but fails because 
 Solr server #1 answers back that it doesn't run 
 collection_2012_04_slice1_shard1 (anymore).
 This problem occurs for 

[jira] [Updated] (SOLR-3563) Collection in ZK not deleted when all shards has been unloaded

2012-06-22 Thread Mark Miller (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-3563?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Miller updated SOLR-3563:
--

Fix Version/s: 4.0

 Collection in ZK not deleted when all shards has been unloaded
 --

 Key: SOLR-3563
 URL: https://issues.apache.org/jira/browse/SOLR-3563
 Project: Solr
  Issue Type: Bug
  Components: multicore, SolrCloud
Affects Versions: 4.0
 Environment: Same as SOLR-3561
Reporter: Per Steffensen
Priority: Minor
 Fix For: 4.0


 Same scenario as SOLR-3561 - deleting shards/cores using the CoreAdmin UNLOAD 
 command.
 I have noticed that when I have done a CoreAdmin UNLOAD for all shards under a 
 collection, the collection and all its slices are still present in ZK 
 under /collections. It might be OK since the operation is called UNLOAD, but I 
 basically want to delete an entire collection and all data related to it 
 (including information about it in ZK).
 A delete-collection operation, that also deletes info about the collection 
 under /collections in ZK, would be very nice! Or a delete-shard/core 
 operation and then some nice logic that detects when all shards belonging to 
 a collection have been deleted, and when that has happened deletes the info 
 about the collection under /collections in ZK.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-3488) Create a Collections API for SolrCloud

2012-06-22 Thread Mark Miller (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-3488?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Miller updated SOLR-3488:
--

Fix Version/s: 4.0

 Create a Collections API for SolrCloud
 --

 Key: SOLR-3488
 URL: https://issues.apache.org/jira/browse/SOLR-3488
 Project: Solr
  Issue Type: New Feature
  Components: SolrCloud
Reporter: Mark Miller
Assignee: Mark Miller
 Fix For: 4.0

 Attachments: SOLR-3488.patch, SOLR-3488.patch, SOLR-3488.patch, 
 SOLR-3488_2.patch




--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-3529) clarify distnction between index & query tables on analysis debug pages

2012-06-22 Thread Erick Erickson (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-3529?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13399286#comment-13399286
 ] 

Erick Erickson commented on SOLR-3529:
--

OK, I _will_ make a comment or two... But understand that my UI design skills 
are legendary... legendarily bad.

 Let's not default to verbose. That's real intimidating for someone seeing it 
 for the first time.

 Let's remove one of the entry fields. 99% of the time I really _want_ to see 
 the same input go through at least the index and query chains. I'd rather put 
 different text in a single entry field on those rare occasions when I want 
 different index and query terms analyzed than copy all the time.

 I think it's _extremely_ valuable to have side-by-side display. When someone 
 just starting looks at, say, WDFF on a side-by-side screen, it gives them 
 information that they'd never notice otherwise. If this is only in 
 non-verbose mode, fine.

 I think that having the sliders for long input when side-by-side is 
 preferable to stacking anything vertically. Most of the time, I'm only 
 worried about a few terms anyway and they usually fit side-by-side.

 Putting on my newbie hat, it's not obvious that the really cool display of 
 the class when you hover over the abbreviations for the tokenizers/filters is 
 available. Perhaps a note somewhere: "hover over the abbreviations to see the 
 definition"? What'd be really cool is to show the complete definition in the 
 hover box (e.g. the catenateWords, catenateNumbers from WDFF). Don't quite 
 know how to get it to the UI, but... <g>

 I thought a bit about the MultiTermAware stuff, and it's a sticky wicket for the 
 same reasons it's always been one: this bypasses parsing. Can we cheat here? 
 I have in mind some weird bit where we detect wildcards on input, and somehow 
 send all the wildcard terms through the multiterm chain and put them back in 
 the display in place of the wildcards that went through the whole chain. I 
 _like_ making Stefan work hard <g>... OK, maybe I can help. Perhaps we can 
 colorize this switch with a note about it somehow to signal that something 
 you may not have expected happened.



 clarify distnction between index & query tables on analysis debug pages
 ---

 Key: SOLR-3529
 URL: https://issues.apache.org/jira/browse/SOLR-3529
 Project: Solr
  Issue Type: Improvement
  Components: web gui
Reporter: Hoss Man
Assignee: Stefan Matheis (steffkes)
 Fix For: 4.0

 Attachments: SOLR-3529.patch, SOLR-3529.patch, 
 long-side-by-side-below-fold.png, long-side-by-side.png, med-just-index.png, 
 med-query-only.png, med-side-by-side-below-fold.png, med-side-by-side.png, 
 short-side-by-side.png


 Working on the tutorial, i noticed that the analysis debug page is a little 
 hard to read when specifying both index and query values
  * if the inputs are short, it's not entirely obvious that you are looking at 
 two tables (especially compared to how the page looks using only the index 
 textbox)
  * if the inputs are longer, the query table shifts down below the fold 
 where users may not even be aware of it.
 Screenshots and ideas for improvement to follow

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-1856) In Solr Cell, literals should override Tika-parsed values

2012-06-22 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/SOLR-1856?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jan Høydahl updated SOLR-1856:
--

Attachment: SOLR-1856.patch

Updated patch for trunk, with /trunk as base, not /solr.

I added the request param literalsOverride=true|false which defaults to true, 
and documented it at http://wiki.apache.org/solr/ExtractingRequestHandler

Think this is ready for commit, will then backport to 4.x

 In Solr Cell, literals should override Tika-parsed values
 -

 Key: SOLR-1856
 URL: https://issues.apache.org/jira/browse/SOLR-1856
 Project: Solr
  Issue Type: Improvement
  Components: contrib - Solr Cell (Tika extraction)
Reporter: Chris Harris
Assignee: Jan Høydahl
 Fix For: 4.0, 5.0

 Attachments: SOLR-1856.patch, SOLR-1856.patch


 I propose that ExtractingRequestHandler / SolrCell literals should take 
 precedence over Tika-parsed metadata in all situations, including where 
 multiValued=true. (Compare SOLR-1633?)
 My personal motivation is that I have several fields (e.g. title, date) 
 where my own metadata is much superior to what Tika offers, and I want to 
 throw those Tika values away. (I actually wouldn't mind throwing away _all_ 
 Tika-parsed values, but let's set that aside.) SOLR-1634 is one potential 
 approach to this, but the fix here might be simpler.
 I'll attach a patch shortly.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-1929) Index encrypted files

2012-06-22 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/SOLR-1929?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jan Høydahl updated SOLR-1929:
--

  Description: SolrCell should be able to index encrypted files (pdfs, word 
docs).  (was: SolrCell is not able to index encrypted pdfs.
This is easily done by supplying the password in the metadata passed on to tika)
Fix Version/s: 5.0
  Summary: Index encrypted files  (was: Index encrypted pdf files)

For PDFs there was a possibility of supplying the password in the metadata 
passed on to Tika (as in the first patch here). However, since TIKA-850, we can 
now supply a PasswordProvider on the context, which will provide the password 
and is future-proof for any document type which supports it.

 Index encrypted files
 -

 Key: SOLR-1929
 URL: https://issues.apache.org/jira/browse/SOLR-1929
 Project: Solr
  Issue Type: Improvement
  Components: contrib - Solr Cell (Tika extraction)
Reporter: Yiannis Pericleous
Assignee: Jan Høydahl
Priority: Minor
 Fix For: 4.0, 5.0

 Attachments: SOLR-1929.patch


 SolrCell should be able to index encrypted files (pdfs, word docs).

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-3488) Create a Collections API for SolrCloud

2012-06-22 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-3488?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13399290#comment-13399290
 ] 

Mark Miller commented on SOLR-3488:
---

To get something incrementally committable, I'm changing from using a collection 
template to a simple numReplicas. I have hit an annoying stall where it is 
difficult to get all of the node host urls. The live_nodes list is translated 
from the url to a path-safe form. It's not reversible if _ is in the original 
url. You can put the url in data at each node, but then you have to slowly read 
each node rather than make a simple getChildren call. You can also try to find 
every node by running through the whole json cluster state file - but that 
wouldn't give you any nodes that have no cores on them at the moment (say, 
after a collection delete).
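The irreversibility can be shown with a toy version of the url-to-node-name translation (the exact substitution rules in SolrCloud may differ; this is only an illustration):

```java
public class NodeNameDemo {
    // Path-safe encoding in the style of live_nodes entries:
    // ':' and '/' become '_' so the url can serve as a ZK node name.
    static String encode(String url) {
        return url.replace(':', '_').replace('/', '_');
    }

    public static void main(String[] args) {
        String a = "host:8983/solr"; // '/' separates port from context path
        String b = "host:8983_solr"; // '_' already present in the url
        // Both collapse to the same node name, so decoding is ambiguous.
        System.out.println(encode(a));
        System.out.println(encode(b));
        System.out.println(encode(a).equals(encode(b)));
    }
}
```

Both inputs print as host_8983_solr, so a decoder cannot tell which character a given '_' originally was.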

 Create a Collections API for SolrCloud
 --

 Key: SOLR-3488
 URL: https://issues.apache.org/jira/browse/SOLR-3488
 Project: Solr
  Issue Type: New Feature
  Components: SolrCloud
Reporter: Mark Miller
Assignee: Mark Miller
 Fix For: 4.0

 Attachments: SOLR-3488.patch, SOLR-3488.patch, SOLR-3488.patch, 
 SOLR-3488_2.patch




--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-4155) Move & hide ReaderSlice and BitSlice classes (and possibly others) to oal.index package; move ReaderUtil to oal.index

2012-06-22 Thread Uwe Schindler (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-4155?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Uwe Schindler updated LUCENE-4155:
--

Attachment: LUCENE-4155.patch

New patch, moving and renaming more classes to appropriate packages:

- CodecUtils to codecs package
- TwoPhaseCommit* and TermContext to index package

I will commit this later today, as the patch might get outdated soon.

 Move & hide ReaderSlice and BitSlice classes (and possibly others) to 
 oal.index package; move ReaderUtil to oal.index
 -

 Key: LUCENE-4155
 URL: https://issues.apache.org/jira/browse/LUCENE-4155
 Project: Lucene - Java
  Issue Type: Task
  Components: core/index
Reporter: Uwe Schindler
Assignee: Uwe Schindler
 Fix For: 4.0, 5.0

 Attachments: LUCENE-4155.patch, LUCENE-4155.patch, LUCENE-4155.patch


 Those are used solely by the index package and are very internal (just helper 
 classes), so they should be hidden from the user. This can be done by making 
 them pkg-private in the index package.
 ReaderUtil was cleaned up in LUCENE-3866 and should stay public, but it has 
 been in the wrong package since Lucene 2.9. We should move it to the oal.index 
 package, too. Its name suggests that.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-4163) Improve concurrency in MMapIndexInput.clone()

2012-06-22 Thread Uwe Schindler (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-4163?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Uwe Schindler updated LUCENE-4163:
--

Attachment: (was: LUCENE-4163.patch)

 Improve concurrency in MMapIndexInput.clone()
 -

 Key: LUCENE-4163
 URL: https://issues.apache.org/jira/browse/LUCENE-4163
 Project: Lucene - Java
  Issue Type: Improvement
  Components: core/store
Affects Versions: 3.6, 4.0, 5.0
Reporter: Uwe Schindler
Assignee: Uwe Schindler
 Fix For: 4.0, 3.6.1, 5.0

 Attachments: LUCENE-4163.patch, LUCENE-4163.patch, LUCENE-4163.patch


 Followup issue from SOLR-3566:
 Whenever you clone the TermIndex, it also creates a clone of the underlying 
 IndexInputs. In highly concurrent environments, the clone method of 
 MMapIndexInput is a bottleneck (it has heavy work to do to manage the weak 
 references in a synchronized block).
 Everywhere else in Lucene we use my new WeakIdentityMap for managing 
 concurrent weak maps. For this case I did not do this, as the WeakIdentityMap 
 has no iterators (it does not implement the Map interface). This issue will 
 add a key and a values iterator (the key iterator will not return GC'ed keys), 
 so MMapIndexInput can use a WeakIdentityMap backed by ConcurrentHashMap and 
 needs no synchronization. ConcurrentHashMap has better concurrency because it 
 distributes the hash keys into different buckets per thread.
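The core idea, identity-hashed weak keys over a ConcurrentHashMap, can be sketched as follows (the real oal.util.WeakIdentityMap is more elaborate, e.g. it also drains a ReferenceQueue to evict stale entries; the names here are illustrative):

```java
import java.lang.ref.WeakReference;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Weak *identity* map: keys compare by ==, not equals(), and do not
// prevent garbage collection. Reads and writes need no synchronized block
// because ConcurrentHashMap handles concurrency internally.
class TinyWeakIdentityMap<K, V> {
    // Wrapper so hashCode/equals use identity semantics on the weak referent.
    private static final class IdKey extends WeakReference<Object> {
        final int hash;
        IdKey(Object key) { super(key); this.hash = System.identityHashCode(key); }
        @Override public int hashCode() { return hash; }
        @Override public boolean equals(Object o) {
            return o instanceof IdKey && ((IdKey) o).get() == get() && get() != null;
        }
    }
    private final Map<IdKey, V> backing = new ConcurrentHashMap<>();
    public void put(K key, V value) { backing.put(new IdKey(key), value); }
    public V get(K key) { return backing.get(new IdKey(key)); }
}

public class WeakIdentityDemo {
    public static void main(String[] args) {
        TinyWeakIdentityMap<Object, Integer> m = new TinyWeakIdentityMap<>();
        Object k1 = new StringBuilder("clone").toString();
        Object k2 = new StringBuilder("clone").toString(); // equal, distinct object
        m.put(k1, 1);
        System.out.println(m.get(k1)); // same identity: found
        System.out.println(m.get(k2)); // equals() alone is not enough: null
    }
}
```

This is why such a map suits a per-instance clone cache: cloned IndexInputs are tracked by object identity, and entries vanish once the clones are garbage collected.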

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-4163) Improve concurrency in MMapIndexInput.clone()

2012-06-22 Thread Uwe Schindler (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-4163?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Uwe Schindler updated LUCENE-4163:
--

Attachment: LUCENE-4163.patch

 Improve concurrency in MMapIndexInput.clone()
 -

 Key: LUCENE-4163
 URL: https://issues.apache.org/jira/browse/LUCENE-4163
 Project: Lucene - Java
  Issue Type: Improvement
  Components: core/store
Affects Versions: 3.6, 4.0, 5.0
Reporter: Uwe Schindler
Assignee: Uwe Schindler
 Fix For: 4.0, 3.6.1, 5.0

 Attachments: LUCENE-4163.patch, LUCENE-4163.patch, LUCENE-4163.patch


 Followup issue from SOLR-3566:
 Whenever you clone the TermIndex, it also creates a clone of the underlying 
 IndexInputs. In highly concurrent environments, the clone method of 
 MMapIndexInput is a bottleneck (it has heavy work to do to manage the weak 
 references in a synchronized block).
 Everywhere else in Lucene we use my new WeakIdentityMap for managing 
 concurrent weak maps. For this case I did not do this, as the WeakIdentityMap 
 has no iterators (it does not implement the Map interface). This issue will 
 add a key and a values iterator (the key iterator will not return GC'ed keys), 
 so MMapIndexInput can use a WeakIdentityMap backed by ConcurrentHashMap and 
 needs no synchronization. ConcurrentHashMap has better concurrency because it 
 distributes the hash keys into different buckets per thread.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-4.x-Windows-Java7-64 - Build # 124 - Failure!

2012-06-22 Thread Policeman Jenkins Server
Build: 
http://jenkins.sd-datasolutions.de/job/Lucene-Solr-4.x-Windows-Java7-64/124/

1 tests failed.
REGRESSION:  org.apache.solr.spelling.suggest.SuggesterWFSTTest.testRebuild

Error Message:
Exception during query

Stack Trace:
java.lang.RuntimeException: Exception during query
at 
__randomizedtesting.SeedInfo.seed([3784EB9B1B0AAB66:6CA149D82F0AD1FC]:0)
at org.apache.solr.SolrTestCaseJ4.assertQ(SolrTestCaseJ4.java:459)
at org.apache.solr.SolrTestCaseJ4.assertQ(SolrTestCaseJ4.java:426)
at 
org.apache.solr.spelling.suggest.SuggesterTest.testRebuild(SuggesterTest.java:105)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:601)
at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1969)
at com.carrotsearch.randomizedtesting.RandomizedRunner.access$1100(RandomizedRunner.java:132)
at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:814)
at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:875)
at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:889)
at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at org.apache.lucene.util.TestRuleFieldCacheSanity$1.evaluate(TestRuleFieldCacheSanity.java:32)
at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at org.apache.lucene.util.TestRuleReportUncaughtExceptions$1.evaluate(TestRuleReportUncaughtExceptions.java:68)
at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:821)
at com.carrotsearch.randomizedtesting.RandomizedRunner.access$700(RandomizedRunner.java:132)
at com.carrotsearch.randomizedtesting.RandomizedRunner$3$1.run(RandomizedRunner.java:669)
at com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:695)
at com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:734)
at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:745)
at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at org.apache.lucene.util.TestRuleReportUncaughtExceptions$1.evaluate(TestRuleReportUncaughtExceptions.java:68)
at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:38)
at org.apache.lucene.util.TestRuleIcuHack$1.evaluate(TestRuleIcuHack.java:51)
at com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at org.apache.lucene.util.TestRuleNoInstanceHooksOverrides$1.evaluate(TestRuleNoInstanceHooksOverrides.java:53)
at org.apache.lucene.util.TestRuleNoStaticHooksShadowing$1.evaluate(TestRuleNoStaticHooksShadowing.java:52)
at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:36)
at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:56)
at com.carrotsearch.randomizedtesting.RandomizedRunner.runSuite(RandomizedRunner.java:605)
at com.carrotsearch.randomizedtesting.RandomizedRunner.access$400(RandomizedRunner.java:132)
at com.carrotsearch.randomizedtesting.RandomizedRunner$2.run(RandomizedRunner.java:551)
Caused by: java.lang.NullPointerException
at org.apache.lucene.search.suggest.fst.WFSTCompletionLookup.lookupPrefix(WFSTCompletionLookup.java:193)
at org.apache.lucene.search.suggest.fst.WFSTCompletionLookup.lookup(WFSTCompletionLookup.java:155)
at org.apache.solr.spelling.suggest.Suggester.getSuggestions(Suggester.java:188)
at org.apache.solr.handler.component.SpellCheckComponent.process(SpellCheckComponent.java:172)
at 

[jira] [Created] (SOLR-3571) You should have the option of removing the data dir when unloading a core.

2012-06-22 Thread Mark Miller (JIRA)
Mark Miller created SOLR-3571:
-

 Summary: You should have the option of removing the data dir when 
unloading a core.
 Key: SOLR-3571
 URL: https://issues.apache.org/jira/browse/SOLR-3571
 Project: Solr
  Issue Type: Improvement
Reporter: Mark Miller
Priority: Minor




--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-3571) You should have the option of removing the data dir when unloading a core.

2012-06-22 Thread Mark Miller (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-3571?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Miller resolved SOLR-3571.
---

Resolution: Duplicate

 You should have the option of removing the data dir when unloading a core.
 --

 Key: SOLR-3571
 URL: https://issues.apache.org/jira/browse/SOLR-3571
 Project: Solr
  Issue Type: Improvement
Reporter: Mark Miller
Priority: Minor






[jira] [Commented] (LUCENE-4155) Move & hide ReaderSlice and BitSlice classes (and possibly others) to oal.index package; move ReaderUtil to oal.index

2012-06-22 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-4155?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13399399#comment-13399399
 ] 

Michael McCandless commented on LUCENE-4155:


+1, thanks Uwe!

 Move & hide ReaderSlice and BitSlice classes (and possibly others) to 
 oal.index package; move ReaderUtil to oal.index
 -

 Key: LUCENE-4155
 URL: https://issues.apache.org/jira/browse/LUCENE-4155
 Project: Lucene - Java
  Issue Type: Task
  Components: core/index
Reporter: Uwe Schindler
Assignee: Uwe Schindler
 Fix For: 4.0, 5.0

 Attachments: LUCENE-4155.patch, LUCENE-4155.patch, LUCENE-4155.patch


 Those are used solely by the index package and are very internal (just helper 
 classes), so they should be hidden from the user. This can be done by making 
 them pkg-private in the index package.
 ReaderUtil was cleaned up in LUCENE-3866 and should stay public, but it has 
 been in the wrong package since Lucene 2.9. We should move it to the 
 oal.index package, too, as its name suggests.




[jira] [Resolved] (LUCENE-4155) Move & hide ReaderSlice and BitSlice classes (and possibly others) to oal.index package; move ReaderUtil to oal.index

2012-06-22 Thread Uwe Schindler (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-4155?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Uwe Schindler resolved LUCENE-4155.
---

Resolution: Fixed

Committed trunk revision: 1352942, 1352949
Backported 4.x revision: 1352956

I hope all compiles and tests on Jenkins!

 Move & hide ReaderSlice and BitSlice classes (and possibly others) to 
 oal.index package; move ReaderUtil to oal.index
 -

 Key: LUCENE-4155
 URL: https://issues.apache.org/jira/browse/LUCENE-4155
 Project: Lucene - Java
  Issue Type: Task
  Components: core/index
Reporter: Uwe Schindler
Assignee: Uwe Schindler
 Fix For: 4.0, 5.0

 Attachments: LUCENE-4155.patch, LUCENE-4155.patch, LUCENE-4155.patch


 Those are used solely by the index package and are very internal (just helper 
 classes), so they should be hidden from the user. This can be done by making 
 them pkg-private in the index package.
 ReaderUtil was cleaned up in LUCENE-3866 and should stay public, but it has 
 been in the wrong package since Lucene 2.9. We should move it to the 
 oal.index package, too, as its name suggests.




[jira] [Updated] (LUCENE-4069) Segment-level Bloom filters for a 2 x speed up on rare term searches

2012-06-22 Thread Mark Harwood (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-4069?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Harwood updated LUCENE-4069:
-

Attachment: PKLookupUpdatePerfTest.java

Attached a performance test (adapted from Mike's PKLookupPerfTest) that 
demonstrates the worst-case scenario, in which the Bloom filter offers the 2x 
speed-up not previously revealed in Mike's other tests.

This test case mixes reads and writes on a growing index and is representative 
of the real-world scenario I am seeking to optimize. See the javadoc for test 
details.

 Segment-level Bloom filters for a 2 x speed up on rare term searches
 

 Key: LUCENE-4069
 URL: https://issues.apache.org/jira/browse/LUCENE-4069
 Project: Lucene - Java
  Issue Type: Improvement
  Components: core/index
Affects Versions: 3.6, 4.0
Reporter: Mark Harwood
Priority: Minor
 Fix For: 4.0, 3.6.1

 Attachments: BloomFilterPostingsBranch4x.patch, 
 MHBloomFilterOn3.6Branch.patch, PKLookupUpdatePerfTest.java, 
 PrimaryKeyPerfTest40.java


 An addition to each segment which stores a Bloom filter for selected fields, 
 in order to fast-fail term searches and help avoid wasted disk access.
 Best suited for low-frequency fields, e.g. primary keys on big indexes with 
 many segments, but it also speeds up general searching in my tests.
 Overview slideshow here: 
 http://www.slideshare.net/MarkHarwood/lucene-bloomfilteredsegments
 Benchmarks based on Wikipedia content here: http://goo.gl/X7QqU
 Patch based on the 3.6 codebase attached.
 There are no 3.6 API changes currently - to play, just add a field with _blm 
 on the end of the name to invoke the special indexing/querying capability. 
 Clearly a new Field or schema declaration(!) would need adding to the APIs to 
 configure the service properly.
 Also, a patch for the Lucene 4.0 codebase introducing a new PostingsFormat
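 The fast-fail idea above can be sketched in a few lines. This is a minimal,
 hypothetical illustration (class name and hash choices are mine, not the
 patch's BloomFilterPostingsFormat): a per-segment bit set is consulted before
 the terms dictionary, and a negative answer is definitive, so the disk seek
 can be skipped.

```java
import java.util.BitSet;

// Hypothetical sketch of a per-segment Bloom filter for term lookups.
class TermBloomFilter {
    private final BitSet bits;
    private final int size;

    TermBloomFilter(int size) {
        this.size = size;
        this.bits = new BitSet(size);
    }

    // Two cheap hash functions derived from the term's hashCode;
    // floorMod keeps the index non-negative even when hashCode overflows.
    private int h1(String term) { return Math.floorMod(term.hashCode(), size); }
    private int h2(String term) { return Math.floorMod(31 * term.hashCode() + 7, size); }

    /** Called at indexing time for each term in the selected field. */
    void add(String term) {
        bits.set(h1(term));
        bits.set(h2(term));
    }

    /** false = term is definitely absent (fast-fail, skip the terms dictionary);
     *  true  = term MAY be present, so fall through to the real lookup. */
    boolean mayContain(String term) {
        return bits.get(h1(term)) && bits.get(h2(term));
    }
}
```

 A real implementation would size the bit set from the expected term count and
 desired false-positive rate; the benefit is largest for primary-key-style
 fields where most segments do not contain the queried term.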




[JENKINS] Lucene-Solr-trunk-Linux-Java6-64 - Build # 1008 - Failure!

2012-06-22 Thread Policeman Jenkins Server
Build: 
http://jenkins.sd-datasolutions.de/job/Lucene-Solr-trunk-Linux-Java6-64/1008/

All tests passed

Build Log:
[...truncated 15998 lines...]
  [javadoc] Generating Javadoc
  [javadoc] Javadoc execution
  [javadoc] Loading source files for package org.apache.lucene...
  [javadoc] Loading source files for package org.apache.lucene.analysis...
  [javadoc] Loading source files for package 
org.apache.lucene.analysis.tokenattributes...
  [javadoc] Loading source files for package org.apache.lucene.codecs...
  [javadoc] Loading source files for package 
org.apache.lucene.codecs.appending...
  [javadoc] Loading source files for package 
org.apache.lucene.codecs.intblock...
  [javadoc] Loading source files for package 
org.apache.lucene.codecs.lucene40...
  [javadoc] Loading source files for package 
org.apache.lucene.codecs.lucene40.values...
  [javadoc] Loading source files for package org.apache.lucene.codecs.memory...
  [javadoc] Loading source files for package 
org.apache.lucene.codecs.perfield...
  [javadoc] Loading source files for package org.apache.lucene.codecs.pulsing...
  [javadoc] Loading source files for package org.apache.lucene.codecs.sep...
  [javadoc] Loading source files for package 
org.apache.lucene.codecs.simpletext...
  [javadoc] Loading source files for package org.apache.lucene.document...
  [javadoc] Loading source files for package org.apache.lucene.index...
  [javadoc] Loading source files for package org.apache.lucene.search...
  [javadoc] Loading source files for package 
org.apache.lucene.search.payloads...
  [javadoc] Loading source files for package 
org.apache.lucene.search.similarities...
  [javadoc] Loading source files for package org.apache.lucene.search.spans...
  [javadoc] Loading source files for package org.apache.lucene.store...
  [javadoc] Loading source files for package org.apache.lucene.util...
  [javadoc] Loading source files for package org.apache.lucene.util.automaton...
  [javadoc] Loading source files for package org.apache.lucene.util.fst...
  [javadoc] Loading source files for package org.apache.lucene.util.mutable...
  [javadoc] Loading source files for package org.apache.lucene.util.packed...
  [javadoc] Constructing Javadoc information...
  [javadoc] Standard Doclet version 1.6.0_32
  [javadoc] Building tree for all the packages and classes...
  [javadoc] 
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux-Java6-64/checkout/lucene/core/src/java/org/apache/lucene/util/RecyclingByteBlockAllocator.java:78:
 warning - Tag @link: reference not found: DummyConcurrentLock
  [javadoc] 
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux-Java6-64/checkout/lucene/core/src/java/org/apache/lucene/util/RecyclingByteBlockAllocator.java:67:
 warning - Tag @link: reference not found: DummyConcurrentLock
  [javadoc] 
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux-Java6-64/checkout/lucene/core/src/java/org/apache/lucene/util/RecyclingByteBlockAllocator.java:50:
 warning - Tag @see: reference not found: DummyConcurrentLock
  [javadoc] 
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux-Java6-64/checkout/lucene/core/src/java/org/apache/lucene/util/RecyclingByteBlockAllocator.java:67:
 warning - Tag @link: reference not found: DummyConcurrentLock
  [javadoc] 
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux-Java6-64/checkout/lucene/core/src/java/org/apache/lucene/util/RecyclingByteBlockAllocator.java:78:
 warning - Tag @link: reference not found: DummyConcurrentLock
  [javadoc] Building index for all the packages and classes...
  [javadoc] Building index for all classes...
  [javadoc] Generating 
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux-Java6-64/checkout/lucene/build/docs/core/stylesheet.css...
  [javadoc] 5 warnings

BUILD FAILED
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux-Java6-64/checkout/lucene/common-build.xml:611:
 The following error occurred while executing this line:
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux-Java6-64/checkout/lucene/core/build.xml:49:
 The following error occurred while executing this line:
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux-Java6-64/checkout/lucene/common-build.xml:1418:
 Javadocs warnings were found!

Total time: 11 seconds
Build step 'Execute shell' marked build as failure
Archiving artifacts
Recording test results
Email was triggered for: Failure
Sending email for trigger: Failure





[JENKINS] Lucene-4.x - Build # 16 - Failure

2012-06-22 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-4.x/16/

No tests ran.

Build Log:
[...truncated 15456 lines...]

compile-test:
 [echo] Building suggest...

ivy-availability-check:

ivy-fail:

ivy-configure:

resolve:

common.init:

compile-lucene-core:

init:

clover.setup:

clover.info:
 [echo] 
 [echo]   Clover not found. Code coverage reports disabled.
 [echo] 

clover:

compile-core:

compile-test-framework:

ivy-availability-check:

ivy-fail:

ivy-configure:

resolve:
[ivy:retrieve] :: loading settings :: url = 
jar:file:/home/hudson/.ant/lib/ivy-2.2.0.jar!/org/apache/ivy/core/settings/ivysettings.xml

init:

compile-lucene-core:

compile-core:

common.compile-test:

build-artifacts-and-tests:

init-dist:

check-lucene-core-javadocs-uptodate:

javadocs-lucene-core:

javadocs:
[mkdir] Created dir: 
/usr/home/hudson/hudson-slave/workspace/Lucene-4.x/checkout/lucene/build/docs/core

download-java6-javadoc-packagelist:
 [copy] Copying 1 file to 
/usr/home/hudson/hudson-slave/workspace/Lucene-4.x/checkout/lucene/build/docs/core
  [javadoc] Generating Javadoc
  [javadoc] Javadoc execution
  [javadoc] Loading source files for package org.apache.lucene...
  [javadoc] Loading source files for package org.apache.lucene.analysis...
  [javadoc] Loading source files for package 
org.apache.lucene.analysis.tokenattributes...
  [javadoc] Loading source files for package org.apache.lucene.codecs...
  [javadoc] Loading source files for package 
org.apache.lucene.codecs.appending...
  [javadoc] Loading source files for package 
org.apache.lucene.codecs.intblock...
  [javadoc] Loading source files for package 
org.apache.lucene.codecs.lucene3x...
  [javadoc] Loading source files for package 
org.apache.lucene.codecs.lucene40...
  [javadoc] Loading source files for package 
org.apache.lucene.codecs.lucene40.values...
  [javadoc] Loading source files for package org.apache.lucene.codecs.memory...
  [javadoc] Loading source files for package 
org.apache.lucene.codecs.perfield...
  [javadoc] Loading source files for package org.apache.lucene.codecs.pulsing...
  [javadoc] Loading source files for package org.apache.lucene.codecs.sep...
  [javadoc] Loading source files for package 
org.apache.lucene.codecs.simpletext...
  [javadoc] Loading source files for package org.apache.lucene.document...
  [javadoc] Loading source files for package org.apache.lucene.index...
  [javadoc] Loading source files for package org.apache.lucene.search...
  [javadoc] Loading source files for package 
org.apache.lucene.search.payloads...
  [javadoc] Loading source files for package 
org.apache.lucene.search.similarities...
  [javadoc] Loading source files for package org.apache.lucene.search.spans...
  [javadoc] Loading source files for package org.apache.lucene.store...
  [javadoc] Loading source files for package org.apache.lucene.util...
  [javadoc] Loading source files for package org.apache.lucene.util.automaton...
  [javadoc] Loading source files for package org.apache.lucene.util.fst...
  [javadoc] Loading source files for package org.apache.lucene.util.mutable...
  [javadoc] Loading source files for package org.apache.lucene.util.packed...
  [javadoc] Constructing Javadoc information...
  [javadoc] Standard Doclet version 1.6.0_32
  [javadoc] Building tree for all the packages and classes...
  [javadoc] 
/usr/home/hudson/hudson-slave/workspace/Lucene-4.x/checkout/lucene/core/src/java/org/apache/lucene/util/RecyclingByteBlockAllocator.java:78:
 warning - Tag @link: reference not found: DummyConcurrentLock
  [javadoc] 
/usr/home/hudson/hudson-slave/workspace/Lucene-4.x/checkout/lucene/core/src/java/org/apache/lucene/util/RecyclingByteBlockAllocator.java:67:
 warning - Tag @link: reference not found: DummyConcurrentLock
  [javadoc] 
/usr/home/hudson/hudson-slave/workspace/Lucene-4.x/checkout/lucene/core/src/java/org/apache/lucene/util/RecyclingByteBlockAllocator.java:50:
 warning - Tag @see: reference not found: DummyConcurrentLock
  [javadoc] 
/usr/home/hudson/hudson-slave/workspace/Lucene-4.x/checkout/lucene/core/src/java/org/apache/lucene/util/RecyclingByteBlockAllocator.java:67:
 warning - Tag @link: reference not found: DummyConcurrentLock
  [javadoc] 
/usr/home/hudson/hudson-slave/workspace/Lucene-4.x/checkout/lucene/core/src/java/org/apache/lucene/util/RecyclingByteBlockAllocator.java:78:
 warning - Tag @link: reference not found: DummyConcurrentLock
  [javadoc] Building index for all the packages and classes...
  [javadoc] Building index for all classes...
  [javadoc] Generating 
/usr/home/hudson/hudson-slave/workspace/Lucene-4.x/checkout/lucene/build/docs/core/stylesheet.css...
  [javadoc] 5 warnings
[...truncated 19 lines...]

[...truncated 15576 lines...]





[JENKINS] Lucene-Solr-tests-only-trunk-java7 - Build # 2817 - Still Failing

2012-06-22 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-tests-only-trunk-java7/2817/

No tests ran.

Build Log:
[...truncated 784 lines...]
[javac] 
/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-tests-only-trunk-java7/checkout/lucene/core/src/java/org/apache/lucene/codecs/lucene40/Lucene40StoredFieldsWriter.java:97:
 error: cannot find symbol
[javac]   CodecUtil.writeHeader(fieldsStream, CODEC_NAME_DAT, 
VERSION_CURRENT);
[javac]   ^
[javac]   symbol:   variable CodecUtil
[javac]   location: class Lucene40StoredFieldsWriter
[javac] 
/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-tests-only-trunk-java7/checkout/lucene/core/src/java/org/apache/lucene/codecs/lucene40/Lucene40StoredFieldsWriter.java:98:
 error: cannot find symbol
[javac]   CodecUtil.writeHeader(indexStream, CODEC_NAME_IDX, 
VERSION_CURRENT);
[javac]   ^
[javac]   symbol:   variable CodecUtil
[javac]   location: class Lucene40StoredFieldsWriter
[javac] 
/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-tests-only-trunk-java7/checkout/lucene/core/src/java/org/apache/lucene/codecs/lucene40/Lucene40StoredFieldsReader.java:85:
 error: cannot find symbol
[javac]   CodecUtil.checkHeader(indexStream, CODEC_NAME_IDX, 
VERSION_START, VERSION_CURRENT);
[javac]   ^
[javac]   symbol:   variable CodecUtil
[javac]   location: class Lucene40StoredFieldsReader
[javac] 
/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-tests-only-trunk-java7/checkout/lucene/core/src/java/org/apache/lucene/codecs/lucene40/Lucene40StoredFieldsReader.java:86:
 error: cannot find symbol
[javac]   CodecUtil.checkHeader(fieldsStream, CODEC_NAME_DAT, 
VERSION_START, VERSION_CURRENT);
[javac]   ^
[javac]   symbol:   variable CodecUtil
[javac]   location: class Lucene40StoredFieldsReader
[javac] 
/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-tests-only-trunk-java7/checkout/lucene/core/src/java/org/apache/lucene/codecs/lucene40/Lucene40TermVectorsReader.java:75:
 error: cannot find symbol
[javac]   static final long HEADER_LENGTH_FIELDS = 
CodecUtil.headerLength(CODEC_NAME_FIELDS);
[javac]^
[javac]   symbol:   variable CodecUtil
[javac]   location: class Lucene40TermVectorsReader
[javac] 
/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-tests-only-trunk-java7/checkout/lucene/core/src/java/org/apache/lucene/codecs/lucene40/Lucene40TermVectorsReader.java:76:
 error: cannot find symbol
[javac]   static final long HEADER_LENGTH_DOCS = 
CodecUtil.headerLength(CODEC_NAME_DOCS);
[javac]  ^
[javac]   symbol:   variable CodecUtil
[javac]   location: class Lucene40TermVectorsReader
[javac] 
/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-tests-only-trunk-java7/checkout/lucene/core/src/java/org/apache/lucene/codecs/lucene40/Lucene40TermVectorsReader.java:77:
 error: cannot find symbol
[javac]   static final long HEADER_LENGTH_INDEX = 
CodecUtil.headerLength(CODEC_NAME_INDEX);
[javac]   ^
[javac]   symbol:   variable CodecUtil
[javac]   location: class Lucene40TermVectorsReader
[javac] 
/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-tests-only-trunk-java7/checkout/lucene/core/src/java/org/apache/lucene/codecs/lucene40/Lucene40TermVectorsReader.java:108:
 error: cannot find symbol
[javac]   final int tvxVersion = CodecUtil.checkHeader(tvx, 
CODEC_NAME_INDEX, VERSION_START, VERSION_CURRENT);
[javac]  ^
[javac]   symbol:   variable CodecUtil
[javac]   location: class Lucene40TermVectorsReader
[javac] 
/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-tests-only-trunk-java7/checkout/lucene/core/src/java/org/apache/lucene/codecs/lucene40/Lucene40TermVectorsReader.java:112:
 error: cannot find symbol
[javac]   final int tvdVersion = CodecUtil.checkHeader(tvd, 
CODEC_NAME_DOCS, VERSION_START, VERSION_CURRENT);
[javac]  ^
[javac]   symbol:   variable CodecUtil
[javac]   location: class Lucene40TermVectorsReader
[javac] 
/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-tests-only-trunk-java7/checkout/lucene/core/src/java/org/apache/lucene/codecs/lucene40/Lucene40TermVectorsReader.java:115:
 error: cannot find symbol
[javac]   final int tvfVersion = CodecUtil.checkHeader(tvf, 
CODEC_NAME_FIELDS, VERSION_START, VERSION_CURRENT);
[javac]  ^
[javac]   symbol:   variable CodecUtil
[javac]   location: class Lucene40TermVectorsReader
[javac] 
/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-tests-only-trunk-java7/checkout/lucene/core/src/java/org/apache/lucene/codecs/lucene40/Lucene40TermVectorsWriter.java:76:
 error: cannot find symbol
[javac]   CodecUtil.writeHeader(tvx, CODEC_NAME_INDEX, VERSION_CURRENT);
[javac]   

RE: [JENKINS] Lucene-Solr-tests-only-trunk-java7 - Build # 2817 - Still Failing

2012-06-22 Thread Uwe Schindler
For this build the cause is an SVN checkout error? It should fail the build earlier.

-
Uwe Schindler
H.-H.-Meier-Allee 63, D-28213 Bremen
http://www.thetaphi.de
eMail: u...@thetaphi.de

 -Original Message-
 From: Apache Jenkins Server [mailto:jenk...@builds.apache.org]
 Sent: Friday, June 22, 2012 7:02 PM
 To: dev@lucene.apache.org
 Subject: [JENKINS] Lucene-Solr-tests-only-trunk-java7 - Build # 2817 - Still
 Failing
 
 Build: https://builds.apache.org/job/Lucene-Solr-tests-only-trunk-java7/2817/
 
 No tests ran.
 

[jira] [Updated] (LUCENE-3892) Add a useful intblock postings format (eg, FOR, PFOR, PFORDelta, Simple9/16/64, etc.)

2012-06-22 Thread Michael McCandless (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-3892?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael McCandless updated LUCENE-3892:
---

Attachment: LUCENE-3892-BlockTermScorer.patch

I was curious how much the layers (SepPostingsReader,
FixedIntBlock.IntIndexInput, ForFactor) between the FOR block decode
and the query scoring were hurting performance, so I wrote a
specialized scorer (BlockTermScorer) for just TermQuery.

The scorer is only used if the postings format is ForPF, and if no
skipping will be done (I didn't implement advance...).

The scorer reaches down and holds on to the decoded int[] buffer, and
then does its own adding up of the doc deltas, reading the next block,
etc.

The baseline is the current branch (not trunk!):

{noformat}
Task              QPS base  StdDev base  QPS patch  StdDev patch    Pct diff
Wildcard             10.31         0.40      10.10          0.17   -7% -  3%
AndHighHigh           4.90         0.10       4.82          0.15   -6% -  3%
Prefix3              28.50         1.06      28.11          0.50   -6% -  4%
IntNRQ                9.72         0.46       9.60          0.57  -11% -  9%
SloppyPhrase          0.92         0.03       0.92          0.02   -6% -  5%
PKLookup            106.21         2.54     105.66          2.07   -4% -  3%
Phrase                1.56         0.00       1.56          0.01   -1% -  0%
Fuzzy1               90.33         3.48      90.19          2.25   -6% -  6%
Fuzzy2               29.66         0.61      29.64          0.85   -4% -  4%
AndHighMed           14.87         0.29      15.02          0.81   -6% -  8%
Respell              78.83         2.46      79.62          1.54   -3% -  6%
SpanNear              1.18         0.02       1.19          0.04   -4% -  6%
TermGroup1M           2.78         0.06       3.28          0.14   10% - 25%
OrHighHigh            4.19         0.24       5.04          0.20    9% - 32%
OrHighMed             8.21         0.45       9.87          0.23   11% - 30%
TermBGroup1M1P        5.11         0.20       6.21          0.26   12% - 31%
TermBGroup1M          4.49         0.11       5.49          0.27   13% - 31%
Term                  8.89         0.58      11.90          1.52    9% - 61%
{noformat}

Seems like we get a good boost removing the abstractions.
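The "adding up of the doc deltas" step that the specialized scorer performs can
be sketched as follows. This is a minimal illustration under my own naming
(ForBlockDecoder is hypothetical, not the actual BlockTermScorer code): FOR
stores each block's postings as doc-id gaps, so the scorer restores absolute
doc ids with a running sum over the decoded int[] buffer.

```java
// Hypothetical sketch of decoding one FOR block of doc-id deltas.
class ForBlockDecoder {
    /**
     * deltas[i] holds the gap between consecutive doc ids in the block;
     * firstDocBase is the last doc id of the previous block (or 0).
     * Returns the absolute doc ids, ready for scoring.
     */
    static int[] decodeDeltas(int[] deltas, int firstDocBase) {
        int[] docs = new int[deltas.length];
        int doc = firstDocBase;
        for (int i = 0; i < deltas.length; i++) {
            doc += deltas[i];   // each entry is a gap; the prefix sum restores the id
            docs[i] = doc;
        }
        return docs;
    }
}
```

Holding the decoded int[] directly, as the patch does, avoids a virtual call
per posting through the SepPostingsReader/IntIndexInput layers, which is where
the benchmark gains above appear to come from.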


 Add a useful intblock postings format (eg, FOR, PFOR, PFORDelta, 
 Simple9/16/64, etc.)
 -

 Key: LUCENE-3892
 URL: https://issues.apache.org/jira/browse/LUCENE-3892
 Project: Lucene - Java
  Issue Type: Improvement
Reporter: Michael McCandless
  Labels: gsoc2012, lucene-gsoc-12
 Fix For: 4.1

 Attachments: LUCENE-3892-BlockTermScorer.patch, 
 LUCENE-3892-direct-IntBuffer.patch, LUCENE-3892_for.patch, 
 LUCENE-3892_for_byte[].patch, LUCENE-3892_for_int[].patch, 
 LUCENE-3892_for_unfold_method.patch, LUCENE-3892_pfor.patch, 
 LUCENE-3892_pfor.patch, LUCENE-3892_pfor.patch, 
 LUCENE-3892_pfor_unfold_method.patch, LUCENE-3892_settings.patch, 
 LUCENE-3892_settings.patch


 On the flex branch we explored a number of possible intblock
 encodings, but for whatever reason never brought them to completion.
 There are still a number of open issues with patches in various
 states.
 Initial results (based on prototype) were excellent (see
 http://blog.mikemccandless.com/2010/08/lucene-performance-with-pfordelta-codec.html
 ).
 I think this would make a good GSoC project.




[jira] [Commented] (SOLR-3563) Collection in ZK not deleted when all shards has been unloaded

2012-06-22 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-3563?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13399507#comment-13399507
 ] 

Mark Miller commented on SOLR-3563:
---

I've added this to the work I did on the collections API - it's needed there for 
collection removal. I should be committing a first iteration of that soon.

 Collection in ZK not deleted when all shards have been unloaded
 --

 Key: SOLR-3563
 URL: https://issues.apache.org/jira/browse/SOLR-3563
 Project: Solr
  Issue Type: Bug
  Components: multicore, SolrCloud
Affects Versions: 4.0
 Environment: Same as SOLR-3561
Reporter: Per Steffensen
Priority: Minor
 Fix For: 4.0


 Same scenario as SOLR-3561 - deleting shards/cores using the CoreAdmin/UNLOAD 
 command.
 I have noticed that when I have done CoreAdmin/UNLOAD for all shards under a 
 collection, the collection and all its slices are still present in ZK 
 under /collections. It might be ok since the operation is called UNLOAD, but I 
 basically want to delete an entire collection and all data related to it 
 (including information about it in ZK).
 A delete-collection operation, that also deletes info about the collection 
 under /collections in ZK, would be very nice! Or a delete-shard/core 
 operation and then some nice logic that detects when all shards belonging to 
 a collection have been deleted, and when that has happened, deletes info about 
 the collection under /collections in ZK.
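The cleanup logic requested above - drop the collection's entry once its last shard is gone - can be sketched in miniature. A plain in-memory map stands in for the /collections tree in ZooKeeper here; the class and method names are invented for illustration only:

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Hypothetical registry illustrating the suggested behavior: removing a
// collection's last shard also removes the collection entry itself. In
// Solr this bookkeeping would live under /collections in ZooKeeper; a
// map stands in for it here.
public class CollectionRegistry {
    private final Map<String, Set<String>> shardsByCollection =
            new HashMap<String, Set<String>>();

    void addShard(String collection, String shard) {
        Set<String> shards = shardsByCollection.get(collection);
        if (shards == null) {
            shards = new HashSet<String>();
            shardsByCollection.put(collection, shards);
        }
        shards.add(shard);
    }

    // Remove a shard; if it was the last one, delete the collection too
    // (the "delete info under /collections" step).
    void unloadShard(String collection, String shard) {
        Set<String> shards = shardsByCollection.get(collection);
        if (shards == null) return;
        shards.remove(shard);
        if (shards.isEmpty()) {
            shardsByCollection.remove(collection);
        }
    }

    boolean exists(String collection) {
        return shardsByCollection.containsKey(collection);
    }
}
```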


[jira] [Commented] (SOLR-3562) Data folder not deleted during unload

2012-06-22 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-3562?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13399508#comment-13399508
 ] 

Mark Miller commented on SOLR-3562:
---

I've added this to the work I did on the collections API - it's needed there for 
collection removal. I should be committing a first iteration of that soon.

 Data folder not deleted during unload
 -

 Key: SOLR-3562
 URL: https://issues.apache.org/jira/browse/SOLR-3562
 Project: Solr
  Issue Type: Bug
  Components: multicore, SolrCloud
Affects Versions: 4.0
 Environment: Same as SOLR-3561
Reporter: Per Steffensen
Priority: Minor

 Same scenario as SOLR-3561 - deleting shards/cores using the CoreAdmin/UNLOAD 
 command.
 I have noticed that when doing CoreAdmin/UNLOAD, the data-folder on disk 
 belonging to the shard/core that has been unloaded is not deleted. It might be 
 ok since the operation is called UNLOAD, but I basically want to delete a 
 shard/core and all data related to it (including its data-folder).
 Don't we have a delete-shard/core operation? Or what do I need to do? Do I 
 have to manually delete the data-folder myself after unloading?
 A delete-shard/core or even a delete-collection operation would be very nice!


[jira] [Assigned] (SOLR-3562) Data folder not deleted during unload

2012-06-22 Thread Mark Miller (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-3562?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Miller reassigned SOLR-3562:
-

Assignee: Mark Miller

 Data folder not deleted during unload
 -

 Key: SOLR-3562
 URL: https://issues.apache.org/jira/browse/SOLR-3562
 Project: Solr
  Issue Type: Bug
  Components: multicore, SolrCloud
Affects Versions: 4.0
 Environment: Same as SOLR-3561
Reporter: Per Steffensen
Assignee: Mark Miller
Priority: Minor

 Same scenario as SOLR-3561 - deleting shards/cores using the CoreAdmin/UNLOAD 
 command.
 I have noticed that when doing CoreAdmin/UNLOAD, the data-folder on disk 
 belonging to the shard/core that has been unloaded is not deleted. It might be 
 ok since the operation is called UNLOAD, but I basically want to delete a 
 shard/core and all data related to it (including its data-folder).
 Don't we have a delete-shard/core operation? Or what do I need to do? Do I 
 have to manually delete the data-folder myself after unloading?
 A delete-shard/core or even a delete-collection operation would be very nice!


[jira] [Assigned] (SOLR-3563) Collection in ZK not deleted when all shards have been unloaded

2012-06-22 Thread Mark Miller (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-3563?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Miller reassigned SOLR-3563:
-

Assignee: Mark Miller

 Collection in ZK not deleted when all shards have been unloaded
 --

 Key: SOLR-3563
 URL: https://issues.apache.org/jira/browse/SOLR-3563
 Project: Solr
  Issue Type: Bug
  Components: multicore, SolrCloud
Affects Versions: 4.0
 Environment: Same as SOLR-3561
Reporter: Per Steffensen
Assignee: Mark Miller
Priority: Minor
 Fix For: 4.0


 Same scenario as SOLR-3561 - deleting shards/cores using the CoreAdmin/UNLOAD 
 command.
 I have noticed that when I have done CoreAdmin/UNLOAD for all shards under a 
 collection, the collection and all its slices are still present in ZK 
 under /collections. It might be ok since the operation is called UNLOAD, but I 
 basically want to delete an entire collection and all data related to it 
 (including information about it in ZK).
 A delete-collection operation, that also deletes info about the collection 
 under /collections in ZK, would be very nice! Or a delete-shard/core 
 operation and then some nice logic that detects when all shards belonging to 
 a collection have been deleted, and when that has happened, deletes info about 
 the collection under /collections in ZK.


[jira] [Commented] (SOLR-3488) Create a Collections API for SolrCloud

2012-06-22 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-3488?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13399518#comment-13399518
 ] 

Mark Miller commented on SOLR-3488:
---

Re the above: I'm tempted to add another data node that just has the list of 
nodes. I think it would be good to have an efficient way to get that list; it's 
a pain with clusterstate.json, which also loses nodes that have no cores on them.

Something I just remembered I have to look into: the default location of the 
data dir for on-the-fly cores is probably not great. 

 Create a Collections API for SolrCloud
 --

 Key: SOLR-3488
 URL: https://issues.apache.org/jira/browse/SOLR-3488
 Project: Solr
  Issue Type: New Feature
  Components: SolrCloud
Reporter: Mark Miller
Assignee: Mark Miller
 Fix For: 4.0

 Attachments: SOLR-3488.patch, SOLR-3488.patch, SOLR-3488.patch, 
 SOLR-3488_2.patch





Re: Build failed in Jenkins: Lucene-Core-4x-Beasting #6762

2012-06-22 Thread Michael McCandless
I committed a fix: rev 1353004

Mike McCandless

http://blog.mikemccandless.com

On Fri, Jun 22, 2012 at 6:18 AM,  hudsonsevilt...@gmail.com wrote:
 See http://sierranevada.servebeer.com/job/Lucene-Core-4x-Beasting/6762/

 --
 [...truncated 894 lines...]
   [junit4] Suite: org.apache.lucene.util.TestFixedBitSet
   [junit4] Completed on J1 in 1.15s, 5 tests
   [junit4]
   [junit4] Suite: org.apache.lucene.search.TestFieldCacheRangeFilter
   [junit4] Completed on J0 in 0.12s, 9 tests
   [junit4]
   [junit4] Suite: org.apache.lucene.util.automaton.TestDeterminizeLexicon
   [junit4] Completed on J3 in 0.37s, 1 test
   [junit4]
   [junit4] Suite: org.apache.lucene.search.TestMultiTermConstantScore
   [junit4] Completed on J1 in 0.06s, 7 tests
   [junit4]
   [junit4] Suite: org.apache.lucene.codecs.lucene40.values.TestDocValues
   [junit4] Completed on J0 in 0.25s, 14 tests
   [junit4]
   [junit4] Suite: org.apache.lucene.search.TestAutomatonQuery
   [junit4] Completed on J3 in 0.04s, 6 tests
   [junit4]
   [junit4] Suite: org.apache.lucene.index.TestMultiFields
   [junit4] Completed on J1 in 0.20s, 2 tests
   [junit4]
   [junit4] Suite: org.apache.lucene.index.TestPayloads
   [junit4] Completed on J0 in 0.17s, 5 tests
   [junit4]
   [junit4] Suite: org.apache.lucene.index.TestTransactionRollback
   [junit4] Completed on J3 in 0.07s, 2 tests
   [junit4]
   [junit4] Suite: org.apache.lucene.index.TestSizeBoundedForceMerge
   [junit4] Completed on J1 in 0.05s, 11 tests
   [junit4]
   [junit4] Suite: org.apache.lucene.search.TestFuzzyQuery
   [junit4] Completed on J0 in 0.05s, 5 tests
   [junit4]
   [junit4] Suite: org.apache.lucene.search.spans.TestSpansAdvanced2
   [junit4] Completed on J3 in 0.06s, 4 tests
   [junit4]
   [junit4] Suite: org.apache.lucene.index.TestFilterAtomicReader
   [junit4] Completed on J1 in 0.02s, 2 tests
   [junit4]
   [junit4] Suite: org.apache.lucene.search.TestBooleanQuery
   [junit4] Completed on J0 in 0.10s, 5 tests
   [junit4]
   [junit4] Suite: org.apache.lucene.util.TestIdentityHashSet
   [junit4] Completed on J3 in 0.05s, 1 test
   [junit4]
   [junit4] Suite: org.apache.lucene.util.TestSentinelIntSet
   [junit4] Completed on J1 in 0.12s, 2 tests
   [junit4]
   [junit4] Suite: org.apache.lucene.util.TestRollingBuffer
   [junit4] Completed on J0 in 0.05s, 1 test
   [junit4]
   [junit4] Suite: org.apache.lucene.search.spans.TestSpanSearchEquivalence
   [junit4] Completed on J3 in 0.08s, 8 tests
   [junit4]
   [junit4] Suite: org.apache.lucene.search.TestRegexpQuery
   [junit4] Completed on J1 in 0.03s, 7 tests
   [junit4]
   [junit4] Suite: org.apache.lucene.search.TestCachingWrapperFilter
   [junit4] Completed on J0 in 0.03s, 5 tests
   [junit4]
   [junit4] Suite: 
 org.apache.lucene.search.spans.TestSpanExplanationsOfNonMatches
   [junit4] Completed on J3 in 0.04s, 31 tests
   [junit4]
   [junit4] Suite: org.apache.lucene.search.spans.TestSpansAdvanced
   [junit4] Completed on J1 in 0.02s, 1 test
   [junit4]
   [junit4] Suite: org.apache.lucene.document.TestDocument
   [junit4] Completed on J0 in 0.03s, 10 tests
   [junit4]
   [junit4] Suite: org.apache.lucene.store.TestFileSwitchDirectory
   [junit4] Completed on J1 in 0.02s, 4 tests
   [junit4]
   [junit4] Suite: org.apache.lucene.util.junitcompat.TestJUnitRuleOrder
   [junit4] Completed on J0 in 0.01s, 1 test
   [junit4]
   [junit4] Suite: org.apache.lucene.index.TestDocCount
   [junit4] Completed on J3 in 0.25s, 1 test
   [junit4]
   [junit4] Suite: org.apache.lucene.index.TestParallelTermEnum
   [junit4] Completed on J1 in 0.01s, 1 test
   [junit4]
   [junit4] Suite: org.apache.lucene.search.TestElevationComparator
   [junit4] Completed on J0 in 0.02s, 1 test
   [junit4]
   [junit4] Suite: org.apache.lucene.search.TestFieldValueFilter
   [junit4] Completed on J3 in 0.15s, 2 tests
   [junit4]
   [junit4] Suite: org.apache.lucene.search.TestBooleanScorer
   [junit4] Completed on J1 in 0.03s, 3 tests
   [junit4]
   [junit4] Suite: org.apache.lucene.util.TestRecyclingByteBlockAllocator
   [junit4] Completed on J0 in 0.02s, 3 tests
   [junit4]
   [junit4] Suite: org.apache.lucene.util.TestSortedVIntList
   [junit4] Completed on J3 in 0.04s, 19 tests
   [junit4]
   [junit4] Suite: org.apache.lucene.search.TestPositionIncrement
   [junit4] Completed on J1 in 0.02s, 2 tests
   [junit4]
   [junit4] Suite: org.apache.lucene.document.TestDateTools
   [junit4] Completed on J0 in 0.01s, 5 tests
   [junit4]
   [junit4] Suite: org.apache.lucene.search.TestAutomatonQueryUnicode
   [junit4] Completed on J3 in 0.01s, 1 test
   [junit4]
   [junit4] Suite: org.apache.lucene.search.TestPrefixFilter
   [junit4] Completed on J1 in 0.01s, 1 test
   [junit4]
   [junit4] Suite: org.apache.lucene.document.TestBinaryDocument
   [junit4] Completed on J0 in 0.02s, 2 tests
   [junit4]
   [junit4] Suite: org.apache.lucene.search.spans.TestSpanFirstQuery
   [junit4] Completed on J3 in 0.01s, 1 test
   [junit4]
   

[jira] [Commented] (SOLR-3562) Data folder not deleted during unload

2012-06-22 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-3562?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13399545#comment-13399545
 ] 

Mark Miller commented on SOLR-3562:
---

I'll add options for both the instanceDir and the dataDir.
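Deleting a core's dataDir (or instanceDir) on unload amounts to a recursive filesystem delete. A minimal, hypothetical helper - not Solr's actual implementation:

```java
import java.io.File;
import java.io.IOException;

// Hypothetical sketch of the "also delete the data folder" option:
// recursively remove a directory tree. Not Solr's actual code.
public class RecursiveDelete {

    static void delete(File f) throws IOException {
        if (f.isDirectory()) {
            File[] children = f.listFiles();
            if (children != null) {
                for (File child : children) {
                    delete(child);          // depth-first: children first
                }
            }
        }
        if (f.exists() && !f.delete()) {
            throw new IOException("could not delete " + f);
        }
    }
}
```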

 Data folder not deleted during unload
 -

 Key: SOLR-3562
 URL: https://issues.apache.org/jira/browse/SOLR-3562
 Project: Solr
  Issue Type: Bug
  Components: multicore, SolrCloud
Affects Versions: 4.0
 Environment: Same as SOLR-3561
Reporter: Per Steffensen
Assignee: Mark Miller
Priority: Minor

 Same scenario as SOLR-3561 - deleting shards/cores using the CoreAdmin/UNLOAD 
 command.
 I have noticed that when doing CoreAdmin/UNLOAD, the data-folder on disk 
 belonging to the shard/core that has been unloaded is not deleted. It might be 
 ok since the operation is called UNLOAD, but I basically want to delete a 
 shard/core and all data related to it (including its data-folder).
 Don't we have a delete-shard/core operation? Or what do I need to do? Do I 
 have to manually delete the data-folder myself after unloading?
 A delete-shard/core or even a delete-collection operation would be very nice!


[jira] [Commented] (SOLR-3154) SolrJ CloudServer should be leader and network aware when adding docs

2012-06-22 Thread Michael Garski (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-3154?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13399612#comment-13399612
 ] 

Michael Garski commented on SOLR-3154:
--

Would there be any issues with just hashing on the string representation of the 
ID? That would provide a neutral format that both client and server could use. 
I implemented that approach in a patch for SOLR-2592 (pluggable sharding); 
however, I have not yet done anything on the SolrJ side of things.
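The idea amounts to routing each document by a deterministic hash of its ID string, so any client and the server agree on the target shard without sharing Java-specific internals. A minimal sketch (hypothetical; FNV-1a chosen arbitrarily, not the hash in the SOLR-2592 patch):

```java
import java.nio.charset.Charset;

// Sketch of ID-string-based shard routing: hash the UTF-8 bytes of the
// unique key with a deterministic hash (FNV-1a 32-bit here) and take it
// modulo the shard count. Any client implementing the same hash routes
// identically. Hypothetical illustration, not the SOLR-2592 patch.
public class IdHashRouter {

    static int shardFor(String id, int numShards) {
        byte[] bytes = id.getBytes(Charset.forName("UTF-8"));
        int h = 0x811C9DC5;                   // FNV-1a offset basis
        for (byte b : bytes) {
            h ^= (b & 0xFF);
            h *= 0x01000193;                  // FNV prime
        }
        return (h & 0x7FFFFFFF) % numShards;  // non-negative bucket
    }
}
```

Because the hash is over the string's bytes, a non-Java client can route documents the same way; the trade-off is that changing the shard count reshuffles nearly all IDs.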

 SolrJ CloudServer should be leader and network aware when adding docs
 -

 Key: SOLR-3154
 URL: https://issues.apache.org/jira/browse/SOLR-3154
 Project: Solr
  Issue Type: Improvement
  Components: SolrCloud
Affects Versions: 4.0
Reporter: Grant Ingersoll
Priority: Minor
 Fix For: 4.1


 It would be good when indexing if the SolrJ CloudServer was leader aware so 
 that we could avoid doing an extra hop for the data.  It would also be good 
 if one could easily set things up based on data locality principles.  This 
 might mean that CloudServer is aware of where on the network it is and would 
 pick leaders that are as close as possible (i.e. local, perhaps.)  This would 
 come in to play when working with tools like Hadoop or other grid computing 
 frameworks.


[JENKINS] Lucene-Solr-4.x-Windows-Java6-64 - Build # 147 - Failure!

2012-06-22 Thread Policeman Jenkins Server
Build: 
http://jenkins.sd-datasolutions.de/job/Lucene-Solr-4.x-Windows-Java6-64/147/

1 tests failed.
REGRESSION:  
org.apache.lucene.benchmark.byTask.TestPerfTasksLogic.testBGSearchTaskThreads

Error Message:


Stack Trace:
java.lang.AssertionError
at 
__randomizedtesting.SeedInfo.seed([DF1365397E763173:226A0E5C6B5E0D1]:0)
at org.junit.Assert.fail(Assert.java:92)
at org.junit.Assert.assertTrue(Assert.java:43)
at org.junit.Assert.assertTrue(Assert.java:54)
at 
org.apache.lucene.benchmark.byTask.TestPerfTasksLogic.testBGSearchTaskThreads(TestPerfTasksLogic.java:158)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1969)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.access$1100(RandomizedRunner.java:132)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:814)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:875)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:889)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.TestRuleFieldCacheSanity$1.evaluate(TestRuleFieldCacheSanity.java:32)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
org.apache.lucene.util.TestRuleReportUncaughtExceptions$1.evaluate(TestRuleReportUncaughtExceptions.java:68)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:821)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.access$700(RandomizedRunner.java:132)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3$1.run(RandomizedRunner.java:669)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:695)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:734)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:745)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleReportUncaughtExceptions$1.evaluate(TestRuleReportUncaughtExceptions.java:68)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:38)
at 
org.apache.lucene.util.TestRuleIcuHack$1.evaluate(TestRuleIcuHack.java:51)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
org.apache.lucene.util.TestRuleNoInstanceHooksOverrides$1.evaluate(TestRuleNoInstanceHooksOverrides.java:53)
at 
org.apache.lucene.util.TestRuleNoStaticHooksShadowing$1.evaluate(TestRuleNoStaticHooksShadowing.java:52)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:36)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:56)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSuite(RandomizedRunner.java:605)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.access$400(RandomizedRunner.java:132)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$2.run(RandomizedRunner.java:551)




Build Log:
[...truncated 4379 lines...]
   [junit4] Suite: org.apache.lucene.benchmark.byTask.TestPerfTasksLogic
   [junit4] FAILURE 1.19s | TestPerfTasksLogic.testBGSearchTaskThreads
   [junit4] Throwable #1: java.lang.AssertionError
   [junit4]at 
__randomizedtesting.SeedInfo.seed([DF1365397E763173:226A0E5C6B5E0D1]:0)
   [junit4]at org.junit.Assert.fail(Assert.java:92)
   [junit4]at org.junit.Assert.assertTrue(Assert.java:43)
   [junit4]at org.junit.Assert.assertTrue(Assert.java:54)
   [junit4]at 
org.apache.lucene.benchmark.byTask.TestPerfTasksLogic.testBGSearchTaskThreads(TestPerfTasksLogic.java:158)
   [junit4]at sun.reflect.NativeMethodAccessorImpl.invoke0(Native 
Method)
   [junit4]  

[JENKINS] Lucene-Solr-tests-only-trunk - Build # 14758 - Still Failing

2012-06-22 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-tests-only-trunk/14758/

No tests ran.

Build Log:
[...truncated 817 lines...]
[javac] 
/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-tests-only-trunk/checkout/lucene/core/src/java/org/apache/lucene/codecs/lucene40/Lucene40StoredFieldsWriter.java:97:
 cannot find symbol
[javac] symbol  : variable CodecUtil
[javac] location: class 
org.apache.lucene.codecs.lucene40.Lucene40StoredFieldsWriter
[javac]   CodecUtil.writeHeader(fieldsStream, CODEC_NAME_DAT, 
VERSION_CURRENT);
[javac]   ^
[javac] 
/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-tests-only-trunk/checkout/lucene/core/src/java/org/apache/lucene/codecs/lucene40/Lucene40StoredFieldsWriter.java:98:
 cannot find symbol
[javac] symbol  : variable CodecUtil
[javac] location: class 
org.apache.lucene.codecs.lucene40.Lucene40StoredFieldsWriter
[javac]   CodecUtil.writeHeader(indexStream, CODEC_NAME_IDX, 
VERSION_CURRENT);
[javac]   ^
[javac] 
/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-tests-only-trunk/checkout/lucene/core/src/java/org/apache/lucene/codecs/lucene40/Lucene40StoredFieldsReader.java:85:
 cannot find symbol
[javac] symbol  : variable CodecUtil
[javac] location: class 
org.apache.lucene.codecs.lucene40.Lucene40StoredFieldsReader
[javac]   CodecUtil.checkHeader(indexStream, CODEC_NAME_IDX, 
VERSION_START, VERSION_CURRENT);
[javac]   ^
[javac] 
/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-tests-only-trunk/checkout/lucene/core/src/java/org/apache/lucene/codecs/lucene40/Lucene40StoredFieldsReader.java:86:
 cannot find symbol
[javac] symbol  : variable CodecUtil
[javac] location: class 
org.apache.lucene.codecs.lucene40.Lucene40StoredFieldsReader
[javac]   CodecUtil.checkHeader(fieldsStream, CODEC_NAME_DAT, 
VERSION_START, VERSION_CURRENT);
[javac]   ^
[javac] 
/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-tests-only-trunk/checkout/lucene/core/src/java/org/apache/lucene/codecs/lucene40/Lucene40TermVectorsReader.java:75:
 cannot find symbol
[javac] symbol  : variable CodecUtil
[javac] location: class 
org.apache.lucene.codecs.lucene40.Lucene40TermVectorsReader
[javac]   static final long HEADER_LENGTH_FIELDS = 
CodecUtil.headerLength(CODEC_NAME_FIELDS);
[javac]^
[javac] 
/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-tests-only-trunk/checkout/lucene/core/src/java/org/apache/lucene/codecs/lucene40/Lucene40TermVectorsReader.java:76:
 cannot find symbol
[javac] symbol  : variable CodecUtil
[javac] location: class 
org.apache.lucene.codecs.lucene40.Lucene40TermVectorsReader
[javac]   static final long HEADER_LENGTH_DOCS = 
CodecUtil.headerLength(CODEC_NAME_DOCS);
[javac]  ^
[javac] 
/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-tests-only-trunk/checkout/lucene/core/src/java/org/apache/lucene/codecs/lucene40/Lucene40TermVectorsReader.java:77:
 cannot find symbol
[javac] symbol  : variable CodecUtil
[javac] location: class 
org.apache.lucene.codecs.lucene40.Lucene40TermVectorsReader
[javac]   static final long HEADER_LENGTH_INDEX = 
CodecUtil.headerLength(CODEC_NAME_INDEX);
[javac]   ^
[javac] 
/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-tests-only-trunk/checkout/lucene/core/src/java/org/apache/lucene/codecs/lucene40/Lucene40TermVectorsReader.java:108:
 cannot find symbol
[javac] symbol  : variable CodecUtil
[javac] location: class 
org.apache.lucene.codecs.lucene40.Lucene40TermVectorsReader
[javac]   final int tvxVersion = CodecUtil.checkHeader(tvx, 
CODEC_NAME_INDEX, VERSION_START, VERSION_CURRENT);
[javac]  ^
[javac] 
/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-tests-only-trunk/checkout/lucene/core/src/java/org/apache/lucene/codecs/lucene40/Lucene40TermVectorsReader.java:112:
 cannot find symbol
[javac] symbol  : variable CodecUtil
[javac] location: class 
org.apache.lucene.codecs.lucene40.Lucene40TermVectorsReader
[javac]   final int tvdVersion = CodecUtil.checkHeader(tvd, 
CODEC_NAME_DOCS, VERSION_START, VERSION_CURRENT);
[javac]  ^
[javac] 
/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-tests-only-trunk/checkout/lucene/core/src/java/org/apache/lucene/codecs/lucene40/Lucene40TermVectorsReader.java:115:
 cannot find symbol
[javac] symbol  : variable CodecUtil
[javac] location: class 
org.apache.lucene.codecs.lucene40.Lucene40TermVectorsReader
[javac]   final int tvfVersion = CodecUtil.checkHeader(tvf, 
CODEC_NAME_FIELDS, VERSION_START, VERSION_CURRENT);
[javac]  ^
[javac] 

[JENKINS] Lucene-Solr-tests-only-trunk - Build # 14759 - Still Failing

2012-06-22 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-tests-only-trunk/14759/

No tests ran.

Build Log:
[...truncated 812 lines...]

[JENKINS] Lucene-Solr-tests-only-trunk - Build # 14760 - Still Failing

2012-06-22 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-tests-only-trunk/14760/

No tests ran.

Build Log:
[...truncated 779 lines...]
[javac] /usr/home/hudson/hudson-slave/workspace/Lucene-Solr-tests-only-trunk/checkout/lucene/core/src/java/org/apache/lucene/codecs/lucene40/Lucene40StoredFieldsWriter.java:97: cannot find symbol
[javac] symbol  : variable CodecUtil
[javac] location: class org.apache.lucene.codecs.lucene40.Lucene40StoredFieldsWriter
[javac]   CodecUtil.writeHeader(fieldsStream, CODEC_NAME_DAT, VERSION_CURRENT);
[javac]   ^
[javac] /usr/home/hudson/hudson-slave/workspace/Lucene-Solr-tests-only-trunk/checkout/lucene/core/src/java/org/apache/lucene/codecs/lucene40/Lucene40StoredFieldsWriter.java:98: cannot find symbol
[javac] symbol  : variable CodecUtil
[javac] location: class org.apache.lucene.codecs.lucene40.Lucene40StoredFieldsWriter
[javac]   CodecUtil.writeHeader(indexStream, CODEC_NAME_IDX, VERSION_CURRENT);
[javac]   ^
[javac] /usr/home/hudson/hudson-slave/workspace/Lucene-Solr-tests-only-trunk/checkout/lucene/core/src/java/org/apache/lucene/codecs/lucene40/Lucene40StoredFieldsReader.java:85: cannot find symbol
[javac] symbol  : variable CodecUtil
[javac] location: class org.apache.lucene.codecs.lucene40.Lucene40StoredFieldsReader
[javac]   CodecUtil.checkHeader(indexStream, CODEC_NAME_IDX, VERSION_START, VERSION_CURRENT);
[javac]   ^
[javac] /usr/home/hudson/hudson-slave/workspace/Lucene-Solr-tests-only-trunk/checkout/lucene/core/src/java/org/apache/lucene/codecs/lucene40/Lucene40StoredFieldsReader.java:86: cannot find symbol
[javac] symbol  : variable CodecUtil
[javac] location: class org.apache.lucene.codecs.lucene40.Lucene40StoredFieldsReader
[javac]   CodecUtil.checkHeader(fieldsStream, CODEC_NAME_DAT, VERSION_START, VERSION_CURRENT);
[javac]   ^
[javac] /usr/home/hudson/hudson-slave/workspace/Lucene-Solr-tests-only-trunk/checkout/lucene/core/src/java/org/apache/lucene/codecs/lucene40/Lucene40TermVectorsReader.java:75: cannot find symbol
[javac] symbol  : variable CodecUtil
[javac] location: class org.apache.lucene.codecs.lucene40.Lucene40TermVectorsReader
[javac]   static final long HEADER_LENGTH_FIELDS = CodecUtil.headerLength(CODEC_NAME_FIELDS);
[javac]    ^
[javac] /usr/home/hudson/hudson-slave/workspace/Lucene-Solr-tests-only-trunk/checkout/lucene/core/src/java/org/apache/lucene/codecs/lucene40/Lucene40TermVectorsReader.java:76: cannot find symbol
[javac] symbol  : variable CodecUtil
[javac] location: class org.apache.lucene.codecs.lucene40.Lucene40TermVectorsReader
[javac]   static final long HEADER_LENGTH_DOCS = CodecUtil.headerLength(CODEC_NAME_DOCS);
[javac]  ^
[javac] /usr/home/hudson/hudson-slave/workspace/Lucene-Solr-tests-only-trunk/checkout/lucene/core/src/java/org/apache/lucene/codecs/lucene40/Lucene40TermVectorsReader.java:77: cannot find symbol
[javac] symbol  : variable CodecUtil
[javac] location: class org.apache.lucene.codecs.lucene40.Lucene40TermVectorsReader
[javac]   static final long HEADER_LENGTH_INDEX = CodecUtil.headerLength(CODEC_NAME_INDEX);
[javac]   ^
[javac] /usr/home/hudson/hudson-slave/workspace/Lucene-Solr-tests-only-trunk/checkout/lucene/core/src/java/org/apache/lucene/codecs/lucene40/Lucene40TermVectorsReader.java:108: cannot find symbol
[javac] symbol  : variable CodecUtil
[javac] location: class org.apache.lucene.codecs.lucene40.Lucene40TermVectorsReader
[javac]   final int tvxVersion = CodecUtil.checkHeader(tvx, CODEC_NAME_INDEX, VERSION_START, VERSION_CURRENT);
[javac]  ^
[javac] /usr/home/hudson/hudson-slave/workspace/Lucene-Solr-tests-only-trunk/checkout/lucene/core/src/java/org/apache/lucene/codecs/lucene40/Lucene40TermVectorsReader.java:112: cannot find symbol
[javac] symbol  : variable CodecUtil
[javac] location: class org.apache.lucene.codecs.lucene40.Lucene40TermVectorsReader
[javac]   final int tvdVersion = CodecUtil.checkHeader(tvd, CODEC_NAME_DOCS, VERSION_START, VERSION_CURRENT);
[javac]  ^
[javac] /usr/home/hudson/hudson-slave/workspace/Lucene-Solr-tests-only-trunk/checkout/lucene/core/src/java/org/apache/lucene/codecs/lucene40/Lucene40TermVectorsReader.java:115: cannot find symbol
[javac] symbol  : variable CodecUtil
[javac] location: class org.apache.lucene.codecs.lucene40.Lucene40TermVectorsReader
[javac]   final int tvfVersion = CodecUtil.checkHeader(tvf, CODEC_NAME_FIELDS, VERSION_START, VERSION_CURRENT);
[javac]  ^
[javac] 
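[Editorial note: a "cannot find symbol: variable CodecUtil" error at every call site in one package usually means the compiler can no longer resolve the CodecUtil class itself (e.g. a missing or stale import after the class moved packages during the trunk codec refactoring), not that the individual method calls are wrong. The write/check header pattern those call sites use can be sketched in a self-contained way; all names below are invented stand-ins for illustration, not Lucene's actual API.]

```java
import java.io.*;

// Hypothetical stand-in for the header pattern behind
// CodecUtil.writeHeader / CodecUtil.checkHeader: a magic number,
// a codec name, and a version are written at the start of each file
// and validated again on open.
public class HeaderDemo {
    static final int MAGIC = 0x3fd76c17;  // arbitrary sentinel for this sketch

    // Write the header: magic, codec name, version.
    static void writeHeader(DataOutput out, String codec, int version) throws IOException {
        out.writeInt(MAGIC);
        out.writeUTF(codec);
        out.writeInt(version);
    }

    // Check the header and return the version found, failing on any mismatch.
    static int checkHeader(DataInput in, String codec, int minVersion, int maxVersion)
            throws IOException {
        if (in.readInt() != MAGIC) throw new IOException("bad magic");
        String actual = in.readUTF();
        if (!actual.equals(codec)) throw new IOException("codec mismatch: " + actual);
        int version = in.readInt();
        if (version < minVersion || version > maxVersion)
            throw new IOException("unsupported version: " + version);
        return version;
    }

    public static void main(String[] args) throws IOException {
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        writeHeader(new DataOutputStream(bytes), "Lucene40StoredFieldsData", 0);
        int v = checkHeader(new DataInputStream(new ByteArrayInputStream(bytes.toByteArray())),
                            "Lucene40StoredFieldsData", 0, 0);
        System.out.println("header version = " + v);  // prints "header version = 0"
    }
}
```

In Lucene 4.x the real class lives in org.apache.lucene.codecs, so a batch of identical errors like the one above typically clears all at once when the importing classes are pointed at the right package.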
[javac] 

[JENKINS] Lucene-Solr-4.x-Linux-Java7-64 - Build # 196 - Failure!

2012-06-22 Thread Policeman Jenkins Server
Build: 
http://jenkins.sd-datasolutions.de/job/Lucene-Solr-4.x-Linux-Java7-64/196/

1 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.solr.handler.TestReplicationHandler

Error Message:
ERROR: SolrIndexSearcher opens=74 closes=73

Stack Trace:
java.lang.AssertionError: ERROR: SolrIndexSearcher opens=74 closes=73
at __randomizedtesting.SeedInfo.seed([354E547795897CD2]:0)
at org.junit.Assert.fail(Assert.java:93)
at 
org.apache.solr.SolrTestCaseJ4.endTrackingSearchers(SolrTestCaseJ4.java:190)
at org.apache.solr.SolrTestCaseJ4.afterClass(SolrTestCaseJ4.java:82)
at sun.reflect.GeneratedMethodAccessor15.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:601)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1969)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.access$1100(RandomizedRunner.java:132)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:752)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleReportUncaughtExceptions$1.evaluate(TestRuleReportUncaughtExceptions.java:68)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:38)
at 
org.apache.lucene.util.TestRuleIcuHack$1.evaluate(TestRuleIcuHack.java:51)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
org.apache.lucene.util.TestRuleNoInstanceHooksOverrides$1.evaluate(TestRuleNoInstanceHooksOverrides.java:53)
at 
org.apache.lucene.util.TestRuleNoStaticHooksShadowing$1.evaluate(TestRuleNoStaticHooksShadowing.java:52)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:36)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:56)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSuite(RandomizedRunner.java:605)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.access$400(RandomizedRunner.java:132)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$2.run(RandomizedRunner.java:551)
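[Editorial note: the "opens=74 closes=73" assertion is SolrTestCaseJ4's leak detector. The harness counts every SolrIndexSearcher open and close over the suite and fails afterClass if they do not balance, so this message means exactly one searcher was opened and never closed. A simplified sketch of that bookkeeping, with invented class and method names (the real tracker in SolrTestCaseJ4 uses static counters; this sketch is instance-based to stay self-contained):]

```java
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical sketch of open/close leak tracking in the style of
// SolrTestCaseJ4.endTrackingSearchers: increment on open, increment a
// second counter on close, assert the two match at suite teardown.
public class SearcherTracker {
    private final AtomicInteger opens = new AtomicInteger();
    private final AtomicInteger closes = new AtomicInteger();

    void onOpen()  { opens.incrementAndGet(); }
    void onClose() { closes.incrementAndGet(); }

    // Mirrors the failing check: any imbalance means a leaked searcher.
    void endTracking() {
        if (opens.get() != closes.get()) {
            throw new AssertionError(
                "ERROR: SolrIndexSearcher opens=" + opens + " closes=" + closes);
        }
    }

    public static void main(String[] args) {
        SearcherTracker tracker = new SearcherTracker();
        tracker.onOpen();
        tracker.onOpen();   // this "searcher" is never closed
        tracker.onClose();
        try {
            tracker.endTracking();
        } catch (AssertionError expected) {
            System.out.println(expected.getMessage());  // prints "ERROR: SolrIndexSearcher opens=2 closes=1"
        }
    }
}
```

The value of counting both directions (rather than a single up/down counter) is that the message distinguishes a double-close from a missed close, which is exactly what makes "opens=74 closes=73" actionable in a log.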




Build Log:
[...truncated 7826 lines...]
   [junit4] Suite: org.apache.solr.handler.TestReplicationHandler
   [junit4] (@BeforeClass output)
   [junit4]   2 2 T476 oejs.Server.doStart jetty-8.1.2.v20120308
   [junit4]   2 5 T476 oejs.AbstractConnector.doStart Started SocketConnector@0.0.0.0:33461
   [junit4]   2 5 T476 oasc.SolrResourceLoader.locateSolrHome JNDI not configured for solr (NoInitialContextEx)
   [junit4]   2 6 T476 oasc.SolrResourceLoader.locateSolrHome using system property solr.solr.home: ./org.apache.solr.handler.TestReplicationHandler$SolrInstance-1340415709423/master
   [junit4]   2 6 T476 oasc.SolrResourceLoader.init new SolrResourceLoader for deduced Solr Home: './org.apache.solr.handler.TestReplicationHandler$SolrInstance-1340415709423/master/'
   [junit4]   2 11 T476 oass.SolrDispatchFilter.init SolrDispatchFilter.init()
   [junit4]   2 11 T476 oasc.SolrResourceLoader.locateSolrHome JNDI not configured for solr (NoInitialContextEx)
   [junit4]   2 11 T476 oasc.SolrResourceLoader.locateSolrHome using system property solr.solr.home: ./org.apache.solr.handler.TestReplicationHandler$SolrInstance-1340415709423/master
   [junit4]   2 11 T476 oasc.CoreContainer$Initializer.initialize looking for solr.xml: /mnt/ssd/jenkins/workspace/Lucene-Solr-4.x-Linux-Java7-64/checkout/solr/build/solr-core/test/J0/./org.apache.solr.handler.TestReplicationHandler$SolrInstance-1340415709423/master/solr.xml
   [junit4]   2 12 T476 oasc.CoreContainer.init New CoreContainer 241210266
   [junit4]   2 12 T476 oasc.CoreContainer$Initializer.initialize no solr.xml file found - using default
   [junit4]   2 12 T476 oasc.CoreContainer.load Loading CoreContainer using Solr Home: './org.apache.solr.handler.TestReplicationHandler$SolrInstance-1340415709423/master/'
   [junit4]   2 12 T476 oasc.SolrResourceLoader.init new SolrResourceLoader for directory: './org.apache.solr.handler.TestReplicationHandler$SolrInstance-1340415709423/master/'
   [junit4]   2 20 T476 oasc.CoreContainer.load Registering Log Listener
   [junit4]   2 32 T476 oashc.HttpShardHandlerFactory.getParameter Setting socketTimeout to: 0
   [junit4]   2 33 T476 oashc.HttpShardHandlerFactory.getParameter Setting urlScheme to: http://
   [junit4]   2 33 T476 oashc.HttpShardHandlerFactory.getParameter Setting 

[JENKINS] Lucene-Solr-tests-only-trunk - Build # 14761 - Still Failing

2012-06-22 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-tests-only-trunk/14761/

No tests ran.

Build Log:
[...truncated 780 lines...]
[javac] (log body omitted: verbatim repeat of the CodecUtil "cannot find symbol" errors from build #14760 above)