[jira] [Created] (SOLR-2729) DIH status: successful zero-document delta-import missing "" field

2011-08-24 Thread Shawn Heisey (JIRA)
DIH status: successful zero-document delta-import missing "" field
--

 Key: SOLR-2729
 URL: https://issues.apache.org/jira/browse/SOLR-2729
 Project: Solr
  Issue Type: Bug
  Components: contrib - DataImportHandler
Affects Versions: 3.2
 Environment: Linux idxst0-a 2.6.18-238.12.1.el5.centos.plusxen #1 SMP 
Wed Jun 1 11:57:54 EDT 2011 x86_64 x86_64 x86_64 GNU/Linux

java version "1.6.0_26"
Java(TM) SE Runtime Environment (build 1.6.0_26-b03)
Java HotSpot(TM) 64-Bit Server VM (build 20.1-b02, mixed mode)

Reporter: Shawn Heisey
Priority: Minor
 Fix For: 3.4, 4.0


If you have a successful delta-import that happens to process zero documents, 
the  field is not present in the status.  I've run into this 
situation when the SQL query results in an empty set.  A workaround for the 
problem is to look for the "Time taken " field instead ... but if you don't 
happen to notice that this field's name ends with an extraneous trailing 
space, that won't work either.

A full-import that processes zero documents has the field present as expected:

Indexing completed. Added/Updated: 0 documents. Deleted 0 documents.
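Until this is fixed, a client can tolerate both the missing field and the trailing-space key. A minimal sketch follows; the method name and the flattened map shape of the parsed status response are assumptions for illustration, not Solr API:

```java
import java.util.Map;

// Hypothetical client-side sketch: read a parsed DIH status response while
// tolerating key names that carry stray whitespace (e.g. "Time taken ") and
// fields that are absent after a zero-document delta-import.
public class DihStatusCheck {
    public static String lookup(Map<String, String> status, String field) {
        if (status.containsKey(field)) {
            return status.get(field);
        }
        // Fall back to a whitespace-insensitive match on the key
        for (Map.Entry<String, String> e : status.entrySet()) {
            if (e.getKey().trim().equals(field.trim())) {
                return e.getValue();
            }
        }
        return null; // genuinely missing, as with the zero-document delta-import
    }
}
```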


--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-2312) Search on IndexWriter's RAM Buffer

2011-08-24 Thread Jason Rutherglen (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-2312?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13090764#comment-13090764
 ] 

Jason Rutherglen commented on LUCENE-2312:
--

A benchmark plan: compare the speed of NRT vs. RT.

Index documents in a single thread; in a second thread, open a reader and 
perform a query.  It would be nice to synchronize the point / max doc at which 
RT and NRT open new readers, to additionally verify the correctness of the 
directly comparable search results.  To make the test fair, the concurrent 
merge scheduler should be turned off in the NRT test.

The hypothesis is that array copying, even on large [RT] indexes, is no big 
deal compared with the excessive segment merging that NRT incurs.
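The synchronized open point described above can be sketched in plain Java (no Lucene code; class and method names are hypothetical): an indexer thread advances a doc counter and pauses at an agreed checkpoint so the "searcher" opens its snapshot at a deterministic maxDoc, which is what makes results from two implementations directly comparable.

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.atomic.AtomicInteger;

// Illustrative benchmark harness only: synchronize the maxDoc at which a
// reader is opened against a concurrently running indexer thread.
public class CheckpointedSearchBench {
    public static int openReaderAt(int checkpoint, int totalDocs) {
        AtomicInteger maxDoc = new AtomicInteger();
        CountDownLatch reached = new CountDownLatch(1); // indexer hit the checkpoint
        CountDownLatch opened = new CountDownLatch(1);  // searcher took its snapshot
        Thread indexer = new Thread(() -> {
            for (int i = 0; i < totalDocs; i++) {
                if (maxDoc.incrementAndGet() == checkpoint) {
                    reached.countDown();
                    try {
                        opened.await(); // pause so the snapshot is deterministic
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();
                        return;
                    }
                }
            }
        });
        indexer.start();
        try {
            reached.await();
            int snapshot = maxDoc.get(); // both RT and NRT would open a reader here
            opened.countDown();
            indexer.join();
            return snapshot;
        } catch (InterruptedException e) {
            throw new RuntimeException(e);
        }
    }
}
```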

> Search on IndexWriter's RAM Buffer
> --
>
> Key: LUCENE-2312
> URL: https://issues.apache.org/jira/browse/LUCENE-2312
> Project: Lucene - Java
>  Issue Type: New Feature
>  Components: core/search
>Affects Versions: Realtime Branch
>Reporter: Jason Rutherglen
>Assignee: Michael Busch
> Fix For: Realtime Branch
>
> Attachments: LUCENE-2312-FC.patch, LUCENE-2312.patch, 
> LUCENE-2312.patch
>
>
> In order to offer user's near realtime search, without incurring
> an indexing performance penalty, we can implement search on
> IndexWriter's RAM buffer. This is the buffer that is filled in
> RAM as documents are indexed. Currently the RAM buffer is
> flushed to the underlying directory (usually disk) before being
> made searchable. 
> Todays Lucene based NRT systems must incur the cost of merging
> segments, which can slow indexing. 
> Michael Busch has good suggestions regarding how to handle deletes using max 
> doc ids.  
> https://issues.apache.org/jira/browse/LUCENE-2293?focusedCommentId=12841923&page=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#action_12841923
> The area that isn't fully fleshed out is the terms dictionary,
> which needs to be sorted prior to queries executing. Currently
> IW implements a specialized hash table. Michael B has a
> suggestion here: 
> https://issues.apache.org/jira/browse/LUCENE-2293?focusedCommentId=12841915&page=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#action_12841915




[jira] [Updated] (LUCENE-3400) Deprecate / Remove DutchAnalyzer.setStemDictionary

2011-08-24 Thread Chris Male (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-3400?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Male updated LUCENE-3400:
---

Attachment: LUCENE-3400.patch

Absolutely Simon.

New patch for trunk which makes stemDict final.

> Deprecate / Remove DutchAnalyzer.setStemDictionary
> --
>
> Key: LUCENE-3400
> URL: https://issues.apache.org/jira/browse/LUCENE-3400
> Project: Lucene - Java
>  Issue Type: Sub-task
>  Components: modules/analysis
>Reporter: Chris Male
> Attachments: LUCENE-3400-3x.patch, LUCENE-3400.patch, 
> LUCENE-3400.patch
>
>
> DutchAnalyzer.setStemDictionary(File) prevents reuse of TokenStreams (and 
> also uses a File which isn't ideal).  It should be deprecated in 3x, removed 
> in trunk.




[JENKINS] Lucene-trunk - Build # 1662 - Still Failing

2011-08-24 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-trunk/1662/

2 tests failed.
FAILED:  org.apache.lucene.index.TestTermsEnum.testIntersectRandom

Error Message:
Java heap space

Stack Trace:
java.lang.OutOfMemoryError: Java heap space
        at org.apache.lucene.util.automaton.RunAutomaton.<init>(RunAutomaton.java:128)
        at org.apache.lucene.util.automaton.ByteRunAutomaton.<init>(ByteRunAutomaton.java:28)
        at org.apache.lucene.util.automaton.CompiledAutomaton.<init>(CompiledAutomaton.java:134)
        at org.apache.lucene.index.TestTermsEnum.testIntersectRandom(TestTermsEnum.java:264)
        at org.apache.lucene.util.LuceneTestCase$LuceneTestCaseRunner.runChild(LuceneTestCase.java:1530)
        at org.apache.lucene.util.LuceneTestCase$LuceneTestCaseRunner.runChild(LuceneTestCase.java:1432)


FAILED:  org.apache.lucene.util.automaton.TestCompiledAutomaton.testRandom

Error Message:
Java heap space

Stack Trace:
java.lang.OutOfMemoryError: Java heap space
        at org.apache.lucene.util.automaton.RunAutomaton.<init>(RunAutomaton.java:128)
        at org.apache.lucene.util.automaton.ByteRunAutomaton.<init>(ByteRunAutomaton.java:28)
        at org.apache.lucene.util.automaton.CompiledAutomaton.<init>(CompiledAutomaton.java:134)
        at org.apache.lucene.util.automaton.TestCompiledAutomaton.build(TestCompiledAutomaton.java:39)
        at org.apache.lucene.util.automaton.TestCompiledAutomaton.testTerms(TestCompiledAutomaton.java:55)
        at org.apache.lucene.util.automaton.TestCompiledAutomaton.testRandom(TestCompiledAutomaton.java:101)
        at org.apache.lucene.util.LuceneTestCase$LuceneTestCaseRunner.runChild(LuceneTestCase.java:1530)
        at org.apache.lucene.util.LuceneTestCase$LuceneTestCaseRunner.runChild(LuceneTestCase.java:1432)




Build Log (for compile errors):
[...truncated 12910 lines...]






[jira] [Updated] (LUCENE-3401) need to ensure that sims that use collection-level stats (e.g. sumTotalTermFreq) handle non-existent field

2011-08-24 Thread Robert Muir (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-3401?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Muir updated LUCENE-3401:


Attachment: LUCENE-3401.patch

added another related test, no problems though

> need to ensure that sims that use collection-level stats (e.g. 
> sumTotalTermFreq) handle non-existent field
> --
>
> Key: LUCENE-3401
> URL: https://issues.apache.org/jira/browse/LUCENE-3401
> Project: Lucene - Java
>  Issue Type: Bug
>Affects Versions: flexscoring branch
>Reporter: Robert Muir
> Attachments: LUCENE-3401.patch, LUCENE-3401.patch
>
>
> Because of things like queryNorm, unfortunately similarities have to handle 
> the case where they are asked to computeStats() for a term, where the field 
> does not exist at all.
> (Note they will never have to actually score anything, but unless we break 
> how queryNorm works for TFIDF, we have to deal with this case).
> I noticed this while doing some benchmarking, so i created a test to test 
> some cases like this across all the sims.




[jira] [Commented] (LUCENE-3401) need to ensure that sims that use collection-level stats (e.g. sumTotalTermFreq) handle non-existent field

2011-08-24 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-3401?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13090727#comment-13090727
 ] 

Robert Muir commented on LUCENE-3401:
-

Also, for the record, I think it's garbage that some stats such as docFreq just 
silently return 0 here, while something like sumTotalTermFreq is a hassle...

It's already annoying that we have to deal with the -1 preflex case here too... 
maybe we should add helper methods to IndexSearcher so at least you only have 
one case?!
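The helper idea in the comment above could look something like this sketch (the method name and defaulting policy are hypothetical, not Lucene API): fold the preflex "-1 = not supported" convention and the missing-field case into one default so a similarity only handles a single code path.

```java
// Hedged sketch of a "one case" stat helper; names are invented for
// illustration and do not correspond to actual IndexSearcher methods.
public class StatHelper {
    public static long sumTotalTermFreqOrDefault(long rawStat, long fallback) {
        // -1 is the preflex "stat unavailable" marker mentioned in the comment;
        // a non-existent field would also be normalized to the fallback value.
        return rawStat == -1L ? fallback : rawStat;
    }
}
```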

> need to ensure that sims that use collection-level stats (e.g. 
> sumTotalTermFreq) handle non-existent field
> --
>
> Key: LUCENE-3401
> URL: https://issues.apache.org/jira/browse/LUCENE-3401
> Project: Lucene - Java
>  Issue Type: Bug
>Affects Versions: flexscoring branch
>Reporter: Robert Muir
> Attachments: LUCENE-3401.patch
>
>
> Because of things like queryNorm, unfortunately similarities have to handle 
> the case where they are asked to computeStats() for a term, where the field 
> does not exist at all.
> (Note they will never have to actually score anything, but unless we break 
> how queryNorm works for TFIDF, we have to deal with this case).
> I noticed this while doing some benchmarking, so i created a test to test 
> some cases like this across all the sims.




[jira] [Updated] (LUCENE-3401) need to ensure that sims that use collection-level stats (e.g. sumTotalTermFreq) handle non-existent field

2011-08-24 Thread Robert Muir (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-3401?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Muir updated LUCENE-3401:


Attachment: LUCENE-3401.patch

Here's the test with a fix to SimilarityBase.

I tried to rearrange this in a way that it's not confusing.

> need to ensure that sims that use collection-level stats (e.g. 
> sumTotalTermFreq) handle non-existent field
> --
>
> Key: LUCENE-3401
> URL: https://issues.apache.org/jira/browse/LUCENE-3401
> Project: Lucene - Java
>  Issue Type: Bug
>Affects Versions: flexscoring branch
>Reporter: Robert Muir
> Attachments: LUCENE-3401.patch
>
>
> Because of things like queryNorm, unfortunately similarities have to handle 
> the case where they are asked to computeStats() for a term, where the field 
> does not exist at all.
> (Note they will never have to actually score anything, but unless we break 
> how queryNorm works for TFIDF, we have to deal with this case).
> I noticed this while doing some benchmarking, so i created a test to test 
> some cases like this across all the sims.




[jira] [Created] (LUCENE-3401) need to ensure that sims that use collection-level stats (e.g. sumTotalTermFreq) handle non-existent field

2011-08-24 Thread Robert Muir (JIRA)
need to ensure that sims that use collection-level stats (e.g. 
sumTotalTermFreq) handle non-existent field
--

 Key: LUCENE-3401
 URL: https://issues.apache.org/jira/browse/LUCENE-3401
 Project: Lucene - Java
  Issue Type: Bug
Affects Versions: flexscoring branch
Reporter: Robert Muir
 Attachments: LUCENE-3401.patch

Because of things like queryNorm, similarities unfortunately have to handle the 
case where they are asked to computeStats() for a term whose field does not 
exist at all.
(Note they will never have to actually score anything, but unless we break how 
queryNorm works for TFIDF, we have to deal with this case.)

I noticed this while doing some benchmarking, so I created a test covering 
some cases like this across all the sims.




[jira] [Commented] (SOLR-2700) transaction logging

2011-08-24 Thread Jason Rutherglen (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-2700?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13090722#comment-13090722
 ] 

Jason Rutherglen commented on SOLR-2700:


Typically a transaction log is configured to be written to a different hard 
drive than the indexes / database.

> transaction logging
> ---
>
> Key: SOLR-2700
> URL: https://issues.apache.org/jira/browse/SOLR-2700
> Project: Solr
>  Issue Type: New Feature
>Reporter: Yonik Seeley
> Attachments: SOLR-2700.patch, SOLR-2700.patch, SOLR-2700.patch, 
> SOLR-2700.patch, SOLR-2700.patch
>
>
> A transaction log is needed for durability of updates, for a more performant 
> realtime-get, and for replaying updates to recovering peers.




[jira] [Commented] (LUCENE-2959) [GSoC] Implementing State of the Art Ranking for Lucene

2011-08-24 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-2959?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13090685#comment-13090685
 ] 

Robert Muir commented on LUCENE-2959:
-

I rearranged the BM25 in the branch a little bit; it's now as fast as Lucene's 
ranking formula:
{noformat}
                Task    QPS tfidf  StdDev tfidf    QPS bm25  StdDev bm25    Pct diff
            SpanNear         4.29          0.52        4.14         0.49  -24% - 22%
              Phrase         3.97          0.25        3.89         0.25  -13% - 11%
                Term        82.18          4.78       81.00         2.56   -9% -  7%
      TermBGroup1M1P        83.30          2.41       82.12         2.20   -6% -  4%
        SloppyPhrase         8.03          0.31        7.93         0.43  -10% -  8%
         AndHighHigh        19.38          0.59       19.16         0.71   -7% -  5%
            PKLookup       175.49          4.33      173.67         4.20   -5% -  3%
          AndHighMed        40.99          1.12       40.71         1.07   -5% -  4%
         TermGroup1M        25.69          0.39       25.69         0.44   -3% -  3%
              Fuzzy2        42.62          1.83       42.65         1.80   -8% -  8%
              Fuzzy1        91.74          3.48       91.86         3.44   -7% -  7%
             Respell        73.96          3.30       74.18         3.29   -8% -  9%
            Wildcard        56.33          0.97       56.60         1.08   -3% -  4%
             Prefix3        33.36          0.83       33.59         0.97   -4% -  6%
        TermBGroup1M        55.58          1.03       56.17         0.88   -2% -  4%
              IntNRQ        13.38          0.74       13.58         0.94  -10% - 14%
           OrHighMed        11.71          1.18       11.94         0.97  -14% - 22%
          OrHighHigh         8.91          0.74        9.13         0.63  -11% - 19%
{noformat}
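The "Pct diff" ranges in tables like the one above can be reproduced from the means and standard deviations: the worst case divides the challenger's low bound by the baseline's high bound, and the best case the reverse. This is a reconstruction for illustration; the actual benchmark script may compute it differently.

```java
// Compute a [worst%, best%] difference range from two mean QPS values and
// their standard deviations, truncating toward zero to whole percents as in
// the table (e.g. SpanNear: tfidf 4.29±0.52 vs bm25 4.14±0.49 -> -24% - 22%).
public class PctDiffRange {
    public static int[] range(double baseQps, double baseStdDev,
                              double candQps, double candStdDev) {
        double worst = (candQps - candStdDev) / (baseQps + baseStdDev) - 1.0;
        double best = (candQps + candStdDev) / (baseQps - baseStdDev) - 1.0;
        return new int[] { (int) (worst * 100), (int) (best * 100) };
    }
}
```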

> [GSoC] Implementing State of the Art Ranking for Lucene
> ---
>
> Key: LUCENE-2959
> URL: https://issues.apache.org/jira/browse/LUCENE-2959
> Project: Lucene - Java
>  Issue Type: New Feature
>  Components: core/query/scoring, general/javadocs, modules/examples
>Reporter: David Mark Nemeskey
>Assignee: Robert Muir
>  Labels: gsoc2011, lucene-gsoc-11, mentor
> Fix For: flexscoring branch
>
> Attachments: LUCENE-2959_mockdfr.patch, implementation_plan.pdf, 
> proposal.pdf
>
>
> Lucene employs the Vector Space Model (VSM) to rank documents, which compares
> unfavorably to state of the art algorithms, such as BM25. Moreover, the
> architecture is tailored specifically to VSM, which makes the addition of new
> ranking functions a non-trivial task.
> This project aims to bring state of the art ranking methods to Lucene and to
> implement a query architecture with pluggable ranking functions.
> The wiki page for the project can be found at 
> http://wiki.apache.org/lucene-java/SummerOfCode2011ProjectRanking.




[jira] The question about DocStoreOffset

2011-08-24 Thread Ari . Ko


Hi, good morning.

I ran into a problem when creating an index with Lucene 3.2.

My index contains term vector files, but they show up as 0 bytes
when the index is opened in Luke.

According to the explanation of the Lucene index file format, the reason
may be the DocStoreOffset value in the segments file.

In my index, the segments file name is segments_1.

And the part of the source code that creates the index is below.



IndexWriterConfig iwc = new IndexWriterConfig(Version.LUCENE_32, null);
iwc.setIndexDeletionPolicy(new KeepOnlyLastCommitDeletionPolicy());
iwc.setOpenMode(IndexWriterConfig.OpenMode.CREATE_OR_APPEND);
IndexWriter writer = new IndexWriter(dir, iwc);
LogMergePolicy lmp = new LogDocMergePolicy();
lmp.setUseCompoundFile(false);
lmp.setIndexWriter(writer);

writer.addDocument(doc.getDocument(), analyzer);

writer.addIndexes(new Directory[] { form.getDirectory() });



In fact my source is based on the Hadoop contrib issue HADOOP-2951, which
creates an index using map/reduce.
https://issues.apache.org/jira/browse/HADOOP-2951

But that code is based on Lucene 2.3. I changed some deprecated method
calls to make it work with Lucene 3.2.

The original source does not have this problem: the term vector files can
be listed normally and the segments file name is segments_2.

After my modification, the term vector files are created normally, but the
segments file seems incorrect, which prevents the term vector files from
being listed normally in Luke.

Could anybody give me some advice about this?

Thanks in advance.

Best regards.

Yali Hu








[jira] [Created] (SOLR-2728) DIH status: "Total Documents Processed" field disappears

2011-08-24 Thread Shawn Heisey (JIRA)
DIH status: "Total Documents Processed" field disappears


 Key: SOLR-2728
 URL: https://issues.apache.org/jira/browse/SOLR-2728
 Project: Solr
  Issue Type: Bug
  Components: contrib - DataImportHandler
Affects Versions: 3.2
 Environment: Linux idxst0-a 2.6.18-238.12.1.el5.centos.plusxen #1 SMP 
Wed Jun 1 11:57:54 EDT 2011 x86_64 x86_64 x86_64 GNU/Linux

java version "1.6.0_26"
Java(TM) SE Runtime Environment (build 1.6.0_26-b03)
Java HotSpot(TM) 64-Bit Server VM (build 20.1-b02, mixed mode)

Reporter: Shawn Heisey
Priority: Minor
 Fix For: 3.4, 4.0


As soon as the external data source is finished, the "Total Documents 
Processed" field disappears from the /dataimport status response.  It only 
returns once indexing, committing, and optimizing are complete and "status" 
changes to "idle".




[jira] [Commented] (SOLR-2700) transaction logging

2011-08-24 Thread Yonik Seeley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-2700?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13090615#comment-13090615
 ] 

Yonik Seeley commented on SOLR-2700:


Just to get a rough idea of performance, I uploaded one of my CSV test files 
(765MB, 100M docs, 7 small string fields per doc).
Time to complete indexing was 42% longer, and the transaction log grew to 
1.8GB.  The lucene index was 1.2GB.  The log was on the same device, so the 
main impact may have been disk IO.

> transaction logging
> ---
>
> Key: SOLR-2700
> URL: https://issues.apache.org/jira/browse/SOLR-2700
> Project: Solr
>  Issue Type: New Feature
>Reporter: Yonik Seeley
> Attachments: SOLR-2700.patch, SOLR-2700.patch, SOLR-2700.patch, 
> SOLR-2700.patch, SOLR-2700.patch
>
>
> A transaction log is needed for durability of updates, for a more performant 
> realtime-get, and for replaying updates to recovering peers.




[jira] [Updated] (LUCENE-2312) Search on IndexWriter's RAM Buffer

2011-08-24 Thread Jason Rutherglen (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-2312?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Rutherglen updated LUCENE-2312:
-

Attachment: LUCENE-2312.patch

This is a revised version of the LUCENE-2312 patch.  The following are 
miscellaneous notes on the patch and on what remains before it can be 
committed.

Feel free to review the approach taken, e.g., we're getting around 
non-realtime structures through the use of array copies (the arrays can be 
pooled at some point).

* A copy of FreqProxPostingsArray.termFreqs is made per new reader.  That array 
can be pooled.  This is no different than the deleted docs BitVector which is 
created anew per-segment for any deletes that have occurred.

* FreqProxPostingsArray freqUptosRT, proxUptosRT, lastDocIDsRT, and 
lastDocFreqsRT are copied into per new reader (as opposed to instantiating an 
entirely new array for each new reader); this is a slight optimization in 
object allocation.

* For deleting, a DWPT is clothed in an abstract class that exposes the 
necessary methods from segment info, so that deletes may be applied to the RT 
RAM reader.  The deleting is still performed in BufferedDeletesStream.  
BitVectors are cloned as well.  There is room for improvement, eg, pooling the 
BV byte[]’s.

* Documents (FieldsWriter) and term vectors are flushed on each get reader 
call, so that reading will be able to load the data.  We will need to test if 
this is performant.  We are not creating new files so this way of doing things 
may well be efficient.

* We need to measure the cost of the native system array copy.  It could very 
well be quite fast / enough.

* Full posting functionality should be working, including payloads

* Field caching may be implemented as a new field cache that is growable and 
enables locked replacement of the underlying array

* String-to-string ordinal comparison caches need to be figured out.  The RAM 
readers cannot maintain a sorted terms index the way statically sized segments 
do

* When a field cache value is first being created, it needs to obtain the 
indexing lock on the DWPT.  Otherwise documents will continue to be indexed, 
new values created, while the array will miss the new values.  The downside is 
that while the array is initially being created, indexing will stop.  This can 
probably be solved at some point by only locking during the creation of the 
field cache array, and then notifying the DWPT of the new array.  New values 
would then accumulate into the array from the point of the max doc of the 
reader the values creator is working from.

* The terms dictionary is a ConcurrentSkipListMap.  We can periodically convert 
it into a sorted [by term] int[], that has an FST on top.
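The terms-dictionary idea in the last note can be sketched in plain Java (this is illustrative, not the patch's code; the patch suggests a sorted int[] with an FST on top, and a String list stands in here): a ConcurrentSkipListMap keeps terms sorted under concurrent writes, so a periodic snapshot needs no extra sort.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ConcurrentSkipListMap;

// Sketch of a RAM-resident terms dictionary: writers insert terms concurrently,
// and a reader can periodically snapshot the already-sorted key set.
public class RamTermsDict {
    private final ConcurrentSkipListMap<String, Integer> terms =
            new ConcurrentSkipListMap<>();

    public void add(String term, int postingsPointer) {
        terms.put(term, postingsPointer);
    }

    // Iteration order of a skip-list map is ascending key order, so this
    // snapshot is sorted by term without any additional work.
    public List<String> sortedSnapshot() {
        return new ArrayList<>(terms.keySet());
    }
}
```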

Have fun reviewing! :)

> Search on IndexWriter's RAM Buffer
> --
>
> Key: LUCENE-2312
> URL: https://issues.apache.org/jira/browse/LUCENE-2312
> Project: Lucene - Java
>  Issue Type: New Feature
>  Components: core/search
>Affects Versions: Realtime Branch
>Reporter: Jason Rutherglen
>Assignee: Michael Busch
> Fix For: Realtime Branch
>
> Attachments: LUCENE-2312-FC.patch, LUCENE-2312.patch, 
> LUCENE-2312.patch
>
>
> In order to offer user's near realtime search, without incurring
> an indexing performance penalty, we can implement search on
> IndexWriter's RAM buffer. This is the buffer that is filled in
> RAM as documents are indexed. Currently the RAM buffer is
> flushed to the underlying directory (usually disk) before being
> made searchable. 
> Todays Lucene based NRT systems must incur the cost of merging
> segments, which can slow indexing. 
> Michael Busch has good suggestions regarding how to handle deletes using max 
> doc ids.  
> https://issues.apache.org/jira/browse/LUCENE-2293?focusedCommentId=12841923&page=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#action_12841923
> The area that isn't fully fleshed out is the terms dictionary,
> which needs to be sorted prior to queries executing. Currently
> IW implements a specialized hash table. Michael B has a
> suggestion here: 
> https://issues.apache.org/jira/browse/LUCENE-2293?focusedCommentId=12841915&page=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#action_12841915




[jira] [Updated] (SOLR-1725) Script based UpdateRequestProcessorFactory

2011-08-24 Thread Simon Rosenthal (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-1725?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Simon Rosenthal updated SOLR-1725:
--

Attachment: SOLR-1725-rev1.patch

With the hope that this can be committed to trunk soon, I updated the patch to 
work with the reorganized sources in trunk, plus a couple of other small 
changes so that the tests would compile.

Some tests fail - I'm seeing

[junit] ERROR: SolrIndexSearcher opens=30 closes=28
[junit] junit.framework.AssertionFailedError: ERROR: SolrIndexSearcher 
opens=30 closes=28

in ScriptUpdateProcessorFactoryTest.



> Script based UpdateRequestProcessorFactory
> --
>
> Key: SOLR-1725
> URL: https://issues.apache.org/jira/browse/SOLR-1725
> Project: Solr
>  Issue Type: New Feature
>  Components: update
>Affects Versions: 1.4
>Reporter: Uri Boness
> Attachments: SOLR-1725-rev1.patch, SOLR-1725.patch, SOLR-1725.patch, 
> SOLR-1725.patch, SOLR-1725.patch, SOLR-1725.patch
>
>
> A script based UpdateRequestProcessorFactory (Uses JDK6 script engine 
> support). The main goal of this plugin is to be able to configure/write 
> update processors without the need to write and package Java code.
> The update request processor factory enables writing update processors in 
> scripts located in the {{solr.solr.home}} directory. The factory accepts one 
> (mandatory) configuration parameter named {{scripts}} which accepts a 
> comma-separated list of file names. It will look for these files under the 
> {{conf}} directory in solr home. When multiple scripts are defined, their 
> execution order is defined by the lexicographical order of the script file 
> name (so {{scriptA.js}} will be executed before {{scriptB.js}}).
> The script language is resolved based on the script file extension (that is, 
> a *.js files will be treated as a JavaScript script), therefore an extension 
> is mandatory.
> Each script file is expected to have one or more methods with the same 
> signature as the methods in the {{UpdateRequestProcessor}} interface. It is 
> *not* required to define all methods, only those that are required by the 
> processing logic.
> The following variables are defined as global variables for each script:
>  * {{req}} - The SolrQueryRequest
>  * {{rsp}} - The SolrQueryResponse
>  * {{logger}} - A logger that can be used for logging purposes in the script
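The lexicographic execution order described in the quoted issue can be sketched as follows (file names are hypothetical examples, and this is not the factory's actual code):

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

// Illustrative only: the factory runs scripts in the lexicographic order of
// their file names, so scriptA.js executes before scriptB.js.
public class ScriptOrder {
    public static List<String> executionOrder(List<String> scriptFiles) {
        List<String> ordered = new ArrayList<>(scriptFiles);
        Collections.sort(ordered); // lexicographic, per the description
        return ordered;
    }
}
```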




[jira] [Updated] (SOLR-2700) transaction logging

2011-08-24 Thread Yonik Seeley (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-2700?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yonik Seeley updated SOLR-2700:
---

Attachment: SOLR-2700.patch

Patch that updates to trunk and comments out the prints (those were actually 
causing test failures for some reason...)

{code}
[junit] Testsuite: org.apache.solr.update.Batch-With-Multiple-Tests
[junit] Testcase: 
org.apache.solr.update.Batch-With-Multiple-Tests:testDistribSearch:   
Caused an ERROR
[junit] Forked Java VM exited abnormally. Please note the time in the 
report does not reflect the time until the VM exit.
[junit] junit.framework.AssertionFailedError: Forked Java VM exited 
abnormally. Please note the time in the report does not reflect the time until 
the VM exit.
[junit] at java.lang.Thread.run(Thread.java:680)
{code} 

> transaction logging
> ---
>
> Key: SOLR-2700
> URL: https://issues.apache.org/jira/browse/SOLR-2700
> Project: Solr
>  Issue Type: New Feature
>Reporter: Yonik Seeley
> Attachments: SOLR-2700.patch, SOLR-2700.patch, SOLR-2700.patch, 
> SOLR-2700.patch, SOLR-2700.patch
>
>
> A transaction log is needed for durability of updates, for a more performant 
> realtime-get, and for replaying updates to recovering peers.




[jira] [Updated] (SOLR-2066) Search Grouping: support distributed search

2011-08-24 Thread Martijn van Groningen (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-2066?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Martijn van Groningen updated SOLR-2066:


Attachment: SOLR-2066.patch

Fixed the sorting issue with groups and inside groups when a sorting value is 
null.

> Search Grouping: support distributed search
> ---
>
> Key: SOLR-2066
> URL: https://issues.apache.org/jira/browse/SOLR-2066
> Project: Solr
>  Issue Type: Sub-task
>Reporter: Yonik Seeley
> Fix For: 3.4, 4.0
>
> Attachments: SOLR-2066.patch, SOLR-2066.patch, SOLR-2066.patch, 
> SOLR-2066.patch, SOLR-2066.patch
>
>
> Support distributed field collapsing / search grouping.




[jira] [Commented] (SOLR-2565) Prevent IW#close and cut over to IW#commit

2011-08-24 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-2565?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13090463#comment-13090463
 ] 

Robert Muir commented on SOLR-2565:
---

I ran the original patch (with the @ignore still enabled) 100x each on my fast 
and slow machine.

> Prevent IW#close and cut over to IW#commit
> --
>
> Key: SOLR-2565
> URL: https://issues.apache.org/jira/browse/SOLR-2565
> Project: Solr
>  Issue Type: Improvement
>  Components: update
>Affects Versions: 4.0
>Reporter: Simon Willnauer
>Assignee: Mark Miller
> Fix For: 4.0
>
> Attachments: SOLR-2565-revert.patch, SOLR-2565.patch, 
> SOLR-2565.patch, SOLR-2565.patch, SOLR-2565__HuperDuperAutoCommitTest.patch, 
> dump.txt, fix+hossmans-test.patch, slowtests.txt
>
>
> Spinnoff from SOLR-2193. We already have a branch to work on this issue here 
> https://svn.apache.org/repos/asf/lucene/dev/branches/solr2193 
> The main goal here is to prevent solr from closing the IW and use IW#commit 
> instead. AFAIK the main issues here are:
> The update handler needs an overhaul.
> A few goals I think we might want to look at:
> 1. Expose the SolrIndexWriter in the api or add the proper abstractions to 
> get done what we now do with special casing:
> 2. Stop closing the IndexWriter and start using commit (still lazy IW init 
> though).
> 3. Drop iwAccess, iwCommit locks and sync mostly at the Lucene level.
> 4. Address the current issues we face because multiple original/'reloaded' 
> cores can have a different IndexWriter on the same index.
> Eventually this is a preparation for NRT support in Solr which I will create 
> a followup issue for.




[jira] [Commented] (SOLR-2700) transaction logging

2011-08-24 Thread Yonik Seeley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-2700?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13090438#comment-13090438
 ] 

Yonik Seeley commented on SOLR-2700:


Here's an update that among other things uses the "tlog" directory under the 
data directory.

> transaction logging
> ---
>
> Key: SOLR-2700
> URL: https://issues.apache.org/jira/browse/SOLR-2700
> Project: Solr
>  Issue Type: New Feature
>Reporter: Yonik Seeley
> Attachments: SOLR-2700.patch, SOLR-2700.patch, SOLR-2700.patch, 
> SOLR-2700.patch
>
>
> A transaction log is needed for durability of updates, for a more performant 
> realtime-get, and for replaying updates to recovering peers.

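The transaction log described above (durability of updates, faster
realtime-get, replay for recovering peers) can be sketched as an append-only
log. This is an illustrative in-memory stand-in, not Solr's actual tlog
implementation (which writes files under the data directory); all names are
hypothetical:

```java
import java.util.ArrayList;
import java.util.List;

// Minimal sketch of a transaction log: updates are appended before being
// applied to the index, and can later be looked up or replayed.
public class TLogSketch {
    private final List<String> log = new ArrayList<>();

    // Append the raw update before applying it; the returned position can
    // serve as a pointer for realtime-get.
    public long append(String update) {
        log.add(update);
        return log.size() - 1;
    }

    // Realtime-get: fetch an update by position without needing a committed
    // index searcher.
    public String lookup(long pos) {
        return log.get((int) pos);
    }

    // Replay everything from a position onward, e.g. to bring a recovering
    // peer up to date.
    public List<String> replayFrom(long pos) {
        return new ArrayList<>(log.subList((int) pos, log.size()));
    }
}
```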



[jira] [Updated] (SOLR-2700) transaction logging

2011-08-24 Thread Yonik Seeley (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-2700?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yonik Seeley updated SOLR-2700:
---

Attachment: SOLR-2700.patch

> transaction logging
> ---
>
> Key: SOLR-2700
> URL: https://issues.apache.org/jira/browse/SOLR-2700
> Project: Solr
>  Issue Type: New Feature
>Reporter: Yonik Seeley
> Attachments: SOLR-2700.patch, SOLR-2700.patch, SOLR-2700.patch, 
> SOLR-2700.patch
>
>
> A transaction log is needed for durability of updates, for a more performant 
> realtime-get, and for replaying updates to recovering peers.




[jira] [Updated] (SOLR-2565) Prevent IW#close and cut over to IW#commit

2011-08-24 Thread Mark Miller (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-2565?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Miller updated SOLR-2565:
--

Attachment: fix+hossmans-test.patch

A fix for the delete issue, plus Hossman's test with the previously ignored 
test method re-enabled.

That test now fails for me due to what looks like a timing issue.

> Prevent IW#close and cut over to IW#commit
> --
>
> Key: SOLR-2565
> URL: https://issues.apache.org/jira/browse/SOLR-2565
> Project: Solr
>  Issue Type: Improvement
>  Components: update
>Affects Versions: 4.0
>Reporter: Simon Willnauer
>Assignee: Mark Miller
> Fix For: 4.0
>
> Attachments: SOLR-2565-revert.patch, SOLR-2565.patch, 
> SOLR-2565.patch, SOLR-2565.patch, SOLR-2565__HuperDuperAutoCommitTest.patch, 
> dump.txt, fix+hossmans-test.patch, slowtests.txt
>
>
> Spinoff from SOLR-2193. We already have a branch to work on this issue here 
> https://svn.apache.org/repos/asf/lucene/dev/branches/solr2193 
> The main goal here is to prevent solr from closing the IW and use IW#commit 
> instead. AFAIK the main issues here are:
> The update handler needs an overhaul.
> A few goals I think we might want to look at:
> 1. Expose the SolrIndexWriter in the api or add the proper abstractions to 
> get done what we now do with special casing:
> 2. Stop closing the IndexWriter and start using commit (still lazy IW init 
> though).
> 3. Drop iwAccess, iwCommit locks and sync mostly at the Lucene level.
> 4. Address the current issues we face because multiple original/'reloaded' 
> cores can have a different IndexWriter on the same index.
> Eventually this is a preparation for NRT support in Solr which I will create 
> a followup issue for.




[jira] [Commented] (SOLR-1301) Solr + Hadoop

2011-08-24 Thread Alexander Kanarsky (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-1301?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13090429#comment-13090429
 ] 

Alexander Kanarsky commented on SOLR-1301:
--

Mark, I planned to add some unit tests and the packaging for Hadoop 0.21.x, but 
unfortunately have had no time for this. The problem with unit tests is that you 
need either to use your own external Hadoop cluster or to run a mini-cluster, 
and neither way works well for a Solr contrib module, in my opinion. I tried 
the MRUnit approach a while ago with 0.20.x, without success. Maybe I will get 
back to this and try again with 0.21, but I do not anticipate that until 
mid-September.

> Solr + Hadoop
> -
>
> Key: SOLR-1301
> URL: https://issues.apache.org/jira/browse/SOLR-1301
> Project: Solr
>  Issue Type: Improvement
>Affects Versions: 1.4
>Reporter: Andrzej Bialecki 
> Fix For: 3.4, 4.0
>
> Attachments: README.txt, SOLR-1301-hadoop-0-20.patch, 
> SOLR-1301-hadoop-0-20.patch, SOLR-1301.patch, SOLR-1301.patch, 
> SOLR-1301.patch, SOLR-1301.patch, SOLR-1301.patch, SOLR-1301.patch, 
> SOLR-1301.patch, SOLR-1301.patch, SOLR-1301.patch, SolrRecordWriter.java, 
> commons-logging-1.0.4.jar, commons-logging-api-1.0.4.jar, 
> hadoop-0.19.1-core.jar, hadoop-0.20.1-core.jar, hadoop.patch, log4j-1.2.15.jar
>
>
> This patch contains  a contrib module that provides distributed indexing 
> (using Hadoop) to Solr EmbeddedSolrServer. The idea behind this module is 
> twofold:
> * provide an API that is familiar to Hadoop developers, i.e. that of 
> OutputFormat
> * avoid unnecessary export and (de)serialization of data maintained on HDFS. 
> SolrOutputFormat consumes data produced by reduce tasks directly, without 
> storing it in intermediate files. Furthermore, by using an 
> EmbeddedSolrServer, the indexing task is split into as many parts as there 
> are reducers, and the data to be indexed is not sent over the network.
> Design
> --
> Key/value pairs produced by reduce tasks are passed to SolrOutputFormat, 
> which in turn uses SolrRecordWriter to write this data. SolrRecordWriter 
> instantiates an EmbeddedSolrServer, and it also instantiates an 
> implementation of SolrDocumentConverter, which is responsible for turning 
> Hadoop (key, value) into a SolrInputDocument. This data is then added to a 
> batch, which is periodically submitted to EmbeddedSolrServer. When reduce 
> task completes, and the OutputFormat is closed, SolrRecordWriter calls 
> commit() and optimize() on the EmbeddedSolrServer.
> The API provides facilities to specify an arbitrary existing solr.home 
> directory, from which the conf/ and lib/ files will be taken.
> This process results in the creation of as many partial Solr home directories 
> as there were reduce tasks. The output shards are placed in the output 
> directory on the default filesystem (e.g. HDFS). Such part-N directories 
> can be used to run N shard servers. Additionally, users can specify the 
> number of reduce tasks, in particular 1 reduce task, in which case the output 
> will consist of a single shard.
> An example application is provided that processes large CSV files and uses 
> this API. It uses a custom CSV processing to avoid (de)serialization overhead.
> This patch relies on hadoop-core-0.19.1.jar - I attached the jar to this 
> issue, you should put it in contrib/hadoop/lib.
> Note: the development of this patch was sponsored by an anonymous contributor 
> and approved for release under Apache License.

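The SolrRecordWriter flow described above -- convert each (key, value) pair to
a document, buffer documents in batches, submit each full batch, and commit
when the writer is closed -- can be sketched generically. `Server` and
`BatchingWriter` below are tiny invented stand-ins for EmbeddedSolrServer and
SolrRecordWriter, with the conversion step reduced to string formatting:

```java
import java.util.ArrayList;
import java.util.List;

public class RecordWriterSketch {
    // Stand-in for the embedded server: accepts batches, commits at the end.
    interface Server {
        void add(List<String> batch);
        void commit();
    }

    static class BatchingWriter {
        private final Server server;
        private final int batchSize;
        private final List<String> batch = new ArrayList<>();

        BatchingWriter(Server server, int batchSize) {
            this.server = server;
            this.batchSize = batchSize;
        }

        // Analogue of write(key, value): convert, buffer, flush full batches.
        void write(String key, String value) {
            batch.add(key + "=" + value); // stands in for SolrDocumentConverter
            if (batch.size() >= batchSize) flush();
        }

        private void flush() {
            if (!batch.isEmpty()) {
                server.add(new ArrayList<>(batch));
                batch.clear();
            }
        }

        // Analogue of close(): flush the tail batch, then commit.
        void close() {
            flush();
            server.commit();
        }
    }
}
```

Because each reducer owns its own writer and server, the batches never cross
the network -- which is the point of the embedded-server design above.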



[jira] [Commented] (SOLR-2565) Prevent IW#close and cut over to IW#commit

2011-08-24 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-2565?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13090412#comment-13090412
 ] 

Robert Muir commented on SOLR-2565:
---

Looks cool, I'm beasting this test on a couple of machines now...

> Prevent IW#close and cut over to IW#commit
> --
>
> Key: SOLR-2565
> URL: https://issues.apache.org/jira/browse/SOLR-2565
> Project: Solr
>  Issue Type: Improvement
>  Components: update
>Affects Versions: 4.0
>Reporter: Simon Willnauer
>Assignee: Mark Miller
> Fix For: 4.0
>
> Attachments: SOLR-2565-revert.patch, SOLR-2565.patch, 
> SOLR-2565.patch, SOLR-2565.patch, SOLR-2565__HuperDuperAutoCommitTest.patch, 
> dump.txt, slowtests.txt
>
>
> Spinoff from SOLR-2193. We already have a branch to work on this issue here 
> https://svn.apache.org/repos/asf/lucene/dev/branches/solr2193 
> The main goal here is to prevent solr from closing the IW and use IW#commit 
> instead. AFAIK the main issues here are:
> The update handler needs an overhaul.
> A few goals I think we might want to look at:
> 1. Expose the SolrIndexWriter in the api or add the proper abstractions to 
> get done what we now do with special casing:
> 2. Stop closing the IndexWriter and start using commit (still lazy IW init 
> though).
> 3. Drop iwAccess, iwCommit locks and sync mostly at the Lucene level.
> 4. Address the current issues we face because multiple original/'reloaded' 
> cores can have a different IndexWriter on the same index.
> Eventually this is a preparation for NRT support in Solr which I will create 
> a followup issue for.




[JENKINS] Lucene-Solr-tests-only-trunk - Build # 10285 - Still Failing

2011-08-24 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-tests-only-trunk/10285/

All tests passed

Build Log (for compile errors):
[...truncated 12824 lines...]






[jira] [Updated] (SOLR-2565) Prevent IW#close and cut over to IW#commit

2011-08-24 Thread Hoss Man (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-2565?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hoss Man updated SOLR-2565:
---

Attachment: SOLR-2565__HuperDuperAutoCommitTest.patch

As Mark mentioned, I started looking into a better overall approach for testing 
autocommit -- I could never reproduce any of the failures charlie cron was 
complaining about, but the tests didn't make any sense to me anyway.

The patch demonstrates a new overall approach, using more detailed monitoring 
of events -- not just "the most recent event" but all of them in a queue of 
(rough) timestamps.

At the moment one of the tests has an @Ignore:nocommit because of the delete 
bug that Miller mentioned, but it would be helpful to know if people who were 
seeing problems running AutoCommitTest.testSoftAndHardCommitMaxTime (ie: 
charlie cron and Simon) could try this patch out and see if it works better for 
them.

> Prevent IW#close and cut over to IW#commit
> --
>
> Key: SOLR-2565
> URL: https://issues.apache.org/jira/browse/SOLR-2565
> Project: Solr
>  Issue Type: Improvement
>  Components: update
>Affects Versions: 4.0
>Reporter: Simon Willnauer
>Assignee: Mark Miller
> Fix For: 4.0
>
> Attachments: SOLR-2565-revert.patch, SOLR-2565.patch, 
> SOLR-2565.patch, SOLR-2565.patch, SOLR-2565__HuperDuperAutoCommitTest.patch, 
> dump.txt, slowtests.txt
>
>
> Spinoff from SOLR-2193. We already have a branch to work on this issue here 
> https://svn.apache.org/repos/asf/lucene/dev/branches/solr2193 
> The main goal here is to prevent solr from closing the IW and use IW#commit 
> instead. AFAIK the main issues here are:
> The update handler needs an overhaul.
> A few goals I think we might want to look at:
> 1. Expose the SolrIndexWriter in the api or add the proper abstractions to 
> get done what we now do with special casing:
> 2. Stop closing the IndexWriter and start using commit (still lazy IW init 
> though).
> 3. Drop iwAccess, iwCommit locks and sync mostly at the Lucene level.
> 4. Address the current issues we face because multiple original/'reloaded' 
> cores can have a different IndexWriter on the same index.
> Eventually this is a preparation for NRT support in Solr which I will create 
> a followup issue for.

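The "queue of (rough) timestamps" idea above can be sketched as a small event
recorder: instead of remembering only the most recent commit, every commit
event is pushed onto a queue with a timestamp, and the test asserts on the
whole sequence. `CommitTracker` and its method names are invented for
illustration and are not the actual test code:

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;

public class CommitTracker {
    // Every commit event is kept, oldest first, with a rough timestamp.
    private final BlockingQueue<Long> events = new LinkedBlockingQueue<>();

    // Called by a (simulated) commit listener.
    public void onCommit() {
        events.add(System.nanoTime());
    }

    // Wait for the next commit event; returns null if none arrives in time,
    // letting a test assert both "a commit happened" and "no extra commits".
    public Long poll(long timeoutMs) throws InterruptedException {
        return events.poll(timeoutMs, TimeUnit.MILLISECONDS);
    }

    public int pending() {
        return events.size();
    }
}
```

A max-time autocommit test can then compare consecutive timestamps against the
configured interval instead of racing against a single mutable field.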



[JENKINS] Lucene-Solr-tests-only-trunk - Build # 10284 - Still Failing

2011-08-24 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-tests-only-trunk/10284/

All tests passed

Build Log (for compile errors):
[...truncated 12840 lines...]






Re: Solr support for stored procedures

2011-08-24 Thread Maria Vazquez
Thank you Erick!


On 8/24/11 9:37 AM, "Erick Erickson"  wrote:

> This question is more suited to the user's list, please post any additional
> questions/comments over there. This list is intended for internal Lucene/Solr
> development discussions
> 
> But I don't think so. See: https://issues.apache.org/jira/browse/SOLR-1262
> 
> Best
> Erick
> 
> On Tue, Aug 23, 2011 at 7:33 PM, Maria Vazquez 
> wrote:
>> Does Solr support calling stored procedures in the data-config.xml?
>> 
>>    > rootEntity="true"
>> dataSource="db_qa"
>> query="{ CALL getTaxonomyData ( 'main' ) }"
>> transformer="RegexTransformer"
>> onError="continue">
>> 
>> 
>> Thanks!
>> Maria
>> 
>> 
> 





[Lucene.Net] [jira] [Commented] (LUCENENET-407) Signing the assembly

2011-08-24 Thread michael herndon (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENENET-407?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13090345#comment-13090345
 ] 

michael herndon commented on LUCENENET-407:
---

It could have been created as a patch, and that is as far as it's gotten.

Assuming we're just talking about creating a simple .snk to produce strongly 
named assemblies when we talk about signing the assembly:

I didn't see an .snk file in the Lucene.Net_2_9_4g branch when setting up the 
build scripts. I haven't looked in trunk yet.

The Lucene.Net_4e branch should already have an .snk. I could add the same 
.snk file to trunk and the 2.9.4 branch when I add the build scripts to the 
trunk this weekend, so that all the branches are building strongly named 
assemblies.

Someone would still need to go back to the tag and create a 2.9.2 version using 
the .snk whenever the next release does come out.
 

> Signing the assembly
> 
>
> Key: LUCENENET-407
> URL: https://issues.apache.org/jira/browse/LUCENENET-407
> Project: Lucene.Net
>  Issue Type: Improvement
>  Components: Lucene.Net Core
>Affects Versions: Lucene.Net 2.9.2, Lucene.Net 2.9.4, Lucene.Net 3.x
>Reporter: Itamar Syn-Hershko
> Fix For: Lucene.Net 2.9.4, Lucene.Net 3.x
>
> Attachments: Lucene.NET.snk, signing.patch
>
>
> For our usage of Lucene.NET we need the assembly to be signed.





Re: [JENKINS] Lucene-Solr-tests-only-3.x-java7 - Build # 312 - Failure

2011-08-24 Thread Robert Muir
Great, Java 7 has broken BreakIterators too. I'll try to create a
testcase and open an Oracle bug.

On Wed, Aug 24, 2011 at 12:53 PM, Apache Jenkins Server
 wrote:
> Build: https://builds.apache.org/job/Lucene-Solr-tests-only-3.x-java7/312/
>
> 1 tests failed.
> REGRESSION:  org.apache.lucene.analysis.th.TestThaiAnalyzer.testRandomStrings
>
> Error Message:
> 255
>
> Stack Trace:
> java.lang.ArrayIndexOutOfBoundsException: 255
>        at 
> java.text.DictionaryBasedBreakIterator.lookupCategory(DictionaryBasedBreakIterator.java:319)
>        at 
> java.text.RuleBasedBreakIterator.handleNext(RuleBasedBreakIterator.java:926)
>        at 
> java.text.DictionaryBasedBreakIterator.handleNext(DictionaryBasedBreakIterator.java:281)
>        at 
> java.text.RuleBasedBreakIterator.next(RuleBasedBreakIterator.java:621)
>        at 
> org.apache.lucene.analysis.th.ThaiWordFilter.incrementToken(ThaiWordFilter.java:128)
>        at 
> org.apache.lucene.analysis.FilteringTokenFilter.incrementToken(FilteringTokenFilter.java:48)
>        at 
> org.apache.lucene.analysis.BaseTokenStreamTestCase.checkRandomData(BaseTokenStreamTestCase.java:280)
>        at 
> org.apache.lucene.analysis.BaseTokenStreamTestCase.checkRandomData(BaseTokenStreamTestCase.java:247)
>        at 
> org.apache.lucene.analysis.th.TestThaiAnalyzer.testRandomStrings(TestThaiAnalyzer.java:153)
>        at 
> org.apache.lucene.util.LuceneTestCase$LuceneTestCaseRunner.runChild(LuceneTestCase.java:1339)
>        at 
> org.apache.lucene.util.LuceneTestCase$LuceneTestCaseRunner.runChild(LuceneTestCase.java:1241)
>
>
>
>
> Build Log (for compile errors):
> [...truncated 9894 lines...]
>
>
>
>
>



-- 
lucidimagination.com

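The failure is inside the JDK's dictionary-based BreakIterator, which backs
Thai word segmentation in ThaiWordFilter. A minimal harness for that code path
is sketched below; it merely exercises the iterator API and does not by itself
reproduce the Java 7 JDK bug, and the class and method names are invented:

```java
import java.text.BreakIterator;
import java.util.ArrayList;
import java.util.List;
import java.util.Locale;

public class ThaiBreakCheck {
    // Split text on word boundaries reported by the Thai-locale BreakIterator,
    // dropping whitespace-only segments. On a JDK with the Thai dictionary,
    // getWordInstance(new Locale("th")) is dictionary-based -- the code path
    // that threw ArrayIndexOutOfBoundsException in the Jenkins failure above.
    static List<String> words(String text) {
        BreakIterator bi = BreakIterator.getWordInstance(new Locale("th"));
        bi.setText(text);
        List<String> out = new ArrayList<>();
        int start = bi.first();
        for (int end = bi.next(); end != BreakIterator.DONE;
                start = end, end = bi.next()) {
            String w = text.substring(start, end).trim();
            if (!w.isEmpty()) out.add(w);
        }
        return out;
    }
}
```

Feeding this method random strings (as checkRandomData does for the analyzer)
would be the natural way to hunt for a standalone JDK testcase.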



[JENKINS] Lucene-Solr-tests-only-3.x-java7 - Build # 312 - Failure

2011-08-24 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-tests-only-3.x-java7/312/

1 tests failed.
REGRESSION:  org.apache.lucene.analysis.th.TestThaiAnalyzer.testRandomStrings

Error Message:
255

Stack Trace:
java.lang.ArrayIndexOutOfBoundsException: 255
at 
java.text.DictionaryBasedBreakIterator.lookupCategory(DictionaryBasedBreakIterator.java:319)
at 
java.text.RuleBasedBreakIterator.handleNext(RuleBasedBreakIterator.java:926)
at 
java.text.DictionaryBasedBreakIterator.handleNext(DictionaryBasedBreakIterator.java:281)
at 
java.text.RuleBasedBreakIterator.next(RuleBasedBreakIterator.java:621)
at 
org.apache.lucene.analysis.th.ThaiWordFilter.incrementToken(ThaiWordFilter.java:128)
at 
org.apache.lucene.analysis.FilteringTokenFilter.incrementToken(FilteringTokenFilter.java:48)
at 
org.apache.lucene.analysis.BaseTokenStreamTestCase.checkRandomData(BaseTokenStreamTestCase.java:280)
at 
org.apache.lucene.analysis.BaseTokenStreamTestCase.checkRandomData(BaseTokenStreamTestCase.java:247)
at 
org.apache.lucene.analysis.th.TestThaiAnalyzer.testRandomStrings(TestThaiAnalyzer.java:153)
at 
org.apache.lucene.util.LuceneTestCase$LuceneTestCaseRunner.runChild(LuceneTestCase.java:1339)
at 
org.apache.lucene.util.LuceneTestCase$LuceneTestCaseRunner.runChild(LuceneTestCase.java:1241)




Build Log (for compile errors):
[...truncated 9894 lines...]






[JENKINS] Lucene-Solr-tests-only-trunk - Build # 10283 - Failure

2011-08-24 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-tests-only-trunk/10283/

All tests passed

Build Log (for compile errors):
[...truncated 12803 lines...]






Re: Solr support for stored procedures

2011-08-24 Thread Erick Erickson
This question is more suited to the user's list; please post any additional
questions/comments over there. This list is intended for internal Lucene/Solr
development discussions.

But I don't think so. See: https://issues.apache.org/jira/browse/SOLR-1262

Best
Erick

On Tue, Aug 23, 2011 at 7:33 PM, Maria Vazquez  wrote:
> Does Solr support calling stored procedures in the data-config.xml?
>
>     rootEntity="true"
> dataSource="db_qa"
> query="{ CALL getTaxonomyData ( ‘main’ ) }"
> transformer="RegexTransformer"
> onError="continue">
>
>
> Thanks!
> Maria
>
>




[jira] [Commented] (SOLR-2565) Prevent IW#close and cut over to IW#commit

2011-08-24 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-2565?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13090331#comment-13090331
 ] 

Mark Miller commented on SOLR-2565:
---

Hossman has been working on a new test, and it has picked up a further issue 
along the lines of the one Vadim brought up - if you use time-based autocommit 
for both hard and soft commits, the soft commits won't happen when triggered by 
deletes.

> Prevent IW#close and cut over to IW#commit
> --
>
> Key: SOLR-2565
> URL: https://issues.apache.org/jira/browse/SOLR-2565
> Project: Solr
>  Issue Type: Improvement
>  Components: update
>Affects Versions: 4.0
>Reporter: Simon Willnauer
>Assignee: Mark Miller
> Fix For: 4.0
>
> Attachments: SOLR-2565-revert.patch, SOLR-2565.patch, 
> SOLR-2565.patch, SOLR-2565.patch, dump.txt, slowtests.txt
>
>
> Spinoff from SOLR-2193. We already have a branch to work on this issue here 
> https://svn.apache.org/repos/asf/lucene/dev/branches/solr2193 
> The main goal here is to prevent solr from closing the IW and use IW#commit 
> instead. AFAIK the main issues here are:
> The update handler needs an overhaul.
> A few goals I think we might want to look at:
> 1. Expose the SolrIndexWriter in the api or add the proper abstractions to 
> get done what we now do with special casing:
> 2. Stop closing the IndexWriter and start using commit (still lazy IW init 
> though).
> 3. Drop iwAccess, iwCommit locks and sync mostly at the Lucene level.
> 4. Address the current issues we face because multiple original/'reloaded' 
> cores can have a different IndexWriter on the same index.
> Eventually this is a preparation for NRT support in Solr which I will create 
> a followup issue for.




Re: Need help with lucene 3.2/3.2

2011-08-24 Thread Erick Erickson
This question is probably better suited to the user's list; this list is for
internal Lucene development...

But at a guess, you're on a *nix system, and you're not closing the old
searchers correctly, leaving open files around. A quick check would be
to see if the files disappear when you restart your server.

If you use reopen, be sure you close the old searcher; that's not done
automatically by reopen, you have to do it yourself when the underlying
reader changed. There's an example in the API at:

http://lucene.apache.org/java/3_0_0/api/core/org/apache/lucene/index/IndexReader.html#reopen()

And if this turns out not to help, could you post further questions over on
the user's list?

Best
Erick

On Tue, Aug 23, 2011 at 7:05 PM, Wilson Penha Jr.
 wrote:
>
>
> Hello there,
> I need some help with an odd behavior in Lucene since version 2.9.
> I have a project on Lucene 2.4.1 that runs fine with 600k documents, where I
> use two index folders to separate two sets of indexes, one primary and
> one secondary; this allows me to switch the IndexReader/IndexSearcher at the
> time I want to build a fresh index, and also to keep a safe copy of it.
> While using Lucene 2.4.1 I have no problem with my application:
> a web application hits the shared searcher many times, and when I have to
> switch the index, I check which one is stale, then close the old one and
> open the new one. Even with many threads in my container, it can safely
> close and open these without any problem.
> These days, since version 3 arrived, I've been trying to upgrade my
> application. At the end of this, everything seemed OK -- BUT IT WAS NOT.
> I used JMeter to put it under load, and then I got this:
> java    12650 root  290r   REG             253,11   70277718    6193160
> /index/secondary/spell/_12.frq (deleted)
> java    12650 root  291r   REG             253,11   31312083    6193167
> /index/secondary/spell/_10.prx
> java    12650 root  292r   REG             253,11   24031741    6193158
> //index/secondary/spell/_12.tis (deleted)
> java    12650 root  293r   REG             253,11   24031741    6193158
> /index/secondary/spell/_12.tis (deleted)
> java    12650 root  294r   REG             253,11   70277718    6193160
> /index/secondary/spell/_12.frq (deleted)
> java    12650 root  295r   REG             253,11   24031741    6193158
> /index/secondary/spell/_12.tis (deleted)
> java    12650 root  296r   REG             253,11   25261490    6193159
> /index/secondary/spell/_10.fdt
> java    12650 root  297r   REG             253,11    6180968    6193162
> /index/secondary/spell/_12.nrm (deleted)
> java    12650 root  298r   REG             253,11   12366572    6193163
> /index/secondary/spell/_10.fdx
> java    12650 root  299r   REG             253,11   31296075    6193161
> /index/secondary/spell/_12.prx (deleted)
> java    12650 root  300r   REG             253,11   25249792    6193156
> /index/secondary/spell/_12.fdt (deleted)
> java    12650 root  301r   REG             253,11   12361932    6193157
> /index/secondary/spell/_12.fdx (deleted)
> java    12650 root  311r   REG             253,11    6183288    6193168
> /index/secondary/spell/_10.nrm
> etc...
> So, somehow my container keeps old IndexSearchers/IndexReaders open even
> though the files no longer exist, making it reach the system's maximum number
> of open files, and you know what comes next.
> Oddly, the number of open files grows over time; this is the inverse of
> Lucene 2.4.1, which managed and closed all the old
> IndexSearchers/IndexReaders, keeping the list of open files shown by the lsof
> command clean on Linux, where I run this application.
> As I cannot change my entire application to use Solr, which seems to have a
> good approach, and I also cannot go back to Lucene 2.4.1 and leave it there
> forever, I am asking for help here.
> If anyone out there can help me, please let me know.
> Best regards, and many thanks in advance,
> Wilson
>
>

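The discipline Erick describes -- reopen may return a new reader, and if it
does, the caller must close the old one or its deleted files stay open (the
lsof symptom above) -- can be sketched as follows. `Reader` here is a tiny
invented stand-in; with Lucene 3.x the same pattern applies to
IndexReader.reopen() and IndexReader.close():

```java
public class ReaderSwap {
    // Minimal stand-in for an index reader with reopen semantics.
    static class Reader {
        boolean closed;
        boolean stale; // true when the underlying index has changed

        // Like IndexReader.reopen(): returns a NEW reader only if the
        // underlying index changed, otherwise returns itself.
        Reader reopen() {
            return stale ? new Reader() : this;
        }

        void close() {
            closed = true;
        }
    }

    // Swap in the fresh reader; close the old one only when they differ.
    static Reader refresh(Reader current) {
        Reader fresh = current.reopen();
        if (fresh != current) {
            current.close(); // forgetting this leaks file handles
        }
        return fresh;
    }
}
```

In a multithreaded container the old reader should really be closed only once
in-flight searches against it have finished (e.g. via reference counting), but
the close-after-swap step itself is what the lsof output shows was missing.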



[jira] [Commented] (LUCENE-3218) Make CFS appendable

2011-08-24 Thread Simon Willnauer (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-3218?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13090320#comment-13090320
 ] 

Simon Willnauer commented on LUCENE-3218:
-

I committed this to trunk. I will leave this issue open until we decide to 
backport to 3.x.

simon

> Make CFS appendable  
> -
>
> Key: LUCENE-3218
> URL: https://issues.apache.org/jira/browse/LUCENE-3218
> Project: Lucene - Java
>  Issue Type: Improvement
>  Components: core/index
>Affects Versions: 3.4, 4.0
>Reporter: Simon Willnauer
>Priority: Blocker
> Fix For: 3.4, 4.0
>
> Attachments: LUCENE-3218.patch, LUCENE-3218.patch, LUCENE-3218.patch, 
> LUCENE-3218.patch, LUCENE-3218.patch, LUCENE-3218.patch, LUCENE-3218.patch, 
> LUCENE-3218_3x.patch, LUCENE-3218_test_fix.patch, LUCENE-3218_tests.patch
>
>
> Currently CFS is created once all files are written during a flush / merge. 
> Once on disk the files are copied into the CFS format, which is basically 
> unnecessary for some of the files. We can at any time write at least one file 
> directly into the CFS, which can save a reasonable amount of IO. For instance 
> stored fields could be written directly during indexing, and during a Codec 
> flush one of the written files can be appended directly. This optimization is 
> a nice side effect for Lucene indexing itself, but more important for 
> DocValues and LUCENE-3216: we could transparently pack per-field files into a 
> single file for docvalues only, without changing any code, once LUCENE-3216 
> is resolved.




[jira] [Commented] (LUCENE-3400) Deprecate / Remove DutchAnalyzer.setStemDictionary

2011-08-24 Thread Simon Willnauer (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-3400?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13090306#comment-13090306
 ] 

Simon Willnauer commented on LUCENE-3400:
-

Chris, can we make stemDict final then?

simon

> Deprecate / Remove DutchAnalyzer.setStemDictionary
> --
>
> Key: LUCENE-3400
> URL: https://issues.apache.org/jira/browse/LUCENE-3400
> Project: Lucene - Java
>  Issue Type: Sub-task
>  Components: modules/analysis
>Reporter: Chris Male
> Attachments: LUCENE-3400-3x.patch, LUCENE-3400.patch
>
>
> DutchAnalyzer.setStemDictionary(File) prevents reuse of TokenStreams (and 
> also uses a File which isn't ideal).  It should be deprecated in 3x, removed 
> in trunk.




[jira] [Commented] (SOLR-2726) NullPointerException when using spellcheck.q

2011-08-24 Thread valentin (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-2726?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13090274#comment-13090274
 ] 

valentin commented on SOLR-2726:


I've run some tests, and I found that I get this error when I add a 
spellcheck component to a handler and try to use spellcheck.q.

So spellcheck.q works with this kind of use:

http://localhost:8983/solr/db/suggest_full?q=american%20israel&spellcheck.q=american%20israel&qt=spellchecker
 (with the original spellchecker of db)

But this spellchecker uses the class solr.SpellCheckerRequestHandler, which 
doesn't have all the options I want (like collation).

> NullPointerException when using spellcheck.q
> 
>
> Key: SOLR-2726
> URL: https://issues.apache.org/jira/browse/SOLR-2726
> Project: Solr
>  Issue Type: Bug
>  Components: spellchecker
>Affects Versions: 3.3, 4.0
> Environment: ubuntu
>Reporter: valentin
>  Labels: nullpointerexception, spellcheck
>
> When I use spellcheck.q in my query to define what will be "spellchecked", I 
> always have this error, for every configuration I try :
> java.lang.NullPointerException
> at 
> org.apache.solr.handler.component.SpellCheckComponent.getTokens(SpellCheckComponent.java:476)
> at 
> org.apache.solr.handler.component.SpellCheckComponent.process(SpellCheckComponent.java:131)
> at 
> org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:202)
> at 
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:129)
> at org.apache.solr.core.SolrCore.execute(SolrCore.java:1368)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.execute(SolrDispatchFilter.java:356)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:252)
> at 
> org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
> at org.mortbay.jetty.servlet.ServletHandler.handle(ServletHandler.java:399)
> at org.mortbay.jetty.security.SecurityHandler.handle(SecurityHandler.java:216)
> at org.mortbay.jetty.servlet.SessionHandler.handle(SessionHandler.java:182)
> at org.mortbay.jetty.handler.ContextHandler.handle(ContextHandler.java:766)
> at org.mortbay.jetty.webapp.WebAppContext.handle(WebAppContext.java:450)
> at 
> org.mortbay.jetty.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:230)
> at 
> org.mortbay.jetty.handler.HandlerCollection.handle(HandlerCollection.java:114)
> at org.mortbay.jetty.handler.HandlerWrapper.handle(HandlerWrapper.java:152)
> at org.mortbay.jetty.Server.handle(Server.java:326)
> at org.mortbay.jetty.HttpConnection.handleRequest(HttpConnection.java:542)
> at 
> org.mortbay.jetty.HttpConnection$RequestHandler.headerComplete(HttpConnection.java:928)
> at org.mortbay.jetty.HttpParser.parseNext(HttpParser.java:549)
> at org.mortbay.jetty.HttpParser.parseAvailable(HttpParser.java:212)
> at org.mortbay.jetty.HttpConnection.handle(HttpConnection.java:404)
> at 
> org.mortbay.jetty.bio.SocketConnector$Connection.run(SocketConnector.java:228)
> at 
> org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582)
> All my other functions work great; this is the only thing that doesn't work 
> at all, just when I add "&spellcheck.q=my%20sentence" to the query...
> Example of a query : 
> http://localhost:8983/solr/db/suggest_full?q=american%20israel&spellcheck.q=american%20israel
> In solrconfig.xml :
> <searchComponent name="suggest_full" class="solr.SpellCheckComponent">
>   <str name="queryAnalyzerFieldType">suggestTextFull</str>
>   <lst name="spellchecker">
>     <str name="name">suggest_full</str>
>     <str name="classname">org.apache.solr.spelling.suggest.Suggester</str>
>     <str name="lookupImpl">org.apache.solr.spelling.suggest.tst.TSTLookup</str>
>     <str name="field">text_suggest_full</str>
>     <str name="fieldType">suggestTextFull</str>
>   </lst>
> </searchComponent>
> <requestHandler name="/suggest_full" class="org.apache.solr.handler.component.SearchHandler">
>   <lst name="defaults">
>     <str name="spellcheck">true</str>
>     <str name="spellcheck.dictionary">suggest_full</str>
>     <str name="spellcheck.count">10</str>
>     <str name="spellcheck.onlyMorePopular">true</str>
>   </lst>
>   <arr name="components">
>     <str>suggest_full</str>
>   </arr>
> </requestHandler>
> I'm using SolR 3.3, and I tried it too on SolR 4.0

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-3400) Deprecate / Remove DutchAnalyzer.setStemDictionary

2011-08-24 Thread Chris Male (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-3400?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Male updated LUCENE-3400:
---

Attachment: LUCENE-3400-3x.patch

3x patch.

> Deprecate / Remove DutchAnalyzer.setStemDictionary
> --
>
> Key: LUCENE-3400
> URL: https://issues.apache.org/jira/browse/LUCENE-3400
> Project: Lucene - Java
>  Issue Type: Sub-task
>  Components: modules/analysis
>Reporter: Chris Male
> Attachments: LUCENE-3400-3x.patch, LUCENE-3400.patch
>
>
> DutchAnalyzer.setStemDictionary(File) prevents reuse of TokenStreams (and 
> also uses a File which isn't ideal).  It should be deprecated in 3x, removed 
> in trunk.

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-3400) Deprecate / Remove DutchAnalyzer.setStemDictionary

2011-08-24 Thread Chris Male (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-3400?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Male updated LUCENE-3400:
---

Attachment: LUCENE-3400.patch

Patch for trunk.

> Deprecate / Remove DutchAnalyzer.setStemDictionary
> --
>
> Key: LUCENE-3400
> URL: https://issues.apache.org/jira/browse/LUCENE-3400
> Project: Lucene - Java
>  Issue Type: Sub-task
>  Components: modules/analysis
>Reporter: Chris Male
> Attachments: LUCENE-3400.patch
>
>
> DutchAnalyzer.setStemDictionary(File) prevents reuse of TokenStreams (and 
> also uses a File which isn't ideal).  It should be deprecated in 3x, removed 
> in trunk.

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-3400) Deprecate / Remove DutchAnalyzer.setStemDictionary

2011-08-24 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-3400?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13090235#comment-13090235
 ] 

Robert Muir commented on LUCENE-3400:
-

+1, I don't even think this needs an additional ctor to control this little 
stem dictionary: if you want a customized one you can always make your own 
analyzer, where you instantiate the StemmerOverrideFilter with whatever 
dictionary you want?
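The constructor-based alternative Robert suggests can be sketched generically. All class and method names below are invented for illustration, not Lucene's actual StemmerOverrideFilter API; the point is only that a dictionary fixed at construction time keeps the component reusable, unlike a setter such as setStemDictionary(File) that mutates state later:

```java
import java.util.Collections;
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch: an immutable, constructor-supplied override table.
final class StemOverrides {
    private final Map<String, String> overrides;

    StemOverrides(Map<String, String> overrides) {
        // Defensive, immutable copy: no caller can change the table afterwards,
        // so one instance can safely be reused across token streams.
        this.overrides = Collections.unmodifiableMap(new HashMap<>(overrides));
    }

    /** Returns the override for this token, or the token itself if none. */
    String stem(String token) {
        return overrides.getOrDefault(token, token);
    }
}

public class StemOverrideDemo {
    public static void main(String[] args) {
        Map<String, String> dict = new HashMap<>();
        dict.put("fietsen", "fiets"); // hypothetical Dutch override entry
        StemOverrides s = new StemOverrides(dict);
        System.out.println(s.stem("fietsen")); // overridden
        System.out.println(s.stem("boek"));    // falls through unchanged
    }
}
```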

> Deprecate / Remove DutchAnalyzer.setStemDictionary
> --
>
> Key: LUCENE-3400
> URL: https://issues.apache.org/jira/browse/LUCENE-3400
> Project: Lucene - Java
>  Issue Type: Sub-task
>  Components: modules/analysis
>Reporter: Chris Male
> Attachments: LUCENE-3400.patch
>
>
> DutchAnalyzer.setStemDictionary(File) prevents reuse of TokenStreams (and 
> also uses a File which isn't ideal).  It should be deprecated in 3x, removed 
> in trunk.

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (LUCENE-3400) Deprecate / Remove DutchAnalyzer.setStemDictionary

2011-08-24 Thread Chris Male (JIRA)
Deprecate / Remove DutchAnalyzer.setStemDictionary
--

 Key: LUCENE-3400
 URL: https://issues.apache.org/jira/browse/LUCENE-3400
 Project: Lucene - Java
  Issue Type: Sub-task
  Components: modules/analysis
Reporter: Chris Male


DutchAnalyzer.setStemDictionary(File) prevents reuse of TokenStreams (and also 
uses a File which isn't ideal).  It should be deprecated in 3x, removed in 
trunk.

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-tests-only-trunk - Build # 10280 - Failure

2011-08-24 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-tests-only-trunk/10280/

No tests ran.

Build Log (for compile errors):
[...truncated 1009 lines...]



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-3397) Cleanup Test TokenStreams so they are reusable

2011-08-24 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-3397?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13090207#comment-13090207
 ] 

Robert Muir commented on LUCENE-3397:
-

looks good!

> Cleanup Test TokenStreams so they are reusable
> --
>
> Key: LUCENE-3397
> URL: https://issues.apache.org/jira/browse/LUCENE-3397
> Project: Lucene - Java
>  Issue Type: Sub-task
>  Components: modules/analysis
>Reporter: Chris Male
> Attachments: LUCENE-3397.patch, LUCENE-3397.patch
>
>
> Many TokenStreams created in tests are not reusable.  Some do some really 
> messy things which prevent their reuse so we may have to change the tests 
> themselves.
> We'll target back porting this to 3x.

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: Could possible donate webapp for dynamic core create/deletion.

2011-08-24 Thread Brian O'Neill
Great, thanks Arvind.   It sounds like interest is there.
I'll get it up on GitHub.

-brian

On 8/24/11 3:12 AM, "Arvind Srini"  wrote:

> Thanks, Brian.  This would certainly be useful.
> 
> Ideally, putting this up on GitHub will open it up for the community to try
> it out immediately and give feedback, improve the documentation and API
> usage, etc., while the migration to the Lucene contribs happens concurrently.
> 
> 
> 
> On Tue, Aug 23, 2011 at 6:55 PM, Brian O'Neill
>  wrote:
>> Sure thing…
>>  
>> Basically, we mimicked the core creation script and procedure described here:
>> http://blog.dustinrue.com/archives/690
>>  
>> We wrapped that process in a RESTful web service.  A client can post a
>> schema to the service, which will create the file for you and then POST to SOLR
>> to create the core.  The web service is configured using a properties file
>> right now, which among other things has a list of hosts.  It will loop
>> through the hosts and perform this operation on each host.  If one fails, it
>> rolls the core creation back on each host.
>>  
>> If you want, I could pass along the WADL that we have for the service.
>>  
>> -brian
>>  
>>  
>> 
>> From: mohit soni [mailto:mohitsoni1...@gmail.com]
>> Sent: Tuesday, August 23, 2011 5:32 AM
>> To: dev@lucene.apache.org
>> Subject: Re: Could possible donate webapp for dynamic core create/deletion.
>> 
>>  
>> Hi Brian
>> 
>> Can you share a brief summary of the work done, features offered, etc.
>> 
>> ~mohit
>> 
>> On Mon, Aug 22, 2011 at 6:43 PM, Brian O'Neill
>>  wrote:
>> 
>> All,
>>  
>> My team has developed a small web app that can dynamically create/delete
>> cores in a cluster of SOLR instances.  Is this feature already under
>> development?  Is anyone interested in it?  If so, we might be able to donate
>> it.
>>  
>> -brian
>>  

-- 
Brian O'Neill
Lead Architect, Software Development
Health Market Science | 2700 Horizon Drive | King of Prussia, PA 19406
p: 215.588.6024
www.healthmarketscience.com
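The all-or-nothing behavior Brian describes (create the core on every host; if one host fails, undo the creation on the hosts that already succeeded) can be sketched as a plain loop. The host names and the `created` bookkeeping set below are hypothetical stand-ins for the real HTTP calls to each Solr instance:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collections;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

// Hypothetical sketch of a create-with-rollback loop over a host list.
public class CoreCreator {
    /** Returns true if the core was "created" on every host; rolls back otherwise. */
    static boolean createOnAll(List<String> hosts, Set<String> created, Set<String> failing) {
        List<String> done = new ArrayList<>();
        for (String host : hosts) {
            if (failing.contains(host)) {
                // Failure: roll the creation back on every host that succeeded so far.
                for (String h : done) created.remove(h);
                return false;
            }
            created.add(host); // stands in for the POST that creates the core
            done.add(host);
        }
        return true;
    }

    public static void main(String[] args) {
        Set<String> created = new HashSet<>();
        boolean ok = createOnAll(Arrays.asList("host1", "host2", "host3"),
                                 created, Collections.singleton("host2"));
        // Rollback leaves no partially created cores behind.
        System.out.println(ok + " " + created);
    }
}
```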



trunk test failure (1314190861)

2011-08-24 Thread Charlie Cron
A test failure occurred running the tests.

revert! revert! revert!

You can see the entire build log at 
http://sierranevada.servebeer.com/1314190861.log

Thanks,
Charlie Cron


-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[Lucene.Net] [jira] [Commented] (LUCENENET-407) Signing the assembly

2011-08-24 Thread Itamar Syn-Hershko (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENENET-407?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13090201#comment-13090201
 ] 

Itamar Syn-Hershko commented on LUCENENET-407:
--

Hmm... I just looked around the branches and couldn't see this committed 
anywhere. Ideas?

> Signing the assembly
> 
>
> Key: LUCENENET-407
> URL: https://issues.apache.org/jira/browse/LUCENENET-407
> Project: Lucene.Net
>  Issue Type: Improvement
>  Components: Lucene.Net Core
>Affects Versions: Lucene.Net 2.9.2, Lucene.Net 2.9.4, Lucene.Net 3.x
>Reporter: Itamar Syn-Hershko
> Fix For: Lucene.Net 2.9.4, Lucene.Net 3.x
>
> Attachments: Lucene.NET.snk, signing.patch
>
>
> For our usage of Lucene.NET we need the assembly to be signed.

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (LUCENE-3397) Cleanup Test TokenStreams so they are reusable

2011-08-24 Thread Chris Male (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-3397?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Male updated LUCENE-3397:
---

Attachment: LUCENE-3397.patch

Better patch, which no longer changes the state of a static variable.

I'll commit this in a day or so.

> Cleanup Test TokenStreams so they are reusable
> --
>
> Key: LUCENE-3397
> URL: https://issues.apache.org/jira/browse/LUCENE-3397
> Project: Lucene - Java
>  Issue Type: Sub-task
>  Components: modules/analysis
>Reporter: Chris Male
> Attachments: LUCENE-3397.patch, LUCENE-3397.patch
>
>
> Many TokenStreams created in tests are not reusable.  Some do some really 
> messy things which prevent their reuse so we may have to change the tests 
> themselves.
> We'll target back porting this to 3x.

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-3397) Cleanup Test TokenStreams so they are reusable

2011-08-24 Thread Chris Male (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-3397?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Male updated LUCENE-3397:
---

Attachment: LUCENE-3397.patch

Patch which adds state resetting to TokenStreams created in tests.  

The only other TSs that are not reusable are that way by design, to test that 
something can handle non-reusable TSs. Once we make reuse mandatory, we can 
drop these tests and remove the non-reusable TSs.
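The reuse contract behind these test cleanups can be sketched in plain Java. This is a toy stream, not Lucene's actual TokenStream API: the point is that a stream is reusable only if reset() restores all of its mutable state, so the same instance can serve input after input:

```java
// Toy token source illustrating the reset-for-reuse pattern.
final class ListTokenStream {
    private final String[] tokens;
    private int pos; // mutable state that reset() must restore

    ListTokenStream(String[] tokens) { this.tokens = tokens; }

    /** Returns the next token, or null when the stream is exhausted. */
    String next() { return pos < tokens.length ? tokens[pos++] : null; }

    /** Forgetting to restore pos here is exactly what makes a stream single-use. */
    void reset() { pos = 0; }
}

public class ReuseDemo {
    public static void main(String[] args) {
        ListTokenStream ts = new ListTokenStream(new String[] {"foo", "bar"});
        while (ts.next() != null) { } // first pass exhausts the stream
        ts.reset();                   // restore state before handing it out again
        System.out.println(ts.next()); // "foo" once more
    }
}
```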

> Cleanup Test TokenStreams so they are reusable
> --
>
> Key: LUCENE-3397
> URL: https://issues.apache.org/jira/browse/LUCENE-3397
> Project: Lucene - Java
>  Issue Type: Sub-task
>  Components: modules/analysis
>Reporter: Chris Male
> Attachments: LUCENE-3397.patch
>
>
> Many TokenStreams created in tests are not reusable.  Some do some really 
> messy things which prevent their reuse so we may have to change the tests 
> themselves.
> We'll target back porting this to 3x.

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-2668) DIH - multithreaded DocBuilder ignores onError Attribute

2011-08-24 Thread Shalin Shekhar Mangar (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-2668?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shalin Shekhar Mangar updated SOLR-2668:


Attachment: SOLR-2668.patch

I was looking at this problem again today. The onError attributes are not used 
to deal with exceptions from the EntityProcessor.init() method; they are only 
used for reading rows, applying transformers and inserting documents into Solr.

The real problem was that in multi-threaded mode, the exceptions from 
EntityProcessor.init() were being swallowed, so a commit was called instead of 
rolling back the changes. I've fixed that to re-throw the exception up the 
hierarchy.
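The swallow-versus-rethrow difference can be sketched generically with a worker thread. This uses plain java.util.concurrent, not DIH's actual classes; runImport and its "commit"/"rollback" outcomes are made up for illustration:

```java
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Hypothetical sketch: a worker exception that is caught and dropped lets the
// coordinator "commit"; propagating it through Future.get triggers "rollback".
public class RethrowDemo {
    static String runImport(boolean swallow) throws InterruptedException {
        ExecutorService pool = Executors.newSingleThreadExecutor();
        Future<?> f = pool.submit(() -> {
            try {
                throw new IllegalStateException("init failed");
            } catch (RuntimeException e) {
                if (!swallow) throw e; // re-throw up the hierarchy
                // swallowed: the coordinator never learns about the failure
            }
        });
        pool.shutdown();
        try {
            f.get();          // surfaces the worker's exception, if any
            return "commit";
        } catch (ExecutionException e) {
            return "rollback";
        }
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(runImport(true));  // swallowed -> commit (the bug)
        System.out.println(runImport(false)); // rethrown  -> rollback (the fix)
    }
}
```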

> DIH - multithreaded DocBuilder ignores onError Attribute
> 
>
> Key: SOLR-2668
> URL: https://issues.apache.org/jira/browse/SOLR-2668
> Project: Solr
>  Issue Type: Bug
>  Components: contrib - DataImportHandler
>Affects Versions: 3.3
>Reporter: Frank Wesemann
>Assignee: Shalin Shekhar Mangar
> Attachments: SOLR-2668.patch, SOLR-2668.patch
>
>
> If the EntityProcessor of a subentity throws an Exception in its init() 
> Method, DocBuilder ignores onError=continue or skip attributes on the parent 
> entity. DocBuilder stops immediately and logs "Import completed successfully".
>  

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: [jira] [Commented] (SOLR-2509) spellcheck: StringIndexOutOfBoundsException: String index out of range: -1

2011-08-24 Thread Gregor Kaczor

Hi

I can reproduce a StringIndexOutOfBoundsException with the suggested 
query "LYSROUGE1149-73190" or "ROUGE1149-73190" on Solr 3.3.


Kind Regards

Gregor
--
How to find files on the Internet? FindFiles.net !

On 08/24/2011 09:53 AM, Jan Høydahl (JIRA) wrote:

 [ 
https://issues.apache.org/jira/browse/SOLR-2509?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13090066#comment-13090066
 ]

Jan Høydahl commented on SOLR-2509:
---

I see the same problem here (3.1), and it looks very much like the same as 
SOLR-1630


spellcheck: StringIndexOutOfBoundsException: String index out of range: -1
--

 Key: SOLR-2509
 URL: https://issues.apache.org/jira/browse/SOLR-2509
 Project: Solr
  Issue Type: Bug
Affects Versions: 3.1
 Environment: Debian Lenny
JAVA Version "1.6.0_20"
Reporter: Thomas Gambier
Priority: Blocker

Hi,
I'm a french user of SOLR and i've encountered a problem since i've installed 
SOLR 3.1.
I've got an error with this query :
cle_frbr:"LYSROUGE1149-73190"
*SEE COMMENTS BELOW*
I've tested to escape the minus char and the query worked :
cle_frbr:"LYSROUGE1149(BACKSLASH)-73190"
But, strange fact, if i change one letter in my query it works :
cle_frbr:"LASROUGE1149-73190"
I've tested the same query on SOLR 1.4 and it works !
Can someone test the query on next line on a 3.1 SOLR version and tell me if he 
have the same problem ?
yourfield:"LYSROUGE1149-73190"
Where do the problem come from ?
Thank you by advance for your help.
Tom

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org







trunk test failure (1314183901)

2011-08-24 Thread Charlie Cron
A test failure occurred running the tests.

revert! revert! revert!

You can see the entire build log at 
http://sierranevada.servebeer.com/1314183901.log

Thanks,
Charlie Cron


-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-3218) Make CFS appendable

2011-08-24 Thread Simon Willnauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-3218?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Simon Willnauer updated LUCENE-3218:


Attachment: LUCENE-3218.patch

new patch: I renamed IndexInputHandle to IndexInputSlicer and made the 
createSlicer method public, since otherwise Directory impls outside of 
o.a.l.store cannot delegate to it.

bq. I would also rename CFIndexInput to SliceIndexInput, it's private so does 
not matter, but would be nice to have.

done

bq. Otherwise I agree with committing to trunk. As far as I see, the format did 
not change in trunk, so once we get this back into 3.x we are at the state 
pre-revert?

yes that's true.

I think this is ready to commit; if nobody objects I am going to commit this 
later today.
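The slicing idea behind IndexInputSlicer can be illustrated with a toy, in-memory sketch. This uses plain Java arrays, not Lucene's actual Directory/IndexInput API: one underlying buffer holds several "sub-files" back to back, and independent offset/length views are opened over it:

```java
import java.util.Arrays;

// Conceptual sketch only: many independent views over one packed buffer,
// the way a compound file packs several index files into a single file.
final class ByteSlicer {
    private final byte[] data;

    ByteSlicer(byte[] data) { this.data = data; }

    /** Returns an independent copy of the region [offset, offset + length). */
    byte[] openSlice(int offset, int length) {
        return Arrays.copyOfRange(data, offset, offset + length);
    }
}

public class SlicerDemo {
    public static void main(String[] args) {
        // Two "sub-files" packed back to back in one buffer.
        byte[] compound = {1, 2, 3, 4, 5, 6};
        ByteSlicer slicer = new ByteSlicer(compound);
        System.out.println(Arrays.toString(slicer.openSlice(0, 3))); // first sub-file
        System.out.println(Arrays.toString(slicer.openSlice(3, 3))); // second sub-file
    }
}
```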

> Make CFS appendable  
> -
>
> Key: LUCENE-3218
> URL: https://issues.apache.org/jira/browse/LUCENE-3218
> Project: Lucene - Java
>  Issue Type: Improvement
>  Components: core/index
>Affects Versions: 3.4, 4.0
>Reporter: Simon Willnauer
>Priority: Blocker
> Fix For: 3.4, 4.0
>
> Attachments: LUCENE-3218.patch, LUCENE-3218.patch, LUCENE-3218.patch, 
> LUCENE-3218.patch, LUCENE-3218.patch, LUCENE-3218.patch, LUCENE-3218.patch, 
> LUCENE-3218_3x.patch, LUCENE-3218_test_fix.patch, LUCENE-3218_tests.patch
>
>
> Currently CFS is created once all files are written during a flush / merge. 
> Once on disk the files are copied into the CFS format which is basically a 
> unnecessary for some of the files. We can at any time write at least one file 
> directly into the CFS which can save a reasonable amount of IO. For instance 
> stored fields could be written directly during indexing and during a Codec 
> Flush one of the written files can be appended directly. This optimization is 
> a nice sideeffect for lucene indexing itself but more important for DocValues 
> and LUCENE-3216 we could transparently pack per field files into a single 
> file only for docvalues without changing any code once LUCENE-3216 is 
> resolved.

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Issue Comment Edited] (SOLR-2509) spellcheck: StringIndexOutOfBoundsException: String index out of range: -1

2011-08-24 Thread JIRA

[ 
https://issues.apache.org/jira/browse/SOLR-2509?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13090066#comment-13090066
 ] 

Jan Høydahl edited comment on SOLR-2509 at 8/24/11 8:22 AM:


I see this issue in 1.4 with the query "Irano-Hind" and spellcheck w/collate. 
However, I cannot reproduce it in 3.1, so my issue was probably SOLR-1630 related.

Are you sure you're on 3.1? Can you describe more closely what you do, what 
field type you use, how you set up spellcheck, etc.?

  was (Author: janhoy):
I see the same problem here (3.1), and it looks very much like the same as 
SOLR-1630
  
> spellcheck: StringIndexOutOfBoundsException: String index out of range: -1
> --
>
> Key: SOLR-2509
> URL: https://issues.apache.org/jira/browse/SOLR-2509
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 3.1
> Environment: Debian Lenny
> JAVA Version "1.6.0_20"
>Reporter: Thomas Gambier
>Priority: Blocker
>
> Hi,
> I'm a french user of SOLR and i've encountered a problem since i've installed 
> SOLR 3.1.
> I've got an error with this query : 
> cle_frbr:"LYSROUGE1149-73190"
> *SEE COMMENTS BELOW*
> I've tested to escape the minus char and the query worked :
> cle_frbr:"LYSROUGE1149(BACKSLASH)-73190"
> But, strange fact, if i change one letter in my query it works :
> cle_frbr:"LASROUGE1149-73190"
> I've tested the same query on SOLR 1.4 and it works !
> Can someone test the query on next line on a 3.1 SOLR version and tell me if 
> he have the same problem ? 
> yourfield:"LYSROUGE1149-73190"
> Where do the problem come from ?
> Thank you by advance for your help.
> Tom

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



subscribe

2011-08-24 Thread Gong Ke
subscribe

--
Gong Ke (龚珂)
Mobile: 13810407631
Tel: 010-6272 7384
QQ: 275480298
E-Mail:keg...@sohu-inc.com



[jira] [Commented] (SOLR-2509) spellcheck: StringIndexOutOfBoundsException: String index out of range: -1

2011-08-24 Thread JIRA

[ 
https://issues.apache.org/jira/browse/SOLR-2509?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13090066#comment-13090066
 ] 

Jan Høydahl commented on SOLR-2509:
---

I see the same problem here (3.1), and it looks very much like the same as 
SOLR-1630

> spellcheck: StringIndexOutOfBoundsException: String index out of range: -1
> --
>
> Key: SOLR-2509
> URL: https://issues.apache.org/jira/browse/SOLR-2509
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 3.1
> Environment: Debian Lenny
> JAVA Version "1.6.0_20"
>Reporter: Thomas Gambier
>Priority: Blocker
>
> Hi,
> I'm a french user of SOLR and i've encountered a problem since i've installed 
> SOLR 3.1.
> I've got an error with this query : 
> cle_frbr:"LYSROUGE1149-73190"
> *SEE COMMENTS BELOW*
> I've tested to escape the minus char and the query worked :
> cle_frbr:"LYSROUGE1149(BACKSLASH)-73190"
> But, strange fact, if i change one letter in my query it works :
> cle_frbr:"LASROUGE1149-73190"
> I've tested the same query on SOLR 1.4 and it works !
> Can someone test the query on next line on a 3.1 SOLR version and tell me if 
> he have the same problem ? 
> yourfield:"LYSROUGE1149-73190"
> Where do the problem come from ?
> Thank you by advance for your help.
> Tom

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-tests-only-3.x - Build # 10293 - Still Failing

2011-08-24 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-tests-only-3.x/10293/

No tests ran.

Build Log (for compile errors):
[...truncated 23 lines...]



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-tests-only-trunk - Build # 10275 - Still Failing

2011-08-24 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-tests-only-trunk/10275/

No tests ran.

Build Log (for compile errors):
[...truncated 23 lines...]



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: Could possible donate webapp for dynamic core create/deletion.

2011-08-24 Thread Arvind Srini
Thanks, Brian.  This would certainly be useful.

Ideally, putting this up on GitHub will open it up for the community to try
it out immediately and give feedback, improve the documentation and API
usage, etc., while the migration to the Lucene contribs happens concurrently.



On Tue, Aug 23, 2011 at 6:55 PM, Brian O'Neill <
bone...@healthmarketscience.com> wrote:

> Sure thing…
>
>
> Basically, we mimicked the core creation script and procedure described
> here:
>
> http://blog.dustinrue.com/archives/690
>
>
> We wrapped that process in a RESTful web service.  A client can post a
> schema to the service, which will create the file for you and then POST to SOLR
> to create the core.  The web service is configured using a properties file
> right now, which among other things has a list of hosts.  It will loop
> through the hosts and perform this operation on each host.  If one fails, it
> rolls the core creation back on each host.
>
>
> If you want, I could pass along the WADL that we have for the service.
>
>
> -brian
>
>
>
> From: mohit soni [mailto:mohitsoni1...@gmail.com]
> Sent: Tuesday, August 23, 2011 5:32 AM
> To: dev@lucene.apache.org
> Subject: Re: Could possible donate webapp for dynamic core
> create/deletion.
>
>
> Hi Brian
>
> Can you share a brief summary of the work done, features offered, etc.
>
> ~mohit
>
> On Mon, Aug 22, 2011 at 6:43 PM, Brian O'Neill <
> bone...@healthmarketscience.com> wrote:
>
> All,
>
>  
>
> My team has developed a small web app that can dynamically create/delete
> cores in a cluster of SOLR instances.  Is this feature already under
> development?  Is anyone interested in it?  If so, we might be able to donate
> it.
>
>  
>
> -brian
>
>  
>
> --
> Brian O'Neill
> Lead Architect, Software Development
> Health Market Science | 2700 Horizon Drive | King of Prussia, PA 19406
> p: 215.588.6024
> www.healthmarketscience.com
>
>  
>
>


[JENKINS] Lucene-Solr-tests-only-3.x - Build # 10292 - Still Failing

2011-08-24 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-tests-only-3.x/10292/

No tests ran.

Build Log (for compile errors):
[...truncated 23 lines...]



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-2727) Upgrade httpclient to 4.1.2 (from 3.0.1 )

2011-08-24 Thread Aravind Srini (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-2727?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13090033#comment-13090033
 ] 

Aravind Srini commented on SOLR-2727:
-

For what we are looking at, it is very important to change the API and port to 
the 'hc' (HttpComponents) world. That probably implies going back and 
revisiting the rest of the usages as well. 

> Upgrade httpclient to 4.1.2 (from 3.0.1 ) 
> --
>
> Key: SOLR-2727
> URL: https://issues.apache.org/jira/browse/SOLR-2727
> Project: Solr
>  Issue Type: Improvement
>Affects Versions: 3.3
>Reporter: Aravind Srini
> Fix For: 4.0
>
>
> Currently solr depends on commons-httpclient 3.x.  EOL has been announced , 
> for some time , for that release line. 
> Need to upgrade the same, to httpclient 4.1.x , to begin with. Targeting 4.0 
> . 
> jira logged as per the discussion of "solr - httpclient from 3.x to 4.1.x" 
> thread. 

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org