Hello,
I spoke with Chris Hostetter, who recommended creating wiki pages with a
roadmap for adding at least two features to SOLR: realtime search and
Hadoop RMI-based distributed search.
I added RealtimeSearch and HadoopRmi tasks to the task list wiki at:
http://wiki.apache.org/solr/TaskList
Jason
It's mostly dead and synchronizes on reads and writes.
On Tue, Dec 2, 2008 at 7:46 AM, Yonik Seeley <[EMAIL PROTECTED]> wrote:
> On Tue, Dec 2, 2008 at 10:12 AM, Andrzej Bialecki <[EMAIL PROTECTED]> wrote:
> > Please consider using JDBM, now in the Apache incubator,
>
> It doesn't look like it's
http://bobo-browse.wiki.sourceforge.net/
For faceting, the Bobo library from LinkedIn may be useful in cases where
the number of cached bitsets is excessive.
On Sun, Mar 22, 2009 at 8:35 PM, Lance Norskog wrote:
> LinkedIn open-sourced a pile of DocSet compression implementations as
> "Lucene-E
> Otis
> --
> Sematext -- http://sematext.com/ -- Lucene - Solr - Nutch
>
>
>
> - Original Message
> > From: Jason Rutherglen
> > To: solr-dev@lucene.apache.org
> > Sent: Tuesday, March 24, 2009 2:48:25 PM
> > Subject: R
Has anyone looked at implementing automatic failover in SOLR using a naming
service (like Zookeeper)?
Basic failover, we can build from there?
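For reference, a minimal sketch of the ZooKeeper building block this would rest on: ephemeral nodes give liveness tracking almost for free. This assumes the standard ZooKeeper Java client; the /solr_nodes path and node names are made up for illustration, and the parent path is assumed to already exist.

import org.apache.zookeeper.CreateMode;
import org.apache.zookeeper.WatchedEvent;
import org.apache.zookeeper.Watcher;
import org.apache.zookeeper.ZooDefs.Ids;
import org.apache.zookeeper.ZooKeeper;

// Sketch: each Solr node registers an ephemeral znode; when the node's
// session dies the znode disappears, which is the failure signal peers
// (or a search proxy) can watch for. Paths and names are illustrative only.
public class NodeRegistration {
  public static ZooKeeper register(String zkHosts, String nodeName, byte[] shardInfo)
      throws Exception {
    ZooKeeper zk = new ZooKeeper(zkHosts, 10000, new Watcher() {
      public void process(WatchedEvent event) { /* react to session/child events */ }
    });
    // Ephemeral: removed automatically if this Solr node's session expires.
    zk.create("/solr_nodes/" + nodeName, shardInfo,
        Ids.OPEN_ACL_UNSAFE, CreateMode.EPHEMERAL);
    // Watch the group so we learn when any other node joins or fails.
    zk.getChildren("/solr_nodes", true);
    return zk;
  }
}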
2009/7/13 Noble Paul നോബിള് नोब्ळ्
> nope .
>
> what do you have in mind?
>
> On Tue, Jul 14, 2009 at 4:56 AM, Jason
> Rutherglen wrote:
> > Has anyone looked at implementing automatic failover in SOLR using a
> naming
Are we looking at implementing dependency injection for the config files?
I'd like to think about this as we also look at integrating Zookeeper for
config file management. I was trying (for example) to use
LogByteSizeMergePolicy; however, currently I can't set the parameters because
solrconfig.xml on
It would be useful to push snapshots to the slaves as they are created on
the master. I prefer this approach to constant polling by slaves, partly
because with polling the timing can be off across the slave servers, so the
data could be replicated at different moments and users could see
different snapshots.
Something like a virtual 2 ph
lexities associated with
> it.
> I have raised as issue already https://issues.apache.org/jira/browse/SOLR-1305
>
> On Thu, Jul 23, 2009 at 2:58 AM, Jason
> Rutherglen wrote:
>> It would be useful to push snapshots as they are created from
>> the master to the slaves. I prefer
Otis Gospodnetic wrote:
> Jason, are your patches in various other Solr JIRA issues essentially the
answer to your question above: Is there a way to plug into these low level
Solr functions without a massive redesign? In other words, are you providing
patches incrementally with the goal of getting
SOLR-567 is simple enough to add to the list.
On Wed, May 21, 2008 at 7:45 PM, Mike Klaas <[EMAIL PROTECTED]> wrote:
> There seems to be some sort of consensus building that there should be a
> 1.3 release in the near future. The first step is to figure out what we
> want to finish before it get
Please explain the maintenance considerations.
On Thu, May 22, 2008 at 5:35 PM, Chris Hostetter <[EMAIL PROTECTED]>
wrote:
>
> : SOLR-567 is simple enough to add to the list.
>
> I can't speak for anyone else, but SOLR-567 certainly doesn't meet my
> definition of "simple" ... this one line, by i
for social networks interested in realtime
search to get involved, as it may be something that is difficult for
any one company to have enough resources to implement at a production
level. I think this is where open source collaboration is
particularly useful.
Cheers,
Jason Rutherglen
[EMAIL PROTECTED]
On W
I think Hoss has a good point here. Solr has not shipped 1.3 yet and
really needs to. A lot of the functionality mentioned would probably
break backward compatibility and/or require large rewrites of
code.
For Ocean I guess I should just state more clearly that it's really
supposed to be a r
Paul നോബിള് नोब्ळ्
<[EMAIL PROTECTED]> wrote:
> MultiCore.java is renamed to CoreContainer
>
> and SolrCore is changed a lot
>
> On Fri, Sep 5, 2008 at 4:27 PM, Jason Rutherglen (JIRA) <[EMAIL PROTECTED]>
> wrote:
>>
>>[
>> htt
e SolrIndexSearcher and SolrIndexWriter
>> pluggable, that can be easier
>>
>>
>>>
>>> SolrCore Pluggable
>>> --
>>>
>>>Key: SOLR-567
>>> URL: https://issues.apache.org/jira/browse/SOLR-567
>>>Project:
the feature
> itself.
>
>
>
> On Fri, Sep 5, 2008 at 6:46 PM, Yonik Seeley <[EMAIL PROTECTED]> wrote:
>> On Fri, Sep 5, 2008 at 9:03 AM, Jason Rutherglen
>> <[EMAIL PROTECTED]> wrote:
>>> Ok, SOLR 2 can be a from-the-ground-up rewrite?
>>
>>
o it will keep getting
>>> delayed. If with a few changes we can start the integration, that is
>>> the best way forward. Eventually, we can slowly evolve to a
>>> better design. But, the design need not be as important as the feature
>>> itself.
>>&
Does Geohash enable range-based queries? It's unclear from the wiki site.
Can we do something like this:
http://code.google.com/apis/base/attrs-queries.html#LocDatQuer I
would rather see SOLR go in the direction of being compatible with
GData and GBase than invent more unique things, unless they a
Too many acronyms in there. What is the WKT used for? Storing the
data? The schema? I am confused by the wikipedia text about WKT.
On Thu, Sep 18, 2008 at 11:08 AM, patrick o'leary (JIRA)
<[EMAIL PROTECTED]> wrote:
>
>[
> https://issues.apache.org/jira/browse/SOLR-773?page=com.atlassian.j
> Placing both lat and long in the same field is good when used internally
The initial filtered range query should be much faster this way. Is
it possible to then extract the lat and long out of the geohash for
the narrowing portion?
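For what it's worth, decoding lat/long back out of a geohash is cheap. Below is a minimal, standalone sketch of the standard decode algorithm, not the utility class used by the actual patch:

// Minimal geohash decoder (standard algorithm); illustrative sketch only.
public final class GeohashDecode {
  private static final String BASE32 = "0123456789bcdefghjkmnpqrstuvwxyz";

  /** Returns {latitude, longitude} at the centre of the geohash cell. */
  public static double[] decode(String geohash) {
    double[] lat = {-90.0, 90.0};
    double[] lon = {-180.0, 180.0};
    boolean evenBit = true; // bits are interleaved: lon, lat, lon, lat, ...
    for (char c : geohash.toCharArray()) {
      int cd = BASE32.indexOf(Character.toLowerCase(c));
      for (int mask = 16; mask > 0; mask >>= 1) {
        double[] range = evenBit ? lon : lat;
        double mid = (range[0] + range[1]) / 2;
        if ((cd & mask) != 0) range[0] = mid; else range[1] = mid;
        evenBit = !evenBit;
      }
    }
    return new double[] { (lat[0] + lat[1]) / 2, (lon[0] + lon[1]) / 2 };
  }
}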
On Thu, Sep 18, 2008 at 11:08 AM, patrick o'leary (JIRA)
<[EMA
> where we have a "stable" and an "experimental"
> release.
Good idea. It may also be good to nominate a release manager. It seemed
like features were being thrown in constantly that were perhaps beyond
the intended scope (was there one?) of SOLR 1.3. Probably next time,
maybe 2-3 large features a
It would be useful to enable a Solr document server (running
Cassandra, HBase, or Voldemort) to be transparently integrated
into a Solr search cluster. Meaning if I do a document update,
Solr underneath sends the document data to the dedicated
document server cluster. When performing a query, the r
ote:
>>
>>
>> On Aug 10, 2009, at 7:30 PM, Jason Rutherglen wrote:
>>
>>> It would be useful to enable a Solr document server (running
>>> Cassandra, HBase, or Voldemort) to be transparently integrated
>>> into a Solr search cluster. Meaning if
> So, when the stored fields are fetched from the
"DocumentService", the version number is also sent?
Yes.
You're probably wondering how we manage the older versions of an
object (aka id + version) in the DocService. We'd probably
delete the object as it's deleted from the index?
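To make the id + version idea concrete, here is a purely hypothetical sketch of the kind of contract such a document service might expose; none of these names exist in Solr, they are invented for illustration only.

// Hypothetical sketch: the kind of interface a dedicated document store
// (Cassandra, HBase, Voldemort, ...) might expose to Solr. Names and
// signatures are invented for illustration.
public interface DocumentService {
  /** Store the serialized stored fields for a document version. */
  void put(String id, long version, byte[] storedFields);

  /** Fetch the stored fields; the version lets the caller detect staleness. */
  VersionedDoc get(String id);

  /** Remove a document once it has been deleted from the index. */
  void delete(String id, long version);

  final class VersionedDoc {
    public final long version;
    public final byte[] storedFields;
    public VersionedDoc(long version, byte[] storedFields) {
      this.version = version;
      this.storedFields = storedFields;
    }
  }
}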
I use the word "
It could be due to CMS, so I'll create a config with SMS, run it 10
times, and see if it fails.
On Wed, Aug 19, 2009 at 10:59 AM, Koji Sekiguchi wrote:
> OK, now the problem is gone.
>
> Koji
>
> Noble Paul നോബിള് नोब्ळ् wrote:
>>
>> I'm looking into it.
>>
>> I guess I goofed up somewhere
>>
>> 200
It fails consistently with SMS, so I'll post a new patch shortly.
On Wed, Aug 19, 2009 at 11:31 AM, Jason
Rutherglen wrote:
> It could be due to CMS, so I'll create a config with SMS, run it 10
> times, and see if it fails.
>
> On Wed, Aug 19, 2009 at 10:59 AM, Koji Seki
Mike,
I'm also finding the Solr distributed process to be confusing. Let's
try to add things to the wiki as we learn them?
-J
On Fri, Aug 21, 2009 at 9:52 AM, Mike Anderson wrote:
> I'm trying to make my way through learning how to modify and write
> distributed search components.
>
> A few ques
Gabriel,
The most efficient way to improve highlighting is to perform it
on a dedicated server? That way it doesn't impact the speed of
production query execution?
-J
On Tue, Sep 1, 2009 at 1:19 PM, Gabriel Farrell (JIRA) wrote:
>
> [
> https://issues.apache.org/jira/browse/SOLR-1268?page=c
Hi,
Jason Venner and I hacked all night on SOLR-1477; however, we didn't
get anywhere trying to make multi-level proxied shards work. I
was wondering if someone more in the know about the distributed
search code could give some pointers to help us move along.
We understand the getids stage needs to b
I haven't had any time to verify this; we're not currently
recording the HTML that failed, which I'd use to reproduce with a
test case. When I do, it should be fairly comprehensive,
though I'm not sure I'd be able to fit it all in an actual unit
test unless the HTML was in files, which probabl
Key: SOLR-1513
>>> URL: https://issues.apache.org/jira/browse/SOLR-1513
>>> Project: Solr
>>> Issue Type: Improvement
>>> Components: search
>>> Affects Versions: 1.4
>>> Re
Yes...
On Mon, Oct 19, 2009 at 10:35 AM, Walter Underwood
wrote:
> Please wait for an official release of Lucene. It makes things SO much easier
> when you need to dig into the Lucene code.
>
> It is well worth a week delay.
>
> wunder
>
> On Oct 19, 2009, at 10:27 AM, Yonik Seeley wrote:
>
>> On
Nilya,
Can you ask your question on solr-u...@lucene.apache.org?
-J
On Mon, Oct 19, 2009 at 11:33 AM, Nilya wrote:
>
> I have been trying to integrate the WordNet dictionary with Solr. I used the
> link below to generate indexes using the Prolog package from WordNet.
> http://chencer.com/techno/java/lucene
" is an older term. (They're
> "weak" because some bully can steal their candy.)
>
> On Sun, Oct 18, 2009 at 8:37 PM, Jason Rutherglen
> wrote:
>> Lance,
>>
>> Do you mean soft references?
>>
>> On Sun, Oct 18, 2009 at 3:59 PM, Lance Norsko
Maybe we can add generics to SolrCache, or is there a design reason not to?
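A sketch of what a generified cache interface could look like; the method set is abbreviated and this is not the actual SolrCache definition:

// Sketch: a type-parameterized cache interface, abbreviated; the real
// SolrCache has more methods (init, warm, stats, etc.).
public interface TypedCache<K, V> {
  V put(K key, V value);
  V get(K key);
  int size();
  void clear();
}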
Stanford's is open source and works quite well.
http://nlp.stanford.edu/software/CRF-NER.shtml
On Tue, Nov 17, 2009 at 10:25 PM, Pradeep Pujari
wrote:
> Hello all,
>
> Does Lucene or Solr have an entity extraction feature? If so, what is the wiki
> URL?
>
> Thanks,
> Pradeep.
>
>
> But I've kind of gotten used to thinking of shards as the
> actual physical queryable things...
I think a mistake was made referring to Solr cores as shards.
It's the same thing with two different names. "Slices" adds yet
another name which seems to imply the same thing yet again. I'd
rather see dis
think a "slice" is a good solution now...
> first it's a new term for a new view of the index (a logical shard AFAIK doesn't
> really exist yet) so people won't need to get used to it, but it's also
> descriptive and intuitive. I do like Jason's idea about h
nd be aligned with the "rest of the
> world" (if there is such a thing... from what I've seen so far sharding is
> used differently in different contexts), then perhaps a "good" timing for
> making such terminology changes is with a major release (Solr 2.0?) as wi
Zipping cores/shards is in the latest patch...
On Fri, Jan 15, 2010 at 11:22 AM, Andrzej Bialecki wrote:
> On 2010-01-15 20:13, Ted Dunning wrote:
>>
>> This can also be a big performance win. Jason Venner reports significant
>> index and cluster start time improvements by indexing to local disk
that is a separate patch all
> together.
>
>
>
> On Jan 15, 2010, at 2:35 PM, Jason Rutherglen wrote:
>
>> Zipping cores/shards is in the latest patch...
>>
>> On Fri, Jan 15, 2010 at 11:22 AM, Andrzej Bialecki wrote:
>>> On 2010-01-15 20:13, Ted Dunn
Here are some rough notes after running the unit tests, reviewing
some of the code (though not understanding it), and reviewing
the wiki page http://wiki.apache.org/solr/SolrCloud
We need a protocol in the URL; otherwise it's inflexible.
I'm overwhelmed with all the ?? question areas of the documen
swap in a new core on the
search proxy that represents the new version of the corelist,
then the old cores could go away. This isn't very different from
the segmentinfos system used in Lucene IMO.
On Fri, Jan 15, 2010 at 1:53 PM, Yonik Seeley wrote:
> On Fri, Jan 15, 2010 at 4:12 PM, Jason R
Hello there dude...
I started on this, http://issues.apache.org/jira/browse/SOLR-1606
However, things have changed since then, so it may not work... You're
welcome to continue on it...
Cheers,
Jason
On Tue, Feb 9, 2010 at 3:20 PM, Kaktu Chakarabati wrote:
> Hey Guys,
> havent heard back from a
Can we move FileFetcher out of SnapPuller? This will assist with
reusing the replication handler for moving/copying cores.
I'm planning on using Solr Cloud, kind of waiting for the commit to
trunk, so let's do it (i.e., Java 6).
On Wed, Apr 14, 2010 at 11:32 PM, Ryan McKinley wrote:
> I'm fine with 1.6 as a min requirement... but i imagine others have
> different opinions :)
>
>
> On Wed, Apr 14, 2010 at 2:53 PM, Yonik S
Does this mean that the Google system does some sort of realtime replication?
- Original Message
From: Doug Cutting <[EMAIL PROTECTED]>
To: solr-dev@lucene.apache.org
Sent: Thursday, April 20, 2006 8:59:01 AM
Subject: GData
How hard would it be to build a GData server using Solr? An
op
In using Solr I've found the need to have a reload command in addition to a
commit. The reason for this is that sometimes updates are made but are not
available via the server. The commit makes a snapshot, which on a large index
is a potentially expensive operation. Is there a way to do reload toda
on the master,
sometimes I want to create a snapshot for the slave servers. This was very
confusing when I first started using Solr.
Thanks,
Jason
- Original Message
From: Chris Hostetter <[EMAIL PROTECTED]>
To: solr-dev@lucene.apache.org; jason rutherglen <[EMAIL PROTECT
can be run manually. So one can disable the postCommit
listener
and run it manually to take snapshots at specific times rather than after
every commit.
Bill
On 4/23/06, Yonik Seeley <[EMAIL PROTECTED]> wrote:
>
> Hi Jason,
>
> On 4/21/06, jason rutherglen <[EMAIL PROTECTED]> wr
http://jeremy.zawodny.com/blog/archives/006687.html
Here is a good blog entry with a talk on GData from someone who worked on it.
The only thing I think Solr needs is faster replication, which perhaps can be
achieved using a direct replication model, preferably over HTTP, of the
segments fil
TED]>
To: solr-dev@lucene.apache.org; jason rutherglen <[EMAIL PROTECTED]>
Sent: Tuesday, April 25, 2006 12:42:58 PM
Subject: Re: GData
On 4/25/06, jason rutherglen <[EMAIL PROTECTED]> wrote:
> Here is a good blog entry with a talk on GData from someone who worked on it.
> The
http://code.google.com/apis/gdata/protocol.html#Optimistic-concurrency
The versioning is for updates only.
<[EMAIL PROTECTED]>
To: solr-dev@lucene.apache.org; jason rutherglen <[EMAIL PROTECTED]>
Sent: Tuesday, April 25, 2006 12:42:58 PM
Subject: Re: GData
On 4/25/06, jason rutherglen <[EMAIL PROTECTED]> wrote:
> Here is a good blog entry with a talk on GData from someone who worked on i
makes sense for
Lucene? Or just assume the Google architecture is a lot more complex.
- Original Message
From: Yonik Seeley <[EMAIL PROTECTED]>
To: solr-dev@lucene.apache.org; jason rutherglen <[EMAIL PROTECTED]>
Sent: Tuesday, April 25, 2006 3:21:07 PM
Subject: Re: GData
- Original Message
From: Doug Cutting <[EMAIL PROTECTED]>
To: solr-dev@lucene.apache.org
Sent: Tuesday, April 25, 2006 4:10:36 PM
Subject: Re: GData
jason rutherglen wrote:
> Is a faster method of loading or updating the IndexSearcher something that
> makes sense for Lucene?
Yes. Folks
4:10:36 PM
Subject: Re: GData
jason rutherglen wrote:
> Is a faster method of loading or updating the IndexSearcher something that
> makes sense for Lucene?
Yes. Folks have developed incrementally updateable IndexSearchers
before, but none is yet part of Lucene.
> Or just assume the Go
there any negatives to updateable
IndexSearchers?
Thanks,
Jason
- Original Message
From: Doug Cutting <[EMAIL PROTECTED]>
To: solr-dev@lucene.apache.org
Sent: Tuesday, April 25, 2006 9:04:47 PM
Subject: Re: GData
jason rutherglen wrote:
> Ah ok
- Original Message
From: Doug Cutting <[EMAIL PROTECTED]>
To: solr-dev@lucene.apache.org
Sent: Wednesday, April 26, 2006 11:27:44 AM
Subject: Re: GData, updateable IndexSearcher
jason rutherglen wrote:
> Interesting, does this mean there is a plan for incrementally updateable
> IndexSearchers
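As a historical footnote to this thread: later Lucene releases did add an incremental refresh API. A minimal sketch, assuming Lucene 2.4+ where IndexReader.reopen() exists:

import java.io.IOException;
import org.apache.lucene.index.IndexReader;
import org.apache.lucene.search.IndexSearcher;

// Sketch: refresh a searcher incrementally. reopen() shares the segments
// that have not changed, so only new or merged segments are loaded.
public final class SearcherRefresher {
  public static IndexSearcher refresh(IndexSearcher current) throws IOException {
    IndexReader oldReader = current.getIndexReader();
    IndexReader newReader = oldReader.reopen();
    if (newReader == oldReader) {
      return current;               // nothing changed
    }
    oldReader.close();              // assumes no other user of the old reader
    return new IndexSearcher(newReader);
  }
}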
Where would this go in the code? I want to add a new XML-based request handler
so that spans and range filters can be handled. Thanks.
Are you saying that ConstantScoreRangeQuery is interpreted by the IndexSearcher
to be the same as a filter?
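If I understand it, ConstantScoreRangeQuery behaves like a range filter wrapped so it can be used as a Query with a flat score. A sketch of the Lucene 2.x-era classes involved; the field name and bounds are just examples:

import org.apache.lucene.search.ConstantScoreQuery;
import org.apache.lucene.search.ConstantScoreRangeQuery;
import org.apache.lucene.search.Filter;
import org.apache.lucene.search.Query;
import org.apache.lucene.search.RangeFilter;

// Sketch (Lucene 2.x-era API).
public final class RangeExamples {
  public static Query asConstantScoreRange() {
    // Matches price in [0000010, 0000100]; every hit gets the same score.
    return new ConstantScoreRangeQuery("price", "0000010", "0000100", true, true);
  }

  public static Query asWrappedFilter() {
    // Roughly the same thing built explicitly from a filter.
    Filter range = new RangeFilter("price", "0000010", "0000100", true, true);
    return new ConstantScoreQuery(range);
  }
}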
- Original Message
From: Chris Hostetter <[EMAIL PROTECTED]>
To: solr-dev@lucene.apache.org; jason rutherglen <[EMAIL PROTECTED]>
Sent: Thursday, April 27, 200
at org.apache.solr.core.SolrCore.execute(SolrCore.java:600)
- Original Message
From: Yonik Seeley <[EMAIL PROTECTED]>
To: solr-dev@lucene.apache.org; jason rutherglen <[EMAIL PROTECTED]>
Sent: Thursday, April 27, 2006 7:50:04 PM
Subject: Re: Adding range filter to Solr
On 4/2
Thanks, that works well.
- Original Message
From: Chris Hostetter <[EMAIL PROTECTED]>
To: Solr Dev ; jason rutherglen <[EMAIL PROTECTED]>
Sent: Friday, April 28, 2006 1:58:57 PM
Subject: Re: Adding range filter to Solr
: This is a little bit confusing for me, the documentati
Can this be done today with a simple HTTP parameter to /select? Mainly for
testing, to see how long a query takes without using the cache.
Sorry to paste a bunch of code in here, but in order to make the
updateable IndexReader work, I need to pass the existing IndexSearcher into
the new SolrIndexSearcher; however, currently the existing SolrIndexSearcher is
obtained later in the code. The locking and referencing is
A solution to an index that requires 100s of millions of documents is to
distribute the documents over multiple servers. I thought I had the
RemoteSearchable-like client for Solr pretty well done; however, this normalized
scoring with weights throws a bit of a kink.
http://issues.apache.org/b
current /select call? I'm still trying to figure this
part out.
- Original Message
From: Yonik Seeley <[EMAIL PROTECTED]>
To: solr-dev@lucene.apache.org
Sent: Thursday, May 18, 2006 5:21:58 PM
Subject: Re: Making RemoteSearchable like client for Solr
On 5/18/06, jason
On 5/18/06, jason rutherglen <[EMAIL PROTECTED]> wrote:
> I used the XML, I think using HTTP is important.
Is this written in Java? Using HTTPClient? Anything you will be able to share?
No caching on the client yet, that is a good idea, however my personal
goal is to have an index that
res for
x and y anymore.
Can the /select call return the IDFs for each individual term in the XML result?
- Original Message
From: Yonik Seeley <[EMAIL PROTECTED]>
To: solr-dev@lucene.apache.org
Sent: Thursday, May 18, 2006 6:33:29 PM
Subject: Re: Making RemoteSearchable like clien
client for Solr
On 5/18/06, jason rutherglen <[EMAIL PROTECTED]> wrote:
> > If you query for "x OR y", the doc score you get will be a combination
> of the doc score for x and the doc score for y. After you have the
> document score for the complete query, you can'
Yes that makes sense. I will develop along those lines. Thanks!
- Original Message
From: Yonik Seeley <[EMAIL PROTECTED]>
To: solr-dev@lucene.apache.org
Cc: jason rutherglen <[EMAIL PROTECTED]>
Sent: Thursday, May 18, 2006 7:28:12 PM
Subject: Re: Making RemoteSearchable lik
Has the Solr team looked at using ReentrantReadWriteLock for things like
LRUCache and in other places where read/write locks are used?
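A sketch of what that could look like around a cache map, assuming java.util.concurrent (Java 5+). One caveat: an access-ordered LRU map mutates internal order on get(), so a true LRUCache would still need the write lock (or a different structure) on the read path, which limits the win.

import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Sketch only, not the Solr LRUCache: many concurrent readers proceed in
// parallel; writers get exclusive access.
public class RwLockedCache<K, V> {
  private final ReentrantReadWriteLock lock = new ReentrantReadWriteLock();
  private final Map<K, V> map = new HashMap<K, V>();

  public V get(K key) {
    lock.readLock().lock();
    try { return map.get(key); } finally { lock.readLock().unlock(); }
  }

  public V put(K key, V value) {
    lock.writeLock().lock();
    try { return map.put(key, value); } finally { lock.writeLock().unlock(); }
  }
}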
e. Is it needed for anything right now?
- Original Message
From: Yonik Seeley <[EMAIL PROTECTED]>
To: solr-dev@lucene.apache.org; jason rutherglen <[EMAIL PROTECTED]>
Sent: Thursday, June 8, 2006 1:50:45 PM
Subject: Re: ReentrantReadWriteLock
On 6/8/06, jason rutherglen <[
What about ReentrantLock? I like it because it cleans up sync blocks. Adds a
few nice methods. Makes threading more OO.
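For illustration, a sketch of the lock-object idiom replacing a synchronized block; tryLock() and lockInterruptibly() are the extra methods synchronized can't offer:

import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.ReentrantLock;

// Sketch: the same critical section expressed with ReentrantLock. The
// try/finally guarantees release, and tryLock() allows giving up instead
// of blocking forever.
public class Counter {
  private final ReentrantLock lock = new ReentrantLock();
  private long value;

  public long increment() {
    lock.lock();
    try {
      return ++value;
    } finally {
      lock.unlock();
    }
  }

  public boolean tryIncrement() throws InterruptedException {
    if (!lock.tryLock(50, TimeUnit.MILLISECONDS)) {
      return false;                 // couldn't get the lock in time
    }
    try {
      ++value;
      return true;
    } finally {
      lock.unlock();
    }
  }
}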
- Original Message
From: Yonik Seeley <[EMAIL PROTECTED]>
To: solr-dev@lucene.apache.org; jason rutherglen <[EMAIL PROTECTED]>
Sent: Thursday, June
Why do the deleteByQuery method and DirectUpdateHandler2 not use the
existing IndexSearcher for deleting? Why does it open its own Searcher?
Is this being worked on? Otherwise I'll do it.
> Do you mean lazy loading of fields?
Yes.
- Original Message
From: Yonik Seeley <[EMAIL PROTECTED]>
To: solr-dev@lucene.apache.org
Sent: Tuesday, August 22, 2006 5:23:48 PM
Subject: Re: Tackling Solr Fieldable
On 8/22/06, jason rutherglen <[EMAIL PROTECTED]> wrote:
Aug 28, 2006, at 1:41 PM, jason rutherglen wrote:
> Ok... Looks like it's related to using SpanQueries (I hacked on the
> XML query code). I remember a discussion about this issue. Not
> something Solr specifically supports so my apologies. However if
> anyone knows about this
06, at 3:37 PM, jason rutherglen wrote:
> Could someone point me to where in the Solr code the Analyzer is
> applied to a query parser field?
IndexSchema.java is where the analyzers are created for indexing and
for query parsing. It's fairly sophisticated in order to take into
o: solr-dev@lucene.apache.org; jason rutherglen <[EMAIL PROTECTED]>
Sent: Monday, August 28, 2006 6:29:29 PM
Subject: Re: Possible bug in copyField
: Thanks. Yes I came up with a hacked solution to the problem. Takes a
: Query and rewrites the Terms using the Analyzer. If the Analyzer
typically the
.
- Original Message
From: Chris Hostetter <[EMAIL PROTECTED]>
To: solr-dev@lucene.apache.org; jason rutherglen <[EMAIL PROTECTED]>
Sent: Monday, August 28, 2006 8:13:11 PM
Subject: Re: Possible bug in copyField
: It's coming from a custom XML query handler which is just a
: s
I implemented lazy field loading in Solr, but then I get messed-up results or
EOFExceptions. The index is fine: when I turn lazy loading off, the results look fine.
Is this known behavior?
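For context, a minimal sketch of the Lucene 2.x FieldSelector mechanism such an implementation would typically go through; the field names are examples. One likely gotcha: a lazy field only reads from the index when its value is first requested, so it must be consumed while the IndexReader that produced it is still open.

import java.io.IOException;
import java.util.Collections;
import java.util.HashSet;
import java.util.Set;
import org.apache.lucene.document.Document;
import org.apache.lucene.document.FieldSelector;
import org.apache.lucene.document.SetBasedFieldSelector;
import org.apache.lucene.index.IndexReader;

// Sketch of Lucene 2.x-era lazy stored-field loading. "id" is loaded
// eagerly, "body" only when its value is first asked for.
public final class LazyLoadExample {
  public static String loadBody(IndexReader reader, int docId) throws IOException {
    Set<String> eager = new HashSet<String>(Collections.singleton("id"));
    Set<String> lazy = new HashSet<String>(Collections.singleton("body"));
    FieldSelector selector = new SetBasedFieldSelector(eager, lazy);

    Document doc = reader.document(docId, selector);
    // The lazy field hits the stored-fields file here, not above; the
    // reader must still be open at this point.
    return doc.getFieldable("body").stringValue();
  }
}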
.apache.org; jason rutherglen <[EMAIL PROTECTED]>
Sent: Sunday, September 3, 2006 4:41:58 PM
Subject: Re: Lazy field loading error
: I implemented lazy field loading in Solr, but then get messed up results
: or EOFExceptions. The index is fine, I turned off lazy loading and it
: looks fin
Versions: 1.3
Reporter: Jason Rutherglen
Priority: Trivial
Fix For: 1.4
expungeDeletes is a useful method, somewhat like optimize, that is offered by
IndexWriter and could be implemented in DirectUpdateHandler2.
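For clarity, the underlying Lucene call the proposed update-handler method would delegate to; a sketch, assuming Lucene 2.4+ where IndexWriter.expungeDeletes() exists:

import java.io.IOException;
import org.apache.lucene.index.IndexWriter;

// Sketch: merge only the segments that contain deletions, which is
// usually much cheaper than a full optimize().
public final class ExpungeDeletes {
  public static void expunge(IndexWriter writer) throws IOException {
    writer.expungeDeletes();
    writer.commit();
  }
}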
Versions: 1.4
Reporter: Jason Rutherglen
Priority: Minor
Fix For: 1.5
The goal is to give SOLR server clusters self-healing attributes
so that if a server fails, indexing and searching don't stop and
all of the partitions remain searchable. For configuration
[
https://issues.apache.org/jira/browse/SOLR-583?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Jason Rutherglen closed SOLR-583.
-
Resolution: Invalid
> Name Service
>
>
> K
Reporter: Jason Rutherglen
Priority: Minor
Fix For: 1.5
Lucene 2.9 and later offer near realtime search (LUCENE-1516,
LUCENE-1313). For SOLR this means integrating
IndexWriter.getReader and adding a way to replicate newly
created segments that may not exist on the file
[
https://issues.apache.org/jira/browse/SOLR-564?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Jason Rutherglen closed SOLR-564.
-
Resolution: Invalid
> Realtime search in Solr
> ---
>
>
[
https://issues.apache.org/jira/browse/SOLR-1277?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Jason Rutherglen updated SOLR-1277:
---
Summary: Implement a Solr specific naming service (using Zookeeper) (was:
Implement a SOLR
[
https://issues.apache.org/jira/browse/SOLR-1277?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Jason Rutherglen updated SOLR-1277:
---
Description:
The goal is to give Solr server clusters self-healing attributes
where if a
[
https://issues.apache.org/jira/browse/SOLR-1275?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12732032#action_12732032
]
Jason Rutherglen commented on SOLR-1275:
I can add some tests? (would like to
/SOLR-1296
Project: Solr
Issue Type: Improvement
Affects Versions: 1.3
Reporter: Jason Rutherglen
Priority: Trivial
Fix For: 1.4
We need to enable opening an IndexReader with termInfosIndexDivisor set in
solrConfig.
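For reference, the Lucene-level call such a config hook would need to reach; a sketch, assuming the Lucene 2.9-style IndexReader.open() overload that accepts a divisor. A divisor of N keeps only every Nth indexed term in memory, trading term-lookup speed for heap.

import java.io.IOException;
import org.apache.lucene.index.IndexReader;
import org.apache.lucene.store.Directory;

// Sketch: open a read-only reader with a term infos index divisor.
public final class DivisorReaderFactory {
  public static IndexReader open(Directory dir, int termInfosIndexDivisor)
      throws IOException {
    return IndexReader.open(dir, null, true, termInfosIndexDivisor);
  }
}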
[
https://issues.apache.org/jira/browse/SOLR-1296?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Jason Rutherglen updated SOLR-1296:
---
Attachment: SOLR-1296.patch
* termInfosIndexDivisor is set in StandardIndexReaderFactory as
[
https://issues.apache.org/jira/browse/SOLR-1296?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Jason Rutherglen updated SOLR-1296:
---
Attachment: SOLR-1296.patch
* Test case placed into TestConfig
* Created solrconfig
[
https://issues.apache.org/jira/browse/SOLR-1301?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12734376#action_12734376
]
Jason Rutherglen commented on SOLR-1301:
I downloaded the patch. I'd l
Reporter: Jason Rutherglen
Priority: Minor
Fix For: 1.5
Solr caches docsets and documents at the top (Multi*Reader)
level. After a commit, the caches are flushed. Reloading the
caches in near realtime (i.e. commits every 1s - 2min)
unnecessarily consumes IO
[
https://issues.apache.org/jira/browse/SOLR-1308?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12735679#action_12735679
]
Jason Rutherglen commented on SOLR-1308:
Perhaps in another issue we can imple
[
https://issues.apache.org/jira/browse/SOLR-908?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12736208#action_12736208
]
Jason Rutherglen commented on SOLR-908:
---
Great Tom!
* Can we add a flag to not re