This is fixed in trunk.
2009/5/5 Noble Paul നോബിള് नोब्ळ् noble.p...@corp.aol.com
hi Walter,
it needs synchronization. I shall open a bug.
On Mon, May 4, 2009 at 7:31 PM, Walter Ferrara walters...@gmail.com
wrote:
I've got a ConcurrentModificationException during a cron-ed delta import
On Tue, May 5, 2009 at 11:11 AM, Amit Nithian anith...@gmail.com wrote:
I am trying to get at the configuration directory in an implementation of
the SolrEventListener.
Implement SolrCoreAware and use solrCore.getResourceLoader().getConfigDir()
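A minimal sketch of that suggestion (Solr 1.3-era API; the class name is made up, and this will not compile without the Solr jars on the classpath):

```java
import org.apache.solr.common.util.NamedList;
import org.apache.solr.core.SolrCore;
import org.apache.solr.core.SolrEventListener;
import org.apache.solr.search.SolrIndexSearcher;
import org.apache.solr.util.plugin.SolrCoreAware;

// Hypothetical listener: the core hands itself over via inform(),
// at which point the config directory becomes available.
public class ConfigDirListener implements SolrEventListener, SolrCoreAware {
  private String configDir;

  public void inform(SolrCore core) {
    configDir = core.getResourceLoader().getConfigDir();
  }

  public void init(NamedList args) {}

  public void postCommit() {}

  public void newSearcher(SolrIndexSearcher newSearcher,
                          SolrIndexSearcher currentSearcher) {
    // configDir can be used here, e.g. to load auxiliary files
  }
}
```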
--
Regards,
Shalin Shekhar Mangar.
I have a spring-ibatis project running in my development environment.
Now I am setting up Solr search as part of the application. Everything works
fine as expected and Solr is providing good results.
The only problem I am having is that I have to set the database parameters
including the username
Hi
I have imported/indexed around half a million rows from my database
into Solr and then rebuilt the spellchecker. I've also set up the
delta-import to handle any new or changed rows from the database. Do
I need to rebuild the spellchecker each time I run the delta-import?
Regards
Andrew
Hi,
I suppose if the new records contain terms which are not yet found in
the spellcheck index/dictionary, it should be rebuilt.
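If a rebuild is needed, the SpellCheckComponent can trigger it per-request via spellcheck.build (assuming spellcheck is wired into the /select handler; host and paths below are placeholders):

```
http://localhost:8983/solr/select?q=*:*&spellcheck=true&spellcheck.build=true
```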
Cheers,
On Tue, 2009-05-05 at 11:49 +0100, Andrew McCombe wrote:
Hi
I have imported/indexed around half a million rows from my database
into solr and then
If Solr is a part of your application, then why not have tokens in your
data-config.xml as placeholders for the db username, password etc., which can be
replaced with the actual values as part of your project build/deploy task.
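For example, data-config.xml could ship with tokens (the @db.*@ names here are made up) that an Ant/Maven filter replaces at build time:

```xml
<dataSource driver="com.mysql.jdbc.Driver"
            url="@db.url@"
            user="@db.user@"
            password="@db.password@" />
```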
Cheers
Avlesh
On Tue, May 5, 2009 at 3:32 PM, con convo...@gmail.com
On Tue, May 5, 2009 at 4:19 PM, Andrew McCombe eupe...@gmail.com wrote:
I have imported/indexed around half a million rows from my database
into solr and then rebuilt the spellchecker. I've also setup the
delta-import to handle any new or changed rows from the database. Do
I need to
There are two options.
1) pass on the user name and password as request parameters and use
the request parameters in the datasource
<dataSource user="x" password="${dataimporter.request.pwd}" />
where pwd is a request parameter passed
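Spelled out, option 1 might look like this (driver and url values are placeholders; pwd is the request parameter named above):

```xml
<!-- data-config.xml: the password arrives as a request parameter -->
<dataSource driver="com.mysql.jdbc.Driver"
            url="jdbc:mysql://localhost/db"
            user="x"
            password="${dataimporter.request.pwd}" />
<!-- then trigger the import with:
     http://localhost:8983/solr/dataimport?command=full-import&pwd=secret -->
```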
2) if you can create jndi datasources in the appserver use the
Hi,
I am searching for "English Portal" using double quotes and I am getting all
the records which contain "English Portal" together anywhere in any
field.
E.g. records are appearing which have "English Portal", "English Portal
Sacramento", "Core English Portal" etc.
Problem is, if I am passing
I don't remember the answer, but I'm sure this has been discussed
many times on the mailing list. Have you tried searching that? You're
essentially asking about wildcarded phrase queries
Best
Erick
On Tue, May 5, 2009 at 9:52 AM, dabboo ag...@sapient.com wrote:
Hi,
I am searching for
Hi Erick,
I searched but couldn't find anything related. I am still looking in some
threads to find out if I can get something related. I would appreciate it if
you can provide me some pointers.
Thanks,
Amit Garg
Erick Erickson wrote:
I don't remember the answer, but I'm sure this has been
Hi All,
I'm [still!] evaluating Solr and setting up a PoC. The requirements are to
index the following objects:
- people - name, status, date added, address, profile, other people specific
fields like group...
- organisations - name, status, date added, address, profile, other
I am using the dismax request handler to achieve this. Though I am able to do
wildcard searches with dismax, I am not sure if I can combine wildcards with a
phrase. Please suggest.
Amit
Erick Erickson wrote:
I don't remember the answer, but I'm sure this has been discussed
many times on the mailing
That is how we do it at Netflix. --wunder
On 5/5/09 7:59 AM, Chris Masters roti...@yahoo.com wrote:
1) Is this approach/design sensible and do others use it?
More precisely, we use a single core, flat schema, with a type field.
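A flat schema along those lines might contain (field names are illustrative, not Netflix's actual schema):

```xml
<!-- schema.xml: one flat schema shared by every object type -->
<field name="id"     type="string" indexed="true" stored="true"/>
<field name="type"   type="string" indexed="true" stored="true"/>
<field name="name"   type="text"   indexed="true" stored="true"/>
<field name="status" type="string" indexed="true" stored="true"/>
```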
wunder
On 5/5/09 8:48 AM, Walter Underwood wunderw...@netflix.com wrote:
That is how we do it at Netflix. --wunder
On 5/5/09 7:59 AM, Chris Masters roti...@yahoo.com wrote:
1) Is this approach/design sensible and do
Lucene/Solr Meetup / May 20th, Reston VA, 6-8:30 pm
http://www.meetup.com/NOVA-Lucene-Solr-Meetup/
Join us for an evening of presentations and discussion on
Lucene/Solr, the Apache Open Source Search Engine/Platform, featuring:
Erik Hatcher, Lucid Imagination, Apache Lucene/Solr PMC: Solr power
Hi,
I am facing an issue while pulling a snapshot through the snappuller
script from the slave server:
We have a multicore setup on the Master Solr and Slave Solr servers.
Scenario: 2 cores are set up:
i) CORE_WWW.ABCD.COM
ii) CORE_WWW.XYZ.COM
The rsync-enable and rsync-start scripts have been run
I am having frequent OutOfMemory errors on our slave servers.
SEVERE: Error during auto-warming of
key:org.apache.solr.search.queryresult...@aca6b9cb:java.lang.OutOfMemoryError:
allocLargeObjectOrArray - Object size: 34279632, Num elements: 8569904
SEVERE: Error during auto-warming of
What's the best way to upgrade Solr from 1.2.0 to 1.3.0?
We have the current index that our users search running on Solr 1.2.0.
We would like to upgrade it to 1.3.0.
We have Master/Slaves env.
What's the best way to upgrade it without affecting the search? Do we need to
do it on
Hello,
I am trying to sort MoreLikeThis results by a date field instead of
relevance. Regular sort parameters don't seem to have any effect on the
results and I can't find any mlt.sort or similar parameters in MoreLikeThis
handler. My conclusion is that MoreLikeThis does not have a sort
I'm guessing (and it's only a guess) that you have some field
that's a datestamp and that you're sorting on it in your warmup
queries? If so, there are possibilities.
It would help a lot if you'd tell us more about the structure of
your index and what your autowarm queries look like, otherwise
Hi Francis,
How big are your caches? Please paste the relevant part of the config.
Which of your fields do you sort by? Paste definitions of those fields from
schema.xml, too.
Otis
--
Sematext -- http://sematext.com/ -- Lucene - Solr - Nutch
- Original Message
From: Francis
I don't think you can do a wildcard within a phrase. A patch for that is
sitting in Lucene's JIRA.
Otis
--
Sematext -- http://sematext.com/ -- Lucene - Solr - Nutch
- Original Message
From: dabboo ag...@sapient.com
To: solr-user@lucene.apache.org
Sent: Tuesday, May 5, 2009 11:35:40
Chris,
1) I'd put different types of data in different cores/instances, unless you
really need to search them all together. By using only common attributes
you are kind of killing the richness of your data and your ability to do
something useful with it.
2) I'd triple-check the do a second
Hi,
I've a distributed Solr instances. I'm using Java's UUID
(UUID.randomUUID()) to generate the unique id for my documents. Before
adding unique key I was able to commit 50K records in 15sec (pretty
constant over the growing index), after adding unique key it's taking
over 35 sec for 50k and
Here is the cache config in solrconfig.xml:
<!-- Cache used by SolrIndexSearcher for filters (DocSets),
     unordered sets of *all* documents that match a query.
     When a new searcher is opened, its caches may be prepopulated
     or autowarmed using data from caches in the old searcher.
Hi,
Timestamp is your most likely source of the problem. Round that as much as you
can or use tdate field type (you'll need to grab the nightly build). How many
documents are in this index - 1.5GB is a relatively large heap.
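As an illustration of the rounding idea (the helper below is hypothetical, not a Solr API; it truncates epoch millis to the UTC day, so many raw timestamps collapse to far fewer indexed values):

```java
import java.util.concurrent.TimeUnit;

public class RoundTimestamp {

    // Round an epoch-millis timestamp down to the start of its UTC day.
    // Fewer distinct values means fewer terms to sort and warm on.
    static long roundToDay(long epochMillis) {
        long dayMillis = TimeUnit.DAYS.toMillis(1); // 86,400,000 ms
        return (epochMillis / dayMillis) * dayMillis;
    }

    public static void main(String[] args) {
        // Two timestamps from the same day collapse to one indexed value.
        System.out.println(roundToDay(1241539200123L));
        System.out.println(roundToDay(1241541000456L));
    }
}
```

The same effect can be had per-field by rounding at index time, or with Solr's date math (e.g. NOW/DAY) at query time.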
Otis
--
Sematext -- http://sematext.com/ -- Lucene - Solr - Nutch
You really had nothing in the uniqueKey element in schema.xml at first? I'm not
looking at Solr code right now, but it could be the absence of that lookup's
cost that made things faster. Now you have a lookup + generation + more data
to pass through the analyzer + write out, though I can't imagine
Chris Masters wrote:
- flatten the searchable objects as much as I can - use a type field
to distinguish - into a single index
- use multi-core approach to segregate domains of data
Some newbie questions:
(1) What is a type field? Is it to designate different types of
documents, e.g.
On Tue, May 5, 2009 at 1:49 PM, vivek sar vivex...@gmail.com wrote:
I've a distributed Solr instances. I'm using Java's UUID
(UUID.randomUUID()) to generate the unique id for my documents. Before
adding unique key I was able to commit 50K records in 15sec (pretty
constant over the growing
1 - A field called type, probably a string field, in which you index
values such as people, organization, product.
2 - Yes, for each document you are indexing, you will include its
type, i.e. person.
3, 4, 5 - You would have a core for each domain. Each domain will
then have
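With the flat-schema variant, restricting a search to one type is then just a filter query (host, field, and values follow the examples above):

```
http://localhost:8983/solr/select?q=smith&fq=type:person
```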
I did clean up the indexes and re-started the index process from
scratch (a new index file). As another test, if I use a simple numeric
counter for the unique id, the index speed is fast (within 20 sec to
commit 50k records). I'm thinking UUID might not be the way to go for
the unique id - I'll look into using
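The contrast being measured here can be sketched like this (class name made up; the point is that random UUID keys scatter across the term index, while a counter stays ordered and cheap):

```java
import java.util.UUID;
import java.util.concurrent.atomic.AtomicLong;

public class IdGenerators {
    private static final AtomicLong SEQ = new AtomicLong();

    // Random UUIDs: globally unique with no coordination, but the keys
    // are effectively random, which hurts uniqueKey lookup locality.
    static String uuidId() {
        return UUID.randomUUID().toString();
    }

    // Monotonic counter: cheap and well-clustered in the index, but it
    // needs coordination across distributed indexers to stay unique.
    static String sequentialId() {
        return Long.toString(SEQ.incrementAndGet());
    }

    public static void main(String[] args) {
        System.out.println(uuidId());
        System.out.println(sequentialId());
    }
}
```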
Hi all,
I was wondering if anyone had used the new helper methods in
SolrPluginUtils added as part of
SOLR-948 (https://issues.apache.org/jira/browse/SOLR-948).
I tried the same implementation with Solr 1.3 and everything works correctly,
but for one issue.
In the response XML, the
That's how we do it at Orbitz. We use a type field to separate content, review
and promotional information in one single index. And then we use
last-components to plug this data together.
The only thing that we haven't yet tested is the scalability of this model,
since our data is small.
Dear Erik,
It would be great if you could upload the presentation online. It would help
all of us. And, if possible, a video too.
Warm Regards,
Allahbaksh
On Tue, May 5, 2009 at 11:40 PM, Lukáš Vlček lukas.vl...@gmail.com wrote:
Hello, any plans to upload these presentations on the web (or even better