One of the requirements we have is that when we deploy new data for solr
config (synonyms, dictionary, etc.) we should NOT have to restart the solr
instances for the changes to take effect.
Are there ConfigReload capabilities through JMX that can help us do
this?
Thanks in Advance
-Raghu
Yes we are sending the commits.
-Raghu
-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of Yonik
Seeley
Sent: Tuesday, August 05, 2008 12:01 PM
To: solr-user@lucene.apache.org
Subject: Re: Diagnostic tools
On Tue, Aug 5, 2008 at 12:43 PM, Kashyap, Raghu wrote:
One of the requirements we have is that when we deploy new data for
solr
config (synonyms, dictionary etc) we should NOT be restarting the solr
instances for the changes to take effect.
Are there ConfigReload capabilities through JMX that can help us do
this?
On Wed, Aug 6, 2008 at 3:09 AM, Kashyap, Raghu
[EMAIL PROTECTED] wrote:
Are there ConfigReload capabilities through JMX that can help us do
this?
No, only statistics are exposed through JMX at present.
SOLR-561 enables support for automatic config file replication
Not sure if this will work for you, but you can have 3 cores (using
multicore) and have your Solr server or the client decide which core it
should hit. With this approach you can have a separate schema.xml and
solrconfig.xml for each of the cores, and obviously a separate index in
each core.
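A sketch of the client-side routing idea above, in Python; the host, port, and core names are made up:

```python
from urllib.parse import urlencode

# Illustrative only: route queries to one of several Solr cores by name.
# Each core has its own schema, config, and index under a shared base URL.
BASE = "http://localhost:8983/solr"

def core_select_url(core, query):
    """Build a select URL against a specific core."""
    return f"{BASE}/{core}/select?{urlencode({'q': query})}"

print(core_select_url("hotel", "city:Austin"))
```

The client picks the core (here by passing its name); the server-side alternative is a dispatcher in front of Solr that rewrites the path.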
Hi,
Today I started seeing this exception when I started solr instance.
Any ideas what might be causing this problem?
INFO: xsltCacheLifetimeSeconds=5
Aug 13, 2008 9:20:45 AM org.apache.solr.common.SolrException log
SEVERE: java.lang.UnsupportedClassVersionError: Bad version number in
.class file
To: solr-user@lucene.apache.org
Subject: Re: Exception during Solr startup
Can you tell us a little bit more about your situation? What changed
today? New Solr WAR? What version of Solr are you using?
-Grant
On Aug 13, 2008, at 10:55 AM, Kashyap, Raghu wrote:
Hi,
Today I started seeing
To: solr-user@lucene.apache.org
Subject: Re: Exception during Solr startup
On Wed, Aug 13, 2008 at 10:55 AM, Kashyap, Raghu
[EMAIL PROTECTED] wrote:
SEVERE: java.lang.UnsupportedClassVersionError: Bad version number in
.class file
This is normally a mismatch between the java compiler and the runtime
(like classes compiled with Java 6 being run on an older JVM).
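That diagnosis can be checked directly: a .class file starts with the magic number 0xCAFEBABE followed by two-byte minor and major version fields, and the JVM refuses any class whose major version is newer than it understands. A small standalone sketch (not Solr code) that reads the header:

```python
import struct

# Class-file layout: 4-byte magic 0xCAFEBABE, 2-byte minor, 2-byte major.
# Major 49 = Java 5, 50 = Java 6 (the era of this thread).
MAJOR_TO_JAVA = {49: "Java 5", 50: "Java 6", 51: "Java 7", 52: "Java 8"}

def class_file_version(first_8_bytes):
    """Return (major version, Java release name) for a class-file header."""
    magic, minor, major = struct.unpack(">IHH", first_8_bytes)
    assert magic == 0xCAFEBABE, "not a class file"
    return major, MAJOR_TO_JAVA.get(major, "unknown")

# A class compiled by javac from JDK 6 starts like this:
header = bytes.fromhex("CAFEBABE00000032")
print(class_file_version(header))  # (50, 'Java 6')
```

Running a class like this on a Java 5 JVM produces exactly the UnsupportedClassVersionError quoted above.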
Is it possible that the index is corrupted? Did you try re-indexing it?
-Raghu
-Original Message-
From: Jeremy Hinegardner [mailto:[EMAIL PROTECTED]
Sent: Wednesday, August 27, 2008 11:34 AM
To: Solr Users
Subject: java.io.FileNotFoundException: no segments* file found
Hi all,
I've
Hi,
I have a use case where I need to define my own datatype (Money).
Will something like this work? Are there any issues with this approach?
schema.xml
<fieldType name="money" class="xyz.Money" omitNorms="true"/>
Thanks,
Raghu
Ps: We are using the trunk version of solr
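Independent of the Solr FieldType API, the heart of such a type is the mapping between the external money string and a sortable internal form. A rough sketch of that mapping only — the function names and internal format here are hypothetical, not Solr's:

```python
# Sketch: normalize money so lexicographic order matches numeric order.
# Internal form: "<currency>,<cents zero-padded to 10 digits>".
def to_internal(amount_str, currency="USD"):
    """'12.50' -> 'USD,0000001250' (cents, zero-padded, sortable)."""
    dollars, _, cents = amount_str.partition(".")
    total_cents = int(dollars) * 100 + int(cents.ljust(2, "0"))
    return f"{currency},{total_cents:010d}"

def to_external(internal):
    """'USD,0000001250' -> '12.50 USD'."""
    currency, cents = internal.split(",")
    return f"{int(cents) / 100:.2f} {currency}"

print(to_internal("12.50"))  # USD,0000001250
```

Zero-padding is what makes range and sort queries behave: "2.00" would otherwise sort after "10.00" as a plain string.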
Thanks for your response Hoss.
Yes, we do have a Money class already implemented in other areas of our
application, which I would like to use.
I will try this out.
-Raghu
-Original Message-
From: Chris Hostetter [mailto:[EMAIL PROTECTED]
Sent: Friday, September 05, 2008 12:20 AM
To:
Solr 1.3.0 is in the process of being released. If you wait for it, you
can get the latest official release.
http://wiki.apache.org/solr/SolrInstall
http://wiki.apache.org/solr/Solr1.3?highlight=(1.3)
-Raghu
-Original Message-
From: sunnyfr [mailto:[EMAIL PROTECTED]
Hi,
Rsyncd is the rsync (http://samba.anu.edu.au/rsync/) daemon. You need
to make sure that rsyncd is running on both the master and the slave
machines. You use snapshooter on the master server to create the
snapshot, and run snappuller on the slave machines to pull those
snapshots from the master server.
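Under the standard 1.x collection-distribution setup, that sequence looks roughly like this; the scripts live in Solr's bin/ directory and read their settings from conf/scripts.conf (the paths shown are illustrative):

```
# On the master, after indexing and committing:
solr/bin/snapshooter

# On each slave, typically driven from cron:
solr/bin/snappuller      # rsync the latest snapshot from the master
solr/bin/snapinstaller   # install it and trigger a new searcher
```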
Hi Geoff,
I cannot vouch for Autonomy. However, earlier this year we did evaluate
Endeca and Solr, and we went with Solr. Some of the reasons were:
1. Freedom of open source with Solr
2. Very good, active Solr open source community
3. Features pretty much overlap between Solr and Endeca
4. Endeca
Hi,
I am trying to delete a record from the index using SolrJ. When I
execute it I get a status of 0, which means success. I see that the
cumulative_deletesByQuery count increases by 1 and the commit count
also increases by one. I don't see any decrease in the numDocs count.
When I query it
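For reference, a SolrJ deleteByQuery followed by a commit corresponds to these raw update messages (the field name and value here are just an example); deleted documents only disappear from numDocs once a commit has opened a new searcher:

```xml
<delete><query>reviewid:12345</query></delete>
<commit/>
```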
Thanks for your response Chris.
I do see the reviewid in the index through Luke. I guess what I am
confused about is the cumulative_delete field. Does this have any
significance as to whether the delete was a success or not? Also,
shouldn't the deleteByQuery method return a different status code based on
Hi Otis,
"It's hard to tell where exactly the bottleneck is without looking
at the server and a few other things."
Can you suggest some areas where we can start looking into this issue?
-Raghu
-Original Message-
From: Otis Gospodnetic [mailto:[EMAIL PROTECTED]
Sent:
Does anyone know if the solr-ruby gem is compatible with Solr 1.3?
Also, is anyone using the acts_as_solr plugin? Of late the website is
down and I can't find any recent activity on it.
-Raghu
We are running solr on a Solaris box with 4 CPUs (8 cores) and 3GB RAM.
When we try to index sometimes the HTTP Connection just hangs and the
client which is posting documents to solr doesn't get any response back.
We since then have added timeouts to our http requests from the clients.
I
On Dec 4, 2008, at 10:40 PM, Kashyap, Raghu wrote:
We are running solr on a Solaris box with 4 CPUs (8 cores) and 3GB
RAM.
When we try to index sometimes the HTTP Connection just hangs and the
client which is posting documents to solr doesn't get any response
back.
We since then have added
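A client-side timeout of the kind described can be sketched like this in Python; the URL and timeout value are illustrative:

```python
import urllib.request

# Sketch: post an update to Solr with an explicit client-side timeout so
# a hung connection fails fast instead of blocking the indexer forever.
SOLR_UPDATE = "http://localhost:8983/solr/update"

def build_commit_request():
    """Prepare an HTTP POST carrying a Solr update message."""
    req = urllib.request.Request(SOLR_UPDATE, data=b"<commit/>")
    req.add_header("Content-Type", "text/xml")
    return req

# Cap the request at 30 seconds (uncomment against a live server):
# urllib.request.urlopen(build_commit_request(), timeout=30)
```

The timeout turns a silent hang into an exception the indexing client can catch and retry.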
It does not depend on the operating
system (Linux or Solaris). It depends on the CPU (Intel or SPARC).
Don't know why, but based on my performance tests, a SPARC machine
requires MORE memory for a java application.
Jae
On Thu, Dec 4, 2008 at 10:40 PM, Kashyap, Raghu
[EMAIL PROTECTED] wrote:
We are running solr
/content/zones/
- Jon
On Dec 5, 2008, at 10:58 AM, Kashyap, Raghu wrote:
Jon,
What do you mean by "off a Zone"? Please clarify.
-Raghu
-Original Message-
From: Jon Baer [mailto:[EMAIL PROTECTED]
Sent: Thursday, December 04, 2008 9:56 PM
To: solr-user@lucene.apache.org
Subject: Re
Hi,
We are seeing a strange behavior with snappuller
We have 2 cores, Hotel and Location.
Here are the steps we perform
1. index hotel on master server
2. index location on master server
3. execute snapshooter for hotel core on master server
4. execute snapshooter
To: solr-user@lucene.apache.org
Subject: Re: snappuller issue with multicore
I noticed that you are using the same rsyncd port for both cores. Do you
have a scripts.conf for each core?
Bill
On Tue, Dec 9, 2008 at 11:40 PM, Kashyap, Raghu
[EMAIL PROTECTED] wrote:
Hi,
We are seeing a strange behavior with snappuller
-Original Message-
From: payalsharma [mailto:[EMAIL PROTECTED]
Sent: Wednesday, December 10, 2008 9:11 AM
To: solr-user@lucene.apache.org
Subject: Re: Can we extract contents from two Core folders
Hi,
Will you please explain what exactly you mean by:
"Distributed search over the cores"?
Ok, I think the problem is what Bill mentioned earlier. The rsync port
was the same for both cores, due to which it was copying the same
snapshot for both cores.
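A per-core scripts.conf with distinct rsyncd ports would look something like this (the port numbers and paths are made up):

```
# hotel/conf/scripts.conf
rsyncd_port=18983
data_dir=/var/solr/hotel/data

# location/conf/scripts.conf
rsyncd_port=18984
data_dir=/var/solr/location/data
```

With separate ports, each core's snappuller talks to its own rsync daemon and cannot pull the other core's snapshot.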
Thanks for all the help
-Raghu
-Original Message-
From: Kashyap, Raghu [mailto:[EMAIL PROTECTED]
Sent: Wednesday
Hi,
One of the things we are looking for is to autofill keywords when people
start typing (e.g. Google's autocomplete).
Currently we are using the RangeQuery. I read about the PrefixQuery and feel
that it might be appropriate for this kind of implementation.
Has anyone implemented the autofill
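Setting Solr aside, the prefix-lookup idea behind such an autofill can be sketched over a sorted term list (the terms are made up):

```python
import bisect

# Sketch: prefix lookup over a sorted list of indexed terms -- the idea
# behind using a prefix query for autocomplete suggestions.
TERMS = sorted(["boston", "bostonian", "botany", "hotel", "hotels"])

def suggest(prefix, limit=10):
    """Return up to `limit` terms starting with `prefix`."""
    start = bisect.bisect_left(TERMS, prefix)  # first candidate position
    out = []
    for term in TERMS[start:start + limit]:
        if not term.startswith(prefix):
            break  # sorted order: no later term can match either
        out.append(term)
    return out

print(suggest("bos"))  # ['boston', 'bostonian']
```

Because the terms are sorted, all matches for a prefix are contiguous, which is why a prefix query is a better fit here than a range query.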