When I try without the adaptive parameter I get an OOME:
HTTP Status 500 - Java heap space java.lang.OutOfMemoryError: Java heap
space
Shalin Shekhar Mangar wrote:
On Mon, Sep 22, 2008 at 9:19 PM, sunnyfr [EMAIL PROTECTED] wrote:
Hi,
There is something weird:
I've planned a cron job every 5 min.
When I try without the adaptive parameter I get an out-of-memory error.
Shalin Shekhar Mangar wrote:
On Mon, Sep 22, 2008 at 9:19 PM, sunnyfr [EMAIL PROTECTED] wrote:
Hi,
There is something weird:
I've planned a cron job every 5 min which hits the delta-import URL, and it
works fine:
The point is :
Hi Otis,
Currently I am creating indexes from a standalone Java program.
I prepare the data to index with a query.
Can we write a function as below?
I have a large number of products, and we want to use this at production level.
Please provide me a sample or tutorials.
/**
*
On 23.09.2008 00:30 Chris Hostetter wrote:
: Here is what I was able to get working with your help.
:
: (productId:(102685804)) AND liveDate:[* TO NOW] AND ((endDate:[NOW TO *]) OR
: ((*:* -endDate:[* TO *])))
:
: the *:* is what I was missing.
Please, PLEASE ... do yourself a favor
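For anyone else hitting the same gotcha: a purely negative clause matches nothing on its own in Lucene query syntax, so the match-all `*:*` term gives the negation something to subtract from. The pattern from the query quoted above, reduced to a sketch:

```
liveDate:[* TO NOW] AND (endDate:[NOW TO *] OR (*:* -endDate:[* TO *]))
```

The `(*:* -endDate:[* TO *])` clause is what selects documents that have no endDate value at all.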
Hi Dinesh,
Your code is hardly useful to us since we don't know what you are trying to
achieve or what all those Dao classes do.
Look at the Solr tutorial first -- http://lucene.apache.org/solr/
Use the SolrJ client for communicating with Solr server --
http://wiki.apache.org/solr/Solrj
Also
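As a concrete starting point alongside the tutorial and SolrJ links, documents can also be posted to Solr's /update handler in its XML message format. A minimal sketch; the field names here are hypothetical, not taken from the thread:

```xml
<add>
  <doc>
    <field name="id">PRODUCT-1</field>
    <field name="name">Example product</field>
  </doc>
</add>
```

followed by a `<commit/>` message to make the added documents visible to searchers.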
Hi,
Currently we are using the Lucene API to create an index.
It creates the index in a directory with files such as
xxx.cfs, deletable, and segments.
If I create Lucene indexes from Solr, will these files be created or not?
Please give me an example using a MySQL database instead of HSQLDB.
Regards,
Dinesh
Hello everyone, I'm new to Solr (have been using Lucene for a few years
now). We are looking into Solr and have heard many good things about the
project:)
I have a few questions regarding the EmbeddedSolrServer in Solrj and the
MultiCore features... I've tried to find answers to this in
On Tue, Sep 23, 2008 at 5:33 PM, Dinesh Gupta [EMAIL PROTECTED] wrote:
Hi,
Currently we are using the Lucene API to create an index.
It creates the index in a directory with files such as
xxx.cfs, deletable, and segments.
If I create Lucene indexes from Solr, will these files be created or
not?
The
Hi Shalin Shekhar,
Let me explain my issue.
I have some tables in my database like
Product
Category
Catalogue
Keywords
Seller
Brand
Country_city_group
etc.
I have a class that represent product document as
Document doc = new Document();
// Keywords which can be used directly for
Hi,
Probably a stupid question with the obvious answer, but if I am
running a Solr master and accepting updates, do I have to stop the
updates when I start the optimise of the index? Or will optimise just
take the latest snapshot and work on that independently of the
incoming updates?
Really
Yes, indeed it was a problem with the path... thanks a lot.
I just didn't get this part: "If you turn up your logging to FINE" -- what
does that mean?
Huge thanks for your answer,
hossman wrote:
: And I did change my config file :
:
: <!-- A postCommit event is fired after every commit or
Hi,
I don't know why, when I trigger a commit manually, it doesn't fire the
snapshooter. I ran the snapshooter by hand because no snapshot was being
created, and run manually it works.
so my auto commit is activated (I think) :
<autoCommit>
  <maxDocs>1</maxDocs>
  <maxTime>1000</maxTime>
</autoCommit>
My
Hi Dinesh,
This seems straightforward for Solr. You can use the embedded jetty server
for a start. Look at the tutorial on how to get started.
You'll need to modify the schema.xml to define all the fields that you want
to index. The wiki page at http://wiki.apache.org/solr/SchemaXml is a good
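To make that concrete, a hypothetical schema.xml fragment for the product data Dinesh describes (all field names here are assumptions for illustration, not from the source):

```xml
<field name="id" type="string" indexed="true" stored="true" required="true"/>
<field name="name" type="text" indexed="true" stored="true"/>
<field name="category" type="string" indexed="true" stored="true"/>
<field name="brand" type="string" indexed="true" stored="true"/>
```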
On Tue, Sep 23, 2008 at 7:06 PM, Geoff Hopson [EMAIL PROTECTED] wrote:
Probably a stupid question with the obvious answer, but if I am
running a Solr master and accepting updates, do I have to stop the
updates when I start the optimise of the index? Or will optimise just
take the latest
On Tue, Sep 23, 2008 at 7:36 PM, sunnyfr [EMAIL PROTECTED] wrote:
My snapshooter too:
<!-- A postCommit event is fired after every commit or optimize command
-->
<listener event="postCommit" class="solr.RunExecutableListener">
<str name="exe">./data/solr/book/logs/snapshooter</str>
<str
Right, my bad, it was the bin directory, but even when I fire a commit no
snapshot is created??
Does it check the number of documents even when I fire it? And another
question: I don't remember putting the path to commit in the conf file, but
even manually it doesn't work.
[EMAIL PROTECTED]:/#
Hi,
I'm quite new to Solr and I'm looking for a way to extend the list of
synonyms used at query time without having to reload the config. What I've
found so far are the two threads linked to below, of which neither really
helped me out.
Especially the MultiCore solution seems a little bit
This is probably not useful because synonyms work better at index time
than at query time. Reloading synonyms also requires reindexing all
the affected documents.
wunder
On 9/23/08 7:45 AM, Batzenmann [EMAIL PROTECTED] wrote:
Hi,
I'm quite new to solr and I'm looking for a way to extend
Thanks for your response Chris.
I do see the reviewid in the index through luke. I guess what I am
confused about is the field cumulative_delete. Does this have any
significance to whether the delete was a success or not? Also shouldn't
the method deleteByQuery return a diff status code based on
I have searched the forum and the internet at large to find an answer to my
simple problem, but have been unable. I am trying to get a simple dataimport
to work, and have not been able to. I have Solr installed on an Apache
server on Unix. I am able to commit and search for files using the usual
I've got a small configuration question. When posting docs via SolrJ, I get
the following warning in the Solr logs:
WARNING: The @Deprecated SolrUpdateServlet does not accept query parameters:
wt=xml&version=2.2
If you are using solrj, make sure to register a request handler to /update
rather
On Sep 23, 2008, at 12:35 PM, Gregg wrote:
I've got a small configuration question. When posting docs via
SolrJ, I get
the following warning in the Solr logs:
WARNING: The @Deprecated SolrUpdateServlet does not accept query
parameters:
wt=xml&version=2.2
If you are using solrj, make sure
Are there any exceptions in the log file when you start Solr?
On Tue, Sep 23, 2008 at 9:31 PM, KyleMorrison [EMAIL PROTECTED] wrote:
I have searched the forum and the internet at large to find an answer to my
simple problem, but have been unable. I am trying to get a simple
dataimport
to
Problem with the span filter - removing some text - re-posting.
water4u99 wrote:
Hi,
Some additional clue as to where the issue is: the computed number changes
when there is an additional query in the request.
Ex1: .../select/?q=_val_:%22sum(stockPrice_f,10.00)%22&fl=*,score
Simply set text to be multivalued (one for each *_t field).
Erik
On Sep 22, 2008, at 1:08 PM, Jon Drukman wrote:
I have a dynamicField declaration:
<dynamicField name="*_t" type="text" indexed="true" stored="true"/>
I want to copy any *_t's into a text field for searching with
dismax. As
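Erik's suggestion can be sketched in schema.xml roughly as follows, assuming a catch-all field named text as in the stock example schema; the multiValued attribute is the key addition:

```xml
<dynamicField name="*_t" type="text" indexed="true" stored="true"/>
<field name="text" type="text" indexed="true" stored="false" multiValued="true"/>
<copyField source="*_t" dest="text"/>
```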
Thank you for help. The problem was actually just stupidity on my part, as it
seems I was running the wrong startup and shutdown shells for the server,
and thus the server was getting restarted. I restarted the server and I can
at least access those pages. I'm getting some wonky output, but I
This turned out to be a fairly pedestrian bug on my part: I had /update
appended to the Solr base URL when I was adding docs via SolrJ.
Thanks for the help.
--Gregg
On Tue, Sep 23, 2008 at 12:42 PM, Ryan McKinley [EMAIL PROTECTED] wrote:
On Sep 23, 2008, at 12:35 PM, Gregg wrote:
I've got
Ok, I'm very frustrated. I've tried every configuration and every parameter
I can, and I cannot get fragments to show up in the highlighting in Solr (no
fragments at the bottom, no <em></em> highlights in the text). I must be
missing something, but I'm just not sure what it is.
Make sure the fields you're trying to highlight are stored in your schema
(e.g. <field name="synopsis" type="string" stored="true"/>)
David Snelling-2 wrote:
Ok, I'm very frustrated. I've tried every configuration and every
parameter,
and I cannot get fragments to show up in the highlighting in
This is the configuration for the two fields I have tried on
<field name="shortdescription" type="string" indexed="true" stored="true"/>
<field name="synopsis" type="string" indexed="true" stored="true"
compressed="true"/>
On Tue, Sep 23, 2008 at 1:59 PM, wojtekpia [EMAIL PROTECTED] wrote:
Make sure the
Try a query where you're sure to get something to highlight in one of your
highlight fields, for example:
/select/?qt=standard&q=synopsis:crayon&hl=true&hl.fl=synopsis,shortdescription
David Snelling-2 wrote:
This is the configuration for the two fields I have tried on
field
At this point, it's roll your own. I'd love to see the BTQ in Solr
(and Spans!), but I wonder if it makes sense w/o better indexing side
support. I assume you are rolling your own Analyzer, right? Spans
and payloads are this huge untapped area for better search!
On Sep 23, 2008, at
It may be too early to say this but I'll say it anyway :)
There should be a juicy case study that includes payloads, BTQ, and Spans in
the upcoming Lucene in Action 2. I can't wait to see it, personally.
Otis
--
Sematext -- http://sematext.com/ -- Lucene - Solr - Nutch
- Original
At this point, it's roll your own.
That's where I'm getting bogged down - I'm confused by the various queryparser
classes in lucene and solr and I'm not sure exactly what I need to override.
Do you know of an example of something similar to what I'm doing that I could
use as a reference?
Hmmm. That doesn't actually return anything, which is odd because I know
the term is in the field if I do a query without specifying the field.
http://qasearch.donorschoose.org/select/?q=synopsis:students
returns nothing
http://qasearch.donorschoose.org/select/?q=students
returns items with query in
Your fields are all of string type. String fields aren't tokenized or
analyzed, so you have to match the entire text of those fields to actually
get a match. Try the following:
/select/?q=firstname:Kathryn&hl=on&hl.fl=firstname
The reason you're seeing results with just q=students, but not
Ok, thanks, that makes a lot of sense now.
So, how should I be storing the text for the synopsis or shortdescription
fields so it would be tokenized? Should it be text instead of string?
Thank you very much for the help by the way.
On Tue, Sep 23, 2008 at 2:49 PM, wojtekpia [EMAIL PROTECTED]
Yes, you can use text (or some custom derivative of it) for your fields.
David Snelling-2 wrote:
Ok, thanks, that makes a lot of sense now.
So, how should I be storing the text for the synopsis or shortdescription
fields so it would be tokenized? Should it be text instead of string?
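A minimal sketch of the change being suggested: switch the two fields from string to the analyzed text type, keeping the other attributes from the configuration quoted earlier in the thread:

```xml
<field name="shortdescription" type="text" indexed="true" stored="true"/>
<field name="synopsis" type="text" indexed="true" stored="true" compressed="true"/>
```

After a schema change like this, the documents need to be reindexed for the new analysis to take effect.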
On Sep 23, 2008, at 5:39 PM, Ensdorf Ken wrote:
At this point, it's roll your own.
That's where I'm getting bogged down - I'm confused by the various
queryparser classes in lucene and solr and I'm not sure exactly what
I need to override. Do you know of an example of something similar
Hi,
Can't tell with certainty without looking, but my guess would be slow disk,
high IO, and a large number of processes waiting for IO (run vmstat and look at
the wa column).
Otis
--
Sematext -- http://sematext.com/ -- Lucene - Solr - Nutch
- Original Message
From: rahul_k123
hi,
How do I give more weight to frequently searched words in Solr?
What functionality does the Apache Solr module provide for this?
I have a list of the most frequently searched words on my site, and I need to
highlight those words. From the net I found out that 'score' is used for this
purpose. Is that true?
Anybody knows
Hi,
Thanks for the reply.
I am not using Solr for indexing and serving search requests; I am using
only the scripts for replication.
Yes, it looks like I/O, but my question is how to handle this problem, and is
there an optimal way to achieve this?
Thanks.
Otis Gospodnetic wrote:
Hi,