If I don't explicitly set any cache configuration in solrconfig.xml for
caching and just use the default config file, does Solr do the caching
automatically based on the query?
Thanks
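For reference, the stock solrconfig.xml already declares the standard caches, and Solr fills them automatically as queries come in; you would only edit this section to tune sizes. A sketch (sizes here are the example defaults and may differ per version):

```xml
<!-- inside <query> in solrconfig.xml; present in the default config -->
<filterCache      class="solr.LRUCache" size="512" initialSize="512" autowarmCount="256"/>
<queryResultCache class="solr.LRUCache" size="512" initialSize="512" autowarmCount="256"/>
<documentCache    class="solr.LRUCache" size="512" initialSize="512" autowarmCount="0"/>
```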
my documents (products) have a price field, and I want to have
a dynamically calculated range facet for that in the response.
E.g. I want to have this in the response
price:[* TO 20] - 23
price:[20 TO 40] - 42
price:[40 TO *] - 33
if prices are between 0 and 60
but
price:[* TO 100] -
How can I stop this?
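Solr (1.3/1.4) does not compute range buckets dynamically; each bucket is requested explicitly as a facet.query, and only the buckets you request appear in the response. A sketch of building such a request with Python's standard library (the field name price and the bucket edges are just the example values from above):

```python
from urllib.parse import urlencode

# Each desired bucket is an explicit facet.query parameter; Solr only
# counts the queries you send, so an unwanted open-ended bucket like
# price:[* TO 100] is simply never requested.
buckets = ["price:[* TO 20]", "price:[20 TO 40]", "price:[40 TO *]"]
params = [("q", "*:*"), ("facet", "true")] + [("facet.query", b) for b in buckets]
query_string = urlencode(params)
print(query_string)
```

Appending this query string to a /select request returns one count per requested bucket under facet_queries.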
Noble Paul നോബിള് नोब्ळ् wrote:
if the DIH status does not say that it optimized, it is Lucene
merging the segments
On Mon, Mar 23, 2009 at 8:15 PM, sunnyfr johanna...@gmail.com wrote:
I checked this out but it doesn't say anything about optimizing.
I'm sure
I want following output from solr:
I index a field with value - A B;C D;E F
I have applied a pattern tokenizer on this field because I know the value
will contain ;
<fieldtype name="conditionText" class="solr.TextField">
  <analyzer>
    <tokenizer
On Tue, Mar 24, 2009 at 2:03 PM, Ashish P ashish.ping...@gmail.com wrote:
So it indexes A B, C D, E F properly, and I get these facets:
A B (1)
C D (1)
E F (1)
This is the exact output of facets I want.
But I also want to search this document when I just search individual word
'A' or 'D' etc.
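A common way to get both behaviors is to keep the pattern-tokenized field for faceting and copyField it into a second, word-tokenized field for searching. A sketch, with the search-field name and the "text" field type assumed from the example schema:

```xml
<!-- facet on this field: the pattern tokenizer splits only on ';',
     so "A B" stays one token -->
<field name="conditionText" type="conditionText" indexed="true" stored="true"/>
<!-- search on this field: ordinary word tokenization matches 'A' or 'D' -->
<field name="conditionText_search" type="text" indexed="true" stored="false"/>
<copyField source="conditionText" dest="conditionText_search"/>
```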
We should obviously get to the bottom of this. But I was thinking, should we
have some sort of timeouts on the SnapPuller in the slave to avoid such
scenarios? Locking out snap pulls forever is not a good idea.
On Mon, Mar 23, 2009 at 8:57 PM, Yonik Seeley yo...@lucidimagination.comwrote:
So
We do not set a conn_timeout/read_timeout for the HttpClient in SnapPuller.
I guess it should be set to some very high value, say 1 hour for
read_timeout and say 1 minute for conn_timeout, and we can make it
configurable.
--Noble
On Tue, Mar 24, 2009 at 2:13 PM, Shalin Shekhar Mangar
Hi,
If I'm using autocommit, and I have a crash of tomcat (or the whole
machine) while there are still docs pending, will I lose those
documents in limbo, or will I just be able to restart and then the
commit will run?
If the answer is they go away: is there any way to ensure the integrity
of an
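For context: Solr 1.3/1.4 has no transaction log, so documents added but not yet committed when the JVM dies are lost and must be re-sent. Tightening the autocommit window in solrconfig.xml bounds how much can be lost; a sketch (values illustrative):

```xml
<updateHandler class="solr.DirectUpdateHandler2">
  <autoCommit>
    <maxDocs>10000</maxDocs> <!-- commit after this many pending docs -->
    <maxTime>60000</maxTime> <!-- ...or after this many ms, whichever comes first -->
  </autoCommit>
</updateHandler>
```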
Hello list,
I'm having a hard time in a project that's not yet fully converted to Solr,
with multiple versions of the Lucene core classes. I can switch over
to the ones from Solr (solr-lucene-core-1.3.0), but they are
incompatible with lucene-core-2.3.1 and don't share the same version
number
On Tue, Mar 24, 2009 at 3:30 PM, Paul Libbrecht p...@activemath.org wrote:
Which Lucene version does solr-lucene-core-1.3.0 correspond to?
The lucene jars shipped with Solr 1.3.0 were 2.4-dev built from svn revision
r691741. You can check out the source from lucene's svn using that revision
number.
Hi,
question ;-)
<!DOCTYPE config SYSTEM "http://java.sun.com/dtd/web-app_2_3.dtd" [
<!ENTITY default_solrconfig SYSTEM
"/var/lib/tomcat5.5/webapps/solr/default_solrconfig.xml">
]>
Is there a chance to set the home directory using a variable? For
example a Unix environment variable?
On Tue, Mar 24, 2009 at 4:16 PM, Kraus, Ralf | pixelhouse GmbH
r...@pixelhouse.de wrote:
Hi,
question ;-)
<!DOCTYPE config SYSTEM "http://java.sun.com/dtd/web-app_2_3.dtd" [
<!ENTITY default_solrconfig SYSTEM
"/var/lib/tomcat5.5/webapps/solr/default_solrconfig.xml">
]>
Is there a chance
Hi Solr users
Our index could be much smaller if we could store some of the fields not in
the index directly but in some kind of external storage.
All I've found until now is ExternalFileField class which shows that it's
possible to implement such a storage, but I'm quite sure that the
requirement is
Andrey Klochkov wrote:
Hi Solr users
Our index could be much smaller if we could store some of the fields not in
the index directly but in some kind of external storage.
All I've found until now is ExternalFileField class which shows that it's
possible to implement such a storage, but I'm quite sure
Our index could be much smaller if we could store some of the fields not in
the index directly but in some kind of external storage.
All I've found until now is ExternalFileField class which shows that it's
possible to implement such a storage, but I'm quite sure that the
requirement is common and
On Tue, Mar 24, 2009 at 4:43 PM, Mark Miller markrmil...@gmail.com wrote:
That's a tall order. It almost sounds as if you want to be able to not use
the index to store fields, but have them still fully functional as if
indexed. That would be quite the magic trick.
Look here, people wanted
Hello Friends,
I am a newbie to Solr, so sorry for the silly question.
I am facing a problem related to multiple cores configuration. I have placed
a solr.xml file in the solr.home directory. Even so, when I try to
access http://localhost:8983/solr/admin/cores it gives me a Tomcat error.
Can
mitulpatel wrote:
Hello Friends,
I am a newbie to Solr, so sorry for the silly question.
I am facing a problem related to multiple cores configuration. I have placed
a solr.xml file in the solr.home directory. Even so, when I try to
access http://localhost:8983/solr/admin/cores it gives me
I'd like to be able to index various documents and have the text extracted
from them using the DataImportHandler. I think I have this working just fine.
However, I'd later like to be able to update a field value or several, without
re-extracting the text all over again with the DIH. Yes - and
No problem, Kimani. I am forwarding this message to the mailing list, in case
it can help others.
Audrey
-- Forwarded message --
From: Kimani Nielsen kniel...@gmail.com
Date: Tue, Mar 24, 2009 at 8:57 AM
Subject: Re: multicore solrconfig issues
To: Audrey Foo chry...@gmail.com
Thanks for your answer. Then what fires merging? Because in my log I see
optimize=true; if it's not optimization, since I don't trigger it, it must be
merging. How can I stop this?
Thanks a lot,
Shalin Shekhar Mangar wrote:
No, optimize is not automatic. You have to invoke it yourself just like
Hi All,
I have a txt file, that captured all of my network traffic. How can I use
Solr to filter out a particular IP address?
Thank you,
Nga.
Andrey Klochkov wrote:
On Tue, Mar 24, 2009 at 4:43 PM, Mark Miller markrmil...@gmail.com wrote:
That's a tall order. It almost sounds as if you want to be able to not use
the index to store fields, but have them still fully functional as if
indexed. That would be quite the magic trick.
Hello all,
Our application involves a high index write rate - anywhere from a few
dozen to many thousands of docs per sec. The write rate is frequently
higher than the read rate (though not always), and our index must be
as fresh as possible (we'd like search results to be no more than a
couple
I don't think that Solr is the best thing to use for searching a text
file. I'd use grep myself, if you're on a unix-like system.
To use solr, you'd need to throw each network 'event' (GET, POST, etc
etc) into an XML document, and post those into Solr so it could
generate the index. You
Do you think Lucene is better to filter out a particular IP address from a
txt file?
Thank you, Runo.
Nga
On Tue, Mar 24, 2009 at 10:21 AM, Matthew Runo mr...@zappos.com wrote:
I don't think that Solr is the best thing to use for searching a text file.
I'd use grep myself, if you're on a
Hi Dan,
We should turn this into a FAQ. In the meantime, have a look at SOLR-139 and
the issue linked to that one.
Otis
--
Sematext -- http://sematext.com/ -- Lucene - Solr - Nutch
- Original Message
From: Dan A. Dickey dan.dic...@savvis.net
To: solr-user@lucene.apache.org
I'm performing this operation:
curl http://localhost:8983/solr/update/extract?ext.def.fl=text --data-binary
@ZOLA.doc -H 'Content-type:text/html'
in order to index word document ZOLA.doc into Solr using the example
schema.xml. It says I have not provided an 'id', which is a required field.
I'm
Looking at the example here:
http://wiki.apache.org/solr/SimpleFacetParameters#head-4ba81c89b265c3b5992e3292718a0d100f7251ef
This being the query for selecting PDF:
q=mainquery&fq=status:public&fq={!tag=dt}doctype:pdf&facet=on&facet.field={!ex=dt}doctype
How would you do the query for selecting
On a few occasions, our development server crashed and in the process
Solr deleted the index folder. We are suspecting another app on the
server caused an OutOfMemoryException on Tomcat, causing all apps,
including Solr, to crash.
So my question is: why is Solr deleting the index? We are not
Well, I think you'll have the same problem. Lucene, and Solr (since
it's built on Lucene) are both going to expect a structured document
as input. Once you send in a bunch of documents, you can then query
them for whatever you want to find.
A quick search of the internets found me this
Somehow that sounds very unlikely. Have you looked at logs? What have you
found from Solr there? I am not checking the sources, but I don't think there
is any place in Solr where the index directory gets deleted.
Otis
--
Sematext -- http://sematext.com/ -- Lucene - Solr - Nutch
-
I've tried this too, still no luck:
curl http://localhost:8983/solr/update/extract?ext.def.fl=text -F id=123 -F
te...@zola.doc
2009/3/24 Chris Muktar ch...@wikijob.co.uk
I'm performing this operation:
curl http://localhost:8983/solr/update/extract?ext.def.fl=text--data-binary
@ZOLA.doc -H
:
: This is my query:
:
q=productPublicationDate_product_dt:[*%20TO%20NOW]&facet=true&facet.field=productPublicationDate_product_dt:[*%20TO%20NOW]&qt=dismaxrequest
that specific error is happening because you are passing this string...
productPublicationDate_product_dt:[*%20TO%20NOW]
Well,
A log file is theoretically structured. Every log record is a - very -
flat set of fields. So, every log file line would be a Lucene
document. Then, one could use Solr to search, filter and facet
records.
Of course, this requires parsing the log file back into its record components.
Most log files
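As a sketch of that parsing step (the log format here, Apache common log format, and the field names are assumptions), each line becomes one flat document:

```python
import re

# One Apache "common log format" line -> one flat dict, ready to post
# to Solr as a document (ip/timestamp/method/path/status/bytes fields).
LOG_RE = re.compile(
    r'(?P<ip>\S+) \S+ \S+ \[(?P<timestamp>[^\]]+)\] '
    r'"(?P<method>\S+) (?P<path>\S+) [^"]*" (?P<status>\d{3}) (?P<bytes>\d+|-)'
)

def parse_line(line):
    m = LOG_RE.match(line)
    return m.groupdict() if m else None

doc = parse_line('127.0.0.1 - - [24/Mar/2009:10:00:00 +0000] '
                 '"GET /index.html HTTP/1.0" 200 2326')
print(doc)
```

With one dict per line, filtering by a particular IP is then just a filter query on the ip field.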
On Tue, Mar 24, 2009 at 2:29 PM, Nasseam Elkarra nass...@bodukai.com wrote:
Looking at the example here:
http://wiki.apache.org/solr/SimpleFacetParameters#head-4ba81c89b265c3b5992e3292718a0d100f7251ef
This being the query for selecting PDF:
Correction: the index was not deleted. The folder is still there with the
index files in it but a *:* query returns 0 results. Is there a tool
to check the health of an index?
Thanks,
Nasseam
On Mar 24, 2009, at 11:49 AM, Otis Gospodnetic wrote:
Somehow that sounds very unlikely. Have you
There is, it's called CheckIndex and it is a part of Lucene (and Lucene jars
that come with Solr, I believe):
http://lucene.apache.org/java/2_4_1/api/org/apache/lucene/index/CheckIndex.html
Otis
--
Sematext -- http://sematext.com/ -- Lucene - Solr - Nutch
- Original Message
We have three Solr servers (several two-processor Dell PowerEdge
servers). I'd like to get three newer servers and I wanted to see what
we should be getting. I'm thinking the following...
Dell PowerEdge 2950 III
2x2.33GHz/12M 1333MHz Quad Core
16GB RAM
6 x 146GB 15K RPM RAID-5 drives
: as far as I know solr.StrField is not analyzed but it is indexed as is
: (verbatim).
correct ... but there is definitely a bug here if the analysis.jsp
is implying that an analyzer is being used...
https://issues.apache.org/jira/browse/SOLR-1086
-Hoss
OK, I'm fine with the fact that Solr is going to do X requests to the database
for X updates, but when I try to run the delta-import command with 2 rows to
update, is it normal that it's really slow, ~1 document fetched per second?
Noble Paul നോബിള് नोब्ळ् wrote:
not possible really,
that may not
: My application is in prod and quite frequently getting NullPointerException.
...
: java.lang.NullPointerException
: at
com.fm.search.incrementalindex.service.AuctionCollectionServiceImpl.indexData(AuctionCollectionServiceImpl.java:251)
: at
: Depending on your needs, you might want to do some sort of minimal
: analysis on the field (ignore punctuation, lowercase,...) Here's the
: text_exact field that I use:
Dean's reply is a great example of how 'exact' is a vague term.
With a TextField you can get an exact match using a simple
: Subject: Response schema for an update.
: In-Reply-To: shivayigjyfbf88vtu21...@shiva.ceiindia.com
: References: 69de18140903230141t38dbcd28n40bbcc944ddb0...@mail.gmail.com
: shivayigjyfbf88vtu21...@shiva.ceiindia.com
http://people.apache.org/~hossman/#threadhijack
Thread Hijacking on
: I am facing a problem related to multiple cores configuration. I have placed
: a solr.xml file in the solr.home directory. Even so, when I try to
: access http://localhost:8983/solr/admin/cores it gives me a Tomcat error.
:
: Can anyone tell me what can be possible issue with this??
not
Deja-Vu...
http://www.nabble.com/Missing-required-field%3A-id-Using-ExtractingRequestHandler-to22611039.html
: I'm performing this operation:
:
: curl http://localhost:8983/solr/update/extract?ext.def.fl=text --data-binary
: @ZOLA.doc -H 'Content-type:text/html'
:
: in order to index word
On 24 Mar 2009, at 11:14, Shalin Shekhar Mangar wrote:
On Tue, Mar 24, 2009 at 3:30 PM, Paul Libbrecht
p...@activemath.org wrote:
Which Lucene version does solr-lucene-core-1.3.0 correspond to?
The lucene jars shipped with Solr 1.3.0 were 2.4-dev built from svn
revision
r691741. You can
Fantastic thank you!
I'm executing this:
curl -F te...@zheng.doc -F 'commit=true'
http://localhost:8983/solr/update/extract?ext.def.fl=text\ext.literal.id=2
however performing the query
http://localhost:8983/solr/select?q=id:2
produces the output but without a text field. I'm not sure if it's
Hi,
Sorry, I still don't know what I should do.
I can see in my log that it clearly optimizes somewhere, even though my
command is delta-import with optimize=false.
Is it a parameter to add to the commit, or to the snappuller, or what?
Mar 24 23:02:44 search-01 jsvc.exec[22812]: Mar 24, 2009 11:02:44 PM
If your text field is not stored, then it won't be available in
results. That's the likely explanation. Seems like all is well.
Erik
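In other words, for the extracted text to come back in query responses, the target field has to be declared stored in schema.xml; a sketch (the example schema's text field is stored="false" by default):

```xml
<field name="text" type="text" indexed="true" stored="true" multiValued="true"/>
```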
On Mar 24, 2009, at 11:34 PM, Chris Muktar wrote:
Fantastic thank you!
I'm executing this:
curl -F te...@zheng.doc -F 'commit=true'
Have you looked at http://wiki.apache.org/solr/SolrPerformanceData ?
On Tue, Mar 24, 2009 at 4:51 PM, solr s...@highbeam.com wrote:
We have three Solr servers (several two-processor Dell PowerEdge
servers). I'd like to get three newer servers and
The tool says there are no problems. Solr is pointing to the right
directory so not sure what is preventing it from returning any
results. Any ideas? Here is the output:
Segments file=segments_2 numSegments=1 version=FORMAT_USER_DATA
[Lucene 2.9]
1 of 1: name=_0 docCount=18021
Can I get all the facets in QueryResponse??
Thanks,
Ashish
Hi Yonik,
Thanks for the response. If I shut down tomcat cleanly, does it
commit all uncommitted documents?
Best,
Jacob
-- Forwarded message --
From: Yonik Seeley yo...@lucidimagination.com
Date: Tue, Mar 24, 2009 at 8:48 PM
Subject: Re: autocommit and crashing tomcat
To:
Hm, you are not saying much about what you've tried. Could it be your Solr
home is wrong and not even pointing to the index you just checked?
Otis
--
Sematext -- http://sematext.com/ -- Lucene - Solr - Nutch
- Original Message
From: Nasseam Elkarra nass...@bodukai.com
To:
On Wed, Mar 25, 2009 at 3:23 AM, Paul Libbrecht p...@activemath.org wrote:
could I suggest that the maven repositories are populated next-time a
release of solr-specific-lucenes are made?
But they are? It is inside the org.apache.solr group since those lucene jars
are released by Solr --
On Wed, Mar 25, 2009 at 2:25 AM, AlexxelA alexandre.boudrea...@canoe.cawrote:
OK, I'm fine with the fact that Solr is going to do X requests to the database
for X updates, but when I try to run the delta-import command with 2 rows to
update, is it normal that it's really slow, ~1 document fetched /
hossman wrote:
: I am facing a problem related to multiple cores configuration. I have placed
: a solr.xml file in the solr.home directory. Even so, when I try to
: access http://localhost:8983/solr/admin/cores it gives me a Tomcat error.
:
: Can anyone tell me what can be
Hi all,
Can we specify the index-time boost value for a particular field in
schema.xml?
Thanks,
Siddharth
We have been running our solr slaves without autowarming our new searchers
for a long time, but that was causing us 50-75 requests in 20+ seconds
timeframe after every update on the slaves. I have turned on autowarming and
that has fixed our slow response times, but I'm running into occasional
On Wed, Mar 25, 2009 at 10:14 AM, Gargate, Siddharth sgarg...@ptc.comwrote:
Hi all,
Can we specify the index-time boost value for a particular field in
schema.xml?
No. You can specify it along with the document when you add it to Solr.
--
Regards,
Shalin Shekhar Mangar.
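For completeness, index-time boosts go in the update message itself rather than in schema.xml; in the XML update format that looks like (values illustrative):

```xml
<add>
  <doc boost="2.0">                                     <!-- document-level boost -->
    <field name="id">1</field>
    <field name="title" boost="3.0">Some title</field>  <!-- field-level boost -->
  </doc>
</add>
```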