I downloaded the nightly build and deployed the dist/solr-nightly-build.war
in Tomcat with all other config the same (same solr.home). Trying to create a new core
gave me the same error. Do I still need to apply the patch you mentioned
earlier? I'm stuck, please help me out.
I used this to create a new
Hi All,
I am new here. Thanks for reading my question.
I want to use DataImportHandler to index my tons of xml files (7GB total)
stored in my local disk. My data-config.xml is attached below. It works fine
with one file (abc.xml), but how can I index all xml files at one time? Thanks!
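Indexing a whole directory of XML files is what DIH's FileListEntityProcessor is for: an outer entity walks the disk and an inner XPathEntityProcessor parses each file. A minimal sketch, assuming the files sit under /data/xml and that the inner entity mirrors whatever the working single-file config for abc.xml contained (the forEach and field paths below are placeholders):

```xml
<dataConfig>
  <dataSource type="FileDataSource" />
  <document>
    <!-- outer entity enumerates files; rootEntity="false" so documents
         come from the inner entity, not from the file list itself -->
    <entity name="f" processor="FileListEntityProcessor"
            baseDir="/data/xml" fileName=".*\.xml"
            recursive="true" rootEntity="false">
      <!-- inner entity parses each file found above -->
      <entity name="x" processor="XPathEntityProcessor"
              url="${f.fileAbsolutePath}" forEach="/record">
        <field column="id" xpath="/record/id" />
      </entity>
    </entity>
  </document>
</dataConfig>
```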
Please do so ASAP. I'm stuck just because of that. If not, then I might have
to move to Lucene. I guess Lucene has support for on-the-fly creation of new
indexes. Any idea on this?
Thanks,
KK.
2009/5/20 Noble Paul നോബിള് नोब्ळ् noble.p...@corp.aol.com
hi KK,
the fix got removed in a subsequent
I downloaded the nightly build. How do I get the trunk and where do I get it
from? I've never used a trunk before. Please help me out. Tell me the
detailed steps for all this.
Thanks,
KK.
2009/5/20 Noble Paul നോബിള് नोब्ळ् noble.p...@corp.aol.com
if you are using a trunk version you can still
what is the difference between query clause and filter query??
Thanks,
Ashish
--
View this message in context:
http://www.nabble.com/query-clause-and-filter-query-tp23629715p23629715.html
Sent from the Solr - User mailing list archive at Nabble.com.
there is an example given for the same
http://wiki.apache.org/solr/DataImportHandler#head-8edbc7e588d97068aa0a61ed2e3e8e61b43debce
On Wed, May 20, 2009 at 11:35 AM, Jianbin Dai djian...@yahoo.com wrote:
Hi All,
I am new here. Thanks for reading my question.
I want to use DataImportHandler
On Tue, May 19, 2009 at 11:50 PM, solrpowr solrp...@hotmail.com wrote:
Besides my own offline processing via logs, does solr have the
functionality
to give me statistics such as top searches, how many results were returned
on these searches, and/or how long it took to get these results on
Do I have to check out this
http://svn.apache.org/repos/asf/lucene/solr/trunk/
using this command:
svn checkout http://svn.apache.org/repos/asf/lucene/solr/trunk/
? I don't have much idea about svn checkout from a public repo; give me some specific
pointers on this.
And then fire ant to build a fresh Solr?
OK then, I assume that the nightly build will solve my basic problem of on-the-fly
creation of new cores using dataDir as a request parameter, so I can wait
for two more hours.
One more thing: the new nightly build will be called solr-2009-05-20.tgz,
right, as the current one is solr-2009-05-19.tgz?
Hi, I'm looking for some advice on how to add base query caching to SOLR.
Our use-case for SOLR is:
- a large Lucene index (32M docs, doubling in 6 months, 110GB increasing x 8
in 6 months)
- a frontend which presents views of this data in 5 categories by firing
off 5 queries with the same
Hi,
At the moment Solr does not have such functionality. I have written a plugin
for Solr though which uses a second Solr core to store/index the searches. If
you're interested, send me an email and I'll get you the source for the plugin.
Regards,
Patrick
-Original Message-
From:
Hello,
I'm sorry I wrote a mistake, I mean :
http://localhost:8983/solr/listings/select/?q=novel&qf=title_s^5.0&fl=title_s+isbn_s&version=2.2&start=0&rows=5&indent=on&debugQuery=on
(using qf (Query Fields))
But it seems I need to add dismax as well and configure it by default in
solr config?
Thanks
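For reference, qt=dismax only works if a matching handler is registered in solrconfig.xml; a hedged sketch, reusing the field names from the URL above (boosts are illustrative):

```xml
<requestHandler name="dismax" class="solr.SearchHandler">
  <lst name="defaults">
    <!-- make this handler parse q with the dismax query parser -->
    <str name="defType">dismax</str>
    <str name="qf">title_s^5.0 isbn_s</str>
  </lst>
</requestHandler>
```

With that in place the request can stay as plain q=novel plus qt=dismax; a per-request qf then overrides the default.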
Read Consider using filters section here:
http://wiki.apache.org/lucene-java/ImproveSearchingSpeed
On Wed, May 20, 2009 at 10:24 AM, Ashish P ashish.ping...@gmail.com wrote:
what is the difference between query clause and filter query??
Thanks,
Ashish
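One way to picture the difference: the query clause (q) both selects and scores documents, while a filter query (fq) only restricts the result set, contributes nothing to the score, and is cached independently in the filterCache. A toy sketch (plain Python, not Solr code; all names are made up):

```python
# Toy model of q vs fq: q ranks documents, fq only restricts the
# candidate set (and is cacheable on its own because it does not
# depend on the main query).
docs = {1: "cheap ipod case", 2: "ipod nano", 3: "laptop bag"}
in_stock = {1, 2}  # imagine this set cached once in the filterCache

def search(q, fq_docset):
    matches = {d for d, text in docs.items() if q in text}
    return sorted(matches & fq_docset)  # set intersection, no score impact

print(search("ipod", in_stock))  # -> [1, 2]
```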
AFAIK there's no way of getting it in a static way. If you look into
SolrDispatchFilter.java, you'll see these lines:
// put the core container in a request attribute
req.setAttribute("org.apache.solr.CoreContainer", cores);
So later in your servlet you can get this request attribute, I do it in this
On Wed, May 20, 2009 at 1:31 PM, Plaatje, Patrick
patrick.plaa...@getronics.com wrote:
At the moment Solr does not have such functionality. I have written a
plugin for Solr though which uses a second Solr core to store/index the
searches. If you're interested, send me an email and I'll get
I tried the following request after changing to dismax:
http://localhost:8983/solr/listings/select/?q=novel&qt=dismax&qf=title_s^2.0&fl=title_s+isbn_s&version=2.2&start=0&rows=5&indent=on&debugQuery=on
But I don't get any results :
<lst name="debug">
<str name="rawquerystring">novel</str>
<str
Hmm... somehow Lucene is flushing a new segment on closing the
IndexWriter, and thinks 1 doc had been added to the stored fields
file, yet the fdx file is the wrong size (0 bytes). This check (and
exception) is designed to prevent corruption from entering the index,
so it's at least good to see
Hi Noble,
I downloaded the latest nightly build(20th may) and deployed the
nightly-build.war and tried to run the same thing for creating a new core
and the bad news is that it didn't work. I tried this
<dataDir>/opt/solr/data/${core.name}</dataDir> also
Hi all,
I am not sure how to call optimize on the existing
index. I tried with following URL
http://localhost:9090/solr/update?optimize=true
With this request, the response took a long time, and the index folder
size doubled. Then again I queried the same URL and index size
Thanks for the response.
That means I have to have the directory before I pass it to Solr; it's not
going to create it by itself. Or will just passing the name make it create a
new directory? Do I have to give the full path? Seems I won't be able to register
a new core on the fly by just passing the name.
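The CoreAdmin call being attempted here can be sketched as a URL built from the CREATE parameters. The core name and paths below are hypothetical, and whether dataDir is honored at CREATE time depends on the build, which is exactly what this thread is chasing; the instanceDir generally has to exist on disk already:

```python
from urllib.parse import urlencode

# Hypothetical core name and paths -- CoreAdmin does not create
# the directories for you in the builds discussed here.
params = {
    "action": "CREATE",
    "name": "core1",
    "instanceDir": "/opt/solr/core1",
    "dataDir": "/opt/solr/data/core1",
}
url = "http://localhost:8983/solr/admin/cores?" + urlencode(params, safe="/")
print(url)
```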
To send an optimize command, POST an <optimize/> message to /solr/update
Erik
On May 20, 2009, at 6:49 AM, Gargate, Siddharth wrote:
Hi all,
I am not sure how to call optimize on the existing
index. I tried with following URL
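In concrete form, the message Erik describes is a one-line XML body POSTed with Content-type text/xml. The temporary growth of the index directory is expected: optimize rewrites all segments into one, so disk usage can transiently reach roughly twice the index size before the old segments are deleted.

```xml
<!-- POST this body to /solr/update (Content-type: text/xml) -->
<optimize/>
<!-- a plain commit, for comparison, uses the same mechanism -->
<commit/>
```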
I cringe to suggest this but you can use the deprecated call:
SolrCore.getSolrCore().getCoreContainer()
On May 19, 2009, at 11:21 AM, Giovanni De Stefano wrote:
Hello all,
I have a quick question but I cannot find a quick answer :-)
I have a Java client running on the same JVM where
In Solr 1.3, is there a setting that allows one to modify where the
dataimport.properties file resides?
In a production environment, the solrconfig directory needs to be read-only.
I have observed that the DIH process works regardless, but a whopping error is
put in the logs when the
Error is below. This error does not appear when I manually copy the jar file
into the tomcat webapp directory only when I try to put it in the solr.home
lib directory.
SEVERE: org.apache.solr.common.SolrException: Error loading class
'org.apache.solr.handler.component.FacetCubeComponent'
at
Just a wild guess here, but...
Try doing one of two things:
1. change the package name to be something other than o.a.s
2. Change your config to use solr.FacetCubeComponent
You might also try turning on trace level logging for the
SolrResourceLoader and report back the output.
-Grant
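For what it's worth, a registration sketch for option 1, with the component wired into a handler (the handler name is hypothetical, and the jar must still sit somewhere the webapp classloader can see):

```xml
<searchComponent name="facetcube"
                 class="com.zappos.solr.FacetCubeComponent"/>

<requestHandler name="/facetcube" class="solr.SearchHandler">
  <arr name="last-components">
    <str>facetcube</str>
  </arr>
</requestHandler>
```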
Hi,
On May 12, 2009, at 12:33 , Nicolas Pastorino wrote:
Hi,
On May 7, 2009, at 6:03 , Noble Paul നോബിള്
नोब्ळ् wrote:
going forward the java based replication is going to be the preferred
means of replicating the index. It does not support replicating files in the
dataDir , it only supports
Is there a place in a core's solrconfig where one can set the directory/path
where the dataimport.properties file is written to?
On 5/20/09 2:09 PM, Giovanni De Stefano giovanni.destef...@gmail.com
wrote:
Doh,
can you please rephrase?
Giovanni
On Wed, May 20, 2009 at 3:47 PM, Wesley Small
On Wed, May 20, 2009 at 12:07 PM, Otis Gospodnetic
otis_gospodne...@yahoo.com wrote:
Solr plays nice with HTTP caches. Perhaps the simplest solution is to put
Solr behind a caching server such as Varnish, Squid, or even Apache?
In Kent's case, the other query parameters (the other filters
Amit,
That's the same question as the other day, right?
Yes, DisMax doesn't play well with Boolean operators. Check JIRA, it has a
search box, so you may be able to find related patches.
I think the patch I was thinking about is actually for something else -
allowing field names to be
Using an NGramTokenizerFactory as an analyzer for your field would help you
achieve the desired behavior.
Here's a nice article -
http://coderrr.wordpress.com/2008/05/08/substring-queries-with-solr-acts_as_solr/
Cheers
Avlesh
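A hedged schema.xml sketch of that approach (gram sizes are illustrative; index-time n-grams paired with a plain query-time analyzer is the usual arrangement):

```xml
<fieldType name="text_ngram" class="solr.TextField">
  <analyzer type="index">
    <!-- index substrings so blended/partial tokens still match -->
    <tokenizer class="solr.NGramTokenizerFactory" minGramSize="2" maxGramSize="15"/>
    <filter class="solr.LowerCaseFilterFactory"/>
  </analyzer>
  <analyzer type="query">
    <tokenizer class="solr.WhitespaceTokenizerFactory"/>
    <filter class="solr.LowerCaseFilterFactory"/>
  </analyzer>
</fieldType>
```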
On Wed, May 20, 2009 at 11:26 PM, Alex Life av_l...@yahoo.com wrote:
Hi All,
I'm running Solr with the default Jetty setup on Windows. If I start
solr with java -jar start.jar from a command window, then I can
cleanly shut down Solr/Jetty by hitting Control-C. In particular, this
causes the shutdown hook to execute, which appears to be important.
However, I don't
It seems I sent this out a bit too soon. After looking at the source it seems
there are two separate paths for distributed and regular queries; however, the
prepare method for all components is run before the shards parameter is
checked. So I can build the shards portion by using the
Hi Mike, thanks for the quick response:
$ java -version
java version 1.6.0_11
Java(TM) SE Runtime Environment (build 1.6.0_11-b03)
Java HotSpot(TM) 64-Bit Server VM (build 11.0-b16, mixed mode)
I hadn't noticed the 268m trigger for LUCENE-1521 - I'm definitely not
hitting that yet!
The
I have two questions:
1) What is the limit on facet counts? e.g. test(10,0). Is this valid?
2) What is the limit on the number of facets? How many facets can a query get?
--Sachin
How often do you update the indexes? We update once per day, and our
HTTP cache has a hit rate of 75% once it gets warmed up.
wunder
On 5/20/09 9:07 AM, Otis Gospodnetic otis_gospodne...@yahoo.com wrote:
Kent,
Solr plays nice with HTTP caches. Perhaps the simplest solution is to put
Solr
Hi,
Yeah, you are right. Can you please tell me the URL of JIRA?
Thanks,
Amit
Otis Gospodnetic wrote:
Amit,
That's the same question as the other day, right?
Yes, DisMax doesn't play well with Boolean operators. Check JIRA, it has
a search box, so you may be able to find related
Hi All,
Could you please help?
I have following document
Super PowerShot SD
I want this document to be found by phrase queries below (both):
super powershot sd
super power shot sd
Is this possible without sloppy phrase query? (at least theoretical)
I don't see any way of setting term/positions
Hi,
I am wondering if it is possible to basically add the distributed portion of a
search query inside of a searchComponent.
I am hoping to build my own component and add it as a first-component to the
StandardRequestHandler. Then hopefully I will be able to use this component to
build the
Doh,
can you please rephrase?
Giovanni
On Wed, May 20, 2009 at 3:47 PM, Wesley Small wesley.sm...@mtvstaff.comwrote:
In Solr 1.3, is there a setting that allows one to modify where the
dataimport.properties file resides?
In a production environment, the solrconfig directory needs to
Thank you all for your replies.
I guess I will stick with another approach: all my request handlers inherit
from a custom base handler which is CoreAware.
Its inform(core) method updates a static map held by another object,
avoiding duplicates.
Thanks again!
Giovanni
On Wed, May 20, 2009 at
Some thoughts:
#1) This is sort of already implemented in some form... see this
section of solrconfig.xml and try uncommenting it:
<!-- An optimization that attempts to use a filter to satisfy a search.
If the requested sort does not include score, then the filterCache
will
I tried to change the package name to com.zappos.solr.
When I declared the search component with:
<searchComponent name="facetcube"
class="com.zappos.solr.FacetCubeComponent"/>
I get:
SEVERE: org.apache.solr.common.SolrException: Unknown Search Component:
facetcube
at
Kent,
Solr plays nice with HTTP caches. Perhaps the simplest solution is to put Solr
behind a caching server such as Varnish, Squid, or even Apache?
Otis
--
Sematext -- http://sematext.com/ -- Lucene - Solr - Nutch
- Original Message
From: Kent Fitch kent.fi...@gmail.com
To:
Wouldn't you want to run it as a windows service and use net start/
net stop? If you download and install Jetty it comes with the
appropriate scripts to be installed as a service.
Eric
On May 20, 2009, at 12:39 PM, Chris Harris wrote:
I'm running Solr with the default Jetty setup on
I created ticket SOLR-1178 for the small tweak.
https://issues.apache.org/jira/browse/SOLR-1178
Eric
On May 5, 2009, at 12:26 AM, Noble Paul നോബിള്
नोब्ळ् wrote:
hi Eric,
there should be a getter for CoreContainer in EmbeddedSolrServer.
Open an issue
--Noble
On Tue, May 5, 2009 at
On Wed, May 20, 2009 at 12:43 PM, Yonik Seeley
yo...@lucidimagination.com wrote:
<useFilterForSortedQuery>true</useFilterForSortedQuery>
Of course the examples you gave used the default sort (by score) so
this wouldn't help if you do actually need to sort by score.
-Yonik
An HTTP cache will still work. We make three or four back end queries
for each search page. We use separate request handlers with filter query
specs instead of putting the filter query in the URL, but those two
approaches are equivalent for the HTTP cache.
We get similar cache hit rates on the
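A sketch of the "separate request handlers with filter query specs" pattern described above (handler name and fq value are made up):

```xml
<requestHandler name="/books" class="solr.SearchHandler">
  <lst name="appends">
    <!-- appended to every request on this handler, so clients
         never pass the filter in the URL themselves -->
    <str name="fq">category:books</str>
  </lst>
</requestHandler>
```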
1. The limit parameter takes a signed integer, so the max value is
2,147,483,647.
2. I don't think there is a defined limit, which would mean you are
only limited by what your system can handle.
Thanks,
Matt Weber
eSr Technologies
http://www.esr-technologies.com
On May 20, 2009, at
Hi Shalin,
Let me investigate. I think the challenge will be in storing/managing these
statistics. I'll get back to the list when I have thought of something.
Rgrds,
Patrick
-Original Message-
From: Shalin Shekhar Mangar [mailto:shalinman...@gmail.com]
Sent: woensdag 20 mei 2009 10:33
So I am trying to filter down what I am indexing, and the basic XPath
queries don't work. For example, working with tutorial.pdf this
indexes all the <div>s:
curl "http://localhost:8983/solr/update/extract?ext.idx.attr=true&ext.def.fl=text&ext.map.div=foo_t&ext.capture=div"
Alex,
You might want to paste in your tokenizer/token filter config. You may also
want to paste in how your analyzer configuration breaks those phrases and what
the position of each term is. This will make it easier for others to
understand what you have, what doesn't work, and what your
On Wed, May 20, 2009 at 11:44 PM, Wesley Small wesley.sm...@mtvstaff.comwrote:
Is a place in a core's solrconfig, where one can set the directory/path
where the dataimport.properties file is written to?
It is not configurable right now. Can you please open a jira issue for this?
--
Regards,
http://issues.apache.org/jira/browse/SOLR-405 ?
It's quite old and I'm not sure it's exactly what you want, but I think it might be
the JIRA ticket that Otis mentioned.
really needed. I'm also not really sure why you need a dismax query
at all. You're not querying
what else is there in the solr.home/lib other than this component?
On Wed, May 20, 2009 at 9:08 PM, Jeff Newburn jnewb...@zappos.com wrote:
I tried to change the package name to com.zappos.solr.
When I declared the search component with:
<searchComponent name="facetcube"