You are right, they are not stored...
But is it possible to see them, as the schema browser in the admin application
does?
Regards Michael
--
Michael Szalay
Senior Software Engineer
basis06 AG, Birkenweg 61, CH-3013 Bern - Fon +41 31 311 32 22
http://www.basis06.ch - source of smart business
Hi Erick,
This is one of the errors I get (on the 4GB memory machine), and after
a while Tomcat crashes:
SEVERE: SolrIndexWriter was not closed prior to finalize(),
indicates a bug -- POSSIBLE RESOURCE LEAK!!!
And this is part of my solrconfig.xml (I'm indexing 200k documents per run):
Hi Erick
I downloaded the latest build from (
https://builds.apache.org/job/Solr-3.x/lastSuccessfulBuild/artifact/artifacts/
)
But I don't find the required class CollapseComponent in the source
(org.apache.solr.handler.component.CollapseComponent).
The SolrJ in 3.4 does seem to have something
Eh eh, you're right! This is what happens when you try to learn too many
things at the same time!
Btw, I found this
http://webdevelopersjournal.com/columns/connection_pool.html which is
perfect; I can use the provided code as my singleton instance, now I just
have to figure out how I can detect
Sorry, it's been a long time since my last post...
Now I found out that the only good solution is to do a core reload:
http://wiki.apache.org/solr/CoreAdmin#RELOAD
It's been working very well for our needs.
Hi,
why don't you index the file metadata, e.g. the file name? Once the
file's metadata is indexed, you can query by file name.
BR,
Oleg
On Wed, Aug 31, 2011 at 12:02 PM,
schaubm...@infodienst-ausschreibungen.de wrote:
Hi,
I'm looking for a solution to find out
Hi Oleg,
ah, maybe there is a misunderstanding.
With document I meant a record in the index not a file.
The records are indexed via a DB.
cheers
Charlie
--
View this message in context:
http://lucene.472066.n3.nabble.com/Find-out-why-a-document-wasn-t-found-tp3297821p3297875.html
Sent from the Solr - User mailing list archive at Nabble.com.
Hi.
Suppose I have a field price with different values, and I want to
get ranges for this field depending on doc count. For example, I want
to get 5 ranges for 100 docs with 20 docs in each range, 6 ranges for
200 docs with ~34 docs in each range, etc.
Is this possible with Solr?
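As far as I know, range faceting in Solr 3.x only supports a fixed facet.range.gap, so equal-count buckets have to be computed client-side. A minimal sketch of that computation, assuming the price values have already been retrieved from the index:

```python
def equal_count_ranges(prices, docs_per_range):
    """Split price values into (min, max) ranges holding ~docs_per_range docs each."""
    prices = sorted(prices)
    ranges = []
    for i in range(0, len(prices), docs_per_range):
        chunk = prices[i:i + docs_per_range]
        ranges.append((chunk[0], chunk[-1]))  # (min, max) of this bucket
    return ranges

# e.g. 100 docs with 20 per range yields 5 (min, max) ranges
print(equal_count_ranges(list(range(100)), 20))
```

The resulting (min, max) pairs could then be turned into ordinary fq range queries.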
Thanks Erick. If I figure out something I will let you know as well. Nobody
replied except you; I thought there might be more people involved here.
Thanks
On Wed, Aug 31, 2011 at 3:47 AM, Erick Erickson erickerick...@gmail.comwrote:
OK, I'll have to defer because this makes no sense.
4+
Sure, just use the Luke handler. See LukeRequest and LukeResponse
in the API documentation.
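For reference, the Luke handler can also be hit directly over HTTP; a sketch of such a request, assuming the default example port and handler path:

```
http://localhost:8983/solr/admin/luke?show=schema&numTerms=0
```

LukeRequest/LukeResponse in SolrJ wrap this same endpoint.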
Best
Erick
On Wed, Aug 31, 2011 at 2:23 AM, Michael Szalay
michael.sza...@basis06.ch wrote:
You are right, they are not stored...
But is it possible to see them, as the schema browser in the admin
Actually, I haven't used the new stuff yet, so I'm not entirely sure either,
but that sure would be the place to start. There's some historical
ambiguity: Grouping started out as Field Collapsing, and the two terms are
used interchangeably.
If you go to the bug I linked to and open up the patch file,
Satish
You don't say which platform you are on but have you tried links (with ln on
linux/unix) ?
François
On Aug 31, 2011, at 12:25 AM, Satish Talim wrote:
I have 1000s of cores, and to reduce the cost of loading/unloading
schema.xml, I have my solr.xml as mentioned here -
The CollapseComponent was never committed. This class exists in the
SOLR-236 patches. You don't need to change the configuration in order
to use grouping.
The blog you mentioned is based on the SOLR-236 patches. The current
grouping in Solr 3.3 has superseded these patches.
From Solr 3.4 (not yet
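The built-in grouping in 3.3+ is enabled per request rather than through a component in the config; a sketch of the request parameters (the field name manu_exact is just an illustration):

```
q=*:*&group=true&group.field=manu_exact&group.limit=3
```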
Well, that's one use case; there are others where you need to highlight only
what is matching.
For now, I solved the problem by writing an additional procedure to correct
the highlighting. Not nice, but it works!
On Sat, Aug 6, 2011 at 11:10 AM, Kissue Kissue kissue...@gmail.com wrote:
I think
Frankie, have you fixed this issue? I'm interested in your solution.
--
View this message in context:
http://lucene.472066.n3.nabble.com/Find-results-with-or-without-whitespace-tp3117144p3298298.html
Sent from the Solr - User mailing list archive at Nabble.com.
Bump!
Does no one have any clue about this question? Is it more of a dev-related
question?
2011/8/26 Gérard Dupont ger.dup...@gmail.com
Hi all,
Playing with multicore and dynamic creation of new cores, I found out that
there is one mandatory parameter, instanceDir, which is required to find
out the
Not sure if this has progressed further, but I'm getting a test failure
for 3.3 also.
Trunk builds and tests fine, but 3.3 fails the test below.
(Note: I have a new box, so it could be a silly setup issue I've missed, but
I think everything is in place: latest version of Java 1.6, latest
version of Ant.)
Hello,
Does anyone know how you can access environment properties from a custom
Transformer I defined?
Also, I am wondering where solrcore.properties should be located in a
multicore setup, and how I can access the properties defined inside it from
various Solr plugins?
Many thanks for your help,
Boubaker
There is no scheduling built into Solr. But many search systems, including the
one deployed on our (Lucid's) website, are powered by cron jobs kicking off
indexers of various varieties all the time.
Look into your operating system's scheduling capabilities and leverage those, is
my advice. Cron is
You can use the Windows Task Scheduler or a Tomcat listener; the related
solution is posted on the Solr wiki:
http://wiki.apache.org/solr/DataImportHandler#HTTPPostScheduler
You can have a look at this page:
http://wiki.apache.org/solr/DataImportHandler#HTTPPostScheduler
This scheduler can post not only commands like delta-import but also
commands like full-import.
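If you go the cron route instead, a minimal sketch of a crontab entry kicking off a DIH delta-import every 15 minutes (host, port, and handler path are assumptions based on the default example setup):

```
# run a delta-import every 15 minutes via the DataImportHandler HTTP API
*/15 * * * * curl -s "http://localhost:8983/solr/dataimport?command=delta-import&clean=false" > /dev/null
```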
I'm printing a big bold cheatsheet about it and stickin' it everywhere :-)
I wish I could change this thread's subject to alexei is not working
properly :-/
2011/8/30 Erick Erickson erickerick...@gmail.com
Yep, that one takes a while to figure out, then
I wind up re-figuring it out every time
I am using a 64-bit JVM, and we are going out of memory in the extraction phase,
where Tika assigns the extracted content to a SolrInputDocument in the pipeline,
which gets loaded in memory.
We are using the released 3.1 version of Solr.
Thanks,
Tirthankar
-Original Message-
From: simon
Also noticed that the waitSearcher parameter value is not honored inside commit.
It always defaults to true, which makes indexing slow.
What we are trying to do is use an invalid query (which won't return any
results) as a warming query. This way the commit returns faster. Are we
Try looking at your warming queries. Create a warming query that will not
return any results. See if it helps commits return faster.
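Warming queries are configured in solrconfig.xml as a newSearcher listener; a minimal sketch, with the query value being just a placeholder:

```
<listener event="newSearcher" class="solr.QuerySenderListener">
  <arr name="queries">
    <!-- a query that matches nothing keeps the warming step fast -->
    <lst><str name="q">id:warming_query_that_matches_nothing</str></lst>
  </arr>
</listener>
```

Note the trade-off: a query that matches nothing warms very little of the caches, which is why it makes commits return faster.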
Thx
-Original Message-
From: Bill Au [mailto:bill.w...@gmail.com]
Sent: Friday, May 27, 2011 3:47 PM
To: solr-user@lucene.apache.org
Subject: Re: very
Hi
I want to upgrade my Solr version from 1.4 to 3.1. Please suggest the steps
and what challenges might occur.
I have started using Solr from 1.4; this is my first experience upgrading
the version.
thanks
Pawan
Everything you need to know about upgrading is listed in CHANGES.txt
On Wednesday 31 August 2011 18:14:11 Pawan Darira wrote:
Hi
I want to upgrade my Solr version from 1.4 to 3.1. Please suggest the steps
and what challenges might occur.
I have started using Solr from 1.4; this is my 1st
SEVERE: org.apache.solr.common.SolrException: Error Instantiating
UpdateRequestProcessorFactory, ToTheGoCustom is not a
org.apache.solr.update.processor.UpdateRequestProcessorFactory
I'm getting this error, but I don't know how to fix it.
This is my solrconfig.xml:
updateRequestProcessorChain
(11/09/01 1:22), samuele.mattiuzzo wrote:
SEVERE: org.apache.solr.common.SolrException: Error Instantiating
UpdateRequestProcessorFactory, ToTheGoCustom is not a
org.apache.solr.update.processor.UpdateRequestProcessorFactory
I'm getting this error, but I don't know how to fix it.
This is
You also need to create a class that
extends org.apache.solr.update.processor.UpdateRequestProcessorFactory. This
is the one that you reference in solrconfig.xml, and it is the one that will
instantiate your UpdateRequestProcessor.
see
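For illustration, a minimal sketch of how the chain is wired in solrconfig.xml. The package and class names here are placeholders; the point is that the class named in the processor element must be the factory, not the processor itself:

```
<updateRequestProcessorChain name="mychain">
  <!-- must extend org.apache.solr.update.processor.UpdateRequestProcessorFactory -->
  <processor class="com.example.ToTheGoCustomFactory"/>
  <processor class="solr.RunUpdateProcessorFactory"/>
</updateRequestProcessorChain>
```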
Believe I found it; I wasn't populating the docset and doclist. Again,
thanks for all of the support.
On Tue, Aug 30, 2011 at 11:00 PM, Jamie Johnson jej2...@gmail.com wrote:
Found score, so this works for regular queries but now I'm getting an
exception when faceting.
SEVERE: Exception during
Hello, I have a very specific question about the Solr response passed to
remote JsonStore.
*Solr response passed to remote JsonStore*
var myJsonStore = new Ext.data.JsonStore({
// store configs
url: myurl,
baseParams:
I've set up a master/slave configuration and it's working great! I know
this is the better setup, but if I had just one index due to requirements,
I'd like to know more about the performance hit of the commit. Let's just
assume I have a decent-sized index of a few gigs of normal-sized documents with
Well, if it is for creating a *new* core, Solr doesn't know it is pointing to
your shared conf directory until after you create it, does it?
JRJ
-Original Message-
From: Gérard Dupont [mailto:ger.dup...@gmail.com]
Sent: Wednesday, August 31, 2011 8:17 AM
To: solr-user@lucene.apache.org
So if I understand you, you are using Tika/SolrJ together in a Solr client
process which talks to your Solr server? What is the heap size? Can you
give us a stack trace from the OOM exception?
-Simon
On Wed, Aug 31, 2011 at 10:58 AM, Tirthankar Chatterjee
tchatter...@commvault.com wrote:
: Why doesn't AND text:foo fill this requirement?
or fq=text:foo (if you don't want it to affect scoring, and it sounds
like you don't)
But since you asked: if you want to use functions in fq you have to tell
Solr to parse it as a function. There are a variety of options...
Yes, Ranged Facets
http://wiki.apache.org/solr/SimpleFacetParameters#Facet_by_Range
2011/8/31 Denis Kuzmenok forward...@ukr.net
Hi.
Suppose I have a field price with different values, and I want to
get ranges for this field depending on doc count; for example, I want
to get 5 ranges
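For the fixed-gap case, a sketch of the range-facet request parameters (the field name and bounds are just examples):

```
q=*:*&facet=true&facet.range=price&facet.range.start=0&facet.range.end=1000&facet.range.gap=100
```

Note this gives fixed-width ranges; it does not by itself produce ranges with equal document counts.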
No, this forum is part of the Apache Solr project. Lucid does maintain
a searchable index of this list, though...
Best
Erick
On Tue, Aug 30, 2011 at 10:40 PM, solrnovice manisha...@yahoo.com wrote:
hi Lance, thanks for the link. I went to their site, the Lucid Imagination
forum; when I searched on
This is the one I've used,
http://wiki.apache.org/solr/SpatialSearch
Best
Erick
On Tue, Aug 30, 2011 at 9:09 PM, solrnovice manisha...@yahoo.com wrote:
hi Erick, today I got the distance working. Since the Solr version under
Lucid Imagination is not returning geodist(), I downloaded Solr 4.0
The first question I'd ask is why are there duplicates
in your index in the first place? If you're denormalizing,
that would account for it. Mostly, I'm just asking to be
sure that you expect duplicate product IDs. If you make
your productid a uniqueKey, there'll only be one of each
You'll
For a specific document, try explainOther, see:
http://wiki.apache.org/solr/SolrRelevancyFAQ#Why_doesn.27t_document_id:juggernaut_appear_in_the_top_10_results_for_my_query
Don't quite know whether this will work for your users, you may have to
massage the output to make something more concise.
Thanks! I appreciate your input. You are right, yesterday I actually
denormalized my index using multivalued fields. Now I am using Solr the way
it was designed and I am happy, everything seems to work great.
On Wed, Aug 31, 2011 at 6:06 PM, Erick Erickson erickerick...@gmail.comwrote:
The
The first question I'd ask is why are there duplicates
in your index in the first place? If you're denormalizing,
that would account for it. Mostly, I'm just asking to be
sure that you expect duplicate product IDs. If you make
your productid a uniqueKey, there'll only be one of each
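A sketch of the schema.xml declarations involved (the field type and attributes here are assumptions):

```
<field name="productid" type="string" indexed="true" stored="true" required="true"/>
<uniqueKey>productid</uniqueKey>
```

With a uniqueKey defined, adding a document with an existing productid replaces the earlier one instead of creating a duplicate.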
hi
I have some issues with search result relevancy.
The default query operator is OR.
I search for iphone 4. I'm not sure how I would get iphone 4 results to show
first.
I tried
?q=iphone+4&start=0&wt=json&indent=on&fl=displayName,score&qt=dismax&fq=productType:Device&debug=true&pf=displayName&ps=3
Debug output of a few would help. There can be other factors that produce more
weight than pf/ps. Most of the time it's tf and norms that play a big part.
hi
I have some issues with search result relevancy.
The default query operator is OR.
I search for iphone 4. I'm not sure how I would get
Would it work to just (relative) path the schema
file for your cores with the schema parameter?
Best
Erick
2011/8/31 François Schiettecatte fschietteca...@gmail.com:
Satish
You don't say which platform you are on but have you tried links (with ln on
linux/unix) ?
François
On Aug 31,
I am experimenting with Solr on Windows, for now.
Satish
2011/8/31 François Schiettecatte fschietteca...@gmail.com
Satish
You don't say which platform you are on but have you tried links (with ln
on linux/unix) ?
François
On Aug 31, 2011, at 12:25 AM, Satish Talim wrote:
I have 1000's of
You might want to check your analyzers in schema.xml. It appears numbers are
filtered out, so basically you are searching for iphone instead of
iphone 4.
--
View this message in context:
http://lucene.472066.n3.nabble.com/word-proximity-and-queryoperator-OR-tp3299729p3299919.html
Sent from the Solr - User mailing list archive at Nabble.com.
I want to do a geodist() calculation on 2 different sfields. How would
I do that?
http://localhost:8983/solr/select?q={!func}add(geodist(),geodist())&fq={!geofilt}&pt=39.86347,-105.04888&d=100&sfield=store_lat_lon
But I really want geodist() for one pt, and another geodist() for another pt.
Can I
hi
I don't understand why, though.
Here is my displayName field, of type text:
fieldType name=text class=solr.TextField positionIncrementGap=100
analyzer type=index
tokenizer class=solr.WhitespaceTokenizerFactory/
filter class=solr.SynonymFilterFactory
ok, thank you Erick, i will check this forum as well.
thanks
SN
--
View this message in context:
http://lucene.472066.n3.nabble.com/Solr-Geodist-tp3287005p3300236.html
Sent from the Solr - User mailing list archive at Nabble.com.
Solr 3.3 is what I use, and I have grouping of results configured by default.
I have some 30-40 sample documents in my index. I use the Solritas UI. When I
search, I don't get the results across pages; even when I specify an empty
query, the results that are returned are just for the first page.
What
But I don't know what the values of the price field would be in that query. It
can be 100-1000, or 10-100, and I want to get ranges in every query,
just splitting the price field by doc count.
Yes, Ranged Facets
http://wiki.apache.org/solr/SimpleFacetParameters#Facet_by_Range
2011/8/31 Denis Kuzmenok