Thanks Gora, I tried that but didn't help.
Regards.
--
View this message in context:
http://lucene.472066.n3.nabble.com/DIH-incorrect-datasource-being-picked-up-by-XPathEntityProcessor-tp3994802p3995211.html
Sent from the Solr - User mailing list archive at Nabble.com.
Look at the index with the Schema Browser in the Solr UI. This pulls
the terms for each field.
On Sun, Jul 15, 2012 at 8:38 PM, Giovanni Gherdovich
g.gherdov...@gmail.com wrote:
Hi all,
are stopwords from the stopwords.txt config file
supposed to be indexed?
I would say no, but this is the
OK, I've added the debug parameter; here is the query from the debug response after
executing it:
facet=true,sort=publishingdate
desc,debugQuery=true,facet.mincount=1,q=service:1 AND
David,
Thanks for such a detailed response. The data volume I mentioned is the
total set of records we have - but we would never ever need to search the
entire base in one query; we would divide the data by region or zip code.
So, in that case I assume that for a single region, we would not have
OK: that is helpful, thanks!
On 13 July 2012 15:44, Mark Miller markrmil...@gmail.com wrote:
It really comes down to you.
Many people run a trunk version of Solr in production. Some never would.
Generally, bugs are fixed quickly, and trunk is pretty stable. The main
issue is index format
Yes,
sorry, just a typo.
I meant
q=*:*&fq=&start=0&rows=10&qt=&wt=&explainOther=&fl=product:(if(show_product:true,
product, )
thanks
On Sat, Jul 14, 2012 at 11:27 PM, Erick Erickson [via Lucene]
ml-node+s472066n3995045...@n3.nabble.com wrote:
I think in 4.0 you can, but not 3.x as I remember. Your
On Mon, Jul 16, 2012 at 4:43 AM, maurizio1976
maurizio.picc...@gmail.com wrote:
Yes,
sorry, just a typo.
I meant
q=*:*&fq=&start=0&rows=10&qt=&wt=&explainOther=&fl=product:(if(show_product:true,
product, )
thanks
Functions normally derive their values from the fieldCache... there
isn't currently
So you want to re-use the same SQL statement in many entities?
Yes
is it necessary to deploy the complete Solr and Lucene stack for this?
--
View this message in context:
http://lucene.472066.n3.nabble.com/DIH-include-Fieldset-in-query-tp3994798p3995228.html
Sent from the Solr - User mailing list archive at
Hi Giovanni,
you have entered the stopwords into the stopwords.txt file, right? But in the
definition of the field type you are referencing stopwords_FR.txt.
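A minimal sketch of what to check: the analyzer must reference the file you actually edited (the field type name and tokenizer here are assumptions, not from the thread):

```xml
<fieldType name="text_fr" class="solr.TextField">
  <analyzer>
    <tokenizer class="solr.StandardTokenizerFactory"/>
    <!-- must name the file you edited: stopwords.txt, not stopwords_FR.txt -->
    <filter class="solr.StopFilterFactory" ignoreCase="true" words="stopwords.txt"/>
  </analyzer>
</fieldType>
```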
best regards,
Michael
On Mon, 16 Jul 2012 05:38:04 +0200, Giovanni Gherdovich
g.gherdov...@gmail.com wrote:
Hi all,
are stopwords from
Hi all, thank you for your replies.
Lance:
Look at the index with the Schema Browser in the Solr UI. This pulls
the terms for each field.
I did it, and it was the first alarm I got.
After indexing, I went to the schema browser hoping
not to see any stopwords among the top terms, but...
they
Hi,
Is there any way to make grouping searches more efficient?
My queries look like:
/select?q=query&group=true&group.field=id&group.facet=true&group.ngroups=true&facet.field=category1&facet.missing=false&facet.mincount=1
For an index with 3 million documents, a query for all docs with group=true takes
almost
Okay... found the problem after some more debugging. I was using a wrong
datasource tag in the data-config.xml; maybe Solr should validate the XML
against a schema so these kinds of issues are caught up front.
wrong: <datasource name="fieldSource" type="FieldReaderDataSource" />
correct:
Yes, this feature will solve the below problem very neatly.
All,
Is there any approach to achieve this for now?
--Rajani
On Sun, Jul 15, 2012 at 6:02 PM, Jack Krupansky j...@basetechnology.comwrote:
The answer appears to be No, but it's good to hear people express an
interest in proposed
You'll have to query the index for the fields and sift out the _s ones
and cache them or something.
On Mon, 2012-07-16 at 16:52 +0530, Rajani Maski wrote:
Yes, this feature will solve the below problem very neatly.
All,
Is there any approach to achieve this for now?
--Rajani
On
Andrew:
I'm not entirely sure that's your problem, but it's the first thing I'd try.
As for your config files, see the section Replicating solrconfig.xml
here: http://wiki.apache.org/solr/SolrReplication. That at least
allows you to centralize separate solrconfigs for master and
slave,
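For reference, a sketch of the confFiles setup described on that wiki page (the handler name is standard, but the exact filenames here are assumptions); it lets the master ship a slave-specific solrconfig that is renamed on the slave:

```xml
<requestHandler name="/replication" class="solr.ReplicationHandler">
  <lst name="master">
    <str name="replicateAfter">commit</str>
    <!-- solrconfig_slave.xml is pushed to slaves and renamed to solrconfig.xml -->
    <str name="confFiles">solrconfig_slave.xml:solrconfig.xml,schema.xml</str>
  </lst>
</requestHandler>
```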
Ahhh, you need to look down another few lines. When you specify fq, there
should be a section of the debug output like
<arr name="filter_queries">
  ...
</arr>
where the array is the parsed form of the filter queries. I was thinking about
comparing that with the parsed form of the q parameter in
Hi Agnieszka ,
if you don't need the number of groups, you can try leaving out the
group.ngroups=true param.
In this case Solr apparently skips calculating all groups and delivers
results much faster.
At least for our application the difference in performance
with/without group.ngroups=true is
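Concretely, the suggestion is just to drop the one parameter (URLs sketched after the grouping query earlier in the thread; no timings implied):

```
with group count (slower):
/select?q=query&group=true&group.field=id&group.ngroups=true

without group count (faster, but no total number of groups in the response):
/select?q=query&group=true&group.field=id
```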
In this URL - https://issues.apache.org/jira/browse/SOLR-247
there are *patches*, and one patch named *SOLR-247-FacetAllFields*.
Will that help me to fix this problem?
If yes, how do I add this to Solr as a plugin?
Thanks & Regards,
Rajani
On Mon, Jul 16, 2012 at 5:04 PM, Darren Govoni
Hi Pavel,
I tried with group.ngroups=false but didn't notice a big improvement.
The times were still about 4000 ms. It doesn't solve my problem.
Maybe this is because of my index type. I have millions of documents but
only about 20 000 groups.
Cheers
Agnieszka
2012/7/16 Pavel Goncharik
Michael,
Thanks for the response. Below is the stack trace.
Note: our environment is 64-bit, the initial pool size is set to 4GB, and the
max pool size is 12GB, so it doesn't make sense why it tries to allocate
24GB (even though that much is available, as the total RAM is 64GB).
This issue doesn't come with
Hello, Bruno,
No, 4 simultaneous requests should not be a problem.
Have you checked the Tomcat logs or logged the data in the query
response object to see if there are any clues to what the problem
might be?
Michael Della Bitta
Appinions, Inc.
samabhiK wrote
David,
Thanks for such a detailed response. The data volume I mentioned is the
total set of records we have - but we would never ever need to search the
entire base in one query; we would divide the data by region or zip code.
So, in that case I assume that for a single
On Jul 15, 2012, at 2:45 PM, Nick Koton wrote:
I converted my program to use
the SolrServer::add(CollectionSolrInputDocument docs) method with 100
documents in each add batch. Unfortunately, the out of memory errors still
occur without client side commits.
This won't change much
Thinking more about this, the way to get a Lucene based system to scale to
the maximum extent possible for geospatial queries would be to get a
geospatial query to be satisfied by just one (usually) Lucene index segment.
It would take quite a bit of customization and work to make this happen. I
I have a server with 24GB RAM. I have 4 shards on it, each of them with 4GB
RAM for Java:
JAVA_OPTIONS=-server -Xms4096M -Xmx4096M
The size is about 15GB for one shard (I use an SSD disk for index data).
Agnieszka
2012/7/16 alx...@aim.com
What is the RAM of your server and the size of the data
Thanks Erick,
I will look harder at our current configuration and how we're handling
config replication, but I just realized that a backup script was doing a
commit and an optimize on the slave prior to taking the backup. This
happens daily, after updates and replication from the master. This is
This is strange. We have a data folder size of 24GB and 2GB RAM for Java. We query
with grouping, ngroups, and highlighting, do not query all fields, and query
time is mostly less than 1 sec; it rarely goes up to 2 sec. We use Solr 3.6 and
turned off all kinds of caching.
Maybe your problem is with
Maybe try EdgeNgramFilterFactory
http://wiki.apache.org/solr/AnalyzersTokenizersTokenFilters/#solr.EdgeNGramFilterFactory
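A sketch of an index-time-only analyzer using that filter (the field type name, tokenizer, and gram sizes are assumptions to illustrate the idea):

```xml
<fieldType name="text_autocomplete" class="solr.TextField">
  <analyzer type="index">
    <tokenizer class="solr.WhitespaceTokenizerFactory"/>
    <filter class="solr.LowerCaseFilterFactory"/>
    <!-- indexes prefixes, e.g. "solr" -> "so", "sol", "solr" -->
    <filter class="solr.EdgeNGramFilterFactory" minGramSize="2" maxGramSize="15" side="front"/>
  </analyzer>
  <analyzer type="query">
    <tokenizer class="solr.WhitespaceTokenizerFactory"/>
    <filter class="solr.LowerCaseFilterFactory"/>
  </analyzer>
</fieldType>
```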
On Mon, Jul 16, 2012 at 6:57 AM, santamaria2 aravinda@contify.comwrote:
I'm about to implement an autocomplete mechanism for my search box. I've
read
about some of
Thank you,
I am already on 4alpha. The patch feels a little too unstable for my
needs/familiarity with the code.
What about something around multiple cores? Could I have full-text
fields stored in separate cores and somehow (again, with minimum
hand-coding) do a search against all those cores and get
Hello.
We are running Solr 3.5 multicore in master-slave mode.
Our delta-import looks like:
/solr/core01/dataimport?command=delta-import&optimize=false
The size of the index is 1.18GB.
When delta-import is going on, on the slave admin UI
8983/solr/core01/admin/replication/index.jsp
I can
The terms component will be faster,
like below:
http://host:port/solr/terms?terms.fl=content&terms.prefix=sol
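A couple of extra parameters can tighten the suggestions (the values here are assumptions, not from the thread):

```
http://host:port/solr/terms?terms.fl=content&terms.prefix=sol&terms.limit=10&terms.sort=count
```

terms.limit caps the number of suggestions and terms.sort=count returns the most frequent terms first.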
--
View this message in context:
http://lucene.472066.n3.nabble.com/Wildcard-query-vs-facet-prefix-for-autocomplete-tp3995199p3995378.html
Sent from the Solr - User mailing list archive at
Hello Michael,
I will check the log, but today I am thinking of another thing: maybe it's my
program that loses some requests.
It's the first time the download has been so fast.
With Jetty, it's a little bit slower, so maybe for this reason my
program works fine.
Do you think I can use Jetty
Hello Bruno,
Jetty is a legitimate choice. I do, however, worry that you might be
masking an underlying problem by making that choice, without a
guarantee that it won't someday hurt you even if you use Jetty.
A question: are you using a client to connect to Solr and issue your
queries? Something
Erick Erickson wrote
Ahhh, you need to look down another few lines. When you specify fq, there
should be a section of the debug output like
<arr name="filter_queries">
  ...
</arr>
where the array is the parsed form of the filter queries. I was thinking
about
comparing that with the
That suggests you're running out of threads
Michael,
Thanks for this useful observation. What I found just prior to the problem
situation was literally thousands of threads in the server JVM. I have
pasted a few samples below obtained from the admin GUI. I spent some time
today using this
Any thoughts on this? Is the default MMap?
Sent from my mobile device
720-256-8076
On Feb 14, 2012, at 7:16 AM, Bill Bell billnb...@gmail.com wrote:
Does someone have an example of using unmap in 3.5 and chunksize?
I am using Solr 3.5.
I noticed in solrconfig.xml:
directoryFactory
Hi,
Our index is divided into two shards, each of which has 120M docs and a total
size of 75G per core.
The server is a pretty good one; the JVM is given 70G of memory and about the same
is left for the OS (SLES 11).
We use all dynamic fields except the unique id and are using long queries,
but almost all
We all know that MMapDirectory is fastest. However, we cannot always
use it, since you might run out of memory on large indexes, right?
Here is how I got SimpleFSDirectoryFactory to work. Just set
-Dsolr.directoryFactory=solr.SimpleFSDirectoryFactory.
Your solrconfig.xml:
directoryFactory
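Presumably the (truncated) element above looks something like this sketch, which falls back to StandardDirectoryFactory when the system property is unset:

```xml
<directoryFactory name="DirectoryFactory"
                  class="${solr.directoryFactory:solr.StandardDirectoryFactory}"/>
```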
Yep.
-Dsolr.directoryFactory=solr.SimpleFSDirectoryFactory
or
-Dsolr.directoryFactory=solr.MMapDirectoryFactory
works great.
On Mon, Jul 16, 2012 at 7:55 PM, Michael Della Bitta
michael.della.bi...@appinions.com wrote:
Hi Bill,
Standard picks one for you. Otherwise, you can hardcode the
Thanks Brian. Excellent suggestion.
I haven't used VisualVM before but I am going to use it to see where CPU is
going. I saw that CPU is overly used. I haven't seen so much CPU use in
testing.
Although I think GC is not a problem, splitting the jvm per shard would be
a good idea.
On Mon, Jul
Another thing you may wish to ponder is this blog entry from Mike
McCandless:
http://blog.mikemccandless.com/2011/04/just-say-no-to-swapping.html
In it, he discusses the poor interaction between OS swapping, and
long-neglected allocations in a JVM. You're on Linux, which has decent
control over