I've added you Susheel, go ahead :)
-Stefan
On Tuesday, March 4, 2014 at 5:09 AM, Susheel Kumar wrote:
My user name for the Solr wiki is SusheelKumar.
-Original Message-
From: Susheel Kumar [mailto:susheel.ku...@thedigitalgroup.net]
Sent: Monday, March 03, 2014 9:36 PM
To:
Hello list,
in the last couple of weeks one of my machines has been experiencing
OutOfMemoryError: Java heap space errors. Within a couple of hours of
starting the Solr instance, queries with execution times of under 100ms
need more than 10s to execute, and many Java heap space errors appear in
the
Hi Ahmet,
I forgot to include what I did for one customer :
1) Using StatsComponent, I get the min and max values of the field (year).
2) Calculate smart gap/range values according to the minimum and maximum.
3) Re-issue the same query (for the second time), this time including a set of
facet.query parameters.
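The three steps above can be sketched in a small helper (a sketch under assumptions: the stats min/max have already been fetched, the field is the numeric `year` field, and the bucket count is a tunable):

```python
def build_year_facet_queries(min_year, max_year, num_buckets=5):
    """Derive a 'smart' gap from the stats min/max (step 2) and build
    one facet.query range per bucket (step 3)."""
    span = max_year - min_year + 1
    gap = max(1, -(-span // num_buckets))  # ceiling division
    queries = []
    start = min_year
    while start <= max_year:
        end = min(start + gap - 1, max_year)
        queries.append("year:[%d TO %d]" % (start, end))
        start = end + 1
    return queries
```

These strings would then be re-sent along with the original query as repeated facet.query parameters.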
It's
Angel Tchorbadjiiski [angel.tchorbadjii...@antibodies-online.com] wrote:
[Single shard / 2 cores Solr 4.6.1, 65M docs / 50GB, 20 facet fields]
The OS in use is a 64-bit Linux with OpenJDK 1.7 Java and 48G RAM.
I did not see your memory allocation anywhere. What is your Xmx?
P.S.: Here the
Hi Shalin,
I am making a simple facet query on shards, and the sort order is
based on the score. Sometimes I get the correct result from
the shards. I am not indexing data at the time of the query. I think, as I am
making the shard query inside a Solr search component, it is
Hi,
I think you could get some user traction if users on your site could use
the same credentials as on the Solr user mailing list. Then, when answering on
your site, the answer would also get posted to the mailing list. One thing to
check here is whether the mailing list maintainers will like this :)
On Wed,
Hi,
I would like to know, when designing an index, which approach is better:
Approach-1
A large number of documents (100 million+) with 5-10 values per document for
one multi-value field
Approach-2
A smaller number of documents with 50-100 values per document for one multi-value
field.
Right now I
Hello,
I'm using eDisMax to do scoring for my search results.
I have a nested structure of documents: the main (parent) document
with metadata and the child documents with fulltext content, so I
have to join them.
My qf looks like this: title^40.0 subtitle^40.0 original_title^10.0
Hello,
I'm having issues with multicore management.
What I want to do:
*1st point:* Create new cores on the fly without restarting the Solr
instance
*2nd point:* Have these new cores registered in case of restarting the Solr
instance
So, I tried *config A*:
/solr.xml/:
Then I duplicated the
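For the first point, cores can usually be created at runtime through the CoreAdmin API rather than by editing solr.xml by hand; a sketch of building such a request (host, core name, and instanceDir are hypothetical, and persistence across restarts still depends on your solr.xml settings):

```python
from urllib.parse import urlencode

def core_create_url(base_url, name, instance_dir):
    """Build a CoreAdmin CREATE request URL for registering a new core
    on a running Solr instance."""
    params = {"action": "CREATE", "name": name, "instanceDir": instance_dir}
    return "%s/admin/cores?%s" % (base_url.rstrip("/"), urlencode(params))
```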
Hi,
do you have persistent=true in your solr.xml in the root element?
Dmitry
On Tue, Mar 4, 2014 at 3:30 PM, bengates benga...@aliceadsl.fr wrote:
Hello,
I'm having issues with multicore management.
What I want to do :
*1st point :* Create new cores on the fly without restarting the Solr
Hi,
I have a requirement where we need to sort the prices for products
across all the user stores.
Assume a product is present in 3 stores (store1001, 1002, 1003); I
have created the following fields in my schema.xml:
<field name="storeid_str_mv" type="string" indexed="true"
Yeah, sorry :( the fix applied is only for compatibility in one direction.
Older code won’t know what this type 19 is.
- Mark
http://about.me/markrmiller
On Mar 4, 2014, at 2:42 AM, Thomas Scheffler thomas.scheff...@uni-jena.de
wrote:
Am 04.03.2014 07:21, schrieb Thomas Scheffler:
Am
Does that mean newer clients work with older servers (I think so, from
reading this thread), or the other way round? If so, I guess the advice
would be -- upgrade all your clients first?
-Mike
On 03/04/2014 10:00 AM, Mark Miller wrote:
Yeah, sorry :( the fix applied is only for
Hi;
This may be a simple question, but when I query from the Admin interface:
id:am.mobileworld.www:http/
it returns one document. However, when I do it from SolrJ with
deleteById it does not. Also, when I send a query via SolrJ it returns
all documents (for id, id:am.mobileworld.www:http/
You are not escaping the Lucene query parser special characters:
+ - || ! ( ) { } [ ] ^ ~ * ? : \ /
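A client-side escaping helper along those lines might look like this (a sketch; SolrJ ships its own ClientUtils.escapeQueryChars, and the exact special-character set varies by version):

```python
def escape_query_chars(s):
    """Backslash-escape Lucene query parser special characters;
    '&' and '|' are escaped individually, which also covers && and ||."""
    special = set('+-!(){}[]^~*?:\\/&|')
    return ''.join('\\' + ch if ch in special else ch for ch in s)
```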
-Original message-
From:Furkan KAMACI furkankam...@gmail.com
Sent: Tuesday 4th March 2014 16:57
To: solr-user@lucene.apache.org
Subject: Id As URL for Solrj
Hi;
This maybe
Or maybe he is, and shouldn't be, since deleteById is a SolrXML update handler
feature, not a query parser feature.
For example:
http://stackoverflow.com/questions/2657409/deleting-index-from-solr-using-solrj-as-a-client
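Put differently: deleteById transmits the id as literal text in an update message, so query-parser escaping does not apply to it. A sketch of the equivalent XML update message (using the id from the question upthread):

```python
import xml.etree.ElementTree as ET

def delete_by_id_xml(doc_id):
    """Build the <delete><id>...</id></delete> update message that
    deleteById corresponds to; the id is not run through a query parser."""
    delete = ET.Element("delete")
    ET.SubElement(delete, "id").text = doc_id
    return ET.tostring(delete, encoding="unicode")
```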
-- Jack Krupansky
-Original Message-
From: Markus Jelsma
Sent:
we are currently using Oracle Java 1.7.0_11 23.6-b04 JDK with our Solr 4.6.1
setup
I was looking at upgrading to a more recent version, but am wondering: are
there any versions to avoid?
The reason I ask is that I see some versions that have GC issues, but am not sure
how/if Solr is affected by them.
On 3/4/2014 10:52 AM, solr-user wrote:
we are currently using Oracle Java 1.7.0_11 23.6-b04 JDK with our Solr 4.6.1
setup
I was looking at upgrading to a more recent version but am wondering, are
there any versions to avoid?
reason I ask is that I see some versions that have GC issues but am
This information might come in handy for you:
https://issues.apache.org/jira/browse/LUCENE-5212
https://issues.apache.org/jira/browse/LUCENE-5241
On Tue, Mar 4, 2014 at 9:52 AM, solr-user solr-u...@hotmail.com wrote:
we are currently using Oracle Java 1.7.0_11 23.6-b04 JDK with our Solr
The Lucene PMC is pleased to announce that we have a new version of the
Solr Reference Guide available for Solr 4.7.
The 395 page PDF serves as the definitive user's manual for Solr 4.7. It
can be downloaded from the Apache mirror network:
Thanks Jack.
I could fix this problem by adding a stopwords 'filter' condition to the
fieldType definition for number and all_code.
--
View this message in context:
http://lucene.472066.n3.nabble.com/stopwords-issue-with-edismax-tp4120339p4121176.html
Sent from the Solr - User mailing list
On 3/4/2014 2:23 AM, Angel Tchorbadjiiski wrote:
in the last couple of weeks one of my machines has been experiencing
OutOfMemoryError: Java heap space errors. Within a couple of hours of
starting the Solr instance, queries with execution times of under 100ms
need more than 10s to execute and many
<autoCommit>
  <maxDocs>25</maxDocs>
  <maxTime>90</maxTime>
</autoCommit>
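As an aside, if the concern is load during commits, solrconfig.xml also lets autoCommit skip opening a new searcher; a sketch with illustrative (not recommended) values:

```xml
<autoCommit>
  <maxDocs>10000</maxDocs>
  <maxTime>15000</maxTime>
  <openSearcher>false</openSearcher>
</autoCommit>
```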
-Original Message-
From: Lan [mailto:dung@gmail.com]
Sent: Monday, March 03, 2014 1:24 PM
To: solr-user@lucene.apache.org
Subject: Re: network slows when solr is running - help
How frequently
I'm attempting to run a polygon search but I'm getting back an 'Invalid
Number: Intersects(POLYGON(-83.63493346958422 42.47186899701156,' response.
My geoloc data is stored in the index as follows: geoloc:
-82.549200,43.447400
My polygon query is as follows:
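Not the poster's actual query (which is cut off above), but for comparison, a well-formed polygon filter needs a doubled parenthesis after POLYGON and a ring that closes on its starting point; a sketch with hypothetical field name and coordinates:

```python
def polygon_fq(field, points):
    """Build a WKT Intersects(POLYGON((...))) filter query from (x, y)
    points, closing the ring by repeating the first point if needed."""
    pts = list(points)
    if pts[0] != pts[-1]:
        pts.append(pts[0])
    ring = ", ".join("%s %s" % (x, y) for x, y in pts)
    return '%s:"Intersects(POLYGON((%s)))"' % (field, ring)
```

Note too that polygon support requires a spatial field type that understands WKT (e.g. an RPT-based type), not a plain lat/lon point type.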
Here's what I believe is my solution:
Yesterday I changed nrtMode to false in my solrconfig.xml (see the example
solrconfig.xml for more info) on each master and slave server. And as of today
the numDocs are the same in each master/slave pair - but I'll continue watching
this for a bit.
We are looking to setup a highly available failover site across a WAN for our
SolrCloud instance. The main production instance is at colo center A and
consists of a 3-node ZooKeeper ensemble managing configs for a 4-node
SolrCloud running Solr 4.6.1. We only have one collection among the 4 cores
I am currently using Solr 4.2 (non-cloud mode). I see that most of the
changes made to the config files (solrconfig.xml, schema.xml, elevate.xml,
stopwords.txt, etc.) get picked up when reloading the core.
Is there any particular change (in any of the config files) that requires a
restart instead of
Hi,
I have the requirement to index and stem Croatian, Macedonian, Serbian
and Slovenian content. I started by creating a collection _hr_ for the
Croatian content and configured the HunSpellStemFilterFactory using the
.dic and .aff files provided by OpenOffice. While testing my
configuration
Hi Erick,
I understand what you're pointing out, but the thing is, this is for an
autocomplete feature. I cannot ignore parentheses or other special
characters: for certain titles like 'A Team of five', if the user gives 'a
team' then titles containing 'a-team' and the rest also come up, and this one
gets
On 5 March 2014 02:14, bbi123 bbar...@gmail.com wrote:
I am currently using SOLR 4.2 (non cloud mode). I see that most of the
changes made to the config files (solrconfig.xml, schema.xml, elevate.xml,
stopwords.txt etc..) gets updated when reloading the core.
Is there any particular change
Yes, if they are tokenized text fields, but I was assuming that number was
a strictly numeric field.
That said, you could have numeric and non-tokenized string fields, but
copyField them to text fields (or a single text field) for purposes of
queries.
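A schema.xml sketch of that copyField approach (field and type names here are hypothetical):

```xml
<field name="number" type="string" indexed="true" stored="true"/>
<field name="number_text" type="text_general" indexed="true" stored="false"/>
<copyField source="number" dest="number_text"/>
```

Queries needing tokenized matching would then go against number_text while the original field stays exact.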
-- Jack Krupansky
-Original
On 3/4/2014 1:44 PM, bbi123 wrote:
I am currently using SOLR 4.2 (non cloud mode). I see that most of the
changes made to the config files (solrconfig.xml, schema.xml, elevate.xml,
stopwords.txt etc..) gets updated when reloading the core.
Is there any particular change (in any of the config
I did the following as you suggested. I have a lib dir under /mnt/solr/
(this is the solr.solr.home dir) and moved all my jars into it. I do not have
any sharedLib or lib references in my solr.xml or solrconfig.xml file.
The jars are not getting loaded for a few custom analyzers I have in the
schema.
On 3/4/2014 3:09 PM, KNitin wrote:
I did the following as you suggested. I have a lib dir under /mnt/solr/
(this is the solr.solr.home dir) and moved all my jars in it. I do not have
any sharedLib or lib references in my solr.xml or solrconfig.xml file
The jars are not getting loaded for a few
: It's possible that this is just a mistake in the error message after some
: real error with your actual geo/conf/solrconfig.xml has already been
Confirmed, the error message itself is bad...
https://issues.apache.org/jira/browse/SOLR-5814
-Hoss
http://www.lucidworks.com/
Has it really gone up in size from 5 MB for the 4.6 version to 30 MB for the 4.7
version? Or are some mirrors playing tricks? (Mine is:
http://www.trieuvan.com/apache/lucene/solr/ref-guide/ )
Regards,
Alex.
Personal website: http://www.outerthoughts.com/
LinkedIn:
I want to use the following in fq and I need to set the operator to OR. My q.op
is AND but I need OR in fq. I have read about ofq, but that is for putting OR
between multiple fq parameters. Can I set the operator for fq?
(-organisations:[* TO *] -roles:[* TO *]) (+organisations:(150 42)
+roles:(174
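One thing worth trying (hedged, since behavior can vary by version): local params let you override q.op for a single fq without touching the global default, along the lines of:

```
fq={!q.op=OR}(-organisations:[* TO *] -roles:[* TO *]) (+organisations:(150 42) +roles:(...))
```

The roles clause is left as '(...)' here because the original message is truncated.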
Thanks a lot, Shawn! I was missing an ICU jar as part of my original
setup. I then copied the analysis jars into solr/lib, removed all
references in solrconfig.xml, and it worked like a charm.
The PermGen space usage also seems to have been reduced significantly.
Thanks
Nitin
On Tue, Mar 4, 2014 at 2:41
Unfortunately, there is no out-of-the-box solution for this at the moment.
In the past, I solved this using a couple of different approaches, which
weren't all that elegant but served the purpose and were simple enough to allow
the ops folks to set up monitors and alerts if things didn't work.
Just my 2 cents on this while I wait for a build ... I think we have to ensure
that an older client will work with a newer server, or a newer client will work
with an older server, to support hot rolling upgrades. It's not unheard of these
days for an org to have tens (or even hundreds) of SolrCloud
Thanks, Tim, it's great to hear you say that! I tried to make that
point myself with various patches, but they never really got taken up by
committers, so I kind of gave up. But I agree with you 100%: this is a
critical feature if you want to get real-world large deployments to
accept frequent
I would like to reduce the number of documents that are returned in a search,
based on the query terms and their IDF.
For example, for the query q=(Definitive Java Book), I don't want to see
result documents which have 'Book' and other irrelevant terms in them. For
example I don't
On 3/4/2014 8:15 PM, Michael Sokolov wrote:
Thanks, Tim, it's great to hear you say that! I tried to make that
point myself with various patches, but they never really got taken up by
committers, so I kind of gave up, but I agree with you 100% this is a
critical feature if you want to get
Hi all,
I have the following requirement: I have an application talking to
Solr via SolrJ where I don't know upfront which type of Solr instance it
will be communicating with. While this is easily solvable by using
different SolrServer implementations, I also need a way to ensure that all