Do you see any (a lot?) of warming searchers on deck, i.e. a value for N:
PERFORMANCE WARNING: Overlapping onDeckSearchers=N
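That warning is driven by the `maxWarmingSearchers` limit in solrconfig.xml; overlapping warming searchers usually mean commits arrive faster than searcher warming finishes. A minimal sketch of the relevant settings (the values are illustrative, not recommendations):

```xml
<!-- solrconfig.xml: cap on concurrent warming searchers; the warning fires when exceeded -->
<maxWarmingSearchers>2</maxWarmingSearchers>
<!-- spacing out commits (here: hard autoCommit every 60s, without opening a searcher) reduces overlap -->
<autoCommit>
  <maxTime>60000</maxTime>
  <openSearcher>false</openSearcher>
</autoCommit>
```

If the warning appears frequently, lengthening the soft commit interval is usually the first fix, not raising the limit.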
On Wed, May 6, 2015 at 10:58 AM, adfel70 adfe...@gmail.com wrote:
Hello
I have a cluster of 16 shards, 3 replicas. the cluster indexed nested
documents.
it
On Wed, 2015-05-06 at 00:58 -0700, adfel70 wrote:
each shard has around 200 million docs. size of each shard is 250GB.
this runs on 12 machines. each machine has 4 SSD disks and 4 solr processes.
each process has 28GB heap. each machine has 196GB RAM.
[...]
1. heavy GCs when soft commit is
Hi,
I am a newbie to SOLR. I have set up a Master/Slave configuration with SOLR 4.0. I
am trying to identify the best way to back up an old core and delete it
so as to free up disk space.
I did get the information on how to unload a core and delete the indexes from
the
We currently have many custom properties defined in
core.properties which are used in our solrconfig.xml, e.g.
<str name="enabled">${solr.enable.cachewarming:true}</str>
Now we want to migrate to SolrCloud and want to define these properties for
a collection. But defining properties
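For reference, the standalone mechanism being migrated from looks like this (the property name is taken from the question; the rest is a sketch): any user-defined entry in core.properties becomes available for variable substitution in that core's config files.

```
# core.properties
solr.enable.cachewarming=false
```

```xml
<!-- solrconfig.xml: substitutes the property, falling back to true when undefined -->
<str name="enabled">${solr.enable.cachewarming:true}</str>
```

In SolrCloud, where core.properties files are generated for you, the closest analogue is passing `property.name=value` parameters to the Collections API when creating the collection, so the generated core.properties carries the custom entries.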
1. Yes, I'm sure that the pauses are due to GCs. I monitor the cluster and
continuously receive metrics from the system and from the Java process.
I see clearly that when a soft commit is triggered, major GCs start occurring
(sometimes recurring on the same process) and latency rises.
I use the CMS GC and JDK
Nope, there is no way to find that out without actually doing the split. If
you have composite keys then you could also split using the prefix of a
composite id via the split.key parameter.
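For illustration, a SPLITSHARD request using split.key might look like the following (the collection name and route prefix are made up; with split.key the target shard is inferred from the key, so no shard parameter is needed):

```
http://localhost:8983/solr/admin/collections?action=SPLITSHARD&collection=mycoll&split.key=abc!
```

This splits the shard containing the `abc!` route prefix so that documents sharing that prefix can be isolated in their own sub-shard.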
On Wed, May 6, 2015 at 9:32 AM, anand.mahajan an...@zerebral.co.in wrote:
Looks like it's not possible to
Ah, I remember seeing this when we first started using Solr (which was 4.0
because we needed Solr Cloud), I never got around to filing an issue for it
(oops!), but we have a note in our schema to leave the key field a normal
string (like Bruno we had tried to lowercase it which failed).
We didn't
Hello
I have a cluster of 16 shards, 3 replicas. the cluster indexed nested
documents.
it currently has 3 billion documents overall (parent and children).
each shard has around 200 million docs. size of each shard is 250GB.
this runs on 12 machines. each machine has 4 SSD disks and 4 solr
Hi.
This is my first experience with Solr Cloud.
I installed three Solr nodes with three ZooKeeper instances and they
seemed to start well.
Now I have to create a new replicated core and I'm trying to find out
how I can do it.
I found many examples about how to create shards and cores, but I have
Hello,
When I start solr-5.1.0 on Ubuntu 12.04 with
*/bin/var/www/solr-5.0.0/bin ./solr start*
Solr starts and shows the message below:
*Started Solr server on port 8983 (pid=14457). Happy searching!*
But when I browse to http://localhost:8983/solr/ it does not come up.
Then I have
Okay - thanks for the confirmation, Shalin. Could this be a feature request
for the Collections API - a split-shard dry-run API that accepts a
sub-shard count as a request param and returns the optimal shard ranges for
the number of sub-shards requested to be created, along with the
Ok, I found out that the creation of a new core/collection on Solr 5.1
is done with the bin/solr script.
So I created a new collection with this command:
./solr create_collection -c test -replicationFactor 3
Is this the correct way?
Thank you very much,
Bye!
2015-05-06 10:02 GMT+02:00 shacky
Hi list.
I created a new collection on my new SolrCloud installation, the new
collection is shown and replicated on all three nodes, but on the
first node (only on this one) I get this error:
new_core:
Hi Anand,
The nature of the hash function (murmur3) should lead to an approximately
uniform distribution of documents across sub-shards. Have you investigated
why, if at all, the sub-shards are not balanced? Do you use composite keys,
e.g. abc!id1, which could cause the imbalance?
I don't think there is
UnsupportedClassVersionError means you have an old JDK. Use a more recent
one.
Markus
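For context, the class-file major version in that error maps directly to a JDK release (45 = 1.1, 51 = Java 7, 52 = Java 8); from Java 1.2 onward, major minus 44 gives the 1.x number. A throwaway check, assuming a POSIX shell:

```shell
# Decode a class-file major version into the minimum JDK release.
# 51 is the value reported in "Unsupported major.minor version 51.0".
major=51
echo "major $major => needs Java 1.$((major - 44))+"
# prints: major 51 => needs Java 1.7+
```

Running `java -version` on the machine that launches Solr shows whether the JDK on the PATH is new enough.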
2015-05-06 12:59 GMT+02:00 Mayur Champaneria ma...@matchmytalent.com:
Hello,
When I starting solr-5.1.0 in Ubuntu 12.04 by,
*/bin/var/www/solr-5.0.0/bin ./solr start*
Solr is being started and shows as
Hi,
I have been using Solr for some time now and by mistake I used String for my
date fields. The format of the string is like this: 2015-05-05T13:24:10Z
Now, if I need to change the field type from String to date, will this
require a complete reindex?
Vishal Sharma, Team Leader, SFDC
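For reference, a date field declaration in schema.xml looks like the following (the field name is hypothetical); the `2015-05-05T13:24:10Z` format in the question already matches what TrieDateField expects, so only the type change plus a full reindex is needed:

```xml
<fieldType name="tdate" class="solr.TrieDateField" precisionStep="6" positionIncrementGap="0"/>
<field name="created_at" type="tdate" indexed="true" stored="true"/>
```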
Hi,
I'm currently using solr to index a moderate amount of information with
the help of logstash and the solr_http contrib output plugin.
solr is receiving documents, I've got banana as a web interface and I am
running it with a schemaless core.
I'm feeding documents via the contrib plugin
We use the following merge policy on SSD's and are running on physical
machines with linux OS.
<mergeFactor>10</mergeFactor>
<mergePolicy class="org.apache.lucene.index.TieredMergePolicy"/>
<mergeScheduler
class="org.apache.lucene.index.ConcurrentMergeScheduler">
<int
Thank you for the detailed answer.
How can I decrease the impact of opening a searcher on such a large index?
Especially the impact of heap usage that causes OOM.
Regarding GC tuning - I am doing that.
Here are the params I use:
-XX:+AggressiveOpts
-XX:+UseLargePages
-XX:+ParallelRefProcEnabled
I don't see any of these.
I've seen them before in other clusters and uses of SOLR but don't see any
of these messages here.
Dmitry Kan-2 wrote
Do you see any (a lot?) of warming searchers on deck, i.e. a value for
N:
PERFORMANCE WARNING: Overlapping onDeckSearchers=N
On Wed, May 6,
Hi
I'm getting org.apache.lucene.index.CorruptIndexException
liveDocs.count()=2000699 info.docCount()=2047904 info.getDelCount()=47207
(filename=_ney_1g.del).
This just happened for the 4th time in 2 weeks.
each time this happens in another core, usually when a replica tries to
recover, then it
Exactly Tommaso,
I was referring to that!
I wrote another mail to the dev mailing list; I will open a Jira issue for
that!
Cheers
2015-04-29 12:16 GMT+01:00 Tommaso Teofili tommaso.teof...@gmail.com:
2015-04-27 19:22 GMT+02:00 Alessandro Benedetti
benedetti.ale...@gmail.com
:
Just
On 5/6/2015 6:37 AM, Markus Heiden wrote:
UnsupportedClassVersionError means you have an old JDK. Use a more recent
one.
Specifically, Unsupported major.minor version 51.0 means you are
trying to use Java 6 (1.6.0) to run a program that requires Java 7
(1.7.0). Solr 4.8 and later (including
Yes - I'm using 2-level composite ids and that has caused the imbalance for
some shards.
It's cars data and the composite ids are of the form year-make!model-and a
couple of other specifications, e.g. 2013Ford!Edge!123456 - but there are
just far too many Ford 2013 or 2011 cars that go and occupy the
On 5/6/2015 7:03 AM, Vishal Sharma wrote:
Now, If I need to change the field type to date from String will this
require complete reindex?
Yes, it absolutely will require a complete reindex. A change like that
probably will result in errors on queries until a reindex is done. You
may even need
On 5/6/2015 1:58 AM, adfel70 wrote:
I have a cluster of 16 shards, 3 replicas. the cluster indexed nested
documents.
it currently has 3 billion documents overall (parent and children).
each shard has around 200 million docs. size of each shard is 250GB.
this runs on 12 machines. each machine
I'm trying to get the AnalyzingInfixSuggester to work but I'm not successful.
I'd be grateful if someone can point me to a working example.
Problem:
My content is product descriptions similar to a BestBuy or NewEgg catalog.
My problem is that I'm getting only single words in the suggester
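A known-working shape for the infix suggester in Solr 5.x uses SuggestComponent with the AnalyzingInfixLookupFactory (the component, field, and handler names below are assumptions, not from the original config):

```xml
<searchComponent name="suggest" class="solr.SuggestComponent">
  <lst name="suggester">
    <str name="name">infixSuggester</str>
    <str name="lookupImpl">AnalyzingInfixLookupFactory</str>
    <str name="field">text</str>
    <str name="suggestAnalyzerFieldType">text_general</str>
  </lst>
</searchComponent>
<requestHandler name="/suggest" class="solr.SearchHandler" startup="lazy">
  <lst name="defaults">
    <str name="suggest">true</str>
    <str name="suggest.dictionary">infixSuggester</str>
  </lst>
  <arr name="components">
    <str>suggest</str>
  </arr>
</requestHandler>
```

Note that the infix suggester returns whole stored values of the source field, so multi-word suggestions depend on what that field contains rather than on shingling.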
Hi
Is there an equivalent of the Completion suggester of Elasticsearch in Solr?
I am a user who uses both Solr and ES, in different projects.
I am not able to find a solution in Solr where I can use:
1) FSA Structure
2) multiple terms as synonyms
3) assign a weight to each document based on
Hi Everyone,
I am using the Schema API to add a new copy field per:
https://cwiki.apache.org/confluence/display/solr/Schema+API#SchemaAPI-AddaNewCopyFieldRule
Unlike the other Add APIs, this one will not fail if you add an existing
copy field object. In fact, when I call the API over and
Hi - I'm very interested in the new heat map capability of Solr 5.1.0.
Has anyone looked at combining geotool's HeatmapProcess method with this
data? I'm trying this now, but I keep getting an empty image from the
GridCoverage2D object.
Any pointers/tips?
Thank you!
-Joe
yes textSuggest is of type text_general with below definition
<fieldType name="text_general" class="solr.TextField"
positionIncrementGap="100" sortMissingLast="true" omitNorms="true">
<analyzer type="index">
<tokenizer class="solr.ClassicTokenizerFactory"/>
<filter class="solr.ClassicFilterFactory"/>
Thank you Rajesh. I think I got a bit of help from the answer at:
http://stackoverflow.com/a/29743945
While that example sort of worked for me, I've not had the time to test what
works and what didn't.
So far I have found that I need the field in my searchComponent to be of
type 'string'. In
Have you seen this? I tried to make something end-to-end with assorted
gotchas identified
Best,
Erick
On Wed, May 6, 2015 at 3:09 PM, O. Olson olson_...@yahoo.it wrote:
Thank you Rajesh. I think I got a bit of help from the answer at:
http://stackoverflow.com/a/29743945
While that
On 5/6/2015 2:29 PM, Tim Dunphy wrote:
I'm trying to set up an old version of Solr for one of our Drupal
developers. Apparently only versions 1.x or 3.x will work with the current
version of Drupal.
I'm setting up solr 3.4.2 under tomcat.
And I'm getting this error when I start tomcat and
I'm trying to set up an old version of Solr for one of our Drupal
developers. Apparently only versions 1.x or 3.x will work with the current
version of Drupal.
I'm setting up solr 3.4.2 under tomcat.
And I'm getting this error when I start tomcat and surf to the /solr/admin
URL:
HTTP Status 404
Thank you Rajesh for responding so quickly. I tried it again with a restart
and a reimport and I still cannot get this to work i.e. I'm seeing no
difference.
I'm wondering how you define: 'textSuggest' in your schema? In my case I use
the field 'text' that is defined as:
<field name="text"
Just add the queryConverter definition in your solr config and you should
see multiple-term suggestions.
Also make sure you have ShingleFilterFactory as one of the filters in
your schema field definition for your field text_general:
<filter class="solr.ShingleFilterFactory" maxShingleSize="5"
I just tested your config with my schema and it worked.
my config :
<searchComponent class="solr.SpellCheckComponent" name="suggest1">
<lst name="spellchecker">
<str name="name">suggest</str>
<str name="classname">org.apache.solr.spelling.suggest.Suggester</str>
<str
make sure you have this query converter defined in your config
<queryConverter name="queryConverter"
class="org.apache.solr.spelling.SuggestQueryConverter"/>
Thanks,
Rajesh.
On Wed, May 6, 2015 at 12:39 PM, O. Olson olson_...@yahoo.it wrote:
I'm trying to get the AnalyzingInfixSuggester to
Thank you Rajesh. I'm not familiar with the queryConverter. How do you wire
it up to the rest of the setup? Right now, I just put it between the
SpellCheckComponent and the RequestHandler i.e. my config is as:
<searchComponent class="solr.SpellCheckComponent" name="suggest">
<lst
Hi Steve,
It’s by design that you can copyField the same source/dest multiple times -
according to Yonik (not sure where this was discussed), this capability has
been used in the past to effectively boost terms in the source field.
The API isn’t symmetric here though: I’m guessing deleting a
Hey Chris,
Thanks for reply.
The exception is ArrayIndexOutOfBoundsException. It is coming because the
searcher may return a BitDocSet for query1 and a SortedIntDocSet for query2
[could be possible]. In that case, SortedIntDocSet doesn't implement
intersection and will cause this exception.
Thanks and regards,
On Wed, May 6, 2015 at 8:10 PM, Steve Rowe sar...@gmail.com wrote:
It’s by design that you can copyField the same source/dest multiple times -
according to Yonik (not sure where this was discussed), this capability has
been used in the past to effectively boost terms in the source field.
: DocSet docset1 = Searcher.getDocSet(query1);
: DocSet docset2 = Searcher.getDocSet(query2);
:
: DocSet finalDocset = docset1.intersection(docset2);
:
: Is this a valid approach? Given the docset could either be a SortedIntDocSet
: or a BitDocSet. I am facing ArrayIndexOutOfBoundsException when
:
On 5/6/2015 8:55 AM, adfel70 wrote:
Thank you for the detailed answer.
How can I decrease the impact of opening a searcher in such a large index?
especially the impact of heap usage that causes OOM.
See the wiki link I sent. It talks about some of the things that
require a lot of heap and
Well, they're just files on disk. You can freely copy the index files
around wherever you want. I'd do a few practice runs first though. So:
1 unload the core (or otherwise shut it down).
2 copy the data directory and all sub directories.
3 I'd also copy the conf directory to ensure a consistent
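The copy steps above can be rehearsed with throwaway directories before touching a real core (the paths and file names below are stand-ins, not real Solr locations):

```shell
# Simulate backing up a core: copy its data/ and conf/ directories elsewhere.
core=$(mktemp -d)/old_core
mkdir -p "$core/data/index" "$core/conf"
echo 'dummy segment' > "$core/data/index/segments_1"

backup=$(mktemp -d)/old_core_backup
mkdir -p "$backup"
cp -r "$core/data" "$core/conf" "$backup/"

ls "$backup"   # both directories are now safely copied
```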
That should have put one replica on each machine, if it did you're fine.
Best,
Erick
On Wed, May 6, 2015 at 3:58 AM, shacky shack...@gmail.com wrote:
Ok, I found out that the creation of new core/collection on Solr 5.1
is made with the bin/solr script.
So I created a new collection with this
Have you looked around at your directories on disk? I'm _not_ talking
about the admin UI here. The default is core discovery mode, which
recursively looks under solr_home and thinks there's a core wherever
it finds a core.properties file. If you find such a thing, rename it
or remove the directory.
Yes thanks, it works for me now too.
Daniel, my pn is always in uppercase and I index it always in uppercase.
The problem (solved now after all your answers, thanks) was the request:
if users request with lowercase then Solr replies with no results, and that
was not good.
But now the problem is solved, I
Gopal:
Did you see my previous answer?
Best,
Erick
On Tue, May 5, 2015 at 9:42 PM, Gopal Jee zgo...@gmail.com wrote:
about 2, live_nodes under zookeeper is an ephemeral node (please see
zookeeper ephemeral nodes). So, once the connection from the Solr zkClient to
zookeeper is lost, these nodes will
Hi,
I have installed Solr on remote server and started on port 8983.
Now, I have bound my local machine's port 8983 to the remote server's port
8983 using *ssh* (Ubuntu OS). When I request suggestions from Solr on the
remote server through local machine calls, sometimes it gives
Hi,
Is it possible to restrict the number of documents per shard in Solr cloud?
Let's say we have a Solr cloud with 4 nodes, and on each node we have one
leader and one replica. So in total we have 8 shards, including
replicas. Now I need to index my documents in such a way that each shard
will