Oh ya. The previous log was from shard1. This log is from shard2.
INFO - 2015-03-18 15:06:51.019;
org.apache.solr.update.processor.LogUpdateProcessor; [logmill] webapp=/solr
path=/update params={distrib.from=http://192.168.2.2:8983/solr/logmill/&update.distrib=TOLEADER&wt=javabin&version=2}
{} 0
Hi Charlee,
I've followed the setup from the Solr In Action book, and assigned port 8983
to shard1 and port 8984 to shard2. Will it cause any issues?
Regards,
Edwin
On 19 March 2015 at 13:02, Charlee Chitsuk charlee...@gmail.com wrote:
The http://192.168.2.2:8984/solr/
Hi,
Thanks erick and shawn for the reply.
Just wanted to clarify that the commit size of 10 was only an example; in
production, commits are handled via Solr's auto-commit feature.
The requirement we have is to store around 20-30 lakh docs out of which
around 5-6 lakh docs get updated daily. What I
Hi,
we have quite a problem with Solr. We are running it in a config 6x3, and
suddenly solr started to hang, taking all the available cpu on the nodes.
In the thread dump we noticed things like this, which can eat a lot of CPU time:
- org.apache.solr.search.LRUCache.put(LRUCache.java:116)
-
Yes. Just before your email I was able to figure it out. My project was set to
use SolrJ 4.10.3; everything was working fine except cloud, so I didn't
notice.
After I switched to SolrJ 5 it's working now.
Thanks everyone for supporting
Hi,
I'm using Solr Cloud now, with 2 shards known as shard1 and shard2, and
when I try to index rich-text documents using the REST API or the default
Documents module in the Solr Admin UI, the indexed documents do not
appear immediately when I do a search. They only appear after I restart
the
Hi,
I started solr in cloud mode (interactive set up). 3 nodes, 3 shards and 1
replica and a collection. I stopped it using ./solr stop -all. How do I
get the same cloud-mode setup to start again? ./solr -c start started
a new Solr cloud instance altogether, whereas I was looking for the
Hi Erick
Am I right to say we need to do the combining of duplicate records into 1
before feeding them to Solr to index?
I am coming from Endeca, which supports combining duplicate records
into 1 record during indexing. Was wondering if Solr supports this.
-Derek
On 3/18/2015 11:21 PM, Erick
Hi Alexandre,
The segment counts are different but the document counts are the same.
With (soft commit - 300 and hard commit - 6000) = no. of segments - 43
AND
With (soft commit - 6 and hard commit - 6) = no. of segments - 31
I don't have any idea related to segment
Hi, one morning my Solr server broke with this message below, it didn't
recover on its own - had to restart it - Is that a 4.7.2 known issue?
My topology is very simple: single Solr with a single shard replica, and
an embedded ZK (-zkrun).
Could it be related to a 4.8 fix: SOLR-5799: When
Hi,
When I started solr in cloud mode (interactive) and chose 2 nodes, it
started, and in the cloud-view screen it showed a different IP with URL
169.254.5.207:7574; when I clicked on that, it said page not found. When I
modified the URL to localhost (http://localhost:7574/solr/#/~cloud) it
I think this is because of a change in the network IP address. I got it. Thanks.
On Thu, Mar 19, 2015 at 1:32 PM, davidphilip cherian
davidphilipcher...@gmail.com wrote:
Hi,
When I started solr in cloud mode(interactive) and chose 2 nodes, it
started and in the cloud-view screen it showed some
Hi Erick..
I read your article. Really nice...
In it you said that for bulk indexing, set soft commit = 10 mins and
hard commit = 15 sec. Is it also okay for my scenario?
On Thu, Mar 19, 2015 at 1:53 AM, Erick Erickson erickerick...@gmail.com
wrote:
bq: As you said, do
Hi Edwin,
Please review your commit/soft-commit configuration;
"soft commits are about visibility, hard commits are about durability,"
as a wise man said. :)
If you are doing NRT indexing and searching, you probably need a short soft
commit interval, or commit explicitly in your request handler. Be
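As a sketch of that advice, the relevant solrconfig.xml section for an NRT setup would look something like this (the maxTime values here are illustrative, not from this thread):

```xml
<!-- Short soft commits make new documents visible quickly (NRT search);
     longer hard commits with openSearcher=false handle durability. -->
<autoSoftCommit>
  <maxTime>1000</maxTime>   <!-- visibility: ~1s, illustrative -->
</autoSoftCommit>
<autoCommit>
  <maxTime>60000</maxTime>  <!-- durability: 60s, illustrative -->
  <openSearcher>false</openSearcher>
</autoCommit>
```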
Hi Erick,
I'm sorry for the delay but I've only just seen this reply.
I'm using the latest version of Solr, and the default setting is to use the
new kind of indexing; it doesn't use schema.xml, and so I have no
idea how to set stored for this field.
The content is grabbed because I've
It might be because LRUCache by default will try to evict its entries on
each call to put and putAll. LRUCache is built on top of java's
LinkedHashMap. Check the javadoc of removeEldestEntry
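The LinkedHashMap mechanics described above can be sketched as follows (a minimal illustrative class with a hypothetical name, not Solr's actual LRUCache):

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Minimal LRU cache in the style of a LinkedHashMap-backed cache:
// removeEldestEntry is consulted on every put/putAll, which is why
// eviction work shows up inside put() in a thread dump.
public class BoundedLruCache<K, V> extends LinkedHashMap<K, V> {
    private final int maxSize;

    public BoundedLruCache(int maxSize) {
        // accessOrder = true: iteration order is least-recently-accessed first
        super(16, 0.75f, true);
        this.maxSize = maxSize;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        // Returning true tells LinkedHashMap to evict the eldest entry.
        return size() > maxSize;
    }
}
```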
Hi Shawn,
Thanks for replying. I need clarity on the following points:
a) Will making stored=false in the schema for a few fields improve indexing time?
b) Do the soft commit and hard commit configurations depend on hardware?
c) Should I do merge factor / ramBufferSize configuration? And how should
I
Hi Shawn,
Yes, I'm using the /update/extract handler. I'm not sure about the
shards.qt parameter either.
Regards,
Edwin
On 19 March 2015 at 13:18, Shawn Heisey apa...@elyograg.org wrote:
On 3/18/2015 1:22 AM, Zheng Lin Edwin Yeo wrote:
I'm having some issues with indexing rich-text documents
Thank you for the information.
Yes, the program is working correctly now and I can search for the
documents immediately after issuing commit=true.
Regards,
Edwin
On 20 March 2015 at 04:07, Erick Erickson erickerick...@gmail.com wrote:
The post jar issues a hard commit (openSearcher=true) as
Hi ,
- architecture: master (1) - slave (3)
solrconfig:
<autoSoftCommit><maxTime>500</maxTime></autoSoftCommit>
<autoCommit><maxTime>15000</maxTime><openSearcher>false</openSearcher></autoCommit>
schema:
<field name="id" type="string" indexed="true" stored="true" required="true" multiValued="false"/> <field
That or even hard commit to 60 seconds. It's strictly a matter of how often
you want to close old segments and open new ones.
On Thu, Mar 19, 2015 at 3:12 AM, Nitin Solanki nitinml...@gmail.com wrote:
Hi Erick..
I read your Article. Really nice...
Inside that you said that for
The post jar issues a hard commit (openSearcher=true) as part of the
operation. As Liu says, you are probably not committing the changes
after ingestion.
You can issue this from a browser:
.solr/collection/update?commit=true
to force a commit manually.
Best,
Erick
On Thu, Mar 19, 2015 at
Hmm, not all that sure. That's one thing about schemaless indexing, it
has to guess. It does the best it can, but it's quite possible that it
guesses wrong.
If this is a managed schema, you can use the REST API commands to
make whatever field you want. Or you can start over with a concrete
Looks like it is still broken.
The fixed names of the system properties zkCredentialsProvider and zkACLProvider
only take effect in the zkcli.sh script (org.apache.solr.cloud.ZkCLI).
So using the command below, I'm able to *bootstrap* and *upconfig* to
ZooKeeper with the appropriate credentials and ACLs:
On 3/19/2015 2:02 AM, davidphilip cherian wrote:
When I started solr in cloud mode(interactive) and chose 2 nodes, it
started and in the cloud-view screen it showed some different ip with url
169.254.5.207:7574, when clicked on that, it says page not found. When I
modified url to
Hello,
I am trying to use the 4.9.1 SOLR Core API and the 1.3.2.RELEASE version of the
Spring Data SOLR API, to connect to a SOLR server, but to no avail.
When I run the Java application, I get the following errors:
---
Exception in thread main
I bet the problem is how the SolrServer instance is used within the Spring
Repository. I think you should alternatively:
- explicitly close the client each time, or
- reuse the same instance (and finally close that).
But being a Spring newbie I cannot give you further information.
Best,
Then you just have to remove the group.sort, especially if your group limit
is set to 1.
On 19 March 2015 at 16:57, kumarraj rajitpro2...@gmail.com wrote:
*if the number of documents in one group is more than one then you cannot
ensure that this document reflects the main sort
Is there a way
: Does the Solr admin UI cloud view show the gettingstarted collection?
: The graph view might help. It _sounds_ like somehow you didn't
: actually create the collection.
: [Adnan]- Yes
if you can see the collection in the admin ui, can you please use the
Dump menu option in the Cloud section to
On 3/19/2015 12:24 AM, vicky desai wrote:
I fail to understand why this deleted docs are not removed from index on
merging. Is there a good documentation which explains how exactly is merging
done?
What can I do to solve this problem other than optimization?
Deleted docs *are* removed by
On 3/19/2015 2:09 AM, Derek Poh wrote:
Am I right to say we need to do the combining of duplicate records into 1
before feeding them to Solr to index?
I am coming from Endeca, which supports combining duplicate records
into 1 record during indexing. Was wondering if Solr supports this.
If you
Hi Shawn,
Thanks you for the detailed explanation.
On Thu, Mar 19, 2015 at 7:31 PM, Shawn Heisey apa...@elyograg.org wrote:
On 3/19/2015 2:02 AM, davidphilip cherian wrote:
When I started solr in cloud mode(interactive) and chose 2 nodes, it
started and in the cloud-view screen it showed
Sorry, I've been a bit unfocused from this list for a bit. When I was
working with the APTF code I rewrote a big chunk of it and didn't include
the inclusion of the original tokens as I didn't need it at the time. That
feature could easily be added back in. I will see if I can find a bit of
time
bq: Am I right to say we need to do the combining of duplicate records
into 1 before feeding them to Solr to index?
That's what I'd do. As Shawn says, if you simply fire them both at
Solr the more recent one will replace the older one.
Best,
Erick
On Thu, Mar 19, 2015 at 7:44 AM, Shawn Heisey
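A pre-indexing merge like the one described above can be sketched in plain Java (a hypothetical helper, not a Solr API; later field values win, mirroring Solr's last-document-wins behaviour for a repeated uniqueKey):

```java
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Hypothetical pre-indexing combiner: collapse records that share an "id"
// into a single record before sending them to Solr. Later field values
// overwrite earlier ones, the same way Solr keeps only the last document
// indexed with a given uniqueKey.
public class RecordMerger {
    public static Map<String, Map<String, Object>> combine(List<Map<String, Object>> records) {
        Map<String, Map<String, Object>> byId = new LinkedHashMap<>();
        for (Map<String, Object> rec : records) {
            String id = (String) rec.get("id");
            byId.computeIfAbsent(id, k -> new LinkedHashMap<>()).putAll(rec);
        }
        return byId;
    }
}
```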
On 3/19/2015 11:47 AM, abhishek tiwari wrote:
<autoSoftCommit><maxTime>500</maxTime></autoSoftCommit>
You're doing soft commits as often as twice a second. You have
configured 500 milliseconds here. This might have something to do with
your slow indexing speed. A soft commit is less expensive than
Hello all,
I have a Solr 4.10.3 collection with ~55 million documents (index size about
6GB) with a LatLonType field and a dynamic field for storing the coordinates,
like stated here
https://wiki.apache.org/solr/SpatialSearch#Schema_Configuration
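For reference, the schema configuration from that wiki page looks roughly like this (field names are illustrative):

```xml
<!-- LatLonType stores the lat,lon pair; the dynamic *_coordinate field
     holds the indexed sub-fields it creates internally. -->
<fieldType name="location" class="solr.LatLonType" subFieldSuffix="_coordinate"/>
<field name="store" type="location" indexed="true" stored="true"/>
<dynamicField name="*_coordinate" type="tdouble" indexed="true" stored="false"/>
```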
: Chris,
: Please find attached Dump
nothing jumps out at me as looking odd, but i'm not the expert on this
stuff either -- hopefully someone else can take a look.
can you provide us with some more details on what exactly you've done?
you said ...
: : What steps did you follow to create
Hi Henrique,
Please see the Solr reference guide instead of the “community wiki” you
referenced:
https://cwiki.apache.org/confluence/display/solr/Spatial+Search (you can
download one for 4.10; the online link is always for the latest).
For spatial filtering, *especially* at-scale, you really
Dear Apache Lucene/Solr enthusiast,
In just a few weeks, we'll be holding ApacheCon in Austin, Texas, and we'd love
to have you in attendance. You can save $300 on admission by registering NOW,
since the early bird price ends on the 21st.
Register at http://s.apache.org/acna2015-reg
ApacheCon
Thanks, David. I’m looking at it now.
On Mar 19, 2015, at 4:51 PM, david.w.smi...@gmail.com wrote:
Hi Henrique,
Please see the Solr reference guide instead of the “community wiki” you
referenced:
https://cwiki.apache.org/confluence/display/solr/Spatial+Search (you can
download one for
On Fri, Mar 13, 2015 at 1:43 PM, Dominique Bejean
dominique.bej...@eolya.fr wrote:
Thank you for the response
This is something Heliosearch can do. Ionic Seeley created a JIRA ticket
to back-port this feature to Solr 5.
Oh, I'm charged now, am I? ;-)
It's been committed, and will be in
Are you using a SolrJ client from 4.x to connect to a Solr 5 cluster?
On Wed, Mar 18, 2015 at 1:32 PM, Adnan Yaqoob itsad...@gmail.com wrote:
I'm getting following exception while trying to upload document on
SolrCloud using CloudSolrServer.
Exception in thread main
*if the number of documents in one group is more than one then you cannot
ensure that this document reflects the main sort
Is there a way the top record which is coming up in the group is considered
for sorting?
We need to show the record from 212 (even though its price is low) in both
cases
David
starting 1st node
bin\solr start -cloud -p 8983 -s C:\Java\solr-5.0.0\example\cloud\node1\solr
starting 2nd node
--
bin\solr start -cloud -p 7574 -s C:\Java\solr-5.0.0\example\cloud\node2\solr -z
localhost:9983
The third would be similar to
Erick
Does the Solr admin UI cloud view show the gettingstarted collection?
The graph view might help. It _sounds_ like somehow you didn't
actually create the collection.
[Adnan]- Yes
What steps did you follow to create the collection in SolrCloud? It's
possible you have the wrong ZK root somehow