the
shard names explicitly, you have to use the CoreAdmin API to create each core -
that lets you set the shard id.
- Mark
On Feb 25, 2013, at 8:14 PM, Ryan Zezeski rzeze...@gmail.com wrote:
I would like to see a
similar fix made upstream and that is why I am posting here.
Please file a JIRA issue and attach your patch. Great write up! (Saw it pop up
on twitter, so I read it a little earlier).
- Mark
of that.
--
- Mark
You have to put the jar on each node on a std lib dir. Same as non Solrcloud
mode.
We will be adding support to put the jars in zookeeper in an upcoming release.
Mark
Sent from my iPhone
On Feb 24, 2013, at 7:25 AM, mitcoe4 mitc...@gmail.com wrote:
hi,
We are using solr 4.0.0. We want
You either have to specifically upload a config set or use one of the bootstrap
sys props.
Are you doing either?
- Mark
On Feb 24, 2013, at 8:15 PM, Darren Govoni dar...@ontrenet.com wrote:
Thanks Michael.
I went ahead and just started an external zookeeper, but my solr node throws
How are you doing the backup? You have to coordinate with Solr - files may be
changing when you try and copy it, leading to an inconsistent index. If you
want to do a live backup, you have to use the backup feature of the replication
handler.
- Mark
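For reference, the replication handler's backup command is invoked over HTTP; a minimal sketch, assuming Solr runs on localhost:8983 with a core named collection1 and the replication handler enabled in solrconfig.xml:

```shell
# Trigger a consistent snapshot of the index via the replication handler.
# Host, port, core name, and backup location are assumptions - adjust as needed.
curl "http://localhost:8983/solr/collection1/replication?command=backup&location=/backups/solr"
```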
On Feb 23, 2013, at 3:54 AM, Prakhar Birla
have to bring that logic forward as well.
(Long story of why I'd want to do all this... and I know people think
adding ~2 to all tokens will give bad results anyway, trying to fix
inherited code that can't be scrapped, etc)
--
Mark Bennett / New Idea Engineering, Inc. / mbenn...@ideaeng.com
Direct
We are fixing this bug here: https://issues.apache.org/jira/browse/SOLR-4471
- Mark
On Feb 22, 2013, at 7:07 AM, Artyom ice...@mail.ru wrote:
I have the same problem. This bug appeared in 4.0 rarely, but 4.1 downloads
the full index every time.
It just means at some point a replication was done that required flipping to a
new directory. It's expected. Once you flip from the index directory to an
index.timestamp directory, you never go back.
- Mark
On Feb 22, 2013, at 8:14 PM, Mingfeng Yang mfy...@wisewindow.com wrote:
I see
You could copy each shard to a single node and then use the merge index feature
to merge them into one index and then start up a single Solr node on that. Use
the same configs.
- Mark
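The merge step described above can be done with the CoreAdmin mergeindexes action once the shard index directories have been copied onto one machine. A sketch, where the host, target core name, and paths are assumptions:

```shell
# Merge copied shard indexes into a single target core.
# The target core must already exist and should not receive updates during the merge.
curl "http://localhost:8983/solr/admin/cores?action=mergeindexes&core=merged&indexDir=/data/shard1/index&indexDir=/data/shard2/index"
# Commit afterwards so the merged segments become visible to searches.
curl "http://localhost:8983/solr/merged/update?commit=true"
```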
On Feb 22, 2013, at 8:11 PM, Erol Akarsu eaka...@gmail.com wrote:
I have a solr cloud 7 nodes, each has 2
It's not really any different in SolrCloud than pre-cloud - distrib search is
still the same code done the same way, by and large.
shards.qt should be just as valid an option as forcing a query component.
- Mark
On Feb 21, 2013, at 7:56 AM, AlexeyK lex.kudi...@gmail.com wrote:
In pre-cloud
The leader doesn't really do a lot more work than any of the replicas, so I
don't think it's likely that important. If someone starts running into
problems, that's usually when we start looking for solutions.
- Mark
On Feb 21, 2013, at 10:20 PM, Vaillancourt, Tim tvaillanco...@ea.com wrote
SolrJ update requests order deletes and adds in
the same request either, so that would also need to be addressed. Pretty sure
solrj will do the adds then the deletes.
- Mark
On Feb 19, 2013, at 2:23 PM, Vinay Pothnis vinay.poth...@gmail.com wrote:
Hello,
I have the following set up
Swap is unsupported - really it should throw an exception right now.
There is a JIRA issue to add support for swap in SolrCloud mode of some kind.
- Mark
On Feb 20, 2013, at 7:59 PM, Rollin.R.Ma (lab.sh04.Newegg) 41099
rollin.r...@newegg.com wrote:
Hi
I am a newer to solrCloud, I
On Feb 19, 2013, at 9:16 AM, Markus Jelsma markus.jel...@openindex.io wrote:
Ah, thanks. Got a Jira? I don't think i'm watching that one right now.
https://issues.apache.org/jira/browse/SOLR-3154
- Mark
think.
- Mark
On Feb 20, 2013, at 10:08 AM, Shankar Sundararaju shan...@ebrary.com wrote:
Hi All,
I am using Solr 4.1.
I have a Solr cluster of 3 leaders and 3 replicas hosting collection1
consisting of thousands of documents currently serving the search requests.
I would like re-index
Can you give some more details? When you look at the cloud tab of the admin UI,
does the cluster visualization look right? Are all the nodes green? Perhaps it's
a single shard with a leader and a replica, and you just think it's 2 shards?
- Mark
On Feb 20, 2013, at 8:26 PM, rulinma ruli
currently.
I wonder if one can commit to only one leader, rather than sending every doc to
different leaders according to the shards.
That's just an optimization. Updates are forwarded to the right node no matter
which one you originally send them to.
- Mark
leader and just move on to the next candidate.
There are still some tricky corner cases to deal with as well.
I think for most things you would use this to solve, there is probably
an alternate thing that should be addressed.
- Mark
On Mon, Feb 18, 2013 at 4:15 PM, Vaillancourt, Tim tvaillanco
We need to see more of your logs to determine why - there should be some
exceptions logged.
- Mark
On Feb 18, 2013, at 9:47 AM, Cool Techi cooltec...@outlook.com wrote:
I am seeing the following error in my Admin console and the core/ cloud
status is taking forever to load.
SEVERE
busy with other stuff..
Concerning CloudSolrServer, there is a JIRA to make it hash and send updates to
the right leader, but currently it still doesn't - it just favors leaders in
general over non leaders currently.
- Mark
On Feb 18, 2013, at 7:34 AM, Markus Jelsma markus.jel...@openindex.io
Not sure - any other errors? An optimize once a day is a very heavy operation
by the way! Be sure the gains are worth the pain you pay.
- Mark
On Feb 18, 2013, at 10:04 AM, adm1n evgeni.evg...@gmail.com wrote:
Hi,
I'm running SolrCloud (Solr4) with 1 core, 8 shards and zookeeper
My index
merge parameters and avoid optimize altogether. It's usually premature
optimization that leads to the overuse of optimize, and it's usually unnecessary
and quite costly.
- Mark
On Feb 18, 2013, at 11:12 AM, adm1n evgeni.evg...@gmail.com wrote:
Thanks for your response.
No, nothing else. Only those
On Feb 15, 2013, at 6:04 AM, o.mares ota.mares+nab...@gmail.com wrote:
Hey when running a solr cloud setup with 4 servers, managing 3 cores each
split across 2 shards, what are the proper steps to do a full index import?
Do you have to import the index on all of the solr instances? Or is it
Sounds like you should file a JIRA issue.
- Mark
On Feb 15, 2013, at 6:07 PM, Charton, Andre achar...@ebay-kleinanzeigen.de
wrote:
Hi,
I upgraded solr from 3.6 to 4.1. Since then the replication is a full copy of
the index from master.
Master is delta import via DIH every 10min. Slave poll
For 4.2, I'll try and put in https://issues.apache.org/jira/browse/SOLR-4078
soon.
Not sure about the behavior you're seeing - you might want to file a JIRA issue.
- Mark
On Feb 15, 2013, at 8:17 PM, Gary Yngve gary.yn...@gmail.com wrote:
Hi all,
I've been unable to get the collections
I don't know - by chance, I'm actually doing about the same sequence of events
right now with Solr 4.1, and the cores are running fine…
What do the logs say?
- Mark
On Feb 14, 2013, at 10:18 PM, Anirudha Jadhav aniru...@nyu.edu wrote:
*1.empty Zookeeper*
*2.empty index directories for solr
A search for id is much too broad. I looked at 3 of the SolrCloud classes you
mention and none of those id's have anything to do with the unique field in
the schema. I have not looked at the hash based router, but if you find a real
issue then please file a JIRA issue.
- Mark
On Feb 12, 2013
On Feb 13, 2013, at 1:17 PM, Amit Nithian anith...@gmail.com wrote:
doesn't it do a commit to force solr to recognize the changes?
yes.
- Mark
well at the least and probably open a JIRA
issue to address if possible - of course if it could easily be addressed, I'm
sure Yonik would have done it when he wrote it.
- Mark
On Feb 13, 2013, at 1:25 PM, Mark Miller markrmil...@gmail.com wrote:
A search for id is much too broad. I looked at 3
Yes, though the reasons are not so interesting.
Soon solr.xml is going away regardless - perhaps in another release or two.
- mark
On Feb 13, 2013, at 2:02 PM, Anirudha Jadhav aniru...@nyu.edu wrote:
is there a strong reason why we still need solr.xml on disk and it cannot
be persisted
of excludes, facet queries and
grouping and got this working?
Kind Regards,
Mark
configurable. In solr.xml change the
cores attribute leaderVoteWait to n milliseconds or 0. It defaults to 180000 ms
(3 minutes).
- Mark
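As a sketch, the attribute goes on the cores element of solr.xml (the adminPath value and the 10-second wait here are just example assumptions):

```xml
<solr persistent="true">
  <!-- leaderVoteWait is in milliseconds; 0 disables the wait entirely -->
  <cores adminPath="/admin/cores" leaderVoteWait="10000">
    <!-- core definitions -->
  </cores>
</solr>
```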
On Feb 12, 2013, at 8:31 AM, adm1n evgeni.evg...@gmail.com wrote:
Hi all,
the first question:
is there a way to reduce the timeout when a solr shard comes up? it looks
Yonik looked into it and said the process was actually fine in his testing.
After the release, we did find one issue - if you don't explicitly set the
host, the host 'guess' feature has changed and may guess a different address.
- Mark
On Feb 11, 2013, at 1:16 PM, Shawn Heisey s
to both
dcs.
- Mark
On Feb 11, 2013, at 2:43 PM, mizayah miza...@gmail.com wrote:
This is a good solution.
One thing here is really annoying: the double indexing.
Is there a way to replicate to another dc? Seems solrcloud can't use its
earlier replication.
Would be nice if I could replicate somehow
Eventually, I'll get around to trying some more real world testing. Up till
now, no dev seems to have a real interest in this. I have 0 need for it
currently, so it's fairly low on my itch scale, but it's on my list anyhow.
- Mark
On Feb 11, 2013, at 12:26 PM, Shawn Heisey s...@elyograg.org
Doesn't sound right to me. I'd guess you heard wrong.
- mark
Sent from my iPhone
On Feb 11, 2013, at 7:15 PM, Shawn Heisey s...@elyograg.org wrote:
I have heard that SolrCloud may require the presence of a uniqueKey field
specifically named 'id' for sharding.
Is this true? Is it still
Looks odd - the supposedly missing class looks like an inner class in
MultiPhraseQuery.
- Mark
On Feb 9, 2013, at 6:19 AM, Markus Jelsma markus.jel...@openindex.io wrote:
Any ideas so far? I've not yet found anything that remotely looks like the
root of the problem so far
Did you clear the data dir for all 3 zk's? If not, you will find ghosts coming
back to haunt you :)
It's often easier to clear zk programmatically - for example it's one call from
the cmd line zkcli script.
http://wiki.apache.org/solr/SolrCloud#Command_Line_Util
- Mark
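Clearing the SolrCloud state programmatically looks roughly like this (the script path, zkhost address, and chroot path are assumptions):

```shell
# Wipe the SolrCloud state stored under the given path in ZooKeeper.
# Stop the Solr nodes first, or they will re-register stale state.
./zkcli.sh -zkhost localhost:2181 -cmd clear /solr
```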
On Feb 9, 2013, at 1
Nothing will ever open a new searcher unless you explicitly send a commit with
openSearcher=true.
Either change openSearcher on your auto hard commit to true, or start using
soft commit for visibility.
- Mark
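In solrconfig.xml terms, that combination looks something like the following (the interval values are assumptions, not recommendations):

```xml
<updateHandler class="solr.DirectUpdateHandler2">
  <!-- Hard commits flush to disk and relieve pressure on the tlog, but with
       openSearcher=false they do not affect search visibility. -->
  <autoCommit>
    <maxTime>60000</maxTime>
    <openSearcher>false</openSearcher>
  </autoCommit>
  <!-- Soft commits make recently indexed documents visible to searches. -->
  <autoSoftCommit>
    <maxTime>5000</maxTime>
  </autoSoftCommit>
</updateHandler>
```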
On Feb 9, 2013, at 12:44 PM, Alexandre Rafalovitch arafa...@gmail.com wrote:
Hello
on that flushed segment.
- Mark
On Feb 7, 2013, at 11:29 PM, Alexandre Rafalovitch arafa...@gmail.com wrote:
Hello,
What actually happens when using soft (as opposed to hard) commit?
I understand the very high-level picture somewhat (documents become available
faster, but you may lose them on power
You can unload the core for that node and it will be removed from zookeeper.
You can add it back afterward if you leave its state on disk and recreate the core.
- Mark
On Feb 7, 2013, at 5:20 AM, yriveiro yago.rive...@gmail.com wrote:
Hi,
Exists any way to eject a node from a solr cluster
the old master-slave architecture as one
option.
With a small amount of dev, having some polling replication for the index side
and using solrcloud for the search side might be possible, though not
necessarily a perfect marriage.
- Mark
Re (2): Deploying new schema/config should
option that
guarantees a replication.
- Mark
On Feb 6, 2013, at 4:23 PM, Gregg Donovan gregg...@gmail.com wrote:
In the process of upgrading from 3.6 to 4.1, we've noticed that much of the
code we had that relied on the 3.6 behavior of SolrCore#getIndexDir() is
not working the same way
on and what are you seeing now?
- Mark
Thanks Gregg - can you file a JIRA issue?
- Mark
On Feb 6, 2013, at 5:57 PM, Gregg Donovan gregg...@gmail.com wrote:
Mark-
You're right that SolrCore#getIndexDir() did not directly read
index.properties in 3.6. In 3.6, it gets it indirectly from what is passed
to the constructor
the cluster. Stop/remove the tmp node.
- Mark
On Feb 5, 2013, at 12:22 PM, Mike Schultz mike.schu...@gmail.com wrote:
Just to clarify, I want to be able to replace the down node with a host with
a different name. If I were repairing that particular machine and replacing
it, there would
The request should give you access to the core - the core to the core
descriptor, the descriptor to the core container, which knows about all the
cores.
- Mark
On Feb 5, 2013, at 4:09 PM, Ryan Josal rjo...@rim.com wrote:
Hey guys,
I am writing an UpdateRequestProcessorFactory plugin
The SolrCoreAware interface?
- Mark
On Feb 5, 2013, at 5:42 PM, Ryan Josal rjo...@rim.com wrote:
By way of the deprecated SolrCore.getSolrCore method,
SolrCore.getSolrCore().getCoreDescriptor().getCoreContainer().getCores()
Solr starts up in an infinite recursive loop of loading cores
What led you to trying that? I'm not connecting the dots in my head - the
exception and the solution.
- Mark
On Feb 3, 2013, at 2:48 PM, Marcin Rzewucki mrzewu...@gmail.com wrote:
Hi,
I think the issue was not in zk client timeout, but POST request size. When
I increased the value
Do you see anything about session expiration in the logs? That is the likely
culprit for something like this. You may need to raise the timeout:
http://wiki.apache.org/solr/SolrCloud#FAQ
If you see no session timeouts, I don't have a guess yet.
- Mark
On Feb 2, 2013, at 7:35 PM, Marcin
You can use 'none' for the lock type in solrconfig.xml.
You risk corruption if two IW's try to modify the index at once though.
- Mark
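In Solr 4.x the lock type lives in the indexConfig section of solrconfig.xml; a sketch:

```xml
<indexConfig>
  <!-- 'none' disables index locking entirely. As noted above, this risks
       corruption if two IndexWriters modify the index at the same time. -->
  <lockType>none</lockType>
</indexConfig>
```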
On Feb 1, 2013, at 6:56 PM, dm_tim dm_...@yahoo.com wrote:
Well that makes sense. The problem is that I am working in both Solr and
Lucene directly. I have
likely).
I'd say jetty doesn't get any more blessed than that. If you want to run another
container, fine, but I would pick jetty myself - specifically, the one we ship
with, unless you have a darn good reason not to.
- Mark
The admin user interface and admin/cores are two very different things - they
just happen to share admin in the url.
It doesn't make any sense to secure admin/cores unless you are also going to
secure all the other Solr API's.
- Mark
On Jan 30, 2013, at 5:55 AM, AlexeyK lex.kudi...@gmail.com
to be incomplete.
I don't think it is? What is missing?
- Mark
On Jan 29, 2013, at 3:50 PM, Gregg Donovan gregg...@gmail.com wrote:
should we
just try uncommenting that line in ReplicationHandler?
Please try. I'd file a JIRA issue in any case. I can probably take a closer
look.
- Mark
I think this has come up on the mailing list before. I don't remember the
details, but you want to restrict the admin UI but not the CoreAdmin url -
/admin/cores.
- Mark
On Jan 28, 2013, at 4:37 PM, Marcin Rzewucki mrzewu...@gmail.com wrote:
Hi,
If you add security constraint for /admin
Hey Shawn - got a suggestion for an addition for the wiki that would
have saved you some time here?
- Mark
On Sat, Jan 26, 2013 at 1:22 PM, Shawn Heisey s...@elyograg.org wrote:
On 1/26/2013 6:31 AM, Per Steffensen wrote:
We have actually tested this and found that the following will do
I don't have any targeted advice at the moment, but just for kicks, you might
try using Solr 4.1.
- Mark
On Jan 25, 2013, at 2:47 PM, Sean Siefert s...@gumiyo.com wrote:
So I have quite a few cores already where this exact (as far as replication
is concerned) solrconfig.xml works. The other
Yeah, I've noticed this too in some distrib search tests (it's not SolrCloud
related per se I think, but just distrib search in general).
Want to open a JIRA issue about making this consistent?
- Mark
On Jan 25, 2013, at 2:39 PM, Mingfeng Yang mfy...@wisewindow.com wrote:
We are migrating
for
production.
Post any questions with your results if you could. Perhaps we can beef up the
wiki a bit so others don't hit the same issues.
- Mark
, pass the numShards param.
The CoreAdmin API works with it.
You can pass it for every call, but the first call is the critical one.
- Mark
be an important feature
for those upgrading from 4.0).
You can also always explicitly set the host address. I would recommend this for
production. It's the host param in solr.xml and by default it's set up so that
you can pass a sys prop to set it on startup.
- Mark
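A sketch of that solr.xml setup, where the sys-prop defaults are assumptions modeled on the stock 4.x example config:

```xml
<solr persistent="true">
  <!-- host is empty by default, which triggers the address 'guess';
       pass -Dhost=... on startup to pin it explicitly. -->
  <cores adminPath="/admin/cores" host="${host:}" hostPort="${jetty.port:8983}">
    <!-- core definitions -->
  </cores>
</solr>
```

Started with e.g. `java -Dhost=10.1.2.3 -jar start.jar` (the address is a placeholder).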
On Jan 25, 2013, at 3:37 PM, davers
and remove the stuff left on the filesystem.
- Mark
On Jan 25, 2013, at 7:42 PM, Mingfeng Yang mfy...@wisewindow.com wrote:
Right now I have an index with four shards on a single EC2 server, each
running on different ports. Now I'd like to migrate three shards
to independent servers.
What should
On Jan 24, 2013, at 7:05 AM, Shawn Heisey s...@elyograg.org wrote:
My experience has been that you put the chroot at the very end, not on every
host entry
Yup - this came up on the mailing list not too long ago and it's currently
correctly documented on the SolrCloud wiki.
- Mark
{initShardHandler(null);}
};
- Mark
On Jan 24, 2013, at 9:22 AM, Ted Merchant ted.merch...@cision.com wrote:
We recently updated from Solr 4.0.0 to Solr 4.1.0. Because of the change we
were forced to upgrade a custom query parser. While the code change itself
was minimal, we found
/SolrCloud#Command_Line_Util to upload a new
schema.xml - then just Collections API reload command. Two lines in a script.
- Mark
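Those two lines look roughly like this (the zkhost, conf directory, config name, and collection name are assumptions):

```shell
# 1. Upload the edited config set to ZooKeeper via the zkcli script.
./zkcli.sh -zkhost localhost:2181 -cmd upconfig -confdir ./conf -confname myconf
# 2. Reload the collection so all cores pick up the new schema.xml.
curl "http://localhost:8983/solr/admin/collections?action=RELOAD&name=collection1"
```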
posting I know.
Testing and reporting on the issue I posted, as well as discussion around
expanding it, will likely help pushing those features forward.
- Mark
Was your full log stripped? You are right, we need more. Yes, the peer sync
failed, but then you cut out all the important stuff about the replication
attempt that happens after.
- Mark
On Jan 23, 2013, at 5:28 AM, Marcin Rzewucki mrzewu...@gmail.com wrote:
Hi,
Previously, I took
Looks like it shows 3 cores start - 2 with versions that decide they are up to
date and one that replicates. The one that replicates doesn't have much logging
showing that activity.
Is this Solr 4.0?
- Mark
On Jan 23, 2013, at 9:27 AM, Upayavira u...@odoko.co.uk wrote:
Mark,
Take a peek
Does the admin cloud UI show all of the nodes as green? (active)
If so, something is not right.
- Mark
On Jan 23, 2013, at 10:02 AM, Roupihs joey.d...@gmail.com wrote:
I have a one shard collection, with one replica.
I did a dataImport from my oracle DB.
In the master, I have 93835 docs
It's hard to guess, but I might start by looking at what the new UpdateLog is
costing you. Take its definition out of solrconfig.xml and try your test
again. Then let's take it from there.
- Mark
On Jan 23, 2013, at 11:00 AM, Kevin Stone kevin.st...@jax.org wrote:
I am having some
effects between queries.
- Mark
Yeah, I don't know what you are seeing offhand. You might try Solr 4.1 and see
if it's something that has been resolved.
- Mark
On Jan 23, 2013, at 3:14 PM, Marcin Rzewucki mrzewu...@gmail.com wrote:
Guys, I pasted you the full log (see pastebin url). Yes, it is Solr4.0. 2
cores are in sync
http calls. The
data directory is a local property for each SolrCore and other nodes in the
cloud do not need to know about it.
- Mark
On Jan 22, 2013, at 11:33 AM, Otis Gospodnetic otis.gospodne...@gmail.com
wrote:
Thanks Markus. Yes, I'm after the actual, physical directory on the local
FS
The logging shows that its finding transaction log entries.
Are you doing anything else while bringing the nodes up and down? Indexing? Are
you positive you removed the tlog files? It can't really have any versions if it
doesn't read them from a tlog on startup...
- Mark
On Jan 22, 2013, at 3
No idea - logs might help.
- Mark
On Jan 22, 2013, at 4:37 PM, Marcin Rzewucki mrzewu...@gmail.com wrote:
Sorry, my mistake. I did 2 tests: in the 1st I removed just index directory
and in 2nd test I removed both index and tlog directory. Log lines I've
sent are related to the first case. So
Indexing should def not slow down substantially if you commit every minute or
something. Be sure to use openSearcher=false on the auto hard commit.
- Mark
On Jan 19, 2013, at 11:11 PM, Nikhil Chhaochharia nikhil...@yahoo.com wrote:
Hi,
We run a SolrCloud cluster using Solr 4.0
to the wiki.
- Mark
handler.
I think it would be nice to clean this up a bit somehow. Or document it better.
- Mark
On Jan 18, 2013, at 3:39 PM, Shawn Heisey s...@elyograg.org wrote:
On 1/18/2013 1:20 PM, Mike Schultz wrote:
Can someone explain the logic of not sending the qt parameter down to the
shards?
I
heap will likely mean a few second pauses
at least at some points. A well tuned concurrent collector will never stop the
world in most situations.
-XX:+UseConcMarkSweepGC
I wrote an article that might be useful a while back:
http://searchhub.org/2011/03/27/garbage-collection-bootcamp-1-0/
- Mark
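A sketch of startup flags along those lines (the heap sizes are assumptions; tune for your hardware):

```shell
# Fixed-size heap plus the CMS concurrent collector mentioned above.
java -Xms4g -Xmx4g \
     -XX:+UseConcMarkSweepGC -XX:+CMSParallelRemarkEnabled \
     -jar start.jar
```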
options is to setup solr.xml like you would locally, then start with
-Dconf_bootstrap=true and it will duplicate your local config and collection
setup in ZooKeeper.
- Mark
On Jan 17, 2013, at 9:10 PM, Shawn Heisey s...@elyograg.org wrote:
I'm trying to get a 2-node SolrCloud install off
was created
via the API, not via solr.xml, so I can't easily reconfigure the
collection.
You could just use the CoreAdmin API to create new replicas on whatever nodes.
- Mark
it try to hold them all in RAM?
And *if* the backlog caused an OOM condition, wouldn't that JVM have mostly
crashed (if not completely)?
Any guesses on the most likely failure point, and where to look?
Thanks,
Mark
--
Mark Bennett / New Idea Engineering, Inc. / mbenn...@ideaeng.com
Direct: 408
I've fixed this - thanks Gregg.
https://issues.apache.org/jira/browse/SOLR-4303
- Mark
On Jan 10, 2013, at 5:41 PM, Mark Miller markrmil...@gmail.com wrote:
Hmm…I don't recall that change. We use the force, so SolrCloud certainly does
not depend on it.
It seems like it might be a mistake
because the root
exception is being swallowed - it's likely a connect to zk failed exception
though.
- Mark
On Jan 10, 2013, at 1:34 PM, Christopher Gross cogr...@gmail.com wrote:
I'm trying to get SolrCloud working with more than one configuration going.
I have the base schema that Solr 4 comes
On Jan 10, 2013, at 12:06 PM, Shawn Heisey s...@elyograg.org wrote:
On 1/9/2013 8:54 PM, Mark Miller wrote:
I'd put everything into one. You can upload different named sets of config
files and point collections either to the same sets or different sets.
You can really think about
They point to the admin UI - or should - that seems right?
- Mark
On Jan 11, 2013, at 10:57 AM, Christopher Gross cogr...@gmail.com wrote:
I've managed to get my SolrCloud set up to have 2 different indexes up and
running. However, my URLs aren't right. They just point to
http
It may still be related. Even a non empty index can have no versions (eg one
that was just replicated). Should behave better in this case in 4.1.
- Mark
On Jan 10, 2013, at 12:41 AM, Zeng Lames lezhi.z...@gmail.com wrote:
thanks Mark. will further dig into the logs. there is another problem
Set up hard auto commit with openSearcher=false. I would do it at least once a
minute. Don't worry about the commit being out of sync on the different nodes -
you will be using soft commits for visibility. The hard commits will just be
about relieving the pressure on the tlog.
- Mark
On Jan 10
.
- Mark
On Jan 10, 2013, at 5:17 AM, mizayah miza...@gmail.com wrote:
Lets say i got one collection with 3 shards. Every shard contains indexed
data.
I want to unload one shard. Is there any way for the data from the unloaded
shard to not be lost?
How do I remove a shard with data without losing it?
about periodically flushing the tlog and the soft commit completely controls
visibility.
- Mark
On Jan 10, 2013, at 9:41 AM, Upayavira u...@odoko.co.uk wrote:
And you don't need to open a searcher (openSearcher=false) because
you've got caches built up already alongside the in-memory NRT
.
One of the tradeoffs of using a very fast soft commit is that Solr's std caches
will not be nearly as useful.
- Mark
On Jan 10, 2013, at 11:24 AM, Upayavira u...@odoko.co.uk wrote:
That's great Mark. Thx. One final question... all the stuff to do with
autowarming and static warming
and
look closer later - can't remember who made the change in Solr.
- Mark
use cases, you
might not worry about it.
- Mark
On Jan 10, 2013, at 2:33 PM, Upayavira u...@odoko.co.uk wrote:
Heh, the it depends answer :-)
Thanks for the clarification.
Upayavira
On Thu, Jan 10, 2013, at 05:01 PM, Mark Miller wrote:
I think it really depends - if you are going
.
- Mark
On Jan 10, 2013, at 4:49 PM, Gregg Donovan gregg...@gmail.com wrote:
Thanks, Mark.
The relevant commit on the solrcloud branch appears to be 1231134 and is
focused on the recovery aspect of SolrCloud:
http://svn.apache.org/viewvc?diff_format=hview=revisionrevision=1231134
http
Looks like we are talking about making a release candidate next week.
Mark
Sent from my iPhone
On Jan 10, 2013, at 7:50 PM, Zeng Lames lezhi.z...@gmail.com wrote:
thanks Mark. may I know the target release date of 4.1?
On Thu, Jan 10, 2013 at 10:13 PM, Mark Miller markrmil...@gmail.com
are green and the
collection consists of 3 shards. Each shard has 1 leader and 1 replica, each
hosted by a different Solr instance.
In other words, it seemed to work for me.
- Mark
On Jan 9, 2013, at 10:58 AM, James Thomas jtho...@camstar.com wrote:
Hi,
Simple question, I hope.
Using
It may be able to do that because it's forwarding requests to other nodes that
are up?
Would be good to dig into the logs to see if you can narrow in on the reason
for the recovery_failed.
- Mark
On Jan 9, 2013, at 8:52 PM, Zeng Lames lezhi.z...@gmail.com wrote:
Hi ,
we meet below
sets of
config files across collections if you want to. You don't need to at all though.
I'm not sure if xinclude works with zk, but I don't think it does.
- Mark
On Jan 9, 2013, at 10:31 PM, Shawn Heisey s...@elyograg.org wrote:
I have a lot of experience with Solr, starting with 1.4.0