No problem, that's even simpler. You can safely drop my PR to the docs
(it was somewhat hard to explain what to do in a way consistent with the
other steps anyway). I will test the procedure from the beginning with the
one-command branch this weekend, but it seems it's even better now (and it
I'm trying to work with Riak from PHP. I read the documentation, but no luck.
I have already set the search option to true in /etc/init.d/app.conf.
Still, print_r($results) returns an empty array.
<?php
require_once('riakclient.php');
$client = new RiakClient('127.0.0.1', 8098); // HTTP interface on the default port
$bucket =
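A rough sketch of what a follow-up search query could look like, assuming the legacy Riak Search Solr-style HTTP endpoint on port 8098 with JSON output; the bucket name "searchbucket" and field "title_s" are placeholders, not taken from the original message:

<?php
// Placeholder bucket/field names; assumes riak_search's Solr-compatible
// HTTP interface (/solr/<bucket>/select) and that allow_url_fopen is on.
$q   = urlencode('title_s:hello');
$url = "http://127.0.0.1:8098/solr/searchbucket/select?q={$q}&wt=json";
$raw = file_get_contents($url);
$res = json_decode($raw, true);
print_r($res['response']['docs']); // an empty list here means nothing matched the query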
Hello,
We are running a 6-node Riak 1.3.0 cluster in production. We recently
upgraded to 1.3. Prior to this, we were running Riak 1.2 on the same 6-node
cluster.
We are finding that the nodes are not balanced. For instance:
= Membership
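The "= Membership" table above is the output of riak-admin member-status. As a sketch (command names from the 1.3-era docs, no output from this cluster), the usual way to check ownership balance and stuck handoffs from any node is:

riak-admin member-status   # ring ownership percentage per node
riak-admin ring-status     # pending ownership changes and unreachable nodes
riak-admin transfers       # handoffs currently waiting or in flight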
I have exactly the same problem with my cluster. If anyone knows what those
errors mean... :-)
Godefroy
2013/3/28 Giri Iyengar giri.iyen...@sociocast.com
Hello,
We are running a 6-node Riak 1.3.0 cluster in production. We recently
upgraded to 1.3. Prior to this, we were running Riak 1.2 on
Godefroy:
Thanks. Your email exchange on the mailing list was what prompted me to
consider switching to Riak 1.3. I do see repair messages in the console
logs, so some healing is happening. However, there are a bunch of hinted
handoffs and ownership handoffs that are simply not proceeding
Giri, I've seen similar issues in the past when someone was adjusting
their ttl setting on the memory backend. Because one memory backend
has it and the other does not, it fails on handoff. The solution
then was to make sure that all memory backend settings are the same
and then do a rolling
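For reference, the ttl in question sits in the memory backend's entry of the riak_kv multi_backend section of app.config. A rough sketch of that fragment, with placeholder backend names, paths, and values (none taken from this cluster); the point is that the ttl setting must be identical on every node:

{riak_kv, [
  {storage_backend, riak_kv_multi_backend},
  {multi_backend_default, <<"eleveldb_mult">>},
  {multi_backend, [
    %% ttl is in seconds, max_memory in MB; a mismatch between nodes
    %% can make handoff from a memory-backed vnode fail
    {<<"memory_mult">>, riak_kv_memory_backend, [{max_memory, 4096}, {ttl, 86400}]},
    {<<"eleveldb_mult">>, riak_kv_eleveldb_backend, [{data_root, "/var/lib/riak/leveldb"}]}
  ]}
]}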
Evan,
I verified that all of the memory backends have the same ttl settings and
have done rolling restarts, but it doesn't seem to make a difference. One
thing to note, though: I remember this problem starting roughly around the
time I migrated a bucket from being backed by leveldb to being
It would if some of the nodes weren't migrated to the new
multi-backend schema; if a memory-backed node was trying to hand off to an
eleveldb-backed node, you'd see this.
On Thu, Mar 28, 2013 at 2:05 PM, Giri Iyengar
giri.iyen...@sociocast.com wrote:
Evan,
I verified that all of the memory backends
Evan,
I reconfirmed that all the servers are using identical app.configs. They
all use the multi-backend schema. Are you saying that some of the vnodes are
in the memory backend on one physical node and in the eleveldb backend on
another physical node?
If so, how can I fix the offending vnodes?
Thanks,
Giri,
if all of the nodes are using identical app.config files (including
the joining node) and have been restarted since those files changed,
it may be some other, related issue.
On Thu, Mar 28, 2013 at 2:46 PM, Giri Iyengar
giri.iyen...@sociocast.com wrote:
Evan,
I reconfirmed that all the
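As a hedged pointer (not something suggested in the thread): each node can report which backend every local partition is using, which is one way to spot vnodes that ended up on the wrong backend:

riak-admin vnode-status   # run on each node; lists each local partition with its backend status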
Evan,
All nodes have been restarted (more than once, in fact) after the config
changes. Using riak-admin aae-status, I noticed that the anti-entropy
repair is still proceeding across the cluster.
It has been less than 24 hours since I upgraded to 1.3 and maybe I have to
wait till the first
No. AAE is unrelated to the handoff subsystem. I am not familiar
enough with the lowest level of its workings to know whether it would
reproduce the TTL settings on nodes that don't have them.
I am not totally sure about your timeline here.
When did you start seeing these errors, before or after your
Evan,
This has been happening for a while (about 3.5 weeks now), even prior
to our upgrade to 1.3.
-giri
On Thu, Mar 28, 2013 at 6:36 PM, Evan Vigil-McClanahan
emcclana...@basho.com wrote:
No. AAE is unrelated to the handoff subsystem. I am not familiar
enough with the lowest level of