Hi,
What's the latest non-standard version of this function? v3, right? If
Basho adds more versions to this, is that documented somewhere?
For our nodes the standard choose/wants claim functions were doing a weird
distribution; the numbers even out a bit better (just a bit better) by
using v3,
Hi Matthew!
I have the possibility of moving the anti-entropy directory's data to a
mechanical 7200 RPM disk that exists on each of the machines. I was thinking of
changing the anti_entropy data dir setting in the app.config file and restarting
the Riak process.
Is there any problem using a mechanical disk to
Yes, you can send the AAE (active anti-entropy) data to a different disk.
AAE calculates a hash each time you PUT new data to the regular database. AAE
then buffers around 1,000 hashes (I forget the exact value) to write as a block
to the AAE database. The AAE write is NOT in series with
Hey there. There are a couple of things to keep in mind when deleting
invalid AAE trees from the 1.4.3-1.4.7 series after upgrading to 1.4.8:
* If AAE is disabled, you don't have to stop the node to delete the data in
the anti_entropy directories
* If AAE is enabled, deleting the AAE data in a
Thanks, I'll start the process and give you guys some feedback in the
meantime.
The plan is
1 - Disable AAE in the cluster via riak attach:
rpc:multicall(riak_kv_entropy_manager, disable, []).
rpc:multicall(riak_kv_entropy_manager, cancel_exchanges, []).
2 - Update the app.config
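As a sketch, the changed entry in app.config could look like the following (the mount point below is a placeholder; the real path depends on where the mechanical disk is mounted on your machines):

```erlang
%% app.config -- riak_kv section (other settings omitted)
{riak_kv, [
    %% Put the AAE hash trees on the mechanical disk.
    %% "/mnt/spinning/anti_entropy" is a placeholder path.
    {anti_entropy_data_dir, "/mnt/spinning/anti_entropy"}
]}
```

Each node then needs a restart to pick up the new location.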
Hi Guido,
What's the latest non-standard version of this function? v3, right? If Basho
adds more versions to this, is that documented somewhere?
For our nodes the standard choose/wants claim functions were doing a weird
distribution; the numbers even out a bit better (just a bit better) by
Thanks Engel,
That approach looks very accurate; I would only suggest having a
riak-admin cluster stop-aae and a similar command for start, for the dummies ;-)
Guido.
On 10/04/14 14:22, Engel Sanchez wrote:
Hey there. There are a couple of things to keep in mind when deleting
invalid AAE trees from
For the mailing list's reference, this issue has been resolved by the following:
* Increase pb_backlog to 256 in the Riak app.config on all nodes
* Increase +zdbbl to 96000 in the Riak vm.args on all nodes
* Switch proxies from tengine (patched nginx) to HAProxy
* Reduce ring size from 256 to
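For reference, the first two changes map to config fragments like these, assuming Riak 1.4's layout (where the protocol buffers settings live in the riak_api section):

```erlang
%% app.config -- riak_api section (other settings omitted)
{riak_api, [
    %% Deeper listen backlog for protocol buffers connections
    {pb_backlog, 256}
]}
```

```
## vm.args -- raise the Erlang distribution buffer busy limit (in KB)
+zdbbl 96000
```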
What would be a high ring size that would degrade performance for v3:
128+? 256+?
I should have asked using the original response, but I deleted it by accident.
Guido.
On 10/04/14 10:30, Guido Medina wrote:
Hi,
What's the latest non-standard version of this function? v3, right? If
Basho adds
Dear mailing list,
Our production Riak cluster went down with a power outage. We restarted Riak
and found a lot of the following messages:
[error] <0.22012.0> Hintfile
'/var/lib/riak/bitcask/174124218510127105489110888272838406638695088128/142.bitcask.hint'
has bad CRC 28991324 expected 0
[error]
Sebastian,
Those errors are normal following an outage of this sort. Hintfiles will be
regenerated during the next bitcask merge and any data file that is
incomplete or has invalid entries will be truncated. This does result in a
loss of replicas but as long as at least 1 replica of the data is
Hey Brian,
Thanks a lot for the instant answer. I will try the steps from the
documentation on a test cluster, and if everything works fine I will try to
repair my production cluster.
Best regards
Sebastian
Sent from my iPad
On 10/04/14 20:20, Brian Sparrow wrote