Hi everyone!
Quick question: is it possible to update the PB ip:port entry at runtime,
without restarting the node?
Best regards!
--
Edgar Veiga
BySide
edgar.ve...@byside.com
http://www.byside.com
Rua Visconde Bóbeda, 70 r/c
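For reference, a minimal sketch of where this setting normally lives,
assuming a Riak 1.4-style app.config (the exact form may differ per
version; this shows the config file, not a runtime change):

    %% app.config, riak_api section
    {riak_api, [
        %% list of {IP, Port} pairs the protocol buffers API listens on
        {pb, [ {"127.0.0.1", 8087} ]}
    ]}

To my knowledge this is normally edited in app.config followed by a node
restart; whether a live rebind is possible at runtime is exactly the
question here.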
for reason: noproc
Please, can anyone help me with this? I'm starting to get worried about
this behaviour. Let me know if you need more info!
Thanks and Best regards,
Edgar Veiga
On 10 February 2015 at 16:16, Edgar Veiga edgarmve...@gmail.com wrote:
Hi all!
I have a riak cluster, working smoothly in production for about one
year, with the following characteristics:
- Version 1.4.12
- 6 nodes
- leveldb backend
- replication (n) = 3
- ~3 billion keys
- ~1.2TB per node
- AAE disabled
Two days ago I upgraded all of the 6 nodes from
suggests that some
tuning may be needed...
Best regards,
Edgar
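As a hedged starting point for this kind of post-upgrade error, the
standard status commands (available in Riak 1.4) show whether handoffs
are stuck or a member is unreachable; these are generic first checks,
not a diagnosis of the noproc error itself:

    riak-admin member-status   # cluster membership and ring ownership
    riak-admin transfers       # pending/active partition handoffs
    riak-admin ring-status     # ring convergence and unreachable nodes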
On 9 February 2015 at 14:54, Christopher Meiklejohn cmeiklej...@basho.com
wrote:
On Feb 6, 2015, at 1:53 AM, Edgar Veiga edgarmve...@gmail.com wrote:
It is expected that the total amount of data per node drops quite a
lot, correct
, Jan 24, 2015 at 9:49 PM, Edgar Veiga edgarmve...@gmail.com
wrote:
Yeah, after sending the email I realized both! :)
Thanks! Have a nice weekend
On 24 January 2015 at 21:46, Sargun Dhillon sar...@sargun.me wrote:
1) Potentially re-enable AAE after migration. As your cluster gets
bigger
Sargun,
Regarding 1): AAE is disabled. We had problems with it, and there are a lot
of threads here in the mailing list about this. AAE wouldn't stop using
more and more disk space, and the only solution was disabling it! Since then
the cluster has been pretty stable...
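For anyone following along, a sketch of how AAE is commonly switched off
at runtime from the console (riak_kv_entropy_manager is the internal
module usually cited for this; treat it as version-dependent):

    $ riak attach
    > riak_kv_entropy_manager:disable().
    > riak_kv_entropy_manager:cancel_exchanges().

Setting {anti_entropy, {off, []}} in the riak_kv section of app.config
makes it permanent across restarts.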
Regarding 6) Can you or
Hi Alexander! Thanks for the reply.
Actual ring size: 256;
Total amount of data on cluster: ~6.6TB (~1.1TB per node)
Best regards,
Edgar
On 24 January 2015 at 20:42, Alexander Sicular sicul...@gmail.com wrote:
I would probably add them all in one go so you have one vnode migration
plan that
divergence only becomes scarier in light of this. Losing data
!= awesome.
6) There shouldn't be any problems, but to be safe you should
probably upgrade the old ones before the migration.
On Sat, Jan 24, 2015 at 1:31 PM, Edgar Veiga edgarmve...@gmail.com
wrote:
Sargun,
Regarding
to store the anti-entropy data?
Best regards!
On 8 April 2014 23:58, Edgar Veiga edgarmve...@gmail.com wrote:
I'll wait a few more days to see if the AAE stabilises, and only
after that make a decision regarding this.
Expanding the cluster was on the roadmap, but not right now :)
I've
not be impacted. But this is
not something I have personally measured/validated.
Matthew
On Apr 10, 2014, at 7:33 AM, Edgar Veiga edgarmve...@gmail.com wrote:
Hi Matthew!
I have the possibility of moving the anti-entropy directory data to a
7200 rpm mechanical disk that exists on each
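If it helps, the AAE data location is a separate setting from the
leveldb data_root, so it can point at a different disk; a sketch,
assuming the riak_kv anti_entropy_data_dir setting in app.config (the
path shown is a placeholder):

    %% app.config, riak_kv section
    {anti_entropy_data_dir, "/mnt/spinning_disk/anti_entropy"}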
Hi Timo,
...So I stopped AAE on all nodes (with riak attach) and removed the AAE
folders on all the nodes. Then I restarted them one by one, so they
all started with a clean AAE state. About a day later the cluster
was finally in a normal state.
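For completeness, the per-node procedure described above roughly
corresponds to this sketch (the anti_entropy path assumes a default
package install and should be checked locally):

    riak attach              # then: riak_kv_entropy_manager:disable().
    riak stop
    rm -rf /var/lib/riak/anti_entropy/*    # hypothetical default path
    riak start               # AAE trees rebuild from scratch

done one node at a time, waiting for the node to settle before moving on.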
I don't understand the difference between what
Well, my anti-entropy folders on each machine hold ~120GB; it's quite a
lot!!!
I have ~600GB of data per server and a cluster of 6 servers with leveldb.
Just for comparison, what about you?
Can someone from Basho please advise on this one?
Best regards! :)
On 8 April 2014 11:02,
So Basho, to summarize:
I upgraded to the latest 1.4.8 version without removing the anti-entropy
data dir, because at the time that note wasn't yet in the 1.4.8 Release
Notes.
A few days later I did it: stopped AAE via riak attach and restarted
all the nodes one by one, removing the
. I made changes
recently to the aggressive delete code. The second section of the
following (updated) web page discusses the adjustments:
https://github.com/basho/leveldb/wiki/Mv-aggressive-delete
Matthew
On Apr 6, 2014, at 4:29 PM, Edgar Veiga edgarmve...@gmail.com wrote:
Matthew
, Matthew Von-Maszewski matth...@basho.com wrote:
Argh. Missed where you said you had upgraded. OK, I will proceed with
getting you comparison numbers.
Sent from my iPhone
On Apr 8, 2014, at 6:51 AM, Edgar Veiga edgarmve...@gmail.com wrote:
Thanks again Matthew, you've been very helpful
, Edgar Veiga edgarmve...@gmail.com wrote:
Thanks a lot Matthew!
A little more info: I've gathered a sample of the contents of the
anti-entropy data on one of my machines:
- 44 folders with names equal to the folder names in the leveldb
dir (i.e
/mv-tiered-options
This feature might give you another option in managing your storage
volume.
Matthew
On Apr 8, 2014, at 11:07 AM, Edgar Veiga edgarmve...@gmail.com wrote:
It makes sense. I do a lot, and I really mean a LOT, of updates per key,
maybe thousands a day! The cluster
, Edgar Veiga edgarmve...@gmail.com wrote:
Thanks Matthew!
Today this situation became unsustainable. On two of the machines I
have an anti-entropy dir of 250GB... It just keeps growing and growing, and
I'm almost reaching the max size of the disks.
Maybe I'll just turn off AAE in the cluster
with this? I no longer need
those millions of values that are living in the cluster...
When version 2.0 of Riak is running stable I'll do the upgrade, and only
then delete those keys!
Best regards
On 18 February 2014 16:32, Edgar Veiga edgarmve...@gmail.com wrote:
Ok, thanks a lot Matthew.
On 18
level.
I apologize that I cannot give you a more useful answer. 2.0 is on the
horizon.
Matthew
On Apr 6, 2014, at 7:04 AM, Edgar Veiga edgarmve...@gmail.com wrote:
Hi again!
Sorry to reopen this discussion, but I have another question regarding the
previous post.
What if, instead
it with a nodename distinct from that of the last
one, and force-replace the new node for the old, dead one.
On Tue, Mar 18, 2014 at 1:03 PM, Edgar Veiga edgarmve...@gmail.com wrote:
Hello all!
I have a 6 machine cluster with leveldb as backend, using riak 1.4.8
version.
Today, the SSD
the upgrade process...
Best regards!
On 6 March 2014 03:08, Scott Lystig Fritchie fritc...@snookles.com wrote:
Edgar Veiga edgarmve...@gmail.com wrote:
ev Is this normal?
Yes. One or more of your vnodes can't keep up with the workload
generated by AAE repair or a vnode can't keep up
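A quick, hedged way to see whether AAE exchanges are falling behind (the
aae-status command exists as of Riak 1.4):

    riak-admin aae-status   # last exchange, tree build times, and repairs per partition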
Hi all,
I have a cluster of 6 servers with leveldb as backend. Previously I was
on Riak version 1.4.6.
I've updated to 1.4.8 and since then the output logs of all of the
nodes are flooded with messages like this:
2014-03-03 08:52:21.619 [info]
In the Release Notes, there's a new section recommending the deletion of all
previous AAE info before upgrading to 1.4.8.
What are the risks (if any) of not doing this (the deletion), besides
wasting resources?
Best Regards
Hi all!
I have a fairly trivial question regarding mass deletion on a Riak cluster,
but first let me give you some context. My cluster is running
Riak 1.4.6 on 6 machines, with a ring size of 256 and 1TB SSD disks.
I need to execute a massive object deletion on a bucket; I'm talking
Sorry, forgot that info!
It's leveldb.
Best regards
On 18 February 2014 15:27, Matthew Von-Maszewski matth...@basho.com wrote:
Which Riak backend are you using: bitcask, leveldb, multi?
Matthew
On Feb 18, 2014, at 10:17 AM, Edgar Veiga edgarmve...@gmail.com wrote:
Hi all!
I have
your production throughput.
We have new code to help quicken the actual purge of deleted data in Riak
2.0. But that release is not quite ready for production usage.
What do you hope to achieve by the mass delete?
Matthew
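For context, a mass delete over the HTTP API is typically a streamed key
listing fed into per-key DELETEs; a minimal sketch (host, port, and
bucket name are placeholders, and key listing is expensive, so run it
against production with care):

    # stream all keys in the bucket (expensive operation)
    curl 'http://127.0.0.1:8098/buckets/mybucket/keys?keys=stream'
    # delete one object
    curl -XDELETE 'http://127.0.0.1:8098/buckets/mybucket/keys/somekey'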
On Feb 18, 2014, at 10:29 AM, Edgar Veiga edgarmve...@gmail.com
, 2014, at 11:10 AM, Edgar Veiga edgarmve...@gmail.com wrote:
The only/main purpose is to free disk space...
I was a little concerned about this operation, but with your
feedback I'm now inclined not to do anything; I can't risk the space
growing...
Regarding the overhead I think
,
but there doesn't seem to be any real advantage to lowering the thread
count.
Thanks for raising the issue.
-John
On Feb 3, 2014, at 10:51 AM, Edgar Veiga edgarmve...@gmail.com wrote:
Hi all,
I have a 6-machine cluster with a ring size of 256, with levelDB as
backend.
I've recently seen that this has appeared in the documentation:
If using LevelDB as the storage backend (which maintains its own I/O thread
pool), the number of async threads in Riak's default pool can be
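The async thread pool referred to here is the Erlang VM's, configured in
vm.args; a sketch of the relevant flag (64 is the usual 1.4-era shipped
value, shown for illustration, not as a recommendation):

    ## vm.args
    +A 64    # size of the Erlang async thread pool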
Also,
Using last_write_wins = true, do I need to always send the vclock with a
PUT request? In the official documentation it says that Riak will look only
at the timestamp of the requests.
Best regards,
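For reference, the vclock in the HTTP API travels in the X-Riak-Vclock
header: a GET returns it, and a subsequent PUT echoes it back. A minimal
sketch (bucket and key are placeholders):

    curl -i 'http://127.0.0.1:8098/buckets/mybucket/keys/k1'   # note X-Riak-Vclock in response
    curl -XPUT -H 'X-Riak-Vclock: <value from GET>' \
         -H 'Content-Type: application/json' -d '{"v":1}' \
         'http://127.0.0.1:8098/buckets/mybucket/keys/k1'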
On 29 January 2014 10:29, Edgar Veiga edgarmve...@gmail.com wrote:
Hi Russell
, Russell Brown russell.br...@me.com wrote:
On 30 Jan 2014, at 10:37, Edgar Veiga edgarmve...@gmail.com wrote:
Also,
Using last_write_wins = true, do I need to always send the vclock with a
PUT request? In the official documentation it says that Riak will look only
at the timestamp
, Riak is probably the wrong
tool. It will work, but there is other software that will work much better.
I hope this helps,
Jason Campbell
- Original Message -
From: Edgar Veiga edgarmve...@gmail.com
To: Russell Brown russell.br...@me.com
Cc: riak-users riak-users@lists.basho.com
Yes Eric, I understood :)
On 30 January 2014 23:00, Eric Redmond eredm...@basho.com wrote:
For clarity, I was responding to Jason's assertion that Riak shouldn't be
used as a cache, not to your specific issue, Edgar.
Eric
On Jan 30, 2014, at 2:54 PM, Edgar Veiga edgarmve...@gmail.com
Here's a (bad) mockup of the solution:
https://cloudup.com/cOMhcPry38U
Hope that this time I've made myself a little more clear :)
Regards
On 30 January 2014 23:04, Edgar Veiga edgarmve...@gmail.com wrote:
Yes Eric, I understood :)
On 30 January 2014 23:00, Eric Redmond eredm
). There is a tradeoff between speed and
consistency/reliability, and the whole application has to take advantage of
the extra consistency and reliability for it to make sense.
Sorry again,
Jason Campbell
- Original Message -
From: Edgar Veiga edgarmve...@gmail.com
To: Eric Redmond eredm
tl;dr
If I guarantee that the same key is only written with a 5 second interval,
is last_write_wins=true profitable?
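For anyone wanting to experiment, last_write_wins is a per-bucket
property settable over HTTP; a sketch (the bucket name is a placeholder):

    curl -XPUT -H 'Content-Type: application/json' \
         -d '{"props":{"last_write_wins":true}}' \
         'http://127.0.0.1:8098/buckets/mybucket/props'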
On 27 January 2014 23:25, Edgar Veiga edgarmve...@gmail.com wrote:
Hi there everyone!
I would like to know if my current application is a good use case to set
Hi Russell,
No, it doesn't depend. It's always a new value.
Best regards
On 29 January 2014 10:10, Russell Brown russell.br...@me.com wrote:
On 29 Jan 2014, at 09:57, Edgar Veiga edgarmve...@gmail.com wrote:
tl;dr
If I guarantee that the same key is only written with a 5 second interval
Hi there everyone!
I would like to know if my current application is a good use case to set
last_write_wins to true.
Basically I have a cluster of node.js workers reading and writing to riak.
Each node.js worker is responsible for a set of keys, so I can guarantee
some kind of non-distributed
{473846233978378680511350941857232385279071879168,'riak@192.168.20.112'}
and {479555224749202520035584085735030365824602865664,'riak@192.168.20.107'}
I have a few, but consistent, lines like this (every two hours, during this
process).
Best regards.
On 2 January 2014 10:05, Edgar Veiga edgarmve...@gmail.com wrote
},
{max_open_files, 20}]},
This might not solve your specific problem, but it will certainly improve
your AAE performance.
Thanks,
Charlie Voiselle
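For context, the fragment above sits in the eleveldb section of
app.config; a fuller sketch (values and path are illustrative, not
recommendations):

    %% app.config, eleveldb section
    {eleveldb, [
        {data_root, "/var/lib/riak/leveldb"},
        {max_open_files, 20}   %% per-vnode cap on open .sst files
    ]}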
On Dec 31, 2013, at 12:04 PM, Edgar Veiga edgarmve...@gmail.com wrote:
Hey guys!
Nothing on this one?
Btw: Happy new year :)
On 27 December 2013 22:35, Edgar Veiga edgarmve...@gmail.com wrote:
This is a du -hs * of the riak folder:
44G anti_entropy
1.1M kv_vnode
252G leveldb
124K ring
It's a 6-machine cluster, so ~1512G of levelDB in total.
Thanks for the tip
the on-disk K/V data.
Can you confirm that this may be the root of the problem, and whether it's
normal for the action to last two days?
I'm using Riak 1.4.2 on 6 machines with CentOS. The backend is levelDB.
Best Regards,
Edgar Veiga
.
There are options available in app.config to control how often this occurs
and how many vnodes rehash at once: defaults are every 7 days and two
vnodes per server at a time.
Matthew Von-Maszewski
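The defaults Matthew mentions map to these riak_kv settings (a sketch;
the values shown are the documented defaults, in milliseconds where
applicable):

    %% app.config, riak_kv section
    {anti_entropy_expire, 604800000},         %% rehash trees every 7 days
    {anti_entropy_concurrency, 2},            %% at most 2 concurrent exchanges/builds per node
    {anti_entropy_build_limit, {1, 3600000}}  %% at most 1 tree build per hour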
On Dec 27, 2013, at 13:50, Edgar Veiga edgarmve...@gmail.com wrote:
Hi!
I've been trying
that info available.
Matthew
P.S. Unrelated to your question: Riak 1.4.4 is available for download.
It has a couple of nice bug fixes for leveldb.
On Dec 27, 2013, at 2:08 PM, Edgar Veiga edgarmve...@gmail.com wrote:
Ok, thanks for confirming!
Is it normal that this action affects
like the n_val is set to a binary 3 rather than an integer.
Have you changed it recently, and how?
On Sep 20, 2013, at 8:25 AM, Edgar Veiga edgarmve...@gmail.com wrote:
Hello everyone,
Please lend me a hand here... I'm running a riak cluster of 6 machines
(version 1.4.1).
Suddenly all the nodes in the cluster went down and they are refusing to go
up again. It keeps crashing all the time; this is just a sample of what
I get when starting a node:
2013-09-20
Problem solved.
The n_val = 3 caused the crash! I had a window of time while starting a
node to send a new PUT command and restore the correct value.
Best regards, thanks Jon
On 20 September 2013 15:42, Edgar Veiga edgarmve...@gmail.com wrote:
Yes I did, via a curl command:
curl -v -XPUT
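For the record, a sketch of the corrected call with n_val as a JSON
integer rather than a string (the bucket name is a placeholder):

    curl -v -XPUT -H 'Content-Type: application/json' \
         -d '{"props":{"n_val":3}}' \
         'http://127.0.0.1:8098/buckets/mybucket/props'

Passing "n_val":"3" (a string, arriving as an Erlang binary) is what
triggered the crash described above.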
://github.com/basho/riak_kv/issues/666 will track adding some
validation code to protect against similar incidents.
Jon
On Fri, Sep 20, 2013 at 8:59 AM, Edgar Veiga edgarmve...@gmail.comwrote:
Problem solved.
The n_val = 3 caused the crash! I had a window of time while starting a
node
://github.com/Sereal/Sereal/wiki/Sereal-Comparison-Graphs
[2]: https://github.com/tobyink/php-sereal/tree/master/PHP
On 10 July 2013 10:49, Edgar Veiga edgarmve...@gmail.com wrote:
Hello all!
I have a couple of questions that I would like to address all of you
guys, in order to start
Guido, we're not using Java, and that won't be an option.
The technology stack is PHP and/or node.js
Thanks anyway :)
Best regards
On 10 July 2013 10:35, Edgar Veiga edgarmve...@gmail.com wrote:
Hi Damien,
We have ~11 keys and we are using ~2TB of disk space.
(The average object
):
xxx___0x_00_000_000xx
We are using PHP's native serialize function!
Best regards
On 10 July 2013 11:43, damien krotkine dkrotk...@gmail.com wrote:
On 10 July 2013 11:03, Edgar Veiga edgarmve...@gmail.com wrote:
Hi Guido.
Thanks for your