Hi,
I'm looking for a workaround for the @RiakIndex annotation to support
multiple values per index name; since the annotation is limited to a
single value per annotated property (no collection support), I would
like to know if there is a way of using the DomainBucketBuilder,
mutation conflict
29, 2012 6:09 PM
To: Guido Medina
Cc: riak-users@lists.basho.com
Subject: Re: Java Client Riak Builders...
Guido -
The real fix is to enhance the client to support a Collection; I'll add an
issue for this on GitHub.
What you would need to do right now is write your own Converter (which would
the other.
Regards,
Guido.
-Original Message-
From: Brian Roach
Sent: Tuesday, May 29, 2012 7:05 PM
To: Guido Medina
Cc: riak-users@lists.basho.com
Subject: Re: Java Client Riak Builders...
Guido -
Thanks, looking forward to it.
Also as an FYI, on Friday I fixed the bug
Give a quick review to git://github.com/guido-medina/riak-java-client.git
And let me know if there are more changes to add/correct. I'm switching
from work to home so I can continue there; the modifications are fairly
simple. I thought about adding bean utils for reflection but managed
Hi,
We have a 4-node cluster and it works like a charm. Our issue in
development (this is not happening in production, but we are afraid it
might) is that when we are doing several concurrent stores, if one thread
gives an exception (timeout getting a connection from the pool or
connection
Everything points to 32 bits, at least on the Java client side (indexes can be
of type Integer, not Long, which is 64 bits). Look at RiakIndex.java; that will
give you some answers.
I don’t know the exact answer though.
Regards,
Guido.
From: Alexander Sicular
Sent: Thursday, June 07, 2012
Jun 2012, at 22:55, Guido Medina wrote:
Everything points to 32 bits, at least on the Java client side (indexes
can be of type Integer, not Long, which is 64 bits). Look at
RiakIndex.java; that will give you some answers.
That's a mistake on the part of the client developer at that time
(me
/ annotations allowing for any of the types to be passed in.
- Roach
On Jun 8, 2012, at 4:18 AM, Guido Medina wrote:
I would say just add long and Long for indexing (so that it supports
Long, Integer and their respective primitives); using BigNumber subclasses has
different semantics
Hi,
If I might add to it, I would like to know specifically what would be
a proper way to have a replication site for a cluster; it seems to me it
is easier to back up the replicated site than the cluster itself.
We replicate using a Java scheduled job, but it requires us to mark each
key
Hi,
We use HAProxy on a 4-node cluster for a J2EE application. Before
using HAProxy we relied on the Java Riak cluster client config to
handle the 4 nodes, but too many errors were generated; once we switched
to HAProxy and a single Java Riak client config, the errors were
reduced
Hi,
You could index your test data, 2i-search that indexed data and
delete it after the tests; that will reduce the deletes enormously and make
the cleanup way faster IMHO. Maybe this doesn't apply much to your
environment; anyway, it is a very doable solution, though the indexing
would
Hi,
Just an idea: you could have a batch number incrementing once a
minute (integers won't run out of range for at least 20 years), have
your marking be an index named delete_batch_number, and have a
scheduled job to delete once a minute or so, where it will just
delete the keys
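A minimal single-JVM sketch of this batch idea in plain Java; the in-memory map stands in for the delete_batch_number 2i index, and all names are illustrative rather than the Riak client API:

```java
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;
import java.util.Map;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

class DeleteBatches {
    // Batch number = minutes since epoch; a 32-bit int lasts far longer than 20 years.
    static int currentBatch() {
        return (int) (System.currentTimeMillis() / 60_000L);
    }

    // Simulates the delete_batch_number index: batch number -> keys marked in that minute.
    private final ConcurrentMap<Integer, Set<String>> index = new ConcurrentHashMap<>();

    void markForDeletion(String key) {
        index.computeIfAbsent(currentBatch(), b -> ConcurrentHashMap.newKeySet()).add(key);
    }

    // The scheduled job: collect (and drop) keys from every batch older than `now`.
    List<String> sweep(int now) {
        List<String> doomed = new ArrayList<>();
        for (Iterator<Map.Entry<Integer, Set<String>>> it = index.entrySet().iterator(); it.hasNext(); ) {
            Map.Entry<Integer, Set<String>> e = it.next();
            if (e.getKey() < now) {
                doomed.addAll(e.getValue());
                it.remove();
            }
        }
        return doomed; // in the real job, delete these keys from Riak
    }
}
```

In the real setup the sweep would run from a ScheduledExecutorService and issue 2i queries plus deletes instead of touching a map.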
We have done similar things, but it always depends on the available
tools and your requirements and needs. I will give you a short example: our
main application uses a standard SQL database; for historical data we use Riak,
data that is not changing, for example, audit trails, daily chunks from
different
Hi,
It is the Java client which, to be honest, doesn't handle one
node going down well, so, for example, in my company we use HAProxy for
that; here is a starting configuration: https://gist.github.com/1507077
Once we switched to HAProxy we just use a simple client without
cluster
Deepak,
I think that was fixed in the following pull request, but you would
have to compile your own client from GitHub, which is easy; or did you
get the new 1.0.6? I'm not sure if it is already available in the Maven
repos. Anyway, there is a lot going on in the Riak Java client; I
would
, do a maven clean test and if tests pass then you are good to
go with your own version requirements.
Regards,
Guido.
On 24/09/12 20:05, Guido Medina wrote:
Deepak,
I think that was fixed on the following pull request but, you would
have to compile your own client from github which is easy
the issue
I don't see a change to pom.xml during this pull. I tried
riak-client 1.0.6 and it still depends on 4.1.x, so the upgrade would
be necessary to fix the memory leak.
Thanks
Deepak Bala
On Tue, Sep 25, 2012 at 12:35 AM, Guido Medina
guido.med...@temetra.com mailto:guido.med
Deepak,
That's why in our company we have index annotations at the method level;
we agreed that an index is not really a POJO property but a
pseudo/computed property. It was a waste for us to add a field just
because of an index, so we have our own branch of the Riak Java client
where the
Hi Alfonso,
I'm afraid that for expire functionality you would be looking at some
caching solution. In Java, for example, you could use Guava or EHCache if
running in the same JVM; if you need a distributed cache then memcached
would be the right thing to use. Now, if you need search AND expire
Hi,
Is there any particular issue with the Java Riak HttpClient when
using a version other than 4.1.1? I have a pull request that doesn't
pass the Travis CI build, even though I have verified it shouldn't have
issues except for that specific difference; if so, could it be tried
with
That question has been answered a few times; here is my old answer:
Hi,
It is the Java client which, to be honest, doesn't handle one node
going down well, so, for example, in my company we use HAProxy for that; here is
a starting configuration: https://gist.github.com/1507077
Once we
, Guido Medina
guido.med...@temetra.com mailto:guido.med...@temetra.com wrote:
That question has been answered a few times; here is my old answer:
Hi,
It is the Java client which, to be honest, doesn't handle one node
going down well, so, for example, in my company we use HAProxy
.
On 10/10/12 16:28, Guido Medina wrote:
That's why I keep pushing for one answer: Riak is not meant to be in
one cluster; you are removing the external factors and CAP settings
you will be using, and it won't be linear. You could get the same
results with R=W=2 with 3, 4 and 5 nodes
be 100% sure he is not wasting time on something. This is maybe
premature optimization, but it may also be a good position to understand
the library and enter the new world of Riak
Regards,
Paweł Kamiński
kami...@gmail.com
On 10 October 2012 17:30, Guido Medina
timeout if no requests would be done for some period?
Pavel
On Wed, Oct 10, 2012 at 8:57 PM, Guido Medina
guido.med...@temetra.com mailto:guido.med...@temetra.com wrote:
From that perspective, for now it is better to treat the client as
you would treat a JDBC DataSource pool, the tidy
Hi Brian,
I know what you mean about
https://github.com/basho/riak-java-client/issues/175: modifying
the current Mutation<T> interface will break existing code. What about
adding a CriteriaMutation<T> or OptionalMutation<T> interface that
extends the Mutation<T> interface?
That
I see I'm not the only one that spotted it. I have insisted several times
on upgrading the Jackson libraries to version 1.9.9 and later to 1.9.10,
to no avail; I'm not sure how important it is for the Riak Java client team
to keep the POM healthy. That's why we keep our own Riak Java client copy
and merge from
Hi,
That's exactly what Riak is designed for; there is no better usage of
Riak than the scenario you are describing. Now take into account the
consistency, availability and concurrency of your writes: you might want
to implement/use a sort of locking mechanism combined with an in-memory
cache
Riak is all about high availability. If eventually consistent data is
not a problem, OR you can cover those aspects of the CAP concept with an
in-memory caching system and a sort of locking mechanism to emulate
the core atomic action of your application (put-if-absent), then I would
say you
will build a
distributed re-entrant locking mechanism based on REST for the sake of it)
For the last quote: a locking mechanism, if well designed, will always
take care of that.
Regards,
Guido.
On 19/10/12 13:42, Les Mikesell wrote:
On Fri, Oct 19, 2012 at 6:57 AM, Guido Medina guido.med
...
On 19/10/12 16:32, Les Mikesell wrote:
On Fri, Oct 19, 2012 at 8:02 AM, Guido Medina guido.med...@temetra.com wrote:
It depends: if you have siblings enabled on the bucket, then you need to
resolve the conflicts using the object's vclock,
How does that work for simultaneous initial inserts?
About distributed locking mechanisms, you might want to take a look at
Google's services, something called Chubby? Ctrl+F for it on that link:
http://en.wikipedia.org/wiki/Distributed_lock_manager
Regards,
Guido.
On 19/10/12 16:47, Guido Medina wrote:
Locking mechanism on a single server is easy
Don't overdo it with technology. You could use Riak with a simple 2i
index (an integer index MMDD for the message date so you can search day
by day backward), and for the message sequence or identifier you could
either use any SQL database sequence or a UUID generator.
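A sketch of computing such an integer day index, assuming a full yyyyMMdd value so days sort correctly; the class and method names are illustrative:

```java
import java.time.LocalDate;
import java.time.format.DateTimeFormatter;

class MessageDateIndex {
    // Integer 2i value for a message date, e.g. 2013-10-22 -> 20131022;
    // a day-by-day backward search is then a simple equality/range query.
    static int dayIndex(LocalDate date) {
        return Integer.parseInt(date.format(DateTimeFormatter.BASIC_ISO_DATE));
    }
}
```

The value would be attached to each message as an integer secondary index and queried one day at a time, walking backward.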
HTH,
Guido.
On
this.
An incremental message id would be helpful for me to get the last
50 messages and so forth. I will use another db for this.
Thanks for all your help.
Shashwat
On Mon, Oct 22, 2012 at 2:46 PM, Guido Medina
guido.med...@temetra.com mailto:guido.med
for me to get last 50 messages and so forth. I will use
another db for this. Thanks for all your help.
Shashwat
On Mon, Oct 22, 2012 at 2:46 PM, Guido Medina
guido.med...@temetra.com
mailto:guido.med...@temetra.com wrote
HAProxy + Riak + ElasticSearch are your friends. Solr lacks
documentation (way outdated) and it's hard to find finished examples, so if
you have your cluster well set up and you mean to do only key-value
retrieval with the assist of text index search using ElasticSearch, you are
good.
Note:
Atomicity is always a delicate subject, but it could be resolved with
either of the following solutions:
1. Your application can have a re-entrant or synchronized lock:
lock(username) { create if not present then exit lock; if present
then return to the user asking for another username }
2. Or
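For option 1, a single-JVM sketch of the put-if-absent step using only the JDK; this is not a distributed lock, and the names are illustrative:

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

class UsernameRegistry {
    private final ConcurrentMap<String, String> users = new ConcurrentHashMap<>();

    // Atomic put-if-absent: true if the username was claimed by this call,
    // false if it was already taken (return to the user asking for another).
    boolean claim(String username, String userData) {
        return users.putIfAbsent(username, userData) == null;
    }
}
```

Across a cluster the same semantics would need an external coordinator, since Riak itself does not provide an atomic put-if-absent.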
That's more related to the Jackson object mapper; annotate your POJO like this:
@JsonSerialize(include = JsonSerialize.Inclusion.NON_NULL)
public class MyPOJO {
    ...
}
Hope that helps,
Guido.
On 14/12/12 17:16, Ingo Rockel wrote:
Hi all,
Is there a way to tell the riak-java-client to not store
.
On 17/12/12 09:52, Ingo Rockel wrote:
Hi Guido,
thanks, that's what I was looking for and I was looking at the wrong
place.
Ingo
Am 14.12.2012 20:16, schrieb Guido Medina:
That's more related to the Jackson object mapper; annotate your POJO like
this:
@JsonSerialize(include
Hi all,
Just dropping a short note: thanks for the hard work done on the Riak
Java client, especially to Brian Roach; the client is becoming more and
more stable and very professional.
Have a nice weekend,
Guido.
___
riak-users mailing list
Neither, or both mixed? You could have prefixes per bucket and have N
applications per cluster and add a 5-node cluster as needed, so that
you can host as many per cluster; that way you know each bucket prefix
points to a specific app.
Hope that helps,
Guido.
On 19/01/13 17:26, Jeremiah
cluster would make good sense? i.e., 2 x 10-node
clusters, buckets prefixed by app?
Or would you have prefixes with env + app + bucket?
Jimmy.
On Sat, Jan 19, 2013 at 5:47 PM, Guido Medina
guido.med...@temetra.com mailto:guido.med...@temetra.com wrote:
Neither or both mixed? You could
That would defy the purpose of distributed and high availability; the
short answer is no. You will have to implement a blocking policy for the
keys at the application level, assuming you know your key beforehand
(think of it as your primary key if it were an SQL DB); in Riak, last
write wins,
I developed a custom IntRangeCollection class in Java for that; we store
lists of IDs (that come from a DB sequence). Like Sean said, you will
have to do it yourself; I spent a couple of days looking for something
like that and ended up implementing it myself:
Hi,
At our presentation about how Temetra uses Riak, in Dublin, we were told
there was a special index which can list the keys in a bucket using a
2i search; if so, does anyone have a code sample using the Java client
to do that? Something like a $[bucket_name] index?
If so, is there any
Hi,
I have doubts about withoutFetch() and returnBody(boolean); I will
lay out some scenarios:
1. Store (or overwrite) an existing Riak object where I'm 100% sure I
don't need to fetch from Riak (last write wins and it goes to the memory cache)
2. Apply a mutation to an object but this time return
someone out there,
Guido.
On 29/01/13 18:44, Guido Medina wrote:
Hi,
At our presentation about how Temetra uses Riak, in Dublin, we were told
there was a special index which can list the keys in a bucket using
a 2i search; if so, does anyone have any code sample using the Java
client to do
Fixed; some process unknown to me was running on that server on port 8087.
Sorry for the trouble.
Guido.
On 01/02/13 11:42, Guido Medina wrote:
I force-removed a Riak node from our development cluster, then did wajig
purge riak to delete all remaining files, and after a fresh install it
doesn't want
Are you transferring using a single thread? If so, I would recommend you
use a ThreadPoolExecutor and schedule each write as you go; control the
failures (if any) using either an AtomicInteger or a
concurrent/synchronized list where you can track the keys that failed.
No matter how much you
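A sketch of that suggestion using only the JDK; storeInRiak is a placeholder for the real client write, and the pool size is an arbitrary assumption:

```java
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

class ParallelTransfer {
    // Placeholder for the real Riak write; here it simply rejects empty keys.
    static void storeInRiak(String key) {
        if (key.isEmpty()) throw new IllegalArgumentException("bad key");
    }

    // Submit each write to a pool, tracking failures with an AtomicInteger
    // plus a concurrent list of the keys that failed.
    static List<String> transfer(List<String> keys) {
        ExecutorService pool = Executors.newFixedThreadPool(8);
        AtomicInteger failureCount = new AtomicInteger();
        List<String> failedKeys = new CopyOnWriteArrayList<>();
        for (String key : keys) {
            pool.execute(() -> {
                try {
                    storeInRiak(key);
                } catch (Exception e) {
                    failureCount.incrementAndGet();
                    failedKeys.add(key);
                }
            });
        }
        pool.shutdown();
        try {
            pool.awaitTermination(1, TimeUnit.MINUTES);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return failedKeys; // retry these, or inspect failureCount.get()
    }
}
```

The failed-key list makes retries trivial: re-submit just that list on the next pass.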
, Guido Medina wrote:
Are you transferring using a single thread? If so, I would recommend
you use a ThreadPoolExecutor and schedule each write as you go; control
the failures (if any) using either an AtomicInteger or a
concurrent/synchronized list where you can track the keys that failed.
No matter
I would say it is totally safe to treat them as singletons (static
reference or just the singleton pattern); we have been doing that for a year
with no issues so far.
Hope that helps,
Guido.
On 15/02/13 22:07, Mikhail Tyamin wrote:
Hello guys,
what is the best way to work with Bucket object in
Getting drunk now !!!
Cheers,
Guido.
On 21/02/13 14:26, Dmitry Demeshchuk wrote:
HELL YEAH
On Thu, Feb 21, 2013 at 6:24 PM, Tom Santero tsant...@basho.com
mailto:tsant...@basho.com wrote:
Here Ye! Here Ye!
The Basho Engineering team is thrilled to announce that Riak 1.3
is
We have done up to 3.5M-key listings for the same purpose using
2i search over Protocol Buffers and it seems to be fast enough.
Maybe it is fast because it streams and compresses the key list directly
into the Protocol Buffers I/O stream without leaving a big footprint in
memory? I don't
This is only an idea:
You could mark your related objects with a 2i; when you need them,
retrieve the list of IDs from that 2i, fetch them concurrently into some
local memory cache, and then navigate through your object graph by
fetching them one by one from your local cache. It will be hard
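A sketch of the concurrent prefetch step, with the Riak fetch abstracted into a function; the names and pool size are illustrative, not the client API:

```java
import java.util.Collection;
import java.util.HashMap;
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.function.Function;

class GraphPrefetch {
    // Fetch every ID from the 2i result concurrently into a local map,
    // then walk the object graph against that map only.
    static <V> Map<String, V> prefetch(Collection<String> ids,
                                       Function<String, V> fetch) {
        ExecutorService pool = Executors.newFixedThreadPool(8);
        try {
            Map<String, Future<V>> futures = new LinkedHashMap<>();
            for (String id : ids) {
                futures.put(id, pool.submit(() -> fetch.apply(id)));
            }
            Map<String, V> cache = new HashMap<>();
            for (Map.Entry<String, Future<V>> e : futures.entrySet()) {
                try {
                    cache.put(e.getKey(), e.getValue().get());
                } catch (Exception ex) {
                    throw new RuntimeException("fetch failed for " + e.getKey(), ex);
                }
            }
            return cache;
        } finally {
            pool.shutdown();
        }
    }
}
```

Graph navigation then hits only the local map, so each object is fetched from Riak exactly once.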
At least in a two-phase-commit-enabled environment you can implement a
rollback to undo your action; you expect things to go right and a very
small % to go wrong, so implementing a rollback policy isn't such a bad
idea. I had to do the same years ago for a payment client; when things
went
with the rollback policy implementation.
I know I'm suggesting the opposite (assume things went OK and take
action when they go wrong)
If that makes sense for your application.
Guido.
On 04/04/13 15:33, Guido Medina wrote:
At least on two-phase commit enabled environment you can implement
Rohman,
It is more complicated than that; most big data systems use more than
one DB engine (including Facebook, which uses about 5 different engines).
For example (and we are not as big as Facebook), we use a relational
SQL database, a text search engine and Riak; you will have to balance each
Simon,
We use a similar approach, using 2i and batch numbers (every
bucket/key we are interested in is stamped with a 2i batch number which
increases once a minute), and a Java client that copies to two different
clusters, also once a minute, from the last batch number copied to the current
batch number -
If you are using Java you could store Riak keys as binaries using the
Jackson Smile format; supposedly it will compress faster and better than
default Java serialization. We use it for very large keys (say a key
with a large collection of entries); the drawback is that you won't be
able to easily
performance.
Best regards
On 10 July 2013 09:58, Guido Medina guido.med...@temetra.com
mailto:guido.med...@temetra.com wrote:
Well, I rushed my answer before, if you want performance, you
probably want Bitcask, if you want compression then LevelDB,
the following links
Release notes: https://github.com/basho/riak/blob/1.4/RELEASE-NOTES.md
Maybe related to this?
Known Issues
https://github.com/basho/riak/blob/1.4/RELEASE-NOTES.md#leveldb-13-to-14-conversionleveldb
1.3 to 1.4 conversion
The first execution of 1.4.0 leveldb using a 1.3.x or
mention in the release notes that the init scripts have been adjusted -
maybe that has something to do with it?
Cheers,
Toby
- Original Message -
From: Guido Medina guido.med...@temetra.com
To: riak-users riak-users@lists.basho.com
Sent: Wednesday, 10 July, 2013 9:05:16 PM
Subject: Re
Hi,
We had an issue in 1.4 where 2i operations were timing out; after going
through support, we were advised to use +zdbbl 16384. In the
Configuring Riak docs it is strongly suggested (unless the doc needs to be
rephrased) that it should be higher:
Source:
usage penalty.
-Jared
On Mon, Jul 15, 2013 at 8:38 AM, Guido Medina
guido.med...@temetra.com mailto:guido.med...@temetra.com wrote:
Hi,
We had an issue in 1.4 where 2i operations were timing out; after
going through support, we were advised to use +zdbbl 16384
Since the data he needs to store is only transient, would it
make sense to set N=2 for performance? Or will N=2 have the opposite
effect due to the number of nodes holding such replicas?
Guido.
On 18/07/13 16:15, Jared Morrow wrote:
Kumar,
We have a few customers who use the memory
Forgot to mention: with N=2, should he be able to have only 4 nodes and
focus on RAM per node rather than 5?
I know it's not recommended, but shouldn't N=2 reduce the minimum
recommended nodes to 4?
Guido.
On 18/07/13 16:21, Guido Medina wrote:
Since the data he is requiring to store is only
If what you are describing is happening on 1.4, type riak-admin diag
and see the new recommended kernel parameters; also, in vm.args
uncomment the +zdbbl 32768 parameter, since what you are describing is
similar to what happened to us when we upgraded to 1.4.
HTH,
Guido.
On 18/07/13 19:21,
12 node cluster. So maybe it is because of a 18
node cluster?
I'll try the zdbbl stuff. Any other hint would be cool (if the new
kernel parameters are also good for 1.3.1, could you provide them?).
Cheers
Simon
On Thu, 18 Jul 2013 19:34:18 +0100
Guido Medina guido.med...@temetra.com wrote
Hi,
Is there a way to get the JSON stats via PBC? This is how we are doing
it now; we would like to get rid of any HTTP call. Currently, this is
the only call being made over HTTP:
private void collectNodeInfo(final PBClientConfig clientConfig)
{
    ...
    RiakClusterStats
wrote:
Guido -
Right now, no.
We've been having some internal discussions around that topic and
whether it's really a client library operation or not.
How are you using stats? Is it for a monitoring app or ... ?
Thanks,
Brian Roach
On Thu, Jul 25, 2013 at 4:25 AM, Guido Medina guido.med
Rohman,
I think the reason for this is that the cluster would have to do the
whole intersection in memory; 2i only provides queries for a single
index and then returns that result to the client (streaming or not).
An intersection would indeed require a MapReduce job to get hold of both
, Brent Ozar Unlimited
MCITP: SQL Server 2008, MVP
Cloudera Certified Developer for Apache Hadoop
On Sat, Jul 27, 2013 at 11:16 AM, Guido Medina
guido.med...@temetra.com mailto:guido.med...@temetra.com wrote:
Rohman,
I think the reason for this is that the cluster will have to do
for Apache Hadoop
On Sat, Jul 27, 2013 at 11:44 AM, Guido Medina
guido.med...@temetra.com mailto:guido.med...@temetra.com wrote:
Are you saying that you can join two 2i indexes? Let's say you
have a 2i named date and another named category so you could
get a list of modified objects
Yes, it is thread safe; you can treat them as singleton instances per
bucket. The following is the general usage pattern:
* Fetch the bucket.
* Optional: if it exists, verify it has your application's values (N value, etc.)
* If it doesn't exist, create it with your settings.
* Cache it
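The fetch / verify-or-create / cache steps above can be sketched generically; the loader function stands in for the real fetch-or-create client calls, and the class name is illustrative:

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import java.util.function.Function;

class BucketCache<B> {
    private final ConcurrentMap<String, B> cache = new ConcurrentHashMap<>();
    private final Function<String, B> fetchOrCreate;

    BucketCache(Function<String, B> fetchOrCreate) {
        this.fetchOrCreate = fetchOrCreate;
    }

    // computeIfAbsent ensures the fetch-or-create runs once per bucket name,
    // after which the cached singleton instance is reused by every caller.
    B get(String name) {
        return cache.computeIfAbsent(name, fetchOrCreate);
    }
}
```

With the client's bucket objects being thread safe, the cached instance can be shared freely across threads.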
Hi Massimiliano,
I think your design is very thorough. I wouldn't worry about the
cardinality of such an index but about its per-index size (how many keys
will a single 2i index return?); in that case think of 2i as yet another
relational DB (LevelDB). You should test it with many keys and check its
As a second thought, you could have a key per player in the players' bucket
and a key with the collection of units per player in the units' bucket.
Guido.
On 07/08/13 15:52, Guido Medina wrote:
What's the size of each unit, JSON-wise? If it is too small, you could
have the player's units inside
Hi Matt,
Like Sean said, you should have a mutator if you are dealing with
conflict resolution in domain objects; a good side effect of using a
mutator is that the Riak Java client will fetch-modify-write, so your
conflict resolver will be called once(?). If you don't use mutators, you
get the
really a resolver - it simply passes through an
object if there's only one, or throws an exception if there are multiple
(siblings) present.
Thanks,
- Roach
On Sun, Aug 11, 2013 at 5:41 AM, Guido Medina guido.med...@temetra.com wrote:
Hi Matt,
Like Sean said, you should have a mutator if you
, Guido Medina wrote:
Hi Brian,
I probably asked a similar question before. Let's say you have an
in-memory cache and a single writer (I know, not the best distributed
design); if you do the following, take into account that we use
mutations but have no siblings enabled:
bucket.store(record
doing some tests, and if I store a mutation
returning the body without fetching, I get a new mutated object and not
the one I passed plus the mutation. So I'm wondering if that was the original
intention.
Thanks,
Guido.
On 11/08/13 18:49, Guido Medina wrote
Hi Brian,
New thread for this, sorry for the hijacking.
Yes, withoutFetch should indeed be used without mutation or conflict
resolution. Originally we had mutations and siblings, but our
application ended up creating too many siblings and made Riak fail
miserably, so we disabled the
Same here, except that Riak 1.3.2 did that for me automatically. As
Jeremiah mentioned, you should go first to 1.3.2 on all nodes; per node,
the first time Riak starts it will take some time upgrading the 2i
index storage format. If you see any weirdness then execute
riak-admin
-indexes and tail -f /var/log/riak/console.log, which should
be done really fast if there isn't anything to fix.
4. Do 1 to 3 per node.
5. Do 1 and 2 but for Riak 1.4.1.
HTH,
Guido.
On 13/08/13 13:50, Guido Medina wrote:
Same here, except that Riak 1.3.2 did that for me automatically
For the following information, should it be +S 4:4 or +S 4:8?
root@somehost# lshw -C processor
*-cpu
description: CPU
product: Intel(R) Core(TM) i7 CPU 930 @ 2.80GHz
vendor: Intel Corp.
physical id: 4
bus info: cpu@0
version: Intel(R) Core(TM)
threading, so the total thread count is also 8; would that still be +S
4:4, +S 8:8 or +S 8:0?
Thanks,
Guido.
On 14/08/13 15:41, Matthew Von-Maszewski wrote:
threads=8 is the key phrase … +S 4:4
On Aug 14, 2013, at 10:04 AM, Guido Medina guido.med...@temetra.com wrote:
For the following information should
Hi,
I have to say it is nice; we started using it today and it seems to
leave a very low CPU and memory footprint on both the cluster and the
application using the client. Now I have a couple of questions:
1. This one is probably part of Riak 1.4.x but it won't hurt to ask: Will
reduce identity (to
Create pseudo getters for the 2i indexes; valid return types are
String, Long (and Integer), and a Set of any of those. The benefit
of this is that your 2i indexes are not actual properties, they
are meant to be a computation of something, example:
public class Postcode {
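A plain-Java sketch of such a computed getter; the field, the outward-code rule, and the getter name are illustrative, and in the real class the getter would carry the index annotation rather than being a plain method:

```java
class Postcode {
    private final String code; // e.g. "SW1A 1AA"

    Postcode(String code) {
        this.code = code;
    }

    // Pseudo/computed property: the index value is derived from existing
    // state, so no extra field exists just to back the 2i index.
    public String getOutwardCodeIndex() {
        return code.split(" ")[0]; // "SW1A 1AA" -> "SW1A"
    }
}
```

The point is that the index is a computation over the object, not a stored POJO property.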
Alex,
RabbitMQ is a good high performer, developed in Erlang, and scales
just like Riak.
The old saying: the right tool for the right job. I like how fast Riak
is at fetching/storing key-values in a distributed environment; I don't
like Riak for queues. Is it because it wasn't designed for
be the tricky part, and you would need to use the vector clock
and ensure there are 0 siblings when finished. But it should be
possible? It seems like this is an uber-simplified form of a CRDT data
structure?
Thanks,
Alex
On Thu, Sep 12, 2013 at 2:12 AM, Guido Medina guido.med...@temetra.com wrote
Hi,
Is it possible to have Riak Control running on the HTTP port on localhost,
assuming security is provided by SSH tunnels?
If so, what needs to be done in app.config? I enabled Riak Control
but it is redirecting me to HTTPS.
Thanks,
Guido.
Jared,
Is it possible to elaborate more on the meet-me-in-the-middle
settings/scenarios? Let me explain: let's say the quorum is configured
with low values, say R=W=1 and N=3; doesn't that add more work to the AAE
background process? Could there be ways to sacrifice some client
performance with
Hi,
Streaming 2i indexes is not timing out, even though the client is
configured to time out; coincidentally this is causing the writes to fail
(or the opposite?). Is there anything elemental that could lock (I
know the locking concept in Erlang is out of the equation, so LevelDB?)
On Wed, Sep 25, 2013 at 5:54 AM, Guido Medina guido.med...@temetra.com wrote:
Hi,
Streaming 2i indexes is not timing out, even though the client is configured
to time out; coincidentally this is causing the writes to fail (or the
opposite?). Is there anything elemental that could lock (I know
in the Riak logs?
- Roach
On Wed, Sep 25, 2013 at 12:11 PM, Guido Medina guido.med...@temetra.com wrote:
Like this: withConnectionTimeoutMillis(5000).build();
Guido.
On 25/09/13 18:08, Brian Roach wrote:
Guido -
When you say the client is configured to time out, do you mean you're
using PB and you
Hi,
I'm trying to tune our Riak cluster using
http://docs.basho.com/riak/latest/ops/advanced/backends/leveldb/#Parameter-Planning
but I'm still lost on how to use the calculation results; here are my
questions:
1. Does this calculator
Morning,
Is there a way to determine which nodes a key belongs to? I'm guessing
that the hash of a key is computed using the bucket name and key
combined. I'm having some issues with some writes and would like to see
if there is a pattern; knowing which nodes are involved would help me a
Hi,
Is there a way to quickly check if a key is present without fetching it
using the Riak Java client? It would be nice to have one for quick
checks without fetching the key:
interface Bucket {
    public boolean isKeyPresent(String key);
}
Of course, that
And for ZFS? I wouldn't recommend it; after Riak 1.4, snappy LevelDB
compression does a nice job, so why take the risk of yet another
not-so-enterprise-ready compression algorithm?
I could be wrong though,
Guido.
On 03/10/13 12:11, Guido Medina wrote:
I have heard some SAN horror stories too
,
Guido.
On 03/10/13 12:11, Guido Medina wrote:
I have heard some SAN horror stories too. Riak nodes are so
cheap that I don't see the point in even having any mirror on the
node; here are my points:
1. Erlang interprocess communication brings some network usage,
why yet
ratio, given that ZFS will use compression over the entire
volume, not 'just' the data in the DB.
That said, there is a lot more to ZFS than compression and CRC ;) like
snapshots, cloning, ARC ^^
On 03 Oct 2013, at 9:56, Guido Medina guido.med...@temetra.com
mailto:guido.med...@temetra.com