Hi,

Everything you say is valid and I understand the reasons. However, if you
have limited bandwidth between your servers, say 100 Mbit/s that is at the
same time shared with the Internet connection, things will be slow.

There are cases where you want the client to do the quorum instead of
the server, to minimize bandwidth usage. Of course this will sacrifice
some availability, as some requests might fail.
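
To make that concrete, here is a rough sketch (Python, using the
"requests" library) of what I mean by doing the quorum on the client:
read from each node I believe holds a replica with r=1 and do the
comparison myself. The node addresses and bucket/key names are made up,
and I know Riak will still coordinate internally; this is just the
shape of what I am after, not working code for my setup:

import requests

# made-up addresses of the three nodes I believe hold the replicas
REPLICA_NODES = ["10.0.0.1", "10.0.0.2", "10.0.0.3"]

def client_side_read(bucket, key, needed=2):
    # Ask each replica node directly with r=1 and accept a value once
    # 'needed' nodes agree. Sketch only: it ignores vector clocks,
    # siblings and node failures.
    seen = {}
    for host in REPLICA_NODES:
        url = "http://%s:8098/buckets/%s/keys/%s?r=1" % (host, bucket, key)
        resp = requests.get(url, timeout=2)
        if resp.status_code != 200:
            continue
        seen[resp.content] = seen.get(resp.content, 0) + 1
        if seen[resp.content] >= needed:
            return resp.content
    return None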

As Riak targets high availability, I understand that this might not be
something you want to do. Like I said, I am just looking around for
options :)

I have seen that WeedFS seems to do something close to what I want.
Apparently it is modelled after Facebook's Haystack.

Regards,
Daniele

2013/11/5 Tom Santero <[email protected]>:
> Hi Daniele,
>
> responses are inline.
>
> -Tom
>
> On Tue, Oct 29, 2013 at 4:31 PM, Daniele Testa <[email protected]>
> wrote:
>>
>> Hi,
>>
>> I am looking around for different solutions for a good file-store and
>> the possibility to use Riak for that purpose.
>>
>> As far as I understand, if I have a 10 node cluster with 2 replicas,
>> an object will be stored on a total of 3 nodes.
>
>
> Judging from the rest of your email, I assume "2 replicas" is a typo and
> you meant to say 3. If that is the case, then yes.
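
Just to make the terminology concrete on my side: I set the number of
copies per bucket via the n_val bucket property over the HTTP
interface, roughly like this (node address and bucket name are just
examples):

import json
import requests

# example only: ask for 3 copies of every object in the "images" bucket
props = {"props": {"n_val": 3}}
resp = requests.put(
    "http://10.0.0.1:8098/buckets/images/props",
    data=json.dumps(props),
    headers={"Content-Type": "application/json"},
    timeout=2,
)
resp.raise_for_status()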
>
>>
>>
>> If I query a node where the object is NOT stored, the node will act as
>> a proxy and fetch the object from one of the 3 responsible nodes and
>> send it to the client. I do not want this. I prefer that the client
>> fetches the object directly from the responsible node(s) without
>> wasting traffic between the Riak nodes.
>
>
> The node you hit with your query will act as the coordinating node, and a
> request will be sent to all 3 nodes where your replicas are stored. Whatever
> your R quorum value is set to determines how many responses the
> coordinating node waits for before acknowledging your request.
>
> This model is at the core of Riak's promise to provide high availability and
> low-latency requests. You cannot always assume one of your N nodes will be
> available to even accept your request, so randomizing which node accepts any
> given query is quite common in most Riak deployments and helps provide a
> predictable latency profile. Furthermore, while it might not be intuitive,
> it turns out that redundant requests in distributed systems can actually
> decrease tail latencies. For a good primer on some of the research in this
> field, I point you toward a blog post written by Peter Bailis [0].
>
> More info about quorums is in the docs [1] and in a series of blog posts [2]
> written by John Daily.
>
> [0]
> http://www.bailis.org/blog/doing-redundant-work-to-speed-up-distributed-queries/
> [1] http://docs.basho.com/riak/latest/dev/advanced/cap-controls
> [2] http://basho.com/riaks-config-behaviors-epilogue/
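
Thanks, that makes sense. For my own notes: the R value can also be
overridden per request, e.g. as a query parameter on an HTTP GET,
roughly like this (node, bucket and key names are made up):

import requests

# any node can coordinate the request; r says how many replica
# answers the coordinator waits for before replying
url = "http://10.0.0.2:8098/buckets/images/keys/photo1?r=2"
resp = requests.get(url, timeout=2)
if resp.status_code == 200:
    data = resp.content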
>
>>
>>
>> Is there a way to make the node send an HTTP "Location" redirect back to
>> the client, pointing at one of the responsible nodes, so that the client
>> can fetch directly from the node where the object resides? I assume not.
>
>
> No.
>
>>
>>
>> However, I was hoping there is a way to ask Riak which nodes a specific
>> object resides on? Maybe there is an "info request" that only returns
>> metadata and location info about an object?
>>
>> Something like "where is object X?", to which Riak responds with a list of
>> the 3 responsible nodes. Is this possible?
>
>
> The vnodes responsible for persisting a Riak object are known as the
> preference list (or preflist for short). Technically you _can_ attach to a
> console and ask Riak for the preflist for any object, and then determine
> which nodes that key currently resides on, but you don't want to do that.
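
Understood, I will stay away from that in production. Just to check
that I have the concept right (this is only a toy sketch of the idea,
not Riak's actual code): the preflist comes from hashing the bucket/key
pair onto the ring and taking the next N partitions, something like:

import hashlib

RING_SIZE = 64  # number of partitions on the ring; example value
N_VAL = 3       # number of copies per object

def toy_preflist(bucket, key, nodes):
    # Toy illustration only: hash bucket/key onto a ring of RING_SIZE
    # partitions and take the next N_VAL partitions. Real Riak maps
    # partitions to nodes via its claim algorithm, not round-robin.
    digest = hashlib.sha1(("%s/%s" % (bucket, key)).encode()).hexdigest()
    start = int(digest, 16) % RING_SIZE
    partitions = [(start + i) % RING_SIZE for i in range(N_VAL)]
    return [(p, nodes[p % len(nodes)]) for p in partitions]

# made-up node names
print(toy_preflist("images", "photo1", ["riak1", "riak2", "riak3", "riak4"]))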
>
>>
>>
>> Regards,
>> Daniele
>>
>
>
