The DataStax Java driver is based on Netty and is non-blocking; if you do
any CQL work you should look into it.  At ProtectWise we use it with high
write volumes from Scala/Akka with great success.

We have a thin Scala wrapper around the Java driver that makes it act more
Scala-ish (e.g., Scala futures instead of Java futures, string contexts to
construct statements, and so on).  This has also let us do some other cool
things, like integrating Zipkin tracing at the driver level and adding other
utilities such as token-aware batches and concurrent token-aware batch selects.
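The core of such a wrapper is adapting the driver's callback-style Java
futures into the caller's future type. As a dependency-free illustration in
plain Java (the `AsyncResult` interface here is a hypothetical stand-in for
the driver's Guava `ListenableFuture`, not a real driver type), the bridging
idea looks roughly like:

```java
import java.util.concurrent.CompletableFuture;
import java.util.function.Consumer;

// Hypothetical callback-style result, standing in for the driver's
// ListenableFuture (which likewise completes via a registered callback).
interface AsyncResult<T> {
    void onComplete(Consumer<T> success, Consumer<Throwable> failure);
}

final class FutureBridge {
    // Bridge a callback-style result into a CompletableFuture -- the same
    // shape a Scala wrapper uses to produce a scala.concurrent.Future.
    static <T> CompletableFuture<T> toCompletableFuture(AsyncResult<T> result) {
        CompletableFuture<T> cf = new CompletableFuture<>();
        result.onComplete(cf::complete, cf::completeExceptionally);
        return cf;
    }

    public static void main(String[] args) {
        // A fake async result that completes immediately with one "row".
        AsyncResult<String> fake = (ok, err) -> ok.accept("row-1");
        System.out.println(toCompletableFuture(fake).join()); // prints "row-1"
    }
}
```

Once results are in a composable future type, things like token-aware
batching become ordinary future composition rather than blocking calls.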

On Sat, Oct 10, 2015 at 2:49 PM Graham Sanderson <gra...@vast.com> wrote:

> Cool - yeah, we are still on Astyanax-style drivers and our own
> built-from-scratch, 100% non-blocking Scala driver that we use in
> Akka-like environments
>
> Sent from my iPhone
>
> On Oct 10, 2015, at 12:12 AM, Steve Robenalt <sroben...@highwire.org>
> wrote:
>
> Hi Graham,
>
> I've used the Java driver's DowngradingConsistencyRetryPolicy for that in
> cases where it makes sense.
>
> Ref:
> http://docs.datastax.com/en/drivers/java/2.1/com/datastax/driver/core/policies/DowngradingConsistencyRetryPolicy.html
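The policy linked above is installed cluster-wide when building the `Cluster`.
A minimal wiring sketch against driver 2.1 (the contact point is an assumption,
and this fragment needs a running Cassandra node to actually connect):

```java
import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.policies.DowngradingConsistencyRetryPolicy;

// Driver 2.1: the retry policy is set once on the Cluster builder and
// applies to all statements unless overridden.
Cluster cluster = Cluster.builder()
    .addContactPoint("127.0.0.1")  // assumed contact point
    .withRetryPolicy(DowngradingConsistencyRetryPolicy.INSTANCE)
    .build();
```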
>
> Steve
>
>
>
> On Fri, Oct 9, 2015 at 6:06 PM, Graham Sanderson <gra...@vast.com> wrote:
>
>> Actually, maybe I'll open a JIRA issue for a (local)quorum_or_one
>> consistency level... It should be trivial to implement on the server side
>> with existing timeouts... I'll need to check the CQL protocol to see if
>> there is a good place to indicate you didn't reach quorum (in time)
>>
>> Sent from my iPhone
>>
>> On Oct 9, 2015, at 8:02 PM, Graham Sanderson <gra...@vast.com> wrote:
>>
>> Most of our writes are not user-facing, so LOCAL_QUORUM is good... We also
>> read at LOCAL_QUORUM because we prefer guaranteed consistency... But we
>> very quickly fall back to LOCAL_ONE in the cases where getting some data
>> fast is better than a failure. Currently we do that on a per-read basis,
>> but we could, I suppose, detect a pattern or just look at gossip to decide
>> to go en masse into a degraded read mode
>>
>> Sent from my iPhone
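The per-read fallback described above can be sketched without a cluster. Here
the driver call is abstracted as a hypothetical function of the consistency
level, and `ReadTimeout` stands in for the driver's read-timeout exception
(both are illustration-only names, not driver types):

```java
import java.util.function.Function;

enum Consistency { LOCAL_QUORUM, LOCAL_ONE }

final class FallbackRead {
    // Stand-in for the driver's read-timeout exception.
    static class ReadTimeout extends RuntimeException {}

    // Try LOCAL_QUORUM first; on a read timeout, retry once at LOCAL_ONE.
    static <T> T read(Function<Consistency, T> query) {
        try {
            return query.apply(Consistency.LOCAL_QUORUM);
        } catch (ReadTimeout e) {
            return query.apply(Consistency.LOCAL_ONE);
        }
    }

    static String timeOut() { throw new ReadTimeout(); }

    public static void main(String[] args) {
        // Simulate a query that times out at quorum but answers at ONE.
        Function<Consistency, String> flaky = c ->
            c == Consistency.LOCAL_QUORUM ? timeOut() : "stale-but-fast";
        System.out.println(read(flaky)); // prints "stale-but-fast"
    }
}
```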
>>
>> On Oct 9, 2015, at 5:39 PM, Steve Robenalt <sroben...@highwire.org>
>> wrote:
>>
>> Hi Brice,
>>
>> I agree with your nit-picky comment, particularly with respect to the
>> OP's emphasis, but there are many cases where read at ONE is sufficient and
>> performance is "better enough" to justify the possibility of a wrong
>> result. As with anything Cassandra, it's highly dependent on the nature of
>> the workload.
>>
>> Steve
>>
>>
>> On Fri, Oct 9, 2015 at 12:36 PM, Brice Dutheil <brice.duth...@gmail.com>
>> wrote:
>>
>>> On Fri, Oct 9, 2015 at 2:27 AM, Steve Robenalt <sroben...@highwire.org>
>>> wrote:
>>>
>>> In general, if you write at QUORUM and read at ONE (or LOCAL variants
>>>> thereof if you have multiple data centers), your apps will work well
>>>> despite the theoretical consistency issues.
>>>
>>> Nit-picky comment: if consistency is important, then reading at QUORUM is
>>> important. If the read is at ONE, the read operation *may* not see an
>>> important update. The safest option is QUORUM for both writes and reads.
>>> Then, depending on the business or feature, the consistency may be tuned.
>>>
>>> — Brice
>>>
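The nit-pick comes down to replica-overlap arithmetic: a read is guaranteed to
see the latest write only when the read and write replica counts overlap,
i.e. R + W > N. A tiny sketch of that check (replication factor 3 assumed):

```java
final class QuorumMath {
    // Strong consistency requires read and write replica sets to intersect.
    static boolean overlaps(int r, int w, int n) { return r + w > n; }

    public static void main(String[] args) {
        int n = 3;            // replication factor
        int q = n / 2 + 1;    // quorum for n = 3 is 2
        System.out.println(overlaps(q, q, n)); // QUORUM read + QUORUM write: true
        System.out.println(overlaps(1, q, n)); // ONE read + QUORUM write: false
    }
}
```

This is why QUORUM/QUORUM is the safe default, while QUORUM-write/ONE-read
trades that guarantee for read latency.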
>>
>>
>>
>> --
>> Steve Robenalt
>> Software Architect
>> sroben...@highwire.org <bza...@highwire.org>
>> (office/cell): 916-505-1785
>>
>> HighWire Press, Inc.
>> 425 Broadway St, Redwood City, CA 94063
>> www.highwire.org
>>
>> Technology for Scholarly Communication
>>
>>
>
>
>
