On Dec 5, 2014, at 11:23 AM, Tyler Hobbs ty...@datastax.com wrote:
On Fri, Dec 5, 2014 at 1:15 AM, Dong Dai daidon...@gmail.com wrote:
Sounds great! By the way, will you create a ticket for this, so we can follow
the updates?
What would the ticket be for?
What progress are you trying to be aware of? All of the features Tyler
discussed are implemented and can be used.
On Fri, Dec 5, 2014 at 2:41 PM, Dong Dai daidon...@gmail.com wrote:
On Dec 5, 2014, at 11:23 AM, Tyler Hobbs ty...@datastax.com wrote:
On Dec 4, 2014, at 11:37 AM, Tyler Hobbs ty...@datastax.com wrote:
On Wed, Dec 3, 2014 at 11:02 PM, Dong Dai daidon...@gmail.com wrote:
1) Unless I am using TokenAwarePolicy, the async inserts also cannot be sent
to the right coordinator.
Yes
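For readers following the TokenAwarePolicy point: a token-aware client hashes
the partition key and sends the request directly to a replica, skipping the
extra coordinator hop. A minimal runnable sketch of that routing idea, with a
toy hash standing in for Cassandra's Murmur3 partitioner and a made-up
three-node ring (`NODES`, `toy_token`, and `coordinator_for` are all
illustrative, not driver API):

```python
# Toy illustration of client-side token-aware routing. A real driver
# hashes the partition key with Murmur3 and consults cluster metadata;
# here a toy hash and a fixed 3-node ring stand in.

NODES = ["10.0.0.1", "10.0.0.2", "10.0.0.3"]

def toy_token(partition_key: str) -> int:
    # Stand-in for the Murmur3 partitioner; only "key -> token" matters.
    return sum(partition_key.encode()) % 300

def coordinator_for(partition_key: str) -> str:
    # Pick the node owning the token range (100 tokens per node here).
    return NODES[toy_token(partition_key) // 100]

# The same partition key always routes to the same node, so an async
# insert lands on a replica instead of a random coordinator.
assert coordinator_for("user:42") == coordinator_for("user:42")
```

Without such a policy, requests go to an arbitrary node, which must then
forward each write to the actual replicas, which is the extra hop being
discussed above.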
On Dec 4, 2014, at 1:46 PM, Tyler Hobbs ty...@datastax.com wrote:
On Thu, Dec 4, 2014 at 11:50 AM, Dong Dai daidon...@gmail.com wrote:
Since we already do what coordinators do on the client side, why don't we go
one step further and break the UNLOGGED batch statements into individual
statements?
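The idea suggested above can be sketched without a cluster: group the
statements of a multi-partition UNLOGGED batch by partition key, so that each
group can then be sent (asynchronously) straight to its own coordinator. The
`split_batch` helper and the `(key, cql)` pair representation are illustrative
assumptions, not driver API:

```python
# Toy sketch: split a multi-partition UNLOGGED batch into per-partition
# groups, so each group can be routed to its own coordinator as the
# message above suggests. No real driver or cluster is involved.
from collections import defaultdict

def split_batch(statements):
    """statements: list of (partition_key, cql_string) pairs."""
    groups = defaultdict(list)
    for key, cql in statements:
        groups[key].append(cql)
    return dict(groups)

batch = [("k1", "INSERT ... 1"), ("k2", "INSERT ... 2"), ("k1", "INSERT ... 3")]
groups = split_batch(batch)
# groups == {"k1": ["INSERT ... 1", "INSERT ... 3"], "k2": ["INSERT ... 2"]}
```

Each resulting group touches a single partition, which is the case where an
unlogged batch (or a run of async inserts) avoids the extra coordinator hop.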
On …, 2014, at 9:13 AM, Ryan Svihla rsvi...@datastax.com wrote:
On Mon, Dec 1, 2014 at 1:52 PM, Dong Dai daidon...@gmail.com wrote:
Thanks Ryan, and also thanks for your great blog post.
However, this makes me more confused, mainly about the coordinators. Based on …
On Sun, Nov 30, 2014 at 8:44 PM, Dong Dai daidon...@gmail.com wrote:
The question is: can I expect better performance using the BulkLoader this
way compared with using batch inserts?
You just asked if writing once (via streaming) is likely to be significantly …
Hi all,
I have a performance question about batch insert and bulk load.
According to the documents, to import a large volume of data into Cassandra,
both Batch Insert and Bulk Load can be options. Using batch insert is pretty
straightforward, but there has not been an 'official' way …
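For the "many small async inserts" alternative that this thread keeps coming
back to, the pattern can be sketched without a cluster. A thread pool and a
fake `fake_execute` function simulate the driver's execute-async futures;
both names are illustrative stand-ins, not real driver API:

```python
# Toy stand-in for the "many small async inserts" pattern discussed in
# this thread, as opposed to one large multi-partition batch. A thread
# pool simulates the driver's async futures; no real cluster is used.
from concurrent.futures import ThreadPoolExecutor

def fake_execute(row):
    # Stand-in for executing one prepared INSERT against a real cluster.
    return ("ok", row)

rows = [{"id": i} for i in range(100)]
with ThreadPoolExecutor(max_workers=8) as pool:
    futures = [pool.submit(fake_execute, r) for r in rows]  # fire async
    results = [f.result() for f in futures]                 # wait for all

assert len(results) == 100
```

The key property is that writes are issued concurrently and per row, so a
token-aware client can route each one to its own replica, rather than making
one coordinator fan out an entire batch.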