"INSERT INTO x (a,b.c) values (1,2,3)"

Doesn't this sometimes turn into a batch mutation if b and c are separate
columns?
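
To illustrate what I mean, here is a rough sketch with the Java driver (contact
point, keyspace and table names are just placeholders), contrasting the single
multi-column INSERT with an explicit unlogged batch; as I understand it, only
the latter is reported with a batch WriteType on a write timeout:

    import com.datastax.driver.core.BatchStatement;
    import com.datastax.driver.core.Cluster;
    import com.datastax.driver.core.Session;
    import com.datastax.driver.core.SimpleStatement;

    public class WriteTypeSketch {
        public static void main(String[] args) {
            Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
            Session session = cluster.connect("my_ks"); // placeholder keyspace

            // Single statement touching several columns of one row: if this
            // times out, the retry policy is passed WriteType.SIMPLE.
            session.execute("INSERT INTO x (a, b, c) VALUES (1, 2, 3)");

            // Explicit unlogged batch: a timeout here is reported as
            // WriteType.UNLOGGED_BATCH instead.
            BatchStatement batch = new BatchStatement(BatchStatement.Type.UNLOGGED);
            batch.add(new SimpleStatement("INSERT INTO x (a, b, c) VALUES (1, 2, 3)"));
            batch.add(new SimpleStatement("INSERT INTO x (a, b, c) VALUES (4, 5, 6)"));
            session.execute(batch);

            cluster.close();
        }
    }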

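As an aside, here is a minimal sketch (driver 2.x style, contact point is a
placeholder) of how the downgrading policy discussed below can be plugged in;
wrapping it in LoggingRetryPolicy makes the retry/ignore/rethrow decisions
show up in the logs, which helps see which writes actually get retried:

    import com.datastax.driver.core.Cluster;
    import com.datastax.driver.core.policies.DowngradingConsistencyRetryPolicy;
    import com.datastax.driver.core.policies.LoggingRetryPolicy;

    public class RetryPolicySetupSketch {
        public static void main(String[] args) {
            // Wrap the downgrading policy so each retry decision is logged.
            Cluster cluster = Cluster.builder()
                    .addContactPoint("127.0.0.1") // placeholder contact point
                    .withRetryPolicy(new LoggingRetryPolicy(DowngradingConsistencyRetryPolicy.INSTANCE))
                    .build();
            // ... create sessions and run statements as usual ...
            cluster.close();
        }
    }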

On Wed, Mar 5, 2014 at 5:03 AM, Sylvain Lebresne <sylv...@datastax.com> wrote:

> Let me first note that the DataStax Java driver has a dedicated mailing
> list:
> https://groups.google.com/a/lists.datastax.com/forum/#!forum/java-driver-user,
> it would be better to use that list for driver-specific questions in the
> future.
>
> But to answer your question, a SIMPLE write is any write (INSERT, UPDATE,
> DELETE) that is not in a batch. Concretely, if you do:
>   session.execute("INSERT INTO ...");
> it's a SIMPLE write.
>
> --
> Sylvain
>
>
> On Tue, Mar 4, 2014 at 7:21 PM, HAITHEM JARRAYA <a-hjarr...@expedia.com> wrote:
>
>> Hi All,
>>
>> I might be missing something and I would like some clarification on this.
>> We are using the Java driver with the DowngradingConsistencyRetryPolicy,
>> and we see in our logs that only the reads are retried.
>>
>> In the code and the docs, it says that on a write timeout the policy
>> performs at most one retry, when the WriteType is UNLOGGED_BATCH or
>> BATCH_LOG. My question is: when is a write considered SIMPLE?
>>
>> Thanks,
>>
>> Haithem
>>
>>     /**
>>      * Defines whether to retry and at which consistency level on a write
>>      * timeout.
>>      * <p>
>>      * This method triggers a maximum of one retry. If {@code writeType ==
>>      * WriteType.BATCH_LOG}, the write is retried with the initial
>>      * consistency level. If {@code writeType == WriteType.UNLOGGED_BATCH}
>>      * and at least one replica acknowledged, the write is retried with a
>>      * lower consistency level (with unlogged batch, a write timeout can
>>      * <b>always</b> mean that part of the batch haven't been persisted at
>>      * all, even if {@code receivedAcks > 0}). For other {@code writeType},
>>      * if we know the write has been persisted on at least one replica, we
>>      * ignore the exception. Otherwise, an exception is thrown.
>>      *
>>      * @param statement the original query that timed out.
>>      * @param cl the original consistency level of the write that timed out.
>>      * @param writeType the type of the write that timed out.
>>      * @param requiredAcks the number of acknowledgments that were required
>>      * to achieve the requested consistency level.
>>      * @param receivedAcks the number of acknowledgments that had been
>>      * received by the time the timeout exception was raised.
>>      * @param nbRetry the number of retry already performed for this
>>      * operation.
>>      * @return a RetryDecision as defined above.
>>      */
>>     @Override
>>     public RetryDecision onWriteTimeout(Statement statement, ConsistencyLevel cl,
>>             WriteType writeType, int requiredAcks, int receivedAcks, int nbRetry) {
>>         if (nbRetry != 0)
>>             return RetryDecision.rethrow();
>>
>>         switch (writeType) {
>>             case SIMPLE:
>>             case BATCH:
>>                 // Since we provide atomicity there is no point in retrying
>>                 return RetryDecision.ignore();
>>             case UNLOGGED_BATCH:
>>                 // Since only part of the batch could have been persisted,
>>                 // retry with whatever consistency should allow to persist all
>>                 return maxLikelyToWorkCL(receivedAcks);
>>             case BATCH_LOG:
>>                 return RetryDecision.retry(cl);
>>         }
>>         // We want to rethrow on COUNTER and CAS, because in those case
>>         // "we don't know" and don't want to guess
>>         return RetryDecision.rethrow();
>>     }
>>
>>
>>
>
