Will do.

On Thu, Oct 29, 2009 at 2:39 PM, Jonathan Ellis <jbel...@gmail.com> wrote:
> can you create a ticket for this?
>
> On Thu, Oct 29, 2009 at 3:39 PM, Jonathan Ellis <jbel...@gmail.com> wrote:
>> I've seen that bug too.  I think we introduced a regression with 
>> CASSANDRA-492.
>>
>> On Thu, Oct 29, 2009 at 3:37 PM, Edmond Lau <edm...@ooyala.com> wrote:
>>> I've updated to trunk, and I'm still hitting the same issue, but it
>>> manifests differently.  Again, I'm running with a freshly started
>>> 3-node cluster with a replication factor of 2.  I then take down two
>>> nodes.
>>>
>>> If I write with a consistency level of ONE on any key, I get an
>>> InvalidRequestException:
>>>
>>> ERROR [pool-1-thread-45] 2009-10-29 21:27:10,120 StorageProxy.java (line 183) error writing key 1
>>> InvalidRequestException(why:Cannot block for less than one replica)
>>>        at org.apache.cassandra.service.QuorumResponseHandler.<init>(QuorumResponseHandler.java:52)
>>>        at org.apache.cassandra.locator.AbstractReplicationStrategy.getResponseHandler(AbstractReplicationStrategy.java:64)
>>>        at org.apache.cassandra.service.StorageService.getResponseHandler(StorageService.java:869)
>>>        at org.apache.cassandra.service.StorageProxy.insertBlocking(StorageProxy.java:162)
>>>        at org.apache.cassandra.service.CassandraServer.doInsert(CassandraServer.java:473)
>>>        at org.apache.cassandra.service.CassandraServer.insert(CassandraServer.java:424)
>>>        at org.apache.cassandra.service.Cassandra$Processor$insert.process(Cassandra.java:819)
>>>        at org.apache.cassandra.service.Cassandra$Processor.process(Cassandra.java:624)
>>>        at org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:253)
>>>        at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
>>>        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
>>>        at java.lang.Thread.run(Thread.java:619)
>>>
>>> Oddly, a write with a consistency level of QUORUM succeeds for certain
>>> keys (but fails with others) even though I only have one live node.
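
For reference, the failing write is just a plain Thrift insert at
ConsistencyLevel.ONE, roughly like the sketch below (written against the
0.4-era generated Thrift classes; the keyspace and column family names are
placeholders from the default config, and the exact generated signatures may
differ slightly on trunk):

import org.apache.cassandra.service.Cassandra;
import org.apache.cassandra.service.ColumnPath;
import org.apache.cassandra.service.ConsistencyLevel;
import org.apache.thrift.protocol.TBinaryProtocol;
import org.apache.thrift.transport.TSocket;
import org.apache.thrift.transport.TTransport;

public class WriteRepro {
    public static void main(String[] args) throws Exception {
        // Connect to the one node that is still up.
        TTransport transport = new TSocket("localhost", 9160);
        Cassandra.Client client =
            new Cassandra.Client(new TBinaryProtocol(transport));
        transport.open();

        // "Keyspace1"/"Standard1" are placeholder names from the default config.
        ColumnPath path = new ColumnPath("Standard1", null, "col".getBytes());

        // With RF=2 and two of the three nodes down, this throws
        // InvalidRequestException on trunk (UnavailableException on 0.4.1),
        // even though ONE should only need a single live replica.
        client.insert("Keyspace1", "1", path, "value".getBytes(),
                      System.currentTimeMillis(), ConsistencyLevel.ONE);

        transport.close();
    }
}
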
>>>
>>> Edmond
>>>
>>> On Thu, Oct 29, 2009 at 1:38 PM, Edmond Lau <edm...@ooyala.com> wrote:
>>>>
>>>> On Thu, Oct 29, 2009 at 1:20 PM, Jonathan Ellis <jbel...@gmail.com> wrote:
>>>>>
>>>>> On Thu, Oct 29, 2009 at 1:18 PM, Edmond Lau <edm...@ooyala.com> wrote:
>>>>> > I have a freshly started 3-node cluster with a replication factor of
>>>>> > 2.  If I take down two nodes, I can no longer do any writes, even with
>>>>> > a consistency level of ONE.  I tried on a variety of keys to ensure
>>>>> > that I'd get at least one where the live node was responsible for one
>>>>> > of the replicas.  I have not yet tried on trunk.  On Cassandra 0.4.1,
>>>>> > I get an UnavailableException.
>>>>>
>>>>> This sounds like the bug we fixed in CASSANDRA-496 on trunk.
>>>>
>>>> Excellent - thanks.  Time to start using trunk.
>>>>
>>>>>
>>>>> > Along the same lines, how does Cassandra handle network partitioning
>>>>> > where two writes for the same key hit two different partitions, neither
>>>>> > of which is able to form a quorum?  Dynamo maintained version vectors
>>>>> > and put the burden on the client to resolve conflicts, but there's no
>>>>> > similar interface in the Thrift API.
>>>>>
>>>>> If you use QUORUM or ALL consistency, neither write will succeed.  If
>>>>> you use ONE, both will, and the one with the higher timestamp will
>>>>> "win" when the partition heals.
>>>>
>>>> Got it.
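
In other words, reconciliation after the partition heals is just
last-write-wins on the client-supplied timestamp; conceptually something
like this (illustrative sketch only, not the actual server code):

// Illustrative sketch only, not Cassandra's internal reconciliation logic:
// of two divergent writes to the same column accepted at ConsistencyLevel.ONE
// during the partition, the one with the higher client-supplied timestamp
// wins once the nodes can see each other again (ties broken arbitrarily here).
class LastWriteWins {
    static byte[] resolve(byte[] valueA, long timestampA,
                          byte[] valueB, long timestampB) {
        return timestampA >= timestampB ? valueA : valueB;
    }
}
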
>>>>
>>>>>
>>>>> -Jonathan
>>>>
>>>> Edmond
>>>
>>
>
