Hi Ben
Thanks a lot. From my analysis of the code it looks like you are right.
When global read repair kicks in, all live endpoints are queried for data,
regardless of consistency level. Only EACH_QUORUM is treated differently.
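To illustrate the rule described above, here is a minimal sketch (hypothetical helper names, not Cassandra's actual internals): on a global read repair, every live replica is queried whatever the consistency level, while EACH_QUORUM restricts the set to a quorum per data center.

```python
# Sketch of the target-selection rule (illustrative names only).
def repair_targets(live_replicas_by_dc, consistency_level):
    """live_replicas_by_dc: dict mapping DC name -> list of live replicas."""
    if consistency_level == "EACH_QUORUM":
        # Only EACH_QUORUM limits the repair to a quorum in each DC.
        targets = []
        for dc, replicas in live_replicas_by_dc.items():
            quorum = len(replicas) // 2 + 1
            targets.extend(replicas[:quorum])
        return targets
    # Any other CL (ONE, QUORUM, ALL, ...): query every live endpoint.
    return [r for replicas in live_replicas_by_dc.values() for r in replicas]
```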
Cheers
Grzegorz
2018-04-22 1:45 GMT+02:00 Ben Slater
I haven't checked the code to make sure this is still the case but last
time I checked:
- For any read, if an inconsistency between replicas is detected then this
inconsistency will be repaired. This obviously wouldn’t apply with CL=ONE
because you’re not reading multiple replicas to find an inconsistency.
I wasn't asking about "regular" repairs. I just wanted to know how read
repair behaves in my configuration (or whether it does anything at all).
2018-04-21 14:04 GMT+02:00 Rahul Singh :
Read repairs are one anti-entropy measure. Continuous repairs are another. If
you do repairs via Reaper or your own method it will resolve your discrepancies.
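As a toy illustration of what resolving those discrepancies amounts to (not Reaper's actual mechanism), anti-entropy repair boils down to comparing replicas and keeping the newest version of each mismatched key:

```python
# Toy last-write-wins reconciliation between two replicas.
def repair(replica_a, replica_b):
    """Each replica: dict key -> (timestamp, value). Newest timestamp wins."""
    for key in set(replica_a) | set(replica_b):
        a = replica_a.get(key, (-1, None))   # missing key: sentinel timestamp
        b = replica_b.get(key, (-1, None))
        newest = max(a, b)                   # tuples compare by timestamp first
        replica_a[key] = newest
        replica_b[key] = newest
```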
On Apr 21, 2018, 3:16 AM -0400, Grzegorz Pietrusza wrote:
> Hi all
>
> I'm a bit confused with how read repair
On Wed, Jul 8, 2015 at 2:07 PM, Saladi Naidu naidusp2...@yahoo.com wrote:
Suppose I have a row of existing data with a set of values for attributes (I
call this State1), and issue an update to some columns with Quorum
consistency. If the write succeeded on one node, Node1, and failed on
Subject: Re: Read Repair
From: rc...@eventbrite.com
To: user@cassandra.apache.org; naidusp2...@yahoo.com
The request would return with the latest data.
The read request would fire against node 1 and node 3. The coordinator would
get answers from both and would merge the answers and return the latest.
Then read repair might run to update node 3.
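The coordinator's merge step described above can be sketched like this (illustrative only, with hypothetical names): collect one response per replica, return the latest, and flag stale replicas for read repair.

```python
# Sketch of the coordinator merging replica responses (last write wins).
def coordinator_read(responses):
    """responses: dict node -> (timestamp, value) for the requested cell.
    Returns the merged (latest) answer plus the nodes needing read repair."""
    latest = max(responses.values())          # newest timestamp wins
    stale = [node for node, resp in responses.items() if resp != latest]
    return latest, stale
```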
QUORUM does not take into consideration whether an
On Wed, Nov 19, 2014 at 4:51 PM, Jimmy Lin y2klyf+w...@gmail.com wrote:
Tyler,
thanks for the detailed explanation.
Still have a few questions in my mind:
#
When you said send read digest request to the rest of the replicas, do you
mean all replicas in the current and other DCs? Or just the one last replica
in my current DC and one coordinator node in the other DC?
On Sun, Nov 16, 2014 at 5:13 PM, Jimmy Lin y2klyf+w...@gmail.com wrote:
I have read that read repair is supposed to run in the background, but
does the coordinator node need to wait for the response (along with other
normal read tasks) before returning the entire result back to the caller?
For
Yes, it helps. Thanks
--- Original Message ---
From: Aaron Morton aa...@thelastpickle.com
Sent: October 31, 2013 3:51 AM
To: Cassandra User user@cassandra.apache.org
Subject: Re: Read repair
(assuming RF 3 and NTS is putting a replica in each rack)
Rack1 goes down and some writes happen
hour and 30 mins,
there is no quorum until failed rack comes back up.
Hope this explains the scenario.
From: Aaron Morton aa...@thelastpickle.com
Sent: 10/28/2013 2:42 AM
To: Cassandra User user@cassandra.apache.org
Subject: Re: Read repair
As soon as it came back up, due to some human error, rack1 goes down. Now for
some rows it is possible that Quorum cannot be established.
Not sure I follow here.
If the first rack has come back up I assume all nodes are available; if you
then lose a different rack I assume you have 2/3 of the
Hi Aaron,
Many thanks for your reply - answers below.
Cheers,
Brian
What CL are you using for reads and writes?
I would first build a test case to ensure correct operation when using strong
consistency, i.e. QUORUM write and read. Because you are using RF 2 per DC I
assume you are
CL.ONE : this is primarily for performance reasons …
This makes reasoning about correct behaviour a little harder.
If there is any way you can run some tests with R + W > N strong consistency I
would encourage you to do so. You will then have a baseline of what works.
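The R + W > N rule above is just arithmetic: reads are guaranteed to overlap the most recent write only when the read and write replica counts together exceed the replication factor. A tiny check:

```python
# R + W > N: read and write replica sets must overlap for strong consistency.
def strongly_consistent(r, w, n):
    return r + w > n

assert strongly_consistent(2, 2, 3)      # QUORUM write + QUORUM read at RF 3
assert not strongly_consistent(1, 1, 2)  # CL.ONE both ways with RF 2 per DC
```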
(say I make 100 requests
I’d request data, nothing would be returned, I would then re-request the data
and it would correctly be returned:
What CL are you using for reads and writes?
I see a number of dropped ‘MUTATION’ operations : just under 5% of the total
‘MutationStage’ count.
Dropped mutations in a multi
The 10 days is actually configurable... look into gc_grace.
Basically, you always need to run repair once per gc_grace period.
You won't see empty/deleted rows go away until they're compacted away.
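A toy model of the timing involved (illustrative only; 864000 seconds is the stock gc_grace default): once a replica has been unreachable longer than gc_grace, the tombstone it missed may already have been compacted away, so it can resurrect the "deleted" row.

```python
GC_GRACE_SECONDS = 864000  # 10 days, the default gc_grace period

# A replica down longer than gc_grace can "forget" a delete: the other
# nodes may have purged the tombstone before it ever saw it.
def can_forget_delete(node_downtime_seconds):
    return node_downtime_seconds > GC_GRACE_SECONDS

assert not can_forget_delete(3600)        # short outage: tombstone still around
assert can_forget_delete(11 * 24 * 3600)  # down 11 days: delete can be lost
```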
On Mon, Oct 1, 2012 at 6:32 PM, Hiller, Dean dean.hil...@nrel.gov wrote:
I know there is a 10 day
Thanks (actually I knew it was configurable), BUT what I don't get is why I
have to run a repair. IF all nodes became consistent on the delete, it
should not be possible to get a forgotten delete, correct? The forgotten
delete will only occur if I have a node down and out for 10 days and it
comes
Oh, and I have been reading Aaron Morton's article here
http://thelastpickle.com/2011/05/15/Deletes-and-Tombstones/
On 10/1/12 12:46 PM, Hiller, Dean dean.hil...@nrel.gov wrote:
Thanks (actually I knew it was configurable), BUT what I don't get is why I
have to run a repair. IF all nodes became
sorry to be dense, but which is it? do i get the old version or the new
version? or is it indeterminate?
On 02/02/2012 01:42, Peter Schuller wrote:
Indeterminate, depending on which nodes happen to be participating in
the read. Eventually you should get the new version, unless the node
that took the new version permanently crashed
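The "indeterminate" outcome can be sketched directly: with RF 3 and clocks {node1: 10, node2: 5, node3: 5}, a QUORUM read touches any 2 of the 3 replicas, so the answer depends on whether node1 happens to participate.

```python
from itertools import combinations

# Replica clocks from the example: node1 took the write, node2/node3 did not.
clocks = {"node1": 10, "node2": 5, "node3": 5}

def quorum_read(participants):
    """Coordinator returns the newest clock among the responding replicas."""
    return max(clocks[n] for n in participants)

# Try every possible 2-of-3 quorum: some include node1 (clock 10), some don't.
results = {quorum_read(pair) for pair in combinations(clocks, 2)}
assert results == {5, 10}   # i.e. the answer is indeterminate
```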
i have RF=3, my row/column lives on 3 nodes right? if (for some reason, eg
a timed-out write at quorum) node 1 has a 'new' version of the row/column
(eg clock = 10), but node 2 and 3 have 'old' versions (clock = 5), when i
try to read my row/column at quorum, what do i get back?
You either
The digest is based on the results of the same query as applied on
different replicas. See the following for more details:
http://wiki.apache.org/cassandra/ReadRepair
http://www.datastax.com/docs/1.0/dml/data_consistency
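A rough illustration of the digest idea (hashing the serialized result set; not Cassandra's exact wire format or hash): matching results produce matching digests, so the coordinator can detect divergence without shipping full data.

```python
import hashlib

# Hypothetical digest over a query's result rows; any divergence in a
# replica's answer (value, timestamp, extra row) changes the hash.
def result_digest(rows):
    h = hashlib.md5()
    for row in sorted(rows):
        h.update(repr(row).encode())
    return h.hexdigest()
```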
On Wed, Nov 30, 2011 at 11:38 PM, Thorsten von Eicken
t...@rightscale.com
Sent: Monday, December 27, 2010 6:59 PM
To: user
Subject: Re: read repair across datacenters?
https://issues.apache.org/jira/browse/CASSANDRA-982
On Mon, Dec 27, 2010 at 5:55 PM, Shu Zhang szh...@mediosystems.com wrote:
Brandon, for a read with quorum CL, a response is returned to the client
after half
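For reference, "half" here means a strict majority: the number of replies a QUORUM coordinator waits for is floor(RF/2) + 1.

```python
# QUORUM reply count: a strict majority of the replication factor.
def quorum(rf):
    return rf // 2 + 1

assert quorum(3) == 2   # coordinator responds after 2 of 3 replies
assert quorum(5) == 3
assert quorum(6) == 4
```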