Yup, all repairs are complete. I'm reading at a CL of ONE pretty much
everywhere.
Caleb Rackliffe | Software Developer
From: Aaron Morton aa...@thelastpickle.com
Yes, continued deletions of the same columns/rows will prevent them from being
removed from the final sstable upon compaction, because each deletion writes a
new timestamp. You get a sliding tombstone gc_grace window in that case.
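The sliding effect can be sketched in a few lines of Python (the constant and function names are illustrative, not Cassandra internals): each re-delete refreshes the tombstone's deletion time, so the gc_grace clock restarts from the latest delete.

```python
GC_GRACE_SECONDS = 10 * 24 * 3600  # default gc_grace_seconds (10 days)

def can_purge(local_deletion_time: int, now: int) -> bool:
    # A tombstone may only be purged once gc_grace has elapsed
    # since its *latest* deletion time.
    return now - local_deletion_time >= GC_GRACE_SECONDS

t0 = 0
deletion_time = t0                   # first delete of the row
deletion_time = t0 + 5 * 24 * 3600   # same row deleted again 5 days later

# 10 days after the first delete the tombstone still cannot be purged,
# because the second delete slid the window forward.
assert not can_purge(deletion_time, t0 + 10 * 24 * 3600)
# Only 10 days after the *latest* delete does it become purgeable.
assert can_purge(deletion_time, t0 + 15 * 24 * 3600)
```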
During compaction of the selected sstables, Cassandra checks the whole column
family for the latest timestamp of the column/row, including the other
sstables and the memtable.
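That check can be sketched like this (a simplification with made-up names, not the real compaction code): a tombstone in the sstables being compacted can only be dropped if it has expired and no sstable or memtable outside the compaction could still hold data it shadows.

```python
def can_drop_tombstone(tombstone_ts, gc_before, outside_min_timestamps):
    # tombstone_ts: write timestamp of the row/column tombstone
    # gc_before: deletions older than this are past gc_grace
    # outside_min_timestamps: minimum timestamps of sstables/memtables
    #   that are NOT part of this compaction
    expired = tombstone_ts < gc_before
    # If any outside sstable may contain data older than the tombstone,
    # dropping it now could resurrect that data later.
    shadows_nothing_outside = all(ts > tombstone_ts
                                  for ts in outside_min_timestamps)
    return expired and shadows_nothing_outside

# Expired, and all outside data is newer than the tombstone: droppable.
assert can_drop_tombstone(tombstone_ts=100, gc_before=200,
                          outside_min_timestamps=[150])
# An outside sstable holds older data the tombstone still shadows: keep it.
assert not can_drop_tombstone(tombstone_ts=100, gc_before=200,
                              outside_min_timestamps=[50])
```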
Hello
I am new to Cassandra, and when I run tpstats on my node (Cassandra 1.0.7) I
get the following output:

Pool Name        Active  Pending  Completed  Blocked  All time blocked
ReadStage             0        0         12        0                 0
I wonder why the memtable estimations are so bad.
1. Is it not possible to run them more often? There should be some limit:
run the live/serialized calculation at least once per hour. It takes just
a few seconds.
2. Why not use data from the FlushWriter to update the estimations? The
flusher knows the number of ops.
You are explaining that if I have an expired row tombstone and there exists a
later timestamp on this row, the tombstone is not deleted? If it works that
way, it will never be deleted.
Exactly. It is merged with the new one.
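The merge itself is trivial in spirit (a sketch, not the real code): when two tombstones for the same row meet, the newer deletion time wins, which is exactly what keeps the gc_grace window sliding.

```python
def merge_tombstones(deletion_time_a, deletion_time_b):
    # Two tombstones for the same row collapse into one;
    # the newest deletion time survives the merge.
    return max(deletion_time_a, deletion_time_b)

assert merge_tombstones(100, 250) == 250
```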
Example 1: a row with 1 column in an sstable. Delete the row, not the column.

at T1: write the column
at T2: delete the row
at T3: the tombstone expires; compact (T1 + T2) and drop the expired tombstone

Will the column from T1 be alive again?
It should not be.
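A sketch of why the column cannot come back, assuming both the T1 column and the T2 row tombstone are in the sstables being compacted (the function and names are illustrative):

```python
def compact_row(column_ts, tombstone_ts, gc_before):
    # A row tombstone shadows every column written at or before it.
    column_alive = column_ts > tombstone_ts
    tombstone_expired = tombstone_ts < gc_before
    if column_alive:
        return ["column", "tombstone"] if not tombstone_expired else ["column"]
    if tombstone_expired:
        return []  # column and tombstone are dropped in the same compaction
    return ["tombstone"]  # still inside gc_grace: keep shadowing the column

# Write at T1=1, delete the row at T2=2, compact after expiry (gc_before=3):
# the shadowed column is dropped together with the tombstone, so nothing
# remains that could bring the T1 column back to life.
assert compact_row(column_ts=1, tombstone_ts=2, gc_before=3) == []
```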
Scenario 1: write and delete in one memtable
T1: write column
T2: delete row
T3: flush memtable; sstable 1 contains an empty row tombstone
T4: row tombstone expires
T5: compaction/cleanup; the row disappears from the resulting sstable 2
Scenario 2: write and delete in different sstables
T1 write column
T2 flush
I'm dealing with a similar issue, with an additional complication. We are
collecting time series data, and the amount of data per time period varies
greatly. We will collect and query event data by account, but the biggest
account will accumulate about 10,000 times as much data per time period as
The main issue turned out to be a bug in our code whereby we were writing a
lot of new columns to the same row key instead of a new row key, turning
what we expected to be a skinny-rowed CF into a CF with one very, very wide
row. These writes on the single key were putting pressure on the 3 nodes
I think you want
assume UserDetails validator as bytes;
On 03/23/2012 08:09 PM, Drew Kutcharian wrote:
Hi Everyone,
I'm having an issue with cassandra-cli's assume command with a custom type. I
tried it with the built-in BytesType and got the same error:
[default@test] assume UserDetails
I actually have a custom type, I put the BytesType in the example to
demonstrate the issue is not with my custom type.
-- Drew