Thanks for the reply Rob.
Date: Thu, 16 Oct 2014 11:46:52 -0700
Subject: Re: validation compaction
From: rc...@eventbrite.com
To: user@cassandra.apache.org
On Thu, Oct 16, 2014 at 6:41 AM, S C wrote:
Bob,
Bob is my father's name. Unless you need a gastrointestinal consult, you
probably don't want to ask "Bob Coli" a question... ;P
On Thu, Oct 16, 2014 at 3:57 PM, Ben Chobot wrote:
> We're wiping the commit logs because that's what the datastax instructions
> say to do. (Also the old Cassandra Ops wiki.) I assume it's so that changes
> that no longer apply to the node aren't replayed when it's restarted with
> old sstables.
On Thu, Oct 16, 2014 at 4:17 PM, Bosung Seo wrote:
> I upgraded my Cassandra ring and restored data(copying snapshots) from the
> old ring. I am currently running the nodetool repair.
> I counted the rows to check that every row is in the table, but the
> counts differ.
> It contains 571 rows, and counts are 500, 530, 501, and so on.
I upgraded my Cassandra ring and restored data(copying snapshots) from the
old ring. I am currently running the nodetool repair.
I counted the rows to check that every row is in the table, but the counts
differ.
It contains 571 rows, and counts are 500, 530, 501, and so on. Should I
wait until the repair completes?
We're wiping the commit logs because that's what the datastax instructions say
to do. (Also the old Cassandra Ops wiki.) I assume it's so that changes that no
longer apply to the node aren't replayed when it's restarted with old sstables.
Of course, my question is about when you have multiple keyspaces.
Thank you very much, Erick.
Yes, we are using NTP. But your other suggestions and links are very
helpful. I tried to grep MigrationStage from system.log and found "Can't
send migration request: node /201.20.32.54 is down." around the time I ran
the CQL, although that server is actually up and running.
On Thu, Oct 16, 2014 at 5:32 PM, Rahul Neelakantan wrote:
> So this would need me to know the partition keys, what if I simply wanted
> to say delete all rows where the timestamp was older than 123456789?
You can't. You'll need to loop over the table and collect the keys.
--
Tyler Hobbs
DataStax
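Tyler's "loop over the table and collect the keys" suggestion can be sketched with an in-memory stand-in for the table. This is purely illustrative: in real code you would page through the partition keys with a driver (e.g. SELECT DISTINCT on the partition key plus WRITETIME on a column) rather than iterate a dict, and the names `table`, `keys_older_than`, and the cutoff value are all made up for the example.

```python
# In-memory sketch of "scan the table, collect the keys, then delete
# per partition". A real implementation would page through the table
# with a Cassandra driver instead of iterating a dict.

CUTOFF = 123456789  # microseconds since the epoch, as in the thread

# table: partition key -> list of (column name, value, write timestamp) cells
table = {
    "pk1": [("name", "a", 123456700), ("name2", "b", 123456780)],
    "pk2": [("name", "c", 123456999)],
    "pk3": [("name", "d", 123456000)],
}

def keys_older_than(table, cutoff):
    """Collect partition keys whose cells were all written at or before cutoff."""
    return [pk for pk, cells in table.items()
            if all(ts <= cutoff for _, _, ts in cells)]

stale = keys_older_than(table, CUTOFF)
for pk in stale:
    # stand-in for: DELETE FROM mytable WHERE partitionkey = <pk>
    del table[pk]

print(sorted(stale))   # -> ['pk1', 'pk3']
print(sorted(table))   # -> ['pk2']
```

Note the check is per partition: a partition with even one newer cell (pk2 here) is kept, which matches why a blind per-partition delete needs the USING TIMESTAMP guard discussed later in the thread.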
So this would need me to know the partition keys, what if I simply wanted to
say delete all rows where the timestamp was older than 123456789?
Rahul Neelakantan
> On Oct 16, 2014, at 6:27 PM, Tyler Hobbs wrote:
>
> For each partition in the table, run:
>
> DELETE FROM mytable WHERE partitionkey=? USING TIMESTAMP 123456789
Hello Cassandra users
We are happy to announce the release of Achilles 3.0.7, an advanced object
mapper built upon the Java driver. Apart from the usual bug fixes and
performance improvements:
- support for multi-keyspaces
- support for naming strategies (snake case, case sensitive & lower case
for schema
For each partition in the table, run:
DELETE FROM mytable WHERE partitionkey=? USING TIMESTAMP 123456789
And it will delete everything older than or equal to 123456789 (in
microseconds since the epoch, if you're using standard timestamps).
On Thu, Oct 16, 2014 at 5:09 PM, Rahul Neelakantan wrote:
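The DELETE ... USING TIMESTAMP behavior Tyler describes can be modeled in a few lines: the delete acts as a tombstone that only shadows cells whose write timestamp is at or below the tombstone's timestamp, so newer writes survive. This is a simplified model of the semantics, not driver code; the function and cell names are invented for the sketch.

```python
# Sketch of DELETE ... USING TIMESTAMP semantics: the tombstone shadows
# only cells written at or before the tombstone's timestamp.

def apply_timestamp_delete(cells, tombstone_ts):
    """Keep only cells written strictly after the tombstone timestamp."""
    return [(name, value, ts) for name, value, ts in cells if ts > tombstone_ts]

partition = [
    ("col_a", 1, 123456000),   # older than the tombstone: removed
    ("col_b", 2, 123456789),   # equal to the tombstone: removed
    ("col_c", 3, 123457000),   # newer write: survives the delete
]

survivors = apply_timestamp_delete(partition, 123456789)
print(survivors)   # -> [('col_c', 3, 123457000)]
```

This is why the per-partition delete is safe to run over every key: concurrent writes newer than 123456789 are untouched.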
Does anyone know of a way to delete rows from C* 1.2.8 based on the timestamp
(time from epoch) that is present on each column in the triplet of name, value
and timestamp? (I do not have a separate date/timestamp column that I insert)
Rahul Neelakantan
On Wed, Oct 15, 2014 at 3:25 PM, Donald Smith <
donald.sm...@audiencescience.com> wrote:
> So, my point is that to avoid the need to bootstrap and to cleanup, it's
> better to bring all nodes up at about the same time. If this is wrong,
> please explain why.
>
Oh, sure. As you say, you avoid having to bootstrap and to run cleanup.
On Wed, Oct 15, 2014 at 10:07 PM, Peter Haggerty wrote:
> The node wrote gigs of data to various CFs during the bootstrap so it
> was clearly "writing" in some sense and it has the expected behavior
> after the bootstrap. Is cfstats correct when it reports that there
> were no writes during a bootstrap?
On Wed, Oct 15, 2014 at 10:46 PM, Umang Shah wrote:
> I am facing many problems after storing a certain number of records in
> Cassandra, and it is giving an OutOfMemoryError.
>
This description is too vague. Be more specific.
=Rob
http://twitter.com/rcolidba
On Thu, Oct 16, 2014 at 6:41 AM, S C wrote:
> Bob,
>
Bob is my father's name. Unless you need a gastrointestinal consult, you
probably don't want to ask "Bob Coli" a question... ;P
> Default compression is Snappy compression and I have seen compression
> ranging between 2-4% (just as the doc says).
On Thu, Oct 16, 2014 at 10:40 AM, Tyler Hobbs wrote:
> The summary files are immutable, but can be replaced periodically. See
> https://issues.apache.org/jira/browse/CASSANDRA-5519 for more details.
>
> The summary files aren't particularly important, they're primarily an
> optimization for startup time.
Thanks Tyler, that explains it.
Sean
On Thu, Oct 16, 2014 at 10:40 AM, Tyler Hobbs wrote:
> The summary files are immutable, but can be replaced periodically. See
> https://issues.apache.org/jira/browse/CASSANDRA-5519 for more details.
>
> The summary files aren't particularly important, they're primarily an
> optimization for startup time.
The summary files are immutable, but can be replaced periodically. See
https://issues.apache.org/jira/browse/CASSANDRA-5519 for more details.
The summary files aren't particularly important, they're primarily an
optimization for startup time.
On Thu, Oct 16, 2014 at 12:20 PM, Sean Bridges
wrote
Hello,
I thought an sstable was immutable once written to disk. Before upgrading
from 1.2.18 to 2.0.10 we took a snapshot of our sstables. Now when I
compare the files in the snapshot dir and the original files, the Summary.db
files have a newer modified date, and the file sizes have changed.
Th
Quorum reads and writes in Cassandra guarantee sequential consistency.
The reason this doesn't satisfy linearizability is that resurrections
of unacknowledged writes can occur. A read of a half-committed write
will trigger synchronous read repair and the order will be stable from
that point forward.
Bob,
Default compression is Snappy compression and I have seen compression ranging
between 2-4% (just as the doc says). I got the storage part. Does it mean that
as a result of compaction/repair SSTables are decompressed? Is it the reason
for CPU utilization spiking up a little?
-SR
From: as...@
To the best of my knowledge, only guaranteed way is with an ACID compliant
system.
The examples others have already provided should give you a decent idea. If
that's not enough, you would need to read papers on CRDTs and how they
compare to ACID systems.
http://highscalability.com/blog/2010/12/23
Hello,
The fact that things can always change immediately is not an obstacle to
linearizability. The lack of linearizability manifests in inconsistency, i.e.
you read from multiple nodes and get different results.
What does Cassandra do in the case of inconsistent reads? Wait or repair?