On Mon, Dec 14, 2015 at 10:53 PM, Vladimir Prudnikov
wrote:
> Save money. I don’t have huge enterprise behind me nor investor’s money on
> my bank account. I just created an app and want to launch it and see if it
> is what users will use and pay for. Once I get users
Yes... I agree with Rob here. I don't see much benchmarking required for
versions of Cassandra that aren't actively supported by the committers.
On Tue, Dec 15, 2015 at 10:52 AM Robert Coli wrote:
> On Tue, Dec 15, 2015 at 6:28 AM, Andy Kruth wrote:
>
We are encountering a situation in our environment (a 6-node Cassandra
ring) where we are trying to insert a row and then immediately update it,
using LOCAL_QUORUM consistency (replication factor = 3). I have replicated
the issue using the following code:
On Mon, Dec 14, 2015 at 10:53 PM, Vladimir Prudnikov
wrote:
> Is it hard to start with 3 nodes on one server running in docker and then
> just move 2 nodes to the separate servers?
>
FWIW, if you *absolutely knew* that you were going to need the scale and
for some reason
Philip,
I don't see the benefit of a multi-DC C* cluster in this case. What
you need is two separate C* clusters, using Kafka to record and replay writes to
the DR cluster. The DR cluster only receives writes from the Kafka consumer. You
won't need to deal with "Removing everything from Cassandra that -isn't- in Kafka".
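To make the record/replay pattern concrete, here's a toy simulation in plain Python (illustrative only — the deque stands in for a Kafka topic and the dicts for the two C* clusters; all names are hypothetical):

```python
from collections import deque

topic = deque()  # stands in for a Kafka topic

def write(primary, key, value):
    """Application write path: apply to the primary cluster and record to the topic."""
    primary[key] = value
    topic.append((key, value))

def replay(dr):
    """DR consumer: the DR cluster only ever receives writes from the topic."""
    while topic:
        key, value = topic.popleft()
        dr[key] = value

primary, dr = {}, {}
write(primary, "user:1", "alice")
write(primary, "user:2", "bob")
replay(dr)
assert dr == primary  # DR converges to exactly what was recorded in the topic
```

Because the DR side is fed only from the topic, anything not in Kafka simply never reaches DR, which is why the cleanup problem disappears.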
On
What cassandra and driver versions are you running?
It may be that the second update is getting the same timestamp as the
first, or even a lower timestamp if it's being processed by another server
with an unsynced clock, so that update may be getting lost.
If you have high frequency updates in the
On Tue, Dec 15, 2015 at 2:57 PM Paulo Motta
wrote:
> What cassandra and driver versions are you running?
>
>
We are using 2.1.7.1
> It may be that the second update is getting the same timestamp as the
> first, or even a lower timestamp if it's being processed by
On Tue, Dec 15, 2015 at 6:28 AM, Andy Kruth wrote:
> We are trying to decide how to proceed with development and support of
> YCSB bindings for older versions of Cassandra, namely Cassandra 7, 8, and
> 10.
>
> We would like to continue dev and support on these if the use of
On Tue, Dec 15, 2015 at 11:15 AM, Jonathan Haddad wrote:
> If I had to choose between running 3x docker instances and 1x instance on
> a single server, I'd choose the single one. Instead of dealing with RF
> changing nonsense I'd just set up a 2nd data center w/ 3 nodes and
If I had to choose between running 3x docker instances and 1x instance on a
single server, I'd choose the single one. Instead of dealing with RF
changing nonsense I'd just set up a 2nd data center w/ 3 nodes and move to
that when you're ready. No downtime, easy.
With that said - Starting off
I agree with Jon. It's almost a statistical certainty that such updates
will be processed out of order some of the time because the clock sync
between machines will never be perfect.
Depending on how your actual code that shows this problem is structured,
there are ways to reduce or eliminate
High-volume updates to a single key in a distributed system that relies on
timestamps for conflict resolution are not a particularly great idea. If
you ever do this from multiple clients you'll find unexpected results at
least some of the time.
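The failure mode is easy to see with a toy last-write-wins simulation (plain Python, not driver code — Cassandra reconciles conflicting cells by highest timestamp, breaking exact ties on the value):

```python
def reconcile(existing, incoming):
    """Last-write-wins: keep the (timestamp, value) cell with the higher
    timestamp; on an exact timestamp tie, the greater value wins."""
    if incoming[0] > existing[0]:
        return incoming
    if incoming[0] == existing[0] and incoming[1] > existing[1]:
        return incoming
    return existing

# The insert is stamped by a coordinator whose clock reads 1000.
cell = (1000, "inserted")
# The immediate update is processed by a node whose clock lags at 999.
cell = reconcile(cell, (999, "updated"))
print(cell[1])  # prints "inserted" — the later update is silently lost
```

This is why explicitly setting client-side timestamps (or avoiding rapid-fire updates to one key entirely) matters when clocks are only approximately synced.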
On Tue, Dec 15, 2015 at 12:41 PM Paulo Motta
It should all just work as expected, as if by magic. That's the whole point
of having MV, so that Cassandra does all the bookkeeping for you. Yes, the
partition key can change, so an update to the base table can cause one (or
more) MV rows to be deleted and one (or more) new MV rows to be created.
Can a core Cassandra committer verify whether removing the compactions_in_progress
folder is indeed the desired and recommended solution to this problem, or
whether it might in fact be a bug that this workaround is needed at all?
Thanks!
-- Jack Krupansky
On Thu, Dec 10, 2015 at 5:34 PM, Mikhail
On Tue, Dec 15, 2015 at 4:41 PM, Jack Krupansky
wrote:
> Can a core Cassandra committer verify whether removing the compactions_in_progress
> folder is indeed the desired and recommended solution to this problem, or
> whether it might in fact be a bug that this workaround is
why don't you just try it?
On Tue, Dec 15, 2015 at 6:30 PM, Will Zhang
wrote:
> Hi all,
>
> I originally raised this on SO, but wasn't really getting any answers there,
> so I thought I'd give it a try here.
>
>
> Just thinking about this so please correct my understanding if any
In the case of an update to the source table where data is changed, a
tombstone will be generated for the old value and an insert will be
generated for the new value. This happens serially for the source
partition, so if there are multiple updates to the same partition, a
tombstone will be
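The delete-then-insert bookkeeping can be sketched with a toy model (plain Python; table and column names are hypothetical): a base table keyed by id, a view keyed by a `status` column, where changing `status` tombstones the old view row and inserts a new one, serially per base partition:

```python
base, view = {}, {}

def update_base(row_id, status):
    """Apply a base-table update and return the view-maintenance operations
    it generates: a tombstone for the old view row (if the view key changed)
    followed by an insert for the new view row."""
    ops = []
    old = base.get(row_id)
    if old is not None and old != status:
        ops.append(("tombstone", (old, row_id)))  # delete the old view row
        del view[(old, row_id)]
    base[row_id] = status
    ops.append(("insert", (status, row_id)))      # insert the new view row
    view[(status, row_id)] = True
    return ops

update_base(1, "active")
ops = update_base(1, "inactive")
assert ops == [("tombstone", ("active", 1)), ("insert", ("inactive", 1))]
assert list(view) == [("inactive", 1)]
```

The simulation shows why frequent updates to the same base partition generate a stream of view tombstones: every change to the view-key column costs one delete plus one insert on the view side.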
We are trying to decide how to proceed with development and support of YCSB
bindings for older versions of Cassandra, namely Cassandra 7, 8, and 10.
We would like to continue dev and support on these if the use of those
versions of Cassandra is still prevalent. If not, then a deprecation cycle
I assume you mean Cassandra 0.7, 0.8, and 1.0? I think most users are on
2.x now, but I don't have any stats.
--
Michael Mior
michael.m...@gmail.com
2015-12-15 9:28 GMT-05:00 Andy Kruth :
> We are trying to decide how to proceed with development and support of
> YCSB bindings