There’s a company using TWCS in this config - I’m not going to out them, but I
think they do it (or used to) with aggressive tombstone sub-properties. They
may have since extended/enhanced it somewhat.
--
Jeff Jirsa
> On Feb 16, 2018, at 2:24 PM, Carl Mueller
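The "aggressive tombstone sub properties" mentioned above might look something like this — a hypothetical sketch only; the table name and the specific values are illustrative assumptions, not from the message:

```sql
-- Sketch: TWCS with aggressive tombstone sub-properties (values are assumptions).
ALTER TABLE events.by_second WITH compaction = {
    'class': 'TimeWindowCompactionStrategy',
    'compaction_window_unit': 'HOURS',
    'compaction_window_size': '1',
    'tombstone_threshold': '0.05',            -- compact an sstable alone once 5% is tombstones
    'tombstone_compaction_interval': '3600',  -- consider it after 1 hour, not the 1-day default
    'unchecked_tombstone_compaction': 'true'  -- skip the overlap pre-check
  };
```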
An even MORE complicated version could address the case where the TTLs are
at the column key rather than the row key. That would divide the row across
sstables by the rowkey, in essence the opposite of what most compaction
strategies try to do: eventually centralize the data for a rowkey in one
Oh and as a further refinement outside of our use case.
If we could group/organize the sstables by the row key's time value or
inherent TTL value, the naive version would use evenly distributed buckets
into the future.
But many/most data patterns like this have "busy" data in the near term.
Far out
We have a scheduler app here at smartthings, where we track per-second
tasks to be executed.
These are all TTL'd to be destroyed once the second the event was
registered for has passed.
If the scheduling window was sufficiently small, say, 1 day, we could
probably use a time window compaction
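For a 1-day scheduling window like the one described, the TWCS setup might be sketched as follows — the keyspace, table, and column names are assumptions, not from the original message:

```sql
-- Sketch: per-second tasks with a 1-day TTL under hourly TWCS windows,
-- so whole expired sstables can be dropped instead of compacted away.
CREATE TABLE scheduler.tasks (
    bucket   timestamp,   -- coarse time bucket (partition key)
    fire_at  timestamp,   -- the second the task should run
    task_id  uuid,
    payload  blob,
    PRIMARY KEY (bucket, fire_at, task_id)
) WITH compaction = {
    'class': 'TimeWindowCompactionStrategy',
    'compaction_window_unit': 'HOURS',
    'compaction_window_size': '1'
  }
  AND default_time_to_live = 86400;  -- 1 day, matching the scheduling window
```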
re: the tombstone sstables being read-only inputs to compaction, there
would be one case the non-tombstone sstables would input to the compaction
of the row tombstones: when the row no longer exists in any of the data
sstables with respect to the row tombstone timestamp.
There may be other
Hi,
I created https://issues.apache.org/jira/browse/CASSANDRA-14241 for this issue.
You are right that there is a solid chunk of failing tests on Apache
infrastructure that don't fail on CircleCI. I'll find someone to get it done.
I think that fix before commit is only going to happen if we go all
PLEASE READ: MAXIMUM TTL EXPIRATION DATE NOTICE (CASSANDRA-14092)
--
The maximum expiration timestamp that can be represented by the storage
engine is 2038-01-19T03:14:06+00:00, which means that inserts with TTL
that expire after
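The cutoff comes from the signed 32-bit epoch limit. A quick way to see this (a sketch; `max_safe_ttl` is an illustrative helper, not a Cassandra API):

```python
from datetime import datetime, timezone

# The storage engine caps expiration at 2038-01-19T03:14:06+00:00,
# one second before the signed 32-bit epoch rollover (CASSANDRA-14092).
MAX_EXPIRATION = datetime(2038, 1, 19, 3, 14, 6, tzinfo=timezone.utc)

def max_safe_ttl(now: datetime) -> int:
    """Largest TTL in seconds whose expiration still fits in the storage engine."""
    return int((MAX_EXPIRATION - now).total_seconds())

# The cap sits one second below the 2**31 - 1 rollover:
assert int(MAX_EXPIRATION.timestamp()) == 2**31 - 2
```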
With 10 binding +1, 1 non-binding +1, and no other votes, this vote for
2.2.12 passes. I'll upload the artifacts today.
--
Kind regards,
Michael
On 02/12/2018 02:30 PM, Michael Shuler wrote:
> I propose the following artifacts for release as 2.2.12.
>
> sha1:
With 9 binding +1 and no other votes, the 2.1.20 release passes. I will
get the artifacts uploaded today.
--
Kind regards,
Michael
On 02/12/2018 02:30 PM, Michael Shuler wrote:
> I propose the following artifacts for release as 2.1.20.
>
> sha1: b2949439ec62077128103540e42570238520f4ee
> Git:
Hi,
I'm ecstatic others are now running the tests and, more importantly, that
we're having the conversation.
I've become convinced we cannot always have 100% green tests. I am reminded
of this [1] blog post from Google when thinking about flaky tests.
The TL;DR is "flakiness happens", to the
+1
On 2018-02-14 21:40, Michael Shuler wrote:
I propose the following artifacts for release as 3.0.16.
sha1: 890f319142ddd3cf2692ff45ff28e71001365e96
Git:
http://git-wip-us.apache.org/repos/asf?p=cassandra.git;a=shortlog;h=refs/tags/3.0.16-tentative
Artifacts:
+1
On 2018-02-14 22:09, Michael Shuler wrote:
I propose the following artifacts for release as 3.11.2.
sha1: 1d506f9d09c880ff2b2693e3e27fa58c02ecf398
Git:
http://git-wip-us.apache.org/repos/asf?p=cassandra.git;a=shortlog;h=refs/tags/3.11.2-tentative
Artifacts:
I'm new to this project and here are my two cents.
If there are tests that are constantly failing or flaky and you have had
releases despite their failures, then they're not useful and can be disabled.
They can always be reenabled if they are in fact valuable. Having 100% blue
dashboard is not