ed it out, just in case :-).
Good luck with this all,
C*heers,
---
Alain Rodriguez - alain@thelastpickle.com
France

The Last Pickle - Apache Cassandra Consulting
http://www.thelastpickle.com
2016-06-01 17:52 GMT+01:00 Dongfeng Lu :
Alain,
Thanks for responding to my question.
1 & 2: I think it
2016-05-17 0:06 GMT+01:00 Dongfeng Lu :
Forgive me if that has been answered somewhere, but I could not find a concise
or clear answer.
I am using Cassandra 2.0.6 on a 3 node cluster. I don't usually run manual
compaction, and relied completely on Cassandra to automatically do it. A couple
of days ago in preparation for an upgrade to
This should be straightforward, but I would like to have a confirmation from
the experts. I have the following 2 tables,
CREATE TABLE event (
event_id uuid,
... 38 attributes ...
PRIMARY KEY (event_id)
)
CREATE TABLE event_index (
index_key text,
time_token timeuuid,
event_id uuid
If you can construct unique primary keys from the data you have, I'd suggest
you create your own custom primary keys instead of using UUIDs. It will be
easier for you to retrieve the records.
If you use UUIDs as your primary keys for a table, you need to have some kind
of index so that you can
You can use java.util.UUID.timestamp(), which returns the raw timestamp as a long.
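One caveat worth noting: UUID.timestamp() on a version-1 (time-based) UUID returns the count of 100-nanosecond intervals since the UUID epoch (1582-10-15), not Unix milliseconds, so a conversion is needed. Here is a minimal sketch of that conversion in plain Java (the constant and bit layout follow RFC 4122; the class and method names are just illustrative):

```java
import java.util.UUID;

public class TimeuuidToTimestamp {
    // 100-ns intervals between the UUID epoch (1582-10-15) and the Unix epoch (1970-01-01)
    private static final long UUID_EPOCH_OFFSET = 0x01b21dd213814000L;

    /** Converts a version-1 (time-based) UUID to Unix epoch milliseconds. */
    public static long unixTimestamp(UUID uuid) {
        // UUID.timestamp() throws UnsupportedOperationException for non-v1 UUIDs,
        // so this only works on Cassandra timeuuid values.
        return (uuid.timestamp() - UUID_EPOCH_OFFSET) / 10_000L;
    }

    public static void main(String[] args) {
        // Build a synthetic v1 UUID whose timestamp is exactly the Unix epoch,
        // just to demonstrate the conversion without a Cassandra connection.
        long ts = UUID_EPOCH_OFFSET;                        // 60-bit v1 timestamp
        long msb = ((ts & 0xFFFFFFFFL) << 32)               // time_low
                 | (((ts >>> 32) & 0xFFFFL) << 16)          // time_mid
                 | (1L << 12)                               // version = 1
                 | ((ts >>> 48) & 0x0FFFL);                 // time_hi
        UUID timeuuid = new UUID(msb, 0x8000000000000000L); // variant bits, zero node
        System.out.println(unixTimestamp(timeuuid));        // prints 0 (1970-01-01T00:00:00Z)
    }
}
```

If the DataStax Java driver is already on your classpath, com.datastax.driver.core.utils.UUIDs.unixTimestamp(uuid) does the same conversion for you, and CQL's dateOf()/toTimestamp() do the equivalent server-side.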
On Sunday, November 15, 2015 9:20 AM, Marlon Patrick
wrote:
Hi guys,
Is there any way to convert a timeuuid in timestamp (dateOf) programmatically
using DataStax java driver?
--
Best regards,
Marlon Patrick
By doing so, and enabling unchecked_tombstone_compaction, you could encourage
Cassandra to compact a single large SSTable on its own to purge tombstones.
From: on behalf of Erick Ramirez
Reply-To: "user@cassandra.apache.org"
Date: Sunday, September 27, 2015 at 11:59 PM
To: "user@cassandra.ap
Hi, I have a table where I set the TTL to only 7 days for all records, and we keep
pumping records in every day. In general, I would expect all data files for
that table to have timestamps less than, say, 8 or 9 days old, giving the system
some time to work its magic. However, I see some files more tha