Re: Question about compaction strategy changes

2016-10-23 Thread kurt Greaves
​More compactions meaning "actual number of compaction tasks". A compaction
task generally operates on many SSTables (how many depends on the chosen
compaction strategy). The number of pending tasks does not line up with the
number of SSTables that will be compacted. 1 task may compact many SSTables.
If your pending tasks are jumping "into the thousands" you're quite
possibly flushing data from memtables faster than you can compact them.
Ideally your pending compactions shouldn't really go above 10 (or 5 even),
and if they are you're possibly overloading the cluster.


CommitLogReadHandler$CommitLogReadException: Unexpected error deserializing mutation

2016-10-23 Thread Ali Akhtar
I have a single node cassandra installation on my dev laptop, which is used
just for dev / testing.

Recently, whenever I restart my laptop, Cassandra fails to start when I run
it via 'sudo service cassandra start'.

Doing a tail on /var/log/cassandra/system.log gives this log:

INFO  [main] 2016-10-24 07:08:02,950 CommitLog.java:166 - Replaying
/var/lib/cassandra/commitlog/CommitLog-6-1476907676969.log,
/var/lib/cassandra/commitlog/CommitLog-6-1476907676970.log,
/var/lib/cassandra/commitlog/CommitLog-6-1477269052845.log
ERROR [main] 2016-10-24 07:08:03,357 JVMStabilityInspector.java:82 - Exiting
due to error while processing commit log during initialization.
org.apache.cassandra.db.commitlog.CommitLogReadHandler$CommitLogReadException:
Unexpected error deserializing mutation; saved to
/tmp/mutation9186356142128811141dat.  This may be caused by replaying a
mutation against a table with the same name but incompatible schema.
Exception follows: org.apache.cassandra.serializers.MarshalException: Not
enough bytes to read 0th field board_id
    at org.apache.cassandra.db.commitlog.CommitLogReader.readMutation(CommitLogReader.java:410) [apache-cassandra-3.9.jar:3.9]
    at org.apache.cassandra.db.commitlog.CommitLogReader.readSection(CommitLogReader.java:343) [apache-cassandra-3.9.jar:3.9]
    at org.apache.cassandra.db.commitlog.CommitLogReader.readCommitLogSegment(CommitLogReader.java:202) [apache-cassandra-3.9.jar:3.9]
    at org.apache.cassandra.db.commitlog.CommitLogReader.readAllFiles(CommitLogReader.java:85) [apache-cassandra-3.9.jar:3.9]
    at org.apache.cassandra.db.commitlog.CommitLogReplayer.replayFiles(CommitLogReplayer.java:135) [apache-cassandra-3.9.jar:3.9]
    at org.apache.cassandra.db.commitlog.CommitLog.recoverFiles(CommitLog.java:187) [apache-cassandra-3.9.jar:3.9]
    at org.apache.cassandra.db.commitlog.CommitLog.recoverSegmentsOnDisk(CommitLog.java:167) [apache-cassandra-3.9.jar:3.9]
    at org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:323) [apache-cassandra-3.9.jar:3.9]
    at org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:601) [apache-cassandra-3.9.jar:3.9]
    at org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:730) [apache-cassandra-3.9.jar:3.9]


I then have to do 'sudo rm -rf /var/lib/cassandra/commitlog/*' which fixes
the problem, but then I lose all of my data.

It looks like it's saying there wasn't enough data to read the field
'board_id'. Any ideas why that would be?
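
(For what it's worth, the error's own hint - "a table with the same name but
incompatible schema" - usually points at a table having been dropped and
recreated with a changed layout while old commitlog segments still hold
mutations written against the previous layout. A purely hypothetical CQL
sequence that could produce that situation; the table name and column types
below are assumptions, not taken from the reporter's schema:

CREATE TABLE boards (board_id text PRIMARY KEY, name text);
-- writes to boards land in the commitlog but are never flushed to SSTables
DROP TABLE boards;
CREATE TABLE boards (board_id uuid PRIMARY KEY, name text);
-- on the next restart, replaying the old mutations against the recreated,
-- incompatible schema can fail with a MarshalException like the one above

Running nodetool drain before stopping the node flushes memtables and retires
the commitlog segments, so there is nothing stale to replay at startup.)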


Re: Question about compaction strategy changes

2016-10-23 Thread Seth Edwards
More compactions meaning "rows to be compacted" or the actual number of pending
compactions? I assumed that when I run nodetool compactionstats the number of
pending tasks would line up with the number of SSTables that will be compacted.
Most of the time this is idle, then we hit spots where it can jump into
the thousands and we end up being short a few hundred GB of disk space.

On Sun, Oct 23, 2016 at 5:49 PM, kurt Greaves  wrote:

>
> On 22 October 2016 at 03:37, Seth Edwards  wrote:
>
>> We're using TWCS and we notice that if we make changes to the options to
>> the window unit or size, it seems to implicitly start recompacting all
>> sstables.
>
>
> If you increase the window unit or size you potentially increase the
> number of SSTable candidates for compaction inside each window, which is
> why you would see more compactions. If you decrease the window you
> shouldn't see any new compactions kicked off, however be aware that you
> will have SSTables covering multiple windows, so until a full cycle of your
> TTL passes your read queries won't benefit from the smaller window size.
>
> Kurt Greaves
> k...@instaclustr.com
> www.instaclustr.com
>


Re: Question about compaction strategy changes

2016-10-23 Thread kurt Greaves
On 22 October 2016 at 03:37, Seth Edwards  wrote:

> We're using TWCS and we notice that if we make changes to the options to
> the window unit or size, it seems to implicitly start recompacting all
> sstables.


If you increase the window unit or size you potentially increase the number
of SSTable candidates for compaction inside each window, which is why you
would see more compactions. If you decrease the window you shouldn't see
any new compactions kicked off; however, be aware that you will have
SSTables covering multiple windows, so until a full cycle of your TTL
passes, your read queries won't benefit from the smaller window size.
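
(For reference, the window unit and size being discussed are the TWCS table
options; changing them looks something like the following, where the table
name and values are illustrative only:

ALTER TABLE sensor_data
  WITH compaction = {
    'class': 'TimeWindowCompactionStrategy',
    'compaction_window_unit': 'DAYS',
    'compaction_window_size': '1'
  };)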

Kurt Greaves
k...@instaclustr.com
www.instaclustr.com


Re: failure node rejoin

2016-10-23 Thread Ben Slater
Definitely sounds to me like something is not working as expected but I
don’t really have any idea what would cause that (other than the fairly
extreme failure scenario). A couple of things I can think of to try to
narrow it down:
1) Run nodetool flush on all nodes after step 2 - that will make sure all
data is written to SSTables rather than relying on commit logs
2) Run the test with consistency level QUORUM rather than SERIAL (it shouldn't
be any different, but QUORUM is more widely used, so maybe there is a bug
that's specific to SERIAL) - see the cqlsh sketch below

Cheers
Ben

On Mon, 24 Oct 2016 at 10:29 Yuji Ito  wrote:

> Hi Ben,
>
> The test without killing nodes has been working well, without data loss.
> I've repeated my test about 200 times after removing data and
> rebuilding/repairing.
>
> Regards,
>
>
> On Fri, Oct 21, 2016 at 3:14 PM, Yuji Ito  wrote:
>
> > Just to confirm, are you saying:
> > a) after operation 2, you select all and get 1000 rows
> > b) after operation 3 (which only does updates and read) you select and
> only get 953 rows?
>
> That's right!
>
> I've started the test without killing nodes.
> I'll report the result to you next Monday.
>
> Thanks
>
>
> On Fri, Oct 21, 2016 at 3:05 PM, Ben Slater 
> wrote:
>
> Just to confirm, are you saying:
> a) after operation 2, you select all and get 1000 rows
> b) after operation 3 (which only does updates and read) you select and
> only get 953 rows?
>
> If so, that would be very unexpected. If you run your tests without
> killing nodes do you get the expected (1,000) rows?
>
> Cheers
> Ben
>
> On Fri, 21 Oct 2016 at 17:00 Yuji Ito  wrote:
>
> > Are you certain your tests don’t generate any overlapping inserts (by
> PK)?
>
> Yes. The operation 2) also checks the number of rows just after all
> insertions.
>
>
> On Fri, Oct 21, 2016 at 2:51 PM, Ben Slater 
> wrote:
>
> OK. Are you certain your tests don’t generate any overlapping inserts (by
> PK)? Cassandra basically treats any inserts with the same primary key as
> updates (so 1000 insert operations may not necessarily result in 1000 rows
> in the DB).
>
> On Fri, 21 Oct 2016 at 16:30 Yuji Ito  wrote:
>
> thanks Ben,
>
> > 1) At what stage did you have (or expect to have) 1000 rows (and have
> the mismatch between actual and expected) - at that end of operation (2) or
> after operation (3)?
>
> after operation 3), at operation 4) which reads all rows by cqlsh with
> CL.SERIAL
>
> > 2) What replication factor and replication strategy is used by the test
> keyspace? What consistency level is used by your operations?
>
> - create keyspace testkeyspace WITH REPLICATION =
> {'class':'SimpleStrategy','replication_factor':3};
> - consistency level is SERIAL
>
>
> On Fri, Oct 21, 2016 at 12:04 PM, Ben Slater 
> wrote:
>
>
> A couple of questions:
> 1) At what stage did you have (or expect to have) 1000 rows (and have the
> mismatch between actual and expected) - at that end of operation (2) or
> after operation (3)?
> 2) What replication factor and replication strategy is used by the test
> keyspace? What consistency level is used by your operations?
>
>
> Cheers
> Ben
>
> On Fri, 21 Oct 2016 at 13:57 Yuji Ito  wrote:
>
> Thanks Ben,
>
> I tried to run a rebuild and repair after the failure node rejoined the
> cluster as a "new" node with -Dcassandra.replace_address_first_boot.
> The failure node could rejoin and I could read all rows successfully.
> (Sometimes a repair failed because the node could not reach another node. If
> it failed, I retried the repair.)
>
> But some rows were lost after the destructive test had been repeating (for about
> 5-6 hours).
> After the test inserted 1000 rows, there were only 953 rows at the end of
> the test.
>
> My destructive test:
> - each C* node is killed & restarted at random intervals (within about
> 5 min) throughout this test
> 1) truncate all tables
> 2) insert initial rows (check that all rows are inserted successfully)
> 3) request a lot of reads/writes to random rows for about 30 min
> 4) check all rows
> If operation 1), 2) or 4) fails due to a C* failure, the test retries the
> operation.
>
> Does anyone have a similar problem?
> What could cause the data loss?
> Does the test need any extra operation when a C* node is restarted? (Currently, I
> just restart the C* process.)
>
> Regards,
>
>
> On Tue, Oct 18, 2016 at 2:18 PM, Ben Slater 
> wrote:
>
> OK, that’s a bit more unexpected (to me at least) but I think the solution
> of running a rebuild or repair still applies.
>
> On Tue, 18 Oct 2016 at 15:45 Yuji Ito  wrote:
>
> Thanks Ben, Jeff
>
> Sorry that my explanation confused you.
>
> Only node1 is the seed node.
> Node2 whose C* data is deleted is NOT a seed.
>
> I restarted the failure node(node2) after restarting the seed node(node1).
> The restarting node2 succeeded 

Re: Speeding up schema generation during tests

2016-10-23 Thread horschi
You have to manually do "nodetool flush && nodetool flush system" before
shutdown, otherwise Cassandra might break. With that it is working nicely.

On Sun, Oct 23, 2016 at 3:40 PM, Ali Akhtar  wrote:

> I'm using https://github.com/jsevellec/cassandra-unit and haven't come
> across any race issues or problems. Cassandra-unit takes care of creating
> the schema before it runs the tests.
>
> On Sun, Oct 23, 2016 at 6:17 PM, DuyHai Doan  wrote:
>
>> Ok I have added -Dcassandra.unsafesystem=true and my tests are broken.
>>
>> The reason is that I create some schemas before executing tests.
>>
>> When unsafesystem is enabled, Cassandra does not block on the schema flush, so
>> you may run into race conditions where the test starts using the created
>> schema before it has been fully flushed to disk:
>>
>> See C* source code here: https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/schema/SchemaKeyspace.java#L278-L282
>>
>> static void flush()
>> {
>>     if (!DatabaseDescriptor.isUnsafeSystem())
>>         ALL.forEach(table -> FBUtilities.waitOnFuture(getSchemaCFS(table).forceFlush()));
>> }
>>
>> I don't know how it worked out for you but it didn't for me...
>>
>> On Wed, Oct 19, 2016 at 9:45 AM, DuyHai Doan 
>> wrote:
>>
>>> Ohh didn't know such system property exist, nice idea!
>>>
>>> On Wed, Oct 19, 2016 at 9:40 AM, horschi  wrote:
>>>
 Have you tried starting Cassandra with -Dcassandra.unsafesystem=true ?


 On Wed, Oct 19, 2016 at 9:31 AM, DuyHai Doan 
 wrote:

> As I said, when I bootstrap the server and create some keyspaces,
> sometimes the schema is not fully initialized, and when the test code tries
> to insert data, it fails.
>
> I did not have time to dig into the source code to find the root
> cause; maybe it's something really stupid and simple to fix. If you want to
> investigate and try out my CassandraDaemon server, I'd be happy to get
> feedback.
>
> On Wed, Oct 19, 2016 at 9:22 AM, Ali Akhtar 
> wrote:
>
>> Thanks. I've disabled durable writes but this is still pretty slow
>> (about 10 seconds).
>>
>> What issues did you run into with your impl?
>>
>> On Wed, Oct 19, 2016 at 12:15 PM, DuyHai Doan 
>> wrote:
>>
>>> There are a lot of pre-flight checks when starting the Cassandra
>>> server, and they take time.
>>>
>>> For integration testing, I have developed a modified
>>> CassandraDaemon here that removes most of those checks:
>>>
>>> https://github.com/doanduyhai/Achilles/blob/master/achilles-embedded/src/main/java/info/archinnov/achilles/embedded/AchillesCassandraDaemon.java
>>>
>>> The problem is that I fell into weird scenarios where a
>>> keyspace wasn't created in a timely manner, so I just stopped using this impl
>>> for the moment; just look at it and do whatever you want.
>>>
>>> Another idea for testing is to disable durable writes to speed up
>>> mutations (CREATE KEYSPACE ... WITH durable_writes = false)
>>>
>>> On Wed, Oct 19, 2016 at 3:24 AM, Ali Akhtar 
>>> wrote:
>>>
 Is there a way to speed up the creation of keyspace + tables during
 integration tests? I am using an RF of 1, with SimpleStrategy, but it 
 still
 takes upto 10-15 seconds.

>>>
>>>
>>
>

>>>
>>
>


Re: is there any problem having too many clustering columns?

2016-10-23 Thread Kant Kodali
That helps! Thanks! I assume you meant "updating one of the columns in the
PRIMARY KEY would require DELETE + INSERT".

Since we don't do updates or deletes on this table, I believe we could leverage
this!



On Sun, Oct 23, 2016 at 12:44 PM, DuyHai Doan  wrote:

> There is nothing wrong with your schema, but just remember that because you
> set everything except one column as clustering columns, updating them is no longer
> possible. To "update" the value of one of those columns you'll need to do a
> DELETE + INSERT.
>
> Example:
>
> with normal schema: UPDATE hello SET e = <new value> WHERE a = xxx AND b = yyy
>
> with all clustering schema:
>
> DELETE FROM hello WHERE a = xxx AND b = yyy AND c = ... AND e = <old value> AND
> f = <old value> AND i = ...
> INSERT INTO hello(a, b, ..., e, ..., i) VALUES (.., ..., <new value>, ...)
>
> In terms of the storage engine, you'll create a bunch of tombstones and
> duplicate values
>
>
>
> On Sun, Oct 23, 2016 at 9:35 PM, Kant Kodali  wrote:
>
>> Hi All,
>>
>> Is there any problem having too many clustering columns? My goal is to
>> store data by columns in order and for any given partition (primary key)
>> each of its non-clustering column (columns that are not part of primary
>> key) can lead to a new column underneath or the CQL equivalent would be a
>> new row in a partition and from the other thread I heard the sweet spot is
>> about 100MB per partition in which case I would like to include all minus
>> one columns as clustering columns and the one that is left out as a regular
>> non-clustering column.
>>
>> In short I would do something like this
>>
>> create table hello(
>> a int,
>> b text,
>> c int,
>> d text,
>> e int,
>> f  bigint,
>> g text,
>> h text,
>> i  int,
>> body blob
>> primary key(a, b, c, d, e, f, g, h, i)
>> )
>>
>> instead of say doing something like the one below
>>
>> create table hello(
>> a int,
>> b text,
>> c int,
>> d text,
>> e int,
>> f  bigint,
>> g text,
>> h text,
>> i  int,
>> body blob
>> primary key(a, b)
>> )
>>
>> These are just example tables(not my actual ones) but I hope you get the
>> idea. please let me know if you see something wrong with my approach?
>>
>> Thanks!
>>
>>
>


Re: is there any problem having too many clustering columns?

2016-10-23 Thread DuyHai Doan
There is nothing wrong with your schema, but just remember that because you
set everything except one column as clustering columns, updating them is no longer
possible. To "update" the value of one of those columns you'll need to do a
DELETE + INSERT.

Example:

with normal schema: UPDATE hello SET e = <new value> WHERE a = xxx AND b = yyy

with all clustering schema:

DELETE FROM hello WHERE a = xxx AND b = yyy AND c = ... AND e = <old value> AND
f = <old value> AND i = ...
INSERT INTO hello(a, b, ..., e, ..., i) VALUES (.., ..., <new value>, ...)

In terms of the storage engine, you'll create a bunch of tombstones and
duplicate values
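
(To make that concrete against the example hello table from the question - all
literal values here are made up - changing the value of clustering column e
from 3 to 99 for one row means deleting the old row and inserting a new one:

DELETE FROM hello WHERE a = 1 AND b = 'b1' AND c = 2 AND d = 'd1'
  AND e = 3 AND f = 4 AND g = 'g1' AND h = 'h1' AND i = 5;
INSERT INTO hello (a, b, c, d, e, f, g, h, i, body)
  VALUES (1, 'b1', 2, 'd1', 99, 4, 'g1', 'h1', 5, 0xcafe);)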



On Sun, Oct 23, 2016 at 9:35 PM, Kant Kodali  wrote:

> Hi All,
>
> Is there any problem having too many clustering columns? My goal is to
> store data by columns in order and for any given partition (primary key)
> each of its non-clustering column (columns that are not part of primary
> key) can lead to a new column underneath or the CQL equivalent would be a
> new row in a partition and from the other thread I heard the sweet spot is
> about 100MB per partition in which case I would like to include all minus
> one columns as clustering columns and the one that is left out as a regular
> non-clustering column.
>
> In short I would do something like this
>
> create table hello(
> a int,
> b text,
> c int,
> d text,
> e int,
> f  bigint,
> g text,
> h text,
> i  int,
> body blob
> primary key(a, b, c, d, e, f, g, h, i)
> )
>
> instead of say doing something like the one below
>
> create table hello(
> a int,
> b text,
> c int,
> d text,
> e int,
> f  bigint,
> g text,
> h text,
> i  int,
> body blob
> primary key(a, b)
> )
>
> These are just example tables(not my actual ones) but I hope you get the
> idea. please let me know if you see something wrong with my approach?
>
> Thanks!
>
>


is there any problem having too many clustering columns?

2016-10-23 Thread Kant Kodali
Hi All,

Is there any problem having too many clustering columns? My goal is to
store data by columns in order. For any given partition (primary key),
each of its non-clustering columns (columns that are not part of the primary
key) can lead to a new column underneath - or, the CQL equivalent, a
new row in the partition. From another thread I heard the sweet spot is
about 100MB per partition, in which case I would like to include all minus
one columns as clustering columns, with the one that is left out as a regular
non-clustering column.

In short I would do something like this

create table hello(
a int,
b text,
c int,
d text,
e int,
f  bigint,
g text,
h text,
i  int,
body blob,
primary key(a, b, c, d, e, f, g, h, i)
)

instead of say doing something like the one below

create table hello(
a int,
b text,
c int,
d text,
e int,
f  bigint,
g text,
h text,
i  int,
body blob,
primary key(a, b)
)

These are just example tables (not my actual ones), but I hope you get the
idea. Please let me know if you see something wrong with my approach.

Thanks!


Re: Speeding up schema generation during tests

2016-10-23 Thread Ali Akhtar
I'm using https://github.com/jsevellec/cassandra-unit and haven't come
across any race issues or problems. Cassandra-unit takes care of creating
the schema before it runs the tests.

On Sun, Oct 23, 2016 at 6:17 PM, DuyHai Doan  wrote:

> Ok I have added -Dcassandra.unsafesystem=true and my tests are broken.
>
> The reason is that I create some schemas before executing tests.
>
> When unsafesystem is enabled, Cassandra does not block on the schema flush, so
> you may run into race conditions where the test starts using the created
> schema before it has been fully flushed to disk:
>
> See C* source code here: https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/schema/SchemaKeyspace.java#L278-L282
>
> static void flush()
> {
>     if (!DatabaseDescriptor.isUnsafeSystem())
>         ALL.forEach(table -> FBUtilities.waitOnFuture(getSchemaCFS(table).forceFlush()));
> }
>
> I don't know how it worked out for you but it didn't for me...
>
> On Wed, Oct 19, 2016 at 9:45 AM, DuyHai Doan  wrote:
>
>> Ohh didn't know such system property exist, nice idea!
>>
>> On Wed, Oct 19, 2016 at 9:40 AM, horschi  wrote:
>>
>>> Have you tried starting Cassandra with -Dcassandra.unsafesystem=true ?
>>>
>>>
>>> On Wed, Oct 19, 2016 at 9:31 AM, DuyHai Doan 
>>> wrote:
>>>
 As I said, when I bootstrap the server and create some keyspaces,
 sometimes the schema is not fully initialized, and when the test code tries
 to insert data, it fails.

 I did not have time to dig into the source code to find the root cause;
 maybe it's something really stupid and simple to fix. If you want to
 investigate and try out my CassandraDaemon server, I'd be happy to get
 feedback.

 On Wed, Oct 19, 2016 at 9:22 AM, Ali Akhtar 
 wrote:

> Thanks. I've disabled durable writes but this is still pretty slow
> (about 10 seconds).
>
> What issues did you run into with your impl?
>
> On Wed, Oct 19, 2016 at 12:15 PM, DuyHai Doan 
> wrote:
>
>> There are a lot of pre-flight checks when starting the Cassandra
>> server, and they take time.
>>
>> For integration testing, I have developed a modified CassandraDaemon
>> here that removes most of those checks:
>>
>> https://github.com/doanduyhai/Achilles/blob/master/achilles-embedded/src/main/java/info/archinnov/achilles/embedded/AchillesCassandraDaemon.java
>>
>> The problem is that I fell into weird scenarios where a
>> keyspace wasn't created in a timely manner, so I just stopped using this impl
>> for the moment; just look at it and do whatever you want.
>>
>> Another idea for testing is to disable durable writes to speed up
>> mutations (CREATE KEYSPACE ... WITH durable_writes = false)
>>
>> On Wed, Oct 19, 2016 at 3:24 AM, Ali Akhtar 
>> wrote:
>>
>>> Is there a way to speed up the creation of keyspace + tables during
>>> integration tests? I am using an RF of 1, with SimpleStrategy, but it 
>>> still
>>> takes upto 10-15 seconds.
>>>
>>
>>
>

>>>
>>
>


Re: Speeding up schema generation during tests

2016-10-23 Thread DuyHai Doan
Ok I have added -Dcassandra.unsafesystem=true and my tests are broken.

The reason is that I create some schemas before executing tests.

When unsafesystem is enabled, Cassandra does not block on the schema flush, so you
may run into race conditions where the test starts using the created schema
before it has been fully flushed to disk:

See C* source code here:
https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/schema/SchemaKeyspace.java#L278-L282

static void flush()
{
    if (!DatabaseDescriptor.isUnsafeSystem())
        ALL.forEach(table -> FBUtilities.waitOnFuture(getSchemaCFS(table).forceFlush()));
}

I don't know how it worked out for you but it didn't for me...

On Wed, Oct 19, 2016 at 9:45 AM, DuyHai Doan  wrote:

> Ohh didn't know such system property exist, nice idea!
>
> On Wed, Oct 19, 2016 at 9:40 AM, horschi  wrote:
>
>> Have you tried starting Cassandra with -Dcassandra.unsafesystem=true ?
>>
>>
>> On Wed, Oct 19, 2016 at 9:31 AM, DuyHai Doan 
>> wrote:
>>
>>> As I said, when I bootstrap the server and create some keyspaces,
>>> sometimes the schema is not fully initialized, and when the test code tries
>>> to insert data, it fails.
>>>
>>> I did not have time to dig into the source code to find the root cause;
>>> maybe it's something really stupid and simple to fix. If you want to
>>> investigate and try out my CassandraDaemon server, I'd be happy to get
>>> feedback.
>>>
>>> On Wed, Oct 19, 2016 at 9:22 AM, Ali Akhtar 
>>> wrote:
>>>
 Thanks. I've disabled durable writes but this is still pretty slow
 (about 10 seconds).

 What issues did you run into with your impl?

 On Wed, Oct 19, 2016 at 12:15 PM, DuyHai Doan 
 wrote:

> There are a lot of pre-flight checks when starting the Cassandra server
> and they take time.
>
> For integration testing, I have developed a modified CassandraDaemon
> here that removes most of those checks:
>
> https://github.com/doanduyhai/Achilles/blob/master/achilles-embedded/src/main/java/info/archinnov/achilles/embedded/AchillesCassandraDaemon.java
>
> The problem is that I fell into weird scenarios where a
> keyspace wasn't created in a timely manner, so I just stopped using this impl
> for the moment; just look at it and do whatever you want.
>
> Another idea for testing is to disable durable writes to speed up
> mutations (CREATE KEYSPACE ... WITH durable_writes = false)
>
> On Wed, Oct 19, 2016 at 3:24 AM, Ali Akhtar 
> wrote:
>
>> Is there a way to speed up the creation of keyspace + tables during
>> integration tests? I am using an RF of 1, with SimpleStrategy, but it 
>> still
>> takes upto 10-15 seconds.
>>
>
>

>>>
>>
>


Re: Cannot restrict clustering columns by IN relations when a collection is selected by the query

2016-10-23 Thread Samba
please see CASSANDRA-12654

On Sat, Oct 22, 2016 at 3:12 AM, DuyHai Doan  wrote:

> So the commit on this restriction dates back to 2.2.0 (CASSANDRA-7981).
>
> Maybe Benjamin Lerer can shed some light on it.
>
> On Fri, Oct 21, 2016 at 11:05 PM, Jeff Carpenter <
> jeff.carpen...@choicehotels.com> wrote:
>
>> Hello
>>
>> Consider the following schema:
>>
>> CREATE TABLE rates_by_code (
>>   hotel_id text,
>>   rate_code text,
>>   rates set,
>>   description text,
>>   PRIMARY KEY ((hotel_id), rate_code)
>> );
>>
>> When executing the query:
>>
>> select rates from rates_by_code where hotel_id='AZ123' and rate_code IN
>> ('ABC', 'DEF', 'GHI');
>>
>> I receive the response message:
>>
>> Cannot restrict clustering columns by IN relations when a collection is
>> selected by the query.
>>
>> If I select a non-collection column such as "description", no error
>> occurs.
>>
>> Why does this restriction exist? Is this a restriction that is still
>> necessary given the new storage engine? (I have verified this on both 2.2.5
>> and 3.0.9.)
>>
>> I looked for a Jira issue related to this topic, but nothing obvious
>> popped up. I'd be happy to create one, though.
>>
>> Thanks
>> Jeff Carpenter
>>
>>
>>
>>
>


Re: Hadoop vs Cassandra

2016-10-23 Thread Welly Tambunan
Another thing is,

Let's say that we already have structured data; is the way we load that into
HDFS to turn it into files?

Cheers

On Sun, Oct 23, 2016 at 6:18 PM, Welly Tambunan  wrote:

> So basically you will store those files in HDFS and use Spark to process
> them?
>
> On Sun, Oct 23, 2016 at 6:03 PM, Joaquin Alzola  > wrote:
>
>>
>>
>> I think what Ali mentions is correct:
>>
>> If you need a lot of queries that require joins, or complex analytics of
>> the kind that Cassandra isn't suited for, then HDFS / HBase may be better.
>>
>>
>>
>> We have files in which one line contains 500 fields (separated by pipe)
>> and each of this fields is particularly important.
>>
>> Cassandra will not manage that since you will need 500 indexes. HDFS is
>> the proper way.
>>
>>
>>
>>
>>
>> *From:* Welly Tambunan [mailto:if05...@gmail.com]
>> *Sent:* 23 October 2016 10:19
>> *To:* user@cassandra.apache.org
>> *Subject:* Re: Hadoop vs Cassandra
>>
>>
>>
>> I like muti data centre resillience in cassandra.
>>
>> I think thats plus one for cassandra.
>>
>> Ali, complex analytics can be done in spark right?
>>
>> On 23 Oct 2016 4:08 p.m., "Ali Akhtar"  wrote:
>>
>> >
>>
>> > I would say it depends on your use case.
>> >
>> > If you need a lot of queries that require joins, or complex analytics
>> of the kind that Cassandra isn't suited for, then HDFS / HBase may be
>> better.
>> >
>> > If you can work with the cassandra way of doing things (creating new
>> tables for each query you'll need to do, duplicating data - doing extra
>> writes for faster reads) , then Cassandra should work for you. It is easier
>> to setup and do dev ops with, in my experience.
>> >
>> > On Sun, Oct 23, 2016 at 2:05 PM, Welly Tambunan 
>> wrote:
>>
>> >>
>>
>> >> I mean. HDFS and HBase.
>> >>
>> >> On Sun, Oct 23, 2016 at 4:00 PM, Ali Akhtar 
>> wrote:
>>
>> >>>
>>
>> >>> By Hadoop do you mean HDFS?
>> >>>
>> >>>
>> >>>
>> >>> On Sun, Oct 23, 2016 at 1:56 PM, Welly Tambunan 
>> wrote:
>>
>> 
>>
>>  Hi All,
>> 
>>  I read the following comparison between hadoop and cassandra. Seems
>> the conclusion that we use hadoop for data lake ( cold data ) and Cassandra
>> for hot data (real time data).
>> 
>>  http://www.datastax.com/nosql-databases/nosql-cassandra-and-hadoop
>> 
>> 
>>  My question is, can we just use cassandra to rule them all ?
>> 
>>  What we are trying to achieve is to minimize the moving part on our
>> system.
>> 
>>  Any response would be really appreciated.
>> 
>> 
>>  Cheers
>> 
>>  --
>>  Welly Tambunan
>>  Triplelands
>> 
>>  http://weltam.wordpress.com 
>>  http://www.triplelands.com 
>> >>>
>> >>>
>> >>
>> >>
>> >>
>> >> --
>> >> Welly Tambunan
>> >> Triplelands
>> >>
>> >> http://weltam.wordpress.com 
>> >> http://www.triplelands.com 
>> >
>> >
>> This email is confidential and may be subject to privilege. If you are
>> not the intended recipient, please do not copy or disclose its content but
>> contact the sender immediately upon receipt.
>>
>
>
>
> --
> Welly Tambunan
> Triplelands
>
> http://weltam.wordpress.com
> http://www.triplelands.com 
>



-- 
Welly Tambunan
Triplelands

http://weltam.wordpress.com
http://www.triplelands.com 


Re: Hadoop vs Cassandra

2016-10-23 Thread Welly Tambunan
So basically you will store those files in HDFS and use Spark to process
them?

On Sun, Oct 23, 2016 at 6:03 PM, Joaquin Alzola 
wrote:

>
>
> I think what Ali mentions is correct:
>
> If you need a lot of queries that require joins, or complex analytics of
> the kind that Cassandra isn't suited for, then HDFS / HBase may be better.
>
>
>
> We have files in which one line contains 500 fields (separated by pipe)
> and each of this fields is particularly important.
>
> Cassandra will not manage that since you will need 500 indexes. HDFS is
> the proper way.
>
>
>
>
>
> *From:* Welly Tambunan [mailto:if05...@gmail.com]
> *Sent:* 23 October 2016 10:19
> *To:* user@cassandra.apache.org
> *Subject:* Re: Hadoop vs Cassandra
>
>
>
> I like muti data centre resillience in cassandra.
>
> I think thats plus one for cassandra.
>
> Ali, complex analytics can be done in spark right?
>
> On 23 Oct 2016 4:08 p.m., "Ali Akhtar"  wrote:
>
> >
>
> > I would say it depends on your use case.
> >
> > If you need a lot of queries that require joins, or complex analytics of
> the kind that Cassandra isn't suited for, then HDFS / HBase may be better.
> >
> > If you can work with the cassandra way of doing things (creating new
> tables for each query you'll need to do, duplicating data - doing extra
> writes for faster reads) , then Cassandra should work for you. It is easier
> to setup and do dev ops with, in my experience.
> >
> > On Sun, Oct 23, 2016 at 2:05 PM, Welly Tambunan 
> wrote:
>
> >>
>
> >> I mean. HDFS and HBase.
> >>
> >> On Sun, Oct 23, 2016 at 4:00 PM, Ali Akhtar 
> wrote:
>
> >>>
>
> >>> By Hadoop do you mean HDFS?
> >>>
> >>>
> >>>
> >>> On Sun, Oct 23, 2016 at 1:56 PM, Welly Tambunan 
> wrote:
>
> 
>
>  Hi All,
> 
>  I read the following comparison between hadoop and cassandra. Seems
> the conclusion that we use hadoop for data lake ( cold data ) and Cassandra
> for hot data (real time data).
> 
>  http://www.datastax.com/nosql-databases/nosql-cassandra-and-hadoop
> 
> 
>  My question is, can we just use cassandra to rule them all ?
> 
>  What we are trying to achieve is to minimize the moving part on our
> system.
> 
>  Any response would be really appreciated.
> 
> 
>  Cheers
> 
>  --
>  Welly Tambunan
>  Triplelands
> 
>  http://weltam.wordpress.com 
>  http://www.triplelands.com 
> >>>
> >>>
> >>
> >>
> >>
> >> --
> >> Welly Tambunan
> >> Triplelands
> >>
> >> http://weltam.wordpress.com 
> >> http://www.triplelands.com 
> >
> >
> This email is confidential and may be subject to privilege. If you are not
> the intended recipient, please do not copy or disclose its content but
> contact the sender immediately upon receipt.
>



-- 
Welly Tambunan
Triplelands

http://weltam.wordpress.com
http://www.triplelands.com 


RE: Hadoop vs Cassandra

2016-10-23 Thread Joaquin Alzola

I think what Ali mentions is correct:
If you need a lot of queries that require joins, or complex analytics of the 
kind that Cassandra isn't suited for, then HDFS / HBase may be better.

We have files in which one line contains 500 fields (separated by pipes), and
each of these fields is particularly important.
Cassandra will not manage that since you will need 500 indexes. HDFS is the 
proper way.


From: Welly Tambunan [mailto:if05...@gmail.com]
Sent: 23 October 2016 10:19
To: user@cassandra.apache.org
Subject: Re: Hadoop vs Cassandra


I like muti data centre resillience in cassandra.

I think thats plus one for cassandra.

Ali, complex analytics can be done in spark right?

On 23 Oct 2016 4:08 p.m., "Ali Akhtar" 
> wrote:

>

> I would say it depends on your use case.
>
> If you need a lot of queries that require joins, or complex analytics of the 
> kind that Cassandra isn't suited for, then HDFS / HBase may be better.
>
> If you can work with the cassandra way of doing things (creating new tables 
> for each query you'll need to do, duplicating data - doing extra writes for 
> faster reads) , then Cassandra should work for you. It is easier to setup and 
> do dev ops with, in my experience.
>
> On Sun, Oct 23, 2016 at 2:05 PM, Welly Tambunan 
> > wrote:

>>

>> I mean. HDFS and HBase.
>>
>> On Sun, Oct 23, 2016 at 4:00 PM, Ali Akhtar 
>> > wrote:

>>>

>>> By Hadoop do you mean HDFS?
>>>
>>>
>>>
>>> On Sun, Oct 23, 2016 at 1:56 PM, Welly Tambunan 
>>> > wrote:



 Hi All,

 I read the following comparison between hadoop and cassandra. Seems the 
 conclusion that we use hadoop for data lake ( cold data ) and Cassandra 
 for hot data (real time data).

 http://www.datastax.com/nosql-databases/nosql-cassandra-and-hadoop

 My question is, can we just use cassandra to rule them all ?

 What we are trying to achieve is to minimize the moving part on our system.

 Any response would be really appreciated.


 Cheers

 --
 Welly Tambunan
 Triplelands

 http://weltam.wordpress.com
 http://www.triplelands.com
>>>
>>>
>>
>>
>>
>> --
>> Welly Tambunan
>> Triplelands
>>
>> http://weltam.wordpress.com
>> http://www.triplelands.com
>
>

This email is confidential and may be subject to privilege. If you are not the 
intended recipient, please do not copy or disclose its content but contact the 
sender immediately upon receipt.


Re: Hadoop vs Cassandra

2016-10-23 Thread Ali Akhtar
"from a particular query" should be " from a particular country"

On Sun, Oct 23, 2016 at 2:36 PM, Ali Akhtar  wrote:

> They can be, but I would assume that if your Cassandra data model is
> inefficient for the kind of queries you want to do, Spark won't magically
> take that way.
>
> For example, say you have a users table. Each user has a country, which
> isn't a partitioning key or clustering key.
>
> If you wanted to calculate the number of all users from a particular
> query, there's no way to do that in the previous data model other than to
> do a full table scan and count the users from that country.
>
> Spark can do this full table scan for you and return the number of
> records. May be it can spread the work across multiple servers. But it
> can't reduce the amount of work that has to be done.
>
> Otoh, if you were okay with creating a new table in which the country is
> part of the primary key, and for each user that signed up, you created a
> record in this user_by_country table, then it would be a very fast query to
> look up the users in a particular country, as country is then the primary
> key.
>
>
>
> On Sun, Oct 23, 2016 at 2:18 PM, Welly Tambunan  wrote:
>
>> I like muti data centre resillience in cassandra.
>>
>> I think thats plus one for cassandra.
>>
>> Ali, complex analytics can be done in spark right?
>>
>> On 23 Oct 2016 4:08 p.m., "Ali Akhtar"  wrote:
>>
>> >
>>
>> > I would say it depends on your use case.
>> >
>> > If you need a lot of queries that require joins, or complex analytics
>> of the kind that Cassandra isn't suited for, then HDFS / HBase may be
>> better.
>> >
>> > If you can work with the cassandra way of doing things (creating new
>> tables for each query you'll need to do, duplicating data - doing extra
>> writes for faster reads) , then Cassandra should work for you. It is easier
>> to setup and do dev ops with, in my experience.
>> >
>> > On Sun, Oct 23, 2016 at 2:05 PM, Welly Tambunan 
>> wrote:
>>
>> >>
>>
>> >> I mean. HDFS and HBase.
>> >>
>> >> On Sun, Oct 23, 2016 at 4:00 PM, Ali Akhtar 
>> wrote:
>>
>> >>>
>>
>> >>> By Hadoop do you mean HDFS?
>> >>>
>> >>>
>> >>>
>> >>> On Sun, Oct 23, 2016 at 1:56 PM, Welly Tambunan 
>> wrote:
>>
>> 
>>
>>  Hi All,
>> 
>>  I read the following comparison between hadoop and cassandra. Seems
>> the conclusion that we use hadoop for data lake ( cold data ) and Cassandra
>> for hot data (real time data).
>> 
>>  http://www.datastax.com/nosql-databases/nosql-cassandra-and-hadoop
>> 
>> 
>>  My question is, can we just use cassandra to rule them all ?
>> 
>>  What we are trying to achieve is to minimize the moving part on our
>> system.
>> 
>>  Any response would be really appreciated.
>> 
>> 
>>  Cheers
>> 
>>  --
>>  Welly Tambunan
>>  Triplelands
>> 
>>  http://weltam.wordpress.com 
>>  http://www.triplelands.com 
>> >>>
>> >>>
>> >>
>> >>
>> >>
>> >> --
>> >> Welly Tambunan
>> >> Triplelands
>> >>
>> >> http://weltam.wordpress.com 
>> >> http://www.triplelands.com 
>> >
>> >
>>
>
>


Re: Hadoop vs Cassandra

2016-10-23 Thread Ali Akhtar
They can be, but I would assume that if your Cassandra data model is
inefficient for the kind of queries you want to do, Spark won't magically
take that away.

For example, say you have a users table. Each user has a country, which
isn't a partitioning key or clustering key.

If you wanted to calculate the number of all users from a particular query,
there's no way to do that in the previous data model other than to do a
full table scan and count the users from that country.

Spark can do this full table scan for you and return the number of records.
Maybe it can spread the work across multiple servers. But it can't reduce
the amount of work that has to be done.

Otoh, if you were okay with creating a new table in which the country is
part of the primary key, and for each user that signed up, you created a
record in this user_by_country table, then it would be a very fast query to
look up the users in a particular country, as country is then the primary
key.
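
A minimal sketch of the user_by_country table described above (column names
and types are assumptions for illustration):

CREATE TABLE user_by_country (
    country text,
    user_id uuid,
    name text,
    PRIMARY KEY (country, user_id)
);

-- a single-partition read instead of a full table scan:
SELECT COUNT(*) FROM user_by_country WHERE country = 'PK';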



On Sun, Oct 23, 2016 at 2:18 PM, Welly Tambunan  wrote:

> I like muti data centre resillience in cassandra.
>
> I think thats plus one for cassandra.
>
> Ali, complex analytics can be done in spark right?
>
> On 23 Oct 2016 4:08 p.m., "Ali Akhtar"  wrote:
>
> >
>
> > I would say it depends on your use case.
> >
> > If you need a lot of queries that require joins, or complex analytics of
> the kind that Cassandra isn't suited for, then HDFS / HBase may be better.
> >
> > If you can work with the cassandra way of doing things (creating new
> tables for each query you'll need to do, duplicating data - doing extra
> writes for faster reads) , then Cassandra should work for you. It is easier
> to setup and do dev ops with, in my experience.
> >
> > On Sun, Oct 23, 2016 at 2:05 PM, Welly Tambunan 
> wrote:
>
> >>
>
> >> I mean. HDFS and HBase.
> >>
> >> On Sun, Oct 23, 2016 at 4:00 PM, Ali Akhtar 
> wrote:
>
> >>>
>
> >>> By Hadoop do you mean HDFS?
> >>>
> >>>
> >>>
> >>> On Sun, Oct 23, 2016 at 1:56 PM, Welly Tambunan 
> wrote:
>
> 
>
>  Hi All,
> 
>  I read the following comparison between hadoop and cassandra. Seems
> the conclusion that we use hadoop for data lake ( cold data ) and Cassandra
> for hot data (real time data).
> 
>  http://www.datastax.com/nosql-databases/nosql-cassandra-and-hadoop
> 
> 
>  My question is, can we just use cassandra to rule them all ?
> 
>  What we are trying to achieve is to minimize the moving part on our
> system.
> 
>  Any response would be really appreciated.
> 
> 
>  Cheers
> 
>  --
>  Welly Tambunan
>  Triplelands
> 
>  http://weltam.wordpress.com 
>  http://www.triplelands.com 
> >>>
> >>>
> >>
> >>
> >>
> >> --
> >> Welly Tambunan
> >> Triplelands
> >>
> >> http://weltam.wordpress.com 
> >> http://www.triplelands.com 
> >
> >
>


Re: Hadoop vs Cassandra

2016-10-23 Thread Welly Tambunan
I like the multi data centre resilience in Cassandra.

I think that's plus one for Cassandra.

Ali, complex analytics can be done in Spark, right?

On 23 Oct 2016 4:08 p.m., "Ali Akhtar"  wrote:

>

> I would say it depends on your use case.
>
> If you need a lot of queries that require joins, or complex analytics of
the kind that Cassandra isn't suited for, then HDFS / HBase may be better.
>
> If you can work with the cassandra way of doing things (creating new
tables for each query you'll need to do, duplicating data - doing extra
writes for faster reads) , then Cassandra should work for you. It is easier
to setup and do dev ops with, in my experience.
>
> On Sun, Oct 23, 2016 at 2:05 PM, Welly Tambunan  wrote:

>>

>> I mean. HDFS and HBase.
>>
>> On Sun, Oct 23, 2016 at 4:00 PM, Ali Akhtar  wrote:

>>>

>>> By Hadoop do you mean HDFS?
>>>
>>>
>>>
>>> On Sun, Oct 23, 2016 at 1:56 PM, Welly Tambunan 
wrote:



 Hi All,

 I read the following comparison between hadoop and cassandra. Seems
the conclusion that we use hadoop for data lake ( cold data ) and Cassandra
for hot data (real time data).

 http://www.datastax.com/nosql-databases/nosql-cassandra-and-hadoop


 My question is, can we just use cassandra to rule them all ?

 What we are trying to achieve is to minimize the moving part on our
system.

 Any response would be really appreciated.


 Cheers

 --
 Welly Tambunan
 Triplelands

 http://weltam.wordpress.com 
 http://www.triplelands.com 
>>>
>>>
>>
>>
>>
>> --
>> Welly Tambunan
>> Triplelands
>>
>> http://weltam.wordpress.com 
>> http://www.triplelands.com 
>
>


Re: Hadoop vs Cassandra

2016-10-23 Thread Ali Akhtar
I would say it depends on your use case.

If you need a lot of queries that require joins, or complex analytics of
the kind that Cassandra isn't suited for, then HDFS / HBase may be better.

If you can work with the cassandra way of doing things (creating new tables
for each query you'll need to do, duplicating data - doing extra writes for
faster reads) , then Cassandra should work for you. It is easier to setup
and do dev ops with, in my experience.

On Sun, Oct 23, 2016 at 2:05 PM, Welly Tambunan  wrote:

> I mean. HDFS and HBase.
>
> On Sun, Oct 23, 2016 at 4:00 PM, Ali Akhtar  wrote:
>
>> By Hadoop do you mean HDFS?
>>
>>
>>
>> On Sun, Oct 23, 2016 at 1:56 PM, Welly Tambunan 
>> wrote:
>>
>>> Hi All,
>>>
>>> I read the following comparison between hadoop and cassandra. Seems the
>>> conclusion that we use hadoop for data lake ( cold data ) and Cassandra for
>>> hot data (real time data).
>>>
>>> http://www.datastax.com/nosql-databases/nosql-cassandra-and-hadoop
>>>
>>> My question is, can we just use cassandra to rule them all ?
>>>
>>> What we are trying to achieve is to minimize the moving part on our
>>> system.
>>>
>>> Any response would be really appreciated.
>>>
>>>
>>> Cheers
>>>
>>> --
>>> Welly Tambunan
>>> Triplelands
>>>
>>> http://weltam.wordpress.com
>>> http://www.triplelands.com 
>>>
>>
>>
>
>
> --
> Welly Tambunan
> Triplelands
>
> http://weltam.wordpress.com
> http://www.triplelands.com 
>


Re: Hadoop vs Cassandra

2016-10-23 Thread Ben Slater
It’s reasonably common to use Cassandra to cover both online and analytics
requirements, particularly using it in conjunction with Spark. You can use
Cassandra’s multi-DC functionality to have online and analytics DCs for a
reasonable degree of workload separation without having to build ETL (or
some other replication) to get data between two environments.
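
(As a rough illustration of that multi-DC separation - the keyspace name, DC
names and replication factors below are assumptions:

CREATE KEYSPACE app_data WITH replication = {
  'class': 'NetworkTopologyStrategy',
  'online': 3,
  'analytics': 2
};)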

On Sun, 23 Oct 2016 at 20:00 Ali Akhtar  wrote:

> By Hadoop do you mean HDFS?
>
>
>
> On Sun, Oct 23, 2016 at 1:56 PM, Welly Tambunan  wrote:
>
> Hi All,
>
> I read the following comparison between hadoop and cassandra. Seems the
> conclusion that we use hadoop for data lake ( cold data ) and Cassandra for
> hot data (real time data).
>
> http://www.datastax.com/nosql-databases/nosql-cassandra-and-hadoop
>
> My question is, can we just use cassandra to rule them all ?
>
> What we are trying to achieve is to minimize the moving part on our
> system.
>
> Any response would be really appreciated.
>
>
> Cheers
>
> --
> Welly Tambunan
> Triplelands
>
> http://weltam.wordpress.com
> http://www.triplelands.com 
>
>
>


Re: Hadoop vs Cassandra

2016-10-23 Thread Welly Tambunan
I mean. HDFS and HBase.

On Sun, Oct 23, 2016 at 4:00 PM, Ali Akhtar  wrote:

> By Hadoop do you mean HDFS?
>
>
>
> On Sun, Oct 23, 2016 at 1:56 PM, Welly Tambunan  wrote:
>
>> Hi All,
>>
>> I read the following comparison between hadoop and cassandra. Seems the
>> conclusion that we use hadoop for data lake ( cold data ) and Cassandra for
>> hot data (real time data).
>>
>> http://www.datastax.com/nosql-databases/nosql-cassandra-and-hadoop
>>
>> My question is, can we just use cassandra to rule them all ?
>>
>> What we are trying to achieve is to minimize the moving part on our
>> system.
>>
>> Any response would be really appreciated.
>>
>>
>> Cheers
>>
>> --
>> Welly Tambunan
>> Triplelands
>>
>> http://weltam.wordpress.com
>> http://www.triplelands.com 
>>
>
>


-- 
Welly Tambunan
Triplelands

http://weltam.wordpress.com
http://www.triplelands.com 


Re: Hadoop vs Cassandra

2016-10-23 Thread Ali Akhtar
By Hadoop do you mean HDFS?



On Sun, Oct 23, 2016 at 1:56 PM, Welly Tambunan  wrote:

> Hi All,
>
> I read the following comparison between hadoop and cassandra. Seems the
> conclusion that we use hadoop for data lake ( cold data ) and Cassandra for
> hot data (real time data).
>
> http://www.datastax.com/nosql-databases/nosql-cassandra-and-hadoop
>
> My question is, can we just use cassandra to rule them all ?
>
> What we are trying to achieve is to minimize the moving part on our
> system.
>
> Any response would be really appreciated.
>
>
> Cheers
>
> --
> Welly Tambunan
> Triplelands
>
> http://weltam.wordpress.com
> http://www.triplelands.com 
>


Hadoop vs Cassandra

2016-10-23 Thread Welly Tambunan
Hi All,

I read the following comparison between hadoop and cassandra. Seems the
conclusion that we use hadoop for data lake ( cold data ) and Cassandra for
hot data (real time data).

http://www.datastax.com/nosql-databases/nosql-cassandra-and-hadoop

My question is, can we just use cassandra to rule them all ?

What we are trying to achieve is to minimize the moving part on our system.

Any response would be really appreciated.


Cheers

-- 
Welly Tambunan
Triplelands

http://weltam.wordpress.com
http://www.triplelands.com 


Re: What is the maximum value of Cassandra Counter Column?

2016-10-23 Thread DuyHai Doan
https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/serializers/CounterSerializer.java

public class CounterSerializer extends LongSerializer
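
(So the counter is backed by a signed 64-bit long, i.e. its ceiling is
Long.MAX_VALUE = 9223372036854775807. An illustrative counter table and
increment, with made-up names:

CREATE TABLE page_hits (
    page text PRIMARY KEY,
    hits counter
);

UPDATE page_hits SET hits = hits + 1 WHERE page = '/index';)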

On Sun, Oct 23, 2016 at 10:16 AM, Ben Slater 
wrote:

> http://cassandra.apache.org/doc/latest/cql/types.html?highlight=counter#counters
>
> On Sun, 23 Oct 2016 at 19:15 Kant Kodali  wrote:
>
>> where does it say counter is implemented as long?
>>
>> On Sun, Oct 23, 2016 at 1:13 AM, Ali Akhtar  wrote:
>>
>> Probably: https://docs.oracle.com/javase/8/docs/api/java/lang/Long.html#MAX_VALUE
>>
>> On Sun, Oct 23, 2016 at 1:12 PM, Kant Kodali  wrote:
>>
>> What is the maximum value of Cassandra Counter Column?
>>
>>
>>
>>


Re: What is the maximum value of Cassandra Counter Column?

2016-10-23 Thread Ben Slater
http://cassandra.apache.org/doc/latest/cql/types.html?highlight=counter#counters

On Sun, 23 Oct 2016 at 19:15 Kant Kodali  wrote:

> where does it say counter is implemented as long?
>
> On Sun, Oct 23, 2016 at 1:13 AM, Ali Akhtar  wrote:
>
> Probably:
> https://docs.oracle.com/javase/8/docs/api/java/lang/Long.html#MAX_VALUE
>
> On Sun, Oct 23, 2016 at 1:12 PM, Kant Kodali  wrote:
>
> What is the maximum value of Cassandra Counter Column?
>
>
>
>


Re: What is the maximum value of Cassandra Counter Column?

2016-10-23 Thread Ali Akhtar
It seems obvious.

On Sun, Oct 23, 2016 at 1:15 PM, Kant Kodali  wrote:

> where does it say counter is implemented as long?
>
> On Sun, Oct 23, 2016 at 1:13 AM, Ali Akhtar  wrote:
>
>> Probably: https://docs.oracle.com/javase/8/docs/api/java/lang/Long.html#MAX_VALUE
>>
>> On Sun, Oct 23, 2016 at 1:12 PM, Kant Kodali  wrote:
>>
>>> What is the maximum value of Cassandra Counter Column?
>>>
>>
>>
>


Re: What is the maximum value of Cassandra Counter Column?

2016-10-23 Thread Kant Kodali
where does it say counter is implemented as long?

On Sun, Oct 23, 2016 at 1:13 AM, Ali Akhtar  wrote:

> Probably: https://docs.oracle.com/javase/8/docs/api/java/lang/Long.html#MAX_VALUE
>
> On Sun, Oct 23, 2016 at 1:12 PM, Kant Kodali  wrote:
>
>> What is the maximum value of Cassandra Counter Column?
>>
>
>


Re: What is the maximum value of Cassandra Counter Column?

2016-10-23 Thread Ali Akhtar
Probably:
https://docs.oracle.com/javase/8/docs/api/java/lang/Long.html#MAX_VALUE

On Sun, Oct 23, 2016 at 1:12 PM, Kant Kodali  wrote:

> What is the maximum value of Cassandra Counter Column?
>


What is the maximum value of Cassandra Counter Column?

2016-10-23 Thread Kant Kodali
What is the maximum value of Cassandra Counter Column?