RE: Is it possible to have a column which can hold any data type (for inserting as json)

2017-02-01 Thread Rajeswari Menon
Ok, got it. Many thanks for your help.

Regards,
Rajeswari
From: Harikrishnan Pillai [mailto:hpil...@walmartlabs.com]
Sent: 02 February 2017 11:30
To: user@cassandra.apache.org
Subject: Re: Is it possible to have a column which can hold any data type (for 
inserting as json)


Re: Is it possible to have a column which can hold any data type (for inserting as json)

2017-02-01 Thread Harikrishnan Pillai
When you run a CQL query like select json from table where pk = ?, you will get
the value, which is the full JSON. But if you have a requirement to query the
JSON by some fields inside it, you have to create additional columns for those
fields and create a secondary index on them. Then you can query
select json from table where address = ?, assuming you created a secondary
index on the address column.
Regards
Hari

Sent from my iPhone


RE: Is it possible to have a column which can hold any data type (for inserting as json)

2017-02-01 Thread Rajeswari Menon
Ok, got it. Many thanks for your help.

Regards,
Rajeswari

From: Benjamin Roth [mailto:benjamin.r...@jaumo.com]
Sent: 02 February 2017 11:09
To: user@cassandra.apache.org
Subject: RE: Is it possible to have a column which can hold any data type (for 
inserting as json)

This has to be done in your app. You can store your data as JSON in a text
column. You can use your favourite serializer. You can cast floats to strings.
You can even build a custom type. You can store it serialized as a blob. But
there is no all-purpose "store any data in a magic way" field.

On 02.02.2017 at 05:30, "Rajeswari Menon" wrote:
Yes. Is there any way to define value to accept any data type, as the JSON
value data may vary? Or is there any way to do the same without defining a
schema?

Regards,
Rajeswari

From: Benjamin Roth [mailto:benjamin.r...@jaumo.com]
Sent: 01 February 2017 15:36
To: user@cassandra.apache.org
Subject: RE: Is it possible to have a column which can hold any data type (for 
inserting as json)

Value is defined as a text column and you are trying to insert a double.
That's simply not allowed.

On 01.02.2017 at 09:02, "Rajeswari Menon" wrote:
Given below is the CQL query I executed.

insert into data JSON '{
  "id": 1,
  "address": "",
  "datatype": "DOUBLE",
  "name": "Longitude",
  "attributes": {
    "ID": "1"
  },
  "category": "REAL",
  "value": 1.390692,
  "timestamp": 1485923271718,
  "quality": "GOOD"
}';

Regards,
Rajeswari

From: Benjamin Roth [mailto:benjamin.r...@jaumo.com]
Sent: 01 February 2017 12:35
To: user@cassandra.apache.org
Subject: Re: Is it possible to have a column which can hold any data type (for 
inserting as json)

You should post the whole CQL query you try to execute! Why don't you use a 
native JSON type for your JSON data?

On 2017-02-01 at 7:51 GMT+01:00, Rajeswari Menon wrote:
Hi,

I have JSON data as shown below.

{
  "address": "127.0.0.1",
  "datatype": "DOUBLE",
  "name": "Longitude",
  "attributes": {
    "Id": "1"
  },
  "category": "REAL",
  "value": 1.390692,
  "timestamp": 1485923271718,
  "quality": "GOOD"
}

To store the above JSON in Cassandra, I defined a table as shown below:

create table data
(
  id int primary key,
  address text,
  datatype text,
  name text,
  attributes map<text, text>,
  category text,
  value text,
  "timestamp" timestamp,
  quality text
);

When I try to insert the data as JSON I got the error: Error decoding JSON
value for value: Expected a UTF-8 string, but got a Double: 1.390692. The
message is clear that a double value cannot be inserted into a text column.
The real issue is that the value can be of any data type, so the schema cannot
be predefined. Is there a way to create a column which can hold a value of any
data type? (I don't want to hold the entire JSON as a string. My preferred way
is to define a schema.)

Regards,
Rajeswari



--
Benjamin Roth
Prokurist

Jaumo GmbH · www.jaumo.com
Wehrstraße 46 · 73035 Göppingen · Germany
Phone +49 7161 304880-6 · Fax +49 7161 304880-1
AG Ulm · HRB 731058 · Managing Director: Jens Kammerer


RE: Is it possible to have a column which can hold any data type (for inserting as json)

2017-02-01 Thread Benjamin Roth
This has to be done in your app. You can store your data as JSON in a text
column. You can use your favourite serializer. You can cast floats to
strings. You can even build a custom type. You can store it serialized as a
blob. But there is no all-purpose "store any data in a magic way" field.
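A minimal sketch of that app-side approach (function names and the converter table are illustrative, not from the thread): keep the declared datatype tag next to the value serialized as a string, and convert back on read:

```python
import json

# Map the thread's "datatype" tags to Python converters (an assumed,
# illustrative choice; extend as needed).
CONVERTERS = {"DOUBLE": float, "INT": int, "STRING": str}

def serialize_value(value):
    """Render any supported value as a string for a Cassandra text column."""
    return json.dumps(value)

def deserialize_value(text, datatype):
    """Restore the typed value using the stored datatype tag."""
    return CONVERTERS[datatype](json.loads(text))

stored = serialize_value(1.390692)              # a string; fits a text column
restored = deserialize_value(stored, "DOUBLE")  # back to a Python float
```

The datatype column the schema already carries makes the round trip lossless without Cassandra ever needing a variant-typed column.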


FW: Is it possible to have a column which can hold any data type (for inserting as json)

2017-02-01 Thread Rajeswari Menon
Sorry for the red color in my question. It happened by mistake.

Regards,
Rajeswari.


RE: Is it possible to have a column which can hold any data type (for inserting as json)

2017-02-01 Thread Rajeswari Menon
Could you please help me on this? I am a newbie in Cassandra. So if I need to
store the JSON as a string, I can define the table as below.

create table data
(
  id int primary key,
  json text
);

The insert query will be as follows:

insert into data (id, json) values (1, '{
  "address": "127.0.0.1",
  "datatype": "DOUBLE",
  "name": "Longitude",
  "attributes": {
    "Id": "1"
  },
  "category": "REAL",
  "value": 1.390692,
  "timestamp": 1485923271718,
  "quality": "GOOD"
}');

Now how can I query the value field?
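With this layout the value field exists only inside the JSON text, so any query or aggregate over it has to parse the JSON in the application. A minimal sketch (the rows list stands in for the result of a driver query such as SELECT json FROM data; the driver itself is omitted):

```python
import json

# Stand-ins for the json column of rows returned by SELECT json FROM data.
rows = [
    '{"name": "Longitude", "value": 1.390692, "quality": "GOOD"}',
    '{"name": "Longitude", "value": 2.5, "quality": "GOOD"}',
]

# Cassandra cannot index or aggregate inside a text column, so extract
# the field client-side and aggregate in the application.
values = [json.loads(r)["value"] for r in rows]
average = sum(values) / len(values)
```

This is exactly the parse-it-yourself cost raised earlier in the thread; pushing the field into its own column (or into Solr) avoids it.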



Re: Is it possible to have a column which can hold any data type (for inserting as json)

2017-02-01 Thread Harikrishnan Pillai
You can create additional columns and create a secondary index based on the
fields you want to query. The best option is to store the full JSON in
Cassandra and index the fields you want to query on in Solr.

Sent from my iPhone


RE: Is it possible to have a column which can hold any data type (for inserting as json)

2017-02-01 Thread Rajeswari Menon
Yes, I know that. My intention is to do an aggregate query on the value field
(in the JSON). Will that be possible if I store the entire JSON as a string?
I will have to parse it according to my need, right?

Regards,
Rajeswari


Re: Is it possible to have a column which can hold any data type (for inserting as json)

2017-02-01 Thread Harikrishnan Pillai
You can use the text type in Cassandra and store the full JSON string.

Sent from my iPhone

On Feb 1, 2017, at 8:30 PM, Rajeswari Menon 
> wrote:

Yes. Is there any way to define value to accept any data type as the json value 
data may vary? Or is there any way to do the same without defining a schema?

Regards,
Rajeswari

From: Benjamin Roth [mailto:benjamin.r...@jaumo.com]
Sent: 01 February 2017 15:36
To: user@cassandra.apache.org
Subject: RE: Is it possible to have a column which can hold any data type (for 
inserting as json)

Value is defined as text column and you try to insert a double. That's simply 
not allowed

Am 01.02.2017 09:02 schrieb "Rajeswari Menon" 
>:
Given below is the sql query I executed.

insert into data JSON'{
  "id": 1,
   "address":"",
   "datatype":"DOUBLE",
   "name":"Longitude",
   "attributes":{
  "ID":"1"
   },
   "category":"REAL",
   "value":1.390692,
   "timestamp":1485923271718,
   "quality":"GOOD"
}';

Regards,
Rajeswari

From: Benjamin Roth 
[mailto:benjamin.r...@jaumo.com]
Sent: 01 February 2017 12:35
To: user@cassandra.apache.org
Subject: Re: Is it possible to have a column which can hold any data type (for 
inserting as json)

You should post the whole CQL query you try to execute! Why don't you use a 
native JSON type for your JSON data?

2017-02-01 7:51 GMT+01:00 Rajeswari Menon 
>:
Hi,

I have a json data as shown below.

{
"address":"127.0.0.1",
"datatype":"DOUBLE",
"name":"Longitude",
 "attributes":{
"Id":"1"
},
"category":"REAL",
"value":1.390692,
"timestamp":1485923271718,
"quality":"GOOD"
}

To store the above json to Cassandra, I defined a table as shown below

create table data
(
  id int primary key,
  address text,
  datatype text,
  name text,
  attributes map < text, text >,
  category text,
  value text,
  "timestamp" timestamp,
  quality text
);

When I try to insert the data as JSON I get the error: Error decoding JSON
value for value: Expected a UTF-8 string, but got a Double: 1.390692. The
message is clear that a double value cannot be inserted into a text column. The
real issue is that the value can be of any data type, so the schema cannot be
predefined. Is there a way to create a column which can hold a value of any
data type? (I don't want to hold the entire json as string. My preferred way is
to define a schema.)

Regards,
Rajeswari
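Since CQL has no "any" type, one common pattern (a sketch under the assumption that the application parses values client-side, not something from the thread) keeps the raw value as text next to the datatype tag already present in the JSON; blob is the only truly type-agnostic column, but text plus a tag is easier to inspect and index:

```cql
-- sketch: raw value kept as text, interpreted per the "datatype" tag
create table data
(
  id int primary key,
  name text,
  datatype text,     -- e.g. 'DOUBLE', 'INT64', 'STRING'
  value text,        -- raw value, parsed client-side per datatype
  "timestamp" timestamp,
  quality text
);
```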



--
Benjamin Roth
Prokurist

Jaumo GmbH · www.jaumo.com
Wehrstraße 46 · 73035 Göppingen · Germany
Phone +49 7161 304880-6 · Fax +49 7161 304880-1
AG Ulm · HRB 731058 · Managing Director: Jens Kammerer


RE: Is it possible to have a column which can hold any data type (for inserting as json)

2017-02-01 Thread Rajeswari Menon
Yes. Is there any way to define value so it accepts any data type, as the JSON
value may vary? Or is there any way to do the same without defining a schema?

Regards,
Rajeswari

From: Benjamin Roth [mailto:benjamin.r...@jaumo.com]
Sent: 01 February 2017 15:36
To: user@cassandra.apache.org
Subject: RE: Is it possible to have a column which can hold any data type (for 
inserting as json)

Value is defined as a text column and you are trying to insert a double. That's
simply not allowed.

Am 01.02.2017 09:02 schrieb "Rajeswari Menon" 
>:
Given below is the CQL query I executed.

insert into data JSON '{
  "id": 1,
  "address": "",
  "datatype": "DOUBLE",
  "name": "Longitude",
  "attributes": {
    "ID": "1"
  },
  "category": "REAL",
  "value": 1.390692,
  "timestamp": 1485923271718,
  "quality": "GOOD"
}';

Regards,
Rajeswari

From: Benjamin Roth 
[mailto:benjamin.r...@jaumo.com]
Sent: 01 February 2017 12:35
To: user@cassandra.apache.org
Subject: Re: Is it possible to have a column which can hold any data type (for 
inserting as json)

You should post the whole CQL query you try to execute! Why don't you use a 
native JSON type for your JSON data?

2017-02-01 7:51 GMT+01:00 Rajeswari Menon 
>:
Hi,

I have a json data as shown below.

{
"address":"127.0.0.1",
"datatype":"DOUBLE",
"name":"Longitude",
 "attributes":{
"Id":"1"
},
"category":"REAL",
"value":1.390692,
"timestamp":1485923271718,
"quality":"GOOD"
}

To store the above json to Cassandra, I defined a table as shown below

create table data
(
  id int primary key,
  address text,
  datatype text,
  name text,
  attributes map<text, text>,
  category text,
  value text,
  "timestamp" timestamp,
  quality text
);

When I try to insert the data as JSON I get the error: Error decoding JSON
value for value: Expected a UTF-8 string, but got a Double: 1.390692. The
message is clear that a double value cannot be inserted into a text column. The
real issue is that the value can be of any data type, so the schema cannot be
predefined. Is there a way to create a column which can hold a value of any
data type? (I don't want to hold the entire json as string. My preferred way is
to define a schema.)

Regards,
Rajeswari



--
Benjamin Roth
Prokurist

Jaumo GmbH · www.jaumo.com
Wehrstraße 46 · 73035 Göppingen · Germany
Phone +49 7161 304880-6 · Fax +49 7161 
304880-1
AG Ulm · HRB 731058 · Managing Director: Jens Kammerer


Re: Global TTL vs Insert TTL

2017-02-01 Thread Carlos Rolo
Awesome to know this!

Thanks Jon and DuyHai!

Regards,

Carlos Juzarte Rolo
Cassandra Consultant / Datastax Certified Architect / Cassandra MVP

Pythian - Love your data

rolo@pythian | Twitter: @cjrolo | Skype: cjr2k3 | Linkedin:
linkedin.com/in/carlosjuzarterolo
Mobile: +351 918 918 100
www.pythian.com

On Wed, Feb 1, 2017 at 6:57 PM, Jonathan Haddad  wrote:

> The optimization is there.  The entire sstable can be dropped but it's not
> because of the default TTL.  The default TTL only applies if a TTL isn't
> specified explicitly.  The default TTL can't be used to drop an sstable
> automatically since it can be overridden at insert time.  Check out this
> example.  The first insert uses the default TTL.  The second insert
> overrides the default.  Using the default TTL to drop the sstable would be
> pretty terrible in this case:
>
> CREATE TABLE test.b (
> k int PRIMARY KEY,
> v int
> ) WITH default_time_to_live = 1;
>
> insert into b (k, v) values (1, 1);
> cqlsh:test> select k, v, TTL(v) from b  where k = 1;
>
>  k | v | ttl(v)
> ---+---+
>  1 | 1 |   9943
>
> (1 rows)
>
> cqlsh:test> insert into b (k, v) values (2, 1) USING TTL ;
> cqlsh:test> select k, v, TTL(v) from b  where k = 2;
>
>  k | v | ttl(v)
> ---+---+--
>  2 | 1 | 9995
>
> (1 rows)
>
> TL;DR: The default TTL is there as a convenience so you don't have to keep
> the TTL in your code.  From a performance perspective it does not matter.
>
> Jon
>
>
> On Wed, Feb 1, 2017 at 10:39 AM DuyHai Doan  wrote:
>
>> I was referring to this JIRA
>> https://issues.apache.org/jira/browse/CASSANDRA-3974 when talking about
>> dropping entire SSTable at compaction time
>>
>> But the JIRA is pretty old and it is very possible that the optimization
>> is no longer there
>>
>>
>>
>> On Wed, Feb 1, 2017 at 6:53 PM, Jonathan Haddad 
>> wrote:
>>
>> This is incorrect, there's no optimization used that references the table
>> level TTL setting.   The max local deletion time is stored in table
>> metadata.  See 
>> org.apache.cassandra.io.sstable.metadata.StatsMetadata#maxLocalDeletionTime
>> in the Cassandra 3.0 branch.The default ttl is stored
>> here: org.apache.cassandra.schema.TableParams#defaultTimeToLive and is
>> never referenced during compaction.
>>
>> Here's an example from a table I created without a default TTL, you can
>> use the sstablemetadata tool to see:
>>
>> jhaddad@rustyrazorblade ~/dev/cassandra/data/data/test$
>> ../../../tools/bin/sstablemetadata a-7bca6b50e8a511e6869a5596edf4dd35/mc-1-big-Data.db
>> .
>> SSTable max local deletion time: 1485980862
>>
>> On Wed, Feb 1, 2017 at 6:59 AM DuyHai Doan  wrote:
>>
>> Global TTL is better than dynamic runtime TTL
>>
>> Why ?
>>
>>  Because Global TTL is a table property and Cassandra can perform
>> optimization when compacting.
>>
>> For example if it can see that the maxTimestamp of an SSTable is older
>> than the table Global TTL, the SSTable can be entirely dropped during
>> compaction
>>
>> Using dynamic TTL at runtime, since Cassandra doesn't know and cannot
>> track each individual TTL value, the previous optimization is not possible
>> (even if you always use the SAME TTL for all query, Cassandra is not
>> supposed to know that)
>>
>>
>>
>> On Wed, Feb 1, 2017 at 3:01 PM, Cogumelos Maravilha <
>> cogumelosmaravi...@sapo.pt> wrote:
>>
>> Thank you all, for your answers.
>>
>> On 02/01/2017 01:06 PM, Carlos Rolo wrote:
>>
>> To reinforce Alain statement:
>>
>> "I would say that the unsafe part is more about using C* 3.9" this is
>> key. You would be better on 3.0.x unless you need features on the 3.x
>> series.
>>
>> Regards,
>>
>> Carlos Juzarte Rolo
>> Cassandra Consultant / Datastax Certified Architect / Cassandra MVP
>>
>> Pythian - Love your data
>>
>> rolo@pythian | Twitter: @cjrolo | Skype: cjr2k3 | Linkedin:
>> linkedin.com/in/carlosjuzarterolo
>> Mobile: +351 918 918 100
>> www.pythian.com
>>
>> On Wed, Feb 1, 2017 at 8:32 AM, Alain RODRIGUEZ 
>> wrote:
>>
>> Is it safe to use TWCS in C* 3.9?
>>
>>
>> I would say that the unsafe part is more about using C* 3.9 than using
>> TWCS in C*3.9 :-). I see no reason to say TWCS would be specifically unsafe
>> in C*3.9, but I might be missing something.
>>
>> Going from STCS to TWCS is often smooth, from LCS you might expect an
>> extra load compacting a lot (all?) of the SSTable from what we saw from the
>> field. In this case, be sure that your compaction options are safe enough
>> to handle this.
>>
>> TWCS is even easier to use on C*3.0.8+ and C*3.8+ as it became the new
>> default replacing DTCS, so no extra jar is needed, you can enable TWCS as
>> any other default compaction strategy.
>>
>> C*heers,
>> ---
>> Alain Rodriguez - @arodream - 
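For reference, switching an existing table to TWCS as described above is a plain ALTER TABLE (a sketch with an assumed keyspace/table name; the window options shown are the common ones, check the documentation for your version):

```cql
-- sketch: daily compaction windows for a time-series table
ALTER TABLE my_ks.events
  WITH compaction = {
    'class': 'TimeWindowCompactionStrategy',
    'compaction_window_unit': 'DAYS',
    'compaction_window_size': '1'
  };
```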

Re: quick question

2017-02-01 Thread Kant Kodali
Adding dev only for this thread.

On Wed, Feb 1, 2017 at 4:39 AM, Kant Kodali  wrote:

> What is the difference between accepting a value and committing a value?
>
>
>
> On Wed, Feb 1, 2017 at 4:25 AM, Kant Kodali  wrote:
>
>> Hi,
>>
>> Thanks for the response. I finished watching this video but I still got
>> few questions.
>>
>> 1) The speaker seems to suggest that there are different consistency
>> levels being used in different phases of the Paxos protocol. If so, what is
>> the right consistency level to set for these phases?
>>
>> 2) Right now, we just set consistency level as QUORUM at the global level
>> and I don't think we ever change it, so in this case what would be the
>> consistency levels used in the different phases?
>>
>> 3) The fact that one should think about reading before the commit phase
>> or after the commit phase (but not any other phase) sounds like there is
>> something special about the commit phase. What is that? When I set QUORUM
>> consistency at the global level, does the commit phase happen right after
>> the accept phase or not? Or, irrespective of the consistency level, when
>> does the commit phase happen, and what happens during it?
>>
>>
>> Thanks,
>> kant
>>
>>
>> On Wed, Feb 1, 2017 at 3:30 AM, Alain RODRIGUEZ 
>> wrote:
>>
>>> Hi,
>>>
>>> I believe that this talk from Christopher Batey at the Cassandra Summit
>>> 2016 might answer most of your questions around LWT:
>>> https://www.youtube.com/watch?v=wcxQM3ZN20c
>>>
>>> He explains a lot of stuff including consistency considerations. My
>>> understanding is that the quorum read can only see the data written using
>>> LWT after the commit phase. A SERIAL Read would see it (video, around
>>> 23:40).
>>>
>>> Here are the slides as well: http://fr.slideshare.net
>>> /DataStax/light-weight-transactions-under-stress-christopher
>>> -batey-the-last-pickle-cassandra-summit-2016
>>>
>>> Let us know if you still have questions after watching this (about 35
>>> minutes).
>>>
>>> C*heers,
>>> ---
>>> Alain Rodriguez - @arodream - al...@thelastpickle.com
>>> France
>>>
>>> The Last Pickle - Apache Cassandra Consulting
>>> http://www.thelastpickle.com
>>>
>>> 2017-02-01 10:57 GMT+01:00 Kant Kodali :
>>>
 When you initiate a LWT(write) and do a QUORUM read is there a chance
 that one might not see the LWT write ? If so, can someone explain a bit
 more?

 Thanks!

>>>
>>>
>>
>
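The QUORUM vs SERIAL read distinction discussed above can be sketched in cqlsh (a sketch against a hypothetical users table; in cqlsh, CONSISTENCY SERIAL makes subsequent reads perform a Paxos read, which completes any in-progress round before returning):

```cql
-- LWT write: the Paxos prepare/accept rounds use SERIAL consistency internally
INSERT INTO users (id, name) VALUES (1, 'a') IF NOT EXISTS;

CONSISTENCY QUORUM;
SELECT name FROM users WHERE id = 1;  -- may miss an accepted-but-uncommitted value

CONSISTENCY SERIAL;
SELECT name FROM users WHERE id = 1;  -- completes any in-flight Paxos round first
```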


Re: Global TTL vs Insert TTL

2017-02-01 Thread Jonathan Haddad
The optimization is there.  The entire sstable can be dropped but it's not
because of the default TTL.  The default TTL only applies if a TTL isn't
specified explicitly.  The default TTL can't be used to drop an sstable
automatically since it can be overridden at insert time.  Check out this
example.  The first insert uses the default TTL.  The second insert
overrides the default.  Using the default TTL to drop the sstable would be
pretty terrible in this case:

CREATE TABLE test.b (
k int PRIMARY KEY,
v int
) WITH default_time_to_live = 1;

insert into b (k, v) values (1, 1);
cqlsh:test> select k, v, TTL(v) from b  where k = 1;

 k | v | ttl(v)
---+---+
 1 | 1 |   9943

(1 rows)

cqlsh:test> insert into b (k, v) values (2, 1) USING TTL ;
cqlsh:test> select k, v, TTL(v) from b  where k = 2;

 k | v | ttl(v)
---+---+--
 2 | 1 | 9995

(1 rows)

TL;DR: The default TTL is there as a convenience so you don't have to keep
the TTL in your code.  From a performance perspective it does not matter.

Jon


On Wed, Feb 1, 2017 at 10:39 AM DuyHai Doan  wrote:

> I was referring to this JIRA
> https://issues.apache.org/jira/browse/CASSANDRA-3974 when talking about
> dropping entire SSTable at compaction time
>
> But the JIRA is pretty old and it is very possible that the optimization
> is no longer there
>
>
>
> On Wed, Feb 1, 2017 at 6:53 PM, Jonathan Haddad  wrote:
>
> This is incorrect, there's no optimization used that references the table
> level TTL setting.   The max local deletion time is stored in table
> metadata.  See 
> org.apache.cassandra.io.sstable.metadata.StatsMetadata#maxLocalDeletionTime
> in the Cassandra 3.0 branch.The default ttl is stored
> here: org.apache.cassandra.schema.TableParams#defaultTimeToLive and is
> never referenced during compaction.
>
> Here's an example from a table I created without a default TTL, you can
> use the sstablemetadata tool to see:
>
> jhaddad@rustyrazorblade ~/dev/cassandra/data/data/test$
> ../../../tools/bin/sstablemetadata
> a-7bca6b50e8a511e6869a5596edf4dd35/mc-1-big-Data.db
> .
> SSTable max local deletion time: 1485980862
>
> On Wed, Feb 1, 2017 at 6:59 AM DuyHai Doan  wrote:
>
> Global TTL is better than dynamic runtime TTL
>
> Why ?
>
>  Because Global TTL is a table property and Cassandra can perform
> optimization when compacting.
>
> For example if it can see that the maxTimestamp of an SSTable is older
> than the table Global TTL, the SSTable can be entirely dropped during
> compaction
>
> Using dynamic TTL at runtime, since Cassandra doesn't know and cannot track
> each individual TTL value, the previous optimization is not possible (even
> if you always use the SAME TTL for all query, Cassandra is not supposed to
> know that)
>
>
>
> On Wed, Feb 1, 2017 at 3:01 PM, Cogumelos Maravilha <
> cogumelosmaravi...@sapo.pt> wrote:
>
> Thank you all, for your answers.
>
> On 02/01/2017 01:06 PM, Carlos Rolo wrote:
>
> To reinforce Alain statement:
>
> "I would say that the unsafe part is more about using C* 3.9" this is key.
> You would be better on 3.0.x unless you need features on the 3.x series.
>
> Regards,
>
> Carlos Juzarte Rolo
> Cassandra Consultant / Datastax Certified Architect / Cassandra MVP
>
> Pythian - Love your data
>
> rolo@pythian | Twitter: @cjrolo | Skype: cjr2k3 | Linkedin:
> linkedin.com/in/carlosjuzarterolo
> Mobile: +351 918 918 100
> www.pythian.com
>
> On Wed, Feb 1, 2017 at 8:32 AM, Alain RODRIGUEZ 
> wrote:
>
> Is it safe to use TWCS in C* 3.9?
>
>
> I would say that the unsafe part is more about using C* 3.9 than using
> TWCS in C*3.9 :-). I see no reason to say TWCS would be specifically unsafe
> in C*3.9, but I might be missing something.
>
> Going from STCS to TWCS is often smooth, from LCS you might expect an
> extra load compacting a lot (all?) of the SSTable from what we saw from the
> field. In this case, be sure that your compaction options are safe enough
> to handle this.
>
> TWCS is even easier to use on C*3.0.8+ and C*3.8+ as it became the new
> default replacing DTCS, so no extra jar is needed, you can enable TWCS as
> any other default compaction strategy.
>
> C*heers,
> ---
> Alain Rodriguez - @arodream - al...@thelastpickle.com
> France
>
> The Last Pickle - Apache Cassandra Consulting
> http://www.thelastpickle.com
>
> 2017-01-31 23:29 GMT+01:00 Cogumelos Maravilha  >:
>
> Hi Alain,
>
> Thanks for your response and the links.
>
> I've also checked "Time series data model and tombstones".
>
> Is it safe to use TWCS in C* 3.9?
>
> Thanks in advance.
>
> On 31-01-2017 11:27, Alain RODRIGUEZ wrote:
>
> Is there an overhead using the line-by-line option or wasted disk space?
>
>  There is a very recent topic about that in the mailing list, look for "Time
> series data model and 

Re: Global TTL vs Insert TTL

2017-02-01 Thread DuyHai Doan
I was referring to this JIRA
https://issues.apache.org/jira/browse/CASSANDRA-3974 when talking about
dropping entire SSTable at compaction time

But the JIRA is pretty old and it is very possible that the optimization is
no longer there



On Wed, Feb 1, 2017 at 6:53 PM, Jonathan Haddad  wrote:

> This is incorrect, there's no optimization used that references the table
> level TTL setting.   The max local deletion time is stored in table
> metadata.  See 
> org.apache.cassandra.io.sstable.metadata.StatsMetadata#maxLocalDeletionTime
> in the Cassandra 3.0 branch.The default ttl is stored
> here: org.apache.cassandra.schema.TableParams#defaultTimeToLive and is
> never referenced during compaction.
>
> Here's an example from a table I created without a default TTL, you can
> use the sstablemetadata tool to see:
>
> jhaddad@rustyrazorblade ~/dev/cassandra/data/data/test$
> ../../../tools/bin/sstablemetadata a-7bca6b50e8a511e6869a5596edf4dd35/mc-1-big-Data.db
> .
> SSTable max local deletion time: 1485980862
>
> On Wed, Feb 1, 2017 at 6:59 AM DuyHai Doan  wrote:
>
>> Global TTL is better than dynamic runtime TTL
>>
>> Why ?
>>
>>  Because Global TTL is a table property and Cassandra can perform
>> optimization when compacting.
>>
>> For example if it can see that the maxTimestamp of an SSTable is older
>> than the table Global TTL, the SSTable can be entirely dropped during
>> compaction
>>
>> Using dynamic TTL at runtime, since Cassandra doesn't know and cannot
>> track each individual TTL value, the previous optimization is not possible
>> (even if you always use the SAME TTL for all query, Cassandra is not
>> supposed to know that)
>>
>>
>>
>> On Wed, Feb 1, 2017 at 3:01 PM, Cogumelos Maravilha <
>> cogumelosmaravi...@sapo.pt> wrote:
>>
>> Thank you all, for your answers.
>>
>> On 02/01/2017 01:06 PM, Carlos Rolo wrote:
>>
>> To reinforce Alain statement:
>>
>> "I would say that the unsafe part is more about using C* 3.9" this is
>> key. You would be better on 3.0.x unless you need features on the 3.x
>> series.
>>
>> Regards,
>>
>> Carlos Juzarte Rolo
>> Cassandra Consultant / Datastax Certified Architect / Cassandra MVP
>>
>> Pythian - Love your data
>>
>> rolo@pythian | Twitter: @cjrolo | Skype: cjr2k3 | Linkedin:
>> linkedin.com/in/carlosjuzarterolo
>> Mobile: +351 918 918 100
>> www.pythian.com
>>
>> On Wed, Feb 1, 2017 at 8:32 AM, Alain RODRIGUEZ 
>> wrote:
>>
>> Is it safe to use TWCS in C* 3.9?
>>
>>
>> I would say that the unsafe part is more about using C* 3.9 than using
>> TWCS in C*3.9 :-). I see no reason to say TWCS would be specifically unsafe
>> in C*3.9, but I might be missing something.
>>
>> Going from STCS to TWCS is often smooth, from LCS you might expect an
>> extra load compacting a lot (all?) of the SSTable from what we saw from the
>> field. In this case, be sure that your compaction options are safe enough
>> to handle this.
>>
>> TWCS is even easier to use on C*3.0.8+ and C*3.8+ as it became the new
>> default replacing DTCS, so no extra jar is needed, you can enable TWCS as
>> any other default compaction strategy.
>>
>> C*heers,
>> ---
>> Alain Rodriguez - @arodream - al...@thelastpickle.com
>> France
>>
>> The Last Pickle - Apache Cassandra Consulting
>> http://www.thelastpickle.com
>>
>> 2017-01-31 23:29 GMT+01:00 Cogumelos Maravilha <
>> cogumelosmaravi...@sapo.pt>:
>>
>> Hi Alain,
>>
>> Thanks for your response and the links.
>>
>> I've also checked "Time series data model and tombstones".
>>
>> Is it safe to use TWCS in C* 3.9?
>>
>> Thanks in advance.
>>
>> On 31-01-2017 11:27, Alain RODRIGUEZ wrote:
>>
>> Is there an overhead using the line-by-line option or wasted disk space?
>>
>>  There is a very recent topic about that in the mailing list, look for "Time
>> series data model and tombstones". I believe DuyHai answered your question
>> there with more details :).
>>
>> *tl;dr:*
>>
>> Yes, if you know the TTL in advance, and it is fixed, you might want to
>> go with the table option instead of adding the TTL in each insert. Also you
>> might want to consider using the TWCS compaction strategy.
>>
>> Here are some blogposts my coworkers recently wrote about TWCS, it might
>> be useful:
>>
>> http://thelastpickle.com/blog/2016/12/08/TWCS-part1.html
>> http://thelastpickle.com/blog/2017/01/10/twcs-part2.html
>>
>> C*heers,
>> ---
>> Alain Rodriguez - @arodream - al...@thelastpickle.com
>> France
>>
>> The Last Pickle - Apache Cassandra Consulting
>> http://www.thelastpickle.com
>>
>>
>>
>> 2017-01-31 10:43 GMT+01:00 Cogumelos Maravilha <
>> cogumelosmaravi...@sapo.pt>:
>>
>> Hi I'm just wondering what option is fastest:
>>
>> Global:
>> create table xxx (... AND default_time_to_live = XXX;
>> and UPDATE xxx USING TTL XXX;
>>
>> Line by line:
>> INSERT INTO xxx (... USING TTL xxx;
>>
>> Is 
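The table-level option discussed in this thread can be sketched as follows (assumed table name; default_time_to_live is in seconds and applies only when no USING TTL is given):

```cql
-- sketch: fixed table-wide TTL plus TWCS, so inserts need no USING TTL
CREATE TABLE events (
  id timeuuid PRIMARY KEY,
  payload text
) WITH default_time_to_live = 86400
  AND compaction = {'class': 'TimeWindowCompactionStrategy'};
```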

Re: Global TTL vs Insert TTL

2017-02-01 Thread Jonathan Haddad
This is incorrect, there's no optimization used that references the table
level TTL setting.   The max local deletion time is stored in table
metadata.
See org.apache.cassandra.io.sstable.metadata.StatsMetadata#maxLocalDeletionTime
in the Cassandra 3.0 branch.The default ttl is stored
here: org.apache.cassandra.schema.TableParams#defaultTimeToLive and is
never referenced during compaction.

Here's an example from a table I created without a default TTL, you can use
the sstablemetadata tool to see:

jhaddad@rustyrazorblade ~/dev/cassandra/data/data/test$
../../../tools/bin/sstablemetadata
a-7bca6b50e8a511e6869a5596edf4dd35/mc-1-big-Data.db
.
SSTable max local deletion time: 1485980862

On Wed, Feb 1, 2017 at 6:59 AM DuyHai Doan  wrote:

> Global TTL is better than dynamic runtime TTL
>
> Why ?
>
>  Because Global TTL is a table property and Cassandra can perform
> optimization when compacting.
>
> For example if it can see that the maxTimestamp of an SSTable is older
> than the table Global TTL, the SSTable can be entirely dropped during
> compaction
>
> Using dynamic TTL at runtime, since Cassandra doesn't know and cannot track
> each individual TTL value, the previous optimization is not possible (even
> if you always use the SAME TTL for all query, Cassandra is not supposed to
> know that)
>
>
>
> On Wed, Feb 1, 2017 at 3:01 PM, Cogumelos Maravilha <
> cogumelosmaravi...@sapo.pt> wrote:
>
> Thank you all, for your answers.
>
> On 02/01/2017 01:06 PM, Carlos Rolo wrote:
>
> To reinforce Alain statement:
>
> "I would say that the unsafe part is more about using C* 3.9" this is key.
> You would be better on 3.0.x unless you need features on the 3.x series.
>
> Regards,
>
> Carlos Juzarte Rolo
> Cassandra Consultant / Datastax Certified Architect / Cassandra MVP
>
> Pythian - Love your data
>
> rolo@pythian | Twitter: @cjrolo | Skype: cjr2k3 | Linkedin:
> linkedin.com/in/carlosjuzarterolo
> Mobile: +351 918 918 100
> www.pythian.com
>
> On Wed, Feb 1, 2017 at 8:32 AM, Alain RODRIGUEZ 
> wrote:
>
> Is it safe to use TWCS in C* 3.9?
>
>
> I would say that the unsafe part is more about using C* 3.9 than using
> TWCS in C*3.9 :-). I see no reason to say TWCS would be specifically unsafe
> in C*3.9, but I might be missing something.
>
> Going from STCS to TWCS is often smooth, from LCS you might expect an
> extra load compacting a lot (all?) of the SSTable from what we saw from the
> field. In this case, be sure that your compaction options are safe enough
> to handle this.
>
> TWCS is even easier to use on C*3.0.8+ and C*3.8+ as it became the new
> default replacing DTCS, so no extra jar is needed, you can enable TWCS as
> any other default compaction strategy.
>
> C*heers,
> ---
> Alain Rodriguez - @arodream - al...@thelastpickle.com
> France
>
> The Last Pickle - Apache Cassandra Consulting
> http://www.thelastpickle.com
>
> 2017-01-31 23:29 GMT+01:00 Cogumelos Maravilha  >:
>
> Hi Alain,
>
> Thanks for your response and the links.
>
> I've also checked "Time series data model and tombstones".
>
> Is it safe to use TWCS in C* 3.9?
>
> Thanks in advance.
>
> On 31-01-2017 11:27, Alain RODRIGUEZ wrote:
>
> Is there an overhead using the line-by-line option or wasted disk space?
>
>  There is a very recent topic about that in the mailing list, look for "Time
> series data model and tombstones". I believe DuyHai answered your question
> there with more details :).
>
> *tl;dr:*
>
> Yes, if you know the TTL in advance, and it is fixed, you might want to go
> with the table option instead of adding the TTL in each insert. Also you
> might want to consider using the TWCS compaction strategy.
>
> Here are some blogposts my coworkers recently wrote about TWCS, it might
> be useful:
>
> http://thelastpickle.com/blog/2016/12/08/TWCS-part1.html
> http://thelastpickle.com/blog/2017/01/10/twcs-part2.html
>
> C*heers,
> ---
> Alain Rodriguez - @arodream - al...@thelastpickle.com
> France
>
> The Last Pickle - Apache Cassandra Consulting
> http://www.thelastpickle.com
>
>
>
> 2017-01-31 10:43 GMT+01:00 Cogumelos Maravilha  >:
>
> Hi I'm just wondering what option is fastest:
>
> Global:
> create table xxx (... AND default_time_to_live = XXX;
> and UPDATE xxx USING TTL XXX;
>
> Line by line:
> INSERT INTO xxx (... USING TTL xxx;
>
> Is there an overhead using the line-by-line option or wasted disk space?
>
> Thanks in advance.
>
>
>
>
>
>


Re: Is it possible to have a column which can hold any data type (for inserting as json)

2017-02-01 Thread Peter Reilly
Example:
cqlsh> use dc_god_emperor ;
cqlsh:dc_god_emperor> create table data ( id int primary key, value text ) ;
cqlsh:dc_god_emperor> insert into data JSON '{"id": 1, "value": "hello world"}' ;
cqlsh:dc_god_emperor> select * from data;

 id | value
----+-------------
  1 | hello world

(1 rows)

Peter

On Wed, Feb 1, 2017 at 3:06 AM, Benjamin Roth 
wrote:

> Value is defined as a text column and you are trying to insert a double.
> That's simply not allowed.
>
> Am 01.02.2017 09:02 schrieb "Rajeswari Menon" :
>
>> Given below is the CQL query I executed.
>>
>>
>>
>> insert into data JSON '{
>>   "id": 1,
>>   "address": "",
>>   "datatype": "DOUBLE",
>>   "name": "Longitude",
>>   "attributes": {
>>     "ID": "1"
>>   },
>>   "category": "REAL",
>>   "value": 1.390692,
>>   "timestamp": 1485923271718,
>>   "quality": "GOOD"
>> }';
>>
>>
>>
>> Regards,
>>
>> Rajeswari
>>
>>
>>
>> *From:* Benjamin Roth [mailto:benjamin.r...@jaumo.com]
>> *Sent:* 01 February 2017 12:35
>> *To:* user@cassandra.apache.org
>> *Subject:* Re: Is it possible to have a column which can hold any data
>> type (for inserting as json)
>>
>>
>>
>> You should post the whole CQL query you try to execute! Why don't you use
>> a native JSON type for your JSON data?
>>
>>
>>
>> 2017-02-01 7:51 GMT+01:00 Rajeswari Menon :
>>
>> Hi,
>>
>>
>>
>> I have a json data as shown below.
>>
>>
>>
>> {
>>
>> "address":"127.0.0.1",
>>
>> "datatype":"DOUBLE",
>>
>> "name":"Longitude",
>>
>>  "attributes":{
>>
>> "Id":"1"
>>
>> },
>>
>> "category":"REAL",
>>
>> "value":1.390692,
>>
>> "timestamp":1485923271718,
>>
>> "quality":"GOOD"
>>
>> }
>>
>>
>>
>> To store the above json to Cassandra, I defined a table as shown below
>>
>>
>>
>> create table data
>> (
>>   id int primary key,
>>   address text,
>>   datatype text,
>>   name text,
>>   attributes map<text, text>,
>>   category text,
>>   value text,
>>   "timestamp" timestamp,
>>   quality text
>> );
>>
>>
>>
>> When I try to insert the data as JSON I get the error: Error decoding
>> JSON value for value: Expected a UTF-8 string, but got a Double: 1.390692.
>> The message is clear that a double value cannot be inserted into a text
>> column. The real issue is that the value can be of any data type, so the
>> schema cannot be predefined. Is there a way to create a column which can
>> hold a value of any data type? (I don't want to hold the entire json as
>> string. My preferred way is to define a schema.)
>>
>>
>>
>> Regards,
>>
>> Rajeswari
>>
>>
>>
>>
>>
>> --
>>
>> Benjamin Roth
>> Prokurist
>>
>> Jaumo GmbH · www.jaumo.com
>> Wehrstraße 46 · 73035 Göppingen · Germany
>> Phone +49 7161 304880-6 · Fax +49 7161 304880-1
>> AG Ulm · HRB 731058 · Managing Director: Jens Kammerer
>>
>


Re: Global TTL vs Insert TTL

2017-02-01 Thread DuyHai Doan
Global TTL is better than dynamic runtime TTL

Why ?

 Because Global TTL is a table property and Cassandra can perform
optimization when compacting.

For example if it can see that the maxTimestamp of an SSTable is older than
the table Global TTL, the SSTable can be entirely dropped during compaction

Using dynamic TTL at runtime, since Cassandra doesn't know and cannot track
each individual TTL value, the previous optimization is not possible (even
if you always use the SAME TTL for all query, Cassandra is not supposed to
know that)



On Wed, Feb 1, 2017 at 3:01 PM, Cogumelos Maravilha <
cogumelosmaravi...@sapo.pt> wrote:

> Thank you all, for your answers.
>
> On 02/01/2017 01:06 PM, Carlos Rolo wrote:
>
> To reinforce Alain statement:
>
> "I would say that the unsafe part is more about using C* 3.9" this is key.
> You would be better on 3.0.x unless you need features on the 3.x series.
>
> Regards,
>
> Carlos Juzarte Rolo
> Cassandra Consultant / Datastax Certified Architect / Cassandra MVP
>
> Pythian - Love your data
>
> rolo@pythian | Twitter: @cjrolo | Skype: cjr2k3 | Linkedin:
> linkedin.com/in/carlosjuzarterolo
> Mobile: +351 918 918 100
> www.pythian.com
>
> On Wed, Feb 1, 2017 at 8:32 AM, Alain RODRIGUEZ 
> wrote:
>
>> Is it safe to use TWCS in C* 3.9?
>>
>>
>> I would say that the unsafe part is more about using C* 3.9 than using
>> TWCS in C*3.9 :-). I see no reason to say TWCS would be specifically unsafe
>> in C*3.9, but I might be missing something.
>>
>> Going from STCS to TWCS is often smooth, from LCS you might expect an
>> extra load compacting a lot (all?) of the SSTable from what we saw from the
>> field. In this case, be sure that your compaction options are safe enough
>> to handle this.
>>
>> TWCS is even easier to use on C*3.0.8+ and C*3.8+ as it became the new
>> default replacing DTCS, so no extra jar is needed, you can enable TWCS as
>> any other default compaction strategy.
>>
>> C*heers,
>> ---
>> Alain Rodriguez - @arodream - al...@thelastpickle.com
>> France
>>
>> The Last Pickle - Apache Cassandra Consulting
>> http://www.thelastpickle.com
>>
>> 2017-01-31 23:29 GMT+01:00 Cogumelos Maravilha <
>> cogumelosmaravi...@sapo.pt>:
>>
>>> Hi Alain,
>>>
>>> Thanks for your response and the links.
>>>
>>> I've also checked "Time series data model and tombstones".
>>>
>>> Is it safe to use TWCS in C* 3.9?
>>>
>>> Thanks in advance.
>>>
>>> On 31-01-2017 11:27, Alain RODRIGUEZ wrote:
>>>
>>> Is there an overhead using the line-by-line option or wasted disk space?

>>> There is a very recent topic about that in the mailing list, look for "Time
>>> series data model and tombstones". I believe DuyHai answered your question
>>> there with more details :).
>>>
>>> *tl;dr:*
>>>
>>> Yes, if you know the TTL in advance, and it is fixed, you might want to
>>> go with the table option instead of adding the TTL in each insert. Also you
>>> might want to consider using the TWCS compaction strategy.
>>>
>>> Here are some blogposts my coworkers recently wrote about TWCS, it might
>>> be useful:
>>>
>>> http://thelastpickle.com/blog/2016/12/08/TWCS-part1.html
>>> http://thelastpickle.com/blog/2017/01/10/twcs-part2.html
>>>
>>> C*heers,
>>> ---
>>> Alain Rodriguez - @arodream - al...@thelastpickle.com
>>> France
>>>
>>> The Last Pickle - Apache Cassandra Consulting
>>> http://www.thelastpickle.com
>>>
>>>
>>>
>>> 2017-01-31 10:43 GMT+01:00 Cogumelos Maravilha <
>>> cogumelosmaravi...@sapo.pt>:
>>>
 Hi I'm just wondering what option is fastest:

Global:
create table xxx (... AND default_time_to_live = XXX;
and UPDATE xxx USING TTL XXX;

Line by line:
INSERT INTO xxx (... USING TTL xxx;

Is there an overhead using the line-by-line option or wasted disk space?

 Thanks in advance.


>>>
>>>
>


Re: Global TTL vs Insert TTL

2017-02-01 Thread Cogumelos Maravilha
Thank you all, for your answers.


On 02/01/2017 01:06 PM, Carlos Rolo wrote:
> To reinforce Alain statement:
>
> "I would say that the unsafe part is more about using C* 3.9" this is
> key. You would be better on 3.0.x unless you need features on the 3.x
> series.
>
> Regards,
>
> Carlos Juzarte Rolo
> Cassandra Consultant / Datastax Certified Architect / Cassandra MVP
>  
> Pythian - Love your data
>
> rolo@pythian | Twitter: @cjrolo | Skype: cjr2k3 | Linkedin:
> linkedin.com/in/carlosjuzarterolo
> Mobile: +351 918 918 100
> www.pythian.com
>
> On Wed, Feb 1, 2017 at 8:32 AM, Alain RODRIGUEZ  > wrote:
>
> Is it safe to use TWCS in C* 3.9?
>
>
> I would say that the unsafe part is more about using C* 3.9 than
> using TWCS in C* 3.9 :-). I see no reason to say TWCS would be
> specifically unsafe in C* 3.9, but I might be missing something.
>
> Going from STCS to TWCS is often smooth; coming from LCS you might
> expect an extra load compacting a lot (all?) of the SSTables, from
> what we have seen in the field. In this case, be sure that your
> compaction options are safe enough to handle this.
>
> TWCS is even easier to use on C* 3.0.8+ and C* 3.8+, as it ships
> with Cassandra (replacing DTCS), so no extra jar is needed; you can
> enable TWCS like any other built-in compaction strategy.
>
> C*heers,
> ---
> Alain Rodriguez - @arodream - al...@thelastpickle.com
> 
> France
>
> The Last Pickle - Apache Cassandra Consulting
> http://www.thelastpickle.com
>
> 2017-01-31 23:29 GMT+01:00 Cogumelos Maravilha:
>
> Hi Alain,
>
> Thanks for your response and the links.
>
> I've also checked "Time series data model and tombstones".
>
> Is it safe to use TWCS in C* 3.9?
>
> Thanks in advance.
>
>
> On 31-01-2017 11:27, Alain RODRIGUEZ wrote:
>>
>> Is there any overhead using the line-by-line option, or wasted disk space?
>>
>>  There is a very recent topic about that in the mailing list,
>> look for "Time series data model and tombstones". I believe
>> DuyHai answered your question there with more details :).
>>
>> *tl;dr:*
>>
>> Yes, if you know the TTL in advance, and it is fixed, you
>> might want to go with the table option instead of adding the
>> TTL in each insert. Also you might want to consider using TWCS
>> compaction strategy.
>>
>> Here are some blogposts my coworkers recently wrote about
>> TWCS; they might be useful:
>>
>> http://thelastpickle.com/blog/2016/12/08/TWCS-part1.html
>> 
>> http://thelastpickle.com/blog/2017/01/10/twcs-part2.html
>> 
>>
>> C*heers,
>> ---
>> Alain Rodriguez - @arodream - al...@thelastpickle.com
>> 
>> France
>>
>> The Last Pickle - Apache Cassandra Consulting
>> http://www.thelastpickle.com
>>
>>
>>
>> 2017-01-31 10:43 GMT+01:00 Cogumelos Maravilha:
>>
>> Hi, I'm just wondering which option is faster:
>>
>> Global:
>> CREATE TABLE xxx ( ... ) WITH default_time_to_live = XXX;
>> and UPDATE xxx USING TTL XXX;
>>
>> Line by line:
>> INSERT INTO xxx ( ... ) ... USING TTL xxx;
>>
>> Is there any overhead using the line-by-line option, or wasted disk space?
>>
>> Thanks in advance.
>>
>>
>



Re: Global TTL vs Insert TTL

2017-02-01 Thread Carlos Rolo
To reinforce Alain statement:

"I would say that the unsafe part is more about using C* 3.9" this is key.
You would be better on 3.0.x unless you need features on the 3.x series.

Regards,

Carlos Juzarte Rolo
Cassandra Consultant / Datastax Certified Architect / Cassandra MVP

Pythian - Love your data

rolo@pythian | Twitter: @cjrolo | Skype: cjr2k3 | Linkedin:
*linkedin.com/in/carlosjuzarterolo
*
Mobile: +351 918 918 100
www.pythian.com

On Wed, Feb 1, 2017 at 8:32 AM, Alain RODRIGUEZ  wrote:

> Is it safe to use TWCS in C* 3.9?
>
>
> I would say that the unsafe part is more about using C* 3.9 than using
> TWCS in C* 3.9 :-). I see no reason to say TWCS would be specifically unsafe
> in C* 3.9, but I might be missing something.
>
> Going from STCS to TWCS is often smooth; coming from LCS you might expect an
> extra load compacting a lot (all?) of the SSTables, from what we have seen in
> the field. In this case, be sure that your compaction options are safe enough
> to handle this.
>
> TWCS is even easier to use on C* 3.0.8+ and C* 3.8+, as it ships with
> Cassandra (replacing DTCS), so no extra jar is needed; you can enable TWCS
> like any other built-in compaction strategy.
>
> C*heers,
> ---
> Alain Rodriguez - @arodream - al...@thelastpickle.com
> France
>
> The Last Pickle - Apache Cassandra Consulting
> http://www.thelastpickle.com
>
> 2017-01-31 23:29 GMT+01:00 Cogumelos Maravilha:
>
>> Hi Alain,
>>
>> Thanks for your response and the links.
>>
>> I've also checked "Time series data model and tombstones".
>>
>> Is it safe to use TWCS in C* 3.9?
>>
>> Thanks in advance.
>>
>> On 31-01-2017 11:27, Alain RODRIGUEZ wrote:
>>
>> Is there any overhead using the line-by-line option, or wasted disk space?
>>
>> There is a very recent topic about that in the mailing list, look for "Time
>> series data model and tombstones". I believe DuyHai answered your question
>> there with more details :).
>>
>> *tl;dr:*
>>
>> Yes, if you know the TTL in advance, and it is fixed, you might want to
>> go with the table option instead of adding the TTL in each insert. Also you
>> might want to consider using TWCS compaction strategy.
>>
>> Here are some blogposts my coworkers recently wrote about TWCS; they might
>> be useful:
>>
>> http://thelastpickle.com/blog/2016/12/08/TWCS-part1.html
>> http://thelastpickle.com/blog/2017/01/10/twcs-part2.html
>>
>> C*heers,
>> ---
>> Alain Rodriguez - @arodream - al...@thelastpickle.com
>> France
>>
>> The Last Pickle - Apache Cassandra Consulting
>> http://www.thelastpickle.com
>>
>>
>>
>> 2017-01-31 10:43 GMT+01:00 Cogumelos Maravilha <
>> cogumelosmaravi...@sapo.pt>:
>>
>>> Hi, I'm just wondering which option is faster:
>>>
>>> Global:
>>> CREATE TABLE xxx ( ... ) WITH default_time_to_live = XXX;
>>> and UPDATE xxx USING TTL XXX;
>>>
>>> Line by line:
>>> INSERT INTO xxx ( ... ) ... USING TTL xxx;
>>>
>>> Is there any overhead using the line-by-line option, or wasted disk space?
>>>
>>> Thanks in advance.
>>>
>>>
>>
>>
>



Re: quick question

2017-02-01 Thread Kant Kodali
What is the difference between accepting a value and committing a value?



On Wed, Feb 1, 2017 at 4:25 AM, Kant Kodali  wrote:

> Hi,
>
> Thanks for the response. I finished watching this video but I still got
> few questions.
>
> 1) The speaker seems to suggest that different consistency levels are used
> in different phases of the Paxos protocol. If so, what is the right
> consistency level to set for these phases?
>
> 2) Right now we just set the consistency level to QUORUM at the global
> level, and I don't think we ever change it, so in this case what
> consistency levels would be used in the different phases?
>
> 3) The fact that one should think about reading before the commit phase or
> after the commit phase (but not any other phase) suggests there is
> something special about the commit phase. What is it? When I set QUORUM
> consistency at the global level, does the commit phase happen right after
> the accept phase? Or, irrespective of the consistency level, when does the
> commit phase happen, and what happens during it?
>
>
> Thanks,
> kant
>
>
> On Wed, Feb 1, 2017 at 3:30 AM, Alain RODRIGUEZ 
> wrote:
>
>> Hi,
>>
>> I believe that this talk from Christopher Batey at the Cassandra Summit
>> 2016 might answer most of your questions around LWT:
>> https://www.youtube.com/watch?v=wcxQM3ZN20c
>>
>> He explains a lot of stuff including consistency considerations. My
>> understanding is that the quorum read can only see the data written using
>> LWT after the commit phase. A SERIAL Read would see it (video, around
>> 23:40).
>>
>> Here are the slides as well:
>> http://fr.slideshare.net/DataStax/light-weight-transactions-under-stress-christopher-batey-the-last-pickle-cassandra-summit-2016
>>
>> Let us know if you still have questions after watching this (about 35
>> minutes).
>>
>> C*heers,
>> ---
>> Alain Rodriguez - @arodream - al...@thelastpickle.com
>> France
>>
>> The Last Pickle - Apache Cassandra Consulting
>> http://www.thelastpickle.com
>>
>> 2017-02-01 10:57 GMT+01:00 Kant Kodali :
>>
>>> When you initiate an LWT (write) and do a QUORUM read, is there a chance
>>> that one might not see the LWT write? If so, can someone explain a bit
>>> more?
>>>
>>> Thanks!
>>>
>>
>>
>


Re: quick question

2017-02-01 Thread Kant Kodali
Hi,

Thanks for the response. I finished watching this video but I still got few
questions.

1) The speaker seems to suggest that different consistency levels are used in
different phases of the Paxos protocol. If so, what is the right consistency
level to set for these phases?

2) Right now we just set the consistency level to QUORUM at the global level,
and I don't think we ever change it, so in this case what consistency levels
would be used in the different phases?

3) The fact that one should think about reading before the commit phase or
after the commit phase (but not any other phase) suggests there is something
special about the commit phase. What is it? When I set QUORUM consistency at
the global level, does the commit phase happen right after the accept phase?
Or, irrespective of the consistency level, when does the commit phase happen,
and what happens during it?


Thanks,
kant


On Wed, Feb 1, 2017 at 3:30 AM, Alain RODRIGUEZ  wrote:

> Hi,
>
> I believe that this talk from Christopher Batey at the Cassandra Summit
> 2016 might answer most of your questions around LWT:
> https://www.youtube.com/watch?v=wcxQM3ZN20c
>
> He explains a lot of stuff including consistency considerations. My
> understanding is that the quorum read can only see the data written using
> LWT after the commit phase. A SERIAL Read would see it (video, around
> 23:40).
>
> Here are the slides as well:
> http://fr.slideshare.net/DataStax/light-weight-transactions-under-stress-christopher-batey-the-last-pickle-cassandra-summit-2016
>
> Let us know if you still have questions after watching this (about 35
> minutes).
>
> C*heers,
> ---
> Alain Rodriguez - @arodream - al...@thelastpickle.com
> France
>
> The Last Pickle - Apache Cassandra Consulting
> http://www.thelastpickle.com
>
> 2017-02-01 10:57 GMT+01:00 Kant Kodali :
>
>> When you initiate an LWT (write) and do a QUORUM read, is there a chance
>> that one might not see the LWT write? If so, can someone explain a bit
>> more?
>>
>> Thanks!
>>
>
>


Re: quick question

2017-02-01 Thread Alain RODRIGUEZ
Hi,

I believe that this talk from Christopher Batey at the Cassandra Summit
2016 might answer most of your questions around LWT:
https://www.youtube.com/watch?v=wcxQM3ZN20c

He explains a lot of stuff including consistency considerations. My
understanding is that the quorum read can only see the data written using
LWT after the commit phase. A SERIAL Read would see it (video, around
23:40).
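To make that point concrete, here is a cqlsh sketch (the table and key names are hypothetical). A plain QUORUM read issued between the accept and commit phases may miss the in-flight LWT write, while a SERIAL read forces any in-progress Paxos round to complete first:

```sql
-- Writer: a lightweight transaction (Paxos under the hood).
INSERT INTO demo.users (id, name) VALUES (1, 'kant') IF NOT EXISTS;

-- Reader A: QUORUM read. May not yet observe an LWT write
-- whose commit phase has not finished.
CONSISTENCY QUORUM;
SELECT name FROM demo.users WHERE id = 1;

-- Reader B: SERIAL read. Completes any in-flight Paxos round,
-- so it observes an accepted-but-not-yet-committed LWT write.
CONSISTENCY SERIAL;
SELECT name FROM demo.users WHERE id = 1;
```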

Here are the slides as well:
http://fr.slideshare.net/DataStax/light-weight-transactions-under-stress-christopher-batey-the-last-pickle-cassandra-summit-2016

Let us know if you still have questions after watching this (about 35
minutes).

C*heers,
---
Alain Rodriguez - @arodream - al...@thelastpickle.com
France

The Last Pickle - Apache Cassandra Consulting
http://www.thelastpickle.com

2017-02-01 10:57 GMT+01:00 Kant Kodali :

> When you initiate an LWT (write) and do a QUORUM read, is there a chance
> that one might not see the LWT write? If so, can someone explain a bit more?
>
> Thanks!
>


RE: Is it possible to have a column which can hold any data type (for inserting as json)

2017-02-01 Thread Benjamin Roth
Value is defined as a text column and you are trying to insert a double.
That's simply not allowed.
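One workaround consistent with this: since value is a text column and the numeric type is already recorded in datatype, quote the value as a JSON string on the client before the INSERT ... JSON. A sketch against the schema posted elsewhere in the thread:

```sql
INSERT INTO data JSON '{
  "id": 1,
  "address": "127.0.0.1",
  "datatype": "DOUBLE",
  "name": "Longitude",
  "attributes": {"Id": "1"},
  "category": "REAL",
  "value": "1.390692",
  "timestamp": 1485923271718,
  "quality": "GOOD"
}';
```

The reader is then responsible for parsing value back into the type named by the datatype field.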

On 01.02.2017 at 09:02, "Rajeswari Menon" wrote:

> Given below is the CQL query I executed.
>
>
>
> insert into data JSON '{
>   "id": 1,
>   "address": "",
>   "datatype": "DOUBLE",
>   "name": "Longitude",
>   "attributes": {
>     "ID": "1"
>   },
>   "category": "REAL",
>   "value": 1.390692,
>   "timestamp": 1485923271718,
>   "quality": "GOOD"
> }';
>
>
>
> Regards,
>
> Rajeswari
>
>
>
> From: Benjamin Roth [mailto:benjamin.r...@jaumo.com]
> Sent: 01 February 2017 12:35
> To: user@cassandra.apache.org
> Subject: Re: Is it possible to have a column which can hold any data
> type (for inserting as json)
>
>
>
> You should post the whole CQL query you try to execute! Why don't you use
> a native JSON type for your JSON data?
>
>
>
> 2017-02-01 7:51 GMT+01:00 Rajeswari Menon :
>
> Hi,
>
>
>
> I have a json data as shown below.
>
>
>
> {
>   "address": "127.0.0.1",
>   "datatype": "DOUBLE",
>   "name": "Longitude",
>   "attributes": {
>     "Id": "1"
>   },
>   "category": "REAL",
>   "value": 1.390692,
>   "timestamp": 1485923271718,
>   "quality": "GOOD"
> }
>
>
>
> To store the above json to Cassandra, I defined a table as shown below
>
>
>
> create table data
> (
>   id int primary key,
>   address text,
>   datatype text,
>   name text,
>   attributes map<text, text>,
>   category text,
>   value text,
>   "timestamp" timestamp,
>   quality text
> );
>
>
>
> When I try to insert the data as JSON I got the error: "Error decoding
> JSON value for value: Expected a UTF-8 string, but got a Double: 1.390692".
> The message is clear that a double value cannot be inserted into a text
> column. The real issue is that the value can be of any data type, so the
> schema cannot be predefined. Is there a way to create a column which can
> hold a value of any data type? (I don't want to hold the entire json as a
> string. My preferred way is to define a schema.)
>
>
>
> Regards,
>
> Rajeswari
>
>
>
>
>
> --
>
> Benjamin Roth
> Prokurist
>
> Jaumo GmbH · www.jaumo.com
> Wehrstraße 46 · 73035 Göppingen · Germany
> Phone +49 7161 304880-6 · Fax +49 7161 304880-1
> AG Ulm · HRB 731058 · Managing Director: Jens Kammerer
>


quick question

2017-02-01 Thread Kant Kodali
When you initiate an LWT (write) and do a QUORUM read, is there a chance that
one might not see the LWT write? If so, can someone explain a bit more?

Thanks!


Re: Global TTL vs Insert TTL

2017-02-01 Thread Alain RODRIGUEZ
>
> Is it safe to use TWCS in C* 3.9?


I would say that the unsafe part is more about using C* 3.9 than using TWCS
in C* 3.9 :-). I see no reason to say TWCS would be specifically unsafe in
C* 3.9, but I might be missing something.

Going from STCS to TWCS is often smooth; coming from LCS you might expect an
extra load compacting a lot (all?) of the SSTables, from what we have seen in
the field. In this case, be sure that your compaction options are safe enough
to handle this.

TWCS is even easier to use on C* 3.0.8+ and C* 3.8+, as it ships with
Cassandra (replacing DTCS), so no extra jar is needed; you can enable TWCS
like any other built-in compaction strategy.
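For example, enabling TWCS on an existing table is a one-line ALTER. A sketch only: the table name is a placeholder, and the one-day window is an assumption -- tune it to your TTL and query pattern:

```sql
ALTER TABLE sensor_data
  WITH compaction = {
    'class': 'TimeWindowCompactionStrategy',
    'compaction_window_unit': 'DAYS',
    'compaction_window_size': 1
  };
```

A common rule of thumb is to pick the window so that the table's TTL spans roughly 20-30 windows.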

C*heers,
---
Alain Rodriguez - @arodream - al...@thelastpickle.com
France

The Last Pickle - Apache Cassandra Consulting
http://www.thelastpickle.com

2017-01-31 23:29 GMT+01:00 Cogumelos Maravilha :

> Hi Alain,
>
> Thanks for your response and the links.
>
> I've also checked "Time series data model and tombstones".
>
> Is it safe to use TWCS in C* 3.9?
>
> Thanks in advance.
>
> On 31-01-2017 11:27, Alain RODRIGUEZ wrote:
>
> Is there any overhead using the line-by-line option, or wasted disk space?
>
> There is a very recent topic about that in the mailing list, look for "Time
> series data model and tombstones". I believe DuyHai answered your question
> there with more details :).
>
> *tl;dr:*
>
> Yes, if you know the TTL in advance, and it is fixed, you might want to go
> with the table option instead of adding the TTL in each insert. Also you
> might want to consider using TWCS compaction strategy.
>
> Here are some blogposts my coworkers recently wrote about TWCS; they might
> be useful:
>
> http://thelastpickle.com/blog/2016/12/08/TWCS-part1.html
> http://thelastpickle.com/blog/2017/01/10/twcs-part2.html
>
> C*heers,
> ---
> Alain Rodriguez - @arodream - al...@thelastpickle.com
> France
>
> The Last Pickle - Apache Cassandra Consulting
> http://www.thelastpickle.com
>
>
>
> 2017-01-31 10:43 GMT+01:00 Cogumelos Maravilha:
>
>> Hi, I'm just wondering which option is faster:
>>
>> Global:
>> CREATE TABLE xxx ( ... ) WITH default_time_to_live = XXX;
>> and UPDATE xxx USING TTL XXX;
>>
>> Line by line:
>> INSERT INTO xxx ( ... ) ... USING TTL xxx;
>>
>> Is there any overhead using the line-by-line option, or wasted disk space?
>>
>> Thanks in advance.
>>
>>
>
>


RE: Is it possible to have a column which can hold any data type (for inserting as json)

2017-02-01 Thread Rajeswari Menon
Given below is the CQL query I executed.

insert into data JSON '{
  "id": 1,
  "address": "",
  "datatype": "DOUBLE",
  "name": "Longitude",
  "attributes": {
    "ID": "1"
  },
  "category": "REAL",
  "value": 1.390692,
  "timestamp": 1485923271718,
  "quality": "GOOD"
}';

Regards,
Rajeswari

From: Benjamin Roth [mailto:benjamin.r...@jaumo.com]
Sent: 01 February 2017 12:35
To: user@cassandra.apache.org
Subject: Re: Is it possible to have a column which can hold any data type (for 
inserting as json)

You should post the whole CQL query you try to execute! Why don't you use a 
native JSON type for your JSON data?

2017-02-01 7:51 GMT+01:00 Rajeswari Menon:
Hi,

I have a json data as shown below.

{
  "address": "127.0.0.1",
  "datatype": "DOUBLE",
  "name": "Longitude",
  "attributes": {
    "Id": "1"
  },
  "category": "REAL",
  "value": 1.390692,
  "timestamp": 1485923271718,
  "quality": "GOOD"
}

To store the above json to Cassandra, I defined a table as shown below

create table data
(
  id int primary key,
  address text,
  datatype text,
  name text,
  attributes map<text, text>,
  category text,
  value text,
  "timestamp" timestamp,
  quality text
);

When I try to insert the data as JSON I got the error: "Error decoding JSON
value for value: Expected a UTF-8 string, but got a Double: 1.390692". The
message is clear that a double value cannot be inserted into a text column.
The real issue is that the value can be of any data type, so the schema cannot
be predefined. Is there a way to create a column which can hold a value of any
data type? (I don't want to hold the entire json as a string. My preferred way
is to define a schema.)

Regards,
Rajeswari



--
Benjamin Roth
Prokurist

Jaumo GmbH · www.jaumo.com
Wehrstraße 46 · 73035 Göppingen · Germany
Phone +49 7161 304880-6 · Fax +49 7161 304880-1
AG Ulm · HRB 731058 · Managing Director: Jens Kammerer