Free webinar on Time series IoT data ingestion into Cassandra using Kaa

2015-09-01 Thread Oleh Rozvadovskyy


We are glad to announce that a new webinar on using Kaa with Cassandra is
due on September 10. During this webinar, we will build a solution that
ingests real-time data from a temperature sensor connected to Raspberry Pi
into Cassandra for further processing and analytics.

We will also review some best practices on Cassandra data modeling and
demonstrate how easy it is to reuse them in Kaa. Since both Kaa and
Cassandra are 100% open-source, the solution described during the webinar
can be used as a prototype even for a commercial product.
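As a taste of the data modeling part, a typical time-series table in CQL looks roughly like the sketch below (keyspace, table, and column names are only illustrative, not the schema used in the webinar):

# run against a local node; bucketing by day keeps partitions bounded
cqlsh <<'EOF'
CREATE KEYSPACE IF NOT EXISTS iot
  WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 1};
CREATE TABLE IF NOT EXISTS iot.temperature_by_sensor (
  sensor_id    text,
  day          text,        -- e.g. '2015-09-10', one partition per sensor per day
  reading_time timestamp,
  temperature  double,
  PRIMARY KEY ((sensor_id, day), reading_time)
) WITH CLUSTERING ORDER BY (reading_time DESC);
EOF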

For hands-on experience at the webinar, come equipped with a Raspberry Pi
board, a PC/notebook with the Kaa Sandbox installed, jumper wires, and a
digital temperature sensor (we will use a DHT11). The expected duration is
about 60 minutes, including the Q&A session. If you have any questions
regarding this webinar, write us an email and we'll answer shortly.

The webinar takes place on September 10 at 11:00 a.m. PDT, and you can
register via the following link: https://goo.gl/WsYmjm



Re: Adding New Nodes/Data Center to an existing Cluster.

2015-09-01 Thread Sebastian Estevez
DSE 4.7 ships with Cassandra 2.1 for stability.

All the best,



Sebastián Estévez

Solutions Architect | 954 905 8615 | sebastian.este...@datastax.com






DataStax is the fastest, most scalable distributed database technology,
delivering Apache Cassandra to the world's most innovative enterprises.
DataStax is built to be agile, always-on, and predictably scalable to any
size. With more than 500 customers in 45 countries, DataStax is the
database technology and transactional backbone of choice for the world's
most innovative companies such as Netflix, Adobe, Intuit, and eBay.

On Tue, Sep 1, 2015 at 12:53 PM, Sachin Nikam  wrote:

> @Neha,
> We are using DSE 4.7 & Cassandra 2.2
>
> @Alain,
> I will check with our OPS team about repair vs rebuild and get back to you.
> Regards
> Sachin
>
> On Tue, Sep 1, 2015 at 5:59 AM, Alain RODRIGUEZ 
> wrote:
>
>> Hi Sachin,
>>
>> You are speaking about a repair, but isn't the proper command for this
>> "rebuild"?
>>
>> Did you try adding your DC this way:
>> http://docs.datastax.com/en/cassandra/2.1/cassandra/operations/ops_add_dc_to_cluster_t.html
>>  ?
>>
>>
>> 2015-09-01 5:32 GMT+02:00 Neha Trivedi :
>>
>>> Hi,
>>> Can you specify which version of Cassandra you are using?
>>> Can you provide the Error Stack ?
>>>
>>> regards
>>> Neha
>>>
>>> On Tue, Sep 1, 2015 at 2:56 AM, Sebastian Estevez <
>>> sebastian.este...@datastax.com> wrote:
>>>
 or https://issues.apache.org/jira/browse/CASSANDRA-8611 perhaps

 All the best,



 Sebastián Estévez

 Solutions Architect | 954 905 8615 | sebastian.este...@datastax.com



 On Mon, Aug 31, 2015 at 5:24 PM, Eric Evans 
 wrote:

>
> On Mon, Aug 31, 2015 at 1:32 PM, Sachin Nikam 
> wrote:
>
>> When we add 3 more nodes in Data Center B, the repair tool starts
>> syncing the data between 2 data centers and then gives up after ~2 days.
>>
>> Has anybody run into a similar issue before? If so, what is the
>> solution?
>>
>
> https://issues.apache.org/jira/browse/CASSANDRA-9624, maybe?
>
>
> --
> Eric Evans
> eev...@wikimedia.org
>


>>>
>>
>


Point upgrades

2015-09-01 Thread Stan Lemon
I am wondering, when doing a point upgrade such as 2.0.11 to 2.0.16,
do I need to stop all repairs before performing the upgrade on a node IF
that node is NOT the one running the repair? Basically I would like to
upgrade the other nodes first, and then upgrade that node once the
repair is done.

Thanks,
Stan


Amounts of "CLOSE_WAIT" status connections occur at Cassandra servers

2015-09-01 Thread dandykang
Hi,

I encountered an issue recently while building a Spark-Cassandra application
environment.


Problem description:
  1. I run Spark applications to process data, then write the results to Cassandra.
  2. In the beginning, everything goes well. But several hours (or several days) later,
large numbers of "CLOSE_WAIT" status connections occur at the Cassandra servers.
  This causes the Spark applications to fail to connect to Cassandra, and running
'nodetool status' shows 'DN' status for the troubled node.
  3. The CLOSE_WAIT connections include communication with other Cassandra
nodes (port 7000, a small part) and communication with the
Spark applications (port 9042, the large part).
  4. Restarting the troubled Cassandra node recovers it.
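For reference, connections per local port can be counted on a Linux host with
something like the following (a diagnostic sketch, not part of the original report):

netstat -tan | grep CLOSE_WAIT | awk '{print $4}' | awk -F: '{print $NF}' | sort | uniq -c | sort -rn
# prints a count of CLOSE_WAIT sockets per local port, e.g. 9042 vs 7000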


Environment:
   Versions: Spark 1.4.1 (spark-core 2.10), Cassandra 2.1.8,
spark-cassandra-connector 1.4.0-M3

   8 Spark applications run on 6 Spark servers, and there are 3 Cassandra nodes.


   Does anybody have a similar problem, or know the root cause?
   Thanks.




   Regards.




   dandykang

Re: Network / GC / Latency spike

2015-09-01 Thread Fabien Rousseau
Hi Alain,

Maybe it's possible to confirm this by testing on a small cluster (a rough ccm
sketch of these steps is included below):
- create a cluster of 2 nodes (using https://github.com/pcmanus/ccm for
example)
- create a fake wide row of a few MB (using the python driver for example)
- drain and stop one of the two nodes
- remove the sstables of the stopped node (to provoke inconsistencies)
- start it again
- select a small portion of the wide row (many times; use nodetool tpstats
to know when a read repair has been triggered)
- nodetool flush (on the previously stopped node)
- check the size of the sstable (if a few KB, then only the selected slice
was repaired, but if a few MB then the whole row was repaired)

The wild guess was: if a read repair is triggered when reading a small
portion of a wide row and it results in streaming the whole wide row,
that could explain a network burst. (But, on second thought, it makes more
sense to only repair the small portion being read...)
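Here is the ccm sketch mentioned above (cluster name, keyspace, and data paths
are placeholders; exact ccm sub-commands may differ slightly between versions):

# create and start a 2-node local cluster on Cassandra 2.0.16
ccm create rrtest -v 2.0.16 -n 2 -s
# ... load a wide row of a few MB into a test table (python driver or cqlsh) ...
ccm node2 drain
ccm node2 stop
rm -rf ~/.ccm/rrtest/node2/data/<keyspace>/<table>/*   # wipe node2's sstables
ccm node2 start
# read small slices of the wide row repeatedly, then check for read repairs
ccm node1 nodetool tpstats | grep -i readrepair
# flush node2 and look at the size of the newly written sstable
ccm node2 flush
du -sh ~/.ccm/rrtest/node2/data/<keyspace>/<table>/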



2015-09-01 12:05 GMT+02:00 Alain RODRIGUEZ :

> Hi Fabien, thanks for your help.
>
> I did not mention it, but I did see a correlation between latency and
> read repair spikes. Though this is like going from 5 RR per second to 10
> per second cluster wide according to OpsCenter: http://img42.com/L6gx1
>
> I do have some wide rows, and this explanation looks reasonable to me;
> I mean, this makes sense. Yet isn't this amount of Read Repair too low to
> induce such a "shitstorm" (even if it spikes x2, I got network x10)? Also,
> wide rows are present on heavily used tables (sadly...), so I should be
> using more network all the time (so why only a few spikes per day, like 2/3 max)?
>
> How could I confirm this (without removing RR and waiting a week, I mean)?
> Is there a way to see the size of the data being repaired through this
> mechanism ?
>
> C*heers
>
> Alain
>
> 2015-09-01 0:11 GMT+02:00 Fabien Rousseau :
>
>> Hi Alain,
>>
>> Could it be wide rows + read repair ? (Let's suppose the "read repair"
>> repairs the full row, and it may not be subject to stream throughput limit)
>>
>> Best Regards
>> Fabien
>>
>> 2015-08-31 15:56 GMT+02:00 Alain RODRIGUEZ :
>>
>>> I just realised that I have no idea about how this mailing list handles
>>> attached files.
>>>
>>> Please find screenshots there --> http://img42.com/collection/y2KxS
>>>
>>> Alain
>>>
>>> 2015-08-31 15:48 GMT+02:00 Alain RODRIGUEZ :
>>>
 Hi,

 Running a 2.0.16 C* on AWS (private VPC, 2 DC).

 I am facing an issue on our EU DC where I have a network burst
 (along with a GC and latency increase).

 My first thought was a sudden application burst; though, I see no
 corresponding evolution on reads / writes or even CPU.

 So I thought that this might come from the nodes themselves, as IN network
 almost equals OUT network. I tried lowering stream throughput on the whole DC
 to 1 Mbps, with ~30 nodes --> 30 Mbps --> ~4 MB/s max. My network went a lot
 higher, about 30 M on both sides (see screenshots attached).

 I have tried to use iftop to see where this network is headed to, but
 I was not able to do it because the bursts are very short.

 So, the questions are:

 - Did someone experience something similar already ? If so, any clue
 would be appreciated :).
 - How can I know (monitor, capture) where this big amount of network is
 headed to or due to ?
 - Am I right trying to figure out what this network is, or should I
 follow another lead ?

 Notes: I also noticed that CPU does not spike nor does R, but disk
 reads also spike !

 C*heers,

 Alain

>>>
>>>
>>
>


Re: Network / GC / Latency spike

2015-09-01 Thread Alain RODRIGUEZ
Hi Fabien, thanks for your help.

I did not mention it, but I did see a correlation between latency and
read repair spikes. Though this is like going from 5 RR per second to 10
per second cluster wide according to OpsCenter: http://img42.com/L6gx1

I do have some wide rows, and this explanation looks reasonable to me; I
mean, this makes sense. Yet isn't this amount of Read Repair too low to
induce such a "shitstorm" (even if it spikes x2, I got network x10)? Also,
wide rows are present on heavily used tables (sadly...), so I should be
using more network all the time (so why only a few spikes per day, like 2/3 max)?

How could I confirm this (without removing RR and waiting a week, I mean)?
Is there a way to see the size of the data being repaired through this
mechanism ?

C*heers

Alain

2015-09-01 0:11 GMT+02:00 Fabien Rousseau :

> Hi Alain,
>
> Could it be wide rows + read repair ? (Let's suppose the "read repair"
> repairs the full row, and it may not be subject to stream throughput limit)
>
> Best Regards
> Fabien
>
> 2015-08-31 15:56 GMT+02:00 Alain RODRIGUEZ :
>
>> I just realised that I have no idea about how this mailing list handles
>> attached files.
>>
>> Please find screenshots there --> http://img42.com/collection/y2KxS
>>
>> Alain
>>
>> 2015-08-31 15:48 GMT+02:00 Alain RODRIGUEZ :
>>
>>> Hi,
>>>
>>> Running a 2.0.16 C* on AWS (private VPC, 2 DC).
>>>
>>> I am facing an issue on our EU DC where I have a network burst
>>> (along with a GC and latency increase).
>>>
>>> My first thought was a sudden application burst; though, I see no
>>> corresponding evolution on reads / writes or even CPU.
>>>
>>> So I thought that this might come from the nodes themselves, as IN network
>>> almost equals OUT network. I tried lowering stream throughput on the whole DC
>>> to 1 Mbps, with ~30 nodes --> 30 Mbps --> ~4 MB/s max. My network went a lot
>>> higher, about 30 M on both sides (see screenshots attached).
>>>
>>> I have tried to use iftop to see where this network is headed to, but I
>>> was not able to do it because the bursts are very short.
>>>
>>> So, the questions are:
>>>
>>> - Did someone experience something similar already ? If so, any clue
>>> would be appreciated :).
>>> - How can I know (monitor, capture) where this big amount of network is
>>> headed to or due to ?
>>> - Am I right trying to figure out what this network is, or should I
>>> follow another lead ?
>>>
>>> Notes: I also noticed that CPU does not spike nor does R, but disk
>>> reads also spike !
>>>
>>> C*heers,
>>>
>>> Alain
>>>
>>
>>
>


Data Size on each node

2015-09-01 Thread Sachin Nikam
We currently have a Cassandra cluster spread over 2 DCs. The data size on
each node of the cluster is 1.2TB with spinning disks. Minor and major
compactions are slowing down our read queries. It has been suggested that
replacing spinning disks with SSDs might help. Has anybody done something
similar? If so, what were the results?
Also, if we go with SSDs, how big can each node get with commercially
available SSDs?
Regards
Sachin


Re: Rebuild new DC nodes against new DC?

2015-09-01 Thread Alain RODRIGUEZ
Hi Bryan,

I have no clear answer for you, but I can give you some insights based on my
understanding of this.

First, I am not sure that nodetool will let you "rebuild" from the DC the
node is in. Then, this would only work properly (if it works at all) because
you have 3 nodes and an RF of 2 or 3, so all the data is already present in
your new DC; otherwise you would rebuild from an incomplete DC. BTW,
consistency level (quorum) has no impact here, as CL applies to clients and
these are server-side operations; what matters here is the RF and what data
each node holds. Using 'repair' or copying SSTables directly instead of
rebuild are options you might want to consider (in the case where all your
data is already present in DC2 with only 2 nodes loaded).
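For reference, the two rebuild forms under discussion look like this when run on
node c in DC2 (DC names are placeholders for your actual datacenter names):

nodetool rebuild -- DC1   # stream from the original DC, as in the documented procedure
nodetool rebuild -- DC2   # what you are asking about: stream from the new, partially built DC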

That answers your question, but I would say you should stick with the
procedure; it should definitely work, you just did it twice... As for "as we'd
like- streaming in the new DC would make things faster and ease some
headaches": being creative and deviating from the standard procedure sometimes
works great, but it often increases headaches and makes things slower. Take
care, and be sure of what you're doing, or follow the procedures, imho.

"Our DC's are linked by a VPN that doesn't have as big of a pipe" --> you
should rather try to solve this as much as possible, as you will need to
repair your cluster, which can be quite bandwidth-consuming.

As general advice for a new DC: you might also want to disable
read_repair_chance on your tables to avoid cross-DC traffic at read time
(use dclocal_read_repair_chance instead), use LOCAL_QUORUM instead of
QUORUM, and have your clients stick to their local DC.
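A sketch of that table change (keyspace/table name and the 0.1 value are only
illustrative) is:

echo "ALTER TABLE my_keyspace.my_table WITH read_repair_chance = 0 AND dclocal_read_repair_chance = 0.1;" | cqlsh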

Hope this helps, even if I can't answer the "would it work" question
precisely.

C*heers,

Alain


2015-09-01 2:10 GMT+02:00 Bryan Cheng :

> Hi list,
>
> We're bringing up a second DC, and following the procedure outlined here:
> http://docs.datastax.com/en/cassandra/2.1/cassandra/operations/ops_add_dc_to_cluster_t.html
>
> We have three nodes in the new DC that are members of the cluster and
> indicate that they are running normally. We have begun the process of
> altering the keyspaces for multi-DC and are streaming over data via
> nodetool rebuild on a keyspace-by-keyspace basis.
>
> I couldn't find a clear answer for this: at what point is it safe to
> rebuild from the new dc versus the old?
>
> In other words, I have machines a, b, and c in DC2 (the new DC). I build a
> and b by specifying DC1 on the rebuild command line. Can I safely rebuild
> against DC2 for machine c? Is this at all dependent on quorum settings?
>
> Our DC's are linked by a VPN that doesn't have as big of a pipe as we'd
> like- streaming in the new DC would make things faster and ease some
> headaches.
>
> Thanks for any help!
>
> --Bryan
>


Re: Adding New Nodes/Data Center to an existing Cluster.

2015-09-01 Thread Alain RODRIGUEZ
Hi Sachin,

You are speaking about a repair, but isn't the proper command for this
"rebuild"?

Did you try adding your DC this way?
http://docs.datastax.com/en/cassandra/2.1/cassandra/operations/ops_add_dc_to_cluster_t.html


2015-09-01 5:32 GMT+02:00 Neha Trivedi :

> Hi,
> Can you specify which version of Cassandra you are using?
> Can you provide the Error Stack ?
>
> regards
> Neha
>
> On Tue, Sep 1, 2015 at 2:56 AM, Sebastian Estevez <
> sebastian.este...@datastax.com> wrote:
>
>> or https://issues.apache.org/jira/browse/CASSANDRA-8611 perhaps
>>
>> All the best,
>>
>>
>>
>> Sebastián Estévez
>>
>> Solutions Architect | 954 905 8615 | sebastian.este...@datastax.com
>>
>>
>>
>> On Mon, Aug 31, 2015 at 5:24 PM, Eric Evans  wrote:
>>
>>>
>>> On Mon, Aug 31, 2015 at 1:32 PM, Sachin Nikam  wrote:
>>>
 When we add 3 more nodes in Data Center B, the repair tool starts
 syncing the data between 2 data centers and then gives up after ~2 days.

 Has anybody run into a similar issue before? If so, what is the solution?

>>>
>>> https://issues.apache.org/jira/browse/CASSANDRA-9624, maybe?
>>>
>>>
>>> --
>>> Eric Evans
>>> eev...@wikimedia.org
>>>
>>
>>
>


Re: Adding New Nodes/Data Center to an existing Cluster.

2015-09-01 Thread Neha Trivedi
Sachin,
Hope you are not using Cassandra 2.2 in production?
regards
Neha

On Tue, Sep 1, 2015 at 11:20 PM, Sebastian Estevez <
sebastian.este...@datastax.com> wrote:

> DSE 4.7 ships with Cassandra 2.1 for stability.
>
> All the best,
>
>
>
> Sebastián Estévez
>
> Solutions Architect | 954 905 8615 | sebastian.este...@datastax.com
>
>
>
> On Tue, Sep 1, 2015 at 12:53 PM, Sachin Nikam  wrote:
>
>> @Neha,
>> We are using DSE 4.7 & Cassandra 2.2
>>
>> @Alain,
>> I will check with our OPS team about repair vs rebuild and get back to
>> you.
>> Regards
>> Sachin
>>
>> On Tue, Sep 1, 2015 at 5:59 AM, Alain RODRIGUEZ 
>> wrote:
>>
>>> Hi Sachin,
>>>
>>> You are speaking about a repair, but isn't the proper command for this
>>> "rebuild"?
>>>
>>> Did you try adding your DC this way:
>>> http://docs.datastax.com/en/cassandra/2.1/cassandra/operations/ops_add_dc_to_cluster_t.html
>>>  ?
>>>
>>>
>>> 2015-09-01 5:32 GMT+02:00 Neha Trivedi :
>>>
 Hi,
 Can you specify which version of Cassandra you are using?
 Can you provide the Error Stack ?

 regards
 Neha

 On Tue, Sep 1, 2015 at 2:56 AM, Sebastian Estevez <
 sebastian.este...@datastax.com> wrote:

> or https://issues.apache.org/jira/browse/CASSANDRA-8611 perhaps
>
> All the best,
>
>
>
> Sebastián Estévez
>
> Solutions Architect | 954 905 8615 | sebastian.este...@datastax.com
>
>
>
> On Mon, Aug 31, 2015 at 5:24 PM, Eric Evans 
> wrote:
>
>>
>> On Mon, Aug 31, 2015 at 1:32 PM, Sachin Nikam 
>> wrote:
>>
>>> When we add 3 more nodes in Data Center B, the repair tool starts
>>> syncing the data between 2 data centers and then gives up after ~2 days.
>>>
>>> Has anybody run into a similar issue before? If so, what is the
>>> solution?
>>>
>>
>> https://issues.apache.org/jira/browse/CASSANDRA-9624, maybe?
>>
>>
>> --
>> Eric Evans
>> eev...@wikimedia.org
>>
>
>

>>>
>>
>


test mail please ignore

2015-09-01 Thread Asit KAUSHIK



[RELEASE] Apache Cassandra 2.2.1 released

2015-09-01 Thread Jake Luciani
The Cassandra team is pleased to announce the release of Apache Cassandra
version 2.2.1.

Apache Cassandra is a fully distributed database. It is the right choice
when you need scalability and high availability without compromising
performance.

 http://cassandra.apache.org/

Downloads of source and binary distributions are listed in our download
section:

 http://cassandra.apache.org/download/

This version is a bug fix release[1] on the 2.2 series. As always, please pay
attention to the release notes[2] and let us know[3] if you encounter any
problems.

Enjoy!

[1]: http://goo.gl/x6ilHu (CHANGES.txt)
[2]: http://goo.gl/FHwYLN (NEWS.txt)
[3]: https://issues.apache.org/jira/browse/CASSANDRA


Re: Adding New Nodes/Data Center to an existing Cluster.

2015-09-01 Thread Sachin Nikam
@Neha,
We are using DSE 4.7 & Cassandra 2.2

@Alain,
I will check with our OPS team about repair vs rebuild and get back to you.
Regards
Sachin

On Tue, Sep 1, 2015 at 5:59 AM, Alain RODRIGUEZ  wrote:

> Hi Sachin,
>
> You are speaking about a repair, but isn't the proper command for this
> "rebuild"?
>
> Did you try adding your DC this way:
> http://docs.datastax.com/en/cassandra/2.1/cassandra/operations/ops_add_dc_to_cluster_t.html
>  ?
>
>
> 2015-09-01 5:32 GMT+02:00 Neha Trivedi :
>
>> Hi,
>> Can you specify which version of Cassandra you are using?
>> Can you provide the Error Stack ?
>>
>> regards
>> Neha
>>
>> On Tue, Sep 1, 2015 at 2:56 AM, Sebastian Estevez <
>> sebastian.este...@datastax.com> wrote:
>>
>>> or https://issues.apache.org/jira/browse/CASSANDRA-8611 perhaps
>>>
>>> All the best,
>>>
>>>
>>>
>>> Sebastián Estévez
>>>
>>> Solutions Architect | 954 905 8615 | sebastian.este...@datastax.com
>>>
>>>
>>>
>>> On Mon, Aug 31, 2015 at 5:24 PM, Eric Evans 
>>> wrote:
>>>

 On Mon, Aug 31, 2015 at 1:32 PM, Sachin Nikam 
 wrote:

> When we add 3 more nodes in Data Center B, the repair tool starts
> syncing the data between 2 data centers and then gives up after ~2 days.
>
> Has anybody run into a similar issue before? If so, what is the solution?
>

 https://issues.apache.org/jira/browse/CASSANDRA-9624, maybe?


 --
 Eric Evans
 eev...@wikimedia.org

>>>
>>>
>>
>


Re: abnormal log after remove a node

2015-09-01 Thread Alain RODRIGUEZ
Hi,

I finally did the exact same thing before receiving the answer.

I guess this will remain FTR :).

Thanks though !

Alain


2015-09-01 16:22 GMT+02:00 曹志富 :

> Just restart all of the C* nodes
>
> --
> Ranger Tsao
>
> 2015-08-25 18:17 GMT+08:00 Alain RODRIGUEZ :
>
>> Hi, I am facing the same issue on 2.0.16.
>>
>> Did you solve this ? How ?
>>
>> I plan to try a rolling restart and see if gossip state recover from this.
>>
>> C*heers,
>>
>> Alain
>>
>> 2015-06-19 11:40 GMT+02:00 曹志富 :
>>
>>> I have a C* 2.1.5 cluster with 24 nodes. A few days ago, I removed a node
>>> from this cluster using nodetool decommission.
>>>
>>> But today I found some logs like this:
>>>
>>> INFO  [GossipStage:1] 2015-06-19 17:38:05,616 Gossiper.java:968 -
>>> InetAddress /172.19.105.41 is now DOWN
>>> INFO  [GossipStage:1] 2015-06-19 17:38:05,617 StorageService.java:1885 -
>>> Removing tokens [-1014432261309809702, -1055322450438958612,
>>> -1120728727235087395, -1191392141261832305, -1203676771883970142,
>>> -1215563040745505837, -1215648909329054362, -1269531760567530381,
>>> -1278047879489577908, -1313427877031136549, -1342822572958042617,
>>> -1350792764922315814, -1383390744017639599, -139000372807970456,
>>> -140827955201469664, -1631551789771606023, -1633789813430312609,
>>> -1795528665156349205, -1836619444785023397, -1879127294549041822,
>>> -1962337787208890426, -2022309807234530256, -2033402140526360327,
>>> -2089413865145942100, -210961549458416802, -2148530352195763113,
>>> -2184481573787758786, -610790268720205, -2340762266634834427,
>>> -2513416003567685694, -2520971378752190013, -2596695976621541808,
>>> -2620636796023437199, -2640378596436678113, -2679143017361311011,
>>> -2721176590519112233, -2749213392354746126, -279267896827516626,
>>> -2872377759991294853, -2904711688111888325, -290489381926812623,
>>> -3000574339499272616, -301428600802598523, -3019280155316984595,
>>> -3024451041907074275, -3056898917375012425, -3161300347260716852,
>>> -3166392383659271772, -3327634380871627036, -3530685865340274372,
>>> -3563112657791369745, -366930313427781469, -3729582520450700795,
>>> -3901838244986519991, -4065326606010524312, -4174346928341550117,
>>> -4184239233207315432, -4204369933734181327, -4206479093137814808,
>>> -421410317165821100, -4311166118017934135, -4407123461118340117,
>>> -4466364858622123151, -4466939645485100087, -448955147512581975,
>>> -4587780638857304626, -4649897584350376674, -4674234125365755024
>>> , -4833801201210885896, -4857586579802212277, -4868896650650107463,
>>> -4980063310159547694, -4983471821416248610, -4992846054037653676,
>>> -5026994389965137674, -514302500353679181
>>> 0, -5198414516309928594, -5245363745777287346, -5346838390293957674,
>>> -5374413419545696184, -5427881744040857637, -5453876964430787287,
>>> -5491923669475601173, -55219734138599212
>>> 6, -5523011502670737422, -5537121117160410549, -5557015938925208697,
>>> -5572489682738121748, -5745899409803353484, -5771239101488682535,
>>> -5893479791287484099, -59766730414807540
>>> 44, -6014643892406938367, -6086002438656595783, -6129360679394503700,
>>> -6224240257573911174, -6290393495130499466, -6378712056928268929,
>>> -6430306056990093461, -6800188263839065
>>> 013, -6912720411187525051, -7160327814305587432, -7175004328733776324,
>>> -7272070430660252577, -7307945744786025148, -742448651973108101,
>>> -7539255117639002578, -7657460716997978
>>> 94, -7846698077070579798, -7870621904906244395, -7900841391761900719,
>>> -7918145426423910061, -7936795453892692473, -8070255024778921411,
>>> -8086888710627677669, -8124855925323654
>>> 631, -8175270408138820500, -8271197636596881168, -8336685710406477123,
>>> -8466220397076441627, -8534337908154758270, -8550484400487603561,
>>> -862246738021989870, -8727219287242892
>>> 185, -8895705475282612927, -8921801772904834063, -9057266752652143883,
>>> -9059183540698454288, -9067986437682229598, -9148183367896132028,
>>> -962208188860606543, 10859447725819218
>>> 30, 1189775396643491793, 1253728955879686947, 1389982523380382228,
>>> 1429632314664544045, 143610053770130548, 150118120072602242,
>>> 1575692041584712198, 1624575905722628764, 17894
>>> 76212785155173, 1995296121962835019, 2041217364870030239,
>>> 2120277336231792146, 2124445736743406711, 2154979704292433983,
>>> 2340726755918680765, 23481654796845972, 23620268084352
>>> 24407, 2366144489007464626, 2381492708106933027, 2398868971489617398,
>>> 2427315953339163528, 2433999003913998534, 2633074510238705620,
>>> 266659839023809792, 2677817641360639089, 2
>>> 719725410894526151, 2751925111749406683, 2815703589803785617,
>>> 3041515796379693113, 3044903149214270978, 3094954503756703989,
>>> 3243933267690865263, 3246086646486800371, 33270068
>>> 97333869434, 3393657685587750192, 3395065499228709345,
>>> 3426126123948029459, 3500469615600510698, 3644011364716880512,
>>> 3693249207133187620, 3776164494954636918, 38780676797

Re: Re : Decommissioned node appears in logs, and is sometimes marked as "UNREACHABLE" in `nodetool describecluster`

2015-09-01 Thread Sebastian Estevez
Are they in the system.peers table?
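For example, on any live node (a quick sketch using standard system.peers columns):

echo "SELECT peer, data_center, host_id FROM system.peers;" | cqlsh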
On Aug 28, 2015 4:21 PM, "sai krishnam raju potturi" 
wrote:

> We are using DSE on our clusters.
>
> DSE version : 4.6.7
> Cassandra version : 2.0.14
>
> thanks
> Sai Potturi
>
>
>
> On Fri, Aug 28, 2015 at 3:40 PM, Robert Coli  wrote:
>
>> On Fri, Aug 28, 2015 at 11:32 AM, sai krishnam raju potturi <
>> pskraj...@gmail.com> wrote:
>>
>>> we decommissioned nodes in a datacenter a while back. Those nodes
>>> keep showing up in the logs, and are also sometimes marked as UNREACHABLE when
>>> `nodetool describecluster` is run.
>>>
>>
>> What version of Cassandra?
>>
>> This happens a lot in 1.0-2.0.
>>
>> =Rob
>>
>
>


Re: Upgrade from 2.1.0 to 2.1.9

2015-09-01 Thread Alain RODRIGUEZ
Hi Tony.

Did you read the doc on the DataStax site?
http://docs.datastax.com/en/upgrade/doc/upgrade/cassandra/upgradeCassandraDetails.html

This being a minor upgrade, I guess clients should not break, but it is
often advised to test things just in case, even more so if you have never
done any update before.

I have heard nothing about any issue with minor version upgrades within 2.1,
but you really should move away from 2.1.0, as a lot of issues have been
fixed since. I have in mind at least one big one, a memory leak that has been
fixed. Anyway, you never want to stay on x.x.0 versions for too long, imho.
I never even use them except for testing purposes.
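FWIW, the per-node sequence for a rolling minor upgrade usually looks roughly
like the sketch below (package and service names depend on your install, so
treat this as an illustration rather than the official procedure):

nodetool drain                        # flush memtables, stop accepting new writes
sudo service cassandra stop
sudo apt-get install cassandra=2.1.9  # or upgrade via your own packages/tarball
sudo service cassandra start
nodetool status                       # wait until the node is back UN before the next one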

C*heers,

Alain

2015-08-28 22:58 GMT+02:00 Tony Anecito :

> Hi All,
> Been a while since I upgraded and wanted to know what the steps are to
> upgrade from 2.1.0 to 2.1.9. Also want to know if I need to upgrade my java
> database driver.
>
> Thanks,
> -Tony
>


Re: abnormal log after remove a node

2015-09-01 Thread 曹志富
Just restart all of the C* nodes.
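Node by node, something along these lines should do (the exact service command
depends on your install):

nodetool drain && sudo service cassandra restart
nodetool status   # wait for the node to come back UN before moving to the next one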

--
Ranger Tsao

2015-08-25 18:17 GMT+08:00 Alain RODRIGUEZ :

> Hi, I am facing the same issue on 2.0.16.
>
> Did you solve this ? How ?
>
> I plan to try a rolling restart and see if gossip state recover from this.
>
> C*heers,
>
> Alain
>
> 2015-06-19 11:40 GMT+02:00 曹志富 :
>
>> I have a C* 2.1.5 cluster with 24 nodes. A few days ago, I removed a node from
>> this cluster using nodetool decommission.
>>
>> But today I found some logs like this:
>>
>> INFO  [GossipStage:1] 2015-06-19 17:38:05,616 Gossiper.java:968 -
>> InetAddress /172.19.105.41 is now DOWN
>> INFO  [GossipStage:1] 2015-06-19 17:38:05,617 StorageService.java:1885 -
>> Removing tokens [-1014432261309809702, -1055322450438958612,
>> -1120728727235087395, -1191392141261832305, -1203676771883970142,
>> -1215563040745505837, -1215648909329054362, -1269531760567530381,
>> -1278047879489577908, -1313427877031136549, -1342822572958042617,
>> -1350792764922315814, -1383390744017639599, -139000372807970456,
>> -140827955201469664, -1631551789771606023, -1633789813430312609,
>> -1795528665156349205, -1836619444785023397, -1879127294549041822,
>> -1962337787208890426, -2022309807234530256, -2033402140526360327,
>> -2089413865145942100, -210961549458416802, -2148530352195763113,
>> -2184481573787758786, -610790268720205, -2340762266634834427,
>> -2513416003567685694, -2520971378752190013, -2596695976621541808,
>> -2620636796023437199, -2640378596436678113, -2679143017361311011,
>> -2721176590519112233, -2749213392354746126, -279267896827516626,
>> -2872377759991294853, -2904711688111888325, -290489381926812623,
>> -3000574339499272616, -301428600802598523, -3019280155316984595,
>> -3024451041907074275, -3056898917375012425, -3161300347260716852,
>> -3166392383659271772, -3327634380871627036, -3530685865340274372,
>> -3563112657791369745, -366930313427781469, -3729582520450700795,
>> -3901838244986519991, -4065326606010524312, -4174346928341550117,
>> -4184239233207315432, -4204369933734181327, -4206479093137814808,
>> -421410317165821100, -4311166118017934135, -4407123461118340117,
>> -4466364858622123151, -4466939645485100087, -448955147512581975,
>> -4587780638857304626, -4649897584350376674, -4674234125365755024
>> , -4833801201210885896, -4857586579802212277, -4868896650650107463,
>> -4980063310159547694, -4983471821416248610, -4992846054037653676,
>> -5026994389965137674, -514302500353679181
>> 0, -5198414516309928594, -5245363745777287346, -5346838390293957674,
>> -5374413419545696184, -5427881744040857637, -5453876964430787287,
>> -5491923669475601173, -55219734138599212
>> 6, -5523011502670737422, -5537121117160410549, -5557015938925208697,
>> -5572489682738121748, -5745899409803353484, -5771239101488682535,
>> -5893479791287484099, -59766730414807540
>> 44, -6014643892406938367, -6086002438656595783, -6129360679394503700,
>> -6224240257573911174, -6290393495130499466, -6378712056928268929,
>> -6430306056990093461, -6800188263839065
>> 013, -6912720411187525051, -7160327814305587432, -7175004328733776324,
>> -7272070430660252577, -7307945744786025148, -742448651973108101,
>> -7539255117639002578, -7657460716997978
>> 94, -7846698077070579798, -7870621904906244395, -7900841391761900719,
>> -7918145426423910061, -7936795453892692473, -8070255024778921411,
>> -8086888710627677669, -8124855925323654
>> 631, -8175270408138820500, -8271197636596881168, -8336685710406477123,
>> -8466220397076441627, -8534337908154758270, -8550484400487603561,
>> -862246738021989870, -8727219287242892
>> 185, -8895705475282612927, -8921801772904834063, -9057266752652143883,
>> -9059183540698454288, -9067986437682229598, -9148183367896132028,
>> -962208188860606543, 10859447725819218
>> 30, 1189775396643491793, 1253728955879686947, 1389982523380382228,
>> 1429632314664544045, 143610053770130548, 150118120072602242,
>> 1575692041584712198, 1624575905722628764, 17894
>> 76212785155173, 1995296121962835019, 2041217364870030239,
>> 2120277336231792146, 2124445736743406711, 2154979704292433983,
>> 2340726755918680765, 23481654796845972, 23620268084352
>> 24407, 2366144489007464626, 2381492708106933027, 2398868971489617398,
>> 2427315953339163528, 2433999003913998534, 2633074510238705620,
>> 266659839023809792, 2677817641360639089, 2
>> 719725410894526151, 2751925111749406683, 2815703589803785617,
>> 3041515796379693113, 3044903149214270978, 3094954503756703989,
>> 3243933267690865263, 3246086646486800371, 33270068
>> 97333869434, 3393657685587750192, 3395065499228709345,
>> 3426126123948029459, 3500469615600510698, 3644011364716880512,
>> 3693249207133187620, 3776164494954636918, 38780676797
>> 8035, 3872151295451662867, 3937077827707223414, 4041082935346014761,
>> 4060208918173638435, 4086747843759164940, 4165638694482690057,
>> 4203996339238989224, 4220155275330961826, 4
>> 366784953339236686, 4390116924352514616, 4391225331964772681,
>> 4392419346255765958, 

Re: Data Size on each node

2015-09-01 Thread Alain RODRIGUEZ
Hi,

Our migration to SSDs (from m1.xl to i2.2xl on AWS) has been a big win. I
mean, we went from 80-90% disk utilisation to 20% max. Basically, disks are
no longer the performance bottleneck in our case. We got rid of one of our
major issues, which was disk contention.

I highly recommend you go ahead with this, even more so with such a big
data set. Yet it will probably be more expensive per node.

Another solution for you might be adding nodes (to have less to handle per
node and to make maintenance operations like repair, bootstrap, decommission,
etc. faster).

C*heers,

Alain




2015-09-01 10:17 GMT+02:00 Sachin Nikam :

> We currently have a Cassandra cluster spread over 2 DCs. The data size on
> each node of the cluster is 1.2TB with spinning disks. Minor and major
> compactions are slowing down our read queries. It has been suggested that
> replacing spinning disks with SSDs might help. Has anybody done something
> similar? If so, what were the results?
> Also, if we go with SSDs, how big can each node get with commercially
> available SSDs?
> Regards
> Sachin
>