Re: Security assessment of Cassandra

2016-02-16 Thread oleg yusim
Greetings,

Matt brought to my attention that I shared the document in "view only"
mode. My apologies for that. I corrected the permissions and shared the
document personally with everyone who indicated they would review it.

Thanks,

Oleg

On Fri, Feb 12, 2016 at 10:33 PM, oleg yusim  wrote:

> Greetings,
>
> Following Jack's and Matt's suggestions, I moved the doc to Google Docs
> and added to it all the security gaps in Cassandra I was able to discover
> (please see the second table below first).
>
> Here is an updated link to my document:
>
>
> https://docs.google.com/document/d/13-yu-1a0MMkBiJFPNkYoTd1Hzed9tgKltWi6hFLZbsk/edit?usp=sharing
>
> Thanks,
>
> Oleg
>
> On Thu, Feb 11, 2016 at 2:29 PM, oleg yusim  wrote:
>
>> Greetings,
>>
>> While performing a security assessment of Cassandra, with the goal of
>> generating a STIG for it (iase.disa.mil/stigs/Pages/a-z.aspx), I ran across
>> some questions about the way certain security features are implemented
>> (or not) in Cassandra.
>>
>> I compiled a list of questions on these topics, to which I wasn't able to
>> find definitive answers anywhere else, and posted it here:
>>
>> https://drive.google.com/open?id=0B2L9nW4Cyj41YWd1UkI4ZXVPYmM
>>
>> It is shared with all members of this list, and anyone on the list is
>> welcome to comment on the document (there is a place for community
>> comments specially reserved next to each question and my take on it).
>>
>> I would greatly appreciate the Cassandra community's help here.
>>
>> Thanks,
>>
>> Oleg
>>
>
>


Re : decommissioned nodes shows up in "nodetool describecluster" as UNREACHABLE in 2.1.12 version

2016-02-16 Thread sai krishnam raju potturi
Hi,
We have a 12-node cluster across 2 datacenters, currently running
Cassandra 2.1.12.

SNITCH : GossipingPropertyFileSnitch

When we decommissioned a few nodes in one datacenter, we observed
the following:

nodetool status shows only the live nodes in the cluster.

nodetool describecluster shows the decommissioned nodes as UNREACHABLE.

nodetool gossipinfo shows the decommissioned nodes as "LEFT"


When the live nodes were restarted, "nodetool describecluster" shows only
the live nodes, which is expected.

Purging the gossip info too did not help.
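
For reference, this is roughly how the purge was attempted (a sketch: on
2.1 there is no "nodetool assassinate" -- that subcommand only exists from
2.2 onward -- so the Gossiper MBean has to be invoked over JMX, e.g. with
jmxterm; the jar path and the X.X.X.X address are placeholders):

```shell
# Force-remove a lingering endpoint from gossip on Cassandra 2.1 via JMX.
# From 2.2 onward, "nodetool assassinate <ip>" does the same thing.
java -jar jmxterm.jar -l localhost:7199 <<'EOF'
bean org.apache.cassandra.net:type=Gossiper
run unsafeAssassinateEndpoint X.X.X.X
EOF
```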

INFO  17:27:07 InetAddress /X.X.X.X is now DOWN
INFO  17:27:07 Removing tokens [125897680671740685543105407593050165202,
140213388002871593911508364312533329916,
 98576967436431350637134234839492449485] for /X.X.X.X
INFO  17:27:07 InetAddress /X.X.X.X is now DOWN
INFO  17:27:07 Removing tokens [6977666116265389022494863106850615,
111270759969411259938117902792984586225,
138611464975439236357814418845450428175] for /X.X.X.X

Has anybody experienced similar behaviour? Restarting the entire cluster
every time a node is decommissioned does not seem right. Thanks in advance
for the help.


thanks
Sai


Re: Sudden disk usage

2016-02-16 Thread Robert Coli
On Sat, Feb 13, 2016 at 4:30 PM, Branton Davis 
wrote:

> We use SizeTieredCompaction.  The nodes were about 67% full and we were
> planning on adding new nodes (doubling the cluster to 6) soon.
>

Be sure to add those new nodes one at a time.

Have you checked for, and cleared, old snapshots? Snapshots are
automatically taken at various times and have the unusual property of
growing larger over time. This is because they are hard links of data files
and do not take up disk space of their own until the files they link to are
compacted into new files.
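
The hard-link behaviour can be demonstrated outside Cassandra (a sketch,
assuming GNU stat; the file names here are stand-ins, not real SSTables):

```shell
# A snapshot file is a hard link: it costs no extra space while the
# original SSTable exists, but pins the bytes once compaction removes it.
d=$(mktemp -d)
head -c 4096 /dev/zero > "$d/data.db"   # stand-in for an SSTable
ln "$d/data.db" "$d/snapshot.db"        # stand-in for a snapshot
stat -c '%h links' "$d/snapshot.db"     # 2 links, still one copy on disk
rm "$d/data.db"                         # "compaction" drops the original
stat -c '%s bytes' "$d/snapshot.db"     # the snapshot now pins 4096 bytes
```

On a real cluster, `nodetool listsnapshots` (available since 2.1) shows
what snapshots exist, and `nodetool clearsnapshot` removes them.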

=Rob


Re: Do I have to use repair -inc with the option -par forcely?

2016-02-16 Thread Carlos Rolo
+1 on what Alain said, but I do think that if you are on a high enough
2.1.x version (will look it up later) you don't need to follow the
documentation; it is outdated. Run a full repair, then you can start
incremental repairs, since the SSTables will carry metadata about their
last repair.

Wait for someone to confirm this, or to confirm that the docs are correct.

Regards,

Carlos Juzarte Rolo
Cassandra Consultant / Datastax Certified Architect / Cassandra MVP

Pythian - Love your data

rolo@pythian | Twitter: @cjrolo | LinkedIn: linkedin.com/in/carlosjuzarterolo
Mobile: +351 91 891 81 00 | Tel: +1 613 565 8696 x1649
www.pythian.com

On Tue, Feb 16, 2016 at 1:45 PM, Alain RODRIGUEZ  wrote:

> Hi,
>
> I am testing repairs with -inc -par and I can see that on all my nodes
>> the number of sstables explodes from 5 to 5k.
>
>
> This looks like a known issue, see
> https://issues.apache.org/jira/browse/CASSANDRA-10422
> Make sure your version is higher than 2.1.12, 2.2.4, 3.0.1 or 3.1 to avoid
> this (if you are indeed facing CASSANDRA-10422).
> I am not sure you are facing it, though, as you don't seem to be using
> subranges (the nodetool repair -st and -et options).
>
> *Is there any way to run incremental repairs, but not parallel?*
>>
>> I know that it is not possible to run a sequential repair together with
>> an incremental repair at the same time.
>>
>
>
> From http://www.datastax.com/dev/blog/more-efficient-repairs
> "Incremental repairs can be opted into via the -inc option to nodetool
> repair. This is compatible with both sequential and parallel (-par)
> repair, e.g., bin/nodetool -par -inc  ."
> So you should be able to remove -par. Not sure this will solve your issue
> though.
>
>
> Did you follow this process to migrate to incremental repairs?
>
> https://docs.datastax.com/en/cassandra/2.1/cassandra/operations/opsRepairNodesMigration.html#opsRepairNodesMigration__ol_dxj_gp5_2s
>
> C*heers,
> -
> Alain Rodriguez
> France
>
> The Last Pickle
> http://www.thelastpickle.com
>
>
>
> 2016-02-10 17:45 GMT+01:00 Jean Carlo :
>
>> Hi guys, the question is in the subject line.
>>
>> I am testing repairs with -inc -par and I can see that on all my nodes
>> the number of sstables explodes from 5 to 5k.
>>
>> I cannot allow this behaviour on my production cluster.
>>
>> *Is there any way to run incremental repairs, but not parallel?*
>>
>> I know that it is not possible to run a sequential repair together with
>> an incremental repair at the same time.
>>
>> Best regards
>>
>> Jean Carlo
>>
>> "The best way to predict the future is to invent it" Alan Kay
>>
>
>


Re: Do I have to use repair -inc with the option -par forcely?

2016-02-16 Thread Alain RODRIGUEZ
Hi,

I am testing repairs with -inc -par and I can see that on all my nodes
> the number of sstables explodes from 5 to 5k.


This looks like a known issue, see
https://issues.apache.org/jira/browse/CASSANDRA-10422
Make sure your version is higher than 2.1.12, 2.2.4, 3.0.1 or 3.1 to avoid
this (if you are indeed facing CASSANDRA-10422).
I am not sure you are facing it, though, as you don't seem to be using
subranges (the nodetool repair -st and -et options).

*Is there any way to run incremental repairs, but not parallel?*
>
> I know that it is not possible to run a sequential repair together with
> an incremental repair at the same time.
>


From http://www.datastax.com/dev/blog/more-efficient-repairs
"Incremental repairs can be opted into via the -inc option to nodetool
repair. This is compatible with both sequential and parallel (-par) repair,
e.g., bin/nodetool -par -inc  ."
So you should be able to remove -par. Not sure this will solve your issue
though.


Did you follow this process to migrate to incremental repairs?
https://docs.datastax.com/en/cassandra/2.1/cassandra/operations/opsRepairNodesMigration.html#opsRepairNodesMigration__ol_dxj_gp5_2s
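
For reference, on 2.1 that migration procedure looks roughly like this (a
sketch based on that doc; the keyspace/table names and the data-directory
path are placeholders and depend on your install):

```shell
# 1. Disable autocompaction and run one last full (non-incremental) repair:
nodetool disableautocompaction my_ks
nodetool repair my_ks my_table

# 2. Stop the node, then mark its existing SSTables as repaired so the
#    next incremental repair does not re-process all of the data:
sstablerepairedset --really-set --is-repaired \
    /var/lib/cassandra/data/my_ks/my_table-*/*-Data.db

# 3. Restart the node and re-enable autocompaction:
nodetool enableautocompaction my_ks

# Afterwards, incremental repair can be run without forcing parallel mode
# (omitting -par keeps the repair sequential on 2.1):
nodetool repair -inc my_ks
```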

C*heers,
-
Alain Rodriguez
France

The Last Pickle
http://www.thelastpickle.com



2016-02-10 17:45 GMT+01:00 Jean Carlo :

> Hi guys, the question is in the subject line.
>
> I am testing repairs with -inc -par and I can see that on all my nodes
> the number of sstables explodes from 5 to 5k.
>
> I cannot allow this behaviour on my production cluster.
>
> *Is there any way to run incremental repairs, but not parallel?*
>
> I know that it is not possible to run a sequential repair together with
> an incremental repair at the same time.
>
> Best regards
>
> Jean Carlo
>
> "The best way to predict the future is to invent it" Alan Kay
>


Re: Can't bootstrap a node

2016-02-16 Thread Alain RODRIGUEZ
Hi Brian,

Did you notice this error: "CF 9733d050-d0ed-11e5-904a-5574a0c0fd2a was
dropped during streaming"?

Are you dropping keyspaces or tables (formerly called Column Families, CF)
during the bootstrap (maybe as a client app action)?

Or are you creating the KS / CF on both datacenters before connecting them,
as described in https://issues.apache.org/jira/browse/CASSANDRA-9956 ?

Anything else that could be related (any schema changes during the
bootstrap) ?
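
Two quick checks for this (a sketch; the log path is the package default
and may differ on your install):

```shell
# All nodes should report a single schema version; more than one listed
# means a schema change is still propagating between the datacenters:
nodetool describecluster

# And the joining node's log will show whether a table vanished mid-stream:
grep -i "dropped during streaming" /var/log/cassandra/system.log
```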

C*heers,
-
Alain Rodriguez
France

The Last Pickle
http://www.thelastpickle.com

2016-02-12 22:51 GMT+01:00 Brian Picciano :

> I posted this on the IRC but wasn't able to receive any help. I have two
> nodes running 3.0.3. They're in different datacenters, connected by
> openvpn. When I go to bootstrap the new node it handshakes fine, but always
> gets this error while transferring data:
>
> http://gobin.io/oMll
>
> If I follow the log's advice and run "nodetool bootstrap resume" I get the
> following:
>
> http://gobin.io/kkSu
>
> I'm fairly confident this is not a connection issue, ping is sub-50ms, and
> there isn't any packet loss that I can see. Any help would be greatly
> appreciated, I'd also be happy to give any further debugging info that
> might help. Thanks!
>


Re: Ops Centre Read Requests / TBL: Local Read Requests

2016-02-16 Thread Romain Hardouin
Yes, you are right, Anishek. If you write with LOCAL_ONE, the values will be the same.