Re: Strange issue wherein cassandra not being started from cron

2017-01-11 Thread Ajay Garg
Hi Hannu.

On Wed, Jan 11, 2017 at 8:31 PM, Hannu Kröger <hkro...@gmail.com> wrote:

> One possible reason is that cassandra process gets different user when run
> differently. Check who owns the data files and check also what gets written
> into the /var/log/cassandra/system.log (or whatever that was).
>

Absolutely nothing gets written to /var/log/cassandra/system.log (when
trying to invoke cassandra via cron).
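
When a job dies silently under cron, a quick first step is to capture both
the environment cron provides and whatever the job prints. A minimal
debugging sketch (crontab entries; the /tmp paths are illustrative):

* * * * * env > /tmp/cron-env.txt 2>&1
* * * * * /etc/init.d/cassandra start >> /tmp/cassandra-cron.log 2>&1

Diffing /tmp/cron-env.txt against the output of "env" in a login shell
usually reveals the missing variables (PATH, JAVA_HOME, and so on) that make
an init script behave differently under cron.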


>
> Hannu
>
>
> On 11 Jan 2017, at 16.42, Ajay Garg <ajaygargn...@gmail.com> wrote:
>
> Tried everything.
> Every other cron job/script I try works, just the cassandra-service does
> not.
>
> On Wed, Jan 11, 2017 at 8:51 AM, Edward Capriolo <edlinuxg...@gmail.com>
> wrote:
>
>>
>>
>> On Tuesday, January 10, 2017, Jonathan Haddad <j...@jonhaddad.com> wrote:
>>
>>> Last I checked, cron doesn't load the same, full environment you see
>>> when you log in. Also, why put Cassandra on a cron?
>>> On Mon, Jan 9, 2017 at 9:47 PM Bhuvan Rawal <bhu1ra...@gmail.com> wrote:
>>>
>>>> Hi Ajay,
>>>>
>>>> Have you had a look at cron logs? - mine is in path /var/log/cron
>>>>
>>>> Thanks & Regards,
>>>>
>>>> On Tue, Jan 10, 2017 at 9:45 AM, Ajay Garg <ajaygargn...@gmail.com>
>>>> wrote:
>>>>
>>>>> Hi All.
>>>>>
>>>>> Facing a very weird issue, wherein the command
>>>>>
>>>>> */etc/init.d/cassandra start*
>>>>>
>>>>> causes cassandra to start when the command is run from command-line.
>>>>>
>>>>>
>>>>> However, if I put the above as a cron job
>>>>>
>>>>>
>>>>>
>>>>> * * * * * /etc/init.d/cassandra start
>>>>> cassandra never starts.
>>>>>
>>>>>
>>>>> I have checked, and "cron" service is running.
>>>>>
>>>>>
>>>>> Any ideas what might be wrong?
>>>>> I am pasting the cassandra script for brevity.
>>>>>
>>>>>
>>>>> Thanks and Regards,
>>>>> Ajay
>>>>>
>>>>>
>>>>> 
>>>>> 
>>>>> #! /bin/sh
>>>>> ### BEGIN INIT INFO
>>>>> # Provides:  cassandra
>>>>> # Required-Start:$remote_fs $network $named $time
>>>>> # Required-Stop: $remote_fs $network $named $time
>>>>> # Should-Start:  ntp mdadm
>>>>> # Should-Stop:   ntp mdadm
>>>>> # Default-Start: 2 3 4 5
>>>>> # Default-Stop:  0 1 6
>>>>> # Short-Description: distributed storage system for structured data
>>>>> # Description:   Cassandra is a distributed (peer-to-peer) system
>>>>> for
>>>>> #the management and storage of structured data.
>>>>> ### END INIT INFO
>>>>>
>>>>> # Author: Eric Evans <eev...@racklabs.com>
>>>>>
>>>>> DESC="Cassandra"
>>>>> NAME=cassandra
>>>>> PIDFILE=/var/run/$NAME/$NAME.pid
>>>>> SCRIPTNAME=/etc/init.d/$NAME
>>>>> CONFDIR=/etc/cassandra
>>>>> WAIT_FOR_START=10
>>>>> CASSANDRA_HOME=/usr/share/cassandra
>>>>> FD_LIMIT=100000
>>>>>
>>>>> [ -e /usr/share/cassandra/apache-cassandra.jar ] || exit 0
>>>>> [ -e /etc/cassandra/cassandra.yaml ] || exit 0
>>>>> [ -e /etc/cassandra/cassandra-env.sh ] || exit 0
>>>>>
>>>>> # Read configuration variable file if it is present
>>>>> [ -r /etc/default/$NAME ] && . /etc/default/$NAME
>>>>>
>>>>> # Read Cassandra environment file.
>>>>> . /etc/cassandra/cassandra-env.sh
>>>>>
>>>>> if [ -z "$JVM_OPTS" ]; then
>>>>> echo "Initialization failed; \$JVM_OPTS not set!" >&2
>>>>> exit 3
>>>>> fi
>>>>>
>>>>> export JVM_OPTS
>>>>>
>>>>> # Export JAVA_HOME, if set.
>>>>> [ -n "$JAVA_HOME" ] && export JAVA_HOME
>>>>>
>>>>> # Load the VERBOSE setting and other rcS variables
>>>>>

Re: Strange issue wherein cassandra not being started from cron

2017-01-11 Thread Ajay Garg
On Wed, Jan 11, 2017 at 8:29 PM, Martin Schröder <mar...@oneiros.de> wrote:

> 2017-01-11 15:42 GMT+01:00 Ajay Garg <ajaygargn...@gmail.com>:
> > Tried everything.
>
> Then try
>service cassandra start
> or
>systemctl start cassandra
>
> You still haven't explained to us why you want to start cassandra every
> minute.
>

Hi Martin.

Sometimes the cassandra process gets killed (reason unknown as of now).
Doing a manual "service cassandra start" works then.

Adding this to cron would at least ensure that the maximum downtime is 59
seconds (until the root cause of the crashes is found).



>
> Best
>Martin
>



-- 
Regards,
Ajay
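
If the cron entry is meant purely as a watchdog, a small guard script avoids
stacking up overlapping start attempts. A sketch under these assumptions:
the pidfile path is taken from the init script quoted in this thread, while
the script, lock, and log paths are hypothetical:

#!/bin/sh
# Hypothetical /usr/local/bin/cassandra-watchdog, run from cron every minute.
PIDFILE=/var/run/cassandra/cassandra.pid

# Hold a lock so two invocations never race each other.
exec 9>/var/lock/cassandra-watchdog.lock
flock -n 9 || exit 0

# Start only if the recorded pid is missing or no longer alive.
if [ ! -f "$PIDFILE" ] || ! kill -0 "$(cat "$PIDFILE")" 2>/dev/null; then
    /etc/init.d/cassandra start >> /var/log/cassandra-watchdog.log 2>&1
fi

The matching crontab entry would be:

* * * * * /usr/local/bin/cassandra-watchdog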


Re: Strange issue wherein cassandra not being started from cron

2017-01-11 Thread Ajay Garg
Tried everything.
Every other cron job/script I try works, just the cassandra-service does
not.

On Wed, Jan 11, 2017 at 8:51 AM, Edward Capriolo <edlinuxg...@gmail.com>
wrote:

>
>
> On Tuesday, January 10, 2017, Jonathan Haddad <j...@jonhaddad.com> wrote:
>
>> Last I checked, cron doesn't load the same, full environment you see when
>> you log in. Also, why put Cassandra on a cron?
>> On Mon, Jan 9, 2017 at 9:47 PM Bhuvan Rawal <bhu1ra...@gmail.com> wrote:
>>
>>> Hi Ajay,
>>>
>>> Have you had a look at cron logs? - mine is in path /var/log/cron
>>>
>>> Thanks & Regards,
>>>
>>> On Tue, Jan 10, 2017 at 9:45 AM, Ajay Garg <ajaygargn...@gmail.com>
>>> wrote:
>>>
>>>> Hi All.
>>>>
>>>> Facing a very weird issue, wherein the command
>>>>
>>>> */etc/init.d/cassandra start*
>>>>
>>>> causes cassandra to start when the command is run from command-line.
>>>>
>>>>
>>>> However, if I put the above as a cron job
>>>>
>>>>
>>>>
>>>> * * * * * /etc/init.d/cassandra start
>>>> cassandra never starts.
>>>>
>>>>
>>>> I have checked, and "cron" service is running.
>>>>
>>>>
>>>> Any ideas what might be wrong?
>>>> I am pasting the cassandra script for brevity.
>>>>
>>>>
>>>> Thanks and Regards,
>>>> Ajay
>>>>
>>>>
>>>> 
>>>> 
>>>> #! /bin/sh
>>>> ### BEGIN INIT INFO
>>>> # Provides:  cassandra
>>>> # Required-Start:$remote_fs $network $named $time
>>>> # Required-Stop: $remote_fs $network $named $time
>>>> # Should-Start:  ntp mdadm
>>>> # Should-Stop:   ntp mdadm
>>>> # Default-Start: 2 3 4 5
>>>> # Default-Stop:  0 1 6
>>>> # Short-Description: distributed storage system for structured data
>>>> # Description:   Cassandra is a distributed (peer-to-peer) system
>>>> for
>>>> #the management and storage of structured data.
>>>> ### END INIT INFO
>>>>
>>>> # Author: Eric Evans <eev...@racklabs.com>
>>>>
>>>> DESC="Cassandra"
>>>> NAME=cassandra
>>>> PIDFILE=/var/run/$NAME/$NAME.pid
>>>> SCRIPTNAME=/etc/init.d/$NAME
>>>> CONFDIR=/etc/cassandra
>>>> WAIT_FOR_START=10
>>>> CASSANDRA_HOME=/usr/share/cassandra
>>>> FD_LIMIT=100000
>>>>
>>>> [ -e /usr/share/cassandra/apache-cassandra.jar ] || exit 0
>>>> [ -e /etc/cassandra/cassandra.yaml ] || exit 0
>>>> [ -e /etc/cassandra/cassandra-env.sh ] || exit 0
>>>>
>>>> # Read configuration variable file if it is present
>>>> [ -r /etc/default/$NAME ] && . /etc/default/$NAME
>>>>
>>>> # Read Cassandra environment file.
>>>> . /etc/cassandra/cassandra-env.sh
>>>>
>>>> if [ -z "$JVM_OPTS" ]; then
>>>> echo "Initialization failed; \$JVM_OPTS not set!" >&2
>>>> exit 3
>>>> fi
>>>>
>>>> export JVM_OPTS
>>>>
>>>> # Export JAVA_HOME, if set.
>>>> [ -n "$JAVA_HOME" ] && export JAVA_HOME
>>>>
>>>> # Load the VERBOSE setting and other rcS variables
>>>> . /lib/init/vars.sh
>>>>
>>>> # Define LSB log_* functions.
>>>> # Depend on lsb-base (>= 3.0-6) to ensure that this file is present.
>>>> . /lib/lsb/init-functions
>>>>
>>>> #
>>>> # Function that returns 0 if process is running, or nonzero if not.
>>>> #
>>>> # The nonzero value is 3 if the process is simply not running, and 1 if
>>>> the
>>>> # process is not running but the pidfile exists (to match the exit
>>>> codes for
>>>> # the "status" command; see LSB core spec 3.1, section 20.2)
>>>> #
>>>> CMD_PATT="cassandra.+CassandraDaemon"
>>>> is_running()
>>>> {
>>>> if [ -f $PIDFILE ]; then
>>>> pid=`cat $PIDFILE`
>>>> grep -Eq "$CMD_PATT" "/proc/$pid/cmdline" 2>/dev/null && return 0

Strange issue wherein cassandra not being started from cron

2017-01-09 Thread Ajay Garg
Hi All.

Facing a very weird issue, wherein the command

*/etc/init.d/cassandra start*

causes cassandra to start when the command is run from the command line.


However, if I put the above as a cron job



* * * * * /etc/init.d/cassandra start
cassandra never starts.


I have checked, and "cron" service is running.


Any ideas what might be wrong?
I am pasting the cassandra init script below for reference.


Thanks and Regards,
Ajay



#! /bin/sh
### BEGIN INIT INFO
# Provides:  cassandra
# Required-Start:$remote_fs $network $named $time
# Required-Stop: $remote_fs $network $named $time
# Should-Start:  ntp mdadm
# Should-Stop:   ntp mdadm
# Default-Start: 2 3 4 5
# Default-Stop:  0 1 6
# Short-Description: distributed storage system for structured data
# Description:   Cassandra is a distributed (peer-to-peer) system for
#the management and storage of structured data.
### END INIT INFO

# Author: Eric Evans 

DESC="Cassandra"
NAME=cassandra
PIDFILE=/var/run/$NAME/$NAME.pid
SCRIPTNAME=/etc/init.d/$NAME
CONFDIR=/etc/cassandra
WAIT_FOR_START=10
CASSANDRA_HOME=/usr/share/cassandra
FD_LIMIT=100000

[ -e /usr/share/cassandra/apache-cassandra.jar ] || exit 0
[ -e /etc/cassandra/cassandra.yaml ] || exit 0
[ -e /etc/cassandra/cassandra-env.sh ] || exit 0

# Read configuration variable file if it is present
[ -r /etc/default/$NAME ] && . /etc/default/$NAME

# Read Cassandra environment file.
. /etc/cassandra/cassandra-env.sh

if [ -z "$JVM_OPTS" ]; then
echo "Initialization failed; \$JVM_OPTS not set!" >&2
exit 3
fi

export JVM_OPTS

# Export JAVA_HOME, if set.
[ -n "$JAVA_HOME" ] && export JAVA_HOME

# Load the VERBOSE setting and other rcS variables
. /lib/init/vars.sh

# Define LSB log_* functions.
# Depend on lsb-base (>= 3.0-6) to ensure that this file is present.
. /lib/lsb/init-functions

#
# Function that returns 0 if process is running, or nonzero if not.
#
# The nonzero value is 3 if the process is simply not running, and 1 if the
# process is not running but the pidfile exists (to match the exit codes for
# the "status" command; see LSB core spec 3.1, section 20.2)
#
CMD_PATT="cassandra.+CassandraDaemon"
is_running()
{
if [ -f $PIDFILE ]; then
pid=`cat $PIDFILE`
grep -Eq "$CMD_PATT" "/proc/$pid/cmdline" 2>/dev/null && return 0
return 1
fi
return 3
}
#
# Function that starts the daemon/service
#
do_start()
{
# Return
#   0 if daemon has been started
#   1 if daemon was already running
#   2 if daemon could not be started

ulimit -l unlimited
ulimit -n "$FD_LIMIT"

cassandra_home=`getent passwd cassandra | awk -F ':' '{ print $6; }'`
heap_dump_f="$cassandra_home/java_`date +%s`.hprof"
error_log_f="$cassandra_home/hs_err_`date +%s`.log"

[ -e `dirname "$PIDFILE"` ] || \
install -d -ocassandra -gcassandra -m755 `dirname $PIDFILE`



start-stop-daemon -S -c cassandra -a /usr/sbin/cassandra -q -p "$PIDFILE" -t >/dev/null || return 1

start-stop-daemon -S -c cassandra -a /usr/sbin/cassandra -b -p "$PIDFILE" -- \
-p "$PIDFILE" -H "$heap_dump_f" -E "$error_log_f" >/dev/null || return 2

}

#
# Function that stops the daemon/service
#
do_stop()
{
# Return
#   0 if daemon has been stopped
#   1 if daemon was already stopped
#   2 if daemon could not be stopped
#   other if a failure occurred
start-stop-daemon -K -p "$PIDFILE" -R TERM/30/KILL/5 >/dev/null
RET=$?
rm -f "$PIDFILE"
return $RET
}

case "$1" in
  start)
[ "$VERBOSE" != no ] && log_daemon_msg "Starting $DESC" "$NAME"
do_start
case "$?" in
0|1) [ "$VERBOSE" != no ] && log_end_msg 0 ;;
2) [ "$VERBOSE" != no ] && log_end_msg 1 ;;
esac
;;
  stop)
[ "$VERBOSE" != no ] && log_daemon_msg "Stopping $DESC" "$NAME"
do_stop
case "$?" in
0|1) [ "$VERBOSE" != no ] && log_end_msg 0 ;;
2) [ "$VERBOSE" != no ] && log_end_msg 1 ;;
esac
;;
  restart|force-reload)
log_daemon_msg "Restarting $DESC" "$NAME"
do_stop
case "$?" in
  0|1)
do_start
case "$?" in
0) log_end_msg 0 ;;
1) log_end_msg 1 ;; # Old process is still running
*) log_end_msg 1 ;; # Failed to start
esac
;;
  *)
# Failed to stop
log_end_msg 1
;;
esac
;;
  status)
is_running
stat=$?
case "$stat" in
  0) log_success_msg "$DESC is running" ;;
  1) log_failure_msg "could not access pidfile for $DESC" ;;
*) log_success_msg "$DESC is not running" ;;

Re: Basic query in setting up secure inter-dc cluster

2016-04-25 Thread Ajay Garg
Hi Everyone.

Kindly reply with a "yes" or "no": is it possible to set up encryption only
between a particular pair of nodes?
Or is it an "all" or "none" feature, where encryption is present either
between EVERY PAIR of nodes or between NO PAIR of nodes?


Thanks and Regards,
Ajay

On Mon, Apr 18, 2016 at 9:55 AM, Ajay Garg <ajaygargn...@gmail.com> wrote:

> Also, wondering what is the difference between "all" and "dc" in
> "internode_encryption".
> Perhaps my answer lies in this?
>
> On Mon, Apr 18, 2016 at 9:51 AM, Ajay Garg <ajaygargn...@gmail.com> wrote:
>
>> Ok, trying to wake up this thread again.
>>
>> I went through the following links ::
>>
>>
>> https://docs.datastax.com/en/cassandra/1.2/cassandra/security/secureSSLNodeToNode_t.html
>>
>> https://docs.datastax.com/en/cassandra/1.2/cassandra/security/secureSSLCertificates_t.html
>>
>>
>> and I am wondering *if it is possible to setup secure
>> inter-communication only between some nodes*.
>>
>> In particular, if I have a 2*2 cluster, is it possible to setup secure
>> communication ONLY between the nodes of DC2?
>> Once it works well, we would then setup secure-communication everywhere.
>>
>> We are wanting this, because DC2 is the backup centre, while DC1 is the
>> primary-centre connected directly to the application-server. We don't want
>> to screw things if something goes bad in DC1.
>>
>>
>> Will be grateful for pointers.
>>
>>
>> Thanks and Regards,
>> Ajay
>>
>> On Sun, Jan 17, 2016 at 9:09 PM, Ajay Garg <ajaygargn...@gmail.com>
>> wrote:
>>
>>> Hi All.
>>>
>>> A gentle query-reminder.
>>>
>>> I will be grateful if I could be given a brief technical overview, as to
>>> how secure-communication occurs between two nodes in a cluster.
>>>
>>> Please note that I wish for some information on the "how it works below
>>> the hood", and NOT "how to set it up".
>>>
>>>
>>>
>>> Thanks and Regards,
>>> Ajay
>>>
>>> On Wed, Jan 6, 2016 at 4:16 PM, Ajay Garg <ajaygargn...@gmail.com>
>>> wrote:
>>>
>>>> Thanks everyone for the reply.
>>>>
>>>> I actually have a fair bit of questions, but it will be nice if someone
>>>> could please tell me the flow (implementation-wise), as to how node-to-node
>>>> encryption works in a cluster.
>>>>
>>>> Let's say node1 from DC1, wishes to talk securely to node 2 from DC2
>>>> (with *"require_client_auth: false*").
>>>> I presume it would be like below (please correct me if am wrong) ::
>>>>
>>>> a)
>>>> node1 tries to connect to node2, using the certificate *as defined on
>>>> node1* in cassandra.yaml.
>>>>
>>>> b)
>>>> node2 will confirm if the certificate being offered by node1 is in the
>>>> truststore *as defined on node2* in cassandra.yaml.
>>>> if it is, secure-communication is allowed.
>>>>
>>>>
>>>> Is my thinking right?
>>>> I
>>>>
>>>> On Wed, Jan 6, 2016 at 1:55 PM, Neha Dave <nehajtriv...@gmail.com>
>>>> wrote:
>>>>
>>>>> Hi Ajay,
>>>>> Have a look here :
>>>>> https://docs.datastax.com/en/cassandra/1.2/cassandra/security/secureSSLNodeToNode_t.html
>>>>>
>>>>> You can configure for DC level Security:
>>>>>
>>>>> Procedure
>>>>>
>>>>> On each node under server_encryption_options:
>>>>>
>>>>>- Enable internode_encryption.
>>>>>The available options are:
>>>>>   - all
>>>>>   - none
>>>>>   - dc: Cassandra encrypts the traffic between the data centers.
>>>>>   - rack: Cassandra encrypts the traffic between the racks.
>>>>>
>>>>> regards
>>>>>
>>>>> Neha
>>>>>
>>>>>
>>>>>
>>>>> On Wed, Jan 6, 2016 at 12:48 PM, Singh, Abhijeet <
>>>>> absi...@informatica.com> wrote:
>>>>>
>>>>>> Security is a very wide concept. What exactly do you want to achieve ?
>>>>>>
>>>>>>
>>>>>>
>>>>>> *From:* Ajay Garg [mailto:ajaygargn...@gmail.com]
>>>>>> *Sent:* Wednesday, January 06, 2016 11:27 AM
>>>>>> *To:* user@cassandra.apache.org
>>>>>> *Subject:* Basic query in setting up secure inter-dc cluster
>>>>>>
>>>>>>
>>>>>>
>>>>>> Hi All.
>>>>>>
>>>>>> We have a 2*2 cluster deployed, but no security as of now.
>>>>>>
>>>>>> As a first stage, we wish to implement inter-dc security.
>>>>>>
>>>>>> Is it possible to enable security one machine at a time?
>>>>>>
>>>>>> For example, let's say the machines are DC1M1, DC1M2, DC2M1, DC2M2.
>>>>>>
>>>>>> If I make the changes JUST IN DC2M2 and restart it, will the traffic
>>>>>> between DC1M1/DC1M2 and DC2M2 be secure? Or security will kick in ONLY
>>>>>> AFTER the changes are made in all the 4 machines?
>>>>>>
>>>>>> Asking here, because I don't want to screw up a live cluster due to
>>>>>> my lack of experience.
>>>>>>
>>>>>> Looking forward to some pointers.
>>>>>>
>>>>>>
>>>>>> --
>>>>>>
>>>>>> Regards,
>>>>>> Ajay
>>>>>>
>>>>>
>>>>>
>>>>
>>>>
>>>> --
>>>> Regards,
>>>> Ajay
>>>>
>>>
>>>
>>>
>>> --
>>> Regards,
>>> Ajay
>>>
>>
>>
>>
>> --
>> Regards,
>> Ajay
>>
>
>
>
> --
> Regards,
> Ajay
>



-- 
Regards,
Ajay


Re: Basic query in setting up secure inter-dc cluster

2016-04-17 Thread Ajay Garg
Also, wondering what is the difference between "all" and "dc" in
"internode_encryption".
Perhaps my answer lies in this?

On Mon, Apr 18, 2016 at 9:51 AM, Ajay Garg <ajaygargn...@gmail.com> wrote:

> Ok, trying to wake up this thread again.
>
> I went through the following links ::
>
>
> https://docs.datastax.com/en/cassandra/1.2/cassandra/security/secureSSLNodeToNode_t.html
>
> https://docs.datastax.com/en/cassandra/1.2/cassandra/security/secureSSLCertificates_t.html
>
>
> and I am wondering *if it is possible to setup secure inter-communication
> only between some nodes*.
>
> In particular, if I have a 2*2 cluster, is it possible to setup secure
> communication ONLY between the nodes of DC2?
> Once it works well, we would then setup secure-communication everywhere.
>
> We are wanting this, because DC2 is the backup centre, while DC1 is the
> primary-centre connected directly to the application-server. We don't want
> to screw things if something goes bad in DC1.
>
>
> Will be grateful for pointers.
>
>
> Thanks and Regards,
> Ajay
>
> On Sun, Jan 17, 2016 at 9:09 PM, Ajay Garg <ajaygargn...@gmail.com> wrote:
>
>> Hi All.
>>
>> A gentle query-reminder.
>>
>> I will be grateful if I could be given a brief technical overview, as to
>> how secure-communication occurs between two nodes in a cluster.
>>
>> Please note that I wish for some information on the "how it works below
>> the hood", and NOT "how to set it up".
>>
>>
>>
>> Thanks and Regards,
>> Ajay
>>
>> On Wed, Jan 6, 2016 at 4:16 PM, Ajay Garg <ajaygargn...@gmail.com> wrote:
>>
>>> Thanks everyone for the reply.
>>>
>>> I actually have a fair bit of questions, but it will be nice if someone
>>> could please tell me the flow (implementation-wise), as to how node-to-node
>>> encryption works in a cluster.
>>>
>>> Let's say node1 from DC1, wishes to talk securely to node 2 from DC2
>>> (with *"require_client_auth: false*").
>>> I presume it would be like below (please correct me if am wrong) ::
>>>
>>> a)
>>> node1 tries to connect to node2, using the certificate *as defined on
>>> node1* in cassandra.yaml.
>>>
>>> b)
>>> node2 will confirm if the certificate being offered by node1 is in the
>>> truststore *as defined on node2* in cassandra.yaml.
>>> if it is, secure-communication is allowed.
>>>
>>>
>>> Is my thinking right?
>>> I
>>>
>>> On Wed, Jan 6, 2016 at 1:55 PM, Neha Dave <nehajtriv...@gmail.com>
>>> wrote:
>>>
>>>> Hi Ajay,
>>>> Have a look here :
>>>> https://docs.datastax.com/en/cassandra/1.2/cassandra/security/secureSSLNodeToNode_t.html
>>>>
>>>> You can configure for DC level Security:
>>>>
>>>> Procedure
>>>>
>>>> On each node under server_encryption_options:
>>>>
>>>>- Enable internode_encryption.
>>>>The available options are:
>>>>   - all
>>>>   - none
>>>>   - dc: Cassandra encrypts the traffic between the data centers.
>>>>   - rack: Cassandra encrypts the traffic between the racks.
>>>>
>>>> regards
>>>>
>>>> Neha
>>>>
>>>>
>>>>
>>>> On Wed, Jan 6, 2016 at 12:48 PM, Singh, Abhijeet <
>>>> absi...@informatica.com> wrote:
>>>>
>>>>> Security is a very wide concept. What exactly do you want to achieve ?
>>>>>
>>>>>
>>>>>
>>>>> *From:* Ajay Garg [mailto:ajaygargn...@gmail.com]
>>>>> *Sent:* Wednesday, January 06, 2016 11:27 AM
>>>>> *To:* user@cassandra.apache.org
>>>>> *Subject:* Basic query in setting up secure inter-dc cluster
>>>>>
>>>>>
>>>>>
>>>>> Hi All.
>>>>>
>>>>> We have a 2*2 cluster deployed, but no security as of now.
>>>>>
>>>>> As a first stage, we wish to implement inter-dc security.
>>>>>
>>>>> Is it possible to enable security one machine at a time?
>>>>>
>>>>> For example, let's say the machines are DC1M1, DC1M2, DC2M1, DC2M2.
>>>>>
>>>>> If I make the changes JUST IN DC2M2 and restart it, will the traffic
>>>>> between DC1M1/DC1M2 and DC2M2 be secure? Or security will kick in ONLY
>>>>> AFTER the changes are made in all the 4 machines?
>>>>>
>>>>> Asking here, because I don't want to screw up a live cluster due to my
>>>>> lack of experience.
>>>>>
>>>>> Looking forward to some pointers.
>>>>>
>>>>>
>>>>> --
>>>>>
>>>>> Regards,
>>>>> Ajay
>>>>>
>>>>
>>>>
>>>
>>>
>>> --
>>> Regards,
>>> Ajay
>>>
>>
>>
>>
>> --
>> Regards,
>> Ajay
>>
>
>
>
> --
> Regards,
> Ajay
>



-- 
Regards,
Ajay


Re: Basic query in setting up secure inter-dc cluster

2016-04-17 Thread Ajay Garg
Ok, trying to wake up this thread again.

I went through the following links ::

https://docs.datastax.com/en/cassandra/1.2/cassandra/security/secureSSLNodeToNode_t.html
https://docs.datastax.com/en/cassandra/1.2/cassandra/security/secureSSLCertificates_t.html


and I am wondering *if it is possible to setup secure inter-communication
only between some nodes*.

In particular, if I have a 2*2 cluster, is it possible to setup secure
communication ONLY between the nodes of DC2?
Once it works well, we would then setup secure-communication everywhere.

We are wanting this, because DC2 is the backup centre, while DC1 is the
primary-centre connected directly to the application-server. We don't want
to screw things if something goes bad in DC1.


Will be grateful for pointers.


Thanks and Regards,
Ajay

On Sun, Jan 17, 2016 at 9:09 PM, Ajay Garg <ajaygargn...@gmail.com> wrote:

> Hi All.
>
> A gentle query-reminder.
>
> I will be grateful if I could be given a brief technical overview, as to
> how secure-communication occurs between two nodes in a cluster.
>
> Please note that I wish for some information on the "how it works below
> the hood", and NOT "how to set it up".
>
>
>
> Thanks and Regards,
> Ajay
>
> On Wed, Jan 6, 2016 at 4:16 PM, Ajay Garg <ajaygargn...@gmail.com> wrote:
>
>> Thanks everyone for the reply.
>>
>> I actually have a fair bit of questions, but it will be nice if someone
>> could please tell me the flow (implementation-wise), as to how node-to-node
>> encryption works in a cluster.
>>
>> Let's say node1 from DC1, wishes to talk securely to node 2 from DC2
>> (with *"require_client_auth: false*").
>> I presume it would be like below (please correct me if am wrong) ::
>>
>> a)
>> node1 tries to connect to node2, using the certificate *as defined on
>> node1* in cassandra.yaml.
>>
>> b)
>> node2 will confirm if the certificate being offered by node1 is in the
>> truststore *as defined on node2* in cassandra.yaml.
>> if it is, secure-communication is allowed.
>>
>>
>> Is my thinking right?
>> I
>>
>> On Wed, Jan 6, 2016 at 1:55 PM, Neha Dave <nehajtriv...@gmail.com> wrote:
>>
>>> Hi Ajay,
>>> Have a look here :
>>> https://docs.datastax.com/en/cassandra/1.2/cassandra/security/secureSSLNodeToNode_t.html
>>>
>>> You can configure for DC level Security:
>>>
>>> Procedure
>>>
>>> On each node under server_encryption_options:
>>>
>>>- Enable internode_encryption.
>>>The available options are:
>>>   - all
>>>   - none
>>>   - dc: Cassandra encrypts the traffic between the data centers.
>>>   - rack: Cassandra encrypts the traffic between the racks.
>>>
>>> regards
>>>
>>> Neha
>>>
>>>
>>>
>>> On Wed, Jan 6, 2016 at 12:48 PM, Singh, Abhijeet <
>>> absi...@informatica.com> wrote:
>>>
>>>> Security is a very wide concept. What exactly do you want to achieve ?
>>>>
>>>>
>>>>
>>>> *From:* Ajay Garg [mailto:ajaygargn...@gmail.com]
>>>> *Sent:* Wednesday, January 06, 2016 11:27 AM
>>>> *To:* user@cassandra.apache.org
>>>> *Subject:* Basic query in setting up secure inter-dc cluster
>>>>
>>>>
>>>>
>>>> Hi All.
>>>>
>>>> We have a 2*2 cluster deployed, but no security as of now.
>>>>
>>>> As a first stage, we wish to implement inter-dc security.
>>>>
>>>> Is it possible to enable security one machine at a time?
>>>>
>>>> For example, let's say the machines are DC1M1, DC1M2, DC2M1, DC2M2.
>>>>
>>>> If I make the changes JUST IN DC2M2 and restart it, will the traffic
>>>> between DC1M1/DC1M2 and DC2M2 be secure? Or security will kick in ONLY
>>>> AFTER the changes are made in all the 4 machines?
>>>>
>>>> Asking here, because I don't want to screw up a live cluster due to my
>>>> lack of experience.
>>>>
>>>> Looking forward to some pointers.
>>>>
>>>>
>>>> --
>>>>
>>>> Regards,
>>>> Ajay
>>>>
>>>
>>>
>>
>>
>> --
>> Regards,
>> Ajay
>>
>
>
>
> --
> Regards,
> Ajay
>



-- 
Regards,
Ajay


Can we set TTL on individual fields (columns) using the Datastax java-driver

2016-02-08 Thread Ajay Garg
Something like ::


##
class A {

  @Id
  @Column (name = "pojo_key")
  int key;

  @Ttl(10)
  @Column (name = "pojo_temporary_guest")
  String guest;

}
##


When I persist, say, the value "ajay" in the guest field (the
pojo_temporary_guest column), it stays forever and does not become "null"
after 10 seconds.

Kindly point out what I am doing wrong.
I will be grateful.


Thanks and Regards,
Ajay
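
For reference, CQL itself applies USING TTL per statement, to exactly the
columns that statement writes, so a per-column TTL can be obtained by
writing that column in its own statement. A sketch in plain CQL (the
keyspace and table names are assumed, mirroring the mapping above):

INSERT INTO my_ks.a (pojo_key) VALUES (1);

UPDATE my_ks.a USING TTL 10
   SET pojo_temporary_guest = 'ajay'
 WHERE pojo_key = 1;

-- After 10 seconds pojo_temporary_guest reads back as null,
-- while the row itself (pojo_key) remains.

Whether an annotation such as @Ttl translates into this depends entirely on
the mapping library in use; the snippet only shows what the database itself
supports.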


Re: Basic query in setting up secure inter-dc cluster

2016-01-17 Thread Ajay Garg
Hi All.

A gentle query-reminder.

I will be grateful if I could be given a brief technical overview, as to
how secure-communication occurs between two nodes in a cluster.

Please note that I wish for some information on the "how it works below the
hood", and NOT "how to set it up".



Thanks and Regards,
Ajay

On Wed, Jan 6, 2016 at 4:16 PM, Ajay Garg <ajaygargn...@gmail.com> wrote:

> Thanks everyone for the reply.
>
> I actually have a fair bit of questions, but it will be nice if someone
> could please tell me the flow (implementation-wise), as to how node-to-node
> encryption works in a cluster.
>
> Let's say node1 from DC1, wishes to talk securely to node 2 from DC2 (with 
> *"require_client_auth:
> false*").
> I presume it would be like below (please correct me if am wrong) ::
>
> a)
> node1 tries to connect to node2, using the certificate *as defined on
> node1* in cassandra.yaml.
>
> b)
> node2 will confirm if the certificate being offered by node1 is in the
> truststore *as defined on node2* in cassandra.yaml.
> if it is, secure-communication is allowed.
>
>
> Is my thinking right?
> I
>
> On Wed, Jan 6, 2016 at 1:55 PM, Neha Dave <nehajtriv...@gmail.com> wrote:
>
>> Hi Ajay,
>> Have a look here :
>> https://docs.datastax.com/en/cassandra/1.2/cassandra/security/secureSSLNodeToNode_t.html
>>
>> You can configure for DC level Security:
>>
>> Procedure
>>
>> On each node under server_encryption_options:
>>
>>- Enable internode_encryption.
>>The available options are:
>>   - all
>>   - none
>>   - dc: Cassandra encrypts the traffic between the data centers.
>>   - rack: Cassandra encrypts the traffic between the racks.
>>
>> regards
>>
>> Neha
>>
>>
>>
>> On Wed, Jan 6, 2016 at 12:48 PM, Singh, Abhijeet <absi...@informatica.com
>> > wrote:
>>
>>> Security is a very wide concept. What exactly do you want to achieve ?
>>>
>>>
>>>
>>> *From:* Ajay Garg [mailto:ajaygargn...@gmail.com]
>>> *Sent:* Wednesday, January 06, 2016 11:27 AM
>>> *To:* user@cassandra.apache.org
>>> *Subject:* Basic query in setting up secure inter-dc cluster
>>>
>>>
>>>
>>> Hi All.
>>>
>>> We have a 2*2 cluster deployed, but no security as of now.
>>>
>>> As a first stage, we wish to implement inter-dc security.
>>>
>>> Is it possible to enable security one machine at a time?
>>>
>>> For example, let's say the machines are DC1M1, DC1M2, DC2M1, DC2M2.
>>>
>>> If I make the changes JUST IN DC2M2 and restart it, will the traffic
>>> between DC1M1/DC1M2 and DC2M2 be secure? Or security will kick in ONLY
>>> AFTER the changes are made in all the 4 machines?
>>>
>>> Asking here, because I don't want to screw up a live cluster due to my
>>> lack of experience.
>>>
>>> Looking forward to some pointers.
>>>
>>>
>>> --
>>>
>>> Regards,
>>> Ajay
>>>
>>
>>
>
>
> --
> Regards,
> Ajay
>



-- 
Regards,
Ajay


Re: Basic query in setting up secure inter-dc cluster

2016-01-06 Thread Ajay Garg
Thanks everyone for the reply.

I actually have a fair bit of questions, but it will be nice if someone
could please tell me the flow (implementation-wise), as to how node-to-node
encryption works in a cluster.

Let's say node1 from DC1, wishes to talk securely to node 2 from DC2
(with *"require_client_auth:
false*").
I presume it would be like below (please correct me if am wrong) ::

a)
node1 tries to connect to node2, using the certificate *as defined on node1*
in cassandra.yaml.

b)
node2 will confirm if the certificate being offered by node1 is in the
truststore *as defined on node2* in cassandra.yaml.
if it is, secure-communication is allowed.


Is my thinking right?
I

On Wed, Jan 6, 2016 at 1:55 PM, Neha Dave <nehajtriv...@gmail.com> wrote:

> Hi Ajay,
> Have a look here :
> https://docs.datastax.com/en/cassandra/1.2/cassandra/security/secureSSLNodeToNode_t.html
>
> You can configure for DC level Security:
>
> Procedure
>
> On each node under server_encryption_options:
>
>- Enable internode_encryption.
>The available options are:
>   - all
>   - none
>   - dc: Cassandra encrypts the traffic between the data centers.
>   - rack: Cassandra encrypts the traffic between the racks.
>
> regards
>
> Neha
>
>
>
> On Wed, Jan 6, 2016 at 12:48 PM, Singh, Abhijeet <absi...@informatica.com>
> wrote:
>
>> Security is a very wide concept. What exactly do you want to achieve ?
>>
>>
>>
>> *From:* Ajay Garg [mailto:ajaygargn...@gmail.com]
>> *Sent:* Wednesday, January 06, 2016 11:27 AM
>> *To:* user@cassandra.apache.org
>> *Subject:* Basic query in setting up secure inter-dc cluster
>>
>>
>>
>> Hi All.
>>
>> We have a 2*2 cluster deployed, but no security as of now.
>>
>> As a first stage, we wish to implement inter-dc security.
>>
>> Is it possible to enable security one machine at a time?
>>
>> For example, let's say the machines are DC1M1, DC1M2, DC2M1, DC2M2.
>>
>> If I make the changes JUST IN DC2M2 and restart it, will the traffic
>> between DC1M1/DC1M2 and DC2M2 be secure? Or security will kick in ONLY
>> AFTER the changes are made in all the 4 machines?
>>
>> Asking here, because I don't want to screw up a live cluster due to my
>> lack of experience.
>>
>> Looking forward to some pointers.
>>
>>
>> --
>>
>> Regards,
>> Ajay
>>
>
>


-- 
Regards,
Ajay


Basic query in setting up secure inter-dc cluster

2016-01-05 Thread Ajay Garg
Hi All.

We have a 2*2 cluster deployed, but no security as of now.
As a first stage, we wish to implement inter-dc security.

Is it possible to enable security one machine at a time?

For example, let's say the machines are DC1M1, DC1M2, DC2M1, DC2M2.
If I make the changes JUST IN DC2M2 and restart it, will the traffic
between DC1M1/DC1M2 and DC2M2 be secure? Or security will kick in ONLY
AFTER the changes are made in all the 4 machines?

Asking here, because I don't want to screw up a live cluster due to my lack
of experience.

Looking forward to some pointers.

-- 
Regards,
Ajay
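
For context, node-to-node encryption is controlled by the
server_encryption_options block in cassandra.yaml on each node. A sketch
(the keystore paths and passwords are illustrative):

server_encryption_options:
    internode_encryption: dc          # all | none | dc | rack
    keystore: /etc/cassandra/conf/.keystore
    keystore_password: myKeyPass
    truststore: /etc/cassandra/conf/.truststore
    truststore_password: myTrustPass
    require_client_auth: false

The internode_encryption value is the knob discussed in this thread: "all"
encrypts every pair of nodes, "dc" only traffic between data centers, and
"rack" only traffic between racks.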


Re: Doubt regarding consistency-level in Cassandra-2.1.10

2015-11-04 Thread Ajay Garg
Hi All.

I think we got the root-cause.

One of the fields in one of the classes was marked with the "@Version"
annotation, which caused the Cassandra Java driver to add "IF NOT EXISTS" to
the insert query, thus invoking the SERIAL consistency level.

We removed the annotation (didn't really need that), and we have not
observed the error since about an hour or so.


Thanks Eric and Bryan for the help !!!


Thanks and Regards,
Ajay

On Wed, Nov 4, 2015 at 8:51 AM, Ajay Garg <ajaygargn...@gmail.com> wrote:

> Hmm... ok.
>
> Ideally, we require ::
>
> a)
> The intra-DC-node-syncing takes place at the statement/query level.
>
> b)
> The inter-DC-node-syncing takes place at cassandra level.
>
>
> That way, we don't spend too much delay at the statement/query level.
>
>
> For the so-called CAS/lightweight transactions, the above are impossible
> then?
>
> On Wed, Nov 4, 2015 at 5:58 AM, Bryan Cheng <br...@blockcypher.com> wrote:
>
>> What Eric means is that SERIAL consistency is a special type of
>> consistency that is only invoked for a subset of operations: those that use
>> CAS/lightweight transactions, for example "IF NOT EXISTS" queries.
>>
>> The differences between CAS operations and standard operations are
>> significant and there are large repercussions for tunable consistency. The
>> amount of time such an operation takes is greatly increased as well; you
>> may need to increase your internal node-to-node timeouts .
>>
>> On Mon, Nov 2, 2015 at 8:01 PM, Ajay Garg <ajaygargn...@gmail.com> wrote:
>>
>>> Hi Eric,
>>>
>>> I am sorry, but I don't understand.
>>>
>>> If there had been some issue in the configuration, then the
>>> consistency-issue would be seen everytime (I guess).
>>> As of now, the error is seen sometimes (probably 30% of times).
>>>
>>> On Mon, Nov 2, 2015 at 10:24 PM, Eric Stevens <migh...@gmail.com> wrote:
>>>
>>>> Serial consistency gets invoked at the protocol level when doing
>>>> lightweight transactions such as CAS operations.  If you're expecting that
>>>> your topology is RF=2, N=2, it seems like some keyspace has RF=3, and so
>>>> there aren't enough nodes available to satisfy serial consistency.
>>>>
>>>> See
>>>> http://docs.datastax.com/en/cassandra/2.0/cassandra/dml/dml_ltwt_transaction_c.html
>>>>
>>>> On Mon, Nov 2, 2015 at 1:29 AM Ajay Garg <ajaygargn...@gmail.com>
>>>> wrote:
>>>>
>>>>> Hi All.
>>>>>
>>>>> I have a 2*2 Network-Topology Replication setup, and I run my
>>>>> application via DataStax-driver.
>>>>>
>>>>> I frequently get the errors of type ::
>>>>> *Cassandra timeout during write query at consistency SERIAL (3 replica
>>>>> were required but only 0 acknowledged the write)*
>>>>>
>>>>> I have already tried passing a "write-options with LOCAL_QUORUM
>>>>> consistency-level" in all create/save statements, but I still get this
>>>>> error.
>>>>>
>>>>> Does something else need to be changed in
>>>>> /etc/cassandra/cassandra.yaml too?
>>>>> Or may be some another place?
>>>>>
>>>>>
>>>>> --
>>>>> Regards,
>>>>> Ajay
>>>>>
>>>>
>>>
>>>
>>> --
>>> Regards,
>>> Ajay
>>>
>>
>>
>
>
> --
> Regards,
> Ajay
>



-- 
Regards,
Ajay
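
To make the mechanism concrete: any conditional write is a lightweight
transaction, and its Paxos phase runs at the serial consistency level,
separately from the regular consistency level set on the statement. A
sketch (keyspace, table, and columns are illustrative):

-- A plain insert honours the statement's consistency level (e.g. LOCAL_QUORUM):
INSERT INTO ks.tbl (id, val) VALUES (1, 'x');

-- Adding IF NOT EXISTS makes it a lightweight transaction; the Paxos round
-- then runs at the serial level (SERIAL by default, or LOCAL_SERIAL to keep
-- the round within one data center):
INSERT INTO ks.tbl (id, val) VALUES (1, 'x') IF NOT EXISTS;

With the Java driver, the serial level can be set per statement via
Statement.setSerialConsistencyLevel(ConsistencyLevel.LOCAL_SERIAL); it only
takes effect for conditional statements.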


Re: Doubt regarding consistency-level in Cassandra-2.1.10

2015-11-03 Thread Ajay Garg
Hmm... ok.

Ideally, we require ::

a)
The intra-DC-node-syncing takes place at the statement/query level.

b)
The inter-DC-node-syncing takes place at cassandra level.


That way, we don't spend too much delay at the statement/query level.


For the so-called CAS/lightweight transactions, the above are impossible
then?

On Wed, Nov 4, 2015 at 5:58 AM, Bryan Cheng <br...@blockcypher.com> wrote:

> What Eric means is that SERIAL consistency is a special type of
> consistency that is only invoked for a subset of operations: those that use
> CAS/lightweight transactions, for example "IF NOT EXISTS" queries.
>
> The differences between CAS operations and standard operations are
> significant and there are large repercussions for tunable consistency. The
> amount of time such an operation takes is greatly increased as well; you
> may need to increase your internal node-to-node timeouts .
>
> On Mon, Nov 2, 2015 at 8:01 PM, Ajay Garg <ajaygargn...@gmail.com> wrote:
>
>> Hi Eric,
>>
>> I am sorry, but I don't understand.
>>
>> If there had been some issue in the configuration, then the
>> consistency-issue would be seen everytime (I guess).
>> As of now, the error is seen sometimes (probably 30% of times).
>>
>> On Mon, Nov 2, 2015 at 10:24 PM, Eric Stevens <migh...@gmail.com> wrote:
>>
>>> Serial consistency gets invoked at the protocol level when doing
>>> lightweight transactions such as CAS operations.  If you're expecting that
>>> your topology is RF=2, N=2, it seems like some keyspace has RF=3, and so
>>> there aren't enough nodes available to satisfy serial consistency.
>>>
>>> See
>>> http://docs.datastax.com/en/cassandra/2.0/cassandra/dml/dml_ltwt_transaction_c.html
>>>
>>> On Mon, Nov 2, 2015 at 1:29 AM Ajay Garg <ajaygargn...@gmail.com> wrote:
>>>
>>>> Hi All.
>>>>
>>>> I have a 2*2 Network-Topology Replication setup, and I run my
>>>> application via DataStax-driver.
>>>>
>>>> I frequently get the errors of type ::
>>>> *Cassandra timeout during write query at consistency SERIAL (3 replica
>>>> were required but only 0 acknowledged the write)*
>>>>
>>>> I have already tried passing a "write-options with LOCAL_QUORUM
>>>> consistency-level" in all create/save statements, but I still get this
>>>> error.
>>>>
>>>> Does something else need to be changed in /etc/cassandra/cassandra.yaml
>>>> too?
>>>> Or may be some another place?
>>>>
>>>>
>>>> --
>>>> Regards,
>>>> Ajay
>>>>
>>>
>>
>>
>> --
>> Regards,
>> Ajay
>>
>
>


-- 
Regards,
Ajay


Doubt regarding consistency-level in Cassandra-2.1.10

2015-11-02 Thread Ajay Garg
Hi All.

I have a 2*2 Network-Topology Replication setup, and I run my application
via DataStax-driver.

I frequently get the errors of type ::
*Cassandra timeout during write query at consistency SERIAL (3 replica were
required but only 0 acknowledged the write)*

I have already tried passing a "write-options with LOCAL_QUORUM
consistency-level" in all create/save statements, but I still get this
error.

Does something else need to be changed in /etc/cassandra/cassandra.yaml too?
Or may be some another place?

-- 
Regards,
Ajay


Re: Doubt regarding consistency-level in Cassandra-2.1.10

2015-11-02 Thread Ajay Garg
Hi Eric,

I am sorry, but I don't understand.

If there had been some issue in the configuration, then the
consistency issue would be seen every time (I guess).
As of now, the error is seen only sometimes (probably 30% of the time).

On Mon, Nov 2, 2015 at 10:24 PM, Eric Stevens <migh...@gmail.com> wrote:

> Serial consistency gets invoked at the protocol level when doing
> lightweight transactions such as CAS operations.  If you're expecting that
> your topology is RF=2, N=2, it seems like some keyspace has RF=3, and so
> there aren't enough nodes available to satisfy serial consistency.
>
> See
> http://docs.datastax.com/en/cassandra/2.0/cassandra/dml/dml_ltwt_transaction_c.html
>
> On Mon, Nov 2, 2015 at 1:29 AM Ajay Garg <ajaygargn...@gmail.com> wrote:
>
>> Hi All.
>>
>> I have a 2*2 Network-Topology Replication setup, and I run my application
>> via DataStax-driver.
>>
>> I frequently get the errors of type ::
>> *Cassandra timeout during write query at consistency SERIAL (3 replica
>> were required but only 0 acknowledged the write)*
>>
>> I have already tried passing a "write-options with LOCAL_QUORUM
>> consistency-level" in all create/save statements, but I still get this
>> error.
>>
>> Does something else need to be changed in /etc/cassandra/cassandra.yaml
>> too?
>> Or may be some another place?
>>
>>
>> --
>> Regards,
>> Ajay
>>
>


-- 
Regards,
Ajay


Can consistency-levels be different for "read" and "write" in Datastax Java-Driver?

2015-10-26 Thread Ajay Garg
Right now, I have set up "LOCAL_QUORUM" as the consistency level in the
driver, but it seems that "SERIAL" is being used during writes, and I
consistently get errors of this type ::

*Cassandra timeout during write query at consistency SERIAL (3 replica were
required but only 0 acknowledged the write)*


Am I missing something?


-- 
Regards,
Ajay
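
Yes: the DataStax Java driver lets you set the consistency level per
statement, so reads and writes can use different levels. A sketch against
the 2.x driver API (the contact point, keyspace, table, and column names are
illustrative):

import com.datastax.driver.core.*;
import com.datastax.driver.core.querybuilder.QueryBuilder;

public class ConsistencyDemo {
    public static void main(String[] args) {
        Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
        Session session = cluster.connect();

        // Write at LOCAL_QUORUM.
        Statement write = QueryBuilder.insertInto("ks", "tbl")
                .value("id", 1).value("val", "x")
                .setConsistencyLevel(ConsistencyLevel.LOCAL_QUORUM);
        session.execute(write);

        // Read at LOCAL_ONE.
        Statement read = QueryBuilder.select().from("ks", "tbl")
                .where(QueryBuilder.eq("id", 1))
                .setConsistencyLevel(ConsistencyLevel.LOCAL_ONE);
        session.execute(read);

        cluster.close();
    }
}

Note that the SERIAL in the error above is the separate serial consistency
level used by lightweight transactions, not the statement's regular level;
see the "@Version" discussion elsewhere in this archive.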


Re: Is replication possible with already existing data?

2015-10-25 Thread Ajay Garg
Some more observations ::

a)
CAS11 and CAS12 are down, CAS21 and CAS22 up.
If I connect via the driver to the cluster using only CAS21 and CAS22 as
contact-points, even then the exception occurs.

b)
CAS11 down, CAS12 up, CAS21 and CAS22 up.
If I connect via the driver to the cluster using only CAS21 and CAS22 as
contact-points, then connection goes fine.

c)
CAS11 up, CAS12 down, CAS21 and CAS22 up.
If I connect via the driver to the cluster using only CAS21 and CAS22 as
contact-points, then connection goes fine.


It seems the java-driver always requires at least one of CAS11 or CAS12 to
be up (although the expectation is that the driver must work fine if ANY of
the 4 nodes is up).


Thoughts, experts !? :)



On Sat, Oct 24, 2015 at 9:40 PM, Ajay Garg <ajaygargn...@gmail.com> wrote:

> Ideas please, on what I may be doing wrong?
>
> On Sat, Oct 24, 2015 at 5:48 PM, Ajay Garg <ajaygargn...@gmail.com> wrote:
>
>> Hi All.
>>
>> I have been doing extensive testing, and replication works fine, even if
>> any permuatation of CAS11, CAS12, CAS21, CAS22 are downed and brought up.
>> Syncing always takes place (obviously, as long as continuous-downtime-value
>> does not exceed *max_hint_window_in_ms*).
>>
>>
>> However, things behave weird when I try connecting via DataStax
>> Java-Driver.
>> I always add the nodes to the cluster in the order ::
>>
>>  CAS11, CAS12, CAS21, CAS22
>>
>> during "cluster.connect" method.
>>
>>
>> Now, following happens ::
>>
>> a)
>> If CAS11 goes down, data is persisted fine (presumably first in CAS12,
>> and later replicated to CAS21 and CAS22).
>>
>> b)
>> If CAS11 and CAS12 go down, data is NOT persisted.
>> Instead the following exceptions are observed in the Java-Driver ::
>>
>>
>> ##
>> Exception in thread "main"
>> com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s)
>> tried for query failed (no host was tried)
>> at
>> com.datastax.driver.core.exceptions.NoHostAvailableException.copy(NoHostAvailableException.java:65)
>> at
>> com.datastax.driver.core.DefaultResultSetFuture.extractCauseFromExecutionException(DefaultResultSetFuture.java:258)
>> at com.datastax.driver.core.Cluster.connect(Cluster.java:267)
>> at com.example.cassandra.SimpleClient.connect(SimpleClient.java:43)
>> at
>> com.example.cassandra.SimpleClientTest.setUp(SimpleClientTest.java:50)
>> at
>> com.example.cassandra.SimpleClientTest.main(SimpleClientTest.java:86)
>> Caused by: com.datastax.driver.core.exceptions.NoHostAvailableException:
>> All host(s) tried for query failed (no host was tried)
>> at
>> com.datastax.driver.core.RequestHandler.sendRequest(RequestHandler.java:103)
>> at
>> com.datastax.driver.core.SessionManager.execute(SessionManager.java:446)
>> at
>> com.datastax.driver.core.SessionManager.executeQuery(SessionManager.java:482)
>> at
>> com.datastax.driver.core.SessionManager.executeAsync(SessionManager.java:88)
>> at
>> com.datastax.driver.core.AbstractSession.executeAsync(AbstractSession.java:60)
>> at com.datastax.driver.core.Cluster.connect(Cluster.java:260)
>> ... 3 more
>>
>> ###
>>
>>
>> I have already tried ::
>>
>> 1)
>> Increasing driver-read-timeout from 12 seconds to 30 seconds.
>>
>> 2)
>> Increasing driver-connect-timeout from 5 seconds to 30 seconds.
>>
>> 3)
>> I have also confirmed that each of the 4 nodes are telnet-able over ports
>> 9042 and 9160 each.
>>
>>
>> Definitely seems to be some driver-issue, since
>> data-persistence/replication works perfect (with any permutation) if
>> data-persistence is done via "cqlsh".
>>
>>
>> Kindly provide some pointers.
>> Ultimately, it is the Java-driver that will be used in production, so it
>> is imperative that data-persistence/replication happens for any downing of
>> any permutation of node(s).
>>
>>
>> Thanks and Regards,
>> Ajay
>>
>
>
>
> --
> Regards,
> Ajay
>



-- 
Regards,
Ajay


Re: Is replication possible with already existing data?

2015-10-25 Thread Ajay Garg
Bingo !!!

Using "LoadBalancingPolicy" did the trick.
Exactly what was needed !!!


Thanks and Regards,
Ajay

On Sun, Oct 25, 2015 at 5:52 PM, Ryan Svihla <r...@foundev.pro> wrote:

> Ajay,
>
> So It's the default driver behavior to pin requests to the first data
> center it connects to (DCAwareRoundRobin strategy). but let me explain why
> this is.
>
> I think you're thinking about data centers in Cassandra as a unit of
> failure, and while you can have say a rack fail, as you scale up and use
> rack awareness, it's rare you lose a whole "data center" in the sense
> you're thinking about, so lets reset a bit:
>
>1. If I'm designing a multidc architecture, usually the nature of
>latency I will not want my app servers connecting _across_ data centers.
>2. So since the common desire is not to magically have very high
>latency requests  bleed out to remote data centers, the default behavior of
>the driver is to pin to the first data center it connects too, you can
>change this with a different Load Balancing Policy (
>
> http://docs.datastax.com/en/drivers/java/2.0/com/datastax/driver/core/policies/LoadBalancingPolicy.html
>)
>3. However, I generally do NOT advise users connecting to an app
>server from another data center, since Cassandra is a masterless
>architecture you typically have issues that affect nodes, and not an entire
>data center and if they affect an entire data center (say the intra DC link
>is down) then it's going to affect your app server as well!
>
> So for new users, I typically just recommend pinning an app server to a DC
> and do your data center level switching further up. You can get more
> advanced and handle bleed out later, but you have to think of latencies.
>
> Final point, rely on repairs for your data consistency, hints are great
> and all but repair is how you make sure you're in sync.
>
> On Sun, Oct 25, 2015 at 3:10 AM, Ajay Garg <ajaygargn...@gmail.com> wrote:
>
>> Some more observations ::
>>
>> a)
>> CAS11 and CAS12 are down, CAS21 and CAS22 up.
>> If I connect via the driver to the cluster using only CAS21 and CAS22 as
>> contact-points, even then the exception occurs.
>>
>> b)
>> CAS11 down, CAS12 up, CAS21 and CAS22 up.
>> If I connect via the driver to the cluster using only CAS21 and CAS22 as
>> contact-points, then connection goes fine.
>>
>> c)
>> CAS11 up, CAS12 down, CAS21 and CAS22 up.
>> If I connect via the driver to the cluster using only CAS21 and CAS22 as
>> contact-points, then connection goes fine.
>>
>>
>> Seems the java-driver is kinda always requiring either one of CAS11 or
>> CAS12 to be up (although the expectation is that the driver must work fine
>> if ANY of the 4 nodes is up).
>>
>>
>> Thoughts, experts !? :)
>>
>>
>>
>> On Sat, Oct 24, 2015 at 9:40 PM, Ajay Garg <ajaygargn...@gmail.com>
>> wrote:
>>
>>> Ideas please, on what I may be doing wrong?
>>>
>>> On Sat, Oct 24, 2015 at 5:48 PM, Ajay Garg <ajaygargn...@gmail.com>
>>> wrote:
>>>
>>>> Hi All.
>>>>
>>>> I have been doing extensive testing, and replication works fine, even
>>>> if any permuatation of CAS11, CAS12, CAS21, CAS22 are downed and brought
>>>> up. Syncing always takes place (obviously, as long as
>>>> continuous-downtime-value does not exceed *max_hint_window_in_ms*).
>>>>
>>>>
>>>> However, things behave weird when I try connecting via DataStax
>>>> Java-Driver.
>>>> I always add the nodes to the cluster in the order ::
>>>>
>>>>  CAS11, CAS12, CAS21, CAS22
>>>>
>>>> during "cluster.connect" method.
>>>>
>>>>
>>>> Now, following happens ::
>>>>
>>>> a)
>>>> If CAS11 goes down, data is persisted fine (presumably first in CAS12,
>>>> and later replicated to CAS21 and CAS22).
>>>>
>>>> b)
>>>> If CAS11 and CAS12 go down, data is NOT persisted.
>>>> Instead the following exceptions are observed in the Java-Driver ::
>>>>
>>>>
>>>> ##
>>>> Exception in thread "main"
>>>> com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s)
>>>> tried for query failed (no host was tried)
>>>> at
>&g
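
The pinning behaviour described above is the default DCAwareRoundRobinPolicy,
which treats the first data center it connects to as local. Configuring it
explicitly, with remote hosts allowed as a fallback, looks roughly like this
with the 2.x driver (the DC name and host list are illustrative):

import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.policies.DCAwareRoundRobinPolicy;

public class ClusterSetup {
    public static void main(String[] args) {
        Cluster cluster = Cluster.builder()
                .addContactPoints("CAS11", "CAS12", "CAS21", "CAS22")
                // DC1 is local; use up to 2 hosts per remote DC as fallback.
                .withLoadBalancingPolicy(new DCAwareRoundRobinPolicy("DC1", 2))
                .build();
        cluster.connect();
    }
}

A further constructor flag, allowRemoteDCsForLocalConsistencyLevel, is needed
before LOCAL_* consistency levels will fail over to remote hosts; as Ryan
notes above, weigh the latency cost before enabling any of this.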

Re: Is replication possible with already existing data?

2015-10-24 Thread Ajay Garg
Ideas please, on what I may be doing wrong?

On Sat, Oct 24, 2015 at 5:48 PM, Ajay Garg <ajaygargn...@gmail.com> wrote:

> Hi All.
>
> I have been doing extensive testing, and replication works fine, even if
> any permuatation of CAS11, CAS12, CAS21, CAS22 are downed and brought up.
> Syncing always takes place (obviously, as long as continuous-downtime-value
> does not exceed *max_hint_window_in_ms*).
>
>
> However, things behave weird when I try connecting via DataStax
> Java-Driver.
> I always add the nodes to the cluster in the order ::
>
>  CAS11, CAS12, CAS21, CAS22
>
> during "cluster.connect" method.
>
>
> Now, following happens ::
>
> a)
> If CAS11 goes down, data is persisted fine (presumably first in CAS12, and
> later replicated to CAS21 and CAS22).
>
> b)
> If CAS11 and CAS12 go down, data is NOT persisted.
> Instead the following exceptions are observed in the Java-Driver ::
>
>
> ##
> Exception in thread "main"
> com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s)
> tried for query failed (no host was tried)
> at
> com.datastax.driver.core.exceptions.NoHostAvailableException.copy(NoHostAvailableException.java:65)
> at
> com.datastax.driver.core.DefaultResultSetFuture.extractCauseFromExecutionException(DefaultResultSetFuture.java:258)
> at com.datastax.driver.core.Cluster.connect(Cluster.java:267)
> at com.example.cassandra.SimpleClient.connect(SimpleClient.java:43)
> at
> com.example.cassandra.SimpleClientTest.setUp(SimpleClientTest.java:50)
> at
> com.example.cassandra.SimpleClientTest.main(SimpleClientTest.java:86)
> Caused by: com.datastax.driver.core.exceptions.NoHostAvailableException:
> All host(s) tried for query failed (no host was tried)
> at
> com.datastax.driver.core.RequestHandler.sendRequest(RequestHandler.java:103)
> at
> com.datastax.driver.core.SessionManager.execute(SessionManager.java:446)
> at
> com.datastax.driver.core.SessionManager.executeQuery(SessionManager.java:482)
> at
> com.datastax.driver.core.SessionManager.executeAsync(SessionManager.java:88)
> at
> com.datastax.driver.core.AbstractSession.executeAsync(AbstractSession.java:60)
> at com.datastax.driver.core.Cluster.connect(Cluster.java:260)
> ... 3 more
>
> ###
>
>
> I have already tried ::
>
> 1)
> Increasing driver-read-timeout from 12 seconds to 30 seconds.
>
> 2)
> Increasing driver-connect-timeout from 5 seconds to 30 seconds.
>
> 3)
> I have also confirmed that each of the 4 nodes are telnet-able over ports
> 9042 and 9160 each.
>
>
> Definitely seems to be some driver-issue, since
> data-persistence/replication works perfect (with any permutation) if
> data-persistence is done via "cqlsh".
>
>
> Kindly provide some pointers.
> Ultimately, it is the Java-driver that will be used in production, so it
> is imperative that data-persistence/replication happens for any downing of
> any permutation of node(s).
>
>
> Thanks and Regards,
> Ajay
>



-- 
Regards,
Ajay


Re: Downtime-Limit for a node in Network-Topology-Replication-Cluster?

2015-10-24 Thread Ajay Garg
Never mind Vasileios, you have been a great help !!
Thanks a ton again !!!


Thanks and Regards,
Ajay

On Sat, Oct 24, 2015 at 10:17 PM, Vasileios Vlachos <
vasileiosvlac...@gmail.com> wrote:

> I am not sure I fully understand the question, because nodetool repair is
> one of the three ways for Cassandra to ensure consistency. If by "affect"
> you mean "make your data consistent and ensure all replicas are
> up-to-date", then yes, that's what I think it does.
>
> And yes, I would expect nodetool repair (especially depending on the
> options appended to it) to have a performance impact, but how big that
> impact is going to be depends on many things.
>
> We currently perform no scheduled repairs because of our workload and the
> consistency level that we use. So, as you can understand I am certainly not
> the best person to analyse that bit...
>
> Regards,
> Vasilis
>
> On Sat, Oct 24, 2015 at 5:09 PM, Ajay Garg <ajaygargn...@gmail.com> wrote:
>
>> Thanks a ton Vasileios !!
>>
>> Just one last question ::
>> Does running "nodetool repair" affect the functionality of cluster for
>> current-live data?
>>
>> It's ok if the insertions/deletions of current-live data become a little
>> slow during the process, but data-consistency must be maintained. If that
>> is the case, I think we are good.
>>
>>
>> Thanks and Regards,
>> Ajay
>>
>> On Sat, Oct 24, 2015 at 6:03 PM, Vasileios Vlachos <
>> vasileiosvlac...@gmail.com> wrote:
>>
>>> Hello Ajay,
>>>
>>> Here is a good link:
>>>
>>> http://docs.datastax.com/en/cassandra/2.1/cassandra/operations/opsRepairNodesManualRepair.html
>>>
>>> Generally, I find the DataStax docs to be OK. You could consult them for
>>> all usual operations etc. Ofc there are occasions where a given concept is
>>> not as clear, but you can always ask this list for clarification.
>>>
>>> If you find that something is wrong in the docs just email them (more
>>> info and contact email here: http://docs.datastax.com/en/ ).
>>>
>>> Regards,
>>> Vasilis
>>>
>>> On Sat, Oct 24, 2015 at 1:04 PM, Ajay Garg <ajaygargn...@gmail.com>
>>> wrote:
>>>
>>>> Thanks Vasileios for the reply !!!
>>>> That makes sense !!!
>>>>
>>>> I will be grateful if you could point me to the node-repair command for
>>>> Cassandra-2.1.10.
>>>> I don't want to get stuck in a wrong-versioned documentation (already
>>>> bitten once hard when setting up replication).
>>>>
>>>> Thanks again...
>>>>
>>>>
>>>> Thanks and Regards,
>>>> Ajay
>>>>
>>>> On Sat, Oct 24, 2015 at 4:14 PM, Vasileios Vlachos <
>>>> vasileiosvlac...@gmail.com> wrote:
>>>>
>>>>> Hello Ajay,
>>>>>
>>>>> Have a look in the *max_hint_window_in_ms* :
>>>>>
>>>>>
>>>>> http://docs.datastax.com/en/cassandra/2.0/cassandra/configuration/configCassandra_yaml_r.html
>>>>>
>>>>> My understanding is that if a node remains down for more than
>>>>> *max_hint_window_in_ms*, then you will need to repair that node.
>>>>>
>>>>> Thanks,
>>>>> Vasilis
>>>>>
>>>>> On Sat, Oct 24, 2015 at 7:48 AM, Ajay Garg <ajaygargn...@gmail.com>
>>>>> wrote:
>>>>>
>>>>>> If a node in the cluster goes down and comes up, the data gets synced
>>>>>> up on this downed node.
>>>>>> Is there a limit on the interval for which the node can remain down?
>>>>>> Or the data will be synced up even if the node remains down for
>>>>>> weeks/months/years?
>>>>>>
>>>>>>
>>>>>>
>>>>>> --
>>>>>> Regards,
>>>>>> Ajay
>>>>>>
>>>>>
>>>>>
>>>>
>>>>
>>>> --
>>>> Regards,
>>>> Ajay
>>>>
>>>
>>>
>>
>>
>> --
>> Regards,
>> Ajay
>>
>
>


-- 
Regards,
Ajay


Re: Downtime-Limit for a node in Network-Topology-Replication-Cluster?

2015-10-24 Thread Ajay Garg
Thanks a ton Vasileios !!

Just one last question ::
Does running "nodetool repair" affect the functionality of cluster for
current-live data?

It's ok if the insertions/deletions of current-live data become a little
slow during the process, but data-consistency must be maintained. If that
is the case, I think we are good.


Thanks and Regards,
Ajay

On Sat, Oct 24, 2015 at 6:03 PM, Vasileios Vlachos <
vasileiosvlac...@gmail.com> wrote:

> Hello Ajay,
>
> Here is a good link:
>
> http://docs.datastax.com/en/cassandra/2.1/cassandra/operations/opsRepairNodesManualRepair.html
>
> Generally, I find the DataStax docs to be OK. You could consult them for
> all usual operations etc. Ofc there are occasions where a given concept is
> not as clear, but you can always ask this list for clarification.
>
> If you find that something is wrong in the docs just email them (more info
> and contact email here: http://docs.datastax.com/en/ ).
>
> Regards,
> Vasilis
>
> On Sat, Oct 24, 2015 at 1:04 PM, Ajay Garg <ajaygargn...@gmail.com> wrote:
>
>> Thanks Vasileios for the reply !!!
>> That makes sense !!!
>>
>> I will be grateful if you could point me to the node-repair command for
>> Cassandra-2.1.10.
>> I don't want to get stuck in a wrong-versioned documentation (already
>> bitten once hard when setting up replication).
>>
>> Thanks again...
>>
>>
>> Thanks and Regards,
>> Ajay
>>
>> On Sat, Oct 24, 2015 at 4:14 PM, Vasileios Vlachos <
>> vasileiosvlac...@gmail.com> wrote:
>>
>>> Hello Ajay,
>>>
>>> Have a look in the *max_hint_window_in_ms* :
>>>
>>>
>>> http://docs.datastax.com/en/cassandra/2.0/cassandra/configuration/configCassandra_yaml_r.html
>>>
>>> My understanding is that if a node remains down for more than
>>> *max_hint_window_in_ms*, then you will need to repair that node.
>>>
>>> Thanks,
>>> Vasilis
>>>
>>> On Sat, Oct 24, 2015 at 7:48 AM, Ajay Garg <ajaygargn...@gmail.com>
>>> wrote:
>>>
>>>> If a node in the cluster goes down and comes up, the data gets synced
>>>> up on this downed node.
>>>> Is there a limit on the interval for which the node can remain down? Or
>>>> the data will be synced up even if the node remains down for
>>>> weeks/months/years?
>>>>
>>>>
>>>>
>>>> --
>>>> Regards,
>>>> Ajay
>>>>
>>>
>>>
>>
>>
>> --
>> Regards,
>> Ajay
>>
>
>


-- 
Regards,
Ajay


Some questions about setting public/private IP-Addresses in Cassandra Cluster

2015-10-24 Thread Ajay Garg
Hi All.

We have a scenario, where the Application-Server (APP), Node-1 (CAS11), and
Node-2 (CAS12) are hosted in DC1.
Node-3 (CAS21) and Node-4 (CAS22) are in DC2.

The intention is that we provide 4-way redundancy to APP, by specifying
CAS11, CAS12, CAS21 and CAS22 as the addresses via Java-Cassandra-connector.
That means, as long as at least one of the 4 nodes is up, the APP should
work.

We are using Network-Topology, with Murmur3Partitioning.
Each Cassandra-Node has two IPs :: one public, and one
private-within-the-same-data-center.


Following are our IP-Addresses configuration ::

a)
Everywhere in "cassandra-topology.properties", we have specified
Public-IP-Addresses of all 4 nodes.

b)
In each of "listen_address" in /etc/cassandra/cassandra.yaml, we have
specified the corresponding Public-IP-Address of the node.

c)
For CAS11 and CAS12, we have specified the corresponding private-IP-Address
for "rpc_address" in /etc/cassandra/cassandra.yaml (since APP is hosted in
the same data-center).
For CAS21 and CAS22, we have specified the corresponding public-IP-Address
for "rpc_address" in /etc/cassandra/cassandra.yaml (since APP can only
communicate over public IP-Addresses with these nodes).


Are any further optimizations possible, in the sense that there are more
places where specifying private IP-addresses would work?
I ask because we need to minimize network latency, so using private
IP-addresses wherever possible would help in this regard.
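
One direction that looks promising (a sketch only; addresses are
placeholders, and whether same-DC peers actually use the private route
depends on the snitch, e.g. "prefer_local" with GossipingPropertyFileSnitch) ::

# per node, in cassandra.yaml
listen_address: <private-ip-of-this-node>
broadcast_address: <public-ip-of-this-node>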


Thanks and Regards,
Ajay


Downtime-Limit for a node in Network-Topology-Replication-Cluster?

2015-10-24 Thread Ajay Garg
If a node in the cluster goes down and comes up, the data gets synced up on
this downed node.
Is there a limit on the interval for which the node can remain down? Or the
data will be synced up even if the node remains down for weeks/months/years?
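
For reference, the hint-related setting in our cassandra.yaml that looks
relevant here (the value shown is the default, i.e. 3 hours) ::

max_hint_window_in_ms: 10800000 # 3 hours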



-- 
Regards,
Ajay


Re: Downtime-Limit for a node in Network-Topology-Replication-Cluster?

2015-10-24 Thread Ajay Garg
Thanks Vasileios for the reply !!!
That makes sense !!!

I will be grateful if you could point me to the node-repair command for
Cassandra-2.1.10.
I don't want to get stuck in a wrong-versioned documentation (already
bitten once hard when setting up replication).

Thanks again...


Thanks and Regards,
Ajay

On Sat, Oct 24, 2015 at 4:14 PM, Vasileios Vlachos <
vasileiosvlac...@gmail.com> wrote:

> Hello Ajay,
>
> Have a look in the *max_hint_window_in_ms* :
>
>
> http://docs.datastax.com/en/cassandra/2.0/cassandra/configuration/configCassandra_yaml_r.html
>
> My understanding is that if a node remains down for more than
> *max_hint_window_in_ms*, then you will need to repair that node.
>
> Thanks,
> Vasilis
>
> On Sat, Oct 24, 2015 at 7:48 AM, Ajay Garg <ajaygargn...@gmail.com> wrote:
>
>> If a node in the cluster goes down and comes up, the data gets synced up
>> on this downed node.
>> Is there a limit on the interval for which the node can remain down? Or
>> the data will be synced up even if the node remains down for
>> weeks/months/years?
>>
>>
>>
>> --
>> Regards,
>> Ajay
>>
>
>


-- 
Regards,
Ajay


Re: Is replication possible with already existing data?

2015-10-24 Thread Ajay Garg
Hi All.

I have been doing extensive testing, and replication works fine, even if
any permutation of CAS11, CAS12, CAS21, CAS22 is downed and brought up.
Syncing always takes place (obviously, as long as continuous-downtime-value
does not exceed *max_hint_window_in_ms*).


However, things behave weird when I try connecting via DataStax Java-Driver.
I always add the nodes to the cluster in the order ::

 CAS11, CAS12, CAS21, CAS22

during "cluster.connect" method.


Now, following happens ::

a)
If CAS11 goes down, data is persisted fine (presumably first in CAS12, and
later replicated to CAS21 and CAS22).

b)
If CAS11 and CAS12 go down, data is NOT persisted.
Instead the following exceptions are observed in the Java-Driver ::

##
Exception in thread "main"
com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s)
tried for query failed (no host was tried)
at
com.datastax.driver.core.exceptions.NoHostAvailableException.copy(NoHostAvailableException.java:65)
at
com.datastax.driver.core.DefaultResultSetFuture.extractCauseFromExecutionException(DefaultResultSetFuture.java:258)
at com.datastax.driver.core.Cluster.connect(Cluster.java:267)
at com.example.cassandra.SimpleClient.connect(SimpleClient.java:43)
at
com.example.cassandra.SimpleClientTest.setUp(SimpleClientTest.java:50)
at com.example.cassandra.SimpleClientTest.main(SimpleClientTest.java:86)
Caused by: com.datastax.driver.core.exceptions.NoHostAvailableException:
All host(s) tried for query failed (no host was tried)
at
com.datastax.driver.core.RequestHandler.sendRequest(RequestHandler.java:103)
at
com.datastax.driver.core.SessionManager.execute(SessionManager.java:446)
at
com.datastax.driver.core.SessionManager.executeQuery(SessionManager.java:482)
at
com.datastax.driver.core.SessionManager.executeAsync(SessionManager.java:88)
at
com.datastax.driver.core.AbstractSession.executeAsync(AbstractSession.java:60)
at com.datastax.driver.core.Cluster.connect(Cluster.java:260)
... 3 more
###


I have already tried ::

1)
Increasing driver-read-timeout from 12 seconds to 30 seconds.

2)
Increasing driver-connect-timeout from 5 seconds to 30 seconds.

3)
I have also confirmed that each of the 4 nodes is telnet-able over ports
9042 and 9160.


Definitely seems to be some driver-issue, since
data-persistence/replication works perfectly (with any permutation) if
data-persistence is done via "cqlsh".
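
One check that seems relevant here (addresses below are placeholders): the
Java driver discovers the rest of the cluster from the system.peers table of
whichever contact point it reaches first, not from the list of nodes given
to the builder, so every advertised rpc_address must be reachable from APP ::

# What the cluster advertises to drivers:
cqlsh <CAS21-public-ip> -e "SELECT peer, rpc_address FROM system.peers;"

# Confirm each advertised address is reachable from the APP host:
nc -vz <each-advertised-rpc-address> 9042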


Kindly provide some pointers.
Ultimately, it is the Java-driver that will be used in production, so it is
imperative that data-persistence/replication keeps working when any
permutation of node(s) goes down.


Thanks and Regards,
Ajay


Re: Is replication possible with already existing data?

2015-10-23 Thread Ajay Garg
Any ideas, please?
To repeat, we are using the exact same cassandra-version on all 4 nodes
(2.1.10).

On Fri, Oct 23, 2015 at 9:43 AM, Ajay Garg <ajaygargn...@gmail.com> wrote:

> Hi Michael.
>
> Please find below the contents of cassandra.yaml for CAS11 (the files on
> the rest of the three nodes are also exactly the same, except the
> "initial_token" and "listen_address" fields) ::
>
> CAS11 ::
>
> 
> cluster_name: 'InstaMsg Cluster'
> num_tokens: 256
> initial_token: -9223372036854775808
> hinted_handoff_enabled: true
> max_hint_window_in_ms: 10800000 # 3 hours
> hinted_handoff_throttle_in_kb: 1024
> max_hints_delivery_threads: 2
> batchlog_replay_throttle_in_kb: 1024
> authenticator: AllowAllAuthenticator
> authorizer: AllowAllAuthorizer
> permissions_validity_in_ms: 2000
> partitioner: org.apache.cassandra.dht.Murmur3Partitioner
> data_file_directories:
> - /var/lib/cassandra/data
>
> commitlog_directory: /var/lib/cassandra/commitlog
>
> disk_failure_policy: stop
> commit_failure_policy: stop
> key_cache_size_in_mb:
> key_cache_save_period: 14400
> row_cache_size_in_mb: 0
> row_cache_save_period: 0
> counter_cache_size_in_mb:
> counter_cache_save_period: 7200
> saved_caches_directory: /var/lib/cassandra/saved_caches
> commitlog_sync: periodic
> commitlog_sync_period_in_ms: 10000
> commitlog_segment_size_in_mb: 32
> seed_provider:
> - class_name: org.apache.cassandra.locator.SimpleSeedProvider
>   parameters:
>   - seeds: "104.239.200.33,119.9.92.77"
>
> concurrent_reads: 32
> concurrent_writes: 32
> concurrent_counter_writes: 32
>
> memtable_allocation_type: heap_buffers
>
> index_summary_capacity_in_mb:
> index_summary_resize_interval_in_minutes: 60
> trickle_fsync: false
> trickle_fsync_interval_in_kb: 10240
> storage_port: 7000
> ssl_storage_port: 7001
> listen_address: 104.239.200.33
> start_native_transport: true
> native_transport_port: 9042
> start_rpc: true
> rpc_address: localhost
> rpc_port: 9160
> rpc_keepalive: true
>
> rpc_server_type: sync
> thrift_framed_transport_size_in_mb: 15
> incremental_backups: false
> snapshot_before_compaction: false
> auto_snapshot: true
>
> tombstone_warn_threshold: 1000
> tombstone_failure_threshold: 100000
>
> column_index_size_in_kb: 64
> batch_size_warn_threshold_in_kb: 5
>
> compaction_throughput_mb_per_sec: 16
> compaction_large_partition_warning_threshold_mb: 100
>
> sstable_preemptive_open_interval_in_mb: 50
>
> read_request_timeout_in_ms: 5000
> range_request_timeout_in_ms: 10000
>
> write_request_timeout_in_ms: 2000
> counter_write_request_timeout_in_ms: 5000
> cas_contention_timeout_in_ms: 1000
> truncate_request_timeout_in_ms: 60000
> request_timeout_in_ms: 10000
> cross_node_timeout: false
> endpoint_snitch: PropertyFileSnitch
>
> dynamic_snitch_update_interval_in_ms: 100
> dynamic_snitch_reset_interval_in_ms: 600000
> dynamic_snitch_badness_threshold: 0.1
>
> request_scheduler: org.apache.cassandra.scheduler.NoScheduler
>
> server_encryption_options:
> internode_encryption: none
> keystore: conf/.keystore
> keystore_password: cassandra
> truststore: conf/.truststore
> truststore_password: cassandra
>
> client_encryption_options:
> enabled: false
> keystore: conf/.keystore
> keystore_password: cassandra
>
> internode_compression: all
> inter_dc_tcp_nodelay: false
> 
>
>
> What changes need to be made, so that whenever a downed server comes back
> up, the missing data comes back over to it?
>
> Thanks and Regards,
> Ajay
>
>
>
> On Fri, Oct 23, 2015 at 9:05 AM, Michael Shuler <mich...@pbandjelly.org>
> wrote:
>
>> On 10/22/2015 10:14 PM, Ajay Garg wrote:
>>
>>> However, CAS11 refuses to come up now.
>>> Following is the error in /var/log/cassandra/system.log ::
>>>
>>>
>>> 
>>> ERROR [main] 2015-10-23 03:07:34,242 CassandraDaemon.java:391 - Fatal
>>> configuration error
>>> org.apache.cassandra.exceptions.ConfigurationException: Cannot change
>>> the number of tokens from 1 to 256
>>>
>>
>> Check your cassandra.yaml - this node has vnodes enabled in the
>> configuration when it did not, previously. Check all nodes. Something
>> changed. Mixed vnode/non-vnode clusters is bad juju.
>>
>> --
>> Kind regards,
>> Michael
>>
>
>
>
> --
> Regards,
> Ajay
>



-- 
Regards,
Ajay


Re: Is replication possible with already existing data?

2015-10-23 Thread Ajay Garg
Thanks Steve and Michael.

Simply commenting out "initial_token" did the trick !!!
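
Concretely, the token-related part of cassandra.yaml now reads (on this
clean install) ::

num_tokens: 256
# initial_token: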

Right now, I was evaluating replication, for the case when everything is a
clean install.
Will now try my hands on integrating/starting replication, with
pre-existing data.


Once again, thanks a ton for all the help guys !!!


Thanks and Regards,
Ajay

On Sat, Oct 24, 2015 at 2:06 AM, Steve Robenalt <sroben...@highwire.org>
wrote:

> Hi Ajay,
>
> Please take a look at the cassandra.yaml configuration reference regarding
> intial_token and num_tokens:
>
>
> http://docs.datastax.com/en/cassandra/2.1/cassandra/configuration/configCassandra_yaml_r.html?scroll=reference_ds_qfg_n1r_1k__initial_token
>
> This is basically what Michael was referring to in his earlier message.
> Setting an initial token overrode your num_tokens setting on initial
> startup, but after initial startup, the initial token setting is ignored,
> so num_tokens comes into play, attempting to start up with 256 vnodes.
> That's where your error comes from.
>
> It's likely that all of your nodes started up like this since you have the
> same config on all of them (hopefully, you at least changed initial_token
> for each node).
>
> After reviewing the doc on the two sections above, you'll need to decide
> which path to take to recover. You can likely bring the downed node up by
> setting num_tokens to 1 (which you'd need to do on all nodes), in which
> case you're not really running vnodes. Alternately, you can migrate the
> cluster to vnodes:
>
>
> http://docs.datastax.com/en/cassandra/2.1/cassandra/configuration/configVnodesProduction_t.html
>
> BTW, I recommend carefully reviewing the cassandra.yaml configuration
> reference for ANY change you make from the default. As you've experienced
> here, not all settings are intended to work together.
>
> HTH,
> Steve
>
>
>
> On Fri, Oct 23, 2015 at 12:07 PM, Ajay Garg <ajaygargn...@gmail.com>
> wrote:
>
>> Any ideas, please?
>> To repeat, we are using the exact same cassandra-version on all 4 nodes
>> (2.1.10).
>>
>> On Fri, Oct 23, 2015 at 9:43 AM, Ajay Garg <ajaygargn...@gmail.com>
>> wrote:
>>
>>> Hi Michael.
>>>
>>> Please find below the contents of cassandra.yaml for CAS11 (the files on
>>> the rest of the three nodes are also exactly the same, except the
>>> "initial_token" and "listen_address" fields) ::
>>>
>>> CAS11 ::
>>>
>>>
>>>
>>> What changes need to be made, so that whenever a downed server comes
>>> back up, the missing data comes back over to it?
>>>
>>> Thanks and Regards,
>>> Ajay
>>>
>>>
>>>
>>> On Fri, Oct 23, 2015 at 9:05 AM, Michael Shuler <mich...@pbandjelly.org>
>>> wrote:
>>>
>>>> On 10/22/2015 10:14 PM, Ajay Garg wrote:
>>>>
>>>>> However, CAS11 refuses to come up now.
>>>>> Following is the error in /var/log/cassandra/system.log ::
>>>>>
>>>>>
>>>>> 
>>>>> ERROR [main] 2015-10-23 03:07:34,242 CassandraDaemon.java:391 - Fatal
>>>>> configuration error
>>>>> org.apache.cassandra.exceptions.ConfigurationException: Cannot change
>>>>> the number of tokens from 1 to 256
>>>>>
>>>>
>>>> Check your cassandra.yaml - this node has vnodes enabled in the
>>>> configuration when it did not, previously. Check all nodes. Something
>>>> changed. Mixed vnode/non-vnode clusters is bad juju.
>>>>
>>>> --
>>>> Kind regards,
>>>> Michael
>>>>
>>>
>>>
>>>
>>> --
>>> Regards,
>>> Ajay
>>>
>>
>>
>>
>> --
>> Regards,
>> Ajay
>>
>
>
>
> --
> Steve Robenalt
> Software Architect
> sroben...@highwire.org <bza...@highwire.org>
> (office/cell): 916-505-1785
>
> HighWire Press, Inc.
> 425 Broadway St, Redwood City, CA 94063
> www.highwire.org
>
> Technology for Scholarly Communication
>



-- 
Regards,
Ajay


Re: Is replication possible with already existing data?

2015-10-22 Thread Ajay Garg
Hi Carlos.


I setup a following setup ::

CAS11 and CAS12 in DC1
CAS21 and CAS22 in DC2

a)
Brought all the 4 up, replication worked perfect !!!

b)
Thereafter, downed CAS11 via "sudo service cassandra stop".
Replication continued to work fine on CAS12, CAS21 and CAS22.

c)
Thereafter, upped CAS11 via "sudo service cassandra start".


However, CAS11 refuses to come up now.
Following is the error in /var/log/cassandra/system.log ::



ERROR [main] 2015-10-23 03:07:34,242 CassandraDaemon.java:391 - Fatal
configuration error
org.apache.cassandra.exceptions.ConfigurationException: Cannot change the
number of tokens from 1 to 256
at
org.apache.cassandra.service.StorageService.joinTokenRing(StorageService.java:966)
~[apache-cassandra-2.1.10.jar:2.1.10]
at
org.apache.cassandra.service.StorageService.initServer(StorageService.java:734)
~[apache-cassandra-2.1.10.jar:2.1.10]
at
org.apache.cassandra.service.StorageService.initServer(StorageService.java:611)
~[apache-cassandra-2.1.10.jar:2.1.10]
at
org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:387)
[apache-cassandra-2.1.10.jar:2.1.10]
at
org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:562)
[apache-cassandra-2.1.10.jar:2.1.10]
at
org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:651)
[apache-cassandra-2.1.10.jar:2.1.10]
INFO  [StorageServiceShutdownHook] 2015-10-23 03:07:34,271
Gossiper.java:1442 - Announcing shutdown
INFO  [GossipStage:1] 2015-10-23 03:07:34,282 OutboundTcpConnection.java:97
- OutboundTcpConnection using coalescing strategy DISABLED
ERROR [StorageServiceShutdownHook] 2015-10-23 03:07:34,305
CassandraDaemon.java:227 - Exception in thread
Thread[StorageServiceShutdownHook,5,main]
java.lang.NullPointerException: null
at
org.apache.cassandra.service.StorageService.getApplicationStateValue(StorageService.java:1624)
~[apache-cassandra-2.1.10.jar:2.1.10]
at
org.apache.cassandra.service.StorageService.getTokensFor(StorageService.java:1632)
~[apache-cassandra-2.1.10.jar:2.1.10]
at
org.apache.cassandra.service.StorageService.handleStateNormal(StorageService.java:1686)
~[apache-cassandra-2.1.10.jar:2.1.10]
at
org.apache.cassandra.service.StorageService.onChange(StorageService.java:1510)
~[apache-cassandra-2.1.10.jar:2.1.10]
at
org.apache.cassandra.gms.Gossiper.doOnChangeNotifications(Gossiper.java:1182)
~[apache-cassandra-2.1.10.jar:2.1.10]
at
org.apache.cassandra.gms.Gossiper.addLocalApplicationStateInternal(Gossiper.java:1412)
~[apache-cassandra-2.1.10.jar:2.1.10]
at
org.apache.cassandra.gms.Gossiper.addLocalApplicationStates(Gossiper.java:1427)
~[apache-cassandra-2.1.10.jar:2.1.10]
at
org.apache.cassandra.gms.Gossiper.addLocalApplicationState(Gossiper.java:1417)
~[apache-cassandra-2.1.10.jar:2.1.10]
at org.apache.cassandra.gms.Gossiper.stop(Gossiper.java:1443)
~[apache-cassandra-2.1.10.jar:2.1.10]
at
org.apache.cassandra.service.StorageService$1.runMayThrow(StorageService.java:678)
~[apache-cassandra-2.1.10.jar:2.1.10]
at
org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)
~[apache-cassandra-2.1.10.jar:2.1.10]
at java.lang.Thread.run(Thread.java:745) ~[na:1.8.0_60]



Ideas?


Thanks and Regards,
Ajay



On Mon, Oct 12, 2015 at 3:46 PM, Carlos Alonso <i...@mrcalonso.com> wrote:

> Yes Ajay, in your particular scenario, after all hints are delivered, both
> CAS11 and CAS12 will have the exact same data.
>
> Cheers!
>
> Carlos Alonso | Software Engineer | @calonso <https://twitter.com/calonso>
>
> On 11 October 2015 at 05:21, Ajay Garg <ajaygargn...@gmail.com> wrote:
>
>> Thanks a ton Anuja for the help !!!
>>
>> On Fri, Oct 9, 2015 at 12:38 PM, anuja jain <anujaja...@gmail.com> wrote:
>> > Hi Ajay,
>> >
>> >
>> > On Fri, Oct 9, 2015 at 9:00 AM, Ajay Garg <ajaygargn...@gmail.com>
>> wrote:
>> >>
>> > In this case, it will be the responsibility of APP1 to start connection
>> to
>> > CAS12. On the other hand if your APP1 is connecting to cassandra using
>> Java
>> > driver, you can add multiple contact points(CAS11 and CAS12 here) so
>> that if
>> > CAS11 is down it will directly connect to CAS12.
>>
>> Great .. Java-driver it will be :)
>>
>>
>>
>>
>> >>
>> > In such a case, CAS12 will store hints for the data to be stored on
>> CAS11
>> > (the tokens of which lies within the range of tokens CAS11 holds)  and
>> > whenever CAS11 is up again, the hints will be transferred to it and the
>> data
>> > will be distributed evenly.

Re: Is replication possible with already existing data?

2015-10-22 Thread Ajay Garg
Hi Michael.

Please find below the contents of cassandra.yaml for CAS11 (the files on
the rest of the three nodes are also exactly the same, except the
"initial_token" and "listen_address" fields) ::

CAS11 ::


cluster_name: 'InstaMsg Cluster'
num_tokens: 256
initial_token: -9223372036854775808
hinted_handoff_enabled: true
max_hint_window_in_ms: 10800000 # 3 hours
hinted_handoff_throttle_in_kb: 1024
max_hints_delivery_threads: 2
batchlog_replay_throttle_in_kb: 1024
authenticator: AllowAllAuthenticator
authorizer: AllowAllAuthorizer
permissions_validity_in_ms: 2000
partitioner: org.apache.cassandra.dht.Murmur3Partitioner
data_file_directories:
- /var/lib/cassandra/data

commitlog_directory: /var/lib/cassandra/commitlog

disk_failure_policy: stop
commit_failure_policy: stop
key_cache_size_in_mb:
key_cache_save_period: 14400
row_cache_size_in_mb: 0
row_cache_save_period: 0
counter_cache_size_in_mb:
counter_cache_save_period: 7200
saved_caches_directory: /var/lib/cassandra/saved_caches
commitlog_sync: periodic
commitlog_sync_period_in_ms: 10000
commitlog_segment_size_in_mb: 32
seed_provider:
- class_name: org.apache.cassandra.locator.SimpleSeedProvider
  parameters:
  - seeds: "104.239.200.33,119.9.92.77"

concurrent_reads: 32
concurrent_writes: 32
concurrent_counter_writes: 32

memtable_allocation_type: heap_buffers

index_summary_capacity_in_mb:
index_summary_resize_interval_in_minutes: 60
trickle_fsync: false
trickle_fsync_interval_in_kb: 10240
storage_port: 7000
ssl_storage_port: 7001
listen_address: 104.239.200.33
start_native_transport: true
native_transport_port: 9042
start_rpc: true
rpc_address: localhost
rpc_port: 9160
rpc_keepalive: true

rpc_server_type: sync
thrift_framed_transport_size_in_mb: 15
incremental_backups: false
snapshot_before_compaction: false
auto_snapshot: true

tombstone_warn_threshold: 1000
tombstone_failure_threshold: 100000

column_index_size_in_kb: 64
batch_size_warn_threshold_in_kb: 5

compaction_throughput_mb_per_sec: 16
compaction_large_partition_warning_threshold_mb: 100

sstable_preemptive_open_interval_in_mb: 50

read_request_timeout_in_ms: 5000
range_request_timeout_in_ms: 10000

write_request_timeout_in_ms: 2000
counter_write_request_timeout_in_ms: 5000
cas_contention_timeout_in_ms: 1000
truncate_request_timeout_in_ms: 60000
request_timeout_in_ms: 10000
cross_node_timeout: false
endpoint_snitch: PropertyFileSnitch

dynamic_snitch_update_interval_in_ms: 100
dynamic_snitch_reset_interval_in_ms: 600000
dynamic_snitch_badness_threshold: 0.1

request_scheduler: org.apache.cassandra.scheduler.NoScheduler

server_encryption_options:
internode_encryption: none
keystore: conf/.keystore
keystore_password: cassandra
truststore: conf/.truststore
truststore_password: cassandra

client_encryption_options:
enabled: false
keystore: conf/.keystore
keystore_password: cassandra

internode_compression: all
inter_dc_tcp_nodelay: false



What changes need to be made, so that whenever a downed server comes back
up, the missing data comes back over to it?

Thanks and Regards,
Ajay



On Fri, Oct 23, 2015 at 9:05 AM, Michael Shuler <mich...@pbandjelly.org>
wrote:

> On 10/22/2015 10:14 PM, Ajay Garg wrote:
>
>> However, CAS11 refuses to come up now.
>> Following is the error in /var/log/cassandra/system.log ::
>>
>>
>> 
>> ERROR [main] 2015-10-23 03:07:34,242 CassandraDaemon.java:391 - Fatal
>> configuration error
>> org.apache.cassandra.exceptions.ConfigurationException: Cannot change
>> the number of tokens from 1 to 256
>>
>
> Check your cassandra.yaml - this node has vnodes enabled in the
> configuration when it did not, previously. Check all nodes. Something
> changed. Mixed vnode/non-vnode clusters is bad juju.
>
> --
> Kind regards,
> Michael
>



-- 
Regards,
Ajay


Re: Is replication possible with already existing data?

2015-10-10 Thread Ajay Garg
Thanks a ton Anuja for the help !!!

On Fri, Oct 9, 2015 at 12:38 PM, anuja jain <anujaja...@gmail.com> wrote:
> Hi Ajay,
>
>
> On Fri, Oct 9, 2015 at 9:00 AM, Ajay Garg <ajaygargn...@gmail.com> wrote:
>>
> In this case, it will be the responsibility of APP1 to start connection to
> CAS12. On the other hand if your APP1 is connecting to cassandra using Java
> driver, you can add multiple contact points(CAS11 and CAS12 here) so that if
> CAS11 is down it will directly connect to CAS12.

Great .. Java-driver it will be :)




>>
> In such a case, CAS12 will store hints for the data to be stored on CAS11
> (the tokens of which lies within the range of tokens CAS11 holds)  and
> whenever CAS11 is up again, the hints will be transferred to it and the data
> will be distributed evenly.
>

Evenly?

Should not the data be """EXACTLY""" equal after CAS11 comes back up
and the sync/transfer/whatever happens?
After all, before CAS11 went down, CAS11 and CAS12 were replicating all data.


Once again, thanks for your help.
I will be even more grateful if you would help me clear the lingering
doubt to second point.


Thanks and Regards,
Ajay


Re: Is replication possible with already existing data?

2015-10-08 Thread Ajay Garg
On Thu, Oct 8, 2015 at 9:47 AM, Ajay Garg <ajaygargn...@gmail.com> wrote:
> Thanks Eric for the reply.
>
>
> On Thu, Oct 8, 2015 at 1:44 AM, Eric Stevens <migh...@gmail.com> wrote:
>> If you're at 1 node (N=1) and RF=1 now, and you want to go N=3 RF=3, you
>> ought to be able to increase RF to 3 before bootstrapping your new nodes,
>> with no downtime and no loss of data (even temporary).  Effective RF is
>> min-bounded by N, so temporarily having RF > N ought to behave as RF = N.
>>
>> If you're starting at N > RF and you want to increase RF, things get
>> harrier
>> if you can't afford temporary consistency issues.
>>
>
> We are ok with temporary consistency issues.
>
> Also, I was going through the following articles
> https://10kloc.wordpress.com/2012/12/27/cassandra-chapter-5-data-replication-strategies/
>
> and following doubts came up in my mind ::
>
>
> a)
> Let's say at site-1, Application-Server (APP1) uses the two
> Cassandra-instances (CAS11 and CAS12), and APP1 generally uses CAS11 for all
> its needs (of course, whatever happens on CAS11, the same is replicated to
> CAS12 at Cassandra-level).
>
> Now, if CAS11 goes down, will it be the responsibility of APP1 to "detect"
> this and pick up CAS12 for its needs?
> Or some automatic Cassandra-magic will happen?
>
>
> b)
> In the same above scenario, let's say before CAS11 goes down, the amount of
> data in both CAS11 and CAS12 was "x".
>
> After CAS11 goes down, the data is being put in CAS12 only.
> After some time, CAS11 comes back up.
>
> Now, data in CAS11 is still "x", while data in CAS12 is "y" (obviously,
> "y" > "x").
>
> Now, will the additional ("y" - "x") data be automatically
> put/replicated/whatever back in CAS11 through Cassandra?
> Or it has to be done manually?
>

Any pointers, please ???

>
> If there are easy recommended solutions to above, I am beginning to think
> that a 2*2 (2 nodes each at 2 data-centres) will be the ideal setup
> (allowing failures of entire site, or a few nodes on the same site).
>
> I am sorry for asking such newbie questions, and I will be grateful if these
> silly questions could be answered by the experts :)
>
>
> Thanks and Regards,
> Ajay



-- 
Regards,
Ajay


Is replication possible with already existing data?

2015-10-07 Thread Ajay Garg
Hi All.

We have a scenario, where till now we had been using a plain, simple
single node, with the keyspace created using ::

CREATE KEYSPACE our_db WITH replication = {'class': 'SimpleStrategy',
'replication_factor': '1'}  AND durable_writes = true;


We now plan to introduce replication (in the true sense) in our scheme
of things, but cannot afford to lose any data.
We, however can take a bit of downtime, and do any data-migration if
required (we have already done data-migration once in the past, when
we moved our plain, simple single node from one physical machine to
another).


So,

a)
Is it possible at all to introduce replication in our scenario?
If yes, what needs to be done to NOT LOSE our current existing data?

b)
Also, will "NetworkTopologyStrategy" work in our scenario (since
NetworkTopologyStrategy seems to be more robust)?
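
For what it's worth, the kind of statement I imagine (b) would need, once a
second data-centre exists (a sketch only; DC names and per-DC counts are
placeholders) ::

ALTER KEYSPACE our_db WITH replication =
{'class': 'NetworkTopologyStrategy', 'DC1': 2, 'DC2': 2};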


Brief pointers to above will give huge confidence-boosts in our endeavours.


Thanks and Regards,
Ajay


Re: Is replication possible with already existing data?

2015-10-07 Thread Ajay Garg
Hi Sean.

Thanks for the reply.

On Wed, Oct 7, 2015 at 10:13 PM, Sean Durity wrote:
> How many nodes are you planning to add?

I guess 2 more.

> How many replicas do you want?

1 (original) + 2 (replicas).
That makes it a total of 3 copies of every row of data.



> In general, there shouldn't be a problem adding nodes and then altering the 
> keyspace to change replication.

Great !!
I guess 
http://docs.datastax.com/en/cql/3.0/cql/cql_reference/alter_keyspace_r.html
will do the trick for changing schema-replication-details !!


> You will want to run repairs to stream the data to the new replicas.

Hmm.. we'll be really grateful if you could point us to a suitable
link for the above step.
If there is a nice-utility, we would be perfectly set up to start our
fun-exercise, consisting of the following steps ::

a)
(As advised by you) Changing the schema, to allow a replication_factor of 3.

b)
(As advised by you) Duplicating the already-existing-data on the other 2 nodes.

c)
Thereafter, let Cassandra create a total of 3 copies for every row of
new-incoming-data.
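
Putting (a) and (b) into concrete commands (a sketch only; keyspace name
taken from our schema, RF as above) ::

cqlsh -e "ALTER KEYSPACE our_db WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 3};"

# then, on each node in turn:
nodetool repair our_db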


Once again, thanks a ton for the help !!


Thanks and Regards,
Ajay


> You shouldn't need downtime or data migration -- this is the beauty of
> Cassandra.




>
>
> Sean Durity – Lead Cassandra Admin
>

> 
>



-- 
Regards,
Ajay


Re: Possible to restore ENTIRE data from Cassandra-Schema in one go?

2015-09-15 Thread Ajay Garg
Thanks Mam for the reply.

I guess there is manual work needed to bring all the SSTable files
into one directory, so it doesn't really solve the purpose. So,
going the "vanilla" way might be simpler :)
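
If we ever revisit it, a simple loop over the per-table directories might
avoid the manual work (a sketch only; the default data path is assumed, and
<keyspace> / <target-node-ip> are placeholders) ::

for d in /var/lib/cassandra/data/<keyspace>/*/ ; do
    sstableloader -d <target-node-ip> "$d"
done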

Thanks anyways for the help !!!

Thanks and Regards,
Ajay

On Tue, Sep 15, 2015 at 11:34 AM, Neha Dave <nehajtriv...@gmail.com> wrote:
> Haven't used it, but you can try the SSTable Bulk Loader:
>
> http://docs.datastax.com/en/cassandra/2.0/cassandra/tools/toolsBulkloader_t.html
>
> regards
> Neha
>
> On Tue, Sep 15, 2015 at 11:21 AM, Ajay Garg <ajaygargn...@gmail.com> wrote:
>>
>> Hi All.
>>
>> We have a schema on one Cassandra-node, and wish to duplicate the
>> entire schema on another server.
>> Think of this a 2 clusters, each cluster containing one node.
>>
>> We have found the way to dump/restore schema-metainfo at ::
>>
>> https://dzone.com/articles/dumpingloading-schema
>>
>>
>> And dumping/restoring data at ::
>>
>>
>> http://docs.datastax.com/en/cassandra/2.0/cassandra/operations/ops_backup_takes_snapshot_t.html
>>
>> http://docs.datastax.com/en/cassandra/2.0/cassandra/operations/ops_backup_snapshot_restore_t.html
>>
>>
>> For the restoring data step, it seems that restoring every "table"
>> requires a dedicated step.
>> So, if the schema has 100 "tables", we would need 100 steps.
>>
>>
>> Is it so? If yes, can the entire data be dumped/restored in one go?
>> Just asking, to save time, if it could :)
>>
>>
>>
>>
>> Thanks and Regards,
>> Ajay
>
>



-- 
Regards,
Ajay


Re: Getting intermittent errors while taking snapshot

2015-09-15 Thread Ajay Garg
Hi All.

Granting full permissions on the keyspace folder
(/var/lib/cassandra/data/instamsg) fixed the issue.
Now, multiple successive snapshot commands run to completion fine.


sudo chmod -R 777 /var/lib/cassandra/data/instamsg
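
A narrower fix, assuming the service runs as the "cassandra" user, would be
to hand ownership back instead of opening the permissions up ::

sudo chown -R cassandra:cassandra /var/lib/cassandra/data/instamsg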



Thanks and Regards,
Ajay

On Tue, Sep 15, 2015 at 12:04 PM, Ajay Garg <ajaygargn...@gmail.com> wrote:
> Hi All.
>
> Taking snapshots sometimes works, sometimes doesn't.
> Following is the stacktrace whenever the process fails ::
>
>
> ##
> ajay@ajay-HP-15-Notebook-PC:/var/lib/cassandra/data/instamsg$ nodetool
> -h localhost -p 7199 snapshot instamsg
> Requested creating snapshot(s) for [instamsg] with snapshot name [1442298538121]
> error: 
> /var/lib/cassandra/data/instamsg/clients-b32f01b02eec11e5866887c3880d7c45/snapshots/1442298538121/instamsg-clients-ka-15-TOC.txt
> -> 
> /var/lib/cassandra/data/instamsg/clients-b32f01b02eec11e5866887c3880d7c45/instamsg-clients-ka-15-TOC.txt:
> Operation not permitted
> -- StackTrace --
> java.nio.file.FileSystemException:
> /var/lib/cassandra/data/instamsg/clients-b32f01b02eec11e5866887c3880d7c45/snapshots/1442298538121/instamsg-clients-ka-15-TOC.txt
> -> 
> /var/lib/cassandra/data/instamsg/clients-b32f01b02eec11e5866887c3880d7c45/instamsg-clients-ka-15-TOC.txt:
> Operation not permitted
> at sun.nio.fs.UnixException.translateToIOException(UnixException.java:91)
> at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:102)
> at 
> sun.nio.fs.UnixFileSystemProvider.createLink(UnixFileSystemProvider.java:476)
> at java.nio.file.Files.createLink(Files.java:1086)
> at 
> org.apache.cassandra.io.util.FileUtils.createHardLink(FileUtils.java:94)
> at 
> org.apache.cassandra.io.sstable.SSTableReader.createLinks(SSTableReader.java:1842)
> at 
> org.apache.cassandra.db.ColumnFamilyStore.snapshotWithoutFlush(ColumnFamilyStore.java:2279)
> at 
> org.apache.cassandra.db.ColumnFamilyStore.snapshot(ColumnFamilyStore.java:2361)
> at 
> org.apache.cassandra.db.ColumnFamilyStore.snapshot(ColumnFamilyStore.java:2355)
> at org.apache.cassandra.db.Keyspace.snapshot(Keyspace.java:207)
> at 
> org.apache.cassandra.service.StorageService.takeSnapshot(StorageService.java:2388)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:497)
> at sun.reflect.misc.Trampoline.invoke(MethodUtil.java:71)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:497)
> at sun.reflect.misc.MethodUtil.invoke(MethodUtil.java:275)
> at 
> com.sun.jmx.mbeanserver.StandardMBeanIntrospector.invokeM2(StandardMBeanIntrospector.java:112)
> at 
> com.sun.jmx.mbeanserver.StandardMBeanIntrospector.invokeM2(StandardMBeanIntrospector.java:46)
> at 
> com.sun.jmx.mbeanserver.MBeanIntrospector.invokeM(MBeanIntrospector.java:237)
> at com.sun.jmx.mbeanserver.PerInterface.invoke(PerInterface.java:138)
> at com.sun.jmx.mbeanserver.MBeanSupport.invoke(MBeanSupport.java:252)
> at 
> com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.invoke(DefaultMBeanServerInterceptor.java:819)
> at com.sun.jmx.mbeanserver.JmxMBeanServer.invoke(JmxMBeanServer.java:801)
> at 
> javax.management.remote.rmi.RMIConnectionImpl.doOperation(RMIConnectionImpl.java:1466)
> at 
> javax.management.remote.rmi.RMIConnectionImpl.access$300(RMIConnectionImpl.java:76)
> at 
> javax.management.remote.rmi.RMIConnectionImpl$PrivilegedOperation.run(RMIConnectionImpl.java:1307)
> at 
> javax.management.remote.rmi.RMIConnectionImpl.doPrivilegedOperation(RMIConnectionImpl.java:1399)
> at 
> javax.management.remote.rmi.RMIConnectionImpl.invoke(RMIConnectionImpl.java:828)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:497)
> at sun.rmi.server.UnicastServerRef.dispatch(UnicastServerRef.java:323)
>  

Getting intermittent errors while taking snapshot

2015-09-15 Thread Ajay Garg
Hi All.

Taking snapshots sometimes works, sometimes doesn't.
Following is the stacktrace whenever the process fails ::


##
ajay@ajay-HP-15-Notebook-PC:/var/lib/cassandra/data/instamsg$ nodetool
-h localhost -p 7199 snapshot instamsg
Requested creating snapshot(s) for [instamsg] with snapshot name [1442298538121]
error: 
/var/lib/cassandra/data/instamsg/clients-b32f01b02eec11e5866887c3880d7c45/snapshots/1442298538121/instamsg-clients-ka-15-TOC.txt
-> 
/var/lib/cassandra/data/instamsg/clients-b32f01b02eec11e5866887c3880d7c45/instamsg-clients-ka-15-TOC.txt:
Operation not permitted
-- StackTrace --
java.nio.file.FileSystemException:
/var/lib/cassandra/data/instamsg/clients-b32f01b02eec11e5866887c3880d7c45/snapshots/1442298538121/instamsg-clients-ka-15-TOC.txt
-> 
/var/lib/cassandra/data/instamsg/clients-b32f01b02eec11e5866887c3880d7c45/instamsg-clients-ka-15-TOC.txt:
Operation not permitted
at sun.nio.fs.UnixException.translateToIOException(UnixException.java:91)
at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:102)
at 
sun.nio.fs.UnixFileSystemProvider.createLink(UnixFileSystemProvider.java:476)
at java.nio.file.Files.createLink(Files.java:1086)
at org.apache.cassandra.io.util.FileUtils.createHardLink(FileUtils.java:94)
at 
org.apache.cassandra.io.sstable.SSTableReader.createLinks(SSTableReader.java:1842)
at 
org.apache.cassandra.db.ColumnFamilyStore.snapshotWithoutFlush(ColumnFamilyStore.java:2279)
at 
org.apache.cassandra.db.ColumnFamilyStore.snapshot(ColumnFamilyStore.java:2361)
at 
org.apache.cassandra.db.ColumnFamilyStore.snapshot(ColumnFamilyStore.java:2355)
at org.apache.cassandra.db.Keyspace.snapshot(Keyspace.java:207)
at 
org.apache.cassandra.service.StorageService.takeSnapshot(StorageService.java:2388)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at sun.reflect.misc.Trampoline.invoke(MethodUtil.java:71)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at sun.reflect.misc.MethodUtil.invoke(MethodUtil.java:275)
at 
com.sun.jmx.mbeanserver.StandardMBeanIntrospector.invokeM2(StandardMBeanIntrospector.java:112)
at 
com.sun.jmx.mbeanserver.StandardMBeanIntrospector.invokeM2(StandardMBeanIntrospector.java:46)
at 
com.sun.jmx.mbeanserver.MBeanIntrospector.invokeM(MBeanIntrospector.java:237)
at com.sun.jmx.mbeanserver.PerInterface.invoke(PerInterface.java:138)
at com.sun.jmx.mbeanserver.MBeanSupport.invoke(MBeanSupport.java:252)
at 
com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.invoke(DefaultMBeanServerInterceptor.java:819)
at com.sun.jmx.mbeanserver.JmxMBeanServer.invoke(JmxMBeanServer.java:801)
at 
javax.management.remote.rmi.RMIConnectionImpl.doOperation(RMIConnectionImpl.java:1466)
at 
javax.management.remote.rmi.RMIConnectionImpl.access$300(RMIConnectionImpl.java:76)
at 
javax.management.remote.rmi.RMIConnectionImpl$PrivilegedOperation.run(RMIConnectionImpl.java:1307)
at 
javax.management.remote.rmi.RMIConnectionImpl.doPrivilegedOperation(RMIConnectionImpl.java:1399)
at 
javax.management.remote.rmi.RMIConnectionImpl.invoke(RMIConnectionImpl.java:828)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at sun.rmi.server.UnicastServerRef.dispatch(UnicastServerRef.java:323)
at sun.rmi.transport.Transport$1.run(Transport.java:200)
at sun.rmi.transport.Transport$1.run(Transport.java:197)
at java.security.AccessController.doPrivileged(Native Method)
at sun.rmi.transport.Transport.serviceCall(Transport.java:196)
at sun.rmi.transport.tcp.TCPTransport.handleMessages(TCPTransport.java:568)
at 
sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run0(TCPTransport.java:826)
at 
sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.lambda$run$251(TCPTransport.java:683)
at 
sun.rmi.transport.tcp.TCPTransport$ConnectionHandler$$Lambda$1/13812661.run(Unknown
Source)
at java.security.AccessController.doPrivileged(Native Method)
at 
sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run(TCPTransport.java:682)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at 

Re: Not able to cqlsh on 2.1.9 on Ubuntu 14.04

2015-09-14 Thread Ajay Garg
Hi All.

Thanks for your replies.

a)
cqlsh <node IP> does not work either :(


b)
Following are the parameters as asked ::

listen_address: localhost
rpc_address: localhost

broadcast_rpc_address is not set.
According to the yaml file ::

# RPC address to broadcast to drivers and other Cassandra nodes. This cannot
# be set to 0.0.0.0. If left blank, this will be set to the value of
# rpc_address. If rpc_address is set to 0.0.0.0, broadcast_rpc_address must
# be set.
# broadcast_rpc_address: 1.2.3.4


c)
Following is the netstat-output, with process information ::

###
ajay@comp:~$ sudo netstat -apn | grep 9042
[sudo] password for admin:
tcp6   0  0 127.0.0.1:9042  :::*
LISTEN  10169/java
###


Kindly let me know what else we can try .. it is really driving us nuttsss :(

On Mon, Sep 14, 2015 at 9:40 PM, Jared Biel
<jared.b...@bolderthinking.com> wrote:
> Whoops, I accidentally pressed a hotkey and sent my message prematurely.
> Here's what netstat should look like with those settings:
>
> sudo netstat -apn | grep 9042
> tcp6   0  0 0.0.0.0:9042:::*LISTEN
> 21248/java
>
> -Jared
>
> On 14 September 2015 at 16:09, Jared Biel <jared.b...@bolderthinking.com>
> wrote:
>>
>> I assume "@ Of node" is ethX's IP address? Has cassandra been restarted
>> since changes were made to cassandra.yaml? The netstat output that you
>> posted doesn't look right; we use settings similar to what you've posted.
>> Here's what it looks like on one of our nodes.
>>
>>
>> -Jared
>>
>> On 14 September 2015 at 10:34, Ahmed Eljami <ahmed.elj...@gmail.com>
>> wrote:
>>>
>>> In cassandra.yaml:
>>> listen_address: <IP of node>
>>> rpc_address: 0.0.0.0
>>>
>>> broadcast_rpc_address: <IP of node>
>>>
>>> 2015-09-14 11:31 GMT+01:00 Neha Dave <nehajtriv...@gmail.com>:
>>>>
>>>> Try
>>>> >cqlsh <node IP>
>>>>
>>>> regards
>>>> Neha
>>>>
>>>> On Mon, Sep 14, 2015 at 3:53 PM, Ajay Garg <ajaygargn...@gmail.com>
>>>> wrote:
>>>>>
>>>>> Hi All.
>>>>>
>>>>> We have setup a Ubuntu-14.04 server, and followed the steps exactly as
>>>>> per http://wiki.apache.org/cassandra/DebianPackaging
>>>>>
>>>>> Installation completes fine, Cassandra starts fine, however cqlsh does
>>>>> not work.
>>>>> We get the error ::
>>>>>
>>>>>
>>>>> ###
>>>>> ajay@comp:~$ cqlsh
>>>>> Connection error: ('Unable to connect to any servers', {'127.0.0.1':
>>>>> error(None, "Tried connecting to [('127.0.0.1', 9042)]. Last error:
>>>>> None")})
>>>>>
>>>>> ###
>>>>>
>>>>>
>>>>>
>>>>> Version-Info ::
>>>>>
>>>>>
>>>>> ###
>>>>> ajay@comp:~$ dpkg -l | grep cassandra
>>>>> ii  cassandra   2.1.9
>>>>>  all  distributed storage system for structured data
>>>>>
>>>>> ###
>>>>>
>>>>>
>>>>>
>>>>> The port "seems" to be opened fine.
>>>>>
>>>>>
>>>>> ###
>>>>> ajay@comp:~$ netstat -an | grep 9042
>>>>> tcp6   0  0 127.0.0.1:9042  :::*
>>>>> LISTEN
>>>>>
>>>>> ###
>>>>>
>>>>>
>>>>>
>>>>> Firewall-filters ::
>>>>>
>>>>>
>>>>> ###
>>>>> ajay@comp:~$ sudo

Re: Not able to cqlsh on 2.1.9 on Ubuntu 14.04

2015-09-14 Thread Ajay Garg
Hi All.

I re-established my server from scratch, and installed the 2.1.x server.
Now, cqlsh works right out of the box.

When I had last set up the server, I had (accidentally) installed the
2.0.x server on the first attempt, removed it, and then installed the 2.1.x
series server. Seems that caused some hidden problem.
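
For anyone who lands here later, the quick sanity checks worth running after
a clean install ::

nodetool status
cqlsh -e "SELECT release_version FROM system.local;"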


I am heartily grateful to everyone for bearing with me.


Thanks and Regards,
Ajay

On Tue, Sep 15, 2015 at 10:16 AM, Ajay Garg <ajaygargn...@gmail.com> wrote:
> Hi Jared.
>
> Thanks for your help.
>
> I made the config-changes.
> Also, I changed the seed (right now, we are just trying to get one
> instance up and running) ::
>
> 
> seed_provider:
> # Addresses of hosts that are deemed contact points.
> # Cassandra nodes use this list of hosts to find each other and learn
> # the topology of the ring.  You must change this if you are running
> # multiple nodes!
> - class_name: org.apache.cassandra.locator.SimpleSeedProvider
>   parameters:
>   # seeds is actually a comma-delimited list of addresses.
>   # Ex: "<ip1>,<ip2>,<ip3>"
>   - seeds: "our.ip.address.here"
> 
>
>
>
>
> Following is the netstat output ::
>
> 
> ajay@comp:~$ sudo netstat -apn | grep 9042
> tcp6   0  0 0.0.0.0:9042:::*
> LISTEN  22469/java
> 
>
>
>
> Still, when I try, we get ::
>
> 
> ajay@comp:~$ cqlsh our.ip.address.here
> Connection error: ('Unable to connect to any servers',
> {'our.ip.address.here': error(None, "Tried connecting to
> [('our.ip.address.here', 9042)]. Last error: None")})
> 
>
>
> :( :(
>
> On Mon, Sep 14, 2015 at 11:00 PM, Jared Biel
> <jared.b...@bolderthinking.com> wrote:
>> Is there a reason that you're setting listen_address and rpc_address to
>> localhost?
>>
>> listen_address doc: "the Right Thing is to use the address associated with
>> the hostname". So, set the IP address of this to eth0 for example. I believe
>> if it is set to localhost then you won't be able to form a cluster with
>> other nodes.
>>
>> rpc_address: this is the address to which clients will connect. I recommend
>> 0.0.0.0 here so clients can connect to IP address of the server as well as
>> localhost if they happen to reside on the same instance.
>>
>>
>> Here are all of the address settings from our config file. 192.168.1.10 is
>> the IP address of eth0 and broadcast_address is commented out.
>>
>> listen_address: 192.168.1.10
>> # broadcast_address: 1.2.3.4
>> rpc_address: 0.0.0.0
>> broadcast_rpc_address: 192.168.1.10
>>
>> Follow these directions to get up and running with the first node
>> (destructive process):
>>
>> 1. Stop cassandra
>> 2. Remove data from cassandra var directory (rm -rf /var/lib/cassandra/*)
>> 3. Make above changes to config file. Also set seeds to the eth0 IP address
>> 4. Start cassandra
>> 5. Set seeds in config file back to "" after cassandra is up and running.
>>
>> After following that process, you'll be able to connect to the node from any
>> host that can reach Cassandra's ports on that node ("cqlsh" command will
>> work.) To join more nodes to the cluster, follow the same steps as
>> above, except set the seeds value to the IP address of an already running node.
>>
>> Regarding the empty "seeds" config entry: our configs are automated with
>> configuration management. During the node bootstrap process a script
>> performs the above. The reason that we set seeds back to empty is that we
>> don't want nodes coming up/down to cause the config file to change and thus
>> cassandra to restart needlessly. So far we haven't had any issues with seeds
>> being set to empty after a node has joined the cluster, but this may not be
>> the recommended way of doing things.
>>
>> -Jared
>>
>> On 14 September 2015 at 16:46, Ajay Garg <ajaygargn...@gmail.com> wrote:
>>>
>>> Hi All.
>>>
>>> Thanks for your replies.
>>>
>>> a)
>>> cqlsh <node IP> does not work either :(
>>>
>>>
>>> b)
>>> Following are the parameters as asked ::
>>>
>>> listen_address: localhost
>>> rpc_address: localhost

Re: Not able to cqlsh on 2.1.9 on Ubuntu 14.04

2015-09-14 Thread Ajay Garg
Hi Jared.

Thanks for your help.

I made the config-changes.
Also, I changed the seed (right now, we are just trying to get one
instance up and running) ::


seed_provider:
# Addresses of hosts that are deemed contact points.
# Cassandra nodes use this list of hosts to find each other and learn
# the topology of the ring.  You must change this if you are running
# multiple nodes!
- class_name: org.apache.cassandra.locator.SimpleSeedProvider
  parameters:
  # seeds is actually a comma-delimited list of addresses.
  # Ex: "<ip1>,<ip2>,<ip3>"
  - seeds: "our.ip.address.here"





Following is the netstat output ::


ajay@comp:~$ sudo netstat -apn | grep 9042
tcp6   0  0 0.0.0.0:9042:::*
LISTEN  22469/java




Still, when I try, we get ::


ajay@comp:~$ cqlsh our.ip.address.here
Connection error: ('Unable to connect to any servers',
{'our.ip.address.here': error(None, "Tried connecting to
[('our.ip.address.here', 9042)]. Last error: None")})



:( :(

On Mon, Sep 14, 2015 at 11:00 PM, Jared Biel
<jared.b...@bolderthinking.com> wrote:
> Is there a reason that you're setting listen_address and rpc_address to
> localhost?
>
> listen_address doc: "the Right Thing is to use the address associated with
> the hostname". So, set the IP address of this to eth0 for example. I believe
> if it is set to localhost then you won't be able to form a cluster with
> other nodes.
>
> rpc_address: this is the address to which clients will connect. I recommend
> 0.0.0.0 here so clients can connect to IP address of the server as well as
> localhost if they happen to reside on the same instance.
>
>
> Here are all of the address settings from our config file. 192.168.1.10 is
> the IP address of eth0 and broadcast_address is commented out.
>
> listen_address: 192.168.1.10
> # broadcast_address: 1.2.3.4
> rpc_address: 0.0.0.0
> broadcast_rpc_address: 192.168.1.10
>
> Follow these directions to get up and running with the first node
> (destructive process):
>
> 1. Stop cassandra
> 2. Remove data from cassandra var directory (rm -rf /var/lib/cassandra/*)
> 3. Make above changes to config file. Also set seeds to the eth0 IP address
> 4. Start cassandra
> 5. Set seeds in config file back to "" after cassandra is up and running.
>
> After following that process, you'll be able to connect to the node from any
> host that can reach Cassandra's ports on that node ("cqlsh" command will
> work.) To join more nodes to the cluster, follow the same steps as
> above, except set the seeds value to the IP address of an already running node.
>
> Regarding the empty "seeds" config entry: our configs are automated with
> configuration management. During the node bootstrap process a script
> performs the above. The reason that we set seeds back to empty is that we
> don't want nodes coming up/down to cause the config file to change and thus
> cassandra to restart needlessly. So far we haven't had any issues with seeds
> being set to empty after a node has joined the cluster, but this may not be
> the recommended way of doing things.
>
> -Jared
>
> On 14 September 2015 at 16:46, Ajay Garg <ajaygargn...@gmail.com> wrote:
>>
>> Hi All.
>>
>> Thanks for your replies.
>>
>> a)
>> cqlsh <node IP> does not work either :(
>>
>>
>> b)
>> Following are the parameters as asked ::
>>
>> listen_address: localhost
>> rpc_address: localhost
>>
>> broadcast_rpc_address is not set.
>> According to the yaml file ::
>>
>> # RPC address to broadcast to drivers and other Cassandra nodes. This
>> cannot
>> # be set to 0.0.0.0. If left blank, this will be set to the value of
>> # rpc_address. If rpc_address is set to 0.0.0.0, broadcast_rpc_address
>> must
>> # be set.
>> # broadcast_rpc_address: 1.2.3.4
>>
>>
>> c)
>> Following is the netstat-output, with process information ::
>>
>>
>> ###
>> ajay@comp:~$ sudo netstat -apn | grep 9042
>> [sudo] password for admin:
>> tcp6   0  0 127.0.0.1:9042  :::*
>> LISTEN  10169/java
>>
>> 

Possible to restore ENTIRE data from Cassandra-Schema in one go?

2015-09-14 Thread Ajay Garg
Hi All.

We have a schema on one Cassandra-node, and wish to duplicate the
entire schema on another server.
Think of this a 2 clusters, each cluster containing one node.

We have found the way to dump/restore schema-metainfo at ::

https://dzone.com/articles/dumpingloading-schema


And dumping/restoring data at ::

http://docs.datastax.com/en/cassandra/2.0/cassandra/operations/ops_backup_takes_snapshot_t.html
http://docs.datastax.com/en/cassandra/2.0/cassandra/operations/ops_backup_snapshot_restore_t.html


For the restoring data step, it seems that restoring every "table"
requires a dedicated step.
So, if the schema has 100 "tables", we would need 100 steps.


Is it so? If yes, can the entire data be dumped/restored in one go?
Just asking, to save time, if it could :)




Thanks and Regards,
Ajay


Test Subject

2015-09-14 Thread Ajay Garg
Testing simple content, as my previous email bounced :(

-- 
Regards,
Ajay


Not able to cqlsh on 2.1.9 on Ubuntu 14.04

2015-09-14 Thread Ajay Garg
Hi All.

We have setup a Ubuntu-14.04 server, and followed the steps exactly as
per http://wiki.apache.org/cassandra/DebianPackaging

Installation completes fine, Cassandra starts fine, however cqlsh does not work.
We get the error ::

###
ajay@comp:~$ cqlsh
Connection error: ('Unable to connect to any servers', {'127.0.0.1':
error(None, "Tried connecting to [('127.0.0.1', 9042)]. Last error:
None")})
###



Version-Info ::

###
ajay@comp:~$ dpkg -l | grep cassandra
ii  cassandra   2.1.9
 all  distributed storage system for structured data
###



The port "seems" to be opened fine.

###
ajay@comp:~$ netstat -an | grep 9042
tcp6   0  0 127.0.0.1:9042  :::*LISTEN
###



Firewall-filters ::

###
ajay@comp:~$ sudo iptables -L
[sudo] password for ajay:
Chain INPUT (policy ACCEPT)
target prot opt source   destination
ACCEPT all  --  anywhere anywhere state
RELATED,ESTABLISHED
ACCEPT tcp  --  anywhere anywhere tcp dpt:ssh
DROP   all  --  anywhere anywhere

Chain FORWARD (policy ACCEPT)
target prot opt source   destination

Chain OUTPUT (policy ACCEPT)
target prot opt source   destination
###



Even telnet fails :(

###
ajay@comp:~$ telnet localhost 9042
Trying 127.0.0.1...
###



Any ideas please?? We have been stuck on this for a good 3 hours now :(



Thanks and Regards,
Ajay