upgrading cassandra

2019-02-05 Thread Adil
Hi,
we have a cluster running Cassandra 3.0.9 and we are going to upgrade it to
the latest version, which I think is 3.11.3, but a teammate told me that the
latest version is 3.0.17.
What is the latest stable version?

thanks in advance.


Re: Error while read after upgrade from 2.2.7 to 3.0.8

2016-10-02 Thread Adil
Hi,
That means that some clients are closing the connection; have you upgraded all
of the clients?

On 30 Sep 2016 at 14:25, "Oleg Krayushkin"  wrote:

> Hi,
>
> Since the upgrade from Cassandra version 2.2.7 to 3.0.8 we're getting the
> following error every few minutes on every node. For the node at
> 173.170.147.120 the error in system.log would be:
>
> INFO  [SharedPool-Worker-4] 2016-09-30 10:26:39,068 Message.java:605
> - Unexpected exception during request; channel = [id: 0xfd64cd67,
> /173.170.147.120:50660 :> /18.4.63.191:9042]
> java.io.IOException: Error while read(...): Connection reset by peer
> at io.netty.channel.epoll.Native.readAddress(Native Method) 
> ~[netty-all-4.0.23.Final.jar:4.0.23.Final]
> at 
> io.netty.channel.epoll.EpollSocketChannel$EpollSocketUnsafe.doReadBytes(EpollSocketChannel.java:675)
>  ~[netty-all-4.0.23.Final.jar:4.0.23.Final]
> at 
> io.netty.channel.epoll.EpollSocketChannel$EpollSocketUnsafe.epollInReady(EpollSocketChannel.java:714)
>  ~[netty-all-4.0.23.Final.jar:4.0.23.Final]
> at 
> io.netty.channel.epoll.EpollEventLoop.processReady(EpollEventLoop.java:326) 
> ~[netty-all-4.0.23.Final.jar:4.0.23.Final]
> at io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:264) 
> ~[netty-all-4.0.23.Final.jar:4.0.23.Final]
> at 
> io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:116)
>  ~[netty-all-4.0.23.Final.jar:4.0.23.Final]
> at 
> io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:137)
>  ~[netty-all-4.0.23.Final.jar:4.0.23.Final]
> at java.lang.Thread.run(Thread.java:745) [na:1.8.0_91]
>
> As far as I can see, all such errors always contain a channel of the form
> [id: <...>, /<address>:<port> :> /<address>:<native transport_port>].
> Also, the broadcast_address and listen_address always belong to the current
> node's addresses.
>
> What are the possible reasons for such errors, and how can I fix them? Any
> thoughts would be appreciated.
>


Re: migrating from 2.1.2 to 3.0.8 log errors

2016-08-17 Thread Adil
Just to share with you: running rebuild_index solved the problem.
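
For reference, the rebuild is typically triggered per node with something along
the lines of

    nodetool rebuild_index <keyspace> <table> <index_name>

where the keyspace, table, and index names are placeholders for the actual ones.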

2016-08-11 22:05 GMT+02:00 Adil <adil.cha...@gmail.com>:

> After migrating C* from 2.1.2 to 3.0.8, all queries whose WHERE
> condition involves an indexed column return zero rows for the old data,
> while newly inserted data is returned by the same queries. I'm guessing
> that something about the indexes was left incomplete; should we rebuild the
> indexes? Any idea?
> Thanks,
> Ad.
>
> On 10 Aug 2016 at 23:58, "Adil" <adil.cha...@gmail.com> wrote:
>
>> Thank you for your response. We have updated the DataStax driver to 3.1.0
>> using the V3 protocol; I think there are still some webapps that are still
>> using the 2.1.6 Java driver, and we will upgrade them. But we noticed
>> something strange: on the web apps upgraded to 3.1.0 some queries return
>> zero results even though the data exists and I can see it with cqlsh.
>>
>> 2016-08-10 20:48 GMT+02:00 Tyler Hobbs <ty...@datastax.com>:
>>
>>> That just means that a client/driver disconnected.  Those log messages
>>> are supposed to be suppressed, but perhaps that stopped working in 3.x due
>>> to another change.
>>>
>>> On Wed, Aug 10, 2016 at 10:33 AM, Adil <adil.cha...@gmail.com> wrote:
>>>
>>>> Hi guys,
>>>> We have migrated our cluster (5 nodes in DC1 and 5 nodes in DC2) from
>>>> Cassandra 2.1.2 to 3.0.8. Everything seems fine and nodetool status shows
>>>> all nodes UN, but each node's log continuously shows this error:
>>>> java.io.IOException: Error while read(...): Connection reset by peer
>>>> at io.netty.channel.epoll.Native.readAddress(Native Method)
>>>> ~[netty-all-4.0.23.Final.jar:4.0.23.Final]
>>>> at io.netty.channel.epoll.EpollSocketChannel$EpollSocketUnsafe.doReadBytes(EpollSocketChannel.java:675) ~[netty-all-4.0.23.Final.jar:4.0.23.Final]
>>>> at io.netty.channel.epoll.EpollSocketChannel$EpollSocketUnsafe.epollInReady(EpollSocketChannel.java:714) ~[netty-all-4.0.23.Final.jar:4.0.23.Final]
>>>> at io.netty.channel.epoll.EpollEventLoop.processReady(EpollEventLoop.java:326) ~[netty-all-4.0.23.Final.jar:4.0.23.Final]
>>>> at io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:264) ~[netty-all-4.0.23.Final.jar:4.0.23.Final]
>>>> at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:116) ~[netty-all-4.0.23.Final.jar:4.0.23.Final]
>>>> at io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:137) ~[netty-all-4.0.23.Final.jar:4.0.23.Final]
>>>> at java.lang.Thread.run(Thread.java:745) [na:1.8.0_101]
>>>>
>>>> We have installed Java 8u101.
>>>>
>>>> Any idea what the problem could be?
>>>>
>>>> thanks
>>>>
>>>> Adil
>>>>
>>>>
>>>
>>>
>>> --
>>> Tyler Hobbs
>>> DataStax <http://datastax.com/>
>>>
>>
>>


Re: migrating from 2.1.2 to 3.0.8 log errors

2016-08-11 Thread Adil
After migrating C* from 2.1.2 to 3.0.8, all queries whose WHERE
condition involves an indexed column return zero rows for the old data,
while newly inserted data is returned by the same queries. I'm guessing
that something about the indexes was left incomplete; should we rebuild the
indexes? Any idea?
Thanks,
Ad.

On 10 Aug 2016 at 23:58, "Adil" <adil.cha...@gmail.com> wrote:

> Thank you for your response. We have updated the DataStax driver to 3.1.0
> using the V3 protocol; I think there are still some webapps that are still
> using the 2.1.6 Java driver, and we will upgrade them. But we noticed
> something strange: on the web apps upgraded to 3.1.0 some queries return
> zero results even though the data exists and I can see it with cqlsh.
>
> 2016-08-10 20:48 GMT+02:00 Tyler Hobbs <ty...@datastax.com>:
>
>> That just means that a client/driver disconnected.  Those log messages
>> are supposed to be suppressed, but perhaps that stopped working in 3.x due
>> to another change.
>>
>> On Wed, Aug 10, 2016 at 10:33 AM, Adil <adil.cha...@gmail.com> wrote:
>>
>>> Hi guys,
>>> We have migrated our cluster (5 nodes in DC1 and 5 nodes in DC2) from
>>> Cassandra 2.1.2 to 3.0.8. Everything seems fine and nodetool status shows
>>> all nodes UN, but each node's log continuously shows this error:
>>> java.io.IOException: Error while read(...): Connection reset by peer
>>> at io.netty.channel.epoll.Native.readAddress(Native Method)
>>> ~[netty-all-4.0.23.Final.jar:4.0.23.Final]
>>> at io.netty.channel.epoll.EpollSocketChannel$EpollSocketUnsafe.doReadBytes(EpollSocketChannel.java:675) ~[netty-all-4.0.23.Final.jar:4.0.23.Final]
>>> at io.netty.channel.epoll.EpollSocketChannel$EpollSocketUnsafe.epollInReady(EpollSocketChannel.java:714) ~[netty-all-4.0.23.Final.jar:4.0.23.Final]
>>> at io.netty.channel.epoll.EpollEventLoop.processReady(EpollEventLoop.java:326) ~[netty-all-4.0.23.Final.jar:4.0.23.Final]
>>> at io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:264) ~[netty-all-4.0.23.Final.jar:4.0.23.Final]
>>> at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:116) ~[netty-all-4.0.23.Final.jar:4.0.23.Final]
>>> at io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:137) ~[netty-all-4.0.23.Final.jar:4.0.23.Final]
>>> at java.lang.Thread.run(Thread.java:745) [na:1.8.0_101]
>>>
>>> We have installed Java 8u101.
>>>
>>> Any idea what the problem could be?
>>>
>>> thanks
>>>
>>> Adil
>>>
>>>
>>
>>
>> --
>> Tyler Hobbs
>> DataStax <http://datastax.com/>
>>
>
>


Re: migrating from 2.1.2 to 3.0.8 log errors

2016-08-10 Thread Adil
Thank you for your response. We have updated the DataStax driver to 3.1.0
using the V3 protocol; I think there are still some webapps that are still
using the 2.1.6 Java driver, and we will upgrade them. But we noticed
something strange: on the web apps upgraded to 3.1.0 some queries return
zero results even though the data exists and I can see it with cqlsh.

2016-08-10 20:48 GMT+02:00 Tyler Hobbs <ty...@datastax.com>:

> That just means that a client/driver disconnected.  Those log messages are
> supposed to be suppressed, but perhaps that stopped working in 3.x due to
> another change.
>
> On Wed, Aug 10, 2016 at 10:33 AM, Adil <adil.cha...@gmail.com> wrote:
>
>> Hi guys,
>> We have migrated our cluster (5 nodes in DC1 and 5 nodes in DC2) from
>> Cassandra 2.1.2 to 3.0.8. Everything seems fine and nodetool status shows
>> all nodes UN, but each node's log continuously shows this error:
>> java.io.IOException: Error while read(...): Connection reset by peer
>> at io.netty.channel.epoll.Native.readAddress(Native Method)
>> ~[netty-all-4.0.23.Final.jar:4.0.23.Final]
>> at io.netty.channel.epoll.EpollSocketChannel$EpollSocketUnsafe.doReadBytes(EpollSocketChannel.java:675) ~[netty-all-4.0.23.Final.jar:4.0.23.Final]
>> at io.netty.channel.epoll.EpollSocketChannel$EpollSocketUnsafe.epollInReady(EpollSocketChannel.java:714) ~[netty-all-4.0.23.Final.jar:4.0.23.Final]
>> at io.netty.channel.epoll.EpollEventLoop.processReady(EpollEventLoop.java:326) ~[netty-all-4.0.23.Final.jar:4.0.23.Final]
>> at io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:264) ~[netty-all-4.0.23.Final.jar:4.0.23.Final]
>> at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:116) ~[netty-all-4.0.23.Final.jar:4.0.23.Final]
>> at io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:137) ~[netty-all-4.0.23.Final.jar:4.0.23.Final]
>> at java.lang.Thread.run(Thread.java:745) [na:1.8.0_101]
>>
>> We have installed Java 8u101.
>>
>> Any idea what the problem could be?
>>
>> thanks
>>
>> Adil
>>
>>
>
>
> --
> Tyler Hobbs
> DataStax <http://datastax.com/>
>


migrating from 2.1.2 to 3.0.8 log errors

2016-08-10 Thread Adil
Hi guys,
We have migrated our cluster (5 nodes in DC1 and 5 nodes in DC2) from
Cassandra 2.1.2 to 3.0.8. Everything seems fine and nodetool status shows
all nodes UN, but each node's log continuously shows this error:
java.io.IOException: Error while read(...): Connection reset by peer
at io.netty.channel.epoll.Native.readAddress(Native Method)
~[netty-all-4.0.23.Final.jar:4.0.23.Final]
at
io.netty.channel.epoll.EpollSocketChannel$EpollSocketUnsafe.doReadBytes(EpollSocketChannel.java:675)
~[netty-all-4.0.23.Final.jar:4.0.23.Final]
at
io.netty.channel.epoll.EpollSocketChannel$EpollSocketUnsafe.epollInReady(EpollSocketChannel.java:714)
~[netty-all-4.0.23.Final.jar:4.0.23.Final]
at
io.netty.channel.epoll.EpollEventLoop.processReady(EpollEventLoop.java:326)
~[netty-all-4.0.23.Final.jar:4.0.23.Final]
at io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:264)
~[netty-all-4.0.23.Final.jar:4.0.23.Final]
at
io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:116)
~[netty-all-4.0.23.Final.jar:4.0.23.Final]
at
io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:137)
~[netty-all-4.0.23.Final.jar:4.0.23.Final]
at java.lang.Thread.run(Thread.java:745) [na:1.8.0_101]

We have installed Java 8u101.

Any idea what the problem could be?

thanks

Adil


Re: electricity outage problem

2016-01-15 Thread Adil
Hi,
we did a full restart of the cluster but nodetool status is still giving
inconsistent info from different nodes: some nodes appear UP from one node but
appear DOWN from another, and the log still shows the
message "received an invalid gossip generation for peer /x.x.x.x".
The Cassandra version is 2.1.2. We want to execute the purge operation as
explained here
https://docs.datastax.com/en/cassandra/2.1/cassandra/operations/ops_gossip_purge.html
but we can't find the peers folder. Should we do it via CQL by deleting the
peers content? Should we do it on all nodes?

thanks
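
As a sketch of what "deleting the peers content via CQL" usually looks like in
practice (the IP address below is a placeholder): on a live node a stale entry
is sometimes dropped with

    DELETE FROM system.peers WHERE peer = '10.0.0.1';

and/or the affected node is restarted with -Dcassandra.load_ring_state=false
added to JVM_OPTS in cassandra-env.sh, so that it rebuilds its view of the ring
from the seed nodes instead of the saved gossip state.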


2016-01-12 17:42 GMT+01:00 Jack Krupansky <jack.krupan...@gmail.com>:

> Sometimes you may have to clear out the saved Gossip state:
>
> https://docs.datastax.com/en/cassandra/2.0/cassandra/operations/ops_gossip_purge.html
>
> Note the instruction about bringing up the seed nodes first. Normally seed
> nodes are only relevant when initially joining a node to a cluster (and
> then the Gossip state will be persisted locally), but if you clear the
> persisted Gossip state the seed nodes will again be needed to find the rest
> of the cluster.
>
> I'm not sure whether a power outage is the same as stopping and restarting
> an instance (AWS) in terms of whether the restarted instance retains its
> current public IP address.
>
>
>
> -- Jack Krupansky
>
> On Tue, Jan 12, 2016 at 10:02 AM, daemeon reiydelle <daeme...@gmail.com>
> wrote:
>
>> This happens when there is insufficient time for nodes coming up to join
>> a network. It takes a few seconds for a node to come up, e.g. your seed
>> node. If you tell a node to join a cluster you can get this scenario
>> because of high network utilization as well. I wait 90 seconds after the
>> first (i.e. my first seed) node comes up to start the next one. Any nodes
>> that are seeds need some 60 seconds, so the additional 30 seconds is a
>> buffer. Additional nodes each wait 60 seconds before joining (although this
>> is a parallel tree for large clusters).
>>
>>
>>
>>
>>
>> *“Life should not be a journey to the grave with the intention of arriving
>> safely in a pretty and well preserved body, but rather to skid in broadside
>> in a cloud of smoke, thoroughly used up, totally worn out, and loudly
>> proclaiming “Wow! What a Ride!” - Hunter Thompson*
>> Daemeon C.M. Reiydelle
>> USA (+1) 415.501.0198
>> London (+44) (0) 20 8144 9872
>>
>> On Tue, Jan 12, 2016 at 6:56 AM, Adil <adil.cha...@gmail.com> wrote:
>>
>>> Hi,
>>>
>>> we have two DCs with 5 nodes in each cluster. Yesterday there was an
>>> electricity outage that brought all nodes down. We restarted the clusters,
>>> but when we run nodetool status on DC1 it reports that some nodes are DN,
>>> and the strange thing is that running the command from a different node in
>>> DC1 doesn't report the same nodes as down. We have noticed this message in
>>> the log: "received an invalid gossip generation for peer". Does anyone know
>>> how to resolve this problem? Should we purge the gossip state?
>>>
>>> thanks
>>>
>>> Adil
>>>
>>
>>
>


Re: electricity outage problem

2016-01-15 Thread Adil
Our case is not about accepting connections: some nodes receive a gossip
generation number greater than the local one. I looked at the peers and
local tables and can't find where the local one is stored.

2016-01-15 17:54 GMT+01:00 daemeon reiydelle <daeme...@gmail.com>:

> Nodes need a delay of about 60-90 seconds before they can start accepting
> connections as a seed node. Also, a seed node needs time to accept a node
> starting up and syncing to other nodes (on 10 gigabit the max new nodes is
> only 1 or 2; on 1 gigabit it can handle at least 3-4 new nodes connecting).
> In a large cluster (500 nodes) I see this weird condition where nodetool
> status shows overlapping subsets of nodes, and the problem does not go away
> even after an hour on a 10 gigabit network.
>
>
>
> *“Life should not be a journey to the grave with the intention of arriving
> safely in a pretty and well preserved body, but rather to skid in broadside
> in a cloud of smoke, thoroughly used up, totally worn out, and loudly
> proclaiming “Wow! What a Ride!” - Hunter Thompson*
> Daemeon C.M. Reiydelle
> USA (+1) 415.501.0198
> London (+44) (0) 20 8144 9872
>
> On Fri, Jan 15, 2016 at 9:17 AM, Adil <adil.cha...@gmail.com> wrote:
>
>> Hi,
>> we did a full restart of the cluster but nodetool status is still giving
>> inconsistent info from different nodes: some nodes appear UP from one node but
>> appear DOWN from another, and the log still shows the
>> message "received an invalid gossip generation for peer /x.x.x.x".
>> The Cassandra version is 2.1.2. We want to execute the purge operation as
>> explained here
>> https://docs.datastax.com/en/cassandra/2.1/cassandra/operations/ops_gossip_purge.html
>> but we can't find the peers folder. Should we do it via CQL by deleting the
>> peers content? Should we do it on all nodes?
>>
>> thanks
>>
>>
>> 2016-01-12 17:42 GMT+01:00 Jack Krupansky <jack.krupan...@gmail.com>:
>>
>>> Sometimes you may have to clear out the saved Gossip state:
>>>
>>> https://docs.datastax.com/en/cassandra/2.0/cassandra/operations/ops_gossip_purge.html
>>>
>>> Note the instruction about bringing up the seed nodes first. Normally
>>> seed nodes are only relevant when initially joining a node to a cluster
>>> (and then the Gossip state will be persisted locally), but if you clear the
>>> persisted Gossip state the seed nodes will again be needed to find the rest
>>> of the cluster.
>>>
>>> I'm not sure whether a power outage is the same as stopping and
>>> restarting an instance (AWS) in terms of whether the restarted instance
>>> retains its current public IP address.
>>>
>>>
>>>
>>> -- Jack Krupansky
>>>
>>> On Tue, Jan 12, 2016 at 10:02 AM, daemeon reiydelle <daeme...@gmail.com>
>>> wrote:
>>>
>>>> This happens when there is insufficient time for nodes coming up to
>>>> join a network. It takes a few seconds for a node to come up, e.g. your
>>>> seed node. If you tell a node to join a cluster you can get this scenario
>>>> because of high network utilization as well. I wait 90 seconds after the
>>>> first (i.e. my first seed) node comes up to start the next one. Any nodes
>>>> that are seeds need some 60 seconds, so the additional 30 seconds is a
>>>> buffer. Additional nodes each wait 60 seconds before joining (although this
>>>> is a parallel tree for large clusters).
>>>>
>>>>
>>>>
>>>>
>>>>
>>>> *“Life should not be a journey to the grave with the intention of arriving
>>>> safely in a pretty and well preserved body, but rather to skid in broadside
>>>> in a cloud of smoke, thoroughly used up, totally worn out, and loudly
>>>> proclaiming “Wow! What a Ride!” - Hunter Thompson*
>>>> Daemeon C.M. Reiydelle
>>>> USA (+1) 415.501.0198
>>>> London (+44) (0) 20 8144 9872
>>>>
>>>> On Tue, Jan 12, 2016 at 6:56 AM, Adil <adil.cha...@gmail.com> wrote:
>>>>
>>>>> Hi,
>>>>>
>>>>> we have two DCs with 5 nodes in each cluster. Yesterday there was an
>>>>> electricity outage that brought all nodes down. We restarted the clusters,
>>>>> but when we run nodetool status on DC1 it reports that some nodes are DN,
>>>>> and the strange thing is that running the command from a different node in
>>>>> DC1 doesn't report the same nodes as down. We have noticed this message in
>>>>> the log: "received an invalid gossip generation for peer". Does anyone know
>>>>> how to resolve this problem? Should we purge the gossip state?
>>>>>
>>>>> thanks
>>>>>
>>>>> Adil
>>>>>
>>>>
>>>>
>>>
>>
>


electricity outage problem

2016-01-12 Thread Adil
Hi,

we have two DCs with 5 nodes in each cluster. Yesterday there was an
electricity outage that brought all nodes down. We restarted the clusters, but
when we run nodetool status on DC1 it reports that some nodes are DN, and the
strange thing is that running the command from a different node in DC1 doesn't
report the same nodes as down. We have noticed this message in the log:
"received an invalid gossip generation for peer". Does anyone know how to
resolve this problem? Should we purge the gossip state?

thanks

Adil


Re: Mutable primary key in a table

2015-02-06 Thread Adil
Hi,
it seems you are doing something wrong in your model; why can't you go with
updating the columns of key K1 instead of deleting/inserting the row key?

2015-02-06 15:02 GMT+01:00 Ajaya Agrawal ajku@gmail.com:

 Hi guys,

 I want to take a row with primary key K1, rewrite it with primary key K2,
 and delete the original data with key K1, atomically.

 It seems like the only solution which won't have race conditions is to use a
 batch statement to delete the old row and insert the new one. But the
 documentation of the batch operation makes me nervous: the specific parts in
 the docs are the ones which say that all nodes in your cluster become stressed
 if you use logged batches (the default).

 Is it a solved problem already?
 Cheers,
 Ajaya
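
For illustration, the batched delete-plus-insert being discussed would look
roughly like this with the DataStax Java driver; the keyspace, table, column,
and key values below are made-up placeholders, so treat it as a sketch of the
pattern rather than the poster's actual schema:

    import com.datastax.driver.core.BatchStatement;
    import com.datastax.driver.core.Session;
    import com.datastax.driver.core.SimpleStatement;

    public class KeyRewrite {
        // Rewrites the row stored under key 'K1' as key 'K2' in one logged batch.
        // The table my_ks.my_table with primary key column pk is hypothetical.
        public static void rewrite(Session session) {
            BatchStatement batch = new BatchStatement(BatchStatement.Type.LOGGED);
            batch.add(new SimpleStatement(
                    "INSERT INTO my_ks.my_table (pk, val) VALUES ('K2', 'some value')"));
            batch.add(new SimpleStatement(
                    "DELETE FROM my_ks.my_table WHERE pk = 'K1'"));
            session.execute(batch);
        }
    }

A logged batch guarantees that either both mutations eventually apply or
neither does, though it does not by itself protect against a concurrent writer
touching K1 between the read and the batch.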



nodetool repair

2015-01-09 Thread Adil
Hi guys,
We have two DCs and we are planning to schedule nodetool repair weekly.
My question is: is nodetool repair cross-cluster or not? Is it sufficient
to run it without options on one node, or should it be scheduled on every node
with the host option?

Thanks
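
For what it's worth, a common pattern is a primary-range repair run on every
node in turn rather than a single full repair from one node, along the lines of

    nodetool repair -pr my_keyspace

where the keyspace name is a placeholder; -pr repairs only the token ranges the
node owns as primary, and by default the repair involves replicas in all data
centers that hold the data.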


logging over multi-datacenter

2014-11-20 Thread Adil
Hi,
We have two data centers and we configured PasswordAuthenticator on each node;
we increased the RF of system_auth to the number of nodes (in each
data center), as recommended.
We can log in via cqlsh without problems, but when I stop Cassandra on
all nodes of one data center we can't log in from the other
data center... this error is displayed as output:
Bad credentials]
message=org.apache.cassandra.exceptions.UnavailableException: Cannot
achieve consistency level QUORUM'

From what I understand we should be able to log in even if there is only
one node UP, but it seems that it has to reach the QUORUM consistency level
(across the 2 data centers).

My question is whether the Java CQL driver uses the same condition, and whether
there is a way to set the consistency level to something like LOCAL_ONE.

Thanks
Adil
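
For reference, the system_auth replication change described above is normally
done with a statement along these lines, where the DC names and replica counts
are placeholders:

    ALTER KEYSPACE system_auth WITH replication =
        {'class': 'NetworkTopologyStrategy', 'DC1': 5, 'DC2': 5};

followed by a repair of system_auth (e.g. nodetool repair system_auth) on each
node, so the credentials are actually copied to the new replicas.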


Re: logging over multi-datacenter

2014-11-20 Thread Adil
Cassandra version 2.1.2,
with the default user.
We created another user, and with that one we could log in even if only one
node is up.

2014-11-20 15:16 GMT+01:00 Mark Reddy mark.l.re...@gmail.com:

 Hi Adil,

 What Cassandra version are you using? Are you using the default user or a
 non-default user?


 Mark

 On 20 November 2014 08:20, Adil adil.cha...@gmail.com wrote:

 Hi,
 We have two data centers and we configured PasswordAuthenticator on each
 node; we increased the RF of system_auth to the number of nodes (in each
 data center), as recommended.
 We can log in via cqlsh without problems, but when I stop Cassandra on
 all nodes of one data center we can't log in from the other
 data center... this error is displayed as output:
 Bad credentials]
 message=org.apache.cassandra.exceptions.UnavailableException: Cannot
 achieve consistency level QUORUM'

 From what I understand we should be able to log in even if there is
 only one node UP, but it seems that it has to reach the QUORUM consistency
 level (across the 2 data centers).

 My question is whether the Java CQL driver uses the same condition and whether
 there is a way to set the consistency level to something like LOCAL_ONE.

 Thanks
 Adil





Re: logging over multi-datacenter

2014-11-20 Thread Adil
ok thank you.

2014-11-20 16:02 GMT+01:00 Mark Reddy mark.l.re...@gmail.com:

 Hi Adil,

 When using the default superuser ('cassandra') a consistency level of
 QUORUM is used. When using other users ONE is used.

 You are not supposed to use 'cassandra' user directly, except to create
 another superuser and use that one from that point on.


 Mark

 On 20 November 2014 14:40, Adil adil.cha...@gmail.com wrote:

 Cassandra version 2.1.2,
 with the default user.
 We created another user, and with that one we could log in even if only one
 node is up.

 2014-11-20 15:16 GMT+01:00 Mark Reddy mark.l.re...@gmail.com:

 Hi Adil,

 What Cassandra version are you using? Are you using the default user or
 a non-default user?


 Mark

 On 20 November 2014 08:20, Adil adil.cha...@gmail.com wrote:

 Hi,
 We have two data centers and we configured PasswordAuthenticator on each
 node; we increased the RF of system_auth to the number of nodes (in each
 data center), as recommended.
 We can log in via cqlsh without problems, but when I stop Cassandra
 on all nodes of one data center we can't log in from the other
 data center... this error is displayed as output:
 Bad credentials]
 message=org.apache.cassandra.exceptions.UnavailableException: Cannot
 achieve consistency level QUORUM'

 From what I understand we should be able to log in even if there is
 only one node UP, but it seems that it has to reach the QUORUM consistency
 level (across the 2 data centers).

 My question is whether the Java CQL driver uses the same condition and whether
 there is a way to set the consistency level to something like LOCAL_ONE.

 Thanks
 Adil







Re: Cassandra default consistency level on multi datacenter

2014-11-15 Thread Adil
Yes, I already found it... via the QueryOptions.
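
A minimal sketch of setting the driver-wide default via QueryOptions with the
DataStax Java driver; the contact point and class name are placeholders rather
than the exact code used here:

    import com.datastax.driver.core.Cluster;
    import com.datastax.driver.core.ConsistencyLevel;
    import com.datastax.driver.core.QueryOptions;
    import com.datastax.driver.core.Session;

    public class DefaultConsistency {
        public static Session connect() {
            // Make LOCAL_ONE the default consistency level for every statement
            // that does not set one explicitly.
            QueryOptions options = new QueryOptions()
                    .setConsistencyLevel(ConsistencyLevel.LOCAL_ONE);
            Cluster cluster = Cluster.builder()
                    .addContactPoint("10.0.0.1")   // placeholder contact point
                    .withQueryOptions(options)
                    .build();
            return cluster.connect();
        }
    }

Per-statement consistency levels (e.g. LOCAL_QUORUM set on individual queries)
still override this default.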

2014-11-15 1:28 GMT+01:00 Tyler Hobbs ty...@datastax.com:

 Cassandra itself does not have default consistency levels.  These are only
 configured in the driver.

 On Fri, Nov 14, 2014 at 8:54 AM, Adil adil.cha...@gmail.com wrote:

 Hi,
 We are using two data centers and we want to set the default consistency
 level to LOCAL_ONE instead of ONE, but we don't know how to configure it.
 We set LOCAL_QUORUM via the CQL driver for the desired queries, but we don't
 want to do the same for the default one.

 Thanks in advance

 Adil




 --
 Tyler Hobbs
 DataStax http://datastax.com/



Re: Cassandra communication between 2 datacenter

2014-11-14 Thread Adil
Thank you Eric, the problem in fact was that the ports were open only in
one direction.
Now it is working.

2014-11-13 22:38 GMT+01:00 Eric Plowe eric.pl...@gmail.com:

 Are you sure that both DC's can communicate with each other over the
 necessary ports?

 On Thu, Nov 13, 2014 at 3:46 PM, Adil adil.cha...@gmail.com wrote:

 Yeah, we started the nodes one at a time. My doubt is whether we should also
 configure cassandra-topology.properties or not; we left it with the default values.

 2014-11-13 21:05 GMT+01:00 Robert Coli rc...@eventbrite.com:

 On Thu, Nov 13, 2014 at 10:26 AM, Adil adil.cha...@gmail.com wrote:

 Hi,
 we have two datacenters with this info:

 Cassandra version 2.1.0
 DC1 with 5 nodes
 DC2 with 5 nodes

 we set the snitch to GossipingPropertyFileSnitch and in
 cassandra-rackdc.properties we put:
 in DC1:
 dc=DC1
 rack=RAC1

 in DC2:
 dc=DC2
 rack=RAC1

 and in every node's cassandra.yaml we define two seeds of DC1 and two
 seed of DC2.


 Do you start the nodes one at a time, and then consult nodetool ring
 (etc.) to see if the cluster coalesces in the way you expect?

 If so, a Keyspace created in one should very quickly be created in the
 other.

 =Rob
 http://twitter.com/rcolidba






Cassandra default consistency level on multi datacenter

2014-11-14 Thread Adil
Hi,
We are using two data centers and we want to set the default consistency
level to LOCAL_ONE instead of ONE, but we don't know how to configure it.
We set LOCAL_QUORUM via the CQL driver for the desired queries, but we don't
want to do the same for the default one.

Thanks in advance

Adil


Cassandra communication between 2 datacenter

2014-11-13 Thread Adil
Hi,
we have two datacenters with this info:

Cassandra version 2.1.0
DC1 with 5 nodes
DC2 with 5 nodes

we set the snitch to GossipingPropertyFileSnitch and in
cassandra-rackdc.properties we put:
in DC1:
dc=DC1
rack=RAC1

in DC2:
dc=DC2
rack=RAC1

and in every node's cassandra.yaml we define two seeds from DC1 and two seeds
from DC2.

We restarted both DCs. We created a keyspace with NetworkTopologyStrategy in
DC1 and expected that it would also be created in DC2, but that is not the
case... so we created the same keyspace in DC2, we created a table in both DCs,
and we did an insert in DC1, but doing a select from the same table in DC2 we
found 0 rows.
So it seems that our clusters are not communicating with each other.
Doing a nodetool status on each DC we see only the 5 nodes corresponding
to the current DC.

Did we miss some configuration?

thanks in advance.
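
For reference, a keyspace meant to be replicated to both data centers is
usually declared along these lines, where the keyspace name and replica counts
are placeholders:

    CREATE KEYSPACE my_ks WITH replication =
        {'class': 'NetworkTopologyStrategy', 'DC1': 3, 'DC2': 3};

and with GossipingPropertyFileSnitch the 'DC1'/'DC2' keys must match the dc
names set in cassandra-rackdc.properties.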


Re: Cassandra communication between 2 datacenter

2014-11-13 Thread Adil
Yeah, we started the nodes one at a time. My doubt is whether we should also
configure cassandra-topology.properties or not; we left it with the default values.

2014-11-13 21:05 GMT+01:00 Robert Coli rc...@eventbrite.com:

 On Thu, Nov 13, 2014 at 10:26 AM, Adil adil.cha...@gmail.com wrote:

 Hi,
 we have two datacenters with this info:

 Cassandra version 2.1.0
 DC1 with 5 nodes
 DC2 with 5 nodes

 we set the snitch to GossipingPropertyFileSnitch and in
 cassandra-rackdc.properties we put:
 in DC1:
 dc=DC1
 rack=RAC1

 in DC2:
 dc=DC2
 rack=RAC1

 and in every node's cassandra.yaml we define two seeds of DC1 and two
 seed of DC2.


 Do you start the nodes one at a time, and then consult nodetool ring
 (etc.) to see if the cluster coalesces in the way you expect?

 If so, a Keyspace created in one should very quickly be created in the
 other.

 =Rob
 http://twitter.com/rcolidba