Re: Upgrade 3.11.1 to 3.11.4

2019-03-05 Thread Ioannis Zafiropoulos
Hi Kenneth,

Thanks for your interest in helping. I had to make a decision quickly because
it was a production cluster. So, long story short, I let the cluster finish
the decommission process before touching it. When the decommissioned node left
the cluster I did a rolling restart, and the nodes started behaving again
without errors; auto-compaction also resumed, and all nodes had accumulated
a lot of files to compact. Then I performed a rolling upgrade from 3.11.1
to 3.11.4, which went very smoothly.
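For the archives, the per-node sequence of that rolling upgrade looked roughly
like this (a sketch only; the package pin and systemd service name are
assumptions - adjust for a tarball install):

$ nodetool drain                        # flush memtables, stop accepting writes
$ sudo systemctl stop cassandra
$ sudo yum install -y cassandra-3.11.4  # hypothetical package pin; swap binaries however you installed
$ sudo systemctl start cassandra
$ nodetool status                       # wait for UN on all nodes before the next one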

In retrospect to answer your questions:
> Was the cluster running ok before decommissioning the node?
Yes

> Why were you decommissioning the node?
Management decision; we just wanted to shrink the cluster.

> Were you upgrading from 3.11.1 to 3.11.4?
No, that was not the initial intention. I arrived at that conclusion after
I realized I had run into this bug on the rest of the nodes:
"Prevent compaction strategies from looping indefinitely" CASSANDRA-14079
<https://issues.apache.org/jira/browse/CASSANDRA-14079>

Thanks again!


On Thu, Feb 28, 2019 at 10:45 AM Kenneth Brotman
 wrote:

> Hi John,
>
>
>
> Was the cluster running ok before decommissioning the node?
>
> Why were you decommissioning the node?
>
> Were you upgrading from 3.11.1 to 3.11.4?
>
>
>
>
>
> *From:* Ioannis Zafiropoulos [mailto:john...@gmail.com]
> *Sent:* Wednesday, February 27, 2019 7:33 AM
> *To:* user@cassandra.apache.org
> *Subject:* Upgrade 3.11.1 to 3.11.4
>
>
>
> Hi all,
>
>
>
> During a decommission on a production cluster (9 nodes) we have some
> issues with compaction on the remaining nodes, and I have some
> questions about that:
>
>
>
> One remaining node, which has stopped compacting due to a bug
> <https://issues.apache.org/jira/browse/CASSANDRA-14079> in 3.11.1, *has
> received all* the streamed files from the decommissioning node
> (decommissioning is still in progress for the rest of the cluster). Could I
> upgrade this node to 3.11.4 and restart it?
>
>
>
> Some other nodes, which *are still receiving* files, appear (judging from
> nodetool tpstats) to be doing little to no auto-compaction. Should I wait for
> streaming to complete, or should I upgrade these nodes as well and restart
> them? What would happen if I bounce such a node? Will the whole process of
> decommissioning fail?
>
>
>
> Do you recommend eventually doing a rolling upgrade to 3.11.4, or choosing
> another version?
>
>
>
> Thanks in advance for your help,
>
> John Zaf
>


Upgrade 3.11.1 to 3.11.4

2019-02-27 Thread Ioannis Zafiropoulos
Hi all,

During a decommission on a production cluster (9 nodes) we have some issues
with compaction on the remaining nodes, and I have some questions
about that:

One remaining node, which has stopped compacting due to a bug
<https://issues.apache.org/jira/browse/CASSANDRA-14079> in 3.11.1, *has
received all* the streamed files from the decommissioning node
(decommissioning is still in progress for the rest of the cluster). Could I
upgrade this node to 3.11.4 and restart it?

Some other nodes, which *are still receiving* files, appear (judging from
nodetool tpstats) to be doing little to no auto-compaction. Should I wait for
streaming to complete, or should I upgrade these nodes as well and restart
them? What would happen if I bounce such a node? Will the whole process of
decommissioning fail?

Do you recommend eventually doing a rolling upgrade to 3.11.4, or choosing
another version?

Thanks in advance for your help,
John Zaf


disk_failure_policy and blacklisted drives

2017-12-19 Thread Ioannis Zafiropoulos
Hello all,

We have a small cluster (3.11.1) with JBOD and with disk_failure_policy =
best_effort.

- How do I check whether a drive has been blacklisted by Cassandra due to
failure?
- Would you recommend switching back to the default disk_failure_policy =
stop, which would make it easier to check for nodes being down?
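The best lead I have found so far is the JMX side (a sketch; I am assuming the
org.apache.cassandra.db:type=BlacklistedDirectories MBean is what tracks this,
that a jmxterm jar is on the box, and the exact log wording may vary by
version):

$ java -jar jmxterm-1.0.2-uber.jar -l localhost:7199 -n <<'EOF'
get -b org.apache.cassandra.db:type=BlacklistedDirectories UnreadableDirectories UnwritableDirectories
EOF
$ grep -i blacklist /var/log/cassandra/system.log   # the moment a directory was dropped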

Thank you in advance!


Re: Cassandra 3.11 is compacting forever

2017-09-12 Thread Ioannis Zafiropoulos
Maybe a long shot, but have you deleted the cassandra-topology.properties
file, since you are using GossipingPropertyFileSnitch?
I am sure I have seen a ticket about problems caused in some cases when that
file stays around.
I removed it from all the nodes and the non-stop compaction stopped (after a
proper full restart - not a rolling one).
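What I ran was roughly this (a sketch; host names and the config path are
illustrative and depend on your install):

$ for h in node1 node2 node3; do ssh "$h" rm -v /etc/cassandra/cassandra-topology.properties; done
# then a full cluster stop and start - not a rolling restart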


On Fri, Sep 8, 2017 at 4:24 PM, Romain Hardouin  wrote:

> Hi,
>
> It might be useful to enable compaction logging with the log_all subproperty.
>
> Best,
>
> Romain
>
> Le vendredi 8 septembre 2017 à 00:15:19 UTC+2, kurt greaves <
> k...@instaclustr.com> a écrit :
>
>
> Might be worth turning on debug logging for that node, and when the
> compaction kicks off and CPU skyrockets, send through the logs.
>


Re: Migrate from DSE (Datastax) to Apache Cassandra

2017-08-17 Thread Ioannis Zafiropoulos
Ok, found a solution for this problem.
I deleted the system keyspace directory and restarted COSS, and it was
rebuilt:
rm -rf /var/lib/cassandra/data/system

A bit drastic, but I'll test it also on a multi-node cluster.
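If you try the same thing, a less drastic variant is to move the directory
aside instead of deleting it, so it can be put back if the rebuild misbehaves
(same default data path assumed):

mv /var/lib/cassandra/data/system /var/lib/cassandra/data/system.bak
# restart Cassandra; the system keyspace is rebuilt on startup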



On Thu, Aug 17, 2017 at 3:57 PM, Ioannis Zafiropoulos <john...@gmail.com>
wrote:

> Thanks Felipe and Erick,
>
> Yes, your comment helped a lot, I was able to resolve that by:
> ALTER KEYSPACE dse_system WITH replication = {'class': 'SimpleStrategy',
> 'replication_factor':'1'};
>
> Another problem I had was with CentOS release 6.7 (Final):
> I was getting 'glibc 2.14 not found'.
> Based on this <https://issues.apache.org/jira/browse/CASSANDRA-13072> I
> replaced jna-4.4.0.jar with jna-4.2.2.jar and it worked.
>
> I just started COSS for the first time successfully, and I am able to
> connect and work on the DB.
> It would be a perfect success if it were not for an exception that bugs me
> every time I start Cassandra:
>
> DEBUG [SSTableBatchOpen:1] 2017-08-17 14:36:50,477 SSTableReader.java:506 - Opening /cassandra/disk01/system/local-7ad54392bcdd35a684174e047860b377/mc-217-big (0.598KiB)
> DEBUG [SSTableBatchOpen:2] 2017-08-17 14:36:50,477 SSTableReader.java:506 - Opening /cassandra/disk01/system/local-7ad54392bcdd35a684174e047860b377/mc-155-big (0.139KiB)
> ERROR [SSTableBatchOpen:2] 2017-08-17 14:36:50,489 DebuggableThreadPoolExecutor.java:239 - Error in ThreadPoolExecutor
> java.lang.RuntimeException: Unknown column server_id during deserialization
>     at org.apache.cassandra.db.SerializationHeader$Component.toHeader(SerializationHeader.java:309) ~[apache-cassandra-3.11.0.jar:3.11.0]
>     at org.apache.cassandra.io.sstable.format.SSTableReader.open(SSTableReader.java:513) ~[apache-cassandra-3.11.0.jar:3.11.0]
>     at org.apache.cassandra.io.sstable.format.SSTableReader.open(SSTableReader.java:396) ~[apache-cassandra-3.11.0.jar:3.11.0]
>     at org.apache.cassandra.io.sstable.format.SSTableReader$5.run(SSTableReader.java:561) ~[apache-cassandra-3.11.0.jar:3.11.0]
>     at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) ~[na:1.8.0_65]
>     at java.util.concurrent.FutureTask.run(FutureTask.java:266) ~[na:1.8.0_65]
>     at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) ~[na:1.8.0_65]
>     at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [na:1.8.0_65]
>     at org.apache.cassandra.concurrent.NamedThreadFactory.lambda$threadLocalDeallocator$0(NamedThreadFactory.java:81) [apache-cassandra-3.11.0.jar:3.11.0]
>     at java.lang.Thread.run(Thread.java:745) ~[na:1.8.0_65]
>
> I looked at another DSE installation, and the system.local table does
> indeed have a 'server_id' column.
> On my COSS testbed this column disappeared from the table as soon as I
> started COSS for the first time.
> I tried sstablescrub and sstableupgrade, but it didn't go away.
>
> I don't know if I should worry or how to fix it. Any ideas?
>
>
>
> On Wed, Aug 16, 2017 at 1:38 PM, Felipe Esteves <
> felipe.este...@b2wdigital.com> wrote:
>
>> Ioannis,
>> As some people have already said, there are one or two keyspaces that use
>> EverywhereStrategy; dse_system is one of them, if I'm not wrong.
>> You must remember to change them to a community strategy or it will fail.
>>
>>
>>
>>
>>
>


Re: Migrate from DSE (Datastax) to Apache Cassandra

2017-08-17 Thread Ioannis Zafiropoulos
Thanks Felipe and Erick,

Yes, your comment helped a lot, I was able to resolve that by:
ALTER KEYSPACE dse_system WITH replication = {'class': 'SimpleStrategy',
'replication_factor':'1'};
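You can confirm the change took with a query against the schema tables
(standard in 3.x):

$ cqlsh -e "SELECT keyspace_name, replication FROM system_schema.keyspaces WHERE keyspace_name = 'dse_system';"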

Another problem I had was with CentOS release 6.7 (Final):
I was getting 'glibc 2.14 not found'.
Based on this <https://issues.apache.org/jira/browse/CASSANDRA-13072> I
replaced jna-4.4.0.jar with jna-4.2.2.jar and it worked.

I just started COSS for the first time successfully, and I am able to
connect and work on the DB.
It would be a perfect success if it were not for an exception that bugs me
every time I start Cassandra:

DEBUG [SSTableBatchOpen:1] 2017-08-17 14:36:50,477 SSTableReader.java:506 - Opening /cassandra/disk01/system/local-7ad54392bcdd35a684174e047860b377/mc-217-big (0.598KiB)
DEBUG [SSTableBatchOpen:2] 2017-08-17 14:36:50,477 SSTableReader.java:506 - Opening /cassandra/disk01/system/local-7ad54392bcdd35a684174e047860b377/mc-155-big (0.139KiB)
ERROR [SSTableBatchOpen:2] 2017-08-17 14:36:50,489 DebuggableThreadPoolExecutor.java:239 - Error in ThreadPoolExecutor
java.lang.RuntimeException: Unknown column server_id during deserialization
    at org.apache.cassandra.db.SerializationHeader$Component.toHeader(SerializationHeader.java:309) ~[apache-cassandra-3.11.0.jar:3.11.0]
    at org.apache.cassandra.io.sstable.format.SSTableReader.open(SSTableReader.java:513) ~[apache-cassandra-3.11.0.jar:3.11.0]
    at org.apache.cassandra.io.sstable.format.SSTableReader.open(SSTableReader.java:396) ~[apache-cassandra-3.11.0.jar:3.11.0]
    at org.apache.cassandra.io.sstable.format.SSTableReader$5.run(SSTableReader.java:561) ~[apache-cassandra-3.11.0.jar:3.11.0]
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) ~[na:1.8.0_65]
    at java.util.concurrent.FutureTask.run(FutureTask.java:266) ~[na:1.8.0_65]
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) ~[na:1.8.0_65]
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [na:1.8.0_65]
    at org.apache.cassandra.concurrent.NamedThreadFactory.lambda$threadLocalDeallocator$0(NamedThreadFactory.java:81) [apache-cassandra-3.11.0.jar:3.11.0]
    at java.lang.Thread.run(Thread.java:745) ~[na:1.8.0_65]

I looked at another DSE installation, and the system.local table does indeed
have a 'server_id' column.
On my COSS testbed this column disappeared from the table as soon as I
started COSS for the first time.
I tried sstablescrub and sstableupgrade, but it didn't go away.

I don't know if I should worry or how to fix it. Any ideas?



On Wed, Aug 16, 2017 at 1:38 PM, Felipe Esteves <
felipe.este...@b2wdigital.com> wrote:

> Ioannis,
> As some people have already said, there are one or two keyspaces that use
> EverywhereStrategy; dse_system is one of them, if I'm not wrong.
> You must remember to change them to a community strategy or it will fail.
>
>
>
>
>


Re: Migrate from DSE (Datastax) to Apache Cassandra

2017-08-16 Thread Ioannis Zafiropoulos
We use NetworkTopologyStrategy as the replication strategy.

The only DSE-specific features we use (left at their DSE defaults) are:
authenticator: com.datastax.bdp.cassandra.auth.DseAuthenticator
authorizer: com.datastax.bdp.cassandra.auth.DseAuthorizer
role_manager: com.datastax.bdp.cassandra.auth.DseRoleManager

So I hope that by changing these to the COSS-recommended ones before the
migration, DSE will be able to switch to them on its own (?),
and then we can do the final transition to the tarball installation.
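For reference, the plain-Apache cassandra.yaml values I intend to switch to
(these are the stock class names; whether DSE runs cleanly with them before
the cutover is exactly what I still have to verify):

authenticator: PasswordAuthenticator   # Apache default is AllowAllAuthenticator; this keeps auth on
authorizer: CassandraAuthorizer        # Apache default is AllowAllAuthorizer
role_manager: CassandraRoleManager     # same name as the Apache default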

Thank you all for your answers!

On Tue, Aug 15, 2017 at 10:42 PM, Jon Haddad <jonathan.had...@gmail.com>
wrote:

> I agree with Jeff, it’s not necessary to launch a new cluster for this
> operation.
>
> On Aug 15, 2017, at 7:39 PM, Jeff Jirsa <jji...@gmail.com> wrote:
>
> Or just alter the keyspace replication strategy and remove the DSE-specific
> strategies in favor of NetworkTopologyStrategy
>
>
> --
> Jeff Jirsa
>
>
> On Aug 15, 2017, at 7:26 PM, Erick Ramirez <flightc...@gmail.com> wrote:
>
> Ioannis, it's not a straightforward process to migrate from DSE to COSS.
> There are some parts of DSE which are not recognised by COSS, e.g. the
> EverywhereStrategy replication strategy, which is known only to DSE.
>
> You are better off standing up a new COSS 3.11 cluster and restoring app
> keyspaces to the new cluster. Cheers!
>
> On Wed, Aug 16, 2017 at 6:33 AM, Ioannis Zafiropoulos <john...@gmail.com>
> wrote:
>
>> Hi all,
>>
>> We have set up a new cluster on DSE 5.1.2 (with Cassandra 3.11.0.1758) and
>> we want to migrate it to Apache Cassandra 3.11.0 without losing schema or
>> data.
>>
>> Has anybody done it before?
>>
>> Obviously we are going to test this, but it would be nice to hear if
>> somebody else has gone through with the procedure.
>>
>> Thank you!
>>
>
>
>


Migrate from DSE (Datastax) to Apache Cassandra

2017-08-15 Thread Ioannis Zafiropoulos
Hi all,

We have set up a new cluster on DSE 5.1.2 (with Cassandra 3.11.0.1758) and we
want to migrate it to Apache Cassandra 3.11.0 without losing schema or
data.

Has anybody done it before?

Obviously we are going to test this, but it would be nice to hear if
somebody else has gone through with the procedure.

Thank you!


Re: Removing a disk from JBOD configuration

2017-07-31 Thread Ioannis Zafiropoulos
Excellent! Thank you Jeff.


On Mon, Jul 31, 2017 at 10:26 AM, Jeff Jirsa <jji...@gmail.com> wrote:

> 3.10 has 6696 in it, so my understanding is you'll probably be fine just
> running repair
>
>
> Yes, same risks if you swap drives - before 6696, you want to replace a
> whole node if any sstables are damaged or lost (if you do deletes, and if
> it hurts you if deleted data comes back to life).
>
>
> --
> Jeff Jirsa
>
>
> On Jul 31, 2017, at 6:41 AM, Ioannis Zafiropoulos <john...@gmail.com>
> wrote:
>
> Thank you Jeff for your answer,
>
> I use RF=3 and our clients always connect with QUORUM, so I guess I will be
> alright after a repair (?)
> Follow-up questions:
> - It seems that the risks you're describing would be the same as if I had
> replaced the drive with a fresh new one and run repair. Is that correct?
> - Can I do the reverse procedure in the future, that is, add a new
> drive with the same procedure I described?
>
> Thanks,
> John
>
>
>
> On Mon, Jul 31, 2017 at 5:42 AM, Jeff Jirsa <jji...@gmail.com> wrote:
>
>> It depends on what consistency level you use for reads/writes, and
>> whether you do deletes.
>>
>> The real danger is that there may have been a tombstone on the drive that
>> failed covering data on the disks that remain, where the delete happened
>> longer ago than gc_grace - if you simply yank the disk, that data will come
>> back to life (it's also possible some data temporarily reverts to a previous
>> state for some queries; though the reversion can be fixed with nodetool
>> repair, the resurrection can't be undone). If you don't do deletes, this is
>> not a problem. If there's no danger to you if data comes back to life, then
>> you're probably ok as well.
>>
>> Cassandra-6696 dramatically lowers this risk, if you're using a new
>> enough version of Cassandra
>>
>>
>>
>> --
>> Jeff Jirsa
>>
>>
>> > On Jul 31, 2017, at 1:49 AM, Ioannis Zafiropoulos <john...@gmail.com>
>> wrote:
>> >
>> > Hi All,
>> >
>> > I have a 7 node cluster (Version 3.10), each node with 5 disks in
>> > JBOD. A few hours ago I had a disk failure on a node. I am wondering if I
>> > can:
>> >
>> > - stop Cassandra on that node
>> > - remove the disk, physically and from cassandra.yaml
>> > - start Cassandra on that node
>> > - run repair
>> >
>> > I mean, is it necessary to replace a failed disk instead of just
>> removing it?
>> > (assuming that the remaining disks have enough free space)
>> >
>> > Thank you for your help,
>> > John
>> >
>>
>


Re: Removing a disk from JBOD configuration

2017-07-31 Thread Ioannis Zafiropoulos
I just want to add that we use vnodes=16, if that helps with my questions.

On Mon, Jul 31, 2017 at 9:41 AM, Ioannis Zafiropoulos <john...@gmail.com>
wrote:

> Thank you Jeff for your answer,
>
> I use RF=3 and our clients always connect with QUORUM, so I guess I will be
> alright after a repair (?)
> Follow-up questions:
> - It seems that the risks you're describing would be the same as if I had
> replaced the drive with a fresh new one and run repair. Is that correct?
> - Can I do the reverse procedure in the future, that is, add a new
> drive with the same procedure I described?
>
> Thanks,
> John
>
>
>
> On Mon, Jul 31, 2017 at 5:42 AM, Jeff Jirsa <jji...@gmail.com> wrote:
>
>> It depends on what consistency level you use for reads/writes, and
>> whether you do deletes.
>>
>> The real danger is that there may have been a tombstone on the drive that
>> failed covering data on the disks that remain, where the delete happened
>> longer ago than gc_grace - if you simply yank the disk, that data will come
>> back to life (it's also possible some data temporarily reverts to a previous
>> state for some queries; though the reversion can be fixed with nodetool
>> repair, the resurrection can't be undone). If you don't do deletes, this is
>> not a problem. If there's no danger to you if data comes back to life, then
>> you're probably ok as well.
>>
>> Cassandra-6696 dramatically lowers this risk, if you're using a new
>> enough version of Cassandra
>>
>>
>>
>> --
>> Jeff Jirsa
>>
>>
>> > On Jul 31, 2017, at 1:49 AM, Ioannis Zafiropoulos <john...@gmail.com>
>> wrote:
>> >
>> > Hi All,
>> >
>> > I have a 7 node cluster (Version 3.10), each node with 5 disks in
>> > JBOD. A few hours ago I had a disk failure on a node. I am wondering if I
>> > can:
>> >
>> > - stop Cassandra on that node
>> > - remove the disk, physically and from cassandra.yaml
>> > - start Cassandra on that node
>> > - run repair
>> >
>> > I mean, is it necessary to replace a failed disk instead of just
>> removing it?
>> > (assuming that the remaining disks have enough free space)
>> >
>> > Thank you for your help,
>> > John
>> >
>>
>


Re: Removing a disk from JBOD configuration

2017-07-31 Thread Ioannis Zafiropoulos
Thank you Jeff for your answer,

I use RF=3 and our clients always connect with QUORUM, so I guess I will be
alright after a repair (?)
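(For RF=3, QUORUM needs floor(3/2) + 1 = 2 replicas, so every read and write
still reaches two live copies while one disk's worth of data is missing;
repair then rebuilds the third copy.)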
Follow-up questions:
- It seems that the risks you're describing would be the same as if I had
replaced the drive with a fresh new one and run repair. Is that correct?
- Can I do the reverse procedure in the future, that is, add a new drive
with the same procedure I described?

Thanks,
John



On Mon, Jul 31, 2017 at 5:42 AM, Jeff Jirsa <jji...@gmail.com> wrote:

> It depends on what consistency level you use for reads/writes, and whether
> you do deletes.
>
> The real danger is that there may have been a tombstone on the drive that
> failed covering data on the disks that remain, where the delete happened
> longer ago than gc_grace - if you simply yank the disk, that data will come
> back to life (it's also possible some data temporarily reverts to a previous
> state for some queries; though the reversion can be fixed with nodetool
> repair, the resurrection can't be undone). If you don't do deletes, this is
> not a problem. If there's no danger to you if data comes back to life, then
> you're probably ok as well.
>
> Cassandra-6696 dramatically lowers this risk, if you're using a new
> enough version of Cassandra
>
>
>
> --
> Jeff Jirsa
>
>
> > On Jul 31, 2017, at 1:49 AM, Ioannis Zafiropoulos <john...@gmail.com>
> wrote:
> >
> > Hi All,
> >
> > I have a 7 node cluster (Version 3.10), each node with 5 disks in
> > JBOD. A few hours ago I had a disk failure on a node. I am wondering if I
> > can:
> >
> > - stop Cassandra on that node
> > - remove the disk, physically and from cassandra.yaml
> > - start Cassandra on that node
> > - run repair
> >
> > I mean, is it necessary to replace a failed disk instead of just
> removing it?
> > (assuming that the remaining disks have enough free space)
> >
> > Thank you for your help,
> > John
> >
>
>


Removing a disk from JBOD configuration

2017-07-31 Thread Ioannis Zafiropoulos
Hi All,

I have a 7 node cluster (Version 3.10), each node with 5 disks in JBOD.
A few hours ago I had a disk failure on a node. I am wondering if I can:

- stop Cassandra on that node
- remove the disk, physically and from cassandra.yaml
- start Cassandra on that node
- run repair

I mean, is it necessary to replace a failed disk instead of just removing
it?
(assuming that the remaining disks have enough free space)
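Concretely, the change I have in mind is just dropping the dead mount from
data_file_directories in cassandra.yaml and repairing afterwards (a sketch;
the mount points are made up):

data_file_directories:
    - /cassandra/disk01
    - /cassandra/disk02
    - /cassandra/disk03
    - /cassandra/disk04
#   - /cassandra/disk05   <- failed drive, entry removed
# then start Cassandra on the node and run:
nodetool repair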

Thank you for your help,
John


Re: cqlsh fails to connect

2016-10-28 Thread Ioannis Zafiropoulos
Ok, I tried with a new, empty one-node cluster of the same DSE version, and
cqlsh works without hiccups.
So the whole issue exists because I upgraded from Cassandra 2.1.11.

The procedure I followed for the upgrade was very simple (rough commands
below):
- nodetool drain (on all nodes)
- shut down all nodes
- uncompressed the new version's DSE tarball in a new path
- modified the new cassandra.conf
- started all nodes
- nodetool upgradesstables
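In commands, per node, it was roughly this (a sketch; version numbers and
paths are illustrative, and 'dse cassandra-stop' assumes the DSE tarball's
stop script):

$ nodetool drain                          # on every node
$ dse-4.8.9/bin/dse cassandra-stop        # then shut them all down
$ tar -xzf dse-5.0.3-bin.tar.gz -C /opt   # new version in a new path
# carry the cassandra.yaml customisations over to the new tree, then:
$ /opt/dse-5.0.3/bin/dse cassandra        # start each node
$ nodetool upgradesstables                # rewrite SSTables to the new format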

The cluster (the one with the problematic cqlsh) is up and running without
problems; I am able to connect with DBeaver and via Java.
What could have gone wrong such that the latest python drivers (3.7.1 &
3.6.0) will not let me connect from python?
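If anyone wants to reproduce, here is the minimal driver-only test I can run
that takes cqlsh itself out of the picture (a sketch; replace x.x.x.x):

$ python - <<'EOF'
from cassandra.cluster import Cluster   # pip install cassandra-driver
cluster = Cluster(['x.x.x.x'])          # contact point
session = cluster.connect()
print(session.execute("SELECT release_version FROM system.local").one())
cluster.shutdown()
EOF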

Thanks

On Fri, Oct 28, 2016 at 12:50 PM, Ioannis Zafiropoulos <john...@gmail.com>
wrote:

> Hi Rajesh,
>
> I just tried Python 2.7.11 & 2.7.12 and I get the same error, 'invalid
> continuation byte'.
>
> On Fri, Oct 28, 2016 at 11:53 AM, Rajesh Radhakrishnan <
> rajesh.radhakrish...@phe.gov.uk> wrote:
>
>>
>> Hi John Z,
>>
>> Did you try running with the latest Python 2.7.11 or 2.7.12?
>>
>> Kind regards,
>> Rajesh Radhakrishnan
>>
>> --
>> *From:* Ioannis Zafiropoulos [john...@gmail.com]
>> *Sent:* 27 October 2016 22:16
>> *To:* user@cassandra.apache.org
>> *Subject:* cqlsh fails to connect
>>
>> I upgraded DSE 4.8.9 to 5.0.3, that is, from Cassandra 2.1.11 to 3.0.9
>> I used DSE 5.0.3 tarball installation. Cassandra cluster is up and
>> running OK and I am able to connect through DBeaver.
>>
>> Tried a lot of things and cannot connect with cqlsh:
>>
>> Connection error: ('Unable to connect to any servers', {'x.x.x.x':
>> UnicodeDecodeError('utf8', '+\x00\x00H\x08\x00\xf0$+\x00\
>> x00\x10\x7f-\xe7S+\x00\x00`B\xb3\xe5S', 6, 7, 'invalid continuation
>> byte')})
>>
>> Versions
>> 
>> $ pip freeze | grep cas
>> cassandra-driver==3.6.0
>> cassandra-driver-dse==1.0.3
>>
>> $ python
>> Python 2.7.5 (default, Nov 20 2015, 02:00:19)
>> [GCC 4.8.5 20150623 (Red Hat 4.8.5-4)] on linux2
>> Type "help", "copyright", "credits" or "license" for more information.
>>
>> $ cat /etc/redhat-release
>> CentOS Linux release 7.2.1511 (Core)
>>
>> $ which cqlsh (DSE's cqlsh client)
>> /opt/dse/bin/cqlsh
>>
>> Tried also
>> $ export CQLSH_NO_BUNDLED=false
>>
>> Also tried
>> --
>> Tried to install the cqlsh client via pip on a fresh, clean box. I ended
>> up with the latest cassandra-driver 3.7.1
>>
>> $ pip freeze | grep cas
>> cassandra-driver==3.7.1
>>
>> $ pip freeze | grep cql
>> cql==1.4.0
>> cqlsh==5.0.3
>>
>> And got this:
>> Connection error: ('Unable to connect to any servers', {'x.x.x.x':
>> ProtocolError("cql_version '3.3.1' is not supported by remote (w/ native
>> protocol). Supported versions: [u'3.4.0']",)})
>>
>> Tried to force it:
>> $ cqlsh XX.XX.XX.XX --cqlversion="3.4.0"
>> Got the original-first error message:
>> Connection error: ('Unable to connect to any servers', {'x.x.x.x':
>> UnicodeDecodeError('utf8', '+\x00\x00H\x08\x00\xf0$+\x00\
>> x00\x10\x7f-\xe7S+\x00\x00`B\xb3\xe5S', 6, 7, 'invalid continuation
>> byte')})
>>
>> At some point I got this message too, but I don't remember what I did:
>> Connection error: ('Unable to connect to any servers', {'x.x.x.x':
>> DriverException(u'Failed decoding result column "table_name" of type
>> varchar: ',)})
>>
>> Thank you in advance for your help,
>> John Z


Re: cqlsh fails to connect

2016-10-28 Thread Ioannis Zafiropoulos
Hi Rajesh,

I just tried Python 2.7.11 & 2.7.12 and I get the same error, 'invalid
continuation byte'.

On Fri, Oct 28, 2016 at 11:53 AM, Rajesh Radhakrishnan <
rajesh.radhakrish...@phe.gov.uk> wrote:

>
> Hi John Z,
>
> Did you try running with the latest Python 2.7.11 or 2.7.12?
>
> Kind regards,
> Rajesh Radhakrishnan
>
> ------
> *From:* Ioannis Zafiropoulos [john...@gmail.com]
> *Sent:* 27 October 2016 22:16
> *To:* user@cassandra.apache.org
> *Subject:* cqlsh fails to connect
>
> I upgraded DSE 4.8.9 to 5.0.3, that is, from Cassandra 2.1.11 to 3.0.9
> I used DSE 5.0.3 tarball installation. Cassandra cluster is up and running
> OK and I am able to connect through DBeaver.
>
> Tried a lot of things and cannot connect with cqlsh:
>
> Connection error: ('Unable to connect to any servers', {'x.x.x.x':
> UnicodeDecodeError('utf8', '+\x00\x00H\x08\x00\xf0$+\x00\
> x00\x10\x7f-\xe7S+\x00\x00`B\xb3\xe5S', 6, 7, 'invalid continuation
> byte')})
>
> Versions
> 
> $ pip freeze | grep cas
> cassandra-driver==3.6.0
> cassandra-driver-dse==1.0.3
>
> $ python
> Python 2.7.5 (default, Nov 20 2015, 02:00:19)
> [GCC 4.8.5 20150623 (Red Hat 4.8.5-4)] on linux2
> Type "help", "copyright", "credits" or "license" for more information.
>
> $ cat /etc/redhat-release
> CentOS Linux release 7.2.1511 (Core)
>
> $ which cqlsh (DSE's cqlsh client)
> /opt/dse/bin/cqlsh
>
> Tried also
> $ export CQLSH_NO_BUNDLED=false
>
> Also tried
> --
> Tried to install the cqlsh client via pip on a fresh, clean box. I ended up
> with the latest cassandra-driver 3.7.1
>
> $ pip freeze | grep cas
> cassandra-driver==3.7.1
>
> $ pip freeze | grep cql
> cql==1.4.0
> cqlsh==5.0.3
>
> And got this:
> Connection error: ('Unable to connect to any servers', {'x.x.x.x':
> ProtocolError("cql_version '3.3.1' is not supported by remote (w/ native
> protocol). Supported versions: [u'3.4.0']",)})
>
> Tried to force it:
> $ cqlsh XX.XX.XX.XX --cqlversion="3.4.0"
> Got the original-first error message:
> Connection error: ('Unable to connect to any servers', {'x.x.x.x':
> UnicodeDecodeError('utf8', '+\x00\x00H\x08\x00\xf0$+\x00\
> x00\x10\x7f-\xe7S+\x00\x00`B\xb3\xe5S', 6, 7, 'invalid continuation
> byte')})
>
> At some point I got this message too, but I don't remember what I did:
> Connection error: ('Unable to connect to any servers', {'x.x.x.x':
> DriverException(u'Failed decoding result column "table_name" of type
> varchar: ',)})
>
> Thank you in advance for your help,
> John Z


cqlsh fails to connect

2016-10-27 Thread Ioannis Zafiropoulos
I upgraded DSE 4.8.9 to 5.0.3, that is, from Cassandra 2.1.11 to 3.0.9.
I used the DSE 5.0.3 tarball installation. The Cassandra cluster is up and
running OK, and I am able to connect through DBeaver.

Tried a lot of things and cannot connect with cqlsh:

Connection error: ('Unable to connect to any servers', {'x.x.x.x':
UnicodeDecodeError('utf8',
'+\x00\x00H\x08\x00\xf0$+\x00\x00\x10\x7f-\xe7S+\x00\x00`B\xb3\xe5S', 6, 7,
'invalid continuation byte')})

Versions

$ pip freeze | grep cas
cassandra-driver==3.6.0
cassandra-driver-dse==1.0.3

$ python
Python 2.7.5 (default, Nov 20 2015, 02:00:19)
[GCC 4.8.5 20150623 (Red Hat 4.8.5-4)] on linux2
Type "help", "copyright", "credits" or "license" for more information.

$ cat /etc/redhat-release
CentOS Linux release 7.2.1511 (Core)

$ which cqlsh (DSE's cqlsh client)
/opt/dse/bin/cqlsh

Tried also
$ export CQLSH_NO_BUNDLED=false

Also tried
--
Tried to install the cqlsh client via pip on a fresh, clean box. I ended up
with the latest cassandra-driver 3.7.1

$ pip freeze | grep cas
cassandra-driver==3.7.1

$ pip freeze | grep cql
cql==1.4.0
cqlsh==5.0.3

And got this:
Connection error: ('Unable to connect to any servers', {'x.x.x.x':
ProtocolError("cql_version '3.3.1' is not supported by remote (w/ native
protocol). Supported versions: [u'3.4.0']",)})

Tried to force it:
$ cqlsh XX.XX.XX.XX --cqlversion="3.4.0"
Got the original-first error message:
Connection error: ('Unable to connect to any servers', {'x.x.x.x':
UnicodeDecodeError('utf8',
'+\x00\x00H\x08\x00\xf0$+\x00\x00\x10\x7f-\xe7S+\x00\x00`B\xb3\xe5S', 6, 7,
'invalid continuation byte')})

At some point I got this message too, but I don't remember what I did:
Connection error: ('Unable to connect to any servers', {'x.x.x.x':
DriverException(u'Failed decoding result column "table_name" of type
varchar: ',)})

Thank you in advance for your help,
John Z