Hello,
Using withLocalDC="myLocalDC" and withUsedHostsPerRemoteDc>0 will guarantee
that you will connect to one of the nodes in "myLocalDC",
but it DOES NOT guarantee that your read/write request will be acknowledged
by a "myLocalDC" node. It may well be acknowledged by a remote DC node as
well, even
I am getting this error repeatedly while I am trying to add a new DC
consisting of one node in AWS to my existing cluster. I have tried 5 times
already. Running Cassandra 2.1.13
I have also set:
streaming_socket_timeout_in_ms: 360
in all of my nodes
Does anybody have any idea how this can be fixed?
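For reference, streaming_socket_timeout_in_ms is expressed in milliseconds, so a value of 360 gives up after only 0.36 seconds; long-running streams generally need a far larger value. A hedged cassandra.yaml sketch (the value shown is an assumption, matching the one-day default adopted in later 2.1.x releases):

```yaml
# cassandra.yaml -- streaming socket timeout, in milliseconds.
# 360 means 0.36 s, which will abort almost any bulk stream;
# 86400000 (24 hours) is the default adopted in later 2.1.x releases.
streaming_socket_timeout_in_ms: 86400000
```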
> Can you check the
> cassandra-rackdc.properties of this new node?
>
> https://issues.apache.org/jira/browse/CASSANDRA-8279
>
>
> Regards,
> Mike Yeap
>
> On Wed, May 25, 2016 at 2:31 PM, George Sigletos
> wrote:
>
>> I am getting this error repeatedly while I am
Hello,
Here is additional stack trace from system.log:
ERROR [STREAM-IN-/192.168.1.140] 2016-05-24 22:44:57,704
StreamSession.java:620 - [Stream #2c290460-20d4-11e6-930f-1b05ac77baf9]
Remote peer 192.168.1.140 failed stream session.
ERROR [STREAM-OUT-/192.168.1.140] 2016-05-24 22:44:57,705
Stream
> This is the log of the destination/rebuilding node, you need to check what
> is the error message on the stream source node (192.168.1.140).
>
>
> 2016-05-25 15:22 GMT-03:00 George Sigletos :
>
>> Hello,
>>
>> Here is additional stack trace from system.log:
>> streaming_socket_timeout_in_ms, the new default will be 86400000 ms (1
>> day).
>>
>> We are addressing this on
>> https://issues.apache.org/jira/browse/CASSANDRA-11839.
>>
>> 2016-05-25 16:42 GMT-03:00 George Sigletos :
>>
>>> Hello again,
>
cs.datastax.com/en/cassandra/2.0/cassandra/troubleshooting/trblshootIdleFirewall.html
>
> This will ultimately be fixed by CASSANDRA-11841, which adds keep-alive to
> the streaming protocol.
>
> 2016-05-25 18:09 GMT-03:00 George Sigletos :
>
>> Thanks a lot for your help. I will try that t
ssages
> on [STREAM-OUT-/192.168.1.141] and [STREAM-IN-/172.31.22.104] ?
>
> > Streaming does not seem to be resumed again from this node. Shall I just
> kill again the entire rebuild process?
>
> Yes, resumable rebuild will be supported on CASSANDRA-10810.
>
> 2016-05-26 8
:05 PM, George Sigletos
wrote:
> The time the first streaming failure occurs varies from a few hours to 1+
> day.
>
> We also experience slowness problems with the destination node on Amazon.
> Rebuild is slow. That may also contribute to the problem.
>
> Unfortunately we only
> the new nodes using 2.1.13, and upgrade after.
>
> On Fri, May 27, 2016 at 8:41 AM, George Sigletos
> wrote:
>
> >>>> ERROR [STREAM-IN-/192.168.1.141] 2016-05-26 09:08:05,027
> >>>> StreamSession.java:505 - [Stream
> #74c57bc0-231a-11e6-a698-1b05ac77
ng versions 2.1.13 and 2.1.14. Topology changes like this aren't
>> supported with mixed Cassandra versions. Sometimes it will work,
>> sometimes it won't (and it will definitely not work in this instance).
>>
>> You should either upgrade your 2.1.13 no
(ConnectionHandler.java:257)
~[apache-cassandra-2.1.14.jar:2.1.14]
at java.lang.Thread.run(Unknown Source) [na:1.7.0_79]
INFO [SharedPool-Worker-1] 2016-05-28 17:54:59,612 Gossiper.java:993 -
InetAddress /54.172.235.227 is now UP
On Fri, May 27, 2016 at 5:37 PM, George Sigletos
wrote:
>
, George Sigletos
wrote:
> No luck unfortunately. It seems that the connection to the destination
> node was lost.
>
> However there was progress compared to the previous times. A lot more data
> was streamed.
>
> (From source node)
> INFO [GossipTasks:1] 2016-05-28 17:53:5
lastpickle.com
> France
>
> The Last Pickle - Apache Cassandra Consulting
> http://www.thelastpickle.com
>
> 2016-05-20 17:54 GMT+02:00 George Sigletos :
>
>> Hello,
>>
>> Using withLocalDC="myLocalDC" and withUsedHostsPerRemoteDc>0 will
>> guarantee
I am also getting the same error:
cqlsh -u cassandra -p cassandra
Connection error: ('Unable to connect to any servers', {'':
OperationTimedOut('errors=Timed out creating connection (5 seconds),
last_host=None',)})
But it is not consistent. Sometimes I manage to connect. It is random.
Using 2.1.
.jar:na]
at com.google.common.cache.LocalCache$Segment.get(LocalCache.java:2195)
~[guava-16.0.jar:na]
On Tue, Sep 20, 2016 at 11:12 AM, George Sigletos
wrote:
> I am also getting the same error:
> cqlsh -u cassandra -p cassandra
>
> Connection error: ('Unable to conne
.jar:na]
... 23 common frames omitted
On Tue, Sep 20, 2016 at 11:22 AM, George Sigletos
wrote:
> This appears in the system log:
>
> Caused by: java.lang.RuntimeException:
> org.apache.cassandra.exceptions.ReadTimeoutException:
> Operation timed out - received only 2 resp
Hello,
I keep executing a TRUNCATE command on an empty table and it throws
OperationTimedOut randomly:
cassandra@cqlsh> truncate test.mytable;
OperationTimedOut: errors={}, last_host=cassiebeta-01
cassandra@cqlsh> truncate test.mytable;
OperationTimedOut: errors={}, last_host=cassiebeta-01
Havin
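As background, TRUNCATE is gated by its own server-side timeout, separate from the read/write timeouts, because every replica must flush and snapshot. A hedged cassandra.yaml sketch (the raised value is illustrative):

```yaml
# cassandra.yaml -- timeout applied to TRUNCATE operations.
# The default is 60000 ms; raise it if truncations time out even
# though the cluster is otherwise healthy.
truncate_request_timeout_in_ms: 300000
```

Note that cqlsh also applies its own client-side timeout, so the client can report OperationTimedOut before the server-side limit is reached.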
control when this call will timeout, it is
> fairly normal that it does!
>
>
> On Wed, Sep 28, 2016 at 12:50 PM, George Sigletos
> wrote:
>
>> Hello,
>>
>> I keep executing a TRUNCATE command on an empty table and it throws
>> OperationTimedOut
see?
>
> Cheers,
>
> Joaquin Casares
> Consultant
> Austin, TX
>
> Apache Cassandra Consulting
> http://www.thelastpickle.com
>
> On Wed, Sep 28, 2016 at 12:43 PM, George Sigletos
> wrote:
>
>> Thanks a lot for your reply.
>>
>> I understand that trunca
Even when I set a lower request-timeout in order to trigger a timeout,
still no WARN or ERROR appears in the logs.
On Wed, Sep 28, 2016 at 8:22 PM, George Sigletos
wrote:
> Hi Joaquin,
>
> Unfortunately neither WARN nor ERROR found in the system logs across the
> cluster when execut
Hello,
I tried to upgrade two of our clusters from 2.1.8 to 2.1.9. On some, but
not all, nodes I got errors about corrupt sstables when restarting. I
downgraded back to 2.1.8 for now.
Has anybody else faced the same problem? Should sstablescrub fix it? I
haven't tried that yet.
Kind regards,
Hello again and sorry for the late response,
Still having problems with upgrading from 2.1.8 to 2.1.9.
I decided to start the problematic nodes with "disk_failure_policy:
best_effort"
Currently running "nodetool scrub "
Then removing the corrupted sstables and planning to run repair afterwards
I'm also facing problems with corrupt sstables and couldn't run
sstablescrub successfully either.
I restarted my nodes with disk failure policy "best_effort", then ran
"nodetool scrub ".
Once done, I removed the corrupt sstables manually and started a repair.
On Thu, Oct 1, 2015 at 7:27 PM, J
Hello,
I have been frequently receiving these warnings:
java.lang.IllegalArgumentException: Mutation of 35141120 bytes is too large
for the maxiumum size of 33554432
at org.apache.cassandra.db.commitlog.CommitLog.add(CommitLog.java:221)
~[apache-cassandra-2.1.9.jar:2.1.9]
at org.apache.cassa
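For context, the 33554432-byte limit in that message is, assuming the usual half-segment rule, half of the commit log segment size: a single mutation must fit in half a segment. A hedged cassandra.yaml sketch:

```yaml
# cassandra.yaml -- a single mutation must fit in half a commit log
# segment; a 33554432-byte (32 MiB) cap implies a 64 MiB segment size.
# Raising the segment size raises the cap, though splitting oversized
# batches is usually the better fix.
commitlog_segment_size_in_mb: 128   # allows mutations up to 64 MiB
```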
REQUEST_RESPONSE 0
COUNTER_MUTATION 0
On Tue, Oct 6, 2015 at 5:35 PM, Kiran mk wrote:
> Do you see more dropped mutation messages in nodetool tpstats output.
> On Oct 6, 2015 7:51 PM, "George Sigletos" wrote:
>
>> Hello,
>>
>> I have bee
Hello,
We would like to migrate one keyspace from a 6-node cluster to a 3-node one.
Since no individual node contains all the data, this means we should run
sstableloader 6 times, once for each node of our cluster.
To be precise, do "nodetool flush " then run sstableloader -d <3
targ
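The flush-then-load loop described above can be sketched as a dry-run shell script; the hostnames, keyspace, and table directory are placeholders, and the script only prints the commands so they can be reviewed before anything is executed:

```shell
#!/bin/sh
# Dry-run sketch: print the flush + sstableloader command for each of
# the 6 source nodes. All hosts and paths below are placeholders.
KEYSPACE="mykeyspace"
TARGETS="10.0.0.1,10.0.0.2,10.0.0.3"   # the 3 nodes of the new cluster
DATA_DIR="/var/lib/cassandra/data"

print_commands() {
  for SRC in node1 node2 node3 node4 node5 node6; do
    echo "ssh $SRC nodetool flush $KEYSPACE"
    echo "ssh $SRC sstableloader -d $TARGETS $DATA_DIR/$KEYSPACE/mytable"
  done
}

print_commands
```

Removing the `echo`s (or piping the output to `sh`) would execute the commands for real, one source node at a time.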
te the json of your keyspace and then load
> this json to your keyspace in new cluster using json2sstable utility.
>
> On Tue, Dec 1, 2015 at 3:06 AM, Robert Coli wrote:
>
>> On Thu, Nov 19, 2015 at 7:01 AM, George Sigletos
>> wrote:
>>
>>> We would like to migr
Hello,
We had a similar problem where we needed to migrate data from one cluster
to another.
We ended up using Spark to accomplish this. It is fast and reliable,
though some downtime was still required.
We minimized the downtime by doing a first full run, followed by
incremental updates.
Kind regards,
On Mon, Dec 21, 2015 at 12:53 PM, Noorul Islam K M
wrote:
> George Sigletos writes:
>
> > Hello,
> >
> > We had a similar problem where we needed to migrate data from one cluster
> > to another.
> >
> > We ended up using Spark to accomplish this. It
Unfortunately DataStax decided to discontinue OpsCenter for open-source
Cassandra, starting from version 2.2.
Pity.
On Wed, Jan 6, 2016 at 6:00 PM, Michael Shuler
wrote:
> On 01/06/2016 10:55 AM, Michael Shuler wrote:
> > On 01/06/2016 01:47 AM, Wills Feng wrote:
> >> Looks like opscenter doesn
Hello,
I am trying to change the IP of a live node (I am not replacing a dead
one).
So I stop the service on my node (not a seed node), I change the IP from
192.168.xx.xx to 10.179.xx.xx, and modify "listen_address" and
"rpc_address" in the cassandra.yaml, while I also set auto_bootstrap:
false.
To give a complete picture, my node has actually two network interfaces:
eth0 for 192.168.xx.xx and eth1 for 10.179.xx.xx
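The changes described above can be sketched as a cassandra.yaml fragment; the addresses are placeholders for the real eth0/eth1 addresses:

```yaml
# cassandra.yaml on the node whose IP is changing (not a seed node).
# auto_bootstrap: false prevents the restarted node from trying to
# re-stream data it already owns.
listen_address: 10.179.0.10   # new eth1 address (was 192.168.0.10 on eth0)
rpc_address: 10.179.0.10
auto_bootstrap: false
```

As the replies in this thread note, other nodes' system tables may still reference the old address, so gossip can take a while to converge on the new one.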
On Tue, Mar 14, 2017 at 7:46 PM, George Sigletos
wrote:
> Hello,
>
> I am trying to change the IP of a live node (I am not replacing a dead
> one).
>
> So
> wrote:
>
>> Cassandra uses the IP address for more or less everything. It's possible
>> to change it through some hackery however probably not a great idea. The
>> nodes system tables will still reference the old IP which is likely your
>> problem here.
>&g
Hello,
We recently added a new datacenter to our cluster and ran "nodetool rebuild
-- " on all 5 new nodes, one by one.
After this process finished, we noticed that data was missing from the new
datacenter, although it exists in the current one.
How could that be possible? Should I maybe have run
have not finished.
> what does nodetool netstats output for your newly built up nodes?
>
> br,
> roland
>
>
> On Mon, 2017-04-10 at 17:15 +0200, George Sigletos wrote:
>
> Hello,
>
> We recently added a new datacenter to our cluster and run "nodetool
>
case a full repair fixed the issues.
> but no doubt .. it would be more satisfying to know the root cause for
> that issue
>
> br,
> roland
>
>
> On Mon, 2017-04-10 at 19:12 +0200, George Sigletos wrote:
>
> In 3 out of 5 nodes of our new DC the rebuild process finished