listening for CQL clients
One of the hint files was corrupted, which is a known bug:
https://issues.apache.org/jira/browse/CASSANDRA-12728
It was resolved in versions 3.0.14 and 3.11.0.
Removing the corrupt file solved the issue for me.
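In case it helps anyone hitting the same bug, a minimal cleanup sketch, simulated in a throwaway temp directory (assumptions: hint files end in .hints, named <host-id>-<timestamp>-<version>.hints, and live under the node's hints_directory, data/hints by default; on a real node you would drain and stop Cassandra first):

```shell
# Simulated cleanup of a corrupt hints file; a temp dir stands in for the
# real hints_directory so this is safe to run anywhere.
HINTS_DIR="$(mktemp -d)"
touch "$HINTS_DIR/aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee-1490000000-1.hints"
# Remove the corrupt file(s); Cassandra re-creates hints as needed.
rm -f "$HINTS_DIR"/*.hints
```

On the real node, run `nodetool drain`, stop Cassandra, inspect the hints directory with `ls` before deleting, then restart.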
On Wed, Jul 26, 2017 at 10:30 AM, Junaid Nasir <jna...@an10.io> wrote:
I have a small 3-node C* + Spark cluster. When I run any query on Spark, it
gives me a connection refused error on 2 C* nodes, which puts all the
pressure on a single node, resulting in bad performance. Below is the error
from spark-submit:
17/07/25 12:00:22 INFO Cluster: New Cassandra host
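As an aside, the connector's contact points can be pinned at submit time with the spark-cassandra-connector's `spark.cassandra.connection.host` property; the node IPs and job name below are assumptions for illustration, and the connector still needs to be able to reach every node it discovers:

```shell
# Hypothetical submit line; adjust hosts and job name to your cluster.
spark-submit \
  --conf spark.cassandra.connection.host=10.128.1.1,10.128.1.2,10.128.1.3 \
  my_job.py
```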
I have a C* cluster (3 nodes) with some 60 GB of data (replication factor 2).
When I started using C*, coming from an SQL background, I didn't give much
thought to modeling the data correctly. So what I did was:
CREATE TABLE data ( deviceId int,
time timestamp,
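For what it's worth, one common adjustment to a schema like the fragment above (a sketch; the table and column names are assumed): adding a time bucket to the partition key keeps any single partition from growing without bound as a device keeps reporting.

```sql
-- Hypothetical remodel: the extra 'day' bucket bounds partition size and
-- lets reads target a (deviceId, day) partition directly.
CREATE TABLE data_by_device_day (
    deviceId int,
    day      date,       -- time bucket derived from the event timestamp
    time     timestamp,
    PRIMARY KEY ((deviceId, day), time)
) WITH CLUSTERING ORDER BY (time DESC);
```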
nge do you want to take all devices or a specific device?
>
>
> On Jul 26, 2017 3:15 PM, "Junaid Nasir" <jna...@an10.io> wrote:
>
> I have a C* cluster (3 nodes) with some 60gb data (replication factor 2).
> when I started using C* coming from SQL background didn't give
Each node has its own IP for listen_address and rpc_address. The seed node IP
is fixed to 10.128.1.1 on all nodes. The configuration was written using
Ansible and I have also verified it.
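For reference, that setup corresponds to a cassandra.yaml fragment along these lines on each node (10.128.1.2 here stands for the node's own IP, an assumption based on the addresses in this thread):

```yaml
# cassandra.yaml fragment (per node)
listen_address: 10.128.1.2      # this node's own IP
rpc_address: 10.128.1.2
seed_provider:
  - class_name: org.apache.cassandra.locator.SimpleSeedProvider
    parameters:
      - seeds: "10.128.1.1"     # the same seed on every node
```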
On Wed, Jul 26, 2017 at 3:52 AM, Erik Forkalsud <eforkals...@cj.com> wrote:
> On 07/25/2017 05:13 AM, Jun
Cassandra ensures that adding or removing nodes is very easy and that load
is balanced between nodes when a change is made, but it's not working in my
case.
I have a single-node C* deployment (with 270 GB of data) and want to load
balance the data across multiple nodes. I followed this guide
> [StreamReceiveTask:1] 2017-05-31 10:07:23,707
> StreamResultFuture.java:219 - [Stream
> #f0eb9670-45e8-11e7-a17b-81e9a18c6eac] All sessions completed
On Wed, May 31, 2017 at 2:50 PM, Junaid Nasir <jna...@an10.io> wrote:
> Cassandra version is 3.10, and yes, it's not a production server. I hav
<oleksandr.shul...@zalando.de> wrote:
> On Wed, May 31, 2017 at 9:55 AM, Junaid Nasir <jna...@an10.io> wrote:
>
>> Cassandra ensure that adding or removing nodes are very easy and that
>> load is balanced between nodes when a change is made. but it's not working
>> i
as mentioned here
http://www.datastax.com/dev/blog/cassandra-2-2-3-0-and-beyond
> Under normal conditions, we will NOT release 3.x.y stability releases for
> x > 0. That is, we will have a traditional 3.0.y stability series, but the
> odd-numbered bugfix-only releases will fill that role for the
g to node after tokens
> changed.
>
> Best regards, Vladimir Yudovin,
> *Winguzone <https://winguzone.com?from=list> - Cloud Cassandra Hosting*
>
>
> On Wed, 31 May 2017 03:55:54 -0400, *Junaid Nasir <jna...@an10.io>* wrote:
>
> Cass
adcast_address/. I am surprised that your node
> started up with rpc_broadcast_address set as this is an unsupported
> property. I am assuming you are using Cassandra version 3.10.
>
>
> Regards,
> Akhil
>
> On 2/06/2017, at 11:06 PM, Junaid Nasir <jna...@an10.io> wrote:
name.
>
> Regards,
> Akhil
>
> > 10.128.1.2
> > On 31/05/2017, at 10:15 PM, Junaid Nasir <jna...@an10.io> wrote:
> >
> > after nodetool repair on new node following log appears in cassandra log
> >
> > INFO [StreamConnectionEstablisher:1] 20
No, I didn't set it (left it at the default value).
On Fri, Jun 9, 2017 at 3:18 AM, ZAIDI, ASAD A <az1...@att.com> wrote:
> Did you make sure auto_bootstrap property is indeed set to [true] when
> you added the node?
>
>
>
> *From:* Junaid Nasir [mailto:jna...@an10.io]
> *S
2 AM Akhil Mehra <akhilme...@gmail.com> wrote:
>>
>>> auto_bootstrap is true by default. Ensure it's set to true. On startup
>>> look at your logs for your auto_bootstrap value. Look at the node
>>> configuration line in your log file.
>>>
>>> Akhil
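For reference, a cassandra.yaml sketch: auto_bootstrap does not appear in the stock file and defaults to true when absent, so it only needs to be written out when overriding it. A new node needs it true to stream existing data from the ring.

```yaml
# cassandra.yaml — auto_bootstrap defaults to true when the key is absent.
# Add it explicitly only to override the default.
auto_bootstrap: true
```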
We are building an IoT platform where time-series data from millions of devices
is to be collected and then used to do some analytics pertaining to Business
Intelligence/Analytics (BI/BA).
Within the above context, we are running into the issue of having range-based
queries, where the granularity of
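On the range-query point above: a common Cassandra pattern is to bucket the partition key by time so that a range read stays within one (or a few) partitions. A hypothetical sketch, assuming a table partitioned by (deviceId, day) with time as the clustering column (the table and column names and values are all assumptions):

```sql
-- Hypothetical range read against a time-bucketed table: one device, one
-- day bucket, a six-hour slice. Wider ranges fan out over the day buckets.
SELECT * FROM readings_by_device_day
WHERE deviceId = 42
  AND day = '2017-07-01'
  AND time >= '2017-07-01 00:00:00'
  AND time <  '2017-07-01 06:00:00';
```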