My approach is the obvious one: take a big outage window, especially since
at work we are using 1.2 with token ranges. I am generally a believer that
(1) patches should be applied, but (2) we routinely replace each host with
a new EC2 instance, so (1) I know my infrastructure code (puppet/chef/ansible/salt
We run on both ephemeral and persistent storage on AWS. Ephemeral storage
is the local storage attached to the server host. We don't have extreme
write & read load, so EBS is fine.
If you ever shut down the EC2 instance, your data is guaranteed to be gone
because AWS moves your VM to another host after every
> If an sstable which is part
> of the compaction is corrupt, then compaction kind of hangs.
> Try to run nodetool scrub ...
>
> On 8 August 2016 at 09:34, John Wong <gokoproj...@gmail.com> wrote:
>
On Mon, Aug 8, 2016 at 11:16 AM, Surbhi Gupta
wrote:
> Once you restart a node, compaction will start automatically. If you
> don't want that, run
> nodetool disableautocompaction as soon as the node is started.
>
>
Thanks. I certainly can give that a try for the
Hi
We have a compaction stuck. No progress ever made.
nodetool compactionstats
pending tasks: 2
   compaction type   keyspace   column family     completed   total       unit    progress
   Compaction                   my_columnfamily   0           410383527   bytes   0.00%
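The advice in the thread above (scrub the corrupt sstable, and disable autocompaction right after restart) boils down to a few nodetool invocations; a sketch, where the keyspace and table names are placeholders:

```shell
# Right after the node is back up, stop compactions from starting on their own:
nodetool disableautocompaction

# Rebuild any corrupt sstables for the affected table
# (my_keyspace / my_columnfamily are placeholders):
nodetool scrub my_keyspace my_columnfamily

# Once scrub finishes cleanly, let compaction resume:
nodetool enableautocompaction

# Verify nothing is stuck anymore:
nodetool compactionstats
```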
Even if you can upgrade Cassandra straight from 1.2 to 3.X, also consider
driver compatibility.
On Thu, Mar 31, 2016 at 2:43 PM, Tony Anecito wrote:
> I would also like to know.
>
> Thanks!
>
>
> On Thursday, March 31, 2016 6:14 AM, Steven Choo
> wrote:
On Mon, Mar 14, 2016 at 6:13 PM, Robert Coli wrote:
> On Mon, Mar 14, 2016 at 11:38 AM, Rajath Subramanyam
> wrote:
>
>> When the cassandra-stress tool dumps its output at the end of the
>> benchmarking run, what is the unit of the latency statistics?
>>
>
>
I believe K is asking whether he can sync up the clock now because he's
concerned about losing data, as 30-40 seconds is pretty bad.
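Before deciding whether to step the clock, it is worth measuring the actual offset; a sketch using standard NTP tooling (assuming ntpq/ntpdate are installed on the node):

```shell
# Show this node's NTP peers; the "offset" column is in milliseconds,
# so a 30-40 second skew shows up as roughly 30000-40000.
ntpq -p

# One-shot query against a reference server without touching the clock:
ntpdate -q pool.ntp.org
```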
On Sat, Mar 12, 2016 at 8:57 AM, Spencer Brown wrote:
> NTP gradually speeds or slows the clock to arrive at actual time. So the
> clock
For production I'd stick with ephemeral disks (aka instance storage) if you
are running a lot of transactions.
However, for a regular small testing/qa cluster, or something you know you
want to reload often, EBS is definitely good enough and we haven't had
issues 99% of the time. The 1% is kind of an anomaly where
Hi.
We are using the open source version of OpsCenter. We find it useful, but
the disk space for OpsCenter metrics has been increasing and can sometimes
grow to 30-50G in a matter of a day or two. We do have a lot of
keyspaces and column families.
Usually this dev cluster is quiet on the
Hi
I am on Cassandra 1.2.19 and we are running OpsCenter 5.1.4 (and agent
5.1.4).
We recently started to observe high load occurring in our dev cluster.
There were some small spikes in the number of requests going into the
cluster, so we are not ruling out that possibility.
What we see is we
Obviously you will get a better answer from someone directly with
DataStax... but IMO, I would look to either
* use a package manager like apt or yum; they are usually up to date if you
use the ppa route.
* keep the package locally in your own infrastructure. I have had mirror
issues or content
Hi.
We are running Cassandra 1.2.19, and we are an AWS customer, so we store
our data in ephemeral storage.
If we recreate an instance with the same IP, what is the best way to get
the node up and running with the previous data? Right now I am relying on
backup.
I was hoping that we can stream the
Hi.
I am using Cassandra 1.2.15 and OpsCenter 5.0.2 (non-enterprise). We are
planning to upgrade in the upcoming week.
I found some old opscenter tables still lurking around.
/mnt/data/OpsCenter# du -h
68K     ./events_timeline
18M     ./rollups86400
172K    ./pdps
269M    ./rollups7200
68K
On Mon, Sep 21, 2015 at 8:32 PM, Shenghua(Daniel) Wan wrote:
> Hi,
> When a node is dead, is it supposed to exist in the ring?
>
It is still considered part of the cluster. Imagine a case where you do a
rolling restart: the node would be temporarily out of service for
On Fri, Sep 18, 2015 at 3:02 PM, Sanjay Baronia <
sanjay.baro...@triliodata.com> wrote:
>
> Will be at the Cassandra summit next week if any of you would like a demo.
>
>
>
Sanjay, is Trilio Data's work private? Unfortunately I will not attend the
Summit, but maybe Trilio can also talk about
the initial confusion with
> vnodes vs non-vnodes, so you should be safe to follow the procedure
> described there. Make sure to set the
> -Dcassandra.replace_address=address_of_dead_node JVM option (don't worry
> about the initial token).
>
> 2015-09-17 21:21 GMT-03:00 John Wong <g
Hi
Can the community help confirm that
http://docs.datastax.com/en/cassandra/1.2/cassandra/operations/ops_replace_node_t.html
will work for a non-vnode cluster in Cassandra 1.2?
It looks like I don't have to set the initial token for the replacement
node (same IP) at all if I run the JVM
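For reference, the JVM option from the linked procedure is usually set in conf/cassandra-env.sh on the replacement node before its first start; a sketch, with a placeholder IP:

```shell
# conf/cassandra-env.sh on the replacement node (data directories empty).
# 10.0.1.100 is a placeholder for the dead node's IP:
JVM_OPTS="$JVM_OPTS -Dcassandra.replace_address=10.0.1.100"

# Remove the option again once the node has bootstrapped and joined the ring.
```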
Hi
We monitor the number of pending tasks, and in our dev cluster (three
nodes), for two out of three nodes the number of pending tasks did not
decrease until I restarted Cassandra on those two nodes. This is not the
first time I have seen this behavior. It is Saturday today and there is
very little traffic.
Hi
I am not familiar with Azkaban, and this is probably a better question for
the Azkaban community IMO. But there seem to be two modes (
http://azkaban.github.io/azkaban/docs/2.5/): one is solo and one is
two-server mode, but either way I think it is still a SPOF? If there is no
election, just based on process,
Sorry, I meant integration with Cassandra (based on the docs, by default it
suggests MySQL).
On Sunday, August 16, 2015, John Wong gokoproj...@gmail.com wrote:
There is no leader in Cassandra. I suggest you ask the Azkaban community
about integration with Azkaban and Azkaban HA.
On Sunday, August
instance for that matter on that
Cassandra server. I'm thinking that I can copy the application folder to
all nodes and then determine which one to run using ZooKeeper. Is that
possible?
On Sun, Aug 16, 2015 at 6:47 AM -0700, John Wong gokoproj
The EC2 nodes must be in the default VPC.
Did you really mean the default VPC created by AWS, or just a VPC? Because
I would be very surprised if the default VPC must be used.
On Sat, Aug 15, 2015 at 2:50 AM, Prem Yadav ipremya...@gmail.com wrote:
The EC2 nodes must be in the default VPC.
create a
Use VPC peering rather than VPN; it's more reliable.
On Tue, Aug 11, 2015 at 5:14 PM, Prem Yadav ipremya...@gmail.com wrote:
1) There are ways to connect two VPCs using VPN.
2) About the connectivity using public IPs: can you ping one public IP
from another one in a different region?
If ping
at 6:22 PM, John Wong gokoproj...@gmail.com wrote:
What other benefits can Ec2Snitch provide in a single-region, multi-AZ
AWS deployment, besides automatically setting the DC and rack for you as
the snitch reads from EC2 metadata?
Obviously there is a concern about what happens if something goes wrong
Hi
Based on
*
http://docs.datastax.com/en/cassandra/2.0/cassandra/architecture/architectureSnitchEC2_t.html
*
http://docs.datastax.com/en/cassandra/2.0/cassandra/architecture/architectureSnitchGossipPF_c.html
What other benefits can Ec2Snitch provide in a single-region, multi-az AWS
deployment
Hi Sebastian
We recently encountered a similar issue. At least, we observed the agent
constantly dying with OOM. Unfortunately, we are still on 1.2.X and it
will be a while before we can fully move to the Cassandra 2 series.
Is there a backport patch to fix the OOM in the OpsCenter 5.1 branch? Please let
Hi all
Quick questions.
I was auditing the disk usage of my cluster (Cassandra 1.2.19). I found a
node with 27G worth of snapshots in the OpsCenter data directory. I don't
remember taking any snapshots...
I do run nodetool repair -pr every night, so they might be created by the
repair process.
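To confirm where the space is going and reclaim it, something like the following should work (a sketch: the /mnt/data path comes from an earlier message in this digest, and "OpsCenter" as the keyspace name is an assumption):

```shell
# See how much space snapshots occupy per column family:
du -sh /mnt/data/OpsCenter/*/snapshots

# Drop all snapshots for the OpsCenter keyspace on this node:
nodetool clearsnapshot OpsCenter
```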
Thanks. Yeah, we typically restart the nodes on the minority version to
force a resync.
But is there a problem with letting a schema disagreement run for a long
time?
Thanks.
John
On Mon, Jul 6, 2015 at 2:29 PM, Robert Coli rc...@eventbrite.com wrote:
On Thu, Jul 2, 2015 at 9:31 PM, John Wong
Hi.
Here is a schema disagreement we encountered.
Schema versions:
b6467059-5897-3cc1-9ee2-73f31841b0b0: [10.0.1.100, 10.0.1.109]
c8971b2d-0949-3584-aa87-0050a4149bbd: [10.0.1.55, 10.0.1.16,
10.0.1.77]
c733920b-2a31-30f0-bca1-45a8c9130a2c: [10.0.1.221]
We deployed an
that was a poor wild guess.
But is it safe to be okay with a schema disagreement? I worry about data
consistency if I let it sit too long.
Thanks.
John
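For context, the "Schema versions" listing in this thread is the output of nodetool describecluster; a sketch of checking and resolving a disagreement (restarting the minority nodes, as mentioned earlier, is the usual remedy):

```shell
# Check which schema version each node is on:
nodetool describecluster

# Typical remedy: restart Cassandra on the node(s) stuck on a minority
# schema version so they pull the current schema on startup.
```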
On Jul 2, 2015, at 9:37 PM, John Wong gokoproj...@gmail.com wrote:
Hi.
Here is a schema disagreement we encountered.
Schema versions
Which version are you running, and what's your kernel version? We are still
running on the 1.2 branch but we have not seen any high CPU usage yet...
On Tue, Jun 30, 2015 at 11:10 PM, snair123 . nair...@outlook.com wrote:
reboot of the machine
Hi.
Suppose I have a 6-node cluster running and I want to build a 3-node
cluster based on that 6-node cluster. What is the recommended way to
quickly build such a cluster? Each node holds about 120GB and we have
RF=3. We are on Cassandra 1.2.19 and we are not using vnodes.
My initial research shows
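One common approach for cloning a subset cluster on 1.2 (a sketch; target IPs and keyspace/table names are placeholders) is to snapshot the source nodes and stream the data into the new cluster with sstableloader:

```shell
# On each source node, snapshot the keyspace you want to clone:
nodetool snapshot my_keyspace -t clone

# Copy the snapshot sstables to a staging directory laid out as
# /staging/my_keyspace/my_table/, then stream them into the new
# 3-node cluster (placeholder IPs):
sstableloader -d 10.0.2.10,10.0.2.11,10.0.2.12 /staging/my_keyspace/my_table
```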