We recently went down the rabbit hole of trying to understand the output of
lsof. 'lsof -n' shows a lot of duplicate entries (the same file listed once
per thread). Use 'lsof -p $PID' or 'lsof -u cassandra' instead.
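As a rough way to see past those per-thread duplicates, you can dedupe on the
NAME column in the shell. A minimal sketch (the PID is a placeholder, and file
names containing spaces would need more care):

```shell
# Count distinct files held open by one process, collapsing the
# per-thread duplicate rows lsof emits. PID below is a placeholder.
PID=12345
count_open_files() {
  # drop the header row, keep only the last (NAME) column, dedupe
  awk 'NR > 1 {print $NF}' | sort -u | wc -l
}
lsof -n -p "$PID" 2>/dev/null | count_open_files
```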
On Fri, Nov 6, 2015 at 12:49 PM, Bryan Cheng wrote:
> Is your
Hey all.
We've been seeing this warning on one of our clusters:
2015-10-18 14:28:52,898 WARN [ValidationExecutor:14]
org.apache.cassandra.db.context.CounterContext invalid global counter shard
detected; (4aa69016-4cf8-4585-8f23-e59af050d174, 1, 67158) and
(4aa69016-4cf8-4585-8f23-e59af050d174,
Howdy Cassandra folks.
Crickets here and it's sort of unsettling that we're alone with this
issue. Is it appropriate to create a JIRA issue for this or is there maybe
another way to deal with it?
Thanks!
On Sun, Oct 18, 2015 at 1:55 PM, Branton Davis <branton.da...@spanning.com>
wrote:
On Mon, Oct 19, 2015 at 5:42 PM, Robert Coli <rc...@eventbrite.com> wrote:
> On Mon, Oct 19, 2015 at 9:20 AM, Branton Davis <branton.da...@spanning.com
> > wrote:
>
>> Is that also true if you're standing up multiple nodes from backups that
>> already have data?
On Tue, Oct 20, 2015 at 3:31 PM, Robert Coli <rc...@eventbrite.com> wrote:
> On Tue, Oct 20, 2015 at 9:13 AM, Branton Davis <branton.da...@spanning.com
> > wrote:
>
>>
>>> Just to clarify, I was thinking about a scenario/disaster where we lost
>> the enti
Is that also true if you're standing up multiple nodes from backups that
already have data? Could you not stand up more than one at a time since
they already have the data?
On Mon, Oct 19, 2015 at 10:48 AM, Eric Stevens wrote:
> It seems to me that as long as cleanup hasn't
One of our clusters had a strange thing happen tonight. It's a 3-node
cluster running 2.1.10. The primary keyspace has RF 3 and vnodes with 256
tokens.
This evening, over the course of about 6 hours, disk usage increased from
around 700GB to around 900GB on only one node. I was at a loss as to
reed again.
>
> If using SizeTieredCompaction you can end up with very large sstables, as
> I do (>250GB each). In the worst case you could need twice the space - a
> reason why I set up my disk monitoring to alert at 45% usage.
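Jan's 45% threshold reflects SizeTieredCompaction's worst case: a major
compaction can temporarily need roughly as much free space again as the
sstables being compacted. A minimal sketch of such a check (the data
directory path is an assumption; adjust to your layout):

```shell
# Warn when the Cassandra data volume passes 45% used, leaving
# headroom for SizeTieredCompaction's worst case (~2x the largest
# sstables). DATA_DIR is an assumed default path.
DATA_DIR=${DATA_DIR:-/var/lib/cassandra/data}
THRESHOLD=45
usage_pct() {
  # POSIX df: column 5 of the second line is "Use%"; strip the %
  df -P "$1" | awk 'NR == 2 {sub(/%/, "", $5); print $5}'
}
if [ -d "$DATA_DIR" ] && [ "$(usage_pct "$DATA_DIR")" -gt "$THRESHOLD" ]; then
  echo "WARNING: $DATA_DIR above ${THRESHOLD}% used"
fi
```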
>
> Just my 2 cents.
> Jan
>
> Sent from my iPhone
If you use Chef, there's this cookbook:
https://github.com/michaelklishin/cassandra-chef-cookbook
It's not perfect, but you can make a wrapper cookbook pretty easily to
fix/extend it to do anything you need.
On Wed, Jan 27, 2016 at 11:25 PM, Richard L. Burton III
wrote:
>
//www.thelastpickle.com
>
> 2016-02-18 8:28 GMT+01:00 Anishek Agarwal <anis...@gmail.com>:
>
>> Hey Branton,
>>
>> Please do let us know if you face any problems doing this.
>>
>> Thanks
>> anishek
>>
>> On Thu, Feb 18, 2016 at 3:33 AM,
ata2
# unmount second volume
umount /dev/xvdf
# In AWS console:
# - detach sdf volume
# - delete volume
# remove mount directory
rm -Rf /var/data/cassandra_data2/
# restart cassandra
service cassandra start
# run repair
/usr/local/cassandra/bin/nodetool repair -pr
On Thu, Feb 18, 2016 at 3:12
now. After that I shutdown the node and my
> last rsync now has to copy only a few files which is quite fast and so the
> downtime for that node is within minutes.
>
> Jan
>
>
>
> Sent from my iPhone
>
> On 18.02.2016 at 22:12, Branton Davis <branton.da...
This may be a silly question, but has anyone considered making
the mailing list accept unsubscribe requests this way? Or at least filter
them out and auto-respond with a message explaining how to unsubscribe? Seems
like it should be pretty simple and would make it easier for folks to leave
and
This isn't a direct answer to your question, but Jolokia (
https://jolokia.org/) may be a useful alternative. It runs as an agent
attached to your Cassandra process and exposes a REST API for JMX.
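Once the agent is attached, JMX attributes become plain HTTP GETs. A sketch
of reading heap usage this way (host and port are assumptions; the Jolokia
JVM agent listens on 8778 by default):

```shell
# Read heap usage through Jolokia's REST bridge to JMX, then pull
# out the "used" figure. localhost:8778 is an assumed agent address.
curl -s "http://localhost:8778/jolokia/read/java.lang:type=Memory/HeapMemoryUsage" \
  | grep -o '"used":[0-9]*'
```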
On Tue, Jul 19, 2016 at 11:19 AM, Ricardo Sancho
wrote:
> Is anyone
> /var/data/cassandra_new/cassandra/*
> folders back into the cluster if you still have it.
>
> -Jeremiah
>
>
>
> On Oct 20, 2016, at 3:58 PM, Branton Davis <branton.da...@spanning.com>
> wrote:
>
> Howdy folks. I asked some about this in IRC yesterday, but
Thanks for the assurance. I'm thinking (hoping) that we're good.
On Thu, Oct 20, 2016 at 11:24 PM, kurt Greaves <k...@instaclustr.com> wrote:
>
> On 20 October 2016 at 20:58, Branton Davis <branton.da...@spanning.com>
> wrote:
>
>> Would they have taken on the toke
Howdy folks. I asked some about this in IRC yesterday, but we're looking
to hopefully confirm a couple of things for our sanity.
Yesterday, I was performing an operation on a 21-node cluster (vnodes,
replication factor 3, NetworkTopologyStrategy, and the nodes are balanced
across 3 AZs on AWS
est to totally separate Cassandra application data
> directory from system keyspace directory (e.g. they don't share common
> parent folder, and such).
>
> Regards,
>
> Yabin
>
> On Thu, Oct 20, 2016 at 4:58 PM, Branton Davis <branton.da...@spanning.com
etool cleanup".
> So to answer your question, I don't think the data have been moved away.
> More likely you have extra duplicate here :
>
> Yabin
>
> On Thu, Oct 20, 2016 at 6:41 PM, Branton Davis <branton.da...@spanning.com
> > wrote:
>
>> Thanks for the
I doubt that's true anymore. EBS volumes, while previously discouraged,
are now the most flexible way to go, and are very reliable. You can attach,
detach, and snapshot them too. If you don't need provisioned IOPS, the GP2
SSDs are cheaper and let you balance performance against cost.
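For context on that IOPS/cost tradeoff: gp2 baseline IOPS are a simple
function of volume size. A sketch of the arithmetic, per AWS's EBS docs (3
IOPS per GiB, floored at 100, capped at 16,000 at the time of writing; these
limits change over time):

```shell
# Baseline IOPS for a gp2 volume of a given size in GiB:
# 3 IOPS/GiB, with a floor of 100 and a cap of 16000 (current docs).
gp2_baseline_iops() {
  iops=$(( 3 * $1 ))
  [ "$iops" -lt 100 ] && iops=100
  [ "$iops" -gt 16000 ] && iops=16000
  echo "$iops"
}
gp2_baseline_iops 500   # prints 1500
```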
On