Re: Maximum memory usage reached

2019-03-06 Thread Jeff Jirsa
Also, that particular logger is for the internal chunk / page cache. If it
can’t allocate from within that pool, it’ll just fall back to a normal ByteBuffer.

It’s not really a problem, but if you see performance suffer, upgrade to the latest
3.11.4; there was a bit of a performance improvement for the case where that cache
fills up.
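
If the cache does fill up regularly, its size can be raised in cassandra.yaml. The 512MiB in the log message is what Cassandra picks when file_cache_size_in_mb is left unset, as in this thread (a sketch only; check the defaults for your exact version):

```yaml
# cassandra.yaml -- size of the off-heap chunk/page cache that SSTable
# reads are served from. When unset, Cassandra derives a default from
# the heap size (512MiB in the logs above). Raising it trades native
# memory for fewer cache misses; the fallback to plain ByteBuffers
# described above still applies either way.
file_cache_size_in_mb: 1024
```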


-- 
Jeff Jirsa




Re: Maximum memory usage reached

2019-03-06 Thread Jonathan Haddad
That’s not an error. To the left of the log message is the severity: level
INFO.

Generally, I don’t recommend running Cassandra with only 2GB of RAM, or for
small datasets that can easily fit in memory. Is there a reason you’re
picking Cassandra for this dataset?

-- 
Jon Haddad
http://www.rustyrazorblade.com
twitter: rustyrazorblade


Maximum memory usage reached

2019-03-06 Thread Kyrylo Lebediev
Hi All,

We have a tiny 3-node cluster.
C* version 3.9 (I know 3.11 is better/more stable, but we can’t upgrade immediately).
HEAP_SIZE is 2G.
JVM options are default.
All settings in cassandra.yaml are default (file_cache_size_in_mb is not set).

Data per node is just ~1 GB.

We’re getting the following error messages:

DEBUG [CompactionExecutor:87412] 2019-03-06 11:00:13,545 
CompactionTask.java:150 - Compacting (ed4a4d90-4028-11e9-adc0-230e0d6622df) 
[/cassandra/data/data/system/sstable_activity-5a1ff267ace03f128563cfae6103c65e/mc-23248-big-Data.db:level=0,
 
/cassandra/data/data/system/sstable_activity-5a1ff267ace03f128563cfae6103c65e/mc-23247-big-Data.db:level=0,
 
/cassandra/data/data/system/sstable_activity-5a1ff267ace03f128563cfae6103c65e/mc-23246-big-Data.db:level=0,
 
/cassandra/data/data/system/sstable_activity-5a1ff267ace03f128563cfae6103c65e/mc-23245-big-Data.db:level=0,
 ]
DEBUG [CompactionExecutor:87412] 2019-03-06 11:00:13,582 
CompactionTask.java:230 - Compacted (ed4a4d90-4028-11e9-adc0-230e0d6622df) 4 
sstables to 
[/cassandra/data/data/system/sstable_activity-5a1ff267ace03f128563cfae6103c65e/mc-23249-big,]
 to level=0.  6.264KiB to 1.485KiB (~23% of original) in 36ms.  Read Throughput 
= 170.754KiB/s, Write Throughput = 40.492KiB/s, Row Throughput = ~106/s.  194 
total partitions merged to 44.  Partition merge counts were {1:18, 4:44, }
INFO  [IndexSummaryManager:1] 2019-03-06 11:00:22,007 
IndexSummaryRedistribution.java:75 - Redistributing index summaries
INFO  [pool-1-thread-1] 2019-03-06 11:11:24,903 NoSpamLogger.java:91 - Maximum 
memory usage reached (512.000MiB), cannot allocate chunk of 1.000MiB
INFO  [pool-1-thread-1] 2019-03-06 11:26:24,926 NoSpamLogger.java:91 - Maximum 
memory usage reached (512.000MiB), cannot allocate chunk of 1.000MiB
INFO  [pool-1-thread-1] 2019-03-06 11:41:25,010 NoSpamLogger.java:91 - Maximum 
memory usage reached (512.000MiB), cannot allocate chunk of 1.000MiB
INFO  [pool-1-thread-1] 2019-03-06 11:56:25,018 NoSpamLogger.java:91 - Maximum 
memory usage reached (512.000MiB), cannot allocate chunk of 1.000MiB

What’s interesting is that the “Maximum memory usage reached” messages appear
every 15 minutes.
A reboot temporarily resolves the issue, but it reappears after some time.

We checked, and there are no huge partitions (max partition size is ~2 MB).

How can such a small amount of data cause this issue?
How can we debug it further?
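
As a quick sanity check on that 15-minute cadence, the NoSpamLogger timestamps from the log above can be diffed directly (a throwaway sketch for eyeballing the logs, not a Cassandra tool):

```python
from datetime import datetime

# Timestamps copied verbatim from the NoSpamLogger lines above.
stamps = [
    "2019-03-06 11:11:24,903",
    "2019-03-06 11:26:24,926",
    "2019-03-06 11:41:25,010",
    "2019-03-06 11:56:25,018",
]
times = [datetime.strptime(s, "%Y-%m-%d %H:%M:%S,%f") for s in stamps]
deltas = [(b - a).total_seconds() for a, b in zip(times, times[1:])]
print(deltas)  # each gap is ~900s, consistent with NoSpamLogger
               # rate-limiting a continuous condition rather than a
               # genuinely periodic event
```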


Regards,
Kyrill




Re: AxonOps - Cassandra operational management tool

2019-03-06 Thread AxonOps
Hi Kenneth,

We are already using AxonOps on a number of production clusters, but we're
continuously improving it, so we have a good level of comfort and
confidence in the product with our own customers.

In terms of our recommendation on the upper bound of cluster size, we do
not know yet. The biggest resource consumer is Elasticsearch, which stores
all the data. The free version supports up to 6 nodes, and AxonOps easily
handles clusters of that size.

You can already install the product from our APT or YUM repos. The
installation instructions are available here - https://docs.axonops.com

Hayato


On Tue, 5 Mar 2019 at 20:44, Kenneth Brotman 
wrote:

> Hayato,
>
>
>
> I agree with what you are addressing. I’ve always thought the big
> elephant in the room regarding Cassandra was that you had to use all these
> other tools, each of which requires updating and configuration changes, so
> too much attention had to be paid to all those other tools instead of what
> you’re trying to accomplish, when instead it all could have been
> centralized or internalized; clearly it was quite doable.
>
>
>
> Questions regarding where things are at:
>
>
>
> Are you using AxonOps in any of your clients Apache Cassandra production
> clusters?
>
>
>
> What is the largest Cassandra cluster in which you use it?
>
>
>
> Would you recommend NOT using AxonOps on production clusters for now or do
> you consider it safe to do so?
>
>
>
> What is the largest Cassandra cluster you would recommend using AxonOps on?
>
>
>
> Can it handle multi-cloud clusters?
>
>
>
> Which clouds does it play nice with?
>
>
>
> Is it good for use for on-prem nodes (or cloud only)?
>
>
>
> Which versions of Cassandra does it play nice with?
>
>
>
> Any rough idea when a download will be available?
>
>
>
> Your blog post at
> https://digitalis.io/blog/apache-cassandra-management-tool/ provides a
> lot of answers already!  Really very promising!
>
>
>
> Thanks,
>
>
>
> Kenneth Brotman
>
>
>
>
>
>
>
> *From:* AxonOps [mailto:axon...@digitalis.io]
> *Sent:* Sunday, March 03, 2019 7:51 AM
> *To:* user@cassandra.apache.org
> *Subject:* Re: AxonOps - Cassandra operational management tool
>
>
>
> Hi Kenneth,
>
>
>
> Thanks for your great feedback! We're not trying to be secretive, but just
> not amazing at promoting ourselves!
>
>
>
> AxonOps was built by digitalis.io (https://digitalis.io), a company based
> in the UK providing consulting and managed services for Cassandra, Kafka,
> and Spark. digitalis.io was founded 3 years ago by 2 ex-DataStax
> architects, but their experience with Cassandra predates their tenure at
> DataStax.
>
>
>
> We have been looking after a lot of Cassandra clusters for our customers,
> but found ourselves spending more time maintaining monitoring and
> operational tools than Cassandra clusters themselves. The motivation was to
> build a management platform to make our lives easier. You can read my blog
> here - https://digitalis.io/blog/apache-cassandra-management-tool/
>
>
>
> We have not yet created any videos, but that's in our backlog so people can
> see AxonOps in action. No testimonials yet either, since the customer of the
> product has been ourselves, and we only just released it to the public as a
> beta a few weeks ago. We've decided to share it for free with anybody using
> up to 6 nodes, as we see a lot of clusters out there within this range.
>
>
>
> The only investment would be a minimum amount of your time to install it.
> We have made the installation process as easy as possible. Hopefully you
> will find it immensely quicker and easier than installing and configuring
> ELK, Prometheus, Grafana, Nagios, custom backups and repair scheduling. It
> has certainly made our lives easier for sure.
>
>
>
> We are fully aware of the new features going into 4.0 and beyond. As
> mentioned earlier, we built this for ourselves - a product that does
> everything we want in one solution providing a single pane of glass. It's
> free and we're sharing this with you.
>
>
>
> Enjoy!
>
>
>
> Hayato Shimizu
>
>
>
>
>
> On Sun, 3 Mar 2019 at 06:05, Kenneth Brotman 
> wrote:
>
> Sorry, Nitan was only making a comment about this post but the comments
> I’m making are to AxonOps.
>
>
>
> It appears we don’t have a name for anyone at AxonOps at all then!  You
> guys are going to need to be more open.
>
>
>
> *From:* Kenneth Brotman [mailto:kenbrot...@yahoo.com.INVALID]
> *Sent:* Saturday, March 02, 2019 10:02 PM
> *To:* user@cassandra.apache.org
> *Subject:* RE: AxonOps - Cassandra operational management tool
>
>
>
> Nitan,
>
>
>
> A few thoughts:
>
>
> Isn’t it a lot to expect folks to download, install and evaluate the
> product considering,
>
> · You aren’t being very clear about who you are,
>
> · You don’t have any videos demonstrating the product,
>
> · You don’t provide any testimonials,
>
> · You have no case studies with repeatable results, ROI, etc.
> All the normal stuff.
>

4 Apache Events in 2019: DC Roadshow soon; next up Chicago, Las Vegas, and Berlin!

2019-03-06 Thread Rich Bowen
Dear Apache Enthusiast,

(You’re receiving this because you are subscribed to one or more user
mailing lists for an Apache Software Foundation project.)

TL;DR:
 * Apache Roadshow DC is in 3 weeks. Register now at
https://apachecon.com/usroadshowdc19/
 * Registration for Apache Roadshow Chicago is open.
http://apachecon.com/chiroadshow19
 * The CFP for ApacheCon North America is now open.
https://apachecon.com/acna19
 * Save the date: ApacheCon Europe will be held in Berlin, October 22nd
through 24th.  https://apachecon.com/aceu19


Registration is open for two Apache Roadshows; these are smaller events
with a more focused program and regional community engagement:

Our Roadshow event in Washington DC takes place in under three weeks, on
March 25th. We’ll be hosting a day-long event at the Fairfax campus of
George Mason University. The roadshow is a full day of technical talks
(two tracks) and an open source job fair featuring AWS, Bloomberg, dito,
GridGain, Linode, and Security University. More details about the
program, the job fair, and to register, visit
https://apachecon.com/usroadshowdc19/

Apache Roadshow Chicago will be held May 13-14th at a number of venues
in Chicago’s Logan Square neighborhood. This event will feature sessions
in AdTech, FinTech and Insurance, startups, “Made in Chicago”, Project
Shark Tank (innovations from the Apache Incubator), community diversity,
and more. It’s a great way to learn about various Apache projects “at
work” while playing at a brewery, a beercade, and a neighborhood bar.
Sign up today at https://www.apachecon.com/chiroadshow19/

We’re delighted to announce that the Call for Presentations (CFP) is now
open for ApacheCon North America in Las Vegas, September 9-13th! As the
official conference series of the ASF, ApacheCon North America will
feature over a dozen Apache project summits, including Cassandra,
Cloudstack, Tomcat, Traffic Control, and more. We’re looking for talks
in a wide variety of categories -- anything related to ASF projects and
the Apache development process. The CFP closes at midnight on May 26th.
In addition, the ASF will be celebrating its 20th Anniversary during the
event. For more details and to submit a proposal for the CFP, visit
https://apachecon.com/acna19/ . Registration will be opening soon.

Be sure to mark your calendars for ApacheCon Europe, which will be held
in Berlin, October 22-24th at the KulturBrauerei, a landmark of Berlin's
industrial history. In addition to innovative content from our projects,
we are collaborating with the Open Source Design community
(https://opensourcedesign.net/) to offer a track on design this year.
The CFP and registration will open soon at https://apachecon.com/aceu19/ .

Sponsorship opportunities are available for all events, with details
listed on each event’s site at http://apachecon.com/.

We look forward to seeing you!

Rich, for the ApacheCon Planners
@apachecon


-
To unsubscribe, e-mail: user-unsubscr...@cassandra.apache.org
For additional commands, e-mail: user-h...@cassandra.apache.org



Re: Changing existing Cassandra cluster from single rack configuration to multi racks configuration

2019-03-06 Thread Alexander Dejanovski
Hi Manish,

the best way, if you have the opportunity to easily add new
hardware/instances, is to create a new DC with racks and switch traffic to
the new DC when it's ready (then remove the old one). My co-worker Alain
just wrote a very handy blog post on that technique :
http://thelastpickle.com/blog/2019/02/26/data-center-switch.html

If you can't, then I guess you can, for each node (one at a time),
decommission it, wipe it clean, and re-bootstrap it after setting the
appropriate rack.
Also, note that your keyspaces must use NetworkTopologyStrategy so that
racks are taken into account. Change the strategy prior to adding the new
nodes if you're currently using SimpleStrategy.

You cannot (and shouldn't) try to change the rack on an existing node (the
GossipingPropertyFileSnitch won't allow it).
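
Concretely, the two pieces look roughly like this (dc1, rack1, and my_keyspace are placeholders for your own topology and schema). Each node declares its rack before it bootstraps:

```
# cassandra-rackdc.properties on each re-bootstrapped node
# (read by GossipingPropertyFileSnitch; must be set before the node joins)
dc=dc1
rack=rack1
```

and the keyspace strategy switch is a plain ALTER, done before adding the new nodes:

```sql
-- Move off SimpleStrategy so rack placement is honored;
-- run repairs after changing replication settings.
ALTER KEYSPACE my_keyspace
  WITH replication = {'class': 'NetworkTopologyStrategy', 'dc1': 3};
```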

Cheers,

-
Alexander Dejanovski
France
@alexanderdeja

Consultant
Apache Cassandra Consulting
http://www.thelastpickle.com


Changing existing Cassandra cluster from single rack configuration to multi racks configuration

2019-03-06 Thread manish khandelwal
We have a 6-node Cassandra cluster in which all the nodes are in the same
rack in one DC. We want to take advantage of a "multi rack" cluster
(for example: parallel upgrades of all the nodes in the same rack without
downtime). I would like to know the recommended process for changing an
existing cluster from a single-rack configuration to a multi-rack
configuration.


I want to introduce 3 racks with 2 nodes in each rack.


Regards
Manish


Re: About using Ec2MultiRegionSnitch

2019-03-06 Thread Oleksandr Shulgin
On Tue, Mar 5, 2019 at 2:24 PM Jeff Jirsa  wrote:

> Ec2 multi should work fine in one region, but consider using
> GossipingPropertyFileSnitch if there’s even a chance you’ll want something
> other than AWS regions as dc names - multicloud, hybrid, analytics DCs, etc
>

For the record, DC names can be adjusted separately by using the
cassandra-rackdc.properties file, without moving away from the EC2 snitches.
This doesn't give you full control, but it is good enough for setting up an
analytical DC or for cross-DC migrations while staying within the same AWS
region.
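
For example, with the EC2 snitches the adjustment takes the form of a suffix appended to the region-derived DC name (a sketch; dc_suffix is per-node and should be set before the node first joins the ring):

```
# cassandra-rackdc.properties alongside Ec2Snitch / Ec2MultiRegionSnitch:
# the DC name becomes e.g. "us-east_analytics" instead of "us-east".
dc_suffix=_analytics
```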

--
Alex