Re: Log4j vulnerability

2022-01-11 Thread Anthony Grasso
Hi Arvinder,

You are correct; tlp-stress includes Log4j as one of its libraries and
users will need to update the JAR file.

On 16th December 2021, tlp-stress was updated [1] to include Log4j 2.16.0,
which fixed CVE-2021-45046. Version 5.0.0 was released with this change.

Unfortunately, further security issues were identified in Log4j 2.16.0. On
10th January 2022, tlp-stress was updated again [2] to include Log4j 2.17.1,
which fixed CVE-2021-45105 and CVE-2021-44832. A new version of tlp-stress
including these updates will be released soon.

For now, please build and use the latest version of the master branch to get
the latest patch.

Kind regards,
Anthony

[1]
https://github.com/thelastpickle/tlp-stress/commit/298135e2bfc6d4d23f04154f098c3592dd3b32f0
[2]
https://github.com/thelastpickle/tlp-stress/commit/2d4542c27d3f1c0e24899c01247b9a8ee3c9a238
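After building from the master branch, it is worth confirming which Log4j
actually ended up in the build output. A minimal sketch, assuming jars follow
the conventional log4j-core-<version>.jar naming (the directory you pass in
depends on your checkout and build layout):

```shell
# Print the version of every bundled log4j-core jar under a directory, so you
# can confirm the build picked up 2.17.1 or later.
list_log4j_core_versions() {
  find "$1" -name 'log4j-core-*.jar' 2>/dev/null | while read -r jar; do
    # e.g. "2.17.1<TAB>/path/to/log4j-core-2.17.1.jar"
    printf '%s\t%s\n' "$(basename "$jar" .jar | sed 's/^log4j-core-//')" "$jar"
  done
}
```

For example, `list_log4j_core_versions ~/tlp-stress/build` after running the
project's Gradle build should report 2.17.1 (or later) once the patched
master branch is used.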

On Tue, 11 Jan 2022 at 16:56, Arvinder Dhillon 
wrote:

> If anyone uses the tlp-stress tool, be aware that it uses Log4j. Even if the
> tool is not in use most of the time, you might want to remove or upgrade the jar.
>
> On Mon, Dec 13, 2021 at 3:58 PM Bowen Song  wrote:
>
>> Do you mean the log4j-over-slf4j-#.jar? If so, please read:
>> http://slf4j.org/log4shell.html
>>
>> On 13/12/2021 23:48, Rahul Reddy wrote:
>>
>> Hello,
>>
>>
>> I see this jar, log4j-over-slf4j-1.7.7.jar. Does it have any impact? What
>> is that jar used for?
>>
>>
>>
>> On Sat, Dec 11, 2021 at 12:45 PM Brandon Williams 
>> wrote:
>>
>>> https://issues.apache.org/jira/browse/CASSANDRA-5883
>>>
>>> As that ticket shows, Apache Cassandra has never used log4j2.
>>>
>>> On Sat, Dec 11, 2021 at 11:07 AM Abdul Patel 
>>> wrote:
>>> >
>>> > Hi all,
>>> >
>>> > Any idea if any of the open source Cassandra versions are impacted by the
>>> > log4j vulnerability that was reported on Dec 9th?
>>>
>>
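On the log4j-over-slf4j question above: as the slf4j page linked in the thread
explains, the Log4Shell family of CVEs lives in log4j-core 2.x, while
log4j-over-slf4j merely re-implements the log4j 1.x API on top of slf4j and is
not affected. A rough filename-based triage sketch (this assumes jars are
named conventionally and checks names only, not contents):

```shell
# Rough triage of a jar by filename: only log4j-core 2.x carries the
# Log4Shell family of CVEs; log4j-over-slf4j and log4j 1.x are different
# code bases with different exposure.
classify_log4j_jar() {
  case "$(basename "$1")" in
    log4j-over-slf4j-*.jar) echo "not-affected (slf4j shim)" ;;
    log4j-core-2.*.jar)     echo "check-version (Log4Shell lives here)" ;;
    log4j-1.*.jar)          echo "log4j-1.x (EOL; different CVEs)" ;;
    *)                      echo "unknown" ;;
  esac
}
```

Jars flagged "check-version" are the ones where you need 2.17.1 or later.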


RE: about memory problem in write heavy system..

2022-01-11 Thread Durity, Sean R
In my experience, the 50% overhead for compaction/upgrade is for worst-case 
systems – where the data is primarily one table that uses size-tiered 
compaction. (I have one of those.) What I really look at is whether there is enough 
space to execute upgradesstables on the largest sstable. Granted, it is not fun 
to deal with tight space on a Cassandra cluster.
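That check can be scripted: compare the largest sstable's size against free
space on the data volume. A sketch under assumptions (the data directory path
is illustrative, and real headroom needs depend on concurrent compactions):

```shell
# Size (in KiB) of the largest sstable data component under a data directory.
largest_sstable_kb() {
  find "$1" -name '*-Data.db' -exec du -k {} + 2>/dev/null |
    sort -n | tail -n 1 | cut -f1
}

# Free space (in KiB) on the filesystem holding the data directory.
free_kb() {
  df -Pk "$1" | awk 'NR==2 {print $4}'
}
```

A rough go/no-go would then be
`[ "$(free_kb /var/lib/cassandra/data)" -gt "$(largest_sstable_kb /var/lib/cassandra/data)" ]`,
leaving extra margin for normal compaction activity.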

Sean R. Durity

From: Bowen Song 
Sent: Tuesday, January 11, 2022 6:50 AM
To: user@cassandra.apache.org
Subject: [EXTERNAL] Re: about memory problem in write heavy system..


You don't really need 50% of free disk space available if you don't keep 
backups and snapshots on the same server. The DataStax guide recommends 50% 
free space because it recommends taking a snapshot (which is implemented 
as a filesystem hardlink) before upgrading. If you don't have 50% free disk space 
before upgrading Cassandra, you can choose to keep the backup files elsewhere, 
or not make a backup at all. The latter is of course not recommended for a 
production system.
Re: about memory problem in write heavy system..

2022-01-11 Thread Bowen Song
You don't really need 50% of free disk space available if you don't keep 
backups and snapshots on the same server. The DataStax guide recommends 
50% free space because it recommends taking a snapshot (which is 
implemented as a filesystem hardlink) before upgrading. If you don't have 
50% free disk space before upgrading Cassandra, you can choose to keep 
the backup files elsewhere, or not make a backup at all. The latter is 
of course not recommended for a production system.


On 11/01/2022 01:36, Eunsu Kim wrote:

Thank you Bowen.

As can be seen from the chart, the memory usage of the existing nodes has 
increased since the new nodes were added. I also stopped writes to a 
specific table; write throughput decreased by about 15%, and memory usage 
began to decrease.
I'm not sure whether it recovered on its own or because of the reduced 
writes.
What is certain is that adding the new nodes increased the native memory 
usage of some existing nodes.


After reading DataStax's 3.x to 4.x migration guide, it seems that more 
than 50% free disk space is required for the upgrade. This is likely to 
be a major obstacle when upgrading a cluster in operation.



Many thanks.


On 10 Jan 2022, at 20:53, Bowen Song wrote:

Anything special about the table you stopped writing to? I'm wondering 
how you determined that table was the cause of the memory usage increase.


> For the latest version (3.11.11) upgrade, can the two versions coexist
> in the cluster for a while?
>
> Can the 4.x version coexist as well?

Yes and yes. It is expected that two different versions of Cassandra 
will be running in the same cluster at the same time while upgrading. 
This process is often called a zero downtime upgrade or rolling 
upgrade. You can perform such an upgrade from 3.11.4 to 3.11.11 or 
directly to 4.0.1; both are supported. Surprisingly, I can't find any 
documentation related to this on the cassandra.apache.org website 
(if you find it, please send me a link). Some other sites have brief 
guides on this process, such as DataStax and Instaclustr, and you 
should always read the release notes, which include breaking changes 
and new features, before you perform an upgrade.
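The per-node procedure in those guides can be sketched as below. The service
and package names are assumptions for a Debian-style install, and DRY_RUN=1
only prints the plan instead of executing it:

```shell
# One step of a rolling (zero downtime) upgrade, run on one node at a time.
run() { if [ "${DRY_RUN:-0}" = "1" ]; then echo "would: $*"; else "$@"; fi; }

upgrade_one_node() {
  run nodetool drain                    # flush memtables, stop accepting traffic
  run sudo systemctl stop cassandra     # stop the old version
  run sudo apt-get install -y cassandra # install the new version (name is an assumption)
  run sudo systemctl start cassandra    # rejoin the ring on the new version
  # Only after *every* node runs the new version, rewrite sstables on each node:
  # run nodetool upgradesstables
}
```

Waiting for the node to rejoin and the cluster to settle between steps, and
upgrading one node at a time, is what keeps the cluster available throughout.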



On 10/01/2022 00:18, Eunsu Kim wrote:

Thank you for your response

Fortunately, memory usage came back down over the weekend. I removed 
the writing of a specific table last Friday.


<pasted graphic-2.png>


For the latest version (3.11.11) upgrade, can the two versions 
coexist in the cluster for a while?


Can the 4.x version coexist as well?


On 8 Jan 2022, at 01:26, Jeff Jirsa wrote:

3.11.4 is a very old release, with lots of known bugs. It's 
possible the memory is related to that.


If you bounce one of the old nodes, where does the memory end up?


On Thu, Jan 6, 2022 at 3:44 PM Eunsu Kim  
wrote:



Looking at the memory usage chart, it seems that the physical
memory usage of the existing node has increased since the new
node was added with auto_bootstrap=false.

<pasted graphic-1.png>




On Fri, Jan 7, 2022 at 1:11 AM Eunsu Kim
 wrote:

Hi,

I have a Cassandra cluster (3.11.4) that does heavy write work
(14k~16k writes per second per node).

The nodes are physical machines in a data center. There are 30 nodes,
and each node has three data disks mounted.


A few days ago, a QueryTimeout problem occurred due to
Full GC.
So, referring to this blog post
(https://thelastpickle.com/blog/2018/04/11/gc-tuning.html), the
problem seemed to be solved by changing memtable_allocation_type
to offheap_objects.
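For reference, the change described is a cassandra.yaml setting; a minimal
fragment (the commented-out space cap is illustrative, not the poster's value):

```yaml
# cassandra.yaml -- move memtable cell buffers fully off-heap
memtable_allocation_type: offheap_objects
# Optional cap on off-heap memtable space; defaults to 1/4 of the heap
# size when unset. The value below is illustrative only.
# memtable_offheap_space_in_mb: 2048
```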

But today, I got an alarm saying that some nodes are using
more than 90% of physical memory (115 GiB / 125 GiB).

Native memory usage of some nodes is gradually increasing.
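One way to quantify that "native" growth is to compare the JVM's resident set
size against the configured heap: anything well above -Xmx is off-heap/native
(memtables, bloom filters, compression metadata, direct buffers). A
Linux-only sketch; the CassandraDaemon process-match string is an assumption:

```shell
# Resident set size (in KiB) of a process, read from /proc (Linux only).
rss_kb() {
  awk '/^VmRSS:/ {print $2}' "/proc/$1/status"
}

# For Cassandra, something like:  rss_kb "$(pgrep -f CassandraDaemon)"
# then subtract the heap size (-Xmx) to estimate native/off-heap usage.
```

Tracking this number over time would show whether the growth is genuinely
native memory rather than heap.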



All tables use TWCS, and TTL is 2 weeks.

Below is the applied jvm option.

-Xms31g
-Xmx31g
-XX:+UseG1GC
-XX:G1RSetUpdatingPauseTimePercent=5
-XX:MaxGCPauseMillis=500
-XX:InitiatingHeapOccupancyPercent=70
-XX:ParallelGCThreads=24
-XX:ConcGCThreads=24
…


What additional things can I try?

I am looking forward to the advice of experts.

Regards.