Re: Memory Map settings for Cassandra

2021-04-16 Thread Jai Bheemsen Rao Dhanwada
Thank you


Re: Memory Map settings for Cassandra

2021-04-15 Thread Kane Wilson
Yes that warning will still appear because it's a startup check and doesn't
take into account the disk_access_mode setting.

You may be able to cope with mapping just the indexes. Note this is still not
an ideal solution, as you won't be making full use of your available memory.

raft.so - Cassandra consulting, support, and managed services
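[Editor's note: for reference, a sketch of the cassandra.yaml setting being discussed. disk_access_mode is often absent from the shipped yaml, and the value names below are assumptions to verify against your Cassandra version.]

```yaml
# cassandra.yaml -- hypothetical fragment; disk_access_mode usually
# defaults to "auto" when the key is absent.
# Commonly recognised values (verify for your version):
#   auto            - mmap data and index files where possible
#   mmap            - mmap both data and index files
#   mmap_index_only - mmap only index files (fewer map areas)
#   standard        - buffered I/O only, no mmap
disk_access_mode: mmap_index_only
```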




Re: Memory Map settings for Cassandra

2021-04-15 Thread Jai Bheemsen Rao Dhanwada
Also,

I just restarted my Cassandra process after setting "disk_access_mode:
mmap_index_only" and I still see the same WARN message. I believe it's just a
startup check that doesn't depend on the disk_access_mode value:

> WARN  [main] 2021-04-16 00:08:00,088 StartupChecks.java:311 - Maximum
> number of memory map areas per process (vm.max_map_count) 65530 is too low,
> recommended value: 1048575, you can change it with sysctl.
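[Editor's note: the observation above can be checked at runtime. A rough sketch, assuming a Linux /proc filesystem; the pgrep pattern mentioned in the comment is a guess to adapt.]

```shell
# Compare a process's live memory-map count with the kernel limit.
# $$ (this shell) is used as a stand-in PID; substitute the Cassandra
# PID, e.g. pid=$(pgrep -f CassandraDaemon).
pid=$$
maps_in_use=$(wc -l < "/proc/${pid}/maps")
limit=$(cat /proc/sys/vm/max_map_count)
echo "pid ${pid}: ${maps_in_use} of ${limit} map areas in use"
```

If the in-use count stays comfortably under the limit with mmap_index_only, the startup warning can be treated as informational.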




Re: Memory Map settings for Cassandra

2021-04-15 Thread Jai Bheemsen Rao Dhanwada
Thank you Kane and Jeff.

Can I survive with the low vm.max_map_count value of 65530 and
"disk_access_mode = mmap_index_only"? Does this hold true even for heavier
workloads with larger datasets, like ~1TB per node?



Re: Memory Map settings for Cassandra

2021-04-15 Thread Jeff Jirsa
Set disk_access_mode = mmap_index_only to use fewer maps (or disable mmap
entirely, as appropriate).





Re: Memory Map settings for Cassandra

2021-04-15 Thread Kane Wilson
Cassandra mmaps SSTables into memory, and there can be many SSTable files
(including all their indexes and whatnot). Typically it'll do so greedily
until you run out of RAM. 65k map areas tends to be quite low and can easily
be exceeded - you'd likely need very low-density nodes to avoid going over
65k, and thus you'd require lots of nodes (making management harder). I'd
recommend figuring out a way to up your limits as the first course of action.
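[Editor's note: raising the limit looks roughly like this on a Linux host. A sketch, not a drop-in; the write requires root on the host, so it is left commented out.]

```shell
# Read the current limit (equivalent to: sysctl -n vm.max_map_count).
cat /proc/sys/vm/max_map_count

# Raise it for the running kernel (root on the host required):
#   sysctl -w vm.max_map_count=1048575
# Persist across reboots by adding this line to /etc/sysctl.conf or a
# file under /etc/sysctl.d/:
#   vm.max_map_count = 1048575
```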

raft.so - Cassandra consulting, support, and managed services


On Fri, Apr 16, 2021 at 4:29 AM Jai Bheemsen Rao Dhanwada <
jaibheem...@gmail.com> wrote:

> Hello All,
>
> The recommended settings for Cassandra suggest a higher value for
> vm.max_map_count than the default of 65530:
>
> WARN  [main] 2021-04-14 19:10:52,528 StartupChecks.java:311 - Maximum
>> number of memory map areas per process (vm.max_map_count) 65530 is too
>> low, recommended value: 1048575, you can change it with sysctl.
>
>
> However, I am running the Cassandra process as a container, where I don't
> have access to change the value on the Kubernetes worker node, and the
> cassandra pod runs with reduced privileges. I would like to understand why
> Cassandra needs a higher memory-map limit, and whether there is a way to
> restrict Cassandra from going beyond the default value of 65530. If there
> is, please let me know how, and also what side effects that change may have.
>
> Thanks in advance
>
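[Editor's note: one commonly used workaround for the Kubernetes constraint in the question, not discussed in this thread, so treat it as an assumption to validate against your cluster's policies. vm.max_map_count is a node-level, non-namespaced sysctl, so it cannot be set through the pod's securityContext.sysctls; a privileged init container that sets it on the node before Cassandra starts is a common pattern.]

```yaml
# Hypothetical pod-spec fragment: a privileged init container raises
# the node-level vm.max_map_count. Requires the cluster to permit
# privileged containers.
initContainers:
  - name: sysctl-max-map-count
    image: busybox
    command: ["sysctl", "-w", "vm.max_map_count=1048575"]
    securityContext:
      privileged: true
```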