I'm guessing you're getting 100MB from the comments in the config, which
suggest 100MB per core.  This advice is pretty outdated and should be
updated.

I'd use 8GB total heap and 4GB new gen as a starting point.  I really
suggest reading up on how GC works; I linked to a post in an earlier email.

These are the flags you'd need to set in your jvm.options, or
jvm-server.options depending on the version you're using:

-Xmx8G
-Xms8G
-Xmn4G
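
After restarting, a quick way to sanity check that the flags actually took
effect (run from inside the container) is something like:

nodetool info | grep -i heap       # total heap should now read ~8192 MB
ps -ef | grep CassandraDaemon      # the -Xmx/-Xms/-Xmn flags should appear in the command line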

1 core is probably going to be a problem; Cassandra creates a lot of
threads and relies on doing work concurrently.  I wouldn't use fewer than 8
cores in a production environment.
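
If you're not sure what the pods are actually being given, something like
this (assuming the oc CLI and that pod name) will show the CPU and memory
requests and limits OpenShift has applied:

oc describe pod cassandra-0 | grep -A 2 -E 'Limits|Requests'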

On Sun, Mar 17, 2019 at 3:12 AM Dieudonné Madishon NGAYA <dmng...@gmail.com>
wrote:

> Starting point for me: MAX_HEAP_SIZE at 8 GB and HEAP_NEWSIZE at 100 MB.
> Then restart node by node and watch system.log to see if you are seeing GC
> pauses.
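>
> For example, GC pauses are logged by GCInspector, so something along these
> lines (log path assumed here, adjust for your install) will surface them:
>
> grep -i gcinspector /var/log/cassandra/system.log | tail -20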
>
> On Sat, Mar 16, 2019 at 9:56 AM Sundaramoorthy, Natarajan <
> natarajan_sundaramoor...@optum.com> wrote:
>
>> So you guys are suggesting
>>
>> MAX_HEAP_SIZE of 8/12/16 GB
>>
>> and
>>
>> HEAP_NEWSIZE of 100 MB
>>
>> and
>>
>> heap with 50% of that as a starting point? How do I do this?
>>
>>
>>
>> Thanks
>>
>>
>>
>>
>>
>> *From:* Dieudonné Madishon NGAYA [mailto:dmng...@gmail.com]
>> *Sent:* Saturday, March 16, 2019 12:15 AM
>> *To:* user@cassandra.apache.org
>> *Subject:* Re: read request is slow
>>
>>
>>
>> I agree with Jon Haddad: your MAX_HEAP_SIZE is very small. You have a lot
>> of RAM (256 GB), so you can start your MAX_HEAP_SIZE at 8 GB and increase
>> it if necessary.
>>
>> Since you have only 1 physical core, if I understood correctly, you can set
>> your HEAP_NEWSIZE to 100 MB.
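>>
>> In cassandra-env.sh that would just be something like (values per the
>> suggestions above):
>>
>> MAX_HEAP_SIZE="8G"
>> HEAP_NEWSIZE="100M"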
>>
>>
>>
>> Best regards
>>
>> _____________________________________________________________
>>
>>
>>
>> *Dieudonne Madishon NGAYA*
>> Datastax, Cassandra Architect
>> *P: *7048580065
>> *w: *www.dmnbigdata.com
>> *E: *dmng...@dmnbigdata.com
>> *Private E: *dmng...@gmail.com
>> *A: *Charlotte,NC,28273, USA
>>
>>
>>
>>
>>
>>
>>
>> On Sat, Mar 16, 2019 at 1:07 AM Jon Haddad <j...@jonhaddad.com> wrote:
>>
>> I can't say I've ever used 100MB new gen with Cassandra, but in my
>> experience I've found small new gen to be incredibly harmful for
>> performance.  It doesn't surprise me at all that you'd hit some serious GC
>> issues.  My guess is you're filling up the new gen very quickly and
>> promoting everything in very quick cycles, leading to memory fragmentation
>> and soon after full GCs.  2GB is a tiny heap and I would never, under any
>> circumstances, run a 2GB heap in a production environment.  I'd only use
>> under 8 GB in a Circle CI free tier for integration tests.
>>
>>
>>
>> I suggest you use a minimum of 8 GB, preferably 12-16 GB, of total heap,
>> with 50% of that as new gen, as a starting point.  There are a bunch of
>> posts floating around on the topic; here's one I wrote:
>> http://thelastpickle.com/blog/2018/04/11/gc-tuning.html
>>
>>
>>
>> Jon
>>
>>
>>
>> On Sat, Mar 16, 2019 at 5:49 PM Sundaramoorthy, Natarajan <
>> natarajan_sundaramoor...@optum.com> wrote:
>>
>> Here you go. Thanks
>>
>>             - name: MAX_HEAP_SIZE
>>               value: 2048M
>>             - name: MY_POD_NAMESPACE
>>               valueFrom:
>>                 fieldRef:
>>                   apiVersion: v1
>>                   fieldPath: metadata.namespace
>>             - name: HEAP_NEWSIZE
>>               value: 100M
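>>
>> (For comparison, with the 8 GB heap / 4 GB new gen suggested elsewhere in
>> this thread, that section of the StatefulSet would look something like:
>>
>>             - name: MAX_HEAP_SIZE
>>               value: 8G
>>             - name: HEAP_NEWSIZE
>>               value: 4G
>>
>> assuming the image's cassandra-env.sh maps these env vars onto -Xmx/-Xms
>> and -Xmn as usual.)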
>>
>>
>>
>>
>>
>>
>>
>> *From:* Dieudonné Madishon NGAYA [mailto:dmng...@gmail.com]
>> *Sent:* Friday, March 15, 2019 11:18 PM
>> *To:* user@cassandra.apache.org
>> *Subject:* Re: read request is slow
>>
>>
>>
>> Could you share these parameters from cassandra-env.sh, if they are set:
>>
>> MAX_HEAP_SIZE and HEAP_NEWSIZE
>>
>>
>>
>> Best regards
>>
>> _____________________________________________________________
>>
>>
>>
>> *Dieudonne Madishon NGAYA*
>> Datastax, Cassandra Architect
>> *P: *7048580065
>> *w: *www.dmnbigdata.com
>> *E: *dmng...@dmnbigdata.com
>> *Private E: *dmng...@gmail.com
>> *A: *Charlotte,NC,28273, USA
>>
>>
>>
>>
>>
>>
>>
>> On Sat, Mar 16, 2019 at 12:10 AM Sundaramoorthy, Natarajan <
>> natarajan_sundaramoor...@optum.com> wrote:
>>
>> Thanks for the quick response.
>>
>>
>>
>> Here is the cassandra.yaml attached.
>>
>>
>>
>> 1.      What was the read request?  Are you fetching a single row, a
>> million, something else?
>>
>>
>>
>> *Trying to get the details*
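>>
>> (If it helps while gathering that, the read can be reproduced under tracing
>> in cqlsh to see where the time goes -- the query below is only a placeholder:
>>
>> cqlsh> TRACING ON;
>> cqlsh> SELECT * FROM some_keyspace.some_table WHERE id = ...;
>> cqlsh> TRACING OFF;
>>
>> The trace output breaks the read down into per-step elapsed times.)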
>>
>>
>>
>> 2. What are your GC settings?
>>
>>
>>
>> I have no name!@cassandra-0:/etc/cassandra$ nodetool gcstats
>>
>>   Interval (ms)   Max GC Elapsed (ms)   Total GC Elapsed (ms)   Stdev GC Elapsed (ms)   GC Reclaimed (MB)   Collections   Direct Memory Bytes
>>           54292                   157                    157                       0           317432560             1                    -1
>>
>> I have no name!@cassandra-0:/etc/cassandra$
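>>
>> (nodetool gcstats reports GC activity rather than the GC settings themselves;
>> the flags actually in use can be read off the running process with something
>> like:
>>
>> ps -ef | grep [C]assandraDaemon | tr ' ' '\n' | grep -- '-X'
>>
>> which should list -Xmx/-Xmn and the -XX GC options.)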
>>
>>
>>
>>
>> 3. What's the hardware in use?  What resources have been allocated to
>> each instance?
>>
>>
>>
>> *CPU: 1 core to 1 core*
>>
>> *Memory: 4 GiB to 4 GiB*
>>
>>
>>
>> I have no name!@cassandra-0:/etc/cassandra$ free -h
>>               total        used        free      shared  buff/cache   available
>> Mem:           251G         79G         39G        122M        132G        169G
>> Swap:            0B          0B          0B
>> I have no name!@cassandra-0:/etc/cassandra$
>>
>>
>>
>>
>>
>> 4. Did you see this issue after a single request or is the cluster under
>> heavy load?
>>
>>
>>
>> *It was sporadic; the server was not under heavy load at that time…*
>>
>>
>>
>> 5. Do you know on which table you are getting these read timeouts?
>>
>>
>>
>> *Getting details*
>>
>>
>>
>> 6. If yes, can you check whether you have excessive tombstone activity?
>>
>>
>>
>> Please find the tombstone file attached.
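>>
>> (For the tombstone check, something along these lines usually does it --
>> keyspace and table names are placeholders:
>>
>> nodetool tablestats <keyspace>.<table> | grep -i tombstone
>> grep -i tombstone /var/log/cassandra/system.log | tail -20
>>
>> The first shows average tombstones per slice for the table, the second
>> catches the tombstone warnings Cassandra logs on reads.)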
>>
>>
>>
>> 7. How often do you run repair?
>>
>>
>>
>> Getting details for it
>>
>>
>>
>> 8. Can you send the system.log and also the output of nodetool tpstats?
>>
>>
>>
>> I have no name!@cassandra-0:/etc/cassandra$ nodetool tpstats
>>
>> Pool Name                    Active   Pending   Completed   Blocked   All time blocked
>> MutationStage                     0         0         851         0                  0
>> ViewMutationStage                 0         0           0         0                  0
>> ReadStage                         0         0       13576         0                  0
>> RequestResponseStage              0         0        1557         0                  0
>> ReadRepairStage                   0         0         422         0                  0
>> CounterMutationStage              0         0           0         0                  0
>> MiscStage                         0         0           0         0                  0
>> CompactionExecutor                0         0       62606         0                  0
>> MemtableReclaimMemory             0         0         101         0                  0
>> PendingRangeCalculator            0         0           7         0                  0
>> GossipStage                       0         0      383968         0                  0
>> SecondaryIndexManagement          0         0           0         0                  0
>> HintsDispatcher                   0         0           0         0                  0
>> MigrationStage                    0         0        1221         0                  0
>> MemtablePostFlush                 0         0         119         0                  0
>> ValidationExecutor                0         0           0         0                  0
>> Sampler                           0         0           0         0                  0
>> MemtableFlushWriter               0         0         100         0                  0
>> InternalResponseStage             0         0        1221         0                  0
>> AntiEntropyStage                  0         0           0         0                  0
>> CacheCleanupExecutor              0         0           0         0                  0
>> Native-Transport-Requests         0         0        7062         0                  0
>>
>> Message type           Dropped
>> READ                         0
>> RANGE_SLICE                  0
>> _TRACE                       0
>> HINT                         0
>> MUTATION                     0
>> COUNTER_MUTATION             0
>> BATCH_STORE                  0
>> BATCH_REMOVE                 0
>> REQUEST_RESPONSE             0
>> PAGED_RANGE                  0
>> READ_REPAIR                  0
>>
>> I have no name!@cassandra-0:/etc/cassandra$
>>
>>
>> 9. Is swap enabled or not?
>>
>>
>>
>> Swap is disabled
>>
>>
>>
>> I have no name!@cassandra-0:/etc/cassandra$ free -h
>>               total        used        free      shared  buff/cache   available
>> Mem:           251G         79G         39G        122M        132G        169G
>> Swap:            0B          0B          0B
>> I have no name!@cassandra-0:/etc/cassandra$
>>
>>
>>
>> *From:* Jon Haddad [mailto:j...@jonhaddad.com]
>> *Sent:* Friday, March 15, 2019 10:32 PM
>> *To:* user@cassandra.apache.org
>> *Subject:* Re: read request is slow
>>
>>
>>
>>
>>
>> 1. What was the read request?  Are you fetching a single row, a million,
>> something else?
>>
>> 2. What are your GC settings?
>>
>> 3. What's the hardware in use?  What resources have been allocated to
>> each instance?
>>
>> 4. Did you see this issue after a single request or is the cluster under
>> heavy load?
>>
>>
>>
>> If you're going to share a config it's much easier to read as an actual
>> text file rather than a double spaced paste into the ML.  In the future if
>> you could share a link to the yaml you might get more eyes on it.
>>
>>
>>
>> Jon
>>
>>
>>
>> On Sat, Mar 16, 2019 at 3:57 PM Sundaramoorthy, Natarajan <
>> natarajan_sundaramoor...@optum.com> wrote:
>>
>> 3 pods deployed in OpenShift. Read requests timed out due to GC pauses.
>> Can you please look at the parameters and values below to see if anything
>> is out of place? Thanks
>>
>>
>>
>>
>>
>> cat cassandra.yaml
>>
>> num_tokens: 256
>>
>> hinted_handoff_enabled: true
>> hinted_handoff_throttle_in_kb: 1024
>> max_hints_delivery_threads: 2
>> hints_directory: /cassandra_data/hints
>> hints_flush_period_in_ms: 10000
>> max_hints_file_size_in_mb: 128
>>
>> batchlog_replay_throttle_in_kb: 1024
>>
>> authenticator: PasswordAuthenticator
>> authorizer: AllowAllAuthorizer
>> role_manager: CassandraRoleManager
>> roles_validity_in_ms: 2000
>> permissions_validity_in_ms: 2000
>>
>> partitioner: org.apache.cassandra.dht.Murmur3Partitioner
>>
>> data_file_directories:
>>     - /cassandra_data/data
>> commitlog_directory: /cassandra_data/commitlog
>>
>> disk_failure_policy: stop
>> commit_failure_policy: stop
>>
>> key_cache_size_in_mb:
>> key_cache_save_period: 14400
>> row_cache_size_in_mb: 0
>> row_cache_save_period: 0
>> counter_cache_size_in_mb:
>> counter_cache_save_period: 7200
>>
>> saved_caches_directory: /cassandra_data/saved_caches
>>
>> commitlog_sync: periodic
>> commitlog_sync_period_in_ms: 10000
>> commitlog_segment_size_in_mb: 32
>>
>> seed_provider:
>>     - class_name: org.apache.cassandra.locator.SimpleSeedProvider
>>       parameters:
>>           - seeds: "cassandra-0.cassandra.ihr-ei.svc.cluster.local,cassandra-1.cassandra.ihr-ei.svc.cluster.local"
>>
>> concurrent_reads: 32
>> concurrent_writes: 32
>> concurrent_counter_writes: 32
>> concurrent_materialized_view_writes: 32
>>
>> disk_optimization_strategy: ssd
>> memtable_allocation_type: heap_buffers
>> commitlog_total_space_in_mb: 2048
>>
>> index_summary_capacity_in_mb:
>> index_summary_resize_interval_in_minutes: 60
>>
>> trickle_fsync: false
>> trickle_fsync_interval_in_kb: 10240
>>
>> storage_port: 7000
>> ssl_storage_port: 7001
>> listen_address: 10.130.7.245
>> broadcast_address: 10.130.7.245
>>
>> start_native_transport: true
>> native_transport_port: 9042
>>
>> start_rpc: true
>> rpc_address: 0.0.0.0
>> rpc_port: 9160
>> broadcast_rpc_address: 10.130.7.245
>> rpc_keepalive: true
>> rpc_server_type: sync
>> thrift_framed_transport_size_in_mb: 15
>>
>> incremental_backups: false
>> snapshot_before_compaction: false
>> auto_snapshot: true
>>
>> tombstone_warn_threshold: 1000
>> tombstone_failure_threshold: 100000
>>
>> column_index_size_in_kb: 64
>> batch_size_warn_threshold_in_kb: 5
>> batch_size_fail_threshold_in_kb: 50
>>
>> compaction_throughput_mb_per_sec: 16
>> compaction_large_partition_warning_threshold_mb: 100
>> sstable_preemptive_open_interval_in_mb: 50
>>
>> read_request_timeout_in_ms: 50000
>> range_request_timeout_in_ms: 100000
>> write_request_timeout_in_ms: 20000
>> counter_write_request_timeout_in_ms: 5000
>> cas_contention_timeout_in_ms: 1000
>> truncate_request_timeout_in_ms: 60000
>> request_timeout_in_ms: 100000
>> cross_node_timeout: false
>>
>> phi_convict_threshold: 12
>>
>> endpoint_snitch: GossipingPropertyFileSnitch
>> dynamic_snitch_update_interval_in_ms: 100
>> dynamic_snitch_reset_interval_in_ms: 600000
>> dynamic_snitch_badness_threshold: 0.1
>>
>> request_scheduler: org.apache.cassandra.scheduler.NoScheduler
>>
>> server_encryption_options:
>>     internode_encryption: none
>>     keystore: conf/.keystore
>>     truststore: conf/.truststore
>>
>> client_encryption_options:
>>     enabled: false
>>     optional: false
>>     keystore: conf/.keystore
>>
>> internode_compression: all
>> inter_dc_tcp_nodelay: false
>>
>> tracetype_query_ttl: 86400
>> tracetype_repair_ttl: 604800
>>
>> gc_warn_threshold_in_ms: 1000
>>
>> enable_user_defined_functions: false
>> enable_scripted_user_defined_functions: false
>>
>> windows_timer_interval: 1
>>
>> auto_bootstrap: false
>>
>>
>>
>>
>>
>>
>>
>>
>> ---------- Forwarded message ----------
>> From: "Sundaramoorthy, Natarajan" <natarajan_sundaramoor...@optum.com>
>> To: "Sundaramoorthy, Natarajan" <natarajan_sundaramoor...@optum.com>
>> Cc:
>> Bcc:
>> Date: Sat, 16 Mar 2019 02:40:12 +0000
>> Subject: cassandra.yaml
>>
>>
>>
>> cat cassandra.yaml
>>
>> # Cassandra storage config YAML
>>
>> # NOTE:
>> #   See http://wiki.apache.org/cassandra/StorageConfiguration for
>> #   full explanations of configuration directives
>> # /NOTE
>>
>> # The name of the cluster. This is mainly used to prevent machines in
>> # one logical cluster from joining another.
>> cluster_name: K8Demo
>>
>> # This defines the number of tokens randomly assigned to this node on the ring
>> # The more tokens, relative to other nodes, the larger the proportion of data
>> # that this node will store. You probably want all nodes to have the same number
>> # of tokens assuming they have equal hardware capability.
>> #
>> # If you leave this unspecified, Cassandra will use the default of 1 token for legacy compatibility,
>> # and will use the initial_token as described below.
>> #
>> # Specifying initial_token will override this setting on the node's initial start,
>> # on subsequent starts, this setting will apply even if initial token is set.
>> #
>> # If you already have a cluster with 1 token per node, and wish to migrate to
>> # multiple tokens per node, see http://wiki.apache.org/cassandra/Operations
>> num_tokens: 256
>>
>> # Triggers automatic allocation of num_tokens tokens for this node. The allocation
>> # algorithm attempts to choose tokens in a way that optimizes replicated load over
>> # the nodes in the datacenter for the replication strategy used by the specified
>> # keyspace.
>> #
>> # The load assigned to each node will be close to proportional to its number of
>> # vnodes.
>> #
>> # Only supported with the Murmur3Partitioner.
>> # allocate_tokens_for_keyspace: KEYSPACE
>>
>> # initial_token allows you to specify tokens manually.  While you can use # it with
>> # vnodes (num_tokens > 1, above) -- in which case you should provide a
>> # comma-separated list -- it's primarily used when adding nodes # to legacy clusters
>> # that do not have vnodes enabled.
>> # initial_token:
>>
>> # See http://wiki.apache.org/cassandra/HintedHandoff
>> # May either be "true" or "false" to enable globally
>> hinted_handoff_enabled: true
>> # When hinted_handoff_enabled is true, a black list of data centers that will not
>> # perform hinted handoff
>> # hinted_handoff_disabled_datacenters:
>> #    - DC1
>> #    - DC2
>> # this defines the maximum amount of time a dead host will have hints
>> # generated.  After it has been dead this long, new hints for it will not be
>> # created until it has been seen alive and gone down again.
>> max_hint_window_in_ms: 10800000 # 3 hours
>>
>> # Maximum throttle in KBs per second, per delivery thread.  This will be
>> # reduced proportionally to the number of nodes in the cluster.  (If there
>> # are two nodes in the cluster, each delivery thread will use the maximum
>> # rate; if there are three, each will throttle to half of the maximum,
>> # since we expect two nodes to be delivering hints simultaneously.)
>> hinted_handoff_throttle_in_kb: 1024
>>
>> # Number of threads with which to deliver hints;
>> # Consider increasing this number when you have multi-dc deployments, since
>> # cross-dc handoff tends to be slower
>> max_hints_delivery_threads: 2
>>
>> # Directory where Cassandra should store hints.
>> # If not set, the default directory is $CASSANDRA_HOME/data/hints.
>> hints_directory: /cassandra_data/hints
>>
>> # How often hints should be flushed from the internal buffers to disk.
>> # Will *not* trigger fsync.
>> hints_flush_period_in_ms: 10000
>>
>> # Maximum size for a single hints file, in megabytes.
>> max_hints_file_size_in_mb: 128
>>
>> # Compression to apply to the hint files. If omitted, hints files
>> # will be written uncompressed. LZ4, Snappy, and Deflate compressors
>> # are supported.
>> #hints_compression:
>> #   - class_name: LZ4Compressor
>> #     parameters:
>> #         -
>>
>> # Maximum throttle in KBs per second, total. This will be
>> # reduced proportionally to the number of nodes in the cluster.
>> batchlog_replay_throttle_in_kb: 1024
>>
>> # Authentication backend, implementing IAuthenticator; used to identify users
>> # Out of the box, Cassandra provides org.apache.cassandra.auth.{AllowAllAuthenticator,
>> # PasswordAuthenticator}.
>> #
>> # - AllowAllAuthenticator performs no checks - set it to disable authentication.
>> # - PasswordAuthenticator relies on username/password pairs to authenticate
>> #   users. It keeps usernames and hashed passwords in system_auth.credentials table.
>> #   Please increase system_auth keyspace replication factor if you use this authenticator.
>> #   If using PasswordAuthenticator, CassandraRoleManager must also be used (see below)
>> authenticator: PasswordAuthenticator
>>
>> # Authorization backend, implementing IAuthorizer; used to limit access/provide permissions
>> # Out of the box, Cassandra provides org.apache.cassandra.auth.{AllowAllAuthorizer,
>> # CassandraAuthorizer}.
>> #
>> # - AllowAllAuthorizer allows any action to any user - set it to disable authorization.
>> # - CassandraAuthorizer stores permissions in system_auth.permissions table. Please
>> #   increase system_auth keyspace replication factor if you use this authorizer.
>> authorizer: AllowAllAuthorizer
>>
>> # Part of the Authentication & Authorization backend, implementing IRoleManager; used
>> # to maintain grants and memberships between roles.
>> # Out of the box, Cassandra provides org.apache.cassandra.auth.CassandraRoleManager,
>> # which stores role information in the system_auth keyspace. Most functions of the
>> # IRoleManager require an authenticated login, so unless the configured IAuthenticator
>> # actually implements authentication, most of this functionality will be unavailable.
>> #
>> # - CassandraRoleManager stores role data in the system_auth keyspace. Please
>> #   increase system_auth keyspace replication factor if you use this role manager.
>> role_manager: CassandraRoleManager
>>
>> # Validity period for roles cache (fetching granted roles can be an expensive
>> # operation depending on the role manager, CassandraRoleManager is one example)
>> # Granted roles are cached for authenticated sessions in AuthenticatedUser and
>> # after the period specified here, become eligible for (async) reload.
>> # Defaults to 2000, set to 0 to disable caching entirely.
>> # Will be disabled automatically for AllowAllAuthenticator.
>> roles_validity_in_ms: 2000
>>
>> # Refresh interval for roles cache (if enabled).
>> # After this interval, cache entries become eligible for refresh. Upon next
>> # access, an async reload is scheduled and the old value returned until it
>> # completes. If roles_validity_in_ms is non-zero, then this must be
>> # also.
>> # Defaults to the same value as roles_validity_in_ms.
>> # roles_update_interval_in_ms: 2000
>>
>> # Validity period for permissions cache (fetching permissions can be an
>> # expensive operation depending on the authorizer, CassandraAuthorizer is
>> # one example). Defaults to 2000, set to 0 to disable.
>> # Will be disabled automatically for AllowAllAuthorizer.
>> permissions_validity_in_ms: 2000
>>
>> # Refresh interval for permissions cache (if enabled).
>> # After this interval, cache entries become eligible for refresh. Upon next
>> # access, an async reload is scheduled and the old value returned until it
>> # completes. If permissions_validity_in_ms is non-zero, then this must be
>> # also.
>> # Defaults to the same value as permissions_validity_in_ms.
>> # permissions_update_interval_in_ms: 2000
>>
>> # Validity period for credentials cache. This cache is tightly coupled to
>> # the provided PasswordAuthenticator implementation of IAuthenticator. If
>> # another IAuthenticator implementation is configured, this cache will not
>> # be automatically used and so the following settings will have no effect.
>> # Please note, credentials are cached in their encrypted form, so while
>> # activating this cache may reduce the number of queries made to the
>> # underlying table, it may not bring a significant reduction in the
>> # latency of individual authentication attempts.
>> # Defaults to 2000, set to 0 to disable credentials caching.
>> # credentials_validity_in_ms: 2000
>>
>> # Refresh interval for credentials cache (if enabled).
>> # After this interval, cache entries become eligible for refresh. Upon next
>> # access, an async reload is scheduled and the old value returned until it
>> # completes. If credentials_validity_in_ms is non-zero, then this must be
>> # also.
>> # Defaults to the same value as credentials_validity_in_ms.
>> # credentials_updat
>>
>> --
>
> Best regards
> _____________________________________________________________
>
>
> *Dieudonne Madishon NGAYA*
> Datastax, Cassandra Architect
> *P: *7048580065
> *w: *www.dmnbigdata.com
> *E: *dmng...@dmnbigdata.com
> *Private E: *dmng...@gmail.com
> *A: *Charlotte,NC,28273, USA
>
