I am using Cassandra version 2.1.16. I have verified that the cli.history
file contains at most 501 lines. However, I don't see a limit for the
cqlsh_history file. Any idea on that?
--
regards,
Laxmikant Upadhyay
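There doesn't appear to be a built-in cap on cqlsh_history in 2.1, so one workaround is to trim the file periodically. A minimal sketch, assuming the default ~/.cassandra/cqlsh_history location and a 500-line cap mirroring cli.history:

```python
from pathlib import Path

# Assumed default location of the cqlsh history file.
HISTORY = Path.home() / ".cassandra" / "cqlsh_history"

def trim_history(path: Path, max_lines: int = 500) -> int:
    """Truncate a history file to its last max_lines entries.

    Returns the number of lines kept."""
    if not path.exists():
        return 0
    lines = path.read_text().splitlines(keepends=True)
    if len(lines) > max_lines:
        # Keep only the most recent entries.
        path.write_text("".join(lines[-max_lines:]))
        return max_lines
    return len(lines)
```

Run from cron or a login script between cqlsh sessions; the cap and path above are assumptions, not cqlsh defaults.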
slice (last five minutes): 20
Average tombstones per slice (last five minutes): 1.0
Maximum tombstones per slice (last five minutes): 1
Dropped Mutations: 0 bytes
--
regards,
Laxmikant Upadhyay
.@zjqunshuo.com
>> wrote:
>>
>>> How large is your row? You may be hitting the wide-row read problem.
>>>
>>> -Simon
>>>
>>> *From:* Laxmikant Upadhyay
>>> *Date:* 2018-09-05 01:01
>>> *To:* user
>>> *Subject:* High IO and p
It seems your partition size is too large. What is the size of the value field? Try
to keep your partition size within 100 MB.
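As a rough back-of-the-envelope check of the 100 MB guideline (the row counts and sizes below are hypothetical, not taken from this thread):

```python
def estimated_partition_mb(rows_per_partition: int, avg_row_bytes: int) -> float:
    """Rough uncompressed partition size estimate in MiB."""
    return rows_per_partition * avg_row_bytes / (1024 * 1024)

# e.g. one million rows of ~200 bytes each is already ~190 MiB,
# well past the ~100 MB guideline above.
```

For an existing table, nodetool cfstats reports "Compacted partition maximum bytes", which gives the real number rather than an estimate.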
On Sat, Apr 7, 2018, 9:45 AM onmstester onmstester
wrote:
>
> I've defined a table like this
>
> create table test (
> hours int,
> key1 int,
> value1
To unsubscribe, e-mail: user-unsubscr...@cassandra.apache.org
> For additional commands, e-mail: user-h...@cassandra.apache.org
>
>
--
regards,
Laxmikant Upadhyay
We attached the output with tracing enabled (with actual timestamps) for
both the correct and the incorrect counter update.
What is the reason for this weird behavior?
--
regards,
Laxmikant Upadhyay
successfully upgraded, OR when 2 nodes get
upgraded to 3.11.2 and the 3rd non-upgraded node is down, we don't see this
issue and the counter update works as expected.
On Tue, Nov 6, 2018 at 4:58 PM Laxmikant Upadhyay
wrote:
> Hi All,
>
> I have 3 node (A,B,C) 2.1.16 cassandra cluster which
read_request_timeout_in_ms: 1
>>> range_request_timeout_in_ms: 1
>>> write_request_timeout_in_ms: 6
>>> counter_write_request_timeout_in_ms: 1
>>> cas_contention_timeout_in_ms: 1000
>>> truncate_request_timeout_in_ms: 6
>>> request_timeout_in_ms: 1
>>> slow_query_log_timeout_in_ms: 500
>>> cross_node_timeout: false
>>> phi_convict_threshold: 12
>>> endpoint_snitch: GossipingPropertyFileSnitch
>>> dynamic_snitch_update_interval_in_ms: 100
>>> dynamic_snitch_reset_interval_in_ms: 60
>>> dynamic_snitch_badness_threshold: 0.5
>>> request_scheduler: org.apache.cassandra.scheduler.NoScheduler
>>> server_encryption_options:
>>> internode_encryption: none
>>> keystore: conf/.keystore
>>> keystore_password: cassandra
>>> truststore: conf/.truststore
>>> truststore_password: cassandra
>>> client_encryption_options:
>>> enabled: false
>>> optional: false
>>> keystore: conf/.keystore
>>> keystore_password: cassandra
>>> internode_compression: dc
>>> inter_dc_tcp_nodelay: false
>>> tracetype_query_ttl: 86400
>>> tracetype_repair_ttl: 604800
>>> enable_user_defined_functions: false
>>> enable_scripted_user_defined_functions: false
>>> enable_materialized_views: true
>>> windows_timer_interval: 1
>>> transparent_data_encryption_options:
>>> enabled: false
>>> chunk_length_kb: 64
>>> cipher: AES/CBC/PKCS5Padding
>>> key_alias: testing:1
>>> key_provider:
>>>   - class_name: org.apache.cassandra.security.JKSKeyProvider
>>>     parameters:
>>>       - keystore: conf/.keystore
>>>         keystore_password: cassandra
>>>         store_type: JCEKS
>>>         key_password: cassandra
>>> tombstone_warn_threshold: 1000
>>> tombstone_failure_threshold: 10
>>> batch_size_warn_threshold_in_kb: 5
>>> batch_size_fail_threshold_in_kb: 50
>>> unlogged_batch_across_partitions_warn_threshold: 10
>>> compaction_large_partition_warning_threshold_mb: 10
>>> gc_warn_threshold_in_ms: 1000
>>> back_pressure_enabled: false
>>> back_pressure_strategy:
>>>   - class_name: org.apache.cassandra.net.RateBasedBackPressure
>>>     parameters:
>>>       - high_ratio: 0.90
>>>         factor: 5
>>>         flow: FAST
>>>
>>>
>>>
>>> *A lot of maps, 200K maps of cassandra process,*:
>>>
>>> [root@cass063 ~]# wc -l /proc/`ps -ef | grep CassandraDaemon | grep -v grep
>>> | awk '{print $2}'`/maps
>>> 239587 /proc/202664/maps
>>>
>>> Thanks,
>>> Roy
>>>
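For context on the map count above: the kernel limits per-process memory mappings via vm.max_map_count (65530 by default on many distributions), which is why the Cassandra documentation recommends raising it, commonly to 1048575. A quick sketch of the headroom check, using the figure from the output above:

```python
def map_headroom(map_count: int, max_map_count: int) -> float:
    """Fraction of the vm.max_map_count limit currently consumed."""
    return map_count / max_map_count

# 239587 maps would already exceed a default limit of 65530,
# but is well under the commonly recommended 1048575.
```

The live values come from `wc -l /proc/<pid>/maps` (as above) and `sysctl vm.max_map_count`.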
>>
--
regards,
Laxmikant Upadhyay
>> cluster with single racks configuration to multi rack configuration.
>>
>>
>> I want to introduce 3 racks with 2 nodes in each rack.
>>
>>
>> Regards
>> Manish
>>
>> --
> -
> Alexander Dejanovski
> France
> @alexanderdeja
>
> Consultant
> Apache Cassandra Consulting
> http://www.thelastpickle.com
>
--
regards,
Laxmikant Upadhyay
> some negative implications when not at a proper node count.
>
>
>
> What features are you trying to make use of with going with multirack?
>
>
>
> - Justin Sanciangco
>
>
>
>
>
> *From: *Laxmikant Upadhyay
> *Reply-To: *"user@cassandra.apache.org"
>
My settings:
> Cassandra v3.0.9
> 2 DCs (4/3 nodes respectively) (RF=2)
> endpoint_snitch: PropertyFileSnitch
> vnodes setup: num_tokens: 265
>
> Thank you,
>
> Robert,
>
>
>
>
>
--
regards,
Laxmikant Upadhyay
--
regards,
Laxmikant Upadhyay
bstone ?
On Thu, Jun 13, 2019 at 12:30 PM Laxmikant Upadhyay
wrote:
> HI Michael,
>
> Thanks for your reply.
> I don't think this issue is related to CASSANDRA-12765
> <https://issues.apache.org/jira/browse/CASSANDRA-12765> as in my case the
> sstable whi
Does a range query ignore purgeable tombstones (which have crossed the gc
grace period) in some cases?
On Tue, Jun 11, 2019, 2:56 PM Laxmikant Upadhyay
wrote:
> In a 3-node Cassandra 2.1.16 cluster where one node has an old mutation and
> two nodes have an evictable (crossed gc grace period) tombstone pr
present (not both the node)
cqlsh> select * from test.vouchers where token (key) > 3074457345618258602
and token (key) < -9223372036854775808 ;
 key | col | val
-----+-----+-----
(0 rows)
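For reference, a tombstone only becomes eligible for purging once gc_grace_seconds (864000, i.e. 10 days, by default) has elapsed since the deletion, and even then compaction keeps it if older overlapping data exists in sstables not taking part in the compaction. A minimal sketch of the time-based part of that check:

```python
from datetime import datetime, timedelta, timezone

DEFAULT_GC_GRACE = 10 * 24 * 3600  # 864000 seconds, the table default

def is_purgeable(deletion_time: datetime, gc_grace_seconds: int,
                 now: datetime) -> bool:
    """True once the tombstone has crossed the gc grace period."""
    return now >= deletion_time + timedelta(seconds=gc_grace_seconds)
```

The overlap check (whether older live data shadows the tombstone) is done per sstable by compaction and is not modelled here.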
--
regards,
Laxmikant Upadhyay
> Michael
>
> On 6/11/19 9:58 PM, Laxmikant Upadhyay wrote:
> > Does range query ignore purgable tombstone (which crossed grace period)
> > in some cases?
> >
> > On Tue, Jun 11, 2019, 2:56 PM Laxmikant Upadhyay
> > mailto:laxmikant@gmail.com>> wrote:
CF) so since it is going to take 2
> months (according to the ETA in Reaper), does that mean that when this repair
> finishes, the entropy will again be high in this CF?
>
> How can I speed up the process? Is there any way to diagnose bottlenecks?
>
> Thank you,
>
> W
>
>
--
regards,
Laxmikant Upadhyay
;
4. Now remove superuser permission from user1
ALTER ROLE user1 with SUPERUSER=false;
On Thu, May 9, 2019 at 12:34 PM Laxmikant Upadhyay
wrote:
> I think you will get below exception while executing GRANT with
> AllowAllAuthorizer
> ServerError: java.lang.UnsupportedOperation
iguration from `AllowAllAuthorizer` to
> `CassandraAuthorizer`, you need to grant enough permissions to the user to
> allow access to all the tables used by that user. I think that should fix
> the problem.
>
> Thanks
>
> On Thu, May 9, 2019 at 12:02 PM Laxmikant Upadhyay <
connecting with cassandra user:
UnauthorizedException: User user1 has no SELECT permission on
Is there a way to avoid this error at all in the above situation?
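After switching to CassandraAuthorizer (GRANT is rejected under AllowAllAuthorizer, as noted above), a superuser has to issue the grants explicitly. A hypothetical helper that just generates the CQL, where the role name, keyspace, and permission set are all assumptions for illustration:

```python
def grant_statements(role, keyspace, perms=("SELECT", "MODIFY")):
    """Build CQL GRANT statements for a role on a keyspace."""
    return [f"GRANT {p} ON KEYSPACE {keyspace} TO {role};" for p in perms]
```

Running these before the application reconnects narrows the window in which UnauthorizedException is thrown.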
--
regards,
Laxmikant Upadhyay
1)
> ~[apache-cassandra-2.1.15.jar:2.1.15]
> at org.apache.cassandra.db.Keyspace.apply(Keyspace.java:379)
> ~[apache-cassandra-2.1.15.jar:2.1.15]```
>
> Thanks for your time...
>
> Thanks
> Murali Gutha
>
--
regards,
Laxmikant Upadhyay
A few reasons:
Sudden power cut
Disk full
A bug in the Cassandra version, like CASSANDRA-13752
On Wed, Aug 7, 2019, 4:16 PM Philip Ó Condúin
wrote:
> Hi All,
>
> I am currently experiencing multiple datafile corruptions across most
> nodes in my cluster, there seems to be no pattern to the
.java:437 -
>>> Received: EXECUTE e779e97bc0de5e5e121db71c5cb2b727 with 11 values at
>>> consistency LOCAL_QUORUM, v=3
>>> DEBUG [SharedPool-Worker-66] 2019-09-25 06:29:16,635 Message.java:437 -
>>> Received: EXECUTE 447fdb9c8dfae53fafd78c7583aeb0f1 with 3 values at
>>> consistency LOCAL_QUORUM, v=3
>>> DEBUG [SharedPool-Worker-65] 2019-09-25 06:29:16,623 Message.java:437 -
>>> Received: EXECUTE d67e6a07c24b675f492686078b46c997 with 3 values at
>>> consistency LOCAL_ONE, v=3
>>> DEBUG [SharedPool-Worker-61] 2019-09-25 06:29:16,621 Message.java:437 -
>>> Received: QUERY SELECT column4 FROM ks2.tbl2 WHERE column1='' AND
>>> column2='' AND ts1>1569358692193;, v=3
>>> DEBUG [SharedPool-Worker-62] 2019-09-25 06:29:16,618 Message.java:437 -
>>> Received: EXECUTE d67e6a07c24b675f492686078b46c997 with 3 values at
>>> consistency LOCAL_ONE, v=3
>>>
>>>
--
regards,
Laxmikant Upadhyay
>
--
regards,
Laxmikant Upadhyay
Raised a ticket https://issues.apache.org/jira/browse/CASSANDRA-15159 for
the same.
On Thu, Jun 13, 2019 at 3:55 PM Laxmikant Upadhyay
wrote:
> This issue is reproducible on *3.11.4 and 2.1.21* as well. (not yet
> checked on 3.0)
>
> Range query could be : select * from
> Will nothing happen? Or will the channel be closed?
>
>
>
> Please share your experience.
>
>
>
> Thank you.
>
--
regards,
Laxmikant Upadhyay
ice of root.
>>> Jul 30 15:55:57 x systemd: Started Session c165288 of user root.
>>> Jul 30 15:55:57 x audispd: node=x. type=USER_START
>>> msg=audit(1564498557.294:457958): pid=19687 uid=0 auid=4294967295
>>> ses=4294967295 msg='op=PAM:session_open
>>> grantors=pam_keyinit,pam_limits,pam_keyinit,pam_limits,pam_tty_audit,pam_systemd,pam_unix
>>> acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
>>> Jul 30 15:55:57 x audispd: node=x. type=USER_START
>>> msg=audit(1564498557.298:457959): pid=19690 uid=0 auid=4294967295
>>> ses=4294967295 msg='op=PAM:session_open
>>> grantors=pam_keyinit,pam_systemd,pam_keyinit,pam_limits,pam_unix
>>> acct="cass_b" exe="/usr/sbin/runuser" hostname=? addr=? terminal=?
>>> res=success'
>>> Jul 30 15:55:58 x systemd: Removed slice User Slice of root.
>>> Jul 30 15:56:02 x cassandra: INFO 14:56:02 Writing
>>> Memtable-compactions_in_progress@1532791194(0.008KiB serialized bytes,
>>> 1 ops, 0%/0% of on/off-heap limit)
>>> Jul 30 15:56:02 x cassandra: INFO 14:56:02 Cannot perform a full major
>>> compaction as repaired and unrepaired sstables cannot be compacted
>>> together. These two set of sstables will be compacted separately.
>>> Jul 30 15:56:02 x cassandra: INFO 14:56:02 Writing
>>> Memtable-compactions_in_progress@1455399453(0.281KiB serialized bytes,
>>> 16 ops, 0%/0% of on/off-heap limit)
>>> Jul 30 15:56:04 x tag_audit_log: type=USER_CMD
>>> msg=audit(1564498555.190:457951): pid=19294 uid=509 auid=4294967295
>>> ses=4294967295 msg='cwd="/"
>>> cmd=72756E75736572202D73202F62696E2F62617368202D6C20636173735F62202D632063617373616E6472612D6D6574612F63617373616E6472612F62696E2F6E6F6465746F6F6C2074707374617473
>>> terminal=? res=success'
>>>
>>>
>>>
>>> We have checked a number of other things like NTP setting etc but
>>> nothing is telling us what could cause so many corruptions across the
>>> entire cluster.
>>> Things were healthy with this cluster for months, the only thing I can
>>> think is that we started loading data from a load of 20GB per instance up
>>> to 200GB where it sits now, maybe this just highlighted the issue.
>>>
>>>
>>>
>>> Compaction and Compression on Keyspace CL's [mixture]
>>> All CF's are using compression.
>>>
>>> AND compaction = {'min_threshold': '4', 'class':
>>> 'org.apache.cassandra.db.compaction.*SizeTieredCompactionStrategy*',
>>> 'max_threshold': '32'}
>>> AND compression = {'sstable_compression':
>>> 'org.apache.cassandra.io.compress.*SnappyCompressor*'}
>>>
>>> AND compaction = {'min_threshold': '4', 'class':
>>> 'org.apache.cassandra.db.compaction.*SizeTieredCompactionStrategy*',
>>> 'max_threshold': '32'}
>>> AND compression = {'sstable_compression':
>>> 'org.apache.cassandra.io.compress.*LZ4Compressor*'}
>>>
>>> AND compaction = {'class': 'org.apache.cassandra.db.compaction.
>>> *LeveledCompactionStrategy*'}
>>> AND compression = {'sstable_compression':
>>> 'org.apache.cassandra.io.compress.*SnappyCompressor*'}
>>>
>>> --We are also using internode network compression:
>>> internode_compression: all
>>>
>>>
>>>
>>> Does anyone have any idea what I should check next?
>>> Our next theory is that there may be an issue with Checksum, but I'm not
>>> sure where to go with this.
>>>
>>>
>>>
>>> Any help would be very much appreciated before I lose the last bit of
>>> hair I have on my head.
>>>
>>>
>>>
>>> Kind Regards,
>>>
>>> Phil
>>>
>>>
>>>
>>> On Wed, 7 Aug 2019 at 20:51, Nitan Kainth wrote:
>>>
>>> Repair during an upgrade has caused corruption too.
>>>
>>>
>>>
>>> Also, dropping and adding columns with same name but different type
>>>
>>>
>>>
>>> Regards,
>>>
>>> Nitan
>>>
>>> Cell: 510 449 9629
>>>
>>>
>>> On Aug 7, 2019, at 2:42 PM, Jeff Jirsa wrote:
>>>
>>> Is compression enabled?
>>>
>>>
>>>
>>> If not, bit flips on disk can corrupt data files and reads + repair may
>>> send that corruption to other hosts in the cluster
>>>
>>>
>>> On Aug 7, 2019, at 3:46 AM, Philip Ó Condúin
>>> wrote:
>>>
>>> Hi All,
>>>
>>>
>>>
>>> I am currently experiencing multiple datafile corruptions across most
>>> nodes in my cluster, there seems to be no pattern to the corruption. I'm
>>> starting to think it might be a bug, we're using Cassandra 2.2.13.
>>>
>>>
>>>
>>> Without going into detail about the issue I just want to confirm
>>> something.
>>>
>>>
>>>
>>> Can someone share with me a list of scenarios that would cause
>>> corruption?
>>>
>>>
>>>
>>> 1. OS failure
>>>
>>> 2. Cassandra disturbed during the writing
>>>
>>>
>>>
>>> etc etc.
>>>
>>>
>>>
>>> I need to investigate each scenario and don't want to leave any out.
>>>
>>>
>>>
>>> --
>>>
>>> Regards,
>>>
>>> Phil
>>>
>>>
>>>
>>>
>>> --
>>>
>>> Regards,
>>>
>>> Phil
>>>
>>>
>>>
>>> --
>>> Regards,
>>> Phil
>>>
>>>
>>>
>>> --
>>> Regards,
>>> Phil
>>>
>>
>
> --
> Regards,
> Phil
>
--
regards,
Laxmikant Upadhyay
Is dc1 a simple standby DC, or do you run some operations (e.g. compute for
analysis) on it as well? Have you found the root cause of the OOM? Do you
see any specific Cassandra operation (e.g. repair) causing the OOM?
One tip: try upgrading to 3.11.6, as lots of bugs have been fixed since 3.11.0.
On Wed,
;
>>>> Cell: 510 449 9629
>>>>
>>>> On Jan 21, 2020, at 5:36 AM, manish khandelwal <
>>>> manishkhandelwa...@gmail.com> wrote:
>>>>
>>>>
>>>> Hi Team
>>>>
>>>> I am observing some obsolete files in Cassandra 2.0.14 which are
>>>> already compacted but not removed from the system after compaction.
>>>> As per CASSANDRA-7872
>>>> <https://issues.apache.org/jira/browse/CASSANDRA-7872> , after GC
>>>> grace period has passed the sstables are open for read again and can lead
>>>> to data resurrection. I am facing disk crunch (90% full ) as well and need
>>>> to remove those obsolete files ASAP.
>>>>
>>>>
>>>> To avoid this what should be our strategy? I am thinking on following
>>>> lines
>>>> 1. Stop the Cassandra server.
>>>> 2. Remove the obsolete files manually.
>>>> 3. Start the Cassandra server.
>>>>
>>>> Regards
>>>> Manish
>>>>
>>>>
>>>>
>>>>
>>>>
--
regards,
Laxmikant Upadhyay
to switch without running repair.
I am interested to know how other people solve this issue and make a fast
switch-over while assuring consistency.
--
regards,
Laxmikant Upadhyay
Thu, Jan 16, 2020 at 1:04 PM Laxmikant Upadhyay <
> laxmikant@gmail.com> wrote:
>
>> Hi,
>> What I meant by the active/standby model is that even though data is being
>> replicated (asynchronously) to the standby DC, the client will only access
>> the data from the active
the second DC ?
>
> Le jeu. 16 janv. 2020 à 09:35, Laxmikant Upadhyay
> a écrit :
>
>> We have 2 dc in active/standby model. At any given point if we want to
>> switch to standby dc, how will we make sure that data is consistent with
>> active site? Note that repair runs at
It is the OS page cache used during reads. The OS will use memory that is not
needed by other applications for caching, which improves your read performance.
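To separate the process's own footprint from page cache on Linux, compare the process RSS (from ps or /proc/<pid>/status) with the Cached figure in /proc/meminfo. A small parsing sketch; the field names are the standard /proc/meminfo ones, whose values are reported in kB:

```python
def meminfo_mb(meminfo_text: str, field: str) -> float:
    """Return a /proc/meminfo field (reported in kB) in MiB."""
    for line in meminfo_text.splitlines():
        key, _, rest = line.partition(":")
        if key.strip() == field:
            return int(rest.split()[0]) / 1024
    raise KeyError(field)

# Typical use on Linux:
# cached_mb = meminfo_mb(open("/proc/meminfo").read(), "Cached")
```

Page cache ("Cached") is reclaimable, so only RSS counts against the Cassandra process itself.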
On Sat, Apr 11, 2020, 12:47 PM HImanshu Sharma
wrote:
> Hi
>
> I am very new to the use of cassandra. In a cassandra cluster of 3 nodes,
> I am
on cluster.
>> How can I find the actual memory usage of the Cassandra process? If it is OS
>> page cache, then how do I find how much is page cache and how much is used
>> by the process?
>>
>> Thanks
>> Himanshu
>>
>> On Sat, Apr 11, 2020 at 9:07 PM Laxmikant
will
create tombstones (however, the number of tombstones in a partition will be
limited) but it will not require application-side filtering.
I think we should avoid tombstones, especially row-level ones, so we should go
with option 1. Kindly advise on the above or any other better approach?
--
regards,
Laxmikant
) ..so will it still
be an issue if I read the partition only once a day? Even if I update the
status and don't delete the row?
On Sat, May 23, 2020, 4:36 PM Gábor Auth wrote:
> Hi,
>
> On Sat, May 23, 2020 at 4:09 PM Laxmikant Upadhyay <
> laxmikant@gmail.com> wrote:
>
>
you'd use for this. Understand it's nontrivial to set up, but it's also
> nontrivial to do this properly.
>
>
>
> On May 23, 2020, at 9:26 AM, Laxmikant Upadhyay
> wrote:
>
>
> Thanks you so much for quick response. I completely agree with Jeff and
> Gabor that it is an an
abled';
> name                    | value
> ------------------------+-------
> allow_filtering_enabled | false
>
> (1 rows)
>
> > SELECT * FROm stackoverflow.movies WHERE title='Sneakers (1992)' ALLOW
> FILTERING;
> id   | genre              | title
> ------+--------------------+-----------------
> 1396 | Crime|Drama|Sci-Fi | Sneakers (1992)
>
> (1 rows)
>
> Is there like some main "guardrails enabled" setting that I missed?
>
>
>
> Thanks,
>
>
> Aaron
>
>
>
>
>
>
>
--
regards,
Laxmikant Upadhyay
You need to set both in the case of LWT: your regular, non-serial consistency
level is only applied during the commit phase of the LWT.
On Wed, 6 Mar, 2024, 03:30 Weng, Justin via user,
wrote:
> Hi Cassandra Community,
>
>
>
> I’ve been investigating Cassandra Paxos v2 (as implemented in CEP-14
>