Re: Remove folders of deleted tables

2023-12-04 Thread Dipan Shah
Hello Sebastien,

There are no built-in tools that will automatically remove the folders of
deleted tables.
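
If you just want to clear the leftovers out, something like the following can
at least locate the candidates. This is a minimal shell sketch, assuming the
default data directory layout; the path is an assumption, and each directory
should be reviewed by hand before deleting anything:

    # Flag table directories whose only content is an empty "backups" subfolder.
    # /var/lib/cassandra/data is an assumed default; adjust to data_file_directories.
    for dir in /var/lib/cassandra/data/*/*/; do
        dir=${dir%/}
        if [ "$(ls -A "$dir")" = "backups" ] && [ -z "$(ls -A "$dir/backups")" ]; then
            echo "candidate for manual removal: $dir"
        fi
    done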


Thanks,

Dipan Shah


From: Sébastien Rebecchi 
Sent: 04 December 2023 13:54
To: user@cassandra.apache.org 
Subject: Remove folders of deleted tables

Hello,

When we delete a table with Cassandra, it leaves the folder of that table on the
file system, even if there is no snapshot (auto snapshots disabled).
So we end up with the folder {data folder}/{keyspace name}/{table name-table
id} containing only one subfolder, backups, which is itself empty.
Is there a way to automatically remove folders of deleted tables?

Sébastien.


Re: Delete too wide partitions

2023-07-16 Thread Dipan Shah
Hello Sébastien,

No, there is no built-in solution to perform such an operation in Cassandra.
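
What you can do instead is monitor partition sizes and handle oversized
partitions from the application side. A hedged sketch (keyspace, table and key
names are placeholders):

    # inspect partition size percentiles for the table
    nodetool tablehistograms my_keyspace my_table

    # once an oversized partition's key is known, delete it yourself
    cqlsh -e "DELETE FROM my_keyspace.my_table WHERE partition_key = 'oversized-key';"

Cassandra also logs a warning when it compacts a partition larger than
compaction_large_partition_warning_threshold_mb, which can help you find the
offending keys.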

Thanks,
Dipan

On Sun, 16 Jul 2023 at 4:03 PM, Sébastien Rebecchi 
wrote:

> Hi everyone
>
> Is there a way to tell Cassandra to automatically delete a partition when
> its size increases beyond a given threshold?
>
> Best regards
>
> Sébastien
>
-- 
Thanks,
Dipan Shah
Data Engineer, Anant Corporation


Re: write on ONE node vs replication factor

2023-07-16 Thread Dipan Shah
Hello Anurag,

In Cassandra, strong consistency is guaranteed when R + W > N, where R is the
read consistency, W is the write consistency and N is the replication factor.

So in your case, R(2) + W(1) = 3, which is NOT greater than your replication
factor (3), so you will not be able to guarantee strong consistency. This is
because you will write to 1 replica, but your immediate read can go to the
other 2 (quorum) replicas, and they might not be updated yet.
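
For example, writing and reading both at QUORUM would give you W(2) + R(2) = 4,
which is greater than 3. A minimal cqlsh sketch, assuming RF = 3 (the keyspace,
table and key names are placeholders):

    cqlsh> CONSISTENCY QUORUM;    -- applies to the requests that follow
    cqlsh> INSERT INTO ks.t (pk, v) VALUES ('k', 1);
    cqlsh> SELECT v FROM ks.t WHERE pk = 'k';   -- guaranteed to see the insert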

On Sun, Jul 16, 2023 at 8:06 AM Anurag Bisht 
wrote:

> thank you Jeff,
> it makes more sense now. How about if I write with ONE consistency,
> replication factor = 3, and read consistency QUORUM? I am guessing that in
> that case I will not have the empty read even if it happens immediately
> after the write request; let me know your thoughts?
>
> Cheers,
> Anurag
>
> On Sat, Jul 15, 2023 at 7:28 PM Jeff Jirsa  wrote:
>
>> Consistency level controls when queries acknowledge/succeed
>>
>> Replication factor is where data lives / how many copies
>>
>> If you write at consistency ONE and replication factor 3, the query
>> finishes successfully when the write is durable on one of the 3 copies.
>>
>> It will get sent to all 3, but it’ll return when it’s durable on one.
>>
>> If you write at ONE and it goes to the first replica, and you read at ONE
>> and it reads from the last replica, it may return without the data:  you
>> may not see a given write right away.
>>
>> > On Jul 15, 2023, at 7:05 PM, Anurag Bisht 
>> wrote:
>> >
>> > 
>> > Hello Users,
>> >
>> > I am new to Cassandra and trying to understand the architecture of it.
>> If I write to ONE node for a particular key and have a replication factor
>> of 3, will the written key get replicated to the other two nodes? Let me
>> know if I am thinking incorrectly.
>> >
>> > Thanks,
>> > Anurag
>>
>

-- 
Thanks,
Dipan Shah
Data Engineer, Anant Corporation


Re: Cleanup

2023-02-16 Thread Dipan Shah
Hi Marc,

Changes made using "nodetool setcompactionthroughput" only remain in effect
until the Cassandra service restarts.

The throughput value will revert to the setting in cassandra.yaml after a
service restart.
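
A minimal sketch of both approaches (64 MB/s is just an example value):

    # runtime change, reverts on the next restart
    nodetool setcompactionthroughput 64
    nodetool getcompactionthroughput    # verify the current value

    # permanent change: set "compaction_throughput_mb_per_sec: 64" in
    # cassandra.yaml and restart the node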

On Fri, Feb 17, 2023 at 1:04 PM Marc Hoppins  wrote:

> …and if it is altered via nodetool, is it altered until manually changed
> or service restart, so must it be manually put back?
>
>
>
> *From:* Aaron Ploetz 
> *Sent:* Thursday, February 16, 2023 4:50 PM
> *To:* user@cassandra.apache.org
> *Subject:* Re: Cleanup
>
>
>
>
> So if I remember right, setting compaction_throughput_mb_per_sec to zero
> effectively disables throttling, which means cleanup and compaction will
> run as fast as the instance will allow.  For normal use, I'd recommend
> capping that at 8 or 16.
>
>
>
> Aaron
>
>
>
>
>
> On Thu, Feb 16, 2023 at 9:43 AM Marc Hoppins 
> wrote:
>
> compaction_throughput_mb_per_sec is 0 in cassandra.yaml. Is setting it via
> nodetool going to provide any increase?
>
>
>
> *From:* Durity, Sean R via user 
> *Sent:* Thursday, February 16, 2023 4:20 PM
> *To:* user@cassandra.apache.org
> *Subject:* RE: Cleanup
>
>
>
>
> Clean-up is constrained/throttled by compactionthroughput. If your system
> can handle it, you can increase that throughput (nodetool
> setcompactionthroughput) for the clean-up in order to reduce the total time.
>
>
>
> It is a node-isolated operation, not cluster-involved. I often run clean
> up on all nodes in a DC at the same time. Think of it as compaction and
> consider your cluster performance/workload/timelines accordingly.
>
>
>
> Sean R. Durity
>
>
>
> *From:* manish khandelwal 
> *Sent:* Thursday, February 16, 2023 5:05 AM
> *To:* user@cassandra.apache.org
> *Subject:* [EXTERNAL] Re: Cleanup
>
>
>
> There is no advantage to running cleanup if no new nodes have been
> introduced, so running it regularly will not reduce the cleanup time for
> when you do add new nodes.
>
>
>
>  Cleanup is local to a node, so network bandwidth should have no effect on
> reducing cleanup time.
>
>
>
>  Don't ignore cleanup, as it can leave your disks occupied without any use.
>
>
>
>  You should plan to run cleanup in a lean period (low traffic). You can also
> use the keyspace and table name suboptions to plan it in such a way that the
> I/O pressure is not too high.
>
>
>
>
>
> Regards
>
> Manish
>
>
>
> On Thu, Feb 16, 2023 at 3:12 PM Marc Hoppins 
> wrote:
>
> Hulloa all,
>
>
>
> I read a thing re. adding new nodes where the recommendation was to run
> cleanup on the nodes after adding a new node to remove redundant token
> ranges.
>
>
>
> I timed this way back when we only had ~20G of data per node and it took
> approx. 5 mins per node.  After adding a node on Tuesday, I figured I’d run
> cleanup.
>
>
>
> Per node, it is taking 6+ hours now as we have 2-2.5T per node.
>
>
>
> Should we be running cleanup regularly regardless of whether or not new
> nodes have been added?  Would it reduce cleanup times for when we do add
> new nodes?
>
> If we double the network bandwidth can we effectively reduce this lengthy
> cleanup?
>
> Maybe just ignore cleanup entirely?
>
> I appreciate that cleanup will increase the load but running cleanup on
> one node at a time seems impractical.  How many simultaneous nodes (per
> rack) should we limit cleanup to?
>
>
>
> More experienced suggestions would be most appreciated.
>
>
> Marc
>
>
>
>
>

-- 
Thanks,
Dipan Shah
Data Engineer, Anant Corporation


Re: Pulling unreceived schema versions

2023-02-14 Thread Dipan Shah
Hello Joe,

"Pulling unreceived schema versions" in Apache Cassandra means that a node is 
requesting schema updates from other nodes in the cluster that it has not yet 
received. This is a normal part of the Cassandra distributed architecture, as 
each node needs to stay up-to-date with the latest schema changes made in the 
cluster.

If this is just a DEBUG/INFO message and not a WARN/ERROR, you can safely
ignore it.
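
If you want to double-check that it really is only logged at DEBUG, a quick
grep works (log paths are the package-install defaults; adjust to your
environment):

    grep -c "Pulling unreceived schema versions" /var/log/cassandra/debug.log
    grep -c "Pulling unreceived schema versions" /var/log/cassandra/system.log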


Thanks,

Dipan Shah


From: Joe Obernberger 
Sent: Monday, February 13, 2023 9:10 PM
To: user@cassandra.apache.org 
Subject: Pulling unreceived schema versions

Hi all - I'm seeing this message:
"Pulling unreceived schema versions..."

in the debug log being repeated exactly every minute, but I can't find
what this means?
Thank you!

-Joe




Re: High disk usage cassandra 3.11.7

2021-09-17 Thread Dipan Shah
Hello Abdul,

Adding to what Bowen already shared for snapshots.

Assuming that you're not just amplifying disk space by updating/deleting
existing data many times, here are the things you should consider:

  *   Manual snapshots
      *   Check (nodetool listsnapshots) and remove (nodetool clearsnapshot)
          unwanted snapshots
  *   Automatic snapshots
      *   You can have unwanted snapshots if auto snapshot is enabled and you
          are frequently dropping, truncating or scrubbing tables. Check if
          that is the case
  *   Incremental backups
      *   Check if you have enabled incremental backups. Those files do not get
          deleted on their own and need to be cleaned out regularly
  *   Not running cleanup after adding new nodes to the cluster
      *   Check if you have recently added nodes to the cluster and missed
          running cleanups after that
  *   Compaction failing due to low disk space
      *   Cassandra will not be able to compact data (and free up space) if it
          does not have the required disk space to rewrite files. Check
          system.log for compaction errors
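
Most of those checks map to one-liners; a sketch (paths are the package
defaults, and clearsnapshot permanently deletes snapshot data, so review the
listsnapshots output first):

    nodetool listsnapshots                            # manual and automatic snapshots
    nodetool clearsnapshot -t unwanted_snapshot_tag   # remove one snapshot by tag
    du -sh /var/lib/cassandra/data/*/*/backups        # size of incremental backups
    nodetool cleanup                                  # reclaim ranges after adding nodes
    grep -i "not enough space" /var/log/cassandra/system.log   # failed compactions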

Thanks,

Dipan Shah


From: Bowen Song 
Sent: Friday, September 17, 2021 4:53 PM
To: user@cassandra.apache.org 
Subject: Re: High disk usage cassandra 3.11.7

Assuming your total disk space is a lot bigger than 50GB in size
(accounting for disk space amplification, commit log, logs, OS data,
etc.), I would suspect the disk space is being used by something else.
Have you checked that the disk space is actually being used by the
cassandra data directory? If so, have a look at 'nodetool listsnapshots'
command output as well.


On 17/09/2021 05:48, Abdul Patel wrote:
> Hello
>
> We have Cassandra with LeveledCompactionStrategy; we recently found the
> filesystem almost 90% full, but the data was only 10M records.
> Will manual compaction work? We are not sure it is recommended, and space is
> also a constraint. We tried removing and adding one node, and now the data is
> at 20GB, which looks appropriate.
> So is the only solution to reclaim space to remove/add a node?


Re: unable to repair

2021-05-26 Thread Dipan Shah
Hello Sebastien,

Not sure but have you checked the output of "nodetool describecluster"? A 
schema mismatch or node unavailability might result in this.
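
For reference, a healthy cluster shows a single schema version with every node
listed under it; a sketch of what to look for (the version id and addresses
below are placeholders):

    $ nodetool describecluster
    ...
    Schema versions:
        86afa796-d883-3932-aa73-6b017cef0d19: [10.0.0.1, 10.0.0.2, 10.0.0.3]

More than one version listed there, or nodes marked UNREACHABLE, would explain
the repair failures.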


Thanks,

Dipan Shah


From: Sébastien Rebecchi 
Sent: Wednesday, May 26, 2021 7:35 PM
To: user@cassandra.apache.org 
Subject: unable to repair

Hi,

I have an issue repairing my Cassandra cluster; it was already the case with
Cassandra 3, and the issue is not solved with Cassandra 4 RC1.

I run the following command in a for loop, one node at a time:

nodetool -h THE_NODE -u THE_USER -pw THE_PASSWORD repair --full -pr

and I always get the following error, see message and stack trace for Cassandra 
4 RC1 at the bottom of the message (the same for C3).

I don't know what to do with that. Are there some mistakes I could have made in
my table design explaining that? I have heard, for example, that it is not
recommended to have big partitions, so I changed my data model to remove the
clustering keys I had before and split big partitions into many independent
ones; now the partitions are max 500KB each (the vast majority of them are max
100KB). But it did not change anything. Also, my partition key was a compound
of 9 columns, and I changed that to a single-column partition key by generating
ids myself; the same, no improvement.

Thank you for your help,

Sébastien

--

error: Repair job has failed with the error message: [2021-05-26 15:54:19,981] 
Repair command #2 failed with error Got negative replies from endpoints 
[135.181.222.100, 135.181.217.109, 135.181.221.180]
-- StackTrace --
java.lang.RuntimeException: Repair job has failed with the error message: 
[2021-05-26 15:54:19,981] Repair command #2 failed with error Got negative 
replies from endpoints [135.181.222.100, 135.181.217.109, 135.181.221.180]
at org.apache.cassandra.tools.RepairRunner.progress(RepairRunner.java:116)
at 
org.apache.cassandra.utils.progress.jmx.JMXNotificationProgressListener.handleNotification(JMXNotificationProgressListener.java:77)
at 
com.sun.jmx.remote.internal.ClientNotifForwarder$NotifFetcher.dispatchNotification(ClientNotifForwarder.java:583)
at 
com.sun.jmx.remote.internal.ClientNotifForwarder$NotifFetcher.doRun(ClientNotifForwarder.java:533)
at 
com.sun.jmx.remote.internal.ClientNotifForwarder$NotifFetcher.run(ClientNotifForwarder.java:452)
at 
com.sun.jmx.remote.internal.ClientNotifForwarder$LinearExecutor$1.run(ClientNotifForwarder.java:108)


Re: How bottom of cassandra save data efficiently?

2019-12-31 Thread Dipan Shah
Hello lampahome,

Data will be compressed but you will also have to account for the replication 
factor that you will be using.
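
As a rough worked example, ignoring per-cell overhead and before compression:

    100,000 rows x 56 bytes = ~5.6 MB of raw data on one replica
    5.6 MB x replication factor 3 = ~16.8 MB across the cluster

In practice the on-disk size also includes per-cell timestamps and sstable
overhead, which compression then reduces again, so the real number can land on
either side of this estimate.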


Thanks,

Dipan Shah


From: lampahome 
Sent: Tuesday, December 31, 2019 8:06 AM
To: user@cassandra.apache.org 
Subject: How bottom of cassandra save data efficiently?

If I use var a as the primary key and var b as the secondary key, and a and b
are 16 bytes and 8 bytes respectively.

And other data are 32 bytes.

In one row, I have a+b+data = 16+8+32 = 56 bytes.

If I have 100,000 rows to store in Cassandra, will it occupy 56 x 100,000
bytes on my disk? Or will the data be compressed?

thx


No progress in compactionstats

2019-12-01 Thread Dipan Shah
Hello,

I am running a 5 node Cassandra cluster on V 3.7 and did not understand why the 
following thing was happening. I had altered the compaction strategy of a table 
from Size to Leveled and while running "nodetool compactionstats" found that 
the SSTables were stuck and not getting compacted. This was happening on 
majority of the nodes while the remaining were still showing some progress at 
compacting the SSTables.

[screenshot: nodetool compactionstats output]

There were no errors in system.log and a service restart also did not help. I 
reverted the compaction strategy to Size to see what happens and that sent the 
value of pending tasks back to 0.

I have done this earlier for similar tables and it has worked perfectly fine
for me. What could have gone wrong over here?


Thanks,

Dipan Shah


Re: MV's stuck in build state

2019-03-04 Thread Dipan Shah
Hello Kenneth,

Apologies for the late reply.

1) On production, the value of x was 67 MB and y was 16 MB, as the value of
commitlog_segment_size_in_mb is 32.
2) On Dev, the value of x was 18 MB and y was 16 MB, as the value of
commitlog_segment_size_in_mb was 32 initially. I had bumped up the value of
commitlog_segment_size_in_mb to 128 when the node eventually crashed.
3) No, I did not try org.apache.cassandra.db:type=CompactionManager, but I did
try "nodetool stop" and "nodetool stop VIEW_BUILD".


Thanks,

Dipan Shah


From: Kenneth Brotman 
Sent: Friday, March 1, 2019 8:19 PM
To: user@cassandra.apache.org
Subject: RE: MV's stuck in build state


Dipan,



On your production cluster, when you were first getting the “Mutation of  
bytes …” message, what was the value of x and y?

How about when you got the message on the Dev Cluster, what was the value of x 
and y in that message?

On the Dev cluster, did you try going into JMX and directly hitting the 
org.apache.cassandra.db:type=CompactionManager mbean's stopCompaction operation?





From: Dipan Shah [mailto:dipan@hotmail.com]
Sent: Friday, March 01, 2019 12:56 AM
To: Kenneth Brotman; user@cassandra.apache.org
Subject: Re: MV's stuck in build state



Hello Kenneth,



Thanks for replying.



I had actually tried this on a Dev environment earlier and it caused the node 
to spin out of control. I'll explain what I did over there:



1) Found "Mutation of  bytes is too large for the maxiumum size of " and 
thus increased the value of "commitlog_segment_size_in_mb" to 64

2) This worked for a few minutes and again the view started failing when it hit 
the new limits and the messages now were "Mutation of  bytes is too large 
for the maxiumum size of 2*"

3) So just to try I increased the value to 128

4) Now after this change the node started crashing as soon as I brought the 
service online. I was not able to recover even after restoring the value of 
"commitlog_segment_size_in_mb" to 32



Now there is a key difference between that issue and what I am facing currently:



The views were not dropped on the earlier environment, whereas I have already
dropped the view on the current environment (and can't experiment much, as the
current environment is in production).



I know this is a bit tricky, but I'm pretty much stuck over here and am trying
to find a solution that does not create new problems.



Thanks,

Dipan Shah



From: Kenneth Brotman 
Sent: Friday, March 1, 2019 12:26 AM
To: user@cassandra.apache.org
Subject: RE: MV's stuck in build state



Hi Dipan,



Did you try following the advice in the referenced DataStax article called 
Mutation of  bytes is too large for the maximum size of 
<https://support.datastax.com/hc/en-us/articles/207267063-Mutation-of-x-bytes-is-too-large-for-the-maxiumum-size-of-y->
 as suggested in the stackoverflow.com post you cited?



Kenneth Brotman



From: Dipan Shah [mailto:dipan....@hotmail.com]
Sent: Thursday, February 28, 2019 2:23 AM
To: Dipan Shah; user@cassandra.apache.org
Subject: Re: MV's stuck in build state



Forgot to add version info. This is on 3.7.



[cqlsh 5.0.1 | Cassandra 3.7 | CQL spec 3.4.2 | Native protocol v4]



Thanks,

Dipan Shah



From: Dipan Shah 
Sent: Thursday, February 28, 2019 3:38 PM
To: user@cassandra.apache.org
Subject: MV's stuck in build state



Hello All,



I have a few MV's that are stuck in build state because of a bad schema design 
and thus getting a lot of messages like this "Mutation xxx is too large for 
maximum size of 16.000MiB".






I have dropped those MV's and I can no longer see their schema in the keyspace. 
But they are visible under "system.views_build_in_progress" and "nodetool 
viewbuildstatus".



I have tried "nodetool stop VIEW_BUILD" as suggested here: 
https://stackoverflow.com/questions/40553499/stop-cassandra-materialized-view-build
and have also rebooted a few nodes in the cluster. This has also not helped.



Is there anything else that can be done over here?

Stop Cassandra Materialized View Build - Stack Overflow
<https://stackoverflow.com/questions/40553499/stop-cassandra-materialized-view-build>

Its not documented, but nodetool stop actually takes any compaction type, not
just the ones listed (which the view build is one of). So you can simply:
nodetool stop VIEW_BUILD. Or you can hit JMX directly with the
org.apache.cassandra.db:type=CompactionManager mbean's stopCompaction
operation. All thats really gonna do is set a flag for the view builder to
stop on its next loop.






Thanks,

Dipan Shah


Re: MV's stuck in build state

2019-03-01 Thread Dipan Shah
Hello Kenneth,

Thanks for replying.

I had actually tried this on a Dev environment earlier and it caused the node 
to spin out of control. I'll explain what I did over there:

1) Found "Mutation of  bytes is too large for the maxiumum size of " and 
thus increased the value of "commitlog_segment_size_in_mb" to 64
2) This worked for a few minutes and again the view started failing when it hit 
the new limits and the messages now were "Mutation of  bytes is too large 
for the maxiumum size of 2*"
3) So just to try I increased the value to 128
4) Now after this change the node started crashing as soon as I brought the 
service online. I was not able to recover even after restoring the value of 
"commitlog_segment_size_in_mb" to 32

Now there is a key difference between that issue and what I am facing currently:

The views were not dropped on the earlier environment, whereas I have already
dropped the view on the current environment (and can't experiment much, as the
current environment is in production).

I know this is a bit tricky, but I'm pretty much stuck over here and am trying
to find a solution that does not create new problems.
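
One option I am considering, strictly as a hedged sketch and not a verified
procedure: clear the stale rows for the already-dropped view from the system
table it still shows up in, then restart that node. The exact table name and
key columns should be checked with DESCRIBE on the target version first, and I
would try it on a single non-critical node:

    -- inspect what is still recorded for the dropped view
    SELECT * FROM system.views_build_in_progress;
    -- remove the stale rows (key column names are assumptions), then restart
    DELETE FROM system.views_build_in_progress
    WHERE keyspace_name = 'my_keyspace' AND view_name = 'my_dropped_view';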


Thanks,

Dipan Shah


From: Kenneth Brotman 
Sent: Friday, March 1, 2019 12:26 AM
To: user@cassandra.apache.org
Subject: RE: MV's stuck in build state


Hi Dipan,



Did you try following the advice in the referenced DataStax article called 
Mutation of  bytes is too large for the maximum size of 
<https://support.datastax.com/hc/en-us/articles/207267063-Mutation-of-x-bytes-is-too-large-for-the-maxiumum-size-of-y->
 as suggested in the stackoverflow.com post you cited?



Kenneth Brotman



From: Dipan Shah [mailto:dipan@hotmail.com]
Sent: Thursday, February 28, 2019 2:23 AM
To: Dipan Shah; user@cassandra.apache.org
Subject: Re: MV's stuck in build state



Forgot to add version info. This is on 3.7.



[cqlsh 5.0.1 | Cassandra 3.7 | CQL spec 3.4.2 | Native protocol v4]



Thanks,

Dipan Shah



From: Dipan Shah 
Sent: Thursday, February 28, 2019 3:38 PM
To: user@cassandra.apache.org
Subject: MV's stuck in build state



Hello All,



I have a few MV's that are stuck in build state because of a bad schema design 
and thus getting a lot of messages like this "Mutation xxx is too large for 
maximum size of 16.000MiB".






I have dropped those MV's and I can no longer see their schema in the keyspace. 
But they are visible under "system.views_build_in_progress" and "nodetool 
viewbuildstatus".



I have tried "nodetool stop VIEW_BUILD" as suggested here: 
https://stackoverflow.com/questions/40553499/stop-cassandra-materialized-view-build
and have also rebooted a few nodes in the cluster. This has also not helped.



Is there anything else that can be done over here?

Stop Cassandra Materialized View Build - Stack Overflow
<https://stackoverflow.com/questions/40553499/stop-cassandra-materialized-view-build>

Its not documented, but nodetool stop actually takes any compaction type, not
just the ones listed (which the view build is one of). So you can simply:
nodetool stop VIEW_BUILD. Or you can hit JMX directly with the
org.apache.cassandra.db:type=CompactionManager mbean's stopCompaction
operation. All thats really gonna do is set a flag for the view builder to
stop on its next loop.






Thanks,

Dipan Shah


Re: MV's stuck in build state

2019-02-28 Thread Dipan Shah
Forgot to add version info. This is on 3.7.

[cqlsh 5.0.1 | Cassandra 3.7 | CQL spec 3.4.2 | Native protocol v4]


Thanks,

Dipan Shah


From: Dipan Shah 
Sent: Thursday, February 28, 2019 3:38 PM
To: user@cassandra.apache.org
Subject: MV's stuck in build state

Hello All,

I have a few MV's that are stuck in build state because of a bad schema design 
and thus getting a lot of messages like this "Mutation xxx is too large for 
maximum size of 16.000MiB".


I have dropped those MV's and I can no longer see their schema in the keyspace. 
But they are visible under "system.views_build_in_progress" and "nodetool 
viewbuildstatus".

I have tried "nodetool stop VIEW_BUILD" as suggested here: 
https://stackoverflow.com/questions/40553499/stop-cassandra-materialized-view-build
and have also rebooted a few nodes in the cluster. This has also not helped.

Is there anything else that can be done over here?
Stop Cassandra Materialized View Build - Stack Overflow
<https://stackoverflow.com/questions/40553499/stop-cassandra-materialized-view-build>

Its not documented, but nodetool stop actually takes any compaction type, not
just the ones listed (which the view build is one of). So you can simply:
nodetool stop VIEW_BUILD. Or you can hit JMX directly with the
org.apache.cassandra.db:type=CompactionManager mbean's stopCompaction
operation. All thats really gonna do is set a flag for the view builder to
stop on its next loop.




Thanks,

Dipan Shah


MV's stuck in build state

2019-02-28 Thread Dipan Shah
Hello All,

I have a few MV's that are stuck in build state because of a bad schema design 
and thus getting a lot of messages like this "Mutation xxx is too large for 
maximum size of 16.000MiB".


I have dropped those MV's and I can no longer see their schema in the keyspace. 
But they are visible under "system.views_build_in_progress" and "nodetool 
viewbuildstatus".

I have tried "nodetool stop VIEW_BUILD" as suggested here: 
https://stackoverflow.com/questions/40553499/stop-cassandra-materialized-view-build
and have also rebooted a few nodes in the cluster. This has also not helped.

Is there anything else that can be done over here?
Stop Cassandra Materialized View Build - Stack Overflow
<https://stackoverflow.com/questions/40553499/stop-cassandra-materialized-view-build>

Its not documented, but nodetool stop actually takes any compaction type, not
just the ones listed (which the view build is one of). So you can simply:
nodetool stop VIEW_BUILD. Or you can hit JMX directly with the
org.apache.cassandra.db:type=CompactionManager mbean's stopCompaction
operation. All thats really gonna do is set a flag for the view builder to
stop on its next loop.




Thanks,

Dipan Shah


View slow queries in V3

2018-11-01 Thread Dipan Shah
Hi All,

Do we have any built-in features to log slow/resource-heavy queries?

I tried checking system_traces.sessions for currently running sessions, but
even that does not have any data.

I'm asking this because I can see 2 nodes in my cluster going Out Of Memory 
multiple times but we're not able to find the reason for it. I'm suspecting 
that it is a heavy read query but can't find any logs for it.

Thanks,
Dipan


Re: backup/restore cassandra data

2018-03-08 Thread Dipan Shah
Commitlog gets truncated once the relevant data is written to sstables, so you
can't use it to replay all the data stored on the node.

Also, snapshots are not automatic. You need to run the snapshot command on all
the nodes of your cluster. Snapshots only get created automatically if you run
a truncate or a drop operation (as far as I know).

The positive thing in your scenario is that your data directories are intact,
and you can use them directly as an effective snapshot, as Ben suggested. You
will only have to ensure that the new node has the same token range and also
has the table schema.
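
For next time, explicit snapshots make this kind of recovery much easier; a
minimal sketch (the tag and keyspace names are placeholders):

    nodetool snapshot -t pre_migration my_keyspace   # hard-links the current sstables
    nodetool listsnapshots                           # confirm it was created
    nodetool ring > tokens.txt                       # record token assignments for a restore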


Thanks,

Dipan Shah


From: onmstester onmstester <onmstes...@zoho.com>
Sent: Thursday, March 8, 2018 1:31 PM
To: user
Subject: Re: backup/restore cassandra data

Thanks
But isn't there a method to restore the node as it was before the crash,
including the commitlog and every last bit of data inserted?
How often would snapshots be created? Shouldn't they be created manually by
nodetool? I haven't created snapshots on the node!




 On Thu, 08 Mar 2018 09:41:29 +0330 Ben Slater <ben.sla...@instaclustr.com> 
wrote 

You should be able to follow the same approach(es) as restoring from a backup,
as outlined here: 
https://docs.datastax.com/en/cassandra/2.1/cassandra/operations/ops_backup_snapshot_restore_t.html#ops_backup_snapshot_restore_t

Cheers
Ben

On Thu, 8 Mar 2018 at 17:07 onmstester onmstester 
<onmstes...@zoho.com<mailto:onmstes...@zoho.com>> wrote:

-- 
Ben Slater
Chief Product Officer, Instaclustr

Would it be possible to copy/paste the Cassandra data directory from one of
the nodes (whose OS partition is corrupted) and use it in a fresh Cassandra
node? I've used rf=1 so that's my only chance!







Re: Error during select query - Found other issues with cluster too

2017-12-20 Thread Dipan Shah
Hello Nicolas,


Here's our data model:


CREATE TABLE hhahistory.history (
    tablename text,
    columnname text,
    tablekey bigint,
    updateddate timestamp,
    dateyearpart bigint,
    historyid bigint,
    appname text,
    audittype text,
    createddate timestamp,
    dbsession uuid,
    firstname text,
    historybatch uuid,
    historycassandraid uuid,
    hostname text,
    isvlm boolean,
    lastname text,
    loginname text,
    newvalue text,
    notes text,
    oldvalue text,
    reason text,
    updatedby text,
    updatedutcdate timestamp,
    dbname text,
    PRIMARY KEY ((tablename, columnname, dateyearpart), tablekey, updateddate, historyid)
);


We are using this to store audit data of our primary SQL Server DB. Our primary 
key consists of the original table name, column name and the month+year 
combination.


I just realized that a script had managed to sneak in more than 100 million
rows on the same day, so that might be the reason for all this data going into
the same partition. I'll see if I can do something about this.


Thanks,

Dipan Shah



From: Nicolas Guyomar <nicolas.guyo...@gmail.com>
Sent: Wednesday, December 20, 2017 2:48 PM
To: user@cassandra.apache.org
Subject: Re: Error during select query - Found other issues with cluster too

Hi Dipan,

This seems like a really unbalanced data model; you have some very wide rows!

Can you share your model and explain a bit what you are storing in this table?
Your partition key might not be appropriate.

On 20 December 2017 at 09:43, Dipan Shah 
<dipan@hotmail.com<mailto:dipan@hotmail.com>> wrote:

Hello Kurt,


I think I might have found the problem:


Can you please look at the tablehistogram for a table and see if that seems to 
be the problem? I think the Max Partition Size and Cell Count are too high:


Percentile  SSTables  Write Latency  Read Latency  Partition Size  Cell Count
                      (micros)       (micros)      (bytes)
50.00%      0.00      0.00           0.00          29521           2299
75.00%      0.00      0.00           0.00          379022          29521
95.00%      0.00      0.00           0.00          5839588         454826
98.00%      0.00      0.00           0.00          30130992        2346799
99.00%      0.00      0.00           0.00          89970660        7007506
Min         0.00      0.00           0.00          150             0
Max         0.00      0.00           0.00          53142810146     1996099046



Thanks,

Dipan Shah


From: Dipan Shah <dipan@hotmail.com<mailto:dipan@hotmail.com>>
Sent: Wednesday, December 20, 2017 12:04 PM
To: User
Subject: Re: Error during select query - Found other issues with cluster too


Hello Kurt,


We are using V 3.11.0 and I think this might be part of a bigger problem. I can
see that nodes are failing in my cluster unexpectedly and also repair commands 
are failing.


Repair command failure error:


INFO  [Native-Transport-Requests-2] 2017-12-19 17:06:02,332 Message.java:619 -
Unexpected exception during request; channel = [id: 0xacc9a54a,
L:/10.10.52.17:9042 ! R:/10.10.55.229:58712]
io.netty.channel.unix.Errors$NativeIoException: syscall:read(...)() failed:
Connection reset by peer
at io.netty.channel.unix.FileDescriptor.readAddress(...)(Unknown Source)
~[netty-all-4.0.44.Final.jar:4.0.44.Final]
INFO  [Native-Transport-Requests-2] 2017-12-19 17:06:11,056 Message.java:619 -
Unexpected exception during request; channel = [id: 0xeebf628d,
L:/10.10.52.17:9042 ! R:/10.10.55.229:58130]
io.netty.channel.unix.Errors$NativeIoException: syscall:read(...)() failed:
Connection reset by peer


Node failure error:


ERROR [STREAM-IN-/10.10.52.22:7000] 2017-12-20 01:17:17,691
JVMStabilityInspector.java:142 - JVM state determined to be unstable.  Exiting
forcefully due to:
java.io.FileNotFoundException:
/home/install/cassandra-3.11.0/data/data/hhahistory/history-065e0c90d9be11e7afbcdfeb48785ac5/mc-19095-big-Filter.db
 (Too many open files)
at java.io.FileOutputStream.open0(Native Method) ~[na:1.8.0_131]
at java.io.FileOutputStream.open(FileOutputStream.java:270) ~[na:1.8.0_131]
at java.io.FileOutputStream.<init>(FileOutputStream.java:213) ~[na:1.8.0_131]
at java.io.FileOutputStream.<init>(FileOutputStream.java:101) ~[na:1.8.0_131]
at org.apache.cassandra.io.sstable.format.big.BigTableWriter$IndexWriter.flushBf(BigTableWriter.java:486)
 ~[apache-cassandra-3.11.0.jar:3.11.0]
at org.apache.cassandra.io.sstable.format.big.BigTableWriter$IndexWriter.doPrepare(BigTableWriter.java:516)
 ~[apache-cassandra-3.11.0.jar:3.11.0]
at org.apache.cassandra.utils.concurrent.Transactional$AbstractTransactional.prepareToCommit(Transactional.java:173)

Re: Error during select query - Found other issues with cluster too

2017-12-20 Thread Dipan Shah
Hello Adama,


I also realised this and found over 14k files in the data folder.


I am not sure if this is the ideal solution, but I ran a manual compaction over
there and the number of files came down to 200.


I had the same issue on another node, so I am running a compaction there too,
and after that I will update whether it solved my problem.


Thanks,

Dipan Shah



From: adama.diab...@orange.com <adama.diab...@orange.com>
Sent: Wednesday, December 20, 2017 3:43 PM
To: user@cassandra.apache.org; Dipan Shah
Subject: RE: Error during select query - Found other issues with cluster too


Hi Dipan,



Your node failure trace said:

java.io.FileNotFoundException: 
/home/install/cassandra-3.11.0/data/data/hhahistory/history-065e0c90d9be11e7afbcdfeb48785ac5/mc-19095-big-Filter.db
 (Too many open files)

You are probably crossing the max number of open files set at the OS level for
the Cassandra login.



On linux boxes you can get the number of file handles currently opened by 
Cassandra and compare it to the max number set at OS level.

Can you, please, do the following as the Cassandra login:

$ ps -efa | grep -i cassandra    # to get the Cassandra process id; let's say 1234

$ lsof -n -p 1234                # change 1234 to your current Cassandra process id

$ ulimit -Hn ; ulimit -Sn
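
If the limit turns out to be too low, raising it for the Cassandra user is the
usual fix; a sketch (the file path and value are typical choices, adjust for
your distribution, and restart Cassandra afterwards):

    # /etc/security/limits.conf
    cassandra  -  nofile  100000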



What is your OS and its version?

Thanks,

Adama





From: Dipan Shah [mailto:dipan@hotmail.com]
Sent: Wednesday, 20 December 2017 07:34
To: User
Subject: Re: Error during select query - Found other issues with cluster too



Hello Kurt,



We are using V 3.11.0 and I think this might be part of a bigger problem. I can
see that nodes are failing in my cluster unexpectedly and also repair commands 
are failing.



Repair command failure error:



INFO  [Native-Transport-Requests-2] 2017-12-19 17:06:02,332 Message.java:619 - 
Unexpected exception during request; channel = [id: 0xacc9a54a, 
L:/10.10.52.17:9042 ! R:/10.10.55.229:58712]

io.netty.channel.unix.Errors$NativeIoException: syscall:read(...)() failed: 
Connection reset by peer

at io.netty.channel.unix.FileDescriptor.readAddress(...)(Unknown Source) 
~[netty-all-4.0.44.Final.jar:4.0.44.Final]

INFO  [Native-Transport-Requests-2] 2017-12-19 17:06:11,056 Message.java:619 - 
Unexpected exception during request; channel = [id: 0xeebf628d, 
L:/10.10.52.17:9042 ! R:/10.10.55.229:58130]

io.netty.channel.unix.Errors$NativeIoException: syscall:read(...)() failed: 
Connection reset by peer



Node failure error:



ERROR [STREAM-IN-/10.10.52.22:7000] 2017-12-20 01:17:17,691 
JVMStabilityInspector.java:142 - JVM state determined to be unstable.  Exiting 
forcefully due to:

java.io.FileNotFoundException: 
/home/install/cassandra-3.11.0/data/data/hhahistory/history-065e0c90d9be11e7afbcdfeb48785ac5/mc-19095-big-Filter.db
 (Too many open files)

at java.io.FileOutputStream.open0(Native Method) ~[na:1.8.0_131]

at java.io.FileOutputStream.open(FileOutputStream.java:270) ~[na:1.8.0_131]

at java.io.FileOutputStream.<init>(FileOutputStream.java:213) ~[na:1.8.0_131]

at java.io.FileOutputStream.<init>(FileOutputStream.java:101) ~[na:1.8.0_131]

at 
org.apache.cassandra.io.sstable.format.big.BigTableWriter$IndexWriter.flushBf(BigTableWriter.java:486)
 ~[apache-cassandra-3.11.0.jar:3.11.0]

at 
org.apache.cassandra.io.sstable.format.big.BigTableWriter$IndexWriter.doPrepare(BigTableWriter.java:516)
 ~[apache-cassandra-3.11.0.jar:3.11.0]

at 
org.apache.cassandra.utils.concurrent.Transactional$AbstractTransactional.prepareToCommit(Transactional.java:173)
 ~[apache-cassandra-3.11.0.jar:3.11.0]

at 
org.apache.cassandra.io.sstable.format.big.BigTableWriter$TransactionalProxy.doPrepare(BigTableWriter.java:364)
 ~[apache-cassandra-3.11.0.jar:3.11.0]

at 
org.apache.cassandra.utils.concurrent.Transactional$AbstractTransactional.prepareToCommit(Transactional.java:173)
 ~[apache-cassandra-3.11.0.jar:3.11.0]

at 
org.apache.cassandra.utils.concurrent.Transactional$AbstractTransactional.finish(Transactional.java:184)
 ~[apache-cassandra-3.11.0.jar:3.11.0]

at 
org.apache.cassandra.io.sstable.format.SSTableWriter.finish(SSTableWriter.java:264)
 ~[apache-cassandra-3.11.0.jar:3.11.0]

at 
org.apache.cassandra.io.sstable.SimpleSSTableMultiWriter.finish(SimpleSSTableMultiWriter.java:59)
 ~[apache-cassandra-3.11.0.jar:3.11.0]

at 
org.apache.cassandra.io.sstable.format.RangeAwareSSTableWriter.finish(RangeAwareSSTableWriter.java:129)
 ~[apache-cassandra-3.11.0.jar:3.11.0]

at 
org.apache.cassandra.streaming.StreamReceiveTask.received(StreamReceiveTask.java:110)
 ~[apache-cassandra-3.11.0.jar:3.11.0]

at org.apache.cassandra.streaming.StreamSession.receive(StreamSession.java:656) 
~[apache-cassandra-3.11.0.jar:3.11.0]

at 
org.apache.cassandra.streaming.StreamSession.messageReceived(StreamSession.java:523)
 ~[apache-cassandra-3.11.0.jar:3.11.0]

at 
org.apache.cassandra.streaming.ConnectionHandler$IncomingMessageHandler.run(ConnectionHandler.java:317)
 ~[apache-cassandra-3.11

Re: Error during select query - Found other issues with cluster too

2017-12-20 Thread Dipan Shah
Hello Kurt,


I think I might have found the problem:


Can you please look at the tablehistogram for a table and see if that seems to 
be the problem? I think the Max Partition Size and Cell Count are too high:


Percentile  SSTables  Write Latency  Read Latency  Partition Size  Cell Count
                      (micros)       (micros)      (bytes)
50.00%      0.00      0.00           0.00          29521           2299
75.00%      0.00      0.00           0.00          379022          29521
95.00%      0.00      0.00           0.00          5839588         454826
98.00%      0.00      0.00           0.00          30130992        2346799
99.00%      0.00      0.00           0.00          89970660        7007506
Min         0.00      0.00           0.00          150             0
Max         0.00      0.00           0.00          53142810146     1996099046



Thanks,

Dipan Shah



From: Dipan Shah <dipan@hotmail.com>
Sent: Wednesday, December 20, 2017 12:04 PM
To: User
Subject: Re: Error during select query - Found other issues with cluster too


Hello Kurt,


We are using V 3.11.0 and I think this might be part of a bigger problem. I can
see that nodes are failing in my cluster unexpectedly and also repair commands 
are failing.


Repair command failure error:


INFO  [Native-Transport-Requests-2] 2017-12-19 17:06:02,332 Message.java:619 - 
Unexpected exception during request; channel = [id: 0xacc9a54a, 
L:/10.10.52.17:9042 ! R:/10.10.55.229:58712]
io.netty.channel.unix.Errors$NativeIoException: syscall:read(...)() failed: 
Connection reset by peer
at io.netty.channel.unix.FileDescriptor.readAddress(...)(Unknown Source) 
~[netty-all-4.0.44.Final.jar:4.0.44.Final]
INFO  [Native-Transport-Requests-2] 2017-12-19 17:06:11,056 Message.java:619 - 
Unexpected exception during request; channel = [id: 0xeebf628d, 
L:/10.10.52.17:9042 ! R:/10.10.55.229:58130]
io.netty.channel.unix.Errors$NativeIoException: syscall:read(...)() failed: 
Connection reset by peer


Node failure error:


ERROR [STREAM-IN-/10.10.52.22:7000] 2017-12-20 01:17:17,691 
JVMStabilityInspector.java:142 - JVM state determined to be unstable.  Exiting 
forcefully due to:
java.io.FileNotFoundException: 
/home/install/cassandra-3.11.0/data/data/hhahistory/history-065e0c90d9be11e7afbcdfeb48785ac5/mc-19095-big-Filter.db
 (Too many open files)
at java.io.FileOutputStream.open0(Native Method) ~[na:1.8.0_131]
at java.io.FileOutputStream.open(FileOutputStream.java:270) ~[na:1.8.0_131]
at java.io.FileOutputStream.<init>(FileOutputStream.java:213) ~[na:1.8.0_131]
at java.io.FileOutputStream.<init>(FileOutputStream.java:101) ~[na:1.8.0_131]
at 
org.apache.cassandra.io.sstable.format.big.BigTableWriter$IndexWriter.flushBf(BigTableWriter.java:486)
 ~[apache-cassandra-3.11.0.jar:3.11.0]
at 
org.apache.cassandra.io.sstable.format.big.BigTableWriter$IndexWriter.doPrepare(BigTableWriter.java:516)
 ~[apache-cassandra-3.11.0.jar:3.11.0]
at 
org.apache.cassandra.utils.concurrent.Transactional$AbstractTransactional.prepareToCommit(Transactional.java:173)
 ~[apache-cassandra-3.11.0.jar:3.11.0]
at 
org.apache.cassandra.io.sstable.format.big.BigTableWriter$TransactionalProxy.doPrepare(BigTableWriter.java:364)
 ~[apache-cassandra-3.11.0.jar:3.11.0]
at 
org.apache.cassandra.utils.concurrent.Transactional$AbstractTransactional.prepareToCommit(Transactional.java:173)
 ~[apache-cassandra-3.11.0.jar:3.11.0]
at 
org.apache.cassandra.utils.concurrent.Transactional$AbstractTransactional.finish(Transactional.java:184)
 ~[apache-cassandra-3.11.0.jar:3.11.0]
at 
org.apache.cassandra.io.sstable.format.SSTableWriter.finish(SSTableWriter.java:264)
 ~[apache-cassandra-3.11.0.jar:3.11.0]
at 
org.apache.cassandra.io.sstable.SimpleSSTableMultiWriter.finish(SimpleSSTableMultiWriter.java:59)
 ~[apache-cassandra-3.11.0.jar:3.11.0]
at 
org.apache.cassandra.io.sstable.format.RangeAwareSSTableWriter.finish(RangeAwareSSTableWriter.java:129)
 ~[apache-cassandra-3.11.0.jar:3.11.0]
at 
org.apache.cassandra.streaming.StreamReceiveTask.received(StreamReceiveTask.java:110)
 ~[apache-cassandra-3.11.0.jar:3.11.0]
at org.apache.cassandra.streaming.StreamSession.receive(StreamSession.java:656) 
~[apache-cassandra-3.11.0.jar:3.11.0]
at 
org.apache.cassandra.streaming.StreamSession.messageReceived(StreamSession.java:523)
 ~[apache-cassandra-3.11.0.jar:3.11.0]
at 
org.apache.cassandra.streaming.ConnectionHandler$IncomingMessageHandler.run(ConnectionHandler.java:317)
 ~[apache-cassandra-3.11.0.jar:3.11.0]
at java.lang.Thread.run(Thread.java:748) [na:1.8.0_131]




Thanks,

Dipan Shah



From: kurt greaves <k...@instaclustr.com>
Sent: Wednesday, December 20, 2017 2:23 AM
To: User
Subject: Re: Error during select query

Can you send through the full stack trace as reported in the Cassandra logs? 
Also, what version are you running?

On 19 Dec. 2017 9:23 pm, "Dipan Shah" 
<dipan@hotmail.com<mailto:dipan@hotmail.com>> wrote:

Hello,


I am getting an error message when I'm running a select query from 1 particular 
node. The error is "ServerError: java.lang.IllegalStateException: Unable to 

Re: Error during select query - Found other issues with cluster too

2017-12-19 Thread Dipan Shah
Hello Kurt,


We are using V 3.11.0 and I think this might be part of a bigger problem. I can
see that nodes are failing in my cluster unexpectedly and also repair commands 
are failing.


Repair command failure error:


INFO  [Native-Transport-Requests-2] 2017-12-19 17:06:02,332 Message.java:619 - 
Unexpected exception during request; channel = [id: 0xacc9a54a, 
L:/10.10.52.17:9042 ! R:/10.10.55.229:58712]
io.netty.channel.unix.Errors$NativeIoException: syscall:read(...)() failed: 
Connection reset by peer
at io.netty.channel.unix.FileDescriptor.readAddress(...)(Unknown Source) 
~[netty-all-4.0.44.Final.jar:4.0.44.Final]
INFO  [Native-Transport-Requests-2] 2017-12-19 17:06:11,056 Message.java:619 - 
Unexpected exception during request; channel = [id: 0xeebf628d, 
L:/10.10.52.17:9042 ! R:/10.10.55.229:58130]
io.netty.channel.unix.Errors$NativeIoException: syscall:read(...)() failed: 
Connection reset by peer


Node failure error:


ERROR [STREAM-IN-/10.10.52.22:7000] 2017-12-20 01:17:17,691 
JVMStabilityInspector.java:142 - JVM state determined to be unstable.  Exiting 
forcefully due to:
java.io.FileNotFoundException: 
/home/install/cassandra-3.11.0/data/data/hhahistory/history-065e0c90d9be11e7afbcdfeb48785ac5/mc-19095-big-Filter.db
 (Too many open files)
at java.io.FileOutputStream.open0(Native Method) ~[na:1.8.0_131]
at java.io.FileOutputStream.open(FileOutputStream.java:270) ~[na:1.8.0_131]
at java.io.FileOutputStream.<init>(FileOutputStream.java:213) ~[na:1.8.0_131]
at java.io.FileOutputStream.<init>(FileOutputStream.java:101) ~[na:1.8.0_131]
at 
org.apache.cassandra.io.sstable.format.big.BigTableWriter$IndexWriter.flushBf(BigTableWriter.java:486)
 ~[apache-cassandra-3.11.0.jar:3.11.0]
at 
org.apache.cassandra.io.sstable.format.big.BigTableWriter$IndexWriter.doPrepare(BigTableWriter.java:516)
 ~[apache-cassandra-3.11.0.jar:3.11.0]
at 
org.apache.cassandra.utils.concurrent.Transactional$AbstractTransactional.prepareToCommit(Transactional.java:173)
 ~[apache-cassandra-3.11.0.jar:3.11.0]
at 
org.apache.cassandra.io.sstable.format.big.BigTableWriter$TransactionalProxy.doPrepare(BigTableWriter.java:364)
 ~[apache-cassandra-3.11.0.jar:3.11.0]
at 
org.apache.cassandra.utils.concurrent.Transactional$AbstractTransactional.prepareToCommit(Transactional.java:173)
 ~[apache-cassandra-3.11.0.jar:3.11.0]
at 
org.apache.cassandra.utils.concurrent.Transactional$AbstractTransactional.finish(Transactional.java:184)
 ~[apache-cassandra-3.11.0.jar:3.11.0]
at 
org.apache.cassandra.io.sstable.format.SSTableWriter.finish(SSTableWriter.java:264)
 ~[apache-cassandra-3.11.0.jar:3.11.0]
at 
org.apache.cassandra.io.sstable.SimpleSSTableMultiWriter.finish(SimpleSSTableMultiWriter.java:59)
 ~[apache-cassandra-3.11.0.jar:3.11.0]
at 
org.apache.cassandra.io.sstable.format.RangeAwareSSTableWriter.finish(RangeAwareSSTableWriter.java:129)
 ~[apache-cassandra-3.11.0.jar:3.11.0]
at 
org.apache.cassandra.streaming.StreamReceiveTask.received(StreamReceiveTask.java:110)
 ~[apache-cassandra-3.11.0.jar:3.11.0]
at org.apache.cassandra.streaming.StreamSession.receive(StreamSession.java:656) 
~[apache-cassandra-3.11.0.jar:3.11.0]
at 
org.apache.cassandra.streaming.StreamSession.messageReceived(StreamSession.java:523)
 ~[apache-cassandra-3.11.0.jar:3.11.0]
at 
org.apache.cassandra.streaming.ConnectionHandler$IncomingMessageHandler.run(ConnectionHandler.java:317)
 ~[apache-cassandra-3.11.0.jar:3.11.0]
at java.lang.Thread.run(Thread.java:748) [na:1.8.0_131]




Thanks,

Dipan Shah



From: kurt greaves <k...@instaclustr.com>
Sent: Wednesday, December 20, 2017 2:23 AM
To: User
Subject: Re: Error during select query

Can you send through the full stack trace as reported in the Cassandra logs? 
Also, what version are you running?

On 19 Dec. 2017 9:23 pm, "Dipan Shah" 
<dipan@hotmail.com<mailto:dipan@hotmail.com>> wrote:

Hello,


I am getting an error message when I'm running a select query from 1 particular 
node. The error is "ServerError: java.lang.IllegalStateException: Unable to 
compute ceiling for max when histogram overflowed".


Has anyone faced this error earlier? I tried to search for this but did not get 
anything that matches my scenario.


Please note, I do not get this error when I run the same query from any other 
node. And I'm connecting to the node using cqlsh.


Thanks,

Dipan Shah


Error during select query

2017-12-19 Thread Dipan Shah
Hello,


I am getting an error message when I'm running a select query from 1 particular 
node. The error is "ServerError: java.lang.IllegalStateException: Unable to 
compute ceiling for max when histogram overflowed".


Has anyone faced this error earlier? I tried to search for this but did not get 
anything that matches my scenario.


Please note, I do not get this error when I run the same query from any other 
node. And I'm connecting to the node using cqlsh.


Thanks,

Dipan Shah


Repair failing after it was interrupted once

2017-11-15 Thread Dipan Shah
Hello,


I was running a "nodetool repair -pr" command on one node and due to some 
network issues I lost connection to the server.


Now when I am running the same command on that and other servers too, the
repair job is failing with the following log:


[2017-11-15 03:55:19,965] Some repair failed
[2017-11-15 03:55:19,965] Repair command #1 finished in 0 seconds
error: Repair job has failed with the error message: [2017-11-15 03:55:19,965] 
Some repair failed
-- StackTrace --
java.lang.RuntimeException: Repair job has failed with the error message: 
[2017-11-15 03:55:19,965] Some repair failed
at 
org.apache.cassandra.tools.RepairRunner.progress(RepairRunner.java:116)
at 
org.apache.cassandra.utils.progress.jmx.JMXNotificationProgressListener.handleNotification(JMXNotificationProgressListener.java:77)
at 
com.sun.jmx.remote.internal.ClientNotifForwarder$NotifFetcher.dispatchNotification(ClientNotifForwarder.java:583)
at 
com.sun.jmx.remote.internal.ClientNotifForwarder$NotifFetcher.doRun(ClientNotifForwarder.java:533)
at 
com.sun.jmx.remote.internal.ClientNotifForwarder$NotifFetcher.run(ClientNotifForwarder.java:452)
at 
com.sun.jmx.remote.internal.ClientNotifForwarder$LinearExecutor$1.run(ClientNotifForwarder.java:108)

I found a few JIRA issues related to this but they were marked as fixed so I am 
not really sure if this is a bug. I am running Cassandra V 3.11.0.


One stackoverflow post suggested that I should restart all nodes and that seems 
to be overkill.


Can someone please guide me through this?


Thanks,

Dipan Shah


Re: Cassandra 3.10 Bootstrap- Error

2017-10-24 Thread Dipan Shah
Hi Anumod,


I faced the same issue with 3.11, and I suggest you first go through this link
to check whether the new node is able to communicate back and forth with the
seed node on the required port.


https://support.datastax.com/hc/en-us/articles/209691483-Bootstap-fails-with-Unable-to-gossip-with-any-seeds-yet-new-node-can-connect-to-seed-nodes



This is most likely the issue, but even if it does not solve your problem,
check the following points:


1) Check free disk space on the seed nodes. There should be sufficient free 
space for data migration to the new node.

2) Check logs of the seed nodes and see if there are any errors. I found some 
gossip file corruption on one of the seed nodes.

3) Finally, restart the server/Cassandra services on the seed nodes and see if
that helps.
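
For the connectivity check itself, something like this run from the new node
can confirm the storage port is reachable (7000 is the default storage port,
7001 when internode SSL is enabled; the seed address is a placeholder):

    nc -vz seed_node_ip 7001   # internode SSL port, relevant if SSL is enabled
    nc -vz seed_node_ip 7000   # plain storage port otherwise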


Do let me know if this solved your problem.



Thanks,

Dipan Shah



From: Anumod Mullachery <anumodmullache...@gmail.com>
Sent: Tuesday, October 24, 2017 2:12 AM
To: user@cassandra.apache.org
Subject: Cassandra 3.10 Bootstrap- Error

Hi,

We are using Cassandra 3.10 with NetworkTopologyStrategy and 2 DCs having only
1 node each.

We are trying to add new nodes (auto_bootstrap: true in the yaml), but we are
getting the below error.

In the seed nodes list, we have provided both the existing nodes from the two
DCs (2 nodes in total), and we also tried a different option, keeping only 1
node, but no luck.


2017-10-23 20:06:31,739 [MessagingService-Outgoing-/96.115.209.92-Gossip] WARN  
 SSLFactory.java:221 - Filtering out [TLS_RSA_WITH_AES_256_CBC_SHA] as it isn't 
supported by the socket
2017-10-23 20:06:31,739 [MessagingService-Outgoing-/96.115.209.92-Gossip] ERROR 
 OutboundTcpConnection.java:487 - SSL handshake error for outbound connection 
to 15454e08[SSL_NULL_WITH_NULL_NULL: 
Socket[addr=/96.115.209.92,port=10145,localport=60859]]
javax.net.ssl.SSLHandshakeException: No appropriate protocol (protocol is 
disabled or cipher suites are inappropriate)

2017-10-23 20:06:32,655 [main] ERROR  CassandraDaemon.java:752 - Exception 
encountered during startup
java.lang.RuntimeException: Unable to gossip with any seeds

2017-10-23 20:06:32,666 [StorageServiceShutdownHook] INFO   
HintsService.java:221 - Paused hints dispatch
2017-10-23 20:06:32,667 [StorageServiceShutdownHook] WARN   Gossiper.java:1514 
- No local state, state is in silent shutdown, or node hasn't joined, not 
announcing shutdown
2017-10-23 20:06:32,667 [StorageServiceShutdownHook] INFO   
MessagingService.java:964 - Waiting for messaging service to quiesce
2017-10-23 20:06:32,667 [ACCEPT-/96.115.208.150] INFO
MessagingService.java:1314 - MessagingService has terminated the accept() thread
2017-10-23 20:06:33,134 [StorageServiceShutdownHook] INFO   
HintsService.java:221 - Paused hints dispatch

Can someone put some light on this issue? It would be a great help.

thanks in advance,

- regards

Anumod.