Hello
JIRA CASSANDRA-12849 already has a patch available. Could someone take a look
at this JIRA?
https://issues.apache.org/jira/browse/CASSANDRA-12849
Regards,
Jean Carlo
"The best way to predict the future is to invent it" Alan Kay
Thanks Kurt, I appreciate that feedback.
I’ll investigate the metrics more fully and come back with my findings.
As for the logs, I did look through the logs on the nodes and found nothing,
I’m afraid.
On Wed, Jun 28, 2017 at 11:33 PM, kurt greaves wrote:
> I'd say that no, a range query probably i
Hello Folks,
I’m on a Cassandra 2.2.8 cluster with 14 nodes, each with around 2TB of data
volume. I’m looking for criteria or data points that can help me decide when
(or if) I should add more nodes to the cluster, and by how many nodes.
I’d really appreciate it if you could share your insight.
Ideally you should maintain 50% free disk space.
SLA and node load are also very important to the decision.
> On Jun 29, 2017, at 6:45 AM, ZAIDI, ASAD A wrote:
>
> Hello Folks,
>
> I’m on a Cassandra 2.2.8 cluster with 14 nodes, each with around 2TB of data
> volume. I’m looking for criteria
Hi,
I use a jbod setup (2 * 1TB) and the distribution is a little bit
unequal on my three nodes:
270MB and 540MB
150 and 580
290 and 500
SSTable size varies between 2GB and 130GB.
Is it possible to move SSTables from one disk to another to balance the
disk usage?
Otherwise is a raid-0 setup the
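To put a number on how uneven a JBOD node is, a quick sketch like the one below can help. This is just an illustration (not a Cassandra tool), using the per-disk figures reported above in whatever units they were given:

```python
# Illustrative sketch: quantify how unevenly data is spread across the disks
# of a JBOD node, as the ratio of the fullest disk to the emptiest.

def jbod_imbalance(disk_usage: list[float]) -> float:
    """Return max/min disk usage; 1.0 means perfectly even."""
    return max(disk_usage) / min(disk_usage)

# The most skewed node from the message above (150 and 580):
print(round(jbod_imbalance([150, 580]), 2))  # 3.87
```

A ratio close to 1.0 means the disks are balanced; anything well above that suggests a few very large SSTables have landed on one disk.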
Hi Asad,
First, you need to understand the factors affecting cluster capacity. Some of
the important factors to consider when doing capacity planning for Cassandra
are:
1. Compaction strategy: It impacts disk space requirements and IO/CPU/memory
overhead for compactions.
2. Replication Fac
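A back-of-the-envelope sketch of how those factors combine is below. All the numbers and the headroom multiplier are hypothetical assumptions for illustration, not an official sizing formula:

```python
# Rough capacity-planning sketch: estimate the node count needed for a
# target dataset, given the replication factor and a compaction headroom
# multiplier (e.g. ~2x for STCS worst case). Values are illustrative only.
import math

def estimate_node_count(raw_data_tb: float,
                        replication_factor: int,
                        headroom_multiplier: float,
                        usable_tb_per_node: float) -> int:
    """Return the minimum whole number of nodes for the given load."""
    total_tb = raw_data_tb * replication_factor * headroom_multiplier
    return math.ceil(total_tb / usable_tb_per_node)

# Example: 10 TB raw data, RF=3, 2x compaction headroom, 2 TB usable per node.
print(estimate_node_count(10, 3, 2.0, 2.0))  # 30
```

In practice you would also weigh CPU/IO load and latency SLAs, not just disk, but this gives a floor for node count.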
Hello Jeff,
Yes, 2.1.16 is an old version, and we are planning to upgrade in a few months.
Only the gossiper info is logged, stating that it marked several nodes down,
and nothing else.
On Wed, Jun 28, 2017 at 8:15 PM, Jeff Jirsa wrote:
>
>
> On 2017-06-28 18:51 (-0700), Jai Bheemsen Rao Dhanwada <
>
Thanks. I tried with the trace option and there is not much info. Here are the
few log lines just before it failed.
[2017-06-29 19:01:54,969] /xx.xx.xx.93: Sending REPAIR_MESSAGE message to
/xx.xx.xx.91
[2017-06-29 19:01:54,969] /xx.xx.xx.92: Appending to commitlog
[2017-06-29 19:01:54,969] /xx.xx.xx
2.1.16 is old, but it's not as old as 2.1.6, which is what you originally put,
and would be much more concerning.
It is true, however, that 'removenode' involves streaming data, and streaming
data can be GC intensive (especially with compression enabled), which means if
your cluster is on the
Thanks for all the responses. It's much clearer now.
2017-06-26 0:59 GMT-03:00 Paulo Motta :
> > Not sure since what version, but in 3.10 at least (I think its since 3.x
> started) full repair does do anti-compactions and marks sstables as
> repaired.
>
> Thanks for the correction, anti-compactio
Balaji,
Are you repairing a specific keyspace/table? If the failure is tied to a table,
try the 'verify' and 'scrub' options on .91...see if you get any errors.
On Thursday, June 29, 2017, 12:12:14 PM PDT, Balaji Venkatesan
wrote:
Thanks. I tried with trace option and there is not much info. Her
50% disk free is really only required with STCS (in size-tiered compaction, if
you have 4 files of a similar size, they'll be joined together - there are
theoretically times when all of your data is in 4 files of the same size, and
to join them together you'll temporarily double your disk space
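As a quick sanity check of that worst case, a hedged sketch (the doubling threshold is an assumption drawn from the STCS behaviour described above) could look like:

```python
# Hedged sketch: check whether a node has enough free disk to survive an
# STCS worst case, where a compaction can temporarily need roughly as much
# extra space as the data being compacted (i.e. usage temporarily doubles).

def has_stcs_headroom(used_bytes: int, capacity_bytes: int) -> bool:
    """True if current data could temporarily double without filling the disk."""
    return used_bytes * 2 <= capacity_bytes

print(has_stcs_headroom(900, 2000))   # True: 45% used, doubling fits
print(has_stcs_headroom(1200, 2000))  # False: 60% used, doubling overflows
```

Other compaction strategies (e.g. LCS) need less free-space headroom, which is why the 50% rule is specific to STCS.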
Hello Jeff,
Sorry, the version I am using is 2.1.16; my first email had a typo.
When I say schema out of sync:
1. nodetool describecluster shows the schema versions are the same for all nodes.
2. nodetool removenode shows the node-down messages in the logs.
3. nodetool describecluster during this 1-2 mins shows sev
The verify and scrub ran without any errors on the keyspace. I ran the repair
again with trace mode and still hit the same issue.
[2017-06-29 21:37:45,578] Parsing UPDATE
system_distributed.parent_repair_history SET finished_at =
toTimestamp(now()), successful_ranges = {'} WHERE
parent_id=f1f10af0-5d12-11
On 2017-06-29 13:45 (-0700), Jai Bheemsen Rao Dhanwada
wrote:
> Hello Jeff,
>
> Sorry, the version I am using is 2.1.16; my first email had a typo.
> When I say schema out of sync:
>
> 1. nodetool describecluster shows the schema versions are the same for all nodes.
Ok got it, this is what I was most concerne
Thanks Jeff,
Can you please suggest what value to tweak from the Cassandra side?
On Thu, Jun 29, 2017 at 2:53 PM, Jeff Jirsa wrote:
>
>
> On 2017-06-29 13:45 (-0700), Jai Bheemsen Rao Dhanwada <
> jaibheem...@gmail.com> wrote:
> > Hello Jeff,
> >
> > Sorry the Version I am using 2.1.16, my firs
Hi –
Is it normal for Cassandra to be shut down forcefully on timeout exceptions when
using UDFs? We are admittedly running some load tests on our dev environments,
which may be somewhat constrained, but we didn’t expect to see forceful shutdowns
such as these when we ran our tests. We’re running Cass
Run the following query and see if it gives you more information:
select * from system_distributed.repair_history;
Also, is there any additional logging on the nodes where the error is coming
from? It seems to be xx.xx.xx.94 for your last run.
> On 30/06/2017, at 9:43 AM, Balaji Venkatesan
> wro
By default user_function_timeout_policy is set to die, i.e. warn and kill the
JVM. Please find below a source code snippet that outlines the possible settings.
/**
* Defines what to do when a UDF ran longer than
user_defined_function_fail_timeout.
* Possible options are:
* - 'die' -
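For reference, these timeouts are set in cassandra.yaml. The fragment below is an illustrative example (values shown are common defaults; verify the exact names and defaults for your Cassandra version):

```yaml
# cassandra.yaml (illustrative values; check your version's defaults)
user_defined_function_warn_timeout: 500    # ms before a slow UDF logs a warning
user_defined_function_fail_timeout: 1500   # ms before the fail policy triggers
user_function_timeout_policy: die          # what to do on fail timeout
```

Setting the policy to something other than die avoids the forceful JVM shutdown, at the cost of letting a runaway UDF keep consuming resources.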
It did not help much. But another error I saw when repairing the keyspace was:
"Sync failed between /xx.xx.xx.93 and /xx.xx.xx.94" (this was run from the .91
node).
On Thu, Jun 29, 2017 at 4:44 PM, Akhil Mehra wrote:
> Run the following query and see if it gives you more information:
>