What's the "one-year gossip bug" in this context?
On Thu, Mar 22, 2018 at 3:26 PM, Carl Mueller
wrote:
> Thanks. The rolling restart triggers the gossip bug so that's a no-go.
> We're going to migrate off the cluster. Thanks!
>
>
>
> On Thu, Mar 22, 2018 at 5:04
Subrange repair of only the neighbors is sufficient
Break the range covering the dead node into ~100 splits and repair those splits
individually in sequence. You don’t have to repair the whole range all at once
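A minimal sketch of that loop, assuming Murmur3 tokens; START/END are hypothetical stand-ins for the endpoints of the range that covered the dead node:
```
#!/bin/bash
# repair one token range as ~100 sequential subrange repairs
START=-9223372036854775808   # hypothetical range start
END=-9123372036854775808     # hypothetical range end
SPLITS=100
STEP=$(( (END - START) / SPLITS ))
for i in $(seq 0 $((SPLITS - 1))); do
  st=$(( START + i * STEP ))
  et=$(( st + STEP ))
  nodetool repair -st "$st" -et "$et" my_keyspace
done
```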
--
Jeff Jirsa
> On Mar 22, 2018, at 8:08 PM, Peng Xiao <2535...@qq.com>
Why .14? I would consider 3.0.16 to be production worthy.
--
Jeff Jirsa
> On Mar 23, 2018, at 2:01 PM, Nitan Kainth <nitankai...@gmail.com> wrote:
>
> Hi All,
>
> Our repairs are consuming CPU and some research shows that moving to 3.0.14
> will help us fix t
I suspect you're approaching this problem from the wrong side.
The decision of MySQL vs Cassandra isn't usually about performance, it's
about the other features that may impact/enable that performance.
- Will you have a data set that won't fit on any single MySQL Server?
- Will you want to write
> On Mar 5, 2018, at 6:40 AM, Oleksandr Shulgin
> wrote:
>
> Hi,
>
> We were deploying a second DC today with 3 seed nodes (30 nodes in total) and
> we have noticed that all seed nodes reported the following:
>
> INFO 10:20:50 Create new Keyspace:
> On Mar 5, 2018, at 6:52 AM, D. Salvatore wrote:
>
> Hello everyone,
> I am benchmarking a Cassandra installation on Azure composed of 4 nodes
> (Standard_D2S_V3 - 2vCPU and 8GB ram) with a replication factor of 2.
Bit smaller than most people would want to run in
I’d personally be willing to run 3.0.16
3.11.2 or 3 whatever should also be similar, but I haven’t personally tested it
at any meaningful scale
--
Jeff Jirsa
> On Mar 2, 2018, at 2:37 PM, Kenneth Brotman <kenbrot...@yahoo.com.INVALID>
> wrote:
>
> Seems like a lot of
Instaclustr sponsored the 2017 NGCC (Next Gen Cassandra Conference), which
was developer/development focused (vs user focused).
For 2018, we're looking at options for both a developer conference and a
user conference. There's a lot of logistics involved, and I think it's
fairly obvious that most
Same technique works for production, too. Rack aware snitch will protect
against placing replicas on the same host, as long as the rack info is correct
--
Jeff Jirsa
> On Feb 27, 2018, at 8:43 PM, daemeon reiydelle <daeme...@gmail.com> wrote:
>
> Docker will provide less pe
No - they'll hardlink into the snapshot folder on each data directory. They
are true hardlinks, so even if you could move it, it'd still be on the same
filesystem.
Typical behavior is to issue a snapshot, and then copy the data out as
needed (using something like
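For example, a sketch of that copy-out flow, assuming the default data path; the tag and destination are hypothetical:
```
nodetool snapshot -t backup-2018 my_keyspace
# the snapshot is hardlinks under each table dir; copy it off-node,
# preserving the per-table layout so sstable names can't collide
rsync -aR /var/lib/cassandra/data/./my_keyspace/*/snapshots/backup-2018/ \
  backuphost:/backups/"$(hostname)"/
nodetool clearsnapshot -t backup-2018 my_keyspace
```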
Try again and this time don’t remove tables during bootstrap (the streaming
code doesn’t handle removing tables very well).
--
Jeff Jirsa
> On Jun 27, 2018, at 7:30 PM, dayu wrote:
>
> Hi everyone
> I am joining a new node to a cluster. but it failed even if I use
> nod
That log message says you did:
CF 53f6d520-2dc6-11e8-948d-ab7caa3c8c36 was dropped during streaming
If you’re absolutely sure you didn’t, you should look for schema mismatches in
your cluster
--
Jeff Jirsa
> On Jun 27, 2018, at 7:49 PM, dayu wrote:
>
> CF 53f6d520-2dc6-
on any real tables?
--
Jeff Jirsa
> On Jun 27, 2018, at 7:58 PM, dayu wrote:
>
> That sound reasonable, I have seen schema mismatch error before.
> So any advise to deal with schema mismatches?
> Dayu
>
> At 2018-06-28 09:50:37, "Jeff Jirsa" wrote:
> >That
The single node in 1e will be a replica for every range (and you won’t be able
to tolerate an outage in 1c), potentially putting it under significant load
--
Jeff Jirsa
> On Jun 28, 2018, at 7:02 AM, Randy Lynn wrote:
>
> I have a 6-node cluster I'm migrating to the new
If this is 2.1 AND you do deletes AND you have a non-zero number of failed
writes (timeouts), it’s possibly short reads
3.0 fixes this ( https://issues.apache.org/jira/browse/CASSANDRA-12872 ), it
won’t be backported to 2.1 because it’s a significant change to how reads are
executed
--
Jeff
Should be fine, just get the java and kernel versions and kernel tuning params
as close as possible
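For example, a quick sketch of the bits worth diffing between the two OSs (the settings shown are common suspects, not an exhaustive list):
```
# run on one node of each OS and diff the output
java -version 2>&1
uname -r
sysctl -n vm.max_map_count vm.swappiness net.ipv4.tcp_keepalive_time
cat /sys/kernel/mm/transparent_hugepage/enabled
```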
--
Jeff Jirsa
> On Oct 14, 2018, at 5:09 PM, Eyal Bar wrote:
>
> Hi all,
>
> Did anyone install a Cassandra cluster with mixed Linux OSs where some of
> the nodes we
This is great!
--
Jeff Jirsa
> On Oct 16, 2018, at 5:47 PM, Hiroyuki Yamada wrote:
>
> Hi all,
>
> # Sorry, I accidentally emailed the following to dev@, so re-sending to here.
>
> We have been working on ACID-compliant transaction library on top of
> Ca
resurrected (due to lack of repair before gc grace seconds) is not a
problem for you
--
Jeff Jirsa
> On Oct 19, 2018, at 3:57 AM, "wxn...@zjqunshuo.com"
> wrote:
>
> > Is the repair not necessary to get data files remove from filesystem ?
> The answer is no
> On Oct 19, 2018, at 10:37 AM, Oleksandr Shulgin
> wrote:
>
>> On Fri, Oct 19, 2018 at 10:23 AM Jeff Jirsa wrote:
>> It depends on your yaml settings - in newer versions you can have cassandra
>> only purge repaired tombstones (and ttl’d data is a tombston
Nodetool will eventually return when it’s done
You can also watch nodetool compactionstats
--
Jeff Jirsa
> On Oct 22, 2018, at 10:53 AM, Ian Spence wrote:
>
> Environment: Cassandra 2.2.9, GNU/Linux CentOS 6 + 7. Two DCs, 3 RACs in DC1
> and 6 in DC2.
>
> We recently
Are you SURE there are no writes to that table coming from another DC?
--
Jeff Jirsa
> On Oct 15, 2018, at 5:34 PM, Naik, Ninad wrote:
>
> Thanks Jeff. We're not doing deletes, but I will take a look at this jira.
> From: Jeff Jirsa
> Sent: Sunday, October 14, 2018 12:55:1
3.5 is probably not a version you should be using in production in 2018 - it
was a feature release and has had no bug fixes for years. Going up to 3.11.3
will likely fix many serious bugs you’re not noticing, and maybe the bug below
you are noticing
--
Jeff Jirsa
> On Oct 24, 2018, a
I don't have time to reply to your stackoverflow post, but what you
proposed is a great idea for a server that size.
You can use taskset or numactl to bind each JVM to the appropriate
cores/zones.
Setup a data directory on each SSD for the data
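A sketch of the binding, assuming two NUMA zones and one config file per instance (paths hypothetical; each yaml needs its own ports and data/commitlog directories):
```
# pin one JVM per NUMA node so heap and cores stay local
numactl --cpunodebind=0 --membind=0 \
  cassandra -f -Dcassandra.config=file:///etc/cassandra/node0/cassandra.yaml &
numactl --cpunodebind=1 --membind=1 \
  cassandra -f -Dcassandra.config=file:///etc/cassandra/node1/cassandra.yaml &
```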
There are two caveats you need to think about:
1)
version) that can be far more impactful.
--
Jeff Jirsa
> On Oct 30, 2018, at 8:21 AM, Carl Mueller
> wrote:
>
> We are about to finally embark on some version upgrades for lots of clusters,
> 2.1.x and 2.2.x targetting eventually 3.11.x
>
> I have seen recipes that do th
This isn’t true if your clustering is time based because the read path can
selectively include/exclude sstables based on the clustering keys
--
Jeff Jirsa
> On Oct 25, 2018, at 12:26 PM, Dor Laor wrote:
>
> TWCS is good for time series but if your workload updates the same keys
r/src/github/twcs/src/main/java/com/jeffjirsa/cassandra/db/compaction/SizeTieredCompactionStrategy.java:[104,67]
> cannot find symbol
> symbol: class SizeComparator
> location: class org.apache.cassandra.io.sstable.format.SSTableReader
> [INFO] 4 errors
>
>
>
> On Fri, Nov 2, 20
If you mean encryption at rest: no, it’s not currently supported. It’ll
eventually be implemented in
https://issues.apache.org/jira/browse/CASSANDRA-9633 , but that ticket is
currently unassigned and there’s no ETA.
--
Jeff Jirsa
> On Nov 12, 2018, at 10:21 PM, Goutham reddy
>
Easiest approach is to build the 3.11 jar from my repo, upgrade, then ALTER
table to use the official TWCS (org.apache.cassandra) jar
Sorry for the headache. I hope I have a 3.11 branch for you.
--
Jeff Jirsa
> On Nov 2, 2018, at 11:28 AM, Brian Spindler wrote:
>
> Hi all, we're
There’s a chance it will fail to work - possible method signatures changed
between 3.0 and 3.11. Try it in a test cluster before prod
--
Jeff Jirsa
> On Nov 2, 2018, at 11:49 AM, Brian Spindler wrote:
>
> Never mind, I spoke too quickly. I can change the cass version in the pom.xml
Definitely don’t go to 3.10, go to 3.11.3 or newest 3.0 instead
--
Jeff Jirsa
On Sep 30, 2018, at 5:29 PM, Nate McCall wrote:
>> I have a cluster on v3.0.11. I am planning to upgrade this to 3.10.
>> Is rolling back the binaries a viable solution?
>
> What's the goal wi
sstable version alone isn’t sufficient - there can be other surprises that will
break the lower version (commitlog format change, new types or concepts like
UDTs that may appear in the schema, etc)
I think 3.11 to 3.0 still works but I’m not certain of it personally
--
Jeff Jirsa
> On
In both cases:
Do your partitions span time windows? Is there a single partition that exists
in all 800 of those sstables?
--
Jeff Jirsa
> On Sep 24, 2018, at 1:20 AM, Martin Mačura wrote:
>
> Hi,
> I can confirm the same issue in Cassandra 3.11.2.
>
> As an exam
> On Sep 24, 2018, at 3:47 AM, Oleksandr Shulgin
> wrote:
>
>> On Mon, Sep 24, 2018 at 10:50 AM Jeff Jirsa wrote:
>> Do your partitions span time windows?
>
> Yes.
>
The data structure used to know if data needs to be streamed (the merkle tree)
is only g
You can select the token for the key (select token()), and then repair the
surrounding range
Don’t try to repair a single token, try to repair some small range like 2^10
above/below the token you care about.
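Roughly like this (a sketch; keyspace, table, and key are hypothetical):
```
# find the token for the key you care about
cqlsh -e "SELECT token(id) FROM my_keyspace.my_table WHERE id = 'problem-key';"
# suppose it prints 123456789: repair a small window around it (~2^10 either side)
nodetool repair -st $((123456789 - 1024)) -et $((123456789 + 1024)) my_keyspace my_table
```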
--
Jeff Jirsa
> On Jan 1, 2019, at 12:31 PM, Rahul Reddy wrote:
>
&
Read repair due to digest mismatch and speculative retry can both cause
some behaviors that are hard to reason about (usually seen if a host stops
accepting writes due to bad disk, which you havent described, but generally
speaking, there are times when reads will block on writing to extra
(hint throttle is quite low in 3.11, you may want
to increase it).
--
Jeff Jirsa
> On Jan 1, 2019, at 11:51 AM, Vlad wrote:
>
> Hi, thanks for answer.
>
> what I don't understand is:
>
> - why there are attempts of read repair if repair chances are 0.0 ?
> - w
The reason big rows are painful in Cassandra is that by default, we index
it every 64kb. With 300k objects, it may or may not have a lot of those
little index blocks/objects. How big is each row?
If you try to read it and it's very wide, you may see heap pressure / GC.
If so, you could try
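One knob behind that 64kb figure (an assumption, since the suggestion is cut off) is the column index interval in cassandra.yaml; raising it trades coarser seeks within a partition for fewer index blocks/objects:
```
# cassandra.yaml: index an entry every N kb within a partition (default 64);
# larger values mean fewer index blocks/objects for very wide rows
grep column_index_size_in_kb /etc/cassandra/cassandra.yaml
# -> column_index_size_in_kb: 64
```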
> On Jan 23, 2019, at 8:00 AM, Nitan Kainth wrote:
>
> Hi,
>
> Why does nodetool compactionstats not show time remaining when
> compactionthroughput is set to 0?
Because we don’t have a good estimate if we’re not throttling (could be added,
just not tracked now)
>
> If the node is
The read repair you have disabled is the probabilistic background repair -
foreground repair due to mismatch still happens
Streaming should respect windows. Streaming doesn’t write to the memtable, only
the write path puts data into the memtable.
--
Jeff Jirsa
> On Dec 18, 2018, at 1:49
> [truncated histogram output: bucket boundaries with zero counts]
can force sstables to be dropped at expiration regardless of
overlaps, but you have to set some properties because it’s technically unsafe
(if you write to the table with anything other than ttls).
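A sketch of those properties as of 3.11 (this is the CASSANDRA-13418 feature; only safe if the table is written purely with TTLs):
```
# 1) allow the feature at the JVM level (jvm.options or command line):
#      -Dcassandra.allow_unsafe_aggressive_sstable_expiration=true
# 2) opt the table in:
cqlsh -e "ALTER TABLE my_ks.my_table WITH compaction = {
  'class': 'TimeWindowCompactionStrategy',
  'compaction_window_unit': 'DAYS',
  'compaction_window_size': '1',
  'unsafe_aggressive_sstable_expiration': 'true'};"
```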
--
Jeff Jirsa
> On Dec 24, 2018, at 12:05 AM, Eunsu Kim wrote:
>
>
What compaction strategy are you using ?
What consistency level do you use on writes? Reads?
--
Jeff Jirsa
> On Dec 23, 2018, at 11:53 PM, Eunsu Kim wrote:
>
> Merry Christmas
>
> The Cassandra cluster I operate consists of two datacenters.
>
> Most data has a TTL
Remove node will stream data from all windows to remote nodes , so some
compaction is expected
Would need to see the sstablemetadata to understand what’s happening there.
--
Jeff Jirsa
> On Dec 13, 2018, at 10:26 PM, Roy Burstein wrote:
>
> Hi all ,
> My colleague opened
>> [truncated histogram output: bucket boundaries with zero counts]
Are you sure you’re blocked on internode and not commitlog? Batch is typically
not what people expect (group commitlog in 4.0 is probably closer to what you
think batch does).
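For reference, a sketch of what the 4.0 group mode looks like in cassandra.yaml (the window value is illustrative):
```
# cassandra.yaml (4.0+): fsync in groups rather than per-mutation batches
#   commitlog_sync: group
#   commitlog_sync_group_window_in_ms: 10
```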
--
Jeff Jirsa
> On Nov 27, 2018, at 10:55 PM, Yuji Ito wrote:
>
> Hi,
>
> Thank you for th
> On Dec 2, 2018, at 12:40 PM, Shravan R wrote:
>
> Marc/Dimitry/Jon - greatly appreciate your feedback. I will look into the
> version part that you suggested. The reason to go direct to 3.x is to take a
> big leap and reduce overall effort to upgrade a large cluster (development
>
Schema won’t be transferred cross-majors
--
Jeff Jirsa
> On Dec 4, 2018, at 10:51 PM, Shravan R wrote:
>
> Thanks Jeff. I tried to bootstrap a 3.x node to a partially upgraded cluster
> (2.1.9 + 3.x) and I was not able to do so. The schema never settled.
>
> How does
on, which is unfortunate because it’s a fair amount of effort.
--
Jeff Jirsa
> On Dec 9, 2018, at 2:09 AM, Devaki, Srinivas wrote:
>
> Hi Guys,
>
> Since the start of our org, cassandra used to be a SPOF, due to recent
> priorities we changed our code base so that cassa
I suspect some of the intermediate queries (determining role, etc) happen at
quorum in 2.2+, but I don’t have time to go read the code and prove it.
In any case, RF > 10 per DC is probably excessive
Also want to crank up the validity times so it uses cached info longer
--
Jeff Ji
Could also be the app not detecting the host is down and it keeps trying to use
it as a coordinator
--
Jeff Jirsa
> On Nov 27, 2018, at 6:33 PM, Ben Slater wrote:
>
> In what way does the cluster become unstable (ie more specifically what are
> the symptoms)? My first t
This violates any consistency guarantees you have and isn’t the right approach
unless you know what you’re giving up (correctness, typically)
--
Jeff Jirsa
> On Nov 28, 2018, at 2:40 AM, Vitali Dyachuk wrote:
>
> You can use auto_bootstrap set to false to add a new node to
and the more aggressive expiration logic there)
--
Jeff Jirsa
> On Nov 28, 2018, at 11:24 AM, Adam Smith wrote:
>
> Hi All,
>
> I need to use C* somehow as fluent data storage - maybe this is different to
> the queue antipattern? Lots of data come in (10MB/sec/node), rem
> On Jan 7, 2019, at 6:37 AM, Jonathan Ballet wrote:
>
> Hi,
>
> I'm trying to understand how seed nodes are working, when and how do they
> play a part in a Cassandra cluster, and how they should be managed and
> propagated to other nodes.
>
> I have a cluster of 6 Cassandra nodes
> On Jan 7, 2019, at 8:23 AM, Jeff Jirsa wrote:
>
>
>
>
>> On Jan 7, 2019, at 6:37 AM, Jonathan Ballet wrote:
>>
>> Hi,
>>
>> I'm trying to understand how seed nodes are working, when and how do they
>> play a part in
I encourage you to try all of these in a lab/non-prod environment before
you do this in production. And take backups. This is risky and you should
think about what you're doing before you do it.
The most practical way to do this with no downtime is to spin up a new
cluster in Azure and either do
> On Dec 28, 2018, at 2:17 AM, Jinhua Luo wrote:
>
> Hi All,
>
> While the pending node get streaming of token ranges from other nodes,
> all coordinator would send new writes to it so that it would not miss
> any new data, correct?
>
> I have two (maybe silly) questions here:
> Given the
off based on the
estimated number of keys.
Are you sure that’s not what you’re seeing? If it is, dropping bloom filter FP
ratio or increasing compression chunk size may help (and probably saves you
some disk, you’ll get better ratios but slightly slower by increasing that
chunk size)
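A hedged example of both knobs (these are the standard table options; the values are illustrative, not recommendations):
```
cqlsh -e "ALTER TABLE my_ks.my_table
  WITH bloom_filter_fp_chance = 0.01
  AND compression = {'class': 'LZ4Compressor', 'chunk_length_in_kb': '128'};"
```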
--
Jeff
trying to do
/ learn so we can answer the real question?
Few other notes inline.
--
Jeff Jirsa
> On Jan 8, 2019, at 10:51 PM, Jinhua Luo wrote:
>
> Thanks. Let me clarify my questions more.
>
> 1) For memtable, if the selected columns (assuming they are in simple
> typ
Given Consul's popularity, seems like someone could make an argument that
we should be shipping a consul-aware seed provider.
On Tue, Jan 8, 2019 at 7:39 AM Jonathan Ballet wrote:
> On Mon, 7 Jan 2019 at 16:51, Oleksandr Shulgin <
> oleksandr.shul...@zalando.de> wrote:
>
>> On Mon, Jan 7, 2019
On Mon, 7 Jan 2019 at 17:23, Jeff Jirsa wrote:
>
>> > On Jan 7, 2019, at 6:37 AM, Jonathan Ballet wrote:
>> >
>> [...]
>>
>> > In essence, in my example that would be:
>> >
>> > - decide that #2 and #3 will be the new seed nodes
&g
First:
Compaction controls how sstables are combined but not how they’re read. The
read path (with one tiny exception) doesn’t know or care which compaction
strategy you’re using.
A few more notes inline.
> On Jan 8, 2019, at 3:04 AM, Jinhua Luo wrote:
>
> Hi All,
>
> The compaction
https://issues.apache.org/jira/browse/CASSANDRA-14672 is almost certainly
due to pre-existing corruption . That the user is seeing 14672 is due to
extra guards added in 3.11.3, but 14672 isn't likely going to hit you
unless you're subject to
https://issues.apache.org/jira/browse/CASSANDRA-14515 ,
Repair or read-repair
On Tue, Sep 11, 2018 at 12:58 AM Oleksandr Shulgin <
oleksandr.shul...@zalando.de> wrote:
> On Tue, Sep 11, 2018 at 9:47 AM Oleksandr Shulgin <
> oleksandr.shul...@zalando.de> wrote:
>
>> On Tue, Sep 11, 2018 at 9:31 AM Steinmaurer, Thomas <
>>
CASSANDRA-13004 (fixed in recent 3.0 and 3.11 builds)
On Thu, Sep 13, 2018 at 1:12 PM Max C. wrote:
> I ran “alter table” today to add the “task_output_capture_state” column
> (see below), and we found a few rows inserted around the time of the ALTER
> TABLE did not contain the same values when
> On Sep 17, 2018, at 2:34 AM, Oleksandr Shulgin
> wrote:
>
>> On Tue, Sep 11, 2018 at 8:10 PM Oleksandr Shulgin
>> wrote:
>>> On Tue, 11 Sep 2018, 19:26 Jeff Jirsa, wrote:
>>> Repair or read-repair
>>
>>
>> Could you be mo
> On Sep 17, 2018, at 7:29 AM, Oleksandr Shulgin
> wrote:
>
> On Mon, Sep 17, 2018 at 4:04 PM Jeff Jirsa wrote:
>>> Again, given that the tables are not updated anymore from the application
>>> and we have repaired them successfully multiple times alread
pretty meaningful
allocations for these. Also, if you have an unusually low compression chunk
size or a very low bloom filter FP ratio, those will be larger.
--
Jeff Jirsa
> On Jan 26, 2019, at 12:11 PM, Ayub M wrote:
>
> Cassandra node went down due to OOM, and checking the /var/lo
The issue in 14861 doesn’t manifest itself in the data file (so you won’t see
it in the sstable json), it’s in the min/max clustering of the metadata used in
the read path.
--
Jeff Jirsa
> On Jan 28, 2019, at 7:08 AM, Ahmed Eljami wrote:
>
> Hi Alain,
>
> Just to
Probably lowest effort is to run the select with tracing enabled - may give
some easy hints
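e.g., a sketch (table and key hypothetical):
```
# trace a single query from cqlsh; the trace shows per-step timings
cqlsh -e "TRACING ON; SELECT * FROM my_ks.my_table WHERE pk = 'x'; TRACING OFF;"
```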
--
Jeff Jirsa
> On Jan 28, 2019, at 7:54 AM, Jonathan Haddad wrote:
>
> Your fastest route might be to run a profiler on Cassandra and get some flame
> graphs. I'm a fan of the as
een this yet. So we have this
> enabled, I guess it will just take time to finally chew through it all?
>
>
>
> *From:* Jeff Jirsa [mailto:jji...@gmail.com]
> *Sent:* Tuesday, March 26, 2019 9:41 PM
> *To:* user@cassandra.apache.org
> *Subject:* Re: TWCS Compactions &
This is CASSANDRA-14861
--
Jeff Jirsa
> On Apr 4, 2019, at 8:23 AM, Léo FERLIN SUTTON
> wrote:
>
> Hello !
>
> I have noticed something since I upgraded to cassandra 3.0.18.
>
> Before, all my SSTables used to be named this way:
> ```
> mc-130817-big-Comp
How long ago did you remove this host from the cluster?
--
Jeff Jirsa
> On Apr 4, 2019, at 8:09 AM, Nick Hatfield wrote:
>
> This will sound a little silly but, have you tried rolling the cluster?
>
> $> nodetool flush; nodetool drain; service cassandra stop
> $>
Yes it can race; if you don't want to race, you'd want to use SERIAL or
LOCAL_SERIAL.
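If you go that route, a sketch with hypothetical names; the IF clause makes the write a Paxos round, and a LOCAL_SERIAL read observes its result:
```
cqlsh -e "
  SERIAL CONSISTENCY LOCAL_SERIAL;
  INSERT INTO my_ks.table_a (id, val) VALUES (1, 'x') IF NOT EXISTS;
  CONSISTENCY LOCAL_SERIAL;
  SELECT val FROM my_ks.table_a WHERE id = 1;"
```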
On Thu, Mar 28, 2019 at 3:04 PM Richard Xin
wrote:
> Hi,
> Our Cassandra Consistency level is currently set to LOCAL_ONE, we have
> script doing followings
> 1) insert one record into table_A
> 2) select
Or upgrade to a version with
https://issues.apache.org/jira/browse/CASSANDRA-13418 and enable that feature
--
Jeff Jirsa
> On Mar 26, 2019, at 6:23 PM, Rahul Singh wrote:
>
> What's your timewindow? Roughly how much data is in each window?
>
> If you examine the sstab
is probably causing a fair bit of unnecessary compactions and
probably is very slow to expire data).
--
Jeff Jirsa
> On Feb 23, 2019, at 6:31 PM, Rahul Reddy wrote:
>
> Do you see anything wrong with this metric.
>
> metric to scan tombst
probably win some
with basic perf and GC tuning, but can’t really do that via email.
CASSANDRA-8150 has some pointers.
--
Jeff Jirsa
> On Feb 23, 2019, at 6:52 PM, Jeff Jirsa wrote:
>
> You’ll only ever have one tombstone per read, so your load is based on normal
> read rate no
if that’s your only concern,
I’d ignore it.
--
Jeff Jirsa
> On Feb 23, 2019, at 7:26 PM, Rahul Reddy wrote:
>
> ```jvm setting
>
> -XX:+UseThreadPriorities
> -XX:ThreadPriorityPolicy=42
> -XX:+HeapDumpOnOutOfMemoryError
> -Xss256k
> -XX:StringTableSize=103
> 75%    0.00    24.60    20.50    124    1
> 95%    0.00    35.43    29.52    124    1
> 98%    0.00    35.43    42.51    124    1
I’m not parsing this - did the lower gcgs help or not? Seeing the table
histograms is the next step if this is still a problem.
The table level TTL doesn’t matter if you set a TTL on each insert
--
Jeff Jirsa
> On Feb 23, 2019, at 4:37 PM, Rahul Reddy wrote:
>
> Thanks Jeff,
Would also be good to see your schema (anonymized if needed) and the select
queries you’re running
--
Jeff Jirsa
> On Feb 23, 2019, at 4:37 PM, Rahul Reddy wrote:
>
> Thanks Jeff,
>
> I'm having gcgs set to 10 mins and changed the table ttl also to 5 hours
> compared
from reading expired rows. But on the plus side, this type of tombstone
read is not expensive and not concerning at all.
--
Jeff Jirsa
> On Feb 24, 2019, at 5:36 AM, Rahul Reddy wrote:
>
> Thanks Jeff. I'm trying to figure out why the tombstones scans are happening
> if possib
that cache
fills up.
--
Jeff Jirsa
> On Mar 6, 2019, at 11:40 AM, Jonathan Haddad wrote:
>
> That’s not an error. To the left of the log message is the severity, level
> INFO.
>
> Generally, I don’t recommend running Cassandra on only 2GB ram or for small
> dataset
encourage compaction to grab sstables just because they’re full of tombstones
which will probably help you.
--
Jeff Jirsa
> On Feb 22, 2019, at 8:37 AM, Kenneth Brotman
> wrote:
>
> Can we see the histogram? Why wouldn’t you at times have that many
> tombstones? Makes sense
Ec2 multi should work fine in one region, but consider using
GossipingPropertyFileSnitch if there’s even a chance you’ll want something
other than AWS regions as dc names - multicloud, hybrid, analytics DCs, etc
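A sketch of the switch (dc/rack values hypothetical; on an existing cluster they must match what the old snitch reported, or replica placement changes):
```
sed -i 's/^endpoint_snitch:.*/endpoint_snitch: GossipingPropertyFileSnitch/' \
  /etc/cassandra/cassandra.yaml
printf 'dc=us-east\nrack=1a\n' > /etc/cassandra/cassandra-rackdc.properties
```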
--
Jeff Jirsa
> On Mar 5, 2019, at 5:12 AM, Jean Carlo wrote:
>
&
> On Mar 5, 2019, at 5:32 AM, Jean Carlo wrote:
>
> Hello Jeff, thank you for the answer. But what will be the advantage of
> GossipingPropertyFileSnitch over Ec2MultiRegionSnitch exactly ? The
> possibility to name the DCs ?
Yes
And if you ever move out of aws you won’t have any
> On Mar 5, 2019, at 7:08 AM, Pranay akula wrote:
>
> When a co-ordinator node request a replica node for data will it be requested
> over port 9042 or 7000
7000
>
> Recently I ran a query with allow filtering in lower environments as soon as
> I ran saw a spike in NTP active threads. I
That’s client to server - internode is different
Don’t think it’s possible without code modifications - please open a JIRA
--
Jeff Jirsa
> On Feb 27, 2019, at 10:21 PM, Hannu Kröger wrote:
>
> Is server encryption option ”require_client_auth: false” what you are after?
>
>
Not in any released version, but something similar to that is coming in 4.0
--
Jeff Jirsa
> On Feb 25, 2019, at 7:22 AM, Abdul Patel wrote:
>
> Do we have any system table which stores all config details which we have in
> yaml or cass
SSTableReader and CQLSSTableWriter if you’re comfortable with Java
--
Jeff Jirsa
> On Mar 14, 2019, at 1:28 PM, Nick Hatfield wrote:
>
> Bummer but, reasonable. Any cool tricks I could use to make that process
> easier? I have many TB of data on a live cluster and was hoping t
vnodes
>
> Le mer. 13 mars 2019 à 17:31, Jeff Jirsa a écrit :
>
>> Do you use vnodes? How many vnodes per machine?
>>
>> --
>> Jeff Jirsa
>>
>>
>> On Mar 13, 2019, at 3:58 PM, Ahmed Eljami wrote:
>>
>> Hi,
>>
>> We are plan
It does not impact existing data
The data gets an expiration time stamp when you write it. Changing the default
only impacts newly written data
If you need to change the expiration time on existing data, you must update it
--
Jeff Jirsa
> On Mar 14, 2019, at 1:16 PM, Nick Hatfield wr
Also https://github.com/aragozin/jvm-tools
Especially
https://github.com/aragozin/jvm-tools/blob/master/sjk-core/docs/TTOP.md
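e.g. (a sketch; the jar name/path may differ by version):
```
# per-thread CPU for the Cassandra JVM, top 20 threads by CPU
java -jar sjk.jar ttop -p "$(pgrep -f CassandraDaemon)" -o CPU -n 20
```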
On Sun, Mar 17, 2019 at 9:04 AM Dieudonné Madishon NGAYA
wrote:
> Hi,
> Below some different tools to monitor cassandra:
> 1) Nodetool
> Nodetool has many options
>
Two things that wouldn't be a bug:
You could have run removenode
You could have run assassinate
Also could be some new bug, but that's much less likely.
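For reference, those look like the following (both permanently remove a node; shown only to help identify what may have happened, not as a suggestion):
```
nodetool removenode <host-id-from-nodetool-status>
nodetool assassinate <ip-of-dead-node>
```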
On Thu, Mar 14, 2019 at 2:50 PM Fd Habash wrote:
> I have a node which I know for certain was a cluster member last week. It
> showed in
Do you use vnodes? How many vnodes per machine?
--
Jeff Jirsa
> On Mar 13, 2019, at 3:58 PM, Ahmed Eljami wrote:
>
> Hi,
>
> We are planning to add a third datacenter to our cluster (already has 2
> datacenter, every datcenter has 50 nodes, so 100 nodes in to
On Tue, Mar 12, 2019 at 5:28 PM Justin Sanciangco
wrote:
> I would recommend that you do not go into a 3 rack single dc
> implementation with only 6 nodes. If a node goes down in this situation,
> the node that is paired with the node that is downed will have to service
> all of the load instead
-dev, +user
Datadog worked pretty well last time I used it.
--
Jeff Jirsa
> On Mar 14, 2019, at 11:38 PM, Sundaramoorthy, Natarajan
> wrote:
>
> Can someone share knowledge on good monitoring tool for cassandra? Thanks
>
> This e-mail, including attachments, may in
Are your IPs changing as you restart the cluster? Kubernetes or Mesos or
something where your data gets scheduled on different machines? If so, if it
gets an IP that was previously in the cluster, it’ll stomp on the old entry in
the gossiper maps
--
Jeff Jirsa
> On Mar 14, 2019, at 3
this is the likely scenario …
>>
>>
>>
>> If you have a cluster of three nodes 1,2,3 …
>>
>>- If 3 shows as DN
>>- Restart C* on 1 & 2
>>- Nodetool status should NOT show node 3 IP at all.
>>
>>
>>
>> Restarting t
On Wed, Feb 6, 2019 at 5:47 AM Antoine d'Otreppe
wrote:
> Hi all,
>
> New to Cassandra, I'm trying to wrap my head around how dead nodes should
> be revived.
>
>
> Specifically, we deployed our cluster in Kubernetes, which means that
> nodes that go down will lose their IP address. When
This will probably work if you’re comfortable writing java, but not if you’re
running DSE
If you’re using OSS Cassandra, recall that we publish jars with all of the
internal classes. You can hook into the index and sstablereader classes and
iterate the keys directly (offline).
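If Java isn’t an option, a bundled-tool sketch for enumerating partition keys offline (3.0+; paths hypothetical):
```
for f in /var/lib/cassandra/data/my_ks/my_table-*/mc-*-big-Data.db; do
  sstabledump -e "$f"   # -e: print only the partition keys
done
```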
--
Jeff Jirsa