1. The node added to the ring calculates the token ranges it owns, then
requests the data for those ranges from the nodes that originally owned it.
2. The SSTables to be streamed, and the ranges they cover, are estimated.
3. Streaming begins. Secondary indexes are built after the SSTables have
been streamed.
If I understand correctly this is controlled by setting `auto_bootstrap`.
If it is set to true (the default), once the node joins the cluster it will
have some portion of the data assigned to it, and its data will be streamed
to it from the other nodes. Once the data has finished streaming only
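A hedged sketch of how to observe this from the command line (these are standard nodetool subcommands; exact output varies by Cassandra version):

```shell
# Watch the joining node's active streaming sessions
nodetool netstats

# The joining node shows as "UJ" (Up/Joining) in the cluster view
# until bootstrap completes, at which point it flips to "UN"
nodetool status
```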
Hi experts,
When a new node is added, how can the coordinator find data that has not
yet been streamed?
Or are new nodes not used until all data is streamed?
Thanks in advance
Thanks Jon!
I used that tool and ran a test comparing LCS and STCS, and it worked
great. However, I was referring to the JVM flags that you use, since there
are a lot of flags that I found set by default and I would like to exclude
the unused or wrong ones from the current configuration.
I have also
tlp-stress comes with workloads pre-baked, so there's not much
configuration to do. The main flags you'll want are going to be:
-d: duration; I highly recommend running your test for a few days
--compaction
--compression
-p: number of partitions
-r: % of reads, 0-1
For example, you might run:
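A hedged example of what such an invocation might look like (the workload name and values here are illustrative assumptions, not from the original message):

```shell
# Illustrative only: 3-day run, 10M partitions, 90% reads, testing LCS
tlp-stress run KeyValue \
    -d 3d \
    -p 10M \
    -r 0.9 \
    --compaction "{'class': 'LeveledCompactionStrategy'}"
```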
The TCP settings are basically "how much RAM to use to buffer data for TCP
sessions, per session", which translates roughly to the maximum TCP window
size. You can actually calculate approximately what you need by just
multiplying bandwidth and latency (10,000,000,000 bps * 0.0001 s * 1 GB/8 Gb =
125 KB).
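That bandwidth-delay product arithmetic can be sketched in a couple of lines of shell (integer math; the 0.0001 s latency is expressed as a divide by 10,000):

```shell
# Bandwidth-delay product for a 10 Gb/s link with 0.1 ms latency
bandwidth_bps=10000000000
bdp_bytes=$(( bandwidth_bps / 10000 / 8 ))   # * 0.0001 s, then bits -> bytes
echo "${bdp_bytes} bytes"                    # prints "125000 bytes" (~125 KB)
```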
Sergio, if you do some online searching about ‘bufferbloat’ in networking,
you’ll find the background to help explain what motivates networking changes.
Actual investigation of network performance can get a bit gnarly. The TL;DR
summary is that big buffers function like big queues, and thus
Thanks Elliott!
How do you know if there is too much RAM used for those settings?
Which metrics do you keep track of?
What would you recommend instead?
Best,
Sergio
On Mon, Oct 21, 2019, 1:41 PM Elliott Sims wrote:
Based on my experiences, if you have a new enough kernel I'd strongly
suggest switching the TCP congestion control algorithm to BBR. I've found the rest
tend to be extremely sensitive to even small amounts of packet loss among
cluster members where BBR holds up well.
High ulimits for basically everything
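On a new enough kernel, switching to BBR amounts to a pair of sysctl settings; a hedged sketch (assumes Linux >= 4.9 with the BBR module available, run as root):

```shell
# fq is the qdisc commonly paired with BBR
sysctl -w net.core.default_qdisc=fq
sysctl -w net.ipv4.tcp_congestion_control=bbr

# verify the active algorithm
sysctl net.ipv4.tcp_congestion_control
```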
Think of the GB given to the OS as something intended to support file caching.
As such, the amount is whatever suits your usage. If your use is almost
exclusively reads, file cache memory doesn't matter that much when your
storage is the NVMe SSDs that the i3s come with.
Hello!
This is the kernel that I am using
Linux 4.16.13-1.el7.elrepo.x86_64 #1 SMP Wed May 30 14:31:51 EDT 2018
x86_64 x86_64 x86_64 GNU/Linux
Best,
Sergio
On Mon, Oct 21, 2019 at 07:30 Reid Pinchback <
rpinchb...@tripadvisor.com> wrote:
> I don't know which distro and
Thanks, guys!
I just copied and pasted what I found on our test machines, but I can confirm
that we have the same settings except for 8GB in production.
I didn't select these settings, and I need to verify why they are
there.
If any of you want to share your flags for a read-heavy workload
Since the instance size is < 32gb, hopefully swap isn’t being used, so it
should be moot.
Sergio, also be aware that -XX:+CMSClassUnloadingEnabled probably doesn’t do
anything for you. I believe that only applies to CMS, not G1GC. I also
wouldn’t take it as gospel truth that -XX:+UseNUMA
One thing to note, if you're going to use a big heap, cap it at 31GB, not
32. Once you go to 32GB, you don't get to use compressed pointers [1], so
you get less addressable space than at 31GB.
[1]
https://blog.codecentric.de/en/2014/02/35gb-heap-less-32gb-java-jvm-memory-oddities/
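One way to verify this on your own JVM (hedged: assumes a HotSpot JDK on the PATH; `-XX:+PrintFlagsFinal` only prints the resolved flag values, it doesn't actually allocate the heap):

```shell
# At 31g, compressed ordinary object pointers stay enabled...
java -Xmx31g -XX:+PrintFlagsFinal -version | grep UseCompressedOops

# ...while at 32g the JVM silently disables them
java -Xmx32g -XX:+PrintFlagsFinal -version | grep UseCompressedOops
```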
I don’t disagree with Jon, who has all kinds of performance tuning experience.
But for ease of operation, we only use G1GC (on Java 8), because the tuning of
ParNew+CMS requires a high degree of knowledge and very repeatable testing
harnesses. It isn’t worth our time. As a previous writer
I still use ParNew + CMS over G1GC with Java 8. I haven't done a
comparison with JDK 11 yet, so I'm not sure if it's any better. I've heard
it is, but I like to verify first. The pause times with ParNew + CMS are
generally lower than G1 when tuned right, but as Chris said it can be
tricky. If
I don't know which distro and version you are using, but watch out for
surprises in what vm.swappiness=0 means. In older kernels it means "only use
swap when desperate". I believe that newer kernels changed to have 1 mean
that, and 0 means to always use the oomkiller. Neither situation is
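If you want the "only when desperate" behavior on a newer kernel, setting the value to 1 explicitly sidesteps the ambiguity; a hedged sketch (requires root; the drop-in file path is a common convention, not from the original message):

```shell
# Apply immediately
sysctl -w vm.swappiness=1

# Persist across reboots
echo 'vm.swappiness = 1' > /etc/sysctl.d/99-swappiness.conf
```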
An i3.xlarge has 30.5 GB of RAM, but you're using less than 4 GB for C*. So
minus room for other uses of jvm memory and for kernel activity, that’s about
25 gb for file cache. You’ll have to see if you either want a bigger heap to
allow for less frequent gc cycles, or you could save money on