You can set auto_bootstrap to false to add a new node to the ring: the node
will calculate its token ranges but will not start streaming the data.
This way you can add several nodes to the ring quickly. After that
you can run nodetool rebuild -dc <> to start streaming data.
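A minimal sketch of that two-step flow (the DC name is illustrative, and the
exact flags should be verified against your Cassandra version's nodetool help):

```shell
# On each new node, before first start, disable bootstrap streaming
# (cassandra.yaml):
#   auto_bootstrap: false
# The node then joins the ring and claims token ranges but streams no data.

# Once all new nodes have joined, stream data in a controlled way.
# "DC1" is the source datacenter to stream from (illustrative name):
nodetool rebuild -- DC1

# Watch streaming progress:
nodetool netstats
```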
An RF of 60 is probably excessive
>
> Also want to crank up the validity times so it uses cached info longer
>
>
> --
> Jeff Jirsa
>
>
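The "validity times" mentioned above are the auth cache settings in
cassandra.yaml; a sketch with illustrative values (option names as in the 3.x
series — credentials_validity_in_ms only exists in newer 3.x versions, so check
your cassandra.yaml for which of these your release supports):

```yaml
# cassandra.yaml - how long auth info may be served from cache (ms).
# Defaults are 2000 ms; raising them reduces reads of system_auth.
roles_validity_in_ms: 120000
permissions_validity_in_ms: 120000
credentials_validity_in_ms: 120000   # newer 3.x only
# Optional async refresh interval; defaults to the validity value:
# roles_update_interval_in_ms: 60000
```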
> On Nov 23, 2018, at 10:18 AM, Vitali Dyachuk wrote:
>
No, it's not the cassandra user, and as I understood it, all other users log
in with LOCAL_ONE.
On Fri, 23 Nov 2018, 19:30 Jonathan Haddad wrote:
> Any chance you’re logging in with the cassandra user? It uses quorum
> reads.
>
>
> On Fri, Nov 23, 2018 at 11:38 AM Vitali Dyachuk
> wrote:
>
>>
Hi,
We have recently met a problem: we added 60 nodes in 1 region to the cluster
and set RF=60 for the system_auth keyspace, following this documentation:
https://docs.datastax.com/en/cql/3.3/cql/cql_using/useUpdateKeyspaceRF.html
However, we've started to see increased login latencies in the cluster
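For reference, the RF change described in that documentation boils down to an
ALTER KEYSPACE plus a repair (the DC name and RF value here are illustrative;
run it against your own topology):

```shell
# Illustrative: DC name and RF depend on your topology.
cqlsh -e "ALTER KEYSPACE system_auth
          WITH replication = {'class': 'NetworkTopologyStrategy',
                              'DC1': 3};"
# After changing RF, repair the keyspace so the new replicas get the data:
nodetool repair system_auth
```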
token range, finds the difference and then streams. 6 hours for 10 GB of
data
If we are scaling out the existing datacenters, then the bootstrap process
will take care of streaming data to the new node; we just need to add a new
node to the region.
Vitali
On Sun, Sep 16, 2018 at 11:02 PM Vitali Dyachuk wrote:
On Sep 16, 2018, at 2:07 AM, Vitali Dyachuk wrote:
>
> Both stream throughput settings are set to 0, meaning that there is no
> stream throttling on the C* side. Yes, I see high CPU used by the STREAM-IN
> thread; sstables are compressed up to 80%.
> What about copying sstables with rsync a
If you see high CPU used by the STREAM-IN thread, then your streaming is CPU bound. In this
> situation a powerful CPU will definitely help. Dropping internode
> compression and encryption will also help. Are your SSTables compressed?
>
> Dinesh
>
>
> On Friday, September 14, 2018, 4:15:28 AM PDT, Vitali Dyachuk
see if there is any improvement and later
> set the value if you can’t leave these values to 0.
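The throttles being discussed can be inspected and changed at runtime without a
restart (values are in Mbit/s, 0 disables the limit; subcommand names are as in
3.x nodetool — verify with `nodetool help` on your version):

```shell
# Current limits:
nodetool getstreamthroughput
nodetool getinterdcstreamthroughput

# Unthrottle streaming entirely (0 = no limit):
nodetool setstreamthroughput 0
nodetool setinterdcstreamthroughput 0
```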
>
> On Wed, Sep 12, 2018 at 5:42 AM Vitali Dyachuk wrote:
>
Hi,
I'm currently streaming data with nodetool rebuild on 2 nodes; each node is
streaming from a different location. The problem is that it takes ~7 days to
stream 4 TB of data to 1 node. The speed on each side is ~150 Mbit/s, so it
should take around ~2.5 days. Although there are resources on the des
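The back-of-the-envelope estimate above checks out (decimal units assumed for
both TB and Mbit):

```shell
# 4 TB at a sustained 150 Mbit/s (decimal units):
awk 'BEGIN {
  bits = 4e12 * 8            # 4 TB payload in bits
  secs = bits / 150e6        # seconds at 150 Mbit/s
  printf "%.1f days\n", secs / 86400
}'
# prints "2.5 days"
```

So a ~7 day transfer means the link is nowhere near saturated, pointing at a
bottleneck elsewhere (CPU, compression, or throttling).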
On Azar 8, 1397 AP, at 19:54, Jeff Jirsa wrote:
>> >
>> > Either of those are options, but there’s also sstablesplit to break it
>> up a bit
>> >
>> > Switching to LCS can be a problem depending on how many sstables
>> /overlaps you have
>> >
>
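sstablesplit is an offline tool, so a sketch of using it might look like the
following (the node must be stopped first; the path, generation number, and
chunk size here are illustrative, not taken from this thread):

```shell
# Stop Cassandra first - sstablesplit must not run against a live node.
sudo systemctl stop cassandra

# Split one oversized sstable into ~50 GB chunks (-s is in MB):
sstablesplit --no-snapshot -s 51200 \
    /opt/data/disk5/data/keyspace/table/mc-12345-big-Data.db

sudo systemctl start cassandra
```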
Hi,
Some of the sstables got too big (100 GB and more), so they are not compacting
any more and some of the disks are running out of space. I'm running C*
3.0.17, RF=3, with 10 disks/JBOD and STCS.
What are my options? Completely delete all data on this node and rejoin it
to the cluster, or change the compaction strategy to LCS
much information as possible. Can you reduce this
> issue to a short, repeatable set of steps that we can reproduce? That'll be
> helpful to debug this problem.
>
> Dinesh
> On Wednesday, August 15, 2018, 1:07:21 AM PDT, Vitali Dyachuk <
> vdjat...@gmail.com> wrote:
>
>
I've upgraded to 3.0.17 and the issue is still there. Is there a Jira
ticket for that bug or should I create one?
On Wed, Jul 25, 2018 at 2:57 PM Vitali Dyachuk wrote:
> I'm using 3.0.15. I see that there is some fix for sstable metadata in
> 3.0.16 https://issues.apache
Hello,
I'm going to follow this documentation to add a new datacenter to the C*
cluster:
https://docs.datastax.com/en/cassandra/3.0/cassandra/operations/opsAddDCToCluster.html
The main step is to run nodetool rebuild, which will sync data to the new
datacenter; this will load the cluster badly since the
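One common way to keep a rebuild from loading the source cluster badly is to
throttle cross-DC streaming while it runs (the DC name and throughput values
below are illustrative; the 200 Mbit/s restore value is the stock 3.0 default
for inter_dc_stream_throughput_outbound_megabits_per_sec — check yours):

```shell
# Cap cross-DC streaming on the source DC's nodes (e.g. 100 Mbit/s):
nodetool setinterdcstreamthroughput 100

# On each node of the new datacenter, stream from the existing DC:
nodetool rebuild -- existing-dc-name

# Restore the default afterwards:
nodetool setinterdcstreamthroughput 200
```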
What version of Cassandra are you running? There is a bug in 3.10.0 and
> certain 3.0.x that occurs in certain conditions and corrupts that file.
>
> Hannu
>
> Vitali Dyachuk wrote on 25.7.2018 at 10.48:
>
Hi,
I have noticed in the cassandra system.log that there is some issue with
sstable metadata; the message says:
WARN [Thread-6] 2018-07-25 07:12:47,928 SSTableReader.java:249 - Reading
cardinality from Statistics.db failed for
/opt/data/disk5/data/keyspace/table/mc-big-Data.db
Although there is
ng ticket:
> https://issues.apache.org/jira/browse/CASSANDRA-13404
>
>
> On 09.07.2018 17:12, Vitali Dyachuk wrote:
Hi,
There is certificate validation based on a mutual CA; this is the 1st factor.
The 2nd factor could be checking the common name of the client certificate.
This probably requires writing a patch, but perhaps someone has already done
that?
Vitali Djatsuk.