uuid PRIMARY KEY,
A counter,
B counter
);
ALTER TABLE keyspace.table DROP A USING TIMESTAMP 1569582510859000;
If I run a SELECT on system_schema.dropped_columns, it still shows the
dropped counter column.
How can I properly remove all references to it?
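For what it's worth, the row in system_schema.dropped_columns is expected and, as far as I know, is kept on purpose: Cassandra uses the drop timestamp to ignore any leftover SSTable data for that column on reads, so it isn't something to delete by hand. You can inspect the entry like this (the keyspace/table names below are placeholders):

```cql
SELECT column_name, dropped_time, type
  FROM system_schema.dropped_columns
 WHERE keyspace_name = 'keyspace' AND table_name = 'table';
```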
Felipe Esteves
Tecnologia
felipe.este
Hi, I've got the same problem.
Upgraded from 2.2.11 to 3.11.3, and since then the DEBUG logs show these
messages.
It seems to me the messages were reduced a little after the repair, but
it's still happening.
In my case it's a 3-node cluster, RF=3, with reads and writes at LOCAL_QUORUM.
Felipe Esteves
--
Felipe Esteves
--
Ioannis,
As some people have already said, there are one or two keyspaces that use
EverywhereStrategy; dse_system is one of them, if I'm not wrong.
You must remember to change them to a community-supported strategy
(NetworkTopologyStrategy, for instance) or it will fail.
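The change itself is a one-liner per keyspace; a minimal sketch, assuming a single datacenter named dc1 with replication factor 3 (adjust both to your topology), and it's worth running a repair on the keyspace afterwards so replicas land on the right nodes:

```cql
ALTER KEYSPACE dse_system
  WITH replication = {'class': 'NetworkTopologyStrategy', 'dc1': '3'};
```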
--
compactions graph.
Felipe Esteves
2017-07-15 3:23 GMT-03:00 Petrus Gomes <petru...@gmail.com>:
> Hi Felipe,
>
> Yes, try it and let us know how it goes.
>
> Thanks,
> Petrus Silva.
>
it high, maybe it will
explain the read latency. I will try to run a repair on the cluster to see
how it goes.
Felipe Esteves
2017-07-13 15:02 GMT-03:00 Petru
instance, row cache is enabled with an almost 100%
hit rate.
The logs from the Cassandra instances don't have any errors, tombstone
messages, or anything like that. It's mostly compactions and G1GC
operations.
Any hints on where to investigate further?
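In case it helps, the cache hit rates and per-table read latencies I'd look at first are all exposed by nodetool (ks and table below are placeholders for the hot table):

```shell
# overall key/row cache hit rates for the node
nodetool info
# per-table read latency, cache hits, and SSTables-per-read
nodetool tablestats ks.table
# read/write latency distribution for one table
nodetool tablehistograms ks table
```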
Felipe Esteves
--
s] message="Username and/or password are incorrect"
Felipe Esteves
--
Hi,
Just some feedback from my scenario: it all went well, no downtime. In my
case, I had authentication enabled from the beginning and just needed to
change the authorizer.
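For anyone following along, the switch is a single line in cassandra.yaml on each node, applied with a rolling restart (the cache value below is just the stock default, not a tuning recommendation):

```yaml
# cassandra.yaml
authorizer: CassandraAuthorizer   # was: AllowAllAuthorizer
# granted permissions are cached, which softens the extra reads
# against the system_auth tables
permissions_validity_in_ms: 2000
```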
Felipe Esteves
Thank you, Sean!
Felipe Esteves
2016-06-07 14:20 GMT-03:00 <sean_r_dur...@homedepot.com>:
> I answered a similar question here:
>
> https://gr
a little concerned about the performance of the cluster while
I'm restarting all the nodes. Is it possible to have some downtime (access
errors, maybe), since all the data was created with AllowAllAuthorizer?
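One way to shrink the window for those errors: after the first node restarts with CassandraAuthorizer, connect to it as a superuser and grant the application roles before rolling the remaining nodes (ks and app_user below are placeholder names for illustration):

```cql
GRANT ALL PERMISSIONS ON KEYSPACE ks TO app_user;
```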
--
Felipe Esteves
Hi Jeff,
Thanks for the info, you're right!
Felipe Esteves
2016-02-26 17:38 GMT-03:00 Jeff Jirsa <jeff.ji...@crowdstrike.com>:
> Cassandra is streaming it at a near constant r
In the instance logs, I only have stream messages from when I started
the rebuild.
My point is: is it normal for Cassandra to accumulate this amount of data
and then send it? I was hoping it would be a more gradual, incremental
process.
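One way to check whether streaming is continuous or bursty is to watch the stream sessions while the rebuild runs, e.g.:

```shell
# refresh stream progress every 10s while the rebuild runs
watch -n 10 nodetool netstats
```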
thanks,
Felipe Esteves