Sebastien,
Another thing to keep in mind when writing/updating a map column is that it
is internally (in the memtable) backed by a synchronized data structure -
if the rate of writes/updates is sufficiently high, the resulting CPU load
will cripple the nodes (see CASSANDRA-15464).
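A minimal sketch of the kind of workload that hits that structure (keyspace, table, column names, and key values below are hypothetical, not from your schema):

    # Hypothetical keyspace/table with a map column.
    cqlsh -e "CREATE KEYSPACE IF NOT EXISTS demo
              WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 1};"
    cqlsh -e "CREATE TABLE IF NOT EXISTS demo.events (
                id    uuid PRIMARY KEY,
                attrs map<text, text>
              );"

    # Each statement mutates individual map cells of the same partition; at a high
    # enough rate, these per-cell map updates in the memtable become the hot spot.
    for i in $(seq 1 1000); do
      cqlsh -e "UPDATE demo.events
                SET attrs['key-$i'] = 'value-$i'
                WHERE id = 11111111-1111-1111-1111-111111111111;"
    done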
Amit,
Would you be able to provide the full stacktrace?
Arvydas
On Thu, Dec 8, 2022 at 8:07 AM Amit Patel via user <
user@cassandra.apache.org> wrote:
> Hi,
>
>
>
> I have installed cassandra-4.0.7-1.noarch - repo ( baseurl=
> https://redhat.cassandra.apache.org/40x/noboolean/) on Redhat
Keep in mind that hints will accumulate for this node across the cluster
for *max_hint_window_in_ms* (default: 3 hours).
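If you want to confirm the window on your nodes (the path below assumes a package install; adjust for your layout):

    # 10800000 ms = 3 hours, the default.
    grep -n 'max_hint_window_in_ms' /etc/cassandra/conf/cassandra.yaml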
On Fri, Sep 8, 2017 at 8:20 AM, Jeff Jirsa wrote:
> nodetool disablebinary (will make the node down for native cql clients,
> stopping client apps from writing)
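For reference, a sketch of how that looks in practice (statusbinary/enablebinary are the standard companion nodetool subcommands):

    # Stop accepting native-protocol (CQL) connections on this node only;
    # the node keeps participating in gossip and replication.
    nodetool disablebinary
    nodetool statusbinary    # should report: not running

    # ... perform the maintenance ...

    # Resume serving CQL clients.
    nodetool enablebinary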
Run *nodetool cleanup* on the *4.4.4.5* DC node(s). Changing network
topology does not *remove* data - it's a manual task.
But it should prevent it from replicating over to the undesired DC.
Also make sure your load balancing policy is set to DCAwareRoundRobinPolicy,
with the *4.4.4.4* DC set as the local datacenter.
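A sketch of the cleanup step (keyspace name is hypothetical; run it on each node in the DC that should no longer hold the data):

    # Drops locally stored partitions this node no longer owns
    # under the current replication settings.
    nodetool cleanup my_keyspace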
Thanks, got it working now :)
>
> Just wish that an error like:
> "Table x not found in keyspace y"
> would have been much better than:
> "Table x not configured".
>
>
> On Sat, Mar 25, 2017 at 6:13 AM, Arvydas Jonusonis <
> arvydas.jo
Make sure to prefix the table with the keyspace.
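For example (keyspace name is hypothetical; table2 is taken from the thread below):

    # Fully qualify the table so cqlsh doesn't depend on a prior "USE <keyspace>;".
    cqlsh -e "SELECT * FROM my_keyspace.table2 LIMIT 1;"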
On Sat, Mar 25, 2017 at 13:28 Anuj Wadehra wrote:
> Ensure that all the nodes are on same schema version such that table2
> schema is replicated properly on all the nodes.
>
> Thanks
> Anuj
>
> Sent from Yahoo Mail on
> Snitch is: EC2 snitch, as we deployed the cluster on EC2 instances.
>
> I was worried that CL=ALL would have higher read latency and more read failures,
> but I won't rule out trying it.
>
> Should I switch from select count(*) to selecting the partition_key column? Would
> that be of any help?
>
What are your replication strategy and snitch settings?
Have you tried doing a read at CL=ALL? If it's an actual inconsistency
issue (missing data), this should cause the correct results to be returned.
You'll need to run a repair to fix the inconsistencies.
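A sketch of that check, with hypothetical keyspace/table/key names:

    # In an interactive cqlsh session, force reads to touch every replica:
    #   cqlsh> CONSISTENCY ALL;
    #   cqlsh> SELECT count(*) FROM my_keyspace.my_table WHERE partition_key = 'some-key';
    #
    # If ALL returns the correct results while lower consistency levels don't,
    # the replicas are out of sync; repair the table to bring them back in line:
    nodetool repair my_keyspace my_table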
If all the data is actually there,
You can experiment quite easily without even needing to restart the
Cassandra service.
The caches (row and key) can be enabled on a table-by-table basis via a
schema directive. But the cache capacity (which is the one that you
referred to in your original post, set to 0 in cassandra.yaml) is a node-wide setting.
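As a concrete (hypothetical) sketch of the two knobs:

    # Per-table, via the schema - takes effect without a restart:
    cqlsh -e "ALTER TABLE my_keyspace.my_table
              WITH caching = {'keys': 'ALL', 'rows_per_partition': '100'};"

    # Node-wide capacity, in cassandra.yaml (the row cache stays disabled while this is 0):
    #   row_cache_size_in_mb: 128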
Do not change the cluster name - the Cassandra service will not start on
the same sstables if the cluster name is changed.
Arvydas
On Wed, Mar 8, 2017 at 4:57 PM, Chuck Reynolds
wrote:
> I was hoping I could do the following
>
> · Change seeds
>
> ·
That's a good point - a snapshot is certainly in order ASAP, if not already
done.
One more thing I'd add about "data has to be consolidated from all the
nodes" (from #3 below):
- EITHER run the sstable2json ops on each node
- OR if size permits, copy the relevant sstables (containing the
Use nodetool getsstables to discover which sstables contain the data and
then dump it with sstable2json -k to explore the content of the
data/mutations for those keys.
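Roughly (keyspace, table, key, and paths below are hypothetical; the exact key format expected by -k depends on your Cassandra version and key type):

    # List the sstable files that contain the partition with this key.
    nodetool getsstables my_keyspace my_table some-key
    # -> prints full paths to one or more *-Data.db files

    # Dump just that partition from one of the reported sstables.
    sstable2json /var/lib/cassandra/data/my_keyspace/my_table/<reported>-Data.db -k some-key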
Arvydas
On Tue, Mar 7, 2017 at 4:13 AM, Michael Fong <
michael.f...@ruckuswireless.com> wrote:
> Hi, all,
>
>
>
>
>
> We