metrics about your
VPSs (CPU, memory, load, I/O stats, disk throughput, network traffic, etc.)?
I think some process (on another virtual machine or on the host) is stealing
your resources, so your Cassandra node cannot process the requests and the
other instances need to store the data as hints.
--
Bye,
Gábor Auth
Hi,
On Tue, Dec 6, 2022 at 12:41 PM Lapo Luchini wrote:
> I'm trying to change IP address of an existing live node (possibly
> without deleting data and streaming terabytes all over again) following
> these steps:
https://stackoverflow.com/a/57455035/166524
> 1. echo 'auto_bootstrap: false' >>
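For context, the usual shape of that procedure is something like the sketch below (the exact steps are in the linked answer; the config path and the final repair are my assumptions, adjust to your layout):

# assumption: cassandra.yaml lives in /etc/cassandra
sudo systemctl stop cassandra
echo 'auto_bootstrap: false' >> /etc/cassandra/cassandra.yaml
# update listen_address / rpc_address (and broadcast_* if set) to the new IP
sudo systemctl start cassandra
# once gossip shows the new IP on all nodes, optionally run a repair and
# remove the auto_bootstrap line again
nodetool repair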
Hi,
On Tue, Jun 29, 2021 at 12:34 PM Erick Ramirez
wrote:
> You definitely shouldn't perform manual compactions -- you should let the
> normal compaction tasks take care of it. It is unnecessary to manually run
> compactions since it creates more problems than it solves as I've explained
> in
Hi,
On Tue, Nov 10, 2020 at 6:29 PM Alex Ott wrote:
> What about using "per partition limit 1" on that table?
>
Oh, it is almost a good solution, but the key is actually ((epoch_day,
name), timestamp), to support more distributed partitioning, so... it does
not work here... :/
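To make the problem concrete, here is a rough sketch (table and column names are made up) of why PER PARTITION LIMIT 1 does not directly give the latest value per name with this key:

-- hypothetical layout of the measurement table
CREATE TABLE measurement (
    epoch_day int,
    name text,
    ts timestamp,
    value double,
    PRIMARY KEY ((epoch_day, name), ts)
) WITH CLUSTERING ORDER BY (ts DESC);

-- returns the newest row of every (epoch_day, name) partition,
-- i.e. one row per name per day, not one row per name overall
SELECT epoch_day, name, ts, value
FROM measurement
PER PARTITION LIMIT 1;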
--
Bye,
Auth Gábor
Hi,
On Tue, Nov 10, 2020 at 5:29 PM Durity, Sean R
wrote:
> Updates do not create tombstones. Deletes create tombstones. The above
> scenario would not create any tombstones. For a full solution, though, I
> would probably suggest a TTL on the data so that old/unchanged data
> eventually gets
Hi,
On Tue, Nov 10, 2020 at 3:18 PM Durity, Sean R
wrote:
> My answer would depend on how many “names” you expect. If it is a
> relatively small and constrained list (under a few hundred thousand), I
> would start with something like:
>
At the moment, the number of names is more than 10,000.
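For illustration only, my guess at the kind of table the quoted reply is hinting at, with a small hash bucket as the partition key so that 10,000+ names are not packed into a single partition (all names below are placeholders):

-- "latest value per name" table, overwritten (upserted) on every write
CREATE TABLE latest_measurement (
    bucket int,          -- e.g. hash(name) % 16, to spread the names
    name text,
    ts timestamp,
    value double,
    PRIMARY KEY (bucket, name)
);

-- upsert the newest value together with the time-series insert
INSERT INTO latest_measurement (bucket, name, ts, value)
VALUES (3, 'sensor-42', '2020-11-10 15:00:00+0000', 12.34);

-- read the latest value of every name by scanning the few buckets
SELECT name, ts, value FROM latest_measurement WHERE bucket = 3;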
Hi,
Short story: storing time series of measurements (key(name, timestamp),
value).
The problem: get the list of the last `value` of every `name`.
Is there a Cassandra-friendly solution to store the last value of every
`name` in a separate metadata table? It will come with a lot of
Hi,
On Sat, May 23, 2020 at 6:26 PM Laxmikant Upadhyay
wrote:
> Thank you so much for the quick response. I completely agree with Jeff and
> Gabor that it is an anti-pattern to build a queue in Cassandra. But the plan
> is to reuse the existing Cassandra infrastructure without any additional cost
>
Hi,
On Sat, May 23, 2020 at 4:09 PM Laxmikant Upadhyay
wrote:
> I think that we should avoid tombstones, especially row-level ones, so we
> should go with option 1. Kindly suggest on the above or any other better approach?
>
Why don't you use a queue implementation, like ActiveMQ, Kafka or something
similar?
Hi,
On Tue, May 1, 2018 at 10:27 PM Gábor Auth <auth.ga...@gmail.com> wrote:
> One or two years ago I've tried the CDC feature but switched off... maybe
> is it a side effect of switched off CDC? How can I fix it? :)
>
Okay, I've worked it out. Updated the schema of the aff
Hi,
On Tue, May 1, 2018 at 7:40 PM Gábor Auth <auth.ga...@gmail.com> wrote:
> What can I do? Any suggestion? :(
>
Okay, I've diffed the good and the bad system_schema tables. The only
difference is the `cdc` field in three keyspaces (in `tables` and `views`):
- the value of
Hi,
On Mon, Apr 30, 2018 at 11:11 PM Gábor Auth <auth.ga...@gmail.com> wrote:
> On Mon, Apr 30, 2018 at 11:03 PM Ali Hubail <ali.hub...@petrolink.com>
> wrote:
>
>> What steps have you performed to add the new DC? Have you tried to follow
>> certain
ml
>
Yes, exactly. :/
Bye,
Gábor Auth
Hi,
On Mon, Apr 30, 2018 at 11:39 AM Gábor Auth <auth.ga...@gmail.com> wrote:
> I've just tried to add a new DC and a new node to my cluster (3 DCs and 10
> nodes) and the new node has a different schema version:
>
Is this normal? The node is marked down but doing a repair succ
restart (node-by-node)
The MigrationManager is constantly running on the new node and tries to
migrate the schema:
DEBUG [NonPeriodicTasks:1] 2018-04-30 09:33:22,405
MigrationManager.java:125 - submitting migration task for /x.x.x.x
What else can I do? :(
Bye,
Gábor Auth
et', Cassandra is not a 'drop-in' replacement
for MySQL. Maybe it will be faster, maybe it will be totally unusable, based
on your use-case and database schema.
Is there some good more recent material?
>
Are you able to completely redesign your database schema? :)
Bye,
Gábor Auth
the whole MV feature later be withdrawn (if the issue can't be fixed)?
:)
Bye,
Gábor Auth
Hi,
On Wed, Oct 4, 2017 at 8:39 AM Oleksandr Shulgin <
oleksandr.shul...@zalando.de> wrote:
> If you have migrated ALL the data from the old CF, you could just use
> TRUNCATE or DROP TABLE, followed by "nodetool clearsnapshot" to reclaim the
> disk space (this step has to be done per-node).
>
big-Data.db
-rw-r--r-- 1 cassandra cassandra 24734857 Oct 2 19:53 mc-48440-big-Data.db
Two of them are untouched and one was rewritten with the same content. :/
Bye,
Gábor Auth
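For completeness, the reclaim procedure from the quoted advice would look roughly like this (keyspace and table names are placeholders; on newer versions clearsnapshot may need --all or -t <tag>):

# in cqlsh, drop the already-migrated data (TRUNCATE takes an auto snapshot)
cqlsh -e "TRUNCATE old_keyspace.old_table;"
# then, on every node, remove that snapshot to actually free the disk space
nodetool clearsnapshot old_keyspace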
92 Oct 2 01:16 mc-48281-big-TOC.txt
> Now check both the list results. If they have some common sstables then we
> can say that C* is not compacting sstables.
>
Yes, exactly. How can I fix it?
Bye,
Gábor Auth
possible you could experience zombie
> data (i.e. data that was previously deleted coming back to life)
>
It is a test cluster with test keyspaces. :)
Bye,
Gábor Auth
>
select gc_grace_seconds from system_schema.tables
where keyspace_name = 'mat' and table_name = 'number_item';

 gc_grace_seconds
------------------
             3600

(1 rows)
Bye,
Gábor Auth
> Could you please explain?
>
I've tried the test case that you described and it works (the compaction
removed the marked_deleted rows) on a newly created CF. But the same
gc_grace_seconds setting has no effect on the `number_item` CF (millions
of rows have been deleted during a migration last week).
Bye,
Gábor Auth
Cassandra 3.11.0, two DCs (with 4 nodes each).
Bye,
Gábor Auth
Hi,
On Sun, Oct 1, 2017 at 6:53 PM Jonathan Haddad <j...@jonhaddad.com> wrote:
> The TTL is applied to the cells on insert. Changing it doesn't change the
> TTL on data that was inserted previously.
>
Is there any way to purge this tombstoned data?
Bye,
Gábor Auth
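As a side note, the remaining TTL of already-written cells can be inspected directly, which makes it easy to confirm that old cells kept their original TTL (table and column names below are placeholders):

-- shows the remaining TTL (in seconds) of the value cell; null means no TTL
SELECT name, ts, TTL(value) FROM some_table LIMIT 10;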
gc_grace_seconds value and the repair will remove it. Am I right?
Bye,
Gábor Auth
},
"cells" : [ ]
}
How can I purge these old rows? :)
I've tried: compact, scrub, cleanup, clearsnapshot, flush and full repair.
Bye,
Gábor Auth
Hi,
Does the `alter table number_item with gc_grace_seconds = 3600;` set the
grace seconds of tombstones only for future modifications of the number_item
column family, or does it affect all existing data?
Bye,
Gábor Auth
on_window_size':'1'
} AND default_time_to_live = 2592000;
Does it affect the previous contents of the table or do I need to truncate
manually? Is 'TRUNCATE' safe? :)
Bye,
Gábor Auth
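For reference, the full statement under discussion presumably looks something like this; the table name, window unit and window size are my assumptions based on the visible fragment:

ALTER TABLE some_table
  WITH compaction = {
    'class': 'TimeWindowCompactionStrategy',
    'compaction_window_unit': 'DAYS',
    'compaction_window_size': '1'
  }
  AND default_time_to_live = 2592000;  -- 2592000 s = 30 days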
done by default on CASSANDRA-12701).
>
> 2017-03-17 8:36 GMT-03:00 Gábor Auth <auth.ga...@gmail.com>:
>
> Hi,
>
> I've discovered a relatively huge amount of data in the system_distributed
> keyspace's repair_history table:
> Table: repair_history
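A commonly mentioned way to reclaim that space is to simply truncate the repair history tables; as far as I know they only hold diagnostic history, but treat this as a suggestion rather than an official procedure:

TRUNCATE system_distributed.repair_history;
TRUNCATE system_distributed.parent_repair_history;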
Hi,
On Wed, Mar 15, 2017 at 11:35 AM Ben Slater <ben.sla...@instaclustr.com>
wrote:
> When you say you’re running repair to “rebalance” do you mean to populate
> the new DC? If so, the normal/correct procedure is to use nodetool rebuild
> rather than repair.
>
Oh, thank you! :)
Bye,
Gábor Auth
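For the record, populating the new DC that way is typically a per-node command like the following, where DC01 stands for one of the existing DCs:

# run on every node of the new DC; streams its ranges from the named source DC
nodetool rebuild -- DC01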
>
to purge? :)
Bye,
Gábor Auth
session
aae06160-0943-11e7-9c1f-f5ba092c6aea for range
[(-7542303048667795773,-7300899534947316960]] finished (progress: 34%)
[2017-03-15 06:03:17,786] Repair completed successfully
[2017-03-15 06:03:17,787] Repair command #4 finished in 10 minutes 39
seconds
Bye,
Gábor Auth
way you have isolation from
> Production. Plus no operational overhead.
>
I think this is also an operational overhead... :)
Bye,
Gábor Auth
the
replication factor of old keyspaces from {'class':
'NetworkTopologyStrategy', 'DC01': '3', 'DC02': '3'} to {'class':
'NetworkTopologyStrategy', 'Archive': '1'}, and repair the keyspace.
What do you think? Any other idea? :)
Bye,
Gábor Auth
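Expressed as commands, the idea above would be roughly the following (the keyspace name is a placeholder, and the repair step afterwards is what actually moves the data to the Archive DC):

ALTER KEYSPACE old_keyspace
  WITH replication = {'class': 'NetworkTopologyStrategy', 'Archive': '1'};
-- then, on the Archive node(s):
-- nodetool repair --full old_keyspace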
space).
Bye,
Gábor Auth