Sequentially, and yes - for some definition of "directly" - not just
because it's sequential, but also because each sstable has a cost to read
(e.g. JVM garbage created when you open/seek it that has to be collected
after the read)
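For reference, one way to observe this on a live node is nodetool tablehistograms, which reports how many sstables each read touched. A minimal sketch (keyspace/table names are placeholders, not from this thread):
```
# The "SSTables" column shows sstables read per read, by percentile,
# alongside read/write latency and partition size percentiles.
nodetool tablehistograms my_keyspace my_table
```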
On Tue, Oct 25, 2022 at 8:27 AM Grzegorz Pietru
Hi all,
I can't find any information about how Cassandra handles reads involving
multiple sstables. Are sstables read concurrently or sequentially? Is read
latency directly connected to the number of opened sstables?
Regards
Grzegorz
Hi folks,
There was some discussion on here a couple of weeks ago about using the
Apache Arrow in-memory format for Cassandra data, so I thought I'd share the
following posts / code we just released as alpha (Apache 2 license).
Code:
https://github.com/datastax/sstable-to-arrow
Post Part 1
The operation will run in a single anti-compaction thread so it won't
consume more than 1 CPU. The operation will mostly be IO-bound, with the
disk being the main bottleneck. Are you running it on a direct-attached SSD? It
won't perform well if you're running it on an EBS volume or some other slow
Hi folks,
I'm running a job on an offline node to test how long it takes to run
sstablesplit on several large sstables.
I'm a bit dismayed to see it took about 22 hours to process a 1.5
gigabyte sstable! I worry about the 32 gigabyte sstable that is my
ultimate target to split.
This is running
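For context, a minimal sketch of what such a run might look like. sstablesplit is an offline tool, so the node must be stopped first; the service name, data path, and chunk size below are placeholders, not taken from this thread:
```
sudo systemctl stop cassandra
# split the sstable into ~50 MB chunks without taking a pre-split snapshot
sstablesplit --no-snapshot -s 50 \
  /var/lib/cassandra/data/my_keyspace/my_table-*/mc-1234-big-Data.db
sudo systemctl start cassandra
```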
If you are not deleting or updating data then it should be safe to use the
2nd approach.
Regards,
Nitan
Cell: 510 449 9629
> On Aug 13, 2020, at 11:48 AM, Pushpendra Rajpoot
> wrote:
>
>
> Hi,
>
> I have a cluster of 2 DC, each DC has 5 nodes in production. This cluster is
> based on
Hi,
I have a cluster of 2 DCs, each DC has 5 nodes in production. This cluster
is based on an active-passive model, i.e. the application writes data to one DC
(Active) and it's replicated to the other DC (Passive).
My Passive DC has corrupt sstables (3 nodes out of 5 nodes) whereas there
are no corrupt
Thanks Jeff. Appreciate your reply. As you said, it looks like there were
entries in the commitlogs, and when Cassandra was brought up after deleting
sstables, data from the commitlog was replayed. Maybe next time I will let the
replay happen after deleting the sstable and then truncate the table using CQL
was supposed to be a few MBs. During nodetool repair, one of
> the Cassandra nodes went down. Even after multiple restarts, one of the nodes was
> going down after coming up for a few mins. We decided to truncate the table
> by removing the corresponding sstable from the disk since truncating a
> ta
the table by
removing the corresponding sstable from the disk, since truncating a table
from cqlsh needs all the nodes to be up, which was not the case in our env.
After deleting the sstable from disk on all 3 nodes, we brought up
Cassandra and all the nodes came up fine and we don't see any issue, but we
not be a problem if the repair happens before the
>>> corrupted node is brought back online, right?
>>> 2) in this case, is option (3) equivalent to replacing the node? where
>>> we repair the two live nodes and then bring up the third node with no data
>>>
>>> Leon
>>>
>>> On Tue, May 26, 2020 at 10:11 PM Jeff Jirsa wrote:
There’s two problems with this approach if you need strict correctness
1) after you delete the sstable and before you repair you’ll violate
consistency, so you’ll potentially serve incorrect data for a while
2) The sstable may have a tombstone past gc grace that's shadowing data in
another
Stop the node
Delete as per option 2
Run repair
Regards,
Nitan
Cell: 510 449 9629
> On May 26, 2020, at 6:46 PM, Leon Zaruvinsky wrote:
>
>
> Hi all,
>
> I'm looking to understand Cassandra's behavior in an sstable corruption
> scenario, and what the minimum amount
Hi all,
I'm looking to understand Cassandra's behavior in an sstable corruption
scenario, and what the minimum amount of work is that needs to be done to
remove a bad sstable file.
Consider: 3 node, RF 3 cluster, reads/writes at quorum
SStable corruption exception on one node at
keyspace1/table1
Thanks all for your support.
I executed the discussed process (barring repair, as table was read for
reporting only) and it worked fine in production.
Regards
Manish
>
The risk is you violate consistency while you run repair
Assume you have three replicas for that range, a b c
At some point b misses a write, but it’s committed on a and c for quorum
Now c has a corrupt sstable
You empty c and bring it back with no data and start repair
Then the app reads
Thanks Jeff for your response.
Do you see any risk in the following approach:
1. Stop the node.
2. Remove all sstable files from
*/var/lib/cassandra/data/keyspace/tablename-23dfadf32adf33d33s333s33s3s33 *
directory.
3. Start the node.
4. Run full repair on this particular table
I wanted to go
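As a rough sketch only, the steps above might translate to something like the following on the affected node (service name, data path, and keyspace/table names are placeholders; not a verified runbook):
```
nodetool drain                                                  # flush and stop accepting traffic
sudo systemctl stop cassandra                                   # 1. stop the node
sudo rm -f /var/lib/cassandra/data/my_keyspace/my_table-*/*-big-*   # 2. remove the sstable component files
sudo systemctl start cassandra                                  # 3. start the node
nodetool repair --full my_keyspace my_table                     # 4. full repair on this particular table
```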
Agree this is both strictly possible and more common with LCS. The only
thing that's strictly correct to do is treat every corrupt sstable
exception as a failed host, and replace it just like you would a failed
host.
On Thu, Feb 13, 2020 at 10:55 PM manish khandelwal <
manishkhande
Thanks Erick
I would like to explain how data resurrection can take place with single
SSTable deletion.
Consider this case of a table with Levelled Compaction Strategy
1. Data A written a long time back.
2. Data A is deleted and tombstone is created.
3. After GC grace tombstone is purgeable.
4
The log shows that the problem occurs when decompressing the SSTable,
but there's not much actionable info in it.
I would like to know what will be "ordinary hammer" in this case. Do you
> want to suggest that deleting only corrupt sstable file ( in this case
> mc-1234
Hi Erick
Thanks for your quick response. I have attached the full stacktrace, which
shows the exception during the validation phase of the table repair.
I would like to know what will be "ordinary hammer" in this case. Do you
want to suggest that deleting only corrupt sstable file ( in this cas
It will achieve the outcome you are after but I doubt anyone would
recommend that approach. It's like using a sledgehammer when an ordinary
hammer would suffice. And if you were hitting some bug then you'd run into
the same problem anyway.
Can you post the full stack trace? It might provide us
(LazilyInitializedUnfilteredRowIterator.java:32)
~[apache-cassandra-3.11.2.jar:3.11.2] at
org.apache.cassandra.utils.AbstractIterator.hasNext(AbstractIterator.java:47)
~[apache-cassandra-3.11.2.jar:3.11.2]
Regarding your question about removing all SSTable files of a table (column
family). I
You need to stop C* in order to run the offline sstable scrub utility.
That's why it's referred to as "offline". :)
Do you have any idea on what caused the corruption? It's highly unusual
that you're thinking of removing all the files for just one table.
Typically if the corruption wa
Hi
I see a corrupt SSTable in one of my keyspace tables on one node. The cluster is
3 nodes with replication 3. The Cassandra version is 3.11.2.
I am thinking along the following lines to resolve the corrupt SSTable issue:
1. Run nodetool scrub.
2. If step 1 fails, run offline sstablescrub.
3. If step 2 fails
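A hedged sketch of what steps 1 and 2 might look like (keyspace/table names and the service name are placeholders; sstablescrub is an offline tool, so the node must be stopped for step 2):
```
nodetool scrub my_keyspace my_table        # step 1: online scrub
sudo systemctl stop cassandra              # step 2: offline scrub requires Cassandra to be down
sstablescrub my_keyspace my_table
sudo systemctl start cassandra
```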
riting large partitions
> during compaction.
>
>
>
>
>
> On Thu, Nov 21, 2019 at 6:33 PM Sergio Bilello
> wrote:
>
> > Hi guys!
> > Just for curiosity do you know anything beside
> > https://github.com/tolbertam/sstable-
:
> Hi guys!
> Just for curiosity do you know anything beside
> https://github.com/tolbertam/sstable-tools to find a large partition?
> Best,
>
> Sergio
>
It's Apache licensed:
https://github.com/instaclustr/cassandra-sstable-tools/blob/cassandra-3.11/LICENSE
On Fri, Nov 22, 2019 at 12:06 AM Ahmed Eljami
wrote:
> I found this project on instaclustr github but I dont have any idea about
> license:
>
>
> https://github.com/instac
I found this project on the Instaclustr GitHub but I don't have any idea about
the license:
https://github.com/instaclustr/cassandra-sstable-tools/blob/cassandra-3.11/README.md
On Fri, Nov 22, 2019 at 03:33, Sergio Bilello wrote:
> Hi guys!
> Just for curiosity do you know anything beside
Hi guys!
Just out of curiosity, do you know anything besides
https://github.com/tolbertam/sstable-tools to find a large partition?
Best,
Sergio
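Not from this thread, but a hedged sketch of a few built-in ways to spot large partitions (keyspace/table names and the log path are assumptions):
```
nodetool tablestats my_keyspace.my_table | grep -i partition    # includes "Compacted partition maximum bytes"
nodetool tablehistograms my_keyspace my_table                   # partition size percentiles
grep -i "large partition" /var/log/cassandra/system.log         # compaction warnings about oversized partitions
```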
running the
>> alter statement, we ran a major compaction without understanding the
>> implications.
>>
>> Now, while new sstables are properly being created according to the time
>> window, there is a giant sstable sitting around waiting for expiration.
>>
>> Is there a way we can break it up again? Running the alter statement
>> again doesn’t seem to be touching it.
>>
>> Thanks,
>> Leon
Hi,
We are switching a table to run using TWCS. However, after running the alter
statement, we ran a major compaction without understanding the implications.
Now, while new sstables are properly being created according to the time
window, there is a giant sstable sitting around waiting
tool status shows all nodes up and reads/writes are working for most
> operations.
>
> Network looks good. Any other ideas?
>
>
> Regards,
>
> Nitan
>
> Cell: 510 449 9629
>
> On May 28, 2019, at 11:21 AM, Alain RODRIGUEZ wrote:
>
> Hello Nitan,
Hello Nitan,
1. Can sstable corruption in application tables cause schema mismatch?
>
I would say it should not. I could imagine it in the case that the corruption
hits some 'system' keyspace sstable. If not, I don't see how corrupted
data can impact the schema on the node.
> 2. Do w
Hi,
Two questions:
1. Can sstable corruption in application tables cause schema mismatch?
2. Do we need to disable repair while adding storage while Cassandra is down?
Regards,
Nitan
Cell: 510 449 9629
Thank you guys !
On Thu, Apr 4, 2019 at 5:49 PM Dmitry Saprykin
wrote:
> Hello,
>
> I think it was done in the following issue: Sstable min/max metadata can
> cause data loss (CASSANDRA-14861)
>
>
> https://github.com/apache/cassandra/commit/d60c78358b6f599a83f3c112bfd6ce
Hello,
I think it was done in the following issue: Sstable min/max metadata can
cause data loss (CASSANDRA-14861)
https://github.com/apache/cassandra/commit/d60c78358b6f599a83f3c112bfd6ce72c1129c9f
src/java/org/apache/cassandra/io/sstable/format/big/BigFormat.java
<https://github.com/apa
This is CASSANDRA-14861
--
Jeff Jirsa
> On Apr 4, 2019, at 8:23 AM, Léo FERLIN SUTTON
> wrote:
>
> Hello !
>
> I have noticed something since I upgraded to cassandra 3.0.18.
>
> Before all my Sstable used to be named this way :
> ```
> mc-130817-big-Comp
Hello !
I have noticed something since I upgraded to Cassandra 3.0.18.
Before, all my SSTables used to be named this way:
```
mc-130817-big-CompressionInfo.db
mc-130817-big-Data.db
mc-130817-big-Digest.crc32
mc-130817-big-Filter.db
mc-130817-big-Index.db
mc-130817-big-Statistics.db
mc-130817-big
Running:
SSTablemetadata /THE_KEYSPACE_DIR/mc-1421-big-Data.db
result was:
Estimated droppable tombstones: 1.2
Having STCS and a data disk usage of 80% (not enough free space for a normal
compaction), is it OK to just: 1. stop Cassandra, 2. delete mc-1421*, and
then 3. start Cassandra?
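As an aside, a hedged sketch of checking the droppable-tombstone estimate across every sstable of a table before deciding (the data path is a placeholder):
```
for f in /var/lib/cassandra/data/my_keyspace/my_table-*/*-big-Data.db; do
  echo -n "$f: "
  sstablemetadata "$f" | grep "Estimated droppable tombstones"
done
```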
fields of the cdt is collected.
For counters, it needs to merge all mutations distributed across all
sstables to give the final state of the counter value.
Another related question, since the sstable only contains partitioning
key index, clustering key index (inline within the index file), but no
index for coll
levels beyond
>> the filtering done by timestamp.
>>
>>> For STCS, it would search sstables in buckets from smallest to largest?
>>
>> Nope. No attempt to do this.
>>
>>> What about other compaction cases? They would iterate all sstables?
>>
>> In all cases, we’ll use a combination of bloom filters and sstable metadata
>> and indices to include / exclude sstables. If the bloom filter hits, we’ll
>> consider things like timestamps and whether or not the min/max clustering
>> of the sstable matches the slice we care about. We don’t consult the compaction
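As a hedged aside, the effect of that per-sstable filtering can be observed per table (keyspace/table names are placeholders):
```
# SSTable count plus bloom filter false positives / false ratio for the table
nodetool tablestats my_keyspace.my_table | grep -i -E "bloom|sstable"
```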
the selected
columns are simple cell) are collected and satisfied, it would search
both memtable and all sstables, regardless of the compaction strategy.
Why?
Moreover, for collection/cdt (non-frozen) and counter types, it would
need to iterate all sstables to ensure the whole set of the fields
Which versions of Cassandra 2.x and 3.x are best for avoiding sstable
corruption and schema migration slowness?
Is this a "Cassandra is not a set-it-and-forget-it system" concept?
On Tue, Sep 18, 2018 at 10:38 AM Steinmaurer, Thomas <
thomas.steinmau...@dynatrace.com> wrote:
>
> any indications in Cassandra log about insufficient disk space during
> compactions?
>
Bingo! The following was logged around the time compaction was started
(and I only looked around when it was
Alex,
any indications in Cassandra log about insufficient disk space during
compactions?
Thomas
From: Oleksandr Shulgin
Sent: Tuesday, 18 September 2018 10:01
To: User
Subject: Major compaction ignoring one SSTable? (was Re: Fresh SSTable files
(due to repair?) in a static table (was Re
On Mon, Sep 17, 2018 at 4:29 PM Oleksandr Shulgin <
oleksandr.shul...@zalando.de> wrote:
>
> Thanks for your reply! Indeed it could be coming from single-SSTable
> compaction, this I didn't think about. By any chance looking into
> compaction_history table could be usefu
On Mon, Sep 17, 2018 at 4:41 PM Jeff Jirsa wrote:
> Marcus’ idea of row lifting seems more likely, since you’re using STCS -
> it’s an optimization to “lift” expensive reads into a single sstable for
> future reads (if a read touches more than - I think - 4? sstables, we copy
multiple times already, how can it be that
> any inconsistency would be found by read-repair or normal repair?
>
> We have seen this on a number of nodes, including SSTables written at the
> time there was guaranteed no repair running.
>
Not obvious to me where the sstable is coming from - you’d have to look in
the logs. If it’s read repair, it’ll be created during a memtable flush. If
On Tue, Sep 11, 2018 at 8:10 PM Oleksandr Shulgin <
oleksandr.shul...@zalando.de> wrote:
> On Tue, 11 Sep 2018, 19:26 Jeff Jirsa, wrote:
>
>> Repair or read-repair
>>
>
> Could you be more specific please?
>
> Why any data would be streamed in if there is no (as far as I can see)
> possibilities
. 20007
We build and manage digital business technology platforms.
On Sep 11, 2018, 2:55 AM -0400, Steinmaurer, Thomas
, wrote:
> Hello,
>
> is there a way to Online scrub a particular SSTable file only and not the
> entire column family?
>
> According to the Cassandra logs w
>> As far as I remember, in newer Cassandra versions, with STCS, nodetool
>> compact offers a ‘-s’ command-line option to split the output into files
>> with 50%, 25% … in size, thus in this case, not a single largish SSTable
>> anymore. By default, without -s, it is a single SSTable though.
>
> Thanks Thomas, I've also spotted the option whi
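A minimal sketch of the invocation being discussed (keyspace/table names are placeholders):
```
# -s / --split-output: split the major compaction output into multiple sstables
nodetool compact -s my_keyspace my_table
```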
Hello,
is there a way to online scrub a particular SSTable file only and not the
entire column family?
According to the Cassandra logs we have a corrupted SSTable that is smallish
compared to the entire data volume of the column family in question.
To my understanding, both, nodetool scrub
n size after compression!
>
>
>
> -Original Message-
> From: Vitaliy Semochkin [mailto:vitaliy...@gmail.com]
> Sent: Tuesday, August 28, 2018 12:03 PM
> To: user@cassandra.apache.org
> Subject: SSTable Compression Ratio -1.0
>
> Hello,
>
> nodetool tablestat
Hello,
nodetool tablestats my_kespace
returns SSTable Compression Ratio -1.0
Can someone explain what -1.0 means?
Regards,
Vitaliy
Hi Rahul,
the table TTL is 24 months. Oldest data is 22 months, so no
expirations yet. Compacted partition maximum bytes: 17 GB - yeah, I
know that's not good, but we'll have to wait for the TTL to make it go
away. More recent partitions are kept under 100 MB by bucketing.
The data model:
A few questions:
What is your maximum compacted partition bytes across the cluster for this table?
What's your TTL?
What does your data model look like, i.e. what's your PK?
Rahul
On Jul 25, 2018, 1:07 PM -0400, James Shaw , wrote:
> nodetool compactionstats --- see compacting which table
> nodetool
nodetool compactionstats --- see which table is compacting
nodetool cfstats keyspace_name.table_name --- check partition size,
tombstones
go to the data file directories: look at the data file sizes, timestamps ---
compaction will write to a new temp file with _tmplink...,
use sstablemetadata ...
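A hedged, runnable version of the checks suggested above (keyspace/table names and paths are placeholders):
```
nodetool compactionstats                                   # what is compacting right now
nodetool cfstats my_keyspace.my_table                      # partition sizes, tombstones per read
ls -lht /var/lib/cassandra/data/my_keyspace/my_table-*/    # data file sizes and timestamps
sstablemetadata /var/lib/cassandra/data/my_keyspace/my_table-*/*-big-Data.db | head -n 40
```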
Hi,
we have a table which is being compacted all the time, with no change in size:
Compaction History:
compacted_at             bytes_in      bytes_out     rows_merged
2018-07-25T05:26:48.101  57248063878   57248063878   {1:11655}
2018-07-25T01:09:47.346  57248063878   57248063878   {1:11655}
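The history above appears to come from the system.compaction_history table; a hedged sketch of pulling it directly with cqlsh (no filtering shown):
```
cqlsh -e "SELECT keyspace_name, columnfamily_name, compacted_at, bytes_in, bytes_out, rows_merged
          FROM system.compaction_history LIMIT 20;"
```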
large partition size.
>
> Thanks/Asad
>
> From: Lucas Benevides [mailto:lu...@maurobenevides.com.br]
> Sent: Wednesday, June 27, 2018 7:02 AM
> To: user@cassandra.apache.org
> Subject: Maximum SSTable size
>
> Hello Community,
>
> Is there a maximum
if your cluster has a
single large table. STCS is the actual Cassandra default but it often
causes more trouble than it solves, because of large SSTables
Hope that helps!
Tom
On Wed, 27 Jun 2018 at 08:02, Lucas Benevides
wrote:
> Hello Community,
>
> Is there a maximum SST
Hello Community,
Is there a maximum SSTable size?
If there is not, does it go up to the maximum Operating System values?
Thanks in advance,
Lucas Benevides
ge/(sometimes called create)" is file metadata changes, and a link
> count change is a metadata change. This seems like an odd decision on the
> part of GNU tar, but presumably there's a good reason for it.
>
> When the original sstable is compacted away, it's removed and ther
.
When the original sstable is compacted away, it's removed and therefore the
link count on the snapshot file is decremented. The file's contents
haven't changed so mtime is identical, but ctime does get updated. BSDtar
doesn't seem to interpret link count changes as a file change, so i
I looked at the source code for GNU tar, and it looks for a change in the
create time or (more likely) a change in the size.
This seems very strange to me — I would think that creating a snapshot would
cause a flush and then once the SSTables are written, hardlinks would be
created and the
I've run across this problem before - it seems like GNU tar interprets
changes in the link count as changes to the file, so if the file gets
compacted mid-backup it freaks out even if the file contents are
unchanged. I worked around it by just using bsdtar instead.
On Thu, May 24, 2018 at 6:08
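A minimal sketch of the bsdtar workaround described above (the snapshot path is a placeholder modeled on the error message elsewhere in this thread):
```
# bsdtar does not treat hard-link count changes as file changes, so a snapshot
# whose source sstable gets compacted away mid-backup no longer aborts the archive
bsdtar -czf backup_20180523.tar.gz \
  /var/lib/cassandra/data/my_keyspace/my_table-*/snapshots/backup_20180523_024502/
```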
Jeff,
Shouldn't a snapshot get a consistent state of sstables? A -tmp file shouldn't
impact the backup operation, right?
Regards,
Nitan K.
Cassandra and Oracle Architect/SME
Datastax Certified Cassandra expert
Oracle 10g Certified
On Wed, May 23, 2018 at 6:26 PM, Jeff Jirsa wrote:
>
In versions before 3.0, sstables were written with a -tmp filename and
copied/moved to the final filename when complete. This changed in 3.0 - we
write into the file with the final name, and have a journal/log to let us
know when it's done/final/live.
Therefore, you can no longer just watch for
Hi Everyone,
We’ve noticed a few times in the last few weeks that when we’re doing backups,
tar has complained with messages like this:
tar:
/var/lib/cassandra/data/mars/test_instances_by_test_id-6a9440a04cc111e8878675f1041d7e1c/snapshots/backup_20180523_024502/mb-63-big-Data.db:
file changed
the compaction is complete, the count
becomes equal.
Regards,
Vishal Sharma
From: kurt greaves [mailto:k...@instaclustr.com]
Sent: Friday, April 20, 2018 12:27 PM
To: User
Subject: Re: SSTable count in Nodetool tablestats(LevelCompactionStrategy)
I'm currently investigating this issue on one
Dear Community,
One of the tables in my keyspace is using LevelCompactionStrategy and when I
used the nodetool tablestats keyspace.table_name command, I found some mismatch
in the count of SSTables displayed at 2 different places. Please refer to the
attached image.
The command is giving SSTable
rtition in two node. So my
>>method to clear expired data doesn't work because of the "overlaps" you
>>mentioned. Is my understanding correct? One more question, nodetool cleanup
>>may work for me, but how does cleanup deal with the sstable files in TWCS mode? I
>>have larg
Hi All,
I changed STCS to TWCS months ago and left some old sstable files. Some are
almost all tombstones. To release disk space, I issued a compaction command on one
file via JMX. After the compaction was done, I got one new file with almost the
same size as the old one. Seems no tombstones
you rename the files to have the matching UUID in the file
> names, then you should be able to do what you are talking about.
>
> On Mar 21, 2018, 4:50 AM -0500, Andrew Voumard <andr...@melbpc.org.au>,
> wrote:
>
> Hi All,
>
> I am using Cassandra 3.10
>
> I would l
Hi All,
I am using Cassandra 3.10
I would like to know if the following SSTable row level merging scenario is
possible:
1. On a Production Cluster
- Take a full snapshot on every node
2. On a new, empty Secondary Cluster with the same topology
- Create a matching schema (keyspaces + tables
>
> Also, I was wondering if the key cache maintains a count of how many local
> accesses a key undergoes. Such information might be very useful for
> compactions of sstables by splitting data by frequency of use so that those
> can be preferentially compacted.
No, we don't currently have metrics
ables.
>
> Has this been exploited... ever? I noticed in some of the patches for the
> archival options on TWCS there are complaints about being able to identify
> sstables that are archived and those that aren't.
>
> I would be interested in order to mark the sstables with metad