[
https://issues.apache.org/jira/browse/CASSANDRA-13005?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15735819#comment-15735819
]
Christian Esken edited comment on CASSANDRA-13005 at 12/9/16 5:13 PM:
----------------------------------------------------------------------
I have imported some of the old defective SSTables into a test installation via
sstableloader:
{code}
# sstableloader -d 127.0.0.1 cachestore/entries
Established connection to initial hosts
Opening sstables and calculating sections to stream
Streaming relevant part of /home/cesken/cachestore/entries/mc-50789-big-Data.db
/home/cesken/cachestore/entries/mc-51223-big-Data.db
/home/cesken/cachestore/entries/mc-51351-big-Data.db to [/127.0.0.1]
progress: [/127.0.0.1]0:0/3 0 % total: 0% 3,152MiB/s (avg: 3,152MiB/s)
progress: [/127.0.0.1]0:0/3 0 % total: 0% 1,908GiB/s (avg: 6,294MiB/s)
progress: [/127.0.0.1]0:0/3 0 % total: 0% 1,599GiB/s (avg: 9,423MiB/s)
[...]
progress: [/127.0.0.1]0:2/3 99 % total: 99% 3,177MiB/s (avg: 6,227MiB/s)
progress: [/127.0.0.1]0:3/3 100% total: 100% 3,436MiB/s (avg: 6,214MiB/s)
progress: [/127.0.0.1]0:3/3 100% total: 100% 0,000KiB/s (avg: 6,102MiB/s)
Summary statistics:
Connections per host : 1
Total files transferred : 3
Total bytes transferred : 3,783GiB
Total duration : 634779 ms
Average transfer rate : 6,102MiB/s
Peak transfer rate : 9,423MiB/s
{code}
As seen above, the three files were loaded, but Cassandra did not import any
rows, probably because the files are defective or because everything in them
has already expired. A SELECT on the table also returns no data.
{code}
# sstableexpiredblockers cachestore entries
No sstables for cachestore.entries
{code}
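For reference, the SELECT check mentioned above would be of roughly this form; the keyspace and table names come from the sstableloader paths, while the exact statement and the LIMIT are illustrative assumptions rather than the literal query that was run:
{code}
-- Illustrative sketch of the "SELECT returns no data" check described above.
-- cachestore.entries is taken from the paths above; the LIMIT is arbitrary.
SELECT * FROM cachestore.entries LIMIT 10;
{code}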
> Cassandra TWCS is not removing fully expired tables
> ---------------------------------------------------
>
> Key: CASSANDRA-13005
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13005
> Project: Cassandra
> Issue Type: Bug
> Components: Compaction
> Environment: Cassandra 3.0.9
> Java HotSpot(TM) 64-Bit Server VM version 25.112-b15 (Java version
> 1.8.0_112-b15)
> Linux 3.16
> Reporter: Christian Esken
> Labels: twcs
> Attachments: sstablemetadata-empty-type-that-is-3GB.txt
>
>
> I have a table where all columns are stored with a TTL of at most 4 hours.
> Usually TWCS compaction properly removes expired data via tombstone
> compaction and also removes fully expired SSTables; the number of SSTables
> has been nearly constant for weeks. Good. (A CQL sketch of such a table
> follows after this issue description.)
> The problem: Suddenly TWCS no longer removes old SSTables. They are being
> recreated frequently (judging from the file creation timestamps), but the
> number of SSTables keeps growing. Analysis and actions taken so far:
> - sstablemetadata shows strange data, as if the table were completely empty.
> - sstabledump throws an Exception when run on such an SSTable.
> - Even triggering a manual major compaction does not remove the old
> SSTables. To be more precise: they are recreated with a new id and timestamp
> (not sure whether they are identical, as I cannot inspect the content due to
> the sstabledump crash).