Hello Kurt,
I think I might have found the problem:
Can you please run tablehistograms for the table and see if that seems to
be the problem? I think the Max Partition Size and Cell Count are too high:
Percentile  SSTables  Write Latency (micros)  Read Latency (micros)
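For reference, the histogram above comes from the standard nodetool command (the keyspace and table names are taken from the schema shared later in this thread):

```
nodetool tablehistograms hhahistory history
```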
Hi Dipan,
This looks like a really unbalanced data model; you have some very wide
rows!
Can you share your model and explain a bit about what you are storing in this
table? Your partition key might not be appropriate.
On 20 December 2017 at 09:43, Dipan Shah wrote:
> Hello
Hi Dipan,
Your node failure trace said:
java.io.FileNotFoundException:
/home/install/cassandra-3.11.0/data/data/hhahistory/history-065e0c90d9be11e7afbcdfeb48785ac5/mc-19095-big-Filter.db
(Too many open files)
You are probably crossing the maximum number of open files set at the OS level for
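As a quick check, you can compare the per-process limit against what the node actually has open. This is a sketch for the current shell only; the commonly recommended production value of 100000 for `nofile` is from the Cassandra documentation, and your deployment may differ:

```shell
# Show the open-file (nofile) limit for the current shell.
# Cassandra's production guidance is typically to raise this to 100000
# in /etc/security/limits.conf (or the systemd unit's LimitNOFILE).
current_limit=$(ulimit -n)
echo "nofile limit: ${current_limit}"
```

The limit that matters is the one inherited by the Cassandra process itself, which you can verify via `/proc/<pid>/limits` on the running node.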
Hello Adama,
I realised this too and found over 14k files in the data folder.
I am not sure if this is the ideal solution, but I ran a manual compaction over
there and the number of files came down to 200.
I had the same issue on another node, so I am running a compaction there too
and
Hello Nicolas,
Here's our data model:
CREATE TABLE hhahistory.history (
    tablename text,
    columnname text,
    tablekey bigint,
    updateddate timestamp,
    dateyearpart bigint,
    historyid bigint,
    appname text,
    audittype
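The schema is cut off above, but given that a dateyearpart column exists, one hedged sketch of a more balanced model would bucket partitions by table and year so no single partition grows unboundedly. The key choice below is an assumption for illustration, not the poster's actual schema:

```
CREATE TABLE hhahistory.history (
    tablename    text,
    dateyearpart bigint,
    historyid    bigint,
    updateddate  timestamp,
    PRIMARY KEY ((tablename, dateyearpart), historyid)
);
```

Whether a year bucket is fine-grained enough depends on how many history rows one table accumulates per year.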
Hi,
Running a manual compaction is usually not the right thing to do, as you will
end up with some huge sstables that won't be compacted for a while.
You should first try to find out why compactions were not happening on your
cluster, because 14k sstables (I assume you are talking about this
particular
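To find out why compactions stalled, the usual first checks with the standard nodetool commands are:

```
nodetool compactionstats        # pending and active compactions
nodetool tablestats hhahistory  # per-table sstable counts
```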
Somewhere along the line the sstabledump tool incorrectly got set up to use tool
initialization; it's fixed:
https://issues.apache.org/jira/browse/CASSANDRA-13683
Chris
On Tue, Dec 19, 2017 at 5:45 PM, Mounika kale
wrote:
> Hi,
> I'm getting the below error for all sstables
Hi guys,
We have a very big table. When I execute "truncate table" it takes such a
long time that it finally shows "request timeout".
However, if I execute a drop table, it completes very quickly.
I cannot see the big difference, since they both delete the data.
Can anyone explain it to me?
Assuming you’re running 3.0 or 3.x: there’s a patch that’ll be in the next
releases that speeds up truncate significantly. There’s some slowish code in
adding the sstables to the transaction log before deleting them, but it’ll be
much faster.
Truncate marks all the data as removed, and then
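For completeness, the two statements being compared: TRUNCATE deletes all rows but keeps the table definition, while DROP removes the table itself. With auto_snapshot enabled in cassandra.yaml (the default), Cassandra snapshots the data before either operation:

```
TRUNCATE hhahistory.history;    -- deletes all rows, keeps the schema
DROP TABLE hhahistory.history;  -- removes the table entirely
```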
Thanks for the detailed explanation.
> On 21 December 2017 at 11:49 AM, Jeff Jirsa wrote:
>
> Assume you’re running 3.0 or 3.x - there’s a patch that’ll be in the next
> releases that speed up truncate significantly - there’s some slowish code in
> adding the sstables to the transaction log