[ https://issues.apache.org/jira/browse/CASSANDRA-18131?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17707718#comment-17707718 ]
Michael Semb Wever commented on CASSANDRA-18131:
------------------------------------------------
[~mmuzaf], before I commit, I'm thinking about this…
bq. ~ 45 min for the perThreadTrees = 500
The longest split we currently have in Cassandra-trunk-test-burn is ~30 minutes.
The burn tests here are intended only to verify that they run, not to
actually perform the burn (there's no point on non-dedicated servers and
heterogeneous agents), so short runs are fine, so long as it's easy and
obvious that developers need to parameterise them properly for a real burn.
Can you sensibly reduce it to ~5 minutes when running on the ci-cassandra.a.o
agents?
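For illustration, a minimal sketch of what I mean (this is not the actual
LongBTreeTest code; the property name and the CI default are assumptions):

{code:java}
// Hypothetical sketch: default to a short CI-friendly run, while letting
// developers restore the full burn via a system property. The property name
// "cassandra.test.burn.perThreadTrees" and the default of 50 are illustrative
// assumptions, not the test's actual parameters.
public final class BurnParamsSketch
{
    // ~45 min was reported with perThreadTrees = 500, so a default an order
    // of magnitude lower should land near the requested ~5 minutes on CI.
    static final int PER_THREAD_TREES =
        Integer.getInteger("cassandra.test.burn.perThreadTrees", 50);

    public static void main(String[] args)
    {
        System.out.println("perThreadTrees = " + PER_THREAD_TREES);
    }
}
{code}

A developer running a real burn would then pass something like
-Dcassandra.test.burn.perThreadTrees=500 on the JVM command line, while the
CI agents get the short run with no extra configuration.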
> LongBTreeTest times out after btree improvements from CASSANDRA-15510
> ---------------------------------------------------------------------
>
> Key: CASSANDRA-18131
> URL: https://issues.apache.org/jira/browse/CASSANDRA-18131
> Project: Cassandra
> Issue Type: Bug
> Components: Local/Memtable
> Reporter: Michael Semb Wever
> Assignee: Maxim Muzafarov
> Priority: Normal
> Fix For: 4.0.x, 4.1.x, 5.x
>
> Time Spent: 50m
> Remaining Estimate: 0h
>
> Happening in both ci-cassandra.a.o and circleci.
> LongBTreeTest is timing out on the 4.0, 4.1, and trunk branches.
> It started back in mid-April
> (https://github.com/apache/cassandra/commit/018c8e0d5e and
> https://github.com/apache/cassandra/commit/596daeb7f08).
> The nightlies show when the failures started, as evidenced by the missing
> 'jdk=jdk_1.8_latest,label=cassandra,split=7/' subfolder in the
> following…
> - https://nightlies.apache.org/cassandra/trunk/Cassandra-trunk-test-burn/1254/Cassandra-trunk-test-burn/
> - https://nightlies.apache.org/cassandra/cassandra-4.0/Cassandra-4.0-test-burn/343/