[ https://issues.apache.org/jira/browse/CASSANDRA-16604?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17331303#comment-17331303 ]

Michael Semb Wever commented on CASSANDRA-16604:
------------------------------------------------

Added a commit to the patch.

ARM test: 
[trunk|https://ci-cassandra.apache.org/view/patches/job/Cassandra-devbranch-test-parallel/46]

> Parallelise docker container runs for tests in ci-cassandra.a.o
> ---------------------------------------------------------------
>
>                 Key: CASSANDRA-16604
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-16604
>             Project: Cassandra
>          Issue Type: Task
>          Components: Test/unit
>            Reporter: Michael Semb Wever
>            Assignee: Michael Semb Wever
>            Priority: Normal
>             Fix For: 2.2.x, 3.0.x, 3.11.x, 4.0.x
>
>
> This was raised on the dev ML, where the consensus was to remove ant test 
> parallelism: 
> https://lists.apache.org/thread.html/r1ca3c72b90fa6c57c1cb7dcd02a44221dcca991fe7392abd8c29fe95%40%3Cdev.cassandra.apache.org%3E
> The idea is then to replace ant test parallelism with docker container 
> parallelism.
> PoC patch: 
> https://github.com/apache/cassandra-builds/compare/trunk...thelastpickle:mck/16587-2/trunk
> This is just a quick PoC, aimed at the ci-cassandra agents, which have
> 4 cores and 16gb ram available to each executor. Instead, I imagine
> something that spawns a number of containers based on system
> resources, like we currently do with get-cores and get-mem. 
> Also worth noting the overhead: compared with the ant parallelism 
> approach, docker builds everything in each container from scratch,
> but this too can be improved easily enough.
> Cleaning up any remnant `-Dtest.runners=` options is also part of this ticket.
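For illustration, the resource-based spawning described above could look something like the sketch below. `compute_containers` is a hypothetical helper (not code from cassandra-builds), and the 4-core/16gb per-container figures are taken from the executor sizing mentioned in the description; the existing get-cores and get-mem helpers would supply the host values.

```shell
#!/bin/sh
# Hypothetical sketch: derive how many test containers to run in parallel
# from the host's cores and memory, assuming each container needs roughly
# 4 cores and 16 GB (the per-executor sizing on the ci-cassandra agents).
compute_containers() {
    cores=$1
    mem_gb=$2
    by_cpu=$(( cores / 4 ))    # containers the CPUs can sustain
    by_mem=$(( mem_gb / 16 ))  # containers the memory can sustain
    n=$by_cpu
    [ "$by_mem" -lt "$n" ] && n=$by_mem
    [ "$n" -lt 1 ] && n=1      # always run at least one container
    echo "$n"
}

# Example: a 16-core, 64 GB agent would run 4 containers in parallel.
compute_containers 16 64
```

Taking the minimum of the two ratios keeps whichever resource is scarcer from being oversubscribed, which matters more for docker parallelism than for ant's in-JVM runners.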



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
