Hi Robert,

I have tried my proposal on my own Travis setup and it accelerated my build by
9.4% (see the results below).

Test         | Present JVM options | "-Xms256m -Xmx1536m -XX:+UseSerialGC"
-------------|---------------------|--------------------------------------
           1 | 35 min 53 sec       | 35 min 35 sec
           2 | 38 min 49 sec       | 34 min 18 sec
           3 | 35 min 34 sec       | 29 min 38 sec
           4 | 34 min 38 sec       | 31 min 14 sec
           5 | 35 min 41 sec       | 35 min 11 sec
           6 | 36 min 41 sec       | 33 min 52 sec
           7 | 49 min 59 sec       | 31 min 35 sec
           8 | 37 min  0 sec       | 36 min 20 sec
           9 | 32 min 28 sec       | 31 min 48 sec
          10 | 38 min 25 sec       | 33 min 28 sec
          11 | 36 min 19 sec       | 38 min 24 sec
          12 | 25 min  0 sec       | 24 min  3 sec
-------------|---------------------|--------------------------------------
Total        | 26187 sec           | 23726 sec
-------------|---------------------|--------------------------------------
Acceleration |                     | 9.40%
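
That is, over the 12 runs the new options cut (26187 - 23726) / 26187 ≈ 9.4%
off the total build time.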

I think almost 10% is good enough.
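
For anyone who wants to reproduce this, the idea is roughly the following
surefire configuration. The exact place where the fork options are set in the
Flink parent pom may differ, and the forkCount / reuseForks values here are
just illustrative, so treat it as a sketch rather than the final patch:

    <plugin>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-surefire-plugin</artifactId>
      <configuration>
        <!-- illustrative fork settings: one forked JVM per core, reused across tests -->
        <forkCount>1C</forkCount>
        <reuseForks>true</reuseForks>
        <!-- fixed heap and the serial collector for every forked test JVM -->
        <argLine>-Xms256m -Xmx1536m -XX:+UseSerialGC</argLine>
      </configuration>
    </plugin>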

-----Original Message-----
From: Robert Metzger [mailto:rmetz...@apache.org] 
Sent: Thursday, March 16, 2017 6:26 PM
To: dev@flink.apache.org
Subject: Re: [DISCUSS] Could we Improve tests time and stability?

Hi Dmytro,

I'm happy to hear that you are trying to help us improve the test time
situation :) We have another discussion here on the dev@ list about splitting
the project into two git repositories to resolve the problem.

I agree that your proposed changes could improve the build times, but I'm not
sure they are enough to resolve the problem for good. Some tests just waste
time waiting for stuff to happen :) If you want, you can enable Travis for your
own Flink fork on GitHub, add your proposed changes to the Travis / Maven files
and see how much they improve the build time.
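
As a quick way to experiment (assuming the surefire argLine is not hard-coded
in the plugin configuration, so that it can still be overridden as a user
property), something like this in the Travis build command might already be
enough for a comparison:

    # hypothetical one-off override of the forked test JVM options
    mvn clean verify -DargLine="-Xms256m -Xmx1536m -XX:+UseSerialGC"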


On Thu, Mar 16, 2017 at 5:06 PM, Dmytro Shkvyra <dmytro_shkv...@epam.com>
wrote:

> Hi everyone,
> Maybe we should remove the -XX:-UseGCOverheadLimit option from the
> maven-surefire-plugin args and increase -Xmx to 1536m for the forks?
> We have about 4 GB of RAM and 2 cores on the test VMs, so I think we can make
> the tests run faster than they do now. When I tried testing flink-runtime,
> some tests ran too slowly due to GC overhead.
> Maybe you have also run into the problem of Travis builds failing with a timeout?
> We could also choose the GC algorithm explicitly for the fork execution.
> BTW, we run the tests with Java 7 and 8, and these versions use different GC
> algorithms by default (G1 for 8 and Parallel GC for 7). IMHO, when we have
> strict limits on RAM and build time, we should avoid any ambiguity.
> When some tests generate very big datasets very fast, the parallel GC may not
> have time to clean up. I do not know exactly how G1 behaves in this case, but
> maybe it would be better to use the good old -XX:+UseSerialGC. We only have
> 1 core per fork, so we cannot take advantage of G1 and the parallel GC anyway.
> If we use the serial GC (which stops the world), we can be sure that the GC
> has collected almost all garbage before the test continues.
> What do you think about this idea?
> Maybe someone has other ideas on how to improve test time and stability?
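>
> A quick way to check which collector a particular JDK picks by default is
> something like the following (the flag prints the ergonomically chosen
> options, including the selected GC, next to the version output):
>
>   java -XX:+PrintCommandLineFlags -version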
>
>
> Dmytro Shkvyra
> Senior Software Engineer
>
> Office: +380 44 390 5457 x 65346
> Cell: +380 50 357 6828
> Email: dmytro_shkv...@epam.com
> Kyiv, Ukraine (GMT+3)
> epam.com
>
