It was on my to-do list as well. I recall I ran the tests without increasing
the limits.

Which test requires 50000 open files? Maybe the one that is currently disabled?

Also, it does not look like a good pattern to open so many files. If we really
use that many files, shouldn't we revisit that area?

On Sun, Jun 13, 2021, 4:02 PM Tim Duesterhus <t...@bastelstu.be> wrote:

> Using 'sudo' required quite a few workarounds in various places. Setting an
> explicit 'ulimit -n' removes the requirement for 'sudo', resulting in a
> cleaner
> workflow configuration.
> ---
>  .github/workflows/vtest.yml | 11 ++++++-----
>  1 file changed, 6 insertions(+), 5 deletions(-)
>
> diff --git a/.github/workflows/vtest.yml b/.github/workflows/vtest.yml
> index 786713374..1dc216eeb 100644
> --- a/.github/workflows/vtest.yml
> +++ b/.github/workflows/vtest.yml
> @@ -99,13 +99,14 @@ jobs:
>        run: echo "::add-matcher::.github/vtest.json"
>      - name: Run VTest for HAProxy ${{ steps.show-version.outputs.version }}
>        id: vtest
> -      # sudo is required, because macOS fails due to an open files limit.
> -      run: sudo make reg-tests VTEST_PROGRAM=../vtest/vtest REGTESTS_TYPES=default,bug,devel
> +      run: |
> +        # This is required for macOS which does not actually allow to increase
> +        # the '-n' soft limit to the hard limit, thus failing to run.
> +        ulimit -n 5000
> +        make reg-tests VTEST_PROGRAM=../vtest/vtest REGTESTS_TYPES=default,bug,devel
>      - name: Show results
>        if: ${{ failure() }}
> -      # The chmod / sudo is necessary due to the `sudo` while running the tests.
>        run: |
> -        sudo chmod a+rX ${TMPDIR}/haregtests-*/
>          for folder in ${TMPDIR}/haregtests-*/vtc.*; do
>            printf "::group::"
>            cat $folder/INFO
> @@ -115,6 +116,6 @@ jobs:
>          shopt -s nullglob
>          for asan in asan.log*; do
>            echo "::group::$asan"
> -          sudo cat $asan
> +          cat $asan
>            echo "::endgroup::"
>          done
> --
> 2.32.0
>
>