Hi Frank,

On Thu, 2021-12-09 at 19:10 -0500, Frank Ch. Eigler via Elfutils-devel wrote:
> > So I would recommend to simply add a testcase that just uses no-option,
> > -C and -C 256 (or take any arbitrary number, 1 might be an interesting
> > corner case) and see that if you have 4 (8, 16, ...?) debuginfod-find
> > (or curl metric) processes doing some parallel queries works as
> > expected.
>
> OK, here's the current version, with a test, and with one more small
> improvement to the error message printer (to add timestamp & batching).
Looks good, but two nitpicks.

> -
> +static void // error logging callback from libmicrohttpd internals
> +error_cb (void *arg, const char *fmt, va_list ap)
> +{
> +  (void) arg;
> +  inc_metric("error_count","libmicrohttpd",fmt);
> +  char errmsg[512];
> +  (void) vsnprintf (errmsg, sizeof(errmsg), fmt, ap); // ok if slightly truncated
> +  obatched(cerr) << "libmicrohttpd error: " << errmsg; // MHD_DLOG calls already include \n
> +}

That vsnprintf and truncation comment had me scared for a minute. But
vsnprintf always terminates the string with a '\0' so that should be
fine.

> diff --git a/tests/ChangeLog b/tests/ChangeLog

> diff --git a/tests/debuginfod-subr.sh b/tests/debuginfod-subr.sh
> index 59033f35980e..0b59b5b87e55 100755
> --- a/tests/debuginfod-subr.sh
> +++ b/tests/debuginfod-subr.sh
> @@ -158,3 +158,13 @@ PORT1=0
>  PORT2=0
>  PID1=0
>  PID2=0
> +
> +
> +# run $1 as a sh -c command, invert result code
> +xfail() {
> +  if sh -c "$1"; then
> +    false
> +  else
> +    true
> +  fi
> +}

Aha, nice. We should use that in other tests too.

> diff --git a/tests/run-debuginfod-webapi-concurrency.sh b/tests/run-debuginfod-webapi-concurrency.sh
> new file mode 100755
> index 000000000000..513f834f40d5
> --- /dev/null
> +++ b/tests/run-debuginfod-webapi-concurrency.sh
> @@ -0,0 +1,60 @@
> +#!/usr/bin/env bash
> +#
> +# Copyright (C) 2021 Red Hat, Inc.
> +# This file is part of elfutils.
> +#
> +# This file is free software; you can redistribute it and/or modify
> +# it under the terms of the GNU General Public License as published by
> +# the Free Software Foundation; either version 3 of the License, or
> +# (at your option) any later version.
> +#
> +# elfutils is distributed in the hope that it will be useful, but
> +# WITHOUT ANY WARRANTY; without even the implied warranty of
> +# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
> +# GNU General Public License for more details.
> +#
> +# You should have received a copy of the GNU General Public License
> +# along with this program.  If not, see <http://www.gnu.org/licenses/>.
> +
> +. $srcdir/debuginfod-subr.sh
> +
> +# for test case debugging, uncomment:
> +set -x
> +
> +mkdir Z
> +# This variable is essential and ensures no time-race for claiming ports occurs
> +# set base to a unique multiple of 100 not used in any other 'run-debuginfod-*' test
> +base=12000
> +get_ports
> +
> +export DEBUGINFOD_CACHE_PATH=${PWD}/.client_cache
> +
> +cp -rvp ${abs_srcdir}/debuginfod-tars Z
> +tempfiles Z
> +
> +
> +for Cnum in "" "-C" "-C10" "-C100"
> +do
> +  env LD_LIBRARY_PATH=$ldpath ${abs_builddir}/../debuginfod/debuginfod $VERBOSE "$Cnum" -d :memory: -Z .tar.xz -Z .tar.bz2=bzcat -p $PORT1 -t0 -g0 -v --fdcache-fds=0 --fdcache-prefetch-fds=0 Z >> vlog$PORT1 2>&1 &
> +  PID1=$!
> +  tempfiles vlog$PORT1
> +  errfiles vlog$PORT1
> +
> +  wait_ready $PORT1 'ready' 1
> +  # Wait for server to finish indexing
> +  wait_ready $PORT1 'thread_work_total{role="traverse"}' 1
> +  wait_ready $PORT1 'thread_work_pending{role="scan"}' 0
> +  wait_ready $PORT1 'thread_busy{role="scan"}' 0
> +
> +  # Do a bunch of lookups in parallel
> +  for jobs in `seq 400`; do
> +    curl -s http://localhost:$PORT1/buildid/cee13b2ea505a7f37bd20d271c6bc7e5f8d2dfcb/debuginfo > /dev/null &
> +  done
> +  (sleep 5; curl -s http://localhost:$PORT1/metrics | egrep 'error|transfers'; kill $PID1) &
> +  wait
> +  PID1=0
> +done

Launching 400 processes at once seems a lot. But maybe machines these
days are fine with that. I would say that 50 should also do the trick.
But maybe I just have wimpy machines :)

Is there a more elegant way to check all requests have been processed
than a sleep 5 followed by a metric scrape for errors? Is there a
metric that gives the number of successful queries that we could use
instead with wait_ready?

> +xfail "grep Server.reached.connection vlog$PORT1" # PR18661
> +
> +exit 0

Cheers,

Mark