On Fri, Dec 07, 2018 at 12:10:05AM +0100, SZEDER Gábor wrote:
> On Wed, Dec 05, 2018 at 04:56:21PM -0500, Jeff King wrote:
> > Could we just kill them all?
> >
> > I guess it's a little tricky, because $! is going to give us the pid of
> > each subshell. We actually want to kill the test sub-process.
On Thu, Dec 06, 2018 at 11:56:01PM +0100, SZEDER Gábor wrote:
> > +test_expect_success 'roll those dice' '
> > + case "$(openssl rand -base64 1)" in
> > + z*)
> > + return 1
> > + esac
> > +'
>
> Wasteful :)
>
> test $(($$ % 42)) -ne 0
Oh, indeed, that is much more clever. :)
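As an illustration only (not an actual test from the suite), such a fake
flaky test built from the snippets above could look like this:

test_expect_success 'pretend to be flaky' '
	# fails in roughly one out of every 42 runs, keyed on the shell PID
	test $(($$ % 42)) -ne 0
'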
On Wed, Dec 05, 2018 at 04:56:21PM -0500, Jeff King wrote:
> Could we just kill them all?
>
> I guess it's a little tricky, because $! is going to give us the pid of
> each subshell. We actually want to kill the test sub-process. That takes
> a few contortions, but the output is nice (every sub-job
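Not the actual patch, but one possible shape of that contortion: start each
job in its own process group with setsid(1) (not POSIX, hence only a sketch),
remember the PIDs, and signal the groups from the INT trap so the test
sub-processes are killed too, not just the outer subshells. The test script
name is a placeholder and $job_count is assumed to be set:

job_pids=
trap '
	for pid in $job_pids
	do
		kill -- -"$pid" 2>/dev/null
	done
	wait
	exit 1
' INT

job_nr=0
while test $job_nr -lt "$job_count"
do
	setsid sh -c "./t0000-basic.sh --verbose-log" &
	job_pids="$job_pids $!"
	job_nr=$(($job_nr + 1))
done

wait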
On Wed, Dec 05, 2018 at 04:36:26PM -0500, Jeff King wrote:
> The signal interrupts the first wait.
Ah, of course. I'm ashamed to say that this is not the first time I've
forgotten about that...
> > Bash 4.3 or later are strange: I get back the shell prompt immediately
> > after ctrl-C as well, so it d
Jeff King writes:
> Each "wait" will try to collect all processes, but may be interrupted by
> a signal. So the correct number is actually "1 plus the number of times
> the user hits ^C".
Yeah, and that is not bounded. It is OK not to catch multiple ^C
that race with what we do, so having an e
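One sketch of "keep waiting until a wait finally finishes uninterrupted",
however many times ^C is hit; the 'interrupted' flag is purely illustrative,
not what the patch does:

interrupted=
trap 'interrupted=yes' INT
while :
do
	interrupted=
	wait
	# if the trap fired, this wait was cut short; go back and wait again
	test -n "$interrupted" || break
done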
On Thu, Dec 06, 2018 at 09:22:23AM +0900, Junio C Hamano wrote:
> > So the right number of waits is either "1" or "2". Looping means we do
> > too many (which is mostly a harmless noop) or too few (in the off chance
> > that you have only a single job ;) ). So it works out in practice.
>
> Well,
Jeff King writes:
> But the ^C case is interesting. Try running your script under "sh -x"
> and hitting ^C. The signal interrupts the first wait. In my script (or
> yours modified to use a single wait), we then proceed immediately to the
> "exit", and get output from the subshells after we've exited.
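A toy reproduction of that effect, unrelated to the real script: run it,
hit ^C, then try again with the 'wait' removed from the trap:

#!/bin/sh
# hit ^C and watch where the "job N exiting" lines end up
for i in 1 2 3
do
	(
		trap 'sleep 1; echo "job $i exiting"' INT
		sleep 60
	) &
done
# drop the 'wait' from this trap and the jobs print after the prompt is back
trap 'echo interrupted; wait; exit 1' INT
wait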
On Wed, Dec 05, 2018 at 03:01:06PM +0100, SZEDER Gábor wrote:
> > > - Make '--stress' imply '--verbose-log' and discard the test's
> > > standard output and error; dumping the output of several parallel
> > > tests to the terminal would create a big ugly mess.
> >
> > Makes sense. My scr
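In the option handling that could look roughly like this sketch, where the
'stress' variable name is an assumption rather than the real implementation:

if test -n "$stress"
then
	verbose_log=t
	# several jobs writing to the terminal would only garble it
	exec >/dev/null 2>&1
fi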
On Wed, Dec 05, 2018 at 11:34:54AM +0100, SZEDER Gábor wrote:
>
> Just a quick reply to this one point for now:
>
> On Wed, Dec 05, 2018 at 12:44:09AM -0500, Jeff King wrote:
> > On Tue, Dec 04, 2018 at 05:34:57PM +0100, SZEDER Gábor wrote:
> > > + job_nr=0
> > > + while test $job_nr -lt "$job_count"
On Wed, Dec 05 2018, SZEDER Gábor wrote:
> On Wed, Dec 05, 2018 at 03:01:41PM +0100, Ævar Arnfjörð Bjarmason wrote:
>> >> decide to stress test in advance, since we'd either flock() the trash
>> >> dir, or just mktemp(1)-it.
>> >
>> > While 'mktemp' seems to be more portable than 'flock', it doesn't seem
>> > to be portable enough; at least it's
On Wed, Dec 05, 2018 at 03:01:41PM +0100, Ævar Arnfjörð Bjarmason wrote:
> >> decide to stress test in advance, since we'd either flock() the trash
> >> dir, or just mktemp(1)-it.
> >
> > While 'mktemp' seems to be more portable than 'flock', it doesn't seem
> > to be portable enough; at least it's
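For reference, the mktemp(1) variant would look something like this sketch
(the template and error handling are made up; as noted, 'mktemp -d' is
common but not universally available):

# create a unique trash directory for this run instead of the fixed name
TRASH_DIRECTORY=$(mktemp -d "trash directory.$(basename "$0" .sh).XXXXXX") ||
	exit 1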
On Wed, Dec 05 2018, SZEDER Gábor wrote:
> On Tue, Dec 04, 2018 at 07:11:08PM +0100, Ævar Arnfjörð Bjarmason wrote:
>> It's a frequent annoyance of mine in the test suite that I'm
>> e.g. running t*.sh with some parallel "prove" in one screen, and then I
>> run tABCD*.sh manually, and get unlucky because they use the same trash
On Wed, Dec 05, 2018 at 12:44:09AM -0500, Jeff King wrote:
> On Tue, Dec 04, 2018 at 05:34:57PM +0100, SZEDER Gábor wrote:
>
> > To prevent the several parallel invocations of the same test from
> > interfering with each other:
> >
> > - Include the parallel job's number in the name of the trash
> > directory and the various output files under 't/test-results/'
On Tue, Dec 04, 2018 at 07:11:08PM +0100, Ævar Arnfjörð Bjarmason wrote:
> It's a frequent annoyance of mine in the test suite that I'm
> e.g. running t*.sh with some parallel "prove" in one screen, and then I
> run tABCD*.sh manually, and get unlucky because they use the same trash
> dir, and both
Just a quick reply to this one point for now:
On Wed, Dec 05, 2018 at 12:44:09AM -0500, Jeff King wrote:
> On Tue, Dec 04, 2018 at 05:34:57PM +0100, SZEDER Gábor wrote:
> > + job_nr=0
> > + while test $job_nr -lt "$job_count"
> > + do
> > + wait
> > + job_nr=$(($job_nr + 1))
> > + done
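For context, a sketch of the surrounding loop under discussion; 'run_one_job'
is a placeholder rather than a real helper, and as noted above a single
plain 'wait' would already collect every background job:

job_nr=0
while test $job_nr -lt "$job_count"
do
	run_one_job "$job_nr" &    # placeholder for starting one stress job
	job_nr=$(($job_nr + 1))
done

# one plain 'wait' blocks until every background child has exited
wait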
On Tue, Dec 04, 2018 at 07:11:08PM +0100, Ævar Arnfjörð Bjarmason wrote:
> It's a frequent annoyance of mine in the test suite that I'm
> e.g. running t*.sh with some parallel "prove" in one screen, and then I
> run tABCD*.sh manually, and get unlucky because they use the same trash
> dir, and both
On Tue, Dec 04, 2018 at 06:04:14PM +0100, Ævar Arnfjörð Bjarmason wrote:
>
> On Tue, Dec 04 2018, SZEDER Gábor wrote:
>
> > The number of parallel invocations is determined by, in order of
> > precedence: the number specified as '--stress=', or the value of
> > the GIT_TEST_STRESS_LOAD environment variable, or twice the number of
> > available processors in '/proc/cpuinfo', or 8.
On Tue, Dec 04, 2018 at 05:34:57PM +0100, SZEDER Gábor wrote:
> To prevent the several parallel invocations of the same test from
> interfering with each other:
>
> - Include the parallel job's number in the name of the trash
> directory and the various output files under 't/test-results/'
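In other words, something along these lines; the variable names here are
only for the sake of the sketch:

# give each parallel job its own trash directory and result files
TRASH_DIRECTORY="trash directory.$TEST_NAME.stress-$job_nr"
TEST_RESULTS_BASE="$TEST_OUTPUT_DIRECTORY/test-results/$TEST_NAME.stress-$job_nr"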
On Tue, Dec 04 2018, SZEDER Gábor wrote:
> Unfortunately, we have a few flaky tests, whose failures tend to be
> hard to reproduce. We've found that the best we can do to reproduce
> such a failure is to run the test repeatedly while the machine is
> under load, and wait in the hope that the load creates enough variance
> in the timing of the test
On Tue, Dec 04, 2018 at 06:04:14PM +0100, Ævar Arnfjörð Bjarmason wrote:
>
> On Tue, Dec 04 2018, SZEDER Gábor wrote:
>
> > The number of parallel invocations is determined by, in order of
> > precedence: the number specified as '--stress=', or the value of
> > the GIT_TEST_STRESS_LOAD environment variable, or twice the number of
> > available processors in '/proc/cpuinfo', or 8.
On Tue, Dec 04 2018, SZEDER Gábor wrote:
> The number of parallel invocations is determined by, in order of
> precedence: the number specified as '--stress=', or the value of
> the GIT_TEST_STRESS_LOAD environment variable, or twice the number of
> available processors in '/proc/cpuinfo', or 8.
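Spelled out as shell, that precedence order would look roughly like this
(variable names other than GIT_TEST_STRESS_LOAD are assumptions):

if test -n "$stress_jobs"
then
	# from --stress=<N>
	job_count=$stress_jobs
elif test -n "$GIT_TEST_STRESS_LOAD"
then
	job_count=$GIT_TEST_STRESS_LOAD
else
	nproc=$(grep -c ^processor /proc/cpuinfo 2>/dev/null)
	job_count=$((2 * ${nproc:-4}))    # falls back to 2 * 4 = 8
fi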
Unfortunately, we have a few flaky tests, whose failures tend to be
hard to reproduce. We've found that the best we can do to reproduce
such a failure is to run the test repeatedly while the machine is
under load, and wait in the hope that the load creates enough variance
in the timing of the test