I have one assigned to me [1]. test_permanent_udfs.py is known to be slow
as it invokes the "hive" shell a number of times, and each invocation spends
a ton of time (re)loading jars. It was done that way since there were a few
bugs in Hive's "show functions" when run via beeline in the same session. The
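For what it's worth, one way to cut that per-invocation cost would be to
batch statements into a single "hive -e" call so the JVM and jars are loaded
only once. A rough Python sketch (the statements are placeholders, and this
hasn't been tested against our harness):

    # Hypothetical sketch: batch statements into one hive invocation
    # instead of spawning a fresh shell (and reloading jars) per statement.
    import subprocess

    def run_hive_batch(statements):
        # "hive -e" accepts multiple semicolon-separated statements, so the
        # JVM startup and jar loading are paid once for the whole batch.
        script = ";\n".join(statements) + ";"
        subprocess.check_call(["hive", "-e", script])

    run_hive_batch([
        "SHOW FUNCTIONS",
        "DROP FUNCTION IF EXISTS test_udf",
    ])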
BTW I did see Alex's comment on the related thread about this being
CPU time rather than response time, but given how frequently the
analytic fn tests show up on that list, it seems fair to assume they're
contributing a fair amount to the response time. It'll have to be
tested, of course.
Filed
Thanks David, this is interesting.
I'll put up a patch to remove some of the tested file formats for
analytic functions, since it shouldn't really matter too much, unless
there are differences in how quickly rows are produced for the analytic
functions, but I'm not sure that's the right way to get that
I always have slowness issues with gerrit, so I just took a quick look at
the httpd config and noticed that KeepAlive was off. I turned it on, with a
low KeepAlive timeout, and also bumped the max number of workers (since
keepalive connections hold workers open).
Ideally we'd switch to a better
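For reference, the directives involved look roughly like this (illustrative
values, not the exact ones I set):

    # httpd.conf (Apache 2.4) -- illustrative values only.
    # Re-enable persistent connections, with a short idle timeout so
    # keepalive connections don't pin workers for long.
    KeepAlive On
    KeepAliveTimeout 2
    MaxKeepAliveRequests 100
    # Raise the worker ceiling to compensate for held connections.
    MaxRequestWorkers 400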
I remember trying this some time ago, and the way it interacted with
xdist meant that the time reported was CPU time, but in most cases we care
about response time.
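If it still reports CPU time under xdist, a rough workaround (just a sketch,
not tested with our setup) would be an autouse fixture that logs wall-clock
time for each test:

    # conftest.py -- hypothetical sketch: record wall-clock time per test,
    # independently of how --durations measures it under xdist.
    import time
    import pytest

    @pytest.fixture(autouse=True)
    def log_wall_clock(request):
        start = time.monotonic()  # wall clock, not CPU time
        yield
        elapsed = time.monotonic() - start
        print("%s: %.2fs wall clock" % (request.node.nodeid, elapsed))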
On Tue, Feb 21, 2017 at 11:57 AM, David Knupp wrote:
> The information in the docs is scant:
The information in the docs is scant:
duration profiling: new "--durations=N" option showing the N slowest
test execution or setup/teardown calls. This is most useful if you want
to find out where your slowest test code is.
My guess is that this is wall clock time, from test setup to
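For anyone who wants to try it locally, it's just a flag on the pytest
command line; a toy example (the file and test names are made up):

    # test_slow_example.py -- toy file to demonstrate the flag.
    # Run with:  pytest --durations=5 test_slow_example.py
    import time

    def test_fast():
        assert 1 + 1 == 2

    def test_slow():
        time.sleep(2)  # stand-in for a genuinely slow test body
        assert True

pytest then prints a "slowest durations" section at the end, broken down
into setup/call/teardown phases, which should also let us answer the
wall-clock question empirically.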
Did you run . bin/set-classpath.sh before running expr-benchmark?
On 21 February 2017 at 11:30, Zachary Amsden wrote:
> Unfortunately some of the benchmarks have actually bit-rotted. For
> example, expr-benchmark compiles but immediately throws JNI exceptions.
>
> On Tue,
Unfortunately some of the benchmarks have actually bit-rotted. For
example, expr-benchmark compiles but immediately throws JNI exceptions.
On Tue, Feb 21, 2017 at 10:55 AM, Marcel Kornacker wrote:
> I'm also in favor of not compiling it on the standard commandline.
Slowest Exhaustive Parallel Tests
[table omitted; columns: TIME, TABLE FORMAT]
Is this response time or CPU time?
On Tue, Feb 21, 2017 at 11:29 AM, David Knupp wrote:
> I just discovered on Friday that pytest allows for a --durations=N
> parameter, which will output the time it takes to execute the slowest N
> tests to the console, so I ran both a
I just discovered on Friday that pytest allows for a --durations=N
parameter, which will output the time it takes to execute the slowest N
tests to the console, so I ran both a core and exhaustive private build
over the weekend, and clocked the slowest 50 for each grouping of tests.
These are
I'm also in favor of not compiling it on the standard commandline.
However, I'm very much against allowing the benchmarks to bitrot. As
was pointed out, those benchmarks can be valuable tools during
development, and keeping them in working order shouldn't really impact
the development process.
I think -notests already skips the benchmarks. However, I understood the
proposal to be to disable building the benchmarks even without
-notests, i.e. they'd be disabled by default and you'd need to specify
-build_benchmarks to build them.
I'm in favor of doing that, including building them
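Just to make the shape of that concrete, a hypothetical CMake-level sketch
(the BUILD_BENCHMARKS option name and benchmarks/ directory are made up for
illustration, not Impala's actual build wiring):

    # CMakeLists.txt -- hypothetical sketch only.
    # Gate the benchmark targets behind an off-by-default option, which a
    # -build_benchmarks flag in the build script could flip on.
    option(BUILD_BENCHMARKS "Build the micro-benchmarks" OFF)
    if(BUILD_BENCHMARKS)
      add_subdirectory(benchmarks)
    endif()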
+1 for not compiling the benchmarks in -notests
On Mon, Feb 20, 2017 at 7:55 PM, Jim Apple wrote:
> > On which note, would anyone object if we disabled benchmark compilation by
> > default when building the BE tests? I mean separating out -notests into
> > -notests and