Re: Cheap e2e test timing to find slowest tests (strategy = exhaustive)

2017-02-21 Thread Bharath Vissapragada
I have one assigned to me [1]. test_permanent_udfs.py is known to be slow, as it invokes the "hive" shell a number of times, and that spends a ton of time (re)loading jars. It was done that way since there were a few bugs in Hive's "show functions" when running it via beeline in the same session. The

Re: Cheap e2e test timing to find slowest tests (strategy = exhaustive)

2017-02-21 Thread Matthew Jacobs
BTW I did see Alex's comment on the related thread about this being CPU time rather than response time, but given how frequently the analytic fn tests show up on that list, it seems reasonable to assume they're contributing a fair amount to the response time. It'll have to be tested, of course. Filed

Re: Cheap e2e test timing to find slowest tests (strategy = exhaustive)

2017-02-21 Thread Matthew Jacobs
Thanks David, this is interesting. I'll put up a patch to remove some of the tested file formats for the analytic function tests, since the file format shouldn't really matter much, unless there are differences in the timing of producing rows for the analytic functions, but I'm not sure that's the right way to get that
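A hedged sketch of what such a patch could look like, using the common Impala test-suite pattern for constraining the file-format dimension (class, module, and attribute names here are assumptions for illustration, not the actual change):

    # Sketch only: restrict an analytic-function suite to one file format so it
    # no longer re-runs for every format in the exhaustive test matrix.
    from tests.common.impala_test_suite import ImpalaTestSuite

    class TestAnalyticFns(ImpalaTestSuite):
      @classmethod
      def add_test_dimensions(cls):
        super(TestAnalyticFns, cls).add_test_dimensions()
        # Keep only uncompressed text: the format shouldn't change analytic
        # results, only the cost of producing the input rows.
        cls.ImpalaTestMatrix.add_constraint(
            lambda v: v.get_value('table_format').file_format == 'text' and
                      v.get_value('table_format').compression_codec == 'none')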

Tweaked some gerrit httpd settings

2017-02-21 Thread Todd Lipcon
I always have slowness issues with gerrit, so I just took a quick look at the httpd config and noticed that KeepAlive was off. I turned it on, with a low KeepAlive timeout, and also bumped the max number of workers (since keepalive connections hold workers open). Ideally we'd switch to a better
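For reference, the Apache httpd directives involved look roughly like this (values are illustrative assumptions, not the settings actually applied):

    KeepAlive On
    # Keep the timeout low so idle keepalive connections free workers quickly.
    KeepAliveTimeout 5
    # Raise the worker cap, since keepalive connections hold workers open.
    # (Named MaxClients on older httpd versions.)
    MaxRequestWorkers 400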

Re: Cheap e2e test timing to find slowest tests (strategy = core)

2017-02-21 Thread Alex Behm
I remember trying this some time ago, and the way it interacted with xdist meant that the time reported was CPU time, but in most cases we care about response time. On Tue, Feb 21, 2017 at 11:57 AM, David Knupp wrote: > The information in the docs is scant: > >

Re: Cheap e2e test timing to find slowest tests (strategy = core)

2017-02-21 Thread David Knupp
The information in the docs is scant: duration profiling: new "--duration=N" option showing the N slowest test execution or setup/teardown calls. This is most useful if you want to find out where your slowest test code is. My guess is that this is wall clock time, from test setup to

Re: status-benchmark.cc compilation time

2017-02-21 Thread Henry Robinson
Did you run . bin/set-classpath.sh before running expr-benchmark? On 21 February 2017 at 11:30, Zachary Amsden wrote: > Unfortunately some of the benchmarks have actually bit-rotted. For > example, expr-benchmark compiles but immediately throws JNI exceptions. > > On Tue,

Re: status-benchmark.cc compilation time

2017-02-21 Thread Zachary Amsden
Unfortunately some of the benchmarks have actually bit-rotted. For example, expr-benchmark compiles but immediately throws JNI exceptions. On Tue, Feb 21, 2017 at 10:55 AM, Marcel Kornacker wrote: > I'm also in favor of not compiling it on the standard commandline. > >

Fwd: Cheap e2e test timing to find slowest tests (strategy = exhaustive)

2017-02-21 Thread David Knupp
Slowest Exhaustive Parallel Tests (table columns: TIME, TABLE FORMAT)

Re: Cheap e2e test timing to find slowest tests (strategy = core)

2017-02-21 Thread Alex Behm
Is this response time or CPU time? On Tue, Feb 21, 2017 at 11:29 AM, David Knupp wrote: > I just discovered on Friday that pytest allows for a --durations=N > parameter, which will output the time it takes to execute the slowest N > tests to the console, so I ran both a

Cheap e2e test timing to find slowest tests (strategy = core)

2017-02-21 Thread David Knupp
I just discovered on Friday that pytest allows for a --durations=N parameter, which will output to the console the time it takes to execute the slowest N tests. I ran both a core and an exhaustive private build over the weekend and clocked the slowest 50 for each grouping of tests. These are
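A minimal sketch of the kind of invocation described (the test directory is an assumption; the flag can equally be passed to py.test on the command line):

    # Sketch: report the 10 slowest test phases after the run completes.
    import pytest

    # Equivalent to `py.test --durations=10 tests/query_test`; pytest prints the
    # 10 slowest setup/call/teardown durations to the console.
    pytest.main(["--durations=10", "tests/query_test"])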

Re: status-benchmark.cc compilation time

2017-02-21 Thread Marcel Kornacker
I'm also in favor of not compiling it on the standard commandline. However, I'm very much against allowing the benchmarks to bitrot. As was pointed out, those benchmarks can be valuable tools during development, and keeping them in working order shouldn't really impact the development process.

Re: status-benchmark.cc compilation time

2017-02-21 Thread Lars Volker
I think -notests already skips the benchmarks. However, I understood the proposal to be to disable building the benchmarks even without -notests, i.e. they'd be disabled by default and you'd need to specify -build_benchmarks to build them. I'm in favor of doing that, including building them
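A rough sketch of how benchmark targets could be gated behind an off-by-default build option in CMake (the option and path names are assumptions mirroring the proposed -build_benchmarks flag, not an actual change):

    # Sketch: only build the micro-benchmarks when explicitly requested.
    option(BUILD_BENCHMARKS "Build backend micro-benchmarks" OFF)
    if (BUILD_BENCHMARKS)
      add_subdirectory(benchmarks)
    endif()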

Re: status-benchmark.cc compilation time

2017-02-21 Thread Alex Behm
+1 for not compiling the benchmarks in -notests On Mon, Feb 20, 2017 at 7:55 PM, Jim Apple wrote: > > On which note, would anyone object if we disabled benchmark compilation > by > > default when building the BE tests? I mean separating out -notests into > > -notests and