Hi Dave,

     Please find attached the revised patch for the test result enhancement.

*What's in the patch:*
1. The test result summary is now stored in a JSON file.
2. Removed some redundant code from *regression/test_utils.py*.
3. Added scenario names for the feature tests.
4. To print the test scenario names for failed and skipped test cases, I
overrode the *apply_scenario()* function in *regression/test_utils.py* (see
the sketch below).

I have also attached a sample JSON file with the test results, as per your
suggestions.
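
For reference, here is a minimal sketch of the idea (illustrative only; apart
from apply_scenario() itself, the function names, JSON keys and file path
below are my assumptions, not necessarily the exact code in the patch). It
assumes the testscenarios package, whose apply_scenario() is wrapped so that
the scenario name becomes an attribute of the generated test, and shows how
the collected summary could be written out with json.dump():

import json

from testscenarios.scenarios import apply_scenario as _base_apply_scenario


def apply_scenario(scenario, test):
    """Overridden apply_scenario(): record the scenario name on the test so
    failed/skipped tests can be reported as 'TestCase (Scenario name)'."""
    name, parameters = scenario
    # Adding the name to the parameters makes it an attribute of the
    # generated test, so the reporting code can check for 'scenario_name'.
    parameters["scenario_name"] = name
    return _base_apply_scenario((name, parameters), test)


def save_test_result_summary(summary, path="regression/test_result.json"):
    """Dump the collected pass/fail/skip summary to a JSON file.
    The keys and the file name here are illustrative assumptions, e.g.:
    {"passed": 152,
     "failed": {"LoginRoleGetTestCase": "Check Role Node"},
     "skipped": {"SynonymGetTestCase": "Fetch synonym Node URL"}}"""
    with open(path, "w") as fp:
        json.dump(summary, fp, indent=4)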

Thanks!



On Wed, Mar 29, 2017 at 6:03 PM, Dave Page <dp...@pgadmin.org> wrote:

> On Wed, Mar 29, 2017 at 4:12 AM, Navnath Gadakh
> <navnath.gad...@enterprisedb.com> wrote:
> > Hi,
> >
> > On Mon, Mar 27, 2017 at 5:37 PM, Dave Page <dp...@pgadmin.org> wrote:
> >>
> >> Hi
> >>
> >> On Mon, Mar 27, 2017 at 12:15 AM, Navnath Gadakh
> >> <navnath.gad...@enterprisedb.com> wrote:
> >> > Hello Dave,
> >> >
> >> > On Fri, Mar 24, 2017 at 9:10 PM, Dave Page <dp...@pgadmin.org> wrote:
> >> >>
> >> >> Hi
> >> >>
> >> >> On Fri, Mar 24, 2017 at 3:13 PM, Navnath Gadakh
> >> >> <navnath.gad...@enterprisedb.com> wrote:
> >> >> >
> >> >> >> When running with the patch:
> >> >> >>
> >> >> >> 1) The browser isn't closed, and the script never exits - it just
> >> >> >> sits
> >> >> >> indefinitely at:
> >> >> >>
> >> >> >> =====
> >> >> >> Please check output in file:
> >> >> >> /Users/dpage/git/pgadmin4/web/regression/regression.log
> >> >> >>
> >> >> >> make: *** [check] Error 1
> >> >> >> =====
> >> >> >>
> >> >> >> without returning to a shell prompt. The browser exits when I hit
> >> >> >> Ctrl+C.
> >> >>
> >> >> The above is still a problem. In fact, not only do I have to hit
> >> >> Ctrl+C, but then the browser prompts me to check I really do want to
> >> >> exit.
> >> >>
> >> >> There's also another problem that just showed up. I got the following
> >> >> failure on PG 9.4 (due to a known intermittent bug that Ashesh and
> >> >> Tira@Pivotal are working on). Note how it's not reported in the
> >> >> summary (or the JSON output):
> >> >
> >> >
> >> > I found the issue. In the feature tests we need to add a scenario
> >> > name for each test case. The purpose of this patch is to print the
> >> > failed/skipped test class name with the scenario name, like:
> >> >
> >> > 152 tests passed
> >> >
> >> > 1 test failed:
> >> >
> >> >          LoginRoleGetTestCase (Check Role Node)
> >> >
> >> > 16 tests skipped:
> >> >
> >> > SynonymGetTestCase (Fetch synonym Node URL)
> >> >
> >> > But our in-built test framework does not provide that scenario name
> >> > with the failed/skipped test case, which is why I overrode the
> >> > apply_scenario() function.
> >> >
> >> > def apply_scenario(scenario, test):
> >> >     name, parameters = scenario
> >> >     parameters["scenario_name"] = name
> >> >
> >> > While printing the result, I check if 'scenario_name' is in the test,
> >> > as we need to print the scenario name in the test summary as well as
> >> > in the JSON file.
> >> >
> >> > I can do it without the scenario name, but to better understand which
> >> > test scenario failed, it is good to add a scenario name to each test
> >> > case.
> >>
> >> OK.
> >>
> >> > This is how the test cases look when printed on the console:
> >> >
> >> >  API:
> >> >
> >> > runTest
> >> >
> >> > (pgadmin.browser.server_groups.servers.databases.schemas.types.tests.test_types_put.TypesUpdateTestCase)
> >> >
> >> > Update type under schema node ... ok
> >> >
> >> >  Feature tests:
> >> >
> >> > runTest
> >> >
> >> > (pgadmin.utils.tests.test_versioned_template_loader.TestVersionedTemplateLoader)
> >> > ... ok
> >> >
> >> > No scenario name in feature tests.
> >> >
> >>
> >> OK, is that easy to fix while you're at it?
> >
> >
> > I have two solutions:
> >
> > 1. A little hack to skip the scenario/test name if it does not exist,
> > but that's not the best idea.
> >
> > 2. The owners of the feature tests should add a scenario/test name to
> > each feature test. Then the summary will also show for which scenario a
> > test is failing or being skipped.
> > This is the ideal, long-term solution and the one I prefer.
>
> Agreed - and as there are only 2 feature tests, you should be able to
> fix them up pretty quickly :-p
>
> Once code is in the repo, it's "ours", meaning the entire community's.
> I wouldn't expect us to ping all issues back to Pivotal - we're one
> team on this.
>
> Thanks!
>
> --
> Dave Page
> Blog: http://pgsnake.blogspot.com
> Twitter: @pgsnake
>
> EnterpriseDB UK: http://www.enterprisedb.com
> The Enterprise PostgreSQL Company
>



-- 
Regards,
Navnath Gadakh

EnterpriseDB Corporation
The Enterprise PostgreSQL Company

Attachment: tests_result_enhancement_v3.patch
Description: Binary data

Attachment: tests_result.json
Description: application/json
