Hi,

On Mon, Mar 27, 2017 at 5:37 PM, Dave Page <dp...@pgadmin.org> wrote:
> Hi
>
> On Mon, Mar 27, 2017 at 12:15 AM, Navnath Gadakh
> <navnath.gad...@enterprisedb.com> wrote:
> > Hello Dave,
> >
> > On Fri, Mar 24, 2017 at 9:10 PM, Dave Page <dp...@pgadmin.org> wrote:
> >>
> >> Hi
> >>
> >> On Fri, Mar 24, 2017 at 3:13 PM, Navnath Gadakh
> >> <navnath.gad...@enterprisedb.com> wrote:
> >> >
> >> >> When running with the patch:
> >> >>
> >> >> 1) The browser isn't closed, and the script never exits - it just sits
> >> >> indefinitely at:
> >> >>
> >> >> =====
> >> >> Please check output in file:
> >> >> /Users/dpage/git/pgadmin4/web/regression/regression.log
> >> >>
> >> >> make: *** [check] Error 1
> >> >> =====
> >> >>
> >> >> without returning to a shell prompt. The browser exits when I hit
> >> >> Ctrl+C.
> >>
> >> The above is still a problem. In fact, not only do I have to hit
> >> Ctrl+C, but then the browser prompts me to check I really do want to
> >> exit.
> >>
> >> There's also another problem that just showed up. I got the following
> >> failure on PG 9.4 (due to a known intermittent bug that Ashesh and
> >> Tira@Pivotal are working on). Note how it's not reported in the
> >> summary (or the JSON output):
> >
> > I found the issue. In the feature tests we need to add a scenario name
> > for each test case. The purpose of this patch is to print the
> > failed/skipped test class name together with the scenario name, like:
> >
> > 152 tests passed
> >
> > 1 test failed:
> >     LoginRoleGetTestCase (Check Role Node)
> >
> > 16 tests skipped:
> >     SynonymGetTestCase (Fetch synonym Node URL)
> >
> > But our in-built test framework does not provide the scenario name for
> > a failed/skipped test case, which is why I override the apply_scenario()
> > function:
> >
> > def apply_scenario(scenario, test):
> >     # Attach the scenario name to the parameters so it is available
> >     # on the generated test case.
> >     name, parameters = scenario
> >     parameters["scenario_name"] = name
> >
> > While printing the result, I check whether the test has a
> > 'scenario_name' attribute, as we need to print the scenario name in the
> > test summary as well as in the JSON file.
> >
> > I could do it without the scenario name, but to make it easier to see
> > which test scenario failed, it is better to add a scenario name to each
> > test case.
>
> OK.
>
> > This is how the test cases look when printed on the console.
> >
> > API:
> >
> > runTest (pgadmin.browser.server_groups.servers.databases.schemas.types.tests.test_types_put.TypesUpdateTestCase)
> > Update type under schema node ... ok
> >
> > Feature tests:
> >
> > runTest (pgadmin.utils.tests.test_versioned_template_loader.TestVersionedTemplateLoader) ... ok
> >
> > No scenario name in feature tests.
>
> OK, is that easy to fix while you're at it?

I have two solutions:

1. Add a small workaround that skips the scenario/test name when it does not
exist. That's not the best idea (a rough sketch of what I mean is in the PS
below my signature).

2. The owners of the feature tests add a scenario/test name to each feature
test. Then the summary will also tell us exactly which scenario is failing or
being skipped. This is the ideal, long-term solution and the one I prefer.

> --
> Dave Page
> Blog: http://pgsnake.blogspot.com
> Twitter: @pgsnake
>
> EnterpriseDB UK: http://www.enterprisedb.com
> The Enterprise PostgreSQL Company

--
Regards,
Navnath Gadakh

EnterpriseDB Corporation
The Enterprise PostgreSQL Company
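
PS: To illustrate what I mean by option 1, here is a rough, untested sketch of
the fallback on the printing side. The helper names (format_test_result,
print_summary) and the exact summary layout are only illustrative, not the
actual runner code; the only assumption is that apply_scenario() has attached
a scenario_name attribute to the test case wherever one exists:

def format_test_result(test):
    """Return 'ClassName (scenario name)' when a scenario name is attached,
    otherwise fall back to the bare class name (i.e. skip the missing name)."""
    scenario_name = getattr(test, "scenario_name", None)
    if scenario_name:
        return "%s (%s)" % (test.__class__.__name__, scenario_name)
    return test.__class__.__name__


def print_summary(passed_count, failed_tests, skipped_tests):
    """Print a summary in the style shown earlier in the thread."""
    print("%d tests passed\n" % passed_count)
    print("%d tests failed:" % len(failed_tests))
    for test in failed_tests:
        print("    %s" % format_test_result(test))
    print("\n%d tests skipped:" % len(skipped_tests))
    for test in skipped_tests:
        print("    %s" % format_test_result(test))

The same getattr() fallback would also work when building the JSON output, but
as said above I would rather have every feature test carry its own scenario
name.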