Yes sure, we can try to launch them on the trunk. I will let you know when this 
is done and how it turns out.
For the time being, we will stick with the already passing tests:

 1/25 Test  #1: qpid-client-test .................   Passed    1.01 sec
 2/25 Test  #2: quick_perftest ...................   Passed    1.08 sec
 3/25 Test  #3: quick_topictest ..................   Passed    3.94 sec
 4/25 Test  #4: quick_txtest .....................   Passed    1.01 sec
 5/25 Test  #5: quick_txtest2 ....................   Passed    0.99 sec
 6/25 Test  #6: msg_group_tests ..................   Passed   23.73 sec
 7/25 Test  #7: run_header_test ..................   Passed    3.20 sec
 8/25 Test  #8: python_tests .....................***Failed  685.70 sec
 9/25 Test  #9: interop_tests ....................***Failed    0.82 sec
10/25 Test #10: ha_tests .........................   Passed    1.56 sec
11/25 Test #11: qpidd_qmfv2_tests ................***Failed    1.16 sec
12/25 Test #12: interlink_tests ..................***Failed    0.87 sec
13/25 Test #13: idle_timeout_tests ...............***Failed    1.13 sec
14/25 Test #14: swig_python_tests ................***Failed    0.82 sec
15/25 Test #15: ipv6_test ........................***Failed    0.78 sec
16/25 Test #16: federation_tests .................***Failed  123.01 sec
17/25 Test #17: federation_sys_tests .............***Failed    2.49 sec
18/25 Test #18: queue_flow_limit_tests ...........   Passed    1.37 sec
19/25 Test #19: acl_tests ........................***Failed   20.30 sec
20/25 Test #20: cli_tests ........................***Failed    0.37 sec
21/25 Test #21: dynamic_log_level_test ...........***Failed    0.33 sec
22/25 Test #22: dynamic_log_hires_timestamp ......***Failed    0.31 sec
23/25 Test #23: store_tests ......................***Failed  153.19 sec
24/25 Test #24: store_tests_clfs .................***Failed  153.92 sec
25/25 Test #25: queue_redirect ...................***Failed    0.37 sec
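In case it's useful for tracking progress between runs, the pass/fail counts can be tallied straight from a CTest log like the one above. A minimal sketch (tally is a hypothetical helper, not part of Qpid or CTest):

```python
import re

# Tally Passed vs ***Failed entries in a CTest summary log
# (hypothetical helper, matching the output format shown above).
def tally(log):
    passed = len(re.findall(r"\bPassed\b", log))
    failed = len(re.findall(r"\*\*\*Failed", log))
    return passed, failed

sample = """\
 1/3 Test #1: qpid-client-test .......   Passed    1.01 sec
 2/3 Test #2: python_tests ...........***Failed  685.70 sec
 3/3 Test #3: interop_tests ..........***Failed    0.82 sec
"""
print(tally(sample))  # -> (1, 2)
```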
PS: The ha_tests have a missing dependency but are still marked as passed.
Regards,
Adel

> Date: Mon, 23 May 2016 05:52:38 -0700
> Subject: Re: Qpid C++ 0.34 unit tests are failing with visual studio 2013
> From: [email protected]
> To: [email protected]
> 
> On Mon, May 23, 2016 at 4:12 AM, <[email protected]> wrote:
> 
> > 1) A missing powershell script(run_queue_redirect.ps1) forces a test
> > (queue_redirect) not to start
> >
> 
> Given that it's not really in working shape, I would disable this test.
> Otherwise, you'll need to write the missing script.  Even if you do write
> the missing script, there's no guarantee the test will work as expected,
> simply because it was never ported to Windows and hasn't historically
> been under test.
> 
> 
> > 2) Python scripts can be executed on Linux without prefixing the script
> > name with "python", because the interpreter is detected from the first
> > line of the script (#!/usr/bin/env python). This doesn't work on Windows
> > unless the ".py" extension is associated with Python. For that to apply,
> > all Python scripts must end in .py, which is not the case for many of
> > the scripts, such as the qpidd management tools (qpid-python-test,
> > qpid-config, ...)
> >
> 
> The tests wrt Windows have long been a problem.  I started working on
> improving them a couple months ago.  One of the things I did was add batch
> files for all the command line tools so that the test scripts could more
> easily use them.
> 
> https://github.com/apache/qpid/blob/trunk/qpid/cpp/management/python/bin/qpid-config.bat
> 
> That work is on trunk right now, set for release in Qpid C++ 1.35.0.
> That's coming in a month or so.
> 
> 
> > So we wanted to know if you could help us fix the above issues?
> >
> > We are currently debugging other issues as well, and wanted to ask you
> > to confirm that some of the tests actually require the management tools
> > to be installed as a prerequisite?
> >
> 
> That's right.  Many of the tests require the management tools.  As of
> 1.35.0, those tools are part of the C++ source tree.
> 
> Care to try running the trunk tests?  The changes there will address some
> of your problems, but there are definitely still known problems.  I'd love
> to get your help to improve our tests in the next release.
> 
> Justin
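Regarding the shebang issue in point 2 above: until the .py association or the 1.35.0 batch wrappers are in place, a test harness can sidestep it by invoking the interpreter explicitly rather than relying on "#!/usr/bin/env python". A minimal sketch (run_tool is a hypothetical helper, not a Qpid API):

```python
import subprocess
import sys
import tempfile

# Hypothetical helper: run an extension-less Python tool (e.g. qpid-config)
# portably.  Windows ignores the "#!/usr/bin/env python" line, so we pass
# the script to the current interpreter explicitly instead.
def run_tool(tool_path, *args):
    return subprocess.run([sys.executable, tool_path, *args],
                          capture_output=True, text=True)

# Quick check with a throwaway script that has no .py extension:
with tempfile.NamedTemporaryFile("w", suffix="", delete=False) as f:
    f.write("#!/usr/bin/env python\nprint('hello from tool')\n")
    path = f.name

result = run_tool(path)
print(result.stdout.strip())  # -> hello from tool
```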
                                          
