Hi All,
The intention of this mail is to share our experiences writing integration
tests for API Manager with our automation test framework (we used version 4.2.5).
Over the last few weeks we have been working on adding automated tests to cover
some of the user scenarios in API Manager. The test automation framework provided
great support for implementing new test cases; with its help we were able to
reach nearly 60% line, 74% method, and 80% class coverage for the API Manager
specific packages. We have discussed the process and posted updates on the
developer mailing list under the subject "*[Dev][APIM] Status
update on improving automated tests for API manager*". Based on the experience
we gained, we would like to suggest some improvements and best practices for our
automation framework and its users. Some of these improvements may already be
available in the newer version of the automation framework (4.3.0). If so, please
ignore them, as we plan to move to the next version in the future.


   1. Provide a single API for all HTTP and HTTPS based client operations, and
   fix some minor issues in the HTTP client.
   2. Make all base test cases environment independent, so that any test can
   run with the builder enabled or disabled, and against either a distributed
   deployment or a single server deployment.
   3. It would be ideal if we could start a new server with a predefined
   configuration set. For example, for API Manager / BAM integration we could
   ask the user to put a BAM distribution into some folder and set its path in
   a variable named bam.server.home, then keep all configurations in the
   src/resources/conf directory. As most Carbon based servers share a similar
   layout, we would be able to start whatever server a test needs, e.g.
   [IntegrationTestUtils.startCarbonServer("bam.server.home", "confDir")].
   This can be implemented as a platform scenario, but sometimes we may need to
   download an already released pack and test an integration scenario with it
   (in the APIM case, testing the compatibility of APIM 2.0.0 with different
   BAM versions: 2.0.0, 2.3.0, 2.4.0).
   4. It would also be ideal if the framework could start a few sample web
   services as part of its initialization (as external services), so that we
   can test against them. At the moment we are using some externally hosted
   services to test our products. We could host a few sample services on SLive
   and use them for test cases.
   5. A configuration rewriter utility class. When we run tests with the
   builder enabled, we sometimes need to edit configuration files based on the
   scenario. As an example, consider running the same test scenario with and
   without caching: we need to enable caching by editing the cache section of
   the config file. We could keep a separate configuration file per scenario,
   but maintaining all of them would be a nightmare.
   6. Decouple the base test cases (ServerStartupBaseTest) from test scenarios
   and make them configurable.
   7. While writing APIM test modules for Jaggery based applications, we
   wanted to copy sample Jaggery applications and validate the output
   responses. We have already written some utility methods for this; we can
   improve them and use the Jaggery test framework as well.
   8. Add a code-level breakdown to the EMMA reports (currently we have
   package, class and method level breakdowns). We might need to point the
   EMMA runtime at the source code for this. It would let us identify missing
   scenarios and uncovered code blocks with minimal effort.
   9. We have to think about some advanced scenarios such as multiple user
   stores and multiple database types. At this stage, for APIM related
   scenarios, we used Puppet scripts to set up a distributed environment with
   MySQL and ran the integration tests against the configured deployment, so
   Puppet takes care of setting up environments with different database types,
   user stores, etc.
   10. Is there any possibility of generating EMMA code coverage for
   distributed external tests (builder disabled) as well? It would be useful,
   IMO.
   11. Support for sending bulk requests (we are currently using JMeter tests
   as a workaround).
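To make point 3 concrete, here is a minimal sketch of what the proposed
IntegrationTestUtils.startCarbonServer helper could look like. All names here
(the class, the bam.server.home property, the wso2server.sh path) are
illustrative assumptions about a not-yet-existing API, not the framework's
actual implementation:

```java
import java.io.File;
import java.io.IOException;
import java.util.Arrays;
import java.util.List;

/**
 * Sketch of the proposed startCarbonServer(...) helper. Names and paths are
 * illustrative; the real framework API may differ.
 */
public class IntegrationTestUtils {

    /**
     * Build the startup command for a Carbon server whose home directory is
     * supplied via a system property (e.g. -Dbam.server.home=/opt/wso2bam).
     */
    static List<String> buildStartCommand(String homeProperty) {
        String home = System.getProperty(homeProperty);
        if (home == null) {
            throw new IllegalStateException(homeProperty + " is not set");
        }
        String script = home + File.separator + "bin" + File.separator
                + "wso2server.sh";
        return Arrays.asList("sh", script);
    }

    /** Start the Carbon server found under the given home property. */
    public static Process startCarbonServer(String homeProperty, String confDir)
            throws IOException {
        // A real implementation would first copy confDir (e.g.
        // src/resources/conf) over <CARBON_HOME>/repository/conf, then launch.
        return new ProcessBuilder(buildStartCommand(homeProperty))
                .inheritIO()
                .start();
    }
}
```

Since most Carbon servers share the same bin/ and repository/conf layout, one
helper like this could start BAM, APIM, or any other distribution a test needs.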

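The configuration rewriter from point 5 could be as small as a single-element
text substitution, sketched below. The class name and the EnableCache element
are hypothetical examples, not actual API Manager configuration keys:

```java
/**
 * Minimal sketch of a configuration rewriter utility (hypothetical name).
 * It rewrites one element's value in an XML config string, e.g. enabling the
 * cache for a cached-scenario test and restoring the original afterwards.
 */
public class ConfigRewriter {

    /** Replace the value of the first <element>...</element> occurrence. */
    public static String setElementValue(String xml, String element,
                                         String newValue) {
        String open = "<" + element + ">";
        String close = "</" + element + ">";
        int start = xml.indexOf(open);
        int end = (start < 0) ? -1 : xml.indexOf(close, start);
        if (start < 0 || end < 0) {
            throw new IllegalArgumentException("Element not found: " + element);
        }
        return xml.substring(0, start + open.length()) + newValue
                + xml.substring(end);
    }
}
```

With a utility like this, one base configuration file per server is enough;
each scenario tweaks only the elements it cares about instead of maintaining a
full config copy per scenario.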

*Things we need to focus on when developing tests using the automation framework*

   - Write a single test case that can run in builder-enabled single server
   mode or against an external distributed/clustered deployment (in the APIM
   case, single server mode versus a deployment with the gateway, key manager,
   store and publisher separated).
   - Clean up all data added to the system during the tests.
   - Always make all external services and test endpoints configurable.
   - Use the EMMA report to check code coverage and identify missing
   scenarios/methods/classes.
   - Use proper EMMA instrumentation and filtering according to your product's
   use cases.
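The "make endpoints configurable" point above can be sketched as reading each
endpoint from a system property with a sensible default, so the same test runs
against a local server, an SLive-hosted service, or a clustered gateway. The
property name and URLs below are illustrative assumptions:

```java
/**
 * Sketch of configurable test endpoints. The property name test.gateway.url
 * and the default URL are illustrative, not framework conventions.
 */
public class TestEndpoints {

    private static final String DEFAULT_GATEWAY = "http://localhost:8280";

    /** Resolve the gateway endpoint from -Dtest.gateway.url, else default. */
    public static String gatewayUrl() {
        return System.getProperty("test.gateway.url", DEFAULT_GATEWAY);
    }
}
```

Running the same suite against a separated deployment then needs only
-Dtest.gateway.url=https://gateway.example.com:8243 on the command line,
with no test code changes.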


If we can add a considerable number of test cases covering a product's user
scenarios, it will be very useful when we release new product versions
frequently. We won't have to worry about already released features unless
there is an API or data level change: the automated tests will take care of
released features, and we can pay more attention to newly added ones. It
would be ideal if we could add automated tests to every product. At the
initial stage that might be a bit difficult, but once it's done it will help
us save Dev and QA time.
Thanks Krishantha, NuwanW, Dharshana and team for the support you all
provided.

Thanks,
sanjeewa.
-- 

*Sanjeewa Malalgoda*
Senior Software Engineer
WSO2 Inc.
Mobile : +94713068779

blog: http://sanjeewamalalgoda.blogspot.com/
_______________________________________________
Architecture mailing list
[email protected]
https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture
