On 02.08.2018 at 22:46, David Luu wrote:
This isn't specifically a JMeter question, but it relates to it. Granted,
in this scenario JMeter could also be used for the functional API tests,
but in general it's not often used that way.

Some commercial product suites might provide integration for this, but say
you use open source or in-house tooling (different tools for functional and
performance tests): has anyone needed to, or wanted to, share common API
test data (e.g. input test data, some outputs to validate, the API calls or
routes to hit/test) between the tools?

If so, I just wanted to see what approaches you've taken. One would want to
define a common format for input data and output validation so that each
tool can read the schema and know what to do, without having to customize
the data format per tool: i.e. define a common data schema/format and have
wrappers in the tooling handle that schema, so you can port the same data,
unmodified, between tools.

Interesting question, but sadly I have no easy answer to this.

Sharing input data could be done in many ways, e.g. storing it in a database or on the file system and accessing it via the JDBC Sampler or CSV input sources. But the real question is whether there is any standard for structuring the data so that different tools would be willing to use it. I am not aware of any such data format that goes further than CSV, JSON, or similarly broadly defined structures.
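As a rough illustration of the idea (not an existing standard), here is a minimal sketch of a tool-neutral JSON layout for API test cases, plus a small wrapper that flattens it into a CSV that JMeter's CSV Data Set Config could read. All file and field names are made up for the example:

import csv
import json

# Hypothetical tool-neutral test data: each case names the route, the
# input payload, and the expected output to validate against.
# (This schema is an assumption for illustration, not a standard.)
SAMPLE = """
[
  {"name": "create-user", "method": "POST", "path": "/api/users",
   "body": {"login": "alice"}, "expect": {"status": 201}},
  {"name": "get-user", "method": "GET", "path": "/api/users/alice",
   "body": null, "expect": {"status": 200}}
]
"""

cases = json.loads(SAMPLE)

# Emit the same cases as a flat CSV for JMeter's CSV Data Set Config;
# a functional tool would consume the JSON directly.
with open("api-tests.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["name", "method", "path", "body", "expected_status"])
    for c in cases:
        writer.writerow([c["name"], c["method"], c["path"],
                         json.dumps(c["body"]), c["expect"]["status"]])

Only the thin conversion wrapper is tool-specific; the JSON stays unmodified.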

Sharing the functionality means that you will either have to restrict yourself to the lowest common denominator of functionality, or be careful in using the fully abstracted set of methods. But again, I am unaware of an existing tool that crosses that gap. I believe Taurus is trying to deliver something in that direction (but I haven't tried it and haven't looked too closely).
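To make the wrapper idea from the question concrete, here is a minimal sketch (all names hypothetical) of two adapters over the same test case structure: one executes a case directly as a functional check, the other only translates it into a dict shaped loosely like a request entry in a Taurus-style scenario:

import json
import urllib.error
import urllib.request

def run_case_functional(case, base_url):
    # Hypothetical functional adapter: execute one case directly and
    # compare the actual status against the expected one.
    data = json.dumps(case["body"]).encode() if case["body"] else None
    req = urllib.request.Request(base_url + case["path"],
                                 data=data, method=case["method"])
    if data:
        req.add_header("Content-Type", "application/json")
    try:
        status = urllib.request.urlopen(req).status
    except urllib.error.HTTPError as e:
        status = e.code
    return status == case["expect"]["status"]

def run_case_perf(case):
    # Hypothetical performance adapter: do not execute anything, just
    # translate the same case into whatever the load tool consumes
    # (here a dict shaped loosely like a Taurus scenario request; the
    # exact keys are an assumption, check the Taurus docs).
    return {"url": case["path"],
            "method": case["method"],
            "body": case["body"],
            "assert-status": case["expect"]["status"]}

The test data stays shared; only the small adapters differ per tool.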

The output is probably again not really suited to sharing between functional and performance tests. Functional tests will typically produce little output that is clear to interpret (yes/no), while performance tests tend to produce lots of data that has to be interpreted (together with data from the infrastructure you tested). The best sharing infrastructure for your output is probably using the backend listeners to put the sampler data into a shared time series. See https://jmeter.apache.org/usermanual/realtime-results.html
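For instance, the Backend Listener can write into InfluxDB; a functional test runner could push its own pass/fail points into the same database so both result sets end up in one shared time series. A minimal sketch, assuming a local InfluxDB 1.x instance reachable over its HTTP write API (host, database, and measurement names are illustrative):

import time
import urllib.request

def write_point(measurement, tags, value):
    # Build one InfluxDB line-protocol point:
    #   measurement,tag=val value=<v> <nanosecond timestamp>
    line = "{},{} value={} {}".format(
        measurement,
        ",".join("{}={}".format(k, v) for k, v in tags.items()),
        value,
        int(time.time() * 1e9))
    # InfluxDB 1.x write endpoint; the "jmeter" database name is an
    # assumption, matching whatever the Backend Listener writes into.
    req = urllib.request.Request(
        "http://localhost:8086/write?db=jmeter",
        data=line.encode(), method="POST")
    urllib.request.urlopen(req)

# Record a functional pass (1) alongside the performance samples.
write_point("functional_result", {"test": "create-user"}, 1)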

Regards,
 Felix

