On Fri, Jan 20, 2017 at 7:49 PM, Rikard Sagnér <[email protected]>
wrote:
We (Metodika) took the same, or similar, path as Joshua. A REST API built
in 4D that can serve both 4D applications/clients but mostly web
applications written in Angular.

Just to add onto this interesting post, I wanted to share a thought about
REST APIs built in 4D, automated testing, and documentation. I've either
built or been deeply involved in building several REST APIs in 4D over the
years, and it's a solid design, for all of the reasons mentioned in this
thread. First, some background for those of you who haven't done something
like a REST API and aren't entirely clear on what the terminology means.
Then, a solid strategy for automated testing and documentation of working
examples of API calls (good and bad).

First the background. Think of it like this:

   You make calls to 4D via a URL or a Web form request.
   It's like making a method call with inputs and an output.

A method, inputs, and outputs. That's familiar to anyone programming 4D,
even if you're not familiar with the Web. And on the Web, the rules are
very predictable:

-- The input will have some headers describing the input. So, input
metadata - it can tell you something about the client, how the inputs are
formatted, what output formats are allowed, how long the input data is, etc.

-- The input will have some number of values formatted in very specific
ways. (All of the 4D server-side tools available have schemes for parsing
this data out for you.)

Headers+Payload = HTTP request

The outputs are also predictable. Just like with the inputs, the output has
headers describing the result, such as:
-- Status code (404, 200, etc.)
-- Size of result
-- Formatting, encoding and compression information
-- Other optional information, like if the information can be cached.

And, of course, the payload.

Headers+Payload = HTTP response
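To make the headers+payload split concrete, here's a quick sketch in Python
(not 4D, just to show the shape; the raw response text is made up) that
splits an HTTP response into its status line, headers, and payload:

```python
# A hand-written raw HTTP response, just for illustration.
raw_response = (
    "HTTP/1.1 200 OK\r\n"
    "Content-Type: application/json\r\n"
    "Content-Length: 14\r\n"
    "\r\n"
    '{"rate": 9.5}\n'
)

# Headers and payload are separated by a blank line (CRLF CRLF).
head, _, payload = raw_response.partition("\r\n\r\n")
status_line, *header_lines = head.split("\r\n")
headers = dict(line.split(": ", 1) for line in header_lines)

print(status_line)              # HTTP/1.1 200 OK
print(headers["Content-Type"])  # application/json
print(payload)                  # {"rate": 9.5}
```

Real parsing is fussier than this, but every HTTP message breaks down the
same way: a status (or request) line, headers, a blank line, and a payload.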

Key point: It's part of the rules of HTTP that every request should have a
response. The response always has a header section, including a status code.
Most everything else, including a payload, is optional. So, you always have
a response.

Key point: Everything is text. (Take that as true-enough if you prefer.)
You can use plain text, JSON, XML (that's what SOAP uses), whatever. It's
pretty common to have a plain text request in a URL that gets a JSON
response. But that's really a detail.

So what does all of that have to do with automated testing and
documentation? The essence of *any* test is:

    [Actual Outcome] = [Expected Outcome] : Pass or Fail

You might be comparing two images of a screen (Eggplant), a simple function
result:

    AddLongs (2;2) : 4

Or a more complex operation. With a REST API, you have a *deterministic*
system, even if it's a bit complicated. "Deterministic" means that if you
put in the same inputs, you always get back the same outputs. An API call
like

   /GetTaxRate/?TaxAreaID=CA.Santa_Cruz

returns "9.5", or whatever it should be. (That's typed off the top of my
head, just to give you the idea.)

You can store everything you need in records (or elsewhere) to drive
automated testing. A test record looks like this:

[Test_Case]
Name: Check GetTaxRate with a good input.
Input_Headers: Your HTTP headers go here
Output_Headers_Expected: What you figure on getting back
Request:  /GetTaxRate/?TaxAreaID=CA.Santa_Cruz
Response: 9.5

I've left the server address out of the URL, as it's nice to be able to
change that before a batch test run: before the run, you pick whether
you're pointing at a dev server, test server, production server, remote
client server, etc.

In this case we have a dead simple function that takes a string and returns
a number. You can store the details in a record, use NTK/4DIC/HTTP Get to
make the request, capture the results and compare. You now have a live,
automated test.
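Sketched in Python rather than 4D (all the names here are invented, and
get_tax_rate is a stand-in for the real HTTP call you'd make with
NTK/4DIC/whatever), the compare step boils down to:

```python
# Stand-in for: GET /GetTaxRate/?TaxAreaID=<tax_area_id>
# In real life this would be an HTTP request to the API under test.
def get_tax_rate(tax_area_id):
    rates = {"CA.Santa_Cruz": "9.5"}
    return rates.get(tax_area_id, "UNKNOWN_AREA")

# One stored test record: name, input, expected output.
test_case = {
    "name": "Check GetTaxRate with a good input",
    "request": "CA.Santa_Cruz",
    "expected": "9.5",
}

# [Actual Outcome] = [Expected Outcome] : Pass or Fail
actual = get_tax_rate(test_case["request"])
result = "Pass" if actual == test_case["expected"] else "Fail"
print(result)  # Pass
```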

That's a simple case. What about a complex service that brings down a bunch
of data as XML, JSON, or HTML? Sure, that's more complicated - but the test
harness/framework/scheme doesn't need to change. Theoretically, it doesn't
need to change at all. You capture the initial details, make sure that
they're right, save them as the canonical result, and you're good to go. The
basic process is the same:

     configure --> request --> compare --> pass/fail

From there you can bolt on some kind of diff user interface, push results
out for comparison in another tool, whatever you like.
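As a sketch of a batch run (Python again, everything invented; fetch()
stands in for a real HTTP GET so this runs without a server):

```python
# Pick the server base URL once per run (dev/test/production), then
# drive every test record through request -> compare.
BASE_URL = "https://dev.example.com"  # made-up address

# Canned answers standing in for real HTTP responses.
CANNED = {BASE_URL + "/GetTaxRate/?TaxAreaID=CA.Santa_Cruz": "9.5"}

def fetch(url):
    # Stand-in for the real HTTP GET; unknown URLs "return" a 404 body.
    return CANNED.get(url, "404")

# (request, expected response) pairs, as pulled from [Test_Case] records.
test_cases = [
    ("/GetTaxRate/?TaxAreaID=CA.Santa_Cruz", "9.5"),
    ("/GetTaxRate/?TaxAreaID=Nowhere", "404"),
]

results = [(req, fetch(BASE_URL + req) == expected)
           for req, expected in test_cases]
for req, passed in results:
    print(req, "Pass" if passed else "Fail")
```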

As some already know or will have thought, you do have a problem with false
positives - meaning differences in the outputs that are harmless. Imagine
that there's a date stamp in the output. That changes with every request,
so every test fails. Tedious. In the case of HTTP requests, you can often
ignore many of the headers and teach the system to only check the ones you
care about - like status. For the body, you can also write pre- or
post-comparison rules to automatically filter out noisy false errors. Even
so, the basic process is really the same...just with a bit of cleanup:

     configure --> request --> clean output --> compare --> pass/fail

...or

     configure --> request --> compare --> filter findings --> pass/fail
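A pre-comparison cleanup rule might look like this sketch (Python; the
ignored header names and the timestamp pattern are just examples I made up):

```python
import re

# Headers that change on every request and carry no meaning for the test.
IGNORED_HEADERS = {"date", "server", "x-request-id"}  # example list
# Blank out ISO-ish timestamps in the body before comparing.
TIMESTAMP = re.compile(r"\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2}")

def clean(headers, body):
    kept = {k: v for k, v in headers.items()
            if k.lower() not in IGNORED_HEADERS}
    return kept, TIMESTAMP.sub("<timestamp>", body)

# Two runs of the "same" request, a day apart:
a = clean({"Date": "Fri, 20 Jan 2017", "Status": "200"},
          '{"generated": "2017-01-20T19:49:00", "rate": 9.5}')
b = clean({"Date": "Sat, 21 Jan 2017", "Status": "200"},
          '{"generated": "2017-01-21T08:00:00", "rate": 9.5}')
print(a == b)  # True: the harmless differences are gone
```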

Okay, what does all of that have to do with automatic documentation? Well,
if you've got this kind of test setup to prove that your API works, why not
publish the test cases live within documentation? The real test records
become the live source of the docs. So long as the tests work, the docs are
always accurate and up-to-date. Once you figure out how you want to publish
the example cases, you never have to look at them again. As long as your
tests pass, this part of your docs is perfect.

Tip: Add a field to your [Test_Case] table like "Include_in_Documentation"
defaulted to False. Turn it on when you're ready. You may have cases that
are experimental.
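The publishing step can be as dumb as walking the flagged test records and
rendering them as text. A sketch (Python; the field names mirror the
[Test_Case] layout above but are otherwise invented):

```python
# Test records as they might come out of the [Test_Case] table.
test_cases = [
    {"name": "Good tax area",
     "request": "/GetTaxRate/?TaxAreaID=CA.Santa_Cruz",
     "response": "9.5",
     "include_in_documentation": True},
    {"name": "Experimental",
     "request": "/GetTaxRate/?TaxAreaID=XX",
     "response": "UNKNOWN_AREA",
     "include_in_documentation": False},
]

def render_docs(cases):
    # Only publish cases explicitly flagged for documentation.
    lines = []
    for case in cases:
        if not case["include_in_documentation"]:
            continue
        lines.append(f"Example: {case['name']}")
        lines.append(f"  Request:  {case['request']}")
        lines.append(f"  Response: {case['response']}")
    return "\n".join(lines)

print(render_docs(test_cases))
```

Since the same records drive the tests, anything this renders has already
been proven to work on the last green run.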

Tip: Failures are important! One of the main features of a good Web API is
how it deals with errors. Bad inputs, unknown requests, badly formed
requests (JSON that won't parse, etc.) - what do you do? It's no good to
just say "stuff was ill, can't help". Give a good error. And add test cases
for these errors! Kind of a big deal.
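For example, the failure path gets a proper status code and a structured
error body, and its own test record, same as the happy path. A sketch
(Python; the error shape and names are invented):

```python
import json

# Stand-in for the API handler: good inputs get data, bad inputs get a
# status code plus a structured, machine-readable error payload.
def get_tax_rate_response(tax_area_id):
    rates = {"CA.Santa_Cruz": "9.5"}
    if tax_area_id in rates:
        return 200, json.dumps({"rate": rates[tax_area_id]})
    return 404, json.dumps({
        "error": "UNKNOWN_TAX_AREA",
        "detail": f"No tax area named {tax_area_id!r}",
    })

# The failure case is just another deterministic test record.
status, body = get_tax_rate_response("Atlantis")
print(status)                     # 404
print(json.loads(body)["error"])  # UNKNOWN_TAX_AREA
```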

To be clear, I'm not suggesting that this sort of approach automates away
all documentation. You still need docs if you have a public-facing API. But
what this approach does do is automate publishing 100% perfect example
tests in your docs at no extra ongoing cost.

Given this is one of my usual pithy posts (cough-cough), I'll just add a
reminder that I'm not talking theoretically here. I've done this (or been
heavily involved in doing this) on pretty big systems a couple of times.
It's a great approach! It makes you SO much more confident in your API,
error results stop being scary (you know they're part of the API design),
and it makes a lot of boring+stressful problems go away.

I'd love to hear comments from others that have taken a similar (or
different) approach to API testing and so on.

P.S. For OS X users, try out a tool Justin Leavens turned me onto years
ago. It's free now:

https://itunes.apple.com/au/app/rested-simple-http-requests/id421879749?mt=12
**********************************************************************
4D Internet Users Group (4D iNUG)
FAQ:  http://lists.4d.com/faqnug.html
Archive:  http://lists.4d.com/archives.html
Options: http://lists.4d.com/mailman/options/4d_tech
Unsub:  mailto:[email protected]
**********************************************************************
