Hi all,

I am doing research work on using a declarative approach to define test inputs 
for testing AutoScaler-like policies,
and I am applying it to the Brooklyn code.

I would like to hear your take on it and your opinions on the idea. Feedback, 
suggestions for improvements, and wishes/requests from the
community are welcome as well, and will definitely help me improve the work, 
which might be released soon.

A short summary of the idea is below:

================================================================
To test scaling policies, a common pattern is to:
        set up an application,
        emit some sensor data,
        check for some specific outcome.

In this pattern, the sensor data, i.e., the test inputs, are provided manually.

This is OK but can be improved.

A possible way forward, which is the core part of my work, is to define these 
test inputs by describing what they should look like,
instead of manually providing actual, hard-coded input data. (That is why the 
approach is called declarative.)

For example, we can ask a test driver to provide a value that resizes the 
system from 5 to 10, given a specific
configuration of the autoscaling policy. So instead of writing (quoted from an 
actual test case)

// workload 200 so requires doubling size to 10 to handle: (200*5)/100 = 10
tc.setAttribute(MY_ATTRIBUTE, 200);

We could write something like:

tc.setAttribute(MY_ATTRIBUTE, mock.resize(5,10));

And expect that, at execution time, the call "mock.resize(5,10)" produces a value 
(possibly, but not necessarily, 200) that triggers the AutoScalerPolicy
to resize the application from 5 to 10.
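To make this concrete, here is a minimal sketch of how such a mock.resize could work. The class name ResizeMock, its constructor, and the ceil-based sizing formula are my own assumptions, chosen to mirror the "(200*5)/100 = 10" arithmetic in the quoted test comment; the real driver would read the bound from the policy's configuration instead.

```java
// Hypothetical sketch: pick a sensor value that forces a resize from
// fromSize to toSize, assuming the policy computes
// desiredSize = ceil(currentSize * metric / metricUpperBound),
// which matches the "(200*5)/100 = 10" arithmetic in the quoted comment.
public class ResizeMock {

    private final double metricUpperBound;

    public ResizeMock(double metricUpperBound) {
        this.metricUpperBound = metricUpperBound;
    }

    // Any metric v with fromSize * v / metricUpperBound in (toSize - 1, toSize]
    // makes the policy resize fromSize -> toSize; return the midpoint of that
    // interval so the result is safely inside it.
    public double resize(int fromSize, int toSize) {
        double lo = metricUpperBound * (toSize - 1) / fromSize; // exclusive lower bound
        double hi = metricUpperBound * toSize / fromSize;       // inclusive upper bound
        return (lo + hi) / 2;
    }

    public static void main(String[] args) {
        ResizeMock mock = new ResizeMock(100); // metricUpperBound = 100, as in the comment
        double v = mock.resize(5, 10);
        int desired = (int) Math.ceil(5 * v / 100.0);
        System.out.println(v + " -> resize to " + desired);
    }
}
```

With the bound at 100, resize(5, 10) yields 190.0, one of the many values (alongside 200) that makes the policy double the pool from 5 to 10.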

Similarly, we can also think of something like:

mock.createScenario({mock.SCALE_UP, mock.SCALE_DOWN, mock.SCALE_DOWN});

And expect a sequence of timestamped values (i.e., a trace) that forces the 
policy to perform a scale-up followed by two distinct scale-downs.

Or even:

mock.createPeriodicScenario(monitoringPeriod, {mock.SCALE_UP, mock.SCALE_DOWN, 
mock.SCALE_DOWN});

To generate a stream (or a large array) of sensor data that is emitted 
periodically.
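A possible shape for such a periodic trace is sketched below. Everything here (the Step and Event names, and the choice of "hot" and "cold" values relative to the policy's metric bounds) is my own assumption for illustration, not the actual API.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of createPeriodicScenario: turn a list of abstract
// steps (SCALE_UP / SCALE_DOWN) into a timestamped trace of sensor values,
// one emission per monitoring period.
public class PeriodicScenario {

    enum Step { SCALE_UP, SCALE_DOWN }

    // A single timestamped sensor emission.
    static final class Event {
        final long timestampMs;
        final double value;
        Event(long timestampMs, double value) {
            this.timestampMs = timestampMs;
            this.value = value;
        }
    }

    private final double metricLowerBound;
    private final double metricUpperBound;

    PeriodicScenario(double metricLowerBound, double metricUpperBound) {
        this.metricLowerBound = metricLowerBound;
        this.metricUpperBound = metricUpperBound;
    }

    // Emit a value clearly above the upper bound to force a scale-up, or
    // clearly below the lower bound to force a scale-down, one per period.
    List<Event> create(long monitoringPeriodMs, Step... steps) {
        List<Event> trace = new ArrayList<>();
        long t = 0;
        for (Step step : steps) {
            double value = (step == Step.SCALE_UP)
                    ? metricUpperBound * 2   // "hot" sensor value
                    : metricLowerBound / 2;  // "cold" sensor value
            trace.add(new Event(t, value));
            t += monitoringPeriodMs;
        }
        return trace;
    }

    public static void main(String[] args) {
        PeriodicScenario scenario = new PeriodicScenario(50, 100);
        for (Event e : scenario.create(1000, Step.SCALE_UP, Step.SCALE_DOWN, Step.SCALE_DOWN)) {
            System.out.println(e.timestampMs + "ms: " + e.value);
        }
    }
}
```

Running this prints a three-event trace at 0ms, 1000ms, and 2000ms: one hot value followed by two cold ones, which is exactly the UP/DOWN/DOWN scenario above.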
================================================================

So far, I have studied the available test cases for the AutoScalerPolicy and 
extracted some of the main requirements
(from the testing point of view). This allowed me to generate test inputs 
"declaratively" for testing

        - normal resize operations triggered by emits
        - abnormal resizes above/below the max/min pool size (which must not 
happen)
        - concurrent actions/emits
        - sustained actions/emits (different HOT/COLD_SENSOR messages within the 
same stability period)
        - "blips" in the emitted values (a HOT_SENSOR followed by an OK_SENSOR)

BTW, I have integrated the code with the testing/Eclipse setup, so I can run the 
actual Java tests.

This is a nice result in my opinion, but I can improve it even further ... with 
your help!

Other than general comments (as mentioned at the beginning of this email), I am 
also seeking additional requests or suggestions
for possible (new) tests that "could have been defined" or that "would be 
beneficial for the quality of the project to have", by employing this technology.

I am pretty sure you can provide really relevant comments!

Thanks to everybody !

Best,
— Alessio

PS: I am usually connected to the Brooklyn channel on IRC, so you can also 
contact me there.
