Hi!

-------- Original Message --------
Hi,

At the moment I am moving the Pharo quality tools to the Renraku model. This is a 
quality model that I’ve been working on and that has so far been used by 
QualityAssistant.
Cool, I'm interested in that. Do you have any examples or docs?

At the moment I’m stuck trying to make changes in Monkey, as it is really 
hard to understand how the quality checks are performed there. @Guille, maybe 
you can advise.

In the old model, rules both checked code and stored the entities that 
violated them. In Renraku, rules are responsible only for checking; for each 
violation they produce a critique object that maps the rule to the entity that 
violates it. Critiques can also provide plenty of additional information, such 
as a suggestion on how to fix the issue.
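Roughly, a rule ends up looking like this (just a sketch from memory: the 
check:forCritiquesDo: protocol, ReTrivialCritique and ReSourceAnchor are 
Renraku names as I recall them, while MyLongMethodRule and the 50-line 
threshold are made up for illustration):

    MyLongMethodRule >> check: aMethod forCritiquesDo: aBlock
        "Hand a critique (rule + violating entity) to the block
         for every method longer than 50 lines."
        aMethod linesOfCode > 50 ifTrue: [
            aBlock value: (ReTrivialCritique
                withAnchor: (ReSourceAnchor entity: aMethod)
                by: self) ]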

For now we can skip the critiques altogether: just run the rules and store the 
classes and methods that violate them. But at the moment I cannot understand 
how Monkey is implemented, i.e. where the rules come from and what output 
should be provided.
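To make the "run the rules and store the violators" part concrete, something 
like this would do (again a sketch: MyLongMethodRule is the hypothetical rule 
from above, MyClass stands for whatever entities Monkey iterates over, and I'm 
only assuming the check:forCritiquesDo: protocol):

    | rule violators |
    rule := MyLongMethodRule new.
    violators := OrderedCollection new.
    MyClass methods do: [ :method |
        rule
            check: method
            forCritiquesDo: [ :critique | violators add: method ] ].
    violators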
Well, I have not investigated that part yet. I'm actually experimenting with a new Monkey implementation that has the following objectives:
 - easy to configure and run locally
 - faster: it should be able to run build validations in parallel (e.g., in my prototype, tests run by 4 parallel Pharo images finish in 2 minutes)
 - it should enforce the same process used for issue validation and integration
 - integrated with the bootstrap :)

Are you going to ESUG?

Cheers.
Uko

