Re: [Pharo-dev] Moving Monkey quality checks to Renraku infrastructure

2016-08-06 Thread stepharo
Well, so far I did not yet investigate all that part. I'm actually 
experimenting on a new monkey implementation that has the following 
objectives:

 - easy to configure and run locally
 - faster: it should be able to run build validations in parallel 
(e.g., in my prototype the tests run in 2 minutes using 4 parallel 
Pharo images)
 - it should enforce the same process used for issue validation and 
integration

 - integrated with the bootstrap :)
This is super cool: it will change the way we work and make our lives a 
lot simpler.



Stef

PS: I'm happy to work with motivated, visionary, smart, and efficient guys.



Re: [Pharo-dev] Moving Monkey quality checks to Renraku infrastructure

2016-08-06 Thread stepharo



On 4/8/16 at 18:39, Yuriy Tymchuk wrote:

Hi,

At the moment I am moving the Pharo quality tools to the Renraku model. This is 
a quality model that I’ve been working on and that has so far been used by 
QualityAssistant.

At the moment I’m stuck while trying to make changes in Monkey, as it is really 
hard to understand how the quality checks are made there. @Guille maybe you can 
advise something.

In the old model, rules were responsible both for checking code and for storing 
the entities that violate them. In Renraku, rules are responsible only for 
checking; for each violation they produce a critique object that maps the rule 
to the entity that violates it. Critiques can also provide plenty of additional 
information, such as suggestions on how to fix the issue.
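The rule/critique split described above can be sketched as follows. This is an illustrative sketch only: the class name `ReCritique` and the selectors `checks:` and `rule:entity:` are hypothetical stand-ins, not the actual Renraku API.

```smalltalk
"Hypothetical sketch of the Renraku flow: rules only check,
and every violation becomes a critique linking rule and entity."
critiques := OrderedCollection new.
rules do: [ :rule |
	entities do: [ :entity |
		(rule checks: entity) ifTrue: [
			critiques add: (ReCritique rule: rule entity: entity) ] ] ].
"each critique can then carry extra data, e.g. a fix suggestion"
```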


It is good that you are improving this part.
The critics browser was done by someone who was told to do it, not by 
someone who believed it was important :)

So it was a good first step.
Now it is time to revisit the LintRules. They have already helped us a lot.
Rules are really important for Pharo (and you know it), so it is 
super cool that you are pushing this.

Thanks a lot Yuriy.


At the moment we can skip the critiques altogether, just run the rules and 
store the classes and methods that violate them. But at the moment I cannot 
see how Monkey is implemented, i.e., where the rules come from and what output 
should be provided.

Cheers.
Uko






Re: [Pharo-dev] Moving Monkey quality checks to Renraku infrastructure

2016-08-05 Thread Yuriy Tymchuk

> On 05 Aug 2016, at 10:49, Guille Polito  wrote:
> 
> Hi!
> 
> 
>  Original Message 
>> Hi,
>> 
>> At the moment I am moving the Pharo quality tools to the Renraku model. This 
>> is a quality model that I’ve been working on and that has so far been used 
>> by QualityAssistant.
> Cool, I'm interested on that. Do you have some examples, docs?

There are multiple places, but at the moment I’m focusing on the in-Pharo help. 
If you open the Help Browser there is a book called “Renraku Quality Rules”. 
Let me know if something is unclear or missing, as it is hard to write good 
documentation in one shot :)
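For reference, the book Yuriy mentions lives in the standard in-image help; `HelpBrowser` is a stock Pharo class, though where the Renraku book appears in the tree may depend on the image version:

```smalltalk
"Open the in-image Help Browser; the ‘Renraku Quality Rules’
book is listed among the available help books."
HelpBrowser open.
```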

>> 
>> At the moment I’m stuck while trying to make changes in Monkey, as it is 
>> really hard to understand how the quality checks are made there. @Guille 
>> maybe you can advise something.
>> 
>> In the old model, rules were responsible both for checking code and for 
>> storing the entities that violate them. In Renraku, rules are responsible 
>> only for checking; for each violation they produce a critique object that 
>> maps the rule to the entity that violates it. Critiques can also provide 
>> plenty of additional information, such as suggestions on how to fix the issue.
>> 
>> At the moment we can skip the critiques altogether, just run the rules and 
>> store the classes and methods that violate them. But at the moment I cannot 
>> see how Monkey is implemented, i.e., where the rules come from and what 
>> output should be provided.
> Well, so far I did not yet investigate all that part. I'm actually 
> experimenting on a new monkey implementation that has the following 
> objectives:
> - easy to configure and run locally
> - faster: it should be able to run build validations in parallel (e.g., in my 
> prototype the tests run in 2 minutes using 4 parallel Pharo images)
> - it should enforce the same process used for issue validation and integration
> - integrated with the bootstrap :)
> 
> Are you going to ESUG?

Yes

>> 
>> Cheers.
>> Uko
> 
> 




Re: [Pharo-dev] Moving Monkey quality checks to Renraku infrastructure

2016-08-05 Thread Guille Polito

Hi!


 Original Message 

Hi,

At the moment I am moving the Pharo quality tools to the Renraku model. This is 
a quality model that I’ve been working on and that has so far been used by 
QualityAssistant.

Cool, I'm interested in that. Do you have some examples or docs?


At the moment I’m stuck while trying to make changes in Monkey, as it is really 
hard to understand how the quality checks are made there. @Guille maybe you can 
advise something.

In the old model, rules were responsible both for checking code and for storing 
the entities that violate them. In Renraku, rules are responsible only for 
checking; for each violation they produce a critique object that maps the rule 
to the entity that violates it. Critiques can also provide plenty of additional 
information, such as suggestions on how to fix the issue.

At the moment we can skip the critiques altogether, just run the rules and 
store the classes and methods that violate them. But at the moment I cannot 
see how Monkey is implemented, i.e., where the rules come from and what output 
should be provided.
Well, so far I did not yet investigate all that part. I'm actually 
experimenting on a new monkey implementation that has the following 
objectives:

 - easy to configure and run locally
 - faster: it should be able to run build validations in parallel 
(e.g., in my prototype the tests run in 2 minutes using 4 parallel 
Pharo images)
 - it should enforce the same process used for issue validation and 
integration

 - integrated with the bootstrap :)

Are you going to ESUG?


Cheers.
Uko





Re: [Pharo-dev] Moving Monkey quality checks to Renraku infrastructure

2016-08-04 Thread Nicolai Hess
2016-08-04 18:39 GMT+02:00 Yuriy Tymchuk :

> Hi,
>
> At the moment I am moving the Pharo quality tools to the Renraku model. This
> is a quality model that I’ve been working on and that has so far been used by
> QualityAssistant.
>
> At the moment I’m stuck while trying to make changes in Monkey, as it is
> really hard to understand how the quality checks are made there. @Guille maybe
> you can advise something.
>
> In the old model, rules were responsible both for checking code and for
> storing the entities that violate them. In Renraku, rules are responsible only
> for checking; for each violation they produce a critique object that maps the
> rule to the entity that violates it. Critiques can also provide plenty of
> additional information, such as suggestions on how to fix the issue.
>
> At the moment we can skip the critiques altogether, just run the rules and
> store the classes and methods that violate them. But at the moment I cannot
> see how Monkey is implemented, i.e., where the rules come from and what
> output should be provided.
>
> Cheers.
> Uko
>

Hi Uko,

as far as I know (from looking at the CICommandLineHandler), the
CIValidator collects a set of rules when validating an issue. Look at
CIValidator class>>#pharo60.
The rules are
CISUnitTestsRule and
PharoCriticRules pharoIntegrationLintRule harden.
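Based only on the names Nicolai quotes, `CIValidator class>>#pharo60` might look roughly like the sketch below. This is a hypothetical reconstruction, not the actual source; in particular, the `rules:` accessor is an assumption:

```smalltalk
"Hypothetical reconstruction of CIValidator class>>#pharo60:
a class-side factory that wires the two quoted rules into a validator."
pharo60
	^ self new
		rules: {
			CISUnitTestsRule new.
			PharoCriticRules pharoIntegrationLintRule harden };
		yourself
```

`harden` presumably freezes the lint-rule configuration into a fixed set of rule instances, but that is a guess from the name.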


[Pharo-dev] Moving Monkey quality checks to Renraku infrastructure

2016-08-04 Thread Yuriy Tymchuk
Hi,

At the moment I am moving the Pharo quality tools to the Renraku model. This is 
a quality model that I’ve been working on and that has so far been used by 
QualityAssistant.

At the moment I’m stuck while trying to make changes in Monkey, as it is really 
hard to understand how the quality checks are made there. @Guille maybe you can 
advise something.

In the old model, rules were responsible both for checking code and for storing 
the entities that violate them. In Renraku, rules are responsible only for 
checking; for each violation they produce a critique object that maps the rule 
to the entity that violates it. Critiques can also provide plenty of additional 
information, such as suggestions on how to fix the issue.

At the moment we can skip the critiques altogether, just run the rules and 
store the classes and methods that violate them. But at the moment I cannot 
see how Monkey is implemented, i.e., where the rules come from and what output 
should be provided.

Cheers.
Uko