On 02 Feb 2014, at 16:27, [email protected] wrote:

> Thanks for your feedback, Marcus.  A couple of thoughts inline...
> 
> Marcus Denker wrote:
>> 
>> On 02 Feb 2014, at 14:31, [email protected] wrote:
>> 
>>   
>>> Out of diligence and curiosity leading up to the Pharo3 release, I downloaded 
>>> build image 30733 (with PharoLauncher)
>>>     
>> Another wonderful thing is that tests are fine when run individually, but 
>> fail when running all tests.
>>   
> 
> It seems that when clicking on the failed test, the test is re-run and that 
> result is debugged - but sometimes the error doesn't recur.  It would be 
> great if the context of the error in the original execution were stored so 
> that the actual error could be inspected.  Indeed, (to dream) presuming 
> you're working off a fresh CI build, it would be cool if, using Fuel, in one 
> step you could send the context of the original error (with CI build 
> meta-info attached) to someone else's PharoLauncher, which would 
> automatically download the required CI build and launch with the Fuel file 
> loaded and ready to debug.
> 
On the CI server this is already done: failing tests are serialised with Fuel 
for debugging. In addition, the image is saved and can be downloaded (in the 
state after the test run).
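
For illustration, the generic Fuel entry points look roughly like this (a 
minimal sketch; the ZeroDivide example and the file name are hypothetical, 
and the CI wraps additional machinery around this to capture the full 
debugger stack):

    "Serialize the stack of a caught error to a file..."
    [ 1 / 0 ] on: ZeroDivide do: [ :err |
        FLSerializer serialize: err signalerContext copyStack toFileNamed: 'failure.fuel' ].

    "...and materialize it later, possibly in a fresh image."
    context := FLMaterializer materializeFromFileNamed: 'failure.fuel'.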

>> We need to think about automatic UI testing… 
>> 
>> Loading an external package changes *everything*. We do not have a release 
>> criterion that all tests are green after loading external packages.
>>   
> 
> No external packages loaded here.  
Launcher? It’s external in the sense that its code is not in the image 
when we run all tests on the build server.

> btw, What was your OS environment?
> 
Mac

>>> 3. Assuming Pharo3 will go through a Release Candidate phase, 
>>>     
>> 
>> Normally the idea is that we will just release. In the past we used an 
>> elaborate process, but the reality is that nobody looks at release 
>> candidates.
>> You can tell people five times that the release candidate will be released 
>> unchanged and that they should check it: they will not. They will download 
>> the release, though, and then complain that the “obvious” thing X is not 
>> fixed and that the people who managed the release process did 
>> everything wrong. Because “release” means bug free. By magic.
>> 
> I've seen that happening previously.  In trying to understand the reality of 
> this, perhaps...
> * "check if this is okay" is an open-ended question that doesn't have a 
> deliverable to drive people to action.  How do you know when it's time to 
> report success/failure? 
> * people do download and try it, but it's only shallow testing 
> * people only report by exception.  
> 
> Perhaps a more defined task such as "Run All Tests" is easier for people to 
> do (a single button press) and to report success/failure on.  PharoLauncher 
> also makes it easier and quicker to do this.  You don't need to call it a 
> Release Candidate.  Just have a specified version that multiple people run 
> in multiple environments.  Of course, that could also open up a can of 
> worms, making it hard from your end to deal with every test failure caused 
> by other people's environments.  There would need to be a pragmatic approach 
> about which ones to deal with to stay on schedule with the release date. 
> 

Yes… it’s important that we don’t burn ourselves while doing the release; that 
is why the idea was to move as many things upfront as possible 
(“the build is the release, unchanged”).
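
For reference, “running all tests” from a workspace could look roughly like 
this (a rough sketch over the standard SUnit API; collecting suites from 
TestCase allSubclasses is my assumption about what a “Run All Tests” button 
would do):

    "Run every concrete TestCase subclass and tally the combined result."
    | result |
    result := TestResult new.
    (TestCase allSubclasses reject: [ :each | each isAbstract ])
        do: [ :testClass | testClass suite run: result ].
    Transcript show: result printString; cr.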

        Marcus
