We have some cool matchers (Phexample, StateSpecs), some nice mocking
libraries (Mocketry, BabyMock), and Phexample's acknowledgement that tests
build on each other. The problem is that it's hard to cherry-pick and build one's
perfect test environment. For example:
- Phexample and Mocketry: their matching frameworks break each other.
- Phexample and BabyMock: each requires subclassing its own TestCase subclass,
so they can't be used together.
I wonder what the solution is. Maybe some registration scheme? It'd be nice
to be more coordinated here. As a first step, I manually merged BabyMock and
Phexample to produce the following (IMHO) gorgeous tests...
testNew
	self library should
		receive: #FMOD_System_Create:;
		with: FMOD_SYSTEM new;
		does: [ :h |
			h handle: 20.
			0 ].
	self library should
		receive: #FMOD_System_Init:;
		with: [ :a | a handle = 30 ];
		answers: 0.
	^ FmodSystem new

testNewSound
	| soundFile system |
	system := self given: #testNew.
	soundFile := FileLocator vmBinary.
	self library should
		receive: #FMOD_System_CreateSound:to:with:;
		with: soundFile fullName and: FMOD_SOUND new and: [ :h | h = system handle ];
		answers: 0.
	^ system newSoundFromFile: soundFile

testPlaySound
	| sound |
	sound := self given: #testNewSound.
	self library should
		receive: #FMOD_System:PlaySound:on:;
		with: sound system handle and: sound handle and: FmodChannel new;
		answers: 0.
	^ sound play

testChannelIsPlaying
	| channel |
	channel := self given: #testPlaySound.
	self library should
		receive: #FMOD_Channel_IsPlaying:storeIn:;
		with: channel and: NBExternalAddress new;
		does: [ :c :isPlaying | isPlaying value: 1 ].
	^ channel isPlaying
The tests... let's call them specifications... clearly state how the object
should talk to the FMOD library. The neat part of Phexample is that even
though each specification uses the result of the previous one, it's smart
enough not to fail the dependent specification when the #given: fails;
instead, it moves it into "expected failures". This is important because
otherwise a failure deep in the system would be hard to pinpoint, since it
could cause many specifications downstream to fail as well.
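To make the "expected failures" behavior concrete, here's a minimal sketch
(independent of the FMOD code, using only the #given: pattern shown above):
when the prerequisite example fails its assertion, the dependent example is
reported as an expected failure rather than as yet another red test.

testAnswer
	| answer |
	answer := 42.
	self assert: answer = 43.	"fails here"
	^ answer

testAnswerIsEven
	| answer |
	answer := self given: #testAnswer.	"prerequisite failed, so this example
	is moved into the expected failures instead of being run against a broken
	prerequisite"
	self assert: answer even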
n.b. In this case, I'm not sure that mocking is the best strategy and I may
end up using a hand-written stub, but I wanted to do a case study. One area
where I've really found mocks to shine is internal to one's system: I'm
writing an object and discover that it must talk to another, as-yet-unwritten
object, so I mock it out and in so doing define its API, which I then go
implement. Also, the mocks use a custom extension to BabyMock that passes the
arguments to #does:, so they will not run on vanilla BabyMock.
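
For comparison, a hand-written stub for the same collaboration might look
roughly like this. It's only a sketch: FmodLibraryStub is a hypothetical class
name, and the methods simply mirror the selectors and the 0 (success) return
values used in the specifications above.

FmodLibraryStub >> FMOD_System_Create: aSystem
	"Pretend the native call succeeded: hand back a fake handle and report success."
	aSystem handle: 20.
	^ 0

FmodLibraryStub >> FMOD_System_Init: aSystem
	^ 0

FmodLibraryStub >> FMOD_System_CreateSound: aPath to: aSound with: aSystemHandle
	^ 0

The stub is simpler, but it says nothing about which calls are expected or in
what order, which is exactly what the mock-based specifications capture.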
-----
Cheers,
Sean