I have the same question.

As we know, there is a GUI automation testing tool -- MonkeyRunner. It
provides functions such as key presses, text input, and touch events,
which are all very good support for automated testing.
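For reference, my current scripts use the documented MonkeyRunner API roughly like this (a minimal sketch, to be run with the SDK's `monkeyrunner` tool rather than plain Python; the coordinates and the typed text are just placeholders):

```python
# monkeyrunner script (Jython) -- run via the SDK's `monkeyrunner` tool.
from com.android.monkeyrunner import MonkeyRunner, MonkeyDevice

# Wait for a device or emulator to connect over adb.
device = MonkeyRunner.waitForConnection()

# Touch is coordinate-based only: tap the point (200, 400).
device.touch(200, 400, MonkeyDevice.DOWN_AND_UP)

# Keyboard support: press a named key, or type a string.
device.press('KEYCODE_MENU', MonkeyDevice.DOWN_AND_UP)
device.type('hello world')

# Take a screenshot, e.g. for manual verification afterwards.
snapshot = device.takeSnapshot()
snapshot.writeToFile('screen.png', 'png')
```

As question 2 below illustrates, every `touch()` call needs raw pixel coordinates that we have to find out by hand.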

However, some things still puzzle me a lot, so I am seeking help from
all of you here:
1. As in most GUI automation testing, locating objects on the screen
and checking whether they exist are very important. Is there any way to
do this with MonkeyRunner? Or can we extend it with such a function
ourselves, and if so, how?
2. Currently we cannot touch an object by its name, id, or any other
attribute; we can only touch a location (by X,Y coordinates). For
common GUI automation this is very inflexible, and it is a lot of
trouble for users to obtain the location info. Is there any way to
resolve this problem, or are there any plans to enhance it?
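To make the trouble concrete: the only workaround I know today is a table of hard-coded pixel coordinates per screen. A slightly more robust variant (a plain-Python sketch; the widget names and fractions below are made up for illustration) stores resolution-independent positions and scales them to the actual display size, which a monkeyrunner script could obtain via `device.getProperty('display.width')` and `device.getProperty('display.height')`:

```python
# Sketch: map logical widget names to relative screen positions,
# then scale to the device's real resolution at runtime.
# The names and fractions here are made-up examples.
WIDGETS = {
    'ok_button':    (0.50, 0.90),  # centered horizontally, near the bottom
    'search_field': (0.50, 0.10),
    'menu_icon':    (0.95, 0.05),
}

def locate(name, width, height):
    """Return the pixel (x, y) for a named widget on a width x height screen."""
    fx, fy = WIDGETS[name]
    return int(round(fx * width)), int(round(fy * height))

# In a monkeyrunner script the result would feed
# device.touch(x, y, MonkeyDevice.DOWN_AND_UP).
print(locate('ok_button', 480, 800))  # -> (240, 720)
```

This still breaks whenever the layout changes, which is exactly why selecting objects by name or id would be so much better.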

Thanks!

-- 
You received this message because you are subscribed to the Google
Groups "Android Developers" group.
