Jason,
I believe our P1 lists are in agreement; I have updated the list below to
reflect your concerns.
Packaged Apps:
* [P1] Install/launch/uninstall signed/unsigned packaged app
* [P1] Install failure for packaged app not signed with type privileged
(Security)
* [P1] Install failure for packaged app with type certified (Security)
* [P1] Cancel download of packaged app
* [P1] Restart download of packaged app
* [P>1] Download failure for packaged app for having a bad packaged app zip
Updated Apps:
* [P1] Updated Manifest
* [P1] Updated Manifest with preloaded appcache
* [P1] General pieces of download portions of API on install/update
* [P1] Web activities app install/update
Preloaded Appcache:
Appcache:
* [P1] Install/launch/uninstall hosted app with appcache
* [P>1] Download failure for hosted app with appcache for running out of space
* [P1] Restart download of hosted app with appcache
Hosted Apps:
* [P1] Install/launch/uninstall hosted app
* [P1] Install failure for hosted app with type as privileged or certified
* [P1] Cancel download of hosted app with appcache
* [P>1] Download failure for hosted app with appcache for having a bad appcache
manifest
-David
----- Original Message -----
From: "Jason Smith" <[email protected]>
To: "David Clarke" <[email protected]>
Cc: "Fernando Jiménez" <[email protected]>, "Fabrice Desre" <[email protected]>, "Andrew McKay" <[email protected]>, "Gregor
Wagner" <[email protected]>, "dev-b2g" <[email protected]>, "Julien Wajsberg" <[email protected]>, "David Chan"
<[email protected]>
Sent: Tuesday, May 28, 2013 10:33:50 PM
Subject: Re: Platform Apps Automation Testing
Hi David,
I think your feedback makes sense. Some final comments:
When I refer to download API, I'm referring to what was originally defined here
- https://bugzilla.mozilla.org/show_bug.cgi?id=790558 , now included in this
file -
http://hg.mozilla.org/mozilla-central/file/495b385ae811/dom/interfaces/apps/nsIDOMApplicationRegistry.idl
. In this API, I mean things such as the download attributes (e.g.
downloadAvailable, downloading), the event handlers (e.g. ondownloadsuccess),
and the download operations (e.g. checkForUpdate, download, cancelDownload).
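To make that surface concrete, here is a hedged JavaScript sketch of how a consumer drives the download portion of the API. The attribute, handler, and operation names are the ones listed above; the mock app factory and its synchronous behavior are invented purely for illustration and do not reflect the real asynchronous, DOMRequest-based Gecko implementation.

```javascript
// Illustrative mock of the download portion of a DOMApplication object
// (downloadAvailable, downloading, ondownloadsuccess, checkForUpdate,
// download, cancelDownload). NOT the real Gecko implementation: the real
// API is asynchronous; this sketch is synchronous so the flow is visible.
function makeMockApp(updateOnServer) {
  const app = {
    downloadAvailable: false,
    downloading: false,
    ondownloadsuccess: null,
    ondownloaderror: null,
    checkForUpdate() {
      // Real code would hit the network; here we just consult a flag.
      app.downloadAvailable = updateOnServer;
      return app.downloadAvailable;
    },
    download() {
      if (!app.downloadAvailable) {
        if (app.ondownloaderror) app.ondownloaderror({ name: "NO_DOWNLOAD_AVAILABLE" });
        return;
      }
      app.downloading = true;
      // Pretend the fetch completed successfully.
      app.downloading = false;
      app.downloadAvailable = false;
      if (app.ondownloadsuccess) app.ondownloadsuccess();
    },
    cancelDownload() {
      // A cancelled download leaves the update available for restart.
      app.downloading = false;
    },
  };
  return app;
}

// Typical consumer flow: check for an update, download if one is pending.
const app = makeMockApp(true);
app.ondownloadsuccess = () => console.log("update applied");
if (app.checkForUpdate()) {
  app.download();
}
```

Under this sketch, the restart-download cases in the lists below are just a second call to download() after cancelDownload(), since downloadAvailable stays set.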
To address your feedback about priorities, I conducted a quick study of the
blocker bugs (either basecamp+ or tef+) that showed up in core dom:apps. Here
are the areas that came up as noticeable:
* Web activities registration on app install and changes during app
updates - evidence of regressions here a few times
* Bad attribute state with download attributes (e.g. downloadAvailable)
depending on the state of the app install/update was evident across a few
blocker bugs
* We need assurance that an app cannot be renamed on update, even
through a locale override - a few blocker bugs were present in this area
* On a successful app update, we need to ensure that none of the
properties of the old version of the app are still present - some bugs were
present here
* Some blocker bugs involved checkForUpdate failing to find an update
when one was available, and vice versa (reporting an update when there wasn't
one) - seen on a few bugs
* General bugs in the cancel-download and restart-download cases
(sanity flows)
* The insufficient-storage error at install or download time triggering
correctly (evident on a few blocker bugs)
* Sanity flow installing a privileged app to ensure it works (but nothing
more than that appears to be critically needed)
* Some bugs were evident with preinstalled apps in how they are installed
and updated (this is a different flow to how the apps are installed from the
web, but out of scope here since it's part of the build system)
* A couple of bugs on mini-manifest to webapp manifest matching (certain
properties such as name have to match on both manifests)
* General bugs about the one app per origin rule
* Correct app status & permissions setup on install of an app (although
this is out of scope for this automation being targeted here)
Based on the analysis above, I would say the critical area to target is the
general download portion of the API (attribute state, completed update,
checkForUpdate, cancelDownload, restartDownload), both on initial install and
on update of an existing app. For download errors, the only area that looks
worth automating is the insufficient-storage case on install vs. download. For
the webapp & mini manifests, the only interesting areas I see are web
activities, the cannot-rename-apps-on-update rule, and mini-manifest to webapp
manifest matching. I'd also slip in sanity tests for a privileged app, and I'd
double-check we've got enough coverage of the one-app-per-origin rule.
That paragraph then translates to:
* [P1] General pieces of download portions of API on install/update
* [P1] Sanity privileged app install/update test
* [P1] Web activities app install/update
* [P2] Insufficient storage error handling
* [P2] Cannot rename apps on update
* [P3] Mini-manifest to webapp manifest matching
* [P3] One app per origin rule
Note - Web activities hits the P1 level mainly because, if that breaks, the
phone can experience severe failures, given that many foundational pieces of
the phone rely on web activities.
Does that analysis align with your understanding?
Sincerely,
Jason Smith
Desktop QA Engineer
Mozilla Corporation
https://quality.mozilla.com
On 5/28/2013 9:31 PM, David Clarke wrote:
Comments highlighted by section:
Could you point me to the spec for the "download" API? I think I'm unfamiliar with it.
* Update hosted app, hosted app with appcache, packaged app
<--- An updated hosted app is just caused by updating the endpoint; updating
appcache is really an HTML5 feature, not necessarily an apps-based feature, so
I don't believe this is in scope.
Drilling into the specifics of appcache isn't important for this, but ensuring
its integration into the download API definitely is. That's a fundamental
piece of the download portion of the API, so it's definitely in scope. If that
breaks (which it has in the past), we're in trouble - including potentially
breaking partner apps in the process.
** Generally, the mechanism in place is that you call app.checkForUpdate(),
and if the downloadAvailable flag has been set, you would then call
navigator.mozApps.install.
If you were writing automation, it doesn't really make sense to test the
"update" failure scenario as you are suggesting, as it is the same as the
install failure. I understand the scenario you are suggesting, but I think it
makes more sense to treat the update as checkForUpdate(). The general failure
scenarios are all covered by the install failure scenarios, which I listed
below as important.
* Fail to update hosted app, hosted app with appcache, packaged app
<-- unsure what failure to update hosted app means?
Failure to update means you couldn't complete the update due to an error. The
download API needs to fail gracefully on a failed update, not cause data loss,
and fail when you expect it to fail. An example test case would be ensuring
that an attempt to update a hosted app from app type web --> certified fails
during download.
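A minimal sketch of the invariant behind that example test case (the rank table and function name here are invented for illustration; Gecko's real check lives in its update code path):

```javascript
// Hypothetical helper: an app may never gain privilege on update, and
// type "certified" may never arrive via the web update path at all.
const APP_TYPE_RANK = { web: 0, privileged: 1, certified: 2 };

function isAllowedTypeOnUpdate(installedType, updatedType) {
  if (updatedType === "certified") return false;
  return APP_TYPE_RANK[updatedType] <= APP_TYPE_RANK[installedType];
}

// The web --> certified update described above must be refused.
console.log(isAllowedTypeOnUpdate("web", "certified")); // false
```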
** Unsure what the download api is.
<-- failure to update a hosted app with appcache should be covered by appcache
testing.
Actually, there are integration points in the download API that integrate
directly with appcache logic, so appcache testing doesn't cover this entirely.
You need to ensure that the download API integrates correctly with appcache.
The specifics of appcache are out of scope, though.
<-- Failure to update a packaged app is reasonable, but would be considered a
P3. The failure scenario for updating a packaged app would be a bad packaged
app / directory structure.
I don't think all error cases are low priority, especially because error
handling is a fundamental piece of the download API, and certain error cases
can have severe implications if we don't fail correctly when we should. Other
example failure cases include:
** General expected functionality would be the primary goal; the next
iteration would be to re-review and prioritize.
** Do we anticipate that apps will be poorly signed? There are plenty of
things that could go wrong, and I'm in agreement that those tests need to be
worked on. However, the first goal is to make sure we have good general
coverage across all areas.
* Network connection loss
* Bad signing
* Incorrect app type on update (web --> privileged for hosted, web -->
certified for hosted/packaged)
* etc. (for the many things that can incorrectly happen on an update)
Some of these bad updates have put us into a bad state in the past. A key
characteristic of the download API is ensuring that you know when you fail,
get a failure when you should, can recover via a restart, and don't suffer
data loss. Error cases are actually quite common with packaged apps, given
their complexity. If we don't fail when we should, that could be quite bad in
certain cases - especially if, say, I allowed an update of a hosted app from
app type web to certified, or an update of a signed privileged packaged app to
another packaged app without a proper signature. Security holes open up if
those cases end up not failing.
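The signature case can be sketched the same way. applyPackagedUpdate and verifySignature below are hypothetical stand-ins (the real verification lives in Gecko's package-signing code); the sketch only illustrates the invariant that a bad signature must fail loudly and leave the installed app untouched.

```javascript
// Hypothetical sketch: an update to a privileged packaged app must carry a
// valid signature, or the update fails and the installed version survives.
function applyPackagedUpdate(installedApp, updatePackage, verifySignature) {
  if (installedApp.type === "privileged" && !verifySignature(updatePackage)) {
    // Fail loudly, leave the old app in place: no silent privilege holes.
    return { applied: false, error: "INVALID_SIGNATURE", app: installedApp };
  }
  return {
    applied: true,
    app: { ...installedApp, version: updatePackage.version },
  };
}

// An unsigned package pretending to update a privileged app is rejected.
const result = applyPackagedUpdate(
  { type: "privileged", version: "1.0" },
  { version: "2.0", signed: false },
  (pkg) => pkg.signed
);
console.log(result.applied); // false
```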
* Restart update of hosted app, hosted app with appcache, packaged app
<-- Unsure what an update of a hosted app means - are you saying a manifest
update or a content update?
** Some of this I agree with; however, it is handled by permissions testing -
David Chan should be able to speak to this, and I imagine that is where it
would fall. I'd love to hear what he is doing in this regard as well.
* [general hosted app case] Updating a hosted app without appcache means
changing the app manifest
* [appcache involved for hosted app] Updating the appcache for a hosted app
preloading appcache
(This doesn't look to be in scope for this round.)
* Other higher-level areas of analysis could focus on:
* Webapp manifest analysis
* Drilling at the specifics of the download API
* Mini-manifest analysis
I wouldn't necessarily throw this out without analyzing it. The automation
here drills at the Gecko API level, which, if it doesn't work correctly, ends
up affecting Gaia negatively, as the many bugs we've seen show. However, there
are certainly areas of the API that end users aren't affected by as much. The
manifest pieces (webapp manifest & mini-manifest) additionally play an
important role in how you install and update an app, so those flows are
important to analyze (an example of a really bad bug: an app update that
changed a web activity in the manifest caused the phone to fail to start up).
I'd still apply perspectives on these pieces to see what's needed here.
** Agreed with you here. But this test case does seem more of an edge case
than a target for automation.
Sincerely,
Jason Smith
Desktop QA Engineer
Mozilla Corporation
https://quality.mozilla.com
On 5/25/2013 12:37 AM, David Clarke wrote:
Jason,
Great list. I went through and organized things into the areas outlined
before. Tests whose meaning I was unsure of, I left out. I believe Fernando is
working on https://bugzilla.mozilla.org/show_bug.cgi?id=821589 , so I'm adding
him directly to the thread.
Packaged Apps:
* Install/launch/uninstall packaged app
* Install failure for packaged app not signed with type privileged
* Install failure for packaged app with type certified
* Download failure for packaged app for running out of space (P3)
* Cancel download of packaged app
* Restart download of packaged app
* Download failure for packaged app for having a bad packaged app zip (P3)
* Install/launch/uninstall/update signed privileged packaged app
Updated Apps:
* Updated Manifest
* Updated Manifest with preloaded appcache
* Updated packaged app install
Preloaded Appcache:
Appcache:
* Install/launch/uninstall hosted app with appcache
* Download failure for hosted app with appcache for running out of space (P3)
* Restart download of hosted app with appcache
Hosted Apps:
* Install/launch/uninstall hosted app
* Install failure for hosted app with type as privileged or certified
* Cancel download of hosted app with appcache
* Download failure for hosted app with appcache for having a bad appcache
manifest (P3)
Notifications:
* Verification that notification of installs / upgrades / pause / cancel /
restart events are propagated correctly
Permissions:
-------------------------
Areas where I feel we need more clarity before we add them to the list.
* Update hosted app, hosted app with appcache, packaged app
<--- An updated hosted app is just caused by updating the endpoint; updating
appcache is really an HTML5 feature, not necessarily an apps-based feature, so
I don't believe this is in scope.
* Fail to update hosted app, hosted app with appcache, packaged app
<-- unsure what failure to update hosted app means?
<-- failure to update a hosted app with appcache should be covered by appcache
testing.
<-- Failure to update a packaged app is reasonable, but would be considered a
P3. The failure scenario for updating a packaged app would be a bad packaged
app / directory structure.
* Restart update of hosted app, hosted app with appcache, packaged app
<-- Unsure what an update of a hosted app means - are you saying a manifest
update or a content update?
(This doesn't look to be in scope for this round.)
* Other higher-level areas of analysis could focus on:
* Webapp manifest analysis
* Drilling at the specifics of the download API
* Mini-manifest analysis
--David
----- Original Message -----
From: "Jason Smith" <[email protected]>
To: "David Clarke" <[email protected]>
Cc: "Fabrice Desre" <[email protected]>, "Andrew McKay" <[email protected]>, "Gregor Wagner" <[email protected]>, "Fernando Jiménez" <[email protected]>, "dev-b2g" <[email protected]>, "Julien Wajsberg" <[email protected]>
Sent: Thursday, May 23, 2013 6:26:29 PM
Subject: Re: Platform Apps Automation Testing
My comments in summary:
* I'm in agreement with the approach you are suggesting in #1
* I'd suggest getting feedback from Julien on how Gaia integrates with the
Gecko layer for the restart and cancel workflows, and simulating what he does
there. We should aim for sanity tests here, as the restart case (especially
with packaged apps) is actually quite common (many problems can cause a
download to fail)
* David Chan is already focusing on app permissions, so I'd keep that out of
scope for now
* The next helpful step in the analysis would be to take the end-to-end
analysis you've done and apply a Gecko perspective to it, referencing the
underlying APIs involved
* We need signed privileged packaged app test cases in this list (e.g. install
a privileged app)
* Mostly in agreement with the theme of the priorities. Specifically, I'd
suggest these themes are important to consider when comparing with your list:
* Install/launch/uninstall hosted app
* Install failure for hosted app with type as privileged or certified
* Install/launch/uninstall packaged app
* Install failure for packaged app not signed with type privileged
* Install failure for packaged app with type certified
* Install/launch/uninstall hosted app with appcache
* Download failure for packaged app for running out of space
* Download failure for hosted app with appcache for running out of space
* Cancel download of hosted app with appcache
* Cancel download of packaged app
* Restart download of hosted app with appcache
* Restart download of packaged app
* Download failure for packaged app for having a bad packaged app zip
* Download failure for hosted app with appcache for having a bad appcache
manifest
* Update hosted app, hosted app with appcache, packaged app
* Fail to update hosted app, hosted app with appcache, packaged app
* Restart update of hosted app, hosted app with appcache, packaged app
* Install/launch/uninstall/update signed privileged packaged app
* Other higher-level areas of analysis could focus on:
* Webapp manifest analysis
* Drilling at the specifics of the download API
* Mini-manifest analysis
Sincerely,
Jason Smith
Desktop QA Engineer
Mozilla Corporation
https://quality.mozilla.com
On 5/23/2013 3:08 PM, Jason Smith wrote:
==> Moving to dev-b2g (don't think there's anything confidential here).
+Fernando
+Julien
Sincerely,
Jason Smith
Desktop QA Engineer
Mozilla Corporation
https://quality.mozilla.com
On 5/23/2013 2:26 PM, David Clarke wrote:
Navigator.mozApps Developers:
I have been reviewing the automation suites that are currently in place, and
wanted to propose a plan for organizing test cases going forward, and preparing
for 1.1 / 1.2 features.
#1 Cleanup / Stabilization:
- http://mzl.la/Ywh6rL . Some tests in the current mozApps chrome test suite
are currently failing intermittently.
The current mozApps test suite is not run on the B2G emulator, as it is a
mochitest-chrome based set of tests.
My proposal is to move the current mozApps test suite to a mochitest-plain
based suite and use SpecialPowers.autoConfirmAppInstall, which was not
available when the chrome-based tests were written.
The double benefit here is that we remove one area of intermittent failure
and also gain the ability to run the suite on the B2G emulator.
(IMHO, this is the main cause of the test timeouts you are seeing in the
above link.)
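For illustration, here is a hedged sketch of what one migrated test could look like. SpecialPowers.autoConfirmAppInstall and navigator.mozApps.install are the real entry points named in the proposal, but the tiny stand-ins below (and the example manifest URL) are invented so the control flow is visible outside the mochitest harness; they are not the real harness objects.

```javascript
// Stand-ins for the mochitest harness and the mozApps API, so this sketch
// runs standalone. In a real mochitest-plain file, SimpleTest, ok(), and
// SpecialPowers come from the harness and navigator.mozApps from the
// browser; everything after the stand-ins is the test body.
const SimpleTest = {
  finished: false,
  waitForExplicitFinish() {},
  finish() { this.finished = true; },
};
function ok(cond, msg) { if (!cond) throw new Error(msg); }

const SpecialPowers = {
  // The real helper auto-accepts the install prompt, then runs the callback.
  autoConfirmAppInstall(cb) { cb(); },
};

const navigator = {
  mozApps: {
    install(manifestURL) {
      // Fire success as soon as a handler is attached (the real DOMRequest
      // is asynchronous; this keeps the sketch synchronous and testable).
      const req = {
        result: { manifestURL },
        set onsuccess(cb) { cb(); },
      };
      return req;
    },
  },
};

// Test body: with the install prompt auto-confirmed, a plain-content test
// can drive an install end to end without chrome privileges.
SimpleTest.waitForExplicitFinish();
SpecialPowers.autoConfirmAppInstall(() => {
  const req = navigator.mozApps.install("https://example.com/manifest.webapp");
  req.onsuccess = () => {
    ok(req.result.manifestURL.endsWith("manifest.webapp"), "app installed");
    SimpleTest.finish();
  };
});
```

On the emulator the stand-ins disappear and the same body runs under mochitest-plain, which is the point of the proposed migration.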
#2 Extend coverage:
- Identify areas of coverage that are needed and high priority, and attempt to
write test cases for them.
- http://bit.ly/16aGkPS I have imported the manual test cases into the above
Google spreadsheet and attempted to rank them by both priority and difficulty,
to get some sense of which test cases will be both easy and high priority to
automate.
- Good news: packaged app test cases are on the way, but there are lots of
test cases that are not on the automation roadmap.
Example: stop/cancel and restart of packaged app installs.
It would be good to separate the test cases listed above and discuss from a
feasibility standpoint what is capable of being supported, and then make sure
the hooks are in the platform that will allow for the automation to be written.
The general priority listing I have organized around:
P1: Primary user flows
P2: Secondary user flows, a little further from the beaten path
P3: Edge cases and error conditions that are not explicitly part of a user story.
I am only considering P1 issues for this round, but I would like to start
breaking the list down further and see what would be beneficial for Gecko
automation.
If there are specific areas that we can agree on for automation, then we can
focus around those specific use cases / test cases. The main areas for apps
testing are listed below, please feel free to update with any thoughts or
specific test cases that you think are relevant. You can also edit the google
spreadsheet that is linked to from above, and add comments wherever necessary.
Packaged Apps:
Updated Apps:
Preloaded Appcache:
Appcache:
Hosted Apps:
Notifications:
Permissions:
Thanks all,
--David
_______________________________________________
b2g-internal mailing list [email protected]
https://mail.mozilla.org/listinfo/b2g-internal