Re: Telemetry experiments need to be easier (on Nightly)

2016-05-10 Thread Kartikaya Gupta
err [1] is https://wiki.mozilla.org/Telemetry/Experiments and [2] is
https://wiki.mozilla.org/QA/Telemetry/Developing_a_Telemetry_Experiment
:)

On Tue, May 10, 2016 at 10:53 AM, Kartikaya Gupta  wrote:
> Just to close the loop on this, I went ahead and updated the wiki
> pages at [1] and [2] to reflect that some parts of the process are
> more optional than they originally seemed. I also tried to generally
> make the documentation simpler to follow and less
> redundant/contradictory. Finally, I filed bug 1271440 for the missing
> piece to allow developers to self-sign their experiment addons.
>
> Hopefully these changes will make it easier to build and deploy
> experiments in the future.
>
> Cheers,
> kats

Re: Telemetry experiments need to be easier (on Nightly)

2016-05-10 Thread Kartikaya Gupta
Just to close the loop on this, I went ahead and updated the wiki
pages at [1] and [2] to reflect that some parts of the process are
more optional than they originally seemed. I also tried to generally
make the documentation simpler to follow and less
redundant/contradictory. Finally, I filed bug 1271440 for the missing
piece to allow developers to self-sign their experiment addons.

Hopefully these changes will make it easier to build and deploy
experiments in the future.

Cheers,
kats


Re: Telemetry experiments need to be easier (on Nightly)

2016-04-20 Thread Jared Hirsch
Hi all,

I wrote a telemetry experiment last year[1], and I also found the process
challenging to navigate.

I found that many important details were undocumented, but were mentioned
in review comments, so I added what I could to the telemetry experiment
wiki page and to MDN.

My experiment gathered new data, and sent it to a new, one-off server
endpoint. This definitely required more discussion than the typical
experiment. That said, I do think there are ways the process could be
improved for all experiments.

Here are a few suggestions for improving the current process:
- document how to use Experiments.jsm on MDN
- document the schema of the Telemetry Experiment-specific manifest.json
file (a best-guess sketch follows this list)
- write and maintain at least one up-to-date, thoroughly commented example
experiment
- merge (parts of) the telex QA page[2] into the main page (the link is
buried in the main page)
- update and possibly merge the code docs[3] into the main wiki page
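
To make the manifest bullet concrete, here is the kind of example I'd
want the docs to contain. This is a from-memory sketch, so treat every
field name and value as a guess at the shape of the schema rather than
the schema itself:

    {
      "id": "my-experiment@experiments.mozilla.org",
      "startTime": 1460505600,
      "endTime": 1461715200,
      "maxActiveSeconds": 604800,
      "appName": ["Firefox"],
      "channel": ["nightly"],
      "minVersion": "48.0a1",
      "maxVersion": "49.*",
      "sample": 0.5
    }

(Epoch-second start/end times, an activity cap in seconds,
app/channel/version targeting, and the fraction of the population to
enroll.)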

To expand on the last bullet point: the code docs suggest using a special
Python build script to assemble the final .xpi, but that's no longer
accurate, as experiments now need to be signed. Further, each experiment
has to be *manually* signed, because the AMO signing tools will not
auto-sign an add-on with the special telex emType. A bug has been filed[4]
to enable auto-signing for experiments, but it hasn't moved since
November. Might be worth pushing on it.
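
For what it's worth, an .xpi is just a zip archive, so assembling one by
hand is a one-liner - something like this, with the file list depending
on what your experiment actually contains:

    cd my-experiment/
    zip -r ../my-experiment.xpi install.rdf bootstrap.js manifest.json

The manual signing round-trip then happens on the resulting file.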

Each of these missing pieces makes shipping an experiment a little bit
harder than it has to be. Added together, they can make the process as
a whole seem difficult for a first-time experiment author (at least, it
did for me, and evidently for Kats as well).

I hope these suggestions are helpful.

Cheers,

Jared


[1] https://bugzilla.mozilla.org/show_bug.cgi?id=1174937
[2] https://wiki.mozilla.org/QA/Telemetry
[3]
http://hg.mozilla.org/webtools/telemetry-experiment-server/file/tip/README.rst
[4] https://bugzilla.mozilla.org/show_bug.cgi?id=1220097


Re: Telemetry experiments need to be easier (on Nightly)

2016-04-20 Thread Kartikaya Gupta
On Wed, Apr 20, 2016 at 10:26 AM, Benjamin Smedberg wrote:
> The goal of this is for experiments to be fairly lightweight.
>
> Can you talk about where the problems were? The only signoffs that are
> currently required are code review (no surprise there) and
> acknowledgement from a release driver.

This sounds reasonable, but the page at
https://wiki.mozilla.org/Telemetry/Experiments (taken at face value,
which is what I did, as this was my first foray into the process)
indicates otherwise. Perhaps most of my issues could be resolved just
by updating the documentation on that page. For example, it says
"Product approval is required to run an experiment" and is unclear
about what counts as "user-facing". It also says to talk to you
*before* building an experiment, which I did (bug 1251052 comment 1),
only to find out that the step was not necessary, so that added
unnecessary latency. The doc also says "QA should verify the
experience on the staging server", so I went through that process - it
was almost no work on my part, since QA already had a template for it,
but it still took nonzero time. The addon signing step is also not yet
automated, as far as I could tell, even though the bug referenced in
the doc is marked RESOLVED FIXED, so that adds another dependency and
a round-trip to somebody who can sign it.

> For pref flips in particular, we've talked about extending the
> experiment system so that you don't have to write an addon at all:
> that you can just specify pref values in the experiment manifest. That
> would require engineering work and a little bit of new signing work
> that is currently not planned; but if somebody wants to do that work,
> I would be willing to mentor.

This sounds great and would be really useful. If it's not a huge
amount of work I would be willing to take it on. Is there a bug on
file for it?

> Data review is required only if an experiment collects new data. My
> goal is for this to be fast and straightforward, but IIRC it wasn't
> necessary at all for your recent experiment. There is no legal review
> required for experiments unless I raise a question during data review.

Again, the wiki page should state this more explicitly, for the
benefit of people who are doing an experiment for the first time.

> There is no explicit QA "approval" process required: clearly we don't
> want to ship broken code, so we should use normal good judgement about
> how to test each particular experiment, but that should not be a
> high-process thing by default.

Ditto - the wiki page should be clarified. I'm happy to go and update
the page to reflect what you've said here, provided you're willing to
review my changes to make sure I don't go overboard :)

Cheers,
kats



Re: Telemetry experiments need to be easier (on Nightly)

2016-04-20 Thread Benjamin Smedberg
The goal of this is for experiments to be fairly lightweight.

Can you talk about where the problems were? The only signoffs that are
currently required are code review (no surprise there) and
acknowledgement from a release driver.

For pref flips in particular, we've talked about extending the
experiment system so that you don't have to write an addon at all:
that you can just specify pref values in the experiment manifest. That
would require engineering work and a little bit of new signing work
that is currently not planned; but if somebody wants to do that work,
I would be willing to mentor.
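
To sketch what I mean (hypothetical - none of this exists today), the
experiment's manifest entry could carry the pref values directly, and
the client would flip them for the enrolled population and restore them
when the experiment ends, with no addon involved:

    {
      "id": "apz-tuning@experiments.mozilla.org",
      "channel": ["nightly"],
      "sample": 0.5,
      "prefs": {
        "some.feature.enabled": true,
        "some.tuning.threshold": 250
      }
    }

The new signing work would presumably be about establishing trust in
that prefs block, since there would no longer be a signed addon
covering it.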

Data review is required only if an experiment collects new data. My
goal is for this to be fast and straightforward, but IIRC it wasn't
necessary at all for your recent experiment. There is no legal review
required for experiments unless I raise a question during data review.

There is no explicit QA "approval" process required: clearly we don't
want to ship broken code, so we should use normal good judgement about
how to test each particular experiment, but that should not be a
high-process thing by default.

In what ways does your experience differ from the ideal? What can we
change to make this less painful?

--BDS




Re: Telemetry experiments need to be easier (on Nightly)

2016-04-19 Thread Jet Villegas
+1 for streamlining Telemetry deployment

I think we'd still want to:
1. broadcast when experiments are shipping, with a specific
start/end/goal and what data is collected
2. define scope: Nightly/Aurora/Beta (with a higher approval bar for
each), plus Desktop/Mobile/Other
3. track bugs that distinguish the experiment and control groups

--Jet




Telemetry experiments need to be easier (on Nightly)

2016-04-19 Thread Kartikaya Gupta
(Cross-posted to dev-platform and release-management)

Hi all,

Not too long ago I ran a telemetry experiment [1] to figure out how to
tune some of our code to get the best in-the-wild behaviour. While I
got the data I wanted, I found the process of getting the experiment
going very heavyweight, as it involved getting all sorts of approvals
and reviews. Going through that process was more time-consuming than I
would like, and it has put me off doing further experiments of a
similar nature. However, this means that the decisions I make are going
to be less data-driven and more guesswork, which is not good for
obvious reasons.

What I would like to see is a simplified process for telemetry
experiments on Nightly, making it easier to flip a pref on 50% of the
population for a week or two and get some useful data out of it. It
seems to me that many of the approvals (QA, RelMan, Legal, Product)
should not really be needed for this kind of simple temporary
pref-flip, assuming the necessary data collection mechanisms are
already in the code. Does anybody have any objections to this, or have
other suggestions on how to streamline this process a bit more?
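
To illustrate how little code such an experiment involves: the entire
payload is typically a bootstrap.js along the lines of the sketch
below. The pref name is made up, and the experiment framework handles
enrollment and expiry, so the addon only flips the pref and puts it
back:

    // bootstrap.js for a hypothetical pref-flip experiment addon.
    const { utils: Cu } = Components;
    Cu.import("resource://gre/modules/Services.jsm");

    const PREF = "example.subsystem.fancy.enabled"; // made-up pref

    let hadUserValue = false;
    let oldValue = false;

    function startup(data, reason) {
      // Remember any existing user value so shutdown can restore it.
      hadUserValue = Services.prefs.prefHasUserValue(PREF);
      if (hadUserValue) {
        oldValue = Services.prefs.getBoolPref(PREF);
      }
      Services.prefs.setBoolPref(PREF, true);
    }

    function shutdown(data, reason) {
      // Experiment over (or addon disabled): restore the previous state.
      if (hadUserValue) {
        Services.prefs.setBoolPref(PREF, oldValue);
      } else {
        Services.prefs.clearUserPref(PREF);
      }
    }

    function install(data, reason) {}
    function uninstall(data, reason) {}

That is the thing the current process wraps in all of the above
approvals.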

To be clear, I'm not suggesting we do away with these approvals
entirely, I just want to see more nuance in the process to determine
when they are *really* required, so that they don't slow us down
otherwise.

Cheers,
kats

[1] https://wiki.mozilla.org/Telemetry/Experiments