Re: [Tails-dev] Automated tests specification

2015-09-02 Thread anonym
On 09/01/2015 06:57 PM, anonym wrote:
> On 09/01/2015 12:04 PM, intrigeri wrote:
>> bertagaz wrote (28 Aug 2015 14:24:51 GMT) :
>>> and might take quite a bit of disk space to store.
>>
>> ... and smaller (#10001). I'm curious how much space a full set of
>> test suite videos take.
> 
> I'll try to remember to collect this info in my next full run.

I got a video that was 309 MiB with test/6094-improved-snapshots
checked out on an isotester. I don't think the errors I got affected the
runtime (or the amount of change on the screen, which matters more w.r.t.
video compression). Anyway, I believe the per-(failed-)scenario
approach is what we really want; videos of the passed scenarios are just
distracting and wasteful.
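
(For illustration only: the real test suite hooks are cucumber/Ruby, but
the keep-only-on-failure idea could look roughly like this Python sketch.
The recorder invocation and the scenario runner below are assumptions,
not our actual code.)

    import os
    import subprocess

    def run_scenario_with_capture(scenario, video_path):
        # Start one capture per scenario; ffmpeg/x11grab is just an
        # example recorder, not necessarily what the suite uses.
        recorder = subprocess.Popen(
            ["ffmpeg", "-loglevel", "quiet", "-f", "x11grab",
             "-i", os.environ.get("DISPLAY", ":0"), video_path])
        try:
            passed = scenario.run()  # hypothetical scenario runner
        finally:
            recorder.terminate()
            recorder.wait()
        if passed:
            # Videos of passed scenarios are distracting and wasteful,
            # so keep the file only when the scenario failed.
            os.unlink(video_path)
        return passed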

Failing Scenarios:
cucumber features/time_syncing_bridges.feature:29 # Scenario: Clock way
in the past in bridge mode
cucumber features/time_syncing_bridges.feature:45 # Scenario: Clock way
in the future in bridge mode
cucumber features/tor_stream_isolation.feature:48 # Scenario: Explicitly
torify-wrapped applications are using the default SocksPort
cucumber features/torified_git.feature:22 # Scenario: Cloning git
repository over SSH

186 scenarios (4 failed, 182 passed)
1345 steps (4 failed, 5 skipped, 1336 passed)
348m33.765s

It's interesting to note that this runtime is not different from what I
normally get when running this branch on the isotesters (I said "~350
minutes" earlier in this thread), which makes me wonder if the main
concern of #10001, that --capture increases the runtime (even when
enough CPU is available), is actually valid. I'm gonna investigate that
now...

Cheers!



Re: [Tails-dev] [Tails-ux] Update on Firefox extension

2015-09-02 Thread tchou
sajolida:
> Hi,

Hey,

> Meta: putting tails-ux in copy for the info but please answer on
> tails-dev only.
> 
> I wanted to clarify a bit what's going on regarding the development of
> the Verification Extension (#7552).
> 
> In terms of specs, I realized that we were not taking into account the
> case where the download might get interrupted either on the client side
> by a faulty network connection, or on the server side by a faulty
> mirror. That's now #9717 and I proposed a mockup:
> 
> https://labs.riseup.net/code/attachments/download/922/extension-20150828-resume.pdf
> 
> Tchou, can you have a look (as I couldn't assign the ticket to both you
> and Maone)?
Looks nice.


> In terms of HTML+CSS, we're still waiting for tchou to review and mockup
> the code on /blueprint/bootstrapping/extension/prototype. That's #9384.
> And to draft a CSS. That's #9385. Tchou, any ETA on that?
I worked on it. It's in good shape. You can have a look at my repo, but
it's not really ready for QA yet; I need a few more hours (it should
happen soon).


---
tchou



Re: [Tails-dev] Update on Firefox extension

2015-09-02 Thread Giorgio Maone
On 31/08/2015 20:55, sajolida wrote:
> Giorgio Maone:
>
>> The main roadblock was Mozilla finalizing its add-ons migration strategy
>> to the Electrolysis (e10s) multi-process & sandboxing architecture.
>> Without an ultimate public decision, which has been deemed "imminent"
>> for all the past month (see
>> https://labs.riseup.net/code/issues/8567#note-7 ), it was hard to even
>> decide which of the 4 different technical approaches to develop Firefox
>> add-ons was the best for this project:
>>
>> 1. XUL/XPCOM, like desktop NoScript;
>> 2. restartless, like Android NoScript;
>> 3. Add-on SDK;
First of all, let me clear up a misunderstanding I've involuntarily
induced: I wrote "restartless add-ons" because that's their most
commonly used name, but in this context I should rather have used the
"bootstrapped add-ons" designation, in order to prevent the false
impression that Add-on SDK extensions are not restartless themselves.

Please notice that *Add-on SDK extensions are restartless add-ons*: the
difference between #2 (bootstrapped) and #3 (SDK) is just the API
they've got available, i.e. just bare XPCOM for #2 and a pure-JS
sandboxed abstraction for #3.

Only #1 (XUL/XPCOM legacy add-ons) requires a browser restart after
installation.
As far as I understand, this should already dispel most of the worries
you expressed about the Add-on SDK :)

> If I understood correctly, these were the three options that were
> already available previously.

Indeed. During the past few months I've been involved in Mozilla's
e10s/add-ons team, so I learned early on that all these options were
going to be deprecated sooner or later, hence my decision to defer any
implementation commitment. However, an actual schedule and details about
the ultimate replacement were missing until mid-August, mostly because a
migration strategy for existing add-ons required careful planning and
long discussions, both technical and "political". Even now, the
situation is still evolving.

>
> This move to WebExtensions indeed sounds interesting from a portability
> point of view.
Yes it does, even though there won't be a 1:1 mapping between the APIs:
it will be more like a "shared core" intersection plus browser-dependent
customizations reflecting the peculiarities of each browser and the
expectations of its own user base (e.g. Mozilla will neglect those APIs
whose main purpose is integrating with Google services, while it is
committed to making the features needed by the most popular current
extensions available "natively" in the long run).
Also, different browsers are going to use different formats and signing
protocols, therefore cross-browser development will require at least
some repackaging step.

> So the dilemma we're facing here is to find the proper technology to
> use in order to be compatible with Electrolysis (FF43) while not being
> able to write it in WebExtensions (no ETA). Sounds like a pretty bad
> time to start writing new Firefox extensions :)
That's the biggest elephant in the room at this moment, indeed,
especially if you've got ESR compatibility as a requirement :(
Things should get more exciting (very exciting, I believe) as soon as
both the WebExtensions API and its native.js mechanism are supported in
all stable Firefox versions, which unfortunately seems to be about one
year away.

> I understand that Firefox wants to deprecate XUL/XPCOM, so we're left
> with option 2. restartless, and 3. Add-on SDK. But how do you choose
> Add-on SDK from these two:
Simple: non-SDK bootstrapped (restartless) add-ons can work without XUL,
but still depend on XPCOM heavily, therefore they're not gonna outlive
the XUL ones.

Add-on SDK extensions are basically restartless add-ons themselves, but
they run in a sandbox that provides them with a rich high-level API
available as CommonJS modules, and, under certain constraints (i.e. not
using the XPCOM "super powers" which are currently available but are
gonna be deprecated as well), they are automatically compatible with
Electrolysis, while bootstrapped ones require a considerable additional
effort to achieve the same compatibility.
In fact, the announcement states that Add-on SDK extensions will be
supported for the foreseeable future as long as they don't call
require('chrome'), i.e. as long as they don't touch XPCOM directly.

> - Do you need to restart the browser after installing a Add-on SDK
> extension? If so, then we probably need to rethink our wireframe,
> rethink what happens in browser with no history like Tor Browser, etc.
Fortunately that's not the case, since Add-on SDK extensions are
themselves restartless :)

>
> - What's the problem with restartless extensions? Will they become
> deprecated? Do they rely on XUL? Are they not e10s ready? Would they be
> much harder to port to WebExtensions?
All of the above, as I already explained, except perhaps the portability,
which won't be a cakewalk in either case.
However, coding with portability in mind is obviously easier now than two
weeks ago.

[Tails-dev] [Review] Adding laptops which don't shutdown properly

2015-09-02 Thread mercedes508
Hi,

I added to the known issues page two laptops that don't shut down Tails
properly and need a brutal shutdown. This change is up for review
here:

mercedes508 dellE6230 90a758a & e0efb6e

But I was wondering how relevant it is to add computers there, as that
list might never end, w.r.t. UEFI hardware.

Cheers.


Re: [Tails-dev] Automated tests specification

2015-09-02 Thread anonym
On 09/02/2015 11:11 AM, anonym wrote:
> Failing Scenarios:
> cucumber features/time_syncing_bridges.feature:29 # Scenario: Clock way
> in the past in bridge mode
> cucumber features/time_syncing_bridges.feature:45 # Scenario: Clock way
> in the future in bridge mode
> cucumber features/tor_stream_isolation.feature:48 # Scenario: Explicitly
> torify-wrapped applications are using the default SocksPort
> cucumber features/torified_git.feature:22 # Scenario: Cloning git
> repository over SSH
> 
> 186 scenarios (4 failed, 182 passed)
> 1345 steps (4 failed, 5 skipped, 1336 passed)
> 348m33.765s

... and the run without --capture:

Failing Scenarios:
cucumber features/totem.feature:48 # Scenario: Watching a WebM video
over HTTPS, with and without the command-line
cucumber features/totem.feature:58 # Scenario: Copying video files to a
persistence and making sure that they persist

186 scenarios (2 failed, 184 passed)
1345 steps (2 failed, 10 skipped, 1333 passed)
320m10.637s

So there's an 8.9% increase. Of course, due to the failures it's hard to
draw any conclusions with any sort of confidence from this.
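
(For reference, the arithmetic behind that figure, using the two
wall-clock times reported above, as a trivial Python check:)

    # Runtimes of the two runs above, converted to seconds.
    with_capture = 348 * 60 + 33.765     # 348m33.765s, with --capture
    without_capture = 320 * 60 + 10.637  # 320m10.637s, without --capture

    increase = (with_capture / without_capture - 1) * 100
    print(round(increase, 1))            # => 8.9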

Cheers!



Re: [Tails-dev] Automated tests specification

2015-09-02 Thread intrigeri
hi,

anonym wrote (02 Sep 2015 09:11:43 GMT) :
> I got a video that was 309 MiB large [...]

Assuming (very wild guess) that the size gain brought by the proposed
changes from #10001 can be extrapolated, this could become 117 MiB.
If that's mostly correct, then with an aggressive enough cleaning
policy (e.g. delete those that are more than 7 days old), I *guess*
that we can very well afford storing those. I guess that our existing
artifacts cleanup script could easily be adjusted to support expiring
videos as well (note that we may need different expiration policies
for videos and for other test suite artifacts, that we may want to
keep longer; not sure yet; needs to be refined).
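
(To make that concrete, here is a minimal sketch of such an expiry pass,
in Python; the artifacts path, the .mkv extension and the 7-day threshold
are only illustrative, not what our cleanup script actually does.)

    #!/usr/bin/env python3
    # Illustrative sketch: delete test suite videos older than 7 days.
    import time
    from pathlib import Path

    ARTIFACTS_DIR = Path("/var/lib/jenkins/artifacts")  # assumed location
    MAX_AGE_DAYS = 7

    cutoff = time.time() - MAX_AGE_DAYS * 24 * 3600
    for video in ARTIFACTS_DIR.rglob("*.mkv"):
        if video.stat().st_mtime < cutoff:
            video.unlink()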

> It's interesting to note that this runtime is not different from what I
> normally get when running this branch on the isotesters (I said "~350
> minutes" earlier in this thread),

Excellent news: this means that the runtime increase concern is not
valid on lizard -- one less blocker for archiving videos.
bertagaz also confirmed that archiving them is not much more work for
him, so we should be good.

I think that's not a blocker for the first iteration, though: if videos
are added between Oct. 15 and Jan. 15, I'm happy :)

bertagaz, time to create tickets that sum this up, or do we need
more discussion?

> which makes me wonder if the main concern of #10001, that --capture
> increases the runtime (even when enough CPU is available), is
> actually valid. I'm gonna investigate that now...

Well, it *is* valid at least on my system. However, it could depend on
hardware, and/or on the version of the video codec libraries (I did my
tests on sid, which is useful info because that gives us an idea of
what software we'll run on our infra in ~2 years).

Cheers!
-- 
intrigeri


Re: [Tails-dev] Automated tests specification

2015-09-02 Thread intrigeri
anonym wrote (01 Sep 2015 16:57:05 GMT) :
> Yes, per-scenario videos would be great (my plan was to do this when we
> have #8947, but whatever, nothing prevents us from having it now). They
> would be more useful than per-feature videos, and actually easier to
> implement AFAICT. Please file a ticket!

that's now #10148 :)


Re: [Tails-dev] Automated tests specification

2015-09-02 Thread anonym
On 09/02/2015 01:12 PM, intrigeri wrote:
> hi,
> 
> anonym wrote (02 Sep 2015 09:11:43 GMT) :
>> I got a video that was 309 MiB large [...]
> 
> Assuming (very wild guess) that the size gain brought by the proposed
> changes from #10001 can be extrapolated, this could become 117 MiB.
> If that's mostly correct, then with an aggressive enough cleaning
> policy (e.g. delete those that are more than 7 days old), I *guess*
> that we can very well afford storing those. I guess that our existing
> artifacts cleanup script could easily be adjusted to support expiring
> videos as well (note that we may need different expiration policies
> for videos and for other test suite artifacts, that we may want to
> keep longer; not sure yet; needs to be refined).

Great!

>> It's interesting to note that this runtime is not different from what I
>> normally get when running this branch on the isotesters (I said "~350
>> minutes" earlier in this thread),
> 
> Excellent news: this means that the runtime increase concern is not
> valid on lizard -- one less blocker for archiving videos.
> bertagaz also confirmed that archiving them is not much more work for
> him, so we should be good.

I just updated the ticket with more thorough test results. It seems it
*does* increase the runtime, by 17% (old video compression) vs 5.8% (new).

The run I talked about above was just one data point, after all, so it's
not entirely surprising that I got outlying results. I'm running another
run without --capture now for comparison. However, there are so many
other variables that can make the runtime differ (transient errors, Tor
bootstrapping issues that add time and seem to cluster for some reason)
that I'm not sure how conclusive that run will be.

Anyway, I trust my test results on #10001.

>> which makes me wonder if the main concern of #10001, that --capture
>> increases the runtime (even when enough CPU is available), is
>> actually valid. I'm gonna investigate that now...
> 
> Well, it *is* valid at least on my system. However, it could depend on
> hardware, and/or on the version of the video codec libraries (I did my
> tests on sid, which is useful info because that gives us an idea of
> what software we'll run on our infra in ~2 years).

Right. I guess we need long-term statistics on the isotesters to be able
to see any meaningful numbers.

Cheers!



Re: [Tails-dev] Automated tests specification

2015-09-02 Thread intrigeri
hi,

bertagaz wrote (02 Sep 2015 10:41:59 GMT) :
> On Tue, Sep 01, 2015 at 06:59:09PM +0200, anonym wrote:
>> On 09/01/2015 12:23 PM, intrigeri wrote:
>> > bertagaz wrote (26 Aug 2015 17:52:26 GMT) :

>> Since pushing stuff into the branch after this field has been set to
>> true invalidates Jenkins' test suite run, would Jenkins monitor for
>> this and unset the field, or is it up to the committer to unset it?

> Jenkins should probably unset the "Jenkins OK" field by itself if the
> test has failed.

Sure, but this still leaves a window during which a reviewer believes
that the branch has been tested OK, while the test result is about an
older version than the one they're looking at. Racy!

>> I realize this is not a problem unique to this solution. Anyway,
>> doing this manually gets hairy since we don't necessarily know which
>> commit Jenkins has tested. I suppose it would help if Jenkins also
>> posted a message about what commit it has successfully tested. Or
>> maybe the field we want to add instead could contain the commit?

> Yes, I think that when Jenkins reports the test result, it should also
> add some information in a comment: at least which commit it
> tested and a link to the test result page.

Yes, and then the reviewer can manually check whether that commit ID
matches what they're looking at. But that doesn't address the race
condition I was mentioning above. It's also not enough info,
see below.

However, if Jenkins *also* unset the "Jenkins OK" field when it
*starts* testing an ISO, then we're fine.

> But as the branch was RfQA, it shouldn't be too complicated to know
> which commit was tested, as unless the test fails, this branch is not
> really supposed to receive new commits.

Well... let's not rely on this. It's quite common that we push a small
fix on top of a RfQA branch, after checking that nobody has started
reviewing it yet.

Also, note that what exactly we're testing is not fully encoded in the
topic branch's HEAD: it also depends on the state of the corresponding
base branch, and on all the APT suites / overlays listed in there. So
"Jenkins OK for $commit" is insufficient information, and it needs to
expire somehow. Hence the suggestion of invalidating it at the beginning
of each test run.
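
(A rough sketch of what this could look like from the Jenkins side, using
Redmine's REST API; the custom field id, issue number, API key and URLs
below are made up for illustration, not our actual setup.)

    import requests

    REDMINE_URL = "https://labs.riseup.net/code"
    API_KEY = "jenkins-api-token"    # assumed: API key of the Jenkins user
    JENKINS_OK_FIELD_ID = 42         # hypothetical "Jenkins OK" field id

    def set_jenkins_ok(issue_id, value, note=None):
        payload = {"issue": {
            "custom_fields": [{"id": JENKINS_OK_FIELD_ID,
                               "value": "1" if value else ""}]}}
        if note:
            payload["issue"]["notes"] = note
        r = requests.put(f"{REDMINE_URL}/issues/{issue_id}.json",
                         json=payload,
                         headers={"X-Redmine-API-Key": API_KEY})
        r.raise_for_status()

    # When a test run starts: invalidate any previous positive result.
    set_jenkins_ok(12345, False)
    # When it passes: record the tested commit and the result page.
    set_jenkins_ok(12345, True,
                   note="Test suite passed for commit abc1234; results: "
                        "https://jenkins.example.org/job/test_Tails_ISO/57/")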

>> > But this doesn't address the problem anonym pointed to initially, that
>> > is "the reviewer also has to wait until the automated tester posts the
>> > result to the ticket".
>> 
>> And the corollary is that it doesn't solve the problem either: reviewers may
>> waste time reviewing a branch that breaks an automated test. That's the
>> important part, IMHO.

Yes. Thankfully this only happens if the reviewer doesn't wait for
Jenkins to post its results (we're starting to loop endlessly on this
one, perhaps :)

>> > One possible solution would be to assign RfQA
>> > tickets to Jenkins initially, and once Jenkins has voted +1 (and set
>> > "Jenkins OK" to true), it could also unassign the ticket from itself,
>> > and then human reviewers can look into it. Jenkins would still run
>> > automated tests on branches regardless of their ticket's assignee, as
>> > specified elsewhere, but at least this would make it clear to human
>> > reviewers when it's time for them to start reviewing stuff.
>> 
>> Sure. I guess Jenkins would only assign itself, and not assign the
>> intended human reviewer (since that info isn't available). That seems a
>> bit awkward to me, to (as the implementer) have to return to the tickets
>> when Jenkins is done, and then assign the human reviewer.

Indeed, that's painful. For many branches we don't need to do that as
we don't assign them to a specific reviewer, and the RM is supposed to
handle what's left unreviewed, but still. Another field ("Next
reviewer") could be added and Jenkins would copy its value to the
"Assignee" field, but oh well, ETOOMANYFIELDS, no?

> Maybe when assigning to Jenkins, one can also add herself as a watcher
> to the ticket? Then she would get the ticket update notification in case
> the test passes.

Indeed. One more manual operation, though, unless we find a way to
automatically add to the watchers list anyone who has set QA Check =
Ready for QA.

It feels like we're already hitting the limits of what we can do
with Redmine, and trying to use it for stuff that it's not designed
to handle. I'm afraid that any solution we come up with will
require some combination of boring, repetitive and error-prone manual
ticket handling, writing custom Redmine plugins, writing custom bots
and/or post-update hooks to update tickets. Maybe it'll soon be time
to look into Zuul, Gerrit and friends for real...

Cheers,
-- 
intrigeri


Re: [Tails-dev] Automated tests specification

2015-09-02 Thread bertagaz
On Tue, Sep 01, 2015 at 06:59:09PM +0200, anonym wrote:
> On 09/01/2015 12:23 PM, intrigeri wrote:
> > bertagaz wrote (26 Aug 2015 17:52:26 GMT) :
> >> On Wed, Aug 26, 2015 at 03:38:19PM +0200, anonym wrote:
> >>> The current proposal seems to be to only start the automated test run of
> >>> a feature branch when it is marked "Ready for QA". This has overloaded
> >>> the meaning of this status so it no longer alone means that the branch
> >>> is ready for review'n'merge; the reviewer also has to wait until the
> >>> automated tester posts the result to the ticket.
> >>>
> >>> We could get rid of this ambiguity by splitting "Ready for QA" into
> >>> "Ready for (automated) testing" (RFT) and "Ready for review" (RFR). 
> >>> Example:
> >>>
> >>> Let's say I have created a new feature branch and think I'm done. I
> >>> assign it to intrigeri (who I want to review it in the end) and mark it
> >>> RFT. An automated build is triggered, and then a full run of the test
> >>> suite. Let's say the test suite fails. Then it gets reassigned to me and
> >>> marked "Dev needed". I fix the issue (which probably was easier to spot
> >>> for me than it would be for intrigeri, since I did the implementation)
> >>> assign it to intrigeri and mark it RFT again. This time the test suite
> >>> passes, and then the ticket is marked RFR automatically. Also, if I run
> >>> the test suite myself and see it pass, then I can just mark it RFR 
> >>> directly.
> > 
> >> I've thought about this issue too. My own conclusion was to add a new
> >> custom field in Redmine, that Jenkins would use, so something similar to
> >> yours. The dev marks the ticket as ReadyForQA, then Jenkins runs the test
> >> suite on it and sends its report, modifying that field accordingly.
> > 
> > The way it's conceptualized in Gerrit (and presumably in other similar
> > tools) is that Jenkins is "just another reviewer", that can vote +1
> > or -1 on a merge request. I like this logic. To translate it into
> > Redmine, RfQA means that all reviewers (humans and robots) can start
> > looking at the branch, and Pass means that all reviewers are happy
> > with it.
> 
> Conceptually, I like this.
> 
> > I think it would be overly complicated to encode individual
> > reviews (e.g. the one done by Jenkins) in the QA Check field, and
> > conceptually I prefer to keep QA Check a bit more high level and not
> > give it a finer granularity.
> 
> I certainly agree that my proposal complicates things.
> 
> > So, adding dedicated custom fields seems to be the best option to
> > encode individual review results: "Jenkins OK" would be unset
> > initially, and set to true (= +1) upon successful testing. Upon failed
> > testing:
> > 
> >  * "QA Check" would be set back to "Dev Needed";
> >  * a negative vote from Jenkins is a blocker, so given QA Check has
> >been reset already, I'm not sure it's useful to also set "Jenkins
> >OK" to "false" (which we would have to revert after pushing a fix
> >and setting QA Check = RfQA).
> > 
> > ... and "Human reviewer OK" would work just the same.
> 
> Since pushing stuff into the branch after this field has been set to
> true invalidates Jenkins' test suite run, would Jenkins monitor for
> this and unset the field, or is it up to the committer to unset it?

Jenkins should probably unset the "Jenkins OK" field by itself if the
test has failed.

> I realize this is not a problem unique to this solution. Anyway,
> doing this manually gets hairy since we don't necessarily know which
> commit Jenkins has tested. I suppose it would help if Jenkins also
> posted a message about what commit it has successfully tested. Or
> maybe the field we want to add instead could contain the commit?

Yes, I think that when Jenkins reports the test result, it should also
add some information in a comment: at least which commit it
tested and a link to the test result page.

But as the branch was RfQA, it shouldn't be too complicated to know
which commit was tested, as unless the test fails, this branch is not
really supposed to receive new commits.

> > But this doesn't address the problem anonym pointed to initially, that
> > is "the reviewer also has to wait until the automated tester posts the
> > result to the ticket".
> 
> And the corollary is that it doesn't solve the problem either: reviewers may
> waste time reviewing a branch that breaks an automated test. That's the
> important part, IMHO.
> 
> > One possible solution would be to assign RfQA
> > tickets to Jenkins initially, and once Jenkins has voted +1 (and set
> > "Jenkins OK" to true), it could also unassign the ticket from itself,
> > and then human reviewers can look into it. Jenkins would still run
> > automated tests on branches regardless of their ticket's assignee, as
> > specified elsewhere, but at least this would make it clear to human
> > reviewers when it's time for them to start reviewing stuff.
> 
> Sure. I guess Jenkins would only assign itself, and not assign the
> 

Re: [Tails-dev] Automated tests specification

2015-09-02 Thread bertagaz
Hi,

On Tue, Sep 01, 2015 at 12:04:39PM +0200, intrigeri wrote:
> bertagaz wrote (28 Aug 2015 14:24:51 GMT) :
> >
> > But then, often the screen captures are enough to identify why
> > a step failed to run.
> 
> In my experience, sometimes, what would help understanding the problem
> has disappeared a while ago when we hit the timeout, fail, and take
> a screenshot. That's why I almost always turn on --capture when I'm
> doing TDD.

Ack. Got the point.

> So, I invite you to reconsider the option of storing videos of failed
> test suite runs. A first iteration could keep *all* videos for the
> entire test suite — presumably this should be just a glob to add to
> the list of artifacts we archive, right? If it's more work for you,
> let me know.

No, it shouldn't change much as you say.
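
(If the videos land in the run's artifacts directory alongside the other
output, it might indeed boil down to adding one more pattern, something
like artifacts/**/*.mkv, to the glob of files the Jenkins job archives;
the directory layout and extension here are assumptions.)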

> > I don't expect this last one to raise a lot of discussion, so let's say that
> > the deadline for this thread is next Sunday 12pm CEST unless some points are
> > still not clear. I think the rest has already been discussed and drafted enough.
> 
> Sorry I wasn't available during these 48 hours.

My bad, I don't know why I put such a deadline when I knew, OTOH, that
you wouldn't be available on the weekend.

bert.

Re: [Tails-dev] Automated tests specification

2015-09-02 Thread bertagaz
Hi,

On Tue, Sep 01, 2015 at 06:57:05PM +0200, anonym wrote:
> On 09/01/2015 12:04 PM, intrigeri wrote:
> > bertagaz wrote (28 Aug 2015 14:24:51 GMT) :
> >> I've also added a new section about the result to keep:
> > 
> >> ## What kind of result shall be kept
> > 
> >> The test suite produces different kinds of artifacts: logfiles, screen
> >> captures for failing steps, snapshots of the test VM, and also videos of
> >> the running test session.
> > 
> >> Videos may be a bit too much to keep, given they slow down the test
> >> suite
> > 
> > Do we have data about how much they slow down the test suite on our
> > current Jenkins slaves? See #10001 for partial data about how we could
> > make it less resources-hungry...
> 
> Speculation: Didn't we plan for an extra core for this? I suppose we'd
> need four cores, for: video capture, sikuli, libvirt (USB emulation in
> particular), cucumber-and-other-stuff. Hmm, perhaps
> cucumber-and-other-stuff is happy with the leftover capacity from the
> other cores? So three cores?

We planned for 3 cores per isotester and to be honest I don't think we
have room to add more. We're currently using 43 of the 48 cores, so
reserving 4 more of them won't leave much for the host.

> > And by the way, perhaps having videos optionally split per-scenario
> > would help a lot making good use of such videos: faster download,
> > easier to jump to the failing part, less storage needs on our infra.
> > anonym and Kill Your TV, if you agree it would be useful and not
> > stupid an idea, I can file a non-blocking ticket about it.
> 
> Yes, per-scenario videos would be great (my plan was to do this when we
> have #8947, but whatever, nothing prevents us from having it now). They
> would be more useful than per-feature videos, and actually easier to
> implement AFAICT. Please file a ticket!

+1.

bert.