Re: Publishing raw generic{,cloud} images without tar, and without compression, plus versioning of point releases

2020-06-06 Thread Andrew Ruthven
Hi Bastian,

I was writing a long reply to this, but I've decided not to bother,
because it boils down to this:

Can you please accept that there are a number, possibly a large number,
of clouds out there which cannot, and will not, consume qcow2 images
natively.

We, and I expect everyone else, are not going to change our deployments
because you disagree with our operational decisions and feel like
nit-picking about the upstream documentation.

Kind regards,
Andrew

On Sat, 2020-06-06 at 15:13 +0200, Bastian Blank wrote:
> On Sat, Jun 06, 2020 at 11:16:42PM +1200, Andrew Ruthven wrote:
> > Those are examples, and it notes that the formats available are
> > configurable and none of them are specified as "must be available".
> > The
> > CLI docs also have a similar note.
> > "Disk and container formats are configurable on a per-deployment
> > basis."
> 
> Both say "configurable", this makes the setting a policy decision.
> 
> What I seek is the documentation of the technical problems.  And, if
> Glance can't handle qcow2 with rbd, why such broken cases are not
> outright rejected, without the admin setting some magic options.
> 
> > Please don't make assumptions. How can you know that the system
> > you're
> > dealing with can make conversions?
> 
> At least Cinder converts images all the time, and sometimes does not
> even know what it actually got, which leads to things like
> CVE-2015-1851.
> 
> > How do you get that reading? When you read in context within the
> > email
> > it reads as "we disable qcow2 because our backend only supports
> > raw"
> > because that's what I said elsewhere in the email.
> 
> Because of the "we", which I read as the admins of the instance.  And
> "backend", which I don't talk to directly, but only to the Glance
> API.
> And the, at least to my searches, missing big and fat warning: don't
> do
> that, ever!
> 
> The only thing I can find comes from the Ceph documentation:
> > Important
> > Using QCOW2 for hosting a virtual machine disk is NOT recommended.
> > If
> > you want to boot virtual machines in Ceph (ephemeral backend or
> > boot
> > from volume), please use the raw image format within Glance.
> 
> Regards,
> Bastian
> 
-- 
Andrew Ruthven, Wellington, New Zealand
and...@etc.gen.nz  |
Catalyst Cloud:| This space intentionally left blank
   https://catalystcloud.nz|



Re: Publishing raw generic{,cloud} images without tar, and without compression, plus versioning of point releases

2020-06-06 Thread Thomas Goirand
On 6/6/20 1:16 PM, Thomas Lange wrote:
>> On Sat, 6 Jun 2020 12:28:00 +0200, Bastian Blank  said:
> 
> > Also we don't want to specify the size a priori, because it can break to
> > easily.  So we need to deduct the size during the build process.
> I could improve fai-diskimage using what zigo is using in his
> build-openstack-debian-image script. Call resize2fs, parted and
> truncate to make the fs and raw image as small as possible and add a
> few MB of additional space. That way we always will get the smallest
> image.

Thomas,

It'd be really great if you could implement this. I made a quick
attempt on the images produced by the team, but failed just as quickly
and gave up! :)

> But I'm not sure if the partition hooks Bastian wrote
> may cause problems. I guess not, because we only have to resize
> partition 1 which is always at the end of the raw disk.

Cloud-init takes care of resizing when the image boots, so it's safe to
ship a smaller image. We still need a bit of free space to allow for:
- the resize itself
- the VM to boot while the resize isn't finished
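
For illustration, the boot-time behaviour relied on here corresponds to
cloud-init's growpart and resize_rootfs modules. Spelled out explicitly
as cloud-config, the stock defaults look roughly like this (a sketch;
since these are the defaults, images need not ship it explicitly):

```yaml
#cloud-config
# Grow partition 1 to fill the disk, then grow the root filesystem.
growpart:
  mode: auto
  devices: ["/"]
resize_rootfs: true
```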

Cheers,

Thomas Goirand (zigo)



Re: Publishing raw generic{,cloud} images without tar, and without compression, plus versioning of point releases

2020-06-06 Thread Thomas Goirand
On 6/6/20 12:11 PM, Bastian Blank wrote:
> On Wed, May 27, 2020 at 09:43:01AM +0200, Thomas Goirand wrote:
>> On 5/26/20 9:26 PM, Bastian Blank wrote:
>>> Please show me this existing 512MB image you are talking about.  At
>>> least right now it does not exist.  The build log currently even clearly
>>> states how much space is used.
>> Sure! The latest genericcloud bullseye daily image:
>> http://cdimage.debian.org/cdimage/cloud/bullseye/daily/20200527-276/debian-11-genericcloud-amd64-daily-20200527-276.tar.xz
>> Though *BULLSHIT*, it's actually only a 507 MB sparse file, not even a
>> 512 MB one ... :)
> 
> Well.  Sparse means that _all_ holes, even the ones that are required by
> the filesystem, are not stored.  So you are not likely to fit the data
> on a 507 MB filesystem.

You can continue to nit-pick, though the main point remains: our raw
images could be smaller.

Cheers,

Thomas Goirand (zigo)



Re: Publishing raw generic{,cloud} images without tar, and without compression, plus versioning of point releases

2020-06-06 Thread Thomas Goirand
On 6/6/20 3:13 PM, Bastian Blank wrote:
> On Sat, Jun 06, 2020 at 11:16:42PM +1200, Andrew Ruthven wrote:
>> Those are examples, and it notes that the formats available are
>> configurable and none of them are specified as "must be available". The
>> CLI docs also have a similar note.
>> "Disk and container formats are configurable on a per-deployment
>> basis."
> 
> Both say "configurable", this makes the setting a policy decision.
> 
> What I seek is the documentation of the technical problems.  And, if
> Glance can't handle qcow2 with rbd, why such broken cases are not
> outright rejected, without the admin setting some magic options.

The backend here refers to the virtualization layer, *NOT* to how
Glance stores images. Indeed, Glance can store any format. But, for
example, the qcow2 format is not recommended at all if you use Ceph as
a backend for Nova's /var/lib/nova/instances (which a lot of people do).

Yes, you CAN use qcow2 with Ceph, but that's really not optimal, and
that's not what our users want to do.

>> Please don't make assumptions. How can you know that the system you're
>> dealing with can make conversions?
> 
> At least Cinder converts images all the time, and sometimes does not
> even know what it actually got, which leads to things like
> CVE-2015-1851.

Conversions are time-consuming. Cloud users don't want to wait for one
to finish before using their instances.

>> How do you get that reading? When you read in context within the email
>> it reads as "we disable qcow2 because our backend only supports raw"
>> because that's what I said elsewhere in the email.
> 
> Because of the "we", which I read as the admins of the instance.  And
> "backend", which I don't talk to directly, but only to the Glance API.

That's the wrong reading. "Backend" refers to what is in use for the
virtualization: sometimes the hypervisor, sometimes the block device
backend (the Nova file backend, or Ceph).

> And the, at least to my searches, missing big and fat warning: don't do
> that, ever!
> 
> The only thing I can find comes from the Ceph documentation:
> | Important
> | Using QCOW2 for hosting a virtual machine disk is NOT recommended. If
> | you want to boot virtual machines in Ceph (ephemeral backend or boot
> | from volume), please use the raw image format within Glance.

If you've read this, then you've got the point. I don't get why you
keep replying, then.

Cheers,

Thomas Goirand (zigo)



Bug#932943: hex or base64

2020-06-06 Thread Thomas Lange
> On Sat, 6 Jun 2020 19:35:45 +0200, Bastian Blank  said:

> Now the remaining question is: GNU or BSD style checksum files?
Let's use the same style as our installer images, which is GNU style.
We should keep a common style across our Debian images, both installer
and cloud.

-- 
regards Thomas



Bug#932943: hex or base64

2020-06-06 Thread Bastian Blank
On Sat, Jun 06, 2020 at 07:16:59PM +0200, Bastian Blank wrote:
> That's exactly what this change does:
> https://salsa.debian.org/cloud-team/debian-cloud-images/-/merge_requests/203

Now the remaining question is: GNU or BSD style checksum files?

GNU: "checksum  filename"
- No information about the checksum type, so the filename needs to show it.
- Different types go in different files.

BSD: "type (file) = checksum"
- Can contain different checksums for the same file.
- The *sum tools from coreutils read only their own exact variant.
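
For concreteness, here are both styles as produced by coreutils (the
file name is hypothetical); `--tag` switches the *sum tools to BSD
output:

```shell
cd "$(mktemp -d)"
printf 'hello\n' > image.raw

sha256sum image.raw          # GNU style: "<checksum>  image.raw"
sha256sum --tag image.raw    # BSD style: "SHA256 (image.raw) = <checksum>"

# coreutils verifies a GNU-style file directly:
sha256sum image.raw > SHA256SUMS
sha256sum -c SHA256SUMS      # prints "image.raw: OK"
```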

Regards,
Bastian

-- 
You canna change the laws of physics, Captain; I've got to have thirty minutes!



Bug#932943: hex or base64

2020-06-06 Thread Bastian Blank
On Mon, May 18, 2020 at 11:56:15AM +0200, Thomas Lange wrote:
> I've checked some other distributions in may 2020. They all use hex.

Well, they ship a single file for consumption by "sha512sum", which we
currently don't.

> Maybe provide base64 and hex in our manifest but also sha{256,512}sum
> hex files in the download directory on our server (petterson).

That's exactly what this change does:

https://salsa.debian.org/cloud-team/debian-cloud-images/-/merge_requests/203

Regards,
Bastian

-- 
I've already got a female to worry about.  Her name is the Enterprise.
-- Kirk, "The Corbomite Maneuver", stardate 1514.0



Re: Official cloud image requirements

2020-06-06 Thread Noah Meyerhans
> 1. Security, not from cloud providers themselves, but from other cloud
> customers via sidechannel attacks such as meltdown.  The risk is small,
> but IMO greater than the risk of the cloud provider itself doing
> anything nefarious.  (Keep in mind that all major cloud providers have
> taken sophisticated steps to mitigate this class of risks at the
> hypervisor level, above & beyond what's already in Xen, KVM, etc,
> possibly implemented in custom hardware.)
> 
> 2. Neutrality.  Debian could build images on a single cloud service, but
> that might be seen by some as an endorsement of that service.  By
> building the images "in-house", we avoid such perception.  We could
> mitigate this concern by building images for a given provider on that
> provider's service, but that just adds complexity and is not worth the
> effort.

Also, 3. Infrastructure management.  The Debian sysadmin team doesn't
operate any resources in a public cloud, so we'd be on our own if we
chose to run there.  We'd lose out on the hardening, monitoring, and
other management benefits of running on DSA-maintained infrastructure
if we did that.  We could surely make it work, but without a compelling
reason to do so, we should stick with DSA-managed resources.

noah



Re: Official cloud image requirements

2020-06-06 Thread Noah Meyerhans
On Sat, Jun 06, 2020 at 08:28:30PM +0900, Charles Plessy wrote:
> > AFAIK there is general consensus amongst us that we want the cloud
> > images to be built on the Debian infrastructure, not on the cloud
> > provider infrastructure.
> 
> just for the record, here is what you added:
> 
> * '''E. all cloud-related images have to be built on Debian
>   infrastructure''' (for instance Salsa, Casulana, Patterson machines).
>   This is to avoid risks that some cloud providers might injects their
>   code.

I'm not a fan of that language.  It puts us well into the tinfoil-hat
realm, and ignores the reality of cloud adoption across a wide variety
of industries, many of which have very significant security
requirements.

> I do not oppose the requirement, but I have a long-standing question
> that I asked when we were criticised for building Amazon images on the
> Amazon cloud, and that was never answered:
> 
>  -> When a cloud provider can inject some code at build time, isn't it
>  as easy for it to inject the code at run time, or to instance virtual
>  machines with a tampered images while pretending to use the official
>  one ?

Yes.

But a cloud provider isn't going to do that, because doing so covertly
would risk such a blow to customer trust that it would do very
significant financial damage to the cloud provider and to the cloud
computing industry as a whole.  Whatever benefit the cloud provider
thinks they'd gain from this is outweighed by that risk.

IMO, as somebody who fought against this requirement and who still
generally disagrees with it, here are the primary reasons I see for
Debian to have this requirement:

1. Security, not from cloud providers themselves, but from other cloud
customers via sidechannel attacks such as meltdown.  The risk is small,
but IMO greater than the risk of the cloud provider itself doing
anything nefarious.  (Keep in mind that all major cloud providers have
taken sophisticated steps to mitigate this class of risks at the
hypervisor level, above & beyond what's already in Xen, KVM, etc,
possibly implemented in custom hardware.)

2. Neutrality.  Debian could build images on a single cloud service, but
that might be seen by some as an endorsement of that service.  By
building the images "in-house", we avoid such perception.  We could
mitigate this concern by building images for a given provider on that
provider's service, but that just adds complexity and is not worth the
effort.

noah



Re: Publishing raw generic{,cloud} images without tar, and without compression, plus versioning of point releases

2020-06-06 Thread Bastian Blank
On Sat, Jun 06, 2020 at 11:16:42PM +1200, Andrew Ruthven wrote:
> Those are examples, and it notes that the formats available are
> configurable and none of them are specified as "must be available". The
> CLI docs also have a similar note.
> "Disk and container formats are configurable on a per-deployment
> basis."

Both say "configurable", which makes the setting a policy decision.

What I seek is documentation of the technical problems.  And if Glance
can't handle qcow2 with rbd, why are such broken cases not rejected
outright, rather than requiring the admin to set some magic options?

> Please don't make assumptions. How can you know that the system you're
> dealing with can make conversions?

At least Cinder converts images all the time, and sometimes does not
even know what it actually got, which leads to things like
CVE-2015-1851.

> How do you get that reading? When you read in context within the email
> it reads as "we disable qcow2 because our backend only supports raw"
> because that's what I said elsewhere in the email.

Because of the "we", which I read as the admins of the instance.  And
"backend", which I don't talk to directly; I only talk to the Glance
API.  And because the big fat warning I would expect ("don't do that,
ever!") is, at least as far as my searches go, missing.

The only thing I can find comes from the Ceph documentation:
| Important
| Using QCOW2 for hosting a virtual machine disk is NOT recommended. If
| you want to boot virtual machines in Ceph (ephemeral backend or boot
| from volume), please use the raw image format within Glance.

Regards,
Bastian

-- 
Vulcans believe peace should not depend on force.
-- Amanda, "Journey to Babel", stardate 3842.3



Re: Publishing raw generic{,cloud} images without tar, and without compression, plus versioning of point releases

2020-06-06 Thread Thomas Lange
> On Sat, 6 Jun 2020 12:28:00 +0200, Bastian Blank  said:

> Also we don't want to specify the size a priori, because it can break to
> easily.  So we need to deduct the size during the build process.
I could improve fai-diskimage using what zigo uses in his
build-openstack-debian-image script: call resize2fs, parted and
truncate to make the filesystem and raw image as small as possible,
then add a few MB of additional space. That way we will always get the
smallest image.

But I'm not sure whether the partition hooks Bastian wrote
may cause problems. I guess not, because we only have to resize
partition 1, which is always at the end of the raw disk.
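
A minimal sketch of the shrink-then-truncate idea, runnable without
root on a bare (unpartitioned) ext4 image. A real raw disk image would
additionally need `parted resizepart` on partition 1, as described
above; all names and sizes here are hypothetical.

```shell
set -e
cd "$(mktemp -d)"

truncate -s 512M fs.img
mkfs.ext4 -q -F fs.img                  # stand-in for the populated filesystem

e2fsck -fp fs.img > /dev/null           # resize2fs insists on a checked fs
resize2fs -M fs.img > /dev/null 2>&1    # shrink the fs to its minimal size

# Read back the new filesystem size and cut the image down to it,
# keeping a few MB of slack for cloud-init's boot-time resize.
blocks=$(dumpe2fs -h fs.img 2> /dev/null | awk '/^Block count:/ {print $3}')
bsize=$(dumpe2fs -h fs.img 2> /dev/null | awk '/^Block size:/ {print $3}')
truncate -s $((blocks * bsize + 64 * 1024 * 1024)) fs.img
```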
-- 
regards Thomas



Re: Official cloud image requirements

2020-06-06 Thread Charles Plessy
Le Sat, Jun 06, 2020 at 11:37:17AM +0200, Emmanuel Kasper a écrit :
> 
> AFAIK there is general consensus amongst us that we want the cloud
> images to be built on the Debian infrastructure, not on the cloud
> provider infrastructure.

Hi Emmanuel,

just for the record, here is what you added:

* '''E. all cloud-related images have to be built on Debian
  infrastructure''' (for instance Salsa, Casulana, Patterson machines).
  This is to avoid risks that some cloud providers might injects their
  code.

I do not oppose the requirement, but I have a long-standing question
that I asked when we were criticised for building Amazon images on the
Amazon cloud, and that was never answered:

 -> When a cloud provider can inject code at build time, isn't it just
 as easy for it to inject code at run time, or to launch virtual
 machines from a tampered image while pretending to use the official
 one?

There are, in any case, other advantages to centralising image
building. But a more comprehensive risk assessment of running our
official images on untrusted clouds would be welcome.

And the answer is not urgent of course, thus,

Have a nice week-end :)

(By the way, the email server my @debian messages transit through has
been hosted in the Amazon cloud since I moved into a building where
self-hosting is difficult, not only because of network congestion but
also because of heat and humidity!)

-- 
Charles Plessy
Akano, Uruma, Okinawa, Japan



Re: Publishing raw generic{,cloud} images without tar, and without compression, plus versioning of point releases

2020-06-06 Thread Andrew Ruthven
On Sat, 2020-06-06 at 12:06 +0200, Bastian Blank wrote:
> On Wed, May 27, 2020 at 10:13:43AM +1200, Andrew Ruthven wrote:
> > That may be the case, but not all *backends* can use qcow2 images.
> 
> Can you please show OpenStack documentation detailing this all?  I
> fail
> to find anything.  And if the documentation does not tell clearly, I
> have to assume that formats supported by the frontend can be properly
> converted in formats supported by the backend.

Sure. See:

https://docs.openstack.org/api-ref/image/v2/index.html?expanded=create-image-detail#create-image

Under "disk_format", description is:

-- Begin --
The format of the disk.

Values may vary based on the configuration available in a particular
OpenStack cloud. See the Image Schema response from the cloud itself
for the valid values available.

Example formats are: ami, ari, aki, vhd, vhdx, vmdk, raw, qcow2, vdi,
ploop or iso.

The value might be null (JSON null data type).
-- End --

Those are examples, and the documentation notes that the available
formats are configurable; none of them are specified as "must be
available". The CLI docs also have a similar note.

From:
https://docs.openstack.org/glance/ussuri/user/formats.html#disk-format

"Disk and container formats are configurable on a per-deployment
basis."

Please don't make assumptions. How can you know that the system you're
dealing with can make conversions?

> >
> > On
> > our public and private OpenStack clouds we only work with raw
> > images.
> > We have to convert all qcow2 images.
> 
> This reads to me like: "we disable qcow2 because we want to, so we
> need
> to convert first".



How do you get that reading? When you read it in context within the
email, it reads as "we disable qcow2 because our backend only supports
raw", because that's what I said elsewhere in the email.

Regards,
Andrew

-- 
Andrew Ruthven, Wellington, New Zealand
and...@etc.gen.nz  |
Catalyst Cloud:| This space intentionally left blank
   https://catalystcloud.nz|



Re: Publishing raw generic{,cloud} images without tar, and without compression, plus versioning of point releases

2020-06-06 Thread Bastian Blank
On Wed, May 27, 2020 at 12:16:32PM +0100, kuLa wrote:
> Are you thinking about doing it within FAI or outside of it using new
> tool set for this?

Outside.  FAI would in any case only create the directory tree, not the
filesystem it lives in or the image.  So what it does would look the
same in any case, and the differences between the outputs would only
show up in a subsequent step.

But I don't know yet how.  Maybe:
- Run FAI on an empty directory on an arbitrary filesystem.
- Run tests on it.
- If the output requires an image:
  - create image, partition
  - rsync tree onto it
  - run post-image stuff, like grub installation
  - bundle it up like now
- If no image:
  - bundle the directory
- Collect information

Bastian

-- 
The joys of love made her human and the agonies of love destroyed her.
-- Spock, "Requiem for Methuselah", stardate 5842.8



Re: Publishing raw generic{,cloud} images without tar, and without compression, plus versioning of point releases

2020-06-06 Thread Bastian Blank
On Thu, Jun 04, 2020 at 01:06:29PM +0200, Thomas Lange wrote:
> > On Wed, 27 May 2020 12:01:53 +0200, Bastian Blank  
> > said:
> >> Sadly it is not that easy.  A whole bunch of temporary data is deleted
> >> in the final stages of the build process.
> 
> > You mean those bigger ones?
> > rm -rf $target/var/cache/apt/*
> > rm -rf $target/var/lib/apt/lists/*
> 
> > FAI can put a ramdisk on to of those directories (using FAI_RAMDISKS),
> > and will also do the cleanup later.

This helps for those parts, yes.

But other parts, dpkg, apt, need temporary space all over the place.
Not sure how much.

Also we don't want to specify the size a priori, because it can break
too easily.  So we need to deduce the size during the build process.

> > Anything else I've missed?
> I like to get feedback if I missed anything. Also kula asked about
> your plans concerning fai-diskimage/fai replacement.

Nothing new yet.  I haven't really had time to think about it.

Bastian

-- 
The sight of death frightens them [Earthers].
-- Kras the Klingon, "Friday's Child", stardate 3497.2



Re: Presenting Debian to the user at cloud provider marketplaces (round 2)

2020-06-06 Thread Bastian Blank
Hi Marcin

On Sun, Apr 26, 2020 at 06:37:38PM +0100, Marcin Kulisz wrote:
> On 2020-04-22 16:35:25, Bastian Blank wrote:
> > What do you mean with "formating"?  "GNU/Linux" is now irrelevant.
> But IMO a nice gesture to non Linux parts of our community and costs us
> nothing.
> > "Debian Linux", maybe, but redundant.
> It is possible but why not to indicate that also there is something else than
> Linux in Debian.

The problem is the length of the line.  If you skim the list, you see
"Debian", then you need to skip two words to get to the information you
seek next: which release it is.  The version number, by contrast, is a
much shorter word.

> > Should this be highlights of the release?  Hightlights of the software?
> > I don't know.
> If we're going with version and/or code name in the information text then I
> suppose giving a bit of information about release it self IMO would make 
> sense.

I have to admit I'm not convinced, and others use it for more generic
content.

Maybe something about:
1. About Debian; focus on community distribution
2. Something about the release
3. Something about the name? "Buster"

Bastian

-- 
Those who hate and fight must stop themselves -- otherwise it is not stopped.
-- Spock, "Day of the Dove", stardate unknown



Re: Publishing raw generic{,cloud} images without tar, and without compression, plus versioning of point releases

2020-06-06 Thread Bastian Blank
Hi Thomas

It clearly does not help that you have ignored one of my questions
several times.  Maybe you want to stop that.

On Wed, May 27, 2020 at 09:43:01AM +0200, Thomas Goirand wrote:
> On 5/26/20 9:26 PM, Bastian Blank wrote:
> > Please show me this existing 512MB image you are talking about.  At
> > least right now it does not exist.  The build log currently even clearly
> > states how much space is used.
> Sure! The latest genericcloud bullseye daily image:
> http://cdimage.debian.org/cdimage/cloud/bullseye/daily/20200527-276/debian-11-genericcloud-amd64-daily-20200527-276.tar.xz
> Though *BULLSHIT*, it's actually only a 507 MB sparse file, not even a
> 512 MB one ... :)

Well.  Sparse means that _all_ holes, even the ones that are required by
the filesystem, are not stored.  So you are not likely to fit the data
on a 507 MB filesystem.
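
The distinction is easy to see with coreutils alone (the file name is
hypothetical): a sparse file's apparent size and its allocated size are
two different numbers.

```shell
cd "$(mktemp -d)"

truncate -s 512M sparse.img          # 512 MiB apparent size, no data written
du -h --apparent-size sparse.img     # reports 512M
du -h sparse.img                     # typically reports 0: nothing allocated
```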

Regards,
Bastian

-- 
Landru! Guide us!
-- A Beta 3-oid, "The Return of the Archons", stardate 3157.4



Re: Publishing raw generic{,cloud} images without tar, and without compression, plus versioning of point releases

2020-06-06 Thread Bastian Blank
On Wed, May 27, 2020 at 10:13:43AM +1200, Andrew Ruthven wrote:
> That may be the case, but not all *backends* can use qcow2 images.

Can you please show OpenStack documentation detailing this all?  I fail
to find anything.  And if the documentation does not tell clearly, I
have to assume that formats supported by the frontend can be properly
converted in formats supported by the backend.

>On
> our public and private OpenStack clouds we only work with raw images.
> We have to convert all qcow2 images.

This reads to me like: "we disable qcow2 because we want to, so we need
to convert first".

Regards,
Bastian

-- 
Another dream that failed.  There's nothing sadder.
-- Kirk, "This side of Paradise", stardate 3417.3



Re: Nightly Builds and Continous Delivery for Vagrant Boxes plans

2020-06-06 Thread Bastian Blank
On Sat, Jun 06, 2020 at 11:00:36AM +0200, Emmanuel Kasper wrote:
> Unfortunately the 250MB limit of artifacts prevent building a pipeline
> with multiple stages like
> create .box from raw -> test -> upload
> as I hit the artifact limit size between each of these stages.

Please take a deeper look at what the rest of the cloud team's image
workflow does.  Maybe you'll find inspiration in how not to break
everything.

Now the important question: why do you want to clone everything?
Please work with the other people.

> I will write the salsa admins if it's possible to increase the limit for the
> two vagrant projects which would need it.

As a Salsa admin, I already told him that this is unlikely to happen.

> As a solution of last resort, I've seen gitlab.com has an artifact limit of
> 1GB. I would prefer of course to build on Debian infrastructure.

Well.  Dear cloud delegates, please tell Emmanuel about the rule set we
developed over the last years.

Regards,
Bastian

-- 
Spock: The odds of surviving another attack are 13562190123 to 1, Captain.



Official cloud image requirements

2020-06-06 Thread Emmanuel Kasper
Hi

AFAIK there is general consensus amongst us that we want the cloud
images to be built on the Debian infrastructure, not on the cloud
provider infrastructure.

Since this was not explicitly listed in
https://wiki.debian.org/Teams/DPL/OfficialImages
I added a new point there, with the text coming from a mail from serpent@.

If I misunderstood the consensus, just go ahead and revert the edit;
it's a wiki :)

Manu



Re: Nightly Builds and Continous Delivery for Vagrant Boxes plans

2020-06-06 Thread Emmanuel Kasper

On 6/2/20 10:49 PM, Emmanuel Kasper wrote:

Hi

Now that I have working vagrant boxes with FAI, I'm starting to look at
nightly builds and continuous delivery of boxes to the vagrant cloud
(remember, the "vagrant cloud" is just a disk image registry; the VMs
run locally on your own infrastructure).

Nightly Build:
--
I set a weekly build of Vagrant boxes at
https://salsa.debian.org/cloud-team/debian-vagrant-images/pipeline_schedules
which is working fine.
The only problem is that the boxes, which are gzip'ed qcow2 images, are
over the 250MB salsa artifact limit (they are around 300 MB) and thus
cannot be saved as artifacts when the build is complete.


Unfortunately the 250MB artifact limit prevents building a pipeline
with multiple stages like

create .box from raw -> test -> upload

as I hit the artifact limit size between each of these stages.

I will ask the Salsa admins whether it's possible to increase the limit
for the two vagrant projects that would need it.


As a last resort, I've seen that gitlab.com has an artifact limit of
1GB. I would of course prefer to build on Debian infrastructure.


--
You know an upstream is nice when they even accept m68k patches.
  - John Paul Adrian Glaubitz, Debian OpenJDK maintainer