Bug#956940: EC2 images should install compatibility symlinks for NVME drives

2020-06-02 Thread Noah Meyerhans
The amazon-ec2-utils package is currently in the NEW queue and will
provide the necessary tools for examining the instance NVME
configuration and creating the requested links.

Details of that package are tracked in the ITP at
https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=959066.



Re: Debian Cloud Image not setting DNS resolvers on Rackspace Cloud

2020-05-29 Thread Noah Meyerhans
On Fri, May 29, 2020 at 05:14:39PM +, Brian King wrote:
> That is correct; in Rackspace cloud, DNS is set 'automatically' via
> cloud-init and the config-drive's network-data.json on Ubuntu and the
> RHEL family distros (RHEL, Fedora, CentOS) using their stock Openstack
> images.
> 
> >> I have to wonder if this bug report could be relevant:
> 
> https://launchpad.net/bugs/1850310
> 
> Yes, this comment appears to be most relevant:
> https://bugs.launchpad.net/cloud-init/+bug/1850310/comments/2 . Based
> on my tests, adding the resolvconf package to the Debian 10 cloud
> image and rerunning cloud-init does resolve the issue (no pun
> intended!). In contrast, enabling systemd-resolved (which is already
> present on the image) does NOT fix the issue. 
> 

Have you tested against newer versions of cloud-init?  Is the behavior
the same there?  Bullseye contains cloud-init 20.2, and you can find
current images in https://cloud.debian.org/images/cloud/bullseye/daily/

noah



Re: Debian cloud image default login details

2020-05-24 Thread Noah Meyerhans
On Sat, May 23, 2020 at 11:50:10PM +0200, Eric Kom wrote:
>Can anyone assist or direct me with the login details for:
> 
>debian-10-genericcloud-amd64-20200511-260.qcow2
> 
>I have booted but can’t login.
> 
>Tried admin, root & Debian with and without password.

There are no default credentials with these images.  You're expected to
provide an ssh public key via the OpenStack APIs, which will be
configured by cloud-init running within the instance.  The default
username is debian.
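
For example (a rough sketch; the key name, flavor, network, and image
name are placeholders for whatever your cloud provides), the key can be
registered and passed at boot with the OpenStack CLI:

  openstack keypair create --public-key ~/.ssh/id_ed25519.pub mykey
  openstack server create --image debian-10-genericcloud-amd64 \
      --flavor m1.small --network mynet --key-name mykey my-instance

Cloud-init then installs the public key for the 'debian' user on first
boot, and you can log in with 'ssh debian@<instance address>'.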

If you're looking for a VM image that can be used outside of a cloud
environment, look for the "nocloud" variants.  These use an empty root
password.

noah



Re: python-boto3 (Was Re: Bug#953970: Taking over DPMT (Was: python-boto: autopkgtest failure with Python 3.8 as default))

2020-05-20 Thread Noah Meyerhans
On Tue, May 19, 2020 at 10:08:53AM +0300, Alexander Gerasiov wrote:
> > The cloud team is now maintaining the python-boto source package.  I
> > think it similarly makes sense for the team to maintain python-boto3 as
> > well.  Do people agree that this makes sense?
> I don't mind.

OK, given that Eric hasn't committed to the master branch since 2016, I
don't see any need to wait any longer for his confirmation.  Since we
have an ack from the person who's done all the work for the past several
years, I will go ahead and begin work on moving this package to the
cloud team.

> > To be clear, this is also an invitation for the current maintainers
> > (eevans and gq) to contribute as members of the cloud team, if you're
> > interested.
> Well, thanks for your invitation =) May be one day I'll do this. (E.g.
> if I need to package something cloud-related I do this under team
> maintenance.)

Thanks for your work so far.  Please don't hesitate to remain involved
in maintenance of this package.

noah



Bug#955620: cloud-init - debian/rules clean fails from git repo

2020-05-19 Thread Noah Meyerhans
On Fri, Apr 03, 2020 at 03:54:34PM +0200, Bastian Blank wrote:
> Currently running debian/rules clean from git repository fails:
> 
> |  % ./debian/rules clean
> | py3versions: no X-Python3-Version in control file, using supported versions
> | dh clean --with python3,systemd --buildsystem pybuild
> |dh_auto_clean -O--buildsystem=pybuild
> | I: pybuild base:217: python3.7 setup.py clean
> | Traceback (most recent call last):
> |   File "setup.py", line 293, in <module>
> | version=get_version(),
> |   File "setup.py", line 85, in get_version
> | (ver, _e) = tiny_p(cmd)
> |   File "setup.py", line 50, in tiny_p
> | (cmd, ret, out, err))
> | RuntimeError: Failed running ['/usr/bin/python3.7', 'tools/read-version'] 
> [rc=1] (, git describe version (0.7.9-145-g12042ee9) differs from 
> cloudinit.version (20.1)
> | Please get the latest upstream tags.
> | As an example, this can be done with the following:
> | $ git remote add upstream https://git.launchpad.net/cloud-init
> | $ git fetch upstream --tags
> | )
> | E: pybuild pybuild:352: clean: plugin distutils failed with: exit code=1: 
> python3.7 setup.py clean
> | dh_auto_clean: error: pybuild --clean --test-nose -i python{version} -p 
> "3.7 3.8" returned exit code 13
> | make: *** [debian/rules:7: clean] Error 13

This is only the case if the debian patches aren't applied; we apply
debian/patches/0009-Drop-all-unused-extended-version-handling.patch to
remove all the git parsing from the tools/read-version command.  So I'm
not sure this is actually a bug.  Is there anywhere that we actually
depend on the ability to run 'debian/rules clean' on an unpatched source
tree?

As an alternative, we could defer to the upstream Makefile's 'clean'
target and skip the pybuild clean process.
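
Roughly, that would be an override along these lines in debian/rules
(untested sketch; the recipe line needs a tab, as usual):

  override_dh_auto_clean:
          $(MAKE) clean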

noah



python-boto3 (Was Re: Bug#953970: Taking over DPMT (Was: python-boto: autopkgtest failure with Python 3.8 as default))

2020-05-18 Thread Noah Meyerhans
(Updating the CC list)

On Tue, Mar 31, 2020 at 11:40:27AM -0500, Eric Evans wrote:
> > > If it's going to move, might I suggest maintaining it within the
> > > cloud-team instead?  Among other things, the cloud team has an interest
> > > in providing feature updates to cloud-oriented packages, such as
> > > python-boto, in stable point releases.  Having the team maintain these
> > > packages, as we do with other packages such as cloud-init, would help us
> > > with this goal.
> 
> I agree; cloud-team seems like the best fit

The cloud team is now maintaining the python-boto source package.  I
think it similarly makes sense for the team to maintain python-boto3 as
well.  Do people agree that this makes sense?

To be clear, this is also an invitation for the current maintainers
(eevans and gq) to contribute as members of the cloud team, if you're
interested.

noah



Re: Debian Buster container inside AWS ec2

2020-05-18 Thread Noah Meyerhans
On Mon, May 18, 2020 at 02:59:50AM +, Cauã Siqueira wrote:
> My name's Cauã (Brazilian) and I have used Debian for many years. I'm building a
> new infrastructure inside AWS with Terraform. We're migrating from CoreOS to
> Debian (my choice), but we're having a problem with userdata to create
> systemd units for containers running inside Debian Buster
> (https://wiki.debian.org/Cloud/AmazonEC2Image/Buster).
> See below a little example of my userdata. I can't find docs to help me.
> Could you help me, please?

Cloud-init documentation is at https://cloudinit.readthedocs.io/en/18.3/

> systemd:
>   - name: "filebeat.service"
>   command: "start"
>   contents: |
...

This isn't valid cloud-init cloud-config data.  Cloud-init doesn't have
a "systemd" module.  You mentioned that you are migrating from CoreOS.
Is this something that worked with Ignition?  If so, it looks like you
have more porting work to do.  Cloud-init's write_files and runcmd
modules would appear to be useful here.
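
As a rough sketch (the unit contents below are placeholders for whatever
your Ignition config contained), the cloud-config equivalent would look
something like:

  #cloud-config
  write_files:
    - path: /etc/systemd/system/filebeat.service
      permissions: '0644'
      content: |
        [Unit]
        Description=filebeat container
        [Service]
        ExecStart=<your container start command here>
        [Install]
        WantedBy=multi-user.target
  runcmd:
    - [ systemctl, daemon-reload ]
    - [ systemctl, enable, --now, filebeat.service ]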

noah



Re: Correct images to use for baremetal

2020-05-12 Thread Noah Meyerhans
On Tue, May 12, 2020 at 06:57:06PM -0500, Ryan Belgrave wrote:
>After doing a bunch more digging it seems to be related to
>https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=947351

It certainly could be.

Can you test a sid or bullseye image?  They have cloud-init 20.1.

Try https://cloud.debian.org/images/cloud/sid/daily/20200512-261/ or
https://cloud.debian.org/images/cloud/bullseye/daily/20200512-261/

noah



Re: Correct images to use for baremetal

2020-05-12 Thread Noah Meyerhans
On Mon, May 11, 2020 at 07:18:08PM -0500, Ryan Belgrave wrote:
>What are the correct images to use for a baremetal cloud? The first ones I
>tried were the Openstack ones but those don't seem to have any drivers for
>physical machines. I couldn't get USB keyboards to work physically or over
>the server's IPMI kvm. It also seemed that network devices weren't working
>either. I assumed this image would work since what I am doing is very
>similar to Openstack Ironic. I'm guessing ironic isn't supported in the
>openstack image?

Which specific openstack image did you try?  If it includes the 'cloud'
kernel variant, then the lack of hardware drivers is expected.

>I also tried the generic image. This one seems to be the most promising as
>usb and network both worked, however any network configuration done by
>cloud-init was ignored. The machine always requested an IP from dhcp
>instead of using the static configuration placed down by cloud-init.
>I am using the config drive cloud-init datasource. The same exact setup
>works for Centos 7, Fedora 32, Ubuntu 18.04 and 20.04 so I don't believe
>it is a configuration issue on my end.
>Any pointers would be appreciated.

The 'generic' images, such as
https://cloud.debian.org/images/cloud/buster/20200511-260/debian-10-generic-amd64-20200511-260.qcow2
should work.  Can you share the relevant portion of the cloud-config and
cloud-init logs? (/var/log/cloud-init.log and
/var/log/cloud-init-output.log)

noah



Re: branching the debian-cloud-config repository for stable support

2020-05-08 Thread Noah Meyerhans
On Fri, May 08, 2020 at 11:41:55PM +0200, Bastian Blank wrote:
> > Does this seem sane?  Any other ideas?
> 
> Nope, it is not really.  The daily and release projects needs to use
> current tools, without it diverging between the branches.

I'm not sure I agree.  A lot of important details about the structure of
the image and how it's represented at the provider's service come from
the tools, so the tools that build our stable releases need to stay stable.
Details like the image names and the list of FAI classes used to
generate the image come to mind.

> So we need to split tools and config space then and solve the
> compatibility problem on that level.

Where would you have the split?  Separate config_space from tools, and
have config_space be handled as separate (per branch) submodules?

noah



branching the debian-cloud-config repository for stable support

2020-05-08 Thread Noah Meyerhans
At some point, maintaining config for our stable images in the same
repository as our unstable/testing images is going to become
unmanageable.  We'd like to be able to make changes targeting unstable
without worrying about breaking our stable builds.

Consider the following simple case.  I'm working on packaging
amazon-ec2-utils, which I'd like to add to the default installation once
it's available.  To do that today, I'll need to add release specific
configs to package_config/EC2, or add EC2 specific stuff to
package_config/{BULLSEYE,SID}.  It's manageable, but clunky.  When we
start talking about config files that are specific to combinations of
releases and/or architectures and/or cloud providers, it gets even
worse.

So here's a proposal for handling stable releases.  I think this solves
our problems without crazy ongoing effort:

1. We create a 'buster' branch in all our image build repos
(debian-cloud-config, debian-cloud-images-daily,
debian-cloud-images-release)

2. The -release and -daily master branches stop building buster images.
Specifically, we remove buster from the .gitlab-ci.yaml file.  These
branches continue to track the master branch of 'debian-cloud-config' in
their tools submodule.

3. The buster branch in the -release and -daily repos tracks the buster
branch of 'debian-cloud-config' in its tools directory.  This branch
drops everything *except* buster from .gitlab-ci.yaml.

4. We set up salsa pipelines to run for both the 'master' and the
'buster' branches for the daily project.  The release project only needs
to build the buster branch.

Once this is done, we can continue working in master in the
debian-cloud-images repo without risking breaking buster.  It sets us up
with a reasonable strategy for future releases as well.
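
As a rough sketch of step 3 in one of the -daily/-release repos (assuming
the submodule path is 'tools'; untested):

  git checkout -b buster master
  # edit .gitlab-ci.yaml so that only buster remains, then:
  git config -f .gitmodules submodule.tools.branch buster
  git submodule update --remote tools
  git commit -am "buster branch: build only buster, track buster tools"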

Does this seem sane?  Any other ideas?

noah



Bug#959486: cloud-init - Enable fallback data source if nothing is detected in ds-identify

2020-05-05 Thread Noah Meyerhans
On Sat, May 02, 2020 at 11:24:59PM +0200, Bastian Blank wrote:
> cloud-init does some basic tasks, like
> - network config (currently completely shadowed by our own),
> - ssh host keys.
> 
> I think the most sensible setup would be to always enable cloud-init, even
> if it only runs with the fallback datasource.
> 
> Currently we are using ds-identify.  This tool does not have any way to
> only enable the fallback data source.
> 
> Any ideas?

By fallback datasource, you mean "None"?

We could always reintroduce the use of debconf for datasource selection,
and avoid depending on ds-identify at all.  The nice thing about that is
that we could then pre-fill that answer in our cloud images and
configure an explicit datasource there, too.

(Note that I don't actually *like* the idea of reintroducing debconf...)
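
For reference, the preseeding itself would be small; the
cloud-init/datasources debconf question already exists, so the image
build would just need something like this (question type assumed):

  echo "cloud-init cloud-init/datasources multiselect Ec2" \
      | debconf-set-selections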

noah



Re: List of stuff running on the Debian AWS account

2020-04-29 Thread Noah Meyerhans
On Thu, Mar 26, 2020 at 09:01:00PM -0300, Antonio Terceiro wrote:
> > the instances were created by hand but their setup is automated.
> > 
> > > For some background, the AWS account under which these instances are
> > > running is owned by an individual DD, not SPI/Debian. We have created a
> > > new account that is properly manageable and officially recognized by
> > > SPI. We'd like to migrate as much as possible to the new account.
> > > 
> > > The old account won't go away completely, as it is where the pre-buster
> > > AMIs live, and they can't be migrated between accounts. So there's not
> > > an immediate sense of urgency, but we'd like to get things moved as soon
> > > as possible.
> > > 
> > > Practically speaking, moving services to the new account will involve
> > > launching replacement instances. If they were created/managed by a tool,
> > > rather than by hand, this is much easier, hence Bastian's question.
> > 
> > what is the process of getting access to the new account?
> 
> ping?
> 
> I am willing to migrate Debian CI to the new account, but I need to be
> able to do it.

Hi, this is not forgotten.  We have some work that's blocked on AWS
taking some action to associate a new AWS account with the appropriate
billing organization, so we aren't quite in a position to let you start
migrating right now.

Please stay tuned.

noah



Re: Better arm64 support

2020-04-27 Thread Noah Meyerhans
On Mon, Apr 27, 2020 at 11:41:10PM +0100, Wookey wrote:
> > > > generic is most flexible, genericcloud has physical hardware drivers 
> > > > disabled,
> > > > and nocloud is as minimal as possible.  The tools, configs, and docs 
> > > > are in
> > > > this repo: https://salsa.debian.org/cloud-team/debian-cloud-images/
> > > 
> > > OK. Sounds like arm64 should be enabled for these too. I'll have a look.
> > 
> > What do you mean by "enabled"?  "generic" arm64 images are available today 
> > at e.g.
> > https://cloud.debian.org/images/cloud/sid/daily/20200427-245/ and
> > https://cloud.debian.org/images/cloud/buster/20200425-243/, both of
> > which are two clicks from https://cloud.debian.org/images/cloud/
> 
> Yes 'generic' is built for arm64 but 'genericcloud' and 'nocloud'
> aren't. By 'enabled' I meant build those other two images for arm64 as well.

nocloud is built for arm64, see e.g.
https://cloud.debian.org/images/cloud/sid/daily/20200427-245/debian-sid-nocloud-arm64-daily-20200427-245.tar.xz
It's only genericcloud that isn't.  Again, this is because the arm64
cloud kernel was only introduced with linux 5.5, so it wasn't possible
until very recently.

> -start---
> At Packet you can boot operating systems through our Custom iPXE
> setup, described in some detail at:
> https://www.packet.com/developers/docs/servers/operating-systems/custom-ipxe/
> 
> We should be able to boot any "server-ready" operating system that way, so
> long as the image contains NIC drivers for the hardware (typically Mellanox 
> CX-4
> in our Arm-based servers, or Intel x710 in our x86 servers).
> 
> To make a given operating system directly provisionable at Packet
> we use "Packer" from Hashicorp, and there are some things we
> bake into the image - no daemons, but there are a few helper
> scripts for things like iSCSI attach for our block storage that get
> added to the base image.
> end--

It seems likely that the generic images will work there, though they
will try to use things like cloud-init that won't function usefully.
Cloud-init typically takes care of setup of user accounts and ssh keys
for remote access in environments that use it.  How does Packet set this
up?  Is the expectation that authorized_keys files are baked in to the
image, or is there some other mechanism to populate them dynamically?

> Ah cool, and I see that https://cloud.debian.org/images/cloud/ has
> been updated. That's great, cheers.

Yes, I just added a little more detail to it, once I was reminded of
where to look to actually do so...

noah



Re: Debian AMI on AWS GovCloud

2020-04-27 Thread Noah Meyerhans
On Mon, Apr 27, 2020 at 01:51:27PM +0200, Bastian Blank wrote:
> > It appears that a Debian AMI released by the Debian cloud team does not 
> > exist on AWS GovCloud.  While several options are provided in the AWS 
> > GovCloud Marketplace by commercial vendors, having an AMI provided by the 
> > Debian cloud team would be very beneficial.
> 
> The problem is AWS, otherwise this would have been done long time ago.

I think the problem is also SPI legal signoff on the GovCloud agreement.
AFAIK, we do not yet have that.  I believe SPI was once aware of this
requirement, but it may well have fallen off their radar.



Re: Better arm64 support

2020-04-27 Thread Noah Meyerhans
On Mon, Apr 27, 2020 at 01:05:34PM +0100, Wookey wrote:
> > > Someone asked me for a VM image recently, and I discovered we now
> > > make some available at: https://cloud.debian.org/images/cloud/
> > 
> > There's new work in progress at: https://image-finder.debian.net/ That page
> > includes some info on the different image types, but doesn't always have up 
> > to
> > date data yet.
> 
> OK. That's useful. Boggling long list of AWS images!
> Perhaps that should be linked from /debian-cloud-images/ so people could find 
> it.
> (where does the content for that page live?)

It should be linked from somewhere more prominent when it's actually
usable.  It is not currently.  Information on that site is not current
and shouldn't be relied on to be useful today.

So many images are listed for AWS because it (currently) loads every
daily image of every version we build for, as well as every "release"
ami, in all regions.  It then relies on client-side filtering to limit
the view to what's been asked for.  That technique is already showing
performance issues, and will only get worse.  This is evidence of its
WIP status.

> > > The 'genericcloud' and 'nocloud' images are amd64 only. Maybe they
> > > don't make sense for arm64 - what are they for?
> > 
> > generic is most flexible, genericcloud has physical hardware drivers 
> > disabled,
> > and nocloud is as minimal as possible.  The tools, configs, and docs are in
> > this repo: https://salsa.debian.org/cloud-team/debian-cloud-images/
> 
> OK. Sounds like arm64 should be enabled for these too. I'll have a look.

What do you mean by "enabled"?  "generic" arm64 images are available today at 
e.g.
https://cloud.debian.org/images/cloud/sid/daily/20200427-245/ and
https://cloud.debian.org/images/cloud/buster/20200425-243/, both of
which are two clicks from https://cloud.debian.org/images/cloud/

> > > I don't know if there are other services that it would make sense to
> > > support, e.g packet provide arm64 online compute. https://www.packet.com
> > > They already provide debian 8,9,10 options, apparently built as docker
> > > images: https://github.com/packethost/packet-images
> > 
> > I haven't heard that anyone on the team uses packet - but if someone wanted 
> > to
> > extend support for their platform, I imagine we'd accept it.
> 
> I've made contact with them to ask about what specific support is
> needed. It seems like a generic image should work, but there are some
> API support tools we could usefully add. I'll have a go with this and
> make changes to /debian-cloud-images/ if appropriate.

Is packet based on OpenStack?  If so, then yes, the generic images
should work there.

> > > A kosher debian image might be a good idea, but then maybe the generic
> > > ones work already? I just failed to find any info on whether anything
> > > special is needed in their images.
> > 
> > The generic image should have a good shot at working, though it likely 
> > contains
> > many drivers that won't be useful.
> 
> Right. And maybe generic cloud works. 

Note that the arm64 cloud kernel is available in bullseye and sid, but
not buster.  Arm64 buster images just use the regular ("generic") arm64
kernel.  It's not necessarily the goal of the cloud kernel to support
every cloud infrastructure in existence, but if there are missing
drivers that prevent it from doing so, we can consider enabling them.

noah



Re: Presenting Debian to the user at cloud provider marketplaces (round 2)

2020-04-21 Thread Noah Meyerhans
On Tue, Apr 21, 2020 at 03:42:16PM -0700, Ihor Antonov wrote:
> - I often want to make sure that this is *really* the official AMI, some
>   kind of link to the Debian page that says "yes, this is indeed Debian's
>   account ID" would make me feel more reassured.

Yes, I agree that this is important.  At the moment there are at least
two other sellers in the Marketplace with "Debian" listings, and this
could be confusing.  A big driver of the development of the current
image build pipeline was a desire to declare our AMIs "official."  We
should be doing just that, and whatever else makes sense to clearly
label ours as the only ones directly affiliated with the project.

> - Next I often want to know when is the End of Life for this release, having 
> that information in AMI description would save time googling it.

That makes sense.  It might force us to discuss (again) the relationship
of our projects with the LTS work...

> - The reason why I use Debian AMIs is because they contain almost no
>   bloat (if you compare to how much stuff is in Fedora or Ubuntu), so
>   having a handy link to AMI build configuration that tells you what
>   packages are pre-installed is a nice thing IMHO

Thanks.  We have a manifest at this point, so this is probably not
horribly difficult.

> - I personally almost never read generic descriptions that usually say
>   something along the lines of: "this is a general purpose free OS, with
>   so many packages, and founded in 1815, and GNU and bla bla Linus
>   Torvalds.. " but it may be only me. I would prefer this be replaced
>   with something more concise, like bullet points. Example:
> 
> Debian 10 Buster
> Website Url: https://debian.org
> Debian cloud images: <link to the Debian page that says "official ami">
> AMI Configuration page: <link>
> Release Number: 10
> EOL: ~ 2022
> Arch: x86
> 
> More info: <link to debian.org>

Thanks for your feedback, this is all very helpful.

noah



Re: Presenting Debian to the user at cloud provider marketplaces (round 2)

2020-04-21 Thread Noah Meyerhans
On Tue, Apr 21, 2020 at 04:57:22PM +0200, Emmanuel Kasper wrote:
> YMMV, I use the following text in
> https://app.vagrantup.com/debian/
> 
> Debian is a free Operating System for your laptop, server or embedded
> device. But it provides more than a pure OS: it comes with over 59000
> packages, with a Long Term Support cycle reaching 5 years for each
> stable release.
> Besides stable releases, Debian also provides a testing distribution
> channel, which is daily updated with the latest and greatest opensource
> software. Debian strives for correctness of implementation, as detailed
> in the official Debian Policy, and compliance to Free and Opensource
> software licensing, as documented in the Debian Free Software Guidelines.
> 
> This is more or less based on the debian.org start page.

Yes, I considered using similar text, but I'm not a big fan of it for
this case (or in general, really).  Linguistically it doesn't really
excite me, and I don't think its message has enough focus.  Maybe I'll
try to come up with something original and try to propose that for web
site, as well as the cloud marketplace. (Don't hold your breath waiting
for this, though!)

noah



Presenting Debian to the user at cloud provider marketplaces (round 2)

2020-04-20 Thread Noah Meyerhans
This is a continuation, in spirit, of a thread from last summer, but I'm
intentionally starting a new one here. [1]

This post will specifically focus on the Debian AWS Marketplace
listings, which are currently split across two AWS accounts [2][3]

We've got some inconsistencies between our current listings and our old
ones, and some long-standing issues [4] that would be nice to clean up.
I'd like some input on how best to do so.

Product title: For older releases, the title is listed as (e.g.) "Debian
GNU/Linux 9 (Stretch)".  For buster, it is "Debian 10 Buster".  I prefer
the formating used for stretch, and would like to update buster to
match.  I'm open to the idea of listing only the version number, and
dropping the code name, but don't feel strongly either way.  Opinions?

Product overview: For buster, the overview is simply "Debian 10 "Buster"
for Amazon Web Services."  For stretch, it is a longer blob of copypasta
from the Debian entry on Wikipedia.  Neither of these is ideal, IMO.
Some condensed version of About Debian[5] would probably be better, but
I don't have anything specific in mind.  Is there existing text that
would work better here?

Highlights: Stretch lists a couple of items in the "Highlights" section
on the listings pages: "After 26 months of development the Debian
project is proud to present its new stable version 9 (code name
"Stretch"), which will be supported for the next 5 years" and "Debian 9
is dedicated to the project's founder Ian Murdock, who passed away on 28
December 2015".  Buster is only "The universal operating system."  I
think pulling some snippets in from the release notes makes sense for
buster. Agree?

The AWS Marketplace requires some text for a "EULA".  Currently we link
to the Social Contract for that, but that's not at all written like a
EULA and doesn't specifically discuss legal rights or restrictions.
IMO, as suggested in #696596 [4], we should replace the EULA text with
something similar to what's in the default MOTD.  Thoughts?

Support information: The stretch listing says "Debian is developed and
supported by a diverse global community. It can be reached through a
variety of means including email, IRC, and web forums." and links to
www.debian.org/support.  Buster indicates that "No Support is offered
for this product."  I'd like to make buster match the stretch listing.

I think this has gotten plenty long enough, and covers the important
things, so let's leave it at this for now.  Thanks for reading, and I
look forward to your input!

noah

1. https://lists.debian.org/debian-cloud/2019/07/msg00019.html
2. 
https://aws.amazon.com/marketplace/seller-profile?id=890be55d-32d8-4bc8-9042-2b4fd83064d5
3. 
https://aws.amazon.com/marketplace/seller-profile?id=4d4d4e5f-c474-49f2-8b18-94de9d43e2c0
4. https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=696596
5. https://www.debian.org/intro/about



Bug#956940: EC2 images should install compatibility symlinks for NVME drives

2020-04-16 Thread Noah Meyerhans
Package: cloud.debian.org
Severity: wishlist

As documented at
https://opensource.creativecommons.org/blog/entries/2020-04-03-nvmee-on-debian-on-aws/
and https://lists.debian.org/debian-cloud/2020/04/msg00015.html, it
would be helpful if we install meaningful /dev/xvd* or /dev/sd* symlinks
mapping the EC2 API's block device mapping to the corresponding NVME
device.



Re: What belongs in the Debian cloud kernel?

2020-04-04 Thread Noah Meyerhans
On Sat, Apr 04, 2020 at 10:17:20AM +0200, Thomas Goirand wrote:
> > The first two bugs are about nested virtualization.  I like the idea of
> > deciding to support that or not.  I don't know much about nested virt,
> > so I don't have a strong opinion.  It seems pretty widely supported on
> > our platforms.  I don't know if it raises performance or security
> > concerns.  So these seem okay to me, as long as we decide to support
> > nested virt, and there aren't major cons that I'm unaware of.
> 
> There's a big problem when activating nested virt. I have read that Live
> migration of VMs can become impossible (ie: for all VMs that are also
> host OS for virtualization). As much as I understand, this is because of
> the difficulty to support nested MMU. I'm not sure if the situation has
> changed or not, but last time I checked this was the case. Ben, do you
> know if this has evolved?

Remember, nested virtualization works today; nothing we have done would
have prevented that.  The question is about whether or not we care about
enabling features to support use cases that only arise when nested
virtualization is in use.

The reason nested virtualization breaks live migration is that it shares
state between the VM and the underlying hypervisor.  The VM is, in a
sense, no longer self-contained.  The nested VM's state is tracked by the
parent VM in a VMCS structure, as shown in the nested-vmx.rst doc I
linked previously, and the values in that struct need to be mapped to a
corresponding list in the hypervisor.  Migration would entail some
coordination between the hypervisor and the outer VM, as the shared
state would need to be kept in sync throughout the process.

The sharing of state between the VM and the hypervisor hints at some of
the potential security concerns around nested virtualization in
mixed-trust environments.

> So, when I'm being asked about it, my answer from an OpenStack operator
> point of view, is always a big "NO !". I want to be able to service my
> compute nodes. This means being able to live-migrate the workload away,
> otherwise, customers may notice.

Whether or not you support nested virt on your infrastructure is a
deployment choice, not a choice Debian needs to make.

noah



Re: What belongs in the Debian cloud kernel?

2020-04-03 Thread Noah Meyerhans
On Wed, Apr 01, 2020 at 03:15:37PM -0400, Noah Meyerhans wrote:
> There are open bugs against the cloud kernel requesting that
> configuration options be turned on there. [1][2][3]



> 1. https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=952108
> 2. https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=955366
> 3. https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=955232

The discussion thus far has focused on these specific requests more than
I had hoped.  So, to deal with the current requests for now, here's what
happens if we enable them:

These are the kernel .config changes:
+CONFIG_VHOST_SCSI=m
+CONFIG_KSM=y
+CONFIG_NET_9P=m
+CONFIG_NET_9P_VIRTIO=m
+# CONFIG_NET_9P_XEN is not set
+# CONFIG_NET_9P_DEBUG is not set
+CONFIG_TARGET_CORE=m
+CONFIG_TCM_IBLOCK=m
+CONFIG_TCM_FILEIO=m
+CONFIG_TCM_PSCSI=m
+CONFIG_TCM_USER2=m
+# CONFIG_LOOPBACK_TARGET is not set
+CONFIG_ISCSI_TARGET=m
+# CONFIG_XEN_SCSI_BACKEND is not set
+CONFIG_9P_FS=m
+CONFIG_9P_FSCACHE=y
+CONFIG_9P_FS_POSIX_ACL=y
+CONFIG_9P_FS_SECURITY=y
+CONFIG_XXHASH=y

Because CONFIG_KSM is compiled into the kernel image rather than built as
a module, enabling it increases the compressed kernel size by roughly
12 kB.  The uncompressed
kernel increases by about 852 kB in size.  The boot time appears to be
unchanged.  I don't like the size increase, but this feature is enabled
everywhere else and apparently does break some users if it's disabled,
so we should enable it.
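
For anyone following along: CONFIG_KSM=y only makes the feature
available.  It still has to be switched on at runtime via the standard
sysfs interface, and applications must opt pages in with
madvise(MADV_MERGEABLE):

  echo 1 | sudo tee /sys/kernel/mm/ksm/run
  cat /sys/kernel/mm/ksm/pages_sharing    # nonzero once pages are merged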

The kernel package installed size increases by roughly 2 MB due to the
additional modules we generate for 9P and VHOST_SCSI.

So, I think the answer for these specific requests can be affirmative.
The cost is small enough that if these features are useful to somebody,
then we might as well enable them.

noah



Re: What belongs in the Debian cloud kernel?

2020-04-03 Thread Noah Meyerhans
On Fri, Apr 03, 2020 at 12:20:32PM +0200, Thomas Lange wrote:
> If adding a new kernel option does not change the boot time and the
> image size a lot and it's reasonable, then just go add it.

What I was trying to do was specifically define "a lot" and "it's
reasonable."  I would rather not have to make a judgement call every time
such a request comes in.

I'm entirely open to the possibility that these requests are rare enough
that they can be considered individually, but there are 3 such requests
open right now, which raises the possibility that they may be more
common.

noah



Re: Stable NVMe block device names on AWS Nitro instances

2020-04-03 Thread Noah Meyerhans
On Fri, Apr 03, 2020 at 06:16:33PM +0200, Jonathan Ballet wrote:
> On AWS Nitro instances, EBS volumes are exposed as NVMe block devices by
> the kernel on the /dev/nvme* paths.
> 
> The AWS documentation "Identifying the EBS Device" says that the Linux
> kernel doesn't guarantee the creation order of these devices, as they
> are discovered in the order the devices respond.
> 
> This creates a situation where you can start an instance with 2 (or
> more) attached EBS and sometimes end up with the "root" EBS being named
> /dev/nvme0n1 and the other one /dev/nvme1n1, sometimes the opposite.

This is why we use UUIDs exclusively in fstab.
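
For example, an fstab entry like this (placeholder UUID; blkid reports
the real one for each volume) mounts the same filesystem regardless of
which /dev/nvme* name it gets at boot:

  UUID=0e6a3b57-9b7a-4b1e-8f8a-2c6f6f0a9f31  /data  ext4  defaults,nofail  0  2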

> In its own Linux image, AWS apparently ships a set of udev rules + a
> script to get these device names out of the information exposed by the
> NVMe devices, which create appropriate symlinks in /dev towards each
> /dev/nvme* devices.

This is the ec2-utils package.  I haven't looked at how portable it is,
but the code is MIT and Apache 2.0 licensed, so we should be able to
incorporate it if needed.

> This doesn't change the random detection order by the kernel, but this
> provides a more stable interface to deal with instances containing
> multiple disks.

Is there any reason that you can't use UUIDs and/or LVM to accomplish the
same thing?

> I wonder if it would be possible to provide such a mechanism by default
> in the official Debian AMI?
> Is this type of cloud provider-specific configuration could be accepted?
> (I guess that would make sense only for AWS images.)

The AMIs are provider-specific anyway, so this isn't an issue.

noah



Re: What belongs in the Debian cloud kernel?

2020-04-03 Thread Noah Meyerhans
On Fri, Apr 03, 2020 at 02:03:16PM +, Jeremy Stanley wrote:
> And countless OpenStack service providers as well, though whether
> they do usually depends on if they've got new enough hardware, new
> enough host operating system, and what hypervisor backend they're
> using (for example, accelerating KVM on a Xen guest doesn't seem to
> be viable).

Part of why we publish two OpenStack images is in recognition of the
wide variety of deployments out there.  Many of them are going to want
the generic kernel.

noah



Re: What belongs in the Debian cloud kernel?

2020-04-02 Thread Noah Meyerhans
On Thu, Apr 02, 2020 at 10:55:16AM -0700, Ross Vandegrift wrote:
> I don't think just saying "yes" automatically is the best approach.  But
> I'm not sure we can come up with a clear set of rules.  Evaluating the
> use cases will involve judgment calls about size vs functionality.  I
> guess I think that's okay.

You certainly may be right.  I wasn't able to convince myself either
way, which is why I posted for additional opinions.

> The first two bugs are about nested virtualization.  I like the idea of
> deciding to support that or not.  I don't know much about nested virt,
> so I don't have a strong opinion.  It seems pretty widely supported on
> our platforms.  I don't know if it raises performance or security
> concerns.  So these seem okay to me, as long as we decide to support
> nested virt, and there aren't major cons that I'm unaware of.

IMO nested virtualization is not something I'd want to see in a
"production" environment.  Hardware-assisted isolation between VMs is
critical for hosting mixed-trust workloads (e.g. VMs owned and
controlled by unrelated parties without a mutual trust relationship).
Current hardware virtualization extensions, e.g. Intel VT-x, only have a
concept of a single level of virtualization.  Nested virtualization is
implemented by trapping and emulating the CPU extensions, and by doing a
bunch of mapping of nested guest state to allow it to effectively run as
a peer VM of the parent guest in hardware.  Some details at [1].  So not
only is it painfully complex, but it's also quite slow.

This is not to say that there aren't any legitimate use cases for nested
virtualization.  Only that I'm not sure it's something we want to be
optimizing for.

> Can you share more about the KSM use case?  I'm worried about raising
> security concerns for this one.  KSM has had a history of enabling
> attacks that are sorta serious, but also sorta theoretical.  This might
> cause upset from infosec folks that freak out about any vulnerability -
> even when they don't really understand the magnitude of the risk.

I don't have any direct experience with KSM.  I can certainly see how it
could help with certain classes of workload, though, if it's known that
multiple processes with mostly identical state are running.

I'm not sure I'd focus too much on the security implications of KSM,
though, since it's widely enabled in Debian's generic kernel and kernels
distributed by other distros.  I don't want to cargo-cult it, but
neither do I want to ignore prior art.  I don't think there's any reason
to drop support for applications making use of KSM in our cloud kernels,
though.  I can't think of any reason why the feature would be less
useful in a cloud environment, and it could certainly save money by
allowing the use of smaller instances.

noah

1. 
https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/Documentation/virt/kvm/nested-vmx.rst



What belongs in the Debian cloud kernel?

2020-04-01 Thread Noah Meyerhans
For buster, we generate a cloud kernel for amd64.  For sid/bullseye,
we'll also support a cloud kernel for arm64.  At the moment, the cloud
kernel is the only one used in the images we generate for Microsoft Azure
and Amazon EC2.  It's used in the GCE images we generate as well, but
I'm not sure anybody actually uses those.  We generate two OpenStack
images, one that uses the cloud kernel and another that uses the generic
kernel.

There are open bugs against the cloud kernel requesting that
configuration options be turned on there. [1][2][3]  These, IMO,
highlight a need for some documentation around what is in scope for the
cloud kernel, and what is not.  This will help us answer requests such
as these more consistently, and it will also help our users better
understand whether they can expect the cloud kernel to meet their needs
or not.

At the moment, the primary optimization applied to the cloud kernel
focuses on disk space consumed.  We disable compilation of drivers that
we feel are unlikely to ever appear in a cloud environment.  By doing
so, we reduce the installed size of the kernel package by roughly 70%.
There are other optimizations we may apply (see [4] for examples), but we
don't yet.

Should we simply say "yes" to any request to add functionality to the
cloud kernel?  None of the drivers will add *that* much to the size of
the image, and if people are asking for them, then they've obviously got
a use case for them.  Or is this a slippery slope that diminishes the
value of the cloud kernel?  I can see both sides of the argument, so I'd
like to hear what others have to say.

If we're not going to say "yes" to all requests, what criteria should we
use to determine whether or not to enable a feature?  I'd rather not
leave it as a judgement call.

noah

1. https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=952108
2. https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=955366
3. https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=955232
4. https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=947759



Bug#954363: cloud-init fails to obtain an IMDS API token on Amazon EC2

2020-03-20 Thread Noah Meyerhans
Control: tags -1 + upstream

> 2020-03-20 18:25:10,332 - url_helper.py[DEBUG]: [0/1] open 
> 'http://169.254.169.254/latest/api/token' with {'url': 
> 'http://169.254.169.254/latest/api/token', 'allow_redirects': True, 'method': 
> 'PUT', 'timeout': 1.0, 'headers': {'User-Agent': 'Cloud-Init/20.1', 
> 'X-aws-ec2-metadata-token-ttl-seconds': 'REDACTED'}} configuration

It seems that the "redaction" of the
X-aws-ec2-metadata-token-ttl-seconds header value happens before the
actual request is made, so where the IMDS server expects a TTL in
seconds, cloud-init actually passes it the literal string "REDACTED".
Unsurprisingly, this fails.

I've verified this by disabling the redaction entirely, via the attached
patch.  This isn't an ideal solution, as it removes all redaction.  The intent of
cloud-init's behavior is to avoid storing IMDS API tokens in the logs,
which is sensible, and is broken by my change.

noah

diff --git a/debian/patches/no-redact-imds-headers.patch b/debian/patches/no-redact-imds-headers.patch
new file mode 100644
index ..26195c02
--- /dev/null
+++ b/debian/patches/no-redact-imds-headers.patch
@@ -0,0 +1,25 @@
 Index: cloud-init/cloudinit/sources/DataSourceEc2.py 
 ===  
 --- cloud-init.orig/cloudinit/sources/DataSourceEc2.py   
 +++ cloud-init/cloudinit/sources/DataSourceEc2.py
 @@ -32,7 +32,7 @@ API_TOKEN_ROUTE = 'latest/api/token'   
  AWS_TOKEN_TTL_SECONDS = '21600' 
  AWS_TOKEN_PUT_HEADER = 'X-aws-ec2-metadata-token'   
  AWS_TOKEN_REQ_HEADER = AWS_TOKEN_PUT_HEADER + '-ttl-seconds'
 -AWS_TOKEN_REDACT = [AWS_TOKEN_PUT_HEADER, AWS_TOKEN_REQ_HEADER] 
 +AWS_TOKEN_REDACT = []   
  
  
  class CloudNames(object):   
 Index: cloud-init/tests/unittests/test_datasource/test_ec2.py
 ===  
 --- cloud-init.orig/tests/unittests/test_datasource/test_ec2.py  
 +++ cloud-init/tests/unittests/test_datasource/test_ec2.py   
 @@ -479,6 +479,7 @@ class TestEc2(test_helpers.HttprettyTest 
  
  def test_aws_token_redacted(self):  
  """Verify that aws tokens are redacted when logged."""  
 +self.skipTest('skipping for now...')
  ds = self._setup_ds(
  platform_data=self.valid_platform_data, 
  sys_cfg={'datasource': {'Ec2': {'strict_id': False}}},  


Bug#954363: cloud-init fails to obtain an IMDS API token on Amazon EC2

2020-03-20 Thread Noah Meyerhans
Package: cloud-init
Version: 20.1-1
Severity: important

Cloud-init 20.1 attempts to obtain an API token for use with the Amazon EC2
instance metadata service (IMDS).  On EC2, this operation should always
succeed, whether using IMDSv1 or v2, and cloud-init will always access
IMDS in v2 mode.  However, this fails on EC2:

2020-03-20 18:25:10,331 - DataSourceEc2.py[DEBUG]: Fetching Ec2 IMDSv2 API Token
2020-03-20 18:25:10,332 - url_helper.py[DEBUG]: [0/1] open 
'http://169.254.169.254/latest/api/token' with {'url': 
'http://169.254.169.254/latest/api/token', 'allow_redirects': True, 'method': 
'PUT', 'timeout': 1.0, 'headers': {'User-Agent': 'Cloud-Init/20.1', 
'X-aws-ec2-metadata-token-ttl-seconds': 'REDACTED'}} configuration
2020-03-20 18:25:10,336 - url_helper.py[DEBUG]: Read from 
http://169.254.169.254/latest/api/token (400, 0b) after 1 attempts
2020-03-20 18:25:10,336 - DataSourceEc2.py[WARNING]: Calling 
'http://169.254.169.254/latest/api/token' failed [0/1s]: empty response [400]
2020-03-20 18:25:10,344 - url_helper.py[DEBUG]: Please wait 1 seconds while we 
wait to try again

With 20.1, cloud-init will fall back to using IMDSv1 in this case, but
this behavior will change in future versions, which will always use v2
mode (it is backwards-compatible with v1), and only use v1 mode for
compatibility with non-AWS services providing IMDS-compatible metadata
endpoints.

-- System Information:
Debian Release: bullseye/sid
  APT prefers unstable
  APT policy: (500, 'unstable')
Architecture: amd64 (x86_64)

Kernel: Linux 5.4.0-4-cloud-amd64 (SMP w/2 CPU cores)
Locale: LANG=C.UTF-8, LC_CTYPE=C.UTF-8 (charmap=UTF-8), LANGUAGE=C.UTF-8 
(charmap=UTF-8)
Shell: /bin/sh linked to /usr/bin/dash
Init: systemd (via /run/systemd/system)
LSM: AppArmor: enabled

Versions of packages cloud-init depends on:
ii  fdisk   2.34-0.1
ii  gdisk   1.0.5-1
ii  ifupdown0.8.35+b1
ii  locales 2.30-2
ii  lsb-base11.1.0
ii  lsb-release 11.1.0
ii  net-tools   1.60+git20180626.aebd88e-1
ii  procps  2:3.3.16-4
ii  python3 3.8.2-1
ii  python3-configobj   5.0.6-3
ii  python3-jinja2  2.10.1-2
ii  python3-jsonpatch   1.23-3
ii  python3-jsonschema  3.0.2-4
ii  python3-oauthlib3.1.0-1
ii  python3-requests2.22.0-2
ii  python3-six 1.14.0-2
ii  python3-yaml5.3.1-1
ii  util-linux  2.34-0.1

Versions of packages cloud-init recommends:
ii  cloud-guest-utils  0.31-1
pn  eatmydata  
ii  sudo   1.8.31-1

Versions of packages cloud-init suggests:
pn  btrfs-progs  
ii  e2fsprogs1.45.5-2
pn  xfsprogs 

-- debconf information:
* cloud-init/datasources:Ec2



Re: Testing cloud-init 20.1

2020-03-16 Thread Noah Meyerhans
On Mon, Mar 16, 2020 at 10:46:49PM +0100, Thomas Goirand wrote:
> For the release team to accept a new version of 20.1, we need to test
> it. Happyaron already tested it with success on Aliyun, Tencent Cloud,
> and Huawei Cloud. We need to test it with Azure, GCE, AWS and OpenStack,
> with the image we're currently generating for Buster.

I've done basic testing on EC2.  We will want to cherry-pick the
following change onto 20.1, if that's the version we want to get into
buster:

https://github.com/canonical/cloud-init/commit/4bc399e0cd0b7e9177f948aecd49f6b8323ff30b

I have tested that it cherry-picks cleanly, but left it out of the
20.1-1 upload for safety and simplicity.  I expect to upload 20.1-2 with
that change around the time 20.1-1 reaches bullseye.

> Please also note that Noah removed the cloud-utils dependency from the
> package (it is now downgraded to Recommends), because in some use case,
> it makes sense not to have it. So to avoid regression, we will have to
> modify our image to re-add this dependency. There are tools like vcs-run
> that our users may use (so it's not only limited to growpart).

I've already updated the debian-cloud-images repository to explicitly
install cloud-guest-utils everywhere where we install cloud-init, and
have confirmed that the daily sid builds (with cloud-init 20.1) do have
cloud-guest-utils.

Since Recommends are installed by default in the default apt
configuration, I'd expect that most other users who are installing
cloud-init will not notice a difference.

noah



Re: cloud-init should not depend on cloud-guest-utils anymore

2020-03-12 Thread Noah Meyerhans
On Sat, Mar 07, 2020 at 05:56:43PM -0800, Noah Meyerhans wrote:
> > > So in case we'd be removing this dependency to satisfy your container
> > > use case (which is IMO very valid), we should carefully re-add a
> > > dependency on cloud-utils in our VM images.
> > 
> > If I may add if it wasn't obvious with what I wrote: our VM images must
> > be changed *first* to include the new dependency, before we remove it
> > from the cloud-init package.
> 
> +1 to the proposal to drop the dependency.  Let's drop it to Recommends,
> rather than removing it altogether, though.  And yes, we should update
> the debian-cloud-images FAI configuration to ensure that
> cloud-guest-utils is still installed in the environments where we're
> currently installing it.

Actually, in addition to reducing the relationship to Recommends in
cloud-init, we should also remove the cloud-guest-utils dependency on
e2fsprogs.  The growpart tool doesn't actually use any of its binaries.
Tools from the cloud-image-utils (same source package) do, so the
dependency is appropriate there.

noah



Re: cloud-init should not depend on cloud-guest-utils anymore

2020-03-07 Thread Noah Meyerhans
On Sun, Mar 08, 2020 at 02:49:26AM +0100, Thomas Goirand wrote:
> On 3/7/20 11:53 PM, Thomas Goirand wrote:
> > So in case we'd be removing this dependency to satisfy your container
> > use case (which is IMO very valid), we should carefully re-add a
> > dependency on cloud-utils in our VM images.
> 
> If I may add if it wasn't obvious with what I wrote: our VM images must
> be changed *first* to include the new dependency, before we remove it
> from the cloud-init package.

+1 to the proposal to drop the dependency.  Let's drop it to Recommends,
rather than removing it altogether, though.  And yes, we should update
the debian-cloud-images FAI configuration to ensure that
cloud-guest-utils is still installed in the environments where we're
currently installing it.

noah



Buster is available in the AWS Marketplace

2020-03-02 Thread Noah Meyerhans
Please see https://aws.amazon.com/marketplace/pp/B0859NK4HC for details.

Enjoy, and please leave reviews and ratings on the Marketplace.

noah



Bug#952563: src:cloud-utils: ec2metadata does not speak EC2 IMDSv2

2020-02-25 Thread Noah Meyerhans
Package: src:cloud-utils
Version: 0.31-1
Severity: important

The ec2metadata command queries a well-known link-local endpoint
(169.254.169.254 in Amazon EC2) to obtain information about the instance
on which it runs.  Last year, AWS released "IMDSv2" in an effort to
protect customers against some potentially severe information leaks
related to accidentally proxying this local data to the network.  Details
at
https://aws.amazon.com/blogs/security/defense-in-depth-open-firewalls-reverse-proxies-ssrf-vulnerabilities-ec2-instance-metadata-service/

IMDSv2 makes use of a session-based protocol, requiring clients to first
retrieve a time-limited session token, and then to include that token with
subsequent requests.
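
For illustration, the flow looks roughly like this (header names and the
21600-second maximum TTL per the AWS documentation):

  TOKEN=$(curl -s -X PUT http://169.254.169.254/latest/api/token \
      -H "X-aws-ec2-metadata-token-ttl-seconds: 21600")
  curl -s -H "X-aws-ec2-metadata-token: $TOKEN" \
      http://169.254.169.254/latest/meta-data/instance-id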

Because the intended purpose of IMDSv2 is to provide an additional layer
of defense against network abuses, customers utilizing it may choose to
disable IMDSv1.  It's important that we facilitate this use case by
supporting IMDSv2 wherever possible.  We should work to add this support
in both bullseye and buster (and potentially stretch, if feasible).

noah



Bug#866613: cloud-init: Adding Apache v2 license to debian/copyright

2020-02-14 Thread Noah Meyerhans
On Fri, Jun 30, 2017 at 02:14:00PM +, Joonas Kylmälä wrote:
> We need to also take care of asking permission from the authors of
> Debian patches if they can be used under Apache v2 license.

I don't think there's anything copyrightable in any of those
contributions.  Note that none of the debian-specific changes include
any license information as it is.  I'm going to make the change to
debian/copyright to reflect upstream's license.

noah



Debian 9.12 AMIs available on Amazon EC2

2020-02-10 Thread Noah Meyerhans
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA256

Debian 9.12 (stretch) AMIs for Amazon EC2 are now available.  See the 9.12
release announcement at https://www.debian.org/News/2020/2020020802 and
https://wiki.debian.org/Cloud/AmazonEC2Image/Stretch for more details.

The AMIs are owned by AWS account ID 379101102735 and have the following
details:

Names:
arm64: debian-stretch-hvm-arm64-gp2-2020-02-10-74089
amd64: debian-stretch-hvm-x86_64-gp2-2020-02-10-73984

Regional AMI IDs:

(region) (arm64)   (amd64)
ap-east-1   ami-08821c58afaf4ad03 ami-0b868fbb4c877f437
ap-northeast-1  ami-09de4ee1721cb8f29 ami-0d1a6c2b848d23a6d
ap-northeast-2  ami-0fcb3bb296f15b982 ami-00df969d08a3ea730
ap-south-1  ami-0c046658d5b68c1af ami-09b1626b27596815f
ap-southeast-1  ami-0e7f0bfc03fb01aef ami-0e13f5fb9f9f3c104
ap-southeast-2  ami-0778a46755a9c389d ami-0d63d6457a180078e
ca-central-1  ami-054e5b309c4dca528 ami-09ff1197737556c58
eu-central-1  ami-023e1f91c848fc49d ami-09415feedc2d22008
eu-north-1  ami-024bf24155b02c7db ami-098c2f770214112a1
eu-west-1   ami-04e1a4e612eedbbc3 ami-079b9207facfcaf0e
eu-west-2   ami-0b4b1178457b3a06c ami-0746bb5dcdb5f08fe
eu-west-3   ami-08516a90c447806d8 ami-0f4c84f7511a7b98e
me-south-1  ami-0a75c46ee538159e8 ami-091adbf53613eeef1
sa-east-1   ami-00199281329b61d5b ami-0305f692e938ece5b
us-east-1   ami-0ea51afb2084a5bf3 ami-066027b63b44ebc0a
us-east-2   ami-0c8f04a4e82d45248 ami-09887484cc0721114
us-west-1   ami-0413e0d0fc9173aed ami-05b1b0e2065a73a53
us-west-2   ami-0b2a776780bc56851 ami-0d270a69ac13b22c3

-----BEGIN PGP SIGNATURE-----

iQIzBAEBCAAdFiEE65xaF5r2LDCTz+zyV68+Bn2yWDMFAl5Bxu8ACgkQV68+Bn2y
WDP+qw//QJiwRs9KcFli2B2KB2eVknaENKHHQou7aUCHUNfHkZ3DcBDsxOEHVRDn
/6+flJj+WDE2HEAcufv+clHpMMizsRfw9JUXRCXw68pR8f/RqYVduFhFxxY9XEc5
OYDvuMyFIrlrF7Ovpy+CuL3TLUsjRTIm9WFhHWkp1Eo6Bqp/P4nuBTi8DCfX1ByR
t9jlX1GPatg8w3uEMOth0ZfnkebgYwcaB56UHUbAo3CU/Bo93+OnLVjnVlFLI8NU
j3uV+/wISDMnMAWoJRyEQ34YSSxZnyT2p0Q+Y9iCNMUm3ojDbgkRiXD7lEWBGAQW
WmmmtaA3iU90OJ20z0WC8wuHv1Adhy2+BUiMkl0XcPRNJa3OAWP2Q9+McXIk/dqv
UFojx+/BfmMtdxy5FOYGXzkIoch0JiFTGWT+I4VGjLnLxADEEXdpxhFnzNFQ4Juq
9Toh7hPZRJJC/lpNDgDOmkUCk48JNvUnnrW9SVRCJ4wpNeKRoYs+qc7AtOk3f8ez
y3KFpmiV1HfCjGO8V1/WWj1aw1mJ0DrYhXorczFAe2PL4durUipkVVYzF2zL2hTw
X7D+Kan+IjT1LakF5LHPzKcrp3czOppCrx1yd9bb9VMt/LQMlsD7wduYxywV11ED
aprlJt7be+XAqbObq7ton/825N7CZjZYUoYOqC9HFbXlCAJNRIE=
=Qf2n
-----END PGP SIGNATURE-----



Debian 10.3 AMIs available on Amazon EC2

2020-02-10 Thread Noah Meyerhans
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA256

Debian 10.3 (buster) AMIs for Amazon EC2 are now available.  See the 10.3
release announcement at https://www.debian.org/News/2020/20200208 and
https://wiki.debian.org/Cloud/AmazonEC2Image/Buster for more details.

The AMIs are owned by AWS account ID 136693071363 and have the following
details:

Names:
arm64: debian-10-arm64-20200210-166
amd64: debian-10-amd64-20200210-166

Regional AMI IDs:

(region) (arm64)   (amd64)
ap-east-1        ami-8bcc88fa          ami-f9c58188
ap-northeast-1   ami-0282bfbfdd650f4cc ami-0fae5501ae428f9d7
ap-northeast-2   ami-0b7aad3b1c1ab5bf5 ami-0522874b039290246
ap-south-1   ami-074b1202dd6104cba ami-03b4e18f70aca8973
ap-southeast-1   ami-074d2e4d3a12447e7 ami-0852293c17f5240b3
ap-southeast-2   ami-0c7faee4092c73179 ami-03ea2db714f1f6acf
ca-central-1 ami-0e1e0dceab7778252 ami-094511e5020cdea18
eu-central-1 ami-02326335b24f04021 ami-0394acab8c5063f6f
eu-north-1   ami-0cc1803d72d492d0c ami-0c82d9a7f5674320a
eu-west-1ami-013be4b5a86a1bff7 ami-006d280940ad4a96c
eu-west-2ami-0a8d5d5404d742349 ami-08fe9ea08db6f1258
eu-west-3ami-04063714230353180 ami-04563f5eab11f2b87
me-south-1   ami-08b83b2026662508c ami-0492a01b319d1f052
sa-east-1ami-001f53dee8cfb04c7 ami-05e16feea94258a69
us-east-1ami-031d1abcdcbbfbd8f ami-04d70e069399af2e9
us-east-2ami-0b1808bb4e7ed5ff5 ami-04100f1cdba76b497
us-west-1ami-09d02110862e0e6f6 ami-014c78f266c5b7163
us-west-2ami-0803feda130c01d47 ami-023b7a69b9328e1f9

-----BEGIN PGP SIGNATURE-----

iQIzBAEBCAAdFiEE65xaF5r2LDCTz+zyV68+Bn2yWDMFAl5BsFAACgkQV68+Bn2y
WDO+kw/+NLDyDol2h9biIiZDE6G3nh7DpG98im0I1e03zviQ5pD98c5ZcQSlqfhE
r1mt2owvLfj+qRQRBc2y22Z/w/ZOOnzzkw88xgIatrOoabPFZiVjVWYsA6Yn69IR
QliRlAVUWaq2GknbSETG6rv32G6/8nUT0L4vKpEcTS4nXmUA4t25mnrlBqtFsK9P
0huqcXrqCG7MlVWgP0xJetFTFLgnXpwfzu/xlzB3woe4npUZEYKJ2PIvL3Gqd1Z7
71aHYCJfyyXRRgZ4hbzsr4WBt1+3JbJyfjqGFPbiZp/3UI1stscCT2srxENDyUBH
mX42lchqol65QPZRO69vwEkSzlePGPVpMonoEitE77690hBhUDrwhYuWIDmBistP
I1cFc0v+0YknJdI/lwRBwz4HhoD3wWWICVGPBrghMVhQA/Pd5CLZvEr/olaudKyk
ok8y60VN97ccczA74pdfTimbWfXKC2SDzFo4Oi4QJsNJmhQPQFfg0viQ4Mp36T/V
iaU1ogB4DWFzNvup+VgOxK9bcNYSN1r1hmOsAWOOBpzsde7IACktfS39EuUowYfm
3x5aoX1QcvXn3fx27rAHUlDxfwEHjssA0WiZT/g89Tv761UPDI7R7Vjg7dlvEi4p
O90yEbuClYcRRQx/DWOe9o92UyHLeiKm9G/iWC1SomtWLOsSIBs=
=AGIn
-----END PGP SIGNATURE-----



Re: Contacting cloud team for hosting blends.debian.net [elb...@debian.org: Re: Any sponsoring for a VM possible (Was: Rackspace ending free hosting for blends.debian.net)]

2020-01-24 Thread Noah Meyerhans
On Fri, Jan 24, 2020 at 12:39:53PM +, Marcin Kulisz wrote:
> I'd say in the worst case scenario you could host it on one of the old 
> accounts
> and then migrate it out to the final one if it's not ready right now.
> Hopefully you've got this automated :-)

The only reason the old account would be any easier is that we don't
have any expectations that the resources there are provisioned with any
sort of tooling/automation, so we wouldn't feel quite so guilty about
doing things by hand.  It's... technically an option I guess. :)

Bastian and I have talked a little bit about using Lightsail[1] within
the new engineering account.  It doesn't require as much infrastructure
to be set up (VPCs, subnets, route tables, security groups, etc), so it
isn't blocked on us updating our terraform configuration to define all
these resources.  Lightsail has a 4-core, 16 GB RAM, 320 GB SSD option
that sounds good for this use case.

There are a couple issues with Lightsail:

Buster isn't available yet, so we'd need to start with a stretch
instance and upgrade it.  Not a show-stopper, but it adds some work.

No IPv6 support. (If you have an AWS account, please contact your TAM
and request this.)

noah

1.  https://aws.amazon.com/lightsail/



Re: Contacting cloud team for hosting blends.debian.net [elb...@debian.org: Re: Any sponsoring for a VM possible (Was: Rackspace ending free hosting for blends.debian.net)]

2020-01-23 Thread Noah Meyerhans
On Thu, Jan 23, 2020 at 10:15:33PM +0100, Andreas Tille wrote:
> > Currently the machine has 16GB memory 200GB sdd.  From current usage
> > the minimum requirement is 8GB memory (16 is certainly better since
> > it is running a UDD clone and teammetrics database) and 80GB sdd.
> > 
> > Is there anybody who could offer such a machine for long term usage.

Yes, in general this is something we can and should do.  We haven't done
this yet in the new SPI-owned AWS accounts, which are the right ones to
use here (assuming we decide to use AWS; obviously there are other
options), so there are some administrative and technical details to work
out.

How soon do you need this?

noah



Re: IRC meetings - time slots

2020-01-23 Thread Noah Meyerhans
On Tue, Jan 14, 2020 at 11:56:14PM +0100, Tomasz Rybak wrote:
> Till now we were having meetings on Wednesdays, 19:00UTC.
> It was morning in USA, and evening (but after work) in Europe.
> Should we keep this time, or change it?

I am fairly flexible, at the moment.  Any time within about ±3 hours of
20:00 UTC should work for me.  I can likely do later, but can't commit
to much earlier.

noah



Re: Debian Buster AWS AMI

2020-01-22 Thread Noah Meyerhans
On Wed, Jan 22, 2020 at 05:15:55PM -0300, Sergio Morales wrote:
>Where I can find information about the release window for the official
>Debian AMI on AWS?
>Anyone knows of any blocker for this?

The AMIs are available today:
https://wiki.debian.org/Cloud/AmazonEC2Image/Buster

The same AMIs will be available via the AWS Marketplace as soon as we
get some paperwork in order.

noah



Re: lack of boot-time entropy on arm64 ec2 instances

2020-01-17 Thread Noah Meyerhans
On Thu, Jan 09, 2020 at 05:22:17PM -0500, Noah Meyerhans wrote:
> I've confirmed that 4.19.87 with changes cherry-picked from 50ee7529ec45
> claims to have entropy at boot:
> 
> admin@ip-172-31-49-239:~$ cloud-init analyze blame
> -- Boot Record 01 --
>  02.88900s (init-network/config-ssh)
>  ...
> 
> The change applies cleanly to our kernel tree, so this would appear to
> be a possible solution.
> 
> I've opened https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=948519
> against the kernel discuss the entropy issue in general, and will follow
> up there with a proposal for getting this change backported.

The kernel team would prefer that any backport of 50ee7529ec45 to stable
branches happen upstream, which is sensible.  I'll follow up with the
stable kernel maintainers to see about making this happen, if they're
willing.

In the mean time, regardless of where the backport happens, there's no
possibility of getting this kernel change into 10.3.  So, I'd like to
revisit my original proposal of adding haveged to the arm64 EC2 image
configuration.  Haveged is used in debian-installer for buster (but not
bullseye+, see below), so there is precedent for its use within Debian.
IMO, this is the best option available in the short term.  It results in
a far better user experience on the instances in question, and is a
fairly unintrusive change to make.

Background on haveged in d-i:
Haveged was added to d-i in commit c47000192 ("Add haveged-udeb [linux]
to the pkg-lists/base") in response to bug #923675 and is used in
buster.  More recently, with the addition of the in-kernel entropy
collection mechanisms we've been discussing here, the removal of haveged
has been proposed for bullseye.
https://lists.debian.org/debian-boot/2019/11/msg00077.html  It has not
yet been removed, though.

Similarly, I would expect that we would remove haveged from the
generated buster images once the kernel's jitter-entropy
collector is available for buster.

noah



Re: lack of boot-time entropy on arm64 ec2 instances

2020-01-14 Thread Noah Meyerhans
On Tue, Jan 14, 2020 at 03:01:23PM +, Luca Filipozzi wrote:
> > If we want to extend the cloud kernel to support other services, we need
> > to do more than just enable virtio-rng.  Somebody need to come up with a
> > complete list of devices that are needed for the service in question,
> > and work with the kernel team ensure that support for all of them is
> > enabled in the cloud kernel.
> 
> Folks working on the CCP, etc.: is it of interest to you to use the same
> cloud kernel? Does this improve our users' experience to have the same
> kernel across the different providers?

At present the cloud kernel's only optimizations consist of disabling
device drivers that are highly unlikely to be seen in a cloud
environment.  So the user experience is the same, except for the larger
/lib/modules/$(uname -r) directory and the larger initramfs image.  The
size of the initramfs does, of course, contribute to boot latency by
taking longer to uncompress, but I haven't quantified the difference
yet.  So for now, the cloud flavour is the conservative choice, in that
we know it will work and the drawbacks of using it are fairly minor.

There is talk of making some additional changes.  Bug #947759 contains a
decent summary of things that are being considered.  There is also
#941284, but my inclination is to not implement that suggestion.  In any
case, we'll need to consider the impact of any proposed changes on the
user experience in the supported clouds on an ongoing basis.

In an ideal world, we might be able to provide distinct flavours for
each cloud, since e.g. it makes no sense to enable the Amazon ENA
ethernet driver on kernels targeting environments other than AWS, but
that would require more resources for diminishing returns.

noah



Re: lack of boot-time entropy on arm64 ec2 instances

2020-01-10 Thread Noah Meyerhans
On Fri, Jan 10, 2020 at 03:52:53AM +, Luca Filipozzi wrote:
> Two questions (pretend i'm 6yo):
> 
> (1) why can't AWS offer virtio-rng support (other than "we already offer
> a RDRAND on amd64") and should Debian actively encourage their adding
> this support?

We can certainly ask.  However, it is very clear that EC2 is well aware
of the existence of virtio-rng (just look at who wrote the QEMU
virtio-rng implementation, for example), so, without wanting to
speculate too much, I'm going to guess that the decision to not offer it
is an intentional one, rather than an oversight.  If I learn more, and
the organization is willing to share it publicly, I'll pass it along.

> (2) what prevents our image having virtio-rng support (if it doesn't
> already)?

The cloud kernel flavour currently only targets AWS and Azure, because
people have put effort into making it support those services.  The
images that we generate for those services use that kernel.  The images
that we generate for other cloud services use the standard kernel, which
does have virtio-rng support.

If we want to extend the cloud kernel to support other services, we need
to do more than just enable virtio-rng.  Somebody needs to come up with a
complete list of devices that are needed for the service in question,
and work with the kernel team to ensure that support for all of them is
enabled in the cloud kernel.

noah



Re: lack of boot-time entropy on arm64 ec2 instances

2020-01-09 Thread Noah Meyerhans
On Thu, Jan 09, 2020 at 01:22:30PM -0500, Noah Meyerhans wrote:
> Our 5.4 kernel in sid does not suffer from a lack of entropy at boot on
> arm64 EC2 instances.  I guess it could be due to the "random: try to
> actively add entropy rather than passively wait for it" that tytso
> mentioned earlier.  I'm going to try to cherry-pick that into 4.19 and
> see if things speed up.  Since we're already running it in 5.4, I guess
> it's safe...

I've confirmed that 4.19.87 with changes cherry-picked from 50ee7529ec45
claims to have entropy at boot:

admin@ip-172-31-49-239:~$ cloud-init analyze blame
-- Boot Record 01 --
 02.88900s (init-network/config-ssh)
 ...

The change applies cleanly to our kernel tree, so this would appear to
be a possible solution.

I've opened https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=948519
against the kernel to discuss the entropy issue in general, and will follow
up there with a proposal for getting this change backported.

noah



Re: lack of boot-time entropy on arm64 ec2 instances

2020-01-09 Thread Noah Meyerhans
On Thu, Jan 09, 2020 at 04:57:24PM +, Luca Filipozzi wrote:
> > > >> I'd encourage those of you who are in position to make Amazon listen
> > > >> to get with the program and support virtio-rng.  :-)
> > > > Noah: chances of AWS supporting virtio-rng?
> > > I wonder if the correct criterion for the cloud image is compatibility
> > > with AWS and GCP only. I suppose a large number of deployment are based
> > > on private cloud environments (OpenStack etc.). In addition to AWS and
> > > GCP, there is also Azure, which is based on Hyper-V, which has a low
> > > chance of getting support for virtio-rng for obvious reasons.
> > 
> > The cloud kernel flavour currently targets AWS and Azure only.  Hence
> > the lack of support for virtio-rng.
> 
> How is entropy starvation at boot solved for x86-64 in AWS / Azure?

RDRAND is available on amd64, and contributes early entropy.

Our 5.4 kernel in sid does not suffer from a lack of entropy at boot on
arm64 EC2 instances.  I guess it could be due to the "random: try to
actively add entropy rather than passively wait for it" change that tytso
mentioned earlier.  I'm going to try to cherry-pick that into 4.19 and
see if things speed up.  Since we're already running it in 5.4, I guess
it's safe...

noah



Re: lack of boot-time entropy on arm64 ec2 instances

2020-01-09 Thread Noah Meyerhans
On Thu, Jan 09, 2020 at 01:18:24PM +0100, Adam Dobrawy wrote:
> >> I'd encourage those of you who are in position to make Amazon listen
> >> to get with the program and support virtio-rng.  :-)
> > Noah: chances of AWS supporting virtio-rng?
> I wonder if the correct criterion for the cloud image is compatibility
> with AWS and GCP only. I suppose a large number of deployment are based
> on private cloud environments (OpenStack etc.). In addition to AWS and
> GCP, there is also Azure, which is based on Hyper-V, which has a low
> chance of getting support for virtio-rng for obvious reasons.

The cloud kernel flavour currently targets AWS and Azure only.  Hence
the lack of support for virtio-rng.

noah



Re: lack of boot-time entropy on arm64 ec2 instances

2020-01-08 Thread Noah Meyerhans
On Wed, Jan 08, 2020 at 07:18:33PM -0500, Theodore Y. Ts'o wrote:
> Another approach would be to cherry pick 50ee7529ec45 ("random: try to
> actively add entropy rather than passively wait for it").  I'm pretty
> confident that it's probably fine ("it's fine.  it's fine.  Really,
> it's fine") for x86.  In particular, at least x86 has RDRAND, so even
> if it's utterly predictable to someone who has detailed information
> about the CPU's microarchitecture, it probably won't be a diaster.

Of course, another possibility would be to use the 5.4 kernel from
buster-backports, once it's uploaded, since it'll contain 50ee7529ec45
already.  I can confirm that ssh host key generation under Linux 5.4
does not block for lack of entropy.  We'll also at that point have the
option of using the cloud kernel flavour, when that's available.  I
don't really like the idea of using something that doesn't get support
from the security team, and I'd probably want to switch to the
buster-backports kernel for amd64 as well, if we were to do this.  It's
not what I prefer, but it is an option worth mentioning.

noah



Re: lack of boot-time entropy on arm64 ec2 instances

2020-01-08 Thread Noah Meyerhans
On Wed, Jan 08, 2020 at 07:18:33PM -0500, Theodore Y. Ts'o wrote:
> I was under the impression that Amazon provided virtio-rng support for
> its VM's?  Or does that not apply for their arm64 Vm's?  If they do
> support virtio-rng, it might just be an issue of building the cloud
> kernel with that option enabled.

RDRAND is used for amd64, via the RANDOM_TRUST_CPU kernel config option.
That is not available for arm64.  The rough equivalent there is
apparently RANDOM_TRUST_BOOTLOADER, which uses the EFI_RNG protocol.
It's only available in Linux 5.4 at the moment, and not currently
supported on EC2.  It seems like we should consider backporting this.

> Another approach would be to cherry pick 50ee7529ec45 ("random: try to
> actively add entropy rather than passively wait for it").  I'm pretty
> confident that it's probably fine ("it's fine.  it's fine.  Really,
> it's fine") for x86.  In particular, at least x86 has RDRAND, so even
> if it's utterly predictable to someone who has detailed information
> about the CPU's microarchitecture, it probably won't be a diaster.

Thanks, this is worth looking at, at least in the absence of
RANDOM_TRUST_BOOTLOADER.

> Upstream, it's enabled for all architectures, because Linus thinks
> hanging at boot is a worse problem than a insufficiently initialized
> CRNG.  I'm not at all convinced that it's safe for all ARM and RISC-V
> CPU's.  On the other hand, I don't think it's going to be any worse
> that haveged (which I don't really trust on all architectures either),
> and it has the advantage of not requiring any additional userspace
> packages.

...Although this really isn't a ringing endorsement. :(

noah



Re: lack of boot-time entropy on arm64 ec2 instances

2020-01-08 Thread Noah Meyerhans
On Wed, Jan 08, 2020 at 09:24:25PM +, Jeremy Stanley wrote:
> > I've seen reactions like this, but never an explanation.  Has anyone
> > written up the issues?  Given that "fail to boot" isn't a workable
> > outcome, it'd be useful to know exactly what risks one accepts when
> > using haveged.
> 
> While you're at it, defining "fail to boot" would be nice. Just
> because sshd won't start, it doesn't necessarily mean the machine
> isn't "booted" in some sense, only that maybe you can't log into it
> (substitute httpd and inability to browse the Web sites served from
> it, or whatever you prefer).

To be clear, the problem isn't a failure to boot, but rather a several
minute pause during boot.  In the default images, the pause occurs
during ssh host key generation, but it's possible that other services
would be impacted in actual production scenarios, particularly since
user-provided cloud-config would not be processed until after the
config-ssh module completes.

For reference, here's the "systemd-analyze blame" and "cloud-init
analyze blame" output showing the delay:

admin@ip-10-0-1-42:~$ systemd-analyze blame
2min 27.763s cloud-init.service
 26.080s cloud-final.service
  2.774s networking.service
  2.065s cloud-init-local.service
  1.554s cloud-config.service
  ...

admin@ip-10-0-1-42:~$ cloud-init analyze blame
-- Boot Record 01 --
 25.26800s (modules-final/config-scripts-user)
 145.79700s (init-network/config-ssh)
 00.62600s (modules-config/config-grub-dpkg)
 00.49900s (init-local/search-Ec2Local)



Re: lack of boot-time entropy on arm64 ec2 instances

2020-01-08 Thread Noah Meyerhans
On Wed, Jan 08, 2020 at 12:50:04PM -0800, Ross Vandegrift wrote:
> I know of two other options:
> - pollinate
> - jitterentropy-rngd
> 
> pollinate downloads seeds remotely, which feels wrong - and itself may
> require random numbers.  I've never tried jitterentropy.

IMO these are roughly equivalent to haveged, in that they're userspace
accumulators of entropy that try to seed the kernel.  I think I prefer
haveged's approach, but I'm really not qualified to judge.



Re: lack of boot-time entropy on arm64 ec2 instances

2020-01-08 Thread Noah Meyerhans
On Wed, Jan 08, 2020 at 08:17:13PM +, Luca Filipozzi wrote:
> Every time I propose the use of haveged to resolve entropy starvation, I
> get reactions from crypto folks saying that it's not a valid solution.
> They invariably suggest that passing hardware RNG through to the VM is
> the appropriate choice.
> 
> The latest such reaction being from mjg59. See:
> https://twitter.com/mjg59/status/1181423056268349441
> https://twitter.com/LucaFilipozzi/status/1181426253636755457

Yeah, this is my understanding as well.  But it's not like the haveged
developers are clueless, either, and there's a decent amount of research
behind their approach.  I can't pretend to understand the details of it,
though.

Even if passing entropy from the host to the VM is the right approach,
it's not something we can take advantage of today, due to lack of
support both within EC2 and within Debian.  I'll follow up with the
kernel team to gauge their level of support for enabling
CONFIG_RANDOM_TRUST_BOOTLOADER (and backporting it to buster).

If the kernel team is supportive of the
EFI_RNG+CONFIG_RANDOM_TRUST_BOOTLOADER approach, would folks be in favor
of enabling haveged temporarily, until kernel support is available, or
is it better to avoid it completely?

noah



lack of boot-time entropy on arm64 ec2 instances

2020-01-08 Thread Noah Meyerhans
The buster arm64 images on Amazon EC2 appear to have insufficient
entropy at boot, and thus take several minutes to complete the boot
process.

There are a couple of potential fixes (or at least workarounds) for this
problem, but none is entirely perfect.

Option 1:

We add haveged to the arm64 EC2 AMI.  This appears to work, and is
something we can do today.  The debian-installer has previously used
haveged to ensure reasonable entropy during installation, so there is
some precedent for this.

Option 2:

There is a mechanism by which the VM host can pass entropy to the guest
at boot time using the EFI_RNG protocol.  This won't require any
additional software in our images, but it has a couple of other notable
drawbacks:

  a. It depends on kernel functionality from Linux 5.4.  This could
 probably be backported to 4.19, but it would take work.
  b. It isn't clear that we want this functionality enabled globally. It
 is not currently enabled in our generic 5.4 kernel configs for
 arm64.  If it's not desirable on the generic kernel, we could 
 enable it only on the cloud flavour, but we don't currently have a 
 cloud flavour for 4.19.
  c. It requires EFI_RNG support from EC2, which is not currently
 available.  We can request this, but I don't know when/if they
 would provide it.

I'm not aware of any other options.  Given the above, it seems that
haveged is the only really feasible choice right now.  Does anyone
disagree with that assessment?  Are there options I've missed?



Re: generic (cloud) image problems

2019-12-29 Thread Noah Meyerhans
On Sun, Dec 29, 2019 at 11:16:20PM +0100, Thomas Goirand wrote:
> >> Right, both bridge-utils and vlan are required to setup a bridge or
> >> vlan from *within* /etc/network/interfaces.
> > 
> > That's not really true.  bridge-utils and vlan add some additional
> > syntax, but everything can be accomplished with some basic pre/post-up
> > commands in the interfaces entry.
> 
> Yes, indeed. But that's precisely what you don't want to have to do
> because it's too complicated. Best is if you can just use the "normal"
> syntax, with something like:
> 
> auto vlan80
> iface vlan80 inet static
>   vlan-raw-device bond0
>   address 10.0.0.1/24

I agree that current support is not perfect, but I don't think there's
any reason for our users to feel blocked by this lack of support.
100% of expected functionality is available.

> Probably, the issue is in ifupdown, which probably should be "fixed" to
> use ip correctly, rather than the legacy tools. Whoever's fault, the
> direct effect is that it gets less user friendly, and we should fix that.

Unfortunately, AFAICT, ifupdown is also pretty much dead.  It hasn't
seen any major feature development in quite some time, and hasn't seen
an upload to unstable in nearly a year.

I suspect that the Right Way to do this on a modern Debian system would
be to use systemd-networkd and systemd netdev configuration.  I don't
know enough about this right now to provide an example, or to figure out
if/how this is usable with our current interface configuration
management approach, but systemd.netdev(5) provides some basic examples.
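
Based purely on the systemd.netdev(5) and systemd.network(5) examples, a
rough (untested) sketch of the vlan80 case above might look like this:

# /etc/systemd/network/vlan80.netdev
[NetDev]
Name=vlan80
Kind=vlan

[VLAN]
Id=80

# /etc/systemd/network/bond0.network -- attach the VLAN to its parent
[Match]
Name=bond0

[Network]
VLAN=vlan80

# /etc/systemd/network/vlan80.network
[Match]
Name=vlan80

[Network]
Address=10.0.0.1/24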

noah



Re: generic (cloud) image problems

2019-12-28 Thread Noah Meyerhans
On Fri, Dec 27, 2019 at 10:33:16PM +0100, Christian Tramnitz wrote:
> Right, both bridge-utils and vlan are required to setup a bridge or
> vlan from *within* /etc/network/interfaces.

That's not really true.  bridge-utils and vlan add some additional
syntax, but everything can be accomplished with some basic pre/post-up
commands in the interfaces entry.
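
For example, a bridge can be defined without bridge-utils along these
lines (a sketch only; interface names and addresses are placeholders):

auto br0
iface br0 inet static
    address 10.0.0.1/24
    pre-up ip link add name br0 type bridge
    pre-up ip link set dev eth0 master br0
    pre-up ip link set dev eth0 up
    post-down ip link del dev br0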



Re: generic (cloud) image problems

2019-12-28 Thread Noah Meyerhans
On Sat, Dec 28, 2019 at 12:23:04PM +0100, Christian Tramnitz wrote:
> There is no workaround for the lack of vlan and bridge-utils though.
> It would be great it we could get those two included into the base
> image.

Both vlan and bridge-utils are legacy tools that are replaced by ip(8)
on modern GNU/Linux systems.  You should use that instead.  I do not
believe it would be desirable to include the legacy tools in the default
images.
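
For reference, the equivalent operations with ip(8) look roughly like
this (interface names, VLAN id, and addresses are placeholders):

# VLAN 80 on top of eth0
ip link add link eth0 name eth0.80 type vlan id 80
ip addr add 10.0.0.2/24 dev eth0.80
ip link set dev eth0.80 up

# bridge with eth0 as a member
ip link add name br0 type bridge
ip link set dev eth0 master br0
ip link set dev br0 up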

If you really aren't willing to use ip(8), then I suggest constructing
your own VM images based on our configuration.  This is a fully
supported use case and something that we consider a reasonable option
for people who need customizations on top of what we've built.  I
documented the process a while back at
https://noah.meyerhans.us/blog/2017/02/10/using-fai-to-customize-and-build-your-own-cloud-images/
and since then we've made it easier by eliminating the need to
explicitly call fai-diskimage.  If you make you package modifications to
e.g. the package_config/DEBIAN file, then you can build an image using
"make buster-image-generic"

noah



Re: generic (cloud) image problems

2019-12-27 Thread Noah Meyerhans
On Fri, Dec 27, 2019 at 11:39:00PM +0100, Thomas Goirand wrote:
> >> it is virtually impossible to use the image for cloud-init based 
> >> deployment:
> >> While cloud-init can also install missing packages, the install would
> >> happen *after* package installation and network setup. bridge-utils
> >> and vlan are required if the network setup has either a vlan or a
> >> bridge and gnupg2 is required to add 3rd party repos (otherwise
> >> "apt-key add" won't work). Both - network access and 3rd party repos -
> >> may be required for further steps of an deployment (i.e. to install
> >> automation tools like salt, chef or puppet).
> > 
> > I describe a workaround for the lack of "apt-key add" functionality at
> > https://salsa.debian.org/cloud-team/debian-cloud-images/issues/17#note_126311
> > 
> > We (the cloud team) have basically concluded in recent conversations
> > that cloud-init's usage of apt-key is a bug and should be removed.
> > 
> > noah
> 
> Though until that's fixed, Christian is right to complain.

Yes, but the workaround I provided completely replaces the missing
functionality.  IMO if the docs described the process in the workaround,
rather than the existing behavior based on apt-key, we'd be done.

noah



Re: generic (cloud) image problems

2019-12-27 Thread Noah Meyerhans
On Fri, Dec 27, 2019 at 02:16:05PM +0100, Christian Tramnitz wrote:
> However, I'm running into multiple problems staging a system through 
> cloud-init.
> Without the packages
> - bridge-utils
> - vlan
> - gnupg2
> it is virtually impossible to use the image for cloud-init based deployment:
> While cloud-init can also install missing packages, the install would
> happen *after* package installation and network setup. bridge-utils
> and vlan are required if the network setup has either a vlan or a
> bridge and gnupg2 is required to add 3rd party repos (otherwise
> "apt-key add" won't work). Both - network access and 3rd party repos -
> may be required for further steps of an deployment (i.e. to install
> automation tools like salt, chef or puppet).

I describe a workaround for the lack of "apt-key add" functionality at
https://salsa.debian.org/cloud-team/debian-cloud-images/issues/17#note_126311

We (the cloud team) have basically concluded in recent conversations
that cloud-init's usage of apt-key is a bug and should be removed.
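
The general idea (a sketch of one way to do it; URLs and names are
placeholders, and it assumes the vendor publishes a binary/dearmored
keyring so that gnupg isn't needed at all) is to install the key into a
dedicated keyring and reference it with signed-by instead of apt-key:

wget -O /usr/share/keyrings/example-archive-keyring.gpg \
    https://example.com/example-archive-keyring.gpg

echo 'deb [signed-by=/usr/share/keyrings/example-archive-keyring.gpg] https://example.com/debian buster main' \
    > /etc/apt/sources.list.d/example.list

apt-get update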

noah



Re: updating cloud-init in stable

2019-12-26 Thread Noah Meyerhans
On Wed, Dec 25, 2019 at 12:39:19PM +0100, Thomas Goirand wrote:
> >> Though if we're to have a Buster branch, best would probably be to just
> >> call the branch debian/buster, and start the fork from the debian/18.3-6
> >> tag (so we have a meaningful history). I usually have the habit to never
> >> start a debian/stable branch before it's actually needed, though now is
> >> the time, probably.
> >>
> >> At the moment, I cannot see your branch in Salsa. I guess you don't have
> >> direct write access to the package (which is probably needed for this
> >> kind of operation). Can someone grant it to you?
> > 
> > Noah's access is correct, he should be able to push a new branch to the 
> > repo.
> > I imagine he kept it in his space for now in case anyone had questions:
> >   https://salsa.debian.org/noahm/cloud-init/commits/buster-p-u
> > 
> > It's a no change backport from the current master.  That seems like a good 
> > plan
> > to me.  Is there a reason you'd like to go back to 18.3-6?
> 
> I just would like to keep a relevant Git history of what's in Buster.
> It's not very important though... :)

We should create a branch based on the point that will require the
minimal changes in order to achieve what we want.  Since the goal is to
get the version in testing into stable-updates, we should branch from
that version (tag debian/19.3-1).  Changes necessary to build that for
stable are trivial.  Changes from any other version require more work
and result in a larger delta from the starting point.



Re: updating cloud-init in stable

2019-12-26 Thread Noah Meyerhans
On Wed, Dec 25, 2019 at 12:05:08PM +0100, Thomas Goirand wrote:
> I can see it, though I do expect we just work on a single Git
> repository, not with everyone forking everything. I've opened a buster
> branch on Salsa, using the debian/18.3-6 tag the entry point. We can
> start from there.

Sending merge requests from a private fork is a very common workflow for
git-based projects.



Re: updating cloud-init in stable

2019-12-24 Thread Noah Meyerhans
On Wed, Dec 25, 2019 at 12:42:49AM +0100, Thomas Goirand wrote:
> > In order to get that process started, I have constructed a "buster-p-u"
> > branch on salsa and begun testing cloud-init 19.3 on buster on AWS. [2]
> 
> Though if we're to have a Buster branch, best would probably be to just
> call the branch debian/buster, and start the fork from the debian/18.3-6
> tag (so we have a meaningful history). I usually have the habit to never
> start a debian/stable branch before it's actually needed, though now is
> the time, probably.

Yeah, my intent was definitely not for this branch to be the final
branch from which we perform uploads, etc.  Consider it a feature
branch, and my intent was to submit a merge request for review at a
later time.  FWIW, there are apparently no changes needed, aside from
the debian/changelog update, to facilitate a buster backport.

> At the moment, I cannot see your branch in Salsa. I guess you don't have
> direct write access to the package (which is probably needed for this
> kind of operation). Can someone grant it to you?

Hm.  Can you not access
https://salsa.debian.org/noahm/cloud-init/tree/buster-p-u ?  Gitlab
claims that the fork is public.

noah



updating cloud-init in stable

2019-12-23 Thread Noah Meyerhans
In our most recent meeting, we discussed possibly updating cloud-init in
stable. [1]  This would be our first time performing a
feature-update to a stable release, even though the stable release
managers some time ago indicated a willingness to let us do that.

In order to get that process started, I have constructed a "buster-p-u"
branch on salsa and begun testing cloud-init 19.3 on buster on AWS. [2]

There was some mention of cloud-init modifications to support Azure
during the meeting, and waldi has an ACTION to follow up on them.  Would
an update to 19.3 include these changes, or are they something
additional?  What's the status of them?

If anybody wants to help test cloud-init, I have an apt repo and a
pre-built AMI that I can share.  Or you can build your own packages
based on the branch on salsa.  Feedback and bugfixes would certainly be
welcomed.  Testing on non-AWS providers would be especially helpful.

I'm also hoping to update the awscli and boto packages, but those
packages aren't maintained by the cloud-team, so the situation is
somewhat different.  I will follow up with the maintainers separately.

noah

1. http://meetbot.debian.net/debian-cloud/2019/debian-cloud.2019-12-11-19.00.txt
2. https://salsa.debian.org/noahm/cloud-init/tree/buster-p-u



Re: IRC meeting: Wedensday, 2019-12-11

2019-12-10 Thread Noah Meyerhans
On Tue, Dec 10, 2019 at 08:46:08AM +0100, Tomasz Rybak wrote:
> I remind everyone that our next meeting will take place
> this Wednesday, 2019-12-11, at 19:00 UTC.

I won't be able to make this one because of work commitments.  Items
I would have wanted to discuss include:

1. Still no word on AWS Marketplace Seller Agreement acceptance from
SPI.  Can a delegate please ping them again?

2. The thread at [1] makes me wonder if we should consider trying to
update cloud-init in our stable images.  This could be done via a full
update to stable, via stable-updates, as we've discussed with regard to
cloud SDKs and tools (arguably cloud-init fits into this category,
despite being vendor-agnostic).  Alternatively, we could simply provide
a package via buster-backports, and include that in the images.  I'll
start a new thread on this on debian-cloud@l.d.o.

3. I have begun work on introducing a "cloud optimized" kernel for
arm64, similar to what we've already got for amd64. [2]

4. I still need to post somewhere (blog, bits, etc) about our daily sid
images, as discussed at the last IRC meeting.

noah

1. https://lists.debian.org/debian-cloud/2019/12/msg8.html
2. https://salsa.debian.org/kernel-team/linux/merge_requests/193



Re: User-data And cloud-init

2019-12-09 Thread Noah Meyerhans
On Mon, Dec 09, 2019 at 06:18:15PM +0200, Michael Kanchuker wrote:
>Is there an official image like with Jessie or Stretch?

Yes, details are at https://wiki.debian.org/Cloud/AmazonEC2Image/Buster

It is not yet available on the AWS Marketplace because we are still
blocked on some legal details...

Unfortunately, I think even buster does not contain a new enough
cloud-init to support text/jinja2 userdata parts.  At least according to
the cloud-init docs, that feature wasn't added until version 19.3, which
we don't even have in sid yet.  Buster contains 18.3.

text/jinja2 is documented for 19.3 at:
https://cloudinit.readthedocs.io/en/19.3/topics/format.html

And note its absence from the 19.2 docs at:
https://cloudinit.readthedocs.io/en/19.2/topics/format.html

noah



Re: Configuring /etc/resolv.conf ?

2019-12-06 Thread Noah Meyerhans
On Fri, Dec 06, 2019 at 04:42:18PM +0100, Dick Visser wrote:
> I'm struggling to add custom nameservers to /etc/resolv.conf.
> The file gets overwritten on reboot, but I can't find out where this is done.
> Any ideas?

On our cloud images, resolv.conf is managed by dhclient, which is
invoked by ifup and is responsible for setting up the network interfaces
based on DHCP negotiation with a remote service provided by the cloud
provider.

The /sbin/dhclient-script shell script contains a function
make_resolv_conf(), which generates and installs the new resolv.conf.
If needed, you can redefine that function by placing a script fragment
in /etc/dhcp/dhclient-enter-hooks.d/
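
For example, a hook along these lines (the nameservers are placeholders)
will keep dhclient from overwriting your settings:

# /etc/dhcp/dhclient-enter-hooks.d/custom-resolv (sketch)
make_resolv_conf() {
    cat > /etc/resolv.conf <<EOF
nameserver 192.0.2.10
nameserver 192.0.2.11
search example.net
EOF
}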

noah



Re: Handling predictive network interfaces 'sometimes'?

2019-12-04 Thread Noah Meyerhans
On Wed, Dec 04, 2019 at 01:16:27PM +1100, paul wrote:
> I'm reworking my old VPN server, and will use the Debian 10 AMI in AWS. I've
> noticed that predictable network interface names are enabled for t3 servers,
> but not t2 - I have test setup on a t2.micro and a t3.micro, and only the t3
> has predictable interface names. I'm trying to write up some Ansible
> templates for this new vpn setup.

Actually, predictable interface names are enabled everywhere.  There are
implementation differences between the t2 and t3 instance types that
change udev's behavior with regard to how interface names are chosen.

The ENA network device used on t3 instances appears on the PCI bus of
the instance.  So when udev inspects the device, it finds information
that it uses to derive a consistent name for the device (see 'udevadm
info /sys/class/net/ens5' for the information that it works from).

T2 instances are based on Xen and use the Xen netfront (vif) interface.
These interfaces aren't PCI devices, so udev can't generate a name based
on the PCI bus ID. Compare the 'udevadm info' output for a t2 with that
of a t3.  Because Debian doesn't enable the MAC address based naming
scheme, udev ends up leaving the kernel's interface name in place on t2.

> I don't play around with iptables a lot (my netadmin-fu is weak), but what's
> the best way to go about writing a set of firewall rules that will satisfy
> both an eth0 and an ens5? Just simply duplicate the rule for each naming
> type? Disable predictable names somehow (google is confusing on how,
> exactly)? I'd like to end up with a template that 'just works' without
> having to know about this t2/t3 difference issue. It's not the end of the
> world if I can't, but I'd like to avoid surprising 'future me' down the
> road.

You can disable predictable interface naming by passing "net.ifnames=0"
to the kernel command line (edit /etc/default/grub) if you want to
disable interface renaming completely.  But a better approach would be
to update your firewall configuration to not hardcode a specific
interface name.  You probably can get what you want by identifying the
interface associated with your default route, which you can get reliably
with "ip -o route show default"

noah



Re: Buster AMI interface names

2019-10-18 Thread Noah Meyerhans
On Fri, Oct 18, 2019 at 08:35:34AM -0700, Ross Vandegrift wrote:
> On Fri, Oct 18, 2019 at 07:15:23AM +0200, Geert Stappers wrote:
> > Where to sent patches for  `configure-pat.sh`?
> 
> I don't know, I'm not familiar with it.

The canonical source for this script is the aws-vpc-nat package for
Amazon Linux. It's available in the Amazon Linux yum repositories.
Sources can be retrieved with "yumdownloader --source aws-vpc-nat"

There aren't currently public vcs repositories for Amazon Linux
packages, so patches to the SRPM would need to be sent via AWS support
channels. I can probably help push changes through.

Note also that the script does not appear to be published under an open
source license, so it probably shouldn't be redistributed publicly. This
issue would also be worth raising with AWS support.



Re: Buster AMI interface names

2019-10-17 Thread Noah Meyerhans
Hello.

On Fri, Oct 18, 2019 at 12:39:15AM +0200, Dick Visser wrote:
> I'm happily using the new Buster AMI, but I noticed that the image has
> consistent device naming enabled, so my instances have their single
> interface called "ens5".

Can you explain why this causes problems for you? We want to keep device
renaming enabled for consistency with standard Debian installations, and
we aren't aware of specific problems with it that necessitate it being
disabled.

noah



Re: AWS Marketplace re:Invent session - Debian

2019-10-10 Thread Noah Meyerhans
On Thu, Oct 10, 2019 at 03:02:48PM +0100, Marcin Kulisz wrote:
> > This is only email I got about this, so maybe I'm missing something
> > here. But - is this something we shold talk about during sprint next
> > week?
> 
> IMO it makes sense to have a chat about it. If we want Debian to be more
> visible and used it wouldn't hurt to do that.
> 
> But I think problem in here is going to be not with technicalities per se but
> with bringing people working on docker images to the team.

Assuming SPI has signed off on the user agreements for AWS marketplace
access, we'll probably want to spend time on that topic in general, and
ideally get the buster AMIs listed there. We should keep container image
publication in mind as we work on that.

One thing that's worth talking about, regardless of whether the Docker
image maintainers are part of the cloud team or not, is how we control
access to the marketplace publication process. At present, the only way
to publish is via the web console. Access is controlled by IAM
permissions, and we'll need to determine whether or not the permissions
allow us to control publication access on a granualar enough basis to
suit our needs. [1] Roles that can publish AMIs should not necessarily
have the ability to publish container images, and vice versa. At the
moment, I'm not sure if that's possible, since there aren't distinct
actions for AMI publication and container image publication, and resource level
access isn't supported, so we might have to figure something out. [2]

noah

1. 
https://docs.aws.amazon.com/marketplace/latest/userguide/detailed-management-portal-permissions.html
2. 
https://docs.aws.amazon.com/IAM/latest/UserGuide/list_awsmarketplacecatalog.html



Re: Sprint 2019

2019-10-09 Thread Noah Meyerhans
On Tue, Oct 08, 2019 at 01:22:20PM -0400, Jonathan D. Proulx wrote:
> Do we have an attendee count for room setup, I've variously hear 13
> and 40...

The currently confirmed roster is at
https://wiki.debian.org/Sprints/2019/DebianCloud2019 and shows about 13
people. I wouldn't expect much deviation from that.

noah



Re: Debian 10 Buster AWS AMI

2019-10-05 Thread Noah Meyerhans
> Looking forward to official Buster image soon. Please let me know if I
> can be of any assistance.

Details about the official buster AMIs can be found at
https://wiki.debian.org/Cloud/AmazonEC2Image/Buster, and these AMIs are
available today.

The Marketplace listings, when they are available, will reference the
same images. There are additional license terms that need to be accepted
by SPI before we can publish to the marketplace. We hope to have this
addressed within the next couple of weeks.

Is there a specific reason why you're interested in the Marketplace
listing, as opposed to the AMIs listed on the wiki?

noah



Re: Releasing Buster for AWS

2019-09-18 Thread Noah Meyerhans
On Wed, Sep 18, 2019 at 10:22:17PM -0500, Stephen Gelman wrote:
> On Sun, Sep 08, 2019 at 07:49:45PM +0200, Bastian Blank wrote:
> > > If no-one shouts I will do the first release of Buster for AWS with both
> > > amd64 and arm64 tomorrow. Azure needs to be done anyway.
> 
> Seems this didn’t happen.  What are the current blockers for getting an AMI 
> released?  Anything I can do to help?
> 

It was released several days ago.
https://wiki.debian.org/Cloud/AmazonEC2Image/Buster



Re: Releasing Buster for AWS

2019-09-08 Thread Noah Meyerhans
On Sun, Sep 08, 2019 at 07:49:45PM +0200, Bastian Blank wrote:
> If no-one shouts I will do the first release of Buster for AWS with both
> amd64 and arm64 tomorrow.  Azure needs to be done anyway.

Do it. A lot of users will be happy to have buster AMIs. The remaining
points that were unresolved in
https://salsa.debian.org/cloud-admin-team/debian-cloud-images-release/merge_requests/5
aren't relevant to this release, so they can be discussed later.

noah



Re: Is Eucalyptus upstream dead?

2019-09-03 Thread Noah Meyerhans
On Tue, Sep 03, 2019 at 10:24:24AM +0100, kuLa wrote:
> > on my side I would have no objections with a removal.
> 
> Should we actively ask for removal or wait till normal bugs will become RC and
> removal for all py2 packages is going to be compulsory?
> I personally am ok with both.

In my experience, early removal is preferable. It gives users an
indication that they should be looking for alternatives now, while
things are still reasonably safe to use. They can migrate in their own
time frame. Whereas if we wait until a (possibly security related) RC
bug, the transition is much more abrupt for the users.

The big question to me is whether the packages should be removed from
(old)stable. In general, I'd say yes for the same reasons as above. By
keeping the packages in the archive, we are presenting a level of
support for them that we may not actually be prepared to meet.



Re: Using GitLab CI (was: Moving daily builds out of main debian-cloud-images project)

2019-09-02 Thread Noah Meyerhans
On Mon, Sep 02, 2019 at 05:10:55PM +0200, Thomas Goirand wrote:
> State what? That we're supposed to build the cloud images on Casulana?
> As much as I know, that has always been what we were supposed to do. You
> alone decided that the gitlab's CI was the way to go.

Thomas, you seem to be under the mistaken impression that building
images from GitLab pipelines implies that the builds are not happening
on casulana. That is not the case. The builds coordinated by salsa *do*
happen on casulana.

> We're not only building images for Sid/Unstable, but also for stable. In
> such case, we want the images to be built *only* when needed, ie: when a
> package is updated in security.debian.org, or when we have a point
> release. That's what was done for the OpenStack images since Stretch.

This, also, is fully compatible with salsa-driven builds happening on
casulana.

Does this address your concerns regarding Bastian's proposal?

noah



Re: Moving daily builds out of main debian-cloud-images project

2019-09-02 Thread Noah Meyerhans
On Sun, Sep 01, 2019 at 12:40:50PM +0200, Bastian Blank wrote:
> As mentioned during the last meeting, I would like to move the daily
> builds out of the main debian-cloud-images project.  The new project
> reponsible for them would exist in a different group, so we don't longer
> need to guard access to the main cloud-team group that strict.
> 
> Disadvantages of this move:
> - Visibility of daily builds is reduced as they are in a new group.
> - Code and config updates for now need explicit changes in the -daily
>   (and the same in the already existing -release) repo to become active.
> 
> Advantages:
> - Access credentials for vendor and Debian infrastructure only exist in
>   the new group, so accidently leaking them is way harder.
> - All jobs in this new group will run on Debian infrastructure.
> - We gain the possibility to actually test upload procedures, which may
>   need access to credentials.
> 
> Any objections?

+1 to this proposal. Reduced visibility of the daily builds is of
minimal impact. It's likely that most users of the daily builds will
either be members of the cloud team or people that have been directed to
the daily builds by members of the cloud team.

noah



Re: List of stuff running on the Debian AWS account

2019-08-27 Thread Noah Meyerhans
On Tue, Aug 27, 2019 at 10:32:41PM -0300, Antonio Terceiro wrote:
> > Do we have a list of stuff that runs on our old AWS account?  As we need
> > to migrate all of this to our new engineering account, it would be nice
> > to actually know what runs there.  It would be even better if we know
> > how this is all setup.
> 
> ci.debian.net runs there. 1 master server and 12 workers. what exactly
> do you mean with "how this is all setup"? the stu
> 
> there is also https://collab.debian.net/ which is run by Valessio Brito
> (but the instance was created by me).

Were the instances created by hand? Or using a tool like AWS
CloudFormation, TerraForm, etc? Are they managed using some kind of
configuration management system, or is it all manual?

For some background, the AWS account under which these instances are
running is owned by an individual DD, not SPI/Debian. We have created a
new account that is properly manageable and officially recognized by
SPI. We'd like to migrate as much as possible to the new account.

The old account won't go away completely, as it is where the pre-buster
AMIs live, and they can't be migrated between accounts. So there's not
an immediate sense of urgency, but we'd like to get things moved as soon
as possible.

Practically speaking, moving services to the new account will involve
launching replacement instances. If they were created/managed by a tool,
rather than by hand, this is much easier, hence Bastian's question.

noah




Re: sharing an rbnb (Debian Cloud Sprint Fall 2019)

2019-08-18 Thread Noah Meyerhans
On Sun, Aug 18, 2019 at 10:36:51AM +0100, Marcin Kulisz wrote:
> > I'm very much for sharing a big airbnb with any of you as well. I've
> > searched too, and it's a way cheaper than hotels indeed. I don't mind
> > walking a bit if it's to get the comfort of a private place. So, count
> > me in your airbnb searches! Anyone else to join?
> 
> I agree with zigo, tho 1st have to figure out if I'm able to go

I'm potentially interested in this. It'll depend on how much (if any) my
employer is willing to put towards this. I'll try to answer definitively
in the next few days.



Re: Last week on IRC

2019-08-18 Thread Noah Meyerhans
On Sun, Aug 18, 2019 at 07:06:27AM +0100, kuLa wrote:
> >New regions on old AWS account: need root account for that.
> 
> Above is not fully correct but anyway it's been sorted and Debian
> cloud images thanks to Noah should be now available in the new regions
> as well.

It's been sorted out for all existing public regions, but future regions
will still need somebody to manually enable them. I'm not 100% sure, but
I believe this must be done via the web UI (that is, there is no API).



Re: Debian Cloud Sprint Fall 2019

2019-08-15 Thread Noah Meyerhans
On Thu, Aug 15, 2019 at 03:38:00PM -0400, Jonathan Proulx wrote:
> Accommodation
> -
> there's a lot of choice (and it is all fairly priicey
> unfortunately). As a visitor Noah may actually have better info than I

Last time I was in the neighborhood, I stayed at the Fairfield Inn &
Suites, which is a bit further away (roughly 20 minutes on foot) but
much more reasonably priced. The hotel's web site suggests that prices
are high for the days in question, though, which makes me wonder if it's
an unusually busy week:
https://www.marriott.com/hotels/travel/bosbg-fairfield-inn-and-suites-boston-cambridge/
and https://goo.gl/maps/sHfbdEC9j9R7cs3b6

noah



Bug#934274: cloud.debian.org: stretch AMIs not available in new regions

2019-08-08 Thread Noah Meyerhans
On Thu, Aug 08, 2019 at 05:56:44PM -0700, Tarjei Husøy wrote:
> > The AMIs in the AWS marketplace should be launchable in the new regions.
> > See https://aws.amazon.com/marketplace/pp/B073HW9SP3 and let me know if
> > it'll work for you. The Marketplace AMIs are identical to the ones we
> > publish directly.
> 
> I wasn’t able to get this to work. I usually launch instances via the
> API, but the API returns zero AMIs for the account (379101102735) in
> the ap-east-1 region. I tried via the web console too, that errors
> with "AWS Marketplace was unable to proceed with the product you
> selected. Please try selecting this product later."

Interesting. You might consider asking your AWS support people why
you're not able to launch AMIs that the Marketplace reports as
available.

> > I'll see about getting the new regions enabled for the Debian account
> > that we use for publishing the AMIs on
> > https://wiki.debian.org/Cloud/AmazonEC2Image/Stretch
> 
> Great, thanks! If you have web access it’s quite easy to activate,
> just click  “My Account”, scroll down to “AWS Regions” and click the
> buttons.

Unfortunately, I don't have the necessary permission to opt our AWS
account into those regions...

noah



Bug#934274: cloud.debian.org: stretch AMIs not available in new regions

2019-08-08 Thread Noah Meyerhans
> Amazon recently launched two new regions, Hong Kong (ap-east-1) and
> Bahrain (me-south-1). All new regions after March 20, 2019 come on a
> opt-in basis [1], thus you might not have seen them show up unless you
> saw the news when they were introduced. Would it be possible to have
> stretch images published for these regions?

Yep, that's an oversight on my end. The need to opt in to new regions
is not something I've fully internalized yet.

The AMIs in the AWS marketplace should be launchable in the new regions.
See https://aws.amazon.com/marketplace/pp/B073HW9SP3 and let me know if
it'll work for you. The Marketplace AMIs are identical to the ones we
publish directly.

I'll see about getting the new regions enabled for the Debian account
that we use for publishing the AMIs on
https://wiki.debian.org/Cloud/AmazonEC2Image/Stretch

noah



Bug#934274: cloud.debian.org: stretch AMIs not available in new regions

2019-08-08 Thread Noah Meyerhans
Package: cloud.debian.org
Severity: important
Control: submitter -1 Tarjei Husøy 

Hi,

Amazon recently launched two new regions, Hong Kong (ap-east-1) and Bahrain 
(me-south-1). All new regions after March 20, 2019 come on an opt-in basis [1], 
thus you might not have seen them show up unless you saw the news when they 
were introduced. Would it be possible to have stretch images published for 
these regions?

Thanks!

[1]: https://docs.aws.amazon.com/general/latest/gr/rande-manage.html

Have a splendid day!

—
Tarjei Husøy
Co-founder, megacool.co


Re: Regular meeting of the team

2019-08-05 Thread Noah Meyerhans
On Mon, Aug 05, 2019 at 05:57:57PM +0100, Marcin Kulisz wrote:
> > 1900 UTC makes it 2100 Geneva time. I'd very much prefer something
> > during work hours if possible. Or is it that the majority of us is doing
> > this away from office hours?
> 
> I suggested this time having in mind that quite a few ppl are in the US, but I
> don't anticipate any issues with my own timetable if we'd make it a bit
> earlier thus I'm fine with earlier time if this fits other ppl as well.

I could go as early as 15:00 UTC. Later is better. :)



Re: Regular meeting of the team

2019-08-05 Thread Noah Meyerhans
On Sun, Aug 04, 2019 at 10:15:32PM +0200, Tomasz Rybak wrote:
> > So I'd say Wednesday 7th of Aug at 1900UTC or maybe we could use
> > something like
> > doodle.com to coordinate this?
> 
> I propose the same weekday (Wednesday) and hour (19:00 UTC),
> but let's move to next week (so 2019-08-14).
> 7th might be a bit too close for organize.
> 
> Any objections or remarks?

I can make either proposed date work, at 19:00 UTC.

noah



Re: Debian 10 Buster AWS AMI

2019-07-21 Thread Noah Meyerhans
On Sat, Jul 20, 2019 at 11:26:38PM +0300, Mustafa Akdemir wrote:
>Can i use Debian GNU/Linux 9 (Stretch) AMI by upgrading to Debian
>GNU/Linux 10 (Buster) for Wordpress web site server until Debian GNU/Linux
>10 (Buster) AMI will be published. May it cause any problem by upgrading
>Debian GNU/Linux 9 (Stretch) AMI?

Hi Mustafa.

Upgrading an instance from stretch to buster should be essentially the
same as any other stretch->buster upgrade.

If desired, you can also create your own AMIs from scratch using the
same FAI configs that the cloud team uses to generate the official
images. In that case, the resulting image will be essentially
indistinguishable from the official ones, except they'll be owned by
your account. The steps to do this are:

install fai-server, fai-config, and fai-setup-storage (>= 5.7) and
qemu-utils.

Clone the FAI configs from 
https://salsa.debian.org/cloud-team/debian-cloud-images.git

Generate an image using:
/usr/sbin/fai-diskimage --verbose --hostname debian \
  --class DEBIAN,CLOUD,BUSTER,BACKPORTS,EC2,IPV6_DHCP,AMD64,GRUB_CLOUD_AMD64,LINUX_IMAGE_BASE,LAST \
  --size 8G --cspace /path/to/debian_cloud_images/build/fai_config \
  /tmp/image.raw

Then write the resulting /tmp/image.raw file to a dedicated 8 GB EBS
volume with:
# dd if=/tmp/image.raw of=/dev/FOO bs=512k

Then register the EBS volume as an AMI using the 'aws ec2
register-image' command from the awscli package. Be sure to enable
EnaSupport and SriovNetSupport. You may find the wrapper script at
https://salsa.debian.org/noahm/ec2-image-builder/blob/master/bin/volume-to-ami.sh
convenient. This script is used in the publication of the stretch AMIs,
but not the buster AMIs. The process is essentially the same, though.
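
For reference, the registration step looks roughly like this (a sketch;
snapshot/volume IDs and names are placeholders, and it assumes you first
snapshot the EBS volume you wrote the image to):

# snapshot the volume containing the written image
aws ec2 create-snapshot --volume-id vol-0123456789abcdef0 \
    --description "debian buster image"

# register an AMI from the resulting snapshot
aws ec2 register-image \
    --name "debian-10-custom-amd64" \
    --architecture x86_64 \
    --virtualization-type hvm \
    --root-device-name /dev/xvda \
    --ena-support \
    --sriov-net-support simple \
    --block-device-mappings \
    'DeviceName=/dev/xvda,Ebs={SnapshotId=snap-0123456789abcdef0,VolumeSize=8,DeleteOnTermination=true}'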

noah



Re: Debian 10 Buster AWS AMI

2019-07-10 Thread Noah Meyerhans
On Wed, Jul 10, 2019 at 05:58:16PM +0300, Mustafa Akdemir wrote:
>When Debian 10 Buster AWS AMI will be created and added to AWS
>Marketplace?

Unfortunately, we have some administrative details to work out regarding
our publication account with AWS. It will be published as soon as we
resolve the details...

noah





Re: Presenting Debian to the user at cloud provider marketplaces

2019-07-07 Thread Noah Meyerhans
On Sun, Jul 07, 2019 at 11:49:22PM +0200, Thomas Goirand wrote:
> Why must Debian users on Azure see the word "credativ"? To me, it's as
> if I uploaded the OpenStack image to cdimage.debian.org, with filename
> "openstack-debian-image-provided-by-zigo.qcow2". This feels completely
> inappropriate.
> 
> Can this be removed?

Credativ sponsored this work. Is it really unreasonable to acknowledge
this? We have quite a long history of displaying public acknowledgements
of our sponsors' contributions.





Re: AWS AMI entry for ap-southeast-2 (Sydney) missing from Stretch AMI page

2019-07-05 Thread Noah Meyerhans
On Thu, Jul 04, 2019 at 11:49:13PM -0700, Noah Meyerhans wrote:
> amd64: ami-0776bf2e6645ef887
> arm64: ami-00781f5d2e3a6d2ab

Correction, the AMI IDs for ap-southeast-2 are:

amd64: ami-069a1bfa76dd19320
arm64: ami-00781f5d2e3a6d2ab





Re: Debian Cloud Sprint Fall 2019

2019-07-02 Thread Noah Meyerhans
On Tue, Jul 02, 2019 at 02:12:27PM -0700, Zach Marano wrote:
>I propose we start planning the next Debian Cloud Sprint.

Sounds like a plan. With DebConf coming up, I suspect there might be an
opportunity to do a bit of planning face-to-face, as well as via email
etc.

I've created a wiki entry for the sprint, with all details left TBD:
https://wiki.debian.org/Sprints/2019/DebianCloud2019

>I offer that we (Google) host this year in Seattle sometime in
>October. Does anyone have any comments, ideas, or issues with
>starting this planning process?  Alternatively, we did talk about
>hosting the next sprint on the east coast of the US or Canada. If
>that is something people are interested in, I am willing to look
>into that as well. The downside being that all large cloud
>providers are based in Seattle and may not be able to get as many
>people to attend.

I personally prefer Seattle, but we've dragged the Europeans all the way
out here enough that we should probably give them a break. I could
probably make October on the east coast work. Preferences would be
Boston, then NYC.

I've reached out to contacts at MIT to see if they'd be able to provide
a venue for this year in the Boston area. I know they've got some people
quite active in the Debian and OpenStack cloud areas who could provide
sponsorship.



Re: AWS Debian Stretch Marketplace AMI doest'not allow launch t3a/m5a Amazon EC2 instance

2019-05-24 Thread Noah Meyerhans
On Tue, May 21, 2019 at 11:14:02AM +0300, Eugeny Romanchenko wrote:
>Is it possible for you to add you current Marketplace image to the list of
>supported for t3a/m5a AWS instances?

I've submitted a request to update the AWS Marketplace listing. The new
listing will use the latest stretch 9.9 AMIs (as visible at
https://wiki.debian.org/Cloud/AmazonEC2Image/Stretch) and will support
newer instance types.

The submission must be reviewed by Marketplace staff at AWS. This can
take anywhere from a few hours to a few days. If you're using the
current Marketplace listings, you should receive an email notification
from AWS when the listing is updated.

noah





Re: Cloud Image Finder Prototype

2019-05-20 Thread Noah Meyerhans
On Mon, May 20, 2019 at 03:38:25PM -0300, Arthur Diniz wrote:
>The first thing we did was a Low Fidelity Prototype, this was
>just a draft that we based on to come up with the High Fidelity
>Prototype.

These look great!

>Also we think that is important that if you could tell us what feature do
>you  expect in an Image Finder that we could not leave it behind.

A couple of bits of feedback based on what I've seen of the prototypes:

1. It'd be good to use a static (bookmarkable) URL for the provider
details pages. If, for example, I'm an openstack user, I don't want to
have to click through several pages of general purpose information
(including the list of all the providers, etc) every time I want to look
up the latest openstack images. I want to bookmark a page that gives me
the latest openstack images.

2. For the managed cloud services, most users are going to use the
images already published by Debian to the specific cloud they're
interested in. Most people aren't going to download the raw images as
shown in the openstack example we currently have. So we'll need to think
about how we want to present the provider-specific details in a way
that'll be most familiar to somebody working with that provider's
services on a daily basis. That will likely differ somewhat based on the
cloud provider.

Very good looking start so far. I look forward to seeing more.

noah





Bug#929263: cloud.debian.org: /usr/sbin not in default $PATH

2019-05-20 Thread Noah Meyerhans
Control: severity -1 wishlist

> This is a historical convention, going back decades, that only the
> system administrators need to run the programs in /sbin and
> /usr/sbin.  So to avoid users getting confused when they might run
> those programs and get "permission denied", historically normal users
> won't have /sbin and /usr/sbin in their path.  However many system
> administrators will have their own custom dot files which do include
> those directories in their paths.
> 
> That assumption is perhaps less likely to be true for servers running
> in cloud VMs, but making things different for cloud VMs as
> compared to standard Debian configurations also has downsides in terms
> of causing greater confusion.  So my suggestion would be for you to
> simply have your own custom dotfiles which can set a PATH different
> from the default.

At this point, I think it'd be worth revisiting, at the project level,
the historical tradition of leaving the sbin directories out of non-root
paths. Setting aside all the single user desktop and laptop systems,
there are enough alternative ways to grant restricted root (file ACLs,
etc), and to run in alternate filesystem namespaces (e.g.  containers),
that the functional distinctions that led to the original directory
split are probably applicable in a minority of situations these days.

This isn't something that I feel strongly about, though. Anybody who
does should retitle this bug appropriately and reassign it to the
'general' pseudopackage, whereupon it can be discussed on debian-devel.
Otherwise it should get tagged wontfix, unless someone thinks this is an
appropriate change to introduce at the cloud image level (I would not
agree with this).

noah





Bug#929263: cloud.debian.org: /usr/sbin not in default $PATH

2019-05-20 Thread Noah Meyerhans
On Mon, May 20, 2019 at 11:26:00AM +0200, Jorge Barata González wrote:
>Vagrant image debian/stretch64 v9.6.0
>/usr/sbin is not included by default in $PATH
> 
>```
>vagrant@stretch:~$ service
>-bash: service: command not found
>vagrant@stretch:~$ /usr/sbin/service
>Usage: service < option > | --status-all | [ service_name [ command |
>--full-restart ] ]
>vagrant@stretch:~$ echo $PATH
>/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games
>```

That path is set from /etc/profile, which the vagrant images do not modify
from the default that Debian installs. /usr/sbin is not normally in the
default PATH on Debian.
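
If you just need the sbin tools in your own shell, something along these
lines in ~/.profile should do it without touching the system defaults (a
minimal sketch, assuming a Bourne-style login shell):

```
# ~/.profile: prepend the sbin directories for this user only;
# /etc/profile and the system-wide default are left untouched.
PATH="/usr/local/sbin:/usr/sbin:/sbin:$PATH"
export PATH
```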

If you want to discuss changing this project-wide, we could certainly do
so, but that would be quite a bit broader in scope than
cloud.debian.org.

noah



Amazon EC2: stretch AMIs updated to 9.9

2019-04-30 Thread Noah Meyerhans
Yesterday I published new stretch AMIs for Amazon EC2 for both arm64 and
amd64 architectures.  The AMIs refresh all package versions to those
included in Debian 9.9 (stretch), per the release announcement at
https://www.debian.org/News/2019/20190427

The AMI details, as usual, are available at
https://wiki.debian.org/Cloud/AmazonEC2Image/Stretch

The corresponding update in the AWS Marketplace is still pending. I
recommend using the AMIs listed on the above wiki, rather than using the
Marketplace, for up-to-date images.
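
For anyone launching from the command line, a quick sketch with the AWS CLI
(the AMI ID and key name below are placeholders; substitute the values from
the wiki page and your own account):

```
# Launch an instance from one of the stretch 9.9 AMIs listed on the wiki.
aws ec2 run-instances --region us-east-1 \
    --image-id ami-xxxxxxxxxxxxxxxxx \
    --instance-type t3.micro \
    --key-name my-key-pair
```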

noah



Re: Proposing to proceed with creating AWS and Azure accounts for Debian

2019-04-30 Thread Noah Meyerhans
On Tue, Apr 30, 2019 at 06:25:47PM +0100, nSed rickm wrote:
>Hi, my name is Sedrick. I recently joined this mailing list to get to know
>more about the Debian cloud team. I submitted a proposal for GSoC with
>Debian this year for the cloud image finder. I would like to catch up on
>all previous emails. I will greatly appreciate being directed to where I can
>find them so I can read them. Thanks.

Hi Sedrick. Please see https://lists.debian.org/debian-cloud/ for the
archived discussions from this list.

You may find the minutes from our most recent sprint particularly
interesting:
https://lists.debian.org/debian-cloud/2018/11/msg8.html



Re: Status of Debian accounts and legal agreements with cloud providers

2019-04-04 Thread Noah Meyerhans
On Thu, Apr 04, 2019 at 07:51:22PM +0100, Marcin Kulisz wrote:
> > > > Let me know if this stalls;  I can put you in touch with someone on
> > > > the Azure team.
> > 
> > The Azure team asked some time ago for an email address to attach as
> > owner to those accounts.  Also we need that to attach AWS accounts.  Do
> > we have this address in the meantime?
> 
> I don't think so.

Some time ago (following the 2017 cloud sprint, IIRC), we created
aws-ad...@debian.org. See #7163 in rt.debian.org, if you have access to
that.

This was created with a less well-developed understanding of our account
needs than what we came up with at the 2018 sprint, but it is not
currently being used for anything and we can easily repurpose it for the
new AWS accounts.

Per the original request, the membership should be:
noahm
jeb
93sam
kula

We should probably add the cloud delegates and (maybe?) an SPI
representative to it if we're going to use it.

Messages to that alias are being archived at master.d.o:~debian

noah



Re: Cloud-init datasource ordering

2019-04-03 Thread Noah Meyerhans
On Thu, Apr 04, 2019 at 09:27:11AM +1300, Andrew Ruthven wrote:
> > > Would it be possible to move the Ec2 datasource up the list like "[
> > > NoCloud, AltCloud, ConfigDrive, OpenStack, Ec2, CloudStack,
> > > ${DIGITAL_OCEAN_SOURCE} MAAS, OVF, GCE, None ]"? This also seems to
> > > be in line with expectations on how the datasources have been
> > > sorted before dc1abbe1.
> > > 
> > If we do that, then OpenStack people are going to wait 120 seconds.
> > So,
> > bad idea...
> 
> Hmm, this situation is likely going to just get worse as more
> datasources are added.
> 
> Can we reduce the timeout?
> 
> Try datasources in parallel and first one that responds wins?
> 
> Is it worth having multiple images with the order set appropriately? 

Yes, I think the expectation is that you should be overriding the
default datasource list to specify only the source(s) relevant to your
particular deployment platform. The list can be specified in the
debconf cloud-init/datasources value.

For example, we specify an appropriate value for our Amazon EC2 images
at 
https://salsa.debian.org/cloud-team/debian-cloud-images/blob/master/config_space/debconf/EC2
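
If you need to do the same on your own instances or local image builds, a
rough sketch of one way to pin it (assuming the standard debconf tooling;
adjust the datasource names for your platform):

```
# Preseed the datasource list and let cloud-init's postinst regenerate its config.
echo 'cloud-init cloud-init/datasources multiselect Ec2, None' | debconf-set-selections
dpkg-reconfigure --frontend noninteractive cloud-init

# A drop-in such as /etc/cloud/cloud.cfg.d/90_datasources.cfg containing
# "datasource_list: [ Ec2, None ]" should achieve a similar result.
```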

noah



Re: Status update at DebConf 19

2019-03-29 Thread Noah Meyerhans
On Fri, Mar 29, 2019 at 03:09:50PM -0300, Lucas Kanashiro wrote:
> I think DebConf is the perfect place to share with the Debian community
> the work we have been doing and collect feedback :)

+1, this is a great idea. There should also be a BoF for people
interested in a more interactive session. We've done such things before.

> BTW do you intend to attend DebConf 19? If you have any doubts about
> this year DebConf I can help since I am part of the local team.

I don't know if I'll be able to make it. I have a bunch of other
international travel between now and then and may be unable to schedule
another trip. I hope to resolve the question soon, either way. I should
be available virtually if I'm not able to make it in person.

noah



Bug#925530: cloud.debian.org: Debian docker images pointing to github for bug tracking

2019-03-26 Thread Noah Meyerhans
On Tue, Mar 26, 2019 at 12:25:12PM +0100, Lucas Nussbaum wrote:
> On https://hub.docker.com/_/debian, there's:
> 
> > Where to file issues:
> > https://github.com/debuerreotype/docker-debian-artifacts/issues
> 
> Are those official images? I'm surprised by official Debian images
> pointing to a non-free web service. I would expect the BTS to be used
> for bug tracking.

Well, Docker Hub itself is a non-free service. Further, there are other
official Debian components (packages in main) that use GitHub for their
primary work coordination, so this is not without precedent.

> Also, there's:
> > Where to get help:
> > the Docker Community Forums, the Docker Community Slack, or Stack Overflow

Those are Docker's official help channels.

With all that said, the Debian Docker images aren't covered under the
cloud.debian.org pseudopackage, so I guess you'll need to follow up with
tianon or paultag... Or open an issue on GitHub. ;)

noah




