Re: Call for Testing: Stretch Cloud Images on AWS

2017-02-02 Thread Noah Meyerhans
On Thu, Feb 02, 2017 at 04:29:11PM +0800, gustavo panizzo wrote:
> I saw your blog post (which I've attached to this email), then the next
> time I needed an EC2 instance I tested the images on a non-IPv6 region
> (SG) and an IPv6 enabled VPC
> 
> overall the image looks fine, no extraneous things, sysctl is clean,
> etc. great job. :)

Interesting that you bring up sysctl. I consider it a bug that we're
currently running with an unmodified set of sysctl variables. Apparently
you disagree. My reasoning is that the kernel defaults are intended to
be very broadly applicable, but the cloud AMI is a more specific use
case and it should be possible to provide a more appropriate set of
defaults for various settings. We can tune our sysctl settings towards
server optimizations because we know we're not running on a laptop or a
mobile device.

> could you move the configuration for eth1 to eth8 to
> /etc/network/interfaces.d/? also can you _please_ move the helper out
> of /usr/local?

I think moving most interface configs to interfaces.d is reasonable and
will do that. I had considered it previously but did not, mostly out of
laziness.

Where would you prefer the interfaces helper script live, if not
/usr/local? Because it does not belong to a package, I don't think it
belongs in a first-level /usr subdirectory. I suppose ideally it will
get added to a package, but I'm not sure it's worth packaging on its
own. Maybe it could be added to ifupdown?

> - cloud-init complains when net-tools is not installed (it appears to
> work anyway) bug #853926

It's probably best to explicitly install net-tools, at least until
cloud-init is updated.

> - I'd like to see all locales installed (but I understand that is a topic
> for another discussion)

Thanks for the suggestion. One thing that other distros have done is
provide a "minimal" AMI that contains the most basic set of tools needed
to function (i.e. not much more than a bare debootstrap install +
sshd and cloud-init and their dependencies), and a full-featured
variant. If we were to do that, maybe it'd make sense to provide locales
in the featureful variant. OTOH, it should be pretty straightforward for
a user to configure desired locales via user-data provided to cloud-init
at launch time, so this may not be necessary.
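
For illustration, a minimal cloud-config user-data snippet along these
lines should do the job (the locale and package name here are just
examples, not a recommendation):

#cloud-config
# 'locale' and 'packages' are standard cloud-init cloud-config keys.
locale: en_GB.UTF-8
packages:
  - locales-all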

> I know my complaints are mostly aesthetic, but they are part of the user
> experience the first time someone logs in to an instance.

Noted. Thank you for your feedback.

noah





Re: Call for Testing: Stretch Cloud Images on AWS

2017-02-24 Thread Noah Meyerhans
On Fri, Feb 24, 2017 at 10:19:53AM +0800, gustavo panizzo wrote:
> I found another issue on the image
> 
> $ cat /etc/mailname
> ip-10-0-0-64.us-west-2.compute.internal

Nice catch, thank you!

> and this is not a bug, but a difference of opinion
> 
> $ grep security /etc/apt/sources.list
> deb http://security.debian.org/ jessie/updates main
> 
> instead you can use
> deb http://cloudfront.debian.net/debian-security jessie/updates main
> 
> traffic has no cost :)

Also a good idea. I want to double check that this mirror gets pushes
from security-master. Otherwise I agree that we should use it.

> let me know if you prefer emails or bug reports

Bug reports are easier to keep track of, so I'd prefer that in the
future.

Thank you for the feedback!

noah





Re: Cloud Team blog or ... what?

2017-02-14 Thread Noah Meyerhans
On Tue, Feb 14, 2017 at 12:32:37AM +0100, Laura Arjona Reina wrote:
> >a few days ago Noah blogged about new Stretch AMIs for testing and was
> >wondering how we are going to let people know about changes etc.
> >(https://noah.meyerhans.us/blog/2017/02/10/using-fai-to-customize-and-build-your-own-cloud-images/)
> 
> What a coincidence! Yesterday I added a micronews about that blog post, but 
> forgot to rebuild the site to actually publish it. I rebuilt the site some 
> minutes ago, and then this mail arrived in the press@ mailbox :)
> 
> Of course you can contribute the micronews yourselves; we may miss some
> news or not be sure which item to choose from different sources, so any
> help with that is welcome.

This is great, thank you! I'll be sure that I get future updates
published to micronews.

Another thing I've been considering is that we intend the official cloud
images to be ready and released at the same time as the primary stretch
announcement. (Well, I intend this anyway, and am assuming that the rest
of the cloud team has no major objections.) Ideally the release
announcement will at least make mention of the availability of cloud
images for whichever platforms are ready at release time (AWS, MS Azure,
Google, Docker, etc). What's the best way to ensure that this happens?
Should we try to come up with the appropriate wording and pass it along
to press@? Is that going to reach the right people?

noah





Bug#693945: raising severity

2016-11-04 Thread Noah Meyerhans
Control: severity 693945 important
Control: severity 831848 important

Per discussion among the cloud team, I'm raising these bugs to
important. We want to be sure we release stretch with at least basic
support for the cloud.debian.org pseudopackage.

noah





Re: my progress

2016-11-06 Thread Noah Meyerhans
On Fri, Nov 04, 2016 at 09:03:39PM -0400, Sam Hartman wrote:
> I pushed to git://git.debian.org/cloud/fai-cloud-images.git.

I've got a FAI config targeting jessie on EC2. It seems to work well, and
I'm now at the point of tweaking the package lists and
configuration to match JEB's images as closely as possible. It shouldn't
be hard to merge my work into Sam's repo, but I haven't done it yet.

I've introduced two classes:

EC2: Contains EC2-specific packages, debconf settings, etc

DEVEL: Installs a number of packages useful in a full-featured Debian
environment, such as git, devscripts, build-essential, etc. The
motivation is that we could generate "minimal" images simply by omitting
this class.

My workflow has largely involved local development, with testing taking
place in EC2. In reality, though, there's only a single step that needs to
be performed on an actual instance: 'dd'-ing the generated image to an
attached EBS volume. A possible "production" workflow might be:

1. Run 'fai-diskimage' locally on a Debian-managed host.
2. Perform automated analysis of FAI logs for sanity tests.
3. Mount generated image locally and perform filesystem-level tests.
4. Launch an EC2 instance and attach an EBS volume to it as sdb.
5. scp the generated image to the EC2 instance and 'dd' to sdb.
6. Snapshot the EBS volume and register it as an AMI.
7. Perform any desired AMI tests, such as launching it with different
user-data configuration, etc.
8. Migrate the AMI to different regions, mark it public, register it with
the marketplace, etc.

The 'dd' component of step 5 is the only one that actually needs to run
on an EC2 instance.
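
For illustration, steps 4 and 5 might look roughly like the following with
the AWS CLI; the instance/volume IDs, key name, and addresses are
placeholders, and a real workflow would capture the IDs programmatically:

# 4. Launch a worker instance and attach a scratch EBS volume as sdb.
aws ec2 run-instances --image-id ami-xxxxxxxx --instance-type t2.micro \
    --key-name my-key
aws ec2 create-volume --size 8 --availability-zone us-west-2a
aws ec2 attach-volume --volume-id vol-xxxxxxxx --instance-id i-xxxxxxxx \
    --device /dev/sdb
# 5. Copy the generated image to the instance and write it to the volume.
scp disk.raw admin@<instance-address>:
ssh admin@<instance-address> sudo dd if=disk.raw of=/dev/xvdb bs=4M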

I'm sure JEB already has infrastructure for most of these steps, and we
should continue using that where appropriate.

noah





Re: AWS build workflow (was Re: my progress)

2016-11-07 Thread Noah Meyerhans
On Mon, Nov 07, 2016 at 08:23:10AM +, Marcin Kulisz wrote:
> That's true, but there is a bit which makes me quite uncomfortable; to be
> precise, it's that to do all this stuff from within Debian infra we need to
> keep AWS IAM keys on it with permissions for spinning instances up and down,
> etc.

Yes. The keys would need to be associated with a role granted access to
the following API calls:

* Run instance
* Describe instance
* Create volume
* Attach volume
* Create snapshot
* Describe snapshot
* Register AMI
* Terminate instance
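
In IAM policy terms, that's roughly the following (a sketch only; resource
scoping would need to be tightened for real use):

{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Action": [
      "ec2:RunInstances", "ec2:DescribeInstances",
      "ec2:CreateVolume", "ec2:AttachVolume",
      "ec2:CreateSnapshot", "ec2:DescribeSnapshots",
      "ec2:RegisterImage", "ec2:TerminateInstances"
    ],
    "Resource": "*"
  }]
}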

I'm not sure what facilities are provided on debian.org machines for
managing access to sensitive material such as IAM credentials in an
automated way.

> From my conversation with JEB, a kind of vision emerged: we could have a
> combination of API Gateway and Lambda listening on the API endpoint, and
> those would spin up an instance with the Pettersson ssh key (public part,
> of course) and a specific IAM role on it to allow doing the dd and all the
> AWS-related dance. Once the whole process is done, it'll just destroy the
> AWS instance and wait for the next build.
> Clean and neat use of "the cloud", I'd say.

My recollection from the sprint is that we agreed that we'd like to
build the images on official Debian infrastructure to the extent
possible, which is why I proposed that workflow. However, I agree that
there are alternatives that make use of some of the other AWS services
such as Lambda, KMS, etc.

noah





Hash sum mismatches

2016-11-04 Thread Noah Meyerhans
James, you said you fixed this! ;P

admin@ip-10-0-0-64:/var/log/fai/cloud3/last$ sudo apt update
Get:1 http://cloudfront.debian.net sid InRelease [219 kB]
Ign http://cloudfront.debian.net jessie InRelease
Get:2 http://cloudfront.debian.net jessie-updates InRelease [145 kB]
Get:3 http://cloudfront.debian.net jessie-backports InRelease [166 kB]
Hit http://cloudfront.debian.net jessie Release.gpg
Hit http://cloudfront.debian.net jessie Release 
Get:4 http://security.debian.org jessie/updates InRelease [63.1 kB]
Get:5 http://security.debian.org jessie/updates/main amd64 Packages [314 kB]
Get:6 http://security.debian.org jessie/updates/contrib amd64 Packages [2,506 B]
Get:7 http://cloudfront.debian.net sid/contrib Translation-en [49.2 kB]
Get:8 http://security.debian.org jessie/updates/non-free amd64 Packages [14 B]
Get:9 http://security.debian.org jessie/updates/contrib Translation-en [1,211 
B]  
Get:10 http://security.debian.org jessie/updates/main Translation-en [168 kB]
Get:11 http://security.debian.org jessie/updates/non-free Translation-en [14 B]
Get:12 http://cloudfront.debian.net sid/main Translation-en [5,553 kB]
Get:13 http://cloudfront.debian.net sid/non-free Translation-en [83.4 kB]
Get:14 http://cloudfront.debian.net jessie-updates/main amd64 
Packages/DiffIndex [5,932 B]
Get:15 http://cloudfront.debian.net jessie-updates/contrib amd64 Packages [32 B]
Get:16 http://cloudfront.debian.net jessie-updates/non-free amd64 
Packages/DiffIndex [736 B]
Get:17 http://cloudfront.debian.net jessie-updates/contrib Translation-en [14 B]
Get:18 http://cloudfront.debian.net jessie-updates/main 
Translation-en/DiffIndex [2,704 B]
Get:19 http://cloudfront.debian.net jessie-updates/non-free 
Translation-en/DiffIndex [736 B]
Get:20 http://cloudfront.debian.net jessie-backports/main Sources/DiffIndex 
[27.8 kB]
Get:21 http://cloudfront.debian.net jessie-backports/main amd64 
Packages/DiffIndex [27.8 kB]
Get:22 http://cloudfront.debian.net jessie-backports/main 
Translation-en/DiffIndex [27.8 kB]
Hit http://cloudfront.debian.net jessie/main amd64 Packages
Hit http://cloudfront.debian.net jessie/contrib amd64 Packages
Hit http://cloudfront.debian.net jessie/non-free amd64 Packages
Hit http://cloudfront.debian.net jessie/contrib Translation-en
Hit http://cloudfront.debian.net jessie/main Translation-en
Hit http://cloudfront.debian.net jessie/non-free Translation-en
Get:23 http://cloudfront.debian.net sid/main amd64 Packages [7,317 kB]
Get:24 http://cloudfront.debian.net sid/contrib amd64 Packages [56.0 kB]
Get:25 http://cloudfront.debian.net sid/non-free amd64 Packages [82.9 kB] 
Get:26 http://cloudfront.debian.net jessie-backports/main 
2016-11-04-0227.52.pdiff [294 B]
Get:27 http://cloudfront.debian.net jessie-backports/main 
2016-11-04-0827.34.pdiff [1,461 B]
Get:28 http://cloudfront.debian.net jessie-backports/main 
2016-11-04-1427.43.pdiff [344 B]
Get:29 http://cloudfront.debian.net jessie-backports/main 
2016-11-04-2028.25.pdiff [1,263 B]
Get:30 http://cloudfront.debian.net jessie-backports/main 
2016-11-04-2028.25.pdiff [1,263 B]
Get:31 http://cloudfront.debian.net jessie-backports/main amd64 
2016-11-04-0227.52.pdiff [689 B]
Get:32 http://cloudfront.debian.net jessie-backports/main amd64 
2016-11-04-0827.34.pdiff [1,536 B]
Get:33 http://cloudfront.debian.net jessie-backports/main amd64 
2016-11-04-1427.43.pdiff [245 B]
Get:34 http://cloudfront.debian.net jessie-backports/main amd64 
2016-11-04-2028.25.pdiff [226 B]
Get:35 http://cloudfront.debian.net jessie-backports/main amd64 
2016-11-04-2028.25.pdiff [226 B]
Fetched 14.3 MB in 7s (1,868 kB/s)
W: Failed to fetch 
http://cloudfront.debian.net/debian/dists/sid/non-free/binary-amd64/Packages  
Hash Sum mismatch
E: Some index files failed to download. They have been ignored, or old ones 
used instead.





Re: my progress

2016-11-12 Thread Noah Meyerhans
And I just pushed some changes to introduce basic stretch support via a
STRETCH class. The following commandline resulted in a working image for
EC2:

fai-diskimage -u stretch8g -S8G -cDEBIAN,STRETCH,AMD64,GRUB_PC,DEVEL,CLOUD,EC2 
~/disk.raw

I haven't done anything to optimize the configuration for stretch, but
the image does boot and the resulting instance properly sets up login
access via cloud-init.





Re: my progress

2016-11-11 Thread Noah Meyerhans
On Fri, Nov 04, 2016 at 09:03:39PM -0400, Sam Hartman wrote:
> I pushed to git://git.debian.org/cloud/fai-cloud-images.git.

It looks like you're the only person with write access to this repo. It
seems like changing group ownership to 'scm_cloud' and granting group
write permission is the right fix.

zobel, are you able to help with this?

noah






Re: my progress

2016-11-13 Thread Noah Meyerhans
On Sun, Nov 13, 2016 at 06:17:33AM -0500, Sam Hartman wrote:
> Noah> I've modified the class/DEBIAN.var file such that the default
> Noah> behavior is to generate images for jessie. We can add a
> Noah> STRETCH class in order to generate images for testing. I'd
> Noah> rather use "stable", but it's preferable to use the release
> Noah> codename in many places and there's no easy way to resolve
> Noah> stable to the current codename AFAIK. Maybe there should be?
> 
> I thought we agreed the fai efforts would be for stretch and we would
> not be converting our existing jessie work.
> I don't mind having jessie support if it's easy, but for example if we
> run into situations where  it adds extra code to work around bugs fixed
> in stretch but present in jessie, I'd like to push back.

Yes, I think you're right.

I based my work on jessie initially because we have JEB's high quality
jessie images to use as a baseline for comparison, and I found that
useful.

In practice, activating a JESSIE or STRETCH class still needs to happen
via the fai-diskimage commandline anyway, and these classes override the
release variable, so defining a default value for that variable in
class/DEBIAN.var was not actually useful. My inclination is to require
the user to activate a specific release class, and for FAI to abort if
one isn't provided.

noah






Re: my progress

2016-11-11 Thread Noah Meyerhans
On Sun, Nov 06, 2016 at 11:38:33AM -0800, Noah Meyerhans wrote:
> > I pushed to git://git.debian.org/cloud/fai-cloud-images.git.
> 
> I've got a FAI config targeting jessie on EC2. It seems to work well and
> at this point I'm at the point of tweaking the packages lists and
> configuration to match JEB's images as closely as possible. It shouldn't
> be hard to merge my work into Sam's repo, but I haven't done it yet.

Ok, I've just pushed to fai-cloud-images. Because my work started in a
different repo, it was difficult to preserve history, so I ended up
pushing everything in a single commit.

My volume-to-ami script is included at the top level. It's certainly not
the right place for it, but there isn't an obvious right place. Maybe
create a bin directory?

In an effort to reduce repetition, I've introduced an "apt_cdn" variable
that can be set in a cloud-specific class. This lets us generate
sources.list files based on templates that refer to mirrors running in
the local cloud infrastructure, e.g. cloudfront for AWS.
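
Very roughly, the idea looks something like this (the file names and the
substitution mechanism shown here are illustrative, not necessarily exactly
what's in the repo):

# class/EC2.var: point apt at the AWS-local CDN mirror
apt_cdn="http://cloudfront.debian.net"

# a customization script then expands the variable into the template:
sed "s,@APT_CDN@,$apt_cdn,g" \
    "$FAI/files/etc/apt/sources.list.template" > "$target/etc/apt/sources.list"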

I've modified the class/DEBIAN.var file such that the default behavior
is to generate images for jessie. We can add a STRETCH class in order to
generate images for testing. I'd rather use "stable", but it's
preferable to use the release codename in many places and there's no
easy way to resolve stable to the current codename AFAIK. Maybe there
should be?

I've generated ami-113e9d71 based on the following command line:
fai-diskimage -u cloud3 -S3G 
-cDEBIAN,JESSIE,JESSIE64,AMD64,BACKPORTS,GRUB_PC,DEVEL,CLOUD,EC2

That AMI is marked public in case anybody wants to try it as is, though
I may delete it at some point.

cloud-init works, along with root FS resize. Console output is logged to
the serial port and is available via the get-console-output API. The
'admin' user is created with the ssh key given at boot time. pstree
shows:

systemd-+-agetty
|-agetty
|-atd
|-cron
|-dbus-daemon
|-dhclient
|-exim4
|-rsyslogd-+-{in:imklog}
|  |-{in:imuxsock}
|  `-{rs:main Q:Reg}
|-sshd---sshd---sshd---bash---pstree
|-systemd-journal
|-systemd-logind
`-systemd-udevd

Systemd-analyze critical-chain ssh.service shows that, between kernel
and userspace, it takes 40s before the instance is ready for use, so
it'd be nice to optimize this:

ssh.service @36.153s
└─cloud-init.service @27.088s +9.040s
  └─cloud-init-local.service @12.938s +14.136s
└─basic.target @12.938s
  └─timers.target @12.938s
└─systemd-tmpfiles-clean.timer @12.938s
  └─sysinit.target @12.937s
└─networking.service @6.533s +6.403s
  └─local-fs.target @6.532s
└─mdadm-raid.service @6.112s +419ms
  └─systemd-udevd.service @6.099s +3ms
└─systemd-tmpfiles-setup-dev.service @5.353s +745ms
  └─kmod-static-nodes.service @4.642s +710ms
└─system.slice @4.640s
  └─-.slice @4.640s


These values compare favorably to our current AWS AMIs, although there
are some notable differences that should be addressed. In particular, my
image runs exim4 and atd, while the marketplace AMI does not. We
probably should stick with the marketplace AMI's behavior for
consistency. My image fixes 785457 by not running extraneous gettys on
tty1-6.

My AMI has 550 packages installed, while the marketplace AMI has only
318. I have not compared image generation when performed without the
DEVEL class being enabled. I suspect it'll still have more packages than
marketplace, but not by as wide a margin.

noah





Bug#846583: cloud.debian.org: AWS Image should enable DHCPv6 client

2016-12-13 Thread Noah Meyerhans
On Sat, Dec 10, 2016 at 08:57:22PM +0100, Bernhard Schmidt wrote:
> I don't think this will ever be fixed with ifupdown. I think
> systemd-networkd and NetworkManager do the right thing here, but I have
> never had a look at either for maintaining a _server_. So I will not
> propose switching to those.

I've put together a workaround for ifupdown that does the right thing
for both dual-stack and ipv4-only instances:

https://anonscm.debian.org/cgit/cloud/fai-cloud-images.git/commit/?h=dhcpv6=ff79069df0eb08634f52a355e5c578c10532479c

This change isn't merged into the master branch in the fai-cloud-images
repo, and I'd like others on the cloud team to review it and see if
there's anything I'm missing or that could be improved. Of course, it
should go without saying that such a hack is unfortunate, and it'd be
preferable if ifupdown did the right thing, but as you say, it probably
never will.

Something similar should work for the jessie images, but I'm not
familiar with how they're generated. jessie doesn't have
/lib/ifupdown/wait-for-ll6.sh, so that functionality may need to be
incorporated into a helper script similar to the one in my branch.

The key is that /etc/network/interfaces is populated with entries of the
form:

iface eth0 inet dhcp
iface eth0 inet6 manual
  up /usr/local/sbin/inet6-ifup-helper
  down /usr/local/sbin/inet6-ifup-helper

The inet6-ifup-helper takes care of starting and stopping dhclient -6 on
the appropriate interfaces. It should only exit 0, so it shouldn't
interfere with ifupdown's operation.
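
In simplified form, the helper behaves something like this (a sketch of the
behavior, not the actual script; ifupdown exports IFACE and MODE to up/down
hooks):

#!/bin/sh
case "$MODE" in
    start) dhclient -6 -pf "/run/dhclient6.$IFACE.pid" "$IFACE" || true ;;
    stop)  dhclient -6 -r -pf "/run/dhclient6.$IFACE.pid" "$IFACE" || true ;;
esac
exit 0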

I've tested this configuration on the primary interface and on secondary
interfaces on instances in both dual-stack and v4-only networks and it
works as expected.

I don't really like the idea of installing actual code on AMIs from the
fai-cloud-images repo, but I'm not really interested in packaging
inet6-ifup-helper either.

I've created public AMIs ami-290ca649 (us-west-2) and ami-95a7fdf0
(us-east-2) based on these changes if anyone would like to perform
additional testing...

noah





Re: Bug#846583: cloud.debian.org: AWS Image should enable DHCPv6 client

2016-12-14 Thread Noah Meyerhans
On Wed, Dec 14, 2016 at 08:17:28AM +0100, Thomas Lange wrote:
> I wonder why you need to source /usr/lib/fai/subroutines for importing
> the ifclass subroutine. If your scripts are bash scripts, this
> function should be already available.

Hm. I have no idea why I thought that would be necessary. Confirmed that
it's not, so I've removed that.

noah





Re: Packaging AWS Agents for Debian

2016-12-05 Thread Noah Meyerhans
On Mon, Dec 05, 2016 at 06:02:12PM +0800, gustavo panizzo (gfa) wrote:
> While looking at how I can improve the current status of Debian support at
> $DAYJOB, I realized that the AWS agents are not packaged, even though their
> license allows it
> 
> I wonder if packaging them, on main, would make Debian supported by AWS?
> 
> When I say supported by AWS, I mean Debian having the same status as Amazon 
> Linux, 
> RH, and Ubuntu in docs like this
> 
> http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/sysman-install-ssm-agent.html
> http://docs.aws.amazon.com/codedeploy/latest/userguide/how-to-run-agent-install.html

Note that for each of the supported OSes referenced in the above docs,
the agent is packaged and distributed by AWS directly, and is not part
of the distribution. So being present in the distro is not a requirement
for support. I will follow up with contacts at AWS and see if I can
learn more details about what makes a distro eligible for support. I
suspect that demand from AWS customers is a large part of it.

Regarding the list of specific agents you provided, yes, each of them is
potentially a candidate for inclusion in Debian. It's simply a matter of
somebody being sufficiently motivated to do that work...

> Maybe there are other agents out there that I don't know about.

Amazon ECS Init should also be packaged.
https://github.com/aws/amazon-ecs-init

> I'm *really* interested in feedback from Amazon folks here; if Debian 
> doesn't
> get the supported status by AWS, I won't have the time to own this
> effort (I don't know ruby so I can't help with the CodeDeploy agent
> anyway)

I suspect that there are two things that need to happen for AWS to
officially consider Debian supported in their docs, etc:

1. Customers have to be asking for official support. If there's not
sufficient customer demand, it's probably not even on AWS's radar. I
don't know what would constitute "sufficient".

2. There may need to be a formal partnership agreement between the distro
and AWS. I'm not sure, and I don't know what the terms of such an agreement
would be, so I have no idea whether this would be an option for Debian.

Obligatory Disclaimer: I work for AWS, but I am not involved in
determining what is supported by them. I'm not speaking on their behalf
here.

noah





Re: Debian Stretch on GCE

2017-03-29 Thread Noah Meyerhans
On Wed, Mar 29, 2017 at 10:18:36AM -0700, Zach Marano wrote:
>Yeah I guess I'm asking what the official Debian cloud image build process
>is looking like (or if it's been started at all).

The current FAI config has at least some support for GCE, but
unfortunately I don't think anybody has done any meaningful work on it
since November. I've done a fair bit of work on generic image content
and some AWS details, and am currently working on more complete
automation around the build process. This same work will need to be
adapted to GCE/Azure/etc, but that really shouldn't be very difficult
for somebody familiar with the services and their APIs. 

I've attached a short writeup of the tool that I'm currently thinking
about. My intent is to build it in such a way that support for
additional platforms can be added relatively easily, but if nobody else
is working on these platforms I may cut some corners initially.

Alternatively, if someone wants to start hacking on GCE support in the
FAI configs without waiting for that whole tool to exist, you can start
by reading my blog post on using FAI to generate AWS images and adapting
it for GCE. Simply replacing occurrences of EC2 with GCE in the FAI
class lists will likely get you a long way toward generating basic
images. 

https://noah.meyerhans.us/blog/2017/02/10/using-fai-to-customize-and-build-your-own-cloud-images/
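
With a GCE class in place, the build invocation should look much like the
EC2 ones shown earlier, e.g. something along these lines (a sketch; the
class list simply mirrors the EC2 commands from previous mails):

fai-diskimage -u stretch-gce -S8G \
    -cDEBIAN,STRETCH,AMD64,GRUB_PC,CLOUD,GCE ~/disk.raw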

You'll likely want to implement a GCE version of the AMI registration
script if you do that:

https://anonscm.debian.org/cgit/cloud/fai-cloud-images.git/tree/volume-to-ami.sh

noah

# Debian FAI Cloud Image Builder

## Synopsis

./generate-image CONFIGFILE

## Description

This tool is intended to be used to generate official and unofficial
(Derivative, Custom, etc) Debian AMIs for use on AWS.

In order to accomplish this, the tool performs the following steps:

1. Launch an EC2 instance with the following properties:
  * Instance has a public IP mapped to it.
  * Instance's security group permits ssh access.
  * Instance is configured with a cloud-init userdata script (see below).
  * Instance has a secondary EBS volume attached.
2. Run FAI to generate a disk image.
3. 'dd' the disk image to the attached EBS volume.
4. Snapshot the EBS volume.
5. Register the snapshot as an AMI.
6. Perform validation steps on the AMI.
7. Publish the AMI to supported AWS regions.
8. Mark the AMI as public.

The workflow may split AMI creation, validation, and publication into
discrete steps if we want to require manual confirmation before proceeding.

## Configuration

Configuration of this tool is performed via configuration files; there are
no command-line options. Configuration files use the YAML syntax and
consist of three sections:

1. AWS Global configuration:
  * What AWS profile to use?
  * What region to run in?
2. Instance configuration:
  * Security group and subnet.
  * ssh keypair name
  * Instance type
  * AMI ID
3. Image configuration:
  * Git repository and commit ID for FAI configuration.
  * FAI class list.
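
An example configuration might look like this (purely illustrative; the key
names are hypothetical and not yet defined by the tool):

aws:
  profile: debian-images
  region: us-west-2
instance:
  security_group: sg-xxxxxxxx
  subnet: subnet-xxxxxxxx
  keypair: image-builder
  type: t2.micro
  ami: ami-xxxxxxxx
image:
  fai_config_repo: https://anonscm.debian.org/cgit/cloud/fai-cloud-images.git
  fai_config_commit: HEAD
  fai_classes: DEBIAN,STRETCH,AMD64,GRUB_PC,CLOUD,EC2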

## See also

## Author

## Copyright


Re: IAM permissions adjustment on AWS

2017-08-10 Thread Noah Meyerhans
On Thu, Aug 10, 2017 at 08:28:44AM +0100, kuLa wrote:
> I've recently been fiddling a lot with permissions on the Debian AWS
> account, and it's been pointed out to me that it's worth considering
> updating the IAM settings a bit.
> 
> Having the above in mind, and given that DDs are already trusted enough :-),
> I'm thinking about giving full RO access to all DDs who have access to the
> AWS account.

Yes please. The current restrictions are very difficult to work with.
Broader RO access is a good start.

RO permissions aren't the only thing that needs updating, though. I recently
replaced my MFA device and found that I don't even have permission to update
my IAM
role's MFA settings. (Do I even have permission to change my own
password? I haven't tried yet.)

noah





Stretch AWS AMIs updated for stretch r1 release

2017-07-23 Thread Noah Meyerhans
I've updated the stretch AWS AMIs following today's stable point
release.

The AMIs are owned by AWS account ID 379101102735 and are named
debian-stretch-hvm-x86_64-gp2-2017-07-22-75922.

Regional AMI IDs are:
ap-northeast-1: ami-42769724
us-west-2: ami-52c7df2b
eu-west-2: ami-5c4b5a38
us-east-1: ami-5e203d48
ap-southeast-2: ami-5f7e613c
ap-southeast-1: ami-62ff6c01
ap-south-1: ami-6440380b
us-west-1: ami-7df8d11d
eu-west-1: ami-88e60ff1
sa-east-1: ami-8d3b4ce1
us-east-2: ami-987f5ffd
eu-central-1: ami-ea4fe285
ca-central-1: ami-ff2b949b
ap-northeast-2: ami-ff67b991

Availability of the AMIs via the AWS marketplace is still pending
completion of the publication workflow, which is in progress.

The release announcement for stretch r1 is at
https://www.debian.org/News/2017/20170722

These details are also on the wiki at
https://wiki.debian.org/Cloud/AmazonEC2Image/Stretch

noah





Bug#865965: InvalidBlockDeviceMapping error while creating a new ami based on stretch one

2017-06-26 Thread Noah Meyerhans
On Mon, Jun 26, 2017 at 09:55:31AM +0200, Xavier Lembo wrote:
>  Today I tried to use my working jessie script on the eu-west-1 stretch
>  ami (ami-e79f8781).
> 
>  My custom ami must have a specific size for the first partition, so I use
>  a specific block-device mapping.
> 
>  On the Stretch AMI, it fails with message:
> 
>  amazon-ebs: Error launching source instance: InvalidBlockDeviceMapping: The 
> device 'xvda' is used in more than one block-device mapping
>  ==> amazon-ebs:  status code: 400, request id: 
> ec5463c3-2498-4084-a1aa-825b24b07287
> 
>  I've tried to check ebs differences between the working jessie ami and this 
> one
> 
>  and the difference is in the naming of the block device:

Are you able to launch a standalone instance of the stretch AMI? I don't
think the issue is with the AMI's block-device mapping specifically.
What's the block-device mapping configuration you're trying to specify
with packer? My guess is that you're ending up specifying both "xvda"
and "/dev/xvda" with packer, which is causing the conflict because
they reference the same canonical device.

I've verified that the stretch AMI in eu-west-1 does launch correctly
with its current block-device mapping settings:

curl -s 169.254.169.254/latest/meta-data/placement/availability-zone ; echo
eu-west-1a
admin@ip-172-31-34-141:~$ curl -s 169.254.169.254/latest/meta-data/ami-id ; echo
ami-e79f8781
admin@ip-172-31-34-141:~$ curl -s 
169.254.169.254/latest/meta-data/block-device-mapping/ami/ ; echo
xvda
admin@ip-172-31-34-141:~$ curl -s 
169.254.169.254/latest/meta-data/block-device-mapping/root/ ; echo
xvda
admin@ip-172-31-34-141:~$ lsblk
NAMEMAJ:MIN RM SIZE RO TYPE MOUNTPOINT
xvda202:00   8G  0 disk 
└─xvda1 202:10   8G  0 part /

Thanks
noah



Re: Urgent: Cloud images for Stretch announcement

2017-06-15 Thread Noah Meyerhans
On Wed, Jun 14, 2017 at 12:09:26AM +0200, Ana Guerrero Lopez wrote:
> > In the current Debian Stretch announcement (still open for editing),
> > we mention only the OpenStack images. I was wondering about the availability
> > of other official images: GCE, AWS, Azure?
> > 
> > Please, see the current announcement at
> > https://anonscm.debian.org/cgit/publicity/announcements.git/tree/en/drafts/stretch-release.wml
> > line 161. I would like to add the other images and move it all to their
> > own paragraph.
> 
> Thanks for the quick replies Zach and Bastian.
> 
> In the end there's too little time to have everybody reply properly. I
> also doubt we're going to be able to squeeze all the relevant information
> into a small paragraph in the announcement, and I don't much like the idea
> of pointing to external sources in the stretch announcement.
> 
> New plan: publishing a blog post in bits.debian.org about Debian stretch
> images for the cloud on Tuesday 20 June.

Sounds good to me. I will have images built for AWS pretty much as soon
as "stable" points to stretch on the mirrors. The details will be
documented at https://wiki.debian.org/Cloud/AmazonEC2Image/Stretch as
with the previously released "beta" stretch images.

> From the Debian Cloud Sprint [1] and the wiki page [2], it looks like the
> idea of "official images" is still a work in progress. So for the
> purpose of this blog post I'll consider all images are unofficial
> and the information presented by the post will be merely to inform
> users how they can run Debian in their favorite cloud provider.

Fair enough. User experience should be about the same either way.
Details to be worked out are pretty much entirely internal, in terms of
various image generation workflows and where they run.

> I have created a pad https://pad.riseup.net/p/stretchcloudimages with
> some questions. I'd like you to add there all the relevant information
> and I will do my best to create a nice blog post.

I believe most of the details you're looking for are covered at
https://wiki.debian.org/Cloud/AmazonEC2Image/Stretch, except for support
and documentation details. In general, support is via
debian-cloud@lists.d.o, #debian-cloud, or the cloud.debian.org bts
pseudopackage. Documentation is on wiki.debian.org, though there's not a
lot to speak of.

Thanks
noah





Re: AMI storage permissions

2017-10-08 Thread Noah Meyerhans
On Mon, Sep 25, 2017 at 05:25:40PM +0100, Jacob Smith wrote:
> Would it be possible to make the snapshots, used by the AMIs on the
> 379101102735 account, public so that the AMI can be copied?

I've updated the AMI publication tools to mark the backing snapshots
public. I've published stretch 9.2 AMIs with the updated tooling, so
their snapshots should be public.

The current AMI details are listed at
https://wiki.debian.org/Cloud/AmazonEC2Image/Stretch

noah





Re: Building cloud images in sandbox VMs

2017-10-13 Thread Noah Meyerhans
On Fri, Oct 13, 2017 at 01:48:53PM +0200, Emmanuel Kasper wrote:
> >> > Building
> >> > 
> >>
> >> > any further. We will need to look into tools for making new VMs.
> >> I wonder what is meant by "making new VM".
> >> Do you mean creating the disk image for the VM, or starting the VM with
> >> a tool like virsh?
> > 
> > If I recall correctly this is about creating ephemeral vms (possibly from
> > template) on demand to use them as build machines for cloud images.
> 
> I had a look at various possible tools which could make that possible,
> here is a short summary.
> If people have more details, please share, not flame.
> 
> Background reason: you need root rights for most of the build tools, and
> the cduser on the build server is an unprivileged user.
> So we want to use sandbox VMs for the builds.

The proposed solutions all assume that the builder VM must reside on
hardware owned by Debian. I assert that this is not necessary, and that
a VM on a cloud platform is sufficient (for that cloud platform's
images, at the very least). Thus, my preferred solution for creating a
builder vm is in essence:

$ aws ec2 run-instances --image-id ami-foo \
  --user-data file://ec2-userdata.yaml

Where the ec2-userdata.yaml contains configuration for cloud-init
telling it how to set up and run FAI and ami-foo is the current public
stretch AMI on AWS. The resulting VM is completely disposable. Any
desired state, from logs to the entire disk image, can be preserved if
desired.
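
As a sketch, such a userdata file might contain something like the
following cloud-config (illustrative only, not the exact file I use; the
paths, class list, and image size are placeholders):

#cloud-config
packages:
  - fai-server
  - fai-setup-storage
  - git
runcmd:
  - git clone git://git.debian.org/cloud/fai-cloud-images.git /srv/fai/config
  - fai-diskimage -u stretch -S8G -cDEBIAN,STRETCH,AMD64,GRUB_PC,CLOUD,EC2 /tmp/disk.raw
  - dd if=/tmp/disk.raw of=/dev/xvdb bs=4M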

It should be straightforward to port this to other cloud platforms.

noah





Re: ideas for the sprint

2017-10-16 Thread Noah Meyerhans
On Sun, Oct 15, 2017 at 11:14:01PM -0600, Ben Howard wrote:
>Thank you for the idea, however, after looking at it, I see FAI as
>technology akin to KickStart, MAAS, AutoYast, etc. 

Ben, as you weren't at last year's cloud sprint, you likely missed that
we settled on using FAI as the cloud image generation tool then. I've
been using it since then to build the Debian images available in the AWS
Marketplace. (As far as I know, other clouds haven't yet adopted it, but
I think this will be an issue to fix during this sprint.) It does not
attempt to duplicate any of what cloud-init is doing, and in fact still
runs cloud-init as usual.

Adding support for additional clouds to our FAI configs should be very
nearly trivial. If they don't require any special configuration or
packages baked into the image, then there's really no configuration at
all. If they do require specific configuration (e.g. for EC2 we want to
bake the aws-cli and boto packages into the image) then that's easy to
do.
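
For example, per-cloud packages are just a file in package_config/ named
after the class; an EC2 one might look roughly like this (a sketch, not
necessarily identical to what's in the repo):

# package_config/EC2
PACKAGES install
awscli
python-boto
cloud-init
cloud-utils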

The FAI configs for building vagrant boxes and EC2 AMIs are at
https://anonscm.debian.org/cgit/cloud/fai-cloud-images.git/tree/

While snapshot-based image customization is, of course, supported, we
also explicitly wanted to support users who want to construct images
"from scratch" that derive from our configs. This is straightforward in
FAI, and I posted a blog post giving an example of how one might do it:

https://noah.meyerhans.us/blog/2017/02/10/using-fai-to-customize-and-build-your-own-cloud-images/

At last year's sprint, we demoed the various available image generation
tools. I suspect we'll do something similar again this year, so you'll
get to see the system in action.

noah





Re: Generating a cloud / VM kernel package

2017-08-27 Thread Noah Meyerhans
On Sat, Aug 26, 2017 at 05:18:45PM +0100, Ben Hutchings wrote:
> > Thomas, can you elaborate why you think this a good idea? Is this about
> > boot time of the kernel image? The thing I really do not want to have is
> > additional kernel source uploads to the archive for just those cloud
> > kernel images, but you already considered that a bad idea (from what I
> > read between your lines).
> 
> When the Google Cloud people talked to me about slow booting, it turned
> out that reconfiguring initramfs-tools to MODULES=dep made a big
> improvement.  That is likely to be a sensible configuration for most
> cloud images.

I'm not sure that'll work for us. The image generation is not generally
expected to occur on cloud instances (though in practice it certainly
may).

OTOH, the list of required modules may be small enough for us to
enumerate the ones we need for booting in /etc/initramfs-tools/modules.
I will look into this, and we'll see what it does to boot times.
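
If it pans out, the change would amount to something like this (the module
list is only a guess at what an EC2 HVM instance needs, and would have to
be verified per platform):

# /etc/initramfs-tools/conf.d/cloud
MODULES=list

# /etc/initramfs-tools/modules
xen_blkfront
xen_netfront
ena
nvme
ext4

# then rebuild the initramfs:
update-initramfs -u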

noah





Re: Generating a cloud / VM kernel package

2017-08-27 Thread Noah Meyerhans
On Mon, Aug 28, 2017 at 01:31:31AM +0100, Ben Hutchings wrote:
> > OTOH, the list of required modules may be small enough for us to
> > enumerate the ones we need for booting in /etc/initramfs-tools/modules.
> 
> ...and then you could use MODULES=list.  initramfs-tools will still
> follow module static dependencies in this case.
> 
> > I will look into this, and we'll see what it does to boot times.
> 
> Note that the saving will mainly be in time to load the initramfs -
> which on Google Compute Engine is done through BIOS disk services that
> have very low performance.  The mere presence of the unneeded modules
> in the initramfs won't cause them to be loaded into the kernel and
> shouldn't make much difference to the time taken to boot after this
> point.

On Amazon's HVM instance families, the initramfs is read from "local"
disk, which may be network-attached or actually local. I haven't
profiled load times in great depth, but my guess is that reading and
uncompressing the image would be the biggest contributors to the load
time. In my experimentation, uncompressing an 18 MB initramfs takes
roughly 500 ms of clock time when read from network storage. That's not
completely insignificant, but considering the fragility of MODULES=list
or MODULES=dep, I'm not sure it's the best place to look for
optimizations right now.

noah





Re: Generating a cloud / VM kernel package

2017-08-28 Thread Noah Meyerhans
On Sun, Aug 27, 2017 at 04:16:50PM +0200, Thomas Goirand wrote:
> Basically, the only thing that I want to see is a specific config for
> that kernel, nothing else. Otherwise, it's going to be too much
> maintenance work. Indeed, it should *not* be a different source upload,
> that's too much work as well. There also may be some optimization that
> we could do.
> 
> Also, I don't see this happening without a prior agreement from the
> kernel team (which means probably that Ben has to agree). On our side,
> we could prepare a list of kernel modules that we do *not* want.

You might consider looking at what Ubuntu did to their kernel.
https://insights.ubuntu.com/2017/04/05/ubuntu-on-aws-gets-serious-performance-boost-with-aws-tuned-kernel/
suggests that they did more than just disable some modules, but it's
light on details.

If we're able to come up with a specific list of proposed changes, we'll
probably be able to have a more fruitful conversation.

noah





Re: IAM permissions adjustment on AWS

2017-09-03 Thread Noah Meyerhans
On Sun, Sep 03, 2017 at 11:34:30PM +0200, Thomas Goirand wrote:
> BTW, how do I generate the @(#*$& image manifest? Uploading an image to
> amazon is such a pain ... :/

The manifest is created by ec2-bundle-vol. However, it sounds like
you're trying to generate what Amazon calls an "instance store backed
AMI", which probably isn't what you want. We don't even publish such
AMIs for the semi-official stretch cloud images. I suspect that that's
why you're having credential issues; you're making API calls that aren't
usually called by the ImageBuilders group members.

Instead, create an EBS-backed AMI, which uses network-attached EBS
storage as the root volume. In that case, the steps are:

dd your raw image to an EBS volume
Snapshot the EBS volume
Register the snapshot as an AMI

I use the script at [1] when generating the semi-official AWS images.
The only required parameter is a volume ID. It assumes that
you've already generated the raw image and written it to the volume with
dd.

The only API calls are CreateSnapshot[2] and RegisterImage[3]

1. 
https://anonscm.debian.org/cgit/cloud/fai-cloud-images.git/tree/volume-to-ami.sh
2. 
https://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_CreateSnapshot.html
3. https://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_RegisterImage.html
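
For reference, the equivalent AWS CLI invocations look roughly like this
(the volume ID and image name are placeholders):

snap=$(aws ec2 create-snapshot --volume-id vol-xxxxxxxx \
    --description "debian image" --query SnapshotId --output text)
aws ec2 wait snapshot-completed --snapshot-ids "$snap"
aws ec2 register-image --name my-debian-image --architecture x86_64 \
    --virtualization-type hvm --root-device-name /dev/xvda --ena-support \
    --block-device-mappings "DeviceName=/dev/xvda,Ebs={SnapshotId=$snap}"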





Re: Possibilities for a special Azure or cloud Linux package

2017-12-17 Thread Noah Meyerhans
On Fri, Dec 15, 2017 at 08:03:51PM +0100, Bastian Blank wrote:
> > > We at credativ are responsible for maintaining the Azure cloud images.
> > > We got asked by Microsoft to explore the possibilities of introducing a
> > > specialised Linux image for this platform into Debian.  The main
> > > enhancements we look at would be:
> > > - faster boot of the instance,
> > > - smaller memory footprint of the running kernel, and
> > > - new features.
> > 
> > However, if it is possible to create a single flavour that provides
> > those sorts of enhancements for multiple cloud platforms, I think that
> > would be worthwhile.
> 
> I have some initial findings for a kernel using a derived config.  I
> reduced the boot time by 5 seconds (from 30 to 25).  The installed size
> was reduced from 190MB to 50MB.
> 
> Microsoft published a patch set against 4.13 and 4.14
> https://github.com/Microsoft/azure-linux-kernel
> they would like to add.
> 
> Now the question is whether other cloud providers would like to follow such
> a path by using and caring for a specialised Linux image for these
> platforms.

I'd be interested in helping to support work for a cloud kernel and
verifying its functionality on EC2. I can't, however, make a lot of
promises about how much time I can commit to this effort.

In our previous thread on this topic[1] it was suggested[2] that a tuned
initramfs config might go a long way toward reducing boot times. It's
not an investigation to which I've been able to devote much time,
unfortunately, but I think we should pursue that path to completion
before we look at patching the kernel or providing custom builds. Has
this been done on any cloud platform yet?

noah

1. https://lists.debian.org/debian-cloud/2017/08/msg00025.html
2. https://lists.debian.org/debian-cloud/2017/08/msg00032.html




Re: Debian Stretch AMI on AWS Marketplace + Meltdown

2018-02-04 Thread Noah Meyerhans
On Sun, Feb 04, 2018 at 02:55:52PM +1100, Michael Schams wrote:
> However, the AWS scan tool rejects the AMI due to the following issue:
> 
> (quote) "Vulnerabilities detected - The following vulnerabilities were
> detected and must be addressed: CVE-2017-5754 [3]."

Unfortunately, only the AWS marketplace people know what their scan is
looking for here. You'll probably need to reach out to them for help.

Interesting that they're detecting problems here but they didn't have
any issues with the original AMI. It's probably just that they hadn't
yet implemented Meltdown checks when I registered mine.

noah





Re: AWS: a lot of entries in /etc/sudoers.d/90-cloud-init-users

2018-02-15 Thread Noah Meyerhans
On Thu, Feb 15, 2018 at 12:02:22PM +0200, Andrei Popenta wrote:
>I want to ask about an issue I noticed with my AWS AMI, that is created
>from the official Debian Jessie AMI.

Have you checked the stretch AMIs for the same issue?

>It seems like after each time I create a new AMI, entries for admin users
>are added to  /etc/sudoers.d/90-cloud-init-users.

You could start with a bug report against cloud-init, since that's the
package that owns this file. It's possible that the root cause is
elsewhere, or that the bug is fixed in more recent versions, but it's
reasonable to start with cloud-init.

noah





Re: Cloud Team on salsa.debian.org

2017-12-28 Thread Noah Meyerhans
On Thu, Dec 28, 2017 at 07:59:59AM +0100, Marcin Kulisz wrote:
> > Now, as salsa is in beta - should we start using it?
> 
> I think so, even if 'shit happens' we have copies of the repos locally and the
> team can be recreated, also atm I'm going to push to alioth and salsa at the
> same time.

Agreed. I've gone ahead and imported the fai-cloud-images repo from
alioth to gitlab. See
https://salsa.debian.org/cloud-team/fai-cloud-images

It looks fine to me, and I've updated all my git origins to reference
salsa instead of git.d.o.

I have not yet done so, but I believe we should mark the git.debian.org
repo read-only in order to reduce the potential for confusion.

noah






Re: Packaging terraform?

2018-06-21 Thread Noah Meyerhans
On Thu, Jun 21, 2018 at 10:03:32PM +0200, Thomas Goirand wrote:
> I have to admit I don't know a lot about proprietary apis. But as for
> OpenStack, APIs are quite stable, and (almost?) always backward
> compatible thanks to API micro-versions and auto-discoverability. In
> fact, I haven't found yet an example of something that broke API
> backward compatibility.
> 
> Do we see such breakage often in AWS / Azure / GCE? I'm amazed to read
> this, really. How come customers aren't complaining about this then?

No, AWS is extremely disciplined about maintaining backwards
compatibility. I imagine GCE and Azure are the same. The issue is that
new APIs are being added *all the time* by all the providers supported
by terraform. You could certainly package it, but IMO it's not really
worth including in stable because it would lag so far behind. Even
maintaining packages for unstable and stable-backports would amount to
running on a treadmill.

Consider the terraform AWS provider:
https://github.com/terraform-providers/terraform-provider-aws/releases
It has a release on a weekly basis. GCE and Azure are bi-weekly:
https://github.com/terraform-providers/terraform-provider-google/releases
https://github.com/terraform-providers/terraform-provider-azurerm/releases

I'm not at all opposed to having these packages easily available, but I
see it being a lot of work and a generally thankless job.

noah



Debian AWS status

2018-07-27 Thread Noah Meyerhans
Because I don't anticipate being able to make it to the cloud BoF at
Debconf, I'm sending this update here. Please feel free to discuss any
of the issues; just make sure to record the conversation in the
minutes so I can respond.

Current status
==

Stretch
---

Stretch images are built using FAI and are actively maintained and
kept up to date with content changes, e.g. base system security
updates and point releases.

Images are published via the AWS Marketplace and directly on the
Debian wiki (https://wiki.debian.org/Cloud/AmazonEC2Image/Stretch)

As of 2018-07-23, 8882 unique AWS accounts have used the stretch AMIs
via the AWS Marketplace. No data is available on the number of actual
instances, the instance types in use, or the percentage of those
accounts that are actively using the AMIs.

No data is available on the use of the instances published on the
wiki.

Jessie
---

Jessie images are built using bootstrap-vz, but are *not* actively
maintained. They had previously been maintained by James Bromberger,
but he is not actively involved anymore. No updates
of any kind since at least January of this year. We should probably
try to fix this, but it'd be nice to get help from James to make sure
we're building something sufficiently similar to the current
AMIs. Volunteers are welcome. Alternatively, we should mark them as
"retracted" in the AWS Marketplace, which will let current users
continue to use them but will prevent them from showing up in search
results.

As of 2018-07-23, there are 27891 unique AWS accounts that have used
these AMIs via the Marketplace. As with Stretch, there is no data
regarding how the images are being used or how actively.

Buster
---

FAI configs were updated to support buster at the previous cloud
sprint hosted by Microsoft last fall, and the resulting images were
given cursory testing. They haven't been tested on an ongoing
basis. As the release approaches, we should be building and testing
more frequently.

TODO


Network configuration
---

We use a hack
(https://salsa.debian.org/cloud-team/fai-cloud-images/blob/master/config_space/files/usr/local/sbin/inet6-ifup-helper/EC2)
to work around bug #804396
(https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=804396)
and make ifupdown work without manual configuration in IPv4 and
dualstack AWS VPCs. The script could probably be improved, integrated
with udev rather than relying on a pre-configured
`/etc/network/interfaces`, etc. Additionally, the script isn't even
packaged, but it should be if we're going to ship it on the public
images.

Will buster continue to use ifupdown, or is it switching to
systemd-networkd? If the latter, is ifupdown still available? Last
time I tried, I was unable to get a single networkd configuration to
behave correctly in all of the different VPC network configuration
possibilities.

cloud-init
---

There's a new version available. It should probably be packaged soon
to give us time for testing before the buster release.

Specialized AMIs?
---

Kubernetes, installed using the KOPS cluster management tool, uses our
AMIs by default. We should be engaging with them to ensure that things
are working as well as possible.

AWS publishes some custom Amazon Linux-based AMIs as an optimization
for certain use cases. E.g. there's a "minimal" AMI with a reduced set
of software, and an ECS Optimized AMI pre-configured with Docker and the
necessary support software to integrate with the ECS container
orchestration software. Do our users want similar AMIs? Even if we
don't publish the AMIs, we could accept contributions to our FAI
configs allowing users to easily build their own derivative images.

AWS Account Ownership
---

The main Debian AWS account still lists James Bromberger as the
primary owner and contact. I've done preliminary work to get it moved
to a Debian role account. We still need to work with AWS and James to
get the ownership and contact details updated. The email alias is
aws-ad...@debian.org and it's maintained by DSA. Current members are
myself, jeb, 93sam, and kula.

Special AWS regions
---

AWS has reached out to me because (at least) one of their customers
has asked them about the availability of our AMIs in the AWS GovCloud
region. This is a special region built to support US government
customers bound by ITAR regulations, and AWS accounts by default don't
have access to it. ITAR requires that people directly interacting with
the region in any kind of administrative manner be US citizens within
US territory. I meet these requirements and can perform the necessary
tasks to make images available in GovCloud, but so far haven't had
time to complete the necessary administrative tasks to get set up in
that region. Once we have access to GovCloud, integration of that
region into our existing AMI publication workflow should be reasonably
straightforward. Volunteers to chase down the various administrative
loose ends and get us up and running in GovCloud would be
appreciated. We need 

Re: DebConf18 Cloud Team BoF summary

2018-07-31 Thread Noah Meyerhans
Thanks, Tomasz, for putting together the summary. Sorry I couldn't be
there.

On Tue, Jul 31, 2018 at 12:37:51PM +0200, Tomasz Rybak wrote:
> There was intensive discussion related to automated/unattended
> upgrades of our images: whether we should do it at all (they may be
> run in environments w/o internet access), when (usage needs, avoiding
> killing mirrors when thousands of machines try to perform an upgrade at
> once, etc.).

I probably shouldn't be surprised by this, but I am. The installation of
unattended-upgrades by default was something we decided on, and
announced to -devel, two years ago. I'll restate my previous position on
this:

On a well-maintained system, u-u is trivial to disable if that's the
admin's desire. On a poorly maintained system, u-u is essential for
the safety of the user, the cloud provider, and the internet at large.
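
(For what it's worth, "disable" can be as simple as something like the
following, or removing the package outright:)

# stop apt's periodic runs from applying upgrades automatically
echo 'APT::Periodic::Unattended-Upgrade "0";' \
    > /etc/apt/apt.conf.d/99disable-unattended-upgrades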

If there are changes we can make to the configuration we install in
cloud environments, those can be discussed, but as far as I'm concerned
the basic default availability of u-u is beyond debate.

> Some vendors upgrade during restart, but it lengthens boot time, which
> matters when VM is run for short time (common use case for clouds). No
> consensus was found - but we should check what Ubuntu does.

Minor nit: Package updates are installed during the *first* boot, not
typically at *reboot*. The distinction is important for two reasons:

1. cloud-init, which is typically what's handling this task, makes a
distinction between first boot and subsequent reboots, and typically
only installs updates on first boot.

2. Cloud instances are very often never rebooted. They boot once, and
are replaced. (There are, of course, exceptions; we need to be aware of
both use cases.)

noah





Re: Building cloud images using Debian infrastructure

2018-08-09 Thread Noah Meyerhans
On Wed, Aug 08, 2018 at 05:47:06PM -0400, Jimmy Kaplowitz wrote:
> > No, the main reason is isolation.  The builds take some global
> > resources, loop devices, and may not return them in case of some errors.
> 
> Google builds their official GCE Debian images inside transient GCE
> instances, solely for isolation purposes (they use the Debian cloud team
> build tools, probably still bootstrap-vz until we get FAI sufficiently
> working). To be clear, nothing about that needs to be in GCE, except for
> a few implementation details of their particular build harness. Regular
> VMs work fine.

At the Microsoft-hosted cloud sprint I proposed using cloud-provider VMs
for builds targeting that provider. This is not because of any
provider-specific behavior, but rather because the cloud providers
provide all the isolation, resource management, and automation hooks
that we could ask for.  I still maintain that it's the better approach,
but was told at the time that the builds need to happen on Debian-owned
hardware, and that we had users specifically insisting on this. I'm not
convinced by that argument, nor have I heard anything from AWS users
expressing concern that the images are being built on AWS.  Meanwhile I
have been building all the AWS images using small (t2.micro), transient
EC2 instances and a small shell script to handle the VM lifecycle and
have managed to completely avoid the complexity of that giant whiteboard
drawing from the sprint...

https://salsa.debian.org/noahm/ec2-image-builder/blob/master/bin/launch-fai-builder.sh

> I support the goal of isolation, but transient VMs can serve the same
> purpose in a workflow that's more easily portable between casulana,
> GitLab CI (I presume?), a personal dev laptop, and anywhere else one
> might want to reproduce the flow. Which seems like a win for maximizing
> how easy it is for people to hack on this - and also for companies like
> Google to converge further with us on tooling.

Indeed. Some time ago, I posted on my blog about how users can use our
build tooling to generate their own custom AMIs that derive from our FAI
configs. The workflow is identical, because it uses common
infrastructure. A build process that relies on custom Debian
infrastructure is not going to be useful to users, meaning they'll have
to use a different workflow to build images, with different bugs, edge
cases, failure modes, etc. (Note that the post was written before the
above mentioned small shell script was written, so there are more steps.
I should update that post...)

https://noah.meyerhans.us/blog/2017/02/10/using-fai-to-customize-and-build-your-own-cloud-images/

noah



signature.asc
Description: PGP signature


Re: Stretch EC2 information doesn't appear to be correct

2018-02-27 Thread Noah Meyerhans
On Tue, Feb 27, 2018 at 11:10:48PM -0500, Christopher David Howie wrote:
> * AMI name is "debian-stretch-hvm-x86_64-gp2-2018-02-22-67467"
> 
> * it is owned by AWS account ID 379101102735
> 
> * For region us-east-1, the AMI ID is ami-0dc82b70

Apologies, this was my fault. The AMIs are generated and globally
replicated, but not marked public until testing is complete. The step of
making the AMIs public happens manually after the AMIs have been
validated, and I forgot to do it this time. I've marked the existing
AMIs public, so you should be able to access ami-0dc82b70 in us-east-1
now, along with the rest of the regional AMIs. I'll see about
prioritizing work to avoid this mistake in the future, or at least to
generate some kind of notification in case it happens again.
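
For reference, the step I missed amounts to flipping the launch
permission on each regional AMI, roughly along these lines (repeated for
every region the image was copied to):

  aws ec2 modify-image-attribute \
      --region us-east-1 \
      --image-id ami-0dc82b70 \
      --launch-permission "Add=[{Group=all}]"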

> * There is an AMI with this name:
> "debian-stretch-hvm-x86_64-gp2-2018-02-22-67467-572488bb-fc09-4638-8628-e1e1d26436f4-ami-0dc82b70.4"
> -- but it is owned by a different AWS account (ID 67959241), and the
> AMI ID is different (ami-22be575f).

This is the AMI owned and republished by the AWS Marketplace. If you
launch the stretch AMI via the listing at
https://aws.amazon.com/marketplace/pp/B073HW9SP3, you'll get this AMI.
The contents are bit-for-bit identical to the AMIs on the Stretch wiki.

Apologies again for the confusion.

noah




signature.asc
Description: PGP signature


Re: Announcing EOL for Jessie images

2018-10-14 Thread Noah Meyerhans
On Sun, Oct 14, 2018 at 06:02:23PM +0100, Steve McIntyre wrote:
> One of our (many!) discussion topics at our sprint last week was
> announcing the end of support for our published Jessie (Debian 8.x)
> images. We strongly recommend that all our existing users should move
> forward to Stretch (Debian 9.x) to ensure that they have continuing
> security support. Jessie is no longer officially supported by Debian,
> so our advertised cloud images should also reflect that.

I've submitted requests to the AWS Marketplace to remove our Jessie
listings. They haven't yet acted on these requests, but they should do
so in the coming week.

I will also update the Jessie (and wheezy!?) listings on the wiki to
indicate that they're unsupported. I will remove references to specific
AMI IDs, but will include a note that previous versions of the wiki
pages contain such entries if somebody really really needs them and
understands what they're getting by launching one.

noah



signature.asc
Description: PGP signature


Re: Announcing EOL for Jessie images

2018-10-21 Thread Noah Meyerhans
On Fri, Oct 19, 2018 at 04:14:47PM +0200, Raphael Hertzog wrote:
> > The main thing: concerns were raised by several of the cloud platforms
> > people that LTS security doesn't seem to be working very well. They're
> > not seeing fixes happening for known issues, and so at the moment they
> > don't have trust in the process.
> 
> Really? This is the first time I hear such feedback. Can you put me in
> touch with the person(s) who made those claims so that I can try to have
> more concrete information about the alleged problems?

I'm sure a lot of it is a matter of perception, but the level of
integration of LTS with the stable lifecycle does not seem as deep as
someone familiar with Debian stable might expect it to be. For example,
security announcements being published to a list other than
debian-security-announce makes it feel very unofficial and does not
invoke the same level of confidence in the commitment (it is somewhat
reminiscent of the secure-testing effort).

Lack of integration with packages.debian.org and incomplete coverage of
the archive also present problems. For example, despite the existence of
DLA 1531, I cannot find evidence of a 4.9 kernel for jessie on
packages.debian.org except in jessie-backports, and backports is well
documented as not having official security support. (Again, I realize
that this may be a matter of visibility and perception.)

For my part, as maintainer of the images on AWS, I don't want to prevent
people currently using the jessie images from continuing to do so. I
simply don't want new (to AWS or to Debian) users from starting out with
jessie. As such, I've made the jessie listings slightly less
discoverable using AWS interfaces, and have noted their deprecation on
the relevant Debian wikis. Somebody who is familiar with LTS and
interested in using it is certainly welcome to do so, though.

noah



signature.asc
Description: PGP signature


Re: Announcing EOL for Jessie images

2018-10-22 Thread Noah Meyerhans
On Sun, Oct 14, 2018 at 10:53:19AM -0700, Noah Meyerhans wrote:
> I've submitted requests to the AWS Marketplace to remove our Jessie
> listings. They haven't yet acted on these requests, but they should do
> so in the coming week.

It sounds like the removal of the jessie listings has taken effect, and
this is apparently causing some pain for users. The AWS Marketplace team
has reached out to me to relay that they've been contacted by multiple
customers who are still relying on the jessie images and are confused by
their disappearance. It seems that a good number of them are aware of
LTS and are expecting to make use of it.

I wonder if it might be worth it to continue to list the jessie images?

I also wonder if it might be worth it to update them with the 4.9 kernel
from LTS security? It's necessary for full KPTI, and thus the most
complete mitigations for the spectre/meltdown bugs, etc. As I understand
the concerns raised at the cloud sprint, most of them were around the
kernel. If the LTS team is keeping 4.9 fresh in jessie, these concerns
may be addressed.

As it is, a freshly booted instance of the latest published jessie AMI
has >100 outstanding package updates, so some kind of update is
definitely warranted if we're going to keep publishing them. I don't
mind doing this work, but these AMIs were created by jeb using
bootstrap-vz, and I don't know how that works or where the configuration
for them lives.

What do people think? Does anybody have particularly strong objections
to putting the AWS Marketplace listing for jessie back up?  I think we
may have been hasty with the EOL of the jessie images, at least on AWS.

noah



signature.asc
Description: PGP signature


Re: Announcing EOL for Jessie images

2018-10-22 Thread Noah Meyerhans
On Mon, Oct 22, 2018 at 11:34:14AM -0700, Larry Fletcher wrote:
> So far the security updates for the Jessie kernel have stayed in the 3.16.*
> range, and I doubt if that will change.

Try apt-cache policy linux-image-4.9-amd64

> A few weeks ago I ran a test "dist-upgrade" to Stretch and was surprised to
> see that it installed Exim.  I don't use an MTA and Jessie didn't have one
> when I installed it, so I guess this is something new.  I prefer a minimal
> image like Jessie.  (If I needed an MTA I would use Exim, but other people
> might want to use something else.)

Exim is not installed on the stretch AMIs. If it was installed during an
upgrade, that's most likely because of a Recommends declared in another
package being installed. You can remove it.
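
If it did get pulled in during the upgrade, something like the following
should take it back out (a sketch assuming the default exim4 packages;
adjust to what is actually installed):

  # Remove exim if it was pulled in via Recommends during the upgrade.
  apt-get purge exim4 exim4-base exim4-config exim4-daemon-light
  # Recommends can also be skipped during the upgrade itself:
  # apt-get -o APT::Install-Recommends=false dist-upgrade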

noah



signature.asc
Description: PGP signature


Re: Announcing EOL for Jessie images

2018-10-22 Thread Noah Meyerhans
On Mon, Oct 22, 2018 at 07:12:54PM +, Luca Filipozzi wrote:
> Is this true: the goal of the Debian Cloud Team is to make roughly equievalent
> Debian Cloud Images for the various cloud providers so that
> Debian users have a consistent (to the degree possible) experience with
> Debian in each cloud?

That is a goal.

> If true, is it also true for Debian LTS Cloud Images?  I can accept that
> it might not be, especially if the Debian Cloud Team _isn't_ taking
> accountability for these LTS images (regardless of who is responsible
> for the work: Credative for Azure, etc.).

As a user, I wouldn't expect there to be any visible differences, aside
from package updates to address issues, between an oldstable image
generated during the security team's support window and an LTS image
generated after the handoff. So I don't know that I see a reason why
this *shouldn't* be a goal for the cloud team.

To a very large degree, all I would expect as a user of the LTS images
is that I have fewer pending updates on first boot than I would if I
launched the latest 8.x images generated during its oldstable window.

Clearly the project as a whole is still trying to figure out how the LTS
effort relates to the rest of the project. With a relatively small
number of people being paid to work on it, and potentially a lack of
interest among the developer community in general in supporting such old
software, I can see how it might not be sustainable. However, there is
clear user demand for it, or we wouldn't be having this conversation. So
I think we're better served by figuring out how to make LTS work (in
cloud environments and more generally) than by trying to figure out how
we can say no.

> > Note that I'm not necessarily proposing that we provide regularly
> > updates LTS images for the full duration of the LTS lifecycle.
> 
> This, combined with the per-provider approach, suggests that the Debian
> Cloud Team isn't accountable for the LTS images? Which would then lead
> to a question about how to publish the LTS images.

jessie is somewhat a special case, since it completely predates the use
of FAI for our image construction, and largely predates the existence of
the cloud team in its current form. As a result, it would require
additional work on the cloud team's part in order to support it at the
same level as stretch and future releases. It's unlikely that anybody is
going to do the work to fully integrate it with the rest of the
gitlab-driven CI pipeline, so image generation may be somewhat more
manual. So today, there is additional ongoing effort required. Nobody is
required to put in that effort, but I am willing to do so in order to
support our users, and others may be as well.

For future LTS releases, ongoing support after the LTS handoff is
probably relatively easy, since we'll already have all the automation
built out and the ongoing effort will be small.

Is there any reason to expect that the LTS team will not support the
cloud team or cloud users, should we (or our members individually)
decide to continue to publish images on one or more cloud platforms?

noah



signature.asc
Description: PGP signature


Re: Announcing EOL for Jessie images

2018-10-22 Thread Noah Meyerhans
On Mon, Oct 22, 2018 at 05:52:15PM +, Luca Filipozzi wrote:
> I object because, at the 2018 Debian Cloud Sprint, we collectively
> decided that we were not offering Debian LTS Cloud Images.  Are we
> changing our decision?  I'd like to see collective decision making, not
> one-offs for each platform.

That's precisely why I asked. However, I don't want to focus too much on
process or collective decision making. Every provider is different. The
existing images available for a given provider, and their adoption, is
different. The level of support given by the provider themselves is
different. The level of effort involved in continuing to support jessie
images varies by provider. Etc, etc.

I propose to revisit the decision because I think it was made in haste,
and with incomplete information. Zach expressed concern that there were
security issues not being fixed in jessie by the LTS team. My assumption
was that these issues were related specifically to the Intel speculation
class of bugs, complete mitigations for which rely on KPTI, which is not
present in the 3.16 kernels from jessie. However, it seems that LTS is
currently shipping a 4.9 kernel, in addition to the original 3.16
kernel, which was uploaded by the kernel team. (Not that you can
discover any of this on packages.d.o or similar resources, which IMO is
a real problem in terms of the legitimacy of LTS.)

If there are other specific issues that Zach (or anybody else) can point
to that haven't received attention from the LTS team, we should consider
those. Perhaps he was referring to something besides the kernel.

Given the availability of linux 4.9 in LTS, I am less attached to the
decision we made at the sprint. Given the impact that the decision has
had on some of the users of the jessie AMIs on AWS, I am interested in
revisiting it.

Note that I'm not necessarily proposing that we provide regularly
updated LTS images for the full duration of the LTS lifecycle. I'm not
actually proposing any timeline at this point, though I'll come up with
one if people want. I'm simply suggesting that we consider that we have
users who do want LTS, and we should support them to the extent that
we're able.



signature.asc
Description: PGP signature


Re: Announcing EOL for Jessie images

2018-10-23 Thread Noah Meyerhans
On Tue, Oct 23, 2018 at 10:04:33AM +0200, Thomas Goirand wrote:
> >> I object because, at the 2018 Debian Cloud Sprint, we collectively
> >> decided that we were not offering Debian LTS Cloud Images.  Are we
> >> changing our decision?  I'd like to see collective decision making, not
> >> one-offs for each platform.
> > 
> > That's precisely why I asked. However, I don't want to focus too much on
> > process or collective decision making. Every provider is different.
> 
> We're supposed to be deciding collectively. I don't agree that every
> provider is different for this specific case.

I disagree.

> > The existing images available for a given provider, and their adoption, is
> > different.
> 
> No. On every platform, seeing Jessie going away is painful. Please don't
> consider your own case as a special one. This isn't helping anyone, and
> this isn't the case. I'd very much prefer if we were having consistency.

I disagree.

The level of adoption matters, and is potentially quite
different. A newer cloud platform, or one that was not supported by
Debian, might not have any users of jessie at all. So the matter is
irrelevant to them.

The specifics of this issue relate to the discoverability and
presentation of the jessie AMIs in the AWS Marketplace. Different cloud
providers may have different systems with different behavior. For
example, there may be other ways to indicate that a release is
deprecated while still letting it show up in search results. I do not
want to have to cater to the lowest common featureset. I want to use the
mechanisms provided by the cloud provider in the native way.

noah



signature.asc
Description: PGP signature


Re: Announcing EOL for Jessie images

2018-10-23 Thread Noah Meyerhans
On Tue, Oct 23, 2018 at 11:51:26AM +0100, David Osborne wrote:
>Thank you... so the amis themselves should remain indefinitely?

I'd expect "release" AMIs to remain indefinitely, yes. There's strong
precident for this within AWS in general; it's part of the pledge to not
break users. For example, Amazon's own Fedora Core 6 AMIs from the very
beginning of EC2 are still available and usable in all the places, with
the same AMI IDs, as ever.

What I took down the other day (and have since reverted while we
continue this discussion), was the listings within the marketplace. If
you were already using the marketplace AMIs, they'd continue working,
but you wouldn't be able to discover them using the search interface.

It's worth noting that it's never been required that you use the
marketplace. The latest AMI IDs are always shared publicly from the
Debian account. You can find the details for those on the Debian wiki:

https://wiki.debian.org/Cloud/AmazonEC2Image/



signature.asc
Description: PGP signature


Re: Stretch AMI - user-data doesn't run, but cloud-init claims SUCCESS ?

2018-11-04 Thread Noah Meyerhans
On Sat, Nov 03, 2018 at 10:19:25PM -0600, Jim Freeman wrote:
>2018-10-01 stretch AMI (no idea if this is a regression?)
>/var/log/cloud-init.log claims user-data was run, when it fact it was not,
>with tracebacks and log messages (attached) leading me to think that
>failure is somehow getting mis-cast as success?
>Any confirmations of other failures/successes of user-data would be
>much appreciated ...

It seems that there are some invalid characters in your user-data
script. Cloud-init is printing a message at WARNING-level severity
indicating that it can't run the script, and the following stack trace
explains why:

> 2018-11-02 22:36:11,164 - util.py[WARNING]: Failed calling handler 
> ShellScriptPartHandler: [['text/x-shellscript']] (text/x-shellscript, 
> part-001, 2) with frequency once-per-instance
> 2018-11-02 22:36:11,170 - util.py[DEBUG]: Failed calling handler 
> ShellScriptPartHandler: [['text/x-shellscript']] (text/x-shellscript, 
> part-001, 2) with frequency once-per-instance
> Traceback (most recent call last):
>   File "/usr/lib/python3/dist-packages/cloudinit/handlers/__init__.py", line 
> 103, in run_part
> payload, frequency)
>   File "/usr/lib/python3/dist-packages/cloudinit/handlers/shell_script.py", 
> line 43, in handle_part
> util.write_file(path, payload, 0o700)
>   File "/usr/lib/python3/dist-packages/cloudinit/util.py", line 1747, in 
> write_file
> content = encode_text(content)
>   File "/usr/lib/python3/dist-packages/cloudinit/util.py", line 154, in 
> encode_text
> return text.encode(encoding)
> UnicodeEncodeError: 'utf-8' codec can't encode character '\udca9' in position 
> 14: surrogates not allowed

What's in /var/log/cloud-init-output.log after this failure?
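
As a quick local check before launching, you can verify that the
user-data file is valid UTF-8; invalid bytes (which cloud-init decodes
into surrogates like the '\udca9' above) will make iconv complain. A
sketch, assuming the script is in a local file called user-data.sh:

  # Validate the user-data script is clean UTF-8 before passing it to EC2.
  iconv -f UTF-8 -t UTF-8 user-data.sh -o /dev/null \
      && echo "user-data is valid UTF-8" \
      || echo "user-data contains invalid byte sequences"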



signature.asc
Description: PGP signature


Bug#910654: cloud.debian.org: cloud-init apt module can't add GPG keys; dirmngr missing

2018-10-09 Thread Noah Meyerhans
On Tue, Oct 09, 2018 at 11:01:33AM +, Daniel Strong wrote:
> Stderr: gpg: failed to start the dirmngr '/usr/bin/dirmngr': No such file or 
> directory
> gpg: connecting dirmngr at '/root/.gnupg/S.dirmngr' failed: No such file 
> or directory
> gpg: keyserver receive failed: No dirmngr

gnupg has only a Recommends on dirmngr, not a Depends. When we build the
cloud images, we don't install recommends for most packages.  We should
fix this by explicitly adding dirmngr to the list of packages.
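
Until the image build is fixed, affected instances can work around it by
installing dirmngr explicitly, e.g.:

  # Workaround on an existing instance until dirmngr ships in the image.
  apt-get update
  apt-get install --no-install-recommends dirmngr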

Thanks for reporting this.

noah



signature.asc
Description: PGP signature


Re: Oracle Cloud Infrastructure

2018-12-27 Thread Noah Meyerhans
On Wed, Dec 26, 2018 at 09:19:56AM -0500, Paul Graydon wrote:
>  We are now standardizing on FAI to build our images. If you want the
>  Debian Oracle image to become official, it will have to use that.
> 
>I took a quick look at FAI a few weeks back, but the dependency on
>DHCP/tftp made it more than a little complicated to run in our cloud
>environment (where we've been traditionally building images).  As the
>requirement is to build in your infrastructure, I guess that isn't an
>issue :)  I'll grab a look, after the Christmas break, at setting up a
>simple VirtualBox environment and see what's what.

FAI doesn't actually require tftp or dhcp when generating cloud images,
and the related packages are only listed as Recommends, so they don't
need to be installed at all. FAI can be used for imaging physical
systems, which is where the tftp functionality is used.
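
In other words, for cloud image building you only need the core FAI
packages, and skipping Recommends keeps the DHCP/tftp bits out entirely.
A sketch of a minimal install:

  # Minimal FAI install for image building; the tftp/dhcp pieces are
  # only Recommends and are not pulled in this way.
  apt-get install --no-install-recommends fai-server fai-setup-storage qemu-utils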

noah



signature.asc
Description: PGP signature


Bug#915127: cloud.debian.org: Please add AWS image for new ARM instances

2018-11-30 Thread Noah Meyerhans
It's on its way. A newer ENA driver is required for working network, so that's 
kind of a blocker.


On November 30, 2018 10:17:06 AM PST, Phil Endecott 
 wrote:
>Package: cloud.debian.org
>Severity: wishlist
>
>Dear Maintainer,
>
>AWS have recently announced new instance types that use the 64-bit ARM 
>(aka aarch64) architecture.  Machine images are currently available for
>
>"Amazon Linux 2", RHEL, Ubuntu and Fedora.  It would be great to also 
>have official Debian images.
>
>
>Many thanks,  Phil.

-- 
Sent from my Android device with K-9 Mail. Please excuse my brevity.

Re: Any plans for AWS ARM image?

2019-01-10 Thread Noah Meyerhans
On Thu, Jan 10, 2019 at 11:08:08AM +0100, Bastian Blank wrote:
> > > > Are there any plans to offer Debian AMIs for them?
> > > See
> > > https://salsa.debian.org/cloud-team/debian-cloud-images/merge_requests/46
> > I see that the MR has now been merged. What would be the next step?
> 
> SPI working with Amazon to get new projects and contractual
> relationships up and running, which are not tied to single developers as
> the current ones we use are.

For stretch, I think we should use the same account that we've been
using for our amd64 AMIs. Doing otherwise would be confusing to users.
So, the next step for us in that case is to wait until a kernel with the
necessary changes is included in the archive. That should happen with
the next stretch point release, which should happen relatively soon. I
don't think there's a firm date for it yet, though.

For buster and beyond, we should definitely get the new accounts in
place. We should probably get on that Real Soon Now...

noah



signature.asc
Description: PGP signature


Re: Any plans for AWS ARM image?

2019-01-10 Thread Noah Meyerhans
On Thu, Jan 10, 2019 at 07:22:18PM +0100, Lucas Nussbaum wrote:
> Would it make sense to publish an unofficial image and ask for feedback
> from users?

Let's wait until we have candidate stretch kernel packages that fix
#918330 and especially #918188. I believe at least the latter has a fix
on salsa. Otherwise, I'm happy to publish images. Quite a few people
have asked about them.

> I am playing with the idea of doing archive rebuilds on AWS using arm64
> instead of amd64.

Why not both?



Re: Building cloud images using Debian infrastructure

2018-09-17 Thread Noah Meyerhans
On Sat, Sep 15, 2018 at 09:54:04PM +0200, Bastian Blank wrote:
> > ### EC2 upload
> > 
> > No idea yet.  There is a script in our repo to create a single image.
> > But I see nothing that would handle the whole pipeline with replicating
> > and registering AMI in all the regions.
> 
> Shouldn't we have something already?  Even the wiki does not provide
> information.

The scripts are at https://salsa.debian.org/noahm/ec2-image-builder,
where they've always been (well, accounting for the alioth->salsa
migration). Check the list archives if you missed it previously.

Please remember that the stretch EC2 AMIs have been updated pretty much
constantly in all AWS regions (with every kernel or core package update,
and every point release) since stretch was released. I consider
publication of our FAI-based images to AWS to be pretty much a solved
problem.

Note that I don't really consider these scripts complete, in that they
don't necessarily all have as nice a UI as they might, and they could
all be a little more flexible. But they work for me, and I'm not really
inclined to work on them further as it's clear that some others don't
like how I've solved the problem and the inability to reach a
satisfactory conclusion to the debates around how/where to build images
has left me without motivation for further involvement...

noah



signature.asc
Description: PGP signature


Re: locales vs. locales-all

2019-01-26 Thread Noah Meyerhans
On Fri, Jan 25, 2019 at 02:56:53PM +0100, Bastian Blank wrote:
> While installing locales-all is pretty handy and avoids a lot of
> problems, I realized that is comes with a huge drawback: it makes the
> images a lot larger (230MB uncompressed and over 100MB compressed).  It
> makes them so much larger that building images comparable with legacy
> Azure build process gets problematic on salsa.

We install locales on the stretch AMIs for Amazon EC2. I haven't heard
anybody complain about that. So I'm in favor of continuing to do that,
rather than installing locales-all.

noah



signature.asc
Description: PGP signature


Re: Sprint work items

2019-01-27 Thread Noah Meyerhans
> > Noah
> > continue maintaining the stretch images for AWS

Still happening. I also did some driver backporting so we'll be able to
support the Amazon EC2 arm64-based instances with the next point
release. Buster will also support these instances.

> > developing buster aws images

Some progress here. The FAI configs build a reasonably usable image
today, so there's not a whole lot that needs critical attention. Waldi
and I have been working on improvements to the network configuration
that should help AWS and other cloud services.

> > automatic building/registering from casulana

Build for arm and amd64 is driven by the gitlab runners at this point.
There is not currently a pipeline stage for AMI registration, largely
due to the following:

> > coordinate creating new AWS account (including publishing on gov cloud)

There hasn't been any progress here. :(

noah



signature.asc
Description: PGP signature


Bug#925530: cloud.debian.org: Debian docker images pointing to github for bug tracking

2019-03-26 Thread Noah Meyerhans
On Tue, Mar 26, 2019 at 12:25:12PM +0100, Lucas Nussbaum wrote:
> On https://hub.docker.com/_/debian, there's:
> 
> > Where to file issues:
> > https://github.com/debuerreotype/docker-debian-artifacts/issues
> 
> Are those official images? I'm surprised by official Debian images
> pointing to a non-free web service. I would expect the BTS to be used
> for bug tracking.

Well, Docker Hub itself is a non-free service. Further, there are other
official Debian components (packages in main) that use GitHub for their
primary work coordination, so this is not without precedent.

> Also, there's:
> > Where to get help:
> > the Docker Community Forums, the Docker Community Slack, or Stack Overflow

Those are Docker's official help channels.

With all that said, the Debian Docker images aren't covered under the
cloud.debian.org pseudopackage, so I guess you'll need to follow up with
tianon or paultag... Or open an issue on GitHub. ;)

noah



signature.asc
Description: PGP signature


Re: Cloud-init datasource ordering

2019-04-03 Thread Noah Meyerhans
On Thu, Apr 04, 2019 at 09:27:11AM +1300, Andrew Ruthven wrote:
> > > Would it be possible to move the Ec2 datasource up the list like "[
> > > NoCloud, AltCloud, ConfigDrive, OpenStack, Ec2, CloudStack,
> > > ${DIGITAL_OCEAN_SOURCE} MAAS, OVF, GCE, None ]"? This also seems to
> > > be in line with expectations on how the datasources have been
> > > sorted before dc1abbe1.
> > > 
> > If we do that, then OpenStack people are going to wait 120 seconds.
> > So,
> > bad idea...
> 
> Hmm, this situation is likely going to just get worse as more
> datasources are added.
> 
> Can we reduce the timeout?
> 
> Try datasources in parallel and first one that responds wins?
> 
> Is it worth having multiple images with the order set appropriately? 

Yes, I think the expectation is that you should be overriding the
default datasource list to specify only the source(s) relevant to your
particular deployment platform. The list can be specified in the
debconf cloud-init/datasources value.

For example, we specify an appropriate value for our Amazon EC2 images
at 
https://salsa.debian.org/cloud-team/debian-cloud-images/blob/master/config_space/debconf/EC2
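
On an image you build or customize yourself, the equivalent preseed looks
roughly like this (the datasource value here is just an example):

  # Preseed cloud-init to consult only the EC2 datasource (example value).
  echo 'cloud-init cloud-init/datasources multiselect Ec2' | debconf-set-selections
  dpkg-reconfigure --frontend noninteractive cloud-init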

noah



Re: Status of Debian accounts and legal agreements with cloud providers

2019-04-04 Thread Noah Meyerhans
On Thu, Apr 04, 2019 at 07:51:22PM +0100, Marcin Kulisz wrote:
> > > > Let me know if this stalls;  I can put you in touch with someone on
> > > > the Azure team.
> > 
> > The Azure team asked some time ago for an email address to attach as
> > owner to those accounts.  Also we need that to attach AWS accounts.  Do
> > we have this address in the meantime?
> 
> I don't think so.

Some time ago (following the 2017 cloud sprint, IIRC), we created
aws-ad...@debian.org. See #7163 in rt.debian.org, if you have access to
that.

This was created with a less well-developed understanding of our account
needs than what we came up with at the 2018 sprint, but it is not
currently being used for anything and we can easily repurpose it for the
new AWS accounts.

Per the original request, the membership should be:
noahm
jeb
93sam
kula

We should probably add the cloud delegates and (maybe?) an SPI
representative to it if we're going to use it.

Messages to that alias are being archived at master.d.o:~debian

noah



Re: Status update at DebConf 19

2019-03-29 Thread Noah Meyerhans
On Fri, Mar 29, 2019 at 03:09:50PM -0300, Lucas Kanashiro wrote:
> I think DebConf is the perfect place to share with the Debian community
> the work we have been doing and collect feedback :)

+1, this is a great idea. There should also be a BoF for people
interested in a more interactive session. We've done such things before.

> BTW do you intend to attend DebConf 19? If you have any doubts about
> this year DebConf I can help since I am part of the local team.

I don't know if I'll be able to make it. I have a bunch of other
international travel between now and then and may be unable to schedule
another trip. I hope to resolve the question soon, either way. I should
be available virtually if I'm not able to make it in person.

noah



Re: Fixes for cloud-init in Debian Stretch

2019-02-21 Thread Noah Meyerhans
On Wed, Feb 20, 2019 at 05:17:11PM +0100, Konstantin wrote:
>Seems that Debian Stretch suffers from this
>bug [1]https://bugzilla.redhat.com/show_bug.cgi?id=1430511
>Please check if this patch can be added to cloud-init
>package 
> [2]https://github.com/larsks/cloud-init/commit/3f8d6dbbbc9ab4679a4820d7cc60265fa67807cd
>Affected cloud-init version is 0.7.9-2
>Error from AWS instance:
>2019-02-20 13:44:29,180 - util.py[WARNING]: Running module
>ssh-authkey-fingerprints (<module 'cloudinit.config.cc_ssh_authkey_fingerprints' from
>'/usr/lib/python3/dist-packages/cloudinit/config/cc_ssh_authkey_fingerprints.py'>)
>failed

How often does this problem happen for you? I build the Amazon EC2 AMIs
and use them quite regularly, and have never encountered this problem.
So unless there's something non-deterministic about systemd's dependency
handling, I don't think this is your issue.

Are you able to extract the contents /var/log/cloud-init-output.log from
your instances? I'd be curious to see if it provides more insight into
what's failing.

noah



Re: Providing qemu-guest-agent in our images

2019-02-07 Thread Noah Meyerhans
On Thu, Feb 07, 2019 at 10:17:53AM +0100, Thomas Goirand wrote:
> As a follow-up with the discussion about agent inside VMs, I wonder if
> we should install qemu-guest-agent inside our images. Normally, it
> provides what Ted was talking about: hooks for freezing filesystem,
> mysql, etc.

I don't think users of Amazon EC2 would expect this agent to be
installed by default. I'd imagine the case would be similar for GCE and
Azure.  If it makes sense in common openstack deployment scenarios, then
I'm fine with installing it by default there. Otherwise, I'd be happiest
just leaving this to the admin to set up via cloud-init or similar.

noah



signature.asc
Description: PGP signature


Re: AWS Debian Stretch Marketplace AMI doest'not allow launch t3a/m5a Amazon EC2 instance

2019-05-24 Thread Noah Meyerhans
On Tue, May 21, 2019 at 11:14:02AM +0300, Eugeny Romanchenko wrote:
>Is it possible for you to add you current Marketplace image to the list of
>supported for t3a/m5a AWS instances?

I've submitted a request to update the AWS Marketplace listing. The new
listing will use the latest stretch 9.9 AMIs (as visible at
https://wiki.debian.org/Cloud/AmazonEC2Image/Stretch) and will support
newer instance types.

The submission must be reviewed by Marketplace staff at AWS. This can
take anywhere from a few hours to a few days. If you're using the
current Marketplace listings, you should receive an email notification
from AWS when the listing is updated.

noah



signature.asc
Description: PGP signature


Bug#929263: cloud.debian.org: /usr/sbin not in default $PATH

2019-05-20 Thread Noah Meyerhans
On Mon, May 20, 2019 at 11:26:00AM +0200, Jorge Barata González wrote:
>Vagrant image debian/stretch64 v9.6.0
>/usr/sbin is not included by default in $PATH
> 
>```
>vagrant@stretch:~$ service
>-bash: service: command not found
>vagrant@stretch:~$ /usr/sbin/service
>Usage: service < option > | --status-all | [ service_name [ command |
>--full-restart ] ]
>vagrant@stretch:~$ echo $PATH
>/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games
>```

That path is set from /etc/profile, which is not modified by the vagrant
images from the default that Debian installs. /usr/sbin is not in the
default PATH in Debian normally.
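
If you just want the admin tools on your own PATH in the vagrant box,
appending the sbin directories in your shell profile is enough, e.g.:

  # Add the sbin directories to the current user's PATH (e.g. in ~/.profile).
  export PATH="$PATH:/usr/local/sbin:/usr/sbin:/sbin"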

If you want to discuss changing this project-wide, we could certainly do
so, but that would be quite a bit broader in scope than
cloud.debian.org.

noah



Bug#929263: cloud.debian.org: /usr/sbin not in default $PATH

2019-05-20 Thread Noah Meyerhans
Control: severity -1 wishlist

> This is a historical convention, going back decades, that only the
> system administrators needs to run the programs in /sbin and
> /usr/sbin.  So to avoid users getting confused when they might run
> those programs and get "permission denied", historically normal users
> won't have /sbin and /usr/sbin in their path.  However many system
> administrators will have their own custom dot files which do include
> those directories in their paths.
> 
> That assumption is perhaps less likely to be true for servers running
> in cloud VM', but making things be different for cloud VM's as
> compared to standard Debian configurations also has downsides in terms
> of causing greater confusion.  So my suggestion would be for you to
> simply have your own custom dotfiles which can set a PATH different
> from the default.

At this point, I think it'd be worth revisiting, at the project level,
the historical tradition of leaving the sbin directories out of non-root
paths. Setting aside all the single user desktop and laptop systems,
there are enough alternative ways to grant restricted root (file ACLs,
etc), and to run in alternate filesystem namespaces (e.g.  containers),
that the functional distinctions that lead to the original directory
split are probably applicable in a minority of situations these days.

This isn't something that I feel strongly about, though. Anybody who
does should retitle this bug appropriately and reassign it to the
'general' pseudopackage, whereupon it can be discussed on debian-devel.
Otherwise it should get tagged wontfix, unless someone thinks this is an
appropriate change to introduce at the cloud image level (I would not
agree with this).

noah



signature.asc
Description: PGP signature


Re: Cloud Image Finder Prototype

2019-05-20 Thread Noah Meyerhans
On Mon, May 20, 2019 at 03:38:25PM -0300, Arthur Diniz wrote:
>The first thing we did was a [1]Low Fidelity Prototype, this was
>just a draft that we used as a basis to come up with the [2]High Fidelity
>Prototype.

These look great!

>Also we think that is important that if you could tell us what feature do
>you  expect in an Image Finder that we could not leave it behind.

A couple of bits of feedback based on what I've seen of the prototypes:

1. It'd be good to use a static (bookmarkable) URL for the provider
details pages. If, for example, I'm an openstack user, I don't want to
have to click through several pages of general purpose information
(including the list of all the providers, etc) every time I want to look
up the latest openstack images. I want to bookmark a page that gives me
the latest openstack images.

2. For the managed cloud services, most users are going to use the
images already published by Debian to the specific cloud they're
interested in. Most people aren't going to download the raw images as
shown in the openstack example we currently have. So we'll need to think
about how we want to present the provider-specific details in a way
that'll be most familiar to somebody working with that provider's
services on a daily basis. That will likely differ somewhat based on the
cloud provider.

Very good looking start so far. I look forward to seeing more.

noah



signature.asc
Description: PGP signature


Re: Proposing to proceed with creating AWS and Azure accounts for Debian

2019-04-30 Thread Noah Meyerhans
On Tue, Apr 30, 2019 at 06:25:47PM +0100, nSed rickm wrote:
>Hi my name is Sedrick I recently joined this mailing Iist to get to know
>more about the debian cloud team .I submited a proposal for GSoC with
>debian this year for the cloud image finder .I would like to catch up on 
>all previous emails . I will greatly appreciate being directed where to
>find them so I can read them. Thanks .

Hi Sedrick. Please see https://lists.debian.org/debian-cloud/ for the
archived discussions from this list.

You may find the minutes from our most recent sprint particularly
interesting:
https://lists.debian.org/debian-cloud/2018/11/msg8.html



Amazon EC2: stretch AMIs updated to 9.9

2019-04-30 Thread Noah Meyerhans
Yesterday I published new stretch AMIs for Amazon EC2 for both arm64 and
amd64 architectures.  The AMIs refresh all package version to those
included in Debian 9.9 (stretch), per the release announcement at
https://www.debian.org/News/2019/20190427

The AMI details, as usual, are available at
https://wiki.debian.org/Cloud/AmazonEC2Image/Stretch

The corresponding update in the AWS Marketplace is still pending. I
recommend using the AMIs listed on the above wiki, rather than using the
Marketplace, for up-to-date images.

noah



Re: AWS AMI entry for ap-southeast-2 (Sydney) missing from Stretch AMI page

2019-07-05 Thread Noah Meyerhans
On Thu, Jul 04, 2019 at 11:49:13PM -0700, Noah Meyerhans wrote:
> amd64: ami-0776bf2e6645ef887
> arm64: ami-00781f5d2e3a6d2ab

Correction, the AMI IDs for ap-southeast-2 are:

amd64: ami-069a1bfa76dd19320
arm64: ami-00781f5d2e3a6d2ab



signature.asc
Description: PGP signature


Re: Debian 10 Buster AWS AMI

2019-07-10 Thread Noah Meyerhans
On Wed, Jul 10, 2019 at 05:58:16PM +0300, Mustafa Akdemir wrote:
>When Debian 10 Buster AWS AMI will be created and added to AWS
>Marketplace?

Unfortunately, we have some administrative details to work out regarding
our publication account with AWS. It will be published as soon as we
resolve the details...

noah



signature.asc
Description: PGP signature


Re: Debian Cloud Sprint Fall 2019

2019-07-02 Thread Noah Meyerhans
On Tue, Jul 02, 2019 at 02:12:27PM -0700, Zach Marano wrote:
>I propose we start planning the next Debian Cloud Sprint.

Sounds like a plan. With DebConf coming up, I suspect there might be an
opportunity to do a bit of planning face-to-face, as well as via email
etc.

I've created a wiki entry for the sprint, with all details left TBD:
https://wiki.debian.org/Sprints/2019/DebianCloud2019

>I offer that we (Google) host this year in Seattle sometime in
>October. Does anyone have any comments, ideas, or issues with
>starting this planning process?  Alternatively, we did talk about
>hosting the next sprint on the east coast of the US or Canada. If
>that is something people are interested in, I am willing to look
>into that as well. The downside being that all large cloud
>providers are based in Seattle and may not be able to get as many
>people to attend.

I personally prefer Seattle, but we've dragged the Europeans all the way
out here enough that we should probably give them a break. I could
probably make October on the east coast work. Preferences would be
Boston, then NYC.

I've reached out to contacts at MIT to see if they'd be able to provide
a venue for this year in the Boston area. I know they've got some people
quite active in the Debian and OpenStack cloud areas who could provide
sponsorship.



Re: Presenting Debian to the user at cloud provider marketplaces

2019-07-07 Thread Noah Meyerhans
On Sun, Jul 07, 2019 at 11:49:22PM +0200, Thomas Goirand wrote:
> Why must Debian users on Azure see the word "credativ"? To me, it's as
> if I uploaded the OpenStack image to cdimage.debian.org, with filename
> "openstack-debian-image-provided-by-zigo.qcow2". This feels completely
> inappropriate.
> 
> Can this be removed?

Credativ sponsored this work. Is it really unreasonable to acknowledge
this? We have quite a long history of displaying public acknowledgements
of our sponsors' contributions.



signature.asc
Description: PGP signature


Re: Last week on IRC

2019-08-18 Thread Noah Meyerhans
On Sun, Aug 18, 2019 at 07:06:27AM +0100, kuLa wrote:
> >New regions on old AWS account: need root account for that.
> 
> Above is not fully correct but anyway it's been sorted and Debian
> cloud images thanks to Noah should be now available in the new regions
> as well.

It's been sorted out for all existing public regions, but future regions
will still need somebody to manually enable them. I'm not 100% sure, but
I believe this must be done via the web UI (that is, there is no API).



Re: Debian Cloud Sprint Fall 2019

2019-08-15 Thread Noah Meyerhans
On Thu, Aug 15, 2019 at 03:38:00PM -0400, Jonathan Proulx wrote:
> Accommodation
> -
> there's a lot of choice (and it is all fairly priicey
> unfortunately). As a visitor Noah may actually have better info than I

Last time I was in the neighborhood, I stayed at the Fairfield Inn &
Suites, which is a bit further away (roughly 20 minutes on foot) but
much more reasonably priced. The hotel's web site suggests that prices
are high for the days in question, though, which makes me wonder if it's
an unusually busy week:
https://www.marriott.com/hotels/travel/bosbg-fairfield-inn-and-suites-boston-cambridge/
and https://goo.gl/maps/sHfbdEC9j9R7cs3b6

noah



Re: sharing an rbnb (Debian Cloud Sprint Fall 2019)

2019-08-18 Thread Noah Meyerhans
On Sun, Aug 18, 2019 at 10:36:51AM +0100, Marcin Kulisz wrote:
> > I'm very much for sharing a big airbnb with any of you as well. I've
> > searched too, and it's a way cheaper than hotels indeed. I don't mind
> > walking a bit if it's to get the comfort of a private place. So, count
> > me in your airbnb searches! Anyone else to join?
> 
> I agree with zigo, tho 1st have to figure out if I'm able to go

I'm potentially interested in this. It'll depend on how much (if any) my
employer is willing to put towards this. I'll try to answer definitively
in the next few days.



Re: List of stuff running on the Debian AWS account

2019-08-27 Thread Noah Meyerhans
On Tue, Aug 27, 2019 at 10:32:41PM -0300, Antonio Terceiro wrote:
> > Do we have a list of stuff that runs on our old AWS account?  As we need
> > to migrate all of this to our new engineering account, it would be nice
> > to actually know what runs there.  It would be even better if we know
> > how this is all setup.
> 
> ci.debian.net runs there. 1 master server and 12 workers. what exactly
> do you mean with "how this is all setup"? the stu
> 
> there is also https://collab.debian.net/ which is run by Valessio Brito
> (but the instance was created by me).

Were the instances created by hand? Or using a tool like AWS
CloudFormation, TerraForm, etc? Are they managed using some kind of
configuration management system, or is it all manual?

For some background, the AWS account under which these instances are
running is owned by an individual DD, not SPI/Debian. We have created a
new account that is properly manageable and officially recognized by
SPI. We'd like to migrate as much as possible to the new account.

The old account won't go away completely, as it is where the pre-buster
AMIs live, and they can't be migrated between accounts. So there's not
an immediate sense of urgency, but we'd like to get things moved as soon
as possible.

Practically speaking, moving services to the new account will involve
launching replacement instances. If they were created/managed by a tool,
rather than by hand, this is much easier, hence Bastian's question.

noah


signature.asc
Description: PGP signature


Re: Is Eucalyptus upstream dead?

2019-09-03 Thread Noah Meyerhans
On Tue, Sep 03, 2019 at 10:24:24AM +0100, kuLa wrote:
> > on my side I would have no objections with a removal.
> 
> Should we actively ask for removal or wait till normal bugs will become RC and
> removal for all py2 packages is going to be compulsory?
> I personally am ok with both.

In my experience, early removal is preferable. It gives users an
indication that they should be looking for alternatives now, while
things are still reasonably safe to use. They can migrate in their own
time frame. Whereas if we wait until a (possibly security related) RC
bug, the transition is much more abrupt for the users.

The big question to me is whether the packages should be removed from
(old)stable. In general, I'd say yes for the same reasons as above. By
keeping the packages in the archive, we are presenting a level of
support for them that we may not actually be prepared to meet.



Re: Moving daily builds out of main debian-cloud-images project

2019-09-02 Thread Noah Meyerhans
On Sun, Sep 01, 2019 at 12:40:50PM +0200, Bastian Blank wrote:
> As mentioned during the last meeting, I would like to move the daily
> builds out of the main debian-cloud-images project.  The new project
> reponsible for them would exist in a different group, so we don't longer
> need to guard access to the main cloud-team group that strict.
> 
> Disadvantages of this move:
> - Visibility of daily builds is reduced as they are in a new group.
> - Code and config updates for now need explicit changes in the -daily
>   (and the same in the already existing -release) repo to become active.
> 
> Advantages:
> - Access credentials for vendor and Debian infrastructure only exist in
>   the new group, so accidently leaking them is way harder.
> - All jobs in this new group will run on Debian infrastructure.
> - We gain the possibility to actually test upload procedures, which may
>   need access to credentials.
> 
> Any objections?

+1 to this proposal. Reduced visibility of the daily builds is of
minimal impact. It's likely that most users of the daily builds will
either be members of the cloud team or people that have been directed to
the daily builds by members of the cloud team.

noah



Re: Using GitLab CI (was: Moving daily builds out of main debian-cloud-images project)

2019-09-02 Thread Noah Meyerhans
On Mon, Sep 02, 2019 at 05:10:55PM +0200, Thomas Goirand wrote:
> State what? That we're supposed to build the cloud images on Casulana?
> As much as I know, that has always been what we were supposed to do. You
> alone decided that the gitlab's CI was the way to go.

Thomas, you seem to be under the mistaken impression that building
images from GitLab pipelines implies that the builds are not happening
on casulana. That is not the case. The builds coordinated by salsa *do*
happen on casulana.

> We're not only building images for Sid/Unstable, but also for stable. In
> such case, we want the images to be built *only* when needed, ie: when a
> package is updated in security.debian.org, or when we have a point
> release. That's what was done for the OpenStack images since Stretch.

This, also, is fully compatible with salsa-driven builds happening on
casulana.

Does this address your concerns regarding Bastian's proposal?

noah



Re: Releasing Buster for AWS

2019-09-08 Thread Noah Meyerhans
On Sun, Sep 08, 2019 at 07:49:45PM +0200, Bastian Blank wrote:
> If no-one shouts I will do the first release of Buster for AWS with both
> amd64 and arm64 tomorrow.  Azure needs to be done anyway.

Do it. A lot of users will be happy to have buster AMIs. The remaining
points that were unresolved in
https://salsa.debian.org/cloud-admin-team/debian-cloud-images-release/merge_requests/5
aren't relevant to this release, so they can be discussed later.

noah



Re: Releasing Buster for AWS

2019-09-18 Thread Noah Meyerhans
On Wed, Sep 18, 2019 at 10:22:17PM -0500, Stephen Gelman wrote:
> On Sun, Sep 08, 2019 at 07:49:45PM +0200, Bastian Blank wrote:
> > > If no-one shouts I will do the first release of Buster for AWS with both
> > > amd64 and arm64 tomorrow. Azure needs to be done anyway.
> 
> Seems this didn’t happen.  What are the current blockers for getting an AMI 
> released?  Anything I can do to help?
> 

It was released several days ago.
https://wiki.debian.org/Cloud/AmazonEC2Image/Buster



Re: Regular meeting of the team

2019-08-05 Thread Noah Meyerhans
On Sun, Aug 04, 2019 at 10:15:32PM +0200, Tomasz Rybak wrote:
> > So I'd say Wednesday 7th of Aug at 1900UTC or maybe we could use
> > something like
> > doodle.com to coordinate this?
> 
> I propose the same weekday (Wednesday) and hour (19:00 UTC),
> but let's move to next week (so 2019-08-14).
> 7th might be a bit too close for organize.
> 
> Any objections or remarks?

I can make either proposed date work, at 19:00 UTC.

noah



Re: Regular meeting of the team

2019-08-05 Thread Noah Meyerhans
On Mon, Aug 05, 2019 at 05:57:57PM +0100, Marcin Kulisz wrote:
> > 1900 UTC makes it 2100 Geneva time. I'd very much prefer something
> > during work hours if possible. Or is it that the majority of us is doing
> > this away from office hours?
> 
> I suggested this time having in mind that quite a few ppl are in the US, but I
> don't anticipate any issues with my own timetable if we'd make it a bit
> earlier thus I'm fine with earlier time if this fits other ppl as well.

I could go as early as 15:00 UTC. Later is better. :)



Bug#934274: cloud.debian.org: stretch AMIs not available in new regions

2019-08-08 Thread Noah Meyerhans
On Thu, Aug 08, 2019 at 05:56:44PM -0700, Tarjei Husøy wrote:
> > The AMIs in the AWS marketplace should be launchable in the new regions.
> > See https://aws.amazon.com/marketplace/pp/B073HW9SP3 and let me know if
> > it'll work for you. The Marketplace AMIs are identical to the ones we
> > publish directly.
> 
> I wasn’t able to get this to work. I usually launch instances via the
> API, but the API returns zero AMIs for the account (379101102735) in
> the ap-east-1 region. I tried via the web console too, that errors
> with "AWS Marketplace was unable to proceed with the product you
> selected. Please try selecting this product later."

Interesting. You might consider asking your AWS support people why
you're not able to launch AMIs that the Marketplace reports as
available.

> > I'll see about getting the new regions enabled for the Debian account
> > that we use for publishing the AMIs on
> > https://wiki.debian.org/Cloud/AmazonEC2Image/Stretch
> 
> Great, thanks! If you have web access it’s quite easy to activate,
> just click  “My Account”, scroll down to “AWS Regions” and click the
> buttons.

Unfortunately, I don't have the necessary permission to opt our AWS
account into those regions...

noah



Bug#934274: cloud.debian.org: stretch AMIs not available in new regions

2019-08-08 Thread Noah Meyerhans
> Amazon recently launched two new regions, Hong Kong (ap-east-1) and
> Bahrain (me-south-1). All new regions after March 20, 2019 come on a
> opt-in basis [1], thus you might not have seen them show up unless you
> saw the news when they were introduced. Would it be possible to have
> stretch images published for these regions?

Yep, that's an oversight on my end. The need to opt in to new regions
is not something I've fully internalized yet.

The AMIs in the AWS marketplace should be launchable in the new regions.
See https://aws.amazon.com/marketplace/pp/B073HW9SP3 and let me know if
it'll work for you. The Marketplace AMIs are identical to the ones we
publish directly.

I'll see about getting the new regions enabled for the Debian account
that we use for publishing the AMIs on
https://wiki.debian.org/Cloud/AmazonEC2Image/Stretch

noah



Bug#934274: cloud.debian.org: stretch AMIs not available in new regions

2019-08-08 Thread Noah Meyerhans
Package: cloud.debian.org
Severity: important
Control: submitter -1 Tarjei Husøy 

Hi,

Amazon recently launched two new regions, Hong Kong (ap-east-1) and Bahrain 
(me-south-1). All new regions after March 20, 2019 come on an opt-in basis [1], 
thus you might not have seen them show up unless you saw the news when they 
were introduced. Would it be possible to have stretch images published for 
these regions?

Thanks!

[1]: https://docs.aws.amazon.com/general/latest/gr/rande-manage.html

Have a splendid day!

—
Tarjei Husøy
Co-founder, megacool.co


Re: Debian 10 Buster AWS AMI

2019-07-21 Thread Noah Meyerhans
On Sat, Jul 20, 2019 at 11:26:38PM +0300, Mustafa Akdemir wrote:
>Can i use Debian GNU/Linux 9 (Stretch) AMI by upgrading to Debian
>GNU/Linux 10 (Buster) for Wordpress web site server until Debian GNU/Linux
>10 (Buster) AMI will be published. May it cause any problem by upgrading
>Debian GNU/Linux 9 (Stretch) AMI?

Hi Mustafa.

Upgrading an instance from stretch to buster should be essentially the
same as any other stretch->buster upgrade.

If desired, you can also create your own AMIs from scratch using the
same FAI configs that the cloud team uses to generate the official
images. In that case, the resulting image will be essentially
indistinguishable from the official ones, except they'll be owned by
your account. The steps to do this are:

install fai-server, fai-config, and fai-setup-storage (>= 5.7) and
qemu-utils.

Clone the FAI configs from 
https://salsa.debian.org/cloud-team/debian-cloud-images.git

Generate an image using:
/usr/sbin/fai-diskimage --verbose --hostname debian \
    --class DEBIAN,CLOUD,BUSTER,BACKPORTS,EC2,IPV6_DHCP,AMD64,GRUB_CLOUD_AMD64,LINUX_IMAGE_BASE,LAST \
    --size 8G --cspace /path/to/debian_cloud_images/build/fai_config \
    /tmp/image.raw

Then write the resulting /tmp/image.raw file to a dedicated 8 GB EBS
volume with:
# dd if=/tmp/image.raw of=/dev/FOO bs=512k

Then register the EBS volume as an AMI using the 'aws ec2
register-image' command from the awscli package. Be sure to enable
EnaSupport and SriovNetSupport. You may find the wrapper script at
https://salsa.debian.org/noahm/ec2-image-builder/blob/master/bin/volume-to-ami.sh
convenient. This script is used in the publication of the stretch AMIs,
but not the buster AMIs. The process is essentially the same, though.
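
For reference, the registration step that volume-to-ami.sh wraps looks
roughly like this; the volume ID, snapshot ID, and image name below are
placeholders:

  # Snapshot the EBS volume the raw image was written to, then register
  # the snapshot as an AMI with ENA and SR-IOV support enabled.
  SNAP_ID=$(aws ec2 create-snapshot --volume-id vol-0123456789abcdef0 \
      --query SnapshotId --output text)
  aws ec2 wait snapshot-completed --snapshot-ids "$SNAP_ID"

  aws ec2 register-image \
      --name "debian-buster-custom-$(date +%Y%m%d)" \
      --architecture x86_64 \
      --virtualization-type hvm \
      --root-device-name /dev/xvda \
      --block-device-mappings "DeviceName=/dev/xvda,Ebs={SnapshotId=$SNAP_ID}" \
      --ena-support \
      --sriov-net-support simple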

noah



Re: Handling predictive network interfaces 'sometimes'?

2019-12-04 Thread Noah Meyerhans
On Wed, Dec 04, 2019 at 01:16:27PM +1100, paul wrote:
> I'm reworking my old VPN server, and will use the Debian 10 AMI in AWS. I've
> noticed that predictable network interface names are enabled for t3 servers,
> but not t2 - I have test setup on a t2.micro and a t3.micro, and only the t3
> has predictable interface names. I'm trying to write up some Ansible
> templates for this new vpn setup.

Actually, predictable interface names are enabled everywhere.  There are
implementation differences between the t2 and t3 instance types that
change udev's behavior with regard to how interface names are chosen.

The ENA network device used on t3 instances appears on the PCI bus of
the instance.  So when udev inspects the device, it finds information
that it uses to derive a consistent name for the device (see 'udevadm
info /sys/class/net/ens5' for the information that it works from).

T2 instances are based on Xen and use the Xen netfront (vif) interface.
These interfaces aren't PCI devices, so udev can't generate a name based
on the PCI bus ID. Compare the 'udevadm info' output for a t2 with that
of a t3.  Because Debian doesn't enable the MAC address based naming
scheme, udev ends up leaving the kernel's interface name in place on t2.

> I don't play around with iptables a lot (my netadmin-fu is weak), but what's
> the best way to go about writing a set of firewall rules that will satisfy
> both an eth0 and an ens5? Just simply duplicate the rule for each naming
> type? Disable predictable names somehow (google is confusing on how,
> exactly)? I'd like to end up with a template that 'just works' without
> having to know about this t2/t3 difference issue. It's not the end of the
> world if I can't, but I'd like to avoid surprising 'future me' down the
> road.

You can disable predictable interface naming by passing "net.ifnames=0"
to the kernel command line (edit /etc/default/grub) if you want to
disable interface renaming completely.  But a better approach would be
to update your firewall configuration to not hardcode a specific
interface name.  You can probably get what you want by identifying the
interface associated with your default route, which you can find reliably
with "ip -o route show default".
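
For example (illustrative only; the iptables rule is just a placeholder
to show the pattern, adjust it to your own ruleset):

  # Derive the default-route interface name at runtime instead of
  # hardcoding eth0/ens5.
  IFACE=$(ip -o route show default | awk '{print $5}')
  iptables -A INPUT -i "$IFACE" -p tcp --dport 22 -j ACCEPT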

noah



Re: User-data And cloud-init

2019-12-09 Thread Noah Meyerhans
On Mon, Dec 09, 2019 at 06:18:15PM +0200, Michael Kanchuker wrote:
>Is there an official image like with Jessie or Stretch?

Yes, details are at https://wiki.debian.org/Cloud/AmazonEC2Image/Buster

It is not yet available on the AWS Marketplace because we are still
blocked on some legal details...

Unfortunately, I think even buster does not contain a new enough
cloud-init to support text/jinja2 userdata parts.  At least according to
the cloud-init docs, that feature wasn't added until version 19.3, which
we don't even have in sid yet.  Buster contains 18.3.

text/jinja2 is documented for 19.3 at:
https://cloudinit.readthedocs.io/en/19.3/topics/format.html

And note its absence from the 19.2 docs at:
https://cloudinit.readthedocs.io/en/19.2/topics/format.html
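
For reference, once a new enough cloud-init is available, a jinja2
user-data part looks roughly like this (adapted from the 19.3
documentation; the variable names come from the standardized
instance-data keys and are illustrative, not tested against our images):

  ## template: jinja
  #cloud-config
  runcmd:
    - echo "{{ v1.instance_id }} running in {{ v1.region }}" > /tmp/instance.txt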

noah



Re: Configuring /etc/resolv.conf ?

2019-12-06 Thread Noah Meyerhans
On Fri, Dec 06, 2019 at 04:42:18PM +0100, Dick Visser wrote:
> I'm struggling to add custom nameservers to /etc/resolv.conf.
> The file gets overwritten on reboot, but I can't find out where this is done.
> Any ideas?

On our cloud images, resolv.conf is managed by dhclient, which is
invoked by ifup and is responsible for setting up the network interfaces
based on DHCP negotiation with a remote service provided by the cloud
provider.

The /sbin/dhclient-script shell script contains a function
make_resolv_conf(), which generates and installs the new resolv.conf.
If needed, you can redefine that function by placing a script fragment
in /etc/dhcp/dhclient-enter-hooks.d/
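
For example, something along these lines (the nameservers and search
domain are placeholders) dropped into
/etc/dhcp/dhclient-enter-hooks.d/ should do it:

  # Redefine make_resolv_conf() so dhclient writes our preferred
  # nameservers instead of the DHCP-supplied ones.
  make_resolv_conf() {
      cat > /etc/resolv.conf <<EOF
  nameserver 192.0.2.53
  nameserver 198.51.100.53
  search example.internal
  EOF
  }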

noah



Re: IRC meeting: Wedensday, 2019-12-11

2019-12-10 Thread Noah Meyerhans
On Tue, Dec 10, 2019 at 08:46:08AM +0100, Tomasz Rybak wrote:
> I remind everyone that our next meeting will take place
> this Wednesday, 2019-12-11, at 19:00 UTC.

I won't be able to make this one because of work commitments.  Items
I would have wanted to discuss include:

1. Still no word on AWS Marketplace Seller Agreement acceptance from
SPI.  Can a delegate please ping them again?

2. The thread at [1] makes me wonder if we should consider trying to
update cloud-init in our stable images.  This could be done via a full
update to stable, via stable-updates, as we've discussed with regard to
cloud SDKs and tools (arguably cloud-init fits into this category,
despite being vendor-agnostic).  Alternatively, we could simply provide
a package via buster-backports, and include that in the images.  I'll
start a new thread on this on debian-cloud@l.d.o.

3. I have begun work on introducing a "cloud optimized" kernel for
arm64, similar to what we've already got for amd64. [2]

4. I still need to post somewhere (blog, bits, etc) about our daily sid
images, as discussed at the last IRC meeting.

noah

1. https://lists.debian.org/debian-cloud/2019/12/msg8.html
2. https://salsa.debian.org/kernel-team/linux/merge_requests/193



Re: Buster AMI interface names

2019-10-18 Thread Noah Meyerhans
On Fri, Oct 18, 2019 at 08:35:34AM -0700, Ross Vandegrift wrote:
> On Fri, Oct 18, 2019 at 07:15:23AM +0200, Geert Stappers wrote:
> > Where to sent patches for  `configure-pat.sh`?
> 
> I don't know, I'm not familiar with it.

The canonical source for this script is the aws-vpc-nat package for
Amazon Linux. It's available in the Amazon Linux yum repositories.
Sources can be retrieved with "yumdownloader --source aws-vpc-nat"

There aren't currently public vcs repositories for Amazon Linux
packages, so patches to the SRPM would need to be sent via AWS support
channels. I can probably help push changes through.

Note also that the script does not appear to be published under an open
source license, so it probably shouldn't be redistributed publicly. This
issue would also be worth raising with AWS support.



Re: Buster AMI interface names

2019-10-17 Thread Noah Meyerhans
Hello.

On Fri, Oct 18, 2019 at 12:39:15AM +0200, Dick Visser wrote:
> I'm happily using the new Buster AMI, but I noticed that the image has
> consistent device naming enabled, so my instances have their single
> interface called "ens5".

Can you explain why this causes problems for you? We want to keep device
renaming enabled for consistency with standard Debian installations, and
we aren't aware of specific problems with it that necessitate it being
disabled.

noah



Re: Debian 10 Buster AWS AMI

2019-10-05 Thread Noah Meyerhans
> Looking forward to official Buster image soon. Please let me know if I
> can be of any assistance.

Details about the official buster AMIs can be found at
https://wiki.debian.org/Cloud/AmazonEC2Image/Buster, and these AMIs are
available today.

The Marketplace listings, when they are available, will reference the
same images. There are additional license terms that need to be accepted
by SPI before we can publish to the marketplace. We hope to have this
addressed within the next couple of weeks.

Is there a specific reason why you're interested in the Marketplace
listing, as opposed to the AMIs listed on the wiki?

noah



Re: AWS Marketplace re:Invent session - Debian

2019-10-10 Thread Noah Meyerhans
On Thu, Oct 10, 2019 at 03:02:48PM +0100, Marcin Kulisz wrote:
> > This is only email I got about this, so maybe I'm missing something
> > here. But - is this something we shold talk about during sprint next
> > week?
> 
> IMO it makes sense to have a chat about it. If we want Debian to be more
> visible and used it wouldn't hurt to do that.
> 
> But I think problem in here is going to be not with technicalities per se but
> with bringing people working on docker images to the team.

Assuming SPI has signed off on the user agreements for AWS marketplace
access, we'll probably want to spend time on that topic in general, and
ideally get the buster AMIs listed there. We should keep container image
publication in mind as we work on that.

One thing that's worth talking about, regardless of whether the Docker
image maintainers are part of the cloud team or not, is how we control
access to the marketplace publication process. At present, the only way
to publish is via the web console. Access is controlled by IAM
permissions, and we'll need to determine whether or not the permissions
allow us to control publication access on a granular enough basis to
suit our needs. [1] Roles that can publish AMIs should not necessarily
have the ability to publish container images, and vice versa. At the
moment, I'm not sure if that's possible, since there aren't distinct
actions for AMI publication and image publication, and resource level
access isn't supported, so we might have to figure something out. [2]

noah

1. 
https://docs.aws.amazon.com/marketplace/latest/userguide/detailed-management-portal-permissions.html
2. 
https://docs.aws.amazon.com/IAM/latest/UserGuide/list_awsmarketplacecatalog.html



Re: Sprint 2019

2019-10-09 Thread Noah Meyerhans
On Tue, Oct 08, 2019 at 01:22:20PM -0400, Jonathan D. Proulx wrote:
> Do we have an attendee count for room setup, I've variously hear 13
> and 40...

The currently confirmed roster is at
https://wiki.debian.org/Sprints/2019/DebianCloud2019 and shows about 13
people. I wouldn't expect much deviation from that.

noah



Debian 9.12 AMIs available on Amazon EC2

2020-02-10 Thread Noah Meyerhans
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA256

Debian 9.12 (stretch) AMIs for Amazon EC2 are now available.  See the 9.12
release announcement at https://www.debian.org/News/2020/2020020802 and
https://wiki.debian.org/Cloud/AmazonEC2Image/Stretch for more details.

The AMIs are owned by AWS account ID 379101102735 and have the following
details:

Names:
arm64: debian-stretch-hvm-arm64-gp2-2020-02-10-74089
amd64: debian-stretch-hvm-x86_64-gp2-2020-02-10-73984

Regional AMI IDs:

(region) (arm64)   (amd64)
ap-east-1   ami-08821c58afaf4ad03 ami-0b868fbb4c877f437
ap-northeast-1  ami-09de4ee1721cb8f29 ami-0d1a6c2b848d23a6d
ap-northeast-2  ami-0fcb3bb296f15b982 ami-00df969d08a3ea730
ap-south-1  ami-0c046658d5b68c1af ami-09b1626b27596815f
ap-southeast-1  ami-0e7f0bfc03fb01aef ami-0e13f5fb9f9f3c104
ap-southeast-2  ami-0778a46755a9c389d ami-0d63d6457a180078e
ca-central-1ami-054e5b309c4dca528 ami-09ff1197737556c58
eu-central-1ami-023e1f91c848fc49d ami-09415feedc2d22008
eu-north-1  ami-024bf24155b02c7db ami-098c2f770214112a1
eu-west-1   ami-04e1a4e612eedbbc3 ami-079b9207facfcaf0e
eu-west-2   ami-0b4b1178457b3a06c ami-0746bb5dcdb5f08fe
eu-west-3   ami-08516a90c447806d8 ami-0f4c84f7511a7b98e
me-south-1  ami-0a75c46ee538159e8 ami-091adbf53613eeef1
sa-east-1   ami-00199281329b61d5b ami-0305f692e938ece5b
us-east-1   ami-0ea51afb2084a5bf3 ami-066027b63b44ebc0a
us-east-2   ami-0c8f04a4e82d45248 ami-09887484cc0721114
us-west-1   ami-0413e0d0fc9173aed ami-05b1b0e2065a73a53
us-west-2   ami-0b2a776780bc56851 ami-0d270a69ac13b22c3

-----BEGIN PGP SIGNATURE-----

iQIzBAEBCAAdFiEE65xaF5r2LDCTz+zyV68+Bn2yWDMFAl5Bxu8ACgkQV68+Bn2y
WDP+qw//QJiwRs9KcFli2B2KB2eVknaENKHHQou7aUCHUNfHkZ3DcBDsxOEHVRDn
/6+flJj+WDE2HEAcufv+clHpMMizsRfw9JUXRCXw68pR8f/RqYVduFhFxxY9XEc5
OYDvuMyFIrlrF7Ovpy+CuL3TLUsjRTIm9WFhHWkp1Eo6Bqp/P4nuBTi8DCfX1ByR
t9jlX1GPatg8w3uEMOth0ZfnkebgYwcaB56UHUbAo3CU/Bo93+OnLVjnVlFLI8NU
j3uV+/wISDMnMAWoJRyEQ34YSSxZnyT2p0Q+Y9iCNMUm3ojDbgkRiXD7lEWBGAQW
WmmmtaA3iU90OJ20z0WC8wuHv1Adhy2+BUiMkl0XcPRNJa3OAWP2Q9+McXIk/dqv
UFojx+/BfmMtdxy5FOYGXzkIoch0JiFTGWT+I4VGjLnLxADEEXdpxhFnzNFQ4Juq
9Toh7hPZRJJC/lpNDgDOmkUCk48JNvUnnrW9SVRCJ4wpNeKRoYs+qc7AtOk3f8ez
y3KFpmiV1HfCjGO8V1/WWj1aw1mJ0DrYhXorczFAe2PL4durUipkVVYzF2zL2hTw
X7D+Kan+IjT1LakF5LHPzKcrp3czOppCrx1yd9bb9VMt/LQMlsD7wduYxywV11ED
aprlJt7be+XAqbObq7ton/825N7CZjZYUoYOqC9HFbXlCAJNRIE=
=Qf2n
-----END PGP SIGNATURE-----



Debian 10.3 AMIs available on Amazon EC2

2020-02-10 Thread Noah Meyerhans
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA256

Debian 10.3 (buster) AMIs for Amazon EC2 are now available.  See the 10.3
release announcement at https://www.debian.org/News/2020/20200208 and
https://wiki.debian.org/Cloud/AmazonEC2Image/Buster for more details.

The AMIs are owned by AWS account ID 136693071363 and have the following
details:

Names:
arm64: debian-10-arm64-20200210-166
amd64: debian-10-amd64-20200210-166

Regional AMI IDs:

(region) (arm64)   (amd64)
ap-east-1        ami-8bcc88fa          ami-f9c58188
ap-northeast-1   ami-0282bfbfdd650f4cc ami-0fae5501ae428f9d7
ap-northeast-2   ami-0b7aad3b1c1ab5bf5 ami-0522874b039290246
ap-south-1   ami-074b1202dd6104cba ami-03b4e18f70aca8973
ap-southeast-1   ami-074d2e4d3a12447e7 ami-0852293c17f5240b3
ap-southeast-2   ami-0c7faee4092c73179 ami-03ea2db714f1f6acf
ca-central-1 ami-0e1e0dceab7778252 ami-094511e5020cdea18
eu-central-1 ami-02326335b24f04021 ami-0394acab8c5063f6f
eu-north-1   ami-0cc1803d72d492d0c ami-0c82d9a7f5674320a
eu-west-1ami-013be4b5a86a1bff7 ami-006d280940ad4a96c
eu-west-2ami-0a8d5d5404d742349 ami-08fe9ea08db6f1258
eu-west-3ami-04063714230353180 ami-04563f5eab11f2b87
me-south-1   ami-08b83b2026662508c ami-0492a01b319d1f052
sa-east-1ami-001f53dee8cfb04c7 ami-05e16feea94258a69
us-east-1ami-031d1abcdcbbfbd8f ami-04d70e069399af2e9
us-east-2ami-0b1808bb4e7ed5ff5 ami-04100f1cdba76b497
us-west-1ami-09d02110862e0e6f6 ami-014c78f266c5b7163
us-west-2ami-0803feda130c01d47 ami-023b7a69b9328e1f9

-----BEGIN PGP SIGNATURE-----

iQIzBAEBCAAdFiEE65xaF5r2LDCTz+zyV68+Bn2yWDMFAl5BsFAACgkQV68+Bn2y
WDO+kw/+NLDyDol2h9biIiZDE6G3nh7DpG98im0I1e03zviQ5pD98c5ZcQSlqfhE
r1mt2owvLfj+qRQRBc2y22Z/w/ZOOnzzkw88xgIatrOoabPFZiVjVWYsA6Yn69IR
QliRlAVUWaq2GknbSETG6rv32G6/8nUT0L4vKpEcTS4nXmUA4t25mnrlBqtFsK9P
0huqcXrqCG7MlVWgP0xJetFTFLgnXpwfzu/xlzB3woe4npUZEYKJ2PIvL3Gqd1Z7
71aHYCJfyyXRRgZ4hbzsr4WBt1+3JbJyfjqGFPbiZp/3UI1stscCT2srxENDyUBH
mX42lchqol65QPZRO69vwEkSzlePGPVpMonoEitE77690hBhUDrwhYuWIDmBistP
I1cFc0v+0YknJdI/lwRBwz4HhoD3wWWICVGPBrghMVhQA/Pd5CLZvEr/olaudKyk
ok8y60VN97ccczA74pdfTimbWfXKC2SDzFo4Oi4QJsNJmhQPQFfg0viQ4Mp36T/V
iaU1ogB4DWFzNvup+VgOxK9bcNYSN1r1hmOsAWOOBpzsde7IACktfS39EuUowYfm
3x5aoX1QcvXn3fx27rAHUlDxfwEHjssA0WiZT/g89Tv761UPDI7R7Vjg7dlvEi4p
O90yEbuClYcRRQx/DWOe9o92UyHLeiKm9G/iWC1SomtWLOsSIBs=
=AGIn
-----END PGP SIGNATURE-----



Bug#952563: src:cloud-utils: ec2metadata does not speak EC2 IMDSv2

2020-02-25 Thread Noah Meyerhans
Package: src:cloud-utils
Version: 0.31-1
Severity: important

The ec2metadata command queries a well-known link-local endpoint
(169.254.169.254 in Amazon EC2) to obtain information about the instance
on which it runs.  Last year, AWS released "IMDSv2" in an effort to
protect customers against some potentially severe information leaks
related to accidentally proxying this local data to the network.  Details
at
https://aws.amazon.com/blogs/security/defense-in-depth-open-firewalls-reverse-proxies-ssrf-vulnerabilities-ec2-instance-metadata-service/

IMDSv2 makes use of a session-based protocol, requiring clients to first
retrieve a time-limited session token, and then to include that token with
subsequent requests.
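
Roughly (illustrative; the TTL value is arbitrary):

  TOKEN=$(curl -s -X PUT "http://169.254.169.254/latest/api/token" \
      -H "X-aws-ec2-metadata-token-ttl-seconds: 21600")
  curl -s -H "X-aws-ec2-metadata-token: $TOKEN" \
      http://169.254.169.254/latest/meta-data/instance-id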

Because the intended purpose of IMDSv2 is to provide an additional layer
of defense against network abuses, customers utilizing it may choose to
disable IMDSv1.  It's important that we facilitate this use case by
supporting IMDSv2 wherever possible.  We should work to add this support
in both bullseye and buster (and potentially stretch, if feasible).

noah



Buster is available in the AWS Marketplace

2020-03-02 Thread Noah Meyerhans
Please see https://aws.amazon.com/marketplace/pp/B0859NK4HC for details.

Enjoy, and please leave reviews and ratings on the Marketplace.

noah



Re: Contacting cloud team for hosting blends.debian.net [elb...@debian.org: Re: Any sponsoring for a VM possible (Was: Rackspace ending free hosting for blends.debian.net)]

2020-01-23 Thread Noah Meyerhans
On Thu, Jan 23, 2020 at 10:15:33PM +0100, Andreas Tille wrote:
> > Currently the machine has 16GB memory 200GB sdd.  From current usage
> > the minimum requirement is 8GB memory (16 is certainly better since
> > it is running a UDD clone and teammetrics database) and 80GB sdd.
> > 
> > Is there anybody who could offer such a machine for long term usage.

Yes, in general this is something we can and should do.  We haven't done
this yet in the new SPI-owned AWS accounts, which are the right ones to
use here (assuming we decide to use AWS; obviously there are other
options), so there are some administrative and technical details to work
out.

How soon do you need this?

noah



Re: IRC meetings - time slots

2020-01-23 Thread Noah Meyerhans
On Tue, Jan 14, 2020 at 11:56:14PM +0100, Tomasz Rybak wrote:
> Till now we were having meetings on Wednesdays, 19:00UTC.
> It was morning in USA, and evening (but after work) in Europe.
> Should we keep this time, or change it?

I am fairly flexible, at the moment.  Any time within about ±3 hours of
20:00 UTC should work for me.  I can likely do later, but can't commit
to much earlier.

noah



Re: Contacting cloud team for hosting blends.debian.net [elb...@debian.org: Re: Any sponsoring for a VM possible (Was: Rackspace ending free hosting for blends.debian.net)]

2020-01-24 Thread Noah Meyerhans
On Fri, Jan 24, 2020 at 12:39:53PM +, Marcin Kulisz wrote:
> I'd say in the worst case scenario you could host it on one of the old 
> accounts
> and then migrate it out to the final one if it's not ready right now.
> Hopefully you've got this automated :-)

The only reason the old account would be any easier is that we don't
have any expectations that the resources there are provisioned with any
sort of tooling/automation, so we wouldn't feel quite so guilty about
doing things by hand.  It's... technically an option I guess. :)

Bastian and I have talked a little bit about using Lightsail[1] within
the new engineering account.  It doesn't require as much infrastructure
to be set up (VPCs, subnets, route tables, security groups, etc), so it
isn't blocked on us updating our terraform configuration to define all
these resources.  Lightsail has a 4-core, 16 GB RAM, 320 GB SSD option
that sounds good for this use case.

There are a couple issues with Lightsail:

- Buster isn't available yet, so we'd need to start with a stretch
  instance and upgrade it.  Not a show-stopper, but it adds some work.

- No IPv6 support. (If you have an AWS account, please contact your TAM
  and request this.)

noah

1.  https://aws.amazon.com/lightsail/



Bug#866613: cloud-init: Adding Apache v2 license to debian/copyright

2020-02-14 Thread Noah Meyerhans
On Fri, Jun 30, 2017 at 02:14:00PM +, Joonas Kylmälä wrote:
> We need to also take care of asking permission from the authors of
> Debian patches if they can be used under Apache v2 license.

I don't think there's anything copyrightable in any of those
contributions.  Note that none of the debian-specific changes include
any license information as it is.  I'm going to make the change to
debian/copyright to reflect upstream's license.

noah


