Next team meeting: 2024-06-12 20:00 UTC

2024-06-11 Thread Bastian Blank
Hi

Our next team meeting is scheduled for 2024-06-12 20:00 UTC.  We'll be
on jitsi: https://jitsi.debian.social/DebianCloudMeeting20240612.

Regards,
Bastian

-- 
Extreme feminine beauty is always disturbing.
-- Spock, "The Cloud Minders", stardate 5818.4



Bug#1072878: waagent: places systemd unit into /lib

2024-06-09 Thread Bastian Blank
On Sun, Jun 09, 2024 at 06:38:14PM +0200, Chris Hofstaedtler wrote:
> It appears your upload introduced a new file into /lib:
> /lib/systemd/system/waagent.service

Ah damn, I forgot that Debian builds are done in a completely
unsandboxed environment.  And someone added a broken auto-detection
which then triggers.

Bastian

-- 
Punishment becomes ineffective after a certain point.  Men become insensitive.
-- Eneg, "Patterns of Force", stardate 2534.7



Re: Kernel features and Cloud (and GCE)

2024-05-27 Thread Bastian Blank
Hi Andrew

On Wed, May 22, 2024 at 07:44:33AM -0700, Andrew Jorgensen wrote:
> The Debian images in Google Compute Engine use the Debian cloud
> kernel. This has been working well for us, because it includes the
> VirtIO, NVMe, and gVNIC drivers that are needed for most GCE machine
> types. As we move forward, additional kernel features are needed to
> support all features of current and future machine types.
> 
> For example, we’re going to make an Intel 6300ESB watchdog device
> available, and that needs a driver that’s been in Linux a long time
> but isn’t enabled in the cloud kernel. For that one, another Debian
> user +1’d the request because it would benefit users of other
> KVM-based clouds (including private clouds). We can enumerate other
> examples, but many of those also require backports or a future Debian
> release.

We already backport the Microsoft MANA network driver.  So backports
to stable are not that much of a problem, if someone does the work.
Backports to oldstable are most likely not happening, as that target is
too far off.

> Recently in response to another feature request for the cloud kernel,
> Noah Meyerhans mentioned that, “historically the cloud kernels have
> specifically targeted Amazon EC2 and Microsoft Azure.”

Yes, this is the documented target.  We never properly added GCP,
because no communication happened.  I think we can do that, if someone
does the due diligence and knows the documentation better than we do.

> So we have the problem that the Debian cloud kernel supports some, but
> not all, of the devices our shared users need, and we’re not sure of
> the right way to solve that. We wondered if we should switch the
> images to the generic kernel, or if there’s a way we could help the
> cloud kernel support more clouds, or if there’s a better solution we
> haven’t thought of.

We can support more clouds.  It is just a matter of taking care of it.

I am currently experimenting with splitting the modules into multiple
sets, as almost all other distributions already do.  We would then not
need multiple builds, and more targeted packages would be possible if
needed.

Regards,
Bastian

-- 
Conquest is easy. Control is not.
-- Kirk, "Mirror, Mirror", stardate unknown



Re: ocfs2_dlmfs missing from the cloud kernel

2024-05-17 Thread Bastian Blank
On Fri, May 17, 2024 at 12:44:32PM +0200, Bastian Blank wrote:
> On Fri, May 17, 2024 at 12:31:51PM +0200, Thomas Goirand wrote:
> > how do I change this?
> You install the non-cloud kernel.

The cloud kernel is limited in scope.  And the decision was that not
everything you can do on platforms is in scope.

So you can open a bug report, but for now I would say it is out of
scope.  Cloud platforms tend to provide such things, and you don't
define them on your own.

Bastian

-- 
It would seem that evil retreats when forcibly confronted.
-- Yarnek of Excalbia, "The Savage Curtain", stardate 5906.5



Re: ocfs2_dlmfs missing from the cloud kernel

2024-05-17 Thread Bastian Blank
On Fri, May 17, 2024 at 12:31:51PM +0200, Thomas Goirand wrote:
> how do I change this?

You install the non-cloud kernel.

Bastian

-- 
Those who hate and fight must stop themselves -- otherwise it is not stopped.
-- Spock, "Day of the Dove", stardate unknown



Re: Debian 10 backports repo moved to archives

2024-04-16 Thread Bastian Blank
On Mon, Apr 15, 2024 at 11:45:33AM -0700, Noah Meyerhans wrote:
> The Debian cloud team also builds and ships images with buster-backports
> enabled, and will need to deal with this change.

I just disabled it:
https://salsa.debian.org/cloud-team/debian-cloud-images/-/merge_requests/403

Bastian

-- 
If some day we are defeated, well, war has its fortunes, good and bad.
-- Commander Kor, "Errand of Mercy", stardate 3201.7



Re: RESCHEDULED: Next team meeting: 2024-04-11 20:00 UTC

2024-04-09 Thread Bastian Blank
On Mon, Apr 08, 2024 at 04:32:37PM -0700, Ross Vandegrift wrote:
> Apologies, I didn't pay enough attention.  Bastian- would 4/18 work?

Sure.

Bastian

-- 
Emotions are alien to me.  I'm a scientist.
-- Spock, "This Side of Paradise", stardate 3417.3



Re: Next team meeting: 2024-04-10 20:00 UTC

2024-04-04 Thread Bastian Blank
On Thu, Apr 04, 2024 at 11:24:22AM -0700, Ross Vandegrift wrote:
> Tues 4/9, Thurs 4/11, or Fri 4/12 @ 20:00 UTC would work with me.

I could do Thursday and Friday.

Bastian

-- 
Sometimes a man will tell his bartender things he'll never tell his doctor.
-- Dr. Phillip Boyce, "The Menagerie" ("The Cage"),
   stardate unknown.



Bug#1068107: cloud.debian.org: pull images with compromised xz packages

2024-04-01 Thread Bastian Blank
On Sat, Mar 30, 2024 at 12:44:35PM -0700, Ross Vandegrift wrote:
> Finally, apologies for not being able to do this myself - I still do not have
> my account setup for access to core machines.

Tasks related to this incident are tracked here:
https://salsa.debian.org/ftp-team/xz-2024-incident/-/issues/8

Bastian

-- 
Another dream that failed.  There's nothing sadder.
-- Kirk, "This side of Paradise", stardate 3417.3



Re: Call to GCE metadata/compute in nocloud buster image

2024-03-22 Thread Bastian Blank
On Fri, Mar 22, 2024 at 10:03:29AM +0100, Stephan Müller wrote:
> Can this be related to the underlying genericcloud image? So far, I was 
> unable to find anything with "computeMetadata" in the systemlogs of the VMs.
> I checked the boot log (including cloud-init process) using virsh console 
>  but also found no hint to computeMetadata

This is from cloud-init:
| cloudinit/sources/DataSourceGCE.py:MD_V1_URL = "http://metadata.google.internal/computeMetadata/v1/"
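A probe against that URL can be sketched as follows.  This only mimics the
detection and is not cloud-init's actual code; the Metadata-Flavor header is
what the GCE metadata server requires, and outside GCE the hostname simply
does not answer:

```shell
# Sketch: the metadata host only answers inside Google Compute Engine,
# so anywhere else the request fails and we fall through to "not on GCE".
MD_V1_URL="http://metadata.google.internal/computeMetadata/v1/"
if curl --fail --silent --max-time 2 \
        -H "Metadata-Flavor: Google" "$MD_V1_URL" >/dev/null 2>&1; then
  result="on GCE"
else
  result="not on GCE"
fi
echo "$result"
```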

Bastian

-- 
Yes, it is written.  Good shall always destroy evil.
-- Sirah the Yang, "The Omega Glory", stardate unknown



Re: Debian 11.9 Azure image release enquiry

2024-02-19 Thread Bastian Blank
Hi Amrutha

On Sun, Feb 18, 2024 at 10:42:00PM +, Devidas Shanbhag, Amrutha wrote:
> Debian 11.9 was released on February 10th, 2024. 
> https://www.debian.org/News/2024/2024021002 When can we expect the vm images 
> in Azure? The images are already available for AWS and GCP.

For Azure there is an additional step to publish them, and no one had
managed to click on it yet.  I've done that now, so they should be
available in a few hours.

> On a related note, there was an updated Debian 11.8 release on January 4th 
> 2024. But the latest Debian VM in Azure “Debian:debian-11:11:0.20231228.1609” 
> is still from December 2023.

Debian 12 is the current stable release.  It is better maintained
overall.

Bastian

-- 
Those who hate and fight must stop themselves -- otherwise it is not stopped.
-- Spock, "Day of the Dove", stardate unknown



Re: Next team meeting: 2024-02-14 20:00 UTC

2024-02-14 Thread Bastian Blank
On Wed, Feb 14, 2024 at 12:04:58PM -0800, Ross Vandegrift wrote:
> On Fri, Feb 09, 2024 at 08:56:19AM -0800, Ross Vandegrift wrote:
> > Our next team meeting is scheduled for 2024-02-14 @ 20:00UTC.  We'll be
> > on jitsi at: https://jitsi.debian.social/DebianCloudMeeting20240214
> 
> Looks like that prediction was incorrect - I guess jitsi.debian.social
> has been disabled due to abuse.  This meeting won't be able to happen,
> we can look for alternatives for next meeting.

And I completely missed the date.  Sorry about that.

Bastian

-- 
The heart is not a logical organ.
-- Dr. Janet Wallace, "The Deadly Years", stardate 3479.4



Re: Resignation as cloud team delegate

2024-02-05 Thread Bastian Blank
On Mon, Feb 05, 2024 at 08:59:56PM +0200, Jonathan Carter wrote:
> Let me know what you think,

Looks good.

Bastian

-- 
Superior ability breeds superior ambition.
-- Spock, "Space Seed", stardate 3141.9



Re: using zstd for qcow2 cloud images

2023-12-20 Thread Bastian Blank
On Wed, Dec 20, 2023 at 11:45:18AM -0800, Ross Vandegrift wrote:
> > What is the minimum qemu version for using those files?
> 5.1, I think: qemu seems to have a single block implementation for qemu and
> qemu-img.  zstd was added in [1] & [2], which are in their v5.1.0 tag.

In Debian it is 6.1, aka Bookworm:

| qemu (1:6.1+dfsg-7) unstable; urgency=medium
|  * enable zstd compression support (Build-Depends)

So I assume using this would currently exclude quite a few users.

Bastian

-- 
Youth doesn't excuse everything.
-- Dr. Janice Lester (in Kirk's body), "Turnabout Intruder",
   stardate 5928.5.



Re: using zstd for qcow2 cloud images

2023-12-16 Thread Bastian Blank
On Sat, Dec 16, 2023 at 06:15:43PM +0100, Thomas Lange wrote:
> Is it worth to switch?

What is the minimum qemu version for using those files?

Bastian

-- 
Only a fool fights in a burning house.
-- Kank the Klingon, "Day of the Dove", stardate unknown



Next team meeting: 2023-11-08 20:00 UTC

2023-11-07 Thread Bastian Blank
Hi

Our next team meeting is scheduled for 2023-11-08 20:00 UTC.  We'll be
on jitsi: https://jitsi.debian.social/DebianCloudMeeting20231108.

Regards,
Bastian

-- 
The joys of love made her human and the agonies of love destroyed her.
-- Spock, "Requiem for Methuselah", stardate 5842.8



Re: Moving AWS auth from IAM users to salsa.debian.org

2023-10-28 Thread Bastian Blank
On Sun, Aug 06, 2023 at 09:05:39PM +0200, Bastian Blank wrote:
> On Thu, Jul 27, 2023 at 01:39:39PM +0200, Bastian Blank wrote:
> > There exists now a branch "use-identity".  This seems to work with
> > Firefox.  At least the authentication part itself works and I already
> > recorded the correct URL in the application.  You just can't use the
> > extension button, no idea why yet.
> I found the problem, this is different support by Chromium and Firefox
> of the extension stuff.  So both require different manifests.

The extension now works completely in Firefox.  Just call "make".

Bastian

-- 
Wait!  You have not been prepared!
-- Mr. Atoz, "Tomorrow is Yesterday", stardate 3113.2



Bug#1054240: Grub install failure with grub-cloud-amd64

2023-10-19 Thread Bastian Blank
On Thu, Oct 19, 2023 at 07:31:08PM +0200, Alexis CAMILLERI wrote:
> I suggest using grub-probe -t disk instead of grub-probe -t device.
> Disk param will return the disk name instead of the partition, so the sed
> command can be removed and raid device will work.
> 
> local basedev=$(grub-probe -t disk /boot/)

Hmm, not sure why I missed that when I built that.  Will take a look
later.
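A minimal sketch of the failure mode Alexis describes.  The sed pattern here
is an illustrative reconstruction of what the postinst effectively does, not
the exact script: `grub-probe -t device` returns the partition, which is then
reduced to the disk by stripping trailing digits, and that heuristic cannot
work for RAID-style device names, while `grub-probe -t disk` returns the disk
directly:

```shell
# Illustrative reconstruction: turning a partition device into its disk
# by stripping trailing digits, as the postinst's sed effectively does.
strip_part() { printf '%s\n' "$1" | sed -e 's/[0-9]*$//'; }

strip_part /dev/sda1    # plain disk: yields /dev/sda, correct
strip_part /dev/md0p1   # RAID: yields /dev/md0p, which does not exist
```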

Bastian

-- 
If there are self-made purgatories, then we all have to live in them.
-- Spock, "This Side of Paradise", stardate 3417.7



Bug#1054240: Grub install failure with grub-cloud-amd64

2023-10-19 Thread Bastian Blank
On Thu, Oct 19, 2023 at 07:31:08PM +0200, Alexis CAMILLERI wrote:
> Installing grub on an i386 server with raid partitioning does not work
> because the script does not manage a raid mount for /boot, due to
> https://salsa.debian.org/cloud-team/grub-cloud/-/blob/debian/0.1.0/debian/grub-cloud-amd64.postinst#L6

You missed the description of this package:

| You don't want to use this package outside of cloud images.

Bastian

-- 
It is a human characteristic to love little animals, especially if
they're attractive in some way.
-- McCoy, "The Trouble with Tribbles", stardate 4525.6



Re: S3-backed snapshot implementation on AWS?

2023-09-24 Thread Bastian Blank
On Sun, Sep 24, 2023 at 04:09:31PM -0700, Noah Meyerhans wrote:
> I agree that it would be best to design something more cloud-oriented.
> However, if there's an existing infrastructure that can be moved as a
> "lift & shift" into AWS now, with architectural refactoring happening
> later, that's an OK place to start.

Currently it does not even support S3.  And the existing software
already seems to have two separate services for ingestion and the web
interface.

But if there is a tool that reads all files from the storage to check
them for changes (some fsck-type tool), it should not be used that way
on such storage.

Bastian

-- 
Is truth not truth for all?
-- Natira, "For the World is Hollow and I have Touched
   the Sky", stardate 5476.4.



Re: S3-backed snapshot implementation on AWS?

2023-09-24 Thread Bastian Blank
On Sun, Sep 24, 2023 at 09:21:16PM +0200, Michael Kesper wrote:
> Be aware that AWS S3, while featuring negligible staorage cost,
> can become very expensive if ever the need arises to get the data back
> out of AWS:
> https://discourse.nixos.org/t/the-nixos-foundations-call-to-action-s3-costs-require-community-support/28672

Sure.  That's why the question was whether AWS would sponsor that.

We could of course also ask Cloudflare whether they would sponsor
something like that.  I have absolutely no idea if that infrastructure
would be usable for it.

Regards,
Bastian

-- 
Behind every great man, there is a woman -- urging him on.
-- Harry Mudd, "I, Mudd", stardate 4513.3



Re: Changes to sources.list

2023-09-22 Thread Bastian Blank
Hi

On Wed, Sep 20, 2023 at 10:48:12AM +, Sathish Mathimaran wrote:
> I was testing out the Debian 12 release and found that the sources.list file 
> is different from how it used to be in Debian 11. Our team has written 
> automations around the sources.list to list the security packages and the 
> versions. I have explained the issue we are facing in the stack over flow 
> page 
> (https://stackoverflow.com/questions/77140903/debian-12-bookworm-how-to-get-the-security-repositories-list-from-sources-list).
>  Can you please guide me on how to get the sources.list in the format similar 
> to Debian 11?

You can still use the old sources.list format.  For more information
please take a look at
https://manpages.debian.org/bookworm/apt/sources.list.5.en.html

Otherwise, please describe your actual problem, because you seem to be
misconfiguring apt somehow.
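For reference, a hypothetical security entry in both formats — the exact
suites and components in the images may differ; in Debian 12 images the
deb822 version ships as /etc/apt/sources.list.d/debian.sources:

```
# Debian 11 style, one line per entry (still supported in Debian 12):
deb https://deb.debian.org/debian-security bookworm-security main

# Debian 12 style, deb822 stanza (/etc/apt/sources.list.d/debian.sources):
Types: deb
URIs: https://deb.debian.org/debian-security
Suites: bookworm-security
Components: main
```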

Regards,
Bastian

-- 
You!  What PLANET is this!
-- McCoy, "The City on the Edge of Forever", stardate 3134.0



Re: S3-backed snapshot implementation on AWS?

2023-09-22 Thread Bastian Blank
Hi Lucas

On Fri, Sep 22, 2023 at 08:42:10AM +0200, Lucas Nussbaum wrote:
> Could we use the Debian AWS account to host that service?

I would assume that a service like snapshot would be within the scope
for our AWS usage.  Noah?

>   It would
> require one fairly powerful VM, and a large S3 bucket (approximately
> 150-200 TB).

200 TB should be no problem.

However, we need to talk about that "one […] VM", because this sounds
like you intend to use AWS as VM hosting, which it is not.

Please think about this in terms of services; there should be at least
two:
- the ingestor, which can only exist once and writes, and
- the web frontend, which should be able to exist several times and
  only reads.

So you want to plan on running multiple web frontends behind load
balancers and maybe even CloudFront.

Regards,
Bastian

-- 
I object to intellect without discipline;  I object to power without
constructive purpose.
-- Spock, "The Squire of Gothos", stardate 2124.5



Re: Next team meeting: 2023-09-13 20:00 UTC

2023-09-13 Thread Bastian Blank
On Mon, Sep 11, 2023 at 03:43:05PM -0700, Ross Vandegrift wrote:
> Our next team meeting is scheduled for 2023-09-13 20:00 UTC.  We'll be
> on jitsi: https://jitsi.debian.social/DebianCloudMeeting20230913.

I most likely won't be able to attend.

Regards,
Bastian

-- 
Superior ability breeds superior ambition.
-- Spock, "Space Seed", stardate 3141.9



Bug#1051421: cloud-init: Avoid hard dependency on isc-dhcp-client

2023-09-07 Thread Bastian Blank
On Thu, Sep 07, 2023 at 05:50:41PM +0200, Bastian Blank wrote:
> When the following commit is includes:

Just for background information: cloud-init depends on isc-dhcp-client
because it uses the dhclient binary.  So removing that dependency is
not feasible right now.

Bastian

-- 
Fascinating, a totally parochial attitude.
-- Spock, "Metamorphosis", stardate 3219.8



Bug#1051421: cloud-init: Avoid hard dependency on isc-dhcp-client

2023-09-07 Thread Bastian Blank
On Thu, Sep 07, 2023 at 05:36:06PM +0200, Michael Prokop wrote:
> Please consider adapting the Depends for the new cloud-init version
> in Debian accordingly, so one can use e.g. cloud-init with udhcpc
> (which also allows co-installation next to dhcpcd), but without
> having to also have isc-dhcp-client present.

When the following commit is included:

| commit ce7d597a65413f8ed14154f8a0fe64dda126d1f3
| Author: Jean-François Roche 
| Date:   Wed Jul 19 00:37:24 2023 +0200
| 
| net/dhcp: add udhcpc support (#4190)

Bastian

-- 
Too much of anything, even love, isn't necessarily a good thing.
-- Kirk, "The Trouble with Tribbles", stardate 4525.6



Re: Moving AWS auth from IAM users to salsa.debian.org

2023-08-10 Thread Bastian Blank
On Mon, Jul 24, 2023 at 07:56:03PM +0200, Bastian Blank wrote:
> On Sat, Jan 21, 2023 at 11:58:26PM +0100, Bastian Blank wrote:
> > Please verify that this login works for you.  I would like to remove
> > existing users in a few weeks.
> I will cleanup the remaining users at the end of the week.

This is done now.

Regards,
Bastian

-- 
Another Armenia, Belgium ... the weak innocents who always seem to be
located on a natural invasion route.
-- Kirk, "Errand of Mercy", stardate 3198.4



Re: Moving AWS auth from IAM users to salsa.debian.org

2023-08-06 Thread Bastian Blank
On Thu, Jul 27, 2023 at 01:39:39PM +0200, Bastian Blank wrote:
> There exists now a branch "use-identity".  This seems to work with
> Firefox.  At least the authentication part itself works and I already
> recorded the correct URL in the application.  You just can't use the
> extension button, no idea why yet.

I found the problem: Chromium and Firefox support the extension
machinery differently, so the two require different manifests.

Bastian

-- 
If some day we are defeated, well, war has its fortunes, good and bad.
-- Commander Kor, "Errand of Mercy", stardate 3201.7



Bug#1042367: bookworm cloud images missing since 20230725 (only backports images)

2023-07-27 Thread Bastian Blank
Control: tags -1 pending

On Thu, Jul 27, 2023 at 06:44:56AM +0200, Martin Pitt wrote:
> We could adjust our scripts for the renaming, but this smells like a bug --
> it may be nice to have cloud images with some/all backports enabled, but can 
> we
> also have the "pure bullseye" images back?

Yeah, that's broken.  I mixed up names in the config.

Bastian

-- 
Without freedom of choice there is no creativity.
-- Kirk, "The return of the Archons", stardate 3157.4



Re: Moving AWS auth from IAM users to salsa.debian.org

2023-07-27 Thread Bastian Blank
Hi Antonio

On Wed, Jul 26, 2023 at 11:17:42PM +0200, Bastian Blank wrote:
> I know.  You are welcome to try and get this to work.

There is now a branch "use-identity".  It seems to work with Firefox:
at least the authentication part itself works, and I already recorded
the correct URL in the application.  You just can't use the extension
button; no idea why yet.

Bastian

-- 
Mind your own business, Spock.  I'm sick of your halfbreed interference.



Re: Moving AWS auth from IAM users to salsa.debian.org

2023-07-26 Thread Bastian Blank
Hi Antonio

On Wed, Jul 26, 2023 at 05:34:53PM -0300, Antonio Terceiro wrote:
> I have to say, though, that being forced to use Chromium is not exactly
> fun, as I use Firefox for everything else.

I know.  You are welcome to try and get this to work.  The extension
itself should work fine; I think I actually got it working once.  But
to make it useful, we need to record the callback URL at the IdP, and
that has the form

| moz-extension://3ede8a66-20f0-4590-8842-2a75c248bce5/callback.html

But the UUID in this URL is installation-specific and recorded in the
extensions.webextensions.uuids config setting.  I could find no way to
override that from the extension itself to get a stable value across
installations.
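For illustration, that per-installation mapping lives in the Firefox
profile's prefs.js — the extension ID below is made up; the UUID is the one
from the URL above:

```
// prefs.js: extensions.webextensions.uuids maps extension IDs to the
// per-installation UUID used in moz-extension:// URLs.
user_pref("extensions.webextensions.uuids",
  "{\"aws-login@example.org\":\"3ede8a66-20f0-4590-8842-2a75c248bce5\"}");
```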

Bastian

-- 
There is a multi-legged creature crawling on your shoulder.
-- Spock, "A Taste of Armageddon", stardate 3193.9



Re: Moving AWS auth from IAM users to salsa.debian.org

2023-07-24 Thread Bastian Blank
Hi

On Mon, Jul 24, 2023 at 08:29:41PM +0200, Lucas Nussbaum wrote:
> How can I verify that this login works for me?
> I installed the extension. What's next?

You can access the extension this way:
https://salsa.debian.org/-/snippets/648

Or go directly to
chrome-extension://afldafidllnmipiemfnjodgcabaepfhl/index.html

You'll see an account selection, this is hopefully self-explanatory.

Bastian

-- 
You!  What PLANET is this!
-- McCoy, "The City on the Edge of Forever", stardate 3134.0



Re: Moving AWS auth from IAM users to salsa.debian.org

2023-07-24 Thread Bastian Blank
Hi

On Sat, Jan 21, 2023 at 11:58:26PM +0100, Bastian Blank wrote:
> Please verify that this login works for you.  I would like to remove
> existing users in a few weeks.

I will clean up the remaining users at the end of the week.

Regards,
Bastian

-- 
You're too beautiful to ignore.  Too much woman.
-- Kirk to Yeoman Rand, "The Enemy Within", stardate unknown



Re: help wanted, standing up mirroring sync proxies on public cloud

2023-07-03 Thread Bastian Blank
Hi Julien

On Wed, Apr 26, 2023 at 02:10:05PM +0200, Julien Cristau wrote:
> I haven't been able to get connections to the host working again after
> the dhcp issues, can we maybe start over, using debian 11, and if
> possible some form of OOB access?

I finally managed to get to it; sorry about the missing response, but
everything is weird right now.  The instance was re-created with
Debian 11.  Nothing new on OOB though.

Regards,
Bastian

-- 
Sometimes a feeling is all we humans have to go on.
-- Kirk, "A Taste of Armageddon", stardate 3193.9



Bug#1038691: bookworm cloud images have broken "netdev" group

2023-06-20 Thread Bastian Blank
Package: cloud.debian.org
Severity: serious

Hi Martin

Thanks for reporting this.

On Tue, Jun 20, 2023 at 08:03:48AM +0200, Martin Pitt wrote:
> This isn't done by any package postinst -- `grep -r netdev 
> /var/lib/dpkg/info/*`
> shows no relevant hits. So this must be somewhere in the scripts that build 
> the
> cloud images.

Actually this seems to come from cloud-init, where we request the
initial user to be added to the netdev group, which does not exist.

> Where can I file a bug about this? https://cloud.debian.org/images/cloud/ nor
> the wiki mention the tools which are used to build these images.

The BTS pseudo-package is cloud.debian.org.

Bastian

-- 
Suffocating together ... would create heroic camaraderie.
-- Khan Noonian Singh, "Space Seed", stardate 3142.8



Re: Network on debian-cloud image

2023-05-22 Thread Bastian Blank
Hi Jeremy

On Mon, May 22, 2023 at 08:14:14AM +, Jeremy Collin wrote:
> We are seeing right now that you have change the network management to 
> netplan for debian12.

Yep.  And this means network setup via cloud-init will actually work in
a lot more ways than before.

> One of my colleague wants to revert this for our customers and make all 
> happen with ifupdown,
> so our customers won't be too disturbed by this (undocumented?) change.

It is documented in the source history; we don't maintain release notes
for those.

> I want to embrace the change and use netplan, as we also use systemd and 
> cloud-init, and follow as close as possible.
> Does the debian team have a recommendation on that matter?

I don't think we will support ifupdown with cloud-init any longer.
Debian 11 needed a rather odd setup to make DHCPv6 work without
breaking setups where it is not available.  You need systemd-networkd
to be able to use the managed flag.  Also, IPv6-only is not possible
with ifupdown.

Bastian

-- 
Only a fool fights in a burning house.
-- Kank the Klingon, "Day of the Dove", stardate unknown



Next team meeting: 2023-05-10 20:00 UTC

2023-05-09 Thread Bastian Blank
Hi

Our next team meeting is scheduled for 2023-05-10 20:00 UTC.  We'll be
on jitsi: https://jitsi.debian.social/DebianCloudMeeting20230510.

Regards,
Bastian

-- 
Another dream that failed.  There's nothing sadder.
-- Kirk, "This side of Paradise", stardate 3417.3



Re: help wanted, standing up mirroring sync proxies on public cloud

2023-03-13 Thread Bastian Blank
On Sat, Mar 11, 2023 at 12:43:52PM +0100, Julien Cristau wrote:
> I finally got around to the initial setup.  A couple of things so far:
> - the machine is running bookworm; that's going to cause extra work
>   initially.  I'll give it a try anyway, since it's essentially work
>   we'll need to do regardless, but it came as a surprise.

My mail stated Debian 12.

> - can you allow outbound dns in the firewall rules?  The provided resolver
>   doesn't look like it supports dnssec.

Yeah, found that out now.  Will allow DNS.  I'll also enable DNSSEC
verification on the resolver, but that will not fix the problem, as it
still does not act like a fully DNSSEC-capable resolver: it neither
sets the AD bit nor provides RRSIG records to the clients.

Bastian

-- 
If some day we are defeated, well, war has its fortunes, good and bad.
-- Commander Kor, "Errand of Mercy", stardate 3201.7



Re: help wanted, standing up mirroring sync proxies on public cloud

2023-02-17 Thread Bastian Blank
On Thu, Feb 16, 2023 at 06:14:53PM +0100, Bastian Blank wrote:
> On Thu, Feb 16, 2023 at 01:23:41PM +0100, Bastian Blank wrote:
> > Okay, 4TB it is.  We can always grow if we need to.
> Setup complete.  IP is 2600:1f13:fb2:f400:6b1e:beae:ebbc:c6a

Some remarks:

Please always communicate the range 2600:1f13:fb2:f400::/56 for access
control purposes.  It is dedicated to you and might reduce future
problems.

I assigned a more stable IPv4 address: 35.82.129.173 (in AWS terms, an
Elastic IP).

Please provide some feedback when you had the chance to look at it.

Regards,
Bastian

-- 
Virtue is a relative term.
-- Spock, "Friday's Child", stardate 3499.1



Re: help wanted, standing up mirroring sync proxies on public cloud

2023-02-16 Thread Bastian Blank
On Thu, Feb 16, 2023 at 01:23:41PM +0100, Bastian Blank wrote:
> Okay, 4TB it is.  We can always grow if we need to.

Setup complete.  IP is 2600:1f13:fb2:f400:6b1e:beae:ebbc:c6a

Regards,
Bastian

-- 
Knowledge, sir, should be free to all!
-- Harry Mudd, "I, Mudd", stardate 4513.3



Re: help wanted, standing up mirroring sync proxies on public cloud

2023-02-16 Thread Bastian Blank
Hi

On Wed, Feb 15, 2023 at 02:47:00PM +0100, Julien Cristau wrote:
> On Wed, Feb 15, 2023 at 02:16:08PM +0100, Bastian Blank wrote:
> > On Mon, Feb 13, 2023 at 10:12:01AM +0100, Bastian Blank wrote:
> > > - One dedicated /56 per region for all DSA stuff
> > > - One instance, m6g.2xlarge, arm64, Debian 12 (also possible is Debian
> > >   11)
> > > - One dedicated data volume with ext4, on instance creation mounted on
> > >   /srv
> > > - SSH keys taken from https://salsa.debian.org, only from jcristau
> > > - Network filter setup via two security groups
> > >   - administrative, which allows
> > > - ingress and egress of icmp
> > > - egress to http and https
> > > - unrestricted ingress and egress to the Debian IPv6 networks at
> > >   manda, grnet, ubc
> > >   - syncproxy, which allows
> > > - egress to ssh
> > > - ingress to rsync and rsync-ssl (1831)
> > > - ingress from fasolo and genome research for sibelius
> > 
> > Okay, no response, so looks like that's okay then.
> Sorry it wasn't clear to me that there was a question.

No problem, it was unclear.

> > Still needed information:
> > How much space does it need?
> The mirror would fit within 4T at the moment, though it's probably fair
> to assume that'll keep going up slightly.

Okay, 4TB it is.  We can always grow if we need to.

Regards,
Bastian

-- 
Youth doesn't excuse everything.
-- Dr. Janice Lester (in Kirk's body), "Turnabout Intruder",
   stardate 5928.5.



Re: help wanted, standing up mirroring sync proxies on public cloud

2023-02-15 Thread Bastian Blank
Hi

On Mon, Feb 13, 2023 at 10:12:01AM +0100, Bastian Blank wrote:
> - One dedicated /56 per region for all DSA stuff
> - One instance, m6g.2xlarge, arm64, Debian 12 (also possible is Debian
>   11)
> - One dedicated data volume with ext4, on instance creation mounted on
>   /srv
> - SSH keys taken from https://salsa.debian.org, only from jcristau
> - Network filter setup via two security groups
>   - administrative, which allows
> - ingress and egress of icmp
> - egress to http and https
> - unrestricted ingress and egress to the Debian IPv6 networks at
>   manda, grnet, ubc
>   - syncproxy, which allows
> - egress to ssh
> - ingress to rsync and rsync-ssl (1831)
> - ingress from fasolo and genome research for sibelius

Okay, no response, so looks like that's okay then.

Still needed information:

How much space does it need?

Bastian

-- 
We do not colonize.  We conquer.  We rule.  There is no other way for us.
-- Rojan, "By Any Other Name", stardate 4657.5



Re: help wanted, standing up mirroring sync proxies on public cloud

2023-02-13 Thread Bastian Blank
Hi

On Wed, Feb 08, 2023 at 09:26:55PM -0800, Ross Vandegrift wrote:
> Okay, great.  We're going to go ahead and work on deploying this.
> Here's what we're going to deploy, please let us know if anything sounds
> wrong:

This is now
https://salsa.debian.org/cloud-admin-team/debian-cloud-hosting-setup/-/merge_requests/2

Sorry, Julien, you can't currently read that.

Done
- One dedicated /56 per region for all DSA stuff
- One instance, m6g.2xlarge, arm64, Debian 12 (also possible is Debian
  11)
- One dedicated data volume with ext4, on instance creation mounted on
  /srv
- SSH keys taken from https://salsa.debian.org, only from jcristau
- Network filter setup via two security groups
  - administrative, which allows
- ingress and egress of icmp
- egress to http and https
- unrestricted ingress and egress to the Debian IPv6 networks at
  manda, grnet, ubc
  - syncproxy, which allows
- egress to ssh
- ingress to rsync and rsync-ssl (1831)
- ingress from fasolo and genome research for sibelius

Not yet done is
- elastic IP, so the IPv4 address will change, for example on re-create
- definition of required size of data volume

Bastian

-- 
No more blah, blah, blah!
-- Kirk, "Miri", stardate 2713.6



Re: help wanted, standing up mirroring sync proxies on public cloud

2023-02-11 Thread Bastian Blank
On Sat, Feb 11, 2023 at 11:59:16AM +0100, Julien Cristau wrote:
> On Wed, Feb 08, 2023 at 09:26:55PM -0800, Ross Vandegrift wrote:
> > Do you have a list of hosts that should be permitted ssh access?
> Can we (DSA) control the cloud-side firewall?  If not then we'll
> probably want it open to the world and configure firewalling on the
> host.

The plan is to have no unmanaged resources, so those will be managed
via merge requests, which DSA can open.

So we need an initial list of ranges and services to allow on ingress
and egress.

Bastian

-- 
The more complex the mind, the greater the need for the simplicity of play.
-- Kirk, "Shore Leave", stardate 3025.8



Re: help wanted, standing up mirroring sync proxies on public cloud

2023-02-08 Thread Bastian Blank
On Wed, Feb 08, 2023 at 09:26:55PM -0800, Ross Vandegrift wrote:
> - 8 cpu arm64, 16G of RAM (in AWS-speak: c6g.2xlarge)

My thought was m6g.2xlarge, with a more useful amount of RAM (32 GB).
While rsync is CPU-intensive, it also needs a lot of cache.

Bastian

-- 
Military secrets are the most fleeting of all.
-- Spock, "The Enterprise Incident", stardate 5027.4



Re: help wanted, standing up mirroring sync proxies on public cloud

2023-01-28 Thread Bastian Blank
Hi Julien

On Thu, Mar 17, 2022 at 12:03:18PM +0100, Julien Cristau wrote:
> Would it be possible to work with the cloud team to stand up appropriate
> accounts and so on on one of the cloud infras Debian has a relationship
> with?  I don't have a whole lot of knowledge of this space so will
> probably end up asking a bunch of stupid questions.

Sure, we can do that with AWS.  The sponsor already acknowledged that
they are okay with that.

We intend to put those resources into a shared hosting account, at
least for now.  All resources are managed using Terraform.  Merge
requests can be used to propose changes.

You now need to decide on your wishlist.  I would propose to start with
one system, with something like 8 CPUs, 32 GB RAM, 500 MB/s network,
125 MB/s storage and 3 kIOPS.  If I look at my own systems, this might
suffice for 40 normal push clients.

We can easily scale vertically to 64 CPUs, 256 GB RAM, 25
Gbps network, 1000 MB/s storage and 16 kIOPS.  I hope this is not
needed, but I would like to have some real data first.

Regards,
Bastian

-- 
No one may kill a man.  Not for any purpose.  It cannot be condoned.
-- Kirk, "Spock's Brain", stardate 5431.6



Moving AWS auth from IAM users to salsa.debian.org

2023-01-21 Thread Bastian Blank
Hi folks

You are receiving this e-mail, because you have somewhat used IAM
users to access Debian AWS accounts.

The cloud team intends to deprecate the use of IAM users for accessing
the (new) Debian AWS accounts.  In the future, logins to those AWS
accounts will be done via a Debian IdP (currently salsa.debian.org).

Login to AWS with federated users (SAML or OpenID Connect) requires an
additional piece of software.

I provide an implementation in form of a web browser extension (Chromium
only, supporting Firefox is not possible).  This extension allows login
to the web console or provides access tokens for programmatic access.  You
can get this here
https://salsa.debian.org/cloud-admin-team/webext-debian-aws-login.

In addition, it would be possible to write a standalone tool to support
AWS login with federated users with the help of any existing browser.
But I don't intend to implement that for now.

Please verify that this login works for you.  I would like to remove
existing users in a few weeks.

Regards,
Bastian

-- 
Worlds are conquered, galaxies destroyed -- but a woman is always a woman.
-- Kirk, "The Conscience of the King", stardate 2818.9



Re: Bug#1025618: cloud-init and firewalld systemd unit files have ordering cycles

2022-12-17 Thread Bastian Blank
On Fri, Dec 16, 2022 at 03:48:00PM -0800, Ross Vandegrift wrote:
> - from firewalld:
>   sysinit.target < dbus.service < firewalld.service < network-pre.target
> - from cloud-init:
>   cloud-init-local.service < network-pre.target < 
> systemd-networkd-wait-online.service < cloud-init.service < sysinit.target

I think this might be too strict.  "basic.target" should cover that
usage for "cloud-init.service".  Then we order everything between
sysinit.target and basic.target.

The only thing is: sysinit.target usually provides all local filesystems,
which cloud-init can create itself.
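A sketch of that suggestion in unit-file form (untested, and not the shipped cloud-init unit; since drop-ins can only add ordering dependencies, relaxing the existing one would mean changing the unit file itself):

```ini
# cloud-init.service (relevant [Unit] lines only, illustrative)
[Unit]
After=cloud-init-local.service systemd-networkd-wait-online.service
# Order before basic.target instead of sysinit.target, which breaks the
# cycle through network-pre.target described above.
Before=basic.target
```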

Bastian

-- 
The sooner our happiness together begins, the longer it will last.
-- Miramanee, "The Paradise Syndrome", stardate 4842.6



Re: Strange emails from AWS and Azure

2022-12-14 Thread Bastian Blank
On Wed, Dec 14, 2022 at 10:16:22PM +0100, Tomasz Rybak wrote:
> I suspect this is some left-over from my times as delegate.
> Probably someone restored/changed configuration and I'm
> receiving those emails again.
> Anyways - can someone (don't know whether current delegates,
> or SPI) unsubscribe me from all those mailing lists?

This is via cloudaccou...@debian.org.  I requested that you be removed
from that.

Regards,
Bastian

-- 
Spock: We suffered 23 casualties in that attack, Captain.



Re: Enabling secure boot support on the generic / generic-cloud images

2022-12-10 Thread Bastian Blank
On Thu, Dec 08, 2022 at 11:12:28AM +0100, Thomas Goirand wrote:
> However, our image doesn't have secure boot support by default if I'm not
> mistaking.

Why do you think that?  We install grub-efi-amd64-signed, so we have a signed
boot loader and kernel.

Bastian

-- 
Peace was the way.
-- Kirk, "The City on the Edge of Forever", stardate unknown



Bug#1025849: cloud-initramfs-growroot - silently breaks initramfs build

2022-12-10 Thread Bastian Blank
Package: cloud-initramfs-growroot
Version: 0.18.debian10
Severity: grave

Installation of a new kernel now silently fails:

| Setting up linux-image-6.0.0-5-cloud-arm64 (6.0.10-2) ...
| /etc/kernel/postinst.d/initramfs-tools:
| update-initramfs: Generating /boot/initrd.img-6.0.0-5-cloud-arm64
| W: No zstd in /usr/bin:/sbin:/bin, using gzip
| E: /usr/share/initramfs-tools/hooks/growroot failed with return 1.
| update-initramfs: failed for /boot/initrd.img-6.0.0-5-cloud-arm64 with 1.
| run-parts: /etc/kernel/postinst.d/initramfs-tools exited with return code 1

Bastian

-- System Information:
Debian Release: bookworm/sid
  APT prefers testing
  APT policy: (700, 'testing'), (500, 'unstable'), (500, 'stable'), (1, 
'experimental')
Architecture: amd64 (x86_64)

Kernel: Linux 6.1.0-0-amd64 (SMP w/16 CPU threads; PREEMPT)
Locale: LANG=de_DE.UTF-8, LC_CTYPE=de_DE.UTF-8 (charmap=UTF-8), 
LANGUAGE=en_US:en
Shell: /bin/sh linked to /usr/bin/dash
Init: systemd (via /run/systemd/system)
LSM: AppArmor: enabled

Versions of packages cloud-initramfs-growroot depends on:
pn  cloud-utils  
ii  fdisk2.38.1-4
ii  initramfs-tools  0.142
ii  util-linux   2.38.1-4

cloud-initramfs-growroot recommends no packages.

cloud-initramfs-growroot suggests no packages.



Re: awscli v2 dependencies (was Re: Next team meeting: 2022-11-09 20:00 UTC)

2022-11-28 Thread Bastian Blank
On Mon, Nov 28, 2022 at 09:02:01AM -0800, Noah Meyerhans wrote:
> I understand that there's other software that may want to take direct
> dependencies on the C libraries, but as I don't see any of that being
> actively worked on in terms of packages that'll be ready for inclusion
> in bookworm, I don't think the unavailability of these packages will be
> a significant blocker for anybody.

From my side that was the aws kms pkcs11 driver.  However that needs the
C++ part as well.

Bastian

-- 
Where there's no emotion, there's no motive for violence.
-- Spock, "Dagger of the Mind", stardate 2715.1



Re: qcow2 resize issue with latest unstable cloud images

2022-11-10 Thread Bastian Blank
On Thu, Nov 10, 2022 at 02:56:17PM +0100, Frédéric Bonnard wrote:
> I don't think this is related to the image itself.
> I just installed debian testing on a physical host, formatting manually
> with a 1G / and 3G free behind (installer was based on kernel 6.0 too) .
> I booted the machine, then :

This is a known bug.  The fix is pending in the meantime.

See https://bugs.debian.org/1023450

Bastian

-- 
There are always alternatives.
-- Spock, "The Galileo Seven", stardate 2822.3



s2n-tls_1.3.26+dfsg-1_amd64.changes REJECTED

2022-11-09 Thread Bastian Blank


Rejected by private request of maintainer



===

Please feel free to respond to this email if you don't understand why
your files were rejected, or if you upload new files which address our
concerns.



aws-c-common_0.8.4-1_amd64.changes REJECTED

2022-11-09 Thread Bastian Blank


Rejected by private request of maintainer



===

Please feel free to respond to this email if you don't understand why
your files were rejected, or if you upload new files which address our
concerns.



Bug#1023451: Current Bookworm daily image breaks root file system during resize

2022-11-07 Thread Bastian Blank
Control: reassign -1 linux/6.0-1~exp1
Control: forcemerge 1023450 -1

On Fri, Nov 04, 2022 at 02:04:05PM +0100, Sven Bartscher wrote:
> [  163.701342] EXT4-fs (sda1): resizing filesystem from 491515 to 4161531
> blocks
> [  163.870631] EXT4-fs (sda1): resized filesystem to 4161531
> [  163.914439] EXT4-fs (sda1): Invalid checksum for backup superblock 32768
> [  163.914439]
> [  163.914892] EXT4-fs error (device sda1) in ext4_update_backup_sb:174:
> Filesystem failed CRC

The ext4 maintainer confirmed this is a kernel bug introduced in 6.0.

Bastian

-- 
Only a fool fights in a burning house.
-- Kank the Klingon, "Day of the Dove", stardate unknown



Bug#966573: progress packaging awscli v2

2022-11-04 Thread Bastian Blank
On Fri, Nov 04, 2022 at 09:08:22AM -0700, Noah Meyerhans wrote:
> > Are you sure this library can have a 1 as ABI?  Can you please reproduce
> > the ABI stability promisses?
> Allegedly upstream has recently committed to proper SONAME and ABI
> management in support of efforts to get these packages accepted into
> Fedora.  I'll see if I can find meaningful evidence of this...

The SOVERSION was set two years ago:

https://github.com/awslabs/aws-c-common/pull/702

So some more information would be nice.

Bastian

-- 
War is never imperative.
-- McCoy, "Balance of Terror", stardate 1709.2



Re: qcow2 resize issue with latest unstable cloud images

2022-11-04 Thread Bastian Blank
On Fri, Nov 04, 2022 at 10:14:23AM +0100, Bastian Blank wrote:
> Maybe running fsck before shipping the image will make it work better.
> Currently we rely on the kernel of the build system to provide us with a
> clean file system.

fsck is not seeing any problem with that filesystem.

Bastian

-- 
I'm a soldier, not a diplomat.  I can only tell the truth.
-- Kirk, "Errand of Mercy", stardate 3198.9



Re: qcow2 resize issue with latest unstable cloud images

2022-11-04 Thread Bastian Blank
Hi

On Thu, Oct 20, 2022 at 03:33:53PM +0200, Frédéric Bonnard wrote:
> I test the cloud images from unstable and since 2 days, the tests fail
> to resize the qcow2 files :
> example using 
> https://cloud.debian.org/images/cloud/sid/daily/latest/debian-sid-nocloud-amd64-daily.qcow2
>  :

Thanks for the report.  I was now able to reproduce it.

> Any idea what's happening ?
> This used to work and still works on testing/bullseye cloud images.
> unstable cloud image 3days ago with kernel 5.19.0-2 worked.
> unstable cloud image 2days ago with kernel 6.0.0-1 did not work and fails 
> since then.
> I don't know about the intrinsics of cloud image generation. Something could 
> have
> change there as well.

Maybe running fsck before shipping the image will make it work better.
Currently we rely on the kernel of the build system to provide us with a
clean file system.

Bastian

-- 
Lots of people drink from the wrong bottle sometimes.
-- Edith Keeler, "The City on the Edge of Forever",
   stardate unknown



Bug#966573: progress packaging awscli v2

2022-11-04 Thread Bastian Blank
On Tue, Oct 05, 2021 at 11:10:43PM -0600, Ross Vandegrift wrote:
> My first pass only produces -dev packages with headers and static libraries.
> To test them out, build the debian/sid branch from these repos, in this order:
> - https://salsa.debian.org/rvandegrift/aws-c-common

Are you sure this library can have a 1 as ABI?  Can you please reproduce
the ABI stability promises?

I see in the public JSON headers the following interface changes:

| -typedef bool(aws_json_on_member_encountered_const_fn)(
| +typedef int(aws_json_on_member_encountered_const_fn)(
|  const struct aws_byte_cursor *key,
|  const struct aws_json_value *value,
| +bool *out_should_continue,
|  void *user_data);

That's neither source nor binary compatible.

I don't think any of those libraries is reasonably ABI-stable enough to
be maintained as shared libraries.

Bastian

-- 
Warp 7 -- It's a law we can live with.



Next team meeting: 2022-09-14 20:00 UTC

2022-09-13 Thread Bastian Blank
Hi

Our next team meeting is scheduled for 2022-09-14 20:00 UTC.  We'll be
on jitsi: https://jitsi.debian.social/DebianCloudMeeting20220914.

Regards,
Bastian

-- 
It would be illogical to kill without reason.
-- Spock, "Journey to Babel", stardate 3842.4



Re: Closing of buster-backports?

2022-09-07 Thread Bastian Blank
On Wed, Sep 07, 2022 at 09:32:15AM -0700, Noah Meyerhans wrote:
> Is there a plan to continue offering new kernels for buster LTS?

Yes, the same as with the older ones.  It just is broken right now.

Bastian

-- 
Lots of people drink from the wrong bottle sometimes.
-- Edith Keeler, "The City on the Edge of Forever",
   stardate unknown



Re: Taking over root on legacy AWS account

2022-08-24 Thread Bastian Blank
Hi Ross

Sorry, I did not respond earlier.

On Tue, Aug 23, 2022 at 10:55:27PM -0700, Ross Vandegrift wrote:
> On Fri, Aug 12, 2022 at 05:37:33PM +0100, Marcin Kulisz wrote:
> > My take on the latter would be that one of the delegates if we'd have a 
> > chair
> > would be holding MFA to this account and this would be passed along this 
> > line to
> > the next one and it should be an obligation of the chair to do it be.
> > I would nominate Ross as the person usually charring our meetings.
> > Any other ideas or suggestions how to do it?
> Bastian suggested storing it in the password repo [1].  I like that since it
> supports providing access to multiple people via their gpg keys.  I don't 
> quite
> understand how to use pwstore, but the idea seems simple enough.

The main problem with that for now is: we don't have control over the
phone number associated with our accounts.  This means we can't recover
from broken MFA without help from support.

As I said in the last meeting, I don't know a useful way to rectify
that missing access to a shared phone number.

Because none of the new accounts have MFA enabled, maybe it is okay to
just transfer the account without it as well.

Regards,
Bastian

-- 
I object to intellect without discipline;  I object to power without
constructive purpose.
-- Spock, "The Squire of Gothos", stardate 2124.5



Re: Use and rules of debian.cloud

2022-08-24 Thread Bastian Blank
On Tue, Aug 23, 2022 at 10:25:54PM -0700, Ross Vandegrift wrote:
> Yea, that's not great- but it's better fallback than what we have today.
> Making the fallback transparent to the VMs sounds awesome, but is it a
> must-have feature?

It comes down to: what do we expect to happen if someone uses it as a
plain entry or outside of apt?

If we expect that it is only ever used from within apt, we can also just
use _http[s]._tcp.$name IN SRV.
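For illustration, such records could look like this in a zone file (priorities, weights, and targets are made up for the sketch; note that RFC 2782 forbids SRV targets that are themselves CNAMEs, which would matter for the CloudFront backend):

```text
; _service._proto.name            class type priority weight port target
_http._tcp.aws.deb.debian.cloud.  IN   SRV  10       10     80   backend.example.net.
_http._tcp.aws.deb.debian.cloud.  IN   SRV  20       10     80   deb.debian.org. ; fallback
```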

Regards,
Bastian

-- 
Our missions are peaceful -- not for conquest.  When we do battle, it
is only because we have no choice.
-- Kirk, "The Squire of Gothos", stardate 2124.5



Re: Use and rules of debian.cloud

2022-08-22 Thread Bastian Blank
Hi Ross

On Sun, Aug 21, 2022 at 10:35:38PM -0700, Ross Vandegrift wrote:
> According to apt-transport-mirror(1), apt can do this on the client side.  
> Once
> the MR for mirror+file apt sources is merged, we'd do something like:
> https://aws.deb.debian.cloud priority:1
> https://deb.debian.org
> This seems simpler than fastly tricks, though I haven't ever tried it.

It will for now introduce delays, as apt considers all errors transient.

- Not in DNS? transient, retry.
- Nothing listening? transient, retry.

Okay, it only retries four times, with one second between attempts.  So
in the case where we need to kill our own entry, every "apt update"
request takes about five seconds longer.

Regards,
Bastian

-- 
Those who hate and fight must stop themselves -- otherwise it is not stopped.
-- Spock, "Day of the Dove", stardate unknown



Use and rules of debian.cloud

2022-08-21 Thread Bastian Blank
Hi folks

I propose the following initial use and associated policies for the
domain debian.cloud.

## deb.debian.cloud

Provides Debian mirrors, possibly limited, similar to deb.debian.org.
Each provider gets a subdomain, which should be used in the apt config.

Currently assigned are:
- azure
- aws

Vendors are only eligible if the mirrors and images are somewhat
controlled by Debian.

From Bookworm, vendors will be able to provide default mirrors directly
via vendor data to cloud-init.

Only one level is allowed, aka "vendor" and "vendor-special", not
"special.vendor".  So in theory one certificate "*.deb.debian.cloud"
will support all names.

We need to look at how we can use deb.debian.org as a last-resort
fallback.  Fastly requires explicit config of the domains, which is
quite possible, as they support wildcards.  But supporting HTTPS for
this case will not be nice at all.

## deb-backend.debian.cloud

Records pointing to the HTTP access to the backend resources at the
given provider.  Can be either CNAME or address records.

Currently assigned are:
- azure -> CNAME debian-archive.trafficmanager.net
- aws -> CNAME something.cloudfront.net

## infra.debian.cloud

Internal infrastructure services.  For services, the base name is usually
the HTTPS ingress point (a load balancer/SSL offloader), if such a thing
exists.  Subdomains are used for other ingress points.

Currently assigned are:
- vault (ssh.vault)
- mirror-azure (ssh.mirror-azure, push.mirror-azure)

Those services need to be maintained at least in conjunction with the
cloud team and be either for internal or Debian use.

Regards,
Bastian

-- 
No one can guarantee the actions of another.
-- Spock, "Day of the Dove", stardate unknown



Re: Resignation as cloud team delegate

2022-07-02 Thread Bastian Blank
Moin

On Fri, Jul 01, 2022 at 10:21:51PM -0700, Ross Vandegrift wrote:
> Great, most of that seems like a clear improvement.  But there's one change 
> I'm
> not sure about:
> 
> --- original  2022-07-01 21:40:33.826069834 -0700
> +++ draft 2022-07-01 21:40:45.778221446 -0700
> @@ -1,3 +1,3 @@
>  - With the Debian Project Leader and under the auspices of the
>Trusted Organizations, establish Debian accounts with cloud
> -  providers, negotiating terms and conditions where necessary.
> +  providers for the purpose of managing cloud images
> 
> Are you thinking the delegation needs to enumerate responsibilities more
> specifically?  If so, we have other projects unrelated to VM images:
> - provider-specific mirror infrastructure.
> - build systems for docker images on docker.io and public.ecr.aws.
- QA stuff, like ci.debian.net and archive rebuilds
- random systems for other teams
- infrastructure for cloud team use

With the above change, we could no longer provide those.

> A bigger consequence would be that other teams could establish official
> provider accounts for Debian.

Even before, the delegation was not exclusive.  And the DPL is free to
delegate as they wish.  So this does not change, IMHO.

>I'm not sure what I think of that.  There are
> some downsides:
> I'm not sure what I think of these tradeoffs.  But I do think we should
> consider them.  What do you and other team members think?

Even I don't know if we have an overview of the resources Debian has
access to.  I tried to document the ones the cloud team has, including
who can make use of them.  I'm not entirely okay with the form I chose,
but at least we have it.  But what about others?

Bastian

-- 
All your people must learn before you can reach for the stars.
-- Kirk, "The Gamesters of Triskelion", stardate 3259.2



Bug#1010555: cloud-init - Fails to read generated Azure keys from metadata service

2022-05-04 Thread Bastian Blank
Package: cloud-init
Version: 20.4.1-2+deb11u1
Severity: important

cloud-init sometimes fails to read keys provided by the new metadata
service.  In those instances, stray \r\n sequences are embedded and
should be stripped.

See https://bugs.launchpad.net/cloud-init/+bug/1910835
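The fix amounts to normalizing the key material before use; a minimal sketch of that idea in Python (my own illustration, not the actual cloud-init code):

```python
def normalize_keys(raw: str) -> list[str]:
    """Split metadata-provided SSH key material into clean keys,
    dropping stray carriage returns and empty lines."""
    return [line.strip() for line in raw.splitlines() if line.strip()]

# Stray "\r\n" embedded by the metadata service is stripped away:
print(normalize_keys("ssh-rsa AAAA...\r\n\r\nssh-ed25519 BBBB...\n"))
# → ['ssh-rsa AAAA...', 'ssh-ed25519 BBBB...']
```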

Bastian

-- System Information:
Debian Release: bookworm/sid
  APT prefers testing
  APT policy: (990, 'testing'), (500, 'unstable'), (500, 'stable'), (1, 
'experimental')
Architecture: amd64 (x86_64)

Kernel: Linux 5.17.0-1-amd64 (SMP w/12 CPU threads; PREEMPT)
Locale: LANG=de_DE.UTF-8, LC_CTYPE=de_DE.UTF-8 (charmap=UTF-8), LANGUAGE not set
Shell: /bin/sh linked to /usr/bin/dash
Init: systemd (via /run/systemd/system)

Versions of packages cloud-init depends on:
ii  fdisk   2.38-4
ii  gdisk   1.0.9-1
pn  ifupdown
ii  isc-dhcp-client 4.4.2-P1-1+b1
ii  locales 2.33-7
ii  lsb-base11.1.0
ii  lsb-release 11.1.0
pn  net-tools   
ii  procps  2:3.3.17-7+b1
ii  python3 3.10.4-1
pn  python3-configobj   
ii  python3-jinja2  3.0.3-1
pn  python3-jsonpatch   
pn  python3-jsonschema  
pn  python3-netifaces   
ii  python3-oauthlib3.2.0-1
ii  python3-requests2.27.1+dfsg-1
ii  python3-yaml5.4.1-1+b1
ii  util-linux  2.38-4

Versions of packages cloud-init recommends:
pn  cloud-guest-utils  
pn  eatmydata  
ii  sudo   1.9.10-3

Versions of packages cloud-init suggests:
ii  btrfs-progs  5.16.2-1+b1
ii  e2fsprogs1.46.5-2
pn  xfsprogs 



Re: plain VM images for arm64

2022-05-03 Thread Bastian Blank
On Tue, May 03, 2022 at 04:10:08PM +0100, Wookey wrote:
> I note on https://cloud.debian.org/images/cloud/ that we supply 'plain VM' 
> images but only for x86.

Actually we supply arm64 and ppc64el variants of the "nocloud" images.
However it might be only for Sid.

Bastian

-- 
You canna change the laws of physics, Captain; I've got to have thirty minutes!



Re: fai bullseye image vs image from cloud.debian.org

2022-05-03 Thread Bastian Blank
Hi

On Sun, May 01, 2022 at 07:53:01PM +, dimitris.paraskevopoulos wrote:
> I checked out https://salsa.debian.org/cloud-team/debian-cloud-images and run 
> make image_bullseye_genericcloud_amd64 without my desired changes hoping that 
> it would be the exact same behaviour as the downloaded image from the cloud 
> but I am having a problem.
> 
> I created 2 template VM's one with each image and I am seeing different 
> behaviours when provisioning the VM through terraform by cloning the template.
> Basically the cloud image works as expected when I set a static IP but the 
> fai generated one does not. When using the fai image the VM gets a DHCP 
> address despite specifying a static one and in general does not seem to get 
> into cloud-init.

Both should behave identically.  We have not updated the code used to
build the published images for a while, but no change in behaviour is
expected.

Can you please share the network config you try to set?  Static network
config is known to be somewhat broken in Bullseye.  You can remove
"config_space/bullseye/files/etc/udev/rules.d/75-cloud-ifupdown.rules/"
to disable what already exists in terms of network config.

What I see in my quick test: both the static and the DHCP address are
set at the same time.

> Can someone help me understand what is going on? Have I misunderstood which 
> is the correct make command to create an equivalent image to the one online.

I assume both images are the same, but you are seeing a race condition
that's sadly expected in the current setup.

Bastian

PS: Please set a proper name in the From header.
PPS: Please stop adding advertisements to your email:
> Sent with [ProtonMail](https://protonmail.com/) secure email.
-- 
I've already got a female to worry about.  Her name is the Enterprise.
-- Kirk, "The Corbomite Maneuver", stardate 1514.0



Re: [BOARD #5526] Re: [TREASURER #5526] Re: managing Huawei accounts

2022-04-25 Thread Bastian Blank via RT
On Mon, Apr 25, 2022 at 02:52:14PM +0200, Jonathan Carter wrote:
> On 2022/04/24 19:53, Bastian Blank via RT wrote:
> > It turns out, SPI can't help here.  Huawei Cloud is on the US sanctions
> > list.
> So, time to forward the request to Debian Switzerland instead then?

Switzerland is not on the list either.  See
https://support.huaweicloud.com/intl/en-us/intl_faq/en-us_topic_0115884694.html

Bastian

-- 
There is an order of things in this universe.
-- Apollo, "Who Mourns for Adonais?" stardate 3468.1



[BOARD #5526] Re: [TREASURER #5526] Re: managing Huawei accounts

2022-04-24 Thread Bastian Blank via RT
Hi Hector

On Thu, Mar 31, 2022 at 04:05:48PM -0400, Héctor Orón via RT wrote:
> What is needed from SPI? Are there documents to sign or licenses to accept?

It turns out, SPI can't help here.  Huawei Cloud is on the US sanctions
list.

Regards,
Bastian

-- 
Bones: "The man's DEAD, Jim!"



Re: managing Huawei accounts

2022-04-22 Thread Bastian Blank
On Thu, Apr 21, 2022 at 06:37:59PM -0600, Sam Hartman wrote:
> YMMV of course.

My assessment was:

| However.  Huawei Cloud is on the US sanctions list.  And trying to
| actually create an account explicitly states that Europe and Russia are
| not allowed.  So it seems that all three trusted organizations are out
| of luck in holding contracts with Huawei Cloud.

So, no, none of the existing trusted organizations can help.

Regards,
Bastian

-- 
Landru! Guide us!
-- A Beta 3-oid, "The Return of the Archons", stardate 3157.4



Re: managing Huawei accounts

2022-04-21 Thread Bastian Blank
Hi

On Thu, Apr 21, 2022 at 06:39:47PM +0800, Aron Xu wrote:
> On Thu, Apr 21, 2022 at 6:09 AM Bastian Blank  wrote:
> > Notably, the list does not contain a single country of Europe, nor the
> > USA or Canada.  However thats where the Debian trusted orgs are located
> > in.
> I checked that they do have the mentioned contries including the US
> and some EU contries. Just in case this is what I do:
> 
> 1. Navigate to the English homepage at: 
> https://www.huaweicloud.com/intl/en-us/
> 2. Click the "Register" button on the right top position, it then
> redirects to the registration page at:
> https://id5.cloud.huawei.com/UnifiedIDMPortal/portal/userRegister/regbyemail.html

It redirects me to
https://id5.cloud.huawei.com/UnifiedIDMPortal/portal/userRegister/regbyemail.html?themeName=red_type=offline=103493351=8800=https%3A%2F%2Fauth.huaweicloud.com%2Fauthui%2Flogin.html%23=https%3A%2F%2Fauth.huaweicloud.com%2Fauthui%2FcasLogin=de=https%3A%2F%2Fwww.huawei.com%2Fauth%2Faccount%2Funified.profile+https%3A%2F%2Fwww.huawei.com%2Fauth%2Faccount%2Frisk.idstate=88=ef0469a11c234a029a4f36d6a8f09e83=en-us

> 3. On the registration page, click the "Country/Region" drop menu
> 4. The US and some EU countries are in the list.

Not in my case.  In my case it explicitly tells me:
> Services are available in the following countries/regions.

And only a short list shows up.

A list with all countries only show up if I try to register an account
for the mobile cloud stuff, aka storage and such for their telephone
part.

However.  Huawei Cloud is on the US sanctions list.  And trying to
actually create an account explicitly states that Europe and Russia are
not allowed.  So it seems that all three trusted organizations are out
of luck in holding contracts with Huawei Cloud.

Regards,
Bastian

-- 
Every living thing wants to survive.
-- Spock, "The Ultimate Computer", stardate 4731.3



Re: managing Huawei accounts

2022-04-20 Thread Bastian Blank
Hi


On Tue, Aug 24, 2021 at 09:49:11PM +0200, Paul Gevers wrote:
> >>> In your opinion,
> >>> should we do the same for the Huawei platform?
> > It will make it easier to have uninterupted access, esp as people in
> > Debian are coming and going.  So if we want to use it for longer,
> > definitely.
> That's exactly the reason why I wanted to bring this up. I'm not sure
> how long Huawei wants to sponsor, but I didn't Aron mention anything
> about this stopping in the foreseeable future.

After discussing this topic in the Cloud team meeting today, I decided
to dig a bit.  I got blocked pretty soon, as Huawei does not want to
sell their cloud stuff to people in most parts of the world.  The list
of countries it wants to sell to consists of parts of Asia, Africa and
South America.

Notably, the list does not contain a single country of Europe, nor the
USA or Canada.  However, that's where the Debian trusted orgs are
located.

I'll keep digging into what this really means.

Bastian

-- 
There are always alternatives.
-- Spock, "The Galileo Seven", stardate 2822.3



Re: [TREASURER #5526] Re: managing Huawei accounts

2022-04-01 Thread Bastian Blank
[Removing treasurer@, as this discussion is not relevant to SPI, until
we have an idea about it]

Hi Jonathan

On Fri, Apr 01, 2022 at 07:18:30PM +0200, Jonathan Carter wrote:
> Would it work to update the cloud team delegation so that the cloud team can
> create and manage this account, and then SPI (or even another TO if needed),
> just handles the billing part?

It's already in the delegation:

| - With the Debian Project Leader and under the auspices of the
|   Trusted Organizations, establish Debian accounts with cloud
|   providers, negotiating terms and conditions where necessary.
| 
| - With the Debian System Administration Team and the Trusted
|   Organizations, manage Debian account credentials with the cloud
|   providers and establish account life-cycle processes.

And that's exactly what the cloud team has done in recent years.
However, you yourself tried to establish a debian.net team with similar
tasks almost two years ago.  So I'm not sure what your ideas are.

Bastian

-- 
A woman should have compassion.
-- Kirk, "Catspaw", stardate 3018.2



Re: [TREASURER #5526] Re: managing Huawei accounts

2022-03-31 Thread Bastian Blank via RT
Hi Hector

On Thu, Mar 31, 2022 at 04:05:48PM -0400, Héctor Orón via RT wrote:
> What is needed from SPI? Are there documents to sign or licenses to accept?

Yeah.  SPI needs to hold the contract with the vendor.  So sign it
somehow.

> From the pure technical aspect, it should be fine for Debian delegates to
> act as they think it's best course of action.

Well.  Last time, SPI raised issues with non-board members just doing
stuff.  See SPI#3574 and the parts of the discussion I haven't seen.

For Debian the question is: should the Cloud team handle it, as it does
with AWS and Azure at least.  Or should we invent a new structure just
for it?

Regards,
Bastian

-- 
We Klingons believe as you do -- the sick should die.  Only the strong
should live.
-- Kras, "Friday's Child", stardate 3497.2



Re: help wanted, standing up mirroring sync proxies on public cloud

2022-03-18 Thread Bastian Blank
Hi Julien

On Thu, Mar 17, 2022 at 10:01:11PM +0100, Julien Cristau wrote:
> Looking at syncproxy2.wna
> (https://munin.debian.org/debian.org/mirror-isc.debian.org/ip_149_20_4_16.html
> and
> https://munin.debian.org/debian.org/mirror-isc.debian.org/ip_2001_4f8_1_c__16.html)
> it looks like we're around 60Mbps outbound and 700kbps inbound on
> average in the last month.  That is probably the one with the most
> clients though (~20 of them), as a result of an issue with one of the
> other hosts a few years ago (plus the difficulty of coordinating a move
> back with downstream operators) it ended up with most of the NA load.
> We might be able to rebalance things a bit if we replace some hosts
> anyway.

The Azure sync proxy does something around 350GB of incoming traffic per
month for main and security archive, limited to source, all, amd64 and
i386.  So somewhere between 800-1000GB per client and month sounds
plausible (60Mbps is roughly 18TB/month, and split over 20 clients that
is a bit less than 1000GB each).
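A quick sanity check of those numbers (a sketch; the 30-day month and
the even split across clients are assumptions):

```python
# Sanity-check the figures above: 60 Mbps sustained for a month,
# spread over roughly 20 downstream clients (30-day month assumed).
MBPS = 60
CLIENTS = 20
SECONDS_PER_MONTH = 30 * 24 * 3600

bytes_per_month = MBPS * 1e6 / 8 * SECONDS_PER_MONTH
print(round(bytes_per_month / 2**40, 1))         # ~17.7 TiB/month total
print(round(bytes_per_month / CLIENTS / 2**30))  # ~905 GiB per client
```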

> > How much local storage?
> Currently we use 2T for the debian and debian-security archives, 6T if
> we add debian-archive + debian-debug, 7T if we add debian-ports.

(Wow, we rewrite half of the main archive every month.)

Regards,
Bastian

-- 
Madness has no purpose.  Or reason.  But it may have a goal.
-- Spock, "The Alternative Factor", stardate 3088.7



Re: help wanted, standing up mirroring sync proxies on public cloud

2022-03-18 Thread Bastian Blank
Hi Julien

On Thu, Mar 17, 2022 at 10:31:33PM +0100, Julien Cristau wrote:
> > You are just talking about the authenticated rsync and push stuff right
> > now?  Because mirror-isc.d.o for example does more.
> I figured we'd start there, yes.  Moving static mirrors around seems a lot
> easier.

You are right.  Also it's something unique to Debian.

> > Do you intend to make the syncproxy setup a bit more failover friendly?
> > So you can kill one and make another take up the work.
> I'm not sure.  Some of that is a bit constrained by things like
> downstream firewalls.  I'd be interested though if you have suggestions
> of things we could do.

I think an elegant solution would be something like:
- split ssh callout from data storage
- use load balancing over the nearest backend (this however is pretty
  special stuff)
- allow clients to connect to every data storage with the allowed
  archive type (main, security, debug, ports)

We would have the following components:
- one (or maybe a pair of) ssh callout instances.  all clients would get
  ssh connections from these IPs, and so would all data storage systems.
- a bunch of data storage instances spaced out as needed.  they can
  contain different sets of archives or just only one for each and more
  instances
- some proximity load balancing frontend (AWS and Azure can do that via
  DNS, AWS and GCE via global anycast load balancers)
- freshness checks on the data storages.  this will allow storages to
  remove themselves from the load balancers after missing updates for
  too long
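The freshness check in the last bullet could be little more than a
health endpoint the load balancer polls; a sketch (the trace-file path
and the 12-hour limit are made-up values, not the team's actual setup):

```python
import os
import time
from http.server import BaseHTTPRequestHandler, HTTPServer

# Hypothetical values: the real trace file location and staleness limit
# would need to match the actual mirror setup.
TRACE = "/srv/mirrors/debian/project/trace/master"
MAX_AGE = 12 * 3600  # seconds without an archive update before draining

def is_fresh(path: str = TRACE, max_age: int = MAX_AGE) -> bool:
    """True if the mirror trace file was updated recently enough."""
    try:
        return time.time() - os.path.getmtime(path) < max_age
    except OSError:  # trace file missing: treat the mirror as stale
        return False

class Health(BaseHTTPRequestHandler):
    def do_GET(self):
        # 200 keeps the node in the load balancer pool, 503 drains it.
        self.send_response(200 if is_fresh() else 503)
        self.end_headers()

# To serve: HTTPServer(("", 8080), Health).serve_forever()
```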

Advantages:
- ssh connections will always come from the same set of IPs, regardless
  of where the client connects for rsync
- the load balancing will allow us to shift traffic between backends
- we can still do manual traffic steering

Disadvantages:
- clients can no longer firewall outgoing connections to the storages,
  depending on the actual implementation
- the ssh callout instances need to orchestrate and wait for all data
  storage nodes to be in sync before calling clients
- only works within one cloud

Regards,
Bastian

-- 
Is truth not truth for all?
-- Natira, "For the World is Hollow and I have Touched
   the Sky", stardate 5476.4.



Re: help wanted, standing up mirroring sync proxies on public cloud

2022-03-17 Thread Bastian Blank
Hi Julien

On Thu, Mar 17, 2022 at 12:03:18PM +0100, Julien Cristau wrote:
> DSA's looking into options to replace some of our archive mirroring
> infrastructure.  For context, so far we've been maintaining a few machines
> around the globe, called syncproxies, that serve as "hubs" for archive
> mirroring and push downstream mirrors.

You are just talking about the authenticated rsync and push stuff right
now?  Because mirror-isc.d.o for example does more.

> Would it be possible to work with the cloud team to stand up appropriate
> accounts and so on on one of the cloud infras Debian has a relationship
> with?

We only have a relationship with AWS, via SPI, that allows us to just
do things within reason.  We should ask them nevertheless, as a good
measure, because that's an ongoing commitment.

> (One possibly complicating factor is there's some element of sensitivity
> because these machines host embargoed binaries for the security
> archive.)

There are some mails about that from January 2018 in the mirrors@
mailbox.

Some questions:

How many resources do you think you need?
Resources in cloud environments are usually tightly coupled.  You get X
CPU, X*Y RAM and X*Z network/disk throughput.

Do you intend to make the syncproxy setup a bit more failover-friendly,
so you can kill one and have another take over the work?

Regards,
Bastian

-- 
You're too beautiful to ignore.  Too much woman.
-- Kirk to Yeoman Rand, "The Enemy Within", stardate unknown



Re: Finding new home for our builds and other security sensitive stuff

2022-03-07 Thread Bastian Blank
On Mon, Mar 07, 2022 at 07:38:50AM -0800, Noah Meyerhans wrote:
> On Mon, Mar 07, 2022 at 12:11:37PM +0100, Bastian Blank wrote:
> > I was talking about a Vault for our secrets.  That's the priority now.
> At the moment, yes, but earlier in the thread was discussion of needing
> ~50 GB of storage and a private Gitlab instance.  That's the scenario I
> want to avoid.  It's bad enough that Debian owns one Gitlab
> installation.  A second one isn't going to reduce the burden of doing
> so.

You can even sidetrack a thread in two e-mails.  And we talked about
using another instance, not necessarily our own.  Debian even has
several Jenkins installations.

What do we need?

> > But yes, I know that none of the issues with Salsa have been addressed
> > in any way.  They did an upgrade to Bullseye, so the database version is
> > new enough now.  But even this problem will show up again and again.
> And why would that not be the case with a team managed Gitlab instance?

Because we would be able to embrace the fact that work also happens
outside of Debian.  Debian tries to re-do everything itself.  But
sometimes all it takes is a well-meant unattended-upgrades.

> We as a team don't have experience running Gitlab, and it's really not
> in our collective area of interest or expertise.  *You* may be
> interested in running a private Gitlab instance, but that just makes you
> a single point of failure for such an instance and sets us for having to
> deal with an unmaintained or poorly maintained instance in the future
> should your involvement with the team change for any reason.

Sadly, Debian is full of SPOF, that's nothing new.

Bastian

-- 
Hailing frequencies open, Captain.



Re: Finding new home for our builds and other security sensitive stuff

2022-03-07 Thread Bastian Blank
On Sun, Mar 06, 2022 at 04:40:24PM -0800, Noah Meyerhans wrote:
> Are you not satisfied that the salsa issues have been addressed with the
> latest maintenance?  We are now running a current Gitlab release, at
> least.

I was talking about a Vault for our secrets.  That's the priority now.

But yes, I know that none of the issues with Salsa have been addressed
in any way.  They did an upgrade to Bullseye, so the database version is
new enough now.  But even this problem will show up again and again.

Bastian

-- 
I'm a soldier, not a diplomat.  I can only tell the truth.
-- Kirk, "Errand of Mercy", stardate 3198.9



Re: Finding new home for our builds and other security sensitive stuff

2022-03-06 Thread Bastian Blank
Hi

On Mon, Feb 28, 2022 at 08:25:21AM -0800, Ross Vandegrift wrote:
> On Mon, Feb 28, 2022 at 01:07:37PM +0100, Bastian Blank wrote:
> > Yeah.  That just reduces the possibilities to the large platforms.
> I agree this is a downside.  But we wouldn't be forever locked into a
> plaform - it's easy to migrate to consul (and probably raft, but I've
> never actally used it).

I intend to use AWS for now.  I'll just set up another account for
administrative infrastructure.

And let's not forget backups.

Bastian

-- 
I'm a soldier, not a diplomat.  I can only tell the truth.
-- Kirk, "Errand of Mercy", stardate 3198.9



Re: Finding new home for our builds and other security sensitive stuff

2022-02-28 Thread Bastian Blank
On Sun, Feb 27, 2022 at 09:41:47PM -0800, Ross Vandegrift wrote:
> > We use Hashicorp Vault in my company, and we are very happy of it. It works
> > well, it's safe, and has many good options. So I support the idea.
> +1 - we should talk more about how this would look.  I have some thoughts.
> We could keep it simple: one VM in an autounseal supported cloud, probably
> using a storage backend from the platform.

Yeah.  That just reduces the possibilities to the large platforms.

> > > Using another GitLab instance is a bit more problematic.  Due to the
> > > ressources we use, most of the instances out there are kind of out of
> > > the question.  Which remains is hosting one ourselves.  That's not
> > > ideal, by far.
> gitlab.com could work - they could handle our artifacts, and we could
> bring our own CI runners.  This might not be popular for a variety of
> reasons (and I'm not pushing for it).  But I think it's important to
> note since:
> a) it's technically feasible, and
> b) it's probably the least effort (both migration & ongoing ops)

Yes, it is possible.

> Thanks!  Bastian, do you remember how much artifact storage we use?
> IIRC, it's surprisingly large.  salsa is still down at the moment, so
> I'm unable to check.

It isn't that much.  Let's say something below 200G, more like 50.

> > But, this is problematic not only for the cloud team. Let's hope this gets
> > fixed "soon", no? Maybe we should set a deadline for ourselves?
> 100% agreed.  I don't think we need to set a deadline yet, but I think we
> should continue this conversation so we can build opinions about our options.

Well, 14 months should be enough, don't you think?

Bastian

-- 
Bones: "The man's DEAD, Jim!"



Finding new home for our builds and other security sensitive stuff

2022-02-27 Thread Bastian Blank
Hi

Sadly the problems regarding Salsa did just gain a new level.  For those
who don't follow debian-private or the monthly meetings of the Cloud
team, this is the short version:

- The instance was not updated for any of the last nine upstream
  releases, it is now seven months out of upstream security support.
- It is now affected by a critical (aka pre-auth) vulnerability, which
  leads to exposure of secrets stored in the instance.

I don't see or hear anything that would make me think there will be any
meaningful change in maintenance procedures in the future.

Our image management stuff uses capabilities of Salsa and also uses it
to store the secrets required to do privileged operations on Cloud
platforms.  Those stored secrets are non-expiring and allow privileged
access to our releases on those platforms.

After thinking about, I propose two projects:
- Move secrets to Vault.
- Move the critical projects to a properly maintained GitLab instance.

Using Hashicorp Vault as secrets store allows us tighter controls, like
- providing the jobs with temporary access credentials,
- restricting from where credentials can be read and
- get an audit log when, who, where credentials have been requested.

Using another GitLab instance is a bit more problematic.  Due to the
resources we use, most of the instances out there are kind of out of
the question.  What remains is hosting one ourselves.  That's not
ideal, by far.

Regards,
Bastian

-- 
A father doesn't destroy his children.
-- Lt. Carolyn Palamas, "Who Mourns for Adonais?",
   stardate 3468.1.



Re: python3-google-compute-engine vs. google-guest-agent

2022-01-28 Thread Bastian Blank
On Fri, Jan 28, 2022 at 10:27:22AM +0100, Dominik George wrote:
> after google-guest-agent has been accepted into sid, it was reported
> that it has a conflicting file with python3-google-compute-engine:
>   https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=1004071

> Shall we simply replicate this deprecation in Debian? Or should we go
> all the way with alternatives and whatnot?

See:
https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=1004075

> My proposal is doing what upstream says, and making
> python3-google-compute-engine a transitional package that pulls in
> google-guest-agent.

python3-google-compute-engine was never in a release, so just drop it.

Bastian

-- 
Schshschshchsch.
-- The Gorn, "Arena", stardate 3046.2



Re: Cloud team plans for cloud-hosted mirrors

2022-01-28 Thread Bastian Blank
Hi Julien

On Wed, Jan 26, 2022 at 07:58:23PM +0100, Julien Cristau wrote:
> I think we (DSA) have been reluctant to add new third-party-run services
> under debian.org,

Just being curious: what is your definition of "third-party-run"?  As
example: deb.debian.org.  It uses Fastly, which is shared
responsibility.  It is run by someone else but provides a product that
Debian configures.  But where would be the limit?

>   and it's not clear to me if that infrastructure would
> be run by the cloud team on behalf of debian, or if the cloud team would
> control the names but point them at mirrors run by the cloud providers
> themselves.

I doubt that there will be any reason for us to point Debian names to
mirrors the provider controls.

There is one provider providing its own mirrors: Hetzner.  They use an
already existing mechanism to configure the mirror to their own if not
overridden by the user.  So there is no name controlled by Debian
required.

The whole reason for this stunt is to protect Debian.  Protect Debian
and its users from screwups by
- the providers themselves,
- a lapse in the agreement that currently provides access to Debian
  mirrors on Azure and AWS and
- single Debian developers controlling resources.

Bastian

-- 
I've already got a female to worry about.  Her name is the Enterprise.
-- Kirk, "The Corbomite Maneuver", stardate 1514.0



Bug#1004075: RM: google-compute-engine -- ROM; replaced by google-guest-agent, not in Buster

2022-01-20 Thread Bastian Blank
Package: ftp.debian.org
Severity: normal
X-Debbugs-Cc: debian-cloud@lists.debian.org

Please remove the package google-compute-engine.  Its purpose was
replaced by a different implementation in the packages
google-guest-agent and google-compute-engine-oslogin.  It was not
shipped in Buster.

Bastian



Re: Global networking change in our AWS accounts

2021-11-17 Thread Bastian Blank
Hi

On Wed, Nov 17, 2021 at 08:01:55AM -0300, Antonio Terceiro wrote:
> For ci, we are working with the security team on testing embargoed
> security updates, and for that we need a unique IP address, because it
> will be added to an ACL on the security repository side.

You mean via https://security-master.debian.org/debian-security-buildd?

> I would like the central server to have its unique public IPv4 address
> for this.

None of the IP addresses you can assign are actually stable.  The best
approximation comes in the form of a complete IPv6 subnet, aka a /64
where only your stuff with security access runs.
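Filtering on such a /64 on the repository side is then a simple subnet
membership test; a minimal sketch (the prefix is a made-up
documentation-range example, not a real allocation):

```python
import ipaddress

# Hypothetical /64 dedicated to instances with security-archive access.
SECURITY_NET = ipaddress.ip_network("2001:db8:1234:42::/64")

def allowed(addr: str) -> bool:
    """True if addr falls inside the dedicated subnet."""
    return ipaddress.ip_address(addr) in SECURITY_NET

print(allowed("2001:db8:1234:42::100"))  # inside the /64
print(allowed("2001:db8:1234:43::100"))  # one subnet over: rejected
```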

> > - IPv4 incoming will _not_ work with a public IP assigned to an
> >   instance, and
> > - IPv4-only or (better) dual-stack network load balancers can be used
> >   for stuff like HTTP access for users.
> This means that all incoming HTTP access has to go through the admins
> first. Is there a way to do this without creating a bottleneck or a
> SPoF?

I have not decided how that should work.  Actually I added the
permissions required to manage load balancers.  We can however also
pre-create it and only let you decide where to route the traffic.

Bastian

-- 
Love sometimes expresses itself in sacrifice.
-- Kirk, "Metamorphosis", stardate 3220.3



Re: Global networking change in our AWS accounts

2021-11-17 Thread Bastian Blank
Hi

On Tue, Nov 16, 2021 at 10:31:12PM +0100, Lucas Nussbaum wrote:
> I'm surprised by this: what is the motivation? Have we been asked to use
> less IPv4 addresses?

Yes, you get at most two.  We can also remove the NAT one, as access to
Debian's infrastructure is capable of working IPv6-only, apart from two
irrelevant things.

> Also, in qa2, there's an instance that is painful to move and that needs
> to be publicly accessible. So please plan for a migration that does not
> involve terminating it.

All instances are publicly accessible.

> - if everything is on private IPv4, how are we supposed to connect to
>   the instances?

You use the always-assigned IPv6 address.  It is 2021, so this should
be no problem.

> - who is "we"? (I only interacted with you, and was wondering how much
>   backup we have for the admins of this AWS setup, given that it looks
>   a lot more dependant on the admins that with the old account)

Noah and me, mostly.

Bastian

-- 
War is never imperative.
-- McCoy, "Balance of Terror", stardate 1709.2



Global networking change in our AWS accounts

2021-11-16 Thread Bastian Blank
Hi folks

We would like to make a global change to the way the network is set up
on the new AWS accounts.  The goal is to reduce the number of global
IPv4 addresses to a minimum, as those are an increasingly rare
commodity nowadays.

We will
- use NAT gateways for all outgoing IPv4 traffic, and
- allow use of IPv4 via load balancers for some kinds of traffic.

This means for you as a user that
- IPv6 will work in either direction and can be used to access instances
  at will (subject to security groups, of course),
- IPv4 outgoing will work and all instances use the same address to the
  outside,
- IPv4 incoming will _not_ work with a public IP assigned to an
  instance, and
- IPv4-only or (better) dual-stack network load balancers can be used
  for stuff like HTTP access for users.

This affects mainly the following accounts:
- container (tianon),
- qa1 (elbrus, terceiro), and
- qa2 (lucas).

The time-frame to deploy this change is not yet determined, but should
be in the next couple of days.  If you have questions about this, please
don't hesitate to ask.

Regards,
Bastian

-- 
If some day we are defeated, well, war has its fortunes, good and bad.
-- Commander Kor, "Errand of Mercy", stardate 3201.7



Re: managing Huawei accounts

2021-08-24 Thread Bastian Blank
Hi Paul

On Tue, Aug 24, 2021 at 11:31:24AM +0200, Paul Gevers wrote:
>> As I
>> understand, the account for AWS is "owned" by SPI.

SPI "owns" several resources for Debian.  This is primarily relevant
for the things that must not go away, like our published images.

>> In your opinion,
>> should we do the same for the Huawei platform?

It will make it easier to have uninterrupted access, esp as people in
Debian are coming and going.  So if we want to use it for longer,
definitely.

>> If yes, how does that work?

- Get DPL involved, esp as most cloud providers want a credit card
  somewhere on signup.
- Get SPI to sign-off on the contract details.

I assume here that a single DD just created an account and signed the
existing contract?

Then the technical details needs to be figured out.  I assume this
account should then not be restricted to ci.debian.net, but others
should be able to use resources as well.

For AWS we have some Terraform code that configures the projects and
permissions to it (yes, I did not forget the AWS account for you guys).
Okay, AWS might be more complex, that's why it requires more stuff to
work correctly.  We might want to do something similar for Huawei.

I just did a brief look in the documentation to see what our options
are.  A single Huawei account contains a global identity management and
can contain several projects.  So all users and permissions can only be
managed by a global admin user.  But permissions can be specified on
separate projects.  That might work.

Regards,
Bastian

-- 
We have phasers, I vote we blast 'em!
-- Bailey, "The Corbomite Maneuver", stardate 1514.2



Bug#991613: DHCPv6 problem in our image: needs "-D LL" when spawning dhclient

2021-07-31 Thread Bastian Blank
Hi

Looking again at the DUID reported by Ubuntu:
| 00:02:00:00:ab:11:11:16:f0:97:0e:c5:c9:b6

00:02: the type is enterprise number
00:00:ab:11, aka 43793: systemd
11:16:f0:97:0e:c5:c9:b6: this is by default a hash of the machine id, so
does change as well, or is this using the UUID set by the firmware?
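That decoding can be reproduced mechanically; a sketch (DUID-EN layout
per RFC 8415: a 2-byte type, a 4-byte enterprise number, then a
variable-length identifier):

```python
# Decode the DUID quoted above.  DUID-EN (type 2) layout per RFC 8415:
# 2-byte type, 4-byte enterprise number, variable-length identifier.
duid = bytes.fromhex(
    "00:02:00:00:ab:11:11:16:f0:97:0e:c5:c9:b6".replace(":", "")
)

duid_type = int.from_bytes(duid[0:2], "big")   # 2 -> enterprise-number DUID
enterprise = int.from_bytes(duid[2:6], "big")  # 43793 -> systemd
identifier = duid[6:].hex(":")                 # per-machine part

print(duid_type, enterprise, identifier)
# 2 43793 11:16:f0:97:0e:c5:c9:b6
```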

Does /etc/machine-id have different contents if you rebuild the system
using different but similar enough Ubuntu images?

On Sat, Jul 31, 2021 at 06:03:33PM +0200, Thomas Goirand wrote:
> > So dnsmasq is wrongly configured to take the DUID into account, even if
> > it does not matter for the address selection, because the address is
> > fixed?
> Do you have any suggestion on how to start dnsmasq, so it doesn't take
> DUID into account? Is there any parameter to do that?

I have no idea.  You should talk to Neutron(?) upstream for that.  It is
quite possible that dnsmasq does not support this use case at all.  I
wasn't able to find anything after reading the sources for a few minutes.

> FYI, I've just opened a thread in the openstack-discuss list to see if
> this can be fixed. Though ideally, it'd be nice to have both my
> deployment AND the image fixed for this problem, so nobody can ever
> encounter the issue again.

At least it needs to be added to the documentation prominently, because
regardless of what we do, a server rebuild will randomly drop systems
out of the ipv6 net.

Bastian

-- 
Spock: The odds of surviving another attack are 13562190123 to 1, Captain.



Bug#991613: DHCPv6 problem in our image: needs "-D LL" when spawning dhclient

2021-07-30 Thread Bastian Blank
Hi

On Wed, Jul 28, 2021 at 05:22:43PM +0200, Thomas Goirand wrote:
> - Initial boot:
> 2021-07-28T12:26:38.804683+00:00 pub1-network-3 dnsmasq-dhcp[3765807]: DHCPSOLICIT(tap67fa8c3f-8d) 00:01:00:01:28:94:09:7b:fa:16:3e:f1:a9:da
> 2021-07-28T12:26:38.805023+00:00 pub1-network-3 dnsmasq-dhcp[3765807]: DHCPADVERTISE(tap67fa8c3f-8d) 00:01:00:01:28:94:09:7b:fa:16:3e:f1:a9:da no addresses available
> 
> - Server side:
> /var/lib/neutron/dhcp/dcf25c41-9057-4bc2-8475-a2e3c5d8c662/host:fa:16:3e:f1:a9:da,tag:dhcpv6,host---143.dc3-a.pub1.infomaniak.cloud.,[::143]
> /var/lib/neutron/dhcp/dcf25c41-9057-4bc2-8475-a2e3c5d8c662/leases:1627481056 1056025050 2001:1600:10:100::143 host---143 00:01:00:01:28:94:03:11:fa:16:3e:f1:a9:da

So dnsmasq is wrongly configured to take the DUID into account, even if
it does not matter for the address selection, because the address is
fixed?

> We see here that DHCPv6 fails because the DUID sent by the distro isn't the
> same as the initial build of the VM:

Well, the pending change in network machinery will scramble the DUID
anyway.  So you can't rely on it.

Bastian

-- 
Extreme feminine beauty is always disturbing.
-- Spock, "The Cloud Minders", stardate 5818.4



Re: Moving daily builds out of main debian-cloud-images project

2021-07-28 Thread Bastian Blank
Hi Ross

On Mon, Jul 26, 2021 at 09:54:23PM -0700, Ross Vandegrift wrote:
> The second disadvantage recently came up in [1].  I proposed a possible
> fix for discussion at [2].  Bastian thought the discussion needed to
> happen on the ML, not salsa.  So here we are!

My largest problem with that change is that it removes
| - Access credentials for vendor and Debian infrastructure only exist in
|   the new group, so accidently leaking them is way harder.

So we would again need to completely trust everyone in the normal cloud
team group.

There is no real way around it; you either
- need to trust everyone with write access to the code (and this trust
  was dented lately, after the person in question did not even answer my
  question as to why he thought this would be appropriate) or
- "manually" move the changes forward.

Bastian

-- 
Death, when unnecessary, is a tragic thing.
-- Flint, "Requiem for Methuselah", stardate 5843.7



Re: manage_etc_hosts: true

2021-07-23 Thread Bastian Blank
Hi Thomas

On Thu, Jul 22, 2021 at 11:15:23AM +0200, Thomas Goirand wrote:
> In commit 522055bf, I added
> config_space/files/etc/cloud/cloud.cfg.d/01_debian_cloud.cfg/GENERICCLOUD
> and
> config_space/files/etc/cloud/cloud.cfg.d/01_debian_cloud.cfg/GENERIC, in

Why did you decide that you could make that controversial change (we
talked about it several times) without anyone else looking at it?

Bastian

-- 
Those who hate and fight must stop themselves -- otherwise it is not stopped.
-- Spock, "Day of the Dove", stardate unknown



Re: Daily cloud image not found

2021-07-07 Thread Bastian Blank
Moin

On Tue, Jul 06, 2021 at 10:39:54PM -0700, Ross Vandegrift wrote:
> On Wed, Jul 07, 2021 at 02:31:53AM +, laalaa laalaa wrote:
> > Daily cloud image not found since 2021-07-02. I did not find announcement 
> > of it, is it intentionally or a problem? Thanks.
> Not intentional - looks like the salsa runner for our builds failed.  [1, 2]

They fail because casulana.d.o, the host all of that runs on, is
broken.  No ETA for a fix is available.

Bastian

-- 
It would be illogical to kill without reason.
-- Spock, "Journey to Babel", stardate 3842.4


