Re: [OpenStack-Infra] Adding ARM64 cloud to infra

2018-04-12 Thread Paul Belanger
On Thu, Apr 12, 2018 at 09:00:15AM -0400, Paul Belanger wrote:
> On Mon, Jan 15, 2018 at 01:11:23PM +, Frank Jansen wrote:
> > Hi Ian,
> > 
> > do you have any insight into the availability of a physical environment for 
> > the ARM64 cloud?
> > 
> > I’m curious, as there may be a need for downstream testing, which I would 
> > assume will want to make use of our existing OSP CI framework.
> > 
> The hardware is donated by Linaro and the first cloud is currently located in
> China. As for details of hardware, I recently asked hrw in #openstack-infra 
> and
> this was his reply:
> 
>   hrw | pabelanger: misc aarch64 servers with 32+GB of ram and some GB/TB of 
> storage. different vendors. That's probably the closest to what I can say
>   hrw | pabelanger: some machines may be under NDA, some never reached mass 
> market, some are mass market available, some are no longer mass market 
> available.
> 
> As for downstream testing, are you looking for arm64 hardware, or hoping to use
> the Linaro clouds for the testing?
> 
Also, I just noticed this was from Jan 15th, but only just showed up in my
inbox. Sorry for the noise, and will try to look at headers before replying :)

Paul

___
OpenStack-Infra mailing list
OpenStack-Infra@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra

Re: [OpenStack-Infra] Adding ARM64 cloud to infra

2018-04-12 Thread Paul Belanger
On Mon, Jan 15, 2018 at 01:11:23PM +, Frank Jansen wrote:
> Hi Ian,
> 
> do you have any insight into the availability of a physical environment for 
> the ARM64 cloud?
> 
> I’m curious, as there may be a need for downstream testing, which I would 
> assume will want to make use of our existing OSP CI framework.
> 
The hardware is donated by Linaro and the first cloud is currently located in
China. As for details of hardware, I recently asked hrw in #openstack-infra and
this was his reply:

  hrw | pabelanger: misc aarch64 servers with 32+GB of ram and some GB/TB of 
storage. different vendors. That's probably the closest to what I can say
  hrw | pabelanger: some machines may be under NDA, some never reached mass 
market, some are mass market available, some are no longer mass market 
available.

As for downstream testing, are you looking for arm64 hardware, or hoping to use
the Linaro clouds for the testing?

- Paul


Re: [OpenStack-Infra] Adding ARM64 cloud to infra

2018-04-11 Thread Frank Jansen
Hi Ian,

do you have any insight into the availability of a physical environment for the 
ARM64 cloud?

I’m curious, as there may be a need for downstream testing, which I would 
assume will want to make use of our existing OSP CI framework.

Thanks!

Frank

Frank Jansen
Senior Manager | Quality Engineering
Red Hat, Inc.

> On Jan 15, 2018, at 8:03 AM, Ian Wienand  wrote:
> 
> On 01/13/2018 01:26 PM, Ian Wienand wrote:
>> In terms of implementation, since you've already looked, I think
>> essentially diskimage_builder/block_device/level1.py create() will
>> need some moderate re-factoring to call a gpt implementation in
>> response to a gpt label, which could translate self.partitions into a
>> format for calling parted via our existing exec_sudo.
> 
>> bringing up a sample config and test, then working backwards from what
>> calls we expect to see
> 
> I've started down this path with
> 
> https://review.openstack.org/#/c/533490/
> 
> ... still very wip
> 
> -i
> 

Re: [OpenStack-Infra] Adding ARM64 cloud to infra

2018-02-22 Thread Gema Gomez
On 23/02/18 05:35, Ian Wienand wrote:
> On 02/02/2018 05:15 PM, Ian Wienand wrote:
>> - Once that is done, it should be straight forward to add a
>>nodepool-builder in the cloud and have it build images, and zuul
>>should be able to launch them just like any other node (famous last
>>words).
> 
> This roughly turned out to be correct :)
> 
> In short, we now have ready xenial arm64 based nodes.  If you request
> an ubuntu-xenial-arm64 node it should "just work".
> 
> There are some caveats:
> 
>  - I have manually installed a diskimage-builder with the changes from
>[1] downwards onto nb03.openstack.org.  These need to be finalised
>and a release tagged before we can remove nb03 from the emergency
>file (just means, don't run puppet on it).  Reviews welcome!
> 
>  - I want to merge [2] and related changes to expose the image build
>logs, and also the webapp end-points so we can monitor active
>nodes, etc.  It will take some baby-sitting so I plan on doing this
>next week.
> 
>  - We have mirror.cn1.linaro.openstack.org, but it's not mirroring
>anything that useful for arm64.  We need to sort out mirroring of
>ubuntu ports, maybe some wheel builds, etc.
> 
>  - There's currently capacity for 8 nodes.  So please take that into
>account when adding jobs.
> 
> Everything seems in good shape at the moment.  For posterity, here is
> the first ever arm64 ready node:
> 
>  nodepool@nl03:/var/log/nodepool$ nodepool list | grep arm64
>  | 0002683657 | linaro-cn1 | ubuntu-xenial-arm64 | c7bb6da6-52e5-4aab-88f1-ec0f1b392a0c | 211.148.24.200 | | ready | 00:00:03:43 | unlocked |
> 
> :)

Thank you for the update! This is awesome news and great work \o/

Cheers,
Gema

> 
> -i
> 
> [1] https://review.openstack.org/547161
> [2] https://review.openstack.org/543671
> 


-- 
Gema Gomez-Solano
Tech Lead, SDI
Linaro Ltd
IRC: gema@#linaro on irc.freenode.net


Re: [OpenStack-Infra] Adding ARM64 cloud to infra

2018-02-22 Thread Ian Wienand
On 02/02/2018 05:15 PM, Ian Wienand wrote:
> - Once that is done, it should be straight forward to add a
>nodepool-builder in the cloud and have it build images, and zuul
>should be able to launch them just like any other node (famous last
>words).

This roughly turned out to be correct :)

In short, we now have ready xenial arm64 based nodes.  If you request
an ubuntu-xenial-arm64 node it should "just work".

There are some caveats:

 - I have manually installed a diskimage-builder with the changes from
   [1] downwards onto nb03.openstack.org.  These need to be finalised
   and a release tagged before we can remove nb03 from the emergency
   file (just means, don't run puppet on it).  Reviews welcome!

 - I want to merge [2] and related changes to expose the image build
   logs, and also the webapp end-points so we can monitor active
   nodes, etc.  It will take some baby-sitting so I plan on doing this
   next week.

 - We have mirror.cn1.linaro.openstack.org, but it's not mirroring
   anything that useful for arm64.  We need to sort out mirroring of
   ubuntu ports, maybe some wheel builds, etc.

 - There's currently capacity for 8 nodes.  So please take that into
   account when adding jobs.

Everything seems in good shape at the moment.  For posterity, here is
the first ever arm64 ready node:

 nodepool@nl03:/var/log/nodepool$ nodepool list | grep arm64
 | 0002683657 | linaro-cn1 | ubuntu-xenial-arm64 | c7bb6da6-52e5-4aab-88f1-ec0f1b392a0c | 211.148.24.200 | | ready | 00:00:03:43 | unlocked |

:)

-i

[1] https://review.openstack.org/547161
[2] https://review.openstack.org/543671


Re: [OpenStack-Infra] Adding ARM64 cloud to infra

2018-02-01 Thread Ian Wienand

Hi,

A quick status update on the integration of the Linaro aarch64 cloud:

- Everything is integrated into the system-config cloud-launcher bits,
  so all auth tokens are in place, keys are deploying, etc.

- I've started with a mirror.  So far only a minor change to puppet was
  required for the ports sources list [1].  It's a bit bespoke at the
  moment but is up as mirror.cn1.linaro.openstack.org.

- AFS is not supported out-of-the-box.  There is a series at [2] that
  I've been working on today, with some success.  I have custom
  packages at [3] which seem to work and can see our mirror
  directories.  I plan to puppet this in for our immediate needs, and
  keep working to get it integrated properly upstream.

- For building images, we are getting closer.  The series at [4] is
  still very WIP but can produce a working gpt+efi image.  I don't see
  any real blockers there; work will continue to make sure we get the
  interface if not perfect, at least not something we totally regret
  later :)

- Once that is done, it should be straight forward to add a
  nodepool-builder in the cloud and have it build images, and zuul
  should be able to launch them just like any other node (famous last
  words).

Thanks all,

-i

[1] https://review.openstack.org/539083
[2] https://gerrit.openafs.org/11940
[3] https://tarballs.openstack.org/package-afs-aarch64/
[4] https://review.openstack.org/#/c/539731/


Re: [OpenStack-Infra] Adding ARM64 cloud to infra

2018-01-22 Thread Marcin Juszkiewicz
On 19.01.2018 at 06:08, Ian Wienand wrote:
> On 01/13/2018 03:54 AM, Marcin Juszkiewicz wrote:
>> UEFI expects GPT and DIB is completely unprepared for it.
> 
> I feel like we've made good progress on this part, with sufficient
> GPT support in [1] to get started on the EFI part
> 
> ... which is obviously where the magic is here.  This is my first
> rodeo building something that boots on aarch64, but not yours I've
> noticed :)
> 
> I've started writing some notes at [2] and anyone is welcome to edit,
> expand, add notes on testing etc etc.  I've been reading through the
> cirros implementation and have more of a handle on it; I'm guessing
> we'll need to do something similar in taking distro grub packages and
> put them in place manually.  Any notes on testing very welcome :)

> [2] https://etherpad.openstack.org/p/dib-efi

The series is now at 5 patches. The resulting image boots in an x86-64 VM using UEFI.

More information is on the etherpad [2].


Re: [OpenStack-Infra] Adding ARM64 cloud to infra

2018-01-19 Thread Xinliang Liu
Hi Ian,

On 19 January 2018 at 13:08, Ian Wienand  wrote:
> On 01/13/2018 03:54 AM, Marcin Juszkiewicz wrote:
>>
>> UEFI expects GPT and DIB is completely unprepared for it.
>
>
> I feel like we've made good progress on this part, with sufficient
> GPT support in [1] to get started on the EFI part

Thanks for the patch, great work.
Only one question: why merge gpt into the 'vm' element rather than
create a new 'block-device-gpt' element?
We would then just need to add a partition configuration file to the
'vm' element, like the one you added in the test case:
https://review.openstack.org/#/c/533490/10/diskimage_builder/block_device/tests/config/gpt_efi.yaml
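For readers following the review, such a GPT/EFI layout in dib's block-device YAML style would look roughly like the following. This is a hand-written sketch, not the contents of the linked gpt_efi.yaml; the module names follow dib's documented block-device format, but the partition names and sizes are illustrative:

```yaml
# Sketch of a GPT + ESP block-device config (illustrative values).
- local_loop:
    name: image0

- partitioning:
    base: image0
    label: gpt                # the point of the whole series
    partitions:
      - name: ESP             # EFI System Partition for the bootloader
        size: 512MiB
        mkfs:
          type: vfat
          mount:
            mount_point: /boot/efi
            fstab:
              options: "defaults"
      - name: root
        size: 100%
        mkfs:
          type: ext4
          mount:
            mount_point: /
            fstab:
              options: "defaults"
```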

>
> ... which is obviously where the magic is here.  This is my first
> rodeo building something that boots on aarch64, but not yours I've
> noticed :)
>
> I've started writing some notes at [2] and anyone is welcome to edit,
> expand, add notes on testing etc etc.  I've been reading through the
> cirros implementation and have more of a handle on it; I'm guessing
> we'll need to do something similar in taking distro grub packages and
> put them in place manually.  Any notes on testing very welcome :)

I think the qemu image just needs grub installed into /EFI/BOOT/. We can
do that by adding the '--removable' option when installing grub.
See the comment here: https://review.openstack.org/#/c/533126/1
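Sketched as code, the suggestion amounts to something like this. The helper function is hypothetical (not dib's API); the flags themselves are standard grub-install options:

```python
# Hypothetical helper showing the grub-install invocation described
# above.  "--removable" makes grub install itself to the fallback path
# <ESP>/EFI/BOOT/ (BOOTAA64.EFI on arm64), which UEFI firmware probes
# when no NVRAM boot entry exists; that is the situation a freshly
# booted cloud image is in.  Defaults are illustrative.
def grub_install_cmd(target="arm64-efi", efi_dir="/boot/efi"):
    return [
        "grub-install",
        "--target", target,
        "--efi-directory", efi_dir,
        "--removable",   # install to EFI/BOOT/ fallback path
        "--no-nvram",    # do not write firmware boot variables
    ]

print(" ".join(grub_install_cmd()))
```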

Thanks,
Xinliang


>
> Cheers,
>
> -i
>
> [1] https://review.openstack.org/#/c/533490/
> [2] https://etherpad.openstack.org/p/dib-efi
>
>

Re: [OpenStack-Infra] Adding ARM64 cloud to infra

2018-01-18 Thread Ian Wienand

On 01/13/2018 03:54 AM, Marcin Juszkiewicz wrote:

UEFI expects GPT and DIB is completely unprepared for it.


I feel like we've made good progress on this part, with sufficient
GPT support in [1] to get started on the EFI part.

... which is obviously where the magic is here.  This is my first
rodeo building something that boots on aarch64, but not yours I've
noticed :)

I've started writing some notes at [2] and anyone is welcome to edit,
expand, add notes on testing etc etc.  I've been reading through the
cirros implementation and have more of a handle on it; I'm guessing
we'll need to do something similar in taking distro grub packages and
put them in place manually.  Any notes on testing very welcome :)

Cheers,

-i

[1] https://review.openstack.org/#/c/533490/
[2] https://etherpad.openstack.org/p/dib-efi


Re: [OpenStack-Infra] Adding ARM64 cloud to infra

2018-01-16 Thread Gema Gomez
On 15/01/18 22:51, Ian Wienand wrote:
> On 01/16/2018 12:11 AM, Frank Jansen wrote:
>> do you have any insight into the availability of a physical
>> environment for the ARM64 cloud?
> 
>> I’m curious, as there may be a need for downstream testing, which I
>> would assume will want to make use of our existing OSP CI framework.
> 
> Sorry, not 100% sure what you mean here?  I think the theory is that
> this would be an ARM64 based cloud attached to OpenStack infra and
> thus run any jobs infra could ...

+1, this is the idea indeed.

Gema
> 
> -i
> 
> 

Re: [OpenStack-Infra] Adding ARM64 cloud to infra

2018-01-15 Thread Ian Wienand

On 01/16/2018 12:11 AM, Frank Jansen wrote:

do you have any insight into the availability of a physical
environment for the ARM64 cloud?



I’m curious, as there may be a need for downstream testing, which I
would assume will want to make use of our existing OSP CI framework.


Sorry, not 100% sure what you mean here?  I think the theory is that
this would be an ARM64 based cloud attached to OpenStack infra and
thus run any jobs infra could ...

-i



Re: [OpenStack-Infra] Adding ARM64 cloud to infra

2018-01-15 Thread Dan Radez


On 01/12/2018 07:21 PM, Clark Boylan wrote:
> On Fri, Jan 12, 2018, at 3:27 PM, Dan Radez wrote:
>> fwiw
>> We've been building arm images for tripleo and posting them.
>> https://images.rdoproject.org/aarch64/pike/delorean/current-tripleo-rdo/
>>
>>
>> This uses delorean and overcloud build:
>>
>>     DIB_YUM_REPO_CONF+="/etc/yum.repos.d/delorean-deps-${OSVER}.repo
>> /etc/yum.repos.d/delorean-${OSVER}.repo /etc/yum.repos.d/ceph.repo
>> /etc/yum.repos.d/epel.repo /etc/yum.repos.d/radez.fedorapeople.repo" \
>>     openstack --debug overcloud image build \
>>     --config-file overcloud-aarch64.yaml \
>>     --config-file
>> /usr/share/openstack-tripleo-common/image-yaml/overcloud-images.yaml \
>>     --config-file
>> /usr/share/openstack-tripleo-common/image-yaml/overcloud-images-centos7.yaml
>>     # --config-file overcloud-images.yaml --config-file
>> overcloud-images-centos7.yaml --config-file aarch64-gumpf.yaml --image-name
>>     #openstack --debug overcloud image build --type overcloud-full
>> --node-arch aarch64
>>
>> It's not quite an orthodox RDO build; there are still a few things in
>> place that work around ARM-related packaging discrepancies or
>> x86-related configs. But we get good builds from it.
>>
>> I don't know the details of what overcloud build does to the dib builds,
>> though I don't believe these are whole disk images. I think the
>> overcloud and undercloud are root partition images; the kernel and
>> initrd are composed into the disk for the overcloud by OOO, and we
>> direct boot them to launch an undercloud VM.
>>
>> Happy to share details if anyone wants more.
>>
>> Radez
> Looking into this a bit more, `openstack overcloud image build` takes in the
> yaml config files you list and converts that into a forked diskimage-builder
> process to build an image. The centos7 dib element in particular seems to
> have aarch64 support via building on top of the upstream centos7 aarch64
> image. We do use the centos-minimal element for our images, though, as it
> allows us to do things like install glean. Chances are we still need to
> sort out UEFI and GPT for general dib use.
>
> Just to be sure there isn't any other magic going on can you provide the 
> contents of the overcloud-aarch64.yaml or point to where it can be found? It 
> doesn't appear to be in tripleo-common with the other configs.
>
> It is good to know that this is working in some cases though.
>
> Clark
>
The centos support that's there is because I added it. :)
Here's the overcloud-aarch64 file. Its purpose is just to switch the
arch for the two images built.
I think the packages reference was there because of a missing dep that
has since been resolved.

[stack@localhost ~]$ cat overcloud-aarch64.yaml
  disk_images:
  -
    imagename: overcloud-full
    arch: arm64
    packages:
  - os-collect-config
  -
    imagename: ironic-python-agent
    arch: arm64



Re: [OpenStack-Infra] Adding ARM64 cloud to infra

2018-01-15 Thread Ian Wienand

On 01/13/2018 01:26 PM, Ian Wienand wrote:

In terms of implementation, since you've already looked, I think
essentially diskimage_builder/block_device/level1.py create() will
need some moderate re-factoring to call a gpt implementation in
response to a gpt label, which could translate self.partitions into a
format for calling parted via our existing exec_sudo.



bringing up a sample config and test, then working backwards from what
calls we expect to see


I've started down this path with

 https://review.openstack.org/#/c/533490/

... still very wip

-i


Re: [OpenStack-Infra] Adding ARM64 cloud to infra

2018-01-12 Thread Ian Wienand
On 01/13/2018 05:01 AM, Jeremy Stanley wrote:
> On 2018-01-12 17:54:20 +0100 (+0100), Marcin Juszkiewicz wrote:
> [...]
>> UEFI expects GPT and DIB is completely unprepared for it. I made a
>> block-layout-arm64.yaml file and got it used, just to see a "sorry,
>> mbr expected" message.
> 
> I concur. It looks like the DIB team would welcome work toward GPT
> support based on the label entry at
> https://docs.openstack.org/diskimage-builder/latest/user_guide/building_an_image.html#module-partitioning
> and I find https://bugzilla.redhat.com/show_bug.cgi?id=1488557
> suggesting there's probably also interest within Red Hat for it as
> well.

Yes, it would be welcome.  So far it's been a bit of a "nice to have"
which has kept it low priority, but a concrete user could help our
focus here.

>> You have a whole Python class to create an MBR bit by bit when a few
>> calls to the 'sfdisk/gdisk' shell commands would do the same.
> 
> Well, the comments at
> http://git.openstack.org/cgit/openstack/diskimage-builder/tree/diskimage_builder/block_device/level1/mbr.py?id=5d5fa06#n28
> make some attempt at explaining why it doesn't just do that instead
> (at least as of ~7 months ago?).

I agree with the broad argument of this sentiment: that writing a
binary-level GPT implementation is out of scope for dib (and the
existing MBR one is, with hindsight, something I would have pushed
back on more).

dib-block-device being in python is a double-edged sword -- on the one
hand it's harder to drop in a few lines like in shell, but on the
other hand it has proper data structures, unit testing, logging and
config-reading abilities -- things that all are rather ugly, or get
lost with shell.  The code is not perfect, but doing more things like
[1,2] to enhance and better use libraries will help everyone (and
notice that's making it easier to translate directly to parted, no
coincidence :)

The GPL linkage issue, as described in the code, prevents us doing the
obvious thing and calling directly via python.  But I believe we will
be OK just making system() calls to parted to configure GPT;
especially given the clearly modular nature of it all.

In terms of implementation, since you've already looked, I think
essentially diskimage_builder/block_device/level1.py create() will
need some moderate re-factoring to call a gpt implementation in
response to a gpt label, which could translate self.partitions into a
format for calling parted via our existing exec_sudo.

This is highly amenable to a test-driven development scenario as we
have some pretty good existing unit tests for various parts of the
partitioning to template from (for example, tests/test_lvm.py).  So
bringing up a sample config and test, then working backwards from what
calls we expect to see is probably a great way to start.  Even if you
just want to provide some (pseudo)shell examples based on your
experience and any thoughts on the yaml config files it would be
helpful.

--

I try to run the meetings described in [3] if there is anything on the
agenda.  The cadence is probably not appropriate for this; we can do
much better via mail here, or in #openstack-dib on IRC.  I hope we can
collaborate in a positive way; as I mentioned I think as a first step
we'd be best working backwards from what we expect to see in terms of
configuration, partition layout and parted calls.

Thanks,

-i

[1] https://review.openstack.org/#/c/503574/
[2] https://review.openstack.org/#/c/503572/
[3] https://wiki.openstack.org/wiki/Meetings/diskimage-builder


Re: [OpenStack-Infra] Adding ARM64 cloud to infra

2018-01-12 Thread Clark Boylan
On Fri, Jan 12, 2018, at 3:27 PM, Dan Radez wrote:
> fwiw
> We've been building arm images for tripleo and posting them.
> https://images.rdoproject.org/aarch64/pike/delorean/current-tripleo-rdo/
> 
> 
> This uses delorean and overcloud build:
> 
>     DIB_YUM_REPO_CONF+="/etc/yum.repos.d/delorean-deps-${OSVER}.repo
> /etc/yum.repos.d/delorean-${OSVER}.repo /etc/yum.repos.d/ceph.repo
> /etc/yum.repos.d/epel.repo /etc/yum.repos.d/radez.fedorapeople.repo" \
>     openstack --debug overcloud image build \
>     --config-file overcloud-aarch64.yaml \
>     --config-file
> /usr/share/openstack-tripleo-common/image-yaml/overcloud-images.yaml \
>     --config-file
> /usr/share/openstack-tripleo-common/image-yaml/overcloud-images-centos7.yaml
>     # --config-file overcloud-images.yaml --config-file
> overcloud-images-centos7.yaml --config-file aarch64-gumpf.yaml --image-name
>     #openstack --debug overcloud image build --type overcloud-full
> --node-arch aarch64
> 
> It's not quite an orthodox RDO build; there are still a few things in
> place that work around ARM-related packaging discrepancies or
> x86-related configs. But we get good builds from it.
> 
> I don't know the details of what overcloud build does to the dib builds,
> though I don't believe these are whole disk images. I think the
> overcloud and undercloud are root partition images; the kernel and
> initrd are composed into the disk for the overcloud by OOO, and we
> direct boot them to launch an undercloud VM.
> 
> Happy to share details if anyone wants more.
> 
> Radez

Looking into this a bit more, `openstack overcloud image build` takes in the
yaml config files you list and converts that into a forked diskimage-builder
process to build an image. The centos7 dib element in particular seems to have
aarch64 support via building on top of the upstream centos7 aarch64 image. We
do use the centos-minimal element for our images, though, as it allows us to do
things like install glean. Chances are we still need to sort out UEFI and
GPT for general dib use.

Just to be sure there isn't any other magic going on can you provide the 
contents of the overcloud-aarch64.yaml or point to where it can be found? It 
doesn't appear to be in tripleo-common with the other configs.

It is good to know that this is working in some cases though.

Clark


Re: [OpenStack-Infra] Adding ARM64 cloud to infra

2018-01-12 Thread Dan Radez
fwiw
We've been building arm images for tripleo and posting them.
https://images.rdoproject.org/aarch64/pike/delorean/current-tripleo-rdo/


This uses delorean and overcloud build:

    DIB_YUM_REPO_CONF+="/etc/yum.repos.d/delorean-deps-${OSVER}.repo
/etc/yum.repos.d/delorean-${OSVER}.repo /etc/yum.repos.d/ceph.repo
/etc/yum.repos.d/epel.repo /etc/yum.repos.d/radez.fedorapeople.repo" \
    openstack --debug overcloud image build \
    --config-file overcloud-aarch64.yaml \
    --config-file
/usr/share/openstack-tripleo-common/image-yaml/overcloud-images.yaml \
    --config-file
/usr/share/openstack-tripleo-common/image-yaml/overcloud-images-centos7.yaml
    # --config-file overcloud-images.yaml --config-file
overcloud-images-centos7.yaml --config-file aarch64-gumpf.yaml --image-name
    #openstack --debug overcloud image build --type overcloud-full
--node-arch aarch64

It's not quite an orthodox RDO build; there are still a few things in
place that work around ARM-related packaging discrepancies or x86-related
configs. But we get good builds from it.

I don't know the details of what overcloud build does to the dib builds,
though I don't believe these are whole disk images. I think the
overcloud and undercloud are root partition images; the kernel and
initrd are composed into the disk for the overcloud by OOO, and we
direct boot them to launch an undercloud VM.

Happy to share details if anyone wants more.

Radez



On 01/12/2018 09:59 AM, Jeremy Stanley wrote:
> On 2018-01-12 11:17:33 +0100 (+0100), Marcin Juszkiewicz wrote:
> [...]
>> I am aware that you like to build disk images on your own but have
>> you considered using virt-install with generated preseed/kickstart
>> files? It would move several arch related things (like bootloader)
>> to be handled by distribution rules instead of handling them again
>> in code.
> [...]
>
> We pre-generate and upload images via Glance because it allows us to
> upload the same image to all providers (modulo processor
> architecture in this case obviously). Once we have more than one
> arm64 deployment to integrate, being able to know that we're
> uploading identical images to all of them will be useful from a
> consistency standpoint. Honestly, getting EFI bits into DIB is
> probably no harder than writing a new nodepool builder backend to do
> remote virt-install, and would be of use to a lot more people when
> implemented.
>
> If you look in
> http://git.openstack.org/cgit/openstack/diskimage-builder/tree/diskimage_builder/elements/bootloader/finalise.d/50-bootloader
> there's support for setting up ppc64 PReP boot partitions... I don't
> expect getting correct EFI partition creation integrated would be
> much tougher? That said, it's something the DIB maintainers will
> want to weigh in on obviously.
>
>


Re: [OpenStack-Infra] Adding ARM64 cloud to infra

2018-01-12 Thread Jeremy Stanley
On 2018-01-12 17:54:20 +0100 (+0100), Marcin Juszkiewicz wrote:
[...]
> UEFI expects GPT and DIB is completely unprepared for it. I made a
> block-layout-arm64.yaml file and got it used, just to see a "sorry,
> mbr expected" message.

I concur. It looks like the DIB team would welcome work toward GPT
support based on the label entry at
https://docs.openstack.org/diskimage-builder/latest/user_guide/building_an_image.html#module-partitioning
and I find https://bugzilla.redhat.com/show_bug.cgi?id=1488557
suggesting there's probably also interest within Red Hat for it as
well.

> You have a whole Python class to create an MBR bit by bit when a few
> calls to the 'sfdisk/gdisk' shell commands would do the same.

Well, the comments at
http://git.openstack.org/cgit/openstack/diskimage-builder/tree/diskimage_builder/block_device/level1/mbr.py?id=5d5fa06#n28
make some attempt at explaining why it doesn't just do that instead
(at least as of ~7 months ago?). Per the subsequent discussion in
#openstack-dib I don't know whether there is also work underway to
solve the identified deficiencies in sfdisk and gparted but those
more directly involved in DIB development may have answers when
they're around (which they may not be at this point in the weekend).
-- 
Jeremy Stanley



Re: [OpenStack-Infra] Adding ARM64 cloud to infra

2018-01-12 Thread Marcin Juszkiewicz
On 12.01.2018 at 16:54, Jeremy Stanley wrote:
> On 2018-01-12 16:06:03 +0100 (+0100), Marcin Juszkiewicz wrote:
>> Or someone will try to target q35/uefi emulation instead of i440fx 
>> one on x86 alone.
> 
> I'm curious why we'd need emulation there...

Developers around x86 virtualisation live in a world where a VM is like a PC
from the 90s (the i440fx qemu model). You boot a BIOS which reads the
bootloader from the 1st sector of your storage, you have 32 PCI slots with
hotplug, etc. All you need is a VM + a disk image with one partition using
MBR partitioning.

If you want something which resembles Arm64 (but is still x86), you switch
to the Q35 qemu model and enable UEFI as the bootloader. And all your
existing one-partition disk images (which worked with the previous model)
are useless. Your hotplug options are limited to the number of PCIe root
ports defined in the VM (usually 2-3). All your disk images need to be
converted to GPT partitioning, and you need an ESP (EFI System Partition)
with an EFI bootloader stored on it.

But (nearly) no one in the x86 world goes for the Q35 model. Mostly because
it requires more work and because users will ask why they cannot add a 6th
storage device and an 11th network card. And in the arm64 world we do not
have such luck.

That's why I mentioned q35.

>> If I disable installing grub I can build a useless one-partition
>> disk image on arm64. Nothing will boot it.
> 
> See my other reply on this thread with a link to the bootloader 
> element. It seems like it's got support implemented for 
> multi-partition images needed by 64-bit PowerPC systems, so not 
> terribly dissimilar.

Firmware used by 64-bit Power systems accepts MBR-partitioned storage.

UEFI expects GPT, and DIB is completely unprepared for it. I made a
block-layout-arm64.yaml file and got it used, just to see a "sorry, mbr
expected" message. You have a whole Python class to create an MBR bit by bit
when a few calls to the 'sfdisk/gdisk' shell commands would do the same.



Re: [OpenStack-Infra] Adding ARM64 cloud to infra

2018-01-12 Thread Jeremy Stanley
On 2018-01-12 16:06:03 +0100 (+0100), Marcin Juszkiewicz wrote:
> Or someone will try to target q35/uefi emulation instead of i440fx
> one on x86 alone.

I'm curious why we'd need emulation there... the expectation is that
DIB is running on a native 64-bit ARM system (under a hypervisor,
but still not cross-architecture). The reason we'll be deploying a
Nodepool builder server in the environment is so that we don't need
to worry about cross-building an arm64 rootfs and boot partition.

> I am tired of yet another disk image building projects.
> All think they are special, all have same assumptions. btdt.

While I can't disagree with regard to the proliferation of disk
image builders, this one has existed since July of 2012 and sees
extensive use throughout OpenStack. At the time it came into being,
there weren't a lot of good options for cross-distro orchestration
of image builds (e.g., one tool which could build Debian images on
CentOS and CentOS images on Debian).

> If I disable installing grub I can build useless one partition disk
> image on arm64. Nothing will boot it.

See my other reply on this thread with a link to the bootloader
element. It seems like it's got support implemented for
multi-partition images needed by 64-bit PowerPC systems, so not
terribly dissimilar.
-- 
Jeremy Stanley



Re: [OpenStack-Infra] Adding ARM64 cloud to infra

2018-01-12 Thread Gema Gomez


On 12/01/18 00:28, Clark Boylan wrote:
> On Wed, Jan 10, 2018, at 1:41 AM, Gema Gomez wrote:
>> Hi all,
>>
>> Linaro would like to add a new cloud to infra so that we can run tests
>> on ARM64 going forward. This discussion has been ongoing for the good
>> part of a year, apologies that it took us so long to get to a point
>> where we feel comfortable going ahead in terms of stability of
>> infrastructure and functionality.
>>
>> My team has been taking care of making OpenStack as multiarch as
>> possible and making the experience of using an ARM64 cloud as close to
>> using a traditional amd64 one as possible. We have the Linaro Developer
>> Cloud program, which consists of a set of clouds that run on ARM64
>> hardware donated by the Linaro Enterprise Group members[1] and dedicated
>> to enablement/testing of upstream projects. Until recently our clouds
>> were run by an engineering team and were used to do enablement of
>> different projects and bug fixes of OpenStack, now we have a dedicated
>> admin and we are ready to take it a step forward. The clouds are
>> currently running OpenStack Newton but we are going to be moving to
>> Queens as soon as the release is out. Kolla has volunteered to be the
>> first project for this experiment, they have been pushing us to start
>> doing CI on our images so they also feel more comfortable accepting our
>> changes. We will welcome any other project that wants to be part of this
>> experiment, but we'll focus our engineering efforts initially on
>> enabling Kolla CI.
>>
>> After some preliminary discussion with fungi and inc0, we are going to
>> start small and grow from there. The initial plan is to add 2 projects:
>>
>> 1. Control-plane project that will host a nodepool builder with 8 vCPUs,
>> 8 GB RAM, 1TB storage on a Cinder volume for the image building scratch
>> space. A cache server with similar specs + 200GB on a cinder volume for
>> AFS and Apache proxy caches. They will have a routable IP address.
>>
>> 2. Jobs project, we'll have capacity for 6 test instances initially and
>> after initial assessment grow it as required (8 vCPUs/8 GB RAM, 80GB
>> storage, 1 routable IP each).
>>
>> Is there anything else we are missing for the initial trial? Any
>> questions/concerns before we start? I will try to have presence on the
>> infra weekly calls/IRC channel or have someone from my team on them
>> going forward.
> 
> This plan looks good to me. The one question I had on IRC (and putting it
> here for historical reasons) is whether or not the Andrew File System (AFS)
> will build and run on arm64. OpenAFS is not in the Linux kernel tree, so this
> may not work. The good news is mtreinish reports that after a quick test on
> some of his hardware AFS was working.
> 
>>
>> In practical terms, once we've created the resources, is there a guide
>> to getting the infra bits in place for it? Who to give the credentials
>> to/etc?
> 
> New clouds happen infrequently enough and require a reasonable amount of 
> communication to get going so I don't think we have written down a guide 
> beyond what we have on the donating resources page [2].
> 
> Typically what happens is we'll have an infra root act as the contact point 
> to set things up, you'll provide them with credentials via email (or whatever 
> communication system is most convenient) then they will immediately change 
> the password(s). It is also helpful if we can get a contact individual for 
> the cloud side and we'll record that in our passwords file so that any one of 
> our infra roots knows who to talk to should the need arise.
> 
> Once the initial credential exchange happens the next step is for that infra 
> root to double check quotas and get the mirror host up and running as well as 
> image builder (and images) built. Once that is done you should be ready to 
> push changes to projects that add jobs using the new nodepool labels 
> (something like ubuntu-xenial-arm64).

Sounds good. Quick update, we are working on applying a patch
(https://review.openstack.org/#/c/489951/) to our Newton deployment so
that uploaded images do not require any extra parameters. Once that is
done we can give infra credentials. Who will be our infra counterpart
for that?

We are also happy to add engineers to work on any fixes required to make
the infra tools work as seamlessly as possible with ARM64.

Cheers!
Gema

>>
>> Thanks!
>> Gema
> 
> Thank you! this is exciting.
> 
>>
>> [1] https://www.linaro.org/groups/leg/
> [2] https://docs.openstack.org/infra/system-config/contribute-cloud.html
> 
> Clark
> 

-- 
Gema Gomez-Solano
Tech Lead, SDI
Linaro Ltd
IRC: gema@#linaro on irc.freenode.net


Re: [OpenStack-Infra] Adding ARM64 cloud to infra

2018-01-12 Thread Gema Gomez


On 12/01/18 15:49, Paul Belanger wrote:
> On Fri, Jan 12, 2018 at 11:17:33AM +0100, Marcin Juszkiewicz wrote:
>> On 12.01.2018 at 01:09, Ian Wienand wrote:
>>> On 01/10/2018 08:41 PM, Gema Gomez wrote:
 1. Control-plane project that will host a nodepool builder with 8 vCPUs,
 8 GB RAM, 1TB storage on a Cinder volume for the image building scratch
 space.
>>> Does this mean you're planning on using diskimage-builder to produce
>>> the images to run tests on?  I've seen occasional ARM things come by,
>>> but of course diskimage-builder doesn't have CI for it (yet :) so it's
>>> status is probably "unknown".
>>
>> I had a quick look at the diskimage-builder tool.
>>
>> It looks to me that you always build an MBR-based image with one partition.
>> This will have to change, as AArch64 is a UEFI-based platform (both
>> baremetal and VM), so the disk needs to use GPT partitioning and an EFI
>> System Partition needs to be present (with a grub-efi binary on it).
>>
> It is often the case when bringing new images online that some changes to DIB
> will be required to support them. I suspect somebody with access to AArch64
> hardware will first need to run build-image.sh[1] and paste the build.log.
> That will build an image locally for you using our DIB elements.
> 
> [1] 
> http://git.openstack.org/cgit/openstack-infra/project-config/tree/tools/build-image.sh

Yep, that won't be an issue. Will do that on Monday.

>> I am aware that you like to build disk images on your own but have you
>> considered using virt-install with generated preseed/kickstart files? It
>> would move several arch related things (like bootloader) to be handled
>> by distribution rules instead of handling them again in code.
>>
> I don't believe we want to look at using a new tool to build all our images;
> switching to virt-install would be a large change. There are reasons why we
> build images from scratch, and I don't believe switching to virt-install
> would help with that.
>>
>> Sent a patch to make it choose proper grub package on aarch64.
>>

-- 
Gema Gomez-Solano
Tech Lead, SDI
Linaro Ltd
IRC: gema@#linaro on irc.freenode.net


Re: [OpenStack-Infra] Adding ARM64 cloud to infra

2018-01-12 Thread Marcin Juszkiewicz
On 12.01.2018 at 15:49, Paul Belanger wrote:
> On Fri, Jan 12, 2018 at 11:17:33AM +0100, Marcin Juszkiewicz wrote:
>> On 12.01.2018 at 01:09, Ian Wienand wrote:
>>> On 01/10/2018 08:41 PM, Gema Gomez wrote:
 1. Control-plane project that will host a nodepool builder with 8 vCPUs,
 8 GB RAM, 1TB storage on a Cinder volume for the image building scratch
 space.
>>> Does this mean you're planning on using diskimage-builder to produce
>>> the images to run tests on?  I've seen occasional ARM things come by,
>>> but of course diskimage-builder doesn't have CI for it (yet :) so it's
>>> status is probably "unknown".
>>
>> I had a quick look at the diskimage-builder tool.
>>
>> It looks to me that you always build an MBR-based image with one partition.
>> This will have to change, as AArch64 is a UEFI-based platform (both
>> baremetal and VM), so the disk needs to use GPT partitioning and an EFI
>> System Partition needs to be present (with a grub-efi binary on it).
>>
> It is often the case when bringing new images online that some changes to DIB
> will be required to support them. I suspect somebody with access to AArch64
> hardware will first need to run build-image.sh[1] and paste the build.log.
> That will build an image locally for you using our DIB elements.

Or someone will try to target q35/UEFI emulation instead of the i440fx one
on x86 alone. I am tired of yet more disk image building projects.
All think they are special, all have the same assumptions. Been there, done
that.

If I disable installing grub I can build a useless one-partition disk
image on arm64. Nothing will boot it.


Re: [OpenStack-Infra] Adding ARM64 cloud to infra

2018-01-12 Thread Jeremy Stanley
On 2018-01-12 11:17:33 +0100 (+0100), Marcin Juszkiewicz wrote:
[...]
> I am aware that you like to build disk images on your own but have
> you considered using virt-install with generated preseed/kickstart
> files? It would move several arch related things (like bootloader)
> to be handled by distribution rules instead of handling them again
> in code.
[...]

We pre-generate and upload images via Glance because it allows us to
upload the same image to all providers (modulo processor
architecture in this case obviously). Once we have more than one
arm64 deployment to integrate, being able to know that we're
uploading identical images to all of them will be useful from a
consistency standpoint. Honestly, getting EFI bits into DIB is
probably no harder than writing a new nodepool builder backend to do
remote virt-install, and would be of use to a lot more people when
implemented.

If you look in
http://git.openstack.org/cgit/openstack/diskimage-builder/tree/diskimage_builder/elements/bootloader/finalise.d/50-bootloader
there's support for setting up ppc64 PReP boot partitions... I don't
expect getting correct EFI partition creation integrated would be
much tougher? That said, it's something the DIB maintainers will
want to weigh in on obviously.
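
For a sense of what is being proposed, such a change would presumably land in
DIB's block-device configuration; the following is a purely hypothetical
sketch (the key names are illustrative, not a confirmed DIB schema):

```shell
# Write a hypothetical block-device layout file; none of these keys
# are guaranteed to match what DIB actually accepts for GPT/ESP.
cat > block-device-efi.yaml <<'EOF'
- local_loop:
    name: image0
- partitioning:
    base: image0
    label: gpt              # instead of the default mbr
    partitions:
      - name: ESP           # EFI System Partition holding grub-efi
        type: EF00
        size: 256MiB
      - name: root
        size: 100%
EOF
```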
-- 
Jeremy Stanley



Re: [OpenStack-Infra] Adding ARM64 cloud to infra

2018-01-12 Thread Paul Belanger
On Fri, Jan 12, 2018 at 11:17:33AM +0100, Marcin Juszkiewicz wrote:
> On 12.01.2018 at 01:09, Ian Wienand wrote:
> > On 01/10/2018 08:41 PM, Gema Gomez wrote:
> >> 1. Control-plane project that will host a nodepool builder with 8 vCPUs,
> >> 8 GB RAM, 1TB storage on a Cinder volume for the image building scratch
> >> space.
> > Does this mean you're planning on using diskimage-builder to produce
> > the images to run tests on?  I've seen occasional ARM things come by,
> > but of course diskimage-builder doesn't have CI for it (yet :) so it's
> > status is probably "unknown".
> 
> I had a quick look at the diskimage-builder tool.
> 
> It looks to me that you always build an MBR-based image with one partition.
> This will have to change, as AArch64 is a UEFI-based platform (both
> baremetal and VM), so the disk needs to use GPT partitioning and an EFI
> System Partition needs to be present (with a grub-efi binary on it).
> 
It is often the case when bringing new images online that some changes to DIB
will be required to support them. I suspect somebody with access to AArch64
hardware will first need to run build-image.sh[1] and paste the build.log. That
will build an image locally for you using our DIB elements.

[1] 
http://git.openstack.org/cgit/openstack-infra/project-config/tree/tools/build-image.sh
> I am aware that you like to build disk images on your own but have you
> considered using virt-install with generated preseed/kickstart files? It
> would move several arch related things (like bootloader) to be handled
> by distribution rules instead of handling them again in code.
> 
I don't believe we want to look at using a new tool to build all our images;
switching to virt-install would be a large change. There are reasons why we
build images from scratch, and I don't believe switching to virt-install would
help with that.
> 
> Sent a patch to make it choose proper grub package on aarch64.
> 

Re: [OpenStack-Infra] Adding ARM64 cloud to infra

2018-01-12 Thread Marcin Juszkiewicz
On 12.01.2018 at 01:09, Ian Wienand wrote:
> On 01/10/2018 08:41 PM, Gema Gomez wrote:
>> 1. Control-plane project that will host a nodepool builder with 8 vCPUs,
>> 8 GB RAM, 1TB storage on a Cinder volume for the image building scratch
>> space.
> Does this mean you're planning on using diskimage-builder to produce
> the images to run tests on?  I've seen occasional ARM things come by,
> but of course diskimage-builder doesn't have CI for it (yet :) so it's
> status is probably "unknown".

I had a quick look at the diskimage-builder tool.

It looks to me that you always build an MBR-based image with one partition.
This will have to change, as AArch64 is a UEFI-based platform (both
baremetal and VM), so the disk needs to use GPT partitioning and an EFI
System Partition needs to be present (with a grub-efi binary on it).

I am aware that you like to build disk images on your own but have you
considered using virt-install with generated preseed/kickstart files? It
would move several arch related things (like bootloader) to be handled
by distribution rules instead of handling them again in code.


Sent a patch to make it choose proper grub package on aarch64.


Re: [OpenStack-Infra] Adding ARM64 cloud to infra

2018-01-11 Thread Jeremy Stanley
On 2018-01-12 11:09:10 +1100 (+1100), Ian Wienand wrote:
[...]
> I've seen occasional ARM things come by, but of course
> diskimage-builder doesn't have CI for it (yet :) so it's status is
> probably "unknown".

A problem which will solve itself! ;)
-- 
Jeremy Stanley



Re: [OpenStack-Infra] Adding ARM64 cloud to infra

2018-01-11 Thread Ian Wienand

On 01/10/2018 08:41 PM, Gema Gomez wrote:

1. Control-plane project that will host a nodepool builder with 8 vCPUs,
8 GB RAM, 1TB storage on a Cinder volume for the image building scratch
space.

Does this mean you're planning on using diskimage-builder to produce
the images to run tests on?  I've seen occasional ARM things come by,
but of course diskimage-builder doesn't have CI for it (yet :) so it's
status is probably "unknown".

-i


Re: [OpenStack-Infra] Adding ARM64 cloud to infra

2018-01-11 Thread Clark Boylan
On Wed, Jan 10, 2018, at 1:41 AM, Gema Gomez wrote:
> Hi all,
> 
> Linaro would like to add a new cloud to infra so that we can run tests
> on ARM64 going forward. This discussion has been ongoing for the good
> part of a year, apologies that it took us so long to get to a point
> where we feel comfortable going ahead in terms of stability of
> infrastructure and functionality.
> 
> My team has been taking care of making OpenStack as multiarch as
> possible and making the experience of using an ARM64 cloud as close to
> using a traditional amd64 one as possible. We have the Linaro Developer
> Cloud program, which consists of a set of clouds that run on ARM64
> hardware donated by the Linaro Enterprise Group members[1] and dedicated
> to enablement/testing of upstream projects. Until recently our clouds
> were run by an engineering team and were used to do enablement of
> different projects and bug fixes of OpenStack, now we have a dedicated
> admin and we are ready to take it a step forward. The clouds are
> currently running OpenStack Newton but we are going to be moving to
> Queens as soon as the release is out. Kolla has volunteered to be the
> first project for this experiment, they have been pushing us to start
> doing CI on our images so they also feel more comfortable accepting our
> changes. We will welcome any other project that wants to be part of this
> experiment, but we'll focus our engineering efforts initially on
> enabling Kolla CI.
> 
> After some preliminary discussion with fungi and inc0, we are going to
> start small and grow from there. The initial plan is to add 2 projects:
> 
> 1. Control-plane project that will host a nodepool builder with 8 vCPUs,
> 8 GB RAM, 1TB storage on a Cinder volume for the image building scratch
> space. A cache server with similar specs + 200GB on a cinder volume for
> AFS and Apache proxy caches. They will have a routable IP address.
> 
> 2. Jobs project, we'll have capacity for 6 test instances initially and
> after initial assessment grow it as required (8 vCPUs/8 GB RAM, 80GB
> storage, 1 routable IP each).
> 
> Is there anything else we are missing for the initial trial? Any
> questions/concerns before we start? I will try to have presence on the
> infra weekly calls/IRC channel or have someone from my team on them
> going forward.

This plan looks good to me. The one question I had on IRC (and putting it here
for historical reasons) is whether or not the Andrew File System (AFS) will
build and run on arm64. OpenAFS is not in the Linux kernel tree, so this may
not work. The good news is mtreinish reports that after a quick test on some of
his hardware AFS was working.

> 
> In practical terms, once we've created the resources, is there a guide
> to getting the infra bits in place for it? Who to give the credentials
> to/etc?

New clouds happen infrequently enough and require a reasonable amount of 
communication to get going so I don't think we have written down a guide beyond 
what we have on the donating resources page [2].

Typically what happens is we'll have an infra root act as the contact point to 
set things up, you'll provide them with credentials via email (or whatever 
communication system is most convenient) then they will immediately change the 
password(s). It is also helpful if we can get a contact individual for the 
cloud side and we'll record that in our passwords file so that any one of our 
infra roots knows who to talk to should the need arise.

Once the initial credential exchange happens the next step is for that infra 
root to double check quotas and get the mirror host up and running as well as 
image builder (and images) built. Once that is done you should be ready to push 
changes to projects that add jobs using the new nodepool labels (something like 
ubuntu-xenial-arm64).
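
The final step described above would look roughly like the usual Nodepool
wiring; a hedged sketch with assumed names (the label, provider, and flavor
names are placeholders, not the real configuration):

```shell
# Hypothetical nodepool.yaml fragment: build an arm64 image, upload it
# to the donated cloud, and expose it under an arch-specific label.
cat > nodepool-arm64.yaml <<'EOF'
labels:
  - name: ubuntu-xenial-arm64
    min-ready: 1
providers:
  - name: linaro-cloud            # placeholder provider name
    diskimages:
      - name: ubuntu-xenial-arm64
    pools:
      - name: main
        labels:
          - name: ubuntu-xenial-arm64
            diskimage: ubuntu-xenial-arm64
            flavor-name: arm64-standard   # placeholder flavor
EOF
```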
> 
> Thanks!
> Gema

Thank you! this is exciting.

> 
> [1] https://www.linaro.org/groups/leg/
[2] https://docs.openstack.org/infra/system-config/contribute-cloud.html

Clark


Re: [OpenStack-Infra] Adding ARM64 cloud to infra

2018-01-10 Thread Jeremy Stanley
On 2018-01-10 19:34:07 +0100 (+0100), Marcin Juszkiewicz wrote:
> On 10.01.2018 at 18:28, Jeremy Stanley wrote:
> > On 2018-01-10 09:06:43 -0800 (-0800), Michał Jastrzębski wrote:
> >> So it's my understanding (which is limited at best) that zuul
> >> currently doesn't support something like "this job has to run on
> >> nodepool X", which would be necessary. We might need to add some sort
> >> of metadata for nodepools and be able to specify in zuul job that
> >> "this job has to land on node with metadata X", like architecture:
> >> arm64.
> 
> > This shouldn't be an issue since our node types are tightly coupled
> > to the images they boot, and the arm64 architecture images will I'm
> > sure get distinct names (as an offshoot of this though, we may end
> > up extending the names of our current amd64 images to embed
> > processor architecture names for consistency, but this is one of the
> > many points we'll need to debate).
> 
> Docker images can be multiarch so same names are used on different
> architectures.

Right, these are virtual machine images and not Docker images, but
regardless we _could_ do something similar. This would however
require some reworking of Nodepool and Zuul to become processor
architecture aware so that it could be tracked independently of node
type and image build.
-- 
Jeremy Stanley



Re: [OpenStack-Infra] Adding ARM64 cloud to infra

2018-01-10 Thread Marcin Juszkiewicz
On 10.01.2018 at 18:28, Jeremy Stanley wrote:
> On 2018-01-10 09:06:43 -0800 (-0800), Michał Jastrzębski wrote:
>> So it's my understanding (which is limited at best) that zuul
>> currently doesn't support something like "this job has to run on
>> nodepool X", which would be necessary. We might need to add some sort
>> of metadata for nodepools and be able to specify in zuul job that
>> "this job has to land on node with metadata X", like architecture:
>> arm64.

> This shouldn't be an issue since our node types are tightly coupled
> to the images they boot, and the arm64 architecture images will I'm
> sure get distinct names (as an offshoot of this though, we may end
> up extending the names of our current amd64 images to embed
> processor architecture names for consistency, but this is one of the
> many points we'll need to debate).

Docker images can be multiarch, so the same names are used on different
architectures.


Re: [OpenStack-Infra] Adding ARM64 cloud to infra

2018-01-10 Thread Jeremy Stanley
On 2018-01-10 09:06:43 -0800 (-0800), Michał Jastrzębski wrote:
> So it's my understanding (which is limited at best) that zuul
> currently doesn't support something like "this job has to run on
> nodepool X", which would be necessary. We might need to add some sort
> of metadata for nodepools and be able to specify in zuul job that
> "this job has to land on node with metadata X", like architecture:
> arm64.

This shouldn't be an issue since our node types are tightly coupled
to the images they boot, and the arm64 architecture images will I'm
sure get distinct names (as an offshoot of this though, we may end
up extending the names of our current amd64 images to embed
processor architecture names for consistency, but this is one of the
many points we'll need to debate).
-- 
Jeremy Stanley



Re: [OpenStack-Infra] Adding ARM64 cloud to infra

2018-01-10 Thread Michał Jastrzębski
Thanks Gema!

So it's my understanding (which is limited at best) that Zuul
currently doesn't support something like "this job has to run on
nodepool X", which would be necessary. We might need to add some sort
of metadata for nodepools and be able to specify in a Zuul job that
"this job has to land on a node with metadata X", like architecture:
arm64.
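
In Zuul v3 terms the targeting can be expressed purely through labels, as
Jeremy notes elsewhere in the thread: a job requests a nodeset whose label is
only offered by the arm64 provider. A hedged sketch with assumed names (the
label and job names are placeholders):

```shell
# Hypothetical Zuul config fragment: the job lands on arm64 simply
# because only the arm64 provider offers this label.
cat > zuul-arm64-job.yaml <<'EOF'
- nodeset:
    name: ubuntu-xenial-arm64
    nodes:
      - name: primary
        label: ubuntu-xenial-arm64
- job:
    name: kolla-build-arm64       # placeholder job name
    nodeset: ubuntu-xenial-arm64
EOF
```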

On 10 January 2018 at 01:41, Gema Gomez  wrote:
> Hi all,
>
> Linaro would like to add a new cloud to infra so that we can run tests
> on ARM64 going forward. This discussion has been ongoing for the good
> part of a year, apologies that it took us so long to get to a point
> where we feel comfortable going ahead in terms of stability of
> infrastructure and functionality.
>
> My team has been taking care of making OpenStack as multiarch as
> possible and making the experience of using an ARM64 cloud as close to
> using a traditional amd64 one as possible. We have the Linaro Developer
> Cloud program, which consists of a set of clouds that run on ARM64
> hardware donated by the Linaro Enterprise Group members[1] and dedicated
> to enablement/testing of upstream projects. Until recently our clouds
> were run by an engineering team and were used to do enablement of
> different projects and bug fixes of OpenStack, now we have a dedicated
> admin and we are ready to take it a step forward. The clouds are
> currently running OpenStack Newton but we are going to be moving to
> Queens as soon as the release is out. Kolla has volunteered to be the
> first project for this experiment, they have been pushing us to start
> doing CI on our images so they also feel more comfortable accepting our
> changes. We will welcome any other project that wants to be part of this
> experiment, but we'll focus our engineering efforts initially on
> enabling Kolla CI.
>
> After some preliminary discussion with fungi and inc0, we are going to
> start small and grow from there. The initial plan is to add 2 projects:
>
> 1. Control-plane project that will host a nodepool builder with 8 vCPUs,
> 8 GB RAM, 1TB storage on a Cinder volume for the image building scratch
> space. A cache server with similar specs + 200GB on a cinder volume for
> AFS and Apache proxy caches. They will have a routable IP address.
>
> 2. Jobs project, we'll have capacity for 6 test instances initially and
> after initial assessment grow it as required (8 vCPUs/8 GB RAM, 80GB
> storage, 1 routable IP each).
>
> Is there anything else we are missing for the initial trial? Any
> questions/concerns before we start? I will try to have presence on the
> infra weekly calls/IRC channel or have someone from my team on them
> going forward.
>
> In practical terms, once we've created the resources, is there a guide
> to getting the infra bits in place for it? Who to give the credentials
> to/etc?
>
> Thanks!
> Gema
>
> [1] https://www.linaro.org/groups/leg/
> --
> Gema Gomez-Solano
> Tech Lead, SDI
> Linaro Ltd
> IRC: gema@#linaro on irc.freenode.net
>
