Re: [foreman-dev] Merge Permissions for Katello/katello

2017-10-30 Thread Chris Roberts
+1. Andrew knows a lot about Katello and has made a ton of PRs to both the
code and the tests.

On Friday, October 27, 2017 at 10:25:22 AM UTC-4, Andrew Kofink wrote:
>
> Hello,
>
> I'm not sure why I waited so long to ask, but in light of other requests, 
> I submit my own. Here are my merged PRs, and here are other PRs I've
> helped review.
>
> Do you consent to giving me the mighty power of the Green Button?
>
> Andrew
>
> -- 
> Andrew Kofink
> ako...@redhat.com 
> IRC: akofink
> Software Engineer
> Red Hat Satellite
>



Re: [foreman-dev] Merge permission for theforeman/foreman-ansible-modules

2017-10-30 Thread Andrew Kofink
+1 from me! We have benefited greatly from Matthias' contributions in
foreman-ansible-modules. If you'd like to see some of his work, here are
his merged PRs, and here are all the PRs he has helped to review.

On Mon, Oct 30, 2017 at 11:34 AM, Matthias Dellweg  wrote:

> Hello,
> I was just encouraged to ask for merge/push permission in
> theforeman/foreman-ansible-modules.
> I have contributed to this repository almost since the beginning (yes,
> it's a very young one) and did a handful of reviews that led to
> constructive discussions. The collaboration with Andrew, Eric and Evgeni
> has always been very fruitful from my perspective. Thanks to co-inventing
> the DRY glue layer 'cement' (with fobheb), GitHub classifies me as the top
> garbage collector, with by far the most removed lines.
>
> I ask you kindly to vote whether I shall be entrusted with the power of
> the merge.
> Thanks for considering,
>   Matthias
>



-- 
Andrew Kofink
akof...@redhat.com
IRC: akofink
Software Engineer
Red Hat Satellite



[foreman-dev] Merge permission for theforeman/foreman-ansible-modules

2017-10-30 Thread Matthias Dellweg
Hello,
I was just encouraged to ask for merge/push permission in
theforeman/foreman-ansible-modules.
I have contributed to this repository almost since the beginning (yes, it's a
very young one) and did a handful of reviews that led to constructive
discussions. The collaboration with Andrew, Eric and Evgeni has always been
very fruitful from my perspective. Thanks to co-inventing the DRY glue layer
'cement' (with fobheb), GitHub classifies me as the top garbage collector,
with by far the most removed lines.

I ask you kindly to vote whether I shall be entrusted with the power of the
merge.
Thanks for considering,
  Matthias



Re: [foreman-dev] Compute resource plugin: Network compute attributes not passed

2017-10-30 Thread jbm
On 27/10/17 16:51, Ivan Necas wrote:
> Hi,
>
> I've not found a compute resource plugin that would extend the
> network interface in this way, which is probably the reason
> why you're hitting it first. I believe it's the correct way, but a gap
> in the Foreman core.
Hm. The change you suggested doesn't look too complicated; I'll see if I can't
implement that. I don't know when I'll get to it, though, since I will also use
your workaround.
> I know it's sub-optimal, but wouldn't be an option to re-use some of
> the existing params allowed for
> compute attributes on network? See
> https://github.com/theforeman/foreman/blob/18780e5cb7f7d0fbcf97b99426217730b1a54635/app/controllers/concerns/foreman/controller/parameters/nic_base.rb#L27
Yes, I guess that'll have to do for now
> -- Ivan
Thanks for your help and suggestions!

--
jbm
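
For readers following along, here is a minimal sketch of the block-style
filter discussed in this thread, as it would sit in a plugin's registration
block. The plugin name :foreman_powervm and the attribute :my_param are only
illustrative (taken from the messages below), and, as the rest of the thread
notes, this alone was not sufficient for nested NIC compute attributes without
the core-side change Ivan describes:

# Minimal sketch, not the final solution from this thread. The plugin name
# :foreman_powervm and the NIC compute attribute :my_param are illustrative.
Foreman::Plugin.register :foreman_powervm do
  # Block form of parameter_filter (cf. foreman_remote_execution PR #276):
  # the context object extends the strong-parameters whitelist for
  # Nic::Interface so :my_param is permitted under compute_attributes.
  parameter_filter Nic::Interface do |ctx|
    ctx.permit compute_attributes: [:my_param]
  end
end
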
>
> On Wed, Oct 25, 2017 at 6:06 PM, jbm  wrote:
>> On 24/10/17 17:42, Ivan Necas wrote:
>>> On Tue, Oct 24, 2017 at 3:16 PM, jbm  wrote:
 On 24/10/17 13:20, Ivan Necas wrote:
> Hi,
>
> I was hitting this issue recently in the rex plugin, I needed to
 change the way the param is whitelisted, from using an array to passing
 it via a block like this:
>
>https://github.com/theforeman/foreman_remote_execution/pull/276
 I changed the portion to

 parameter_filter Nic::Interface do |ctx|
   ctx.permit compute_attributes: [:my_param]
 end

 but it still doesn't work. I also tried `ctx.permit :my_param', which
 didn't work either.

 I'm not sure if I transferred your solution correctly to my use case, as you
 seem to be passing the NIC parameters differently than I do (I do it in 
 app/views/compute_resources_vms/form/powervm/_network.html.erb, you in 
 app/views/overrides/nics/_execution_interface.html.erb).
> Also, make sure you're testing it with the version of Foreman that has
> this patch
> https://github.com/theforeman/foreman/pull/4886, as it needs it to
> work properly.
 I'm using the latest develop branch, so this is included
>>> Oh, I hadn't realized you need to put it under the
>>> `compute_attributes`. In that case, I think
>>> you would need to convert the `compute_attributes` in
>>> (https://github.com/theforeman/foreman/blob/c6760930cf08a4b584b75df8a621092dd787da01/app/controllers/concerns/foreman/controller/parameters/host_base.rb#L42)
>>> to use a filter
>>> (such as Nic::ComputeAttribute), similarly to what we have in
>>> the host
>>> (https://github.com/theforeman/foreman/blob/c6760930cf08a4b584b75df8a621092dd787da01/app/controllers/concerns/foreman/controller/parameters/host_base.rb#L42)
>>> and then define the parameter_filter on `Nic::ComputeAttribute`.
>> Just so I understand you right: Are you saying that it is a missing feature 
>> in Foreman that one cannot add custom parameters to the
>> `compute_attributes'? If so, did I understand the "How to Create a Plugin"
>> guide wrong? Is `compute_attributes' *not* the correct way to add custom
>> provider-specific parameters to the NIC creation (by adding them to
>> `app/views/compute_resources_vms/form/foreman_powervm/_network.html.erb')? 
>> What is the correct way?
>>
>> If the only way to do this is to implement your suggested change in Foreman,
>> I will do that, but since I don't know how fast I could get this patch onto
>> the production system I want to run my plugin on, I'd prefer a solution that
>> works without altering Foreman, and would be happy for a suggestion.
>>
>> --
>> jbm
>>> - Ivan
>>>
> -- Ivan
>
> On Thu, Oct 12, 2017 at 7:24 PM, jbm  wrote:
>> Hi,
>>
>> I'm currently working on a Foreman plugin to use IBM PowerVM instances as
>> compute resources, and have run into the following problem when 
>> implementing
>> the network interface form:
>>
>> Following the guide at [1], I put my additional network parameters in
>> `foreman_powervm/app/views/compute_resources_vms/form/foreman_powervm/_network.html.erb',
>> like this:
>>
>>   <%= number_f f, :my_param,
>>   :label => _("Foo") %>
>>
>> I now expected :my_param to be available in
>> ForemanPowerVM::PowerVM#create_vm (which is my subclass of 
>> ComputeResource),
>> in the form of args['interfaces_attributes'][i]['my_param'], but it is 
>> not
>> there (even though the Foreman log shows that my_param was in fact 
>> received
>> as a POST parameter).
>>
>> So, as described in [2], I added
>>
>>   parameter_filter Nic::Interface, compute_attributes: [:my_param]
>>
>> to the Plugin.register block in my Engine class, but to no avail.
>>
>>
>> After a bit of digging around in the foreman code I managed to "fix" 
>> this by
>> adding :my_param to the :compute_attributes entry in
>> Foreman::Controller::Parameters::NicBase#add_nic_base_params_filter (file
>> `foreman_app/

Re: [foreman-dev] Vendorizing or Building RPMs

2017-10-30 Thread Eric D Helms
The conversation has been open for two weeks now, and I appreciate all of the
feedback. I am going to summarize the discussions as they stand and outline
what I believe the next steps are, based on that feedback.

RPMs

I performed a quick tally of those that responded and essentially got the
following counts on SCL vs. vendor.

  SCL: 4
  Vendorizing: 2

Further, there appeared to be a lot of unanswered technical questions
around how we would maintain a vendorized stack with plugins that need to add
dependencies. Based on this feedback, I believe the goal should be to
create a new Rails SCL that we own and maintain. As for the plan (the how of
creating and maintaining this new SCL), I will start a new thread to
discuss it and the improvements we can make along the way.


NPM

Across the board, everyone appeared to be in favor of vendorizing our NPM
modules to reduce the frequency of breakages and to allow the UI work that
is ongoing across Foreman and plugins to continue at a rapid pace. I'll
start a similar thread to outline and discuss the changes for this.


Eric

On Sun, Oct 29, 2017 at 5:43 PM, Ewoud Kohl van Wijngaarden <
ew...@kohlvanwijngaarden.nl> wrote:

> On Mon, Oct 23, 2017 at 09:52:39AM +0100, Greg Sutcliffe wrote:
>
>> On Mon, 2017-10-16 at 14:36 +0100, Greg Sutcliffe wrote:
>>
>>>
>>> That said, I've not really been involved with the RPMs, so I'm unsure
>>> if this causes a bigger headache for Yum users than Apt users. I'm
>>> also unsure of the work required to create an SCL, but if it's non-
>>> trivial then I'd be looking to CentOS to collaborate on a Rails SCL
>>> for everyone to use - if it's for just ourselves, then vendoring seems
>>> easier.
>>>
>>
>> I spoke to a few people I know about this, and it seems there's not
>> much appetite for making new SCLs. We might be able to attract
>> contributors once it's created, but I think we should assume the effort
>> for creating/maintaining SCLs will fall on us initially.
>>
>> Do we have any conclusions on this thread? It's going to matter for
>> 1.17 which is getting closer by the day.
>>
>
> Personally, I feel an updated RoR SCL is the way to go for 1.17, and to
> prove that, I'm willing to invest the time to make it a reality. After 1.16
> RC2 is out, I'm going to spend time on it.
>
>



-- 
Eric D. Helms
Red Hat Engineering



Re: [foreman-dev] Koji Outage

2017-10-30 Thread Lukas Zapletal
I can confirm that the rsync port was not in the security group; I added it
and Ewoud tested this. Thanks.

What happened is summarized in a separate thread; let's carry on the
discussion there. The good thing about this failure is that we were
able to recover quite fast. Next time this happens, I think we can
spin it back up within tens of minutes. (Most of the time we were
waiting for a snapshot of the data EBS volume, just to have a copy.)

LZ

On Mon, Oct 30, 2017 at 9:33 AM, Ewoud Kohl van Wijngaarden
 wrote:
> Turns out it was started in a new security group which didn't allow rsync.
> lzap fixed that now.
>
>
> On Sat, Oct 28, 2017 at 08:09:39PM +0200, Ewoud Kohl van Wijngaarden wrote:
>>
>> It looks like the rsync service isn't started, causing our promotion
>> pipeline to fail. Could you have a look?
>>
>> On Thu, Oct 26, 2017 at 03:31:28PM +0200, Lukas Zapletal wrote:
>>>
>>> Here is an update. A restart did not help, so we stopped the instance, and
>>> I am following this guide to create a new AMI and start it:
>>>
>>>
>>> http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/creating-an-ami-ebs.html
>>>
>>> The image which is now pending completion is called ami-2ecd6b54; it
>>> has both the root EBS volume and the data EBS volume (900 GB), which is
>>> why this is so slow. Then we can start the new AMI to see if it
>>> boots up. For the record, the instance type was i3.xlarge; we want the
>>> same one, as it had the best performance/price ratio for the Koji workload.
>>>
>>> After the new Koji boots up we want to recreate the /mnt/tmp folder
>>> structure and swap. Open /etc/fstab to see the mountpoints; the i3.xlarge
>>> has roughly 950 GB of ephemeral storage, but it was unused (we had 400 GB
>>> swap and 400 GB /mnt/tmp). In the /mnt/tmp directory there were just a few
>>> directories where Koji was doing builds locally. More CPU-intensive
>>> flavours were more expensive, so we had this IO-intensive one instead,
>>> which still delivers 4 cores and 32 GB RAM, which is good.
>>>
>>> On the main EBS volume (the 900 GB one) there is a backup directory, and
>>> in this directory we should have a backup of the directory structure.
>>> There is a cron job that does this daily. It was not backing up
>>> temporary files, just directories. This should be enough to get the Koji
>>> daemons back online. There should be a daily backup of the PostgreSQL
>>> database as well.
>>>
>>> The EBS volume snapshot is ongoing; a snapshot is required first, and
>>> then you can create a new AMI from it. I have some family business in an
>>> hour, so I am writing this summary so someone else from a US timezone can
>>> carry on from here. The next step would be: start the new instance, let
>>> it boot (there might be an ext4 file system check - not sure if we use
>>> XFS or ext4 for the data volume - see the AMI console during boot), then
>>> find the /mnt/tmp backups, restore the directory structure, and restart
>>> (the whole system rather than just Koji), and it should come back up. The
>>> last thing would be to associate the elastic IP.
>>>
>>>
>>>
>>> On Thu, Oct 26, 2017 at 2:56 PM, Lukas Zapletal  wrote:

 Likely a hardware failure according to the notification; our instance is
 not responding. We are trying a restart first.

 ***

 EC2 has detected degradation of the underlying hardware hosting your
 Amazon EC2 instance associated with this event in the us-east-1
 region. Due to this degradation, your instance could already be
 unreachable. After 2017-10-30 00:00 UTC your instance, which has an
 EBS volume as the root device, will be stopped.

 You can see more information on your instances that are scheduled for
 retirement in the AWS Management Console
 (https://console.aws.amazon.com/ec2/v2/home?region=us-east-1#Events)

 * How does this affect you?
 Your instance's root device is an EBS volume and the instance will be
 stopped after the specified retirement date. You can start it again at
 any time. Note that if you have EC2 instance store volumes attached to
 the instance, any data on these volumes will be lost when the instance
 is stopped or terminated as these volumes are physically attached to
 the host computer

 * What do you need to do?
 You may still be able to access the instance. We recommend that you
 replace the instance by creating an AMI of your instance and launch a
 new instance from the AMI. For more information please see Amazon
 Machine Images
 (http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/AMIs.html)
 in the EC2 User Guide. In case of difficulties stopping your
 EBS-backed instance, please see the Instance FAQ
 (http://aws.amazon.com/instance-help/#ebs-stuck-stopping).

 * Why retirement?
 AWS may schedule instances for retirement in cases where there is an
 unrecoverable issue with the underlying hardware. For more information
 about scheduled retirement events please see the EC2 user guide

 (http://docs.aws.amazon.com

Re: [foreman-dev] Koji builder crash - days after

2017-10-30 Thread Greg Sutcliffe
Hey

On Mon, 2017-10-30 at 10:22 +0100, Lukas Zapletal wrote:
> After several hours of outage, we were able to bring it up by
> mounting the volume in a temporary VM, editing /etc/fstab and
> starting a new instance.

Thanks for the effort, especially on a Friday!

> I started a new wiki page where we have this information:
> 
> http://projects.theforeman.org/projects/foreman/wiki/KojiSetup

Good idea.

> There were voices on IRC asking to puppetize this server; I am not
> against it, and feel free to add this to the todo list. It does not make
> much sense IMHO to puppetize the Koji setup itself, but things like setting
> up SSH keys or basic services can be useful.

That was me, and yeah, just setting up the usual base classes will mean
that the infra team has access (handy if you're away) and all the
boring stuff is taken care of. One area where it might also help is
adding Koji to our wider backup system (I don't think we can move the
Koji files backup offsite, but the PostgreSQL backups could be, just in
case).

I can try to find time for this, but I'll need my key [1] added in that
case. No promises though - as you say, it's low priority.

1 http://emeraldreverie.org/about#sshkeys

Greg





[foreman-dev] Koji builder crash - days after

2017-10-30 Thread Lukas Zapletal
Hello,

The reason our Koji was out of service last week was a hardware
failure. The instance was respun on a different hypervisor, but because
the ephemeral storage was mounted as swap and scratch disks, the OS did not
come up and went into emergency mode. Frankly, I was surprised, because
I expected the system to boot up (the root volume was OK) - anyway, lesson
learned.

After several hours of outage, we were able to bring it up by mounting
the volume in a temporary VM, editing /etc/fstab and starting a new
instance. I made some changes - I cleaned up the fstab and dropped
everything except the root volume. Everything else is configured in
rc.local now, so the instance should boot up on a different machine or
configuration just fine as long as the root volume is /dev/sda1.

I started a new wiki page where we have this information:

http://projects.theforeman.org/projects/foreman/wiki/KojiSetup

There were voices on IRC asking to puppetize this server; I am not
against it, and feel free to add this to the todo list. It does not make much
sense IMHO to puppetize the Koji setup itself, but things like setting up SSH
keys or basic services can be useful.

The wiki page now follows; I recommend reading it on the wiki rather than
here, as there might be updates already:

***

h1. Koji Setup

Our instance is running in AWS EC2 (us-east-1) as an i3.xlarge instance (4
CPUs, 32 GB RAM, 900 GB NVMe SSD). It is running CentOS 7 from an EBS
volume (8 GB). The account is managed by Bryan Kearney; a few people have
access to the instance, including Lukas Zapletal, Eric Helms and Mike
McCune. If you need access, contact them.

h2. Volumes and mounts

The instance has two EBS volumes attached:

* /dev/sda1 - root
* /dev/sdx - data volume (/mnt/koji available as /dev/xvdx1)

The instance must be running in a security group with ports 22, 80,
443, 873 (rsyncd) and 44323 (read-only monitoring PCP web app) allowed
(all IPv4 TCP).

Root EBS volume is mounted via UUID in fstab:


UUID=29342a0b-e20f-4676-9ecf-dfdf02ef6683 / xfs defaults 0 0


Note that other volumes are not present in fstab; this is to prevent
booting into emergency mode when the VM is respun on a different
hypervisor with a different or empty ephemeral or EBS storage
configuration. Everything else is mounted in /etc/rc.local:


swapon /dev/nvme0n1p1
mount /dev/nvme0n1p2 /mnt/tmp -o defaults,noatime,nodiratime
mount /dev/xvdx1 /mnt/koji -o defaults,noatime,nodiratime
hostnamectl set-hostname koji.katello.org
systemctl restart pmcd pmlogger pmwebd
mount | grep /mnt/koji && systemctl restart rsyncd
mount | grep /mnt/koji && systemctl start postgresql
systemctl start httpd
mount | grep /mnt/koji && mount | grep /mnt/tmp && systemctl start kojid
mount | grep /mnt/koji && mount | grep /mnt/tmp && systemctl start kojira


On our current VM flavour there is local NVMe SSD storage
(/dev/nvme0n1) with two partitions created (50/50). The first one is
swap and the second one is mounted as /mnt/tmp, where Koji does all the
work. This volume needs to be fast; it grows over time and
contains temporary files (built packages, build logs, support files).

The main data folder, where the PostgreSQL database, the Koji-generated
repositories and the external repositories live, is on the EBS volume
mounted as /mnt/koji. Note this was created as ext4, which can sometimes
lead to a lengthy file system check on boot; perhaps XFS would be a better
fit for our use case.

Services required for Koji (postgresql, httpd, kojid, kojira, rsyncd)
are only started if the required volumes are mounted.

h2. Hostname

The instance has a floating IP; in /etc/hosts we have an entry for it:

34.224.159.44 koji.katello.org kojihub.katello.org koji kojihub

When the IP changes, make sure this entry is changed as well.

When a new instance is booted via AWS, it will have a random hostname
assigned. In rc.local we set the hostname to koji.katello.org on
every boot.

h2. Backups

There is a cron job (/etc/cron.weekly/koji-backup) that performs two
actions every week:

A full PostgreSQL database dump into /mnt/koji/backups/postgres.

A file system backup of /mnt/tmp (ephemeral storage) into
/mnt/koji/backups/ephemeral. This backup skips all RPM files
(these are not needed); the duplicity tool is used, and no encryption is done.
The main purpose of this backup is to store the required filesystem
structure so Koji can be quickly brought back up after a crash. Since the
backup mostly contains directories and build logs, it is not big. To
restore it, use:

duplicity restore file:///mnt/koji/backups/ephemeral /mnt/tmp --force
--no-encryption

Neither backup has any rotation; they need to be deleted every
year. The full backup script looks like this:


#!/bin/bash
/usr/bin/duplicity --full-if-older-than 1M --no-encryption -vWARNING \
  --exclude '/mnt/tmp/**/*rpm' /mnt/tmp \
  file:///mnt/koji/backups/ephemeral
date=`date +"%Y%m%d"`
filename="/mnt/koji/backups/postgres/koji_${date}.dump"
pg_dump -Fc -f "$filename" -U koji koji


h2. Updates

We are running CentOS 7 with Koji (1.11) installed from EPEL7 and
mrepo package installed from Fedora

Re: [foreman-dev] Koji Outage

2017-10-30 Thread Ewoud Kohl van Wijngaarden
Turns out it was started in a new security group which didn't allow 
rsync. lzap fixed that now.


On Sat, Oct 28, 2017 at 08:09:39PM +0200, Ewoud Kohl van Wijngaarden wrote:
It looks like the rsync service isn't started, causing our promotion
pipeline to fail. Could you have a look?


On Thu, Oct 26, 2017 at 03:31:28PM +0200, Lukas Zapletal wrote:

Here is an update. A restart did not help, so we stopped the instance, and
I am following this guide to create a new AMI and start it:

http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/creating-an-ami-ebs.html

The image which is now pending completion is called ami-2ecd6b54; it
has both the root EBS volume and the data EBS volume (900 GB), which is
why this is so slow. Then we can start the new AMI to see if it
boots up. For the record, the instance type was i3.xlarge; we want the
same one, as it had the best performance/price ratio for the Koji workload.

After the new Koji boots up we want to recreate the /mnt/tmp folder
structure and swap. Open /etc/fstab to see the mountpoints; the i3.xlarge
has roughly 950 GB of ephemeral storage, but it was unused (we had 400 GB
swap and 400 GB /mnt/tmp). In the /mnt/tmp directory there were just a few
directories where Koji was doing builds locally. More CPU-intensive
flavours were more expensive, so we had this IO-intensive one instead,
which still delivers 4 cores and 32 GB RAM, which is good.

On the main EBS volume (the 900 GB one) there is a backup directory, and
in this directory we should have a backup of the directory structure.
There is a cron job that does this daily. It was not backing up
temporary files, just directories. This should be enough to get the Koji
daemons back online. There should be a daily backup of the PostgreSQL
database as well.

The EBS volume snapshot is ongoing; a snapshot is required first, and
then you can create a new AMI from it. I have some family business in an
hour, so I am writing this summary so someone else from a US timezone can
carry on from here. The next step would be: start the new instance, let
it boot (there might be an ext4 file system check - not sure if we use
XFS or ext4 for the data volume - see the AMI console during boot), then
find the /mnt/tmp backups, restore the directory structure, and restart
(the whole system rather than just Koji), and it should come back up. The
last thing would be to associate the elastic IP.



On Thu, Oct 26, 2017 at 2:56 PM, Lukas Zapletal  wrote:

Likely a hardware failure according to the notification; our instance is
not responding. We are trying a restart first.

***

EC2 has detected degradation of the underlying hardware hosting your
Amazon EC2 instance associated with this event in the us-east-1
region. Due to this degradation, your instance could already be
unreachable. After 2017-10-30 00:00 UTC your instance, which has an
EBS volume as the root device, will be stopped.

You can see more information on your instances that are scheduled for
retirement in the AWS Management Console
(https://console.aws.amazon.com/ec2/v2/home?region=us-east-1#Events)

* How does this affect you?
Your instance's root device is an EBS volume and the instance will be
stopped after the specified retirement date. You can start it again at
any time. Note that if you have EC2 instance store volumes attached to
the instance, any data on these volumes will be lost when the instance
is stopped or terminated as these volumes are physically attached to
the host computer

* What do you need to do?
You may still be able to access the instance. We recommend that you
replace the instance by creating an AMI of your instance and launch a
new instance from the AMI. For more information please see Amazon
Machine Images (http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/AMIs.html)
in the EC2 User Guide. In case of difficulties stopping your
EBS-backed instance, please see the Instance FAQ
(http://aws.amazon.com/instance-help/#ebs-stuck-stopping).

* Why retirement?
AWS may schedule instances for retirement in cases where there is an
unrecoverable issue with the underlying hardware. For more information
about scheduled retirement events please see the EC2 user guide
(http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/instance-retirement.html).
To avoid single points of failure within critical applications, please
refer to our architecture center for more information on implementing
fault-tolerant architectures: http://aws.amazon.com/architecture

LZ

On Thu, Oct 26, 2017 at 1:51 PM, Eric D Helms  wrote:

Our Koji is currently unreachable both over the web and over SSH. Please
don't merge anything further to -packaging until we've resolved this. No
actions requiring Koji repositories for testing, or actions in Koji itself,
can be performed.

Could Bryan or Lukas (since I am not sure who has AWS access to the box)
please investigate for us?

--
Eric D. Helms
Red Hat Engineering
