Re: [openstack-dev] [nova] [placement] resource providers update 36

2017-10-06 Thread Ed Leafe
On Oct 6, 2017, at 8:21 AM, Chris Dent  wrote:

> The extent to which an
> allocation request is not always opaque and the need to be explicit
> about microversions was clarified, so edleafe is going to make some
> adjustments, after first resolving the prerequisite code (alternate
> hosts: https://review.openstack.org/#/c/486215/ ).

For the record, what was clarified was that the previous agreement to treat
the allocation_request blob as opaque had been compromised by some last-minute
hacks to get migrations working before the Pike deadline. As a result of that
change, it is now necessary to add versioning to these allocation_request
objects, because Nova or any other service can now alter them as it sees fit. I
still think that this is a terrible design, but I will bow to the pressure and
go along with accommodating the change to the previous design.

-- Ed Leafe









Re: [openstack-dev] [ptl][tc] Accessible upgrade support

2017-10-06 Thread Kevin Benton
>The neutron story is mixed on accessible upgrade, because at least in some
cases, like ovs, upgrade might trigger a network tear down / rebuild that
generates an outage (though typically a pretty small one).

This shouldn't happen. If it does, it should be reported as a bug. All
existing OVS flows are left in place during agent initialization and we
don't get rid of the old ones until the agent finishes setting up the new
ones.
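
Roughly speaking (this is only an illustration of the idea, not the agent's
actual code, and the bridge name and flows below are made up), the agent
stamps the flows for each run with a fresh cookie, installs the new
generation first, and only afterwards deletes flows carrying older cookies:

    # Illustration only: per-run cookies let a restart replace flows without
    # a gap in forwarding.
    import subprocess
    import uuid

    def install_flows(bridge, flows):
        run_cookie = uuid.uuid4().int & ((1 << 64) - 1)  # 64-bit OpenFlow cookie
        for flow in flows:
            subprocess.run(["ovs-ofctl", "add-flow", bridge,
                            "cookie=%d,%s" % (run_cookie, flow)], check=True)
        return run_cookie

    def cleanup_stale_flows(bridge, run_cookie):
        dump = subprocess.run(["ovs-ofctl", "dump-flows", bridge],
                              capture_output=True, text=True,
                              check=True).stdout
        for line in dump.splitlines():
            if "cookie=" not in line:
                continue
            cookie = int(line.split("cookie=")[1].split(",")[0], 16)
            if cookie != run_cookie:
                # "cookie=<value>/-1" only matches that exact cookie value.
                subprocess.run(["ovs-ofctl", "del-flows", bridge,
                                "cookie=%#x/-1" % cookie], check=True)

    new_cookie = install_flows("br-int", ["table=0,priority=0,actions=normal"])
    cleanup_stale_flows("br-int", new_cookie)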

On Thu, Oct 5, 2017 at 4:42 AM, Sean Dague  wrote:

> On 10/05/2017 07:08 AM, Graham Hayes wrote:
> >
> >
> > On Thu, 5 Oct 2017, at 09:50, Thierry Carrez wrote:
> >> Matt Riedemann wrote:
> >>> What's the difference between this tag and the zero-impact-upgrades
> tag?
> >>> I guess the accessible one is, can a user still ssh into their VM while
> >>> the nova compute service is being upgraded. The zero-impact-upgrade one
> >>> is more to do with performance degradation during an upgrade. I'm not
> >>> entirely sure what that might look like, probably need operator input.
> >>> For example, while upgrading, you're live migrating VMs all over the
> >>> place which is putting extra strain on the network.
> >>
> >> The zero-impact-upgrade tag means no API downtime and no measurable
> >> impact on performance, while the accessible-upgrade means that while
> >> there can be API downtime, the resources provisioned are still
> >> accessible (you can use the VM even if nova-api is down).
> >>
> >> I still think we have too many of those upgrade tags, and the amount of
> >> information they provide does not compensate for the confusion they create.
> >> If you're not clear on what they mean, imagine a new user looking at the
> >> Software Navigator...
> >>
> >> In particular, we created two paths in the graph:
> >> * upgrade < accessible-upgrade
> >> * upgrade < rolling-upgrade < zero-downtime < zero-impact
> >>
> >> I personally would get rid of zero-impact (not sure there is that much
> >> additional information it conveys beyond zero-downtime).
> >>
> >> If we could make the requirements of accessible-upgrade a part of
> >> rolling-upgrade, that would also help (single path in the graph, only 3
> >> "levels"). Is there any of the current rolling-upgrade things (cinder,
> >> neutron, nova, swift) that would not qualify for accessible-upgrade as
> >> well ?
> >
> > Well, there are projects (like designate) that qualify for accessible
> > upgrade, but not rolling upgrade.
>
> The neutron story is mixed on accessible upgrade, because at least in
> some cases, like ovs, upgrade might trigger a network tear down /
> rebuild that generates an outage (though typically a pretty small one).
>
> I still think it's hard to describe to folks what is going on without
> pictures. And the tag structure might just be the wrong way to describe
> the world, because they are a set of positive assertions, and upgrade
> expectations are really about: "how terrible will this be".
>
> If I was an operator the questions I might have is:
>
> 1) Really basic, will my db roll forward?
>
> 2) When my db rolls forward, is it going to take a giant table lock that
> is effectively an outage?
>
> 3) Is whatever data I created - computes, networks - going to stay up when
> I do all this? (i.e. no customer workload interruption)
>
> 4) If the service is more than 1 process, can they arbitrarily work with
> N-1 so I won't have a closet outage when services restart.
>
> 5) If the service runs on more than 1 host, can I mix host levels, or
> will there be an outage as I upgrade nodes
>
> 6) If the service talks to other openstack services, is there a strict
> version lock-in, which means I've got to coordinate with those for
> upgrade? If so, what order is that and is it clear?
>
> 7) Can I seamlessly hide my API upgrade behind HA-Proxy / Istio / (or
> similar) so that there is no API service interruption
>
> 8) Is there any substantial degradation in running "mixed mode" even if
> it's supported, so that I know whether I can do this over a longer
> window of time when time permits
>
> 9) What level of validation exists to ensure that any of these "should
> work" do work?
>
>
> The tags were really built around grouping a few of these, but even with
> folks that are near the problem, they got confusing quickly. I really
> think that some more pictorial upgrade safety cards or something
> explaining the things you need to consider, and what parts projects
> handle for you would be really useful. And then revisit whatever the
> tagging structure is going to be later.
>
>
> -Sean
>
> --
> Sean Dague
> http://dague.net
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

Re: [openstack-dev] leverage page cache in openstack swift

2017-10-06 Thread Clay Gerrard
There are a couple of options in the object server that are related to how
object data is cached (or generally *not*)

https://github.com/openstack/swift/blob/master/swift/obj/server.py#L921

At scale on dense nodes it becomes challenging to keep all the filesystem
metadata in the page cache, so we've tried a few different tricks and
tunings over the years to optimize towards using as much RAM as possible to
minimize the number of seeks "wasted" picking up filesystem directory and
extent information instead of object data.  BUT on nodes with more RAM and
fewer objects (or fewer *active* objects) it is definitely possible to tune
towards keeping more object data in the page cache.
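
The mechanism underneath those options is essentially posix_fadvise; here is
a toy illustration (not Swift's actual code, and the example path is made
up) of the trade-off between dropping and keeping object data in the page
cache after a read:

    # Toy illustration: after serving a read, a disk server can either tell
    # the kernel it is done with the pages (DONTNEED) or leave them cached
    # for the next reader.
    import os

    def read_object(path, keep_cache=False, chunk_size=64 * 1024):
        with open(path, "rb") as f:
            while True:
                chunk = f.read(chunk_size)
                if not chunk:
                    break
                yield chunk
            if not keep_cache:
                # Drop this file's pages so RAM stays available for
                # filesystem metadata (inodes, dentries) instead.
                os.posix_fadvise(f.fileno(), 0, 0, os.POSIX_FADV_DONTNEED)

    # e.g.: for chunk in read_object("/srv/node/sda/objects/o.data",
    #                                keep_cache=True): ...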

Good luck!

-Clay


On Fri, Oct 6, 2017 at 2:34 PM, Jialin Liu  wrote:

> Hi,
> Is there any existing work that leverages the operating system's page cache
> for Swift?
> As in many other parallel file systems, e.g. Lustre, the IO would be cached
> in a buffer and the call would return to user space immediately.
>
> Best,
> Jialin
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


Re: [openstack-dev] [nova] Should we make rebuild + new image on a volume-backed instance fail fast?

2017-10-06 Thread Matt Riedemann

On 10/6/2017 1:10 PM, Mathieu Gagné wrote:

I don't think there ever was a technical limitation to add support for it.


See the review comments with the -1 votes on the patches I linked 
originally.


There are valid technical reasons for the -1s on those, including on 
this one from Paul Murray - who spent more time than probably anyone 
trying to get nova to support detach/attach root volumes:


https://review.openstack.org/#/c/201458/

Quoth Dr Murray:

"There is actually a lot more to this than detaching and attaching the 
volume - doing that alone is unsafe because the rest of the code assumes 
there is a root device. So if nova tries to do anything with the 
instance when the volume is detached it will probably fail and go to error 
(e.g. delete, migrate, resize, start). All of these cases have to be 
fixed as well as the volume. We also need to make sure that only another 
volume can be attached or handle the case for swapping in any kind of 
storage device (an image on ephemeral disk for example)."


I'm fine with people wanting to resume his old spec and code if someone 
wants to own that, but I can't say it's going to be a review priority.


--

Thanks,

Matt



Re: [openstack-dev] Supporting SSH host certificates

2017-10-06 Thread Clint Byrum
Excerpts from Giuseppe de Candia's message of 2017-10-06 13:49:43 -0500:
> Hi Clint,
> 
> Isn't user-data by definition available via the Metadata API, which isn't
> considered secure:
> https://wiki.openstack.org/wiki/OSSN/OSSN-0074
> 

Correct! The thinking is to account for the MITM attack vector, not
host or instance security as a whole. One would hope the box comes up
in a mostly drone-like state until it can be hardened with a new secret
host key.

> Or is there a way to specify that certain user-data should only be
> available via config-drive (and not metadata api)?
> 
> Otherwise, the only difference I see compared to using Meta-data is that
> the process you describe is driven by the user vs. automated.
> 
> Regarding the extra plumbing, I'm not trying to avoid it. I'm thinking to
> eventually tie this all into Keystone. For example, a project should have
> Host CA and User CA keys. Let's assume OpenStack manages these for now,
> later we can consider OpenStack simply proxying signature requests and
> vouching that a public key does actually belong to a specific instance (and
> host-name) or Keystone user. So what I think should happen is when a
> Project is enabled for SSHaaS support, any VM instance automatically gets
> host certificate, authorized principal files based on Keystone roles for
> the project, and users can call an API (or Dashboard form) to get a public
> key signed (and assigned appropriate SSH principals).
> 

Fascinating, but it's hard for me to get excited about this when I can
just handle MITM security myself.

Note that the other existing techniques are simpler too. Most instances
will print the public host key to the console. The API offers console
access, so it can be scraped for the host key.



[openstack-dev] [nova] Looking for feedback on a spec to limit max_count in multi-create requests

2017-10-06 Thread Matt Riedemann
I've been chasing something weird I was seeing in devstack when creating 
hundreds of instances in a single request where at some limit, things 
blow up in an unexpected way during scheduling and all instances were 
put into ERROR state. Given the environment I was running in, this 
shouldn't have been happening, and today we figured out what was 
actually happening. To summarize, we retry scheduling requests on RPC 
timeout so you can have scheduler_max_attempts greenthreads running 
concurrently trying to schedule 1000 instances and melt your scheduler.


I've started a spec which goes into the details of the actual issue:

https://review.openstack.org/#/c/510235/

It also proposes a solution, but I don't feel it's the greatest 
solution, so there are also some alternatives in there.


I'm really interested in operator feedback on this because I assume that 
people are dealing with stuff like this in production already, and have 
had to come up with ways to solve it.


--

Thanks,

Matt



[openstack-dev] leverage page cache in openstack swift

2017-10-06 Thread Jialin Liu
Hi,
Is there any existing work that leverages the operating system's page cache
for Swift?
As in many other parallel file systems, e.g. Lustre, the IO would be cached
in a buffer and the call would return to user space immediately.

Best,
Jialin


[openstack-dev] [keystone] office hours report 2017-10-03

2017-10-06 Thread Lance Bragstad
Hey all,

The following was done during office hours this week:

Bug #1698455 in OpenStack Identity (keystone): "Install and configure in
Installation Guide: Populate the Identity service database step fails on
CentOS7"
https://bugs.launchpad.net/keystone/+bug/1698455
Triaged and tagged

Bug #1711883 in OpenStack Identity (keystone): "An error in function
get_user_unique_id_and_display_name()"
https://bugs.launchpad.net/keystone/+bug/1711883
Triaged

Bug #1712335 in OpenStack Identity (keystone): "Links go to wrong page:
Install and configure in keystone"
https://bugs.launchpad.net/keystone/+bug/1712335
Set milestone

Bug #1716797 in OpenStack Identity (keystone): "Verify operation in
keystone: step 1 has already been done"
https://bugs.launchpad.net/keystone/+bug/1716797
Proposed fix and reviewed

Bug #1717962 in OpenStack Identity (keystone): "Unhelpful error in the
keystone log"
https://bugs.launchpad.net/keystone/+bug/1717962
Triaged

Bug #1645568 in OpenStack Identity (keystone): " keystone-manage
mapping_purge fails silently"
https://bugs.launchpad.net/keystone/+bug/1645568
Set milestone

Bug #1714937 in OpenStack Identity (keystone): "keystone returns 500 on
password change"
https://bugs.launchpad.net/keystone/+bug/1714937
Triaged and discussed approaches

Bug #1715080 in OpenStack Identity (keystone): "Update global
requirements to handle encoding issues with python2-pyldap-2.4.35"
https://bugs.launchpad.net/keystone/+bug/1715080
Triaged and discussed approaches

Bug #1718789 in OpenStack Identity (keystone): "Error number should be
used when throwing IOError exception"
https://bugs.launchpad.net/keystone/+bug/1718789
Triaged

Bug #1713574 in OpenStack Identity (keystone): "python 3 errors with
memcache enabled"
https://bugs.launchpad.net/keystone/+bug/1713574
Triaged and discussed approaches

Bug #1713970 in OpenStack Identity (keystone): "Keystone Internal Server
Error (HTTP 500) after apache restart when influxdb is installed"
https://bugs.launchpad.net/keystone/+bug/1713970
Requested more information to recreate

Bug #1714416 in OpenStack Identity (keystone): "Incorrect response
returned for invalid Accept header"
https://bugs.launchpad.net/keystone/+bug/1714416
Triaged

Let me know if you have any questions. Thanks!







Re: [openstack-dev] [tc][election] TC candidacy (dims)

2017-10-06 Thread Davanum Srinivas
Oops! Matt Treinish pointed out i have 6 more months to go :)

Thanks,
Dims

On Fri, Oct 6, 2017 at 8:38 AM, Davanum Srinivas  wrote:
> Team,
>
> Please consider my candidacy for Queens. I just realized how time flies,
> reading my previous two self-nominations [1][2]. As a TC we have made
> significant strides in the last cycle [3], and my contributions have been
> around how we can do better working with adjacent communities and getting
> people from outside the US involved and productive. We have significant
> challenges in both areas. Let's start with the one about engaging new,
> potentially part-time contributors: the newly forming SIG around "First
> Contact" is something we need to pool our efforts into. Encouraging
> activities around SIG-OpenStack in Kubernetes (and the mirror SIG-Kubernetes
> proposed in OpenStack) is turning the corner, seeing the level of interest
> at the PTG [5]. I fully intend to help in both areas so our Foundation can
> stay at the cutting edge and relevant in current and emerging ecosystems.
>
> Thanks,
> Dims
>
>
> [1] 
> https://git.openstack.org/cgit/openstack/election/plain/candidates/newton/TC/Davanum_Srinivas.txt
> [2] 
> https://git.openstack.org/cgit/openstack/election/plain/candidates/pike/TC/dims.txt
> [3] 
> http://lists.openstack.org/pipermail/openstack-dev/2017-October/122962.html
> [4] 
> http://lists.openstack.org/pipermail/openstack-sigs/2017-September/thread.html#74
> [5] https://etherpad.openstack.org/p/queens-ptg-sig-k8s
>
> --
> Davanum Srinivas :: https://twitter.com/dims



-- 
Davanum Srinivas :: https://twitter.com/dims



[openstack-dev] [all]Tempest plugin split community goals status update

2017-10-06 Thread Chandan kumar
Hello,

Since almost a month has passed since the Denver PTG, here is a status
update on the Tempest plugin split community goal:

List of projects which have already completed the goal:
- Barbican
- Designate
- Horizon
- Keystone
- Kuryr
- Os-win
- Sahara
- Solum
- Watcher

List of projects which are working on the goal:
- Aodh
- Cinder
- Magnum
- Manila
- Murano
- Neutron
- Neutron L2GW
- Octavia
- Senlin
- Zaqar
- Zun

Here is the detailed report on Tempest Plugin split goal status for
different projects:
https://governance.openstack.org/tc/goals/queens/split-tempest-plugins.html#project-teams

Here is the list of open reviews:
https://review.openstack.org/#/q/topic:goal-split-tempest-plugins+status:open
Feel free to take a look.
If you have any queries related to the goal, feel free to ping me on the
#openstack-qa channel.

Thanks,

Chandan Kumar



Re: [openstack-dev] Supporting SSH host certificates

2017-10-06 Thread Jeremy Stanley
On 2017-10-06 13:49:43 -0500 (-0500), Giuseppe de Candia wrote:
> Isn't user-data by definition available via the Metadata API,
> which isn't considered secure:
> https://wiki.openstack.org/wiki/OSSN/OSSN-0074
[...]

It depends on who you are. If you're the one deploying/running nova
then you can take steps to make sure you set the environment up
correctly so that won't be a problem.

The background on OSSN-0074 is that if you mis-configure the
metadata service or do a bad job designing the network it's on, then
unauthorized users can get access to others' metadata. The OSSN is
sensationalizing the matter in an effort to get those deploying or
using OpenStack to take notice and double-check their settings and
network design, but the fundamental disconnect is that if you enable
use_forwarded_for in the config then you'd better have an actual
proxy fronting the service which (as they usually do) removes or
rewrites any X-Forwarded-For header to its own IP address. This is
basic network operations knowledge, but not everyone running
OpenStack is careful to consider the consequences of accidentally
enabling a "feature" they're not relying on.
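
To make the failure mode concrete, here is a toy illustration (not nova's
actual code) of the trust decision a deployment is implicitly relying on
when it turns that option on:

    # Toy illustration: X-Forwarded-For is attacker-supplied unless the
    # direct peer is a proxy you control and that proxy rewrites the header.
    def effective_client_ip(remote_addr, headers, trusted_proxies):
        forwarded = headers.get("X-Forwarded-For")
        if forwarded and remote_addr in trusted_proxies:
            # First hop in the list is the client as seen by the proxy.
            return forwarded.split(",")[0].strip()
        # Otherwise ignore the header and use the TCP peer address.
        return remote_addr

    # A direct (non-proxied) request claiming to be another instance's
    # address gets ignored:
    print(effective_client_ip("10.0.0.9",
                              {"X-Forwarded-For": "10.0.0.5"},
                              trusted_proxies={"192.0.2.10"}))  # -> 10.0.0.9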

See https://launchpad.net/bugs/1563954 for the gory details.
-- 
Jeremy Stanley




Re: [openstack-dev] [tripleo] blockquotes in docs

2017-10-06 Thread Ben Nemec



On 10/06/2017 03:01 AM, Markus Zoeller wrote:

Just a short reminder that RST puts stuff in blockquotes when you're not
careful with the spacing. Example:

  * item 1
  * item 2

That's rendered as a blockquote because of the single leading blank at the
beginning. The TripleO docs are full of them, which looks a bit ugly, to be
frank.

This change removes all unintentional blockquotes in the tripleo docs:
 https://review.openstack.org/#/c/504518/

You can find them by yourself with:
 $ tox -e docs
 $ grep -rn blockquote doc/build/html/ --include *.html


I often use http://rst.aaroniles.net/ for a quick check.



I think it's worth noting that people should really be looking at the 
output of the docs job when reviewing doc changes.  I know my mental RST 
parser is full of bugs so it's a job best left to a computer. :-)


-Ben



Re: [openstack-dev] [nova] Should we make rebuild + new image on a volume-backed instance fail fast?

2017-10-06 Thread Ben Nemec



On 10/06/2017 12:22 PM, Matt Riedemann wrote:
This came up in IRC discussion the other day, but we didn't dig into it 
much given we were all (2 of us) exhausted talking about rebuild.


But we have had several bugs over the years where people expect the root 
disk to change to a newly supplied image during rebuild even if the 
instance is volume-backed.


I distilled several of those bugs down to just this one and duplicated 
the rest:


https://bugs.launchpad.net/nova/+bug/1482040

I wanted to see if there is actually any failure on the backend when 
doing this, and there isn't - there is no instance fault or anything 
like that. It's just not what the user expects, and actually the 
instance image_ref is then shown later as the image specified during 
rebuild, even though that's not the actual image in the root disk (the 
volume).


There have been a couple of patches proposed over time to change this:

https://review.openstack.org/#/c/305079/

https://review.openstack.org/#/c/201458/

https://review.openstack.org/#/c/467588/

And Paul Murray had a related (approved) spec at one point for detach 
and attach of root volumes:


https://review.openstack.org/#/c/221732/

But the blueprint was never completed.

So with all of this in mind, should we at least consider, until at least 
someone owns supporting this, that the API should fail with a 400 
response if you're trying to rebuild with a new image on a volume-backed 
instance? That way it's a fast failure in the API, similar to trying to 
backup a volume-backed instance fails fast.


If we did, that would change the API response from a 202 today to a 400, 
which is something we normally don't do. I don't think a microversion 
would be necessary if we did this, however, because essentially what the 
user is asking for isn't what we're actually giving them, so it's a 
failure in an unexpected way even if there is no fault recorded, it's 
not what the user asked for. I might not be thinking of something here 
though, like interoperability for example - a cloud without this change 
would blissfully return 202 but a cloud with the change would return a 
400...so that should be considered.




As a user who has been bitten by this behavior in the past, +1.  Yeah, 
it's technically an API change, but I think there's a strong argument 
that what the API is returning now is wrong.




Re: [openstack-dev] Supporting SSH host certificates

2017-10-06 Thread Giuseppe de Candia
Hi Clint,

Isn't user-data by definition available via the Metadata API, which isn't
considered secure:
https://wiki.openstack.org/wiki/OSSN/OSSN-0074

Or is there a way to specify that certain user-data should only be
available via config-drive (and not metadata api)?

Otherwise, the only difference I see compared to using Meta-data is that
the process you describe is driven by the user vs. automated.

Regarding the extra plumbing, I'm not trying to avoid it. I'm thinking of
eventually tying this all into Keystone. For example, a project should have
Host CA and User CA keys. Let's assume OpenStack manages these for now;
later we can consider OpenStack simply proxying signature requests and
vouching that a public key does actually belong to a specific instance (and
host-name) or Keystone user. So what I think should happen is that when a
project is enabled for SSHaaS support, any VM instance automatically gets a
host certificate and authorized principal files based on the Keystone roles
for the project, and users can call an API (or Dashboard form) to get a
public key signed (and assigned appropriate SSH principals).
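
As a rough sketch of just the signing step such a service would perform
(hypothetical paths, identities and principals; a real service would fetch
the per-project CA keys from a secret store and authenticate callers through
Keystone), it could boil down to something like:

    # Hypothetical sketch of the signing step only; everything here is
    # illustrative and assumes ssh-keygen is available on the service host.
    import subprocess

    def sign_host_key(host_ca_path, host_pubkey_path, fqdn):
        # -h marks this as a host certificate, -n limits it to the instance's
        # FQDN; the certificate lands next to the pubkey as <name>-cert.pub.
        subprocess.run(
            ["ssh-keygen", "-s", host_ca_path, "-I", fqdn, "-h", "-n", fqdn,
             "-V", "+52w", host_pubkey_path],
            check=True)

    def sign_user_key(user_ca_path, user_pubkey_path, keystone_user, principals):
        # Principals would be derived from the user's Keystone roles.
        subprocess.run(
            ["ssh-keygen", "-s", user_ca_path, "-I", keystone_user,
             "-n", ",".join(principals), "-V", "+1d", user_pubkey_path],
            check=True)

    sign_host_key("/etc/sshaas/host_ca", "/tmp/instance_host_key.pub",
                  "vm-1.project.example.org")
    sign_user_key("/etc/sshaas/user_ca", "/tmp/pino_id_ed25519.pub", "pino",
                  ["member", "admin"])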

cheers,
Pino


On Fri, Oct 6, 2017 at 2:37 AM, Clint Byrum  wrote:

> A long time ago, a few Canonical employees (Scott Moser was one of them,
> forget who else was doing it, maybe Dave Walker and/or Dustin Kirkland)
> worked out a scheme for general usage that doesn't require extra plumbing:
>
>  * Client generates a small SSH host key locally and pushes it into
>user data in a way which causes the image to boot and install this
>key from user_data as the host SSH key.
>  * Client SSH's in with the strict requirement that the host key be the
>one they just generated and pushed into the instance.
>  * Client now triggers new host key generation, and copies new public
>key into client known_hosts.
>
> With this system you don't have to scrape console logs for SSH keys or
> build your system on hope.
>
> Excerpts from Giuseppe de Candia's message of 2017-09-29 14:21:06 -0500:
> > Hi Folks,
> >
> >
> >
> > My intent in this e-mail is to solicit advice for how to inject SSH host
> > certificates into VM instances, with minimal or no burden on users.
> >
> >
> >
> > Background (skip if you're already familiar with SSH certificates):
> without
> > host certificates, when clients ssh to a host for the first time (or
> after
> > the host has been re-installed), they have to hope that there's no man in
> > the middle and that the public key being presented actually belongs to
> the
> > host they're trying to reach. The host's public key is stored in the
> > client's known_hosts file. SSH host certificates eliminate the possibility
> of
> > Man-in-the-Middle attack: a Certificate Authority public key is
> distributed
> > to clients (and written to their known_hosts file with a special syntax
> and
> > options); the host public key is signed by the CA, generating an SSH
> > certificate that contains the hostname and validity period (among other
> > things). When negotiating the ssh connection, the host presents its SSH
> > host certificate and the client verifies that it was signed by the CA.
> >
> >
> >
> > How to support SSH host certificates in OpenStack?
> >
> >
> >
> > First, let's consider doing it by hand, instance by instance. The only
> > solution I can think of is to VNC to the instance, copy the public key to
> > my CA server, sign it, and then write the certificate back into the host
> > (again via VNC). I cannot ssh without risking a MITM attack. What about
> > using Nova user-data? User-data is exposed via the metadata service.
> > Metadata is queried via http (reply transmitted in the clear, susceptible
> > to snooping), and any compute node can query for any instance's
> > meta-data/user-data.
> >
> >
> >
> > At this point I have to admit I'm ignorant of details of cloud-init. I
> know
> > cloud-init allows specifying SSH private keys (both for users and for SSH
> > service). I have not yet studied how such information is securely
> injected
> > into an instance. I assume it should only be made available via
> ConfigDrive
> > rather than metadata-service (again, that service transmits in the
> clear).
> >
> >
> >
> > What about providing SSH host certificates as a service in OpenStack?
> Let's
> > keep out of scope issues around choosing and storing the CA keys, but the
> > CA key is per project. What design supports setting up the SSH host
> > certificate automatically for every VM instance?
> >
> >
> >
> > I have looked at Vendor Data and I don't see a way to use that, mainly
> > because 1) it doesn't take parameters, so you can't pass the public key
> > out; and 2) it's queried over http, not https.
> >
> >
> >
> > Just as a feasibility argument, one solution would be to modify Nova
> > compute instance boot code. Nova compute can securely query a CA service
> > asking for a triplet (private key, public key, SSH certificate) for the
> > specific hostname. It can then 

Re: [openstack-dev] Supporting SSH host certificates

2017-10-06 Thread Giuseppe de Candia
James, thanks for all the information! I will study, try some of this out
and get back to the thread with any more questions or findings.

One topic that may be worth discussing now is that if we do pass ssh host
keys (including private) via vendordata, then there needs to be a way to
signal to Nova either:
- that specific data is private and should not be exposed via metadata API
- or for each vendordata choose what cloud-init data source it should be
offered on.

The goal would be to make sure that the private key is only offered to the
instance via config drive.

Does that make sense?

thanks,
Pino




On Thu, Oct 5, 2017 at 3:47 PM, James Penick  wrote:

> Hey Pino,
>
> mriedem pointed me to the vendordata code [1] which shows some fields are
> passed (such as project ID) and that SSL is supported. So that's good.
>
> The docs on vendordata suck. But I think it'll do what you're looking for.
> Michael Still wrote up a helpful post titled "Nova vendordata deployment,
> an excessively detailed guide"[2] and he's written a vendordata service
> example[3] which even shows keystone integration.
>
> At Oath, we have a system that provides a unique x509 certificate for each
> host, including the ability to sign host SSH keys against an HSM. In our
> case what we do is have Nova call the service, which generates and returns
> a signed (and time limited) host bootstrap document, which is injected into
> the instance. When the instance boots it calls our identity service and
> provides its bootstrap document as a bearer certificate. The identity
> service trusts this one-time document to attest the instance, and will then
> provide an x509 certificate as well as sign the hosts SSH keys. After the
> initial bootstrap the host will rotate its keys frequently, by providing
> its last certificate in exchange for a new one. The service tracks all host
> document and certificate IDs which have been exchanged until their expiry,
> so that a cert cannot be re-used.
>
> This infrastructure relies on Athenz [4] as the AuthNG system for all
> principals (users, services, roles, domains, etc) as well as an internal
> signatory service which signs x509 certificates and SSH host keys using an
> HSM infrastructure.
>
> Instead, you could write a vendordata service which, when called, would
> generate an ssh host keypair, sign it, and return those files as encoded
> data, which can be expanded into files in the correct locations on first
> boot. I strongly suggest not only using keystone auth, but also that you
> ensure all calls from vendordata to the microservice are encrypted with TLS
> mutual auth.
>
> -James
>
>
> 1: https://github.com/openstack/nova/blob/master/nova/api/
> metadata/vendordata_dynamic.py#L77
> 2: https://www.stillhq.com/openstack/22.html
> 3: https://github.com/mikalstill/vendordata
> 4: https://athenz.io
>
>
> On Fri, Sep 29, 2017 at 5:17 PM, Fox, Kevin M  wrote:
>
>> https://review.openstack.org/#/c/93/
>> --
>> *From:* Giuseppe de Candia [giuseppe.decan...@gmail.com]
>> *Sent:* Friday, September 29, 2017 1:05 PM
>> *To:* OpenStack Development Mailing List (not for usage questions)
>> *Subject:* Re: [openstack-dev] Supporting SSH host certificates
>>
>> Ihar, thanks for pointing that out - I'll definitely take a close look.
>>
>> Jon, I'm not very familiar with Barbican, but I did assume the full
>> implementation would use Barbican to store private keys. However, in terms
>> of actually getting a private key (or SSH host cert) into a VM instance,
>> Barbican doesn't help. The instance needs permission to access secrets
>> stored in Barbican. The main question of my e-mail is: how do you inject a
>> credential in an automated but secure way? I'd love to hear ideas - in the
>> meantime I'll study Ihar's link.
>>
>> thanks,
>> Pino
>>
>>
>>
>> On Fri, Sep 29, 2017 at 2:49 PM, Ihar Hrachyshka 
>> wrote:
>>
>>> What you describe (at least the use case) seems to resemble
>>> https://review.openstack.org/#/c/456394/ This work never moved
>>> anywhere since the spec was posted though. You may want to revive the
>>> discussion in scope of the spec.
>>>
>>> Ihar
>>>
>>> On Fri, Sep 29, 2017 at 12:21 PM, Giuseppe de Candia
>>>  wrote:
>>> > Hi Folks,
>>> >
>>> >
>>> >
>>> > My intent in this e-mail is to solicit advice for how to inject SSH
>>> host
>>> > certificates into VM instances, with minimal or no burden on users.
>>> >
>>> >
>>> >
>>> > Background (skip if you're already familiar with SSH certificates):
>>> without
>>> > host certificates, when clients ssh to a host for the first time (or
>>> after
>>> > the host has been re-installed), they have to hope that there's no man
>>> in
>>> > the middle and that the public key being presented actually belongs to
>>> the
>>> > host they're trying to reach. The host's public key is stored in the
>>> > client's known_hosts file. SSH host 

Re: [openstack-dev] [infra][mogan] Need help for replacing the current master

2017-10-06 Thread Davanum Srinivas
Many thanks to Jeremy and Clark. We are back on our feet for Mogan.

Thanks,
Dims

On Fri, Oct 6, 2017 at 1:03 PM, Jeremy Stanley  wrote:
> On 2017-09-29 00:19:25 -0400 (-0400), Davanum Srinivas wrote:
>> I tried several things, not sure i have enough git-fu to pull this
>> off. For example.
> [...]
>> remote: ERROR:  In commit de26dc69aa28f57512326227a65dc3f9110a7be1
>> remote: ERROR:  committer email address sleepsonthefl...@gmail.com
>> remote: ERROR:  does not match your user account.
> [...]
>
> Yep, sorry about that (and sorry we've been too distracted to
> respond before now, thanks for the reminder in IRC). We should have
> predicted you'd need forgeCommitter in addition to pushMerge. We've
> added that via https://review.openstack.org/510158 which has now
> applied in Gerrit so please do try again and I'll try to be more
> responsive in assisting with any further errors (but hopefully it'll
> "just work" now).
> --
> Jeremy Stanley
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Davanum Srinivas :: https://twitter.com/dims



Re: [openstack-dev] [nova] Should we make rebuild + new image on a volume-backed instance fail fast?

2017-10-06 Thread Mathieu Gagné
> On Fri, Oct 6, 2017 at 2:02 PM, Chris Friesen
>  wrote:
>> On 10/06/2017 11:32 AM, Mathieu Gagné wrote:
>>>
>>> Why not support this use case?
>>
>>
>> I don't think anyone is suggesting we shouldn't do it, but nobody has stepped
>> up to actually merge a change that implements it.
>>
>> I think what Matt is suggesting is that we make it fail fast *now*, and if
>> someone else implements it then they can remove the fast failure at the same
>> time.
>>

On Fri, Oct 6, 2017 at 2:10 PM, Mathieu Gagné  wrote:
> I don't think there ever was a technical limitation to add support for it.
>
> I have nothing to back my claims but IIRC, it was more philosophical
> points against adding support for it.

So it looks like people worked on this feature in the past:
https://review.openstack.org/#/c/305079/

And there were pending technical questions. =)

Would it be worth the effort to bring that one back, maybe with a spec
so people can address technical details in there?

--
Mathieu



Re: [openstack-dev] [nova] Should we make rebuild + new image on a volume-backed instance fail fast?

2017-10-06 Thread Mathieu Gagné
I don't think there ever was a technical limitation to add support for it.

I have nothing to back my claims but IIRC, it was more philosophical
points against adding support for it.

--
Mathieu


On Fri, Oct 6, 2017 at 2:02 PM, Chris Friesen
 wrote:
> On 10/06/2017 11:32 AM, Mathieu Gagné wrote:
>>
>> Why not support this use case?
>
>
> I don't think anyone is suggesting we shouldn't do it, but nobody has stepped
> up to actually merge a change that implements it.
>
> I think what Matt is suggesting is that we make it fail fast *now*, and if
> someone else implements it then they can remove the fast failure at the same
> time.
>
> Chris
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



Re: [openstack-dev] [nova] Should we make rebuild + new image on a volume-backed instance fail fast?

2017-10-06 Thread Sean Dague
On 10/06/2017 01:22 PM, Matt Riedemann wrote:

> So with all of this in mind, should we at least consider, until at least
> someone owns supporting this, that the API should fail with a 400
> response if you're trying to rebuild with a new image on a volume-backed
> instance? That way it's a fast failure in the API, similar to trying to
> backup a volume-backed instance fails fast.
> 
> If we did, that would change the API response from a 202 today to a 400,
> which is something we normally don't do. I don't think a microversion
> would be necessary if we did this, however, because essentially what the
> user is asking for isn't what we're actually giving them, so it's a
> failure in an unexpected way even if there is no fault recorded, it's
> not what the user asked for. I might not be thinking of something here
> though, like interoperability for example - a cloud without this change
> would blissfully return 202 but a cloud with the change would return a
> 400...so that should be considered.

Yes, and this is the kind of thing I think we'd backport to all stable
branches as well. Right now this is effectively a silent 500 (giving a
202 that we know will never actually do what is asked).

-Sean

-- 
Sean Dague
http://dague.net



Re: [openstack-dev] [nova] Should we make rebuild + new image on a volume-backed instance fail fast?

2017-10-06 Thread Chris Friesen

On 10/06/2017 11:32 AM, Mathieu Gagné wrote:

Why not support this use case?


I don't think anyone is suggesting we shouldn't do it, but nobody has stepped up 
to actually merge a change that implements it.


I think what Matt is suggesting is that we make it fail fast *now*, and if 
someone else implements it then they can remove the fast failure at the same time.


Chris



Re: [openstack-dev] [Openstack-operators] [nova] Should we make rebuild + new image on a volume-backed instance fail fast?

2017-10-06 Thread Mohammed Naser
On Fri, Oct 6, 2017 at 1:32 PM, Mathieu Gagné  wrote:
> Why not support this use case?
>
> Same reason as before: A user might wish to keep its IP addresses.
>
> The user cannot do the following to bypass the limitation:
> 1) stop instance
> 2) detach root volume
> 3) delete root volume
> 4) create new volume
> 5) attach as root
> 6) boot instance
>
> Last time I tried, operation fails at step 2. I would need to test
> against latest version of Nova to confirm.

You are right, this is indeed something that is not possible.

> Otherwise boot-from-volume feels like a second class citizen.
>
> --
> Mathieu
>
>
> On Fri, Oct 6, 2017 at 1:22 PM, Matt Riedemann  wrote:
>> This came up in IRC discussion the other day, but we didn't dig into it much
>> given we were all (2 of us) exhausted talking about rebuild.
>>
>> But we have had several bugs over the years where people expect the root
>> disk to change to a newly supplied image during rebuild even if the instance
>> is volume-backed.
>>
>> I distilled several of those bugs down to just this one and duplicated the
>> rest:
>>
>> https://bugs.launchpad.net/nova/+bug/1482040
>>
>> I wanted to see if there is actually any failure on the backend when doing
>> this, and there isn't - there is no instance fault or anything like that.
>> It's just not what the user expects, and actually the instance image_ref is
>> then shown later as the image specified during rebuild, even though that's
>> not the actual image in the root disk (the volume).
>>
>> There have been a couple of patches proposed over time to change this:
>>
>> https://review.openstack.org/#/c/305079/
>>
>> https://review.openstack.org/#/c/201458/
>>
>> https://review.openstack.org/#/c/467588/
>>
>> And Paul Murray had a related (approved) spec at one point for detach and
>> attach of root volumes:
>>
>> https://review.openstack.org/#/c/221732/
>>
>> But the blueprint was never completed.
>>
>> So with all of this in mind, should we at least consider, until at least
>> someone owns supporting this, that the API should fail with a 400 response
>> if you're trying to rebuild with a new image on a volume-backed instance?
>> That way it's a fast failure in the API, similar to trying to backup a
>> volume-backed instance fails fast.
>>
>> If we did, that would change the API response from a 202 today to a 400,
>> which is something we normally don't do. I don't think a microversion would
>> be necessary if we did this, however, because essentially what the user is
>> asking for isn't what we're actually giving them, so it's a failure in an
>> unexpected way even if there is no fault recorded, it's not what the user
>> asked for. I might not be thinking of something here though, like
>> interoperability for example - a cloud without this change would blissfully
>> return 202 but a cloud with the change would return a 400...so that should
>> be considered.
>>
>> --
>>
>> Thanks,
>>
>> Matt
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> ___
> OpenStack-operators mailing list
> openstack-operat...@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators



-- 
Mohammed Naser — vexxhost
-
D. 514-316-8872
D. 800-910-1726 ext. 200
E. mna...@vexxhost.com
W. http://vexxhost.com



Re: [openstack-dev] [nova] Should we make rebuild + new image on a volume-backed instance fail fast?

2017-10-06 Thread John Griffith
On Fri, Oct 6, 2017 at 11:22 AM, Matt Riedemann  wrote:

> This came up in IRC discussion the other day, but we didn't dig into it
> much given we were all (2 of us) exhausted talking about rebuild.
>
> But we have had several bugs over the years where people expect the root
> disk to change to a newly supplied image during rebuild even if the
> instance is volume-backed.
>
> I distilled several of those bugs down to just this one and duplicated the
> rest:
>
> https://bugs.launchpad.net/nova/+bug/1482040
>
> I wanted to see if there is actually any failure on the backend when doing
> this, and there isn't - there is no instance fault or anything like that.
> It's just not what the user expects, and actually the instance image_ref is
> then shown later as the image specified during rebuild, even though that's
> not the actual image in the root disk (the volume).
>
> There have been a couple of patches proposed over time to change this:
>
> https://review.openstack.org/#/c/305079/
>
> https://review.openstack.org/#/c/201458/
>
> https://review.openstack.org/#/c/467588/
>
> And Paul Murray had a related (approved) spec at one point for detach and
> attach of root volumes:
>
> https://review.openstack.org/#/c/221732/
>
> But the blueprint was never completed.
>
> So with all of this in mind, should we at least consider, until at least
> someone owns supporting this, that the API should fail with a 400 response
> if you're trying to rebuild with a new image on a volume-backed instance?
> That way it's a fast failure in the API, similar to trying to backup a
> volume-backed instance fails fast.
>
​Seems reasonable and correct to me, if we were dealing with ephemeral
Cinder (which isn't a thing) we'd just delete the volume and create a new one
with the new image. In this case, however, the point of BFV for most is
persistence, so it seems reasonable to me to start with changing the
response.


>
> If we did, that would change the API response from a 202 today to a 400,
> which is something we normally don't do. I don't think a microversion would
> be necessary if we did this, however, because essentially what the user is
> asking for isn't what we're actually giving them, so it's a failure in an
> unexpected way even if there is no fault recorded, it's not what the user
> asked for. I might not be thinking of something here though, like
> interoperability for example - a cloud without this change would blissfully
> return 202 but a cloud with the change would return a 400...so that should
> be considered.

​It's a bug IMO, if you're relying on an incorrect response code and not
getting what you expect, it seems that should be more important than
consistent behavior.


>
>
> --
>
> Thanks,
>
> Matt
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>


Re: [openstack-dev] [nova] Should we make rebuild + new image on a volume-backed instance fail fast?

2017-10-06 Thread Mathieu Gagné
Why not support this use case?

Same reason as before: A user might wish to keep its IP addresses.

The user cannot do the following to bypass the limitation:
1) stop instance
2) detach root volume
3) delete root volume
4) create new volume
5) attach as root
6) boot instance

Last time I tried, operation fails at step 2. I would need to test
against latest version of Nova to confirm.

Otherwise boot-from-volume feels like a second class citizen.

--
Mathieu


On Fri, Oct 6, 2017 at 1:22 PM, Matt Riedemann  wrote:
> This came up in IRC discussion the other day, but we didn't dig into it much
> given we were all (2 of us) exhausted talking about rebuild.
>
> But we have had several bugs over the years where people expect the root
> disk to change to a newly supplied image during rebuild even if the instance
> is volume-backed.
>
> I distilled several of those bugs down to just this one and duplicated the
> rest:
>
> https://bugs.launchpad.net/nova/+bug/1482040
>
> I wanted to see if there is actually any failure on the backend when doing
> this, and there isn't - there is no instance fault or anything like that.
> It's just not what the user expects, and actually the instance image_ref is
> then shown later as the image specified during rebuild, even though that's
> not the actual image in the root disk (the volume).
>
> There have been a couple of patches proposed over time to change this:
>
> https://review.openstack.org/#/c/305079/
>
> https://review.openstack.org/#/c/201458/
>
> https://review.openstack.org/#/c/467588/
>
> And Paul Murray had a related (approved) spec at one point for detach and
> attach of root volumes:
>
> https://review.openstack.org/#/c/221732/
>
> But the blueprint was never completed.
>
> So with all of this in mind, should we at least consider, until at least
> someone owns supporting this, that the API should fail with a 400 response
> if you're trying to rebuild with a new image on a volume-backed instance?
> That way it's a fast failure in the API, similar to trying to backup a
> volume-backed instance fails fast.
>
> If we did, that would change the API response from a 202 today to a 400,
> which is something we normally don't do. I don't think a microversion would
> be necessary if we did this, however, because essentially what the user is
> asking for isn't what we're actually giving them, so it's a failure in an
> unexpected way even if there is no fault recorded, it's not what the user
> asked for. I might not be thinking of something here though, like
> interoperability for example - a cloud without this change would blissfully
> return 202 but a cloud with the change would return a 400...so that should
> be considered.
>
> --
>
> Thanks,
>
> Matt
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



[openstack-dev] [nova] Should we make rebuild + new image on a volume-backed instance fail fast?

2017-10-06 Thread Matt Riedemann
This came up in IRC discussion the other day, but we didn't dig into it 
much given we were all (2 of us) exhausted talking about rebuild.


But we have had several bugs over the years where people expect the root 
disk to change to a newly supplied image during rebuild even if the 
instance is volume-backed.


I distilled several of those bugs down to just this one and duplicated 
the rest:


https://bugs.launchpad.net/nova/+bug/1482040

I wanted to see if there is actually any failure on the backend when 
doing this, and there isn't - there is no instance fault or anything 
like that. It's just not what the user expects, and actually the 
instance image_ref is then shown later as the image specified during 
rebuild, even though that's not the actual image in the root disk (the 
volume).


There have been a couple of patches proposed over time to change this:

https://review.openstack.org/#/c/305079/

https://review.openstack.org/#/c/201458/

https://review.openstack.org/#/c/467588/

And Paul Murray had a related (approved) spec at one point for detach 
and attach of root volumes:


https://review.openstack.org/#/c/221732/

But the blueprint was never completed.

So with all of this in mind, should we at least consider, until at least 
someone owns supporting this, that the API should fail with a 400 
response if you're trying to rebuild with a new image on a volume-backed 
instance? That way it's a fast failure in the API, similar to trying to 
backup a volume-backed instance fails fast.


If we did, that would change the API response from a 202 today to a 400, 
which is something we normally don't do. I don't think a microversion 
would be necessary if we did this, however, because essentially what the 
user is asking for isn't what we're actually giving them, so it's a 
failure in an unexpected way even if there is no fault recorded, it's 
not what the user asked for. I might not be thinking of something here 
though, like interoperability for example - a cloud without this change 
would blissfully return 202 but a cloud with the change would return a 
400...so that should be considered.
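
For what it's worth, here is a rough, self-contained sketch of the fast-fail
idea (this is not actual nova code, just the shape of the proposed check):

    # Reject a rebuild that supplies a new image for a volume-backed
    # instance, instead of returning 202 and silently not changing the
    # root disk.
    class BadRequest(Exception):
        """Stands in for the HTTP 400 response the API would return."""

    def check_rebuild_allowed(is_volume_backed, instance_image_ref,
                              new_image_ref):
        # image_ref is empty for boot-from-volume instances today.
        if is_volume_backed and new_image_ref != instance_image_ref:
            raise BadRequest("Rebuild with a new image is not supported "
                             "for volume-backed instances.")

    # Example: a volume-backed instance being rebuilt with a different image.
    try:
        check_rebuild_allowed(is_volume_backed=True, instance_image_ref="",
                              new_image_ref="new-image-uuid")
    except BadRequest as exc:
        print("400 Bad Request:", exc)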


--

Thanks,

Matt



Re: [openstack-dev] [infra][mogan] Need help for replacing the current master

2017-10-06 Thread Jeremy Stanley
On 2017-09-29 00:19:25 -0400 (-0400), Davanum Srinivas wrote:
> I tried several things, not sure i have enough git-fu to pull this
> off. For example.
[...]
> remote: ERROR:  In commit de26dc69aa28f57512326227a65dc3f9110a7be1
> remote: ERROR:  committer email address sleepsonthefl...@gmail.com
> remote: ERROR:  does not match your user account.
[...]

Yep, sorry about that (and sorry we've been too distracted to
respond before now, thanks for the reminder in IRC). We should have
predicted you'd need forgeCommitter in addition to pushMerge. We've
added that via https://review.openstack.org/510158 which has now
applied in Gerrit so please do try again and I'll try to be more
responsive in assisting with any further errors (but hopefully it'll
"just work" now).
-- 
Jeremy Stanley




Re: [openstack-dev] [Infra][Nova] s390x zKVM CI

2017-10-06 Thread Andreas Scheuring
Yes, job_name_in_report = true did it!


---
Andreas Scheuring (andreas_s)



> On 6. Oct 2017, at 17:01, Andreas Scheuring wrote:
> 
> Probably it’s the following setting in zuul.conf
> 
> job_name_in_report = true
> 
> We’ll see if it works out in 1.5h when the first test succeeded…
> 
> Thanks!
> 
> ---
> Andreas Scheuring (andreas_s)
> 
> 
> 
>> On 6. Oct 2017, at 12:53, Sean Dague wrote:
>> 
>> On 10/06/2017 04:07 AM, Andreas Scheuring wrote:
>>> Hi,
>>> yesterday the nova s390x (zkvm) CI got the permission to vote +1/-1 on
>>> nova again [3]. Technically it was just an addition the the gerri group
>>> [1]. Now Jenkins is showing up behind the “Verified” field in the review
>>> with its +1/-1. However it’s not yet showing up in the long table
>>> underneath that and therefore (probably) also not in the third party ci
>>> watch [2] page. What else needs to be done that our CI appears there as
>>> well?
>>> 
>>> Thanks!
>>> 
>>> [1] https://review.openstack.org/#/admin/groups/511,members 
>>>  
>>> [2] http://ci-watch.tintri.com/project?project=nova 
>>> 
>>> [3] 
>>> http://lists.openstack.org/pipermail/openstack-dev/2017-October/123195.html 
>>> 
>>> 
>>> 
>>> Andreas Scheuring (andreas_s)
>> 
>> Look at how the markup for the jenkins and power vm results posts are:
>> 
>> <a href="http://dal05.objectstorage.softlayer.net/v1/AUTH_3d8e6ecb-f597-448c-8ec2-164e9f710dd6/pkvmci/nova/11/457711/35/check/tempest-dsvm-full-xenial/ec523a2/"
>> rel="nofollow">tempest-dsvm-full-xenial</a> <span class="comment_test_result">SUCCESS</span>
>> in 53m 59s
>> 
>> Those css classes are used as selectors by the Toggle CI code to extract
>> test results. The zKVM results posting is missing them.
>> 
>>  -Sean
>> 
>> 
>> -- 
>> Sean Dague
>> http://dague.net 
>> 
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] using different dns domains in neutorn networks

2017-10-06 Thread Miguel Lavalle
Hi Kim,

Thanks for trying the feature out! There are 2 levels of Neutron
integration with DNS:

   1. Integration with Neutron's own internal DNS service. In short, ports
   get a DHCP lease from a dnsmasq instance associated with their network.
   This dnsmasq instance also assigns DNS attributes to the port, which are
   shown to the user in the dns_assignment dictionary returned by the API.
   It contains a FQDN, that is the concatenation of the port's dns_name and
   the dns_domain defined in neutron.conf. The purpose of the dns_assignment
   attribute is exclusively to show to the user the DNS attributes that
   dnsmasq assigns to the port.
   2. Integration with external DNS services (OpenStack Designate being the
   reference implementation). If this integration is enabled, a port will be
   known to an external DNS service by the FQDN that results from
   concatenating its dns_name attribute and its network's dns_domain. This
   latter attribute has no influence whatsoever on the port's dns_assignment.
   In the latest release, Pike, we also added a dns_domain attribute to
   ports. But again, it only has an effect on the external DNS service, with
   no effect on dns_assignment (see the short CLI sketch below).
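
To make the two levels concrete, here is a minimal CLI sketch. It is purely
illustrative: the network names and the dns_name/dns_domain values are made
up, and it assumes the DNS integration extensions are enabled (and, for
level 2, that an external DNS driver such as Designate is configured):

# Level 1: internal DNS only; the FQDN shown in dns_assignment uses the
# dns_domain from neutron.conf
$ neutron port-create --dns-name web1 private-net

# Level 2: external DNS; the per-network dns_domain is what the external
# service publishes
$ neutron net-update --dns-domain anotherdomain.org. net2
$ neutron port-create --dns-name web2 net2
# the port should then appear in the external DNS as web2.anotherdomain.org.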

I hope this helps. Please feel free to ping me in #openstack-neutron. My
IRC nickname is mlavalle

Regards

Miguel

On Fri, Oct 6, 2017 at 7:50 AM, Kim-Norman Sahm 
wrote:

> I've tried to upgrade my testing environment to ocata but i have the
> same issue...
>
> does anybody has an idea?
>
> regards
> Kim
>
>
>
> Am Freitag, den 29.09.2017, 09:32 +0200 schrieb Kim-Norman Sahm:
> > Hi,
> >
> > i'm currently testing the integration of designate and i found the
> > dns integration on neutron:
> > https://docs.openstack.org/newton/networking-guide/config-dns-int.htm
> > l
> >
> > In this example the value "dns_domain = example.org." is set in the
> > neutron.conf.
> > if i create a port with the "--dns_name fancyname" it is assigned to
> > the domain example.org: fancyname.example.org.
> >
> > if i set another domain name in another network "neutron net-update
> > --dns-domain anotherdomain.org. net2" and create a port in this
> > network the dns records is still in the example.org. domain.
> > is there a way to overwrite the global dns domain in a project and
> > inherit the dns-domain to the ports in this network?
> >
> > Best regards
> > Kim
> > Kim-Norman Sahm
> > Cloud & Infrastructure(OCI)
> > noris network AG
> > Thomas-Mann-Straße 16-20
> > 90471 Nürnberg
> > Deutschland
> > Tel +49 911 9352 1433
> > Fax +49 911 9352 100
> >
> > kim-norman.s...@noris.de
> > https://www.noris.de - Mehr Leistung als Standard
> > Vorstand: Ingo Kraupa (Vorsitzender), Joachim Astel
> > Vorsitzender des Aufsichtsrats: Stefan Schnabel - AG Nürnberg HRB
> > 17689
> >
> >
> >
> >
> >  
> > __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubs
> > cribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
> Kim-Norman Sahm
> Cloud & Infrastructure(OCI)
> noris network AG
> Thomas-Mann-Straße 16-20
> 90471 Nürnberg
> Deutschland
> Tel +49 911 9352 1433
> Fax +49 911 9352 100
>
> kim-norman.s...@noris.de
> https://www.noris.de - Mehr Leistung als Standard
> Vorstand: Ingo Kraupa (Vorsitzender), Joachim Astel
> Vorsitzender des Aufsichtsrats: Stefan Schnabel - AG Nürnberg HRB 17689
>
>
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Infra][Nova] s390x zKVM CI

2017-10-06 Thread Andreas Scheuring
Probably it’s the following setting in zuul.conf

job_name_in_report = true

We’ll see if it works out in 1.5h when the first test succeeded…

Thanks!

---
Andreas Scheuring (andreas_s)



> On 6. Oct 2017, at 12:53, Sean Dague  wrote:
> 
> On 10/06/2017 04:07 AM, Andreas Scheuring wrote:
>> Hi,
>> yesterday the nova s390x (zkvm) CI got the permission to vote +1/-1 on
>> nova again [3]. Technically it was just an addition to the gerrit group
>> [1]. Now Jenkins is showing up behind the “Verified” field in the review
>> with its +1/-1. However it’s not yet showing up in the long table
>> underneath that and therefore (probably) also not in the third party ci
>> watch [2] page. What else needs to be done so that our CI appears there as
>> well?
>> 
>> Thanks!
>> 
>> [1] https://review.openstack.org/#/admin/groups/511,members 
>> [2] http://ci-watch.tintri.com/project?project=nova
>> [3] 
>> http://lists.openstack.org/pipermail/openstack-dev/2017-October/123195.html
>> 
>> 
>> Andreas Scheuring (andreas_s)
> 
> Look at how the markup for the jenkins and power vm results posts are:
> 
> <a href="http://dal05.objectstorage.softlayer.net/v1/AUTH_3d8e6ecb-f597-448c-8ec2-164e9f710dd6/pkvmci/nova/11/457711/35/check/tempest-dsvm-full-xenial/ec523a2/"
> rel="nofollow">tempest-dsvm-full-xenial</a> <span class="comment_test_result">SUCCESS</span>
> in 53m 59s
> 
> Those css classes are used as selectors by the Toggle CI code to extract
> test results. The zKVM results posting is missing them.
> 
>   -Sean
> 
> 
> -- 
> Sean Dague
> http://dague.net
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [docs] sitemap automation suggestions

2017-10-06 Thread Jeremy Stanley
On 2017-10-05 22:51:31 -0400 (-0400), me...@openstack.org wrote:
[...]
> * setup of Google Webmaster tools (https://www.google.com/webmasters/)
[...]

I'm having a hard time figuring out from that page what exactly
"Google Webmaster tools" is, and under what license it's
distributed. Looks like maybe it's been renamed "Google Search
Console" according to Wikipedia, but it's still unclear to me where
to download the source and how to install it on the server.

The search console link just goes to a login page and wants me to
have an account with Google before it will (presumably) give me any
more detail, so that was the point where I stopped.
-- 
Jeremy Stanley


signature.asc
Description: Digital signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [ironic] my notes from the PTG in Denver

2017-10-06 Thread Dmitry Tantsur

Hi all!

Here are my notes from the ironic (and a bit of nova) room in Denver.
The same content in a nicely rendered form is on my blog:
http://dtantsur.github.io/posts/ironic-ptg-denver-2017-1.html
http://dtantsur.github.io/posts/ironic-ptg-denver-2017-2.html

Here goes the raw rst-formatted version. Feel free to comment and ask questions 
here or there.


Status of Pike priorities
-------------------------

In the Pike cycle, we had 22 priority items. Quite a few planned priorities
did land completely, despite the well-known staffing problems.

Finished


Booting from cinder volumes
^^^^^^^^^^^^^^^^^^^^^^^^^^^

This includes the iRMC implementation, but excludes the iLO one. There is
also a nova patch for updating IP addresses for volume connectors on review:
https://review.openstack.org/#/c/468353/.

Next, we need to update cinder to support FCoE - then we'll be able to
support it in the generic PXE boot interface. Finally, there is some interest
in implementing out-of-band BFV for UCS drivers too.

Rolling (online) upgrades between releases
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

We've found a bug that was backported to stable/pike soon after the release
and now awaits a point release. We also need developer documentation and
some post-Pike clean ups.

We also discussed fast-forward upgrades. We may need an explicit migration
for VIFs from port.extra to port.internal_info, **rloo** will track this.
Overall, we need to always make our migrations explicit and runnable without
the services running.

The driver composition reform
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Finished, with hardware types created for all supported hardware, and the
classic drivers pending deprecation in Queens.

`Removing the classic drivers`_ is planned for Rocky.

Standalone jobs (jobs without nova)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

These are present and voting, but we're not using their potential. The
discussion is summarized below in `Future development of our CI`_.

Feature parity between two CLI implementations
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

The ``openstack baremetal`` CLI is now complete and preferred, with the
deprecation of the ``ironic`` CLI expected in Queens.

We would like OSC to have fewer dependencies though. There were talks about
having a standalone ``openstack`` command without dependencies on other
clients, only on ``keystoneauth1``. **rloo** will follow up here.

**TheJulia** will check if there are any implications from the
interoperability team point of view.

Redfish hardware type
^^^^^^^^^^^^^^^^^^^^^

The ``redfish`` hardware type now provides all the basic stuff we need, i.e.
power and boot device management. There is an ongoing effort to implement
inspection. It is unclear whether more features can be implemented in a
vendor-agnostic fashion; **rpioso** is looking into Dell, while **crushil**
is looking into Lenovo.

Other
^^^^^

Also finished are:

* Post-deploy VIF attach/detach.

* Physical network awareness.

Not finished


OSC default API version
^^^^^^^^^^^^^^^^^^^^^^^

We now issue a warning if no explicit version is provided to the CLI.
The next step will be to change the version to latest, but our current
definition of latest does not fit this goal really well. We use the latest
version known to the client, which will prevent it from working out-of-box
with older clouds. Instead, we need to finally implement API version
negotiation in ironicclient, and negotiate the latest version.

Reference architectures guide
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

There is one patch that lays out considerations that are going to be shared
between all proposed architectures. The use cases we would like to cover:

* Admin-only provisioner (standalone architectures)

  * Small fleet and/or rare provisions.

Here a non-HA architecture may be acceptable, and a *noop* or *flat*
networking can be used.

  * Large fleet or frequent provisions.

Here we will recommend HA and *neutron* networking. *Noop* networking is
also acceptable.

* Bare metal cloud for end users (with nova)

  * Smaller single-site cloud.

Non-HA architecture and *flat* or *noop* networking is acceptable.
Ironic conductors can live on OpenStack controller nodes.

  * Large single-site cloud.

HA is required, and it is recommended to split ironic conductors with
their TFTP/HTTP servers to separate machines. *Neutron* networking
should be used, and thus compatible switches will be required, as well
as their ML2 mechanism drivers.

It is preferred to use virtual media instead of PXE/iPXE for deployment
and cleaning, if supported by hardware. Otherwise, especially large
clouds may consider splitting away TFTP servers.

  * Large multi-site cloud.

The same as a single-site cloud plus using Cells v2.

Deploy steps


We agreed to continue this effort, even though the ansible deploy driver solves
some of its use cases. The crucial point is how to pass the 

Re: [openstack-dev] [ptls] Sydney Forum Project Onboarding Rooms

2017-10-06 Thread Emilien Macchi
On Thu, Oct 5, 2017 at 8:50 AM, Kendall Nelson  wrote:
[...]
> If you are interested in reserving a spot, just reply directly to me and I
> will put your project on the list. Please let me know if you want one and
> also include the names and emails anyone that will be speaking with you.

TripleO - Emilien Macchi - emil...@redhat.com

Thanks!
-- 
Emilien Macchi

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [election] [tc] TC Candidacy

2017-10-06 Thread Amrith Kumar
I wish to submit my candidacy for election to the OpenStack Technical
Committee. I have never been elected to the Technical Committee before
but, I have never believed that being elected is a requirement to
participate in the deliberations. I have therefore been a frequent
participant and a consistent follower of the activities of the TC.

My focus on the TC would be (consistent with prior candidacies for the
TC) to make it easier for people to deploy and use OpenStack, and make
it easier for people to get involved in, participate, and contribute
to OpenStack.

I have been an active technical contributor to OpenStack for about
four years[1], involved in Trove for that entire period of time. I have
been the PTL for Trove for three cycles (Newton, Ocata and Pike). I
was re-elected for Queens but at the PTG that position was transitioned
over to another member of the team[2] as I moved my focus to a new DBaaS
implementation for OpenStack (tentatively named Hoard).

During these four years, I have been involved in a number of the
initiatives of the TC including most significantly the Stewardship
Working Group which began the process of writing the TC Vision
document. I attended the first of the Stewardship training sessions in
Michigan where the idea of the cross-project goals was conceived of
(by Doug Hellmann and others), an initiative that I am strongly
supportive of.

I have also been active in efforts to improve the overall
participation in OpenStack, through mentorship and outreach to
educational institutions. This is a key aspect of my current role,
employed at Verizon Wireless in the team that has built, and operates
one of the largest OpenStack deployments around. The team now includes
Brian Rosmaita, and Graham Hayes, and we are actively hiring more
contributors to the team.

I thank you for reading, and appreciate your vote.

[1] http://stackalytics.com/?release=all&metric=commits&user_id=amrith
[2] http://openstack.markmail.org/thread/txurdd5bhbzsebtq

-amrith

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] how does UEFI booting of VM manage per-instance copies of OVMF_VARS.fd ?

2017-10-06 Thread Waines, Greg
Hey just a follow up on this ...

FYI ... it does appear that when UEFI booting a VM, a per-instance copy of the 
OVMF_VARS.fd is indeed created.
See below:


root  97276  1  0 Oct05 ?00:01:41 /usr/libexec/qemu-kvm -c 
0x0001 -n 4 --proc-type=secondary --file-prefix=vs 
-- -enable-dpdk -name guest=instance-003a,debug-threads=on -S -object 
secret,id=masterKey0,format=raw,file=/var/lib/libvirt/qemu/domain-2-instance-003a/master-key.aes
 -machine pc-i440fx-rhel7.3.0,accel=kvm,usb=off,dump-guest-core=off -drive 
file=/usr/share/OVMF/OVMF_CODE.fd,if=pflash,format=raw,unit=0,readonly=on 
-drive 
file=/var/lib/libvirt/qemu/nvram/instance-003a_VARS.fd,if=pflash,format=raw,unit=1
 -m 512 -realtime mlock=off -smp 1,sockets=1,cores=1,threads=1 -object 
memory-backend-file,id=ram-node0,prealloc=yes,mem-path=/mnt/huge-2048kB/libvirt/qemu/2-instance-003a,share=yes,size=536870912,host-nodes=0,policy=bind
 -numa node,nodeid=0,cpus=0,memdev=ram-node0 -uuid 
13c69f91-e91d-4162-aea5-d53aaa7053b0 -smbios type=1,manufacturer=Fedora 
Project,product=OpenStack 
Nova,version=14.0.3-1.tis.243,serial=81f8fdfa-744c-4f60-bd39-edb5f0cfd39d,uuid=13c69f91-e91d-4162-aea5-d53aaa7053b0,family=Virtual
 Machine -no-user-config -nodefaults -chardev 
socket,id=charmonitor,path=/var/lib/libvirt/qemu/domain-2-instance-003a/monitor.sock,server,nowait
 -mon chardev=charmonitor,id=monitor,mode=control -rtc base=utc,driftfix=slew 
-global kvm-pit.lost_tick_policy=delay -no-hpet -no-shutdown -boot 
reboot-timeout=5000,strict=on -global i440FX-pcihost.pci-hole64-size=67108864K 
-device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -drive 
file=/dev/disk/by-path/ip-192.168.205.6:3260-iscsi-iqn.2010-10.org.openstack:volume-4c1d2d08-5f13-42ee-8cd4-db950614e031-lun-0,format=raw,if=none,id=drive-ide0-0-0,readonly=on,serial=4c1d2d08-5f13-42ee-8cd4-db950614e031,cache=none,aio=native
 -device ide-cd,bus=ide.0,unit=0,drive=drive-ide0-0-0,id=ide0-0-0,bootindex=2 
-drive 
file=/dev/disk/by-path/ip-192.168.205.6:3260-iscsi-iqn.2010-10.org.openstack:volume-c2c57148-c7ca-4516-8f06-6ed205524057-lun-0,format=raw,if=none,id=drive-virtio-disk0,serial=c2c57148-c7ca-4516-8f06-6ed205524057,cache=none,aio=native
 -device 
virtio-blk-pci,scsi=off,bus=pci.0,addr=0x4,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1
 -chardev 
socket,id=charnet0,path=/var/run/vswitch/usvhost-b3113aee-fc06-4277-8e65-c6f2c3b0415d
 -netdev vhost-user,chardev=charnet0,id=hostnet0 -device 
virtio-net-pci,netdev=hostnet0,id=net0,mac=fa:16:3e:69:16:ec,bus=pci.0,addr=0x3 
-add-fd set=0,fd=30 -chardev file,id=charserial0,path=/dev/fdset/0,append=on 
-device isa-serial,chardev=charserial0,id=serial0 -chardev pty,id=charserial1 
-device isa-serial,chardev=charserial1,id=serial1 -device 
usb-tablet,id=input0,bus=usb.0,port=1 -vnc 0.0.0.0:0 -k en-us -device 
cirrus-vga,id=video0,bus=pci.0,addr=0x2 -device 
virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x5 -msg timestamp=on


Greg.


From: Steve Gordon 
Reply-To: "openstack-dev@lists.openstack.org" 

Date: Thursday, September 28, 2017 at 2:50 PM
To: "openstack-dev@lists.openstack.org" 
Subject: Re: [openstack-dev] [nova] how does UEFI booting of VM manage 
per-instance copies of OVMF_VARS.fd ?

- Original Message -
From: "Jay Pipes" >
To: openstack-dev@lists.openstack.org
Sent: Thursday, September 28, 2017 12:53:16 PM
Subject: Re: [openstack-dev] [nova] how does UEFI booting of VM manage 
per-instance copies of OVMF_VARS.fd ?
On 09/27/2017 09:09 AM, Waines, Greg wrote:
> Hey there ... a question about UEFI booting of VMs.
>
> i.e.
>
> glance image-create --file cloud-2730.qcow --disk-format qcow2
> --container-format bare --property “hw-firmware-type=uefi” --name
> clear-linux-image
>
> in order to specify that you want to use UEFI (instead of BIOS) when
> booting VMs with this image
>
> i.e./usr/share/OVMF/OVMF_CODE.fd
>
>/usr/share/OVMF/OVMF_VARS.fd
>
> and I believe you can boot into the UEFI Shell, i.e. to change UEFI
> variables in NVRAM (OVMF_VARS.fd) by
>
> booting VM with /usr/share/OVMF/UefiShell.iso in cd ...
>
> e.g. to changes Secure Boot keys or something like that.
>
> My QUESTION ...
>
> ·how does NOVA manage a unique instance of OVMF_VARS.fd for each instance ?
>
> oi believe OVMF_VARS.fd is suppose to just be used as a template, and
> is supposed to be copied to make a unique instance for each VM that UEFI
> boots
>
> ohow does NOVA manage this ?
>
> §e.g. is the unique instance of OVMF_VARS.fd created in
>   /etc/nova/instances//  ?
>
> o... and does this get migrated to another compute if VM is migrated ?
Hi Greg,
I think the following part of the code essentially sums up what you're
experiencing [1]:
LOG.warning("uefi support is without some kind of "

[openstack-dev] [release] Release countdown for week R-20, October 6-13

2017-10-06 Thread Sean McGinnis
Welcome to our regular release countdown email. Hopefully things will settle
down and I'll start getting these out a little earlier.

Development Focus
-

We are just one week away from the first Queens milestone already. Time flies.

Team focus should be on spec approval and implementation for priority features.
We are starting to enter spec freeze for many projects. Please be aware of the
project specific deadlines that vary slightly from the overall release
schedule.

Speaking of deadlines - this is the last week for projects to switch release
models if they are considering it. Stop by the #openstack-release channel if
you have any questions about how this works.

Teams should now be making progress towards the cycle goals [1]. Please
prioritize reviews for these appropriately. There has been some good progress
so far, but definitely more work to do. Things may have stalled a little with
the tempest split-out goal due to zuul activities, but policy work should be
able to continue [2].

[1] https://governance.openstack.org/tc/goals/queens/index.html
[2] http://lists.openstack.org/pipermail/openstack-dev/2017-October/123040.html

Another quick reminder - if your project has a library that is still a 0.x
release, start thinking about when it will be appropriate to do a 1.0 version.
The version number does signal the state, real or perceived, of the library, so
we strongly encourage going to a full major version once things are in a good
and usable state.

Finally, we would love to have all the liaisons attend the release team meeting
every Friday [3]. Anyone is welcome.

[3] http://eavesdrop.openstack.org/#Release_Team_Meeting

General Information
---

Next week is the first milestone of the Queens development cycle.

Also note that there is just a little more time to do final wrap up of Newton.
Tony gave that a few more days due to other distractions.

Upcoming Deadlines & Dates
--

Queens-1 milestone: October 19 (R-19 week)
Forum at OpenStack Summit in Sydney: November 6-8
Last Newton Library releases October 8
Newton Branch EOL October 18

-- 
Sean McGinnis (smcginnis)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] [placement] resource providers update 36

2017-10-06 Thread Chris Dent


Update 37 is struggling to contain itself, too much code associated
with placement!

# Most Important^wRecent

Discussion last night uncovered some disagreement and confusion over
the content of the Selection object that will be used to send
alternate destination hosts when doing builds. The extent to which an
allocation request is not always opaque and the need to be explicit
about microversions was clarified, so edleafe is going to make some
adjustments, after first resolving the prerequisite code (alternate
hosts: https://review.openstack.org/#/c/486215/ ).

# What's Changed

Nested providers spec merged, selection objects spec merged (but see
above and below), alternate hosts spec merge, request traits in nova
spec merged, minimal cache header spec merged, POST /allocations spec
merged.

* 
http://specs.openstack.org/openstack/nova-specs/specs/queens/approved/migration-allocations.html
* 
http://specs.openstack.org/openstack/nova-specs/specs/queens/approved/nested-resource-providers.html
* 
http://specs.openstack.org/openstack/nova-specs/specs/queens/approved/placement-cache-headers.html
* 
http://specs.openstack.org/openstack/nova-specs/specs/queens/approved/post-allocations.html
* 
http://specs.openstack.org/openstack/nova-specs/specs/queens/approved/request-traits-in-nova.html
* 
http://specs.openstack.org/openstack/nova-specs/specs/queens/approved/return-alternate-hosts.html
* 
http://specs.openstack.org/openstack/nova-specs/specs/queens/approved/return-selection-objects.html

There are additional specs not yet merged, placement related, some of
which the above depend on, that need some attention.

* https://review.openstack.org/#/c/502306/
  Network bandwidth resource provider

* https://review.openstack.org/#/c/507052/
  Support traits in the Ironic driver

* https://review.openstack.org/#/c/508164/
  Add spec for symmetric GET and PUT of allocations
  (The POST /allocations spec depends on this one)

* https://review.openstack.org/#/c/509136/
  Fix issues for post-allocations spec

* https://review.openstack.org/#/c/504540/
  Spec for limiting GET /allocation_candidates

* https://review.openstack.org/#/c/497713/
  Add trait support in the allocation candidates API

# Main Themes

## Nested Resource Providers

While working on nested resource providers it became clear there was
a lot of mixed-up cruft in the resource provider objects, so before
the nested work there is now

https://review.openstack.org/#/q/status:open+topic:no-orm-resource-providers

which is a stack of cleanups to how the SQL is managed in there. When
that is done, the conflicts at

https://review.openstack.org/#/q/status:open+topic:no-orm-resource-providers

will be resolved and nested work will continue.

## Migration allocations

The migration allocations work is happening at:

 https://review.openstack.org/#/q/topic:bp/migration-allocations

Management of those allocations currently involves some raciness,
birthing the specs (above) to allow POST /allocations; some of the
code for that is in progress at

https://review.openstack.org/#/q/topic:bp/post-allocations

## Alternate Hosts

We want to be able to do retries within cells, so we need some
alternate hosts when returning a destination that are structured
nicely for RPC:

https://review.openstack.org/#/q/topic:bp/return-selection-objects

# Other Stuff

* https://review.openstack.org/#/c/508149/
  A spec in neutron for QoS minimum bandwidth allocation in Placement
  API

There's plenty of other stuff too, but much of it is covered in
the links above. To avoid a tyranny of choice, I'll just leave it off
for now. There's plenty of existing stuff to think about.

# End

Your prize this week is vegetable tempura.

--
Chris Dent  (⊙_⊙') https://anticdent.org/
freenode: cdent tw: @anticdent
__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] using different dns domains in neutorn networks

2017-10-06 Thread Kim-Norman Sahm
I've tried to upgrade my testing environment to ocata but i have the
same issue...

does anybody has an idea?

regards
Kim



Am Freitag, den 29.09.2017, 09:32 +0200 schrieb Kim-Norman Sahm:
> Hi,
>
> i'm currently testing the integration of designate and i found the
> dns integration on neutron:
> https://docs.openstack.org/newton/networking-guide/config-dns-int.htm
> l
>
> In this example the value "dns_domain = example.org." is set in the
> neutron.conf.
> if i create a port with the "--dns_name fancyname" it is assigned to
> the domain example.org: fancyname.example.org.
>
> if i set another domain name in another network "neutron net-update
> --dns-domain anotherdomain.org. net2" and create a port in this
> network the dns records is still in the example.org. domain.
> is there a way to overwrite the global dns domain in a project and
> inherit the dns-domain to the ports in this network?
>
> Best regards
> Kim
> Kim-Norman Sahm
> Cloud & Infrastructure(OCI)
> noris network AG
> Thomas-Mann-Straße 16-20
> 90471 Nürnberg
> Deutschland
> Tel +49 911 9352 1433
> Fax +49 911 9352 100
>
> kim-norman.s...@noris.de
> https://www.noris.de - Mehr Leistung als Standard
> Vorstand: Ingo Kraupa (Vorsitzender), Joachim Astel
> Vorsitzender des Aufsichtsrats: Stefan Schnabel - AG Nürnberg HRB
> 17689
>
>
>
>
>  
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubs
> cribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Kim-Norman Sahm
Cloud & Infrastructure(OCI)
noris network AG
Thomas-Mann-Straße 16-20
90471 Nürnberg
Deutschland
Tel +49 911 9352 1433
Fax +49 911 9352 100

kim-norman.s...@noris.de
https://www.noris.de - Mehr Leistung als Standard
Vorstand: Ingo Kraupa (Vorsitzender), Joachim Astel
Vorsitzender des Aufsichtsrats: Stefan Schnabel - AG Nürnberg HRB 17689






smime.p7s
Description: S/MIME cryptographic signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [tc][election] TC candidacy (dims)

2017-10-06 Thread Davanum Srinivas
Team,

Please consider my candidacy for Queens. I just realized how time flies, reading
my previous two self nominations [1][2]. As a TC we have made significant
strides in the last cycle[3], and my contributions have been around how we
can do better at working with adjacent communities and at getting people
from outside the US involved and productive. We have significant challenges
in both areas. Let's start with the one about engaging new, potentially
part-time contributors: the newly forming SIG around "First Contact" is
something we need to pool our efforts into. Encouraging activities around
SIG-OpenStack in Kubernetes (and the mirror SIG-Kubernetes proposed in
OpenStack) is turning the corner, judging by the level of interest at the
PTG [5]. I fully intend to help in both areas so our Foundation can stay at
the cutting edge and relevant in the current and emerging ecosystems.

Thanks,
Dims


[1] 
https://git.openstack.org/cgit/openstack/election/plain/candidates/newton/TC/Davanum_Srinivas.txt
[2] 
https://git.openstack.org/cgit/openstack/election/plain/candidates/pike/TC/dims.txt
[3] http://lists.openstack.org/pipermail/openstack-dev/2017-October/122962.html
[4] 
http://lists.openstack.org/pipermail/openstack-sigs/2017-September/thread.html#74
[5] https://etherpad.openstack.org/p/queens-ptg-sig-k8s

-- 
Davanum Srinivas :: https://twitter.com/dims

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] Unbranched repositories and testing

2017-10-06 Thread Jiří Stránský

On 5.10.2017 22:40, Alex Schultz wrote:

Hey folks,

So I wandered across the policy spec[0] for how we should be handling
unbranched repository reviews and I would like to start a broader
discussion around this topic.  We've seen it several times over the
recent history where a change in oooqe or tripleo-ci ends up affecting
either a stable branch or an additional set of jobs that were not run
on the change.  I think it's unrealistic to run every possible job
combination on every submission and it's also a giant waste of CI
resources.  I also don't necessarily agree that we should be using
depends-on to prove things are fine for a given patch for the same
reasons. That being said, we do need to minimize our risk for patches
to these repositories.

At the PTG retrospective I mentioned component design structure[1] as
something we need to be more aware of. I think this particular topic
is one of those types of things where we could benefit from evaluating
the structure and policy around these unbranched repositories to see
if we can improve it.  Is there a particular reason why we continue to
try and support deployment of (at least) 3 or 4 different versions
within a single repository?  Are we adding new features that really
shouldn't be consumed by these older versions such that perhaps it
makes sense to actually create stable branches?  Perhaps there are
some other ideas that might work?


Other folks probably have a better view of the full context here, but 
i'll chime in with my 2 cents anyway..


I think using stable branches for tripleo-quickstart-extras could be 
worth it. The content there is quite tightly coupled with the expected 
TripleO end-user workflows, which tend to evolve considerably between 
releases. Branching extras might be a good way to "match the reality" in 
that sense, and stop worrying about breaking older workflows. (Just 
recently it came up that the upgrade workflow in O is slightly updated 
to make it work in P, and will change quite a bit for Q. Minor updates 
also changed between O and P.)


I'd say that tripleo-quickstart is a different story though. It seems 
fairly release-agnostic in its focus. We may want to keep it unbranched 
(?). That probably applies even more for tripleo-ci, where the ability to 
make changes which affect how TripleO does CIing in general, across 
releases, is IMO a significant feature.


Maybe branching quickstart-extras might require some code reshuffling 
between what belongs there and what belongs into quickstart itself.


(Just my 2 cents, i'm likely not among the most important stakeholders 
in this...)


Jirka



Thanks,
-Alex

[0] https://review.openstack.org/#/c/478488/
[1] http://people.redhat.com/aschultz/denver-ptg/tripleo-ptg-retro.jpg

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [QA] QA Bug Czar

2017-10-06 Thread Andrea Frittoli
Dear all,

as discussed during the PTG, starting in the Queens cycle we will have a
bug czar.

chandakumar has graciously put his name forward for the role, and he'll be
QA bug czar for the Queens cycle.
Thank you!

The bug czar ensures that all bugs filed against OpenStack projects under
the QA umbrella are triaged and acted upon in a timely fashion based on
criticality.
The bug czar responsibilities include:

- ensure that new bugs are triaged
- prune old / stale bugs that are not relevant anymore
- maintain a list of low hanging fruit bugs for new contributors to pick
and fix
- ensure that PTLs / core reviewers are aware of high-priority bugs / raise
attention to critical issues
- run a regular session in IRC for discussion on bugs, mentoring of bug
triage contributors
- drive implementation / adoption of automation that may help with any of
the above

Kind regards,

Andrea Frittoli (andreaf)
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [ironic][inspector] Virtual Queens PTG

2017-10-06 Thread milanisko k
Folks,

as there have been multiple questions about Inspector that I couldn't
answer due to my PTG absence, I've decided to make it up to You and
organise a virtual inspector-focused PTG call (over some BlueJeans or TBD).

The other Inspector Cores graciously agreed to it. I've selected the times all
of us were most comfortable with and would like to ask You to do the
same[1].

As far as the program is concerned, I've created an Etherpad with some
bullets to base our discussion on[2], TL;DR of it being *current Inspector
status on HA filtering and plan on Ironic--Inspector merge* and
picking action items.

I'd like to kindly invite You, the Community, to participate.

Best Regards,
milan

[1] https://doodle.com/poll/q7m4mnvv3h7k8eae
[2] https://etherpad.openstack.org/p/inspector-queens-virtual-ptg
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [qa][all] QA Office Hours

2017-10-06 Thread Andrea Frittoli
Dear all,

Starting next week the QA team will start running office hours bi-weekly
(every second week) on Thursdays at 9:00 UTC.
Office hours will happen in the #openstack-qa channel, and we'll use
meetbot there to record info and action items and to generate dedicated logs.

Several core developers from the QA program will be available, so please
join us in the #openstack-qa channel to ask any question related QA
projects.

Faithfully,

Andrea Frittoli (andreaf)
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Infra][Nova] s390x zKVM CI

2017-10-06 Thread Sean Dague
On 10/06/2017 04:07 AM, Andreas Scheuring wrote:
> Hi,
> yesterday the nova s390x (zkvm) CI got the permission to vote +1/-1 on
> nova again [3]. Technically it was just an addition to the gerrit group
> [1]. Now Jenkins is showing up behind the “Verified” field in the review
> with its +1/-1. However it’s not yet showing up in the long table
> underneath that and therefore (probably) also not in the third party ci
> watch [2] page. What else needs to be done so that our CI appears there as
> well?
> 
> Thanks!
> 
> [1] https://review.openstack.org/#/admin/groups/511,members 
> [2] http://ci-watch.tintri.com/project?project=nova
> [3] 
> http://lists.openstack.org/pipermail/openstack-dev/2017-October/123195.html
> 
> 
> Andreas Scheuring (andreas_s)

Look at how the markup for the jenkins and power vm results posts are:

<a href="http://dal05.objectstorage.softlayer.net/v1/AUTH_3d8e6ecb-f597-448c-8ec2-164e9f710dd6/pkvmci/nova/11/457711/35/check/tempest-dsvm-full-xenial/ec523a2/"
rel="nofollow">tempest-dsvm-full-xenial</a> <span class="comment_test_result">SUCCESS</span>
in 53m 59s

Those css classes are used as selectors by the Toggle CI code to extract
test results. The zKVM results posting is missing them.

-Sean


-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [tc] Technical Committee Status update, October 6th

2017-10-06 Thread Thierry Carrez
Hi!

This is the weekly summary of Technical Committee initiatives. You can
find the full list of all open topics (updated twice a week) at:

https://wiki.openstack.org/wiki/Technical_Committee_Tracker

If you are working on something (or plan to work on something) that is
not on the tracker, feel free to add to it !


== Election period started ==

We are renewing 6 of the 13 TC seats; nominations are open until EOD
Tuesday, October 10. If you are interested in open source project
governance and want to help continuously adapt it to an ever-evolving
environment, please consider running!

https://governance.openstack.org/election/


== Recently-approved changes ==

* Adopt python-openstacksdk into shade project [1]
* Remove stable:follows-policy tag from TripleO [2]
* Add api interop assert tag to nova [3]

[1] https://review.openstack.org/#/c/501426/
[2] https://review.openstack.org/#/c/507924/
[3] https://review.openstack.org/#/c/506255/

A bit of additional context about the first two changes:

While the name was implying otherwise, python-openstacksdk never was the
one official OpenStack Python SDK. To present a less confusing, more
usable and more complementary face to end users of OpenStack clouds, the
Shade project team (with the help of the OpenStackClient team) proposed
to adopt the now semi-abandoned python-openstacksdk and work to refactor
the 3 projects to avoid overlap and increase complementarity.

Deployment projects like TripleO have needs and policies around stable
branches that are different from classic service projects, which make
the current stable:follows-policy tag difficult to follow, so they asked
to have the tag removed. This may point to a need for a specific common
stable policy for deployment/lifecycle management projects.


== Voting in progress ==

Masakari, a new project to cover Virtual Machine HA needs, reached
majority support and will be approved next Tuesday unless last-minute
objections are posted:

https://review.openstack.org/#/c/500118/

fungi's change moving the "Infra sysadmins" entry in the top-5 help
wanted list to #2 also reached majority and will be approved next
Tuesday unless someone objects:

https://review.openstack.org/#/c/507637/

smcginnis's proposal to get rid of (unused) supports-zero-impact-upgrade
is still missing a couple of votes to pass:

https://review.openstack.org/#/c/506241/


== Open discussion ==

The supports-accessible-upgrade tag is also being considered for removal,
since the amount of information it conveys is questionable. Alternatives
are being discussed on the mailing-list. Join on the review or the thread:

* https://review.openstack.org/#/c/506263/
*
http://lists.openstack.org/pipermail/openstack-dev/2017-October/123061.html

Monty posted new patchsets on his proposed updates around the project
testing interface. Please review at:

* https://review.openstack.org/#/c/508693/
* https://review.openstack.org/#/c/508694/

3 other new project teams are still being actively discussed, with no
clear consensus yet. Please join the discussion if you have an opinion
on whether they should be made official OpenStack project teams:

* Glare (https://review.openstack.org/479285)
* Stackube (https://review.openstack.org/#/c/462460/)
* Mogan (https://review.openstack.org/#/c/508400/)


== TC member actions for the coming week(s) ==

Monty should answer the feedback on the supported database version
resolution (https://review.openstack.org/493932) so that we can make
progress there.

John should review Lance's response to his objection on:
https://review.openstack.org/#/c/500985/

Doug and Thierry will draft a top-5 help wanted list addition around
'champions' or leaders of various efforts, as Lance and Chandan have proved
it to be a successful way of getting cross-project work done.


== Need for a TC meeting next Tuesday ==

Nothing is blocked to the point of requiring a specific meeting. Expect
the election season and the last project team applications to be the
main topics discussed during TC office hours next week. You can find TC
office hours at:

https://governance.openstack.org/tc/#office-hours

Cheers,

-- 
Thierry Carrez (ttx)


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [packaging][all] Sample Config Files in setup.cfg

2017-10-06 Thread James Page
On Tue, 3 Oct 2017 at 18:15 Doug Hellmann  wrote:

> Excerpts from Jesse Pretorius's message of 2017-10-03 15:57:17 +:
> > On 10/3/17, 3:01 PM, "Doug Hellmann"  wrote:
> >
> > >> Given that this topic has gone through several cycles of discussion
> and has never gone anywhere, does it perhaps merit definition as a project
> interface so that we can define the problem this is trying to solve and set
> a standard formally once and for all?
> >
> > >Maybe a couple of the various packaging projects can agree and just
> > >set a de facto rule (and document it). That worked out OK for us
> > >with the doc reorganization when we updated the docs.o.o site
> > >templates.
> >
> > I’m happy to facilitate that. Is there some sort of place where such
> standards are recorded? Ie Where do I submit a review to and is there an
> example to reference for the sort of information that should be in it?
> >
>
> The docs team put that info in the spec for the migration. Do we
> have a packaging SIG yet? That seems like an ideal body to own a
> standard like this long term. Short term, just getting some agreement
> on the mailing list would be a good start.


Bah - missed the start of this thread but here's my tuppence

1) +1 for a consistent approach across projects - /usr/share/
sounds like a sensible location - avoiding any complexity with managing
changes made by users in /etc/ for deploy from source use-cases,
and allowing packagers to know where to expect reference/sample config
files to appear during the package build-out/install process.

2) Looking at the Ubuntu packaging for OpenStack projects, we have quite a
few places where oslo-config-generator or oslo-policy-generator is used to
generate sample configuration files as part of the package build; I might
have missed it in my read-through of this thread, but it would be awesome if
those could be integrated as part of this process as well, as the
originating project would then be able to provide some level of assurance
about the content of generated files in downstream distributions.
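
For example, a hedged sketch (the namespaces and paths below are
illustrative; each project ships its own generator configuration):

$ oslo-config-generator --namespace nova.conf --namespace oslo.log \
      --output-file etc/nova/nova.conf.sample

or, driven from the project's checked-in generator config file:

$ oslo-config-generator --config-file etc/nova/nova-config-generator.conf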

I'd also be +1 on a packaging SIG; I don't think it will ever be a high
overhead SIG but it sounds like there are common challenges for deployment
projects and distributors which would benefit from shared focus.

Cheers

James
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Project Ideas for Graduate Student

2017-10-06 Thread Luke Hinds
On Thu, Oct 5, 2017 at 11:37 PM, Mike Perez  wrote:

> On 15:27 Oct 05, Mike Perez wrote:
> > On 02:59 Oct 05, Puneet Jain wrote:
> > > Hello all,
> > >
> > > I am a graduate student and have intermediate knowledge of and a huge
> > > interest in cloud computing. I am looking for a project idea,
> > > particularly any new feature I can implement in OpenStack.
> > >
> > > I thought of implementing multi-factor authentication but happened to
> know
> > > that it has already been implemented.
> > > https://docs.openstack.org/security-guide/identity/authentication.html
> > >
> > > I would prefer to do something in security. Any ideas?
> > >
> > > Looking forward to hearing from you guys. Thanks in advance!
> >
> > Welcome to the community Puneet! We have various Security group related
> > projects listed here:
> >
> > https://wiki.openstack.org/wiki/Security
> >
> > You can also find various Security/Identity/Compliance OpenStack project
> > services listed in our project navigator:
> >
> > https://www.openstack.org/software/project-navigator/
> >
> > Also on freenode irc we have #openstack-security. You can see more
> channels:
> >
> > https://wiki.openstack.org/wiki/IRC
> >
> > Here are some helpful documents in setting up IRC, git, and the various
> > accounts you'll be using in our community:
> >
> > https://docs.openstack.org/upstream-training/irc.html
> > https://docs.openstack.org/upstream-training/git.html
> > https://docs.openstack.org/upstream-training/accounts.html
> >
> > --
> > Mike Perez
>
> Actually this account setup documentation is more up-to-date and better:
>
> https://docs.openstack.org/infra/manual/developers.html#account-setup
>
> --
> Mike Perez
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



Further to what is already said, you're welcome to join us on the weekly
security meeting where we discuss security projects such as Bandit
(security lint scanner) and Syntribos (API fuzzer). We also help incubate
new projects if you have some ideas you would like to float.

We meet on freenode IRC channel #openstack-meeting-alt @ 17:00 UTC each
week.

As mentioned, there is also Barbican, who meet as well
on #openstack-meeting-alt Monday @ 20.00 UTC and are a very welcoming group
of folks who welcome new contributions.

Welcome to come along and say hi!
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Infra][Nova] s390x zKVM CI

2017-10-06 Thread Andreas Scheuring
Hi,
yesterday the nova s390x (zkvm) CI got the permission to vote +1/-1 on nova 
again [3]. Technically it was just an addition to the gerrit group [1]. Now 
Jenkins is showing up behind the “Verified” field in the review with its +1/-1. 
However it’s not yet showing up in the long table underneath that and therefore 
(probably) also not in the third party ci watch [2] page. What else needs to be 
done so that our CI appears there as well?

Thanks!

[1] https://review.openstack.org/#/admin/groups/511,members 
[2] http://ci-watch.tintri.com/project?project=nova 

[3] http://lists.openstack.org/pipermail/openstack-dev/2017-October/123195.html


Andreas Scheuring (andreas_s)
__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [tripleo] blockquotes in docs

2017-10-06 Thread Markus Zoeller
Just a short reminder that rst puts stuff in blockquotes when you're not
careful with the spacing. Example:

 * item 1
 * item 2

That's in blockquotes because of the one leading blank space. The
TripleO docs are full of them, which looks a bit ugly, to be frank.
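
The fix is simply to start the list in the first column:

* item 1
* item 2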

This change removes all unintentional blockquotes in the tripleo docs:
https://review.openstack.org/#/c/504518/

You can find them by yourself with:
$ tox -e docs
$ grep -rn blockquote doc/build/html/ --include *.html


I often use http://rst.aaroniles.net/ for a quick check.

-- 
Regards, Markus Zoeller (markus_z)


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Supporting SSH host certificates

2017-10-06 Thread Clint Byrum
A long time ago, a few Canonical employees (Scott Moser was one of them,
forget who else was doing it, maybe Dave Walker and/or Dustin Kirkland)
worked out a scheme for general usage that doesn't require extra plumbing:

 * Client generates a small SSH host key locally and pushes it into
   user data in a way which causes the image to boot and install this
   key from user_data as the host SSH key.
 * Client SSH's in with the strict requirement that the host key be the
   one they just generated and pushed into the instance.
 * Client now triggers new host key generation, and copies new public
   key into client known_hosts.

With this system you don't have to scrape console logs for SSH keys or
build your system on hope.
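
A rough sketch of that flow, for illustration only (the key type, the
cloud-config layout and the image/flavor/user names below are assumptions
on my part, not a tested recipe; SERVER_IP stands for the instance's
address):

# 1. generate a throwaway host key locally
ssh-keygen -t ed25519 -N '' -f ./throwaway-hostkey

# 2. push it in as the instance's initial host key via cloud-init user data
cat > user-data.yaml <<EOF
#cloud-config
ssh_keys:
  ed25519_private: |
$(sed 's/^/    /' ./throwaway-hostkey)
  ed25519_public: $(cat ./throwaway-hostkey.pub)
EOF
openstack server create --image my-image --flavor m1.small \
    --user-data user-data.yaml --key-name mykey pinned-host-vm

# 3. the first login verifies against exactly that key, so no MITM window
echo "$SERVER_IP $(cat ./throwaway-hostkey.pub)" > ./pinned-known-hosts
ssh -o UserKnownHostsFile=./pinned-known-hosts \
    -o StrictHostKeyChecking=yes ubuntu@$SERVER_IP

# 4. on that first session, regenerate the host keys, copy the new public
#    key into your real known_hosts, then throw the temporary key away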

Excerpts from Giuseppe de Candia's message of 2017-09-29 14:21:06 -0500:
> Hi Folks,
> 
> 
> 
> My intent in this e-mail is to solicit advice for how to inject SSH host
> certificates into VM instances, with minimal or no burden on users.
> 
> 
> 
> Background (skip if you're already familiar with SSH certificates): without
> host certificates, when clients ssh to a host for the first time (or after
> the host has been re-installed), they have to hope that there's no man in
> the middle and that the public key being presented actually belongs to the
> host they're trying to reach. The host's public key is stored in the
> client's known_hosts file. SSH host certificates eliminate the possibility of
> Man-in-the-Middle attack: a Certificate Authority public key is distributed
> to clients (and written to their known_hosts file with a special syntax and
> options); the host public key is signed by the CA, generating an SSH
> certificate that contains the hostname and validity period (among other
> things). When negotiating the ssh connection, the host presents its SSH
> host certificate and the client verifies that it was signed by the CA.
> 
> 
> 
> How to support SSH host certificates in OpenStack?
> 
> 
> 
> First, let's consider doing it by hand, instance by instance. The only
> solution I can think of is to VNC to the instance, copy the public key to
> my CA server, sign it, and then write the certificate back into the host
> (again via VNC). I cannot ssh without risking a MITM attack. What about
> using Nova user-data? User-data is exposed via the metadata service.
> Metadata is queried via http (reply transmitted in the clear, susceptible
> to snooping), and any compute node can query for any instance's
> meta-data/user-data.
> 
> 
> 
> At this point I have to admit I'm ignorant of details of cloud-init. I know
> cloud-init allows specifying SSH private keys (both for users and for SSH
> service). I have not yet studied how such information is securely injected
> into an instance. I assume it should only be made available via ConfigDrive
> rather than metadata-service (again, that service transmits in the clear).
> 
> 
> 
> What about providing SSH host certificates as a service in OpenStack? Let's
> keep out of scope issues around choosing and storing the CA keys, but the
> CA key is per project. What design supports setting up the SSH host
> certificate automatically for every VM instance?
> 
> 
> 
> I have looked at Vendor Data and I don't see a way to use that, mainly
> because 1) it doesn't take parameters, so you can't pass the public key
> out; and 2) it's queried over http, not https.
> 
> 
> 
> Just as a feasibility argument, one solution would be to modify Nova
> compute instance boot code. Nova compute can securely query a CA service
> asking for a triplet (private key, public key, SSH certificate) for the
> specific hostname. It can then inject the triplet using ConfigDrive. I
> believe this securely gets the private key into the instance.
> 
> 
> 
> I cannot figure out how to get the equivalent functionality without
> modifying Nova compute and the boot process. Every solution I can think of
> risks either exposing the private key or vulnerability to a MITM attack
> during the signing process.
> 
> 
> 
> Your help is appreciated.
> 
> 
> 
> --Pino

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [DIB] DIB Meetings

2017-10-06 Thread Andreas Scheuring
Thanks!

> On 6. Oct 2017, at 04:25, Ian Wienand  wrote:
> 
> On 10/06/2017 02:19 AM, Andreas Scheuring wrote:
>> seems like there is some confusing information about the DIB
>> meetings in the wiki [1]. The meeting is alternating between 15:00
>> and 20:00 UTC.  But whenever the Text says 15:00 UTC, the link
>> points to a 20:00 UTC worldclock site and vice versa.
> 
>> What is the correct meeting time? At least today 15:00 UTC no one
>> was there...
> 
> Sorry about that, the idea was to alternate every 2 weeks between an
> EU time and a APAC/USA time.  But as you noted I pasted everything in
> backwards causing great confusion :) Thanks to tonyb we're fixed up
> now.
> 
>> I put an item on the agenda for today's meeting but can't make 20:00
>> UTC today. It would be great if you could briefly discuss it and
>> provide feedback on the patch (it's about adding s390x support to
>> DIB). I'm also open for any offline discussions.
> 
> Sorry, with all going on this fell down a bit.  I'll comment there
> 
> Thanks,
> 
> -i
> 


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev