Re: [openstack-dev] [magnum] devstack magnum.conf

2016-08-05 Thread Kai Qiang Wu
Perhaps it would be better to include more details about the error you hit.
If it is not in the Magnum part, check the devstack IRC channel; if it looks
like a Magnum error message, check the Magnum IRC channel or open a bug.
Sounds OK?

On Sat, Aug 6, 2016 at 2:46 AM, Yasemin DEMİRAL (BİLGEM BTE) <
yasemin.demi...@tubitak.gov.tr> wrote:

>
> I followed this page, but when I run the ./stack.sh command it gives an error.
> It didn't create the OpenStack user, and the error is about the RabbitMQ connection.
> It didn't work successfully :/
> --
> *From: *"Spyros Trigazis" 
> *To: *"OpenStack Development Mailing List (not for usage questions)" <
> openstack-dev@lists.openstack.org>
> *Sent: *Friday, 5 August 2016 19:32:11
> *Subject: *Re: [openstack-dev] [magnum] devstack magnum.conf
>
>
> Hi,
>
> better follow the quickstart guide [1].
>
> Cheers,
> Spyros
>
> [1] http://docs.openstack.org/developer/magnum/dev/quickstart.html
>
> On 5 August 2016 at 06:22, Yasemin DEMİRAL (BİLGEM BTE) <
> yasemin.demi...@tubitak.gov.tr> wrote:
>
>>
>> Hi
>>
>> I am trying to run Magnum on devstack. The "Configure magnum" section of the
>> manual has the command sudo cp etc/magnum/magnum.conf.sample /etc/magnum/magnum.conf,
>> but there is no magnum.conf.sample file to copy.
>>  What should I do?
>>
>> Thanks
>>
>> Yasemin
>>


Re: [openstack-dev] [neutron][taas] Taas can not capture the packet, if the two VM on the same host. Is it a Bug?

2016-08-05 Thread Anil Rao
Hi Jimmy,

I am working on a fix for this problem. I'll send out a patch for code-review 
next week.

Best regards,
Anil

-Original Message-
From: SUZUKI, Kazuhiro [mailto:k...@jp.fujitsu.com] 
Sent: Tuesday, July 05, 2016 5:26 AM
To: gmzhan...@gmail.com
Cc: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [neutron][taas] Taas can not capture the packet, 
if the two VM on the same host. Is it a Bug?

Hi Jimmy,

I guess it has not been resolved yet.
You should try asking about it at the IRC meeting, I think.

http://eavesdrop.openstack.org/#Tap_as_a_Service_Meeting

Regards,
Kaz


From: 张广明 
Subject: Re: [openstack-dev] [neutron][taas] Taas can not capture the  packet, 
if the two VM on the same host. Is it a Bug?
Date: Tue, 5 Jul 2016 19:31:14 +0800

> Hi Kaz,
> Thanks for your answer, but in the log I cannot find how to
> resolve this issue. In fact, this issue is not related to br-ex.
> In OVS, the NORMAL action adds or removes the VLAN id when the packet
> is output. So we should add another rule in br-int that uses an in_port
> belonging to the same VLAN as the mirror port as the match condition.
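
A minimal sketch of the rule Jimmy describes, reusing the names from the
ovs_taas.py snippet quoted further down (the bridge wrapper with add_flow,
port_mac, taas_id, patch_int_tap_id); the function name and the caller-supplied
list of ofports are made up for illustration and are not the actual TaaS patch:

    def add_same_host_mirror_flows(int_br, same_network_ofports, port_mac,
                                   taas_id, patch_int_tap_id):
        # For same-host traffic br-int sees no VLAN tag yet, so match on the
        # source ports that share the mirrored port's network instead of dl_vlan.
        for in_port in same_network_ofports:
            int_br.add_flow(table=0,
                            priority=20,
                            in_port=in_port,
                            dl_dst=port_mac,
                            actions="normal,mod_vlan_vid:%s,output:%s" %
                                    (str(taas_id), str(patch_int_tap_id)))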
>
>
>
>  Jimmy
>
> 2016-07-05 17:01 GMT+08:00 SUZUKI, Kazuhiro :
>
>> Hi,
>>
>> I also have seen the same situation.
>> The same issue is discussed at the IRC meeting of TaaS.
>> Please take a look at the log.
>>
>>
>> http://eavesdrop.openstack.org/meetings/taas/2016/taas.2016-04-13-06.30.log.html
>>
>> Regards,
>> Kaz
>>
>>
>> From: 张广明 
>> Subject: [openstack-dev] [neutron][taas] Taas can not capture the 
>> packet, if the two VM on the same host. Is it a Bug?
>> Date: Fri, 1 Jul 2016 16:03:53 +0800
>>
>> > Hi,
>> > I found a limitation when using TaaS. My test case is described as
>> > follows:
>> > VM1 and VM2 are running on the same host and belong to the same VLAN.
>> > The monitor VM is on the same host or on another host. I want to monitor
>> > only the INPUT flow to VM1.
>> > So I configure the tap-flow like this: "neutron tap-flow-create --port
>> > 2a5a4382-a600-4fb1-8955-00d0fc9f648f --tap-service
>> > c510e5db-4ba8-48e3-bfc8-1f0b61f8f41b --direction IN".
>> > When pinging from VM2 to VM1, I cannot see the traffic in the monitor VM.
>> > The reason is that the flow from VM2 to VM1 in br-int has no VLAN
>> > information; the VLAN tag is added to the packet when OVS outputs it.
>> > So the following code in ovs_taas.py did not work in this case:
>> >
>> >  if direction == 'IN' or direction == 'BOTH':
>> >      port_mac = tap_flow['port_mac']
>> >      self.int_br.add_flow(table=0,
>> >                           priority=20,
>> >                           dl_vlan=port_vlan_id,
>> >                           dl_dst=port_mac,
>> >                           actions="normal,mod_vlan_vid:%s,output:%s" %
>> >                                   (str(taas_id), str(patch_int_tap_id)))
>> >
>> >
>> >
>> >
>> >  Is this a bug or by design?
>> >
>> >
>> >
>> > Thanks.
>>


Re: [openstack-dev] [keystone][tripleo] Federation, mod_mellon, and HA Proxy

2016-08-05 Thread Adam Young

On 08/05/2016 06:40 PM, Fox, Kevin M wrote:


*From:* Adam Young [ayo...@redhat.com]
*Sent:* Friday, August 05, 2016 3:06 PM
*To:* openstack-dev@lists.openstack.org
*Subject:* Re: [openstack-dev] [keystone][tripleo] Federation, 
mod_mellon, and HA Proxy


On 08/05/2016 04:54 PM, Adam Young wrote:

On 08/05/2016 04:52 PM, Adam Young wrote:
Today I discovered that we need to modify the HA proxy config to 
tell it to rewrite redirects.  Otherwise, I get a link to


http://openstack.ayoung-dell-t1700.test:5000/v3/mellon/postResponse


Which should be https, not http.


I mimicked the lines in the horizon config so that the keystone 
section looks like this:



listen keystone_public
  bind 10.0.0.4:13000 transparent ssl crt /etc/pki/tls/private/overcloud_endpoint.pem
  bind 172.16.2.5:5000 transparent
  mode http
  redirect scheme https code 301 if { hdr(host) -i 10.0.0.4 } !{ ssl_fc }
  rsprep ^Location:\ http://(.*) Location:\ https://\1
  http-request set-header X-Forwarded-Proto https if { ssl_fc }
  http-request set-header X-Forwarded-Proto http if !{ ssl_fc }
  server overcloud-controller-0 172.16.2.8:5000 check fall 5 inter 2000 rise 2
  server overcloud-controller-1 172.16.2.6:5000 check fall 5 inter 2000 rise 2
  server overcloud-controller-2 172.16.2.9:5000 check fall 5 inter 2000 rise 2


And.. it seemed to work the first time, but not the second.  Now I get

"Secure Connection Failed

The connection to openstack.ayoung-dell-t1700.test:5000 was 
interrupted while the page was loading."


Guessing the first success was actually a transient error.

So it looks like my change was necessary but not sufficient.

This is needed to make mod_auth_mellon work when loaded into Apache, 
and Apache is running behind  HA proxy (Tripleo setup).



There is no SSL setup inside the Keystone server, it is just doing 
straight HTTP.  While I'd like to change this long term, I'd like to 
get things working this way first, but am willing to make whatever 
changes are needed to get SAML and Federation working soonest.





Ah...just noticed the redirect is to :5000, not port :13000 which is 
the HA Proxy port.


OK, this is due to the SAML request:


<samlp:AuthnRequest
    Destination="https://identity.ayoung-dell-t1700.test/auth/realms/openstack/protocol/saml"
    Consent="urn:oasis:names:tc:SAML:2.0:consent:current-implicit"
    ForceAuthn="false"
    IsPassive="false"
    AssertionConsumerServiceURL="https://openstack.ayoung-dell-t1700.test:5000/v3/mellon/postResponse"
    >
    <saml:Issuer>https://openstack.ayoung-dell-t1700.test:5000/v3/mellon/metadata</saml:Issuer>


My guess is HA proxy is not passing on the proper port, and
mod_auth_mellon does not know to rewrite it from 5000 to 13000.




"rewriting is more expensive then getting the web server to return the 
right prefix. Is that an option? Usually its just a bug that needs a 
minor patch to fix.


Thanks,
Kevin"


Well, I think in this case, the expense is not something to worry
about: SAML is way more chatty than normal traffic, and the rewrite
won't be more than a drop in the bucket.


I think the right thing to do is to get HA proxy to pass on the correct
URL, including the port, to the backend, but I don't think that is done by
the rsprep directive.  As John Dennis pointed out to me, the
mod_auth_mellon code uses the Apache ap_construct_url(r->pool,
cfg->endpoint_path, r) call, where r is the current request record.  And that
information has to be passed from HA proxy to Apache.


HA proxy is terminating SSL, and then calling Apache via


server overcloud-controller-0 172.16.2.8:5000 check fall 5 inter 2000 rise 2
and two others.  Everything appears to be properly translated except the 
port.










Re: [openstack-dev] [kolla] Needing volunteers for Geography Coordinators for making use of OSIC cluster

2016-08-05 Thread Jeffrey Zhang
+1 for APAC

On Sat, Aug 6, 2016 at 12:11 AM, Steven Dake (stdake)  wrote:
> Typo in subject tag – please see inside :)
>
> From: Steven Dake 
> Reply-To: "OpenStack Development Mailing List (not for usage questions)"
> 
> Date: Friday, August 5, 2016 at 6:52 AM
> To: "OpenStack Development Mailing List (not for usage questions)"
> 
> Subject: [openstack-dev] [kollla] Needing volunteers for Geography
> Coordinators for making use of OSIC cluster
>
> Hey folks,
>
> The kind folks at OSIC have granted the Kolla team access to 132 nodes of
> super high powered gear for scale testing Kolla.  The objectives are threefold:
>
> 1. Determine if Kolla can scale to 132 nodes for a variety of test cases – if
>    not, fix bugs around those problems
> 2. If scalable to 132 nodes, record benchmark data around our various test
>    scenarios as outlined in the etherpad
> 3. Produce documentation in our repository at the conclusion of OSIC scale
>    testing indicating the results we found
>
> The geography coordinators are responsible for coordinating various testing
> going on within their respective geography to coordinate the activities
> taking place on the loaned OSIC gear so we can "follow-the-sun" and make the
> most use of the gear while we have it.  The geo coordinators are also
> responsible for ensuring all bugs related to problems found during osic
> scale testing are tagged with "osic" in launchpad.
>
> We need a geo coordinator for APAC, EMEA, and US.  First individual to
> respond on list gets the job (per geo – need 3 volunteers)
>
> We have the gear for 4 weeks.  We are making use of the first 3 weeks to do
> scale testing of existing Kolla and the last week to test / validate / debug
> Sean's bifrost automated bare metal deployment work at scale.
>
> The current state is the hardware is undergoing manual bare metal deployment
> at present – closing in on this task being completed hopefully by end of day
> (Friday Aug 5th, 2016).
>
> For more information, please reference the Etherpad here:
> https://etherpad.openstack.org/p/kolla-N-midcycle-osic
>
> TIA to volunteers.
>
> Cheers,
> -steak
>
>



-- 
Regards,
Jeffrey Zhang
Blog: http://xcodest.me



Re: [openstack-dev] [kolla][osic][bifrost] OSIC cluster status

2016-08-05 Thread Steven Dake (stdake)
Hey folks,

Thanks for the video, those are great - albeit a little hard for the
casual observer to follow.

That said, benchmark results of scale testing can be a dangerous game to
play, so let's not play games about it.

I'd like to see a full characterization of the hardware gear and topology
under test so others can duplicate the results if they so desire.  Let's
get this stuff in the repo in the doc directory asap - starting now (or
Monday, since it's the weekend and our team needs some well earned R&R).
I'll log in to the OSIC cluster this weekend and get the documentation
started with what I think would be helpful information for folks
evaluating scale testing benchmarks.  Since I wasn't actively involved
with the gear setup, I will be requiring help (in a starred set of
reviews) to correctly identify the hardware under test, how the networks
are set up, how the filesystems are set up, how the RAID is set up, how
Docker is configured, the exact make and model of the CPU, chipset, brand
and model of NVMe SSDs, and memory bandwidth (I assume it's quad channel
memory, but more details are needed).  I'd like to objectively benchmark
the performance of /var/lib/docker, the root filesystem, and the memory
bandwidth of the system using respected third party benchmarking tools.
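
For the disk side, a rough sketch of the kind of harness I have in mind
(assuming fio is available on the nodes; the job parameters below are only
placeholders, not the numbers we will publish):

    import json
    import subprocess

    def fio_randwrite(target_dir):
        # Short random-write benchmark against a directory such as /var/lib/docker.
        cmd = ['fio', '--name=osic-bench',
               '--directory=%s' % target_dir,
               '--rw=randwrite', '--bs=4k', '--size=1G',
               '--numjobs=4', '--time_based', '--runtime=60',
               '--group_reporting', '--output-format=json']
        return json.loads(subprocess.check_output(cmd))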

The results in this thread are just preliminary to share with the
community, but please wait for the final documentation in our repository
to be completed in about 3 weeks to pass judgement on the scale
performance.

One thing that is clear from the video Michal produced is that Kolla can indeed
scale to 4 controller nodes and 101 compute nodes (IIRC I believe on IRC
he indicated this was the system configuration used during our dead
chicken testing) with no problems other than human error (which Sean
Mooney is rapidly working to eliminate via his team's bare metal
deployment work with BiFrost, thanks in no small part to the contributions
of the BiFrost and Ironic community).  The deployment is very fast (20
minutes) on very fast hardware (the OSIC kit is super fantastically fast).
 We also intend to measure day 2 scenarios (such as upgrade and
reconfigure) with a variety of configuration options (including Ceph as a
storage system).

One thing that wasn't mentioned in these threads is some basic testing of
the OpenStack cluster was done to validate it was functional in the
control and data planes of the system.

These results are real, with video evidence, but as I've stated the
characterization is incomplete.

I'd like to thank OSIC for facilitating this community effort to help
gather some real-world benchmarks of how well at least Kolla performs in
deployment and operational functionality.

If folks have other interests in specific data please weigh in.  Once our
loan of the hardware is up, it may be some time before we have another
shot at using it.

We are far from finished with using the OSIC cluster - we have a tough
slog ahead finishing our scale testing benchmarking results - thanks to
the Kolla community for sticking with it and doing their best to produce
objective results for consumption by a variety of individuals.

I'm sure there is something I missed in the need to characterize the
systems under test, so if I missed something above, please point it out so
we can fix it up front.

Thanks and fantastic work!
-steve


On 8/5/16, 11:53 AM, "Michał Jastrzębski"  wrote:

>And we finished our first deployment!
>We had some hurdles due to misconfiguration; you can see them in the video along
>with a fix. After these fixes and cleanups were performed (we don't want to
>affect the resulting time now, do we?), we deployed a functional OpenStack
>successfully within 20 min :) More videos and tests to come!
>
>https://www.youtube.com/watch?v=RNZMtym5x1c
>
>
>
>On 5 August 2016 at 11:48, Paul Bourke  wrote:
>> Hi Kolla,
>>
>> Thought it would be helpful to send a status mail once we hit checkpoints in
>> the OSIC cluster work, so people can keep up to speed without having to
>> trawl IRC.
>>
>> Reference: https://etherpad.openstack.org/p/kolla-N-midcycle-osic
>>
>> Work began on the cluster Wed Aug 3rd, item 1) from the etherpad is now
>> complete. The 131 bare metal nodes have been provisioned with Ubuntu 14.04,
>> networking is configured, and all Kolla prechecks are passing.
>>
>> The default set of images (--profile default) have been built and pushed to
>> a registry running on the deployment node, the build taking a very speedy
>> 5m37.040s.
>>
>> Cheers,
>> -Paul
>>

Re: [openstack-dev] [Glance][Heat][Horizon] Glance v2 and custom locations

2016-08-05 Thread Brad Pokorny
Sorry guys, I'm a little late responding to this topic.

From a Horizon perspective, it would help if there were an API to discover
whether custom locations are off or on in a Glance v2 installation. We
could then expose it to the user only if it's enabled.

What we've done to prevent this from being a blocker to v2 adoption so far
is add a config value in Horizon that allows an operator to turn it off
explicitly [0], but that requires operators to turn it on/off in 2
services separately.

[0] https://review.openstack.org/#/c/320039/
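
For reference, a minimal sketch of that toggle (assuming the setting name from
[0], IMAGES_ALLOW_LOCATION, placed in Horizon's local_settings.py):

    # Hide the custom image location option from users; only flip this to True
    # when show_multiple_locations is also enabled on the Glance side.
    IMAGES_ALLOW_LOCATION = False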

Thanks,
Brad

On 7/27/16, 5:08 AM, "Flavio Percoco"  wrote:

>On 26/07/16 14:32 +0300, Mikhail Fedosin wrote:
>>Hello!
>>
>>As you may know, glance v1 is going to be deprecated in the Newton cycle.
>>Almost all projects support glance v2 at this moment; Nova uses it by default.
>>The only thing that blocks complete adoption is the possibility to
>>set custom locations on images. In v1 any user can set a location on his
>>image, but in v2 this functionality is not allowed by default, which
>>prevents v2 adoption in services like Horizon or Heat.
>>
>>It all happens because of differences between v1 and v2 locations. In v1 it
>>is pretty easy - a user specifies a URL and sends a request, and glance adds
>>this URL to the image and activates it.
>>In v2 things are more complicated: v2 supports multiple locations per
>>image, which means that when a user wants to download an image file, glance
>>will choose the best one from the list of locations. It leads to some
>>inconsistencies: a user can add or delete locations from his image even if it
>>is active.
>>
>>To enable adding custom locations, an operator has to set the config option
>>'show_multiple_locations' to True. After that any user will be able to add or
>>remove his image locations, update location metadata, and finally see the
>>locations of all images even if they were uploaded to local storage. All
>>these things are undesirable if glance v2 has a public interface, because it
>>exposes the inner cloud architecture. It leads to the fact that Heat and
>>Horizon, and Nova in some cases, and other services that used to set custom
>>locations in glance v1 won't be able to adopt glance v2. Unfortunately,
>>removing this behavior in v2 isn't easy, because it requires serious
>>architecture changes and breaks the API. Moreover, many vendors use these
>>features in their clouds for private glance deployments and they really
>>won't like it if we break anything.
>>
>>So, I want to hear opinions from Glance community and other involved
>>people.
>
>I agree the current situation is not ideal but I don't think there's a
>perfect
>solution that will let other services magically use the location's
>implementation in v2. The API itself is different and it requires a
>different
>call.
>
>With that in mind, I think the right thing to do here is to get rid of that
>option [0] and let operators manage this through policies. This does not mean
>the policies available are perfect.
>
>I'm not an expert on service tokens but I think we said that we could
>probably
>just use service tokens to allow for this feature to be used by other
>services
>instead of keeping it wide open everywhere.
>
>While I don't think the current situation is ideal, I think it's better
>than
>keeping it wide open.
>
>Hope the above helps,
>Flavio
>
>[0] https://review.openstack.org/#/c/313936/
>
>>
>>Best regards,
>>Mikhail Fedosin
>
>
>
>-- 
>@flaper87
>Flavio Percoco




Re: [openstack-dev] [Neutron][IPAM] Pluggable IPAM rollback issue

2016-08-05 Thread Carl Baldwin
On Tue, Aug 2, 2016 at 6:55 PM, Carl Baldwin  wrote:

> On Aug 2, 2016 6:52 PM, "Kevin Benton"  wrote:
> > If we decide to just fix the exception handler inside of ipam itself for
> rollbacks (which would be a quick fix), I would be okay with that but we
> need to be clear that any driver depending on that alone for state
> synchronization is in a very dangerous position of becoming inconsistent
> (i.e. I want something to point people to if we get bug reports saying that
> the delete call wasn't made when the port failed to create).
>
> I think we could fix it in steps. I do think that both issues are worth
> fixing and will pursue them both. I'll file bugs.
>
After some discussion in IRC [2], I think I have a plan. The short-term fix
is to stop calling rollback for the in-tree driver. Since it uses the same
DB session as Neutron, its changes will be rolled back regardless. I
implemented that in the context of the original patch that I linked earlier
in this thread [1]. I also cleaned up the unit test *a lot* now that I had
a little time to see what it really needed to do. I think that can merge
now to fix rollback for the in-tree driver (which would be the only one
broken in this way anyway).
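
To illustrate the short-term fix in (pseudo-)code - this is only a hedged
sketch of the idea, not the actual Neutron code, and the driver flag and helper
names are made up:

    import logging

    LOG = logging.getLogger(__name__)

    def rollback_ipam_allocations(driver, revert_calls):
        # The in-tree reference driver reuses Neutron's DB session, so the
        # enclosing transaction rollback already discards its changes and no
        # explicit revert is needed (or wanted).
        if getattr(driver, 'shares_neutron_session', False):
            return
        for revert in revert_calls:  # e.g. functools.partial(driver.deallocate, ...)
            try:
                revert()
            except Exception:
                # External backends can still end up inconsistent here; that
                # is the remaining problem tracked in the bug filed below.
                LOG.exception("IPAM rollback step failed")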

This still leaves the fact that IPAM rollback is really pretty broken for
other drivers. This isn't new in Newton but now we understand better how
badly it is broken. I've filed a bug about that [3].

Carl

[1] https://review.openstack.org/#/c/348956
[2]
http://eavesdrop.openstack.org/irclogs/%23openstack-neutron/%23openstack-neutron.2016-08-03.log.html#t2016-08-03T18:08:58
[3] https://bugs.launchpad.net/neutron/+bug/1610483


Re: [openstack-dev] [tripleo] Progress on overcloud upgrade / update jobs

2016-08-05 Thread Emilien Macchi
On Fri, Aug 5, 2016 at 4:19 PM, Wesley Hayutin  wrote:
>
>
> On Fri, Aug 5, 2016 at 4:08 PM, Emilien Macchi  wrote:
>>
>> On Fri, Aug 5, 2016 at 1:58 PM, Steven Hardy  wrote:
>> > On Thu, Aug 04, 2016 at 09:46:20PM -0400, Emilien Macchi wrote:
>> >> Hi,
>> >>
>> >> I'm currently working by iteration to get a new upstream job that tests
>> >> upgrades and updates.
>> >> Until now, I'm doing baby steps. I bootstrapped the work to upgrade the
>> >> undercloud, see https://review.openstack.org/#/c/346995/ for details
>> >> (it's almost working; hitting a packaging issue now).
>> >>
>> >> Now I am interested by having 2 overcloud jobs:
>> >>
>> >> - update: Newton -> Newton: basically, we already have it with
>> >> gate-tripleo-ci-centos-7-ovb-upgrades - but my proposal is to use
>> >> multinode work that James started.
>> >> I have a PoC (2 lines of code):
>> >> https://review.openstack.org/#/c/351330/1 that works, it deploys an
>> >> overcloud using packaging, applies the patch in THT and run overcloud
>> >> update. I tested it and it works fine, (I tried to break Keystone).
>> >> Right now the job name is
>> >> gate-tripleo-ci-centos-7-nonha-multinode-upgrades-nv because I took
>> >> example from the existing ovb job that does the exact same thing.
>> >> I propose to rename it to
>> >> gate-tripleo-ci-centos-7-nonha-multinode-updates-nv. What do you
>> >> think?
>> >
>> > This sounds good, and it seems to be a valid replacement for the old
>> > "upgrades" job - it won't catch all kinds of update bugs (in particular
>> > it
>> > obviously won't run any packaged based updates at all), but it will
>> > catch
>> > the most serious template regressions, which will be useful coverage to
>> > maintain I think.
>> >
>> >> - upgrade: Mitaka -> Newton: I haven't started anything yet but the
>> >> idea is to test the upgrade from stable to master, using multinode job
>> >> now (not ovb).
>> >> I can prototype something but I would like to hear from our community
>> >> before.
>> >
>> > I think getting this coverage in place is very important, we're
>> > experiencing a lot of post-release pain due to the lack of this
>> > coverage,
>> > so +1 on any steps we can take to get some coverage here, I'd say go
>> > ahead
>> > and do the prototype if you have time to do it.
>>
>> ok, /me working on it.
>>
>> > You may want to chat with weshay, as I know there are some RDO upgrade
>> > tests which were planned to be run as third-party jobs to get some
>> > upgrade
>> > coverage - I'm not sure if there is any scope for reuse here, or if it
>> > will
>> > be easier to just wire in the upgrade via our current scripts (obviously
>> > some form of reuse would be good if possible).
>>
>> ack
>>
>> >> Please give some feedback if you are interested by this work and I
>> >> will spend some time during the next weeks on $topic.
>> >>
>> >> Note: please also look my thread about undercloud upgrade job, I need
>> >> your feedback too.
>> >
>> > My only question about undercloud upgrades is whether we might combine
>> > the
>> > overcloud upgrade job with this, e.g upgrade undercloud, then updgrade
>> > overcloud.  Probably the blocker here will be the gate timeout I guess,
>> > even if we're using pre-cached images etc.
>>
>> Yes, my final goal was to have a job like:
>> 1) deploy Mitaka undercloud
>> 2) deploy Mitaka overcloud
>> 3) run pingtest
>> 4) upgrade undercloud to Newton
>> 5) upgrade overcloud to newton
>> 6) re-run pingtest
>
>
> FYI.. Mathieu wrote up https://review.openstack.org/#/c/323750/
>
> Emilien feel free to take it over, just sync up w/ Mathieu when he returns
> from PTO on Monday.
> Thanks
>

Ok so I didn't modify his code, though I took over to add more bits.

Also, I prepared everything to start tests in upstream CI:

1) Rename upgrades to updates jobs:
Rename it in openstack-infra/project-config https://review.openstack.org/351914
Rename it in tripleo-ci: https://review.openstack.org/#/c/351937
Once it's done, we'll have 2 experimental jobs for upgrading
overcloud: updates and upgrades, as we agreed in this thread.

2) Undercloud upgrade job was rebased: https://review.openstack.org/#/c/346995/
It contains some workarounds. Now that the Undercloud Upgrade blueprint has
been merged, people involved in upgrades should help me in 346995 (by
reviewing it) to discuss where we put the code that we need to
upgrade.

3) Overcloud update job was renamed and rebased:
https://review.openstack.org/#/c/351330/
It is passing CI; please review it, and once it's merged I'll propose
to move it to the check queue eventually, since we don't run the OVB
updates job for all TripleO patches at this time, but only on the periodic
and experimental pipelines.
Having gate-tripleo-ci-centos-7-nonha-multinode-updates-nv in the
check queue will help us to again have an update job in place for
free. This job will be useful until we get an upgrade job working.

4) Overcloud upgrade job: 

Re: [openstack-dev] [keystone][tripleo] Federation, mod_mellon, and HA Proxy

2016-08-05 Thread Fox, Kevin M
rewriting is more expensive than getting the web server to return the right
prefix. Is that an option? Usually it's just a bug that needs a minor patch to
fix.

Thanks,
Kevin

From: Adam Young [ayo...@redhat.com]
Sent: Friday, August 05, 2016 3:06 PM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [keystone][tripleo] Federation, mod_mellon, and HA 
Proxy

On 08/05/2016 04:54 PM, Adam Young wrote:
On 08/05/2016 04:52 PM, Adam Young wrote:
Today I discovered that we need to modify the HA proxy config to tell it to 
rewrite redirects.  Otherwise, I get a link to

http://openstack.ayoung-dell-t1700.test:5000/v3/mellon/postResponse


Which should be https, not http.


I mimicked the lines in the horizon config so that the keystone section looks 
like this:


listen keystone_public
  bind 10.0.0.4:13000 transparent ssl crt /etc/pki/tls/private/overcloud_endpoint.pem
  bind 172.16.2.5:5000 transparent
  mode http
  redirect scheme https code 301 if { hdr(host) -i 10.0.0.4 } !{ ssl_fc }
  rsprep ^Location:\ http://(.*) Location:\ https://\1
  http-request set-header X-Forwarded-Proto https if { ssl_fc }
  http-request set-header X-Forwarded-Proto http if !{ ssl_fc }
  server overcloud-controller-0 172.16.2.8:5000 check fall 5 inter 2000 rise 2
  server overcloud-controller-1 172.16.2.6:5000 check fall 5 inter 2000 rise 2
  server overcloud-controller-2 172.16.2.9:5000 check fall 5 inter 2000 rise 2

And.. it seemed to work the first time, but not the second.  Now I get

"Secure Connection Failed

The connection to openstack.ayoung-dell-t1700.test:5000 was interrupted while 
the page was loading."

Guessing the first success was actually a transient error.

So it looks like my change was necessary but not sufficient.

This is needed to make mod_auth_mellon work when loaded into Apache, and Apache 
is running behind  HA proxy (Tripleo setup).


There is no SSL setup inside the Keystone server, it is just doing straight 
HTTP.  While I'd like to change this long term, I'd like to get things working 
this way first, but am willing to make whatever changes are needed to get SAML 
and Federation working soonest.




Ah...just noticed the redirect is to :5000, not port :13000 which is the HA 
Proxy port.

OK, this is due to the SAML request:



<samlp:AuthnRequest
    Destination="https://identity.ayoung-dell-t1700.test/auth/realms/openstack/protocol/saml"
    Consent="urn:oasis:names:tc:SAML:2.0:consent:current-implicit"
    ForceAuthn="false"
    IsPassive="false"
    AssertionConsumerServiceURL="https://openstack.ayoung-dell-t1700.test:5000/v3/mellon/postResponse"
    >
    <saml:Issuer>https://openstack.ayoung-dell-t1700.test:5000/v3/mellon/metadata</saml:Issuer>




My guess is HA proxy is not passing on the proper port, and mod_auth_mellon does
not know to rewrite it from 5000 to 13000.






Re: [openstack-dev] [keystone][tripleo] Federation, mod_mellon, and HA Proxy

2016-08-05 Thread Adam Young

On 08/05/2016 04:54 PM, Adam Young wrote:

On 08/05/2016 04:52 PM, Adam Young wrote:
Today I discovered that we need to modify the HA proxy config to tell 
it to rewrite redirects.  Otherwise, I get a link to


http://openstack.ayoung-dell-t1700.test:5000/v3/mellon/postResponse


Which should be https, not http.


I mimicked the lines in the horizon config so that the keystone 
section looks like this:



listen keystone_public
  bind 10.0.0.4:13000 transparent ssl crt /etc/pki/tls/private/overcloud_endpoint.pem
  bind 172.16.2.5:5000 transparent
  mode http
  redirect scheme https code 301 if { hdr(host) -i 10.0.0.4 } !{ ssl_fc }
  rsprep ^Location:\ http://(.*) Location:\ https://\1
  http-request set-header X-Forwarded-Proto https if { ssl_fc }
  http-request set-header X-Forwarded-Proto http if !{ ssl_fc }
  server overcloud-controller-0 172.16.2.8:5000 check fall 5 inter 2000 rise 2
  server overcloud-controller-1 172.16.2.6:5000 check fall 5 inter 2000 rise 2
  server overcloud-controller-2 172.16.2.9:5000 check fall 5 inter 2000 rise 2


And.. it seemed to work the first time, but not the second.  Now I get

"Secure Connection Failed

The connection to openstack.ayoung-dell-t1700.test:5000 was 
interrupted while the page was loading."


Guessing the first success was actually a transient error.

So it looks like my change was necessary but not sufficient.

This is needed to make mod_auth_mellon work when loaded into Apache, 
and Apache is running behind  HA proxy (Tripleo setup).



There is no SSL setup inside the Keystone server, it is just doing 
straight HTTP.  While I'd like to change this long term, I'd like to 
get things working this way first, but am willing to make whatever 
changes are needed to get SAML and Federation working soonest.





Ah...just noticed the redirect is to :5000, not port :13000 which is 
the HA Proxy port.


OK, this is due to the SAML request:


<samlp:AuthnRequest
    Destination="https://identity.ayoung-dell-t1700.test/auth/realms/openstack/protocol/saml"
    Consent="urn:oasis:names:tc:SAML:2.0:consent:current-implicit"
    ForceAuthn="false"
    IsPassive="false"
    AssertionConsumerServiceURL="https://openstack.ayoung-dell-t1700.test:5000/v3/mellon/postResponse"
    >
    <saml:Issuer>https://openstack.ayoung-dell-t1700.test:5000/v3/mellon/metadata</saml:Issuer>




My guess is HA proxy is not passing on the proper port, and
mod_auth_mellon does not know to rewrite it from 5000 to 13000.








[openstack-dev] [infra][neutron] - best way to load 8021q kernel module into cirros

2016-08-05 Thread Kevin Benton
Hi,

In neutron there is a new feature under active development to allow a VM to
attach to many networks via its single interface using VLAN tags.

We would like this to be tested in a scenario test in the gate, but in
order to do that the guest instance must have support for VLAN tags (the
8021q kernel module for Linux VMs). Cirros does not ship with this module
so I have a few questions.

Do any other projects need to load a kernel module for a specific test? If
not, where would the best place be to store the module so we can load it
for that test; or, should we download it directly from the Internet
(worried about the stability of this)?
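
For context, a rough sketch of the guest-side steps the scenario test would
drive once the module is available in the image; ssh_exec is a placeholder for
whatever remote-exec helper the test uses, and the interface/VLAN values are
only examples:

    def setup_vlan_subinterface(ssh_exec, parent='eth0', vlan_id=100,
                                cidr='192.168.100.10/24'):
        # Load the 8021q module, then create and bring up a tagged subinterface.
        ssh_exec('sudo modprobe 8021q')
        ssh_exec('sudo ip link add link %s name %s.%d type vlan id %d'
                 % (parent, parent, vlan_id, vlan_id))
        ssh_exec('sudo ip addr add %s dev %s.%d' % (cidr, parent, vlan_id))
        ssh_exec('sudo ip link set %s.%d up' % (parent, vlan_id))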

Thanks,
Kevin Benton


Re: [openstack-dev] [Glance][TC][Heat][App-Catalog][Murano][Tacker] Glare as a new Project

2016-08-05 Thread Fox, Kevin M
If Glare were Docker registry API compatible, though, I think it would be quite
useful; then each tenant wouldn't have to set one up themselves.

Thanks,
Kevin


From: Hongbin Lu [hongbin...@huawei.com]
Sent: Friday, August 05, 2016 1:29 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Glance][TC][Heat][App-Catalog][Murano][Tacker] 
Glare as a new Project

Replied inline.

From: Mikhail Fedosin [mailto:mfedo...@mirantis.com]
Sent: August-05-16 2:10 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Glance][TC][Heat][App-Catalog][Murano][Tacker] 
Glare as a new Project

Thank you all for your responses!

>From my side I can add that our separation is a deliberate step. We 
>pre-weighed all pros and cons and our final decision was that moving forward 
>as a new project is the lesser of two evils. Undoubtedly, in the short term it 
>will be painful, but I believe that in the long run Glare will win.

Also, I want to say that Glare was designed as an open project and we want to
build a good community with members from different companies. Glare is supposed to
be a backend for Heat (and therefore TripleO), App-Catalog, Tacker and
definitely Nova. In addition, we are considering the possibility of storing
Docker containers, which may be useful for Magnum.

[Hongbin Lu] Magnum doesn’t have any plan to store Docker images in Glare,
because the COE (i.e. Kubernetes) is simply incompatible with any API other than
the Docker registry. Zun might have use cases to store Docker images in Glare if
Glare is part of Glance, but I am reluctant to set a dependency on Glare if
Glare is a totally brand new service.

Then, I think that the comparison between the Image API and the Artifact API is not
correct. Moreover, in my opinion the Image API imposes artificial constraints. Just
imagine that your file system could only store images in JPG format (more
precisely, it could store any data, but it is imperative that all files must
have the extension ".jpg"). Likewise Glance - I can put any data there; it can
be packages and templates, as well as video from my holiday. And this
interface, though not ideal, may not work for all services. The
artificial limitations that have been created make Glance uncomfortable even for
storing images.

On the other hand, Glare provides a unified interface for all possible binary data
types. If we take the filesystem example, in Glare's case it supports all
file extensions, folders, a history of file changes on your disk, data validation
and conversion, import/export of files from different computers and so on. These
features are not present in Glance and I think they never will be, because of
deficiencies in the architecture.

For this reason I think Glare's adoption is important and it will be a huge 
step forward for OpenStack and the whole community.

Thanks again! If you want to support us, please vote for our talk on Barcelona 
summit - https://www.openstack.org/summit/barcelona-2016/vote-for-speakers/ 
Search "Glare" and there will be our presentation.

Best,
Mike

On Fri, Aug 5, 2016 at 5:22 PM, Jonathan D. Proulx 
> wrote:

I don't have a strong opinion on the split vs stay discussion. It
does seem there have been sustained if ineffective attempts to keep this
together, so I lean toward supporting the divorce.

But let's not pretend there are no costs for this.

On Thu, Aug 04, 2016 at 07:02:48PM -0400, Jay Pipes wrote:
:On 08/04/2016 06:40 PM, Clint Byrum wrote:

:>But, if I look at this from a user perspective, if I do want to use
:>anything other than images as cloud artifacts, the story is pretty
:>confusing.
:
:Actually, I beg to differ. A unified OpenStack Artifacts API,
:long-term, will be more user-friendly and less confusing since a
:single API can be used for various kinds of similar artifacts --
:images, Heat templates, Tosca flows, Murano app manifests, maybe
:Solum things, maybe eventually Nova flavor-like things, etc.

The confusion is the current state of two API's, not having a future
integrated API.

Remember how well that served us with nova-network and neutron (né
quantum).

I also agree with Tim's point.  Yes, if a new project is fully
documented and integrated well into packaging and config management,
implementing it is trivial, but history again teaches this is a long
road.

It also means extra dev overhead to create and manage these
supporting structures to hide the complexity from end users. Now, if
the two projects are sufficiently different this may not be a
significant delta, as the new docs and config management code would be
needed in the old project if the new service stayed there.

-Jon


Re: [openstack-dev] [all] devstack changing to neutron by default RSN

2016-08-05 Thread Sean Dague
On 08/05/2016 04:32 PM, Armando M. wrote:
> 
> 
> On 5 August 2016 at 13:05, Dan Smith  > wrote:
> 
> > I haven't been able to reproduce it either, but it's unclear how packets
> > would get into a VM on an island since there is no router interface, and
> > the VM can't respond even if it did get it.
> >
> > I do see outbound pings from the connected VM get to eth0, hit the
> > masquerade rule, and continue on their way.  But those packets get
> > dropped at my ISP since they're in the 10/8 range, so perhaps something
> > in the datacenter where this is running is responding?  Grasping at
> > straws is right until we see the results of Armando's test patch.
> 
> Right, that's what I was thinking when I said "something with the
> provider" in my other reply. A provider could potentially always reflect
> 10/8 back at you to eliminate the possibility of ever escaping like
> that, which would presumably come back, hit the 10.1/20 route that we
> have and continue on in. I'm not entirely sure why that's not being hit
> right now (i.e. before this change), but I'm less familiar with the
> current state of the art than I am this patch.
> 
> 
> Still digging but we have a clean pass in [0]. The multinode setup
> involves br-ex [1,2], I am not quite sure how changing iptables rules
> fiddles with it, if at all.
> 
> [0]
> http://logs.openstack.org/76/351876/1/experimental/gate-tempest-dsvm-neutron-dvr-multinode-full/3a81575/logs/testr_results.html.gz
> [1] 
> https://github.com/openstack-infra/devstack-gate/blob/master/functions.sh#L1108
> [2] 
> https://github.com/openstack-infra/devstack-gate/blob/master/devstack-vm-gate.sh#L130

So... interesting relevant data which supports Dan and Brian's theory.

The test in question only runs on neutron configurations. Every failure
of the test is on OVH nodes. Every time that test has run not on OVH
nodes, it's passed. http://goo.gl/Sppc72 (logstash results). After the
last failure on the regular job that we had, Dan said we could add a
'-s' flag to be safe, and it looks like it *fixed* it. But the reality
is that it just ran on internap instead. And then when I updated the
commit message, that ran on rax.

OVH networking is kind of unique with the way they give us a /32
address, it's very possible other things in their infrastructure are
causing this reflection.

This would also speak to the fact that our gate tests probably never
produced guests which could actually talk to the outside world. We don't
ever test that they do. The masq rule opened this up for the first time
in our gate as well.

-Sean

-- 
Sean Dague
http://dague.net



Re: [openstack-dev] [keystone][tripleo] Federation, mod_mellon, and HA Proxy

2016-08-05 Thread Adam Young

On 08/05/2016 04:52 PM, Adam Young wrote:
Today I discovered that we need to modify the HA proxy config to tell 
it to rewrite redirects.  Otherwise, I get a link to


http://openstack.ayoung-dell-t1700.test:5000/v3/mellon/postResponse


Which should be https, not http.


I mimicked the lines in the horizon config so that the keystone 
section looks like this:



listen keystone_public
  bind 10.0.0.4:13000 transparent ssl crt /etc/pki/tls/private/overcloud_endpoint.pem
  bind 172.16.2.5:5000 transparent
  mode http
  redirect scheme https code 301 if { hdr(host) -i 10.0.0.4 } !{ ssl_fc }
  rsprep ^Location:\ http://(.*) Location:\ https://\1
  http-request set-header X-Forwarded-Proto https if { ssl_fc }
  http-request set-header X-Forwarded-Proto http if !{ ssl_fc }
  server overcloud-controller-0 172.16.2.8:5000 check fall 5 inter 2000 rise 2
  server overcloud-controller-1 172.16.2.6:5000 check fall 5 inter 2000 rise 2
  server overcloud-controller-2 172.16.2.9:5000 check fall 5 inter 2000 rise 2


And.. it seemed to work the first time, but not the second.  Now I get

"Secure Connection Failed

The connection to openstack.ayoung-dell-t1700.test:5000 was 
interrupted while the page was loading."


Guessing the first success was actually a transient error.

So it looks like my change was necessary but not sufficient.

This is needed to make mod_auth_mellon work when loaded into Apache, 
and Apache is running behind  HA proxy (Tripleo setup).



There is no SSL setup inside the Keystone server, it is just doing 
straight HTTP.  While I'd like to change this long term, I'd like to 
get things working this way first, but am willing to make whatever 
changes are needed to get SAML and Federation working soonest.





Ah...just noticed the redirect is to :5000, not port :13000 which is the 
HA Proxy port.





[openstack-dev] [keystone][tripleo] Federation, mod_mellon, and HA Proxy

2016-08-05 Thread Adam Young
Today I discovered that we need to modify the HA proxy config to tell it 
to rewrite redirects.  Otherwise, I get a link to


http://openstack.ayoung-dell-t1700.test:5000/v3/mellon/postResponse


Which should be https, not http.


I mimicked the lines in the horizon config so that the keystone section 
looks like this:



listen keystone_public
  bind 10.0.0.4:13000 transparent ssl crt /etc/pki/tls/private/overcloud_endpoint.pem
  bind 172.16.2.5:5000 transparent
  mode http
  redirect scheme https code 301 if { hdr(host) -i 10.0.0.4 } !{ ssl_fc }
  rsprep ^Location:\ http://(.*) Location:\ https://\1
  http-request set-header X-Forwarded-Proto https if { ssl_fc }
  http-request set-header X-Forwarded-Proto http if !{ ssl_fc }
  server overcloud-controller-0 172.16.2.8:5000 check fall 5 inter 2000 rise 2
  server overcloud-controller-1 172.16.2.6:5000 check fall 5 inter 2000 rise 2
  server overcloud-controller-2 172.16.2.9:5000 check fall 5 inter 2000 rise 2


And.. it seemed to work the first time, but not the second.  Now I get

"Secure Connection Failed

The connection to openstack.ayoung-dell-t1700.test:5000 was interrupted 
while the page was loading."


Guessing the first success was actually a transient error.

So it looks like my change was necessary but not sufficient.

This is needed to make mod_auth_mellon work when loaded into Apache, and 
Apache is running behind  HA proxy (Tripleo setup).



There is no SSL setup inside the Keystone server, it is just doing 
straight HTTP.  While I'd like to change this long term, I'd like to get 
things working this way first, but am willing to make whatever changes 
are needed to get SAML and Federation working soonest.








Re: [openstack-dev] [all] devstack changing to neutron by default RSN

2016-08-05 Thread Armando M.
On 5 August 2016 at 13:05, Dan Smith  wrote:

> > I haven't been able to reproduce it either, but it's unclear how packets
> > would get into a VM on an island since there is no router interface, and
> > the VM can't respond even if it did get it.
> >
> > I do see outbound pings from the connected VM get to eth0, hit the
> > masquerade rule, and continue on their way.  But those packets get
> > dropped at my ISP since they're in the 10/8 range, so perhaps something
> > in the datacenter where this is running is responding?  Grasping at
> > straws is right until we see the results of Armando's test patch.
>
> Right, that's what I was thinking when I said "something with the
> provider" in my other reply. A provider could potentially always reflect
> 10/8 back at you to eliminate the possibility of ever escaping like
> that, which would presumably come back, hit the 10.1/20 route that we
> have and continue on in. I'm not entirely sure why that's not being hit
> right now (i.e. before this change), but I'm less familiar with the
> current state of the art than I am this patch.
>

Still digging but we have a clean pass in [0]. The multinode setup involves
br-ex [1,2], I am not quite sure how changing iptables rules fiddles with
it, if at all.

[0]
http://logs.openstack.org/76/351876/1/experimental/gate-tempest-dsvm-neutron-dvr-multinode-full/3a81575/logs/testr_results.html.gz
[1]
https://github.com/openstack-infra/devstack-gate/blob/master/functions.sh#L1108
[2]
https://github.com/openstack-infra/devstack-gate/blob/master/devstack-vm-gate.sh#L130


>
> --Dan
>


Re: [openstack-dev] [Glance][TC][Heat][App-Catalog][Murano][Tacker] Glare as a new Project

2016-08-05 Thread Hongbin Lu
Replied inline.

From: Mikhail Fedosin [mailto:mfedo...@mirantis.com]
Sent: August-05-16 2:10 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Glance][TC][Heat][App-Catalog][Murano][Tacker] 
Glare as a new Project

Thank you all for your responses!

From my side I can add that our separation is a deliberate step. We pre-weighed 
all pros and cons and our final decision was that moving forward as a new 
project is the lesser of two evils. Undoubtedly, in the short term it will be 
painful, but I believe that in the long run Glare will win.

Also, I want to say that Glare was designed as an open project and we want to
build a good community with members from different companies. Glare is supposed to
be a backend for Heat (and therefore TripleO), App-Catalog, Tacker and
definitely Nova. In addition, we are considering the possibility of storing
Docker containers, which may be useful for Magnum.

[Hongbin Lu] Magnum doesn’t have any plan to store Docker images in Glare,
because the COE (i.e. Kubernetes) is simply incompatible with any API other than
the Docker registry. Zun might have use cases to store Docker images in Glare if
Glare is part of Glance, but I am reluctant to set a dependency on Glare if
Glare is a totally brand new service.

Then, I think that the comparison between the Image API and the Artifact API is not
correct. Moreover, in my opinion the Image API imposes artificial constraints. Just
imagine that your file system could only store images in JPG format (more
precisely, it could store any data, but it is imperative that all files must
have the extension ".jpg"). Likewise Glance - I can put any data there; it can
be packages and templates, as well as video from my holiday. And this
interface, though not ideal, may not work for all services. The
artificial limitations that have been created make Glance uncomfortable even for
storing images.

On the other hand, Glare provides a unified interface for all possible binary data
types. If we take the filesystem example, in Glare's case it supports all
file extensions, folders, a history of file changes on your disk, data validation
and conversion, import/export of files from different computers and so on. These
features are not present in Glance and I think they never will be, because of
deficiencies in the architecture.

For this reason I think Glare's adoption is important and it will be a huge 
step forward for OpenStack and the whole community.

Thanks again! If you want to support us, please vote for our talk on Barcelona 
summit - https://www.openstack.org/summit/barcelona-2016/vote-for-speakers/ 
Search "Glare" and there will be our presentation.

Best,
Mike

On Fri, Aug 5, 2016 at 5:22 PM, Jonathan D. Proulx 
> wrote:

I don't have a strong opinion on the split vs stay discussion. It
does seem there have been sustained if ineffective attempts to keep this
together, so I lean toward supporting the divorce.

But let's not pretend there are no costs for this.

On Thu, Aug 04, 2016 at 07:02:48PM -0400, Jay Pipes wrote:
:On 08/04/2016 06:40 PM, Clint Byrum wrote:

:>But, if I look at this from a user perspective, if I do want to use
:>anything other than images as cloud artifacts, the story is pretty
:>confusing.
:
:Actually, I beg to differ. A unified OpenStack Artifacts API,
:long-term, will be more user-friendly and less confusing since a
:single API can be used for various kinds of similar artifacts --
:images, Heat templates, Tosca flows, Murano app manifests, maybe
:Solum things, maybe eventually Nova flavor-like things, etc.

The confusion is the current state of two API's, not having a future
integrated API.

Remember how well that served us with nova-network and neutron (né
quantum).

I also agree with Tim's point.  Yes, if a new project is fully
documented and integrated well into packaging and config management,
implementing it is trivial, but history again teaches this is a long
road.

It also means extra dev overhead to create and manage these
supporting structures to hide the complexity from end users. Now, if
the two projects are sufficiently different this may not be a
significant delta, as the new docs and config management code would be
needed in the old project if the new service stayed there.

-Jon




Re: [openstack-dev] [tripleo] Progress on overcloud upgrade / update jobs

2016-08-05 Thread Wesley Hayutin
On Fri, Aug 5, 2016 at 4:08 PM, Emilien Macchi  wrote:

> On Fri, Aug 5, 2016 at 1:58 PM, Steven Hardy  wrote:
> > On Thu, Aug 04, 2016 at 09:46:20PM -0400, Emilien Macchi wrote:
> >> Hi,
> >>
> >> I'm currently working by iteration to get a new upstream job that tests
> >> upgrades and updates.
> >> Until now, I'm doing baby steps. I bootstrapped the work to upgrade the
> >> undercloud, see https://review.openstack.org/#/c/346995/ for details
> >> (it's almost working; hitting a packaging issue now).
> >>
> >> Now I am interested by having 2 overcloud jobs:
> >>
> >> - update: Newton -> Newton: basically, we already have it with
> >> gate-tripleo-ci-centos-7-ovb-upgrades - but my proposal is to use
> >> multinode work that James started.
> >> I have a PoC (2 lines of code):
> >> https://review.openstack.org/#/c/351330/1 that works, it deploys an
> >> overcloud using packaging, applies the patch in THT and run overcloud
> >> update. I tested it and it works fine, (I tried to break Keystone).
> >> Right now the job name is
> >> gate-tripleo-ci-centos-7-nonha-multinode-upgrades-nv because I took
> >> example from the existing ovb job that does the exact same thing.
> >> I propose to rename it to
> >> gate-tripleo-ci-centos-7-nonha-multinode-updates-nv. What do you
> >> think?
> >
> > This sounds good, and it seems to be a valid replacement for the old
> > "upgrades" job - it won't catch all kinds of update bugs (in particular
> it
> > obviously won't run any packaged based updates at all), but it will catch
> > the most serious template regressions, which will be useful coverage to
> > maintain I think.
> >
> >> - upgrade: Mitaka -> Newton: I haven't started anything yet but the
> >> idea is to test the upgrade from stable to master, using multinode job
> >> now (not ovb).
> >> I can prototype something but I would like to hear from our community
> before.
> >
> > I think getting this coverage in place is very important, we're
> > experiencing a lot of post-release pain due to the lack of this coverage,
> > so +1 on any steps we can take to get some coverage here, I'd say go
> ahead
> > and do the prototype if you have time to do it.
>
> ok, /me working on it.
>
> > You may want to chat with weshay, as I know there are some RDO upgrade
> > tests which were planned to be run as third-party jobs to get some
> upgrade
> > coverage - I'm not sure if there is any scope for reuse here, or if it
> will
> > be easier to just wire in the upgrade via our current scripts (obviously
> > some form of reuse would be good if possible).
>
> ack
>
> >> Please give some feedback if you are interested by this work and I
> >> will spend some time during the next weeks on $topic.
> >>
> >> Note: please also look my thread about undercloud upgrade job, I need
> >> your feedback too.
> >
> > My only question about undercloud upgrades is whether we might combine
> the
> > overcloud upgrade job with this, e.g upgrade undercloud, then updgrade
> > overcloud.  Probably the blocker here will be the gate timeout I guess,
> > even if we're using pre-cached images etc.
>
> Yes, my final goal was to have a job like:
> 1) deploy Mitaka undercloud
> 2) deploy Mitaka overcloud
> 3) run pingtest
> 4) upgrade undercloud to Newton
> 5) upgrade overcloud to newton
> 6) re-run pingtest
>

FYI.. Mathieu wrote up https://review.openstack.org/#/c/323750/

Emilien feel free to take it over, just sync up w/ Mathieu when he returns
from PTO on Monday.
Thanks


>
>
>
> --
> Emilien Macchi
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] Progress on overcloud upgrade / update jobs

2016-08-05 Thread Ben Nemec
On 08/05/2016 12:58 PM, Steven Hardy wrote:
> On Thu, Aug 04, 2016 at 09:46:20PM -0400, Emilien Macchi wrote:
>> Hi,
>>
>> I'm currently working by iteration to get a new upstream job that test
>> upgrades and update.
>> Until now, I'm doing baby steps. I bootstrapped the work to upgrade
> >> undercloud, see https://review.openstack.org/#/c/346995/ for details
>> (it's almost working hitting a packaging issue now).
>>
>> Now I am interested by having 2 overcloud jobs:
>>
>> - update: Newton -> Newton: basically, we already have it with
>> gate-tripleo-ci-centos-7-ovb-upgrades - but my proposal is to use
>> multinode work that James started.
>> I have a PoC (2 lines of code):
>> https://review.openstack.org/#/c/351330/1 that works, it deploys an
>> overcloud using packaging, applies the patch in THT and run overcloud
>> update. I tested it and it works fine, (I tried to break Keystone).
>> Right now the job name is
>> gate-tripleo-ci-centos-7-nonha-multinode-upgrades-nv because I took
>> example from the existing ovb job that does the exact same thing.
>> I propose to rename it to
>> gate-tripleo-ci-centos-7-nonha-multinode-updates-nv. What do you
>> think?
> 
> This sounds good, and it seems to be a valid replacement for the old
> "upgrades" job - it won't catch all kinds of update bugs (in particular it
> obviously won't run any packaged based updates at all), but it will catch
> the most serious template regressions, which will be useful coverage to
> maintain I think.
> 
>> - upgrade: Mitaka -> Newton: I haven't started anything yet but the
>> idea is to test the upgrade from stable to master, using multinode job
>> now (not ovb).
>> I can prototype something but I would like to hear from our community before.
> 
> I think getting this coverage in place is very important, we're
> experiencing a lot of post-release pain due to the lack of this coverage,
> so +1 on any steps we can take to get some coverage here, I'd say go ahead
> and do the prototype if you have time to do it.
> 
> You may want to chat with weshay, as I know there are some RDO upgrade
> tests which were planned to be run as third-party jobs to get some upgrade
> coverage - I'm not sure if there is any scope for reuse here, or if it will
> be easier to just wire in the upgrade via our current scripts (obviously
> some form of reuse would be good if possible).
> 
>> Please give some feedback if you are interested by this work and I
>> will spend some time during the next weeks on $topic.
>>
>> Note: please also look my thread about undercloud upgrade job, I need
>> your feedback too.
> 
> My only question about undercloud upgrades is whether we might combine the
> overcloud upgrade job with this, e.g upgrade undercloud, then updgrade
> overcloud.  Probably the blocker here will be the gate timeout I guess,
> even if we're using pre-cached images etc.

Yeah, we'd probably have to cut a bunch of runtime off somewhere.  Just
the undercloud upgrade alone starting from a pre-built Mitaka image
(which we might be able to do in CI, but it would be a little tricky and
I'm not positive it would work) was taking 20-25 minutes when I was
running it a while ago.  It's probably even slower now.  Add that to the
already long time the overcloud part takes and I don't see how it would
fit in under the timeout.

This is why I was running a job locally instead of adding it to
tripleo-ci (with the intent that some day I would figure out how to move
it into upstream infra like Emilien did :-).

> 
> Thanks for looking into this!
> 
> Steve
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] Progress on overcloud upgrade / update jobs

2016-08-05 Thread Emilien Macchi
On Fri, Aug 5, 2016 at 1:58 PM, Steven Hardy  wrote:
> On Thu, Aug 04, 2016 at 09:46:20PM -0400, Emilien Macchi wrote:
>> Hi,
>>
>> I'm currently working by iteration to get a new upstream job that test
>> upgrades and update.
>> Until now, I'm doing baby steps. I bootstrapped the work to upgrade
> >> undercloud, see https://review.openstack.org/#/c/346995/ for details
>> (it's almost working hitting a packaging issue now).
>>
>> Now I am interested by having 2 overcloud jobs:
>>
>> - update: Newton -> Newton: basically, we already have it with
>> gate-tripleo-ci-centos-7-ovb-upgrades - but my proposal is to use
>> multinode work that James started.
>> I have a PoC (2 lines of code):
>> https://review.openstack.org/#/c/351330/1 that works, it deploys an
>> overcloud using packaging, applies the patch in THT and run overcloud
>> update. I tested it and it works fine, (I tried to break Keystone).
>> Right now the job name is
>> gate-tripleo-ci-centos-7-nonha-multinode-upgrades-nv because I took
>> example from the existing ovb job that does the exact same thing.
>> I propose to rename it to
>> gate-tripleo-ci-centos-7-nonha-multinode-updates-nv. What do you
>> think?
>
> This sounds good, and it seems to be a valid replacement for the old
> "upgrades" job - it won't catch all kinds of update bugs (in particular it
> obviously won't run any packaged based updates at all), but it will catch
> the most serious template regressions, which will be useful coverage to
> maintain I think.
>
>> - upgrade: Mitaka -> Newton: I haven't started anything yet but the
>> idea is to test the upgrade from stable to master, using multinode job
>> now (not ovb).
>> I can prototype something but I would like to hear from our community before.
>
> I think getting this coverage in place is very important, we're
> experiencing a lot of post-release pain due to the lack of this coverage,
> so +1 on any steps we can take to get some coverage here, I'd say go ahead
> and do the prototype if you have time to do it.

ok, /me working on it.

> You may want to chat with weshay, as I know there are some RDO upgrade
> tests which were planned to be run as third-party jobs to get some upgrade
> coverage - I'm not sure if there is any scope for reuse here, or if it will
> be easier to just wire in the upgrade via our current scripts (obviously
> some form of reuse would be good if possible).

ack

>> Please give some feedback if you are interested by this work and I
>> will spend some time during the next weeks on $topic.
>>
>> Note: please also look my thread about undercloud upgrade job, I need
>> your feedback too.
>
> My only question about undercloud upgrades is whether we might combine the
> overcloud upgrade job with this, e.g upgrade undercloud, then updgrade
> overcloud.  Probably the blocker here will be the gate timeout I guess,
> even if we're using pre-cached images etc.

Yes, my final goal was to have a job like:
1) deploy Mitaka undercloud
2) deploy Mitaka overcloud
3) run pingtest
4) upgrade undercloud to Newton
5) upgrade overcloud to newton
6) re-run pingtest
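
To make the sequencing concrete, here is a rough bash outline of what such a
job script could look like. The deploy_*/upgrade_* helpers are placeholders
for whatever tripleo-ci ends up wiring in, not existing entry points:

    # Hypothetical outline only -- every helper below is a placeholder stub.
    set -e

    deploy_undercloud()  { echo "deploy $1 undercloud"; }      # placeholder
    deploy_overcloud()   { echo "deploy $1 overcloud"; }       # placeholder
    run_pingtest()       { echo "run overcloud pingtest"; }    # placeholder
    upgrade_undercloud() { echo "upgrade undercloud to $1"; }  # placeholder
    upgrade_overcloud()  { echo "upgrade overcloud to $1"; }   # placeholder

    deploy_undercloud mitaka
    deploy_overcloud mitaka
    run_pingtest
    upgrade_undercloud newton
    upgrade_overcloud newton
    run_pingtest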



-- 
Emilien Macchi

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] devstack changing to neutron by default RSN

2016-08-05 Thread Dan Smith
> I haven't been able to reproduce it either, but it's unclear how packets
> would get into a VM on an island since there is no router interface, and
> the VM can't respond even if it did get it.
> 
> I do see outbound pings from the connected VM get to eth0, hit the
> masquerade rule, and continue on their way.  But those packets get
> dropped at my ISP since they're in the 10/8 range, so perhaps something
> in the datacenter where this is running is responding?  Grasping at
> straws is right until we see the results of Armando's test patch.

Right, that's what I was thinking when I said "something with the
provider" in my other reply. A provider could potentially always reflect
10/8 back at you to eliminate the possibility of ever escaping like
that, which would presumably come back, hit the 10.1/20 route that we
have and continue on in. I'm not entirely sure why that's not being hit
right now (i.e. before this change), but I'm less familiar with the
current state of the art than I am this patch.
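
For what it's worth, a quick way to look for that kind of reflection on a
running node (a sketch only -- it assumes eth0 is the public interface and
10.1.0.0/20 is the floating range, both of which vary per setup) would be:

    # Watch for 10/8 traffic coming back in on the public interface:
    sudo tcpdump -n -i eth0 'net 10.0.0.0/8 and icmp'

    # And confirm which route/interface a packet reflected back to the
    # floating range would take once it lands on the host:
    ip route get 10.1.0.3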

--Dan

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [storlets] Towards becoming an official project team

2016-08-05 Thread Doron Chen
Hi All,
I support Eran's proposal to be PTL.
Good luck to us all in turning storlets into an official project.
Doron



From:   e...@itsonlyme.name
To: openstack-dev@lists.openstack.org
Date:   05/08/2016 03:32 PM
Subject:[openstack-dev] [storlets] Towards becoming an official 
project team



Hi All,
Before making the motion we need to pick a PTL.
I would like to propose myself for the coming period, that is
until the October Summit and for the cycle that begins in October.
If there are any objections / other volunteers please speak up :-)

Otherwise,
1. We now have an independent release (stable/mitaka) currently 
aligned with tag 0.2.0.
2. I have added some initial info to the wiki in: 
https://wiki.openstack.org/wiki/Storlets
and would like to use it for design thoughts. I will add there the 
security design as
soon as I am done with the Spark work.
3. I have updated the storlets driver team in Launchpad: 
https://launchpad.net/~storlets-drivers
4. The actual request for becoming an official team is to propose a 
patch to: 
https://github.com/openstack/governance/blob/master/reference/projects.yaml

please find below an initial suggestion for the patch. 
Comments/Suggestions are
most welcome!

Thanks!
Eran

storlets:
  ptl:
    name: Eran Rom
    irc: eranrom
    email: e...@itsonlyme.name
  irc-channel: openstack-storlets
  mission: >
    To enable a user friendly, cost effective scalable and secure way for
    executing storage centric user defined functions near the data within
    Openstack Swift
  url: https://wiki.openstack.org/wiki/Storlets
  tags:
    - team:diverse-affiliation
  deliverables:
    storlets:
      repos:
        - openstack/storlets
      tags:
        - release:independent
        - type: service-extension   # <--- Not sure there is such a type...




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev





__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] devstack changing to neutron by default RSN

2016-08-05 Thread Brian Haley

On 08/05/2016 02:32 PM, Armando M. wrote:


>
> Looking at the health trend for DVR [1], the test hasn't failed in a
> while, so I wonder if this is induced by the proposed switch, even
> though I can't correlate it just yet (still waiting for caffeine to 
kick
> in). Perhaps we can give ourselves today to look into it and pull the
> trigger for 351450 > on Monday?
>
> [1]

http://status.openstack.org/openstack-health/#/job/gate-tempest-dsvm-neutron-dvr



The only functional difference in the new code that happens in the gate
is the iptables rule:

local default_dev=""
default_dev=$(ip route | grep ^default | awk '{print $5}')
sudo iptables -t nat -A POSTROUTING -o $default_dev -s
$FLOATING_RANGE -j MASQUERADE


I skipped this in [0], to give us further data points... clasping at straws
still.

[0] https://review.openstack.org/#/c/351876/


I haven't been able to reproduce it either, but it's unclear how packets would 
get into a VM on an island since there is no router interface, and the VM can't 
respond even if it did get it.


I do see outbound pings from the connected VM get to eth0, hit the masquerade 
rule, and continue on their way.  But those packets get dropped at my ISP since 
they're in the 10/8 range, so perhaps something in the datacenter where this is 
running is responding?  Grasping at straws is right until we see the results of 
Armando's test patch.


-Brian

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla][osic] OSIC cluster status

2016-08-05 Thread Britt Houser (bhouser)
W0t  Great work so far everyone! =)




On 8/5/16, 2:53 PM, "Michał Jastrzębski"  wrote:

>And we finished our first deployment
>We had some hurdles due to misconfiguration, you can see it in along
>with a fix. After these fixes and cleanups performed (we don't want to
>affect resulting time now do we?), we deployed functional openstack
>successfully within 20min:) More videos and tests to come!
>
>https://www.youtube.com/watch?v=RNZMtym5x1c
>
>
>
>On 5 August 2016 at 11:48, Paul Bourke  wrote:
>> Hi Kolla,
>>
>> Thought it will be helpful to send a status mail once we hit checkpoints in
>> the osic cluster work, so people can keep up to speed without having to
>> trawl IRC.
>>
>> Reference: https://etherpad.openstack.org/p/kolla-N-midcycle-osic
>>
>> Work began on the cluster Wed Aug 3rd, item 1) from the etherpad is now
>> complete. The 131 bare metal nodes have been provisioned with Ubuntu 14.04,
>> networking is configured, and all Kolla prechecks are passing.
>>
>> The default set of images (--profile default) have been built and pushed to
>> a registry running on the deployment node, the build taking a very speedy
>> 5m37.040s.
>>
>> Cheers,
>> -Paul
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>__
>OpenStack Development Mailing List (not for usage questions)
>Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla][osic] OSIC cluster status

2016-08-05 Thread Michał Jastrzębski
And we finished our first deployment!
We had some hurdles due to misconfiguration; you can see them in the video
below, along with a fix. After these fixes and cleanups were performed (we
don't want to affect the resulting time, now do we?), we deployed a functional
OpenStack successfully within 20 min :) More videos and tests to come!

https://www.youtube.com/watch?v=RNZMtym5x1c



On 5 August 2016 at 11:48, Paul Bourke  wrote:
> Hi Kolla,
>
> Thought it will be helpful to send a status mail once we hit checkpoints in
> the osic cluster work, so people can keep up to speed without having to
> trawl IRC.
>
> Reference: https://etherpad.openstack.org/p/kolla-N-midcycle-osic
>
> Work began on the cluster Wed Aug 3rd, item 1) from the etherpad is now
> complete. The 131 bare metal nodes have been provisioned with Ubuntu 14.04,
> networking is configured, and all Kolla prechecks are passing.
>
> The default set of images (--profile default) have been built and pushed to
> a registry running on the deployment node, the build taking a very speedy
> 5m37.040s.
>
> Cheers,
> -Paul
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum] devstack magnum.conf

2016-08-05 Thread BİLGEM BTE

I followed this page, but when I run the ./stack command it gives an error. It 
didn't create the openstack user; the error is about the rabbitmq connection. 
It didn't work successfully :/ 
- Orijinal Mesaj -

Kimden: "Spyros Trigazis"  
Kime: "OpenStack Development Mailing List (not for usage questions)" 
 
Gönderilenler: 5 Ağustos Cuma 2016 19:32:11 
Konu: Re: [openstack-dev] [magnum] devstack magnum.conf 

Hi, 

better follow the quickstart guide [1]. 

Cheers, 
Spyros 

[1] http://docs.openstack.org/developer/magnum/dev/quickstart.html 

On 5 August 2016 at 06:22, Yasemin DEMİRAL (BİLGEM BTE) < 
yasemin.demi...@tubitak.gov.tr > wrote: 




Hi 

I try to magnum on devstack, in the manual Configure magnum: section has sudo 
cp etc/magnum/magnum.conf.sample /etc/magnum/magnum.conf command, but there is 
no magnum.conf. 
What should i do ? 

Thanks 

Yasemin 


__ 
OpenStack Development Mailing List (not for usage questions) 
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe 
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev 






__ 
OpenStack Development Mailing List (not for usage questions) 
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe 
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] devstack changing to neutron by default RSN

2016-08-05 Thread Dan Smith
> The only functional difference in the new code that happens in
> the gate
> is the iptables rule:
> 
> local default_dev=""
> default_dev=$(ip route | grep ^default | awk '{print $5}')
> sudo iptables -t nat -A POSTROUTING -o $default_dev -s
> $FLOATING_RANGE -j MASQUERADE
> 
> 
> I skipped this in [0], to give us further data pointsclasping at
> straws still.
> 
> [0] https://review.openstack.org/#/c/351876/

This rule only takes effect for packets leaving our public (real,
physical) interface. If that is causing packets to be routed from one
fixed range to another, then I think they must be leaving the box and
bouncing back from the provider somehow.

I don't understand what all DVR has to do with it. Maybe someone could
describe what is different about that scenario in terms of what extra
components, routes, etc are in play?

Also, are we sure that in that sort of run we have properly chosen our
outbound interface and aren't doing something stupid like

  iptables -t nat -A POSTROUTING -o br_ex ...

?
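
A quick sanity check on an affected node would be to compare what the
detection logic picks against what actually landed in the nat table
(a sketch; device names will differ per environment):

    # What devstack's default-device detection would select:
    ip route | grep ^default | awk '{print $5}'

    # What is actually installed:
    sudo iptables -t nat -S POSTROUTING | grep MASQUERADE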

--Dan

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] devstack changing to neutron by default RSN

2016-08-05 Thread Armando M.
On 5 August 2016 at 11:25, Armando M.  wrote:

>
>
> On 5 August 2016 at 10:21, Sean Dague  wrote:
>
>> On 08/05/2016 11:34 AM, Armando M. wrote:
>> >
>> >
>> > On 5 August 2016 at 05:59, Sean Dague > > > wrote:
>> >
>> > On 08/04/2016 09:15 PM, Armando M. wrote:
>> > > So glad we are finally within the grasp of this!
>> > >
>> > > I posted [1], just to err on the side of caution and get the
>> opportunity
>> > > to see how other gate jobs for Neutron might be affected by this
>> change.
>> > >
>> > > Are there any devstack-gate changes lined up too that we should
>> be aware of?
>> > >
>> > > Cheers,
>> > > Armando
>> > >
>> > > [1] https://review.openstack.org/#/c/351450/
>> > 
>> >
>> > Nothing at this point. devstack-gate bypasses the service defaults
>> in
>> > devstack, so it doesn't impact that at all. Over time we'll want to
>> make
>> > neutron the default choice for all devstack-gate setups, and
>> nova-net to
>> > be the exception. But that actually can all be fully orthoginal to
>> this
>> > change.
>> >
>> >
>> > Ack
>> >
>> >
>> > The experimental results don't quite look in yet, it looks like one
>> test
>> > is failing on dvr (which is the one that tests for cross tenant
>> > connectivity) -
>> > http://logs.openstack.org/50/350750/5/experimental/gate-tem
>> pest-dsvm-neutron-dvr/4958140/
>> > > tempest-dsvm-neutron-dvr/4958140/>
>> >
>> > That test has been pretty twitchy during this patch series, and it's
>> > quite complex, so figuring out exactly why it's impacted here is a
>> bit
>> > beyond me atm. I think we need to decide if that is going to get
>> deeper
>> > inspection, we live with the fails, or we disable the test for now
>> so we
>> > can move forward and get this out to everyone.
>> >
>> >
>> > Looking at the health trend for DVR [1], the test hasn't failed in a
>> > while, so I wonder if this is induced by the proposed switch, even
>> > though I can't correlate it just yet (still waiting for caffeine to kick
>> > in). Perhaps we can give ourselves today to look into it and pull the
>> > trigger for 351450  on
>> Monday?
>> >
>> > [1] http://status.openstack.org/openstack-health/#/job/gate-temp
>> est-dsvm-neutron-dvr
>>
>> The only functional difference in the new code that happens in the gate
>> is the iptables rule:
>>
>> local default_dev=""
>> default_dev=$(ip route | grep ^default | awk '{print $5}')
>> sudo iptables -t nat -A POSTROUTING -o $default_dev -s
>> $FLOATING_RANGE -j MASQUERADE
>>
>
I skipped this in [0], to give us further data points... clasping at straws
still.

[0] https://review.openstack.org/#/c/351876/


>
>> That's the thing to consider. It is the bit that's a little janky, but
>> it was the best idea we had for making things act like we expect
>> otherwise on the single node environment (especially guests being able
>> to egress). It's worth noting, we never seem to test guest egress in the
>> gate (at least not that I could find), so this is something that might
>> just never have been working the way we expected.
>>
>
> Latest run showed that the single node passed the test [1] (though it
> failed on bug [2] for which we have a fix in place [3]). However the
> multi-node failed on the same again [4]. I'll keep on digging...
>
> [1] http://logs.openstack.org/50/350750/5/experimental/gate-
> tempest-dsvm-neutron-dvr/85f8633/logs/testr_results.html.gz
> [2] https://launchpad.net/bugs/1609693
> [3] https://review.openstack.org/#/c/340659/
> [4] http://logs.openstack.org/50/350750/5/experimental/gate-
> tempest-dsvm-neutron-dvr-multinode-full/8d9ac8f/logs/testr_results.html.gz
>
>
>> -Sean
>>
>> --
>> Sean Dague
>> http://dague.net
>>
>> 
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscrib
>> e
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [third-party] [ci] Please PIN your 3rd party ci setup to JJB 1.6.1

2016-08-05 Thread Asselin, Ramy
All,

In case you're still using JJB master branch, it is highly recommended that you 
pin to 1.6.1. There are recent/upcoming changes that could break your CI setup.

You can do this by updating your puppet hiera file 
(/etc/puppet/environments/common.yaml [1]) as shown here [2][3]

Re-run puppet (sudo puppet apply --verbose /etc/puppet/manifests/site.pp [1]) 
and that will ensure your JJB & zuul installations are pinned to stable 
versions.
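
For anyone not driving their CI node through the puppet-openstackci hiera
file, the equivalent manual step is simply to pin the package; a minimal
sketch, assuming a pip-based JJB install:

    # Pin Jenkins Job Builder to the 1.6.1 release instead of master:
    pip install 'jenkins-job-builder==1.6.1'

    # Confirm the installed version:
    jenkins-jobs --version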

Ramy 

[1] http://docs.openstack.org/infra/openstackci/third_party_ci.html
[2] diff: 
https://review.openstack.org/#/c/348035/4/contrib/single_node_ci_data.yaml
[3] full: 
http://git.openstack.org/cgit/openstack-infra/puppet-openstackci/tree/contrib/single_node_ci_data.yaml

-Original Message-
From: Asselin, Ramy 
Sent: Friday, July 08, 2016 7:30 AM
To: OpenStack Development Mailing List (not for usage questions) 

Subject: [openstack-dev] [third-party] [ci] Upcoming changes to Nodepool for 
Zuul v3

All,

If you haven't already, it's recommended to pin nodepool to the 0.3.0 tag and 
not use master.
If you're using the puppet-openstackci solution, you can update your puppet 
hiera file as shown here: https://review.openstack.org/293112
Re-run puppet and restart nodepool.

Ramy 

-Original Message-
From: Monty Taylor [mailto:mord...@inaugust.com] 
Sent: Thursday, July 07, 2016 6:21 PM
To: openstack-in...@lists.openstack.org
Subject: [OpenStack-Infra] Upcoming changes to Nodepool for Zuul v3

Hey all!

tl;dr - nodepool 0.3.0 tagged, you should pin

Longer version:

As you are probably aware, we've been working towards Zuul v3 for a while. 
Hopefully you're as excited about that as we are.

We're about to start working in earnest on changes to nodepool in support of 
that. One of our goals with Zuul v3 is to make nodepool supportable in a CD 
manner for people who are not us. In support of that, we may break a few things 
over the next month or two.

So that it's not a steady stream of things you should pay attention to - we've 
cut a tag:

0.3.0

of what's running in production right now. If your tolerance for potentially 
breaking change is low, we strongly recommend pinning your install to it.

We will still be running CD from master the whole time - but we are also paying 
constant attention when we're landing things.

Once this next iteration is ready, we'll send out another announcement that 
master is in shape for consuming CD-style.

Thanks!
Monty

___
OpenStack-Infra mailing list
openstack-in...@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] devstack changing to neutron by default RSN

2016-08-05 Thread Armando M.
On 5 August 2016 at 10:21, Sean Dague  wrote:

> On 08/05/2016 11:34 AM, Armando M. wrote:
> >
> >
> > On 5 August 2016 at 05:59, Sean Dague  > > wrote:
> >
> > On 08/04/2016 09:15 PM, Armando M. wrote:
> > > So glad we are finally within the grasp of this!
> > >
> > > I posted [1], just to err on the side of caution and get the
> opportunity
> > > to see how other gate jobs for Neutron might be affected by this
> change.
> > >
> > > Are there any devstack-gate changes lined up too that we should be
> aware of?
> > >
> > > Cheers,
> > > Armando
> > >
> > > [1] https://review.openstack.org/#/c/351450/
> > 
> >
> > Nothing at this point. devstack-gate bypasses the service defaults in
> > devstack, so it doesn't impact that at all. Over time we'll want to
> make
> > neutron the default choice for all devstack-gate setups, and
> nova-net to
> > be the exception. But that actually can all be fully orthoginal to
> this
> > change.
> >
> >
> > Ack
> >
> >
> > The experimental results don't quite look in yet, it looks like one
> test
> > is failing on dvr (which is the one that tests for cross tenant
> > connectivity) -
> > http://logs.openstack.org/50/350750/5/experimental/gate-
> tempest-dsvm-neutron-dvr/4958140/
> >  tempest-dsvm-neutron-dvr/4958140/>
> >
> > That test has been pretty twitchy during this patch series, and it's
> > quite complex, so figuring out exactly why it's impacted here is a
> bit
> > beyond me atm. I think we need to decide if that is going to get
> deeper
> > inspection, we live with the fails, or we disable the test for now
> so we
> > can move forward and get this out to everyone.
> >
> >
> > Looking at the health trend for DVR [1], the test hasn't failed in a
> > while, so I wonder if this is induced by the proposed switch, even
> > though I can't correlate it just yet (still waiting for caffeine to kick
> > in). Perhaps we can give ourselves today to look into it and pull the
> > trigger for 351450  on Monday?
> >
> > [1] http://status.openstack.org/openstack-health/#/job/gate-
> tempest-dsvm-neutron-dvr
>
> The only functional difference in the new code that happens in the gate
> is the iptables rule:
>
> local default_dev=""
> default_dev=$(ip route | grep ^default | awk '{print $5}')
> sudo iptables -t nat -A POSTROUTING -o $default_dev -s
> $FLOATING_RANGE -j MASQUERADE
>
> That's the thing to consider. It is the bit that's a little janky, but
> it was the best idea we had for making things act like we expect
> otherwise on the single node environment (especially guests being able
> to egress). It's worth noting, we never seem to test guest egress in the
> gate (at least not that I could find), so this is something that might
> just never have been working the way we expected.
>

Latest run showed that the single node passed the test [1] (though it
failed on bug [2] for which we have a fix in place [3]). However the
multi-node failed on the same again [4]. I'll keep on digging...

[1]
http://logs.openstack.org/50/350750/5/experimental/gate-tempest-dsvm-neutron-dvr/85f8633/logs/testr_results.html.gz
[2] https://launchpad.net/bugs/1609693
[3] https://review.openstack.org/#/c/340659/
[4]
http://logs.openstack.org/50/350750/5/experimental/gate-tempest-dsvm-neutron-dvr-multinode-full/8d9ac8f/logs/testr_results.html.gz


> -Sean
>
> --
> Sean Dague
> http://dague.net
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [api][os-api-ref] openstackdocstheme integration

2016-08-05 Thread Doug Hellmann
Excerpts from Hayes, Graham's message of 2016-08-05 17:04:35 +:
> Hey,
> 
> We look like we are getting close to merging the os-api-ref integration
> with openstackdocstheme.
> 
> Unfortunately, there is no "phased" approach available - the version
> released with compatibility for openstackdocstheme will not work
> with oslo.sphinx.

In what way doesn't it work? Is one of the themes missing something?

Doug

> So, we need a way to use oslosphinx until it is released, and the new
> theme after it is released.
>
> 
> I suggest we put a temporary section of code in the `conf.py` of each
> project using os-api-ref - I have a WIP preview for designate up for
> review [0]
> 
> Can I get some feedback, if people think this is a good way forward?
> 
> The list of repos I have using os-api-ref is (from [1]:
> 
> openstack/networking-sfc
> openstack/ceilometer
> openstack/glance
> openstack/heat
> openstack/ironic
> openstack/keystone
> openstack/manila
> openstack/designate
> openstack/neutron-lib
> openstack/nova
> openstack/sahara
> openstack/searchlight
> openstack/senlin
> openstack/swift
> openstack/zaqar
> 
> Thanks,
> 
> Graham
> 
> 0 - https://review.openstack.org/#/c/351800/
> 1 -
> http://codesearch.openstack.org/?q=os_api_ref=nope=api-ref%2Fsource%2Fconf.py=
> 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Glance][TC][Heat][App-Catalog][Murano][Tacker] Glare as a new Project

2016-08-05 Thread Mikhail Fedosin
Thank you all for your responses!

From my side I can add that our separation is a deliberate step. We
weighed all the pros and cons, and our final decision was that moving
forward as a new project is the lesser of two evils. Undoubtedly, in the
short term it will be painful, but I believe that in the long run Glare
will win.

Also, I want to say that Glare was designed as an open project and we want
to build a good community with members from different companies. Glare is
supposed to be a backend for Heat (and therefore TripleO), App-Catalog,
Tacker and definitely Nova. In addition, we are considering the possibility
of storing Docker containers, which may be useful for Magnum.

Then, I think that the comparison between the Image API and the Artifact API
is not correct. Moreover, in my opinion the Image API imposes artificial
constraints. Just imagine that your file system could only store images in JPG
format (more precisely, it could store any data, but it is imperative that all
files have the extension ".jpg"). It is the same with Glance - I can put any
data there, whether packages, templates, or video from my holiday. And this
interface, though not ideal, may not work for all services. But those
artificial limitations that have been created make Glance awkward even for
storing images.

On the other hand, Glare provides a unified interface for all possible binary
data types. To continue the filesystem example, Glare supports all file
extensions, folders, a history of file changes on your disk, data validation
and conversion, import/export of files from different computers and so on.
These features are not present in Glance and I think they never will be,
because of deficiencies in its architecture.

For this reason I think Glare's adoption is important and it will be a huge
step forward for OpenStack and the whole community.

Thanks again! If you want to support us, please vote for our talk at the
Barcelona summit -
https://www.openstack.org/summit/barcelona-2016/vote-for-speakers/ Search for
"Glare" and you will find our presentation.

Best,
Mike

On Fri, Aug 5, 2016 at 5:22 PM, Jonathan D. Proulx 
wrote:

>
> I don't have a strong opinion on the split vs stay discussion. It
> does seem there's been sustained if ineffective attempts to keep this
> together so I lean toward supporting the divorce.
>
> But let's not pretend there are no costs for this.
>
> On Thu, Aug 04, 2016 at 07:02:48PM -0400, Jay Pipes wrote:
> :On 08/04/2016 06:40 PM, Clint Byrum wrote:
>
> :>But, if I look at this from a user perspective, if I do want to use
> :>anything other than images as cloud artifacts, the story is pretty
> :>confusing.
> :
> :Actually, I beg to differ. A unified OpenStack Artifacts API,
> :long-term, will be more user-friendly and less confusing since a
> :single API can be used for various kinds of similar artifacts --
> :images, Heat templates, Tosca flows, Murano app manifests, maybe
> :Solum things, maybe eventually Nova flavor-like things, etc.
>
> The confusion is the current state of two API's, not having a future
> integrated API.
>
> Remember how well that served us with nova-network and neutron (né
> quantum).
>
> I also agree with Tim's point.  Yes if a new project is fully
> documented and integrated well into packaging and config management
> implementing it is trivial, but history again teaches this is a long
> road.
>
> It also means extra dev overhead to create and mange these
> supporting structures to hide the complexity from end users. Now if
> the two project are sufficiently different this may not be a
> significant delta as the new docs and config management code would be
> need in the old project if the new service stayed stayed there.
>
> -Jon
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] Progress on overcloud upgrade / update jobs

2016-08-05 Thread Steven Hardy
On Thu, Aug 04, 2016 at 09:46:20PM -0400, Emilien Macchi wrote:
> Hi,
> 
> I'm currently working by iteration to get a new upstream job that test
> upgrades and update.
> Until now, I'm doing baby steps. I bootstrapped the work to upgrade
> undercloud, see https://review.openstack.org/#/c/346995/ for details
> (it's almost working hitting a packaging issue now).
> 
> Now I am interested by having 2 overcloud jobs:
> 
> - update: Newton -> Newton: basically, we already have it with
> gate-tripleo-ci-centos-7-ovb-upgrades - but my proposal is to use
> multinode work that James started.
> I have a PoC (2 lines of code):
> https://review.openstack.org/#/c/351330/1 that works, it deploys an
> overcloud using packaging, applies the patch in THT and run overcloud
> update. I tested it and it works fine, (I tried to break Keystone).
> Right now the job name is
> gate-tripleo-ci-centos-7-nonha-multinode-upgrades-nv because I took
> example from the existing ovb job that does the exact same thing.
> I propose to rename it to
> gate-tripleo-ci-centos-7-nonha-multinode-updates-nv. What do you
> think?

This sounds good, and it seems to be a valid replacement for the old
"upgrades" job - it won't catch all kinds of update bugs (in particular it
obviously won't run any package-based updates at all), but it will catch
the most serious template regressions, which will be useful coverage to
maintain I think.

> - upgrade: Mitaka -> Newton: I haven't started anything yet but the
> idea is to test the upgrade from stable to master, using multinode job
> now (not ovb).
> I can prototype something but I would like to hear from our community before.

I think getting this coverage in place is very important, we're
experiencing a lot of post-release pain due to the lack of this coverage,
so +1 on any steps we can take to get some coverage here, I'd say go ahead
and do the prototype if you have time to do it.

You may want to chat with weshay, as I know there are some RDO upgrade
tests which were planned to be run as third-party jobs to get some upgrade
coverage - I'm not sure if there is any scope for reuse here, or if it will
be easier to just wire in the upgrade via our current scripts (obviously
some form of reuse would be good if possible).

> Please give some feedback if you are interested by this work and I
> will spend some time during the next weeks on $topic.
> 
> Note: please also look my thread about undercloud upgrade job, I need
> your feedback too.

My only question about undercloud upgrades is whether we might combine the
overcloud upgrade job with this, e.g. upgrade undercloud, then upgrade
overcloud.  Probably the blocker here will be the gate timeout I guess,
even if we're using pre-cached images etc.

Thanks for looking into this!

Steve

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] devstack changing to neutron by default RSN

2016-08-05 Thread Sean Dague
On 08/05/2016 11:34 AM, Armando M. wrote:
> 
> 
> On 5 August 2016 at 05:59, Sean Dague  > wrote:
> 
> On 08/04/2016 09:15 PM, Armando M. wrote:
> > So glad we are finally within the grasp of this!
> >
> > I posted [1], just to err on the side of caution and get the opportunity
> > to see how other gate jobs for Neutron might be affected by this change.
> >
> > Are there any devstack-gate changes lined up too that we should be 
> aware of?
> >
> > Cheers,
> > Armando
> >
> > [1] https://review.openstack.org/#/c/351450/
> 
> 
> Nothing at this point. devstack-gate bypasses the service defaults in
> devstack, so it doesn't impact that at all. Over time we'll want to make
> neutron the default choice for all devstack-gate setups, and nova-net to
> be the exception. But that actually can all be fully orthoginal to this
> change.
> 
> 
> Ack
>  
> 
> The experimental results don't quite look in yet, it looks like one test
> is failing on dvr (which is the one that tests for cross tenant
> connectivity) -
> 
> http://logs.openstack.org/50/350750/5/experimental/gate-tempest-dsvm-neutron-dvr/4958140/
> 
> 
> 
> That test has been pretty twitchy during this patch series, and it's
> quite complex, so figuring out exactly why it's impacted here is a bit
> beyond me atm. I think we need to decide if that is going to get deeper
> inspection, we live with the fails, or we disable the test for now so we
> can move forward and get this out to everyone.
> 
> 
> Looking at the health trend for DVR [1], the test hasn't failed in a
> while, so I wonder if this is induced by the proposed switch, even
> though I can't correlate it just yet (still waiting for caffeine to kick
> in). Perhaps we can give ourselves today to look into it and pull the
> trigger for 351450  on Monday?
> 
> [1] 
> http://status.openstack.org/openstack-health/#/job/gate-tempest-dsvm-neutron-dvr

The only functional difference in the new code that happens in the gate
is the iptables rule:

local default_dev=""
default_dev=$(ip route | grep ^default | awk '{print $5}')
sudo iptables -t nat -A POSTROUTING -o $default_dev -s
$FLOATING_RANGE -j MASQUERADE

That's the thing to consider. It is the bit that's a little janky, but
it was the best idea we had for making things act like we expect
otherwise on the single node environment (especially guests being able
to egress). It's worth noting, we never seem to test guest egress in the
gate (at least not that I could find), so this is something that might
just never have been working the way we expected.
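
A manual spot-check of guest egress on a single-node devstack would be
something like the following (the floating IP, image user, and interface name
are assumptions -- adjust to the actual guest and host):

    # From the devstack host, log into a guest via its floating IP and try
    # to reach something outside the cloud:
    ssh cirros@172.24.4.3 'ping -c 3 8.8.8.8'

    # Meanwhile, watch the masqueraded traffic leave the host's default device:
    sudo tcpdump -n -i eth0 'icmp and host 8.8.8.8'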

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [api][os-api-ref] openstackdocstheme integration

2016-08-05 Thread Hayes, Graham
Hey,

We look like we are getting close to merging the os-api-ref integration
with openstackdocstheme.

Unfortunately, there is no "phased" approach available - the version
released with compatibility for openstackdocstheme will not work
with oslo.sphinx.

So, we need a way to use oslosphinx until it is released, and the new
theme after it is released.

I suggest we put a temporary section of code in the `conf.py` of each
project using os-api-ref - I have a WIP preview for designate up for
review [0]

Can I get some feedback, if people think this is a good way forward?

The list of repos I have using os-api-ref is (from [1]):

openstack/networking-sfc
openstack/ceilometer
openstack/glance
openstack/heat
openstack/ironic
openstack/keystone
openstack/manila
openstack/designate
openstack/neutron-lib
openstack/nova
openstack/sahara
openstack/searchlight
openstack/senlin
openstack/swift
openstack/zaqar

Thanks,

Graham

0 - https://review.openstack.org/#/c/351800/
1 -
http://codesearch.openstack.org/?q=os_api_ref=nope=api-ref%2Fsource%2Fconf.py=

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [new][openstack] os-client-config 1.19.1 release (newton)

2016-08-05 Thread no-reply
We are delighted to announce the release of:

os-client-config 1.19.1: OpenStack Client Configuation Library

This release is part of the newton release series.

With source available at:

http://git.openstack.org/cgit/openstack/os-client-config

With package available at:

https://pypi.python.org/pypi/os-client-config

Please report issues through launchpad:

http://bugs.launchpad.net/os-client-config

For more details, please see below.

Changes in os-client-config 1.19.0..1.19.1
--

cfa87b1 Add test for precedence rules
ddfed7f Pass the argparse data into to validate_auth
d71a902 Revert "Fix precedence for pass-in options"


Diffstat (except docs and test files)
-

os_client_config/config.py| 62 +++
2 files changed, 101 insertions(+), 14 deletions(-)




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic] Abandoning specs without recent updates

2016-08-05 Thread Mathieu Mitchell

Good idea Jay, it has been bugging me too :)

New link with the 6 months constraint:
https://review.openstack.org/#/q/project:openstack/ironic-specs+status:open+age:6months

As you said, they're not deleted, just abandoned.

Mathieu

On 2016-08-05 12:25 PM, Jay Faulkner wrote:

They're available in review.openstack.org, if you 
filter by ironic-specs, and status:open.

https://review.openstack.org/#/q/project:openstack/ironic-specs+status:open

Since six months ago would be 2/5/2016, pretty much you're looking at the specs 
older than that.

To be clear; abandoning a spec can be undone by a proposer simply by pushing a 
button. These are specs that all have negative feedback, would not cleanly 
merge, and need attention that they haven't gotten in the last six months.

Thanks,
Jay Faulkner
OSIC
On Aug 5, 2016, at 8:00 AM, milanisko k 
> wrote:

Hi Jay,

I think it might be useful to share the list of those specs in here.

Cheers,
milan

čt 4. 8. 2016 v 21:41 odesílatel Jay Faulkner > 
napsal:
Hi all,

I'd like to abandon any ironic-specs reviews that haven't had any updates in 6 
months or more. This works out to about 27 patches. The primary reason for this 
is to get items out of the review queue that are old and stale.

I'll be performing this action next week unless there's objections posted here.

Thanks,
Jay Faulkner
OSIC
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [kolla][osic] OSIC cluster status

2016-08-05 Thread Paul Bourke

Hi Kolla,

Thought it would be helpful to send a status mail once we hit checkpoints 
in the osic cluster work, so people can keep up to speed without having 
to trawl IRC.


Reference: https://etherpad.openstack.org/p/kolla-N-midcycle-osic

Work began on the cluster Wed Aug 3rd, item 1) from the etherpad is now 
complete. The 131 bare metal nodes have been provisioned with Ubuntu 
14.04, networking is configured, and all Kolla prechecks are passing.


The default set of images (--profile default) have been built and pushed 
to a registry running on the deployment node, the build taking a very 
speedy 5m37.040s.
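
For reference, the build-and-push step boils down to something like the
following (a sketch only; the registry address is a placeholder for the one
running on the deployment node):

    # Run a local registry on the deployment node:
    docker run -d -p 5000:5000 --restart=always --name registry registry:2

    # Build the default image set and push it there:
    kolla-build --profile default --registry 192.0.2.10:5000 --push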


Cheers,
-Paul

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Murano] Let's restructure murano-apps repo

2016-08-05 Thread Dmytro Dovbii
Hi, all!

As you probably know, we use the murano-apps repository to store
ready-to-deploy applications.

But at the moment the complexity and the level of production use of the
applications there vary quite a bit. A number of apps are absolutely simple
and may be used only as examples. So we have some confusion there, and I
suggest reorganizing this repository.

It is proposed to divide the applications into three categories:

*Production Grade applications.*
These applications have all the functionality needed to use them in
production. Let's leave them in the murano-apps repo.

*Example apps.*
Simple applications created just to demonstrate how an app can be
written. These apps should be placed in the murano repo in some new directory
like *murano-apps-example* or in *contrib/examples-apps*.

*Complex apps.*
Big, complex applications whose functionality keeps growing and being
updated. We need to create separate repositories for them.

Please see the doc [1] with my proposal for reorganization of existing
apps. Please feel free to add comments with your suggestions.

Also, we can discuss it on next murano community meeting.

[1] https://etherpad.openstack.org/p/restructure-murano-apps

-- 
Best regard,
Dmytro Dovbii,
SE in Murano Team | Mirantis, Kharkiv
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum] devstack magnum.conf

2016-08-05 Thread Spyros Trigazis
Hi,

better follow the quickstart guide [1].

Cheers,
Spyros

[1] http://docs.openstack.org/developer/magnum/dev/quickstart.html

On 5 August 2016 at 06:22, Yasemin DEMİRAL (BİLGEM BTE) <
yasemin.demi...@tubitak.gov.tr> wrote:

>
> Hi
>
> I try to magnum on devstack, in the manual  Configure magnum: section
> has sudo cp etc/magnum/magnum.conf.sample /etc/magnum/magnum.conf command,
> but there is no magnum.conf.
>  What should i do ?
>
> Thanks
>
> Yasemin
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic] Abandoning specs without recent updates

2016-08-05 Thread Jay Faulkner
They're available in review.openstack.org, if you 
filter by ironic-specs, and status:open.

https://review.openstack.org/#/q/project:openstack/ironic-specs+status:open

Since six months ago would be 2/5/2016, pretty much you're looking at the specs 
older than that.

To be clear; abandoning a spec can be undone by a proposer simply by pushing a 
button. These are specs that all have negative feedback, would not cleanly 
merge, and need attention that they haven't gotten in the last six months.

Thanks,
Jay Faulkner
OSIC
On Aug 5, 2016, at 8:00 AM, milanisko k 
> wrote:

Hi Jay,

I think it might be useful to share the list of those specs in here.

Cheers,
milan

čt 4. 8. 2016 v 21:41 odesílatel Jay Faulkner > 
napsal:
Hi all,

I'd like to abandon any ironic-specs reviews that haven't had any updates in 6 
months or more. This works out to about 27 patches. The primary reason for this 
is to get items out of the review queue that are old and stale.

I'll be performing this action next week unless there's objections posted here.

Thanks,
Jay Faulkner
OSIC
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [docker] [magnum] Magnum account on Docker Hub

2016-08-05 Thread Steven Dake (stdake)
Tango,

Sorry to hear that, but glad I could help clarify things :)

Regards
-steve

From: Ton Ngo >
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
>
Date: Friday, August 5, 2016 at 7:38 AM
To: "OpenStack Development Mailing List (not for usage questions)" 
>
Subject: Re: [openstack-dev] [docker] [magnum] Magnum account on Docker Hub


Thanks Steve, Spyros. I checked with Docker Hub support and the "magnum" 
account is not registered to Steve,
so we will just use the new account "openstackmagnum".
Ton,


From: Spyros Trigazis >
To: "OpenStack Development Mailing List (not for usage questions)" 
>
Date: 08/02/2016 09:27 AM
Subject: Re: [openstack-dev] [docker] [magnum] Magnum account on Docker Hub





I just filed a ticket to acquire the username openstackmagnum.

I included Hongbin's contact information explaining that he's the project's PTL.

Thanks Steve,
Spyros


On 2 August 2016 at 13:29, Steven Dake (stdake) 
> wrote:

Ton,

I may or may not have set it up early in Magnum's development.  I just don't 
remember.  My recommendation is to file a support ticket with docker and see if 
they will tell you who it belongs to (as in does it belong to one of the 
founders of Magnum) or if it belongs to some other third party.  Their support 
is very fast.  They may not be able to give you the answer if its not an 
openstacker.

Regards
-steve


From: Ton Ngo >
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
>
Date: Monday, August 1, 2016 at 1:06 PM
To: OpenStack Development Mailing List 
>
Subject: [openstack-dev] [docker] [magnum] Magnum account on Docker Hub
Hi everyone,
At the last IRC meeting, the team discussed the need for hosting some container 
images on Docker Hub
to facilitate development. There is currently a Magnum account on Docker Hub, 
but this is not owned by anyone
on the team, so we would like to find who the owner is and whether this account 
was set up for OpenStack Magnum.
Thanks in advance!
Ton Ngo,

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla] Needing volunteers for Geography Coordinators for making use of OSIC cluster

2016-08-05 Thread Steven Dake (stdake)
Typo in subject tag - please see inside :)

From: Steven Dake >
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
>
Date: Friday, August 5, 2016 at 6:52 AM
To: "OpenStack Development Mailing List (not for usage questions)" 
>
Subject: [openstack-dev] [kollla] Needing volunteers for Geography Coordinators 
for making use of OSIC cluster

Hey folks,

The kind folks at OSIC have granted the Kolla team access to 132 nodes of super 
high powered gear for scale testing Kolla.  The objectives are 3 fold:

  1.  Determine if Kolla can scale to 132 nodes for a variety of test cases - 
if not fix bugs around those problems
  2.  If scalable to 132 nodes, record benchmark data around our various test 
scenarios as outlined in the etherpad
  3.  Produce documentation in our repository at conclusion of OSIC scale 
testing indicating the results we found

The geography coordinators are responsible for coordinating the testing 
activities taking place within their respective geography on the loaned OSIC 
gear so we can "follow-the-sun" and make the most use 
of the gear while we have it.  The geo coordinators are also responsible for 
ensuring all bugs related to problems found during osic scale testing are 
tagged with "osic" in launchpad.

We need a geo coordinator for APAC, EMEA, and US.  First individual to respond 
on list gets the job (per geo - need 3 volunteers)

We have the gear for 4 weeks.  We are making use of the first 3 weeks to do 
scale testing of existing Kolla and the last week to test / validate / debug 
Sean's bifrost automated bare metal deployment work at scale.

The current state is the hardware is undergoing manual bare metal deployment at 
present - closing in on this task being completed hopefully by end of day 
(Friday Aug 5th, 2016).

For more information, please reference the Etherpad here:
https://etherpad.openstack.org/p/kolla-N-midcycle-osic

TIA to volunteers.

Cheers,
-steak
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum][heat] 2 million requests / sec, 100s of nodes

2016-08-05 Thread Hongbin Lu
Add [heat] to the title to get more feedback.

Best regards,
Hongbin

From: Ricardo Rocha [mailto:rocha.po...@gmail.com]
Sent: August-05-16 5:48 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [magnum] 2 million requests / sec, 100s of nodes

Hi.

Quick update is 1000 nodes and 7 million reqs/sec :) - and the number of 
requests should be higher but we had some internal issues. We have a submission 
for barcelona to provide a lot more details.

But a couple questions came during the exercise:

1. Do we really need a volume in the VMs? On large clusters this is a burden, 
and local storage only should be enough?

2. We observe a significant delay (~10min, which is half the total time to 
deploy the cluster) on heat when it seems to be crunching the kube_minions 
nested stacks. Once it's done, it still adds new stacks gradually, so it 
doesn't look like it precomputed all the info in advance

Anyone tried to scale Heat to stacks this size? We end up with a stack with:
* 1000 nested stacks (depth 2)
* 22000 resources
* 47008 events

And already changed most of the timeout/retrial values for rpc to get this 
working.

This delay is already visible in clusters of 512 nodes, but 40% of the time in 
1000 nodes seems like something we could improve. Any hints on Heat 
configuration optimizations for large stacks are very welcome.
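
In case it helps, the knobs we would normally look at first are below. This is
only a sketch: the option names are standard heat.conf / oslo settings, but the
values are illustrative guesses for a stack of this size, not tested
recommendations (service names assume an RDO-style install, and the crudini
tool is assumed to be available).

  # bump the usual large-stack limits and timeouts in heat.conf
  crudini --set /etc/heat/heat.conf DEFAULT num_engine_workers 16
  crudini --set /etc/heat/heat.conf DEFAULT max_resources_per_stack 100000
  crudini --set /etc/heat/heat.conf DEFAULT rpc_response_timeout 600
  crudini --set /etc/heat/heat.conf database max_pool_size 60
  crudini --set /etc/heat/heat.conf database max_overflow 120
  # restart the Heat services so the new values take effect
  systemctl restart openstack-heat-engine openstack-heat-api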

Cheers,
  Ricardo

On Sun, Jun 19, 2016 at 10:59 PM, Brad Topol 
> wrote:

Thanks Ricardo! This is very exciting progress!

--Brad


Brad Topol, Ph.D.
IBM Distinguished Engineer
OpenStack
(919) 543-0646
Internet: bto...@us.ibm.com
Assistant: Kendra Witherspoon (919) 254-0680


From: Ton Ngo/Watson/IBM@IBMUS
To: "OpenStack Development Mailing List \(not for usage questions\)" 
>
Date: 06/17/2016 12:10 PM
Subject: Re: [openstack-dev] [magnum] 2 million requests / sec, 100s of nodes





Thanks Ricardo for sharing the data, this is really encouraging!
Ton,


From: Ricardo Rocha >
To: "OpenStack Development Mailing List (not for usage questions)" 
>
Date: 06/17/2016 08:16 AM
Subject: [openstack-dev] [magnum] 2 million requests / sec, 100s of nodes




Hi.

Just thought the Magnum team would be happy to hear :)

We had access to some hardware the last couple days, and tried some
tests with Magnum and Kubernetes - following an original blog post
from the kubernetes team.

Got a 200 node kubernetes bay (800 cores) reaching 2 million requests / sec.

Check here for some details:
https://openstack-in-production.blogspot.ch/2016/06/scaling-magnum-and-kubernetes-2-million.html

We'll try bigger in a couple weeks, also using the Rally work from
Winnie, Ton and Spyros to see where it breaks. Already identified a
couple issues, will add bugs or push patches for those. If you have
ideas or suggestions for the next tests let us know.

Magnum is looking pretty good!

Cheers,
Ricardo

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

Re: [openstack-dev] [Neutron] stable/liberty busted

2016-08-05 Thread Rick Jones

On 08/05/2016 02:52 AM, Kevin Benton wrote:

Sorry I didn't elaborate a bit more, I was replying from my phone. The
agent has logic that calculates the required flows for ports when it
starts up and then reconciles that with the current flows in OVS so it
doesn't disrupt traffic on every restart. The tests for that run
constant pings in the background while constantly calling the restart
logic to ensure no packets are lost.
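
For anyone curious, a crude at-home approximation of that idea (not the actual
functional test) might look like the sketch below; it assumes a
systemd-managed agent and a reachable VM fixed IP (192.0.2.10 is a
placeholder):

  # ping a VM while repeatedly restarting the agent, then check the loss summary
  ping -i 0.2 192.0.2.10 > /tmp/restart-ping.log 2>&1 &
  PING_PID=$!
  for i in 1 2 3 4 5; do
      systemctl restart neutron-openvswitch-agent
      sleep 30
  done
  kill -INT "$PING_PID"
  wait "$PING_PID"
  tail -n 3 /tmp/restart-ping.log   # the summary line reports any packet loss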



Thanks.

rick


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] devstack changing to neutron by default RSN

2016-08-05 Thread Armando M.
On 5 August 2016 at 07:39, Brian Haley  wrote:

> On 08/05/2016 08:59 AM, Sean Dague wrote:
>
>> On 08/04/2016 09:15 PM, Armando M. wrote:
>>
>>> So glad we are finally within the grasp of this!
>>>
>>> I posted [1], just to err on the side of caution and get the opportunity
>>> to see how other gate jobs for Neutron might be affected by this change.
>>>
>>> Are there any devstack-gate changes lined up too that we should be aware
>>> of?
>>>
>>> Cheers,
>>> Armando
>>>
>>> [1] https://review.openstack.org/#/c/351450/
>>>
>>
>> Nothing at this point. devstack-gate bypasses the service defaults in
>> devstack, so it doesn't impact that at all. Over time we'll want to make
>> neutron the default choice for all devstack-gate setups, and nova-net to
>> be the exception. But that actually can all be fully orthogonal to this
>> change.
>>
>> The experimental results don't quite look in yet, it looks like one test
>> is failing on dvr (which is the one that tests for cross tenant
>> connectivity) -
>> http://logs.openstack.org/50/350750/5/experimental/gate-temp
>> est-dsvm-neutron-dvr/4958140/
>>
>> That test has been pretty twitchy during this patch series, and it's
>> quite complex, so figuring out exactly why it's impacted here is a bit
>> beyond me atm. I think we need to decide if that is going to get deeper
>> inspection, we live with the fails, or we disable the test for now so we
>> can move forward and get this out to everyone.
>>
>
> I took a quick look at this and can't reproduce it yet, here's what the
> test seems to do:
>
> 1a. Create a network/subnet (10.100.0.0/28)
>  b. attach a router interface to the subnet
>  c. boot VM1 on the network
>
> 2a. Create a network/subnet (10.100.0.16/28)
>  b. do NOT attach a router interface to the subnet
>  c. boot VM2 on the network
>
> 3. Ssh to VM1 and ping VM2 - it should fail since there's no route to the
> network, but it succeeds
>
> The only place you should be able to ping that VM2 IP from is the dhcp
> namespace, which does work for me.
>
> So if you are seeing it be flaky, it could be that the VM placement (same
> host vs different host) is impacting it?  In the logs it showed the same
> hostId, but so did my test, so I don't have a good answer.


Test *test_connectivity_between_vms_on_different_networks*  failed on
single node twice in a row. I think that VM placement may have nothing to
do with it.


>
>
> -Brian
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] devstack changing to neutron by default RSN

2016-08-05 Thread Armando M.
On 5 August 2016 at 05:59, Sean Dague  wrote:

> On 08/04/2016 09:15 PM, Armando M. wrote:
> > So glad we are finally within the grasp of this!
> >
> > I posted [1], just to err on the side of caution and get the opportunity
> > to see how other gate jobs for Neutron might be affected by this change.
> >
> > Are there any devstack-gate changes lined up too that we should be aware
> of?
> >
> > Cheers,
> > Armando
> >
> > [1] https://review.openstack.org/#/c/351450/
>
> Nothing at this point. devstack-gate bypasses the service defaults in
> devstack, so it doesn't impact that at all. Over time we'll want to make
> neutron the default choice for all devstack-gate setups, and nova-net to
> be the exception. But that actually can all be fully orthogonal to this
> change.
>
>
Ack


> The experimental results don't quite look in yet, it looks like one test
> is failing on dvr (which is the one that tests for cross tenant
> connectivity) -
> http://logs.openstack.org/50/350750/5/experimental/gate-
> tempest-dsvm-neutron-dvr/4958140/
>
> That test has been pretty twitchy during this patch series, and it's
> quite complex, so figuring out exactly why it's impacted here is a bit
> beyond me atm. I think we need to decide if that is going to get deeper
> inspection, we live with the fails, or we disable the test for now so we
> can move forward and get this out to everyone.
>
>
Looking at the health trend for DVR [1], the test hasn't failed in a while,
so I wonder if this is induced by the proposed switch, even though I can't
correlate it just yet (still waiting for caffeine to kick in). Perhaps we
can give ourselves today to look into it and pull the trigger for 351450
 on Monday?

[1]
http://status.openstack.org/openstack-health/#/job/gate-tempest-dsvm-neutron-dvr


> -Sean
>
> --
> Sean Dague
> http://dague.net
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Project mascots update

2016-08-05 Thread Hayes, Graham
On 05/08/2016 16:04, James Bottomley wrote:
> On Thu, 2016-08-04 at 17:09 +1000, Mike Carden wrote:
>> On Thu, Aug 4, 2016 at 4:26 PM, Antoni Segura Puimedon <
>> toni+openstac...@midokura.com> wrote:
>>
>>>
>>> It would be really awesome if, in true OSt and OSS spirit this work
>>> happened in an OpenStack repository with an open, text based format
>>> like SVG. This way people could contribute and review.
>>>
>>>
>> I am strongly in favour of images being stored in open formats. Right
>> now the most widely supported open formats are PNG and SVG. Let's
>> make sure that as often as possible, we all store non-photographic
>> images in formats like these.
>
> As someone who acts as web monkey for various conference websites,
> could I just say please use SVG.  Scalable formats are so much easier
> for website designers to work with and pngs have a habit of looking
> ugly when you're forced to scale them (which inevitably happens when
> you have a bunch and you're trying to get them to look uniform).
>
> James
>

Yeah - Can I echo that. When working on using them for other uses
(t-shirts / USB Keys / presentations / printed docs) having a vector
format makes it *much* easier.

On that note - I may have missed it, but what licence are these logos
being released under? Is there any restrictions on their usage like
there is on the main OpenStack logo?

- Graham

>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum] 2 million requests / sec, 100s of nodes

2016-08-05 Thread Ton Ngo

Hi Ricardo,
 For your question 1, you can modify the Heat template to not create
the Cinder volume and tweak the call to
configure-docker-storage.sh to use local storage.  It should be fairly
straightforward.  You just need to make
sure the local storage of the flavor is sufficient to host the containers
in the benchmark.
 If you think this is a common scenario, we can open a blueprint for
this option.
Ton,



From:   Ricardo Rocha 
To: "OpenStack Development Mailing List (not for usage questions)"

Date:   08/05/2016 04:51 AM
Subject:Re: [openstack-dev] [magnum] 2 million requests / sec, 100s of
nodes



Hi.

Quick update is 1000 nodes and 7 million reqs/sec :) - and the number of
requests should be higher but we had some internal issues. We have a
submission for barcelona to provide a lot more details.

But a couple questions came during the exercise:

1. Do we really need a volume in the VMs? On large clusters this is a
burden, and local storage only should be enough?

2. We observe a significant delay (~10min, which is half the total time to
deploy the cluster) on heat when it seems to be crunching the kube_minions
nested stacks. Once it's done, it still adds new stacks gradually, so it
doesn't look like it precomputed all the info in advance

Anyone tried to scale Heat to stacks this size? We end up with a stack
with:
* 1000 nested stacks (depth 2)
* 22000 resources
* 47008 events

And already changed most of the timeout/retrial values for rpc to get this
working.

This delay is already visible in clusters of 512 nodes, but 40% of the time
in 1000 nodes seems like something we could improve. Any hints on Heat
configuration optimizations for large stacks are very welcome.

Cheers,
  Ricardo

On Sun, Jun 19, 2016 at 10:59 PM, Brad Topol  wrote:
  Thanks Ricardo! This is very exciting progress!

  --Brad


  Brad Topol, Ph.D.
  IBM Distinguished Engineer
  OpenStack
  (919) 543-0646
  Internet: bto...@us.ibm.com
  Assistant: Kendra Witherspoon (919) 254-0680


  From: Ton Ngo/Watson/IBM@IBMUS
  To: "OpenStack Development Mailing List \(not for usage questions\)" <
  openstack-dev@lists.openstack.org>
  Date: 06/17/2016 12:10 PM
  Subject: Re: [openstack-dev] [magnum] 2 million requests / sec, 100s of
  nodes






  Thanks Ricardo for sharing the data, this is really encouraging!
  Ton,


  From: Ricardo Rocha 
  To: "OpenStack Development Mailing List (not for usage questions)" <
  openstack-dev@lists.openstack.org>
  Date: 06/17/2016 08:16 AM
  Subject: [openstack-dev] [magnum] 2 million requests / sec, 100s of nodes



  Hi.

  Just thought the Magnum team would be happy to hear :)

  We had access to some hardware the last couple days, and tried some
  tests with Magnum and Kubernetes - following an original blog post
  from the kubernetes team.

  Got a 200 node kubernetes bay (800 cores) reaching 2 million requests /
  sec.

  Check here for some details:
  
https://openstack-in-production.blogspot.ch/2016/06/scaling-magnum-and-kubernetes-2-million.html


  We'll try bigger in a couple weeks, also using the Rally work from
  Winnie, Ton and Spyros to see where it breaks. Already identified a
  couple issues, will add bugs or push patches for those. If you have
  ideas or suggestions for the next tests let us know.

  Magnum is looking pretty good!

  Cheers,
  Ricardo

  __

  OpenStack Development Mailing List (not for usage questions)
  Unsubscribe:
  openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



  __

  OpenStack Development Mailing List (not for usage questions)
  Unsubscribe:
  openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




  __

  OpenStack Development Mailing List (not for usage questions)
  Unsubscribe:
  openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 

Re: [openstack-dev] [ironic] Abandoning specs without recent updates

2016-08-05 Thread milanisko k
Hi Jay,

I think it might be useful to share the list of those specs in here.

Cheers,
milan

čt 4. 8. 2016 v 21:41 odesílatel Jay Faulkner  napsal:

> Hi all,
>
> I'd like to abandon any ironic-specs reviews that haven't had any updates
> in 6 months or more. This works out to about 27 patches. The primary reason
> for this is to get items out of the review queue that are old and stale.
>
> I'll be performing this action next week unless there's objections posted
> here.
>
> Thanks,
> Jay Faulkner
> OSIC
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Project mascots update

2016-08-05 Thread James Bottomley
On Thu, 2016-08-04 at 17:09 +1000, Mike Carden wrote:
> On Thu, Aug 4, 2016 at 4:26 PM, Antoni Segura Puimedon <
> toni+openstac...@midokura.com> wrote:
> 
> > 
> > It would be really awesome if, in true OSt and OSS spirit this work
> > happened in an OpenStack repository with an open, text based format 
> > like SVG. This way people could contribute and review.
> > 
> > 
> I am strongly in favour of images being stored in open formats. Right 
> now the most widely supported open formats are PNG and SVG. Let's 
> make sure that as often as possible, we all store non-photographic 
> images in formats like these.

As someone who acts as web monkey for various conference websites,
could I just say please use SVG.  Scalable formats are so much easier
for website designers to work with and pngs have a habit of looking
ugly when you're forced to scale them (which inevitably happens when
you have a bunch and you're trying to get them to look uniform).

James



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Nominate Vladimir Khlyunev for fuel-qa core

2016-08-05 Thread Andrey Sledzinskiy
Vladimir, congrats

On Tue, Aug 2, 2016 at 9:50 PM, Alexey Stepanov 
wrote:

> +1
>
> On Tue, Aug 2, 2016 at 2:56 PM, Artem Panchenko 
> wrote:
>
>> +1
>>
>> On Tue, Aug 2, 2016 at 1:52 PM, Dmitry Tyzhnenko > > wrote:
>>
>>> +1
>>>
>>> On Tue, Aug 2, 2016 at 12:51 PM, Artur Svechnikov <
>>> asvechni...@mirantis.com> wrote:
>>>
 +1

 Best regards,
 Svechnikov Artur

 On Tue, Aug 2, 2016 at 12:40 PM, Andrey Sledzinskiy <
 asledzins...@mirantis.com> wrote:

> Hi,
> I'd like to nominate Vladimir Khlyunev for fuel-qa [0] core.
>
> Vladimir has become a valuable member of fuel-qa project in quite
> short period of time. His solid expertise and constant contribution gives
> me no choice but to nominate him for fuel-qa core.
>
> If anyone has any objections, speak now or forever hold your peace
>
> [0] http://stackalytics.com/?company=mirantis=all;
> module=fuel-qa_id=vkhlyunev
> 
>
> --
> Thanks,
> Andrey Sledzinskiy
> QA Engineer,
> Mirantis, Kharkiv
>
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:
> unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>

 
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:
 unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


>>>
>>>
>>> --
>>> WBR,
>>> Dmitry T.
>>> Fuel QA Engineer
>>> http://www.mirantis.com
>>>
>>> 
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:
>>> unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>>
>>
>> 
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:
>> unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
>
> --
> Alexey Stepanov
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Thanks,
Andrey Sledzinskiy
QA Engineer,
Mirantis, Kharkiv
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] devstack changing to neutron by default RSN

2016-08-05 Thread Brian Haley

On 08/05/2016 08:59 AM, Sean Dague wrote:

On 08/04/2016 09:15 PM, Armando M. wrote:

So glad we are finally within the grasp of this!

I posted [1], just to err on the side of caution and get the opportunity
to see how other gate jobs for Neutron might be affected by this change.

Are there any devstack-gate changes lined up too that we should be aware of?

Cheers,
Armando

[1] https://review.openstack.org/#/c/351450/


Nothing at this point. devstack-gate bypasses the service defaults in
devstack, so it doesn't impact that at all. Over time we'll want to make
neutron the default choice for all devstack-gate setups, and nova-net to
be the exception. But that actually can all be fully orthogonal to this
change.

The experimental results don't quite look in yet, it looks like one test
is failing on dvr (which is the one that tests for cross tenant
connectivity) -
http://logs.openstack.org/50/350750/5/experimental/gate-tempest-dsvm-neutron-dvr/4958140/

That test has been pretty twitchy during this patch series, and it's
quite complex, so figuring out exactly why it's impacted here is a bit
beyond me atm. I think we need to decide if that is going to get deeper
inspection, we live with the fails, or we disable the test for now so we
can move forward and get this out to everyone.


I took a quick look at this and can't reproduce it yet, here's what the test 
seems to do:


1a. Create a network/subnet (10.100.0.0/28)
 b. attach a router interface to the subnet
 c. boot VM1 on the network

2a. Create a network/subnet (10.100.0.16/28)
 b. do NOT attach a router interface to the subnet
 c. boot VM2 on the network

3. Ssh to VM1 and ping VM2 - it should fail since there's no route to the 
network, but it succeeds
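
For reference, a rough CLI equivalent of those steps looks like this (a sketch
only: the image/flavor names are placeholders and the net IDs for the boot
calls come from 'neutron net-list'):

  neutron net-create net1
  neutron subnet-create net1 10.100.0.0/28 --name subnet1
  neutron router-create r1
  neutron router-interface-add r1 subnet1

  neutron net-create net2
  neutron subnet-create net2 10.100.0.16/28 --name subnet2   # deliberately no router interface

  nova boot --image cirros-0.3.4-x86_64-uec --flavor m1.tiny --nic net-id=<net1-id> vm1
  nova boot --image cirros-0.3.4-x86_64-uec --flavor m1.tiny --nic net-id=<net2-id> vm2

  # finally, ssh into vm1 (e.g. via a floating IP) and ping vm2's fixed
  # address; with no router interface on subnet2 the ping should fail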


The only place you should be able to ping that VM2 IP from is the dhcp 
namespace, which does work for me.


So if you are seeing it be flaky, it could be that the VM placement (same host vs 
different host) is impacting it?  In the logs it showed the same hostId, but so 
did my test, so I don't have a good answer.


-Brian

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [docker] [magnum] Magnum account on Docker Hub

2016-08-05 Thread Ton Ngo

Thanks Steve, Spyros.  I checked with Docker Hub support and the "magnum"
account is not registered to Steve,
so we will just use the new account "openstackmagnum".
Ton,



From:   Spyros Trigazis 
To: "OpenStack Development Mailing List (not for usage questions)"

Date:   08/02/2016 09:27 AM
Subject:Re: [openstack-dev] [docker] [magnum] Magnum account on Docker
Hub



I just filed a ticket to acquire the username openstackmagnum.

I included Hongbin's contact information explaining that he's the project's
PTL.

Thanks Steve,
Spyros


On 2 August 2016 at 13:29, Steven Dake (stdake)  wrote:
  Ton,

  I may or may not have set it up early in Magnum's development.  I just
  don't remember.  My recommendation is to file a support ticket with
  docker and see if they will tell you who it belongs to (as in does it
  belong to one of the founders of Magnum) or if it belongs to some other
  third party.  Their support is very fast.  They may not be able to give
  you the answer if its not an openstacker.

  Regards
  -steve


  From: Ton Ngo 
  Reply-To: "OpenStack Development Mailing List (not for usage questions)"
  
  Date: Monday, August 1, 2016 at 1:06 PM
  To: OpenStack Development Mailing List 
  Subject: [openstack-dev] [docker] [magnum] Magnum account on Docker Hub



Hi everyone,
At the last IRC meeting, the team discussed the need for hosting
some container images on Docker Hub
to facilitate development. There is currently a Magnum account on
Docker Hub, but this is not owned by anyone
on the team, so we would like to find who the owner is and whether
this account was set up for OpenStack Magnum.
Thanks in advance!
Ton Ngo,

  __

  OpenStack Development Mailing List (not for usage questions)
  Unsubscribe:
  openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Glance][TC][Heat][App-Catalog][Murano][Tacker] Glare as a new Project

2016-08-05 Thread Jonathan D. Proulx

I don't have a strong opinion on the split vs stay discussion. It
does seem there have been sustained if ineffective attempts to keep this
together so I lean toward supporting the divorce.

But let's not pretend there are no costs for this.

On Thu, Aug 04, 2016 at 07:02:48PM -0400, Jay Pipes wrote:
:On 08/04/2016 06:40 PM, Clint Byrum wrote:

:>But, if I look at this from a user perspective, if I do want to use
:>anything other than images as cloud artifacts, the story is pretty
:>confusing.
:
:Actually, I beg to differ. A unified OpenStack Artifacts API,
:long-term, will be more user-friendly and less confusing since a
:single API can be used for various kinds of similar artifacts --
:images, Heat templates, Tosca flows, Murano app manifests, maybe
:Solum things, maybe eventually Nova flavor-like things, etc.

The confusion is the current state of two APIs, not having a future
integrated API.

Remember how well that served us with nova-network and neutron (né
quantum). 

I also agree with Tim's point.  Yes, if a new project is fully
documented and integrated well into packaging and config management,
implementing it is trivial, but history again teaches this is a long
road.  

It also means extra dev overhead to create and manage these
supporting structures to hide the complexity from end users. Now if
the two projects are sufficiently different this may not be a
significant delta, as the new docs and config management code would be
needed in the old project if the new service stayed there.

-Jon

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Openstack][Group-based-policy]

2016-08-05 Thread Kiruthiga R
Hi Team,

This is Kiruthiga from Infosys limited.

I have a few queries on GBP installation and configuration. I have a two-node 
OpenStack Liberty setup. All the documents I could find on the internet relate 
to GBP on a devstack installation, and I could not find any common repository 
from which I can download the GBP packages.

With no other choice left, I am following the neutron RDO installation guide 
for the reference.
https://www.rdoproject.org/networking/neutron-gbp/

I managed to install openstack-neutron-gbp and python-gbpclient packages on my 
controller node. But I could not find any packages for openstack-dashboard-gbp.

While installing python-gbpclient, it requires python-neutronclient version 
2.3.9 as a dependency. But, as per the Liberty installation, the version of 
python-neutronclient is 3.1.0. When I downgrade my neutronclient, I face issues 
with the nova command line, and if I just ignore the dependency and proceed 
with the installation of python-gbpclient, my neutron-server won't start.
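
A rough way to confirm where the conflict comes from (assuming the RDO package
names above) is to compare what Liberty installed with what the GBP packages
declare:

  rpm -q python-neutronclient                                    # what Liberty installed
  rpm -q --requires python-gbpclient | grep -i neutronclient     # what the GBP client wants
  rpm -q --requires openstack-neutron-gbp | grep -i neutron      # server-side plugin deps
  pip show python-neutronclient                                  # in case a pip copy shadows the RPM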

It would be of great help if you can provide me some solutions. Thanks in 
advance.


Thanks & Regards,
Kiruthiga

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Glance][TC][Heat][App-Catalog][Murano][Tacker] Glare as a new Project

2016-08-05 Thread stuart . mclaren

I think this makes sense.

"Should Artifacts be part of Glance?" is something folks have been
debating for several years now, with seemingly equal numbers on
each side.

Even though it was far from unanimous, several summits ago it was decided
to aim for a a v3 Glance API which would support all artifact types,
including images. That turned out to be too ambitious for a project that
really was starting at times to operate close to maintenance mode. (
eg I haven't contributed at all this cycle). The realisation that
V3 wasn't going to happen in our lifetime, difficulties mapping between
V3 and V2 (and V1!) calls, and (IIRC) DefCore feedback, meant V3 was
abandoned: we'd have two separate APIs. That's actually a decision that
was taken a while ago, and isn't new.

As a previously stressed out operator (is there any other kind?) I
understand that anything that means you have more work to do is painful,
and should be considered a negative. But it shouldn't be the only
consideration. For me, moving a distinct API to its own project makes
the most longterm sense.

Thanks to Mike, Alex, Kairat et al for their (hopefully continuing)
Glance contributions and patience.

-Stuart


Hi all,
after 6 months of Glare v1 API development we have decided to continue our
work in a separate project in the "openstack" namespace with its own core
team (me, Kairat Kushaev, Darja Shkhray and the original creator -
Alexander Tivelkov). We want to thank Glance community for their support
during the incubation period, valuable advice and suggestions - this time
was really productive for us. I believe that this step will allow the Glare
project to concentrate on feature development and move forward faster.
Having the independent service also removes inconsistencies in
understanding what Glance project is: it seems that a single project cannot
own two different APIs with partially overlapping functionality. So with
the separation of Glare into a new project, Glance may continue its work on
the OpenStack Images API, while Glare will become the reference
implementation of the new OpenStack Artifacts API.

Nevertheless, Glare team would like to continue to collaborate with the
Glance team in a new - cross-project - format. We still have lots in
common, both in code and usage scenarios, so we are looking forward for
fruitful work with the rest of the Glance team. Those of you guys who are
interested in Glare and the future of Artifacts API are also welcome to
join the Glare team: we have a lot of really exciting tasks and will always
welcome new members.
Meanwhile, despite the fact that my focus will be on the new project, I
will continue to be part of the Glance team and for sure I'm going to
contribute in Glance, because I am interested in this project and want to
help it be successful.

We'll have the formal patches pushed to project-config earlier next week,
appropriate repositories, wiki and launchpad space will be created soon as
well.  Our regular weekly IRC meeting remains intact: it is 17:30 UTC
Mondays in #openstack-meeting-alt, it will just become a Glare project
meeting instead of a Glare sub-team meeting. Please feel free to join!

Best regards,
Mikhail Fedosin

P.S. For those of you who may be curious on the project name. We'll still
be called "Glare", but since we are on our own now this acronym becomes
recursive: GLARE now stands for "GLare Artifact REpository" :)


--


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] a new Undercloud install driven by Heat

2016-08-05 Thread Dmitry Tantsur



Well, except that you need some non-OpenStack starting point, because unlike
with e.g. ansible, installing any OpenStack service(s) does not end at "dnf
install ".


You might like to watch Dan's demo again.

It goes something like:

yum install python-tripleoclient
openstack undercloud deploy

Done!


That's pretty awesome indeed. But it also moves the user further away 
from the actual code running, so in case of an unobvious failure they'll 
have to inspect more layers. I guess I am getting at the debuggability 
point again... At least it's good to know we're getting all the output 
visible, that's really great.





The problems this would solve are several:

1. Remove divergence between undercloud and overcloud puppet implementation
(instead of having an undercloud specific manifest, we reuse the *exact*
same stuff we use for overcloud deployments)


I'm not against reusing puppet bits, I'm against building the same heavy
abstraction layer with heat around it.


Sure, this is a valid concern to raise, and an alternative to what Dan has
prototyped would be to refactor the undercloud puppet manifest to use the
puppet-tripleo profiles, somebody still has to do this work and it still
doesn't help at all with either container integration or multi-node
underclouds.


So, this multi-node thing. Will it still be as easy as running one 
command? I guess we assume that the OS is already provisioned on all 
nodes, right?





2. Better modularity, far easier to enable/disable services


Why? Do you expect enabling/disabling Nova, for example? In this regard
undercloud is fundamentally different from overcloud: for the former we have
a list of required services and a pretty light list of optional services.


Actually, yes!  I'd love to be able to disable Nova and instead deploy
nodes directly via a mistral workflow that drives Ironic.  That's why I
started this:

https://review.openstack.org/#/c/313048/


++ to this

However, it brings a big QE concern. If we say we support deployment 
with and without nova, it doubles the number of things to test wrt 
provisioning. I still suspect we'll end up with one "blessed" way 
and other "probably working" ways, which might not be so good.




There are reasons such as static IPs for everything where you might want to
be able to make Neutron optional, and there are already a bunch of optional
services (such as all the telemetry services).

Ok, every time I want to disable or add a new service I can hack on the
manifest, but it's just extra work compared to reusing the exact same
method we already support for overcloud deployments.


3. Get container integration "for free" when we land it in the overcloud

4. Any introspection and debugging workflow becomes identical between the
undercloud and overcloud


I would love a defined debugging workflow for the overcloud first..


Sure, and it's something we have to improve regardless.


5. We remove dependencies on a bunch of legacy scripts which run outside of
puppet


If you mean instack-undercloud element, we're getting rid of them anyway,
no?


Quite a few still remain, but yeah there are less than there was, which is
good.


I think I've seen the patches up for removing all of them (except for 
puppet-stack-config obviously).





6. Whenever someone lands support for a new service in the overcloud, we
automatically get undercloud support for it, completely for free.


Again, why? A service won't integrate itself into the deployment. And to be
honest, the amount of options TripleO has already causes real world problems.
I would rather see a well defined set of functionality for it..


It means it's easy to enable any service which is one less barrier to
integration, I'm not really sure how that could be construed as a bad
thing.


7. Potential for much easier implementation of a multi-node undercloud


Ideally, I would love to see:

 for node in nodes:
   ssh $node puppet apply blah-blah


Haha, this is a delightful over-simplification, but it completely ignores
all of the logic to create the per-node manifests and hieradata.  This is
what the Heat templates already do for us, over multiple nodes by default.


A bit unrelated, but while we're here... I wonder if we could stop after 
instances are deployed, with Heat returning a set of hieradata files for the 
nodes... Haven't thought it through, just a quick idea.





Maybe we're not there, but it only means we have to improve our puppet
modules.


There is a layer of orchestration outside of the per-service modules which
is needed here.  We do that simply in the current undercloud implementation
by having a hard-coded manifest, which works OK.  We do that in the
overcloud by orchestrating puppet via Heat over multiple nodes, which also
works OK.


Undercloud installation is already sometimes fragile, but it's probably the
least fragile part right now (at least from my experience). And at the very
least it's pretty obviously debuggable in most cases. THT is hard to

[openstack-dev] [requirements][elections] requirements PTL election now open

2016-08-05 Thread Anita Kuno
The OpenStack Requirements PTL election is now open. The poll will close 
after 13:00 UTC on August 11, 2016.


The electorate has been sent ballots to the gerrit Preferred Email.

If you have patches returned when you issue this gerrit query, you are 
part of the Requirements electorate: 
https://review.openstack.org/#/q/project:openstack/requirements+is:owner+is:merged+after:2015-07-31+before:2016-08-01
Look in your gerrit preferred email inbox for your ballot. What to do if 
you don't see the email and have a commit returned when running the 
above query:
* check the trash or spam folder of your gerrit Preferred Email address, 
in case it went into trash or spam

* wait a bit and check again, in case your email server is a bit slow
* find the sha of at least one commit from the openstack/requirements 
repository (see the example below) and email me and Doug[1]. If we can 
confirm that you are entitled to vote, we will add you to the voters list 
and you will be emailed a ballot.
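
For example, one way to dig such a sha out of a local clone (the date range
mirrors the query above; substitute the email address your gerrit account
uses):

  git clone https://git.openstack.org/openstack/requirements
  cd requirements
  git log --oneline --author="you@example.com" \
      --since=2015-07-31 --until=2016-08-01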


Candidate statements/platforms can be found in the mailing list thread 
kicking off the election[0].


Thank you for participating in the election,

Anita and Doug

[0]http://lists.openstack.org/pipermail/openstack-dev/2016-July/thread.html#100173
[1] d...@doughellmann.com
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [kollla] Needing volunteers for Geography Coordinators for making use of OSIC cluster

2016-08-05 Thread Steven Dake (stdake)
Hey folks,

The kind folks at OSIC have granted the Kolla team access to 132 nodes of super 
high powered gear for scale testing Kolla.  The objectives are 3 fold:

  1.  Determine if Kolla can scale to 132 nodes for a variety of test cases - 
if not fix bugs around those problems
  2.  If scalable to 132 nodes, record benchmark data around our various test 
scenarios as outlined in the etherpad
  3.  Produce documentation in our repository at conclusion of OSIC scale 
testing indicating the results we found

The geography coordinators are responsible for coordinating the testing 
activities taking place within their respective geography on the loaned OSIC 
gear so we can "follow-the-sun" and make the most use 
of the gear while we have it.  The geo coordinators are also responsible for 
ensuring all bugs related to problems found during osic scale testing are 
tagged with "osic" in launchpad.

We need a geo coordinator for APAC, EMEA, and US.  First individual to respond 
on list gets the job (per geo - need 3 volunteers)

We have the gear for 4 weeks.  We are making use of the first 3 weeks to do 
scale testing of existing Kolla and the last week to test / validate / debug 
Sean's bifrost automated bare metal deployment work at scale.

The current state is the hardware is undergoing manual bare metal deployment at 
present - closing in on this task being completed hopefully by end of day 
(Friday Aug 5th, 2016).

For more information, please reference the Etherpad here:
https://etherpad.openstack.org/p/kolla-N-midcycle-osic

TIA to volunteers.

Cheers,
-steak
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] a new Undercloud install driven by Heat

2016-08-05 Thread Dan Prince
On Fri, 2016-08-05 at 13:56 +0200, Dmitry Tantsur wrote:
> On 08/05/2016 01:21 PM, Steven Hardy wrote:
> > 
> > On Fri, Aug 05, 2016 at 12:27:40PM +0200, Dmitry Tantsur wrote:
> > > 
> > > On 08/04/2016 11:48 PM, Dan Prince wrote:
> > > > 
> > > > Last week I started some prototype work on what could be a new
> > > > way to
> > > > install the Undercloud. The driving force behind this was some
> > > > of the
> > > > recent "composable services" work we've done in TripleO so
> > > > initially I
> > > > called it composable undercloud. There is an etherpad here with
> > > > links
> > > > to some of the patches already posted upstream (many of which
> > > > stand as
> > > > general improvements on their own outside the scope of what
> > > > I'm
> > > > talking about here).
> > > > 
> > > > https://etherpad.openstack.org/p/tripleo-composable-undercloud
> > > > 
> > > > The idea in short is that we could spin up a small single
> > > > process all-
> > > > in-one heat-all (engine and API) and thereby avoid things like
> > > > Rabbit,
> > > > and MySQL. Then we can use Heat templates to drive the
> > > > Undercloud
> > > > deployment just like we do in the Overcloud.
> > > I don't want to sound rude, but please no. The fact that you have
> > > a hammer
> > > does not mean everything around is nails :( What problem are you
> > > trying to
> > > solve by doing it?
> > I think Dan explains it pretty well in his video, and your comment
> > indicates a fundamental misunderstanding around the entire TripleO
> > vision,
> > which is about symmetry and reuse between deployment tooling and
> > the
> > deployed cloud.
> Well, except that you need some non-OpenStack starting point, because 
> unlike with e.g. ansible, installing any OpenStack service(s) does
> not 
> end at "dnf install ".
> 
> > 
> > 
> > The problems this would solve are several:
> > 
> > 1. Remove divergence between undercloud and overcloud puppet
> > implementation
> > (instead of having an undercloud specific manifest, we reuse the
> > *exact*
> > same stuff we use for overcloud deployments)
> I'm not against reusing puppet bits, I'm against building the same
> heavy 
> abstraction layer with heat around it.

What do you mean by heavy, exactly? The entire point here was to
demonstrate that this can work and *is* actually quite lightweight I
think.

We are already building an abstraction layer. So why not just use it in
2 places instead of one.

> 
> > 
> > 
> > 2. Better modularity, far easier to enable/disable services
> Why? Do you expect enabling/disabling Nova, for example? In this
> regard 
> undercloud is fundamentally different from overcloud: for the former
> we 
> have a list of required services and a pretty light list of optional 
> services.

I think this is a very narrow view of the Undercloud and ignores the
fact that continually adding booleans to enable or disable features is
not scalable. Using the same composability and deployment framework we
have developed for the Overcloud might make better sense to me.

There is also real potential here to re-use this as a means to install
other package based types of setups. An "anything is an undercloud"
sort of approach could be the next logical step... all of this for free
because we are building abstractions to install these things in the
Overcloud as well.

> 
> > 
> > 
> > 3. Get container integration "for free" when we land it in the
> > overcloud
> > 
> > 4. Any introspection and debugging workflow becomes identical
> > between the
> > undercloud and overcloud
> I would love a defined debugging workflow for the overcloud first..

The nice thing about the demo I showed for debugging is that all the output
comes back to the console. Heat, os-collect-config, puppet, etc. all
there at your fingertips. Set 'debug=True' and you have everything you
need I think.

After building it I've quite enjoyed how fast it is to test and debug
creating a prototype undercloud.yaml.

> 
> > 
> > 
> > 5. We remove dependencies on a bunch of legacy scripts which run
> > outside of
> > puppet
> If you mean instack-undercloud element, we're getting rid of them 
> anyway, no?

We mean all of the elements. Besides a few bootstrapping things we have
gradually moved towards using Heat hooks to run things as opposed to
the traditional os-apply-config/os-refresh-config hooks. This provides
better signalling back to heat and arguably makes debugging much easier
when something fails too.

> 
> > 
> > 
> > 6. Whenever someone lands support for a new service in the
> > overcloud, we
> > automatically get undercloud support for it, completely for free.
> Again, why? A service won't integrate itself into the deployment. And
> to 
> be honest, the amount of options TripleO has already causes real
> world 
> problems. I would rather see a well defined set of functionality for
> it..
> 
> > 
> > 
> > 7. Potential for much easier implementation of a multi-node
> > undercloud
> Ideally, I would love to see:
> 
>   for node in nodes:
> ssh 

Re: [openstack-dev] [TripleO] a new Undercloud install driven by Heat

2016-08-05 Thread Steven Hardy
On Fri, Aug 05, 2016 at 01:56:32PM +0200, Dmitry Tantsur wrote:
> On 08/05/2016 01:21 PM, Steven Hardy wrote:
> > On Fri, Aug 05, 2016 at 12:27:40PM +0200, Dmitry Tantsur wrote:
> > > On 08/04/2016 11:48 PM, Dan Prince wrote:
> > > > Last week I started some prototype work on what could be a new way to
> > > > install the Undercloud. The driving force behind this was some of the
> > > > recent "composable services" work we've done in TripleO so initially I
> > > > called it composable undercloud. There is an etherpad here with links
> > > > to some of the patches already posted upstream (many of which stand as
> > > > general improvements on their own outside the scope of what I'm
> > > > talking about here).
> > > > 
> > > > https://etherpad.openstack.org/p/tripleo-composable-undercloud
> > > > 
> > > > The idea in short is that we could spin up a small single process all-
> > > > in-one heat-all (engine and API) and thereby avoid things like Rabbit,
> > > > and MySQL. Then we can use Heat templates to drive the Undercloud
> > > > deployment just like we do in the Overcloud.
> > > 
> > > I don't want to sound rude, but please no. The fact that you have a hammer
> > > does not mean everything around is nails :( What problem are you trying to
> > > solve by doing it?
> > 
> > I think Dan explains it pretty well in his video, and your comment
> > indicates a fundamental misunderstanding around the entire TripleO vision,
> > which is about symmetry and reuse between deployment tooling and the
> > deployed cloud.
> 
> Well, except that you need some non-OpenStack starting point, because unlike
> with e.g. ansible, installing any OpenStack service(s) does not end at "dnf
> install ".

You might like to watch Dan's demo again.

It goes something like:

yum install python-tripleoclient
openstack undercloud deploy

Done!

> > The problems this would solve are several:
> > 
> > 1. Remove divergence between undercloud and overcloud puppet implementation
> > (instead of having an undercloud specific manifest, we reuse the *exact*
> > same stuff we use for overcloud deployments)
> 
> I'm not against reusing puppet bits, I'm against building the same heavy
> abstraction layer with heat around it.

Sure, this is a valid concern to raise, and an alternative to what Dan has
prototyped would be to refactor the undercloud puppet manifest to use the
puppet-tripleo profiles, somebody still has to do this work and it still
doesn't help at all with either container integration or multi-node
underclouds.

> > 2. Better modularity, far easier to enable/disable services
> 
> Why? Do you expect enabling/disabling Nova, for example? In this regard
> undercloud is fundamentally different from overcloud: for the former we have
> a list of required services and a pretty light list of optional services.

Actually, yes!  I'd love to be able to disable Nova and instead deploy
nodes directly via a mistral workflow that drives Ironic.  That's why I
started this:

https://review.openstack.org/#/c/313048/

There are reasons such as static IPs for everything where you might want to
be able to make Neutron optional, and there are already a bunch of optional
services (such as all the telemetry services).

Ok, every time I want to disable or add a new service I can hack on the
manifest, but it's just extra work compared to reusing the exact same
method we already support for overcloud deployments.

> > 3. Get container integration "for free" when we land it in the overcloud
> > 
> > 4. Any introspection and debugging workflow becomes identical between the
> > undercloud and overcloud
> 
> I would love a defined debugging workflow for the overcloud first..

Sure, and it's something we have to improve regardless.

> > 5. We remove dependencies on a bunch of legacy scripts which run outside of
> > puppet
> 
> If you mean instack-undercloud element, we're getting rid of them anyway,
> no?

Quite a few still remain, but yeah there are less than there was, which is
good.

> > 6. Whenever someone lands support for a new service in the overcloud, we
> > automatically get undercloud support for it, completely for free.
> 
> Again, why? A service won't integrate itself into the deployment. And to be
> honest, the amount of options TripleO has already causes real world problems.
> I would rather see a well defined set of functionality for it..

It means it's easy to enable any service which is one less barrier to
integration, I'm not really sure how that could be construed as a bad
thing.

> > 7. Potential for much easier implementation of a multi-node undercloud
> 
> Ideally, I would love to see:
> 
>  for node in nodes:
>ssh $node puppet apply blah-blah

Haha, this is a delightful over-simplification, but it completely ignores
all of the logic to create the per-node manifests and hieradata.  This is
what the Heat templates already do for us, over multiple nodes by default.

> Maybe we're not there, but it only means we have to improve our puppet
> 

Re: [openstack-dev] [ptl][requirements] nomination period started

2016-08-05 Thread Anita Kuno

On 16-07-27 09:41 AM, Matthew Thode wrote:

We've started a period of self nomination in preparation for the
requirements project fully moving into its own project (as it's still under
Doug Hellmann).

We are gathering the self nominations here before we vote next week.
https://etherpad.openstack.org/p/requirements-ptl-newton

Nominees should also send an email to the openstack-dev list.



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


The candidates for the OpenStack Requirements PTL election are as follows:

* Matthew Thode - prometheanfire

* Tony Breeds - tonyb

* Swapnil Kulkarni - coolsvap

Ballots are forthcoming.


Thank you,

Anita and Doug.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] persistently single-vendor projects

2016-08-05 Thread Zane Bitter

On 04/08/16 23:00, joehuang wrote:

I think all of the problem is caused by the definition of "official OpenStack 
project" for a big-tent project.

I understand that each OpenStack vendor wants some differentiation in their 
solution, while also wanting
to collaborate on common core projects.


Nobody wants this. We want to build a fully-featured cloud that can run 
the same kinds of apps that users might develop for AWS/Azure/GCE, and 
we want those apps to be portable substantially everywhere. It's all 
right there in the Mission Statement.



If we replace the title "official OpenStack project" with "OpenStack ecosystem player", and 
make the "big-tent"
an "ecosystem play yard" (no closed roof), TCs can put more focus on 
governance of the core projects
(the current non-big-tent projects), and provide a more open place to grow an abundant 
ecosystem.


You're describing the exact situation we had before the 'big-tent' reform.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] [scheduler] Use ResourceProviderTags instead of ResourceClass?

2016-08-05 Thread Chris Dent

On Tue, 2 Aug 2016, Alex Xu wrote:


Chris had a thought about using ResourceClass to describe Capabilities
with an infinite inventory. In the beginning we brainstormed the idea of
Tags, and Tan Lin had the same thought, but we said no very quickly, because
ResourceClass is really about quantitative stuff. But Chris made a very good
point about simplifying the ResourceProvider model and the API.


I'm still leaning in this direction. I realized I wasn't explaining
myself very well and "because I like it" isn't really a good enough reason
for doing anything, so I wrote something up about it:

   https://anticdent.org/simple-resource-provision.html

--
Chris Dent   ┬─┬ノ( º _ ºノ) http://anticdent.org/
freenode: cdent tw: @anticdent
__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] devstack changing to neutron by default RSN

2016-08-05 Thread Sean Dague
On 08/04/2016 09:15 PM, Armando M. wrote:
> So glad we are finally within the grasp of this!
> 
> I posted [1], just to err on the side of caution and get the opportunity
> to see how other gate jobs for Neutron might be affected by this change.
> 
> Are there any devstack-gate changes lined up too that we should be aware of?
> 
> Cheers,
> Armando
> 
> [1] https://review.openstack.org/#/c/351450/

Nothing at this point. devstack-gate bypasses the service defaults in
devstack, so it doesn't impact that at all. Over time we'll want to make
neutron the default choice for all devstack-gate setups, and nova-net to
be the exception. But that actually can all be fully orthogonal to this
change.

The experimental results aren't quite in yet; it looks like one test
is failing on dvr (which is the one that tests for cross-tenant
connectivity) -
http://logs.openstack.org/50/350750/5/experimental/gate-tempest-dsvm-neutron-dvr/4958140/

That test has been pretty twitchy during this patch series, and it's
quite complex, so figuring out exactly why it's impacted here is a bit
beyond me atm. I think we need to decide if that is going to get deeper
inspection, we live with the fails, or we disable the test for now so we
can move forward and get this out to everyone.

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [storlets] Towards becoming an official project team

2016-08-05 Thread eran

Hi All,
Before making the motion we need to pick a PTL.
I would like to propose myself for the coming period, that is
until the October Summit and for the cycle that begins in October.
If there are any objections / other volunteers please speak up :-)

Otherwise,
1. We now have an independent release (stable/mitaka), currently aligned
   with tag 0.2.0.
2. I have added some initial info to the wiki at
   https://wiki.openstack.org/wiki/Storlets and would like to use it for
   design thoughts. I will add the security design there as soon as I am
   done with the Spark work.
3. I have updated the storlets driver team in Launchpad:
   https://launchpad.net/~storlets-drivers
4. The actual request for becoming an official team is to propose a patch to
   https://github.com/openstack/governance/blob/master/reference/projects.yaml;
   please find below an initial suggestion for the patch.
   Comments/suggestions are most welcome!

Thanks!
Eran

storlets:
  ptl:
name: Eran Rom
irc: eranrom
email: e...@itsonlyme.name
  irc-channel: openstack-storlets
  mission: >
    To enable a user-friendly, cost-effective, scalable and secure way for
    executing storage-centric user-defined functions near the data within
    OpenStack Swift
  url: https://wiki.openstack.org/wiki/Storlets
  tags:
- team:diverse-affiliation
  deliverables:
storlets:
  repos:
- openstack/storlets
  tags:
- release:independent
- type: service-extension   <--- Not sure there is such a type...




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] a new Undercloud install driven by Heat

2016-08-05 Thread Gerard Braad
Hi,

On Fri, Aug 5, 2016 at 7:56 PM, Dmitry Tantsur  wrote:
> Ideally, I would love to see:
>
>  for node in nodes:
>ssh $node puppet apply blah-blah
>
> Maybe we're not there, but it only means we have to improve our puppet
> modules.

This is the same thought I had. Shouldn't the config be just a call to
a manifest?

The undercloud should be a simple install process with some config (or
an image-based deployment). Using Heat to deploy the undercloud
involves bootstrapping a Heat environment. I believe Ansible feels
like a much better fit for this. What would the user/administrator
want? Is customization of the undercloud something that realistically
happens?

regards,


Gerard

-- 

   Gerard Braad | http://gbraad.nl
   [ Doing Open Source Matters ]

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat] [yaql] Evaluate YAQL expressions in the yaqluator

2016-08-05 Thread Kirill Zaitsev
Hi and thanks for your continued support for yaql =) 

Please note that we currently have a big effort underway in updating and 
writing yaql documentation (I hope we’ll get it done and ready for Barcelona). 
Feel free to propose a short article about the yaqluator to the official yaql docs ;)

-- 
Kirill Zaitsev
Murano Project Tech Lead
Software Engineer at
Mirantis, Inc

On 29 juillet 2016 at 09:08:41, Elisha, Moshe (Nokia - IL) 
(moshe.eli...@nokia.com) wrote:

Hi,

I saw that, starting with the Newton release, Heat supports a yaql function [1].
I think this will prove to be very powerful and very handy.

I wanted to make sure you are familiar with the yaqluator[2] as it might be 
useful for you.

yaqluator is a free online YAQL evaluator.
* Enter a YAML / JSON and a YAQL expression and evaluate to see the result.
* There is a catalog of commonly used OpenStack API responses to run YAQL 
expressions against.
* It is open-source[3] and any contribution is welcome.

I hope you will find it useful.


[1] http://docs.openstack.org/developer/heat/template_guide/hot_spec.html#yaql
[2] http://yaqluator.com
[3] https://github.com/ALU-CloudBand/yaqluator
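
As a minimal illustration (not taken from the thread above), evaluating a YAQL
expression against plain data with the yaql Python library might look like the
sketch below; the sample data and expression are invented, and the factory
usage is simply what the yaql documentation describes:

    from yaql import factory

    # Sample data of the kind you might paste into the yaqluator.
    data = {
        "servers": [
            {"name": "web-1", "status": "ACTIVE"},
            {"name": "web-2", "status": "ERROR"},
            {"name": "db-1", "status": "ACTIVE"},
        ],
    }

    # Parse the expression once, then evaluate it with $ bound to the data.
    engine = factory.YaqlFactory().create()
    expression = engine("$.servers.where($.status = 'ACTIVE').select($.name)")
    print(list(expression.evaluate(data=data)))  # ['web-1', 'db-1']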

__  
OpenStack Development Mailing List (not for usage questions)  
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe  
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev  


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] a new Undercloud install driven by Heat

2016-08-05 Thread Dan Prince
On Fri, 2016-08-05 at 13:39 +0200, Thomas Herve wrote:
> On Thu, Aug 4, 2016 at 11:48 PM, Dan Prince 
> wrote:
> > 
> > Last week I started some prototype work on what could be a new way
> > to
> > install the Undercloud. The driving force behind this was some of
> > the
> > recent "composable services" work we've done in TripleO so
> > initially I
> > called it composable undercloud. There is an etherpad here with
> > links
> > to some of the patches already posted upstream (many of which stand
> > as
> > general improvements on their own outside the scope of what I'm
> > talking about here).
> > 
> > https://etherpad.openstack.org/p/tripleo-composable-undercloud
> > 
> > The idea in short is that we could spin up a small single process
> > all-
> > in-one heat-all (engine and API) and thereby avoid things like
> > Rabbit,
> > and MySQL.
> I saw those patches coming, I'm interested in the all-in-one
> approach,
> if only for testing purpose. I hope to be able to propose a solution
> with broker-less RPC instead of fake RPC at some point, but it's a
> good first step.
> 
> I'm a bit more intrigued by the no-auth patch. It seems that Heat
> would rely heavily on Keystone interactions even after initial
> authentication, so I wonder how that works. As it seems you would need
> to push the same approach to Ironic, have you considered starting
> Keystone instead? It's a simple WSGI service, and can work with
> SQLite
> as well I believe.

You are correct. Noauth wasn't enough. I had to add a bit more to make
OS::Heat::SoftwareDeployments happy to get the templates I showed in
the demo working. Surprisingly though, if I avoided Heat
OS::Heat::SoftwareDeployments and only used OS::Heat::SoftwareConfigs
in my templates, no extra keystone auth was needed. This is because heat
only creates the extra Keystone user, trust, etc. when realizing the
software deployments I think.

I started with this which should work for multiple projects besides
just Heat: https://review.openstack.org/#/c/351351/2/tripleoclient/fake_keystone.py

I'd be happy to swap in full Keystone if people prefer but that would
be more memory and setup. Keystone dropped its eventlet runner
recently, so we'd have to fork another WSGI process to run it, I think,
somewhere in an out-of-the-way (non-default ports, etc.) fashion. I was
trying to keep the project list minimal so I went and stubbed in only
what was functionally needed for this here with an eye that we'd
actually (at some point) make heat support true noauth again.

Dan

> 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] a new Undercloud install driven by Heat

2016-08-05 Thread Dmitry Tantsur

On 08/05/2016 01:34 PM, Dan Prince wrote:

On Fri, 2016-08-05 at 12:27 +0200, Dmitry Tantsur wrote:

On 08/04/2016 11:48 PM, Dan Prince wrote:


Last week I started some prototype work on what could be a new way
to
install the Undercloud. The driving force behind this was some of
the
recent "composable services" work we've done in TripleO so
initially I
called it composable undercloud. There is an etherpad here with
links
to some of the patches already posted upstream (many of which stand
as
general improvements on their own outside the scope of what I'm
talking about here).

https://etherpad.openstack.org/p/tripleo-composable-undercloud

The idea in short is that we could spin up a small single process
all-
in-one heat-all (engine and API) and thereby avoid things like
Rabbit,
and MySQL. Then we can use Heat templates to drive the Undercloud
deployment just like we do in the Overcloud.

I don't want to sound rude, but please no. The fact that you have a
hammer does not mean everything around is nails :( What problem are
you
trying to solve by doing it?


Several problems I think.

One is TripleO has gradually moved away from elements. And while we
still use DIB elements for some things we no longer favor that tool and
instead rely on Heat and config management tooling to do our stepwise
deployment ordering. This leaves us using instack-undercloud, a tool
built specifically to install elements locally as a means to create our
undercloud. It works... and I do think we've packaged it nicely but it
isn't the best architectural fit for where we are going I think. I
actually think that from an end-user contribution standpoint using t-h-
t could be quite nice for adding features to the Undercloud.


I don't quite get how it is better than finally moving to puppet only 
and stopping the use of elements.




Second would be re-use. We just spent a huge amount of time in Newton
(and some in Mitaka) refactoring t-h-t around composable services. So
say you add a new composable service for Barbican in the Overcloud...
wouldn't it be nice to be able to consume the same thing in your
Undercloud as well? Right now you can't, you have to do some of the
work twice and in quite different formats I think. Sure, there is some
amount of shared puppet work but that is only part of the picture I
think.


I've already responded to Steve's email, so a tl;dr here: I'm not sure 
why you want to add random services to the undercloud. Have you seen an 
installer ever benefiting from e.g. adding a FileSystem-as-a-Service or 
Database-as-a-Service solution?




There are new features to think about here too. Once upon a time
TripleO supported multi-node underclouds. When we switched to instack-
undercloud we moved away from that. By switching back to tripleo-heat-
templates we could structure our templates around abstractions like
resource groups and the new 'deployed-server' trick that allow you to
create machines either locally or perhaps via Ironic too. We could
avoid Ironic entirely and always install the Undercloud on existing
servers via 'deployed-server' as well.


A side note: if we do use Ironic for this purpose, I would expect some 
help with pushing the Ironic composable service through. And the 
ironic-inspector's one, which I haven't even started.


I'm still struggling to understand what entity is going to install this 
bootstrapping Heat instance. Are we bringing back seed?




Lastly, there is container work ongoing for the Overcloud. Again, I'd
like to see us adopt a format that would allow it to be used in the
Undercloud as well as opposed to having to re-implement features in the
Over and Under clouds all the time.



Undercloud installation is already sometimes fragile, but it's
probably
the least fragile part right now (at least from my experience). And
at
the very least it's pretty obviously debuggable in most cases. THT
is
hard to understand and often impossible to debug. I'd prefer we move
away from THT completely rather than trying to fix it in one more
place
where heat does not fit..


What tool did you have in mind? FWIW I started with heat because by
using just Heat I was able to take the initial steps to prototype this.

In my mind Mistral might be next here and in fact it already supports
the single process launching idea thing. Keeping the undercloud
installer as light as possible would be ideal though.


I don't have a really huge experience with both, but for me Mistral 
seems much cleaner and easier to understand. That, of course, won't 
allow you to reuse the existing heat templates (which may be good or 
bad depending on your point of view).




Dan






I created a short video demonstration which goes over some of the
history behind the approach, and shows a live demo of all of this
working with the patches above:

https://www.youtube.com/watch?v=y1qMDLAf26Q

Thoughts? Would it be cool to have a session to discuss this more
in
Barcelona?

Dan Prince (dprince)


Re: [openstack-dev] [TripleO] a new Undercloud install driven by Heat

2016-08-05 Thread Dmitry Tantsur

On 08/05/2016 01:21 PM, Steven Hardy wrote:

On Fri, Aug 05, 2016 at 12:27:40PM +0200, Dmitry Tantsur wrote:

On 08/04/2016 11:48 PM, Dan Prince wrote:

Last week I started some prototype work on what could be a new way to
install the Undercloud. The driving force behind this was some of the
recent "composable services" work we've done in TripleO so initially I
called in composable undercloud. There is an etherpad here with links
to some of the patches already posted upstream (many of which stand as
general imporovements on their own outside the scope of what I'm
talking about here).

https://etherpad.openstack.org/p/tripleo-composable-undercloud

The idea in short is that we could spin up a small single process all-
in-one heat-all (engine and API) and thereby avoid things like Rabbit,
and MySQL. Then we can use Heat templates to drive the Undercloud
deployment just like we do in the Overcloud.


I don't want to sound rude, but please no. The fact that you have a hammer
does not mean everything around is nails :( What problem are you trying to
solve by doing it?


I think Dan explains it pretty well in his video, and your comment
indicates a fundamental misunderstanding around the entire TripleO vision,
which is about symmetry and reuse between deployment tooling and the
deployed cloud.


Well, except that you need some non-OpenStack starting point, because 
unlike with e.g. Ansible, installing any OpenStack service(s) does not 
end at "dnf install ".




The problems this would solve are several:

1. Remove divergence between undercloud and overcloud puppet implementation
(instead of having an undercloud specific manifest, we reuse the *exact*
same stuff we use for overcloud deployments)


I'm not against reusing puppet bits, I'm against building the same heavy 
abstraction layer with heat around it.




2. Better modularity, far easier to enable/disable services


Why? Do you expect enabling/disabling Nova, for example? In this regard 
undercloud is fundamentally different from overcloud: for the former we 
have a list of required services and a pretty light list of optional 
services.




3. Get container integration "for free" when we land it in the overcloud

4. Any introspection and debugging workflow becomes identical between the
undercloud and overcloud


I would love a defined debugging workflow for the overcloud first..



5. We remove dependencies on a bunch of legacy scripts which run outside of
puppet


If you mean the instack-undercloud elements, we're getting rid of them 
anyway, no?




6. Whenever someone lands support for a new service in the overcloud, we
automatically get undercloud support for it, completely for free.


Again, why? A service won't integrate itself into the deployment. And to 
be honest, the amount of options TripleO has already causes real-world 
problems. I would rather see a well-defined set of functionality for it..




7. Potential for much easier implementation of a multi-node undercloud


Ideally, I would love to see:

 for node in nodes:
   ssh $node puppet apply blah-blah

Maybe we're not there, but it only means we have to improve our puppet 
modules.





Undercloud installation is already sometimes fragile, but it's probably the
least fragile part right now (at least from my experience). And at the very
least it's pretty obviously debuggable in most cases. THT is hard to
understand and often impossible to debug. I'd prefer we move away from THT
completely rather than trying to fix it in one more place where heat does
not fit..


These are some strong but unqualified assertions, so it's hard to really
respond.


We'll talk about "unqualified" assertions the next time I try to get 
answers on #tripleo after seeing error messages like "controller_step42 
failed with code 1" ;)



Yes, there is complexity, but it's a set of abstractions which
actually work pretty well for us, so there is value in having just one set
of abstractions used everywhere vs special-casing the undercloud.


There should be a point where we stop. What entity is going to install 
heat to install the undercloud (did I just say "seed")? What will provide HA 
for it? Authentication, template storage and versioning? How do you 
reuse the same abstractions (that's the whole point after all)?




Re moving away from THT completely, this is not a useful statement -
yes, there are alternative tools, but if you were to remove THT and just
use some other tool with Ironic, the result would simply not be TripleO.
There would be zero migration/upgrade path for existing users and all
third-party integrations (and our API/UI) would break.


I don't agree it would not be TripleO. OpenStack does not end at heat 
templates; some deployments don't even use heat.




FWIW I think this prototyping work is very interesting, and I'm certainly
keen to get wider (more constructive) feedback and see where it leads.

Thanks,

Steve

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

Re: [openstack-dev] [magnum]

2016-08-05 Thread Chinmaya Bharadwaj
Hi,

You can run `tox -egenconfig` under the magnum directory to generate a sample
conf.

#Chinmay

On 5 August 2016 at 16:51, Yasemin DEMİRAL (BİLGEM BTE) <
yasemin.demi...@tubitak.gov.tr> wrote:

> Hi
>
> I am trying to run magnum on devstack. In the manual, the Configure magnum: section
> has the sudo cp etc/magnum/magnum.conf.sample /etc/magnum/magnum.conf command,
> but there is no magnum.conf.
>  What should I do?
>
> Thanks
>
> Yasemin
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] a new Undercloud install driven by Heat

2016-08-05 Thread Thomas Herve
On Thu, Aug 4, 2016 at 11:48 PM, Dan Prince  wrote:
> Last week I started some prototype work on what could be a new way to
> install the Undercloud. The driving force behind this was some of the
> recent "composable services" work we've done in TripleO so initially I
> called it composable undercloud. There is an etherpad here with links
> to some of the patches already posted upstream (many of which stand as
> general improvements on their own outside the scope of what I'm
> talking about here).
>
> https://etherpad.openstack.org/p/tripleo-composable-undercloud
>
> The idea in short is that we could spin up a small single process all-
> in-one heat-all (engine and API) and thereby avoid things like Rabbit,
> and MySQL.

I saw those patches coming, I'm interested in the all-in-one approach,
if only for testing purpose. I hope to be able to propose a solution
with broker-less RPC instead of fake RPC at some point, but it's a
good first step.

I'm a bit more intrigued by the no-auth patch. It seems that Heat
would rely heavily on Keystone interactions even after initial
authentication, so I wonder how that works. As it seems you would need
to push the same approach to Ironic, have you considered starting
Keystone instead? It's a simple WSGI service, and can work with SQLite
as well I believe.

-- 
Thomas

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] a new Undercloud install driven by Heat

2016-08-05 Thread Dan Prince
On Fri, 2016-08-05 at 12:27 +0200, Dmitry Tantsur wrote:
> On 08/04/2016 11:48 PM, Dan Prince wrote:
> > 
> > Last week I started some prototype work on what could be a new way
> > to
> > install the Undercloud. The driving force behind this was some of
> > the
> > recent "composable services" work we've done in TripleO so
> > initially I
> > called it composable undercloud. There is an etherpad here with
> > links
> > to some of the patches already posted upstream (many of which stand
> > as
> > general improvements on their own outside the scope of what I'm
> > talking about here).
> > 
> > https://etherpad.openstack.org/p/tripleo-composable-undercloud
> > 
> > The idea in short is that we could spin up a small single process
> > all-
> > in-one heat-all (engine and API) and thereby avoid things like
> > Rabbit,
> > and MySQL. Then we can use Heat templates to drive the Undercloud
> > deployment just like we do in the Overcloud.
> I don't want to sound rude, but please no. The fact that you have a 
> hammer does not mean everything around is nails :( What problem are
> you 
> trying to solve by doing it?

Several problems I think.

One is TripleO has gradually moved away from elements. And while we
still use DIB elements for some things we no longer favor that tool and
instead rely on Heat and config management tooling to do our stepwise
deployment ordering. This leaves us using instack-undercloud, a tool
built specifically to install elements locally as a means to create our
undercloud. It works... and I do think we've packaged it nicely but it
isn't the best architectural fit for where we are going I think. I
actually think that from an end-user contribution standpoint using t-h-
t could be quite nice for adding features to the Undercloud.

Second would be re-use. We just spent a huge amount of time in Newton
(and some in Mitaka) refactoring t-h-t around composable services. So
say you add a new composable service for Barbican in the Overcloud...
wouldn't it be nice to be able to consume the same thing in your
Undercloud as well? Right now you can't, you have to do some of the
work twice and in quite different formats I think. Sure, there is some
amount of shared puppet work but that is only part of the picture I
think.

There are new features to think about here too. Once upon a time
TripleO supported multi-node underclouds. When we switched to instack-
undercloud we moved away from that. By switching back to tripleo-heat-
templates we could structure our templates around abstractions like
resource groups and the new 'deployed-server' trick that allow you to
create machines either locally or perhaps via Ironic too. We could
avoid Ironic entirely and always install the Undercloud on existing
servers via 'deployed-server' as well.

Lastly, there is container work ongoing for the Overcloud. Again, I'd
like to see us adopt a format that would allow it to be used in the
Undercloud as well as opposed to having to re-implement features in the
Over and Under clouds all the time.

> 
> Undercloud installation is already sometimes fragile, but it's
> probably 
> the least fragile part right now (at least from my experience). And
> at 
> the very least it's pretty obviously debuggable in most cases. THT
> is 
> hard to understand and often impossible to debug. I'd prefer we move 
> away from THT completely rather than trying to fix it in one more
> place 
> where heat does not fit..

What tool did you have in mind? FWIW I started with heat because by
using just Heat I was able to take the initial steps to prototype this.

In my mind Mistral might be next here and in fact it already supports
the single process launching idea thing. Keeping the undercloud
installer as light as possible would be ideal though.

Dan

> 
> > 
> > 
> > I created a short video demonstration which goes over some of the
> > history behind the approach, and shows a live demo of all of this
> > working with the patches above:
> > 
> > https://www.youtube.com/watch?v=y1qMDLAf26Q
> > 
> > Thoughts? Would it be cool to have a session to discuss this more
> > in
> > Barcelona?
> > 
> > Dan Prince (dprince)
> > 
> > ___
> > ___
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsu
> > bscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> > 
> 
> _
> _
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubs
> cribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

[openstack-dev] [magnum] devstack magnum.conf

2016-08-05 Thread BİLGEM BTE

Hi 

I am trying to run magnum on devstack. In the manual, the Configure magnum: section has the sudo 
cp etc/magnum/magnum.conf.sample /etc/magnum/magnum.conf command, but there is 
no magnum.conf. 
What should I do? 

Thanks 

Yasemin 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] a new Undercloud install driven by Heat

2016-08-05 Thread Steven Hardy
On Fri, Aug 05, 2016 at 12:27:40PM +0200, Dmitry Tantsur wrote:
> On 08/04/2016 11:48 PM, Dan Prince wrote:
> > Last week I started some prototype work on what could be a new way to
> > install the Undercloud. The driving force behind this was some of the
> > recent "composable services" work we've done in TripleO so initially I
> > called it composable undercloud. There is an etherpad here with links
> > to some of the patches already posted upstream (many of which stand as
> > general improvements on their own outside the scope of what I'm
> > talking about here).
> > 
> > https://etherpad.openstack.org/p/tripleo-composable-undercloud
> > 
> > The idea in short is that we could spin up a small single process all-
> > in-one heat-all (engine and API) and thereby avoid things like Rabbit,
> > and MySQL. Then we can use Heat templates to drive the Undercloud
> > deployment just like we do in the Overcloud.
> 
> I don't want to sound rude, but please no. The fact that you have a hammer
> does not mean everything around is nails :( What problem are you trying to
> solve by doing it?

I think Dan explains it pretty well in his video, and your comment
indicates a fundamental misunderstanding around the entire TripleO vision,
which is about symmetry and reuse between deployment tooling and the
deployed cloud.

The problems this would solve are several:

1. Remove divergence between undercloud and overcloud puppet implementation
(instead of having an undercloud specific manifest, we reuse the *exact*
same stuff we use for overcloud deployments)

2. Better modularity, far easier to enable/disable services

3. Get container integration "for free" when we land it in the overcloud

4. Any introspection and debugging workflow becomes identical between the
undercloud and overcloud

5. We remove dependencies on a bunch of legacy scripts which run outside of
puppet

6. Whenever someone lands support for a new service in the overcloud, we
automatically get undercloud support for it, completely for free.

7. Potential for much easier implementation of a multi-node undercloud

> Undercloud installation is already sometimes fragile, but it's probably the
> least fragile part right now (at least from my experience). And at the very
> least it's pretty obviously debuggable in most cases. THT is hard to
> understand and often impossible to debug. I'd prefer we move away from THT
> completely rather than trying to fix it in one more place where heat does
> not fit..

These are some strong but unqualified assertions, so it's hard to really
respond.  Yes, there is complexity, but it's a set of abstractions which
actually work pretty well for us, so there is value in having just one set
of abstractions used everywhere vs special-casing the undercloud.

Re moving away from THT completely, this is not a useful statement -
yes, there are alternative tools, but if you were to remove THT and just
use some other tool with Ironic, the result would simply not be TripleO.
There would be zero migration/upgrade path for existing users and all
third-party integrations (and our API/UI) would break.

FWIW I think this prototyping work is very interesting, and I'm certainly
keen to get wider (more constructive) feedback and see where it leads.

Thanks,

Steve

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [magnum]

2016-08-05 Thread BİLGEM BTE
Hi 

I am trying to run magnum on devstack. In the manual, the Configure magnum: section has the sudo 
cp etc/magnum/magnum.conf.sample /etc/magnum/magnum.conf command, but there is 
no magnum.conf. 
What should I do? 

Thanks 

Yasemin 
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Some thoughts on API microversions

2016-08-05 Thread Jim Rollenhagen
On Wed, Aug 03, 2016 at 08:54:07PM -0400, Andrew Laski wrote:
> I've brought some of these thoughts up a few times in conversations
> where the Nova team is trying to decide if a particular change warrants
> a microversion. I'm sure I've annoyed some people by this point because
> it wasn't germane to those discussions. So I'll lay this out in its own
> thread.
> 
> I am a fan of microversions. I think they work wonderfully to express
> when a resource representation changes, or when different data is
> required in a request. This allows clients to make the same request
> across multiple clouds and expect the exact same response format,
> assuming those clouds support that particular microversion. I also think
> they work well to express that a new resource is available. However I do
> think they have some shortcomings in expressing that a resource
> has been removed. But in short I think microversions work great for
> expressing that there have been changes to the structure and format of
> the API.
> 
> I think microversions are being overused as a signal for other types of
> changes in the API because they are the only tool we have available. The
> most recent example is a proposal to allow the revert_resize API call to
> work when a resizing instance ends up in an error state. I consider
> microversions to be problematic for changes like that because we end up
> in one of two situations:
> 
> 1. The microversion is a signal that the API now supports this action,
> but users can perform the action at any microversion. What this really
> indicates is that the deployment being queried has upgraded to a certain
> point and has a new capability. The structure and format of the API have
> not changed so an API microversion is the wrong tool here. And the
> expected use of a microversion, in my opinion, is to demarcate that the
> API is now different at this particular point.

+1. Microversions as a concept was created (and communicated) on the
basis that the API should always behave the same if the client always
sends the same microversion. Clients pinned to a particular version
likely won't notice the new microversion until they need to (probably
when behavior changes when they weren't expecting it).
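
As a minimal sketch of what that pinning looks like from a client's side (the
endpoint and token below are placeholders, and the header shown is the compute
API's microversion header; this is illustrative, not anyone's production code):

    import requests

    NOVA_ENDPOINT = "http://controller:8774/v2.1"  # placeholder endpoint
    TOKEN = "replace-with-a-real-token"            # placeholder token

    headers = {
        "X-Auth-Token": TOKEN,
        # Pin the compute microversion; the response format should then stay
        # the same on any cloud that supports 2.25, whatever its maximum is.
        "X-OpenStack-Nova-API-Version": "2.25",
    }

    resp = requests.get(NOVA_ENDPOINT + "/servers/detail", headers=headers)
    resp.raise_for_status()
    for server in resp.json()["servers"]:
        print(server["id"], server["status"])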

> 2. The microversion is a signal that the API now supports this action,
> and users are restricted to using it only on or after that microversion.
> In many cases this is an artificial constraint placed just to satisfy
> the expectation that the API does not change before the microversion.
> But the reality is that if the API change was exposed to every
> microversion it does not affect the ability I lauded above of a client
> being able to send the same request and receive the same response from
> disparate clouds. In other words exposing the new action for all
> microversions does not affect the interoperability story of Nova which
> is the real use case for microversions. I do recognize that the
> situation may be more nuanced and constraining the action to specific
> microversions may be necessary, but that's not always true.

I actually do disagree here. While adding a field to a resource, or a
new action, probably won't break any clients, I do think it's a good
signal here. If clients wish to always have all the new features, they
should be looking for new versions often and doing the work to move up.
They probably can't use the new thing without a code change, anyway, so
I don't think it's a major problem to need to bump the version to get a
new thing.

That said, it is annoying as a developer to need to deal with all the
versioning things to simply add a field. I know plenty of folks that
agree with you here, and can understand why. :)

> In case 1 above I think we could find a better way to do this. And I
> don't think we should do case 2, though there may be special cases that
> warrant it.
> 
> As possible alternate signalling methods I would like to propose the
> following for consideration:
> 
> Exposing capabilities that a user is allowed to use. This has been
> discussed before and there is general agreement that this is something
> we would like in Nova. Capabilities will programatically inform users
> that a new action has been added or an existing action can be performed
> in more cases, like revert_resize. With that in place we can avoid the
> ambiguous use of microversions to do that. In the meantime I would like
> the team to consider not using microversions for this case. We have
> enough of them being added that I think for now we could just wait for
> the next microversion after a capability is added and document the new
> capability there.

I do agree we should advertise capabilities. However, those should
*also* be added behind a microversion (and not exposed in earlier
microversions), IMO.
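
To make the capability idea concrete, here is a purely hypothetical sketch of
a capability payload and a client-side check; none of these keys, names or
endpoints exist in Nova today, they are invented for illustration only:

    # Hypothetical payload a deployment might advertise; all names invented.
    capabilities = {
        "compute": {
            "revert_resize_on_error": True,
            "live_resize": False,
        },
    }

    def can(service, name):
        # True only if the deployment explicitly advertises the capability.
        return capabilities.get(service, {}).get(name, False)

    if can("compute", "revert_resize_on_error"):
        print("safe to call revert_resize on an instance in resize-error state")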

> Secondly we could consider some indicator that exposes how new the code
> in a deployment is. Rather than using microversions as a proxy to
> indicate that a deployment has hit a certain 

Re: [openstack-dev] [Nova] Some thoughts on API microversions

2016-08-05 Thread Jim Rollenhagen
On Thu, Aug 04, 2016 at 04:31:00PM -0400, Jay Pipes wrote:
> On 08/04/2016 01:17 PM, Chris Friesen wrote:
> >On 08/04/2016 09:28 AM, Edward Leafe wrote:
> >
> >>The idea that by specifying a distinct microversion would somehow
> >>guarantee
> >>an immutable behavior, though, is simply not the case. We discussed
> >>this at
> >>length at the midcycle regarding the dropping of the nova-network
> >>code; once
> >>that's dropped, there won't be any way to get that behavior no matter
> >>what
> >>microversion you specify. It's gone. We signal this with deprecation
> >>notices,
> >>release notes, etc., and it's up to individuals to move away from
> >>using that
> >>behavior during this deprecation period. A new microversion will never
> >>help
> >>anyone who doesn't follow these signals.
> >
> >I was unable to attend the midcycle, but that seems to violate the
> >original description of how microversions were supposed to work.  As I
> >recall, the original intent was something like this:
> >
> >At time T, we remove an API via microversion X.  We keep the code around
> >to support it when using microversions less than X.
> >
> >At some later time T+i, we bump the minimum microversion up to X.  At
> >this point nobody can ever request the older microversions, so we can
> >safely remove the server-side code.
> >
> >Have we given up on this?  Or is nova-network a special-case?
> 
> This is how Ironic works with microversions today, yes. However, in Nova
> we've unfortunately taken the policy that we will probably *never* bump the
> minimum microversion.

Well, ironic has taken the same policy so far. However, we are thinking
about breaking that, to be able to clean up technical debt (and frankly,
terrible APIs).

If we had an equivalent of nova-network to get rid of, we would drop it
in a microversion, (finally) figure out how to signal that we're finally
going to bump the minimum, and do it (with a reasonable timeframe for
folks to adjust). Or at least, that's how I envision it, I can't
speak for other ironic devs.

I do agree that using a microversion to signal that something is going
away in all versions is not cool. I spoke up about this at the nova
midcycle. If we're being real, someone that has accepted microversions
as a thing is going to assume that pinning to a given version will mean
their client will always work the same (because that's what we've
communicated). They won't read the API docs because they don't need to; the
API won't change, right? Then we drop a thing across all versions and
suddenly they're broken. With a client dev hat on, I'd much rather see
an error message of "this version no longer exists" as opposed to a
random 404. The former has a much more concrete action to take.

Again, if we're being real, there probably aren't very many nova client
applications that are complex enough that moving up a cycle or two worth
of microversions is a huge pain point. These versions spread across a
fairly large number of REST resources, and most apps don't use them all.
The changes for each version are also (or should be) fairly small,
and so it shouldn't be painful to upgrade across them. I tend to think
the worst case breakage is a couple hours worth of work.

That said, I do think we *should* be very careful about raising
minimums. Deprecation warnings should be very in-your-face, the choice to
do so should be very deliberate, and it should happen over multiple
cycles. But we need to be able to do it, otherwise we'll find ourselves
10 years from now supporting code that barely makes sense in the real
world.

So, I think what I'm saying is I somewhat agree with Jay here. :)

// jim

> 
> I personally find this extremely distasteful as I hate all the crap that
> needs to sit around along with all the conditionals in the code that have to
> do with continuing to support old behaviour.
> 
> If it were up to me, the Nova project would just say to operators and
> library/SDK develpers: if you want feature foo, then the tradeoff is that
> the minimum microversion is going up to X. Operators can choose to continue
> on the old code or indicate to their users that they are running a minimum
> newer version of the Compute API and users will need to use a library that
> passes that minimum version header at least.
> 
> IMHO we have gone out of our way to cater to mythical users who under no
> circumstances should ever be affected by changes in an API. Enough is
> enough. It's time we took back some control and cleaned up a bunch of
> technical debt and poor API design vestigial tails by raising the minimum
> microversion of the Compute API.
> 
> And no, the above isn't saying "to hell with our users". It's a simple
> statement that we cannot be beholden to a small minority of users, however
> vocal, that wish that nothing would ever change. These users can continue to
> deploy CentOS4 or Ubuntu 10.04 or libvirt 0.9.8 [1] if they wish and not
> upgrade OpenStack, but that shouldn't mean that we as a project 

Re: [openstack-dev] [TripleO] a new Undercloud install driven by Heat

2016-08-05 Thread Dmitry Tantsur

On 08/04/2016 11:48 PM, Dan Prince wrote:

Last week I started some prototype work on what could be a new way to
install the Undercloud. The driving force behind this was some of the
recent "composable services" work we've done in TripleO so initially I
called it composable undercloud. There is an etherpad here with links
to some of the patches already posted upstream (many of which stand as
general improvements on their own outside the scope of what I'm
talking about here).

https://etherpad.openstack.org/p/tripleo-composable-undercloud

The idea in short is that we could spin up a small single process all-
in-one heat-all (engine and API) and thereby avoid things like Rabbit,
and MySQL. Then we can use Heat templates to drive the Undercloud
deployment just like we do in the Overcloud.


I don't want to sound rude, but please no. The fact that you have a 
hammer does not mean everything around is nails :( What problem are you 
trying to solve by doing it?


Undercloud installation is already sometimes fragile, but it's probably 
the least fragile part right now (at least from my experience). And at 
the very least it's pretty obviously debuggable in most cases. THT is 
hard to understand and often impossible to debug. I'd prefer we move 
away from THT completely rather than trying to fix it in one more place 
where heat does not fit..




I created a short video demonstration which goes over some of the
history behind the approach, and shows a live demo of all of this
working with the patches above:

https://www.youtube.com/watch?v=y1qMDLAf26Q

Thoughts? Would it be cool to have a session to discuss this more in
Barcelona?

Dan Prince (dprince)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [new][keystone] keystoneauth1 2.11.0 release (newton)

2016-08-05 Thread no-reply
We are eager to announce the release of:

keystoneauth1 2.11.0: Authentication Library for OpenStack Identity

This release is part of the newton release series.

With source available at:

http://git.openstack.org/cgit/openstack/keystoneauth

With package available at:

https://pypi.python.org/pypi/keystoneauth1

Please report issues through launchpad:

http://bugs.launchpad.net/keystoneauth

For more details, please see below.

Changes in keystoneauth1 2.10.0..2.11.0
---

85822f3 Add tests for YamlJsonSerializer
a8ccbb1 Updated from global requirements
8202c6a Don't include openstack/common in flake8 exclude list
82804f6 Improve authentication plugins documentation
31796b3 Add missing class name to tuple of public objects
2e227b9 Correctly report available for ADFS plugin
1982b23 Updated from global requirements
dacbc5f Fix arguments to _auth_required()
313006a Fix the doc error in "using-session"
e9bbca7 Use assertEqual() instead of assertDictEqual()


Diffstat (except docs and test files)
-

keystoneauth1/extras/_saml2/__init__.py|  1 +
keystoneauth1/extras/_saml2/_loading.py|  4 ++
keystoneauth1/extras/_saml2/v3/__init__.py |  4 +-
keystoneauth1/extras/_saml2/v3/adfs.py |  6 ++-
keystoneauth1/extras/_saml2/v3/base.py |  5 +-
keystoneauth1/fixture/serializer.py| 10 ++--
keystoneauth1/identity/v3/oidc.py  |  1 +
keystoneauth1/session.py   |  3 +-
test-requirements.txt  |  4 +-
tox.ini|  2 +-
17 files changed, 116 insertions(+), 26 deletions(-)


Requirements updates


diff --git a/test-requirements.txt b/test-requirements.txt
index a0a91cc..8568319 100644
--- a/test-requirements.txt
+++ b/test-requirements.txt
@@ -13 +13 @@ mock>=2.0 # BSD
-oslo.config>=3.12.0 # Apache-2.0
+oslo.config>=3.14.0 # Apache-2.0
@@ -15 +15 @@ oslosphinx!=3.4.0,>=2.5.0 # Apache-2.0
-oslo.utils>=3.15.0 # Apache-2.0
+oslo.utils>=3.16.0 # Apache-2.0



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] stable/liberty busted

2016-08-05 Thread Kevin Benton
Sorry I didn't elaborate a bit more, I was replying from my phone. The
agent has logic that calculates the required flows for ports when it starts
up and then reconciles that with the current flows in OVS so it doesn't
disrupt traffic on every restart. The tests for that run constant pings in
the background while constantly calling the restart logic to ensure no
packets are lost.
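
A rough, schematic sketch of that test pattern (this is not the actual neutron
functional test; the ping target and the restart hook are placeholders for
illustration only):

    import re
    import subprocess

    def restart_agent():
        # Placeholder for the agent restart / flow-reconciliation logic.
        pass

    # Ping in the background for the duration of the exercise.
    ping = subprocess.Popen(
        ["ping", "-i", "0.2", "-c", "50", "192.0.2.10"],  # example target
        stdout=subprocess.PIPE, universal_newlines=True)

    for _ in range(5):  # keep exercising restarts while the pings run
        restart_agent()

    out, _ = ping.communicate()  # wait for all 50 pings to finish
    sent, received = map(int, re.search(
        r"(\d+) packets transmitted, (\d+)(?: packets)? received", out).groups())
    assert sent == received, "%d packets lost during restart" % (sent - received)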

On Thu, Aug 4, 2016 at 2:14 PM, Kevin Benton  wrote:

> Hitless restart logic in the agent.
>
> On Aug 4, 2016 14:07, "Rick Jones"  wrote:
>
>> On 08/04/2016 01:39 PM, Kevin Benton wrote:
>>
>>> Yep. Some tests are making sure there are no packets lost. Some are
>>> making sure that stuff starts working eventually.
>>>
>>
>> Not to be pedantic, but what sort of requirement exists that no packets
>> be lost?
>>
>> rick
>>
>>
>> 
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscrib
>> e
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum] 2 million requests / sec, 100s of nodes

2016-08-05 Thread Ricardo Rocha
Hi.

Quick update: 1000 nodes and 7 million reqs/sec :) - and the number of
requests should be higher but we had some internal issues. We have a
submission for barcelona to provide a lot more details.

But a couple questions came during the exercise:

1. Do we really need a volume in the VMs? On large clusters this is a
burden, and local storage only should be enough?

2. We observe a significant delay (~10min, which is half the total time to
deploy the cluster) on heat when it seems to be crunching the kube_minions
nested stacks. Once it's done, it still adds new stacks gradually, so it
doesn't look like it precomputed all the info in advance.

Anyone tried to scale Heat to stacks this size? We end up with a stack with:
* 1000 nested stacks (depth 2)
* 22000 resources
* 47008 events

And we already changed most of the timeout/retry values for rpc to get this
working.

This delay is already visible in clusters of 512 nodes, but 40% of the time
in 1000 nodes seems like something we could improve. Any hints on Heat
configuration optimizations for large stacks are very welcome.

Cheers,
  Ricardo

On Sun, Jun 19, 2016 at 10:59 PM, Brad Topol  wrote:

> Thanks Ricardo! This is very exciting progress!
>
> --Brad
>
>
> Brad Topol, Ph.D.
> IBM Distinguished Engineer
> OpenStack
> (919) 543-0646
> Internet: bto...@us.ibm.com
> Assistant: Kendra Witherspoon (919) 254-0680
>
>
> From: Ton Ngo/Watson/IBM@IBMUS
> To: "OpenStack Development Mailing List \(not for usage questions\)" <
> openstack-dev@lists.openstack.org>
> Date: 06/17/2016 12:10 PM
> Subject: Re: [openstack-dev] [magnum] 2 million requests / sec, 100s of
> nodes
>
> --
>
>
>
> Thanks Ricardo for sharing the data, this is really encouraging!
> Ton,
>
>
> From: Ricardo Rocha 
> To: "OpenStack Development Mailing List (not for usage questions)" <
> openstack-dev@lists.openstack.org>
> Date: 06/17/2016 08:16 AM
> Subject: [openstack-dev] [magnum] 2 million requests / sec, 100s of nodes
> --
>
>
>
> Hi.
>
> Just thought the Magnum team would be happy to hear :)
>
> We had access to some hardware the last couple days, and tried some
> tests with Magnum and Kubernetes - following an original blog post
> from the kubernetes team.
>
> Got a 200 node kubernetes bay (800 cores) reaching 2 million requests /
> sec.
>
> Check here for some details:
>
> https://openstack-in-production.blogspot.ch/2016/06/scaling-magnum-and-kubernetes-2-million.html
> 
>
> We'll try bigger in a couple weeks, also using the Rally work from
> Winnie, Ton and Spyros to see where it breaks. Already identified a
> couple issues, will add bugs or push patches for those. If you have
> ideas or suggestions for the next tests let us know.
>
> Magnum is looking pretty good!
>
> Cheers,
> Ricardo
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] os-capabilities library created

2016-08-05 Thread Jim Rollenhagen

>> On Aug 4, 2016, at 10:48, Jay Pipes  wrote:
>> 
>>> On 08/04/2016 10:31 AM, Jim Rollenhagen wrote:
>>> On Wed, Aug 03, 2016 at 07:47:37PM -0400, Jay Pipes wrote:
>>> Hi Novas and anyone interested in how to represent capabilities in a
>>> consistent fashion.
>>> 
>>> I spent an hour creating a new os-capabilities Python library this evening:
>>> 
>>> http://github.com/jaypipes/os-capabilities
>>> 
>>> Please see the README for examples of how the library works and how I'm
>>> thinking of structuring these capability strings and symbols. I intend
>>> os-capabilities to be the place where the OpenStack community catalogs and
>>> collates standardized features for hardware, devices, networks, storage,
>>> hypervisors, etc.
>>> 
>>> Let me know what you think about the structure of the library and whether
>>> you would be interested in owning additions to the library of constants in
>>> your area of expertise.
>>> 
>>> Next steps for the library include:
>>> 
>>> * Bringing in other top-level namespaces like disk: or net: and working with
>>> contributors to fill in the capability strings and symbols.
>>> * Adding constraints functionality to the library. For instance, building in
>>> information to the os-capabilities interface that would allow a set of
>>> capabilities to be cross-checked for set violations. As an example, a
>>> resource provider having DISK_GB inventory cannot have *both* the disk:ssd
>>> *and* the disk:hdd capability strings associated with it -- clearly the disk
>>> storage is either SSD or spinning disk.
>> 
>> Well, if we constrain ourselves to VMs, yes. :)
> 
> I wasn't constraining ourselves to VMs :)
> 
>> One of the issues with running ironic behind nova is that there isn't
>> any way to express that a flavor (or instance) has (or should have)
>> multiple physical disks. It would certainly be possible to boot a
>> baremetal machine that does have SSD and spinning rust.
>> 
>> I don't have a solution in mind here, just wanted to point out that we
>> need to keep more than VMs in mind when talking about capabilities. :)
> 
> Note that in the above, I am explicit that the disk:hdd and disk:ssd 
> capabilities should not be provided by a resource provider **that has an 
> inventory of DISK_GB resources** :)
> 
> Ironic baremetal nodes do not have an inventory record of DISK_GB. Instead, 
> the resource class is dynamic -- e.g. IRON_SILVER. The constraint of not 
> having disk:hdd and disk:ssd wouldn't apply in that case.

Touché. I would like to be able to express that some baremetal resource class 
can have disk:ssd and disk:hdd capabilities, but it sounds like that's covered. 
Thanks for clearing that up for me. :)

// jim
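
To make the mutual-exclusion constraint discussed above concrete, here is a
small standalone sketch; this is not the os-capabilities API, and only the
disk:ssd/disk:hdd strings come from the thread, the rest is invented:

    # Capability groups that must not appear together on a resource provider
    # that has DISK_GB inventory.
    MUTUALLY_EXCLUSIVE = [
        {"disk:ssd", "disk:hdd"},
    ]

    def violations(capabilities):
        # Return every exclusive group that the capability set violates.
        caps = set(capabilities)
        return [g for g in MUTUALLY_EXCLUSIVE if len(caps & g) > 1]

    print(violations({"disk:ssd", "disk:hdd"}))
    # -> [{'disk:ssd', 'disk:hdd'}] (set element order may vary)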

> 
> Best,
> -jay
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Glance][TC][Heat][App-Catalog][Murano][Tacker] Glare as a new Project

2016-08-05 Thread Flavio Percoco

On 04/08/16 15:46 +, Alexander Tivelkov wrote:

I am the one who started the initiative 2.5 years ago, and was always
advocating the "let's stay in Glance" approach during numerous discussions
on "where should it belong" for all these years.
Now I believe that it is time to move forward indeed. Some things remain to
be defined (first of all the differences and responsibility sharing between
Images and Artifacts APIs), but I am fully supportive of this move and
strongly believe it is a step in a right direction. Thanks Mike, Nikhil,
Flavio, Erno, Stuart, Brian and all others who helped Glare on this rough
path.



Thank you all for putting up with the Glance team's changes of priorities. I
appreciate all the effort you have put into this.

Flavio




On Thu, Aug 4, 2016 at 6:29 PM Mikhail Fedosin 
wrote:


Hi all,
after 6 months of Glare v1 API development we have decided to continue our
work in a separate project in the "openstack" namespace with its own core
team (me, Kairat Kushaev, Darja Shkhray and the original creator -
Alexander Tivelkov). We want to thank the Glance community for their support
during the incubation period, valuable advice and suggestions - this time
was really productive for us. I believe that this step will allow the Glare
project to concentrate on feature development and move forward faster.
Having the independent service also removes inconsistencies in
understanding what the Glance project is: it seems that a single project cannot
own two different APIs with partially overlapping functionality. So with
the separation of Glare into a new project, Glance may continue its work on
the OpenStack Images API, while Glare will become the reference
implementation of the new OpenStack Artifacts API.

Nevertheless, the Glare team would like to continue to collaborate with the
Glance team in a new - cross-project - format. We still have lots in
common, both in code and usage scenarios, so we are looking forward to
fruitful work with the rest of the Glance team. Those of you guys who are
interested in Glare and the future of Artifacts API are also welcome to
join the Glare team: we have a lot of really exciting tasks and will always
welcome new members.
Meanwhile, despite the fact that my focus will be on the new project, I
will continue to be part of the Glance team and for sure I'm going to
contribute to Glance, because I am interested in this project and want to
help it be successful.

We'll have the formal patches pushed to project-config early next week; the
appropriate repositories, wiki and Launchpad space will be created soon as
well. Our regular weekly IRC meeting remains intact: it is 17:30 UTC
Mondays in #openstack-meeting-alt, it will just become a Glare project
meeting instead of a Glare sub-team meeting. Please feel free to join!

Best regards,
Mikhail Fedosin

P.S. For those of you who may be curious about the project name: we'll still
be called "Glare", but since we are on our own now this acronym becomes
recursive: GLARE now stands for "GLare Artifact REpository" :)
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


--
Regards,
Alexander Tivelkov



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



--
@flaper87
Flavio Percoco


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Glance][TC][Heat][App-Catalog][Murano][Tacker] Glare as a new Project

2016-08-05 Thread Flavio Percoco

On 04/08/16 13:47 -0500, Ian Cordasco wrote:

 

-Original Message-
From: Tim Bell 
Reply: OpenStack Development Mailing List (not for usage questions) 

Date: August 4, 2016 at 13:19:02
To: OpenStack Development Mailing List (not for usage questions) 

Subject:  Re: [openstack-dev] [Glance][TC][Heat][App-Catalog][Murano][Tacker] 
Glare as a new Project



> On 04 Aug 2016, at 19:34, Erno Kuvaja wrote:
>
> On Thu, Aug 4, 2016 at 5:20 PM, Clint Byrum wrote:
>> Excerpts from Tim Bell's message of 2016-08-04 15:55:48 +:
>>>
>>> On 04 Aug 2016, at 17:27, Mikhail Fedosin >
wrote:

 Hi all,
>> after 6 months of Glare v1 API development we have decided to continue
 our work in a separate project in the "openstack" namespace with its own
 core team (me, Kairat Kushaev, Darja Shkhray and the original creator -
 Alexander Tivelkov). We want to thank Glance community for their support
 during the incubation period, valuable advice and suggestions - this time
 was really productive for us. I believe that this step will allow the
 Glare project to concentrate on feature development and move forward
 faster. Having the independent service also removes inconsistencies
 in understanding what Glance project is: it seems that a single project
 cannot own two different APIs with partially overlapping functionality. So
 with the separation of Glare into a new project, Glance may continue its
 work on the OpenStack Images API, while Glare will become the reference
 implementation of the new OpenStack Artifacts API.

>>>
>>> I would suggest looking at more than just the development process when
>>> reflecting on this choice.
>>> While it may allow more rapid development, doing it on your own will increase
>>> costs for end users and operators in areas like packaging, configuration,
>>> monitoring, quota … gaining critical mass in production for Glare will
>>> be much more difficult if you are not building on the Glance install base.
>>
>> I have to agree with Tim here. I respect that it's difficult to build on
>> top of Glance's API, rather than just start fresh. But, for operators,
>> it's more services, more APIs to audit, and more complexity. For users,
>> they'll now have two ways to upload software to their clouds, which is
>> likely to result in a large portion just ignoring Glare even when it
>> would be useful for them.
>>
>> What I'd hoped when Glare and Glance combined, was that there would be
>> a single API that could be used for any software upload and listing. Is
>> there any kind of retrospective or documentation somewhere that explains
>> why that wasn't possible?
>>
>
> I was planning to leave this branch on its own, but I have to correct
> something here. This split is not introducing a new API; it's moving the
> new Artifact API under its own project, and there was no shared API in the
> first place. Glare was to be its own service already within the Glance
> project. Also, the Artifacts API turned out to be fundamentally
> incompatible with the Images APIs v1 & v2 due to the totally different
> requirements. And even though the option was discussed in the community, I
> personally think that replicating the Images API, and carrying the cost of
> it living in two services that are fundamentally different, would have
> been a huge mistake we would have paid for over a long time. I'm not
> saying that it would have been impossible, but there is a lot of burden in
> the Images APIs that Glare really does not need to carry; we just can't
> get rid of it, and likely no-one would have been happy to see an Images
> API v3 around the time when we are working super hard to get the v1 users
> moving to v2.
>
> Packaging glance-api, glance-registry and glare-api from the glance repo
> would not change the effort too much compared to 2 repos either.
> Likely it just makes it easier when the logical split is clear from
> the beginning.
>
> As for Tim's statement, I do not see how Glare in its own service with
> its own API could ride on the Glance install base, apart from the quite
> false mental image of these two things being the same and based on the
> same code.
>

To give a concrete use case, CERN have Glance deployed for images. We are
interested in the ecosystem around Murano and are actively using Heat. We
deploy using RDO with RPM packages, Puppet-OpenStack for configuration, a set
of machines serving Glance in an HA setup across multiple data centres, and
various open source monitoring tools.

The multitude of projects, and the day-two maintenance scenarios with 11
independent projects, is a cost, and adding further to this cost for the
production deployments of OpenStack should not be ignored.

By Glare choosing to go its own way, does this mean that:

- Can the existing RPM packaging for Glance be used to deploy Glare? If new
packages need to be defined, this is an additional cost for the RDO 

Re: [openstack-dev] [Nova] Some thoughts on API microversions

2016-08-05 Thread Alex Xu
2016-08-05 0:43 GMT+08:00 Andrew Laski :

>
>
> On Thu, Aug 4, 2016, at 11:40 AM, John Garbutt wrote:
> > On 4 August 2016 at 16:28, Edward Leafe  wrote:
> > > On Aug 4, 2016, at 8:18 AM, Andrew Laski  wrote:
> > >
> > >> This gets to the point I'm trying to make. We don't guarantee old
> > >> behavior in all cases at which point users can no longer rely on
> > >> microversions to signal non breaking changes. And where we do
> guarantee
> > >> old behavior sometimes we do it artificially because the only signal
> we
> > >> have is microversions and that's the contract we're trying to adhere
> to.
> > >
> > > I've always understood microversions to be a way to prevent breaking
> an automated tool when we change either the input or output of our API. Its
> benefit was less clear for the case of adding a new API, since there is no
> chance of breaking something that would never call it. We also accept that
> a bug fix doesn't require a microversion bump, as users should *never* be
> expecting a 5xx response, so not only does fixing that not need a bump, but
> such fixes can be backported to affect all microversions.
> > >
> > > The idea that by specifying a distinct microversion would somehow
> guarantee an immutable behavior, though, is simply not the case. We
> discussed this at length at the midcycle regarding the dropping of the
> nova-network code; once that's dropped, there won't be any way to get that
> behavior no matter what microversion you specify. It's gone. We signal this
> with deprecation notices, release notes, etc., and it's up to individuals
> to move away from using that behavior during this deprecation period. A new
> microversion will never help anyone who doesn't follow these signals.
> > >
> > > In the case that triggered this thread [0], the change was completely
> on the server side of things; no change to either the request or response
> of the API. It simply allowed a failed resize to be recovered more easily.
> That's a behavior change, not an API change, and frankly, I can't imagine
> anyone who would ever *want* the old behavior of leaving an instance in an
> error state. To me, that's not very different than fixing a 5xx response,
> as it is correcting an error on the server side.
> > >
> >
> > The problem I was thinking about is, how do you know if a cloud
> > supports that new behaviour? For me, a microversion does help to
> > advertise that. It's probably a good example of where it's not important
> > enough to add a new capability to tell people that's possible.
>
> I do see this as a capability though. I've been thinking of capabilities
> as an answer to the question of "what can I do with this resource?" So a
> capability query to an instance that errored during resize might
> currently just return ['delete', 'call admin(joking)'] and assuming we
> relax the restriction it would return ['delete', 'revert_resize'].
>

Ah, I see now. I was stuck on thinking that the capability discovery API is
for the features supported by the cloud. The higher-level view resolves all
my questions: the capability discovery API is about what I can do right now.
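
Just to illustrate how I read the higher-level idea now, here is a rough
sketch of a per-instance capability query. Everything below (the helper, the
status values, the flag) is hypothetical and not part of today's Nova API:

def instance_capabilities(vm_status, resize_recovery_allowed):
    """What can the user do with this instance right now?"""
    caps = {"delete"}  # deleting is (almost) always possible
    if vm_status == "ACTIVE":
        caps |= {"resize", "stop", "reboot"}
    elif vm_status == "ERROR" and resize_recovery_allowed:
        # The relaxed behaviour discussed above: a failed resize can be
        # reverted instead of leaving the user with only "delete".
        caps |= {"revert_resize"}
    return sorted(caps)


print(instance_capabilities("ERROR", resize_recovery_allowed=False))
# ['delete']
print(instance_capabilities("ERROR", resize_recovery_allowed=True))
# ['delete', 'revert_resize']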


>
> >
> > That triggers the follow-up question of whether that is important in
> > this case; could you just make the call and see if it works?
>
> Sure. Until we have more discoverability in the API this is the reality
> of what users need to do due to things like policy checks.
>
> What I'm aiming for is discoverability that works well for users. The
> current situation is that a new microversion means "go check the docs or
> release notes"; where I'd like to be is that a new microversion means
> "check the provided API schemas", and a new/removed capability expresses a
> change in behavior. And if there are other types of changes users should
> be aware of, we should think about the right mechanism for exposing them.
> All I'm saying is that all we have is a hammer; is everything we're using
> it on really a nail? :)
>
> >
> > Thanks,
> > johnthetubaguy
> >
> > 
> __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe:
> > openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [vitrage] entity graph layout

2016-08-05 Thread Yujun Zhang
Forgot to attach a screenshot; see below.
[image: Screen Shot 2016-08-05 at 2.28.56 PM.png]
On Fri, Aug 5, 2016 at 2:32 PM Yujun Zhang  wrote:

> Hi, all,
>
> I'm building a demo of vitrage. The dynamic entity graph looks
> interesting.
>
> But when more entities are added, things become crowded and the links
> cross over each other. Dragging the items will not help much.
>
> Is it possible to adjust the layout so I can get a more regular/stable
> tree view of the entities?
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [vitrage] entity graph layout

2016-08-05 Thread Yujun Zhang
Hi, all,

I'm building a demo of vitrage. The dynamic entity graph looks interesting.

But when more entities are added, things become crowded and the links
cross over each other. Dragging the items will not help much.

Is it possible to adjust the layout so I can get a more regular/stable tree
view of the entities?
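
For reference, the kind of "regular/stable tree view" I have in mind is
something like this offline sketch. It assumes the entity graph can be
exported to a node-link style JSON file; the file name, its format and the
use of networkx/Graphviz below are my own assumptions, not part of Vitrage:

import json

import matplotlib.pyplot as plt
import networkx as nx
from networkx.readwrite import json_graph

# Assumed input: a JSON file with "nodes" and "links" lists describing the
# entity graph (node-link format).
with open("entity_graph.json") as f:
    data = json.load(f)

graph = json_graph.node_link_graph(data)

# Graphviz's hierarchical "dot" engine gives a layered, stable layout instead
# of the force-directed view (requires pygraphviz to be installed).
pos = nx.nx_agraph.graphviz_layout(graph, prog="dot")

nx.draw(graph, pos, with_labels=True, node_size=300, font_size=8)
plt.savefig("entity_graph_tree.png", dpi=150)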
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Glance][TC][Heat][App-Catalog][Murano][Tacker] Glare as a new Project

2016-08-05 Thread Tim Bell

> On 05 Aug 2016, at 01:02, Jay Pipes  wrote:
> 
> On 08/04/2016 06:40 PM, Clint Byrum wrote:
>> Excerpts from Jay Pipes's message of 2016-08-04 18:14:46 -0400:
>>> On 08/04/2016 05:30 PM, Clint Byrum wrote:
 Excerpts from Fox, Kevin M's message of 2016-08-04 19:20:43 +:
> I disagree. I see glare as a superset of the needs of the image api and 
> one feature I need that's image related was specifically shot down as "the 
> artefact api will solve that".
> 
> You have all the same needs to version/catalog/store images. They are not 
> more special than versioned/cataloged/stored heat templates, murano 
> apps, tuskar workflows, etc. I've heard, multiple times, members of the 
> glance team saying that once glare is fully mature, they could stub out 
> the v1/v2 glance apis on top of glare. What is the benefit to splitting 
> if the end goal is to recombine/make one project irrelevant?
> 
> This feels to me like another case of an established, original tent 
> project not wanting to deal with something that needs to be dealt with, 
> and instead pushing it out to another project with the hope that it just 
> goes away. With all the traction non original tent projects have gotten 
> since the big tent was established, that might be an accurate conclusion, 
> but really bad for users/operators of OpenStack.
> 
> I really would like glance/glare to reconsider this stance. OpenStack 
> continuously budding off projects is not a good pattern.
> 
 
 So very this.
>>> 
>>> Honestly, operators need to move past the "oh, not another service to
>>> install/configure" thing.
>>> 
>>> With the whole "microservice the world" movement, that ship has long
>>> since sailed, and frankly, the cost of adding another microservice into
>>> the deployment at this point is tiny -- it should be nothing more than a
>>> few lines in a Puppet manifest, Chef module, Ansible playbook, or Salt
>>> state file.
>>> 
>>> If you're doing deployment right, adding new services to the
>>> microservice architecture that OpenStack projects are being pushed
>>> towards should not be an issue.
>>> 
>>> I find it odd that certain folks are pushing hard for the
>>> shared-nothing, microservice-it-all software architecture and yet
>>> support this mentality that adding another couple (dozen if need be)
>>> lines of configuration data to a deployment script is beyond the pale to
>>> ask of operators.
>>> 
>> 
>> Agreed, deployment isn't that big of a deal. I actually thought Kevin's
>> point was that the lack of focus was the problem. I think the point in
>> bringing up deployment is simply that it isn't free, not that it's the
>> reason to combine the two.
> 
> My above statement was more directed to Kevin and Tim, both of whom indicated 
> that adding another service to the deployment was a major problem.
> 

The difficulty I have with additional projects is that there are often major
parts missing in order to deploy in production. Packaging, configuration
management manifests, monitoring, etc. are not part of the standard
deliverables but are left to other teams. Having had to fill in these gaps
for 4 OpenStack projects so far already, I know they are not trivial to do,
and I feel the effort required for this was not considered as part of the
split decision.

 It's clear there's been a disconnect in expectations between the outside
 and inside of development.
 
 The hope from the outside was that we'd end up with a user friendly
 frontend API to artifacts, that included more capability for cataloging
 images.  It sounds like the two teams never actually shared that vision
 and remained two teams, instead of combining into one under a shared
 vision.
 
 Thanks for all your hard work, Glance and Glare teams. I don't think
 any of us can push a vision on you. But, as Kevin says above: consider
 addressing the lack of vision and cooperation head on, rather than
 turning your backs on each-other. The users will sing your praises if
 you can get it done.
>>> 
>>> It's been three years, two pre-big-tent TC graduation reviews (one for a
>>> split out murano app catalog, one for the combined project team being
>>> all things artifact), and over that three years, the original Glance
>>> project has at times crawled to a near total stop from a contribution
>>> perspective and not indicated much desire to incorporate the generic
>>> artifacts API or code. Time for this cooperation came and went with
>>> ample opportunities.
>>> 
>>> The Glare project is moving on.
>> 
>> The point is that this should be reconsidered, and that these internal
>> problems, now surfaced, seem surmountable if there's actually a reason
>> to get past them. Since it seems that, from the start, Glare and Glance never
>> actually intended to converge on a generic artifacts API, but rather
>> to simply tolerate one