Re: [openstack-dev] [tc] project-navigator-data repo live - two choices need input

2017-04-11 Thread Monty Taylor
Sweet - so - each repo should have release and component (if I 
understand those right) - is there additional info that we should include?


Like - in the multi-file approach, it's a set of directories with files 
of the form:


release/component.json -> contains versions[]

And the single file, it's:

release/{release}.json -> contains services[] each with versions[]
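
For illustration, a minimal sketch of what each layout might contain (the
release/service names and values here are hypothetical; the two reviews
below define the actual schema):

  multi-file: ocata/nova.json
    {"versions": ["2.1", "2.38"]}

  single-file: ocata.json
    {"services": [{"service": "nova", "versions": ["2.1", "2.38"]}]}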

On 04/11/2017 07:40 AM, Sebastian Marcet wrote:

Monty thx so much
basically we have the following structure

Release -> Component -> Version

so i think that we could consume this pretty easily, the only caveat is that
we still need to add, from our side, Release and Component data, but i guess
that is doable

thx u so much !!! i will take a look at both formats

cheers

2017-04-11 8:44 GMT-03:00 Monty Taylor:

Hey all,

We've got the project-navigator-data repo created and there are two
proposals up for what the content should look like.

Could TC folks please add openstack/project-navigator-data to your
watch lists, and go review:

https://review.openstack.org/#/c/454688

https://review.openstack.org/#/c/454691


so we can come to an agreement on which version we prefer? Maybe
just +2 the one you prefer (or both if you don't care) and only -1
if you specifically dislike one over the other?

Thanks!
Monty






--
Sebastian Marcet
https://ar.linkedin.com/in/smarcet
SKYPE: sebastian.marcet








Re: [openstack-dev] [Openstack] Ops Meetups Team Meeting Reminder

2017-04-11 Thread xico loco
GREAT SUCCESS !!!

2017-04-11 10:51 GMT-03:00 Melvin Hillsman:

> Hey everyone,
>
> Just a reminder that the meetups team will be meeting today 4/11 at 1500UTC
>
> Agenda: https://etherpad.openstack.org/p/ops-meetups-team
>
>
> https://www.worldtimebuddy.com/?qm=1=100,1816670,
> 2147714,4699066=100=2017-4-11=15-16
> https://www.worldtimebuddy.com/?qm=1=100,2643743,
> 5391959,2950159=100=2017-4-11=15-16
>
> --
> Kind regards,
>
> Melvin Hillsman
> Ops Technical Lead
> OpenStack Innovation Center
>
> mrhills...@gmail.com
> phone: (210) 312-1267
> mobile: (210) 413-1659
> http://osic.org
>
> Learner | Ideation | Belief | Responsibility | Command
>


Re: [openstack-dev] Ocata - Ubuntu 16.04 - OVN does not work with DPDK

2017-04-11 Thread Russell Bryant
On Mon, Apr 10, 2017 at 4:49 PM, Martinx - ジェームズ wrote:

>
>
>> On 8 April 2017 at 00:37, Martinx - ジェームズ wrote:
>
>> Guys,
>>
>>  I managed to deploy Ocata on Ubuntu 16.04 with OVN for the first time
>> ever, today!
>>
>>  It looks very, very good... OVN L3 Router is working, OVN DHCP
>> working... bridge mappings "br-ex" on each compute node... All good!
>>
>>  Then I said: time for DPDK!
>>
>>  I managed to use OVS with DPDK easily on top of Ubuntu (plus Ocata Cloud
>> Archive) with plain KVM, no OpenStack, so I have experience with how to
>> set up DPDK, OVS+DPDK, Libvirt vhostuser, KVM, etc...
>>
>>  After configuring DPDK on a compute node, I tried the following
>> instructions:
>>
>>  https://docs.openstack.org/developer/networking-ovn/dpdk.html
>>
>>  It looks quite simple!
>>
>>  To make things even simpler, I have just 1 controller and 1 compute
>> node to begin with. Before enabling DPDK at the compute node and changing
>> the "br-int" datapath, I deleted all OVN Routers and all Neutron Networks
>> and Subnets that were previously working with regular OVS (no DPDK).
>>
>>  Then, after enabling DPDK and updating the "br-int" and the "br-ex"
>> interfaces, right after connecting the "OVN L3 Router" into the "ext-net /
>> br-ex" network, the following errors appeared on OpenvSwitch logs of the
>> related compute node (OpenFlow error):
>>
>>
>>  * After connecting OVN L3 Router against the "ext-net / br-ex" Flat /
>> VLAN Network:
>>
>>  ovs-vswitchd.log:
>>
>>  http://paste.openstack.org/show/605800/
>>
>>  ovn-controller.log:
>>
>>  http://paste.openstack.org/show/605801/
>>
>>
>>  Also, after connecting the OVN L3 Router into the local (GENEVE)
>> network, very similar error messages appeared on OpenvSwitch logs...
>>
>>
>>  * After connecting OVN L3 Router on a "local" GENEVE Network:
>>
>>  ovs-vswitchd.log:
>>
>>  http://paste.openstack.org/show/605804/
>>
>>  ovn-controller.log:
>>
>>  http://paste.openstack.org/show/605805/
>>
>>
>>  * Output of "ovs-vsctl show" at the single compute node, after plugging
>> the OVN L3 Router against the two networks (external / GENEVE):
>>
>>  http://paste.openstack.org/show/605806/
>>
>>
>>  Then, I tried to launch an Instance anyway and, to my surprise, the
>> Instance was created! Using a vhostuser OVS+DPDK socket!
>>
>>  Also, the Instance got its IP! Which is great!
>>
>>  However, the Instance cannot ping its OVN L3 Router (its default
>> gateway), with or without any kind of security groups applied, no deal...
>> :-(
>>
>>  BTW, the Instance did not receive the ARP entry of the OVN L3 Router, I
>> mean, for the instance, the gateway IP on "arp -an" shows "".
>>
>>
>>  * The ovs-vswitchd.log after launching an Instance on top of
>> OVN/OVS+DPDK:
>>
>>  http://paste.openstack.org/show/605807/
>>
>>  * The output of "ovs-vsctl show" after launching the above instance:
>>
>>  http://paste.openstack.org/show/605809/ - Line 33 is the dpdkvhostuser
>>
>>
>>  Just to give it another try, I started a second Instance, to see if the
>> Instances can ping each other... That also did not work; the Instances
>> cannot ping each other.
>>
>>
>>  So, from what I'm seeing, OVN on top of DPDK does not work.
>>
>>  Any tips?
>>
>>
>>  NOTE:
>>
>>  I tried to enable "hugepages" on my OpenStack flavor, just in case...
>> Then I found another bug: the Instance doesn't even boot:
>>
>>  https://bugs.launchpad.net/cloud-archive/+bug/1680956
>>
>>
>>  For now, I'll deploy Ocata with regular OVN, no DPDK, but my goal with
>> this cloud is high-performance networking, so I need DPDK, and I also
>> need GENEVE and Provider Networks, everything on top of DPDK.
>>
>> ---
>>  After researching more about this "high perf networks", I found this:
>>
>>  * DPDK-like performance in Linux kernel with XDP !
>>
>>  http://openvswitch.org/support/ovscon2016/7/0930-pettit.pdf
>>
>>  https://www.iovisor.org/technology/xdp
>>  https://www.iovisor.org/technology/ebpf
>>
>>  https://qmonnet.github.io/whirl-offload/2016/09/01/dive-into-bpf/
>>
>>  But I have no idea how to use OpenvSwitch with this thing; however,
>> if I can achieve DPDK-like performance, without DPDK, using just
>> plain Linux, I'm 100% sure that I'll prefer it!
>>
>>  I'm okay with giving OpenvSwitch + DPDK another try, even knowing that
>> it [OVS] STILL has serious problems
>> (https://bugs.launchpad.net/ubuntu/+source/openvswitch/+bug/1577088)...
>> ---
>>
>>  OpenStack on Ubuntu rocks!   :-D
>>
>> Thanks!
>> Thiago
>>
>
> I just realized how cool IO Visor is!
>
> Sorry about mixing subjects, let's keep this one clear for OVN on top of
> DPDK.
>
> I found an open bug on Red Hat's Bugzilla and updated it with the info from
> this e-mail:
>
> https://bugzilla.redhat.com/show_bug.cgi?id=1410565
>
> It looks like the OVN docs say that it is supported on top of any
> OpenvSwitch datapath, but that is not the case... Right?
>
> I would be happy to be able 

Re: [openstack-dev] [tc] project-navigator-data repo live - two choices need input

2017-04-11 Thread Sebastian Marcet
i think that with the format proposed on
https://review.openstack.org/#/c/454691/1
we would be ok,
giving my +1 there

cheers


2017-04-11 10:45 GMT-03:00 Monty Taylor:

> Sweet - so - each repo should have release and component (if I understand
> those right) - is there additional info that we should include?
>
> Like - in the multi-file approach, it's a set of directories with files of
> the form:
>
> release/component.json -> contains versions[]
>
> And the single file, it's:
>
> release/{release}.json -> contains services[] each with versions[]
>
> On 04/11/2017 07:40 AM, Sebastian Marcet wrote:
>
>> Monty thx so much
>> basically we have the following structure
>>
>> Release -> Component -> Version
>>
>> so i think that we could consume this pretty easily, the only caveat is that
>> we still need to add, from our side, Release and Component data, but i guess
>> that is doable
>>
>> thx u so much !!! i will take a look at both formats
>>
>> cheers
>>
>> 2017-04-11 8:44 GMT-03:00 Monty Taylor:
>>
>> Hey all,
>>
>> We've got the project-navigator-data repo created and there are two
>> proposals up for what the content should look like.
>>
>> Could TC folks please add openstack/project-navigator-data to your
>> watch lists, and go review:
>>
>> https://review.openstack.org/#/c/454688
>> https://review.openstack.org/#/c/454691
>>
>> so we can come to an agreement on which version we prefer? Maybe
>> just +2 the one you prefer (or both if you don't care) and only -1
>> if you specifically dislike one over the other?
>>
>> Thanks!
>> Monty
>>
>>
>>
>>
>>
>> --
>> Sebastian Marcet
>> https://ar.linkedin.com/in/smarcet
>> SKYPE: sebastian.marcet
>>
>>



-- 
Sebastian Marcet
https://ar.linkedin.com/in/smarcet
SKYPE: sebastian.marcet


Re: [openstack-dev] [horizon] Unknown provider resource-type-registry.service test error

2017-04-11 Thread Beth Elwell
Hi!

Just as a follow-up to my email yesterday, tests are now passing. Many thanks
to Rob for helping me out with this on IRC!

If anyone else comes across this error message while developing panels in
horizon: my solution was that I wasn't making the correct modules available
with just beforeEach(module('horizon.app.core.network_qos'));. Weirdly, even
adding beforeEach(module('horizon.app.core.openstack-service-api')); didn't
fix the errors, yet injecting the broader scope of
beforeEach(module('horizon.app.core')); works in this case.
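
For reference, a minimal spec skeleton showing the fix in context (the
service name and assertion below are illustrative, not the exact code from
the patch):

  describe('network qos service', function () {
    var service;

    // injecting the broader horizon.app.core module also registers the
    // resource-type-registry and openstack-service-api dependencies
    beforeEach(module('horizon.app.core'));

    beforeEach(inject(function ($injector) {
      // hypothetical service name, for illustration only
      service = $injector.get('horizon.app.core.network_qos.qosService');
    }));

    it('should be defined', function () {
      expect(service).toBeDefined();
    });
  });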

Thanks again,
Beth
IRC: betherly

> On 10 Apr 2017, at 17:45, Beth Elwell wrote:
> 
> Hi all,
> 
> Thanks very much in advance for any help you are able to offer.
> 
> I have been working on developing the QoS Policies panel for horizon and have
> the panel working; however, I am struggling with the qos.service.spec.js file.
> Before going into my findings and issues so far, the related patch is
> located at https://review.openstack.org/#/c/418828/
>
> 
> The error log is as follows: 
> 
> http://paste.openstack.org/show/606058/ 
> 
> 
> From that error log it looks to me
> like the resource-type-registry for neutron is not being defined, and the
> error appears with the link to the network_qos module as per line 19:
> https://review.openstack.org/#/c/418828/28/openstack_dashboard/static/app/core/network_qos/qos.service.spec.js
>
> 
> However the resource-type-registry is definitely defined and I cannot see why 
> this should be throwing errors.
> 
> It could well be I have just looked at this code for too long and my brain is 
> skipping over something, but any help anyone can give on this would be 
> massively appreciated.
> 
> Many thanks,
> Beth
> IRC: betherly



Re: [openstack-dev] [tripleo][manila] Ganesha deployment

2017-04-11 Thread Jan Provaznik
On Mon, Apr 10, 2017 at 6:55 PM, Ben Nemec wrote:
> I'm not really an expert on composable roles so I'll leave that to someone
> else, but see my thoughts inline on the networking aspect.
>
> On 04/10/2017 03:22 AM, Jan Provaznik wrote:
>>
>> 2) define a new VIP (for IP failover) and 2 networks for NfsStorage role:
>> a) a frontend network between users and ganesha servers (e.g.
>> NfsNetwork name), used by tenants to mount nfs shares - this network
>> should be accessible from user instances.
>
>
> Adding a new network is non-trivial today, so I think we want to avoid that
> if possible.  Is there a reason the Storage network couldn't be used for
> this?  That is already present on compute nodes by default so it would be
> available to user instances, and it seems like the intended use of the
> Storage network matches this use case.  In a Ceph deployment today that's
> the network which exposes data to user instances.
>

Access to the ceph public network (StorageNetwork) is a big privilege
(from discussing this with the ceph team), bigger than accessing only
the ganesha nfs servers, so StorageNetwork should be exposed only when
really necessary.

>> b) a backend network between ganesha servers and the ceph cluster -
>> this could just map to the existing StorageNetwork I think.
>
>
> This actually sounds like a better fit for StorageMgmt to me.  It's
> non-user-facing storage communication, which is what StorageMgmt is used for
> in the vanilla Ceph case.
>

If StorageMgmt is used for replication and internal ceph node
communication, I wonder if that access is not too permissive? Ganesha
servers should need access to the ceph public network only.

>> What i'm not sure at all is how network definition should look like.
>> There are following Overcloud deployment options:
>> 1) no network isolation is used - then both direct ceph mount and
>> mount through ganesha should work because StorageNetwork and
>> NfsNetwork are accessible from user instances (there is no restriction
>> in accessing other networks it seems).
>
>
> There are no other networks without network-isolation.  Everything runs over
> the provisioning network.  The network-isolation templates should mostly
> handle this for you though.
>
>> 2) network isolation is used:
>> a) ceph is used directly - user instances need access to the ceph
>> public network (which is StorageNetwork in Overcloud) - how should I
>> enable access to this network? I filled a bug for this deployment
>> variant here [3]
>
>
> So does this mean that the current manila implementation is completely
> broken in network-isolation?  If so, that's rather concerning.
>

This affects deployments of manila with internal (=deployed by
TripleO) ceph backend.

> If I'm understanding correctly, it sounds like what needs to happen is to
> make the Storage network routable so it's available from user instances.
> That's not actually something TripleO can do, it's an underlying
> infrastructure thing.  I'm not sure what the security implications of it are
> either.
>
> Well, on second thought it might be possible to make the Storage network
> only routable within overcloud Neutron by adding a bridge mapping for the
> Storage network and having the admin configure a shared Neutron network for
> it.  That would be somewhat more secure since it wouldn't require the
> Storage network to be routable by the world.  I also think this would work
> today in TripleO with no changes.
>

This sounds interesting. I was searching for more info on how the bridge
mapping should be done in this case and what the specific setup steps
should look like, but the process is still not clear to me; I would be
grateful for more details/guidance on this.

> Alternatively I guess you could use ServiceNetMap to move the public Ceph
> traffic to the public network, which has to be routable.  That seems like it
> might have a detrimental effect on the public network's capacity, but it
> might be okay in some instances.
>

I would rather avoid this option (both because of network traffic and
because of exposing ceph public network to everybody).

>> b) ceph is used through ganesha - user instances need access to
>> ganesha servers (NfsNetwork in previous paragraph) - how should I
>> enable access to this network?
>
>
> I think the answer here will be the same as for vanilla Ceph.  You need to
> make the network routable to instances, and you'd have the same options as I
> discussed above.
>

Yes, it seems that using the mapping to a provider network would solve
the existing problem when using ceph directly, and when using ganesha
servers in future (it would just be a matter of which network is
exposed).

>>
>> The ultimate (and future) plan is to deploy ganesha-nfs in VMs (which
>> will run in the Overcloud, probably managed by the manila ceph driver);
>> in this deployment mode a user should have access to ganesha servers and
>> only ganesha server VMs should have access to the ceph public network.
>> Ganesha VMs 

Re: [openstack-dev] [ironic][nova] Suggestion required on pci_device inventory addition to ironic and its subsequent changes in nova

2017-04-11 Thread Jay Faulkner

> On Apr 11, 2017, at 12:54 AM, Nisha Agarwal wrote:
> 
> Hi John,
> 
> >With ironic I thought everything is "passed through" by default,
> >because there is no virtualization in the way. (I am possibly
> >incorrectly assuming no BIOS tricks to turn off or re-assign PCI
> >devices dynamically.)
> 
> Yes with ironic everything is passed through by default. 
> 
> >So I am assuming this is purely a scheduling concern. If so, why are
> >the new custom resource classes not good enough? "ironic_blue" could
> >mean two GPUs and two 10Gb nics, "ironic_yellow" could mean one GPU
> >and one 1Gb nic, etc.
> >Or is there something else that needs addressing here? Trying to
> >describe what you get with each flavor to end users?
> Yes this is purely from a scheduling perspective.
> Currently how ironic works is we discover server attributes and populate them
> into the node object. These attributes are then used for further scheduling of
> the node from the nova scheduler using the ComputeCapabilities filter. So this
> is automated on the ironic side: we do inspection of the node
> properties/attributes, the user creates the flavor of their choice, and the
> node which meets the user's need is scheduled for ironic deploy.
> With the resource class name in place in ironic, we ask the user to do a
> manual step, i.e. create a resource class name based on the hardware
> attributes, and this needs to be done on a per-node basis. For this the user
> needs to know the server hardware properties in advance before assigning the
> resource class name to the node(s), and then assign it manually to the node.
> Broadly speaking, if we want to support scheduling based on quantity for
> ironic nodes, there is no way we can do it through the current resource class
> structure (actually just a tag) in ironic. A user may want to schedule ironic
> nodes on different resources, and each resource should be a different
> resource class (IMO).
> 
> >Are you needing to aggregate similar hardware in a different way to the above
> >resource class approach?
> i guess no, but the above resource class approach takes away the automation on
> the ironic side and the whole purpose of inspection is defeated.
> 

I strongly challenge the assertion made here that inspection is only useful in 
scheduling contexts. There are users who simply want to know about their 
hardware, and read the results as posted to swift. Inspection also handles 
discovery of new nodes when given basic information about them.

-
Jay Faulkner
OSIC

> Regards
> Nisha
> 
> 
> On Mon, Apr 10, 2017 at 4:29 PM, John Garbutt wrote:
> On 10 April 2017 at 11:31,   wrote:
> > On Mon, 2017-04-10 at 11:50 +0530, Nisha Agarwal wrote:
> >> Hi team,
> >>
> >> Please could you pour in your suggestions on the mail?
> >>
> I raised a blueprint in Nova for this
> https://blueprints.launchpad.net/nova/+spec/pci-passthorugh-for-ironic
> and two RFEs at ironic side https://bugs.launchpad.net/ironic/+bug/1680780
> and https://bugs.launchpad.net/ironic/+bug/1681320 for the discussion topic.
> >
> > If I understand you correctly, you want to be able to filter ironic
> > hosts by available PCI device, correct? Barring any possibility that
> > resource providers could do this for you yet, extending the nova ironic
> > driver to use the PCI passthrough filter sounds like the way to go.
> 
> With ironic I thought everything is "passed through" by default,
> because there is no virtualization in the way. (I am possibly
> incorrectly assuming no BIOS tricks to turn off or re-assign PCI
> devices dynamically.)
> 
> So I am assuming this is purely a scheduling concern. If so, why are
> the new custom resource classes not good enough? "ironic_blue" could
> mean two GPUs and two 10Gb nics, "ironic_yellow" could mean one GPU
> and one 1Gb nic, etc.
> 
> Or is there something else that needs addressing here? Trying to
> describe what you get with each flavor to end users? Are you needing
> to aggregate similar hardware in a different way to the above
> resource class approach?
> 
> Thanks,
> johnthetubaguy
> 
> 
> 
> 
> -- 
> The Secret Of Success is learning how to use pain and pleasure, instead
> of having pain and pleasure use you. If You do that you are in control
> of your life. If you don't life controls you.
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



Re: [openstack-dev] [neutron][sfc][fwaas][taas][horizon] where would we like to have horizon dashboard for neutron stadium projects?

2017-04-11 Thread Sridar Kandaswamy (skandasw)
Hi All:

From an FWaaS perspective - we also think (a) would be ideal.

Thanks

Sridar

From: Kevin Benton
Reply-To: OpenStack List
Date: Monday, April 10, 2017 at 4:20 PM
To: OpenStack List
Subject: Re: [openstack-dev] [neutron][sfc][fwaas][taas][horizon] where would 
we like to have horizon dashboard for neutron stadium projects?

I think 'a' is probably the way to go since we can mainly rely on existing 
horizon guides for creating new dashboard repos.

On Apr 10, 2017 08:11, "Akihiro Motoki" wrote:
Hi neutrinos (and horizoners),

As the title says, where would we like to have horizon dashboard for
neutron stadium projects?
There are several projects under neutron stadium and they are trying
to add dashboard support.

I would like to raise this topic again. No dashboard support has landed since
then. Also the Horizon team would like to move the in-tree neutron stadium
dashboards (VPNaaS and FWaaS v1 dashboard) out of the horizon repo.

Possible approaches


Several possible options in my mind:
(a) dashboard repository per project
(b) dashboard code in individual project
(c) a single dashboard repository for all neutron stadium projects

Which one sounds better?

Pros and Cons


(a) dashboard repository per project
  example, networking-sfc-dashboard repository for networking-sfc
  Pros
   - Can use existing horizon related project convention and knowledge
 (directory structure, testing, translation support)
   - Not related to the neutron stadium inclusion. Each project can
provide its dashboard
 support regardless of neutron stadium inclusion.
 Cons
   - An additional repository is needed.

(b) dashboard code in individual project
  example, dashboard module for networking-sfc
  Pros:
   - No additional repository
   - Not related to the neutron stadium inclusion. Each project can
provide its dashboard
 support regardless of neutron stadium inclusion.
 Cons:
   - Requires extra efforts to support neutron and horizon codes in a
single repository
 for testing and translation supports. Each project needs to
explore the way.

(c) a single dashboard repository for all neutron stadium projects
   (something like neutron-advanced-dashboard)
  Pros:
- No additional repository per project
  Each project does not need a basic setup for dashboard, which
possibly makes things simple.
  Cons:
- Inclusion criteria depend on neutron stadium inclusion/exclusion
  (a similar discussion happened for the neutronclient OSC plugin);
  a project before neutron stadium inclusion may need another implementation.


My vote is (a) or (c) (to avoid mixing neutron and dashboard codes in a repo).

Note that dashboard support for features in the main neutron repository
is implemented in the horizon repository, as we discussed several months ago.
As an example, trunk support is being developed in the horizon repo.

Thanks,
Akihiro



[openstack-dev] Ops Meetups Team Meeting Reminder

2017-04-11 Thread Melvin Hillsman
Hey everyone,

Just a reminder that the meetups team will be meeting today 4/11 at 1500UTC

Agenda: https://etherpad.openstack.org/p/ops-meetups-team


https://www.worldtimebuddy.com/?qm=1=100,1816670,2147714,4699066=100=2017-4-11=15-16
https://www.worldtimebuddy.com/?qm=1=100,2643743,5391959,2950159=100=2017-4-11=15-16

-- 
Kind regards,

Melvin Hillsman
Ops Technical Lead
OpenStack Innovation Center

mrhills...@gmail.com
phone: (210) 312-1267
mobile: (210) 413-1659
http://osic.org

Learner | Ideation | Belief | Responsibility | Command


Re: [openstack-dev] [neutron][sfc][fwaas][taas][horizon] where would we like to have horizon dashboard for neutron stadium projects?

2017-04-11 Thread Henry Fourie
Akihiro,
Option (a) would have my vote.
 - Louis

-Original Message-
From: Akihiro Motoki [mailto:amot...@gmail.com] 
Sent: Monday, April 10, 2017 8:09 AM
To: OpenStack Development Mailing List
Subject: [openstack-dev] [neutron][sfc][fwaas][taas][horizon] where would we 
like to have horizon dashboard for neutron stadium projects?

Hi neutrinos (and horizoners),

As the title says, where would we like to have horizon dashboard for neutron 
stadium projects?
There are several projects under neutron stadium and they are trying to add 
dashboard support.

I would like to raise this topic again. No dashboard support has landed since
then. Also the Horizon team would like to move the in-tree neutron stadium
dashboards (VPNaaS and FWaaS v1 dashboard) out of the horizon repo.

Possible approaches


Several possible options in my mind:
(a) dashboard repository per project
(b) dashboard code in individual project
(c) a single dashboard repository for all neutron stadium projects

Which one sounds better?

Pros and Cons


(a) dashboard repository per project
  example, networking-sfc-dashboard repository for networking-sfc
  Pros
   - Can use existing horizon related project convention and knowledge
 (directory structure, testing, translation support)
   - Not related to the neutron stadium inclusion. Each project can provide its 
dashboard
 support regardless of neutron stadium inclusion.
 Cons
   - An additional repository is needed.

(b) dashboard code in individual project
  example, dashboard module for networking-sfc
  Pros:
   - No additional repository
   - Not related to the neutron stadium inclusion. Each project can provide its 
dashboard
 support regardless of neutron stadium inclusion.
 Cons:
   - Requires extra efforts to support neutron and horizon codes in a single 
repository
 for testing and translation supports. Each project needs to explore the 
way.

(c) a single dashboard repository for all neutron stadium projects
   (something like neutron-advanced-dashboard)
  Pros:
- No additional repository per project
  Each project does not need a basic setup for dashboard, which possibly makes
things simple.
  Cons:
- Inclusion criteria depend on neutron stadium inclusion/exclusion
  (a similar discussion happened for the neutronclient OSC plugin);
  a project before neutron stadium inclusion may need another implementation.


My vote is (a) or (c) (to avoid mixing neutron and dashboard codes in a repo).

Note that dashboard support for features in the main neutron repository is
implemented in the horizon repository, as we discussed several months ago.
As an example, trunk support is being developed in the horizon repo.

Thanks,
Akihiro



Re: [openstack-dev] [ironic][nova] Suggestion required on pci_device inventory addition to ironic and its subsequent changes in nova

2017-04-11 Thread Dmitry Tantsur

On 04/11/2017 05:28 PM, Jay Faulkner wrote:



On Apr 11, 2017, at 12:54 AM, Nisha Agarwal wrote:

Hi John,


With ironic I thought everything is "passed through" by default,
because there is no virtualization in the way. (I am possibly
incorrectly assuming no BIOS tricks to turn off or re-assign PCI
devices dynamically.)


Yes with ironic everything is passed through by default.


So I am assuming this is purely a scheduling concern. If so, why are
the new custom resource classes not good enough? "ironic_blue" could
mean two GPUs and two 10Gb nics, "ironic_yellow" could mean one GPU
and one 1Gb nic, etc.
Or is there something else that needs addressing here? Trying to
describe what you get with each flavor to end users?

Yes this is purely from a scheduling perspective.
Currently how ironic works is we discover server attributes and populate them
into the node object. These attributes are then used for further scheduling of
the node from the nova scheduler using the ComputeCapabilities filter. So this
is automated on the ironic side: we do inspection of the node
properties/attributes, the user creates the flavor of their choice, and the
node which meets the user's need is scheduled for ironic deploy.
With the resource class name in place in ironic, we ask the user to do a manual
step, i.e. create a resource class name based on the hardware attributes, and
this needs to be done on a per-node basis. For this the user needs to know the
server hardware properties in advance before assigning the resource class name
to the node(s), and then assign the resource class name manually to the node.
Broadly speaking, if we want to support scheduling based on quantity for ironic
nodes, there is no way we can do it through the current resource class
structure (actually just a tag) in ironic. A user may want to schedule ironic
nodes on different resources, and each resource should be a different resource
class (IMO).


Are you needing to aggregate similar hardware in a different way to the above
resource class approach?

i guess no, but the above resource class approach takes away the automation on
the ironic side and the whole purpose of inspection is defeated.



I strongly challenge the assertion made here that inspection is only useful in 
scheduling contexts. There are users who simply want to know about their 
hardware, and read the results as posted to swift. Inspection also handles 
discovery of new nodes when given basic information about them.


Also ironic-inspector is useful for automatically defining resource classes on 
nodes, so I'm not sure about this purpose being defeated as well.


/me makes a note to provide a few examples of such approach in ironic-inspector 
docs
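
For instance, a single introspection rule along these lines (the condition
field and values here are illustrative, not a recommendation) can assign the
class automatically:

  [{
    "description": "tag GPU nodes with a custom resource class",
    "conditions": [
      {"op": "eq", "field": "data://inventory.system_vendor.product_name",
       "value": "SomeGpuServer"}
    ],
    "actions": [
      {"action": "set-attribute", "path": "resource_class",
       "value": "ironic_blue"}
    ]
  }]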

Not sure about OOB inspection though.



-
Jay Faulkner
OSIC


Regards
Nisha


On Mon, Apr 10, 2017 at 4:29 PM, John Garbutt wrote:
On 10 April 2017 at 11:31,   wrote:

On Mon, 2017-04-10 at 11:50 +0530, Nisha Agarwal wrote:

Hi team,

Please could you pour in your suggestions on the mail?

I raised a blueprint in Nova for this
https://blueprints.launchpad.net/nova/+spec/pci-passthorugh-for-ironic
and two RFEs at ironic side https://bugs.launchpad.net/ironic/+bug/1680780
and https://bugs.launchpad.net/ironic/+bug/1681320 for the discussion topic.


If I understand you correctly, you want to be able to filter ironic
hosts by available PCI device, correct? Barring any possibility that
resource providers could do this for you yet, extending the nova ironic
driver to use the PCI passthrough filter sounds like the way to go.


With ironic I thought everything is "passed through" by default,
because there is no virtualization in the way. (I am possibly
incorrectly assuming no BIOS tricks to turn off or re-assign PCI
devices dynamically.)

So I am assuming this is purely a scheduling concern. If so, why are
the new custom resource classes not good enough? "ironic_blue" could
mean two GPUs and two 10Gb nics, "ironic_yellow" could mean one GPU
and one 1Gb nic, etc.

Or is there something else that needs addressing here? Trying to
describe what you get with each flavor to end users? Are you needing
to aggregate similar hardware in a different way to the above
resource class approach?

Thanks,
johnthetubaguy




--
The Secret Of Success is learning how to use pain and pleasure, instead
of having pain and pleasure use you. If You do that you are in control
of your life. If you don't life controls you.

Re: [openstack-dev] [tripleo][manila] Ganesha deployment

2017-04-11 Thread Giulio Fidente
On Tue, 2017-04-11 at 16:50 +0200, Jan Provaznik wrote:
> On Mon, Apr 10, 2017 at 6:55 PM, Ben Nemec wrote:
> > On 04/10/2017 03:22 AM, Jan Provaznik wrote:
> > Well, on second thought it might be possible to make the Storage
> > network
> > only routable within overcloud Neutron by adding a bridge mapping
> > for the
> > Storage network and having the admin configure a shared Neutron
> > network for
> > it.  That would be somewhat more secure since it wouldn't require
> > the
> > Storage network to be routable by the world.  I also think this
> > would work
> > today in TripleO with no changes.
> > 
> 
> This sounds interesting. I was searching for more info on how the bridge
> mapping should be done in this case and what the specific setup steps
> should look like, but the process is still not clear to me; I would be
> grateful for more details/guidance on this.

I think this will be represented in neutron as a provider network,
which has to be created by the overcloud admin after the overcloud
deployment is finished.

While based on Kilo, this was one of the best docs I could find, and it
includes config examples [1].

It assumes that the operator created a bridge mapping for it when
deploying the overcloud.

> > I think the answer here will be the same as for vanilla Ceph.  You
> > need to
> > make the network routable to instances, and you'd have the same
> > options as I
> > discussed above.
> > 
> 
> Yes, it seems that using the mapping to provider network would solve
> the existing problem when using ceph directly and when using ganesha
> servers in future (it would be just matter of to which network is
> exposed).

+1

regarding the composability questions, I think this represents a
"composable HA" scenario where we want to manage a remote service with
pacemaker using pacemaker-remote

yet at this stage I think we want to add support for new services by
running them in containers first (only?) and pacemaker+containers is
still a work in progress so there aren't easy answers

containers will have access to the host networks though, so the case
for a provider network in the overcloud remains valid

1. https://docs.openstack.org/kilo/networking-guide/scenario_provider_ovs.html
-- 
Giulio Fidente
GPG KEY: 08D733BA



Re: [openstack-dev] [nova][api] quota-class-show not sync to quota-show

2017-04-11 Thread Lance Bragstad
On Tue, Apr 11, 2017 at 1:21 PM, Matt Riedemann wrote:

> On 4/11/2017 2:52 AM, Alex Xu wrote:
>
>> We have talked about removing the quota-class API multiple times
>> (http://lists.openstack.org/pipermail/openstack-dev/2016-July/099218.html)
>>
>> I guess we can deprecate the entire quota-class API directly.
>>
>>
> I had a spec proposed to deprecate the os-quota-class-sets API [1] but it
> was abandoned since we discussed it at the Pike PTG and decided we would
> just leave it alone until Nova was getting limits information from Keystone
> [2].
>

FWIW - in addition to merging the conceptual document [0], Sean recently
proposed the limits interface [1] for the keystone bits.

[0]
http://specs.openstack.org/openstack/keystone-specs/specs/keystone/ongoing/unified-limits.htm
[1] https://review.openstack.org/#/c/455709/


>
> I think the reason we probably missed this API was because of the really
> roundabout way that the information is provided in the response. It calls
> the quota engine driver [3] to get the class quotas. For the DB driver if
> nothing is overridden then nothing comes back here [4]. And the resources
> in the quota driver have a default property which is based on the config
> options [5]. So we'll return quotas on floating_ips and other proxy
> resources simply because of how abstract this all is.
>
> To fix it, the os-quota-class-sets API would have to maintain a blacklist
> of resources to exclude from the response, like what we do for limits [6].
>
> So yeah, I guess we'd need a new spec and microversion for this.
>
> [1] https://review.openstack.org/#/c/411035/
> [2] https://review.openstack.org/#/c/440815/
> [3] https://github.com/openstack/nova/blob/15.0.0/nova/api/openstack/compute/quota_classes.py#L67
> [4] https://github.com/openstack/nova/blob/15.0.0/nova/quota.py#L92
> [5] https://github.com/openstack/nova/blob/15.0.0/nova/quota.py#L1069
> [6] https://github.com/openstack/nova/blob/15.0.0/nova/api/openstack/compute/views/limits.py#L20
>
> --
>
> Thanks,
>
> Matt
>
>


Re: [openstack-dev] [nova][api] quota-class-show not sync to quota-show

2017-04-11 Thread Matt Riedemann

On 4/11/2017 2:52 AM, Alex Xu wrote:

We have talked about removing the quota-class API multiple times
(http://lists.openstack.org/pipermail/openstack-dev/2016-July/099218.html)

I guess we can deprecate the entire quota-class API directly.



I had a spec proposed to deprecate the os-quota-class-sets API [1] but 
it was abandoned since we discussed it at the Pike PTG and decided we 
would just leave it alone until Nova was getting limits information from 
Keystone [2].


I think the reason we probably missed this API was because of the really 
roundabout way that the information is provided in the response. It 
calls the quota engine driver [3] to get the class quotas. For the DB 
driver if nothing is overridden then nothing comes back here [4]. And 
the resources in the quota driver have a default property which is based 
on the config options [5]. So we'll return quotas on floating_ips and 
other proxy resources simply because of how abstract this all is.


To fix it, the os-quota-class-sets API would have to maintain a 
blacklist of resources to exclude from the response, like what we do for 
limits [6].
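
Conceptually that's just a filter over what the quota engine returns,
something like this (the names below are illustrative, not the actual
nova code):

  # hypothetical mirror of the limits blacklist approach in [6]
  EXCLUDED_QUOTAS = ['fixed_ips', 'floating_ips',
                     'security_groups', 'security_group_rules']

  def filter_proxy_resources(quotas):
      # drop the deprecated network proxy resources from the response
      return {name: value for name, value in quotas.items()
              if name not in EXCLUDED_QUOTAS}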


So yeah, I guess we'd need a new spec and microversion for this.

[1] https://review.openstack.org/#/c/411035/
[2] https://review.openstack.org/#/c/440815/
[3] 
https://github.com/openstack/nova/blob/15.0.0/nova/api/openstack/compute/quota_classes.py#L67

[4] https://github.com/openstack/nova/blob/15.0.0/nova/quota.py#L92
[5] https://github.com/openstack/nova/blob/15.0.0/nova/quota.py#L1069
[6] 
https://github.com/openstack/nova/blob/15.0.0/nova/api/openstack/compute/views/limits.py#L20


--

Thanks,

Matt



Re: [openstack-dev] [qa][nova][defcore] Removal of Compute Baremetal GET nodes tests from Tempest

2017-04-11 Thread Matt Riedemann

On 4/11/2017 4:29 AM, Ghanshyam Mann wrote:

Hi All,

There are tempest tests for the compute baremetal GET nodes API [1]. These
tests involve ironic and nova: ironic to create baremetal nodes, and
then GET on the nodes using nova APIs.
Nova only provides GET APIs for baremetal nodes, and those are already
deprecated [2].

As the nova baremetal APIs are deprecated and the test needs Ironic to be
present (and so the ironic baremetal service client), we propose to remove
this test from tempest [3]. We have coverage of that feature/API in the
ironic tempest plugin for node GET/POST, and of the nova API in nova
functional tests [4].

I have been objecting to this in the past, but now I feel it's not worth
testing this in Tempest due to the complexity of the Ironic requirement.
This is part of the tempest test removal standard; feel free to let us
know in case of any objection.


..1 
https://git.openstack.org/cgit/openstack/tempest/tree/tempest/api/compute/admin/test_baremetal_nodes.py
..2 
https://developer.openstack.org/api-ref/compute/#bare-metal-nodes-os-baremetal-nodes-deprecated
..3 https://review.openstack.org/#/c/449158/
..4 
http://git.openstack.org/cgit/openstack/nova/tree/nova/tests/functional/api_sample_tests/test_baremetal_nodes.py

-gmann




+1 on removing anything that relies on deprecated proxy APIs in the compute endpoint.

I'm not entirely sure what the defcore process is for this though, i.e. 
if these are already part of the interop guidelines, then I'd think the 
deprecated proxy APIs need to be dropped from the guidelines in the next 
revision and then you could drop them from Tempest - but what does that 
mean for the older defcore / refstack guidelines? Are clouds/products 
just tested against the latest? Or can refstack pin to tagged versions 
of Tempest for older guidelines?


Probably need to talk with Chris Hoge about this.

--

Thanks,

Matt



Re: [openstack-dev] [neutron][sfc][fwaas][taas][horizon] where would we like to have horizon dashboard for neutron stadium projects?

2017-04-11 Thread Tim Bell
Are there any implications for the end user experience by going to different 
repos (such as requiring dedicated menu items)?

Tim

From: "Sridar Kandaswamy (skandasw)" 
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 

Date: Tuesday, 11 April 2017 at 17:01
To: "OpenStack Development Mailing List (not for usage questions)" 

Subject: Re: [openstack-dev] [neutron][sfc][fwaas][taas][horizon] where would 
we like to have horizon dashboard for neutron stadium projects?

Hi All:

From an FWaaS perspective – we also think (a) would be ideal.

Thanks

Sridar

From: Kevin Benton
Reply-To: OpenStack List
Date: Monday, April 10, 2017 at 4:20 PM
To: OpenStack List
Subject: Re: [openstack-dev] [neutron][sfc][fwaas][taas][horizon] where would 
we like to have horizon dashboard for neutron stadium projects?

I think 'a' is probably the way to go since we can mainly rely on existing 
horizon guides for creating new dashboard repos.

On Apr 10, 2017 08:11, "Akihiro Motoki" wrote:
Hi neutrinos (and horizoners),

As the title says, where would we like to have horizon dashboard for
neutron stadium projects?
There are several projects under neutron stadium and they are trying
to add dashboard support.

I would like to raise this topic again. No dashboard support has landed since
then. Also the Horizon team would like to move the in-tree neutron stadium
dashboards (VPNaaS and FWaaS v1 dashboard) out of the horizon repo.

Possible approaches


Several possible options in my mind:
(a) dashboard repository per project
(b) dashboard code in individual project
(c) a single dashboard repository for all neutron stadium projects

Which one sounds better?

Pros and Cons


(a) dashboard repository per project
  example, networking-sfc-dashboard repository for networking-sfc
  Pros
   - Can use existing horizon related project convention and knowledge
 (directory structure, testing, translation support)
   - Not related to the neutron stadium inclusion. Each project can
provide its dashboard
 support regardless of neutron stadium inclusion.
 Cons
   - An additional repository is needed.

(b) dashboard code in individual project
  example, dashboard module for networking-sfc
  Pros:
   - No additional repository
   - Not related to the neutron stadium inclusion. Each project can
provide its dashboard
 support regardless of neutron stadium inclusion.
 Cons:
   - Requires extra efforts to support neutron and horizon codes in a
single repository
 for testing and translation supports. Each project needs to
explore the way.

(c) a single dashboard repository for all neutron stadium projects
   (something like neutron-advanced-dashboard)
  Pros:
- No additional repository per project
  Each project does not need a basic setup for dashboard, which
possibly makes things simple.
  Cons:
- Inclusion criteria depend on neutron stadium inclusion/exclusion
  (a similar discussion happened for the neutronclient OSC plugin);
  a project before neutron stadium inclusion may need another implementation.


My vote is (a) or (c) (to avoid mixing neutron and dashboard codes in a repo).

Note that dashboard support for features in the main neutron repository
is implemented in the horizon repository, as we discussed several months ago.
As an example, trunk support is being developed in the horizon repo.

Thanks,
Akihiro



Re: [openstack-dev] [ironic][nova] Suggestion required on pci_device inventory addition to ironic and its subsequent changes in nova

2017-04-11 Thread Nisha Agarwal
Hi Jay, Dmitry,

>I strongly challenge the assertion made here that inspection is only
useful in scheduling contexts.
Ok i agree that scheduling is not the only purpose of inspection, but it is
one of the main aspects of inspection.

>There are users who simply want to know about their hardware, and read the
results as posted to swift.
This is true only for ironic-inspector. If we say all the features of
ironic-inspector are "OK" for ironic, then why is OOB inspection not allowed
to discover the same things or do the same things that ironic-inspector
already does? ironic-inspector already discovers the pci-device data in the
format nova supports. Why don't the features supported by ironic-inspector
have to go through ironic review for capabilities etc.? ironic-inspector does
have its own review process, but it doesn't centralize its approach (at least
field/attribute names) for ironic, which is and should be a common thing
between inband inspection and out-of-band inspection.

All of the above is said just to emphasize that ironic-inspector is not the
only way of doing inspection in ironic.

> Inspection also handles discovery of new nodes when given basic
information about them.
Applies only to ironic-inspector.

> Also ironic-inspector is useful for automatically defining resource
classes on nodes, so I'm not sure about this purpose being defeated as well.
I wasn't aware that the creation of custom resource classes is already
automated by ironic-inspector. If it is already there, it should be done
by ironic instead of ironic-inspector, because that's required even by OOB
inspection. If the solution is there in ironic, OOB inspection can also use
that for scheduling.

Regards
Nisha

On Tue, Apr 11, 2017 at 9:34 PM, Dmitry Tantsur wrote:

> On 04/11/2017 05:28 PM, Jay Faulkner wrote:
>
>>
>> On Apr 11, 2017, at 12:54 AM, Nisha Agarwal 
>>> wrote:
>>>
>>> Hi John,
>>>
>>> With ironic I thought everything is "passed through" by default,
>>> because there is no virtualization in the way. (I am possibly
>>> incorrectly assuming no BIOS tricks to turn off or re-assign PCI
>>> devices dynamically.)

>>>
>>> Yes with ironic everything is passed through by default.
>>>
>>> So I am assuming this is purely a scheduling concern. If so, why are
>>> the new custom resource classes not good enough? "ironic_blue" could
>>> mean two GPUs and two 10Gb nics, "ironic_yellow" could mean one GPU
>>> and one 1Gb nic, etc.
>>> Or is there something else that needs addressing here? Trying to
>>> describe what you get with each flavor to end users?

>>> Yes this is purely from a scheduling perspective.
>>> Currently how ironic works is we discover server attributes and populate
>>> them into the node object. These attributes are then used for further
>>> scheduling of the node from the nova scheduler using the ComputeCapabilities
>>> filter. So this is automated on the ironic side: we do inspection of the
>>> node properties/attributes, the user creates the flavor of their choice, and
>>> the node which meets the user's need is scheduled for ironic deploy.
>>> With the resource class name in place in ironic, we ask the user to do a
>>> manual step, i.e. create a resource class name based on the hardware
>>> attributes, and this needs to be done on a per-node basis. For this the user
>>> needs to know the server hardware properties in advance before assigning the
>>> resource class name to the node(s), and then assign it manually to the node.
>>> Broadly speaking, if we want to support scheduling based on quantity for
>>> ironic nodes, there is no way we can do it through the current resource
>>> class structure (actually just a tag) in ironic. A user may want to schedule
>>> ironic nodes on different resources, and each resource should be a different
>>> resource class (IMO).
>>>
>>> Are you needing to aggregate similar hardware in a different way to
>>> the above resource class approach?
>>>
>>> i guess no, but the above resource class approach takes away the
>>> automation on the ironic side and the whole purpose of inspection is
>>> defeated.
>>>
>>>
>> I strongly challenge the assertion made here that inspection is only
>> useful in scheduling contexts. There are users who simply want to know
>> about their hardware, and read the results as posted to swift. Inspection
>> also handles discovery of new nodes when given basic information about them.
>>
>
> Also ironic-inspector is useful for automatically defining resource
> classes on nodes, so I'm not sure about this purpose being defeated as well.
>
> /me makes a note to provide a few examples of such approach in
> ironic-inspector docs
>
> Not sure about OOB inspection though.
>
>
>
>> -
>> Jay Faulkner
>> OSIC
>>
>> Regards
>>> Nisha
>>>
>>>
>>> On Mon, Apr 10, 2017 at 4:29 PM, John Garbutt wrote:
>>> On 10 April 2017 at 11:31,   wrote:
>>>
 On Mon, 2017-04-10 at 11:50 

Re: [openstack-dev] [tripleo][manila] Ganesha deployment

2017-04-11 Thread Ben Nemec



On 04/11/2017 02:00 PM, Giulio Fidente wrote:

On Tue, 2017-04-11 at 16:50 +0200, Jan Provaznik wrote:

On Mon, Apr 10, 2017 at 6:55 PM, Ben Nemec wrote:

On 04/10/2017 03:22 AM, Jan Provaznik wrote:
Well, on second thought it might be possible to make the Storage
network
only routable within overcloud Neutron by adding a bridge mapping
for the
Storage network and having the admin configure a shared Neutron
network for
it.  That would be somewhat more secure since it wouldn't require
the
Storage network to be routable by the world.  I also think this
would work
today in TripleO with no changes.



This sounds interesting. I was searching for more info on how the bridge
mapping should be done in this case and what the specific setup steps
should look like, but the process is still not clear to me; I would be
grateful for more details/guidance on this.


I think this will be represented in neutron as a provider network,
which has to be created by the overcloud admin, after the overcloud
deployment is finished

While based on Kilo, this was one of the best docs I could find and it
includes config examples [1]

It assumes that the operator created a bridge mapping for it when
deploying the overcloud


I think the answer here will be the same as for vanilla Ceph.  You
need to
make the network routable to instances, and you'd have the same
options as I
discussed above.



Yes, it seems that using the mapping to a provider network would solve
the existing problem when using ceph directly, and when using ganesha
servers in future (it would just be a matter of which network is
exposed).


+1

regarding the composability questions, I think this represents a
"composable HA" scenario where we want to manage a remote service with
pacemaker using pacemaker-remote

yet at this stage I think we want to add support for new services by
running them in containers first (only?) and pacemaker+containers is
still a work in progress so there aren't easy answers

containers will have access to the host networks though, so the case
for a provider network in the overcloud remains valid

1. https://docs.openstack.org/kilo/networking-guide/scenario_provider_ovs.html



I think there are three major pieces that would need to be in place to 
have a storage provider network:


1) The storage network must be bridged in the net-iso templates.  I
don't think our default net-iso templates do that, but there are
examples of bridged networks in them:
https://github.com/openstack/tripleo-heat-templates/blob/master/network/config/multiple-nics/compute.yaml#L121
For the rest of the steps I will assume the bridge was named br-storage.
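
Roughly like this in the nic config (the NIC name here is illustrative and
depends on the hardware):

  - type: ovs_bridge
    name: br-storage
    use_dhcp: false
    addresses:
      - ip_netmask: {get_param: StorageIpSubnet}
    members:
      - type: interface
        name: nic3  # whichever NIC carries the Storage network
        primary: true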


2) Specify a bridge mapping when deploying the overcloud.  The 
environment file would look something like this (datacentre is the 
default value, so I'm including it too):


parameter_defaults:
  NeutronBridgeMappings: 'datacentre:br-ex,storage:br-storage'
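
That environment file would then be passed to the deploy command in the usual
way, e.g. "openstack overcloud deploy --templates -e storage-env.yaml" (the
file name here is arbitrary).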

3) Create a provider network after deployment as described in the link 
Giulio provided.  The specific command will depend on the network 
architecture, but it would need to include "--provider:physical_network 
storage".
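
Concretely, for a flat storage network it could look something like the
following sketch; the network name, network type and CIDR are invented for
illustration and depend on the architecture:

# Hypothetical names, network type and CIDR; adjust to the deployment.
neutron net-create storage-net --shared \
    --provider:physical_network storage --provider:network_type flat
neutron subnet-create storage-net 172.16.1.0/24 --name storage-subnet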


We might need to add the ability to do 3 as part of the deployment, 
depending what is needed for the Ganesha deployment itself.  We've 
typically avoided creating network resources like this in the deployment 
because of the huge variations in what people want, but this might be an 
exceptional case since the network will be a required part of the overcloud.


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] [elections] Available time and top priority

2017-04-11 Thread Ed Leafe
On Apr 11, 2017, at 12:48 PM, Jay Pipes  wrote:

>>  The IaaS parts
>> (and yes, I know that just which parts these are is a whole debate in
>> itself) should be rock-solid, slow-moving, and, well, boring.
> 
> A fine idea. Unfortunately, what the majority of end users keep asking for is 
> yet more features, especially features that expose more and more internals of 
> the infrastructure and hardware to the power user (admin or orchestrator).

And they always will. But my point was that those things should be added 
without causing the existing system to become unstable. Stability of the 
foundation allows for much more interesting things to be built on top.

> This is *precisely* what the Big Tent was all about:

Totally agree! The key word, unfortunately, is "was". I don't believe that we 
have seen the results to the degree that we envisioned.

I would add that it isn't simply the choice of tools or language, but also
their API. Monasca had an uphill struggle because some of their API overlapped
with other projects, especially Ceilometer. Never mind that it might offer a
better solution than something like Ceilometer; since Ceilometer was there
first, Monasca was at a disadvantage. I would prefer to see these projects be
free to develop good, solid APIs, and if there is functional overlap or API
overlap, so be it. This avoids the "first one wins" hurdle. Let each project
stand on its own merits. If it isn't better than what already exists, no one
will use it and it will wither away. But if it is better, then that makes our
ecosystem that much richer.

>> > Such projects will also have to accept a tag such as
>> 'experimental' or 'untested' until they can demonstrate otherwise.
> 
> This already exists, in a much more fine-grained fashion, as originally 
> designed into the concept of the Big Tent:
> 
> https://governance.openstack.org/tc/reference/tags/index.html
> 
> Are you just recommending that the TC controls more of those tags?


Yes! One of the things that was painful to watch in the tag development process
was the strong aversion to saying anything that might be considered negative
about a project, as if stating that, say, a project wasn't fully tested would
hurt someone's feelings. A more open development environment will require such
technical evaluations, and not all will be positive. The TC should definitely
work on making this happen.

>> This can also serve to encourage the development of additional
>> testing resources around, say, Golang projects, so that the Golang
>> community can all pitch in and develop the sort of infrastructure
>> needed to adequately test their products, both alone and in
>> conjunction with the IaaS core of OpenStack. The only thing that
>> should be absolute is a project's commitment to the Four Opens. There
>> will be winners, and there will be losers, and that's not only OK,
>> it's how it should be.
> 
> I'm not grasping how the above doesn't already represent the state of the 
> OpenStack governance world?

It's a matter of emphasis. There wasn't any governance that said that a 
project's API can't overlap with an existing project, but that was the message 
that the TC sent to Monasca. This should be changed to a positive statement 
about encouraging competition and new approaches. I'd like the Four Opens to 
remain an absolute, but that's about it. Duplicated effort? Solving the same or 
a similar problem as another project? Those things shouldn't matter (outside of 
the IaaS core projects). Teams should be free to decide if working with another 
team is the best approach, or going it alone, for example.


-- Ed Leafe







__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Ocata - Ubuntu 16.04 - OVN does not work with DPDK

2017-04-11 Thread Martinx - ジェームズ
On 11 April 2017 at 11:08, Russell Bryant  wrote:

>
>
> On Mon, Apr 10, 2017 at 4:49 PM, Martinx - ジェームズ <
> thiagocmarti...@gmail.com> wrote:
>
>>
>>
>> On 8 April 2017 at 00:37, Martinx - ジェームズ 
>> wrote:
>>
>>> Guys,
>>>
>>>  I manage to deploy Ocata on Ubuntu 16.04 with OVN for the first time
>>> ever, today!
>>>
>>>  It looks very, very good... OVN L3 Router is working, OVN DHCP
>>> working... bridge mappings "br-ex" on each compute node... All good!
>>>
>>>  Then, I've said: time for DPDK!
>>>
>>>  I manage to use OVS with DPDK easily on top of Ubuntu (plus Ocata Cloud
>>> Archive) with plain KVM, no OpenStack, so, I have experience about how to
>>> setup DPDK, OVS+DPDK, Libvirt vhostuser, KVM and etc...
>>>
>>>  After configuring DPDK on a compute node, I tried the following
>>> instructions:
>>>
>>>  https://docs.openstack.org/developer/networking-ovn/dpdk.html
>>>
>>>  It looks quite simple!
>>>
>>>  To make things even simpler, I have just 1 controller, and 1 compute
>>> node, to begin with, before enabling DPDK at the compute node and changing
>>> the "br-int" datapath, I deleted all OVN Routers and all Neutron Networks
>>> and Subnets, that was previously working with regular OVS (no DPDK).
>>>
>>>  Then, after enabling DPDK and updating the "br-int" and the "br-ex"
>>> interfaces, right after connecting the "OVN L3 Router" into the "ext-net /
>>> br-ex" network, the following errors appeared on OpenvSwitch logs of the
>>> related compute node (OpenFlow error):
>>>
>>>
>>>  * After connecting OVN L3 Router against the "ext-net / br-ex" Flat /
>>> VLAN Network:
>>>
>>>  ovs-vswitchd.log:
>>>
>>>  http://paste.openstack.org/show/605800/
>>>
>>>  ovn-controller.log:
>>>
>>>  http://paste.openstack.org/show/605801/
>>>
>>>
>>>  Also, after connecting the OVN L3 Router into the local (GENEVE)
>>> network, very similar error messages appeared on OpenvSwitch logs...
>>>
>>>
>>>  * After connecting OVN L3 Router on a "local" GENEVE Network:
>>>
>>>  ovs-vswitchd.log:
>>>
>>>  http://paste.openstack.org/show/605804/
>>>
>>>  ovn-controller.log:
>>>
>>>  http://paste.openstack.org/show/605805/
>>>
>>>
>>>  * Output of "ovs-vsctl show" at the single compute node, after plugging
>>> the OVN L3 Router against the two networks (external / GENEVE):
>>>
>>>  http://paste.openstack.org/show/605806/
>>>
>>>
>>>  Then, I tried to launch an Instance anyway and, for my surprise, the
>>> Instance was created! Using vhostuser OVS+DPDK socket!
>>>
>>>  Also, the Instance got its IP! Which is great!
>>>
>>>  However, the Instance can not ping its OVN L3 Router (its default
>>> gateway), with or without any kind of security groups applied, no deal...
>>> :-(
>>>
>>>  BTW, the Instance did not receive the ARP stuff of the OVN L3 Router,
>>> I mean, for the instance, the gateway IP on "arp -an" shows "".
>>>
>>>
>>>  * The ovs-vswitchd.log after launching an Instance on top of
>>> OVN/OVS+DPDK:
>>>
>>>  http://paste.openstack.org/show/605807/
>>>
>>>  * The output of "ovs-vsctl show" after launching the above instance:
>>>
>>>  http://paste.openstack.org/show/605809/ - Line 33 is the dpdkvhostuser
>>>
>>>
>>>  Just to give it another try, I started a second Instance, to see if the
>>> Instances can ping each other... That also did not work, the Instances can
>>> not ping each other.
>>>
>>>
>>>  So, from what I'm seeing, OVN on top of DPDK does not work.
>>>
>>>  Any tips?
>>>
>>>
>>>  NOTE:
>>>
>>>  I tried to enable "hugepages" on my OpenStack's flavor, just in case...
>>> Then, I found another bug, it doesn't even boot the Instance:
>>>
>>>  https://bugs.launchpad.net/cloud-archive/+bug/1680956
>>>
>>>
>>>  For now, I'll deploy Ocata with regular OVN, no DPDK, but, my goal with
>>> this cloud is for high performance networks, so, I need DPDK, and I also
>>> need GENEVE and Provider Networks, everything on top of DPDK.
>>>
>>> ---
>>>  After researching more about this "high perf networks", I found this:
>>>
>>>  * DPDK-like performance in Linux kernel with XDP !
>>>
>>>  http://openvswitch.org/support/ovscon2016/7/0930-pettit.pdf
>>>
>>>  https://www.iovisor.org/technology/xdp
>>>  https://www.iovisor.org/technology/ebpf
>>>
>>>  https://qmonnet.github.io/whirl-offload/2016/09/01/dive-into-bpf/
>>>
>>>  But I have no idea about how to use OpenvSwitch with this thing,
>>> however, if I can achieve DPDK-Like performance, without DPDK, using just
>>> plain Linux, I'm a 100% sure that I'll prefer it!
>>>
>>>  I'm okay to give OpenvSwitch + DPDK another try, even knowing that it
>>> [OVS] STILL have serious problems (https://bugs.launchpad.net/ub
>>> untu/+source/openvswitch/+bug/1577088)...
>>> ---
>>>
>>>  OpenStack on Ubuntu rocks!   :-D
>>>
>>> Thanks!
>>> Thiago
>>>
>>
>> I just realized how cool IO Visor is!
>>
>> Sorry about mixing subjects, let's keep this one clear for OVN on top of
>> DPDK.
>>
>> I found an open bug on RedHat's Bugzilla, I updated it with the info
>> 

Re: [openstack-dev] [tc] [elections] Available time and top priority

2017-04-11 Thread Jay Pipes

On 04/10/2017 02:26 PM, Ed Leafe wrote:

On Apr 10, 2017, at 4:16 AM, Thierry Carrez 
wrote:

If there was ONE thing, one initiative, one change you will
actively push in the six months between this election round and the
next, what would it be ?


Just one? If I had to choose, I would like to see a clear separation
between the core services that provide IaaS, and the products that
then build on that core. They occupy very different places in the
OpenStack picture, and should be treated differently.


Agreed.

>  The IaaS parts

(and yes, I know that just which parts these are is a whole debate in
itself) should be rock-solid, slow-moving, and, well, boring.


A fine idea. Unfortunately, what the majority of end users keep asking 
for is yet more features, especially features that expose more and more 
internals of the infrastructure and hardware to the power user (admin or 
orchestrator).



Reliability is the key for them. But for the services and
applications that are built on top of this base? I'd like to see
allowing them a much more open approach: let them develop in whatever
language they like, release when they feel the timing is right, and
define their own CI testing. In other words, if you want to develop
in a language other than Python, go for it! If you want to use a
particular NoSQL database, sure thing! However, the price of that
freedom is that the burden will be on the project to ensure that it
is adequately tested, instead of laying that responsibility on our
infra team.


This is *precisely* what the Big Tent was all about: opening up the 
"what is an OpenStack project" idea to more newcomers and competing 
implementations with the condition that the shared cross-project teams 
like docs and infra would be enablers and not doers. Instead of creating 
infrastructure for all the new project teams, the infra team would 
transition to providing guidance for how the project teams should set up 
gate jobs for themselves. Instead of writing documentation for the 
project teams, the docs team would instead provide guidance to new teams 
on how to write docs that integrate effectively with the existing docs 
tooling.


The TC and the Big Tent didn't stop Freezer from making ElasticSearch 
its only metadata storage solution. Nobody stopped Gluon and other 
projects from using etcd for control plane communication and event 
notification. And nobody should be stopping these projects from 
innovating and experimenting.


As for the Python versus other languages bit, sure, the TC took some 
time to formulate its opinion regarding the impact another language 
would have on the OpenStack shared team workload but the Big Tent and 
the structure of the TC was not an impediment to discussion of other 
languages in the OpenStack ecosystem. Rather, preference for other 
(non-Gerrit) workflows and (non-IRC) communication methods continue to 
be the primary influencing factors for non-Python projects in the cloud 
space.


> Such projects will also have to accept a tag such as

'experimental' or 'untested' until they can demonstrate otherwise.


This already exists, in a much more fine-grained fashion, as originally 
designed into the concept of the Big Tent:


https://governance.openstack.org/tc/reference/tags/index.html

Are you just recommending that the TC controls more of those tags?


This can also serve to encourage the development of additional
testing resources around, say, Golang projects, so that the Golang
community can all pitch in and develop the sort of infrastructure
needed to adequately test their products, both alone and in
conjunction with the IaaS core of OpenStack. The only thing that
should be absolute is a project's commitment to the Four Opens. There
will be winners, and there will be losers, and that's not only OK,
it's how it should be.


I'm not grasping how the above doesn't already represent the state of 
the OpenStack governance world?


Best,
-jay

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] [elections] Available time and top priority

2017-04-11 Thread Monty Taylor

On 04/10/2017 04:41 PM, Chris Dent wrote:

On Mon, 10 Apr 2017, Matt Riedemann wrote:


This might also tie back in with what cdent was mentioning, and if the
flurry of conversation during a TC meeting throws people off, maybe
the minutes should be digested after the meeting in the mailing list.
I know the meeting is logged, but it can be hard to read through that
without one's eyes glazing over due to the cross-talk and locker-room
towel whipping going on.


Aw, you beat me to it. This is part of what I was going to say in
response to your earlier message. I think there are at least three
things to do, all of which you've touched on:

* Alternating the meetings, despite the issues with quorum, probably
  ought to happen. If the issues with quorum are insurmountable that
  may say something important about the TC's choice to be dependent
  on IRC meetings. Is it habit? Doesn't most of the real voting
  happen in gerrit? Can more of the discussion happen in email? I
  think we (by we I mean all of OpenStack) can and should rely on
  email more than we do expressly for the purpose of enabling people
  to include themselves according to their own schedules and their
  own speeds of comprehension.


Oh god. I feel like I'm going to start a vi-vs-emacs here ...

(Before I do - I agree with alternating meetings)

Email has similar but opposite problems, in that in email the lag is
often too long, rather than too short. This can lead to:


- person A says a thing, then goes to sleep, because it's 1AM in their 
timezone.
- 1000 people start a flame war based on a poor choice of phrase in the 
original email while person A sleeps
- person A wakes up and is horrified to see what their simple sentence 
has done, begins day drinking


Now, as you might imagine the specifics might vary slightly - but I say 
the above to actually suggest that rather than it being an either/or - 
_both_ are important, and must be balanced over time.


Email allows someone to compose an actual structured narrative, and for
replies to do the same. Some of us are loquacious and I imagine can be
hard to follow even with time to read.


IRC allows someone to respond quickly, and for someone to be like "yo, 
totes sorry, I didn't mean that at all LOL" and to walk things back 
before a pile of people become mortally insulted.


Like now - hopefully you'll give me a smiley in IRC ... but you might 
not, and I'm stuck worrying that my tone came across wrong. Then if you 
just don't respond because ZOMG-EMAIL, I might start day drinking.



  Email and writing in general is by no means a panacea. We don't
  want any of email, IRC, voice or expensive international
  gatherings to be the sole mode of interaction.


Blast. I argued against the first part of your email before I realized 
there was a second part that agreed with me already.



* People who are participating in the TC meetings can be much more
  considerate, at least during critical parts of the meeting, about
  who has the speaking stick and what the current topic happens to
  be. Sometimes the cross-talk and the towel whipping is exactly
  what needs to be happening, but much of the time it is not and
  makes it all very hard to follow and frustrating. We see a lot of
  behavior in the channel that if we were in person or on the phone
  would be completely unacceptable. Each communication medium
  affords different behaviors, but we still want to manage to
  understand one another. As you say, Alex does a great job of
  making the nova api subteam meeting work so there's probably
  something we can learn from there.


While I'm blathering - I'll just go ahead and apologize for my frequent 
distribution of pies, wet animals and other similar virtual gifts. I do 
have a problem shutting my mouth, even when I'm not using it. If it's 
any consolation, I also make inopportune jokes in serious business 
meetings in person too.



* Digested minutes of the meeting and any pending business
  in gerrit can give an additional way to stay in the loop but they
  are more about providing an invitation or encouragement to
  participate. It shouldn't be a substitute that's there because the
  real grind of participation is inaccessible. Participation needs
  to be more accessible.


+1000

At the heart of all of this I could not possibly be more in support of 
making all of these things accessible.



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [infra] Stop enabling EPEL mirror by default

2017-04-11 Thread Paul Belanger
On Tue, Apr 04, 2017 at 01:02:59PM -0400, Paul Belanger wrote:
> Greetings,
> 
> Recently we've been running into some issues keeping our EPEL mirror properly
> sync'd. We are working to fix this, however we'd also like to do the 
> following:
> 
>   Stop enabling EPEL mirror by default
>   https://review.openstack.org/#/c/453222/
> 
> For the most part, we enable EPEL for our image build process, this to install
> haveged.  However, it is also likely the majority of centos-7 projects don't
> actually need EPEL.  I know specifically both RDO and TripleO avoid using the
> EPEL repository because of how unstable it is.
> 
> Since it is possible this could be a breaking change, jobs will still be able 
> to
> use EPEL, but it will be an opt-in process. Your jobs will need to be updated 
> to
> do:
> 
>   $ sudo yum-config-manager --enable epel
> 
> Feel free to join us in openstack-infra if you have any questions or concerns.
> 
> -PB
> 
Just a heads up, I believe we'll be landing this change tomorrow. It's been a
week so far and no major outrage. :)

As always, you can find us in #openstack-infra if you have problems / questions.

-PB

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][sfc][fwaas][taas][horizon] where would we like to have horizon dashboard for neutron stadium projects?

2017-04-11 Thread SUZUKI, Kazuhiro
Hi,

I think (a) is also good from the TaaS dashboard's perspective.

Regards,
Kaz


From: Akihiro Motoki 
Subject: [openstack-dev] [neutron][sfc][fwaas][taas][horizon] where would we 
like to have horizon dashboard for neutron stadium projects?
Date: Tue, 11 Apr 2017 00:09:10 +0900

> Hi neutrinos (and horizoners),
> 
> As the title says, where would we like to have horizon dashboard for
> neutron stadium projects?
> There are several projects under neutron stadium and they are trying
> to add dashboard support.
> 
> I would like to raise this topic again. No dashboard support has landed since then.
> Also Horizon team would like to move in-tree neutron stadium dashboard
> (VPNaaS and FWaaS v1 dashboard) to outside of horizon repo.
> 
> Possible approaches
> 
> 
> Several possible options in my mind:
> (a) dashboard repository per project
> (b) dashboard code in individual project
> (c) a single dashboard repository for all neutron stadium projects
> 
> Which one sounds better?
> 
> Pros and Cons
> 
> 
> (a) dashboard repository per project
>   example, networking-sfc-dashboard repository for networking-sfc
>   Pros
>- Can use existing horizon related project convention and knowledge
>  (directory structure, testing, translation support)
>- Not related to the neutron stadium inclusion. Each project can
> provide its dashboard
>  support regardless of neutron stadium inclusion.
>  Cons
>- An additional repository is needed.
> 
> (b) dashboard code in individual project
>   example, dashboard module for networking-sfc
>   Pros:
>- No additional repository
>- Not related to the neutron stadium inclusion. Each project can
> provide its dashboard
>  support regardless of neutron stadium inclusion.
>  Cons:
>- Requires extra efforts to support neutron and horizon codes in a
> single repository
>  for testing and translation supports. Each project needs to
> explore the way.
> 
> (c) a single dashboard repository for all neutron stadium projects
>(something like neutron-advanced-dashboard)
>   Pros:
> - No additional repository per project
>   Each project does not need the basic setup for a dashboard, which
> possibly makes things simpler.
>   Cons:
> - Inclusion criteria depend on neutron stadium inclusion/exclusion
>   (a similar discussion happened for the neutronclient OSC plugin).
>   A project not yet in the neutron stadium may need another
> implementation.
> 
> 
> My vote is (a) or (c) (to avoid mixing neutron and dashboard codes in a repo).
> 
> Note that dashboard support for features in the main neutron repository
> is implemented in the horizon repository, as we discussed several months ago.
> As an example, trunk support is being developed in the horizon repo.
> 
> Thanks,
> Akihiro
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [tc][elections]questions about one platform vision

2017-04-11 Thread joehuang
Hello,

I hear about the "one platform" vision in OpenStack now and then: one platform
for virtual machines, containers and bare metal.

I have also learned that there are some groups working on making Kubernetes
able to manage virtual machines. Besides running containers in virtual
machines, there is also a need to run containers on bare metal.

There are several projects connecting OpenStack to the container world: Zun,
Magnum, Kuryr, Fuxi... and projects dealing with bare metal, like Ironic,
Mogan, ...

Can all these efforts lead us to the one platform vision? We have to think
this question over.

What would the one platform be, in your own words? And what is your proposal,
and your focus, to help the one platform vision be achieved?


Best Regards
Chaoyi Huang (joehuang)
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc][elections]questions about one platform vision

2017-04-11 Thread Sean McGinnis
On Wed, Apr 12, 2017 at 02:54:30AM +, joehuang wrote:
> Hello,
> 
> I hear about the "one platform" vision in OpenStack now and then: one
> platform for virtual machines, containers and bare metal.
> 
> I have also learned that there are some groups working on making Kubernetes
> able to manage virtual machines. Besides running containers in virtual
> machines, there is also a need to run containers on bare metal.
> 
> There are several projects connecting OpenStack to the container world: Zun,
> Magnum, Kuryr, Fuxi... and projects dealing with bare metal, like Ironic,
> Mogan, ...
> 
> Can all these efforts lead us to the one platform vision? We have to think
> this question over.
> 
> What would the one platform be, in your own words? And what is your proposal,
> and your focus, to help the one platform vision be achieved?
> 

I can't claim to have a proposal to get us there. I do think it is something
that will require plenty more discussion and a whole lot of collaboration.

There is certainly overlap in the goals of these efforts. I think it all
comes down to giving the end user the ability to run their workload in the
cloud while requiring as little knowledge as possible about the specific
platform implementation.

I think the best way we can get closer to this goal is related to one of the
points I brought up in my candidacy message. I think we need to be open to
collaborating with other open source projects outside of OpenStack to come
up with the best solutions, and to make sure that these parallel or
complementary solutions can work well together.

> 
> Best Regards
> Chaoyi Huang (joehuang)

> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack-operators] [nova] Can we deprecate the os-hosts API?

2017-04-11 Thread Matt Riedemann

On 4/1/2017 6:07 PM, Kevin Bringard (kevinbri) wrote:

I agree with Mr. Pipes. I’ve not used that API ever and I’ve no recollection of 
anyone asking about it nor am I aware of anyone who actually uses it. I vote to 
deprecate.



Thanks for the feedback Kevin.

Here is the spec:

https://review.openstack.org/#/c/455896/

--

Thanks,

Matt

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron][networking-bagpipe][networking-bgpvpn][networking-midonet][networking-odl][networking-ovn][networking-sfc][neutron-dynamic-routing][neutron-fwaas] Pike-1 release

2017-04-11 Thread Dariusz Śmigiel
Hey neutrinos,
It is time. We need to show something to others. That's why it's
called a Milestone [1].
I've prepared the release [2] for all neutron deliverables. This time,
thanks to the synchronized release schedule, and to gathering most of the
networking projects under the neutron umbrella, we have a pretty huge scope
of changes.
I've applied the pike-1 tag to the latest changes from git. If there are any
concerns or discrepancies with the release, feel free to review the
mentioned change.

[1] https://releases.openstack.org/pike/schedule.html#p-1
[2] https://review.openstack.org/#/c/455844/

Thanks,
dasm, Dariusz Śmigiel

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Blazar] tempest gate job failure

2017-04-11 Thread Masahito MUROI

Hi blazar team,

The Tempest gate job is now activated by the patch[1], but it always
fails because of a syntax error in the tempest job. I'm sorry to break
reviews.


I'm also pushing the patch[2] to fix the problem. If you have time,
please take a look at the patch.


1. https://review.openstack.org/#/c/452635/
2. https://review.openstack.org/#/c/455683/

best regards,
Masahito



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [qa][nova][defcore] Removal of Compute Baremetal GET nodes tests from Tempest

2017-04-11 Thread Ghanshyam Mann
On Wed, Apr 12, 2017 at 3:24 AM, Matt Riedemann  wrote:
> On 4/11/2017 4:29 AM, Ghanshyam Mann wrote:
>>
>> Hi All,
>>
>> There is a tempest test for compute baremetal GET nodes[1]. This
>> test involves ironic and nova: ironic to create baremetal nodes, and
>> then nova APIs to GET the nodes.
>> Nova only provides GET APIs for baremetal nodes, and those are already
>> deprecated [2].
>>
>> As the nova baremetal APIs are deprecated, and the test needs Ironic to be
>> present (and so an ironic baremetal service client), we propose to remove
>> this test from tempest[3]. We have coverage of that feature/API in the
>> ironic tempest plugin for node GET/POST, and of the nova API in nova
>> functional tests[4].
>>
>> I have been objecting to this in the past, but now I feel it's not worth
>> testing this in Tempest due to the complexity of the Ironic requirement.
>> This is part of the tempest test removal standard; feel free to let us
>> know in case of any objection.
>>
>>
>> ..1
>> https://git.openstack.org/cgit/openstack/tempest/tree/tempest/api/compute/admin/test_baremetal_nodes.py
>> ..2
>> https://developer.openstack.org/api-ref/compute/#bare-metal-nodes-os-baremetal-nodes-deprecated
>> ..3 https://review.openstack.org/#/c/449158/
>> ..4
>> http://git.openstack.org/cgit/openstack/nova/tree/nova/tests/functional/api_sample_tests/test_baremetal_nodes.py
>>
>> -gmann
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
> +1 on anything that relies on deprecated proxy APIs in the compute endpoint.
>
> I'm not entirely sure what the defcore process is for this though, i.e. if
> these are already part of the interop guidelines, then I'd think the
> deprecated proxy APIs need to be dropped from the guidelines in the next
> revision and then you could drop them from Tempest - but what does that mean
> for the older defcore / refstack guidelines? Are clouds/products just tested
> against the latest? Or can refstack pin to tagged versions of Tempest for
> older guidelines?
>
> Probably need to talk with Chris Hoge about this.

For this case, we are fine as this test is not being used by defcore
(because it is an admin test). For other cases, we cannot remove tests
from Tempest till defcore stops using them. The latest defcore guidelines
do not use the deprecated APIs (cinder v1, image v1, etc.) but I'm not sure
about the old guidelines.

>
> --
>
> Thanks,
>
> Matt
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [tripleo] pike-1 was released - thank you

2017-04-11 Thread Emilien Macchi
Just a heads-up (while it was mentioned during the TripleO weekly meeting):

We managed to release TripleO pike-1 early this week. It is really
awesome to see the progress we have made, and continue to make, on
the release side. We are continuously improving ourselves and I think
we can be proud of that.
We implemented 8 blueprints and fixed 145 bugs in ~40 days.

I just wanted to thank *you* for this outstanding work and I'm looking
forward to ship the next version of TripleO.

Note: I also want to mention that it wouldn't be possible to achieve
this goal without the external contributors outside TripleO (Infra,
Release management, and other projects in OpenStack). Kudos to them
:-)
-- 
Emilien Macchi

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova][cinder] Can all non-Ironic compute drivers handle volume size extension?

2017-04-11 Thread Matt Riedemann
I'm reading through mgagne's spec to support attached volume size 
extension callbacks in Nova [1] and the question that comes up is what 
happens when the backend compute does not support this, either because 
it's too old (Ocata) or the virt driver does not support the event?


The spec is targeted at libvirt to use os-brick, but the hyper-v driver
has also been using os-brick since Ocata, and the Windows connector supports
the extend_volume operation, so that should work.


I don't know about powervm, vmware or xen though.

This is not discoverable at the moment, for the end user or cinder, so 
I'm trying to figure out what the failure mode looks like.


This all starts on the cinder side to extend the size of the attached 
volume. Cinder is going to have to see if Nova is new enough to handle 
this (via the available API versions) before accepting the request and 
resizing the volume. Then Cinder sends the event to Nova. This is where 
it gets interesting.


On the Nova side, if all of the computes aren't new enough, we could 
just fail the request outright with a 409. What does Cinder do then? 
Rollback the volume resize?


But let's say the computes are new enough, but the instance is on a 
compute that does not support the operation. Then what? Do we register 
an instance fault and put the instance into ERROR state? Then the admin 
would need to intervene.


Are there other ideas? Until we have capabilities (info) exposed out of 
the API we're stuck with questions like this.
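
For the sake of discussion, here is a minimal sketch of that second failure
mode; the capability flag and helper below are entirely hypothetical, invented
for illustration, and do not exist in Nova today:

# Hypothetical sketch only; none of these names are real Nova code.
class ExtendVolumeNotSupported(Exception):
    pass


def handle_volume_extended(driver, instance, volume_id, new_size):
    if not getattr(driver, 'supports_extend_volume', False):
        # Option A: fail fast with a 409-style error so Cinder can roll
        # the volume resize back...
        raise ExtendVolumeNotSupported(volume_id)
        # ...or Option B: record an instance fault, put the instance into
        # ERROR, and leave it to the admin to intervene.
    driver.extend_volume(instance, volume_id, new_size)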


[1] https://review.openstack.org/#/c/453272/

--

Thanks,

Matt

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] [elections] Available time and top priority

2017-04-11 Thread Ildiko Vancsa
> 
>> Reliability is the key for them. But for the services and
>> applications that are built on top of this base? I'd like to see
>> allowing them a much more open approach: let them develop in whatever
>> language they like, release when they feel the timing is right, and
>> define their own CI testing. In other words, if you want to develop
>> in a language other than Python, go for it! If you want to use a
>> particular NoSQL database, sure thing! However, the price of that
>> freedom is that the burden will be on the project to ensure that it
>> is adequately tested, instead of laying that responsibility on our
>> infra team.
> 
> This is *precisely* what the Big Tent was all about: opening up the "what is 
> an OpenStack project" idea to more newcomers and competing implementations 
> with the condition that the shared cross-project teams like docs and infra 
> would be enablers and not doers. Instead of creating infrastructure for all 
> the new project teams, the infra team would transition to providing guidance 
> for how the project teams should set up gate jobs for themselves. Instead of 
> writing documentation for the project teams, the docs team would instead 
> provide guidance to new teams on how to write docs that integrate effectively 
> with the existing docs tooling.


I think you raised a very important point by putting emphasis on enablement. The
fact that we have experienced both the advantages and disadvantages of being
more centralized in these areas is important and useful. In my view it is
crucial to remain a continuously evolving community where we are not afraid of
making changes, even if sometimes they seem to happen slowly.

Being somewhat involved in the documentation team’s activities, I’m happy to say
that the team has taken many steps towards the direction you mentioned and is
still working on it [1], by moving content (back) to the project repositories
and thereby making it easier for the teams to update and maintain those parts,
while still having the guidance of the docs team both for wording and tooling.

Thanks,
Ildikó

[1] https://review.openstack.org/#/c/439122/

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] [elections] Available time and top priority

2017-04-11 Thread Ildiko Vancsa
Hi All,

> So my question is the following: if elected, how much time do you think
> you'll be able to dedicate to Technical Committee affairs (reviewing
> proposed changes and pushing your own) ?

As a member of the OpenStack Foundation staff I have 100% of my time dedicated
to the community, whether that be working with our ecosystem members, helping
new contributors to join, or keeping contact and accelerating collaboration
with adjacent communities.

Defining an exact amount of time might be challenging, as I see plenty of
overlap between my tasks and the TC’s scope, which means many of my activities
serve multiple purposes. While we all have a different load on a weekly basis,
I’m dedicated to securing enough time and focus on average to be a reliable
and valuable TC member.

> If there was ONE thing, one
> initiative, one change you will actively push in the six months between
> this election round and the next, what would it be ?

As others before me pointed out very well, picking one item is very hard,
nearly impossible. I always search for the relations between items and
challenges, along with observing their different aspects. My main focus areas
currently are Telecom/NFV and the Upstream Institute activities. One of the
connection points between these two is on-boarding, which I would like to
highlight here as the ONE.

While we’ve already come a long way in working together with the Telecom
industry, we still have a long journey ahead of us. On-boarding boils down to
being, and remaining, an open and open-minded community and a helpful
environment. In my view accepting new developers and new technologies requires
a very similar skill set and mindset, and requires us to keep OpenStack a
welcoming and also an innovative place; here I would like to refer back to the
challenges mentioned in an earlier discussion on this thread about allowing
competition and exploring new ideas and ways of doing things.

TC member or not, my pick is to continue helping with the different aspects of
on-boarding, to make the teams, along with the technical committee and the
community, more open to accepting new members with new viewpoints and ideas,
and to improve communication and collaboration between the related parties. In
my opinion this helps to improve and maintain the solid and stable basis that
makes us able to deal with the technical and collaboration challenges - listed
on this thread - together.

Thanks and Best Regards,
Ildikó


> 
> Thanks in advance for your answers !
> 
> -- 
> Thierry Carrez (ttx)
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic][nova] Suggestion required on pci_device inventory addition to ironic and its subsequent changes in nova

2017-04-11 Thread Tomasz Pa
On Apr 10, 2017 1:02 PM, "John Garbutt"  wrote:

On 10 April 2017 at 11:31,  .

With ironic I thought everything is "passed through" by default,
because there is no virtualization in the way. (I am possibly
incorrectly assuming no BIOS tricks to turn off or re-assign PCI
devices dynamically.)


It's not entirely true. On the Intel Rack Scale Design platform you can
attach/detach PCI devices on the fly.



TP
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][api] quota-class-show not sync to quota-show

2017-04-11 Thread Alex Xu
We have talked about removing the quota-class API multiple times (
http://lists.openstack.org/pipermail/openstack-dev/2016-July/099218.html).

I guess we can deprecate the entire quota-class API directly.

2017-04-07 18:19 GMT+08:00 Chen CH Ji :

> Version 2.35 removed most deprecated output, like floating ip etc., so we
> won't have the following in the quota-show output
> | floating_ips | 10 |
> | fixed_ips | -1 |
> | security_groups | 10 |
> | security_group_rules | 20 |
>
> however, quota-class-show still has that output. Should we use 2.35 to
> fix this bug, or add a new microversion, or, because os-quota-class-sets is
> about to be deprecated, can we just let it be? Thanks
>
> DEBUG (session:347) REQ: curl -g -i -X GET http://192.168.123.10:8774/v2.
> 1/os-quota-class-sets/1 -H "OpenStack-API-Version: compute 2.41" -H
> "User-Agent: python-novaclient" -H "Accept: application/json" -H
> "X-OpenStack-Nova-API-Version: 2.41" -H "X-Auth-Token: {SHA1}
> 5008bb2787a9548d65b063f4db2525b4e3bf7163"
>
> RESP BODY: {"quota_class_set": {"injected_file_content_bytes": 10240,
> "metadata_items": 128, "ram": 51200, "floating_ips": 10, "key_pairs": 100,
> "id": "1", "instances": 10, "security_group_rules": 20, "injected_files":
> 5, "cores": 20, "fixed_ips": -1, "injected_file_path_bytes": 255,
> "security_groups": 10}}
>
> Best Regards!
>
> Kevin (Chen) Ji 纪 晨
>
> Engineer, zVM Development, CSTL
> Notes: Chen CH Ji/China/IBM@IBMCN Internet: jiche...@cn.ibm.com
> Phone: +86-10-82451493 <010%208245%201493>
> Address: 3/F Ring Building, ZhongGuanCun Software Park, Haidian District,
> Beijing 100193, PRC
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic][nova] Suggestion required on pci_device inventory addition to ironic and its subsequent changes in nova

2017-04-11 Thread Nisha Agarwal
Hi John,

>With ironic I thought everything is "passed through" by default,
>because there is no virtualization in the way. (I am possibly
>incorrectly assuming no BIOS tricks to turn off or re-assign PCI
>devices dynamically.)

Yes with ironic everything is passed through by default.

>So I am assuming this is purely a scheduling concern. If so, why are
>the new custom resource classes not good enough? "ironic_blue" could
>mean two GPUs and two 10Gb nics, "ironic_yellow" could mean one GPU
>and one 1Gb nic, etc.
>Or is there something else that needs addressing here? Trying to
>describe what you get with each flavor to end users?
Yes, this is purely from a scheduling perspective.
Currently how ironic works is that we discover server attributes and populate
them into the node object. These attributes are then used for further
scheduling of the node from the nova scheduler using the ComputeCapabilities
filter. So this is automated on the ironic side: we do inspection of the node
properties/attributes, the user creates the flavor of their choice, and the
node which meets the user's need is scheduled for the ironic deploy.
With the resource class name in place in ironic, we ask the user to do a
manual step, i.e. create a resource class name based on the hardware
attributes, and this needs to be done on a per-node basis. For this the user
needs to know the server hardware properties in advance before assigning the
resource class name to the node(s), and then assign the resource class name
manually to each node.
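
(For reference, the manual step in question is along these lines, if I
remember the client syntax right; the class name here is just an example:

openstack baremetal node set <node-uuid> --resource-class baremetal-gpu

and it has to be repeated for every node.)
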
Broadly speaking, if we want to support scheduling based on quantity
for ironic nodes, there is no way we can do it through the current resource
class structure (actually just a tag) in ironic. A user may want to
schedule ironic nodes on different resources, and each resource should be a
different resource class (IMO).

>Are you needing to aggregating similar hardware in a different way to the
above
>resource class approach?
I guess no, but the above resource class approach takes away the automation
on the ironic side, and the whole purpose of inspection is defeated.

Regards
Nisha


On Mon, Apr 10, 2017 at 4:29 PM, John Garbutt  wrote:

> On 10 April 2017 at 11:31,   wrote:
> > On Mon, 2017-04-10 at 11:50 +0530, Nisha Agarwal wrote:
> >> Hi team,
> >>
> >> Please could you pour in your suggestions on the mail?
> >>
> >> I raised a blueprint in Nova for this
> >> https://blueprints.launchpad.net/nova/+spec/pci-passthorugh-for-ironic
> >> and two RFEs at ironic side
> >> https://bugs.launchpad.net/ironic/+bug/1680780 and
> >> https://bugs.launchpad.net/ironic/+bug/1681320 for the discussion topic.
> >
> > If I understand you correctly, you want to be able to filter ironic
> > hosts by available PCI device, correct? Barring any possibility that
> > resource providers could do this for you yet, extending the nova ironic
> > driver to use the PCI passthrough filter sounds like the way to go.
>
> With ironic I thought everything is "passed through" by default,
> because there is no virtualization in the way. (I am possibly
> incorrectly assuming no BIOS tricks to turn off or re-assign PCI
> devices dynamically.)
>
> So I am assuming this is purely a scheduling concern. If so, why are
> the new custom resource classes not good enough? "ironic_blue" could
> mean two GPUs and two 10Gb nics, "ironic_yellow" could mean one GPU
> and one 1Gb nic, etc.
>
> Or is there something else that needs addressing here? Trying to
> describe what you get with each flavor to end users? Are you needing
> to aggregating similar hardware in a different way to the above
> resource class approach?
>
> Thanks,
> johnthetubaguy
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
The Secret Of Success is learning how to use pain and pleasure, instead
of having pain and pleasure use you. If You do that you are in control
of your life. If you don't life controls you.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] [elections] Available time and top priority

2017-04-11 Thread Flavio Percoco

On 10/04/17 13:52 -0500, Matt Riedemann wrote:
Lots of projects have alternating meeting times to accommodate 
contributors in different time zones, especially Europe and Asia.


The weekly TC meeting, however, does not.

I have to assume this has come up before and if so, why hasn't the TC 
adopted an alternating meeting schedule?


For example, it's 4am in Beijing when the TC meeting happens. It's 
already hard to get people from Asia into leadership roles within 
projects and especially across the community, in large part because of 
the timezone barrier.


How will the TC grow a diverse membership if it's not even held, at 
least every other week, in a timezone where the other half of the 
world can attend?


Glad you brought this up. John also happens to have hinted at this issue in
his reply to Thierry's question, and I've brought it up quite a few times in
the past.

I'm one of the community members affected by the time of our meetings, probably
not as badly as other members. I've been playing around with the idea of having
2 blocked slots (for alternate meetings) and only having ad-hoc meetings.

The governance process has evolved to the point where most of the discussions
can happen on the reviews themselves and there's no need for meetings for
(most?) of the changes.

Just to extend on your point about diversity, the problem with the TC meetings
is not only the time. Language is a barrier too. Some meetings are chilled, but
others have quite a big volume of messages going through. This is a problem for
non-native English speakers, because it's hard to follow messages coming from
the other 12 members and be quick enough to read them AND reply to them before
the topic changes. Let's not even talk about the times when there are *multiple*
conversations happening.

So, yeah, one thing I've been studying for the last couple of months, and that
I'd like to pursue in more depth, is the idea of not having TC meetings except
for when we really need to have them, and encouraging other types of
collaborations/interactions between the TC members and the rest of the
community (emails, governance reviews, 1x1 conversations for mentoring/helping
some members, etc). For those cases when meetings are needed, one of the 2
alternate times can be picked, and the meeting chair will have to moderate the
meeting in a way where focus is kept and chances are given to everyone. For the
latter we've used a "round table" model a couple of times, which IMHO worked
well enough.

I'll write a more detailed email soon(ish),
Flavio

--
@flaper87
Flavio Percoco


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [swift] Swift Multipart Upload - Manifest Content Length

2017-04-11 Thread Archana C

Dear List,

I am trying to do an SLO upload of a large object. The naming convention of the
segments will include a timestamp. The scenario I tried is described below:

1. Created a container
2. Used a CURL command to upload segments.
3. Uploaded the manifest file.

Result looks somewhat like this:

# swift list container1
myobject
myobject/slo/1491893023.559000/36700160/33554432/  --> 32MB
myobject/slo/1491893023.559000/36700160/33554432/0001  --> 3 MB

4. Again uploaded a few segments to the same container with the same object name
5. Uploaded the manifest file again with the newly added segments.

Result:

# swift list container1
myobject
myobject/slo/1491893023.559000/36700160/33554432/
myobject/slo/1491893023.559000/36700160/33554432/0001
myobject/slo/1491893211.561000/36700160/33554432/  --> 32MB
myobject/slo/1491893211.561000/36700160/33554432/0001  --> 3 MB

What must be the expected content length of the manifest file?
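
For context, the manifest I upload looks roughly like the following (a sketch
only; the etag values here are placeholders, not the real checksums):

# Hypothetical manifest.json; the etags are placeholders.
cat > manifest.json <<'EOF'
[
  {"path": "container1/myobject/slo/1491893023.559000/36700160/33554432/",
   "etag": "d41d8cd98f00b204e9800998ecf8427e", "size_bytes": 33554432},
  {"path": "container1/myobject/slo/1491893023.559000/36700160/33554432/0001",
   "etag": "d41d8cd98f00b204e9800998ecf8427e", "size_bytes": 3145728}
]
EOF
curl -X PUT -H "X-Auth-Token: $TOKEN" \
    "$STORAGE_URL/container1/myobject?multipart-manifest=put" -T manifest.json

My understanding so far, which may well be wrong: the Content-Length of the
manifest PUT itself is just the byte size of the JSON body, while a plain
HEAD/GET on the stored object reports the sum of the segment sizes, and
"?multipart-manifest=get" returns the JSON document itself.
__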
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [qa][nova][defcore] Removal of Compute Baremetal GET nodes tests from Tempest

2017-04-11 Thread Andrea Frittoli
Thanks for raising this point!

On Tue, Apr 11, 2017 at 5:34 AM Ghanshyam Mann 
wrote:

> Hi All,
>
> There is a tempest test for compute baremetal GET nodes[1]. This
> test involves ironic and nova: ironic to create baremetal nodes, and
> then nova APIs to GET the nodes.
> Nova only provides GET APIs for baremetal nodes, and those are already
> deprecated [2].
>
> As the nova baremetal APIs are deprecated, and the test needs Ironic to be
> present (and so an ironic baremetal service client), we propose to remove
> this test from tempest[3]. We have coverage of that feature/API in the
> ironic tempest plugin for node GET/POST, and of the nova API in nova
> functional tests[4].
>
+1

That test requires the ironic plugin to be installed, so it does not run in
any
of the Tempest gates today.

I think Tempest cannot / should not host tests that depend on plugins
(which
in turn depend on Tempest).

andreaf


>
> I have been objecting to this in the past, but now I feel it's not worth
> testing this in Tempest due to the complexity of the Ironic requirement.
> This is part of the tempest test removal standard; feel free to let us
> know in case of any objection.
>
>
> ..1
> https://git.openstack.org/cgit/openstack/tempest/tree/tempest/api/compute/admin/test_baremetal_nodes.py
> ..2
> https://developer.openstack.org/api-ref/compute/#bare-metal-nodes-os-baremetal-nodes-deprecated
> ..3 https://review.openstack.org/#/c/449158/
> ..4
> http://git.openstack.org/cgit/openstack/nova/tree/nova/tests/functional/api_sample_tests/test_baremetal_nodes.py
>
> -gmann
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [ironic] virtual meetup planning: date and time poll (action required)

2017-04-11 Thread Dmitry Tantsur

Hi all!

We agreed to proceed with planning of our virtual meetup in the end of April / 
beginning of May. Please vote for days and time slots when you're available: 
https://doodle.com/poll/p6rydi6stinqzfrz. Please do it by FRIDAY, Apr 14. (sorry 
for short notice, but I'm on PTO next week. I'm actually on PTO on Friday, but I 
hope I'll remember about this poll :)


Note that I've excluded Mondays, as we have our IRC meeting that day. I've
added two time slots that, between them, should work all over the planet.
Please let me know if I got something wrong about timezones in the poll.


Please also keep adding potential topics to 
https://etherpad.openstack.org/p/ironic-virtual-meetup. We plan on picking 2
meetings in different time slots, but we may pick more (or fewer) depending on
the number of topics.


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [qa][nova][defcore] Removal of Compute Baremetal GET nodes tests from Tempest

2017-04-11 Thread Ghanshyam Mann
Hi All,

There is a tempest test for compute baremetal GET nodes[1]. This
test involves ironic and nova: ironic to create baremetal nodes, and
then nova APIs to GET the nodes.
Nova only provides GET APIs for baremetal nodes, and those are already
deprecated [2].

As the nova baremetal APIs are deprecated, and the test needs Ironic to be
present (and so an ironic baremetal service client), we propose to remove
this test from tempest[3]. We have coverage of that feature/API in the
ironic tempest plugin for node GET/POST, and of the nova API in nova
functional tests[4].

I have been objecting to this in the past, but now I feel it's not worth
testing this in Tempest due to the complexity of the Ironic requirement.
This is part of the tempest test removal standard; feel free to let us
know in case of any objection.


..1 
https://git.openstack.org/cgit/openstack/tempest/tree/tempest/api/compute/admin/test_baremetal_nodes.py
..2 
https://developer.openstack.org/api-ref/compute/#bare-metal-nodes-os-baremetal-nodes-deprecated
..3 https://review.openstack.org/#/c/449158/
..4 
http://git.openstack.org/cgit/openstack/nova/tree/nova/tests/functional/api_sample_tests/test_baremetal_nodes.py

-gmann



[openstack-dev] [heat] New resource implementation workflow

2017-04-11 Thread Norbert Illés

Hi everyone,

Two of my colleagues and I are working on adding Neutron Trunk support
to Heat. One of us is working on the resource implementation, one on the
unit tests, and one on the functional tests. All of this looks like a big
chunk of work, so I'm wondering how we can divide it into smaller parts.


One idea is to split the work along life cycle methods (create, update,
delete, etc.), for example:
 * Implement resource creation + the relevant unit tests + the relevant
functional tests; review and merge these
 * Implement the delete operation + the relevant unit tests + the
relevant functional tests; review and merge these
 * Move on to implementing the update operation + tests... and so on.

Lastly, once the last part of the code and tests has merged, we can
document the new resource, create example templates in heat-templates, etc.
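
To make the increments concrete, here is a rough sketch of what the first
slice (create + delete only) could look like, based on Heat's resource
plugin interface. All class, property, and client call names below are
illustrative guesses on our side, not the final implementation:

    # Rough sketch of the first increment (create + delete only); all
    # names are illustrative, and the neutronclient trunk calls and the
    # client plugin helpers should be verified against the actual trees.
    from heat.engine import properties, resource

    class Trunk(resource.Resource):
        PROPERTIES = (PORT,) = ('port',)

        properties_schema = {
            PORT: properties.Schema(
                properties.Schema.STRING,
                'ID of the parent port.',
                required=True),
        }

        def handle_create(self):
            # Assumes a neutronclient with trunk support is available.
            trunk = self.client('neutron').create_trunk(
                {'trunk': {'port_id': self.properties[self.PORT]}})
            self.resource_id_set(trunk['trunk']['id'])

        def handle_delete(self):
            if self.resource_id is None:
                return
            with self.client_plugin('neutron').ignore_not_found:
                self.client('neutron').delete_trunk(self.resource_id)

    def resource_mapping():
        return {'OS::Neutron::Trunk': Trunk}

Even this small slice would be enough to exercise stack create and delete
end to end before update support lands.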


Does this workflow sound feasible?

I'm mostly concerned that there will be a period when only a half-done
feature is merged into the Heat codebase, and I'm not sure whether that is
acceptable.


Has anybody implemented a new resource as a team? I would love to hear
how others have organized this kind of work.


Cheers,
Norbert



Re: [openstack-dev] [tc] project-navigator-data repo live - two choices need input

2017-04-11 Thread Sebastian Marcet
Monty, thx so much!
Basically we have the following structure:

Release -> Component -> Version

so I think we could consume this pretty easily; the only caveat is that we
still need to add the Release and Component data from our side, but I guess
that is doable.
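
Just to illustrate our side of it (a toy sketch in plain Python; every name
and sample value here is invented, this is not actual navigator code),
folding the consumed data into that nesting could look like:

    # Toy sketch: fold flat (release, component, versions) records into
    # the Release -> Component -> Version nesting described above; the
    # Release/Component metadata we keep would be merged in here too.
    from collections import defaultdict

    def build_tree(records):
        tree = defaultdict(dict)
        for release, component, versions in records:
            tree[release][component] = {'versions': list(versions)}
        return dict(tree)

    print(build_tree([('ocata', 'nova', ['2.1', '2.42'])]))
    # {'ocata': {'nova': {'versions': ['2.1', '2.42']}}}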

Thx so much!!! I will take a look at both formats.

cheers

2017-04-11 8:44 GMT-03:00 Monty Taylor :

> Hey all,
>
> We've got the project-navigator-data repo created and there are two
> proposals up for what the content should look like.
>
> Could TC folks please add openstack/project-navigator-data to your watch
> lists, and go review:
>
> https://review.openstack.org/#/c/454688
> https://review.openstack.org/#/c/454691
>
> so we can come to an agreement on which version we prefer? Maybe just +2
> the one you prefer (or both if you don't care) and only -1 if you
> specifically dislike one over the other?
>
> Thanks!
> Monty
>



-- 
Sebastian Marcet
https://ar.linkedin.com/in/smarcet
SKYPE: sebastian.marcet


[openstack-dev] [tc] project-navigator-data repo live - two choices need input

2017-04-11 Thread Monty Taylor

Hey all,

We've got the project-navigator-data repo created and there are two 
proposals up for what the content should look like.


Could TC folks please add openstack/project-navigator-data to your watch 
lists, and go review:


https://review.openstack.org/#/c/454688
https://review.openstack.org/#/c/454691

so we can come to an agreement on which version we prefer? Maybe just +2 
the one you prefer (or both if you don't care) and only -1 if you 
specifically dislike one over the other?


Thanks!
Monty



Re: [openstack-dev] [tc] [elections] Available time and top priority

2017-04-11 Thread Thierry Carrez
Matt Riedemann wrote:
> Thanks Chris. This reminded me of something I wanted to ask about, to
> all TC members, or those running for a seat.
> 
> Lots of projects have alternating meeting times to accommodate
> contributors in different time zones, especially Europe and Asia.
> 
> The weekly TC meeting, however, does not.
> 
> I have to assume this has come up before and if so, why hasn't the TC
> adopted an alternating meeting schedule?
> 
> For example, it's 4am in Beijing when the TC meeting happens. It's
> already hard to get people from Asia into leadership roles within
> projects and especially across the community, in large part because of
> the timezone barrier.
> 
> How will the TC grow a diverse membership if it's not even held, at
> least every other week, in a timezone where the other half of the world
> can attend?

The current meeting time is more a consequence of the current membership
composition than a hard rule. There is, however, a strong chicken-and-egg
effect at play here (as you point out): it's easier to get involved in
the TC if you can regularly attend meetings, so we can't really wait
until someone is elected to change the time.

Alternating meeting times would certainly improve the situation, but I'm
not sure they are the best solution. Personally I would rather try to
decrease our dependency on meetings. Most of the meeting time is
basically used to force attention to a set of specific proposals, and to
communicate news. A lot of the comments/questions raised and answered at
the meeting could be raised and answered directly on the reviews and on
specific discussion threads. I don't think there is anything we do in
meetings that we could not do elsewhere, in a less synchronous
environment, avoiding the timezone constraints and the noisy IRC
discussion that drives most non-native speakers away.

It's not an easy change, though. While it's easy to just stop meeting,
the usual result is that without the regular weekly drumbeat forcing all
TC members' attention to TC matters, everything grinds to a halt.
So if we end meetings, we need to replace them with some other
efficient synchronization mechanism.

I'm very interested in exploring our options there. The TC meeting is
not the only one that could benefit from such an inclusive approach to
coordination.

-- 
Thierry Carrez (ttx)



Re: [openstack-dev] [tc] [elections] Available time and top priority

2017-04-11 Thread Thierry Carrez
Thierry Carrez wrote:
> So my question is the following: if elected, how much time do you think
> you'll be able to dedicate to Technical Committee affairs (reviewing
> proposed changes and pushing your own) ?

Keeping track of the technical direction of OpenStack is an important part
of my role at the OpenStack Foundation. I would say I spend around a
third of my work time directly on TC matters. A significant portion of
that time is spent on duties that are linked to being the TC chair:
processing governance change requests, setting the meeting agenda,
preparing the weekly meeting, following up on actions, and handling
communications with the Board and UC.

> If there was ONE thing, one
> initiative, one change you will actively push in the six months between
> this election round and the next, what would it be ?

Reading everyone else's replies, I'm happy to see that many of the complex
issues I consider priorities are also listed by other candidates. So
I'll raise something simple that wasn't mentioned yet.

I think the Technical Committee needs to be more prescriptive and clear
about what types of contribution are useful, and where they are the most
useful. We can't assume that everyone knows what a strategic
contribution to OpenStack looks like. We need to come up with an
opinionated help-wanted list of high-value objectives, so that everyone
knows where their contribution will make a difference. Talking with
companies and contributors from Asia, I hear that they need guidance on
where to apply their resources. An "official" list would go a long way.
Giving more recognition to the organizations and individuals helping in
those critical areas would also help. If the only yardstick you give
people to measure their contribution is Stackalytics, you get all sorts
of wasted effort. While we could afford such waste in the past, I don't
think that's a luxury we have anymore.

-- 
Thierry Carrez (ttx)



Re: [openstack-dev] [Cinder][Manila]share or volume's size unit

2017-04-11 Thread Duncan Thomas
Changing the size to a float creates rounding issues, and it means
changes to the DB, the API definition, and every single client and
client library out there, for very little gain.
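
The rounding point is easy to demonstrate with a few lines of plain
Python (nothing Cinder-specific, just standard binary floats):

    # Binary floats cannot represent most decimal fractions exactly, so
    # size and quota arithmetic drifts: ten 0.1G volumes != 1.0G.
    total = sum(0.1 for _ in range(10))
    print(total)             # 0.9999999999999999
    print(total == 1.0)      # False
    print(0.1 + 0.2 == 0.3)  # False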

On 10 April 2017 at 04:41, jun zhong  wrote:
> I agree with you that extend might be one way to solve the problem.
>
> By the way, how about another approach, where we could import volumes
> with float sizes, such as 2.5G or 3.4G?
>
> Did the community consider this in the beginning?
>
>
> 2017-04-07 20:16 GMT+08:00 Duncan Thomas :
>>
>> Cinder will store the volume as 1G in the database (and quota) even if
>> the volume is only 500M. It will stay as 500M when it is attached
>> though. It's a side effect of importing volumes, but that's usually a
>> pretty uncommon thing to do, so shouldn't affect many people or cause
>> a huge amount of trouble.
>>
>> There are also backends that allocate in units greater than 1G, and so
>> sometimes give you slightly bigger volumes than you asked for. Cinder
>> doesn't go out of its way to support this; again, the database and
>> quota will reflect what you asked for, while the attached volume will
>> be a slightly different size.
>>
>> In your case, extend might be one way to solve the problem, if your
>> backend supports it. I'm not certain what will happen if you ask
>> Cinder to extend to 1G a volume it already thinks is 1G... if it
>> doesn't work, please file a bug.
>>
>> On 7 April 2017 at 09:01, jun zhong  wrote:
>> > Hi guys,
>> >
>> > We know the share's size unit is gigabytes in Manila, and the volume's
>> > size unit is also gigabytes in Cinder. But there is a problem: the size
>> > is not exact after we migrate a traditional environment to OpenStack.
>> > For example:
>> > 1. There is an original volume (vol_1) of 500MB in the traditional
>> > environment.
>> > 2. We want to use OpenStack to manage this volume (vol_1).
>> > 3. We can only use a 1GB volume to manage the original volume (vol_1),
>> > because the Cinder volume size cannot be 500MB.
>> > How should we deal with this? Could we set the volume or share's size
>> > unit to a float or something else? Or add a new MB unit? Or just
>> > extend the original volume size?
>> >
>> >
>> > Thanks
>> > jun
>> >
>> >
>>
>>
>>
>> --
>> Duncan Thomas
>>
>
>
>



-- 
Duncan Thomas



Re: [openstack-dev] [heat] New resource implementation workflow

2017-04-11 Thread Pavlo Shchelokovskyy
Hi Norbert,

my biggest concern with the workflow you've shown is that in the meantime
it would be possible to create undeletable stacks, or stacks that leave
resources behind after being deleted. As the biggest challenge is usually
in updates (when it is not UpdateReplace), I'd suggest implementing create
and delete together. To ease development, you could start with only the
basic properties of the resource, if it is possible to figure out that set
(with some sane defaults where those are absent in the API), and add more
tunable resource properties later.

I also remember that Heat has something like a 'hidden' flag in the
resource plugin declaration. Usually it is used to hide deprecated
resource types, so that new stacks with those cannot be created but old
ones can at least be deleted. Maybe you could use that flag while
developing, until you think the resource is usable, although it might
complicate your own testing of those resources.
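
From memory, the flag looks roughly like this in a plugin declaration
(please double-check the exact attribute and constant names against the
Heat tree before relying on it):

    # Sketch from memory; verify the names against the Heat source.
    from heat.engine import resource, support

    class SomeNewResource(resource.Resource):
        # Hidden: not offered for new stacks, but existing stacks that
        # use the resource type can still be operated on / deleted.
        support_status = support.SupportStatus(status=support.HIDDEN)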

Cheers,

Dr. Pavlo Shchelokovskyy
Senior Software Engineer
Mirantis Inc
www.mirantis.com

On Tue, Apr 11, 2017 at 3:33 PM, Norbert Illés  wrote:

> Hi everyone,
>
> Two of my colleagues and I are working on adding Neutron Trunk support to
> Heat. One of us is working on the resource implementation, one on the unit
> tests, and one on the functional tests. All of this looks like a big chunk
> of work, so I'm wondering how we can divide it into smaller parts.
>
> One idea is to split the work along life cycle methods (create, update,
> delete, etc.), for example:
>  * Implement resource creation + the relevant unit tests + the relevant
> functional tests; review and merge these
>  * Implement the delete operation + the relevant unit tests + the
> relevant functional tests; review and merge these
>  * Move on to implementing the update operation + tests... and so on.
>
> Lastly, once the last part of the code and tests has merged, we can
> document the new resource, create example templates in heat-templates, etc.
>
> Does this workflow sound feasible?
>
> I'm mostly concerned that there will be a period when only a half-done
> feature is merged into the Heat codebase, and I'm not sure whether that is
> acceptable.
>
> Has anybody implemented a new resource as a team? I would love to hear
> how others have organized this kind of work.
>
> Cheers,
> Norbert
>