Re: [openstack-dev] [neutron] [nfv] VM-based VLAN trunking blueprints

2014-10-28 Thread A, Keshava
Hi,

1.   How many trunk ports can be created? Will there be any Active-Standby 
concept?



2.   Is it possible to configure multiple IP addresses on these ports?

In the IPv6 case there can be multiple primary addresses configured; will this 
be supported?



3.   If required, can these ports be aggregated into a single one 
dynamically?



4.   Will there be a requirement to handle nested tagged packets on such 
interfaces?



Thanks & Regards,
Keshava

From: Ian Wells [mailto:ijw.ubu...@cack.org.uk]
Sent: Monday, October 27, 2014 9:45 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [neutron] [nfv] VM-based VLAN trunking blueprints

On 25 October 2014 15:36, Erik Moe <erik@ericsson.com> wrote:
Then I tried to just use the trunk network as a plain pipe to the L2-gateway 
and connect to normal Neutron networks. One issue is that the L2-gateway will 
bridge the networks, but the services in the network you bridge to are unaware 
of your existence. This IMO is OK when bridging a Neutron network to some remote 
network, but if you have a Neutron VM and want to utilize various resources in 
another Neutron network (since the one you sit on does not have any resources), 
things get, let's say, non-streamlined.

Indeed.  However, non-streamlined is not the end of the world, and I wouldn't 
want to have to tag all VLANs a port is using on the port in advance of using 
it (this works for some use cases, and makes others difficult, particularly if 
you just want a native trunk and are happy for Openstack not to have insight 
into what's going on on the wire).

 Another issue with the trunk network is that it puts new requirements on the 
infrastructure. It needs to be able to handle VLAN-tagged frames. For a VLAN-based 
network it would be QinQ.

Yes, and that's the point of the VLAN trunk spec, where we flag a network as 
passing VLAN tagged packets; if the operator-chosen network implementation 
doesn't support trunks, the API can refuse to make a trunk network.  Without it 
we're still in the situation that on some clouds passing VLANs works and on 
others it doesn't, and that the tenant can't actually tell in advance which 
sort of cloud they're working on.
Trunk networks are a requirement for some use cases independent of the port 
awareness of VLANs.  Based on the maxim, 'make the easy stuff easy and the hard 
stuff possible' we can't just say 'no Neutron network passes VLAN tagged 
packets'.  And even if we did, we're evading a problem that exists with exactly 
one sort of network infrastructure - VLAN tagging for network separation - 
while making it hard to use for all of the many other cases in which it would 
work just fine.

In summary, if we did port-based VLAN knowledge I would want to be able to use 
VLANs without having to use it (in much the same way that I would like, in 
certain circumstances, not to have to use Openstack's address allocation and 
DHCP - it's nice that I can, but I shouldn't be forced to).
My requirements were to have low/no extra cost for VMs using VLAN trunks 
compared to normal ports, and no new bottlenecks or single points of failure. Due to 
this and the previous issues I implemented the L2 gateway in a distributed fashion, 
and since trunk networks could not be realized in reality I only had them in the 
model and optimized them away.

Again, this is down to your choice of VLAN tagged networking and/or the OVS ML2 
driver; it doesn't apply to all deployments.

But the L2-gateway + trunk network has a flexible API; what if someone connects 
two VMs to one trunk network? Well, that's hard to optimize away.

That's certainly true, but it wasn't really intended to be optimised away.
Anyway, due to these and other issues, I limited my scope and switched to the 
current trunk port/subport model.

The code that is up for review is functional: you can boot a VM with a trunk port 
+ subports (each subport maps to a VLAN). The VM can send/receive VLAN traffic. 
You can add/remove subports on a running VM. You can specify an IP address per 
subport and use DHCP to retrieve them, etc.
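(Aside: for readers unfamiliar with the model, a hypothetical workflow might look 
roughly like the following; the command names and flags are illustrative only, not 
the exact API of the patch under review.)

    # create a parent (trunk) port and a port on the network wanted as a VLAN subport
    neutron port-create mgmt-net --name trunk-parent
    neutron port-create vlan100-net --name subport-100
    # associate subport-100 with the parent as VLAN 100 (the API under review),
    # then boot the VM on trunk-parent; frames the guest tags with VLAN 100 are
    # delivered to vlan100-net, while untagged traffic stays on mgmt-net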

I'm coming to realise that the two solutions address different needs - the VLAN 
port one is much more useful for cases where you know what's going on in the 
network and you want Openstack to help, but it's just not broad enough to solve 
every problem.  It may well be that we want both solutions, in which case we 
just need to agree that 'we shouldn't do trunk networking because VLAN aware 
ports solve this problem' is not a valid argument during spec review.
--
Ian.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] vm can not transport large file under neutron ml2 + linux bridge + vxlan

2014-10-28 Thread Li Tianqing
The problem is that it does not happen at the beginning of transmitting a large 
file. It is only after some packets have been transmitted that the connection is 
choked. After the connection is choked, from the bridge on the compute host we can 
see the sender sending packets, but the receiver does not get them.
If it were PMTUD, then the packets could not be transmitted from the very 
beginning.


At 2014-10-28 14:10:09, "Ian Wells"  wrote:

Path MTU discovery works on a path - something with an L3 router in the way - 
where the outbound interface has a smaller MTU than the inbound one.  You're 
transmitting across an L2 network - no L3 routers present.  You send a 1500 
byte packet, the network fabric (which is not L3, has no address, and therefore 
has no means to answer you) does all that it can do with that packet - it drops 
it.  The sender retransmits, assuming congestion, but the same thing happens.  
Eventually the sender decides there's a network problem and times out.
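
For reference, the 1450 figure in this deployment comes from the VXLAN 
encapsulation overhead on a 1500-byte physical network; a rough breakdown, 
assuming an IPv4 underlay:

    1500  physical MTU
    -  20  outer IPv4 header
    -   8  outer UDP header
    -   8  VXLAN header
    -  14  inner Ethernet header
    = 1450  left for the guest's IP packet

so a guest that believes its MTU is 1500 emits packets the fabric cannot carry.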

This is a common problem with Openstack deployments, although various features 
of the virtual networking let you get away with it, with some configs and not 
others.  OVS used to fake a PMTU exceeded message from the destination if you 
tried to pass an overlarge packet - not in spec, but it hid the problem nicely. 
 I have a suspicion that some implementations will fragment the containing UDP 
packet, which is also not in spec and also solves the problem (albeit with poor 
performance).

The right answer for you is to set the MTU in your machines to the same MTU 
you've given the network, that is, 1450 bytes.  You can do this by setting a 
DHCP option for MTU, providing your VMs support that option (search the web for 
the solution, I don't have it offhand) or lower the MTU by hand or by script 
when you start your VM.
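
A minimal sketch of both approaches, assuming the Neutron DHCP agent runs dnsmasq 
and is pointed at an extra config file via dnsmasq_config_file in dhcp_agent.ini:

    # dnsmasq extra config: push DHCP option 26 (interface MTU) to guests
    dhcp-option-force=26,1450

    # or, inside the guest, lower the MTU by hand
    ip link set dev eth0 mtu 1450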


The right answer for everyone is to properly determine and advertise the 
network MTU to VMs (which, with provider networks, is not even consistent from 
one network to the next) and that's the spec Kyle is referring to.  We'll be 
fixing this in Kilo.
--

Ian.




On 27 October 2014 20:14, Li Tianqing  wrote:







--

Best
Li Tianqing



At 2014-10-27 17:42:41, "Ihar Hrachyshka"  wrote:
>
>On 27/10/14 02:18, Li Tianqing wrote:
>> Hello, Right now, we test neutron under havana release. We
>> configured network_device_mtu=1450 in neutron.conf, After create
>> vm, we found the vm interface's mtu is 1500, the ping, ssh, is ok.
>> But if we scp large file between vms then scp display 'stalled'.
>> And iperf is also can not completed. If we configured vm's mtu to
>> 1450, then iperf, scp all is ok. If we iperf with -M 1300, then the
>> iperf is ok too. The vms path mtu discovery is set by default. I do
>> not know why the vm whose mtu is 1500 can not send large file.
>
>There is a neutron spec currently in discussion for Kilo to finally
>fix MTU issues due to tunneling, that also tries to propagate MTU

>inside instances: https://review.openstack.org/#/c/105989/


The problem is that I do not know why the VM with a 1500 MTU cannot send a large 
file. I found the packets are all sent out with DF set; is it because DF, set by 
default by Linux, causes the packets to be dropped? And does the application not 
handle the ICMP packet that comes back with the smaller MTU?
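
(Aside: a quick way to test the PMTU theory from inside a VM is to send fixed-size 
probes with DF set; on a path where nothing sends back an ICMP 'fragmentation 
needed', oversized probes simply vanish. The sizes below are ICMP payload only and 
assume a 1450-byte path.)

    # 1472-byte payload + 28 bytes of ICMP/IP headers = 1500 bytes: expect losses
    ping -M do -s 1472 <other VM>
    # 1422 + 28 = 1450 bytes: expect replies
    ping -M do -s 1422 <other VM>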


>
>/Ihar
>
>___
>OpenStack-dev mailing list
>OpenStack-dev@lists.openstack.org
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] vm can not transport large file under neutron ml2 + linux bridge + vxlan

2014-10-28 Thread A, Keshava
Hi,

Does OpenStack currently have any framework to notify the Tenant/Service-VM with 
this kind of notification, based on the VM's interest?
A VM may be very interested in notifications such as:

1.   Path MTU.

2.   Based on specific incoming tenant traffic, blocking/allowing a particular 
traffic flow at the infrastructure level itself, instead of at the VM.

This may require OpenStack infrastructure notification support to the 
Tenant/Service VM.

…
Thanks & regards,
Keshava

From: Ian Wells [mailto:ijw.ubu...@cack.org.uk]
Sent: Tuesday, October 28, 2014 11:40 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [neutron] vm can not transport large file under 
neutron ml2 + linux bridge + vxlan

Path MTU discovery works on a path - something with an L3 router in the way - 
where the outbound interface has a smaller MTU than the inbound one.  You're 
transmitting across an L2 network - no L3 routers present.  You send a 1500 
byte packet, the network fabric (which is not L3, has no address, and therefore 
has no means to answer you) does all that it can do with that packet - it drops 
it.  The sender retransmits, assuming congestion, but the same thing happens.  
Eventually the sender decides there's a network problem and times out.

This is a common problem with Openstack deployments, although various features 
of the virtual networking let you get away with it, with some configs and not 
others.  OVS used to fake a PMTU exceeded message from the destination if you 
tried to pass an overlarge packet - not in spec, but it hid the problem nicely. 
 I have a suspicion that some implementations will fragment the containing UDP 
packet, which is also not in spec and also solves the problem (albeit with poor 
performance).

The right answer for you is to set the MTU in your machines to the same MTU 
you've given the network, that is, 1450 bytes.  You can do this by setting a 
DHCP option for MTU, providing your VMs support that option (search the web for 
the solution, I don't have it offhand) or lower the MTU by hand or by script 
when you start your VM.
The right answer for everyone is to properly determine and advertise the 
network MTU to VMs (which, with provider networks, is not even consistent from 
one network to the next) and that's the spec Kyle is referring to.  We'll be 
fixing this in Kilo.
--
Ian.

On 27 October 2014 20:14, Li Tianqing <jaze...@163.com> wrote:




--
Best
Li Tianqing


At 2014-10-27 17:42:41, "Ihar Hrachyshka" <ihrac...@redhat.com> wrote:

>

>On 27/10/14 02:18, Li Tianqing wrote:

>> Hello, Right now, we test neutron under havana release. We

>> configured network_device_mtu=1450 in neutron.conf, After create

>> vm, we found the vm interface's mtu is 1500, the ping, ssh, is ok.

>> But if we scp large file between vms then scp display 'stalled'.

>> And iperf is also can not completed. If we configured vm's mtu to

>> 1450, then iperf, scp all is ok. If we iperf with -M 1300, then the

>> iperf is ok too. The vms path mtu discovery is set by default. I do

>> not know why the vm whose mtu is 1500 can not send large file.

>

>There is a neutron spec currently in discussion for Kilo to finally

>fix MTU issues due to tunneling, that also tries to propagate MTU

>inside instances: https://review.openstack.org/#/c/105989/



The problem is that I do not know why the VM with a 1500 MTU cannot send a large file.
I found the packets are all sent out with DF set; is it because DF, set by default 
by Linux, causes the packets to be dropped? And does the application not handle the 
ICMP packet that comes back with the smaller MTU?





>

>/Ihar


>

>___

>OpenStack-dev mailing list

>OpenStack-dev@lists.openstack.org

>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Fuel] About deployment progress calculation

2014-10-28 Thread Dmitriy Shulyak
Hello everyone,

I want to raise concerns about the progress bar and its usability.
In my opinion the current approach has several downsides:
1. No valuable information
2. Very fragile; you need to change code in several places not to break it
3. Will not work with pluggable code

Log parsing works under one basic assumption - that we are in control of
all tasks, so we can use mappings to logs with a certain pattern.
It won't work with a pluggable architecture, and I am talking not about
fuel-plugins and the way they will be done in 6.0, but about the whole idea of a
pluggable architecture; I assume that internal features will be implemented as
granular, self-contained plugins, and that it will be possible to accomplish this
not only with puppet, but with any other tool that suits you.
Asking the person who provides a plugin (extension) to add mappings to logs
feels like the weirdest thing ever.

*What can be done to improve the usability of progress calculation?*
I see several requirements here:
1. Provide valuable information
  - A correct representation of the time a task takes to run
  - What is going on on the target node at any point of the deployment?
2. Plugin friendly; the approach we take should be flexible and extendable

*Implementation:*
In the near future deployment will be split into tasks. They will be big, not
granular (like deploy controller, deploy compute), but this does not matter,
because we can start to estimate them.
Each task will provide an estimated time.
At first it will be set manually by the person who develops the plugin (tasks),
but this can be improved so that the information is provided automatically
(or semi-automatically) by the fuel-stats application.
It will require the orchestrator to report 2 simple entities:
- the time delta of the task
- the task identity
The UI will be able to show percentages anyway, but additionally it will show what
is running on the target node (see the sketch below).
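
A minimal sketch of that calculation (hypothetical names; the per-task estimates 
would come from the plugin author or from fuel-stats):

    # Rough percent-complete from per-task time estimates plus the
    # (task identity, elapsed time delta) reports described above.
    ESTIMATES = {'deploy_controller': 600, 'deploy_compute': 300}  # seconds

    def progress(reports):
        """reports: list of (task_id, elapsed_seconds) from the orchestrator."""
        total = sum(ESTIMATES.values())
        done = sum(min(elapsed, ESTIMATES.get(task, 0))
                   for task, elapsed in reports)
        current = reports[-1][0] if reports else None
        return int(100.0 * done / total), current

    # e.g. progress([('deploy_controller', 600), ('deploy_compute', 120)])
    # -> (80, 'deploy_compute')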

Of course this is not about 6.0, but please take a look, and let's try to agree
on the right way to solve this task, because log parsing will not work with a
data-driven orchestrator and a pluggable architecture.
Thank you
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] vm can not transport large file under neutron ml2 + linux bridge + vxlan

2014-10-28 Thread Li Tianqing
Ian, you are right, the receiver only receives packets smaller than 1450. 
Because the sender does not send large packets at the beginning, tcpdump can 
catch some small packets.


Another question about the MTU: what if we clear DF in the IP packets? Can L2 
then split the packets into a smaller MTU size?

At 2014-10-28 15:15:51, "Li Tianqing"  wrote:

The problem is that it does not happen at the beginning of transmitting a large 
file. It is only after some packets have been transmitted that the connection is 
choked. After the connection is choked, from the bridge on the compute host we can 
see the sender sending packets, but the receiver does not get them.
If it were PMTUD, then the packets could not be transmitted from the very 
beginning.


At 2014-10-28 14:10:09, "Ian Wells"  wrote:

Path MTU discovery works on a path - something with an L3 router in the way - 
where the outbound interface has a smaller MTU than the inbound one.  You're 
transmitting across an L2 network - no L3 routers present.  You send a 1500 
byte packet, the network fabric (which is not L3, has no address, and therefore 
has no means to answer you) does all that it can do with that packet - it drops 
it.  The sender retransmits, assuming congestion, but the same thing happens.  
Eventually the sender decides there's a network problem and times out.

This is a common problem with Openstack deployments, although various features 
of the virtual networking let you get away with it, with some configs and not 
others.  OVS used to fake a PMTU exceeded message from the destination if you 
tried to pass an overlarge packet - not in spec, but it hid the problem nicely. 
 I have a suspicion that some implementations will fragment the containing UDP 
packet, which is also not in spec and also solves the problem (albeit with poor 
performance).

The right answer for you is to set the MTU in your machines to the same MTU 
you've given the network, that is, 1450 bytes.  You can do this by setting a 
DHCP option for MTU, providing your VMs support that option (search the web for 
the solution, I don't have it offhand) or lower the MTU by hand or by script 
when you start your VM.


The right answer for everyone is to properly determine and advertise the 
network MTU to VMs (which, with provider networks, is not even consistent from 
one network to the next) and that's the spec Kyle is referring to.  We'll be 
fixing this in Kilo.
--

Ian.




On 27 October 2014 20:14, Li Tianqing  wrote:







--

Best
Li Tianqing



At 2014-10-27 17:42:41, "Ihar Hrachyshka"  wrote:
>
>On 27/10/14 02:18, Li Tianqing wrote:
>> Hello, Right now, we test neutron under havana release. We
>> configured network_device_mtu=1450 in neutron.conf, After create
>> vm, we found the vm interface's mtu is 1500, the ping, ssh, is ok.
>> But if we scp large file between vms then scp display 'stalled'.
>> And iperf is also can not completed. If we configured vm's mtu to
>> 1450, then iperf, scp all is ok. If we iperf with -M 1300, then the
>> iperf is ok too. The vms path mtu discovery is set by default. I do
>> not know why the vm whose mtu is 1500 can not send large file.
>
>There is a neutron spec currently in discussion for Kilo to finally
>fix MTU issues due to tunneling, that also tries to propagate MTU

>inside instances: https://review.openstack.org/#/c/105989/


The problem is that I do not know why the VM with a 1500 MTU cannot send a large file. 
I found the packets are all sent out with DF set; is it because DF, set by default 
by Linux, causes the packets to be dropped? And does the application not handle the 
ICMP packet that comes back with the smaller MTU?


>
>/Ihar
>
>___
>OpenStack-dev mailing list
>OpenStack-dev@lists.openstack.org
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev






___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] Convergence prototyping

2014-10-28 Thread Anant Patil
On 23-Oct-14 23:40, Zane Bitter wrote:
> Hi folks,
> I've been looking at the convergence stuff, and become a bit concerned 
> that we're more or less flying blind (or at least I have been) in trying 
> to figure out the design, and also that some of the first implementation 
> efforts seem to be around the stuff that is _most_ expensive to change 
> (e.g. database schemata).
> 
> What we really want is to experiment on stuff that is cheap to change 
> with a view to figuring out the big picture without having to iterate on 
> the expensive stuff. To that end, I started last week to write a little 
> prototype system to demonstrate the concepts of convergence. (Note that 
> none of this code is intended to end up in Heat!) You can find the code 
> here:
> 
> https://github.com/zaneb/heat-convergence-prototype
> 
> Note that this is a *very* early prototype. At the moment it can create 
> resources, and not much else. I plan to continue working on it to 
> implement updates and so forth. My hope is that we can develop a test 
> framework and scenarios around this that can eventually be transplanted 
> into Heat's functional tests. So the prototype code is throwaway, but 
> the tests we might write against it in future should be useful.
> 
> I'd like to encourage anyone who needs to figure out any part of the 
> design of convergence to fork the repo and try out some alternatives - 
> it should be very lightweight to do so. I will also entertain pull 
> requests (though I see my branch primarily as a vehicle for my own 
> learning at this early stage, so if you want to go in a different 
> direction it may be best to do so on your own branch), and the issue 
> tracker is enabled if there is something you want to track.
> 

We are working on a PoC for convergence and have some of the patches lined
up for review under the convergence-poc topic. We planned for the changes to be
incremental to the existing design instead of prototyping them separately, in
order to make it easier for everyone to understand what we are trying to
achieve and to assess what it takes to do it (in terms of the amount of
changes).

The functional tests are going to be great; we all can move with
confidence once they are in place.

> I have learned a bunch of stuff already:
> 
> * The proposed spec for persisting the dependency graph 
> (https://review.openstack.org/#/c/123749/1) is really well done. Kudos 
> to Anant and the other folks who had input to it. I have left comments 
> based on what I learned so far from trying it out.
> 
> 
> * We should isolate the problem of merging two branches of execution 
> (i.e. knowing when to trigger a check on one resource that depends on 
> multiple others). Either in a library (like taskflow) or just a separate 
> database table (like my current prototype). Baking it into the 
> orchestration algorithms (e.g. by marking nodes in the dependency graph) 
> would be a colossal mistake IMHO.
> 
> 
> * Our overarching plan is backwards.
> 
> There are two quite separable parts to this architecture - the worker 
> and the observer. Up until now, we have been assuming that implementing 
> the observer would be the first step. Originally we thought that this 
> would give us the best incremental benefits. At the mid-cycle meetup we 
> came to the conclusion that there were actually no real incremental 
> benefits to be had until everything was close to completion. I am now of 
> the opinion that we had it exactly backwards - the observer 
> implementation should come last. That will allow us to deliver 
> incremental benefits from the observer sooner.
> 
> The problem with the observer is that it requires new plugins. (That 
> sucks BTW, because a lot of the value of Heat is in having all of these 
> tested, working plugins. I'd love it if we could take the opportunity to 
> design a plugin framework such that plugins would require much less 
> custom code, but it looks like a really hard job.) Basically this means 
> that convergence would be stalled until we could rewrite all the 
> plugins. I think it's much better to implement a first stage that can 
> work with existing plugins *or* the new ones we'll eventually have with 
> the observer. That allows us to get some benefits soon and further 
> incremental benefits as we convert plugins one at a time. It should also 
> mean a transition period (possibly with a performance penalty) for 
> existing plugin authors, and for things like HARestarter (can we please 
> please deprecate it now?).
> 
> So the two phases I'm proposing are:
>   1. (Workers) Distribute tasks for individual resources among workers; 
> implement update-during-update (no more locking).
>   2. (Observers) Compare against real-world values instead of template 
> values to determine when updates are needed. Make use of notifications 
> and such.
> 
> I believe it's quite realistic to aim to get #1 done for Kilo. There 
> could also be a phase 1.5, where we use the existing stack-check 
> mechanism to de

Re: [openstack-dev] [neutron] [nfv] VM-based VLAN trunking blueprints

2014-10-28 Thread Ian Wells
This all appears to be referring to trunking ports, rather than anything
else, so I've addressed the points in that respect.

On 28 October 2014 00:03, A, Keshava  wrote:

>  Hi,
>
> 1.   How many Trunk ports can be created ?
>
Why would there be a limit?

> Will there be any Active-Standby concept?
>
I don't believe active-standby, or any HA concept, is directly relevant.
Did you have something in mind?

>   2.   Is it possible to configure multiple IP addresses on
> these ports?
>
Yes, in the sense that you can have addresses per port.  The usual
restrictions to ports would apply, and they don't currently allow multiple
IP addresses (with the exception of the address-pair extension).
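For reference, extra addresses can already be permitted on a normal port through
the existing allowed-address-pairs extension (illustrative CLI only):

    neutron port-update <port-id> --allowed-address-pairs type=dict list=true ip_address=10.0.0.201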

> In the IPv6 case there can be multiple primary addresses configured; will this
> be supported?
>
No reason why not - we're expecting to re-use the usual port, so you'd
expect the features there to apply (in addition to having multiple sets of
subnet on a trunking port).

>   3.   If required, can these ports be aggregated into a single one
> dynamically?
>
That's not really relevant to trunk ports or networks.

>  4.   Will there be a requirement to handle nested tagged packets on
> such interfaces?
>
For trunking ports, I don't believe anyone was considering it.


>
>
>
>
>
>
> Thanks & Regards,
>
> Keshava
>
>
>
> *From:* Ian Wells [mailto:ijw.ubu...@cack.org.uk]
> *Sent:* Monday, October 27, 2014 9:45 PM
> *To:* OpenStack Development Mailing List (not for usage questions)
> *Subject:* Re: [openstack-dev] [neutron] [nfv] VM-based VLAN trunking
> blueprints
>
>
>
> On 25 October 2014 15:36, Erik Moe  wrote:
>
>  Then I tried to just use the trunk network as a plain pipe to the
> L2-gateway and connect to normal Neutron networks. One issue is that the
> L2-gateway will bridge the networks, but the services in the network you
> bridge to is unaware of your existence. This IMO is ok then bridging
> Neutron network to some remote network, but if you have an Neutron VM and
> want to utilize various resources in another Neutron network (since the one
> you sit on does not have any resources), things gets, let’s say non
> streamlined.
>
>
>
> Indeed.  However, non-streamlined is not the end of the world, and I
> wouldn't want to have to tag all VLANs a port is using on the port in
> advance of using it (this works for some use cases, and makes others
> difficult, particularly if you just want a native trunk and are happy for
> Openstack not to have insight into what's going on on the wire).
>
>
>
>   Another issue with trunk network is that it puts new requirements on
> the infrastructure. It needs to be able to handle VLAN tagged frames. For a
> VLAN based network it would be QinQ.
>
>
>
> Yes, and that's the point of the VLAN trunk spec, where we flag a network
> as passing VLAN tagged packets; if the operator-chosen network
> implementation doesn't support trunks, the API can refuse to make a trunk
> network.  Without it we're still in the situation that on some clouds
> passing VLANs works and on others it doesn't, and that the tenant can't
> actually tell in advance which sort of cloud they're working on.
>
> Trunk networks are a requirement for some use cases independent of the
> port awareness of VLANs.  Based on the maxim, 'make the easy stuff easy and
> the hard stuff possible' we can't just say 'no Neutron network passes VLAN
> tagged packets'.  And even if we did, we're evading a problem that exists
> with exactly one sort of network infrastructure - VLAN tagging for network
> separation - while making it hard to use for all of the many other cases in
> which it would work just fine.
>
> In summary, if we did port-based VLAN knowledge I would want to be able to
> use VLANs without having to use it (in much the same way that I would like,
> in certain circumstances, not to have to use Openstack's address allocation
> and DHCP - it's nice that I can, but I shouldn't be forced to).
>
>  My requirements were to have low/no extra cost for VMs using VLAN trunks
> compared to normal ports, no new bottlenecks/single point of failure. Due
> to this and previous issues I implemented the L2 gateway in a distributed
> fashion and since trunk network could not be realized in reality I only had
> them in the model and optimized them away.
>
>
>
> Again, this is down to your choice of VLAN tagged networking and/or the
> OVS ML2 driver; it doesn't apply to all deployments.
>
>
>
>  But the L2-gateway + trunk network has a flexible API, what if someone
> connects two VMs to one trunk network, well, hard to optimize away.
>
>
>
> That's certainly true, but it wasn't really intended to be optimised away.
>
>  Anyway, due to these and other issues, I limited my scope and switched
> to the current trunk port/subport model.
>
>
>
> The code that is for review is functional, you can boot a VM with a trunk
> port + subports (each subport maps to a VLAN). The VM can send/receive VLAN
> traffic. You can add

Re: [openstack-dev] [neutron][nova] New specs on routed networking

2014-10-28 Thread A, Keshava
Hi,
Currently OpenStack is built as a flat network.
With the introduction of the L3 lookup (by inserting the routing table into the 
forwarding path) and a separate 'VIF Route Type' interface:

At what point in the packet processing will the decision be made to look up the 
FIB? Will there be an additional FIB lookup for each packet?
What about the impact on 'inter-compute traffic' processed by DVR?

Is the idea here to treat the OpenStack cloud as a hierarchical network instead of a flat network?

Thanks & regards,
Keshava

From: Rohit Agarwalla (roagarwa) [mailto:roaga...@cisco.com]
Sent: Monday, October 27, 2014 12:36 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [neutron][nova] New specs on routed networking

Hi

I'm interested as well in this model. Curious to understand the routing filters 
and their implementation that will enable isolation between tenant networks.
Also, having a BoF session on "Virtual Networking using L3" may be useful to 
get all interested folks together at the Summit.


Thanks
Rohit

From: Kevin Benton <blak...@gmail.com>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
<openstack-dev@lists.openstack.org>
Date: Friday, October 24, 2014 12:51 PM
To: "OpenStack Development Mailing List (not for usage questions)" 
<openstack-dev@lists.openstack.org>
Subject: Re: [openstack-dev] [neutron][nova] New specs on routed networking

Hi,

Thanks for posting this. I am interested in this use case as well.

I didn't find a link to a review for the ML2 driver. Do you have any more 
details for that available?
It seems like not providing L2 connectivity between members of the same Neutron 
network conflicts with assumptions ML2 will make about segmentation IDs, etc. 
So I am interested in seeing how exactly the ML2 driver will bind ports, 
segments, etc.


Cheers,
Kevin Benton

On Fri, Oct 24, 2014 at 6:38 AM, Cory Benfield 
<cory.benfi...@metaswitch.com> wrote:
All,

Project Calico [1] is an open source approach to virtual networking based on L3 
routing as opposed to L2 bridging.  In order to accommodate this approach 
within OpenStack, we've just submitted 3 blueprints that cover

-  minor changes to nova to add a new VIF type [2]
-  some changes to neutron to add DHCP support for routed interfaces [3]
-  an ML2 mechanism driver that adds support for Project Calico [4].

We feel that allowing for routed network interfaces is of general use within 
OpenStack, which was our motivation for submitting [2] and [3].  We also 
recognise that there is an open question over the future of 3rd party ML2 
drivers in OpenStack, but until that is finally resolved in Paris, we felt 
submitting our driver spec [4] was appropriate (not least to provide more 
context on the changes proposed in [2] and [3]).

We're extremely keen to hear any and all feedback on these proposals from the 
community.  We'll be around at the Paris summit in a couple of weeks and would 
love to discuss with anyone else who is interested in this direction.

Regards,

Cory Benfield (on behalf of the entire Project Calico team)

[1] http://www.projectcalico.org
[2] https://blueprints.launchpad.net/nova/+spec/vif-type-routed
[3] https://blueprints.launchpad.net/neutron/+spec/dhcp-for-routed-ifs
[4] https://blueprints.launchpad.net/neutron/+spec/calico-mechanism-driver

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] vm can not transport large file under neutron ml2 + linux bridge + vxlan

2014-10-28 Thread Ian Wells
On 28 October 2014 00:18, A, Keshava  wrote:

>  Hi,
>
>
>
> Currently OpenStack have any framework to notify the Tennant/Service-VM
> for such kind of notification based on VM’s interest ?
>

It's possible to use DHCP or RA to notify a VM of the MTU but there are
limitations (RAs don't let you increase the MTU, only decrease it, and
obviously VMs must support the MTU element of DHCP) and Openstack doesn't
currently use it.  You can statically configure the DHCP MTU number that
DHCP transmits; this is useful to work around problems but not really the
right answer to the problem.


>  VM may be very much interested for such kind of notification like
>
> 1.   Path MTU.
>
This will be correctly discovered from the ICMP PMTU exceeded message, and
Neutron routers should certainly be expected to send that.  (In fact the
namespace implementation of routers would do this if the router ever had
different MTUs on its ports; it's in the kernel network stack.)  There's no
requirement for a special notification, and indeed you couldn't do it that
way anyway.

>  2.   Based on specific incoming Tennant traffic, block/Allow
>  particular traffic flow at infrastructure level itself, instead of at VM.
>
I don't see the relevance; and you appear to be describing security groups.

>  This may require OpenStack infrastructure notification support to
> Tenant/Service VM.
>
Not particularly, as MTU doesn't generally change, and I think we would
forbid changing the MTU of a network after creation.  It's only an initial
configuration thing, therefore.  It might involve better cloud-init support
for network configuration, something that gets discussed periodically.

-- 
Ian.

>
>
> …
>
> Thanks & regards,
>
> Keshava
>
>
>
> *From:* Ian Wells [mailto:ijw.ubu...@cack.org.uk]
> *Sent:* Tuesday, October 28, 2014 11:40 AM
> *To:* OpenStack Development Mailing List (not for usage questions)
> *Subject:* Re: [openstack-dev] [neutron] vm can not transport large file
> under neutron ml2 + linux bridge + vxlan
>
>
>
> Path MTU discovery works on a path - something with an L3 router in the
> way - where the outbound interface has a smaller MTU than the inbound one.
> You're transmitting across an L2 network - no L3 routers present.  You send
> a 1500 byte packet, the network fabric (which is not L3, has no address,
> and therefore has no means to answer you) does all that it can do with that
> packet - it drops it.  The sender retransmits, assuming congestion, but the
> same thing happens.  Eventually the sender decides there's a network
> problem and times out.
>
> This is a common problem with Openstack deployments, although various
> features of the virtual networking let you get away with it, with some
> configs and not others.  OVS used to fake a PMTU exceeded message from the
> destination if you tried to pass an overlarge packet - not in spec, but it
> hid the problem nicely.  I have a suspicion that some implementations will
> fragment the containing UDP packet, which is also not in spec and also
> solves the problem (albeit with poor performance).
>
> The right answer for you is to set the MTU in your machines to the same
> MTU you've given the network, that is, 1450 bytes.  You can do this by
> setting a DHCP option for MTU, providing your VMs support that option
> (search the web for the solution, I don't have it offhand) or lower the MTU
> by hand or by script when you start your VM.
>
> The right answer for everyone is to properly determine and advertise the
> network MTU to VMs (which, with provider networks, is not even consistent
> from one network to the next) and that's the spec Kyle is referring to.
> We'll be fixing this in Kilo.
> --
>
> Ian.
>
>
>
> On 27 October 2014 20:14, Li Tianqing  wrote:
>
>
>
>
>
>  --
>
> Best
>
> Li Tianqing
>
>
>
>
> At 2014-10-27 17:42:41, "Ihar Hrachyshka"  wrote:
>
>
> >
>
> >On 27/10/14 02:18, Li Tianqing wrote:
>
> >> Hello, Right now, we test neutron under havana release. We
>
> >> configured network_device_mtu=1450 in neutron.conf, After create
>
> >> vm, we found the vm interface's mtu is 1500, the ping, ssh, is ok.
>
> >> But if we scp large file between vms then scp display 'stalled'.
>
> >> And iperf is also can not completed. If we configured vm's mtu to
>
> >> 1450, then iperf, scp all is ok. If we iperf with -M 1300, then the
>
> >> iperf is ok too. The vms path mtu discovery is set by default. I do
>
> >> not know why the vm whose mtu is 1500 can not send large file.
>
> >
>
> >There is a neutron spec currently in discussion for Kilo to finally
>
> >fix MTU issues due to tunneling, that also tries to propagate MTU
>
>  >inside instances: https://review.openstack.org/#/c/105989/
>
>
>
>  The problem is i do not know why the vm with 1500 mtu can not send large 
> file?
>
>  I found the packet send out all with DF, and is it because the DF set 
> default by linux cause the packet
>
>  be dropped? 

Re: [openstack-dev] [neutron] vm can not transport large file under neutron ml2 + linux bridge + vxlan

2014-10-28 Thread Ian Wells
On 28 October 2014 00:30, Li Tianqing  wrote:

> Ian, you are right, the receiver only receives packets smaller than 1450.
> Because the sender does not send large packets at the beginning, tcpdump
> can catch some small packets.
>
> Another question about the MTU: what if we clear DF in the IP
> packets?  Can L2 then split the packets into a smaller MTU size?
>

Routers can split packets.  L2 networks don't understand IP headers and
therefore can't fragment packets.  DF doesn't change that.

(DF is there to make PMTU discovery work, incidentally; it's what prompts
routers to return PMTU exceeded messages.)
-- 
Ian.

At 2014-10-28 15:15:51, "Li Tianqing"  wrote:
>
> The problem is that it is not at the begining to transmit large file. It
> is after some packets trasmited, then the connection is choked.
> After the connection choked, from the bridge in compute host we can see
> the sender send packets, and the receiver can not get the packets.
> If it is the pmtud, then at the very begining, the packet can not transmit
> from the begining.
>
> At 2014-10-28 14:10:09, "Ian Wells"  wrote:
>
> Path MTU discovery works on a path - something with an L3 router in the
> way - where the outbound interface has a smaller MTU than the inbound one.
> You're transmitting across an L2 network - no L3 routers present.  You send
> a 1500 byte packet, the network fabric (which is not L3, has no address,
> and therefore has no means to answer you) does all that it can do with that
> packet - it drops it.  The sender retransmits, assuming congestion, but the
> same thing happens.  Eventually the sender decides there's a network
> problem and times out.
>
> This is a common problem with Openstack deployments, although various
> features of the virtual networking let you get away with it, with some
> configs and not others.  OVS used to fake a PMTU exceeded message from the
> destination if you tried to pass an overlarge packet - not in spec, but it
> hid the problem nicely.  I have a suspicion that some implementations will
> fragment the containing UDP packet, which is also not in spec and also
> solves the problem (albeit with poor performance).
>
> The right answer for you is to set the MTU in your machines to the same
> MTU you've given the network, that is, 1450 bytes.  You can do this by
> setting a DHCP option for MTU, providing your VMs support that option
> (search the web for the solution, I don't have it offhand) or lower the MTU
> by hand or by script when you start your VM.
>
> The right answer for everyone is to properly determine and advertise the
> network MTU to VMs (which, with provider networks, is not even consistent
> from one network to the next) and that's the spec Kyle is referring to.
> We'll be fixing this in Kilo.
> --
> Ian.
>
>
> On 27 October 2014 20:14, Li Tianqing  wrote:
>
>>
>>
>>
>>
>>
>> --
>> Best
>> Li Tianqing
>>
>>
>> At 2014-10-27 17:42:41, "Ihar Hrachyshka"  wrote:
>> >
>> >On 27/10/14 02:18, Li Tianqing wrote:
>> >> Hello, Right now, we test neutron under havana release. We
>> >> configured network_device_mtu=1450 in neutron.conf, After create
>> >> vm, we found the vm interface's mtu is 1500, the ping, ssh, is ok.
>> >> But if we scp large file between vms then scp display 'stalled'.
>> >> And iperf is also can not completed. If we configured vm's mtu to
>> >> 1450, then iperf, scp all is ok. If we iperf with -M 1300, then the
>> >> iperf is ok too. The vms path mtu discovery is set by default. I do
>> >> not know why the vm whose mtu is 1500 can not send large file.
>> >
>> >There is a neutron spec currently in discussion for Kilo to finally
>> >fix MTU issues due to tunneling, that also tries to propagate MTU
>> >inside instances: https://review.openstack.org/#/c/105989/
>>
>> The problem is i do not know why the vm with 1500 mtu can not send large 
>> file?
>> I found the packet send out all with DF, and is it because the DF set 
>> default by linux cause the packet
>> be dropped? And the application do not handle the return back icmp packet 
>> with the smaller mtu?
>>
>>  >
>> >/Ihar
>> >
>> >___
>> >OpenStack-dev mailing list
>> >OpenStack-dev@lists.openstack.org
>> >http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>>
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@list

Re: [openstack-dev] [Fuel] Fuel standards

2014-10-28 Thread Dmitriy Shulyak
>
> Let's do the same for Fuel. Frankly, I'd say we could take OpenStack
> standards as is and use them for Fuel. But maybe there are other opinions.
> Let's discuss this and decide what to do. Do we actually need those
> standards at all?
>
Agree that we can take the OpenStack standards as an example, but let's not simply
copy them and just live with it.

>
> 0) Standard for projects naming.
> Currently most of Fuel projects are named like fuel-whatever or even
> whatever? Is it ok? Or maybe we need some formal rules for naming. For
> example, all OpenStack clients are named python-someclient. Do we need to
> rename fuelclient into python-fuelclient?
>
I don't like that 'fuel' is added to every project that we start; correct me
if I am wrong, but:
- shotgun can be a self-contained project and still provide certain value;
  actually I think it can be used by Jenkins in our and the OpenStack gates
  to copy logs and other info
- the same goes for the network verification tool
- fuel_agent (image-based provisioning) can work without all the other Fuel
  parts

>
> 1) Standard for an architecture.
> Most of OpenStack services are split into several independent parts
> (raughly service-api, serivce-engine, python-serivceclient) and those parts
> interact with each other via REST and AMQP. python-serivceclient is usually
> located in a separate repository. Do we actually need to do the same for
> Fuel? According to fuelclient it means it should be moved into a separate
> repository. Fortunately, it already uses REST API for interacting with
> nailgun. But it should be possible to use it not only as a CLI tool, but
> also as a library.
>
> 2) Standard for project directory structure (directory names for api, db
> models,  drivers, cli related code, plugins, common code, etc.)
> Do we actually need to standardize a directory structure?
>
Well, we need to pick some project, agree on that project's structure, and then
just provide it as an example during review.
We can choose:
- fuelclient as the CLI example (but refactor it first)
- fuel-stats as the web app example

> 3) Standard for third party libraries
> As far as Fuel is a deployment tool for OpenStack, let's make a decision
> about using OpenStack components wherever it is possible.
> 3.1) oslo.config for configuring.
> 3.2) oslo.db for database layer
> 3.3) oslo.messaging for AMQP layer
> 3.4) cliff for CLI (should we refactor fuelclient so as to make based on
> cliff?)
> 3.5) oslo.log for logging
> 3.6) stevedore for plugins
> etc.
> What about third party components which are not OpenStack related? What
> could be the requirements for an arbitrary PyPi package?
>
In my opinion we should not pick a library just because it is used in
OpenStack; there should be some research and analysis. For example, for a
CLI application there are several popular alternatives to cliff in the Python
community:
- https://github.com/docopt/docopt
- https://github.com/mitsuhiko/click
I personally would prefer to use docopt, but click looks good as well (a small
sketch follows below).
Web frameworks are a whole different story; in the Python community we have the
mature flask and pyramid, and I don't see any benefit in using pecan.
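
For comparison, a minimal docopt-style sketch (purely illustrative, not a proposal
for fuelclient's actual interface):

    """fuel-demo.

    Usage:
      fuel-demo node list
      fuel-demo node provision <node-id>...
    """
    from docopt import docopt

    if __name__ == '__main__':
        args = docopt(__doc__)          # parse argv against the usage text above
        if args['list']:
            print('listing nodes')
        elif args['provision']:
            print('provisioning %s' % ', '.join(args['<node-id>']))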
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Clear all flows when ovs agent start? why and how avoid?

2014-10-28 Thread Mathieu Rohon
Hi Wei,

The agent will be re-worked with the modular L2 agent [1];
your proposal could be handled as part of this work (see the sketch below).

[1]https://review.openstack.org/#/c/106189/
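
One way to avoid the full wipe described in the quoted mail below (a sketch of the
general idea only, not what the agent currently does) is to stamp each run's flows
with a cookie and remove only stale entries once the new flows are in place:

    # install this run's flows under a fresh cookie
    ovs-ofctl add-flow br-tun "cookie=0x2,table=0,priority=1,actions=drop"
    # after resync, delete only the flows still carrying the previous run's cookie
    ovs-ofctl del-flows br-tun "cookie=0x1/-1"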

Mathieu

On Tue, Oct 28, 2014 at 4:01 AM, Damon Wang  wrote:
> Hi all,
>
> We have suffered a long downtime when we upgraded our public cloud's neutron
> to the latest version (close to Juno RC2), because the ovs-agent cleaned all flows
> in br-tun when it started.
>
> I find that our current design is to remove all flows and then add flows entry by
> entry; this will cause every network node to break off all tunnels to the other
> network nodes and all compute nodes.
>
> ( plugins.openvswitch.agent.ovs_neutron_agent.OVSNeutronAgent.__init__ ->
> plugins.openvswitch.agent.ovs_neutron_agent.OVSNeutronAgent#setup_tunnel_br
> :
> self.tun_br.remove_all_flows() )
>
> Do we have any mechanism or ideas to avoid this, or should we rethink
> current design? Welcome comments
>
> Wei Wang
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat][ceilometer]: scale up/ down based on number of instances in a group

2014-10-28 Thread Mike Spreitzer
Daniel Comnea  wrote on 10/27/2014 07:16:32 AM:

> Yes i did but if you look at this example
> 
> 
https://github.com/openstack/heat-templates/blob/master/hot/autoscaling.yaml

> 

> the flow is simple:

> CPU alarm in Ceilometer triggers the "type: OS::Heat::ScalingPolicy"
> which then triggers the "type: OS::Heat::AutoScalingGroup"

Actually the ScalingPolicy does not "trigger" the ASG.  BTW, 
"ScalingPolicy" is mis-named; it is not a full policy, it is only an 
action (the condition part is missing --- as you noted, that is in the 
Ceilometer alarm).  The so-called ScalingPolicy does the action itself 
when triggered.  But it respects your configured min and max size.
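
For reference, the relevant pieces of a template along the lines of the 
autoscaling.yaml linked above look roughly like this (property values are 
illustrative):

    group:
      type: OS::Heat::AutoScalingGroup
      properties:
        min_size: 1            # the policy will never shrink the group below this
        max_size: 3            # ...nor grow it beyond this
        resource:
          type: OS::Nova::Server
          properties: {image: cirros, flavor: m1.tiny}
    scaleup_policy:
      type: OS::Heat::ScalingPolicy
      properties:
        adjustment_type: change_in_capacity
        auto_scaling_group_id: {get_resource: group}
        scaling_adjustment: 1
        cooldown: 60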

Are you concerned about something making your scaling group smaller than your 
configured minimum?  Just checking here that there is not a 
misunderstanding.

As Clint noted, there is a large-scale effort underway to make Heat 
maintain what it creates despite deletion of the underlying resources.

There is also a small-scale effort underway to make ASGs recover from 
members stopping proper functioning for whatever reason.  See 
https://review.openstack.org/#/c/127884/ for a proposed interface and 
initial implementation.

Regards,
Mike
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] vm can not transport large file under neutron ml2 + linux bridge + vxlan

2014-10-28 Thread Li Tianqing
One more question:
VXLAN uses UDP, so how can VXLAN ensure reliability?


At 2014-10-28 15:51:02, "Ian Wells"  wrote:

On 28 October 2014 00:30, Li Tianqing  wrote:

Ian, you are right, the receiver only receives packets smaller than 1450. 
Because the sender does not send large packets at the beginning, tcpdump can 
catch some small packets.


Another question about the MTU: what if we clear DF in the IP packets? Can L2 
then split the packets into a smaller MTU size?


Routers can split packets.  L2 networks don't understand IP headers and 
therefore can't fragment packets.  DF doesn't change that.


(DF is there to make PMTU discovery work, incidentally; it's what prompts 
routers to return PMTU exceeded messages.)
--

Ian.



At 2014-10-28 15:15:51, "Li Tianqing"  wrote:

The problem is that it does not happen at the beginning of transmitting a large 
file. It is only after some packets have been transmitted that the connection is 
choked. After the connection is choked, from the bridge on the compute host we can 
see the sender sending packets, but the receiver does not get them.
If it were PMTUD, then the packets could not be transmitted from the very 
beginning.


At 2014-10-28 14:10:09, "Ian Wells"  wrote:

Path MTU discovery works on a path - something with an L3 router in the way - 
where the outbound interface has a smaller MTU than the inbound one.  You're 
transmitting across an L2 network - no L3 routers present.  You send a 1500 
byte packet, the network fabric (which is not L3, has no address, and therefore 
has no means to answer you) does all that it can do with that packet - it drops 
it.  The sender retransmits, assuming congestion, but the same thing happens.  
Eventually the sender decides there's a network problem and times out.

This is a common problem with Openstack deployments, although various features 
of the virtual networking let you get away with it, with some configs and not 
others.  OVS used to fake a PMTU exceeded message from the destination if you 
tried to pass an overlarge packet - not in spec, but it hid the problem nicely. 
 I have a suspicion that some implementations will fragment the containing UDP 
packet, which is also not in spec and also solves the problem (albeit with poor 
performance).

The right answer for you is to set the MTU in your machines to the same MTU 
you've given the network, that is, 1450 bytes.  You can do this by setting a 
DHCP option for MTU, providing your VMs support that option (search the web for 
the solution, I don't have it offhand) or lower the MTU by hand or by script 
when you start your VM.


The right answer for everyone is to properly determine and advertise the 
network MTU to VMs (which, with provider networks, is not even consistent from 
one network to the next) and that's the spec Kyle is referring to.  We'll be 
fixing this in Kilo.
--

Ian.




On 27 October 2014 20:14, Li Tianqing  wrote:







--

Best
Li Tianqing



At 2014-10-27 17:42:41, "Ihar Hrachyshka"  wrote:
>
>On 27/10/14 02:18, Li Tianqing wrote:
>> Hello, Right now, we test neutron under havana release. We
>> configured network_device_mtu=1450 in neutron.conf, After create
>> vm, we found the vm interface's mtu is 1500, the ping, ssh, is ok.
>> But if we scp large file between vms then scp display 'stalled'.
>> And iperf is also can not completed. If we configured vm's mtu to
>> 1450, then iperf, scp all is ok. If we iperf with -M 1300, then the
>> iperf is ok too. The vms path mtu discovery is set by default. I do
>> not know why the vm whose mtu is 1500 can not send large file.
>
>There is a neutron spec currently in discussion for Kilo to finally
>fix MTU issues due to tunneling, that also tries to propagate MTU

>inside instances: https://review.openstack.org/#/c/105989/


The problem is that I do not know why the VM with a 1500 MTU cannot send a large file. 
I found the packets are all sent out with DF set; is it because DF, set by default 
by Linux, causes the packets to be dropped? And does the application not handle the 
ICMP packet that comes back with the smaller MTU?


>
>/Ihar
>
>___
>OpenStack-dev mailing list
>OpenStack-dev@lists.openstack.org
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev










__

Re: [openstack-dev] [neutron] vm can not transport large file under neutron ml2 + linux bridge + vxlan

2014-10-28 Thread A, Keshava
Hi,
Please find my replies inline.

Regards,
keshava

From: Ian Wells [mailto:ijw.ubu...@cack.org.uk]
Sent: Tuesday, October 28, 2014 1:19 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [neutron] vm can not transport large file under 
neutron ml2 + linux bridge + vxlan

On 28 October 2014 00:18, A, Keshava <keshav...@hp.com> wrote:
Hi,

Does OpenStack currently have any framework to notify the Tenant/Service-VM with 
this kind of notification, based on the VM's interest?

It's possible to use DHCP or RA to notify a VM of the MTU but there are 
limitations (RAs don't let you increase the MTU, only decrease it, and 
obviously VMs must support the MTU element of DHCP) and Openstack doesn't 
currently use it.  You can statically configure the DHCP MTU number that DHCP 
transmits; this is useful to work around problems but not really the right 
answer to the problem.

A VM may be very much interested in such notifications, for example:

1.   Path MTU.
This will be correctly discovered from the ICMP PMTU exceeded message, and 
Neutron routers should certainly be expected to send that.  (In fact the 
namespace implementation of routers would do this if the router ever had 
different MTUs on its ports; it's in the kernel network stack.)  There's no 
requirement for a special notification, and indeed you couldn't do it that way 
anyway.

A network interface or router going down is a common scenario. In that case 
the packet will take a different path, which may have a different MTU.
The path MTU calculated at the source may then change and should 
be notified dynamically to the VM, so that the VM can originate packets of 
the required MTU size.
If there is no notification mechanism (as per this reply):
if there is no such dynamic path-MTU-change notification to the VM, how can the VM 
change its packet size?
Or
do we expect the ICMP 'too big' message to reach all the way to the VM?
Or
should the VM itself run path MTU discovery?


2.   Based on specific incoming tenant traffic, block/allow a particular 
traffic flow at the infrastructure level itself, instead of at the VM.
I don't see the relevance; and you appear to be describing security groups.

This may require OpenStack infrastructure notification support to 
Tenant/Service VM.
Not particularly, as MTU doesn't generally change, and I think we would forbid 
changing the MTU of a network after creation.  It's only an initial 
configuration thing, therefore.  It might involve better cloud-init support for 
network configuration, something that gets discussed periodically.

--
Ian.

…
Thanks & regards,
Keshava

From: Ian Wells [mailto:ijw.ubu...@cack.org.uk]
Sent: Tuesday, October 28, 2014 11:40 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [neutron] vm can not transport large file under 
neutron ml2 + linux bridge + vxlan

Path MTU discovery works on a path - something with an L3 router in the way - 
where the outbound interface has a smaller MTU than the inbound one.  You're 
transmitting across an L2 network - no L3 routers present.  You send a 1500 
byte packet, the network fabric (which is not L3, has no address, and therefore 
has no means to answer you) does all that it can do with that packet - it drops 
it.  The sender retransmits, assuming congestion, but the same thing happens.  
Eventually the sender decides there's a network problem and times out.

This is a common problem with Openstack deployments, although various features 
of the virtual networking let you get away with it, with some configs and not 
others.  OVS used to fake a PMTU exceeded message from the destination if you 
tried to pass an overlarge packet - not in spec, but it hid the problem nicely. 
 I have a suspicion that some implementations will fragment the containing UDP 
packet, which is also not in spec and also solves the problem (albeit with poor 
performance).

The right answer for you is to set the MTU in your machines to the same MTU 
you've given the network, that is, 1450 bytes.  You can do this by setting a 
DHCP option for MTU, providing your VMs support that option (search the web for 
the solution, I don't have it offhand) or lower the MTU by hand or by script 
when you start your VM.
The right answer for everyone is to properly determine and advertise the 
network MTU to VMs (which, with provider networks, is not even consistent from 
one network to the next) and that's the spec Kyle is referring to.  We'll be 
fixing this in Kilo.
--
Ian.

On 27 October 2014 20:14, Li Tianqing mailto:jaze...@163.com>> 
wrote:



--
Best
Li Tianqing


At 2014-10-27 17:42:41, "Ihar Hrachyshka" 
mailto:ihrac...@redhat.com>> wrote:

>

>On 27/10/14 02:18, Li Tianqing wrote:

>> Hello, Right now, we test neutron under havana release. We

>> configured network_device_mtu=1450 in neutron.conf, After create

>> vm, we foun

Re: [openstack-dev] [neutron] [nfv] VM-based VLAN trunking blueprints

2014-10-28 Thread A, Keshava
Hi,
Please find the replies inline.

Regards,
keshava

From: Ian Wells [mailto:ijw.ubu...@cack.org.uk]
Sent: Tuesday, October 28, 2014 1:11 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [neutron] [nfv] VM-based VLAN trunking blueprints

This all appears to be referring to trunking ports, rather than anything else, 
so I've addressed the points in that respect.
On 28 October 2014 00:03, A, Keshava 
mailto:keshav...@hp.com>> wrote:
Hi,

1.   How many Trunk ports can be created ?
Why would there be a limit?

Will there be any Active-Standby concepts will be there ?
I don't believe active-standby, or any HA concept, is directly relevant.  Did 
you have something in mind?
For the NFV kind of scenario, it is very much required to run the 'Service 
VM' in active and standby mode.
The standby is more of a passive entity: it will not take any action toward the external 
network and will be a passive consumer of the packets/information.
In that scenario it would be very meaningful to have
an "active port" connected to the active Service VM, and
a "standby port" connected to the standby Service VM, which becomes active when 
the old active VM goes down.

Let us know others' opinion about this concept.

 2.   Is it possible to configure multiple IP address configured on these 
ports ?
Yes, in the sense that you can have addresses per port.  The usual restrictions 
to ports would apply, and they don't currently allow multiple IP addresses 
(with the exception of the address-pair extension).
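For illustration - this refers to the existing extension, not the trunk proposal - an 
address pair is added roughly as follows with the neutron CLI of the time, where the 
port ID and IP address are placeholders:

    neutron port-update <port-id> --allowed-address-pairs type=dict list=true ip_address=10.0.0.5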

In case IPv6 there can be multiple primary address configured will this be 
supported ?
No reason why not - we're expecting to re-use the usual port, so you'd expect 
the features there to apply (in addition to having multiple sets of subnet on a 
trunking port).

 3.   If required can these ports can be aggregated into single one 
dynamically ?
That's not really relevant to trunk ports or networks.

 4.   Will there be requirement to handle Nested tagged packet on such 
interfaces ?
For trunking ports, I don't believe anyone was considering it.




Thanks & Regards,
Keshava

From: Ian Wells [mailto:ijw.ubu...@cack.org.uk]
Sent: Monday, October 27, 2014 9:45 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [neutron] [nfv] VM-based VLAN trunking blueprints

On 25 October 2014 15:36, Erik Moe 
mailto:erik@ericsson.com>> wrote:
Then I tried to just use the trunk network as a plain pipe to the L2-gateway 
and connect to normal Neutron networks. One issue is that the L2-gateway will 
bridge the networks, but the services in the network you bridge to is unaware 
of your existence. This IMO is ok then bridging Neutron network to some remote 
network, but if you have an Neutron VM and want to utilize various resources in 
another Neutron network (since the one you sit on does not have any resources), 
things gets, let’s say non streamlined.

Indeed.  However, non-streamlined is not the end of the world, and I wouldn't 
want to have to tag all VLANs a port is using on the port in advance of using 
it (this works for some use cases, and makes others difficult, particularly if 
you just want a native trunk and are happy for Openstack not to have insight 
into what's going on on the wire).

 Another issue with trunk network is that it puts new requirements on the 
infrastructure. It needs to be able to handle VLAN tagged frames. For a VLAN 
based network it would be QinQ.

Yes, and that's the point of the VLAN trunk spec, where we flag a network as 
passing VLAN tagged packets; if the operator-chosen network implementation 
doesn't support trunks, the API can refuse to make a trunk network.  Without it 
we're still in the situation that on some clouds passing VLANs works and on 
others it doesn't, and that the tenant can't actually tell in advance which 
sort of cloud they're working on.
Trunk networks are a requirement for some use cases independent of the port 
awareness of VLANs.  Based on the maxim, 'make the easy stuff easy and the hard 
stuff possible' we can't just say 'no Neutron network passes VLAN tagged 
packets'.  And even if we did, we're evading a problem that exists with exactly 
one sort of network infrastructure - VLAN tagging for network separation - 
while making it hard to use for all of the many other cases in which it would 
work just fine.

In summary, if we did port-based VLAN knowledge I would want to be able to use 
VLANs without having to use it (in much the same way that I would like, in 
certain circumstances, not to have to use Openstack's address allocation and 
DHCP - it's nice that I can, but I shouldn't be forced to).
My requirements were to have low/no extra cost for VMs using VLAN trunks 
compared to normal ports, no new bottlenecks/single point of failure. Due to 
this and previous issues I implemented the L2 gateway in a distributed fashion 
and since trunk network could not be real

Re: [openstack-dev] [neutron] vm can not transport large file under neutron ml2 + linux bridge + vxlan

2014-10-28 Thread Cory Benfield
On Tue, Oct 28, 2014 at 08:32:11, Li Tianqing wrote:
> One more question.
> VXLAN uses UDP, so how can VXLAN guarantee reliability?
> 

It can't, but that doesn't matter.

VXLAN emulates a single layer 2 broadcast domain: conceptually, a series of 
machines all plugged into the same Ethernet switch. This kind of network 
*isn't* reliable: you can lose Ethernet frames. There's no reason to require 
reliability from VXLAN, and it would increase VXLAN's overhead to add it in. If 
you need reliability, the encapsulated transport will provide it for you 
exactly as it does on a non-encapsulated network.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tuskar][tripleo] Tuskar/TripleO on Devstack

2014-10-28 Thread Steven Hardy
On Tue, Oct 28, 2014 at 03:22:36PM +1300, Robert Collins wrote:
> So this should work and I think its generally good.
> 
> But - I'm curious, you only need a single image for devtest to
> experiment with tuskar - the seed - which should be about the same
> speed (or faster, if you have hot caches) than devstack, and you'll
> get Ironic and nodes registered so that the panels have stuff to show.

TBH it's not so much about speed (although, for me, devstack is faster as
I've not yet mirrored all-the-things locally, I only have a squid cache),
it's about establishing a productive test/debug/hack/re-test workflow.

I've been configuring devstack to create Ironic nodes FWIW, so that works
OK too.

It's entirely possible I'm missing some key information on how to compose
my images to be debug friendly, but here's my devtest frustration:

1. Run devtest to create seed + overcloud
2. Hit an issue, say a Heat bug (not that *those* ever happen! ;D)
3. Log onto seed VM to debug the issue.  Discover there are no logs.
4. Restart the heat-engine logging somewhere
5. Realize heat-engine isn't quite latest master
6. Git pull heat, discover networking won't allow it
7. scp latest master from my laptop->VM
8. setup.py install, discover the dependencies aren't all there
9. Give up and try to recreate issue on devstack

I'm aware there are probably solutions to all of these problems, but my
point is basically that devstack on my laptop already solves all of them,
so... maybe I can just use that?  That's my thinking, anyway.

E.g here's my tried, tested and comfortable workflow:

1. Run stack.sh on my laptop
2. Do a heat stack-create
3. Hit a problem, look at screen logs
4. Fix problem, restart heat, re-test, git-review, done!

I realize I'm swimming against the tide a bit here, so feel free to educate
me if there's an easier way to reduce the developer friction that exists
with devtest :)

Anyway, that's how I got here, frustration debugging Heat turned into
integrating tuskar with devstack, because I wanted to avoid the same
experience while hacking on tuskar, basically.

Thanks!

Steve

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova][CI] nova-networking or neutron networking for CI

2014-10-28 Thread Andreas Scheuring
Hi, 
we're preparing to add a new platform, libvirt-kvm on system z
(https://blueprints.launchpad.net/nova/+spec/libvirt-kvm-systemz), to
the HypervisorSupportMatrix
https://wiki.openstack.org/wiki/HypervisorSupportMatrix .


This matrix also lists a number of networking features (e.g. vlan
networking, routing,..). 

If I interpret the footnotes correctly, these network items refer only
to nova-networking, right? So if I plan to support vlan networking with
neutron, but not with nova-networking it would be an "x" in the related
cell, right?


Now thinking one step further in the direction of a CI system for the
new platform. 

Are current nova CI platforms configured with nova-networking or with
neutron networking? Or is networking in general not even a part of the
nova CI approach?


My current assumption is that any networking is a requirement and most
of the systems would go with nova-networking. Is this true?

If so, does it make sense to run the CI system with neutron networking
instead of nova-networking? What's the best practice in this area?

Not sure about the deprecation plans of nova-networking. But setting up
the CI environment might also take a while. Do you think it's still
worthwhile spending time for the nova-network integration in the CI
system, if it might be deprecated in the future anyhow? 


Any input?

Thanks a lot!


-- 
Andreas 
(irc: scheuran)




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] [nfv] VM-based VLAN trunking blueprints

2014-10-28 Thread Alan Kavanagh
Hi
Please find some additions to Ian and responses below.
/Alan

From: A, Keshava [mailto:keshav...@hp.com]
Sent: October-28-14 9:57 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [neutron] [nfv] VM-based VLAN trunking blueprints

Hi,
Pl fine the reply for the same.

Regards,
keshava

From: Ian Wells [mailto:ijw.ubu...@cack.org.uk]
Sent: Tuesday, October 28, 2014 1:11 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [neutron] [nfv] VM-based VLAN trunking blueprints

This all appears to be referring to trunking ports, rather than anything else, 
so I've addressed the points in that respect.
On 28 October 2014 00:03, A, Keshava 
mailto:keshav...@hp.com>> wrote:
Hi,

1.   How many Trunk ports can be created ?
Why would there be a limit?

Will there be any Active-Standby concepts will be there ?
I don't believe active-standby, or any HA concept, is directly relevant.  Did 
you have something in mind?
For the NFV kind of the scenario, it is very much required to run the ‘Service 
VM’ in Active and Standby Mode.
AK--> We have a different view on this: the application runs as a pair, and the 
application either runs in active-active or active-standby. This has 
nothing to do with HA; it's down to the application and how it's provisioned and 
configured via Openstack. So I agree with Ian on this.
Standby is more of passive entity and will not take any action to external 
network. It will be passive consumer of the packet/information.
AK--> Why would we need to care?
In that scenario it will be very meaningful to have
“Active port – connected to  “Active  Service VM”.
“Standby port – connected to ‘Standby Service VM’. Which will turn Active when 
old Active-VM goes down  ?
AK--> Can't you just have two VMs and then, via a controller, decide how to 
handle MAC+IP address control? FYI, most NFV apps have that built in today.
Let us know others opinion about this concept.
AK--> Perhaps I am misreading this, but I don't understand what this would 
provide as opposed to having two VMs instantiated and running; why does 
Neutron need to care about the port state between these two VMs? Similarly, it is 
better to just have two or more VMs up, and the application will be able to 
handle failover when it occurs or is required. Let's keep it simple and not mix up 
what the apps do inside the containment.

 2.   Is it possible to configure multiple IP address configured on these 
ports ?
Yes, in the sense that you can have addresses per port.  The usual restrictions 
to ports would apply, and they don't currently allow multiple IP addresses 
(with the exception of the address-pair extension).

In case IPv6 there can be multiple primary address configured will this be 
supported ?
No reason why not - we're expecting to re-use the usual port, so you'd expect 
the features there to apply (in addition to having multiple sets of subnet on a 
trunking port).

 3.   If required can these ports can be aggregated into single one 
dynamically ?
That's not really relevant to trunk ports or networks.

 4.   Will there be requirement to handle Nested tagged packet on such 
interfaces ?
For trunking ports, I don't believe anyone was considering it.





Thanks & Regards,
Keshava

From: Ian Wells [mailto:ijw.ubu...@cack.org.uk]
Sent: Monday, October 27, 2014 9:45 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [neutron] [nfv] VM-based VLAN trunking blueprints

On 25 October 2014 15:36, Erik Moe 
mailto:erik@ericsson.com>> wrote:
Then I tried to just use the trunk network as a plain pipe to the L2-gateway 
and connect to normal Neutron networks. One issue is that the L2-gateway will 
bridge the networks, but the services in the network you bridge to is unaware 
of your existence. This IMO is ok then bridging Neutron network to some remote 
network, but if you have an Neutron VM and want to utilize various resources in 
another Neutron network (since the one you sit on does not have any resources), 
things gets, let’s say non streamlined.

Indeed.  However, non-streamlined is not the end of the world, and I wouldn't 
want to have to tag all VLANs a port is using on the port in advance of using 
it (this works for some use cases, and makes others difficult, particularly if 
you just want a native trunk and are happy for Openstack not to have insight 
into what's going on on the wire).

 Another issue with trunk network is that it puts new requirements on the 
infrastructure. It needs to be able to handle VLAN tagged frames. For a VLAN 
based network it would be QinQ.

Yes, and that's the point of the VLAN trunk spec, where we flag a network as 
passing VLAN tagged packets; if the operator-chosen network implementation 
doesn't support trunks, the API can refuse to make a trunk network.  Without it 
we're still in the situation that on some clouds passing V

Re: [openstack-dev] [tuskar][tripleo] Tuskar/TripleO on Devstack

2014-10-28 Thread Robert Collins
On 28 October 2014 22:51, Steven Hardy  wrote:
> On Tue, Oct 28, 2014 at 03:22:36PM +1300, Robert Collins wrote:
>> So this should work and I think its generally good.
>>
>> But - I'm curious, you only need a single image for devtest to
>> experiment with tuskar - the seed - which should be about the same
>> speed (or faster, if you have hot caches) than devstack, and you'll
>> get Ironic and nodes registered so that the panels have stuff to show.
>
> TBH it's not so much about speed (although, for me, devstack is faster as
> I've not yet mirrored all-the-things locally, I only have a squid cache),
> it's about establishing a productive test/debug/hack/re-test workflow.

mm, squid-cache should still give pretty good results. If its not, bug
time :). That said..

> I've been configuring devstack to create Ironic nodes FWIW, so that works
> OK too.

Cool.

> It's entirely possible I'm missing some key information on how to compose
> my images to be debug friendly, but here's my devtest frustration:
>
> 1. Run devtest to create seed + overcloud

If you're in dev-of-a-component cycle, I wouldn't do that. I'd run
devtest_seed.sh only. The seed has everything on it, so the rest is
waste (unless you need all the overcloud bits - in which case I'd
still tune things - e.g. I'd degrade to single node, and I'd iterate
on devtest_overcloud.sh, *not* on the full plumbing each time).

> 2. Hit an issue, say a Heat bug (not that *those* ever happen! ;D)
> 3. Log onto seed VM to debug the issue.  Discover there are no logs.

We should fix that - is there a bug open? That's a fairly serious issue
for debugging a deployment.

> 4. Restart the heat-engine logging somewhere
> 5. Realize heat-engine isn't quite latest master
> 6. Git pull heat, discover networking won't allow it

Ugh. That's horrid. Is it a Fedora thing? My seed here can git pull
totally fine - I've depended heavily on that to debug various things
over time.

> 7. scp latest master from my laptop->VM
> 8. setup.py install, discover the dependencies aren't all there

This one might be a docs issue: heat is installed in a venv -
/opt/stack/venvs/heat - so the deps should be in that, not in the
global site-packages.
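For example, roughly (the heat checkout path is a placeholder):

    . /opt/stack/venvs/heat/bin/activate
    pip install -e /path/to/heat    # pulls the deps into the venv, not system-wide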

> 9. Give up and try to recreate issue on devstack

:)

> I'm aware there are probably solutions to all of these problems, but my
> point is basically that devstack on my laptop already solves all of them,
> so... maybe I can just use that?  That's my thinking, anyway.

Sure - it's fine to use devstack. In fact, we don't *want* devtest to
supplant devstack, they're solving different problems.

> E.g here's my tried, tested and comfortable workflow:
>
> 1. Run stack.sh on my laptop
> 2. Do a heat stack-create
> 3. Hit a problem, look at screen logs
> 4. Fix problem, restart heat, re-test, git-review, done!
>
> I realize I'm swimming against the tide a bit here, so feel free to educate
> me if there's an easier way to reduce the developer friction that exists
> with devtest :)

Quite possibly there isn't. Some of your issues are ones we should not
at all have, and I'd like to see those removed. But they are different
tools for different scenarios, so I'd expect some impedance mismatch
doing single-code-base-dev in a prod-deploy-context, and I only asked
about the specifics to get a better understanding of whats up - I
think its totally appropriate to be doing your main dev with devstack.

-Rob

-- 
Robert Collins 
Distinguished Technologist
HP Converged Cloud

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] [nfv] VM-based VLAN trunking blueprints

2014-10-28 Thread A, Keshava
Hi,
Please find my reply below.


Regards,
keshava

From: Alan Kavanagh [mailto:alan.kavan...@ericsson.com]
Sent: Tuesday, October 28, 2014 3:35 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [neutron] [nfv] VM-based VLAN trunking blueprints

Hi
Please find some additions to Ian and responses below.
/Alan

From: A, Keshava [mailto:keshav...@hp.com]
Sent: October-28-14 9:57 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [neutron] [nfv] VM-based VLAN trunking blueprints

Hi,
Pl fine the reply for the same.

Regards,
keshava

From: Ian Wells [mailto:ijw.ubu...@cack.org.uk]
Sent: Tuesday, October 28, 2014 1:11 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [neutron] [nfv] VM-based VLAN trunking blueprints

This all appears to be referring to trunking ports, rather than anything else, 
so I've addressed the points in that respect.
On 28 October 2014 00:03, A, Keshava 
mailto:keshav...@hp.com>> wrote:
Hi,

1.   How many Trunk ports can be created ?
Why would there be a limit?

Will there be any Active-Standby concepts will be there ?
I don't believe active-standby, or any HA concept, is directly relevant.  Did 
you have something in mind?
For the NFV kind of the scenario, it is very much required to run the ‘Service 
VM’ in Active and Standby Mode.
AK--> We have a different view on this, the “application runs as a pair” of 
which the application either runs in active-active or active standby…this has 
nothing to do with HA, its down to the application and how its provisioned and 
configured via Openstack. So agree with Ian on this.
Standby is more of passive entity and will not take any action to external 
network. It will be passive consumer of the packet/information.
AK--> Why would we need to care?
In that scenario it will be very meaningful to have
“Active port – connected to  “Active  Service VM”.
“Standby port – connected to ‘Standby Service VM’. Which will turn Active when 
old Active-VM goes down  ?
AK--> Cant you just have two VM’s and then via a controller decide how to 
address MAC+IP_Address control…..FYI…most NFV Apps have that built-in today.
Let us know others opinion about this concept.
AK-->Perhaps I am miss reading this but I don’t understand what this would 
provide as opposed to having two VM’s instantiated and running, why does 
Neutron need to care about the port state between these two VM’s? Similarly its 
better to just have 2 or more VM’s up and the application will be able to 
address when failover occurs/requires. Lets keep it simple and not mix up with 
what the apps do inside the containment.

Keshava:
Since this solution is mainly for carrier-grade NFV Service VMs, I have the 
points below to make.
Let us say the Service VM is running BGP, BGP-VPN, or 'MPLS + LDP + BGP-VPN'.
When such carrier-grade services are running, how do we provide five-nines 
HA?
In my opinion,
both the active and the standby Service-VM should hook into the same underlying 
OpenStack infrastructure stack (br-ext->br-int->qxx-> VM),
with the 'active VM' hooked to the 'active port' and the 'standby VM' hooked to 
the 'passive port' within the same stack.

If instead the active and standby VMs hook into two different stacks (br-ext1->br-int1 
-->qxx1-> VM-active) and (br-ext2->br-int2->qxx2-> VM-standby), can those 
Service-VMs achieve the five-nines reliability?

Yes, I may be thinking about this in a slightly complicated way from an OpenStack 
perspective.

 2.   Is it possible to configure multiple IP address configured on these 
ports ?
Yes, in the sense that you can have addresses per port.  The usual restrictions 
to ports would apply, and they don't currently allow multiple IP addresses 
(with the exception of the address-pair extension).

In case IPv6 there can be multiple primary address configured will this be 
supported ?
No reason why not - we're expecting to re-use the usual port, so you'd expect 
the features there to apply (in addition to having multiple sets of subnet on a 
trunking port).

 3.   If required can these ports can be aggregated into single one 
dynamically ?
That's not really relevant to trunk ports or networks.

 4.   Will there be requirement to handle Nested tagged packet on such 
interfaces ?
For trunking ports, I don't believe anyone was considering it.





Thanks & Regards,
Keshava

From: Ian Wells [mailto:ijw.ubu...@cack.org.uk]
Sent: Monday, October 27, 2014 9:45 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [neutron] [nfv] VM-based VLAN trunking blueprints

On 25 October 2014 15:36, Erik Moe 
mailto:erik@ericsson.com>> wrote:
Then I tried to just use the trunk network as a plain pipe to the L2-gateway 
and connect to normal Neutron networks. One issue is that the L2-gateway will 
bridge the networks, but the services in the network you bridge to is unaware 
of your

Re: [openstack-dev] [blazar]: proposal for a new lease type

2014-10-28 Thread Lisa

Dear Sylvain,

as you suggested a few weeks ago, I created the blueprint 
(https://blueprints.launchpad.net/blazar/+spec/fair-share-lease) and I'd 
like to start a discussion.
I will be in Paris next week at the OpenStack Summit, so it would be 
nice to talk with you and the BLAZAR team about my proposal in person.

What do you think?

thanks in advance,
Cheers,
Lisa


On 18/09/2014 16:00, Sylvain Bauza wrote:


On 18/09/2014 15:27, Lisa wrote:

Hi all,

my name is Lisa Zangrando and I work at the Italian National 
Institute for Nuclear Physics (INFN). In particular, I am leading a 
team which is addressing the issue of efficient resource usage in 
OpenStack.
Currently OpenStack allows just a static partitioning model, where 
resource allocation to the user teams (i.e. the projects) can be done 
only through fixed quotas which cannot be exceeded, even if 
there are unused resources assigned to different projects.
We studied the available BLAZAR documentation and, in agreement 
with Tim Bell (who is responsible for the OpenStack cloud project at 
CERN), we think this issue could be addressed within your framework.
Please find attached a document that describes our use cases 
(actually we think that many other environments have to deal with the 
same problems) and how they could be managed in BLAZAR, by defining a 
new lease type (i.e. fairShare lease) to be considered as extension 
of the list of the already supported lease types.

I would then be happy to discuss these ideas with you.

Thanks in advance,
Lisa



Hi Lisa,

Glad to see you're interested in Blazar.

I tried to go through your proposal, but could you please post the main 
concepts of what you plan to add into an etherpad and create a 
blueprint [1] mapped to it, so we can discuss the implementation?
Of course, don't hesitate to ping me or the blazar community in 
#openstack-blazar if you need help with the process or the current 
Blazar design.


Thanks,
-Sylvain

[1] https://blueprints.launchpad.net/blazar/




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev






Re: [openstack-dev] [tuskar][tripleo] Tuskar/TripleO on Devstack

2014-10-28 Thread Steven Hardy
On Tue, Oct 28, 2014 at 11:08:05PM +1300, Robert Collins wrote:
> On 28 October 2014 22:51, Steven Hardy  wrote:
> > On Tue, Oct 28, 2014 at 03:22:36PM +1300, Robert Collins wrote:
> >> So this should work and I think its generally good.
> >>
> >> But - I'm curious, you only need a single image for devtest to
> >> experiment with tuskar - the seed - which should be about the same
> >> speed (or faster, if you have hot caches) than devstack, and you'll
> >> get Ironic and nodes registered so that the panels have stuff to show.
> >
> > TBH it's not so much about speed (although, for me, devstack is faster as
> > I've not yet mirrored all-the-things locally, I only have a squid cache),
> > it's about establishing a productive test/debug/hack/re-test workflow.
> 
> mm, squid-cache should still give pretty good results. If its not, bug
> time :). That said..
> 
> > I've been configuring devstack to create Ironic nodes FWIW, so that works
> > OK too.
> 
> Cool.
> 
> > It's entirely possible I'm missing some key information on how to compose
> > my images to be debug friendly, but here's my devtest frustration:
> >
> > 1. Run devtest to create seed + overcloud
> 
> If you're in dev-of-a-component cycle, I wouldn't do that. I'd run
> devtest_seed.sh only. The seed has everything on it, so the rest is
> waste (unless you need all the overcloud bits - in which case I'd
> still tune things - e.g. I'd degrade to single node, and I'd iterate
> on devtest_overcloud.sh, *not* on the full plumbing each time).

Yup, I went round a few iterations of those, e.g running devtest_overcloud
with -c so I could more quickly re-deploy, until I realized I could drive
heat directly, so I started doing that :)

Most of my investigations atm are around investigating Heat issues, or
testing new tripleo-heat-templates stuff, so I do need to spin up the
overcloud (and update it, which is where the fun really began ref bug 
#1383709 and #1384750 ...)

> > 2. Hit an issue, say a Heat bug (not that *those* ever happen! ;D)
> > 3. Log onto seed VM to debug the issue.  Discover there are no logs.
> 
> We should fix that - is there a bug open? Thats a fairly serious issue
> for debugging a deployment.

I've not yet raised one, as I wasn't sure if it was either by design, or if
I was missing some crucial element from my DiB config.

If you consider it a bug, I'll raise one and look into a fix.

> > 4. Restart the heat-engine logging somewhere
> > 5. Realize heat-engine isn't quite latest master
> > 6. Git pull heat, discover networking won't allow it
> 
> Ugh. Thats horrid. Is it a fedora thing? My seed here can git pull
> totally fine - I've depended heavily on that to debug various things
> over time.

Not yet dug into it in a lot of detail tbh, my other VMs can access the
internet fine so it may be something simple, I'll look into it.

> > 7. scp latest master from my laptop->VM
> > 8. setup.py install, discover the dependencies aren't all there
> 
> This one might be docs: heat is installed in a venv -
> /opt/stack/venvs/heat, so the deps be should in that, not in the
> global site-packages.

Aha, I did think that may be the case, but I'd already skipped to step (9)
by that point :D

> > 9. Give up and try to recreate issue on devstack
> 
> :)
> 
> > I'm aware there are probably solutions to all of these problems, but my
> > point is basically that devstack on my laptop already solves all of them,
> > so... maybe I can just use that?  That's my thinking, anyway.
> 
> Sure - its fine to use devstack. In fact, we don't *want* devtest to
> supplant devstack, they're solving different problems.
> 
> > E.g here's my tried, tested and comfortable workflow:
> >
> > 1. Run stack.sh on my laptop
> > 2. Do a heat stack-create
> > 3. Hit a problem, look at screen logs
> > 4. Fix problem, restart heat, re-test, git-review, done!
> >
> > I realize I'm swimming against the tide a bit here, so feel free to educate
> > me if there's an easier way to reduce the developer friction that exists
> > with devtest :)
> 
> Quite possibly there isn't. Some of your issues are ones we should not
> at all have, and I'd like to see those removed. But they are different
> tools for different scenarios, so I'd expect some impedance mismatch
> doing single-code-base-dev in a prod-deploy-context, and I only asked
> about the specifics to get a better understanding of whats up - I
> think its totally appropriate to be doing your main dev with devstack.

Ok, thanks for the confirmation - I'll report back if/when I get the full
overcloud working on devstack, given that it doesn't sound like a totally crazy
thing to spend a bit of time on :)

Steve

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] pci pass through turing complete config options?

2014-10-28 Thread Sean Dague
We're dealing with some issues on devstack pass through with really
complicated config option types, the fixes are breaking other things.

The issue at hand is that the pci passthrough device listing
is an oslo MultiStrOpt in which each option value is a fully valid JSON
document, which must parse as such. That leads to things like:

pci_passthrough_whitelist = {"address":"*:0a:00.*",
"physical_network":"physnet1"}
pci_passthrough_whitelist = {"vendor_id":"1137","product_id":"0071"}

Which, honestly, seems a little weird for configs.

We're talking about a small number of fixed fields here, so the use of a
full JSON doc seems weird. I'd like to reopen the question of why this was chosen
as the value format, and whether we could have a simpler one.
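For reference, a minimal sketch of how such a value ends up being consumed - this is
not the actual nova parsing code, just an illustration of the MultiStrOpt-of-JSON
pattern:

    import json
    from oslo.config import cfg

    pci_opts = [
        cfg.MultiStrOpt('pci_passthrough_whitelist', default=[],
                        help='Repeatable option; each value is a JSON document.'),
    ]
    CONF = cfg.CONF
    CONF.register_opts(pci_opts)

    def parse_whitelist(conf):
        # Each repeated value is itself a JSON document, e.g.
        #   {"vendor_id": "1137", "product_id": "0071"}
        return [json.loads(entry) for entry in conf.pci_passthrough_whitelist]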

We're probably going to revert the attempted devstack support for pass
through of these things anyway, because it's breaking variable
interpolation in other config options. And the complexity added by
trying to add support for things like that in local.conf has shown to be
too much for the current ini parser structure.

-Sean

-- 
Sean Dague
http://dague.net

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] [nfv] VM-based VLAN trunking blueprints

2014-10-28 Thread Salvatore Orlando
Keshava,

I think the thread is now going a bit off its stated topic - which is to
discuss the various proposed approaches to VLAN trunking.
Regarding your last post, I'm not sure I saw either spec implying that at
the data plane level every instance attached to a trunk will be implemented
as a different network stack.

Also, quoting the principle earlier cited in this thread -  "make the easy
stuff easy and the hard stuff possible" - I would say that unless five 9s
is a minimum requirement for a NFV application, we might start worrying
about it once we have the bare minimum set of tools for allowing a NFV
application over a neutron network.

I think Ian has done a good job in explaining that while both approaches
considered here address trunking for NFV use cases, they propose
alternative implementations which can be leveraged in different way by NFV
applications. I do not see now a reason for which we should not allow NFV
apps to leverage a trunk network or create port-aware VLANs (or maybe you
can even have VLAN aware ports which tap into a trunk network?)

We may continue discussing the pros and cons of each approach - but to me
it's now just a matter of choosing the best solution for exposing them at
the API layer. At the control/data plane layer, it seems to me that trunk
networks are pretty much straightforward. VLAN aware ports are instead a
bit more convoluted, but not excessively complicated in my opinion.

Salvatore


On 28 October 2014 11:55, A, Keshava  wrote:

>  Hi,
>
> Pl find my reply ..
>
>
>
>
>
> Regards,
>
> keshava
>
>
>
> *From:* Alan Kavanagh [mailto:alan.kavan...@ericsson.com]
> *Sent:* Tuesday, October 28, 2014 3:35 PM
>
> *To:* OpenStack Development Mailing List (not for usage questions)
> *Subject:* Re: [openstack-dev] [neutron] [nfv] VM-based VLAN trunking
> blueprints
>
>
>
> Hi
>
> Please find some additions to Ian and responses below.
>
> /Alan
>
>
>
> *From:* A, Keshava [mailto:keshav...@hp.com ]
> *Sent:* October-28-14 9:57 AM
> *To:* OpenStack Development Mailing List (not for usage questions)
> *Subject:* Re: [openstack-dev] [neutron] [nfv] VM-based VLAN trunking
> blueprints
>
>
>
> *Hi,*
>
> *Pl fine the reply for the same.*
>
>
>
> *Regards,*
>
> *keshava*
>
>
>
> *From:* Ian Wells [mailto:ijw.ubu...@cack.org.uk ]
>
> *Sent:* Tuesday, October 28, 2014 1:11 PM
> *To:* OpenStack Development Mailing List (not for usage questions)
> *Subject:* Re: [openstack-dev] [neutron] [nfv] VM-based VLAN trunking
> blueprints
>
>
>
> This all appears to be referring to trunking ports, rather than anything
> else, so I've addressed the points in that respect.
>
> On 28 October 2014 00:03, A, Keshava  wrote:
>
>   Hi,
>
> 1.   How many Trunk ports can be created ?
>
>  Why would there be a limit?
>
>  Will there be any Active-Standby concepts will be there ?
>
>  I don't believe active-standby, or any HA concept, is directly
> relevant.  Did you have something in mind?
>
> *For the NFV kind of the scenario, it is very much required to run the
> ‘Service VM’ in Active and Standby Mode.*
>
> *AK--> We have a different view on this, the “application runs as a
> pair” of which the application either runs in active-active or active
> standby…this has nothing to do with HA, its down to the application and how
> its provisioned and configured via Openstack. So agree with Ian on this.*
>
> *Standby is more of passive entity and will not take any action to
> external network. It will be passive consumer of the packet/information.*
>
> *AK--> Why would we need to care?*
>
> *In that scenario it will be very meaningful to have*
>
> *“Active port – connected to  “Active  Service VM”.*
>
> *“Standby port – connected to ‘Standby Service VM’. Which will turn Active
> when old Active-VM goes down  ?*
>
> *AK--> Cant you just have two VM’s and then via a controller decide how
> to address MAC+IP_Address control…..FYI…most NFV Apps have that built-in
> today.*
>
> *Let us know others opinion about this concept.*
>
> *AK--> Perhaps I am miss reading this but I don’t understand what this
> would provide as opposed to having two VM’s instantiated and running, why
> does Neutron need to care about the port state between these two VM’s?
> Similarly its better to just have 2 or more VM’s up and the application
> will be able to address when failover occurs/requires. Lets keep it simple
> and not mix up with what the apps do inside the containment.*
>
>
>
> *Keshava: *
>
> *Since this is solution is more for Carrier Grade NFV Service VM, I have
> below points to make.*
>
> *Let’s us say Service-VM running is BGP or BGP-VPN or ‘MPLS + LDP +
> BGP-VPN’.*
>
> *When such kind of carrier grade service are running, how to provide the
> Five-9  HA ?*
>
> *In my opinion, *
>
> *Both (Active,/Standby) Service-VM to hook same underlying
> OpenStack infrastructure stack (br-ext->br-int->qxx-> VMa) *
>
> *However ‘active VM’ can hooks to  ‘active-port’  and ‘standby VM’ hook to
> ‘passive-port’ 

Re: [openstack-dev] [nova] pci pass through turing complete config options?

2014-10-28 Thread Daniel P. Berrange
On Tue, Oct 28, 2014 at 07:34:11AM -0400, Sean Dague wrote:
> We're dealing with some issues on devstack pass through with really
> complicated config option types, the fixes are breaking other things.
> 
> The issue at hand is the fact that the pci pass through device listing
> is an olso MultiStrOpt in which each option value is fully valid json
> document, which must parse as such. That leads to things like:
> 
> pci_passthrough_whitelist = {"address":"*:0a:00.*",
> "physical_network":"physnet1"}
> pci_passthrough_whitelist = {"vendor_id":"1137","product_id":"0071"}
> 
> Which, honestly, seems a little weird for configs.
> 
> We're talking about a small number of fixed fields here, so the use of a
> full json doc seems weird. I'd like to reopen why this was the value
> format, and if we could have a more simple one.

Do you have any suggestion for an alternative config syntax which would be suitable
for specifying a list of dicts?

One option would be a more CSV-like syntax, e.g.

   pci_passthrough_whitelist = address=*0a:00.*,physical_network=physnet1
   pci_passthrough_whitelist = vendor_id=1137,product_id=0071

But this gets confusing if we want to specify multiple sets of data,
so we might need to use semicolons as the first-level separator and commas for the
list element separators

   pci_passthrough_whitelist = vendor_id=8085;product_id=4fc2, 
vendor_id=1137;product_id=0071

Overall it isn't clear that inventing a special case language for this PCI
config value is a good idea.

I think it illustrates a gap in oslo.config, which ought to be able to
support a config option type that is a "list of dicts of strings",
so that anywhere needing such a beast would use the same syntax.
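As a sketch of what parsing the CSV-like form could look like, assuming '=' and ','
as the separators suggested above:

    def parse_kv_entry(value):
        # "vendor_id=1137,product_id=0071" -> {'vendor_id': '1137', 'product_id': '0071'}
        return dict(item.split('=', 1) for item in value.split(','))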

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] pci pass through turing complete config options?

2014-10-28 Thread Sean Dague
On 10/28/2014 07:44 AM, Daniel P. Berrange wrote:
> On Tue, Oct 28, 2014 at 07:34:11AM -0400, Sean Dague wrote:
>> We're dealing with some issues on devstack pass through with really
>> complicated config option types, the fixes are breaking other things.
>>
>> The issue at hand is the fact that the pci pass through device listing
>> is an olso MultiStrOpt in which each option value is fully valid json
>> document, which must parse as such. That leads to things like:
>>
>> pci_passthrough_whitelist = {"address":"*:0a:00.*",
>> "physical_network":"physnet1"}
>> pci_passthrough_whitelist = {"vendor_id":"1137","product_id":"0071"}
>>
>> Which, honestly, seems a little weird for configs.
>>
>> We're talking about a small number of fixed fields here, so the use of a
>> full json doc seems weird. I'd like to reopen why this was the value
>> format, and if we could have a more simple one.
> 
> Do you have ant suggestion for an alternative config syntax for specifying
> a list of dicts which would be suitable ?
> 
> One option would be a more  CSV like syntax eg
> 
>pci_passthrough_whitelist = address=*0a:00.*,physical_network=physnet1
>pci_passthrough_whitelist = vendor_id=1137,product_id=0071
> 
> But this gets confusing if we want to specifying multiple sets of data
> so might need to use semi-colons as first separator, and comma for list
> element separators
> 
>pci_passthrough_whitelist = vendor_id=8085;product_id=4fc2, 
> vendor_id=1137;product_id=0071
> 
> Overall it isn't clear that inventing a special case language for this PCI
> config value is a good idea.
> 
> I think it illustrates a gap in oslo.config, which ought to be able to
> support a config option type which was a "list of dicts of strings"
> so anywhere which needs such a beast will use the same syntax.

Mostly, why do we need name= at all? This seems like it would be fine as
an fstab-like format (with 'x' as an ignore value).

# vendor_id product_id address
pci_passthrough_whitelist = 8085
pci_passthrough_whitelist = 1137 4fc2
pci_passthrough_whitelist = x 0071
pci_passthrough_whitelist = x x *0a:00.*

Basically going to a full name = value seems incredibly overkill for
something with < 6 fields.

-Sean

-- 
Sean Dague
http://dague.net

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [all] using released versions of python clients in tests

2014-10-28 Thread Sean Dague
At the beginning of the month we moved through a set of patches for oslo
libs that decoupled them from the integrated gate by testing server
projects with released versions of oslo libraries.

The way it works is that in the base devstack case all the oslo
libraries are pulled from pypi instead of git. There is an override
LIBS_FROM_GIT that lets you specify you want certain libraries from git
instead.

* on a Nova change oslo.config comes from the release pypi version.
* on an oslo.config change we test a few devstack configurations with
LIBS_FROM_GIT=oslo.config, so that we can ensure that proposed
oslo.config changes won't break everyone.
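For example, in local.conf, something like:

    [[local|localrc]]
    # test proposed library changes from git rather than the pypi release
    LIBS_FROM_GIT=oslo.config,python-novaclient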

I believe we should do the same with all the python-*client libraries as
well. That will ensure that servers don't depend on unreleased features
of python client libraries, and will provide the forward testing to
ensure the next version of the python client to be released won't ruin
the world.

This is mostly a heads up that I'm going to start doing this
implementation. If someone wants to raise an objection, now is the time.
However I think breaking this master/master coupling of servers and
clients is important, and makes OpenStack function and upgrade a bit
closer to what people expect.

-Sean

-- 
Sean Dague
http://dague.net

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] pci pass through turing complete config options?

2014-10-28 Thread Daniel P. Berrange
On Tue, Oct 28, 2014 at 08:07:14AM -0400, Sean Dague wrote:
> On 10/28/2014 07:44 AM, Daniel P. Berrange wrote:
> > On Tue, Oct 28, 2014 at 07:34:11AM -0400, Sean Dague wrote:
> >> We're dealing with some issues on devstack pass through with really
> >> complicated config option types, the fixes are breaking other things.
> >>
> >> The issue at hand is the fact that the pci pass through device listing
> >> is an olso MultiStrOpt in which each option value is fully valid json
> >> document, which must parse as such. That leads to things like:
> >>
> >> pci_passthrough_whitelist = {"address":"*:0a:00.*",
> >> "physical_network":"physnet1"}
> >> pci_passthrough_whitelist = {"vendor_id":"1137","product_id":"0071"}
> >>
> >> Which, honestly, seems a little weird for configs.
> >>
> >> We're talking about a small number of fixed fields here, so the use of a
> >> full json doc seems weird. I'd like to reopen why this was the value
> >> format, and if we could have a more simple one.
> > 
> > Do you have ant suggestion for an alternative config syntax for specifying
> > a list of dicts which would be suitable ?
> > 
> > One option would be a more  CSV like syntax eg
> > 
> >pci_passthrough_whitelist = address=*0a:00.*,physical_network=physnet1
> >pci_passthrough_whitelist = vendor_id=1137,product_id=0071
> > 
> > But this gets confusing if we want to specifying multiple sets of data
> > so might need to use semi-colons as first separator, and comma for list
> > element separators
> > 
> >pci_passthrough_whitelist = vendor_id=8085;product_id=4fc2, 
> > vendor_id=1137;product_id=0071
> > 
> > Overall it isn't clear that inventing a special case language for this PCI
> > config value is a good idea.
> > 
> > I think it illustrates a gap in oslo.config, which ought to be able to
> > support a config option type which was a "list of dicts of strings"
> > so anywhere which needs such a beast will use the same syntax.
> 
> Mostly, why do we need name= at all. This seems like it would be fine as
> an fstab like format (with 'x' as an ignore value).
> 
> # vendor_id product_id address
> pci_passthrough_whitelist = 8085
> pci_passthrough_whitelist = 1137 4fc2
> pci_passthrough_whitelist = x 0071
> pci_passthrough_whitelist = x x *0a:00.*
>
> Basically going to a full name = value seems incredibly overkill for
> something with < 6 fields.

I don't think dropping the key name is really very extensible for the
future. We've already extended the info we record here at least once,
and I expect we'd want to add more fields later. It also makes it
less clear to the user - it is very easy to get confused between vendor
and product IDs if we leave out the name.

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] pci pass through turing complete config options?

2014-10-28 Thread Robert Li (baoli)
Sean,

Are you talking about this one: https://review.openstack.org/#/c/128805/?
Is it still breaking something after fixing the incompatible awk syntax?

Originally https://review.openstack.org/#/c/123599/ proposed a simple
patch to support that config. But it was abandoned in favor of the
local.conf meta-section.

Thanks,
Robert

On 10/28/14, 8:31 AM, "Daniel P. Berrange"  wrote:

>On Tue, Oct 28, 2014 at 08:07:14AM -0400, Sean Dague wrote:
>> On 10/28/2014 07:44 AM, Daniel P. Berrange wrote:
>> > On Tue, Oct 28, 2014 at 07:34:11AM -0400, Sean Dague wrote:
>> >> We're dealing with some issues on devstack pass through with really
>> >> complicated config option types, the fixes are breaking other things.
>> >>
>> >> The issue at hand is the fact that the pci pass through device
>>listing
>> >> is an olso MultiStrOpt in which each option value is fully valid json
>> >> document, which must parse as such. That leads to things like:
>> >>
>> >> pci_passthrough_whitelist = {"address":"*:0a:00.*",
>> >> "physical_network":"physnet1"}
>> >> pci_passthrough_whitelist = {"vendor_id":"1137","product_id":"0071"}
>> >>
>> >> Which, honestly, seems a little weird for configs.
>> >>
>> >> We're talking about a small number of fixed fields here, so the use
>>of a
>> >> full json doc seems weird. I'd like to reopen why this was the value
>> >> format, and if we could have a more simple one.
>> > 
>> > Do you have ant suggestion for an alternative config syntax for
>>specifying
>> > a list of dicts which would be suitable ?
>> > 
>> > One option would be a more  CSV like syntax eg
>> > 
>> >pci_passthrough_whitelist =
>>address=*0a:00.*,physical_network=physnet1
>> >pci_passthrough_whitelist = vendor_id=1137,product_id=0071
>> > 
>> > But this gets confusing if we want to specifying multiple sets of data
>> > so might need to use semi-colons as first separator, and comma for
>>list
>> > element separators
>> > 
>> >pci_passthrough_whitelist = vendor_id=8085;product_id=4fc2,
>>vendor_id=1137;product_id=0071
>> > 
>> > Overall it isn't clear that inventing a special case language for
>>this PCI
>> > config value is a good idea.
>> > 
>> > I think it illustrates a gap in oslo.config, which ought to be able to
>> > support a config option type which was a "list of dicts of strings"
>> > so anywhere which needs such a beast will use the same syntax.
>> 
>> Mostly, why do we need name= at all. This seems like it would be fine as
>> an fstab like format (with 'x' as an ignore value).
>> 
>> # vendor_id product_id address
>> pci_passthrough_whitelist = 8085
>> pci_passthrough_whitelist = 1137 4fc2
>> pci_passthrough_whitelist = x 0071
>> pci_passthrough_whitelist = x x *0a:00.*
>>
>> Basically going to a full name = value seems incredibly overkill for
>> something with < 6 fields.
>
>I don't think that is really very extensible for the future to drop the
>key name. We've already extended the info we record here at least once,
>and I expect we'd want to add more fields later. It is also makes it
>less clear to the user - it is very easy to get confused about vendor
>vs product IDs if we leave out the name.
>
>Regards,
>Daniel
>-- 
>|: http://berrange.com  -o-
>http://www.flickr.com/photos/dberrange/ :|
>|: http://libvirt.org  -o-
>http://virt-manager.org :|
>|: http://autobuild.org   -o-
>http://search.cpan.org/~danberr/ :|
>|: http://entangle-photo.org   -o-
>http://live.gnome.org/gtk-vnc :|
>
>___
>OpenStack-dev mailing list
>OpenStack-dev@lists.openstack.org
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Clear all flows when ovs agent start? why and how avoid?

2014-10-28 Thread Kyle Mestery
On Mon, Oct 27, 2014 at 10:01 PM, Damon Wang  wrote:
> Hi all,
>
> We suffered a long downtime when we upgraded our public cloud's neutron
> to the latest version (close to Juno RC2), because the ovs-agent cleaned all flows
> in br-tun when it started.
>
This is likely due to this bug [1] which was fixed in Juno. On agent
restart, all flows are reprogrammed. We do this to ensure that
everything is reprogrammed correctly and no stale flows are left.

[1] https://bugs.launchpad.net/tripleo/+bug/1290486
> I find our current design removes all flows and then adds flows entry by entry;
> this causes every network node to break all tunnels to every other
> network node and all compute nodes.
>
> ( plugins.openvswitch.agent.ovs_neutron_agent.OVSNeutronAgent.__init__ ->
> plugins.openvswitch.agent.ovs_neutron_agent.OVSNeutronAgent#setup_tunnel_br
> :
> self.tun_br.remove_all_flows() )
>
> Do we have any mechanism or ideas to avoid this, or should we rethink the
> current design? Comments welcome.
>
Perhaps a way around this would be to add a flag on agent startup
which would have it skip reprogramming flows. This could be used for
the upgrade case.
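A minimal sketch of what such a flag could look like in the OVS agent follows; the
option name 'drop_flows_on_start' is purely illustrative here, not an existing
option:

    from oslo.config import cfg

    agent_opts = [
        cfg.BoolOpt('drop_flows_on_start', default=True,
                    help='If False, preserve existing br-tun flows across an '
                         'agent restart instead of wiping and reprogramming '
                         'them from scratch.'),
    ]
    cfg.CONF.register_opts(agent_opts, 'AGENT')

    def setup_tunnel_br(tun_br):
        # Only clear the tunnel bridge when the operator has not asked to
        # preserve flows (e.g. during an upgrade of a running cloud).
        if cfg.CONF.AGENT.drop_flows_on_start:
            tun_br.remove_all_flows()
        # ... normal tunnel flow programming continues here ...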

> Wei Wang
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] pci pass through turing complete config options?

2014-10-28 Thread Sean Dague
On 10/28/2014 08:31 AM, Daniel P. Berrange wrote:
> On Tue, Oct 28, 2014 at 08:07:14AM -0400, Sean Dague wrote:
>> On 10/28/2014 07:44 AM, Daniel P. Berrange wrote:
>>> On Tue, Oct 28, 2014 at 07:34:11AM -0400, Sean Dague wrote:
 We're dealing with some issues on devstack pass through with really
 complicated config option types, the fixes are breaking other things.

 The issue at hand is the fact that the pci pass through device listing
 is an oslo MultiStrOpt in which each option value is a fully valid json
 document, which must parse as such. That leads to things like:

 pci_passthrough_whitelist = {"address":"*:0a:00.*",
 "physical_network":"physnet1"}
 pci_passthrough_whitelist = {"vendor_id":"1137","product_id":"0071"}

 Which, honestly, seems a little weird for configs.

 We're talking about a small number of fixed fields here, so the use of a
 full json doc seems weird. I'd like to reopen why this was the value
 format, and if we could have a more simple one.
>>>
>>> Do you have any suggestion for an alternative config syntax for specifying
>>> a list of dicts which would be suitable ?
>>>
>>> One option would be a more  CSV like syntax eg
>>>
>>>pci_passthrough_whitelist = address=*0a:00.*,physical_network=physnet1
>>>pci_passthrough_whitelist = vendor_id=1137,product_id=0071
>>>
>>> But this gets confusing if we want to specify multiple sets of data
>>> so might need to use semi-colons as first separator, and comma for list
>>> element separators
>>>
>>>pci_passthrough_whitelist = vendor_id=8085;product_id=4fc2, 
>>> vendor_id=1137;product_id=0071
>>>
>>> Overall it isn't clear that inventing a special case language for this PCI
>>> config value is a good idea.
>>>
>>> I think it illustrates a gap in oslo.config, which ought to be able to
>>> support a config option type which was a "list of dicts of strings"
>>> so anywhere which needs such a beast will use the same syntax.
>>
>> Mostly, why do we need name= at all. This seems like it would be fine as
>> an fstab like format (with 'x' as an ignore value).
>>
>> # vendor_id product_id address
>> pci_passthrough_whitelist = 8085
>> pci_passthrough_whitelist = 1137 4fc2
>> pci_passthrough_whitelist = x 0071
>> pci_passthrough_whitelist = x x *0a:00.*
>>
>> Basically going to a full name = value seems incredibly overkill for
>> something with < 6 fields.
> 
> I don't think that is really very extensible for the future to drop the
> key name. We've already extended the info we record here at least once,
> and I expect we'd want to add more fields later. It also makes it
> less clear to the user - it is very easy to get confused about vendor
> vs product IDs if we leave out the name.

If we really need that level of arbitrary complexity and future name
values we should then just:

pci_passthrough_cfg = /etc/nova/pci_pass.yaml

And build fully nested structures over there.

Doing multi level nesting inside of .ini format files is just kind of gross.

-Sean

-- 
Sean Dague
http://dague.net

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] [nfv] VM-based VLAN trunking blueprints

2014-10-28 Thread Alan Kavanagh
Hi Salvatore

Inline below.

From: Salvatore Orlando [mailto:sorla...@nicira.com]
Sent: October-28-14 12:37 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [neutron] [nfv] VM-based VLAN trunking blueprints

Keshava,

I think the thread is now going a bit off its stated topic - which is to 
discuss the various proposed approaches to vlan trunking.
Regarding your last post, I'm not sure I saw either spec implying that at the 
data plane level every instance attached to a trunk will be implemented as a 
different network stack.
AK--> Agree
Also, quoting the principle earlier cited in this thread -  "make the easy 
stuff easy and the hard stuff possible" - I would say that unless five 9s is a 
minimum requirement for an NFV application, we might start worrying about it 
once we have the bare minimum set of tools for allowing a NFV application over 
a neutron network.
AK--> Five 9s is a 100% must requirement for NFV, but let's ensure we don't mix 
up what the underlay service needs to guarantee and what OpenStack needs to do 
to ensure this type of service. Would agree, we should focus more on having the 
right configuration sets for onboarding NFV, which is what OpenStack needs to 
ensure is exposed; what is used underneath to guarantee the five 9s is a 
separate matter.
I think Ian has done a good job in explaining that while both approaches 
considered here address trunking for NFV use cases, they propose alternative 
implementations which can be leveraged in different ways by NFV applications. I 
do not now see a reason why we should not allow NFV apps to leverage a 
trunk network or create port-aware VLANs (or maybe you can even have VLAN aware 
ports which tap into a trunk network?)
AK--> Agree, I think we can hammer this out once and for all in Paris... this 
feature has been lingering too long.
We may continue discussing the pros and cons of each approach - but to me it's 
now just a matter of choosing the best solution for exposing them at the API 
layer. At the control/data plane layer, it seems to me that trunk networks are 
pretty much straightforward. VLAN aware ports are instead a bit more 
convoluted, but not excessively complicated in my opinion.
AK--> My thinking too, Salvatore; let's ensure the right elements are exposed at 
the API layer. I would also go a little further to ensure we get those feature sets 
supported in the Core API (another can-of-worms discussion, but we need 
to have it).
Salvatore


On 28 October 2014 11:55, A, Keshava <keshav...@hp.com> wrote:
Hi,
Pl find my reply ..


Regards,
keshava

From: Alan Kavanagh 
[mailto:alan.kavan...@ericsson.com]
Sent: Tuesday, October 28, 2014 3:35 PM

To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [neutron] [nfv] VM-based VLAN trunking blueprints

Hi
Please find some additions to Ian and responses below.
/Alan

From: A, Keshava [mailto:keshav...@hp.com]
Sent: October-28-14 9:57 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [neutron] [nfv] VM-based VLAN trunking blueprints

Hi,
Pl find the reply for the same.

Regards,
keshava

From: Ian Wells [mailto:ijw.ubu...@cack.org.uk]
Sent: Tuesday, October 28, 2014 1:11 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [neutron] [nfv] VM-based VLAN trunking blueprints

This all appears to be referring to trunking ports, rather than anything else, 
so I've addressed the points in that respect.
On 28 October 2014 00:03, A, Keshava <keshav...@hp.com> wrote:
Hi,

1.   How many Trunk ports can be created ?
Why would there be a limit?

Will there be any Active-Standby concepts will be there ?
I don't believe active-standby, or any HA concept, is directly relevant.  Did 
you have something in mind?
For the NFV kind of scenario, it is very much required to run the 'Service 
VM' in Active and Standby mode.
AK--> We have a different view on this: the "application runs as a pair", and the 
application either runs in active-active or active-standby. This has nothing to 
do with HA; it's down to the application and how it's provisioned and 
configured via OpenStack. So agree with Ian on this.
Standby is more of a passive entity and will not take any action toward the external 
network. It will be a passive consumer of the packets/information.
AK--> Why would we need to care?
In that scenario it will be very meaningful to have
an "Active port" - connected to the "Active Service VM", and
a "Standby port" - connected to the "Standby Service VM", which will turn Active when 
the old Active VM goes down?
AK--> Can't you just have two VMs and then, via a controller, decide how to 
address MAC+IP address control? FYI, most NFV apps have that built in today.
Let us know others' opinions about this concept.
AK--> Perhaps I am misreading this but I don't understand what this would 
provide as opposed to h

Re: [openstack-dev] [nova][CI] nova-networking or neutron networking for CI

2014-10-28 Thread Kyle Mestery
On Tue, Oct 28, 2014 at 4:53 AM, Andreas Scheuring
 wrote:
> Hi,
> we're preparing to add a new platform, libvirt-kvm on system z
> (https://blueprints.launchpad.net/nova/+spec/libvirt-kvm-systemz), to
> the HypervisorSupportMatrix
> https://wiki.openstack.org/wiki/HypervisorSupportMatrix .
>
>
> This matrix also lists a number of networking features (e.g. vlan
> networking, routing,..).
>
> If I interpret the footnotes correctly, these network items refer only
> to nova-networking, right? So if I plan to support vlan networking with
> neutron, but not with nova-networking it would be an "x" in the related
> cell, right?
>
>
> Now thinking one step further in the direction of a CI system for the
> new platform.
>
> Are current nova CI platforms configured with nova-networking or with
> neutron networking? Or is networking in general not even a part of the
> nova CI approach?
>
>
> My current assumption is that any networking is a requirement and most
> of the systems would go with nova-networking. Is this true?
>
> If so, does it make sense to run the CI system with neutron networking
> instead of nova-networking? What's the best practice in this area?
>
> Not sure about the deprecation plans of nova-networking. But setting up
> the CI environment might also take a while. Do you think it's still
> worthwhile spending time for the nova-network integration in the CI
> system, if it might be deprecated in the future anyhow?
>
>
> Any input?
>
Given we're moving towards deprecating nova-network in Kilo, if you
had to choose I'd say to use neutron here. But I'd like to hear what
other folks running CI systems for nova are doing as well, and also
what Michael has to say here.

Thanks,
Kyle

> Thanks a lot!
>
>
> --
> Andreas
> (irc: scheuran)
>
>
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] using released versions of python clients in tests

2014-10-28 Thread Kuvaja, Erno
Sean,

Please correct me if I'm wrong, but I think this needs to happen on RCs not on 
CI tests.

Couple of possible problems I personally see with this approach:
1) Extensive pressure to push new client releases (perhaps a client gets released 
before it is as good as intended, just to provide someone the tools to get through 
tests).
2) Un-necessary slowing of development. If the needed client functionality is 
merged but not released, the commits using this functionality will fail. This 
IMO fights against the point of having CI as we're still depending on internal 
releases during the development process.
3) More skipped tests "waiting" for client release and not catching the real 
issues.
4) Over time LIBS_FROM_GIT just accumulates all the clients in use, rendering the 
effort useless anyway.

I do agree that we need to catch the scenarios driving you towards this on 
whatever we call Stable, but anything out of that should not be affected just 
because project does not release monthly, weekly or daily client versions.

I might have missed something here, but I just don't see how an unreleased 
server depending on an unreleased client is a problem.

- Erno

> -Original Message-
> From: Sean Dague [mailto:s...@dague.net]
> Sent: 28 October 2014 12:29
> To: openstack-dev@lists.openstack.org
> Subject: [openstack-dev] [all] using released versions of python clients in
> tests
> 
> At the beginning of the month we moved through a set of patches for oslo
> libs that decoupled them from the integrated gate by testing server projects
> with released versions of oslo libraries.
> 
> The way it works is that in the base devstack case all the oslo libraries are
> pulled from pypi instead of git. There is an override LIBS_FROM_GIT that lets
> you specify you want certain libraries from git instead.
> 
> * on a Nova change oslo.config comes from the release pypi version.
> * on an oslo.config change we test a few devstack configurations with
> LIBS_FROM_GIT=oslo.config, so that we can ensure that proposed
> oslo.config changes won't break everyone.
> 
> I believe we should do the same with all the python-*client libraries as well.
> That will ensure that servers don't depend on unreleased features of python
> client libraries, and will provide the forward testing to ensure the next
> version of the python client to be released won't ruin the world.
> 
> This is mostly a heads up that I'm going to start doing this implementation. 
> If
> someone wants to raise an objection, now is the time.
> However I think breaking this master/master coupling of servers and clients
> is important, and makes OpenStack function and upgrade a bit closer to what
> people expect.
> 
>   -Sean
> 
> --
> Sean Dague
> http://dague.net
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][nova] New specs on routed networking

2014-10-28 Thread Mathieu Rohon
Hi,

Really interesting, thanks Cory.
During the L3 meeting we spoke about planning a POD session around BGP use cases.
At least 2 specs have BGP use cases:

https://review.openstack.org/#/c/125401/
https://review.openstack.org/#/c/93329/

It would be interesting if you joined this POD, to share your view and
leverage the BGP capabilities that will be introduced in Kilo for the
Calico project.

Mathieu


On Tue, Oct 28, 2014 at 8:44 AM, A, Keshava  wrote:
> Hi,
>
> Current Open-stack was built as flat network.
>
> With the introduction of the L3 lookup (by inserting the routing table in
> forwarding path) and separate ‘VIF Route Type’ interface:
>
>
>
> At what point in the packet processing will the decision be made to
> look up the FIB? Will there be an additional FIB lookup for each packet?
>
> What about the impact on 'inter-compute traffic' processed by DVR?
>
>
>
> Here I am thinking of the OpenStack cloud as a hierarchical network instead of a
> flat network?
>
>
>
> Thanks & regards,
>
> Keshava
>
>
>
> From: Rohit Agarwalla (roagarwa) [mailto:roaga...@cisco.com]
> Sent: Monday, October 27, 2014 12:36 AM
> To: OpenStack Development Mailing List (not for usage questions)
>
> Subject: Re: [openstack-dev] [neutron][nova] New specs on routed networking
>
>
>
> Hi
>
>
>
> I'm interested as well in this model. Curious to understand the routing
> filters and their implementation that will enable isolation between tenant
> networks.
>
> Also, having a BoF session on "Virtual Networking using L3" may be useful to
> get all interested folks together at the Summit.
>
>
>
>
>
> Thanks
>
> Rohit
>
>
>
> From: Kevin Benton 
> Reply-To: "OpenStack Development Mailing List (not for usage questions)"
> 
> Date: Friday, October 24, 2014 12:51 PM
> To: "OpenStack Development Mailing List (not for usage questions)"
> 
> Subject: Re: [openstack-dev] [neutron][nova] New specs on routed networking
>
>
>
> Hi,
>
>
>
> Thanks for posting this. I am interested in this use case as well.
>
>
>
> I didn't find a link to a review for the ML2 driver. Do you have any more
> details for that available?
>
> It seems like not providing L2 connectivity between members of the same
> Neutron network conflicts with assumptions ML2 will make about segmentation
> IDs, etc. So I am interested in seeing how exactly the ML2 driver will bind
> ports, segments, etc.
>
>
>
>
>
> Cheers,
>
> Kevin Benton
>
>
>
> On Fri, Oct 24, 2014 at 6:38 AM, Cory Benfield
>  wrote:
>
> All,
>
> Project Calico [1] is an open source approach to virtual networking based on
> L3 routing as opposed to L2 bridging.  In order to accommodate this approach
> within OpenStack, we've just submitted 3 blueprints that cover
>
> -  minor changes to nova to add a new VIF type [2]
> -  some changes to neutron to add DHCP support for routed interfaces [3]
> -  an ML2 mechanism driver that adds support for Project Calico [4].
>
> We feel that allowing for routed network interfaces is of general use within
> OpenStack, which was our motivation for submitting [2] and [3].  We also
> recognise that there is an open question over the future of 3rd party ML2
> drivers in OpenStack, but until that is finally resolved in Paris, we felt
> submitting our driver spec [4] was appropriate (not least to provide more
> context on the changes proposed in [2] and [3]).
>
> We're extremely keen to hear any and all feedback on these proposals from
> the community.  We'll be around at the Paris summit in a couple of weeks and
> would love to discuss with anyone else who is interested in this direction.
>
> Regards,
>
> Cory Benfield (on behalf of the entire Project Calico team)
>
> [1] http://www.projectcalico.org
> [2] https://blueprints.launchpad.net/nova/+spec/vif-type-routed
> [3] https://blueprints.launchpad.net/neutron/+spec/dhcp-for-routed-ifs
> [4] https://blueprints.launchpad.net/neutron/+spec/calico-mechanism-driver
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] pci pass through turing complete config options?

2014-10-28 Thread Dan Smith
> If we really need that level of arbitrary complexity and future name
> values we should then just:
>
> pci_passthrough_cfg = /etc/nova/pci_pass.yaml

I hate to have to introduce a new thing like that, but I also think that
JSON-encoded config variable strings are a nightmare. They lead to bugs
like this:

https://bugs.launchpad.net/nova/+bug/1383345

So I'd rather see something that is a little easier to manage. Also,
moving things like this out to a separate file makes it easy to
generate/update that file automatically, which is probably a useful
thing for something like PCI.

--Dan



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] pci pass through turing complete config options?

2014-10-28 Thread Doug Hellmann

On Oct 28, 2014, at 7:34 AM, Sean Dague  wrote:

> We're dealing with some issues on devstack pass through with really
> complicated config option types, the fixes are breaking other things.
> 
> The issue at hand is the fact that the pci pass through device listing
> is an oslo MultiStrOpt in which each option value is a fully valid json
> document, which must parse as such. That leads to things like:
> 
> pci_passthrough_whitelist = {"address":"*:0a:00.*",
> "physical_network":"physnet1"}
> pci_passthrough_whitelist = {"vendor_id":"1137","product_id":"0071"}
> 
> Which, honestly, seems a little weird for configs.
> 
> We're talking about a small number of fixed fields here, so the use of a
> full json doc seems weird. I'd like to reopen why this was the value
> format, and if we could have a more simple one.
> 
> We're probably going to revert the attempted devstack support for pass
> through of these things anyway, because it's breaking variable
> interpolation in other config options. And the complexity added by
> trying to add support for things like that in local.conf has shown to be
> too much for the current ini parser structure.

Another way to do this, which has been used in some other projects, is to 
define one option for a list of “names” of things, and use those names to make 
groups with each field in an individual option. This is similar to the logging 
config file. For example,

  [DEFAULT]
  pci_passthrough_rules = by_address, by_vendor_id

  [pci_passthrough_by_address]
  address = *:0a:00.*
  physical_network = physnet1

  [pci_passthrough_by_vendor_id]
  vendor_id = 1137
  product_id = 0071

The options for each “pci_passthrough_*” group can be registered multiple times 
as long as the group name is different. You would access the values as 
cfg.CONF.pci_passthrough_by_address.address, etc., and that places some naming 
restrictions on the groups.
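
For illustration, a rough sketch of how that pattern could be wired up with
oslo.config (the option and group names simply mirror the example above and are
not an agreed interface):

    from oslo.config import cfg

    CONF = cfg.CONF
    CONF.register_opts([cfg.ListOpt('pci_passthrough_rules', default=[])])

    _rule_opts = [
        cfg.StrOpt('vendor_id', default='*'),
        cfg.StrOpt('product_id', default='*'),
        cfg.StrOpt('address', default='*'),
        cfg.StrOpt('physical_network', default='*'),
    ]

    def load_pci_rules(conf=CONF):
        """Register one option group per named rule and return the values."""
        rules = []
        for name in conf.pci_passthrough_rules:
            group = 'pci_passthrough_%s' % name
            conf.register_opts(_rule_opts, group=group)
            g = conf[group]
            rules.append({'vendor_id': g.vendor_id,
                          'product_id': g.product_id,
                          'address': g.address,
                          'physical_network': g.physical_network})
        return rules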

OTOH, oslo.config is not the only way we have to support configuration. This 
looks like a good example of settings that are more complex than what 
oslo.config is meant to handle, and that might be better served in a separate 
file with the location of that file specified in an oslo.config option.

Doug


> 
>   -Sean
> 
> -- 
> Sean Dague
> http://dague.net
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][CI] nova-networking or neutron networking for CI

2014-10-28 Thread Dan Smith
> Are current nova CI platforms configured with nova-networking or with
> neutron networking? Or is networking in general not even a part of the
> nova CI approach?

I think we have several that only run on Neutron, so I think it's fine
to just do that.

--Dan



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] using released versions of python clients in tests

2014-10-28 Thread Sean Dague
On 10/28/2014 09:36 AM, Kuvaja, Erno wrote:
> Sean,
> 
> Please correct me if I'm wrong, but I think this needs to happen on RCs not 
> on CI tests.
> 
> Couple of possible problems I personally see with this approach:
> 1) Extensive pressure to push new client releases (perhaps a client gets 
> released before it is as good as intended, just to provide someone the tools to get 
> through tests).

This is a good thing. Version numbers are cheap, and sitting on large
chunks of client changes for long periods of time is something we should
avoid.

> 2) Un-necessary slowing of development. If the needed client functionality is 
> merged but not released, the commits using this functionality will fail. This 
> IMO fights against the point of having CI as we're still depending on 
> internal releases during the development process.
> 3) More skipped tests "waiting" for client release and not catching the real 
> issues.

Version numbers are cheap. New client features should trigger a new
client release.

> 4) Over time LIBS_FROM_GIT just accumulates all the clients in use, rendering 
> the effort useless anyway.

No, it doesn't, because we'll use it specifically for just tests for the
specific libraries.

> I do agree that we need to catch the scenarios driving you towards this on 
> whatever we call Stable, but anything out of that should not be affected just 
> because project does not release monthly, weekly or daily client versions.
> 
> I might have missed something here, but I just don't see how an unreleased 
> server depending on an unreleased client is a problem.

It very much is an issue: we expect that CD environments are going to be
CDing the servers, but pip installing all libraries. We built the
current clients-from-git testing model to prevent complete breakage of
OpenStack when a client was released. But it turned into a really blunt
instrument that has meant we've often gotten into a place where the
servers can't function with their current listed requirements because
they are using unreleased client features.

This should take us back to testing a more sensible thing.

-Sean

> 
> - Erno
> 
>> -Original Message-
>> From: Sean Dague [mailto:s...@dague.net]
>> Sent: 28 October 2014 12:29
>> To: openstack-dev@lists.openstack.org
>> Subject: [openstack-dev] [all] using released versions of python clients in
>> tests
>>
>> At the beginning of the month we moved through a set of patches for oslo
>> libs that decoupled them from the integrated gate by testing server projects
>> with released versions of oslo libraries.
>>
>> The way it works is that in the base devstack case all the oslo libraries are
>> pulled from pypi instead of git. There is an override LIBS_FROM_GIT that lets
>> you specify you want certain libraries from git instead.
>>
>> * on a Nova change oslo.config comes from the release pypi version.
>> * on an oslo.config change we test a few devstack configurations with
>> LIBS_FROM_GIT=oslo.config, so that we can ensure that proposed
>> oslo.config changes won't break everyone.
>>
>> I believe we should do the same with all the python-*client libraries as 
>> well.
>> That will ensure that servers don't depend on unreleased features of python
>> client libraries, and will provide the forward testing to ensure the next
>> version of the python client to be released won't ruin the world.
>>
>> This is mostly a heads up that I'm going to start doing this implementation. 
>> If
>> someone wants to raise an objection, now is the time.
>> However I think breaking this master/master coupling of servers and clients
>> is important, and makes OpenStack function and upgrade a bit closer to what
>> people expect.
>>
>>  -Sean
>>
>> --
>> Sean Dague
>> http://dague.net
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 


-- 
Sean Dague
http://dague.net

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tuskar][tripleo] Tuskar/TripleO on Devstack

2014-10-28 Thread Steven Hardy
On Tue, Oct 28, 2014 at 11:08:05PM +1300, Robert Collins wrote:
> On 28 October 2014 22:51, Steven Hardy  wrote:
> > On Tue, Oct 28, 2014 at 03:22:36PM +1300, Robert Collins wrote:

> > 3. Log onto seed VM to debug the issue.  Discover there are no logs.
> 
> We should fix that - is there a bug open? Thats a fairly serious issue
> for debugging a deployment.

heh, turns out there's already a long-standing bug (raised by you :D):

https://bugs.launchpad.net/tripleo/+bug/1290759

After some further experimentation and IRC discussion, it turns out that,
in theory devtest_seed.sh --debug-logging should do what I want, only atm
it doesn't work.

https://review.openstack.org/#/c/130369 looks like it may solve that in due
course.

The other (now obvious) thing I was missing was that despite all the
services not being configured to log to any file, the console log ends up
in /var/log/messages, so that was just a misunderstanding on my part.  I
was confused by the fact that the service configs (including use_syslog)
are all false/unset.

Thanks,

Steve

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] How to connect to a serial port of an instance via websocket?

2014-10-28 Thread Markus Zoeller
The API provides an endpoint for querying the serial console of an 
instance ('os-getSerialConsole'). The nova-client interacts with this
API endpoint via the command `get-serial-console`.

nova get-serial-console myInstance
 
It returns a string like:

ws://127.0.0.1:6083/?token=e2b42240-375d-41fe-a166-367e4bbdce35
 
Q: How is one supposed to connect to such a websocket?
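
For reference, a minimal client sketch of the kind of thing I am trying to do,
using the third-party websocket-client package and assuming the proxy accepts
the 'binary' subprotocol (pip install websocket-client):

    import websocket

    # Token URL as returned by `nova get-serial-console`.
    url = "ws://127.0.0.1:6083/?token=e2b42240-375d-41fe-a166-367e4bbdce35"
    ws = websocket.create_connection(url, subprotocols=["binary"])

    ws.send("\r\n")       # nudge the console
    print(ws.recv())      # read whatever the guest prints back
    ws.close()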

[1] 
https://github.com/openstack/nova/blob/master/nova/api/openstack/compute/contrib/consoles.py#L111
[2] 
https://ask.openstack.org/en/question/50671/how-to-connect-to-a-serial-port-of-an-instance-via-websocket/

Regards,
Markus Zoeller
IRC: markus_z


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] pci pass through turing complete config options?

2014-10-28 Thread Jay Pipes

On 10/28/2014 07:44 AM, Daniel P. Berrange wrote:

One option would be a more  CSV like syntax eg

pci_passthrough_whitelist = address=*0a:00.*,physical_network=physnet1
pci_passthrough_whitelist = vendor_id=1137,product_id=0071

But this gets confusing if we want to specify multiple sets of data
so might need to use semi-colons as first separator, and comma for list
element separators

pci_passthrough_whitelist = vendor_id=8085;product_id=4fc2, 
vendor_id=1137;product_id=0071


What about this instead (with each being a MultiStrOpt, but no comma or 
semicolon delimiters needed...)?


[pci_passthrough_whitelist]
# Any Intel PRO/1000 F Server Adapter
vendor_id=8086
product_id=1001
address=*
physical_network=*
# Cisco VIC SR-IOV VF only on specified address and physical network
vendor_id=1137
product_id=0071
address=*:0a:00.*
physical_network=physnet1

Either that, or the YAML file that Sean suggested, would be my preference...

Best,
-jay

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [devstack] Enable LVM ephemeral storage for Nova

2014-10-28 Thread Dan Genin
Duncan, I don't think it's possible to have multiple volume groups using 
the same physical volume[1]. In fact, counter-intuitively (at least to 
me) the nesting actually goes the other way with multiple physical 
volumes comprising a single volume group. The LVM naming scheme actually 
makes more sense with this hierarchy.


So this brings us back to the original proposal of having separate 
backing files for Cinder and Nova which Dean thought might take too much 
space.


Duncan, could you please elaborate on the pain a single volume group is 
likely to cause for Cinder? Is it a show stopper?


Thank you,
Dan

1. https://wiki.archlinux.org/index.php/LVM#LVM_Building_Blocks


On 10/21/2014 03:10 PM, Duncan Thomas wrote:


Sharing the vg with cinder is likely to cause some pain when testing 
proposed features for cinder reconciling its backend with the cinder db. 
Creating a second vg sharing the same backend pv is easy and avoids 
all such problems.


Duncan Thomas

On Oct 21, 2014 4:07 PM, "Dan Genin" wrote:


Hello,

I would like to add to DevStack the ability to stand up Nova with LVM ephemeral
storage. Below is a draft of the blueprint describing the proposed feature.

Suggestions on architecture, implementation and the blueprint in general are
very welcome.

Best,
Dan


Enable LVM ephemeral storage for Nova


Currently DevStack supports only file based ephemeral storage for Nova, e.g.,
raw and qcow2. This is an obstacle to Tempest testing of Nova with LVM
ephemeral storage, which in the past has been inadvertently broken
(see for example, https://bugs.launchpad.net/nova/+bug/1373962), and to Tempest
testing of new features based on LVM ephemeral storage, such as LVM ephemeral
storage encryption.

To enable Nova to come up with LVM ephemeral storage it must be provided a
volume group. Based on an initial discussion with Dean Troyer, this is best
achieved by creating a single volume group for all services that potentially
need LVM storage; at the moment these are Nova and Cinder.

Implementation of this feature will:

 * move code in lib/cinder/cinder_backends/lvm to lib/lvm with appropriate
   modifications

 * rename the Cinder volume group to something generic, e.g., devstack-vg

 * modify the Cinder initialization and cleanup code appropriately to use
   the new volume group

 * initialize the volume group in stack.sh, shortly before services are
   launched

 * cleanup the volume group in unstack.sh after the services have been
   shutdown

The question of how large to make the common Nova-Cinder volume group in order
to enable LVM ephemeral Tempest testing will have to be explored. Although,
given the tiny instance disks used in Nova Tempest tests, the current
Cinder volume group size may already be adequate.

No new configuration options will be necessary, assuming the volume group size
will not be made configurable.


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org

http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] pci pass through turing complete config options?

2014-10-28 Thread Brent Eagles

Hi,

On 28/10/2014 10:39 AM, Sean Dague wrote:

On 10/28/2014 08:31 AM, Daniel P. Berrange wrote:





I don't think that is really very extensible for the future to drop the
key name. We've already extended the info we record here at least once,
and I expect we'd want to add more fields later. It also makes it
less clear to the user - it is very easy to get confused about vendor
vs product IDs if we leave out the name.


If we really need that level of arbitrary complexity and future name
values we should then just:

pci_passthrough_cfg = /etc/nova/pci_pass.yaml

And build fully nested structures over there.

Doing multi level nesting inside of .ini format files is just kind of gross.

-Sean


The PCI whitelist mechanism needs to be extensible and for the sake of 
expediency the existing whitelist mechanism was modified to add the 
fields it has now. There has been discussion that the current mechanism 
is either insufficient or simply completely undesirable for the PCI 
passthrough use cases and the current approach was an interim solution. 
Unless the current situation is completely untenable and simply must go, 
is this a good opportunity to revisit previous discussions and proposals 
before devising alternatives?


Cheers,

Brent

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] pci pass through turing complete config options?

2014-10-28 Thread Daniel P. Berrange
On Tue, Oct 28, 2014 at 10:18:37AM -0400, Jay Pipes wrote:
> On 10/28/2014 07:44 AM, Daniel P. Berrange wrote:
> >One option would be a more  CSV like syntax eg
> >
> >pci_passthrough_whitelist = address=*0a:00.*,physical_network=physnet1
> >pci_passthrough_whitelist = vendor_id=1137,product_id=0071
> >
> >But this gets confusing if we want to specify multiple sets of data
> >so might need to use semi-colons as first separator, and comma for list
> >element separators
> >
> >pci_passthrough_whitelist = vendor_id=8085;product_id=4fc2, 
> > vendor_id=1137;product_id=0071
> 
> What about this instead (with each being a MultiStrOpt, but no comma or
> semicolon delimiters needed...)?
> 
> [pci_passthrough_whitelist]
> # Any Intel PRO/1000 F Server Adapter
> vendor_id=8086
> product_id=1001
> address=*
> physical_network=*
> # Cisco VIC SR-IOV VF only on specified address and physical network
> vendor_id=1137
> product_id=0071
> address=*:0a:00.*
> physical_network=physnet1

I think this is reasonable, though do we actually support setting
the same key twice ?

As an alternative we could just append an index for each "element"
in the list, eg like this:

 [pci_passthrough_whitelist]
 rule_count=2

 # Any Intel PRO/1000 F Server Adapter
 vendor_id.0=8086
 product_id.0=1001
 address.0=*
 physical_network.0=*

 # Cisco VIC SR-IOV VF only on specified address and physical network
 vendor_id.1=1137
 product_id.1=0071
 address.1=*:0a:00.*
 physical_network.1=physnet1

Or like this:

 [pci_passthrough]
 whitelist_count=2

 [pci_passthrough_rule.0]
 # Any Intel PRO/1000 F Server Adapter
 vendor_id=8086
 product_id=1001
 address=*
 physical_network=*

 [pci_passthrough_rule.1]
 # Cisco VIC SR-IOV VF only on specified address and physical network
 vendor_id=1137
 product_id=0071
 address=*:0a:00.*
 physical_network=physnet1
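
As a rough sketch (plain stdlib ConfigParser rather than oslo.config; section
and option names exactly as in the second variant above), reading that layout
back could look like:

    try:
        import configparser                      # Python 3
    except ImportError:
        import ConfigParser as configparser      # Python 2

    def read_pci_rules(path):
        parser = configparser.RawConfigParser()
        parser.read(path)
        count = parser.getint('pci_passthrough', 'whitelist_count')
        rules = []
        for i in range(count):
            section = 'pci_passthrough_rule.%d' % i
            rules.append(dict(parser.items(section)))
        return rules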

> Either that, or the YAML file that Sean suggested, would be my preference...

I think it is nice to have it all in the same file, not least because it
will be easier for people supporting OpenStack in the field, i.e. in bug
reports we can just ask for nova.conf and know we'll have all the user
config we care about in that one place.

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Hyper-V Meeting

2014-10-28 Thread Peter Pouliot
Due to last minute preparation for the summit I'll be cancelling the meeting 
for this week.   We'll resume activity post summit.

p

Peter J. Pouliot CISSP
Sr. SDET OpenStack
Microsoft
New England Research & Development Center
1 Memorial Drive
Cambridge, MA 02142
P: 1.(857).4536436
E: ppoul...@microsoft.com

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Kilo Mid-Cycle Meetup Planning

2014-10-28 Thread Chris Jones
Hi

On 9 October 2014 23:56, James Polley  wrote:

>
> Assuming it's in the US or Europe, Mon-Fri gives me about 3 useful days,
> once you take out the time I lose to jet lag. That's barely worth the 48
> hours or so I spent in transit last time.
>

It may well be reasonable/possible, assuming it's not inconvenient for you,
to add a day or two to the trip, to recover before the meetup starts :)

-- 
Cheers,

Chris
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [devstack] Enable LVM ephemeral storage for Nova

2014-10-28 Thread Dean Troyer
On Tue, Oct 28, 2014 at 9:27 AM, Dan Genin  wrote:

>  So this brings us back to the original proposal of having separate
> backing files for Cinder and Nova which Dean thought might take too much
> space.
>

Between Cinder, Nova and Swift (and Ceph, etc) everybody wants some
loopback disk images.  DevStack's Swift and Ceph configurations assume
loopback devices and do no sharing.


> Duncan, could you please elaborate on the pain a single volume group is
> likely to cause for Cinder? Is it a show stopper?
>

Back in the day, DevStack was built to configure Cinder (and Nova Volume
before that) to use a specific existing volume group (VOLUME_GROUP_NAME) or
create a loopback file if necessary.  With the help of VOLUME_NAME_PREFIX
and volume_name_template DevStack knew which logical volumes belong to
Cinder and could Do The Right Thing.

With three loopback files being created, all wanting larger and larger
defaults, adding a fourth becomes Just One More Thing.  If Nova's use of
LVM is similar enough to Cinder's (uses deterministic naming for the LVs)
I'm betting we could make it work.

dt

-- 

Dean Troyer
dtro...@gmail.com
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Fuel] fuel-library merge policy and Fuel CI

2014-10-28 Thread Aleksandra Fedorova
Hi everyone,

with recent disruption in our CI process I'd like to discuss again the
issues in our merge workflow.

See the summary at the end.


As a starting point, here is the list of patches which were merged
into fuel-library repository without "Verified +1" label from Fuel CI:

https://review.openstack.org/#/q/project:stackforge/fuel-library+AND+status:merged+AND+NOT+label:Verified%252B1%252Cuser%253Dfuel-ci,n,z

And the list of merged patches with explicit "Verified -1" label:

https://review.openstack.org/#/q/project:stackforge/fuel-library+AND+status:merged+AND+label:Verified-1%252Cuser%253Dfuel-ci,n,z

There are two common reasons I know why these patchsets exist:

Case 1: "Who cares about CI anyway".

Case 2: These patches can not pass CI because of some real reason,
which makes Fuel CI result irrelevant.

I am not sure, if i need to comment on the first one, but please just
remember: CI is not a devops playground made to disrupt your otherwise
clean and smooth development process. It is an extremely critical
service, providing the clear reference point for all the work we do.
And we all know how important the reference point is [1].

So let's move on to the Case 2 and talk about our CI limitations and
what could possibly make the test result irrelevant.

1) Dependencies.

Let's say you have a chain of dependent patchsets and none of them
could pass the CI on its own. How do you merge it?

Here is the trick: the "leaf", i.e. last, topmost patch in the chain
should pass the CI.

The test we run for this patchset automatically pulls all dependencies
involved. Which makes Fuel CI result for this patchset perfectly
relevant for the whole chain.

2) Environment.

The Fuel CI test environment usually uses a slightly outdated version of the
Fuel ISO image and fuel-main code. Therefore it happens that you write
and test your patch against the latest code via custom ISO builds and it
works, but it cannot pass CI. Does that make the test results irrelevant?
No. It makes them even more valuable.

CI environment can be broken and can be outdated. This is the part of
the process. To deal with these situations we first need to fix the
environment, then run tests, and then merge the code.

And it helps if you contact the devops team in advance and inform us that
you will soon need an ISO with these particular features.

3) ?

Please add your examples and let's deal with them one by one.


Summary:

I'd like to propose the following merge policy:

1. any individual patchset MUST have +1 from Fuel CI;

2. any chain of dependent patchsets MUST have +1 from Fuel CI for the
topmost patch;

3. for all exceptional cases the person who does the merge MUST
explicitly contact devops team, and make sure that there will be
devops engineer available who will run additional checks before or
right after the merge. The very same person who does the merge also
MUST be available for some time after the merge to help the devops
engineer to deal with the test failures if they appear.



[1] http://www.youtube.com/watch?feature=player_embedded&v=QkCQ_-Id8zI#t=211


-- 
Aleksandra Fedorova
Fuel Devops Engineer
bookwar

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tuskar][tripleo] Tuskar/TripleO on Devstack

2014-10-28 Thread Jay Dobies

5. API: You can't create or modify roles via the API, or even view the
content of the role after creating it


None of that is in place yet, mostly due to time. The tuskar-load-roles 
was a short-term solution to getting a base set of roles in. 
Conceptually you're on target with I want to see in the coming releases.


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS][Octavia] Usage Requirements

2014-10-28 Thread Jorge Miramontes
Thanks for the reply Angus,

DDoS attacks are definitely a concern we are trying to address here. My
assumptions are based on a solution that is engineered for this type of
thing. Are you more concerned with network I/O during a DoS attack or
storing the logs? Under the idea I had, I wanted to make the amount of
time logs are stored for configurable so that the operator can choose
whether they want the logs after processing or not. The network I/O of
pumping logs out is a concern of mine, however.

Sampling seems like the go-to solution for gathering usage but I was
looking for something different as sampling can get messy and can be
inaccurate for certain metrics. Depending on the sampling rate, this
solution has the potential to miss spikes in traffic if you are gathering
gauge metrics such as active connections/sessions. Using logs would be
100% accurate in this case. Also, I'm assuming LBaaS will have events so
combining sampling with events (CREATE, UPDATE, SUSPEND, DELETE, etc.)
gets complicated. Combining logs with events is arguably less complicated
as the granularity of logs is high. Due to this granularity, one can split
the logs based on the event times cleanly. Since sampling will have a
fixed cadence you will have to perform a "manual" sample at the time of
the event (i.e. add complexity).
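
As a rough sketch of the "split the logs at event boundaries" idea (the record
and event shapes here are made up purely for illustration):

    import bisect

    def usage_between_events(log_records, event_times):
        """log_records: iterable of (timestamp, bytes) tuples.
        event_times: sorted list of config-event timestamps (CREATE, UPDATE, ...).
        Returns one usage bucket per interval between events."""
        buckets = [0] * (len(event_times) + 1)
        for ts, nbytes in log_records:
            buckets[bisect.bisect_right(event_times, ts)] += nbytes
        return buckets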

At the end of the day there is no free lunch so more insight is
appreciated. Thanks for the feedback.

Cheers,
--Jorge




On 10/27/14 6:55 PM, "Angus Lees"  wrote:

>On Wed, 22 Oct 2014 11:29:27 AM Robert van Leeuwen wrote:
>> > I,d like to start a conversation on usage requirements and have a few
>> > suggestions. I advocate that, since we will be using TCP and
>>HTTP/HTTPS
>> > based protocols, we inherently enable connection logging for load
>> 
>> > balancers for several reasons:
>> Just request from the operator side of things:
>> Please think about the scalability when storing all logs.
>> 
>> e.g. we are currently logging http requests to one load balanced
>>application
>> (that would be a fit for LBAAS) It is about 500 requests per second,
>>which
>> adds up to 40GB per day (in elasticsearch.) Please make sure whatever
>> solution is chosen it can cope with machines doing 1000s of requests per
>> second...
>
>And to take this further, what happens during DoS attack (either syn
>flood or 
>full connections)?  How do we ensure that we don't lose our logging
>system 
>and/or amplify the DoS attack?
>
>One solution is sampling, with a tunable knob for the sampling rate -
>perhaps 
>tunable per-vip.  This still increases linearly with attack traffic,
>unless you 
>use time-based sampling (1-every-N-seconds rather than 1-every-N-packets).
>
>One of the advantages of (eg) polling the number of current sessions is
>that 
>the cost of that monitoring is essentially fixed regardless of the number
>of 
>connections passing through.  Numerous other metrics (rate of new
>connections, 
>etc) also have this property and could presumably be used for accurate
>billing 
>- without amplifying attacks.
>
>I think we should be careful about whether we want logging or metrics for
>more 
>accurate billing.  Both are useful, but full logging is only really
>required 
>for ad-hoc debugging (important! but different).
>
>-- 
> - Gus
>
>___
>OpenStack-dev mailing list
>OpenStack-dev@lists.openstack.org
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [devstack] Enable LVM ephemeral storage for Nova

2014-10-28 Thread Duncan Thomas
Hi Dan

You're quite right, the nesting isn't as I thought it was, sorry to mislead you.

It isn't a show stopper, it just makes testing some proposed useful
functionality slightly harder. If nova were to namespace its volumes
(e.g. start all the volume names with nova-*) then that would allow
the problem to be easily worked around in the test. Does that sound
reasonable?
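
Something along these lines is the kind of test-side filtering I have in mind
(the "nova-" prefix is the hypothetical naming convention, not anything Nova
does today):

    import subprocess

    def lvs_for_service(vg, prefix="nova-"):
        """List only the logical volumes in `vg` owned by a given service."""
        out = subprocess.check_output(
            ["sudo", "lvs", "--noheadings", "-o", "lv_name", vg]).decode()
        return [name.strip() for name in out.splitlines()
                if name.strip().startswith(prefix)]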

On 28 October 2014 14:27, Dan Genin  wrote:
> Duncan, I don't think it's possible to have multiple volume groups using the
> same physical volume[1]. In fact, counter-intuitively (at least to me) the
> nesting actually goes the other way with multiple physical volumes
> comprising a single volume group. The LVM naming scheme actually makes more
> sense with this hierarchy.
>
> So this brings us back to the original proposal of having separate backing
> files for Cinder and Nova which Dean thought might take too much space.
>
> Duncan, could you please elaborate on the pain a single volume group is
> likely to cause for Cinder? Is it a show stopper?
>
> Thank you,
> Dan
>
> 1. https://wiki.archlinux.org/index.php/LVM#LVM_Building_Blocks
>
>
> On 10/21/2014 03:10 PM, Duncan Thomas wrote:
>
> Sharing the vg with cinder is likely to cause some pain when testing proposed
> features for cinder reconciling its backend with the cinder db. Creating a second vg
> sharing the same backend pv is easy and avoids all such problems.
>
> Duncan Thomas
>
> On Oct 21, 2014 4:07 PM, "Dan Genin"  wrote:
>>
>> Hello,
>>
>> I would like to add to DevStack the ability to stand up Nova with LVM
>> ephemeral
>> storage. Below is a draft of the blueprint describing the proposed
>> feature.
>>
>> Suggestions on architecture, implementation and the blueprint in general
>> are very
>> welcome.
>>
>> Best,
>> Dan
>>
>> 
>> Enable LVM ephemeral storage for Nova
>> 
>>
>> Currently DevStack supports only file based ephemeral storage for Nova,
>> e.g.,
>> raw and qcow2. This is an obstacle to Tempest testing of Nova with LVM
>> ephemeral
>> storage, which in the past has been inadvertantly broken
>> (see for example, https://bugs.launchpad.net/nova/+bug/1373962), and to
>> Tempest
>> testing of new features based on LVM ephemeral storage, such as LVM
>> ephemeral
>> storage encryption.
>>
>> To enable Nova to come up with LVM ephemeral storage it must be provided a
>> volume group. Based on an initial discussion with Dean Troyer, this is
>> best
>> achieved by creating a single volume group for all services that
>> potentially
>> need LVM storage; at the moment these are Nova and Cinder.
>>
>> Implementation of this feature will:
>>
>>  * move code in lib/cinder/cinder_backends/lvm to lib/lvm with appropriate
>>modifications
>>
>>  * rename the Cinder volume group to something generic, e.g., devstack-vg
>>
>>  * modify the Cinder initialization and cleanup code appropriately to use
>>the new volume group
>>
>>  * initialize the volume group in stack.sh, shortly before services are
>>launched
>>
>>  * cleanup the volume group in unstack.sh after the services have been
>>shutdown
>>
>> The question of how large to make the common Nova-Cinder volume group in
>> order
>> to enable LVM ephemeral Tempest testing will have to be explored.
>> Although,
>> given the tiny instance disks used in Nova Tempest tests, the current
>> Cinder volume group size may already be adequate.
>>
>> No new configuration options will be necessary, assuming the volume group
>> size
>> will not be made configurable.
>>
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Duncan Thomas

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][nova] New specs on routed networking

2014-10-28 Thread Cory Benfield
On Tue, Oct 28, 2014 at 07:44:48, A, Keshava wrote:
> Hi,
> 
> Current Open-stack was built as flat network.
> 
> With the introduction of the L3 lookup (by inserting the routing table
> in forwarding path) and separate 'VIF Route Type' interface:
> 
> At what point in the packet processing will the decision be made
> to look up the FIB? Will there be an additional FIB
> lookup for each packet?
> 
> What about the impact on 'inter-compute traffic' processed by DVR?
> Here I am thinking of the OpenStack cloud as a hierarchical network instead of
> a flat network?

Keshava,

It's difficult for me to answer in general terms: the proposed specs are 
general enough to allow multiple approaches to building purely-routed networks 
in OpenStack, and they may all have slightly different answers to some of these 
questions. I can, however, speak about how Project Calico intends to apply them.

For Project Calico, the FIB lookup is performed for every packet emitted by a 
VM and destined for a VM. Each compute host routes all the traffic to/from its 
guests. The DVR approach isn't necessary in this kind of network because it 
essentially already implements one: all packets are always routed, and no 
network node is ever required in the network.

The routed network approach doesn't add any hierarchical nature to an OpenStack 
cloud. The difference between the routed approach and the standard OVS approach 
is that packet processing happens entirely at layer 3. Put another way, in 
Project Calico-based networks a Neutron subnet no longer maps to a layer 2 
broadcast domain.

I hope that clarifies: please shout if you'd like more detail.

Cory

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] fuel-library merge policy and Fuel CI

2014-10-28 Thread Vitaly Kramskikh
Aleksandra,

As you may know, there is a randomly failing nailgun unit test in fuel-web
repo, which fails for the major part of review requests. It's been
happening for a few days. But I need to merge some stuff and cannot wait
for the fix of this well known issue. So for every request with -1 from
Fuel CI I write an explanation why I decided to merge the request. Are you
ok with this? Here is an example: https://review.openstack.org/#/c/131079/

2014-10-28 23:10 GMT+07:00 Aleksandra Fedorova :

> Hi everyone,
>
> with recent disruption in our CI process I'd like to discuss again the
> issues in our merge workflow.
>
> See the summary at the end.
>
>
> As a starting point, here is the list of patches which were merged
> into fuel-library repository without "Verified +1" label from Fuel CI:
>
>
> https://review.openstack.org/#/q/project:stackforge/fuel-library+AND+status:merged+AND+NOT+label:Verified%252B1%252Cuser%253Dfuel-ci,n,z
>
> And the list of merged patches with explicit "Verified -1" label:
>
>
> https://review.openstack.org/#/q/project:stackforge/fuel-library+AND+status:merged+AND+label:Verified-1%252Cuser%253Dfuel-ci,n,z
>
> There are two common reasons I know why these patchsets exist:
>
> Case 1: "Who cares about CI anyway".
>
> Case 2: These patches can not pass CI because of some real reason,
> which makes Fuel CI result irrelevant.
>
> I am not sure, if i need to comment on the first one, but please just
> remember: CI is not a devops playground made to disrupt your otherwise
> clean and smooth development process. It is an extremely critical
> service, providing the clear reference point for all the work we do.
> And we all know how important the reference point is [1].
>
> So let's move on to the Case 2 and talk about our CI limitations and
> what could possibly make the test result irrelevant.
>
> 1) Dependencies.
>
> Let's say you have a chain of dependent patchsets and none of them
> could pass the CI on its own. How do you merge it?
>
> Here is the trick: the "leaf", i.e. last, topmost patch in the chain
> should pass the CI.
>
> The test we run for this patchset automatically pulls all dependencies
> involved. Which makes Fuel CI result for this patchset perfectly
> relevant for the whole chain.
>
> 2) Environment.
>
> Fuel CI test environment usually uses slightly outdated version of
> Fuel iso image and fuel-main code. Therefore it happens that you write
> and test your patch against latest code via custom iso builds and it
> works, but it can not pass CI. Does it make test results irrelevant?
> No. It makes them even more valuable.
>
> CI environment can be broken and can be outdated. This is the part of
> the process. To deal with these situations we first need to fix the
> environment, then run tests, and then merge the code.
>
> And it helps if you contact devops team in advance  and inform us that
> you soon will need the ISO with this particular features.
>
> 3) ?
>
> Please add your examples and let's deal with them one by one.
>
>
> Summary:
>
> I'd like to propose the following merge policy:
>
> 1. any individual patchset MUST have +1 from Fuel CI;
>
> 2. any chain of dependent patchsets MUST have +1 from Fuel CI for the
> topmost patch;
>
> 3. for all exceptional cases the person who does the merge MUST
> explicitly contact devops team, and make sure that there will be
> devops engineer available who will run additional checks before or
> right after the merge. The very same person who does the merge also
> MUST be available for some time after the merge to help the devops
> engineer to deal with the test failures if they appear.
>
>
>
> [1]
> http://www.youtube.com/watch?feature=player_embedded&v=QkCQ_-Id8zI#t=211
>
>
> --
> Aleksandra Fedorova
> Fuel Devops Engineer
> bookwar
>
> --
> You received this message because you are subscribed to the Google Groups
> "fuel-core-team" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to fuel-core-team+unsubscr...@mirantis.com.
> For more options, visit https://groups.google.com/a/mirantis.com/d/optout.
>



-- 
Vitaly Kramskikh,
Software Engineer,
Mirantis, Inc.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][nova] New specs on routed networking

2014-10-28 Thread A, Keshava
Hi Cory,

Yes, that is the basic question I have.

Is the OpenStack cloud ready to move away from a flat L2 network?

1. Will every packet take an L3 FIB lookup (a radix tree search) instead of the
current L2 hash/index lookup?
2. Will there be a hierarchical network? How many routes will be imported from
the external world?
3. Will there be a separate routing domain for the overlay network, or will it
be mixed with the external/underlay network?
4. What will be the basic use case for this? Is the idea to do L3 switching to
support a BGP/MPLS L3 VPN scenario right from the compute node?

Others can give their opinion also.

Thanks & Regards,
keshava

-Original Message-
From: Cory Benfield [mailto:cory.benfi...@metaswitch.com] 
Sent: Tuesday, October 28, 2014 10:35 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [neutron][nova] New specs on routed networking

On Tue, Oct 28, 2014 at 07:44:48, A, Keshava wrote:
> Hi,
> 
> Current OpenStack was built as a flat network.
> 
> With the introduction of the L3 lookup (by inserting the routing table
> in the forwarding path) and a separate 'VIF Route Type' interface:
> 
> At what point in the packet processing will the decision be made to
> look up the FIB? Will there be an additional FIB lookup for each
> packet?
> 
> What about the impact on 'inter-compute traffic' processed by DVR?
> Is the thinking here of the OpenStack cloud as a hierarchical network
> instead of a flat network?

Keshava,

It's difficult for me to answer in general terms: the proposed specs are 
general enough to allow multiple approaches to building purely-routed networks 
in OpenStack, and they may all have slightly different answers to some of these 
questions. I can, however, speak about how Project Calico intends to apply them.

For Project Calico, the FIB lookup is performed for every packet emitted by a 
VM and destined for a VM. Each compute host routes all the traffic to/from its 
guests. The DVR approach isn't necessary in this kind of network because it 
essentially already implements one: all packets are always routed, and no 
network node is ever required in the network.

The routed network approach doesn't add any hierarchical nature to an OpenStack 
cloud. The difference between the routed approach and the standard OVS approach 
is that packet processing happens entirely at layer 3. Put another way, in 
Project Calico-based networks a Neutron subnet no longer maps to a layer 2 
broadcast domain.
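
As a purely illustrative sketch (hypothetical names and addresses, not Project
Calico's actual agent code), the essence of this approach on a compute host is
to program a host route per guest and let the kernel FIB do the forwarding:

    # Conceptual sketch only: when a VM port is plugged, add a /32 route
    # pointing at its tap device so the kernel FIB, not an L2 bridge,
    # forwards traffic to the guest.
    import subprocess

    def plug_vm_route(vm_ip, tap_device):
        # Equivalent to: ip route add <vm_ip>/32 dev <tap_device>
        subprocess.check_call(
            ["ip", "route", "add", "%s/32" % vm_ip, "dev", tap_device])

    def unplug_vm_route(vm_ip, tap_device):
        subprocess.check_call(
            ["ip", "route", "del", "%s/32" % vm_ip, "dev", tap_device])

    # Hypothetical values for illustration:
    # plug_vm_route("10.65.0.5", "tapabc123")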

I hope that clarifies: please shout if you'd like more detail.

Cory

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat][ceilometer]: scale up/ down based on number of instances in a group

2014-10-28 Thread Daniel Comnea
Thanks all for reply.

I have spoken with Qiming and @Shardy (IRC nickname) and they confirmed this
is not possible as of today. Someone else - sorry, I forgot his nickname on
IRC - suggested writing a Ceilometer query to count the number of instances,
but what @ZhiQiang said is true, and this is what we have seen via the
instance sample.

@Clint - that is the case indeed.

@ZhiQiang - what do you mean by "count of resource should be queried
from specific service's API"? Is it related to Ceilometer's event types
configuration?

@Mike - my use case is very simple: I have a group of instances, and in
case the number of instances reaches the minimum number I set, I would like a
new instance to be spun up - think of a cluster where I want to maintain a
minimum number of members.

With regard to the proposal you made -
https://review.openstack.org/#/c/127884/ - that works, but only in a specific
use case, hence it is not generic, because the assumption is that my instances
are hooked behind LBaaS, which is not always the case.

Looking forward to seeing 'convergence' in action.


Cheers,
Dani

On Tue, Oct 28, 2014 at 3:06 AM, Mike Spreitzer  wrote:

> Daniel Comnea  wrote on 10/27/2014 07:16:32 AM:
>
> > Yes i did but if you look at this example
> >
> >
> https://github.com/openstack/heat-templates/blob/master/hot/autoscaling.yaml
> >
>
> > the flow is simple:
>
> > CPU alarm in Ceilometer triggers the "type: OS::Heat::ScalingPolicy"
> > which then triggers the "type: OS::Heat::AutoScalingGroup"
>
> Actually the ScalingPolicy does not "trigger" the ASG.  BTW,
> "ScalingPolicy" is mis-named; it is not a full policy, it is only an action
> (the condition part is missing --- as you noted, that is in the Ceilometer
> alarm).  The so-called ScalingPolicy does the action itself when
> triggered.  But it respects your configured min and max size.
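
As a rough illustration of that behaviour (a sketch only, not Heat's actual
code), the effective size after a scaling action can be thought of as a clamp
between the configured bounds:

    def apply_scaling_action(current_size, adjustment, min_size, max_size):
        # Sketch only: the action adjusts the size but never crosses the
        # configured min/max bounds.
        return max(min_size, min(max_size, current_size + adjustment))

    # With min_size=2, a -1 adjustment on a group of 2 leaves it at 2:
    assert apply_scaling_action(2, -1, min_size=2, max_size=10) == 2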
>
> What is your concern about your scaling group becoming smaller than your
> configured minimum?  Just checking here that there is not a
> misunderstanding.
>
> As Clint noted, there is a large-scale effort underway to make Heat
> maintain what it creates despite deletion of the underlying resources.
>
> There is also a small-scale effort underway to make ASGs recover from
> members stopping proper functioning for whatever reason.  See
> https://review.openstack.org/#/c/127884/ for a proposed interface and
> initial implementation.
>
> Regards,
> Mike
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [heat] Changing default replacement_policy for Neutron port?

2014-10-28 Thread Steven Hardy
Hi all,

So I've been investigating bug #1383709, which has caused me to run into a
bad update pattern involving OS::Neutron::Port

http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Neutron::Port

I'm not quite clear on the history, but for some reason, we have a
"replacement_policy" property, unlike all other resources, and it defaults
to replacing the resource every time you update, unless you pass "AUTO" to
the property.

I'm sure there's a good reason for this, but on the face of it, it seems to
be a very unsafe and inconvenient default when considering updates?

The problem (which may actually be the cause of bug #1383709) is that the UUID
changes, so you don't only replace the port, you replace it and everything
that references it, which makes the Port resource a landmine of
HARestarter-esque proportions ;)

Can anyone (and in particular stevebaker, who initially wrote the code) shed
any light on this?  Can we just flip the default to AUTO, as it seems to be
a more desirable default for nearly all users?

Thanks!

Steve

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [devstack] Enable LVM ephemeral storage for Nova

2014-10-28 Thread Dan Genin

On 10/28/2014 11:56 AM, Dean Troyer wrote:
On Tue, Oct 28, 2014 at 9:27 AM, Dan Genin > wrote:


So this brings us back to the original proposal of having separate
backing files for Cinder and Nova which Dean thought might take
too much space.


Between Cinder, Nova and Swift (and Ceph, etc) everybody wants some 
loopback disk images.  DevStack's Swift and Ceph configurations assume 
loopback devices and do no sharing.


Duncan, could you please elaborate on the pain a single volume
group is likely to cause for Cinder? Is it a show stopper?


Back in the day, DevStack was built to configure Cinder (and Nova 
Volume before that) to use a specific existing volume group 
(VOLUME_GROUP_NAME) or create a loopback file if necessary.  With the 
help of VOLUME_NAME_PREFIX and volume_name_template DevStack knew 
which logical volumes belong to Cinder and could Do The Right Thing.


With three loopback files being created, all wanting larger and larger 
defaults, adding a fourth becomes Just One More Thing.  If Nova's use 
of LVM is similar enough to Cinder's (uses deterministic naming for 
the LVs) I'm betting we could make it work.


dt
Nova's disk names are of the form _. So they are
deterministic but, unfortunately, not necessarily predictable. It sounds 
like Duncan is saying that Cinder needs a fixed prefix for testing its 
functionality. I will be honest: I am not optimistic about convincing 
Nova to change its disk naming scheme for the sake of LVM testing. Far 
more important changes have lingered for months and sometimes longer.


It sounds like you are concerned about two issues with regard to the 
separate volume groups approach: 1) potential loop device shortage and 
2) growing space demand. The second issue, it seems to me, will arise no 
matter which of the two solutions we choose. More space will be required 
for testing Nova's LVM functionality one way or another, although, using 
a shared volume group would permit a more efficient use of the available 
space. The first issue is, indeed, a direct consequence of the choice to 
use distinct volume groups. However, the number of available loop 
devices can be increased by passing the appropriate boot parameter to 
the kernel, which can be easy or hard depending on how the test VMs are 
spun up.


I am not saying that we should necessarily go the way of separate volume 
groups but, assuming for the moment that changing Nova's disk naming 
scheme is not an option, we need to figure out which will bring the least 
amount of pain: forcing Cinder tests to work around Nova volumes, or 
creating separate volume groups.


Let me know what you think.
Dan



--

Dean Troyer
dtro...@gmail.com 


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [devstack] Enable LVM ephemeral storage for Nova

2014-10-28 Thread Dan Genin

On 10/28/2014 12:47 PM, Duncan Thomas wrote:

Hi Dan

You're quite right, the nesting isn't as I thought it was, sorry to mislead you.

It isn't a show stopper, it just makes testing some proposed useful
functionality slightly harder. If nova were to namespace its volumes
(e.g. start all the volume names with nova-*) then that would allow
the problem to be easily worked around in the test, does that sound
reasonable?
Changing Nova disk names is a long shot. It's likely I will be doing 
something else by the time that gets merged :) So we are left with the 
two options of 1) using a shared volume group and, thus, complicating 
life for Cinder, or 2) using separate volume groups, potentially causing 
headaches for DevStack. I am trying to figure out which of these two is 
the lesser evil. It seems that Dean's concerns can be addressed, though 
he still has to weigh in on the proposed mitigation approaches. I have 
little understanding of what problems a shared Cinder-Nova volume group 
would cause for Cinder testing. How hard would it be to make the tests 
work with a shared volume group?


Dan

On 28 October 2014 14:27, Dan Genin  wrote:

Duncan, I don't think it's possible to have multiple volume groups using the
same physical volume[1]. In fact, counter-intuitively (at least to me) the
nesting actually goes the other way with multiple physical volumes
comprising a single volume group. The LVM naming scheme actually makes more
sense with this hierarchy.

So this brings us back to the original proposal of having separate backing
files for Cinder and Nova which Dean thought might take too much space.

Duncan, could you please elaborate on the pain a single volume group is
likely to cause for Cinder? Is it a show stopper?

Thank you,
Dan

1. https://wiki.archlinux.org/index.php/LVM#LVM_Building_Blocks


On 10/21/2014 03:10 PM, Duncan Thomas wrote:

Sharing the VG with Cinder is likely to cause some pain when testing proposed
features for Cinder reconciling its backend with the Cinder DB. Creating a second VG
sharing the same backend PV is easy and avoids all such problems.

Duncan Thomas

On Oct 21, 2014 4:07 PM, "Dan Genin"  wrote:

Hello,

I would like to add to DevStack the ability to stand up Nova with LVM
ephemeral
storage. Below is a draft of the blueprint describing the proposed
feature.

Suggestions on architecture, implementation and the blueprint in general
are very
welcome.

Best,
Dan


Enable LVM ephemeral storage for Nova


Currently DevStack supports only file based ephemeral storage for Nova,
e.g.,
raw and qcow2. This is an obstacle to Tempest testing of Nova with LVM
ephemeral
storage, which in the past has been inadvertently broken
(see for example, https://bugs.launchpad.net/nova/+bug/1373962), and to
Tempest
testing of new features based on LVM ephemeral storage, such as LVM
ephemeral
storage encryption.

To enable Nova to come up with LVM ephemeral storage it must be provided a
volume group. Based on an initial discussion with Dean Troyer, this is
best
achieved by creating a single volume group for all services that
potentially
need LVM storage; at the moment these are Nova and Cinder.

Implementation of this feature will:

  * move code in lib/cinder/cinder_backends/lvm to lib/lvm with appropriate
modifications

  * rename the Cinder volume group to something generic, e.g., devstack-vg

  * modify the Cinder initialization and cleanup code appropriately to use
the new volume group

  * initialize the volume group in stack.sh, shortly before services are
launched

  * cleanup the volume group in unstack.sh after the services have been
shutdown

The question of how large to make the common Nova-Cinder volume group in
order
to enable LVM ephemeral Tempest testing will have to be explored.
Although,
given the tiny instance disks used in Nova Tempest tests, the current
Cinder volume group size may already be adequate.

No new configuration options will be necessary, assuming the volume group
size
will not be made configurable.
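
For illustration, here is a rough Python sketch of the loopback-backed volume
group setup described above (DevStack itself does this in shell; the file
path, size and VG name below are placeholders):

    # Sketch: create a sparse backing file, attach a loop device, and build
    # a shared volume group on top of it (placeholder names and sizes).
    import subprocess

    BACKING_FILE = "/opt/stack/data/devstack-vg-backing-file"
    VG_NAME = "devstack-vg"
    SIZE = "10G"

    subprocess.check_call(["truncate", "-s", SIZE, BACKING_FILE])
    loop_dev = subprocess.check_output(
        ["losetup", "-f", "--show", BACKING_FILE]).decode().strip()
    subprocess.check_call(["pvcreate", loop_dev])
    subprocess.check_call(["vgcreate", VG_NAME, loop_dev])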


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev









___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] [nfv] VM-based VLAN trunking blueprints

2014-10-28 Thread Armando M.
Sorry for jumping into this thread late... there are lots of details to
process, and I needed time to digest!

Having said that, I'd like to recap before moving the discussion forward,
at the Summit and beyond.

As has been pointed out, there are a few efforts targeting this area; I
think it is sensible to adopt the latest spec system we have been using
to understand where we are, and I mean Gerrit and the spec submissions.

To this aim I see the following specs:

https://review.openstack.org/93613 - Service API for L2 bridging
tenants/provider networks
https://review.openstack.org/100278 - API Extension for l2-gateway
https://review.openstack.org/94612 - VLAN aware VMs
https://review.openstack.org/97714 - VLAN trunking networks for NFV

First of all: did I miss any? I am intentionally leaving out any vendor
specific blueprint for now.

When I look at these I clearly see that we jump all the way to
implementation details. From an architectural point of view, this clearly
does not make a lot of sense.

In order to ensure that everyone is on the same page, I would suggest
having a discussion where we focus on the following aspects:

- Identify the use cases: what are, in simple terms, the possible
interactions that an actor (i.e. the tenant or the admin) can have with the
system (an OpenStack deployment), when these NFV-enabling capabilities are
available? What are the observed outcomes once these interactions have
taken place?

- Management API: what abstractions do we expose to the tenant or admin
(do we augment the existing resources, or do we create new resources, or do
we do both)? This should obviously be driven by a set of use cases, and we
need to identify the minimum set of logical artifacts that would let us
meet the needs of the widest set of use cases.

- Core Neutron changes: what needs to happen to the core of Neutron, if
anything, so that we can implement this NFV-enabling constructs
successfully? Are there any changes to the core L2 API? Are there any
changes required to the core framework (scheduling, policy, notifications,
data model etc)?

- Add support to the existing plugin backends: the openvswitch reference
implementation is an obvious candidate, but other plugins may want to
leverage the newly defined capabilities too. Once the above mentioned
points have been fleshed out, it should be fairly straightforward to have
these efforts progress in autonomy.

IMO, until we can get a full understanding of the aspects above, I don't
believe the core team is in the best position to determine the best
approach forward; I think it's in everyone's interest to make sure that
something cohesive comes out of this; the worst possible outcome is no
progress at all or, even worse, some Frankenstein system that no one
really knows what it does or how it can be used.

I will go over the specs one more time in order to identify some answers to
my points above. I hope someone can help me through the process.


Many thanks,
Armando
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][CI] nova-networking or neutron netwokring for CI

2014-10-28 Thread Joe Gordon
On Tue, Oct 28, 2014 at 6:44 AM, Dan Smith  wrote:

> > Are current nova CI platforms configured with nova-networking or with
> > neutron networking? Or is networking in general not even a part of the
> > nova CI approach?
>
> I think we have several that only run on Neutron, so I think it's fine
> to just do that.
>

Agreed, neutron should be considered required for all of the reasons listed
above.


>
> --Dan
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [devstack] Enable LVM ephemeral storage for Nova

2014-10-28 Thread Duncan Thomas
Cinder volumes are always (unless you go change the default) in the
form: volume-, and since the string 'volume-' is never a valid
uuid, then I think we can work around nova volumes fine when we come
to write our tests.

Sorry for the repeated circling on this, but I think I'm now happy.

Thanks



On 28 October 2014 17:53, Dan Genin  wrote:
> On 10/28/2014 11:56 AM, Dean Troyer wrote:
>
> On Tue, Oct 28, 2014 at 9:27 AM, Dan Genin  wrote:
>>
>> So this brings us back to the original proposal of having separate backing
>> files for Cinder and Nova which Dean thought might take too much space.
>
>
> Between Cinder, Nova and Swift (and Ceph, etc) everybody wants some loopback
> disk images.  DevStack's Swift and Ceph configurations assume loopback
> devices and do no sharing.
>
>>
>> Duncan, could you please elaborate on the pain a single volume group is
>> likely to cause for Cinder? Is it a show stopper?
>
>
> Back in the day, DevStack was built to configure Cinder (and Nova Volume
> before that) to use a specific existing volume group (VOLUME_GROUP_NAME) or
> create a loopback file if necessary.  With the help of VOLUME_NAME_PREFIX
> and volume_name_template DevStack knew which logical volumes belong to
> Cinder and could Do The Right Thing.
>
> With three loopback files being created, all wanting larger and larger
> defaults, adding a fourth becomes Just One More Thing.  If Nova's use of LVM
> is similar enough to Cinder's (uses deterministic naming for the LVs) I'm
> betting we could make it work.
>
> dt
>
> Nova's disk names are of the form _. So
> deterministic but, unfortunately, not necessarily predictable. It sounds
> like Duncan is saying that Cinder needs a fixed prefix for testing its
> functionality. I will be honest, I am not optimistic about convincing Nova
> to change their disk naming scheme for the sake of LVM testing. Far more
> important changes have lingered for months and sometimes longer.
>
> It sounds like you are concerned about two issues with regard to the
> separate volume groups approach: 1) potential loop device shortage and 2)
> growing space demand. The second issue, it seems to me, will arise no matter
> which of the two solutions we choose. More space will be required for
> testing Nova's LVM functionality one way or another, although, using a
> shared volume group would permit a more efficient use of the available
> space. The first issue is, indeed, a direct consequence of the choice to use
> distinct volume groups. However, the number of available loop devices can be
> increased by passing the appropriate boot parameter to the kernel, which can
> be easy or hard depending on how the test VMs are spun up.
>
> I am not saying that we should necessarily go the way of separate volume
> groups but, assuming for the moment that changing Nova's disk naming scheme
> is not an option, we need to figure out what will bring the least amount of
> pain forcing Cinder tests to work around Nova volumes or create separate
> volume groups.
>
> Let me know what you think.
> Dan
>
>
> --
>
> Dean Troyer
> dtro...@gmail.com
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Duncan Thomas

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tuskar][tripleo] Tuskar/TripleO on Devstack

2014-10-28 Thread Ben Nemec
On 10/28/2014 06:18 AM, Steven Hardy wrote:
> On Tue, Oct 28, 2014 at 11:08:05PM +1300, Robert Collins wrote:
>> On 28 October 2014 22:51, Steven Hardy  wrote:
>>> On Tue, Oct 28, 2014 at 03:22:36PM +1300, Robert Collins wrote:
 So this should work and I think its generally good.

 But - I'm curious, you only need a single image for devtest to
 experiment with tuskar - the seed - which should be about the same
 speed (or faster, if you have hot caches) than devstack, and you'll
 get Ironic and nodes registered so that the panels have stuff to show.
>>>
>>> TBH it's not so much about speed (although, for me, devstack is faster as
>>> I've not yet mirrored all-the-things locally, I only have a squid cache),
>>> it's about establishing a productive test/debug/hack/re-test workflow.
>>
>> mm, squid-cache should still give pretty good results. If its not, bug
>> time :). That said..
>>
>>> I've been configuring devstack to create Ironic nodes FWIW, so that works
>>> OK too.
>>
>> Cool.
>>
>>> It's entirely possible I'm missing some key information on how to compose
>>> my images to be debug friendly, but here's my devtest frustration:
>>>
>>> 1. Run devtest to create seed + overcloud
>>
>> If you're in dev-of-a-component cycle, I wouldn't do that. I'd run
>> devtest_seed.sh only. The seed has everything on it, so the rest is
>> waste (unless you need all the overcloud bits - in which case I'd
>> still tune things - e.g. I'd degrade to single node, and I'd iterate
>> on devtest_overcloud.sh, *not* on the full plumbing each time).
> 
> Yup, I went round a few iterations of those, e.g running devtest_overcloud
> with -c so I could more quickly re-deploy, until I realized I could drive
> heat directly, so I started doing that :)
> 
> Most of my investigations atm are around investigating Heat issues, or
> testing new tripleo-heat-templates stuff, so I do need to spin up the
> overcloud (and update it, which is where the fun really began ref bug 
> #1383709 and #1384750 ...)
> 
>>> 2. Hit an issue, say a Heat bug (not that *those* ever happen! ;D)
>>> 3. Log onto seed VM to debug the issue.  Discover there are no logs.
>>
>> We should fix that - is there a bug open? Thats a fairly serious issue
>> for debugging a deployment.
> 
> I've not yet raised one, as I wasn't sure if it was either by design, or if
> I was missing some crucial element from my DiB config.
> 
> If you consider it a bug, I'll raise one and look into a fix.
> 
>>> 4. Restart the heat-engine logging somewhere
>>> 5. Realize heat-engine isn't quite latest master
>>> 6. Git pull heat, discover networking won't allow it
>>
>> Ugh. Thats horrid. Is it a fedora thing? My seed here can git pull
>> totally fine - I've depended heavily on that to debug various things
>> over time.
> 
> Not yet dug into it in a lot of detail tbh, my other VMs can access the
> internet fine so it may be something simple, I'll look into it.

Are you sure this is a networking thing?  When I try a git pull I get this:

[root@localhost heat]# git pull
fatal:
'/home/bnemec/.cache/image-create/source-repositories/heat_dc24d8f2ad92ef55b8479c7ef858dfeba8bf0c84'
does not appear to be a git repository
fatal: Could not read from remote repository.

That's actually because the git repo on the seed would have come from
the local cache during the image build.  We should probably reset the
remote to a sane value once we're done with the cache one.

Networking-wise, my Fedora seed can pull from git.o.o just fine though.
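
A rough sketch of the kind of cleanup step that could do that (the repo path
and canonical URL below are illustrative, not an existing diskimage-builder
element):

    # Hypothetical cleanup.d-style step: point the in-image repo back at its
    # canonical remote instead of the image-build cache path.
    import subprocess

    REPO_PATH = "/opt/stack/heat"                            # illustrative
    CANONICAL = "https://git.openstack.org/openstack/heat"   # illustrative

    subprocess.check_call(
        ["git", "-C", REPO_PATH, "remote", "set-url", "origin", CANONICAL])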

> 
>>> 7. scp latest master from my laptop->VM
>>> 8. setup.py install, discover the dependencies aren't all there
>>
>> This one might be docs: heat is installed in a venv -
>> /opt/stack/venvs/heat, so the deps be should in that, not in the
>> global site-packages.
> 
> Aha, I did think that may be the case, but I'd already skipped to step (9)
> by that point :D
> 
>>> 9. Give up and try to recreate issue on devstack
>>
>> :)
>>
>>> I'm aware there are probably solutions to all of these problems, but my
>>> point is basically that devstack on my laptop already solves all of them,
>>> so... maybe I can just use that?  That's my thinking, anyway.
>>
>> Sure - its fine to use devstack. In fact, we don't *want* devtest to
>> supplant devstack, they're solving different problems.
>>
>>> E.g here's my tried, tested and comfortable workflow:
>>>
>>> 1. Run stack.sh on my laptop
>>> 2. Do a heat stack-create
>>> 3. Hit a problem, look at screen logs
>>> 4. Fix problem, restart heat, re-test, git-review, done!
>>>
>>> I realize I'm swimming against the tide a bit here, so feel free to educate
>>> me if there's an easier way to reduce the developer friction that exists
>>> with devtest :)
>>
>> Quite possibly there isn't. Some of your issues are ones we should not
>> at all have, and I'd like to see those removed. But they are different
>> tools for different scenarios, so I'd expect some impedance mismatch
>> doing single-code-base-dev in a prod-deploy-cont

Re: [openstack-dev] [Fuel] fuel-library merge policy and Fuel CI

2014-10-28 Thread Aleksandra Fedorova
Vitaly,

though comments like this are definitely better than nothing, I think
we should address these issues in a more formal way.

For random failures we have to retrigger the build until it passes.
Yes, it could take some time (two or three rebuilds?), but it is the only
reliable way to show that it is indeed random and hasn't suddenly
become permanent. If it fails three times in a row, this issue is
probably bigger than you think. Should we really ignore/postpone it
then?

And if it really is a known issue, we need to fix or disable this
particular test. And I think that this fix should be merged into the
repo via the general workflow.

Not only does it make everything pass the CI properly, it also adds
the necessary step where you announce the issue publicly and it gets
approved as the "official" known issue. I would even add a certain
keyword to the commit message to mark these temporary fixes and
simplify tracking.



On Tue, Oct 28, 2014 at 8:19 PM, Vitaly Kramskikh
 wrote:
> Aleksandra,
>
> As you may know, there is a randomly failing nailgun unit test in fuel-web
> repo, which fails for the major part of review requests. It's been happening
> for a few days. But I need to merge some stuff and cannot wait for the fix
> of this well known issue. So for every request with -1 from Fuel CI I write
> an explanation why I decided to merge the request. Are you ok with this?
> Here is an example: https://review.openstack.org/#/c/131079/
>
> 2014-10-28 23:10 GMT+07:00 Aleksandra Fedorova :
>>
>> Hi everyone,
>>
>> with recent disruption in our CI process I'd like to discuss again the
>> issues in our merge workflow.
>>
>> See the summary at the end.
>>
>>
>> As a starting point, here is the list of patches which were merged
>> into fuel-library repository without "Verified +1" label from Fuel CI:
>>
>>
>> https://review.openstack.org/#/q/project:stackforge/fuel-library+AND+status:merged+AND+NOT+label:Verified%252B1%252Cuser%253Dfuel-ci,n,z
>>
>> And the list of merged patches with explicit "Verified -1" label:
>>
>>
>> https://review.openstack.org/#/q/project:stackforge/fuel-library+AND+status:merged+AND+label:Verified-1%252Cuser%253Dfuel-ci,n,z
>>
>> There are two common reasons I know why these patchsets exist:
>>
>> Case 1: "Who cares about CI anyway".
>>
>> Case 2: These patches can not pass CI because of some real reason,
>> which makes Fuel CI result irrelevant.
>>
>> I am not sure, if i need to comment on the first one, but please just
>> remember: CI is not a devops playground made to disrupt your otherwise
>> clean and smooth development process. It is an extremely critical
>> service, providing the clear reference point for all the work we do.
>> And we all know how important the reference point is [1].
>>
>> So let's move on to the Case 2 and talk about our CI limitations and
>> what could possibly make the test result irrelevant.
>>
>> 1) Dependencies.
>>
>> Let's say you have a chain of dependent patchsets and none of them
>> could pass the CI on its own. How do you merge it?
>>
>> Here is the trick: the "leaf", i.e. last, topmost patch in the chain
>> should pass the CI.
>>
>> The test we run for this patchset automatically pulls all dependencies
>> involved. Which makes Fuel CI result for this patchset perfectly
>> relevant for the whole chain.
>>
>> 2) Environment.
>>
>> Fuel CI test environment usually uses slightly outdated version of
>> Fuel iso image and fuel-main code. Therefore it happens that you write
>> and test your patch against latest code via custom iso builds and it
>> works, but it can not pass CI. Does it make test results irrelevant?
>> No. It makes them even more valuable.
>>
>> CI environment can be broken and can be outdated. This is the part of
>> the process. To deal with these situations we first need to fix the
>> environment, then run tests, and then merge the code.
>>
>> And it helps if you contact devops team in advance  and inform us that
>> you soon will need the ISO with this particular features.
>>
>> 3) ?
>>
>> Please add your examples and let's deal with them one by one.
>>
>>
>> Summary:
>>
>> I'd like to propose the following merge policy:
>>
>> 1. any individual patchset MUST have +1 from Fuel CI;
>>
>> 2. any chain of dependent patchsets MUST have +1 from Fuel CI for the
>> topmost patch;
>>
>> 3. for all exceptional cases the person who does the merge MUST
>> explicitly contact devops team, and make sure that there will be
>> devops engineer available who will run additional checks before or
>> right after the merge. The very same person who does the merge also
>> MUST be available for some time after the merge to help the devops
>> engineer to deal with the test failures if they appear.
>>
>>
>>
>> [1]
>> http://www.youtube.com/watch?feature=player_embedded&v=QkCQ_-Id8zI#t=211
>>
>>
>> --
>> Aleksandra Fedorova
>> Fuel Devops Engineer
>> bookwar
>>
>> --
>> You received this message because you are subscribed to the Google Groups
>> "fuel-core-team" grou

[openstack-dev] [Neutron][QoS] Pod time at Paris Summit

2014-10-28 Thread Collins, Sean
Hi,

Like Atlanta, I will be at the summit. If there is interest, I can
schedule a time to talk about the QoS API extension.

-- 
Sean M. Collins
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] How to connect to a serial port of an instance via websocket?

2014-10-28 Thread Solly Ross
You should be able to connect like a normal WebSocket, assuming you're running 
the serial console websocketproxy (it's a different command from the VNC web 
socket proxy).  If you want, you can ping me
on IRC and I can help you debug your JS code.

Best Regards,
Solly Ross

(directxman12 on freenode IRC)
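
For reference, a minimal Python sketch of connecting to the returned URL with
the third-party websocket-client package (the token URL is the example from
the original question below; depending on the proxy configuration you may
also need to request the 'binary' subprotocol):

    # Minimal sketch: pip install websocket-client
    import websocket

    url = "ws://127.0.0.1:6083/?token=e2b42240-375d-41fe-a166-367e4bbdce35"
    ws = websocket.create_connection(url, subprotocols=["binary"])
    try:
        ws.send("\r\n")   # nudge the guest console into printing a prompt
        print(ws.recv())  # read whatever the serial port sends back
    finally:
        ws.close()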

- Original Message -
> From: "Markus Zoeller" 
> To: openstack-dev@lists.openstack.org
> Sent: Tuesday, October 28, 2014 10:09:44 AM
> Subject: [openstack-dev] [nova] How to connect to a serial port of an 
> instance via websocket?
> 
> The API provides an endpoint for querying the serial console of an
> instance ('os-getSerialConsole'). The nova-client interacts with this
> API endpoint via the command `get-serial-console`.
> 
> nova get-serial-console myInstance
>  
> It returns a string like:
> 
> ws://127.0.0.1:6083/?token=e2b42240-375d-41fe-a166-367e4bbdce35
>  
> Q: How is one supposed to connect to such a websocket?
> 
> [1]
> https://github.com/openstack/nova/blob/master/nova/api/openstack/compute/contrib/consoles.py#L111
> [2]
> https://ask.openstack.org/en/question/50671/how-to-connect-to-a-serial-port-of-an-instance-via-websocket/
> 
> Regards,
> Markus Zoeller
> IRC: markus_z
> 
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tuskar][tripleo] Tuskar/TripleO on Devstack

2014-10-28 Thread Clint Byrum
Excerpts from Ben Nemec's message of 2014-10-28 11:13:22 -0700:
> On 10/28/2014 06:18 AM, Steven Hardy wrote:
> > On Tue, Oct 28, 2014 at 11:08:05PM +1300, Robert Collins wrote:
> >> On 28 October 2014 22:51, Steven Hardy  wrote:
> >>> On Tue, Oct 28, 2014 at 03:22:36PM +1300, Robert Collins wrote:
>  So this should work and I think its generally good.
> 
>  But - I'm curious, you only need a single image for devtest to
>  experiment with tuskar - the seed - which should be about the same
>  speed (or faster, if you have hot caches) than devstack, and you'll
>  get Ironic and nodes registered so that the panels have stuff to show.
> >>>
> >>> TBH it's not so much about speed (although, for me, devstack is faster as
> >>> I've not yet mirrored all-the-things locally, I only have a squid cache),
> >>> it's about establishing a productive test/debug/hack/re-test workflow.
> >>
> >> mm, squid-cache should still give pretty good results. If its not, bug
> >> time :). That said..
> >>
> >>> I've been configuring devstack to create Ironic nodes FWIW, so that works
> >>> OK too.
> >>
> >> Cool.
> >>
> >>> It's entirely possible I'm missing some key information on how to compose
> >>> my images to be debug friendly, but here's my devtest frustration:
> >>>
> >>> 1. Run devtest to create seed + overcloud
> >>
> >> If you're in dev-of-a-component cycle, I wouldn't do that. I'd run
> >> devtest_seed.sh only. The seed has everything on it, so the rest is
> >> waste (unless you need all the overcloud bits - in which case I'd
> >> still tune things - e.g. I'd degrade to single node, and I'd iterate
> >> on devtest_overcloud.sh, *not* on the full plumbing each time).
> > 
> > Yup, I went round a few iterations of those, e.g running devtest_overcloud
> > with -c so I could more quickly re-deploy, until I realized I could drive
> > heat directly, so I started doing that :)
> > 
> > Most of my investigations atm are around investigating Heat issues, or
> > testing new tripleo-heat-templates stuff, so I do need to spin up the
> > overcloud (and update it, which is where the fun really began ref bug 
> > #1383709 and #1384750 ...)
> > 
> >>> 2. Hit an issue, say a Heat bug (not that *those* ever happen! ;D)
> >>> 3. Log onto seed VM to debug the issue.  Discover there are no logs.
> >>
> >> We should fix that - is there a bug open? Thats a fairly serious issue
> >> for debugging a deployment.
> > 
> > I've not yet raised one, as I wasn't sure if it was either by design, or if
> > I was missing some crucial element from my DiB config.
> > 
> > If you consider it a bug, I'll raise one and look into a fix.
> > 
> >>> 4. Restart the heat-engine logging somewhere
> >>> 5. Realize heat-engine isn't quite latest master
> >>> 6. Git pull heat, discover networking won't allow it
> >>
> >> Ugh. Thats horrid. Is it a fedora thing? My seed here can git pull
> >> totally fine - I've depended heavily on that to debug various things
> >> over time.
> > 
> > Not yet dug into it in a lot of detail tbh, my other VMs can access the
> > internet fine so it may be something simple, I'll look into it.
> 
> Are you sure this is a networking thing?  When I try a git pull I get this:
> 
> [root@localhost heat]# git pull
> fatal:
> '/home/bnemec/.cache/image-create/source-repositories/heat_dc24d8f2ad92ef55b8479c7ef858dfeba8bf0c84'
> does not appear to be a git repository
> fatal: Could not read from remote repository.
> 
> That's actually because the git repo on the seed would have come from
> the local cache during the image build.  We should probably reset the
> remote to a sane value once we're done with the cache one.
> 
> Networking-wise, my Fedora seed can pull from git.o.o just fine though.
> 

I think we should actually just rip the git repos out of the images in
production installs. What good does it do sending many MB of copies of
the git repos around? Perhaps just record HEAD somewhere in a manifest
and rm -r the source repos during cleanup.d.

But, for supporting dev/test, we could definitely leave them there and
change the remotes back to their canonical (as far as diskimage-builder
knows) sources.
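
A rough sketch of that idea (the source directory and manifest path are
placeholders, not an existing element):

    # Hypothetical cleanup step: record each repo's HEAD in a manifest, then
    # drop the git history from the production image.
    import os
    import shutil
    import subprocess

    SOURCE_DIR = "/opt/stack"              # placeholder
    MANIFEST = "/etc/dib-source-manifest"  # placeholder

    with open(MANIFEST, "w") as manifest:
        for name in os.listdir(SOURCE_DIR):
            repo = os.path.join(SOURCE_DIR, name)
            if not os.path.isdir(os.path.join(repo, ".git")):
                continue
            head = subprocess.check_output(
                ["git", "-C", repo, "rev-parse", "HEAD"]).decode().strip()
            manifest.write("%s %s\n" % (name, head))
            shutil.rmtree(os.path.join(repo, ".git"))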

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [devstack] Enable LVM ephemeral storage for Nova

2014-10-28 Thread Duncan Thomas
On 28 October 2014 18:01, Dan Genin  wrote:

> Changing Nova disk names is a lng shot. It's likely I will be doing
> something else by the time that gets merged:) So we are left with the two
> options of 1) using a shared volume group and, thus, complicating life for
> Cinder or 2) using separate volume groups potentially causing headaches for
> DevStack. I am trying to figure out which of these two is the lesser evil.
> It seems that Dean's concerns can be addressed, though, he still has to
> weigh in on the proposed mitigation approaches. I have little understanding
> of what problems a shared Cinder-Nova volume group would cause for Cinder
> testing. How hard would it be to make the tests work with a shared volume
> group?

As I commented above, it looks like nova volumes always start with a
UUID. If this is true then we can just make the cinder tests slightly
more clever, since cinder volumes default to being called
'volume-', so they will never collide.

If somebody can confirm that nova volumes will always start with a
UUID then we can code around the shared volume group in the tests.
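
A quick sketch of what such a test helper might look like (the VG name is a
placeholder; this is not existing Cinder test code):

    # Sketch: list only Cinder-owned LVs in a VG shared with Nova by
    # filtering on the 'volume-' name prefix.
    import subprocess

    def cinder_lvs(vg_name="devstack-vg"):
        out = subprocess.check_output(
            ["lvs", "--noheadings", "-o", "lv_name", vg_name]).decode()
        names = [line.strip() for line in out.splitlines() if line.strip()]
        return [name for name in names if name.startswith("volume-")]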

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [devstack] Enable LVM ephemeral storage for Nova

2014-10-28 Thread Dan Genin

Great, thank you, Duncan. I will then proceed with the shared volume group.

Dan

On 10/28/2014 02:06 PM, Duncan Thomas wrote:

Cinder volumes are always (unless you go change the default) in the
form: volume-, and since the string 'volume-' is never a valid
uuid, then I think we can work around nova volumes fine when we come
to write our tests.

Sorry for the repeated circling on this, but I think I'm now happy.

Thanks



On 28 October 2014 17:53, Dan Genin  wrote:

On 10/28/2014 11:56 AM, Dean Troyer wrote:

On Tue, Oct 28, 2014 at 9:27 AM, Dan Genin  wrote:

So this brings us back to the original proposal of having separate backing
files for Cinder and Nova which Dean thought might take too much space.


Between Cinder, Nova and Swift (and Ceph, etc) everybody wants some loopback
disk images.  DevStack's Swift and Ceph configurations assume loopback
devices and do no sharing.


Duncan, could you please elaborate on the pain a single volume group is
likely to cause for Cinder? Is it a show stopper?


Back in the day, DevStack was built to configure Cinder (and Nova Volume
before that) to use a specific existing volume group (VOLUME_GROUP_NAME) or
create a loopback file if necessary.  With the help of VOLUME_NAME_PREFIX
and volume_name_template DevStack knew which logical volumes belong to
Cinder and could Do The Right Thing.

With three loopback files being created, all wanting larger and larger
defaults, adding a fourth becomes Just One More Thing.  If Nova's use of LVM
is similar enough to Cinder's (uses deterministic naming for the LVs) I'm
betting we could make it work.

dt

Nova's disk names are of the form _. So
deterministic but, unfortunately, not necessarily predictable. It sounds
like Duncan is saying that Cinder needs a fixed prefix for testing its
functionality. I will be honest, I am not optimistic about convincing Nova
to change their disk naming scheme for the sake of LVM testing. Far more
important changes have lingered for months and sometimes longer.

It sounds like you are concerned about two issues with regard to the
separate volume groups approach: 1) potential loop device shortage and 2)
growing space demand. The second issue, it seems to me, will arise no matter
which of the two solutions we choose. More space will be required for
testing Nova's LVM functionality one way or another, although, using a
shared volume group would permit a more efficient use of the available
space. The first issue is, indeed, a direct consequence of the choice to use
distinct volume groups. However, the number of available loop devices can be
increased by passing the appropriate boot parameter to the kernel, which can
be easy or hard depending on how the test VMs are spun up.

I am not saying that we should necessarily go the way of separate volume
groups but, assuming for the moment that changing Nova's disk naming scheme
is not an option, we need to figure out what will bring the least amount of
pain forcing Cinder tests to work around Nova volumes or create separate
volume groups.

Let me know what you think.
Dan


--

Dean Troyer
dtro...@gmail.com


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev









___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Keystone][Barbican] Removing custom openssl calls with certmonger

2014-10-28 Thread Adam Young
In certmonger 0.75.13 you can use local-getcert and it will have a
signing cert (a self-signed CA cert).  This allows us to replace the
keystone-manage ssl_setup and pki_setup with certmonger calls.


This should be the plan moving forward, with certmonger/Barbican
integration as a short-term target.


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [devstack] Enable LVM ephemeral storage for Nova

2014-10-28 Thread John Griffith
On Tue, Oct 28, 2014 at 12:37 PM, Dan Genin  wrote:

> Great, thank you, Duncan. I will then proceed with the shared volume group.
>
> Dan
>
>
> On 10/28/2014 02:06 PM, Duncan Thomas wrote:
>
>> Cinder volumes are always (unless you go change the default) in the
>> form: volume-, and since the string 'volume-' is never a valid
>> uuid, then I think we can work around nova volumes fine when we come
>> to write our tests.
>>
>> Sorry for the repeated circling on this, but I think I'm now happy.
>>
>> Thanks
>>
>>
>>
>> On 28 October 2014 17:53, Dan Genin  wrote:
>>
>>> On 10/28/2014 11:56 AM, Dean Troyer wrote:
>>>
>>> On Tue, Oct 28, 2014 at 9:27 AM, Dan Genin 
>>> wrote:
>>>
 So this brings us back to the original proposal of having separate
 backing
 files for Cinder and Nova which Dean thought might take too much space.

>>>
>>> Between Cinder, Nova and Swift (and Ceph, etc) everybody wants some
>>> loopback
>>> disk images.  DevStack's Swift and Ceph configurations assume loopback
>>> devices and do no sharing.
>>>
>>>  Duncan, could you please elaborate on the pain a single volume group is
 likely to cause for Cinder? Is it a show stopper?

>>>
>>> Back in the day, DevStack was built to configure Cinder (and Nova Volume
>>> before that) to use a specific existing volume group (VOLUME_GROUP_NAME)
>>> or
>>> create a loopback file if necessary.  With the help of VOLUME_NAME_PREFIX
>>> and volume_name_template DevStack knew which logical volumes belong to
>>> Cinder and could Do The Right Thing.
>>>
>>> With three loopback files being created, all wanting larger and larger
>>> defaults, adding a fourth becomes Just One More Thing.  If Nova's use of
>>> LVM
>>> is similar enough to Cinder's (uses deterministic naming for the LVs) I'm
>>> betting we could make it work.
>>>
>>> dt
>>>
>>> Nova's disk names are of the form _. So
>>> deterministic but, unfortunately, not necessarily predictable. It sounds
>>> like Duncan is saying that Cinder needs a fixed prefix for testing its
>>> functionality. I will be honest, I am not optimistic about convincing
>>> Nova
>>> to change their disk naming scheme for the sake of LVM testing. Far more
>>> important changes have lingered for months and sometimes longer.
>>>
>>> It sounds like you are concerned about two issues with regard to the
>>> separate volume groups approach: 1) potential loop device shortage and 2)
>>> growing space demand. The second issue, it seems to me, will arise no
>>> matter
>>> which of the two solutions we choose. More space will be required for
>>> testing Nova's LVM functionality one way or another, although, using a
>>> shared volume group would permit a more efficient use of the available
>>> space. The first issue is, indeed, a direct consequence of the choice to
>>> use
>>> distinct volume groups. However, the number of available loop devices
>>> can be
>>> increased by passing the appropriate boot parameter to the kernel, which
>>> can
>>> be easy or hard depending on how the test VMs are spun up.
>>>
>>> I am not saying that we should necessarily go the way of separate volume
>>> groups but, assuming for the moment that changing Nova's disk naming
>>> scheme
>>> is not an option, we need to figure out what will bring the least amount
>>> of
>>> pain forcing Cinder tests to work around Nova volumes or create separate
>>> volume groups.
>>>
>>> Let me know what you think.
>>> Dan
>>>
>>>
>>> --
>>>
>>> Dean Troyer
>>> dtro...@gmail.com
>>>
>>>
>>> ___
>>> OpenStack-dev mailing list
>>> OpenStack-dev@lists.openstack.org
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>>
>>>
>>> ___
>>> OpenStack-dev mailing list
>>> OpenStack-dev@lists.openstack.org
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>>
>>
>>
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
I noticed my response never posted - I think there's something up with my
mail client - so if you get this a few more times, forgive me :)

But:

The idea of sharing a VG between Nova and Cinder is only relevant in an
all-in-one deployment anyway; it's a specific edge case for testing.  It
certainly (IMHO) does not warrant any changes in Nova and Cinder.  Also keep
in mind that at some point (I think we're already there) we need to consider
whether our default gating and setup can continue to be done on a single node
anyway.

The answer to this seems relatively simple to me: as Dean pointed out, just
add a loopback device specifically for Nova LVM testing and move on.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tuskar][tripleo] Tuskar/TripleO on Devstack

2014-10-28 Thread Fox, Kevin M


From: Clint Byrum [cl...@fewbar.com]
Sent: Tuesday, October 28, 2014 11:34 AM
To: openstack-dev
Subject: Re: [openstack-dev] [tuskar][tripleo] Tuskar/TripleO on Devstack

*SNIP*

> I think we should actually just rip the git repos out of the images in
> production installs. What good does it do sending many MB of copies of
> the git repos around? Perhaps just record HEAD somewhere in a manifest
> and rm -r the source repos during cleanup.d.
>
> But, for supporting dev/test, we could definitely leave them there and
> change the remotes back to their canonical (as far as diskimage-builder
> knows) sources.

You could also set git to pull only the latest revision to save a bunch of 
space but still allow updating easily.
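
For example (a sketch; the repo URL and path are illustrative), a shallow
clone keeps only the latest revision and can still be updated later:

    # Sketch: clone with history depth 1, then pull only the newest revision.
    import subprocess

    subprocess.check_call(
        ["git", "clone", "--depth", "1",
         "https://git.openstack.org/openstack/heat", "/opt/stack/heat"])
    subprocess.check_call(
        ["git", "-C", "/opt/stack/heat", "pull", "--depth", "1"])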

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Fuel standards

2014-10-28 Thread Meg McRoberts
Could we specify that all Fuel configuration files should include all
allowable parameters?  The optional parameters can be commented out, but being
able to uncomment and populate a parameter is a lot easier than having to find
the exact name and order.

For bonus points, we could include commentary about when and how to activate
these optional parameters, but we could also cover this in the documentation
for each configuration file.

meg

On Tue, Oct 28, 2014 at 1:08 AM, Dmitriy Shulyak 
wrote:

>
>
>> Let's do the same for Fuel. Frankly, I'd say we could take OpenStack
>> standards as is and use them for Fuel. But maybe there are other opinions.
>> Let's discuss this and decide what to do. Do we actually need those
>> standards at all?
>>
> Agree that we can take OpenStack standards as an example, but let's not
> simply copy them and just live with it.
>
>>
>> 0) Standard for projects naming.
>> Currently most of Fuel projects are named like fuel-whatever or even
>> whatever? Is it ok? Or maybe we need some formal rules for naming. For
>> example, all OpenStack clients are named python-someclient. Do we need to
>> rename fuelclient into python-fuelclient?
>>
> I don't like that "fuel" is added to every project that we start; correct
> me if I am wrong, but:
> - shotgun can be a self-contained project and still provide certain value;
> actually I think it can be used by Jenkins in our and OpenStack gates
>   to copy logs and other info
> - same for the network verification tool
> - fuel_agent (image-based provisioning) can work without all other Fuel
> parts
>
>>
>> 1) Standard for an architecture.
>> Most of OpenStack services are split into several independent parts
>> (raughly service-api, serivce-engine, python-serivceclient) and those parts
>> interact with each other via REST and AMQP. python-serivceclient is usually
>> located in a separate repository. Do we actually need to do the same for
>> Fuel? According to fuelclient it means it should be moved into a separate
>> repository. Fortunately, it already uses REST API for interacting with
>> nailgun. But it should be possible to use it not only as a CLI tool, but
>> also as a library.
>>
>> 2) Standard for project directory structure (directory names for api, db
>> models,  drivers, cli related code, plugins, common code, etc.)
>> Do we actually need to standardize a directory structure?
>>
> Well, we need some project, agree on that project's structure, and then just
> provide it as an example during review.
> We can choose:
> - fuelclient as the CLI example (but first refactor it)
> - fuel-stats as the web app example
>
>> 3) Standard for third party libraries
>> As far as Fuel is a deployment tool for OpenStack, let's make a decision
>> about using OpenStack components wherever it is possible.
>> 3.1) oslo.config for configuring.
>> 3.2) oslo.db for database layer
>> 3.3) oslo.messaging for AMQP layer
>> 3.4) cliff for CLI (should we refactor fuelclient so as to make it based on
>> cliff?)
>> 3.5) oslo.log for logging
>> 3.6) stevedore for plugins
>> etc.
>> What about third party components which are not OpenStack related? What
>> could be the requirements for an arbitrary PyPi package?
>>
> In my opinion we should not pick some library just because it is used in
> OpenStack; there should be some research and analysis.
> For example, for a CLI application there are several popular alternatives to
> cliff in the Python community:
> - https://github.com/docopt/docopt
> - https://github.com/mitsuhiko/click
> I personally would prefer to use docopt, but click looks good as well.
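
As a purely hypothetical sketch (not actual fuelclient code) of what a
docopt-based command could look like:

    """Hypothetical node-listing command, sketched with docopt.

    Usage:
      fuel-nodes list [--env=<id>]
      fuel-nodes (-h | --help)

    Options:
      --env=<id>   Only show nodes assigned to this environment.
      -h --help    Show this help.
    """
    from docopt import docopt

    if __name__ == "__main__":
        args = docopt(__doc__)
        print(args)   # e.g. {'list': True, '--env': '42', '--help': False}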
> Web frameworks are a whole different story; in the Python community we have
> the mature Flask and Pyramid,
> and I don't see any benefit in using Pecan.
>
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tuskar][tripleo] Tuskar/TripleO on Devstack

2014-10-28 Thread Ben Nemec
On 10/28/2014 01:34 PM, Clint Byrum wrote:
> Excerpts from Ben Nemec's message of 2014-10-28 11:13:22 -0700:
>> On 10/28/2014 06:18 AM, Steven Hardy wrote:
>>> On Tue, Oct 28, 2014 at 11:08:05PM +1300, Robert Collins wrote:
 On 28 October 2014 22:51, Steven Hardy  wrote:
> On Tue, Oct 28, 2014 at 03:22:36PM +1300, Robert Collins wrote:
>> So this should work and I think its generally good.
>>
>> But - I'm curious, you only need a single image for devtest to
>> experiment with tuskar - the seed - which should be about the same
>> speed (or faster, if you have hot caches) than devstack, and you'll
>> get Ironic and nodes registered so that the panels have stuff to show.
>
> TBH it's not so much about speed (although, for me, devstack is faster as
> I've not yet mirrored all-the-things locally, I only have a squid cache),
> it's about establishing a productive test/debug/hack/re-test workflow.

 mm, squid-cache should still give pretty good results. If its not, bug
 time :). That said..

> I've been configuring devstack to create Ironic nodes FWIW, so that works
> OK too.

 Cool.

> It's entirely possible I'm missing some key information on how to compose
> my images to be debug friendly, but here's my devtest frustration:
>
> 1. Run devtest to create seed + overcloud

 If you're in dev-of-a-component cycle, I wouldn't do that. I'd run
 devtest_seed.sh only. The seed has everything on it, so the rest is
 waste (unless you need all the overcloud bits - in which case I'd
 still tune things - e.g. I'd degrade to single node, and I'd iterate
 on devtest_overcloud.sh, *not* on the full plumbing each time).
>>>
>>> Yup, I went round a few iterations of those, e.g running devtest_overcloud
>>> with -c so I could more quickly re-deploy, until I realized I could drive
>>> heat directly, so I started doing that :)
>>>
>>> Most of my investigations atm are around investigating Heat issues, or
>>> testing new tripleo-heat-templates stuff, so I do need to spin up the
>>> overcloud (and update it, which is where the fun really began ref bug 
>>> #1383709 and #1384750 ...)
>>>
> 2. Hit an issue, say a Heat bug (not that *those* ever happen! ;D)
> 3. Log onto seed VM to debug the issue.  Discover there are no logs.

 We should fix that - is there a bug open? Thats a fairly serious issue
 for debugging a deployment.
>>>
>>> I've not yet raised one, as I wasn't sure if it was either by design, or if
>>> I was missing some crucial element from my DiB config.
>>>
>>> If you consider it a bug, I'll raise one and look into a fix.
>>>
> 4. Restart the heat-engine logging somewhere
> 5. Realize heat-engine isn't quite latest master
> 6. Git pull heat, discover networking won't allow it

 Ugh. Thats horrid. Is it a fedora thing? My seed here can git pull
 totally fine - I've depended heavily on that to debug various things
 over time.
>>>
>>> Not yet dug into it in a lot of detail tbh, my other VMs can access the
>>> internet fine so it may be something simple, I'll look into it.
>>
>> Are you sure this is a networking thing?  When I try a git pull I get this:
>>
>> [root@localhost heat]# git pull
>> fatal:
>> '/home/bnemec/.cache/image-create/source-repositories/heat_dc24d8f2ad92ef55b8479c7ef858dfeba8bf0c84'
>> does not appear to be a git repository
>> fatal: Could not read from remote repository.
>>
>> That's actually because the git repo on the seed would have come from
>> the local cache during the image build.  We should probably reset the
>> remote to a sane value once we're done with the cache one.
>>
>> Networking-wise, my Fedora seed can pull from git.o.o just fine though.
>>
> 
> I think we should actually just rip the git repos out of the images in
> production installs. What good does it do sending many MB of copies of
> the git repos around? Perhaps just record HEAD somewhere in a manifest
> and rm -r the source repos during cleanup.d.

I actually thought we were removing git repos, but evidently not.

> 
> But, for supporting dev/test, we could definitely leave them there and
> change the remotes back to their canonical (as far as diskimage-builder
> knows) sources.

I wonder if it would make sense to pip install -e.  Then the copy of the
application in the venvs is simply a pointer to the actual git repo.
This would also make it easier to make changes to the running code -
instead of having to make a change, reinstall, and restart services you
could just make the change and restart like in Devstack.

I guess I don't know if that has any negative impacts for production use
though.

> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

Re: [openstack-dev] [neutron][nova] New specs on routed networking

2014-10-28 Thread Kevin Benton
>1. Every packet L3 FIB Lookup : Radix Tree Search, instead of current L2
Hash/Index Lookup ?
>2. Will there be Hierarchical network ?  How much of the Routes will
be imported from external world ?
>3. Will there be  Separate routing domain for overlay network  ? Or it
will be mixed with external/underlay network ?

These are all implementation specific details. Different deployments and
network backends can implement them however they want. What we need to
discuss now is how this model will look to the end-user and API.

>4. What will be the basic use case of this ? Thinking of L3 switching to
support BGP-MPLS L3 VPN Scenario right from compute node ?

I think the simplest use case is just that a provider doesn't want to deal
with extending L2 domains all over their datacenter.

On Tue, Oct 28, 2014 at 12:39 PM, A, Keshava  wrote:

> Hi Cory,
>
> Yes that is the basic question I have.
>
> OpenStack cloud  is ready to move away from Flat L2 network ?
>
> 1. Every packet L3 FIB Lookup : Radix Tree Search, instead of current L2
> Hash/Index Lookup ?
> 2. Will there be Hierarchical network ?  How much of the Routes will
> be imported from external world ?
> 3. Will there be  Separate routing domain for overlay network  ? Or it
> will be mixed with external/underlay network ?
> 4. What will be the basic use case of this ? Thinking of L3 switching to
> support BGP-MPLS L3 VPN Scenario right from compute node ?
>
> Others can give their opinion also.
>
> Thanks & Regards,
> keshava
>
> -Original Message-
> From: Cory Benfield [mailto:cory.benfi...@metaswitch.com]
> Sent: Tuesday, October 28, 2014 10:35 PM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [neutron][nova] New specs on routed networking
>
> On Tue, Oct 28, 2014 at 07:44:48, A, Keshava wrote:
> > Hi,
> >
> > Current Open-stack was built as flat network.
> >
> > With the introduction of the L3 lookup (by inserting the routing table
> > in forwarding path) and separate 'VIF Route Type' interface:
> >
> > At what point of time in the packet processing  decision will be made
> > to lookup FIB  during ? For each packet there will additional  FIB
> > lookup ?
> >
> > How about the  impact on  'inter compute traffic', processed by  DVR  ?
> > Here thinking  OpenStack cloud as hierarchical network instead of Flat
> > network ?
>
> Keshava,
>
> It's difficult for me to answer in general terms: the proposed specs are
> general enough to allow multiple approaches to building purely-routed
> networks in OpenStack, and they may all have slightly different answers to
> some of these questions. I can, however, speak about how Project Calico
> intends to apply them.
>
> For Project Calico, the FIB lookup is performed for every packet emitted
> by a VM and destined for a VM. Each compute host routes all the traffic
> to/from its guests. The DVR approach isn't necessary in this kind of
> network because it essentially already implements one: all packets are
> always routed, and no network node is ever required in the network.
>
> The routed network approach doesn't add any hierarchical nature to an
> OpenStack cloud. The difference between the routed approach and the
> standard OVS approach is that packet processing happens entirely at layer
> 3. Put another way, in Project Calico-based networks a Neutron subnet no
> longer maps to a layer 2 broadcast domain.
>
> I hope that clarifies: please shout if you'd like more detail.
>
> Cory
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Kevin Benton
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [all] Bringing some DevOps love to Openstack

2014-10-28 Thread Philip Cheong
Hi all,

In preparation of the OpenStack Summit in Paris next week, I'm hoping to
speak to some people in the OpenStack foundation about the benefits of a
partnership with Hashicorp, who make fantastic tools like Vagrant and
Packer (and others).

As a n00b aspiring to become an OpenStack contributor, the variety of
Vagrant devstack environments is pretty overwhelming. It appears to me that
it really depends on which project you are contributing to, which determines
which devstack you should use. The ones I have tried take a long time (45
mins+) to provision from scratch.

One aspect which I am acutely aware of is developer productivity, and 45
minutes is a lot of time. Packer was designed to help alleviate this kind of
bottleneck, and Vagrantcloud has inbuilt support for versioning Vagrant boxes.
It would be a pretty straightforward exercise to use Packer to do a daily
(or however often) build of a devstack box and upload it to Vagrantcloud
for developers to download. With a decent internet connection, that time
would be significantly less than 45 minutes.

I would really like to think that this community should also be able to
come to a consensus over what to include in a "standard" devstack. The fact
that there currently seem to be many different flavours cannot help with the
fragmentation between the many moving parts needed to build an
OpenStack environment.

Another big issue that I hope to address with the foundation, is the
integration of Hashicorp's tools with OpenStack.

The various Vagrant plugins that add OpenStack as a provider are a mess. There
is one specific to Rackspace, which has a different Keystone API, and at
least 3 others for vanilla OpenStack:
https://github.com/mitchellh/vagrant-rackspace
https://github.com/ggiamarchi/vagrant-openstack-provider
https://github.com/cloudbau/vagrant-openstack-plugin
https://github.com/FlaPer87/vagrant-openstack

One example of the significance of not having an "official" provider: when
you use Packer to build an image in OpenStack and try to post-process
it into a Vagrant box, it bombs with this error:

==> openstack: Running post-processor: vagrant
Build 'openstack' errored: 1 error(s) occurred:

* Post-processor failed: Unknown artifact type, can't build box:
mitchellh.openstack


This happens because Packer doesn't know what Vagrant expects the provider to
be, as explained here.

In my opinion this is a pretty big issue holding back the wider acceptance of
OpenStack. When I am at a customer and introduce them to tools like Vagrant
and Packer and how well they work with AWS, I still avoid the conversation
about OpenStack when I would really love to put them on our (Elastx's)
public cloud.

What say you? Could I get a +1 from those who see this as a worthwhile
issue?

Cheers,

Phil.
-- 
*Philip Cheong*
*Elastx *| Public and Private PaaS
email: philip.che...@elastx.se
office: +46 8 557 728 10
mobile: +46 702 870 814
twitter: @Elastx 
http://elastx.se
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat][nova] Changing default replacement_policy for Neutron port?

2014-10-28 Thread Steve Baker

On 29/10/14 06:51, Steven Hardy wrote:

Hi all,

So I've been investigating bug #1383709, which has caused me to run into a
bad update pattern involving OS::Neutron::Port

http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Neutron::Port

I'm not quite clear on the history, but for some reason, we have a
"replacement_policy" property, unlike all other resources, and it defaults
to replacing the resource every time you update, unless you pass "AUTO" to
the property.

I'm sure there's a good reason for this, but on the face of it, it seems to
be a very unsafe and inconvenient default when considering updates?

The problem (which may actually be the cause of bug #1383709) is that the UUID
changes, so you don't only replace the port, you replace it and everything
that references it, which makes the Port resource a landmine of
HARestarter-esque proportions ;)

Can anyone (and in particular stevebaker, who initially wrote the code) shed
any light on this?  Can we just flip the default to AUTO, as it seems to be
a more desirable default for nearly all users?

Thanks!



The commit does a reasonable job of explaining the whole sorry situation

https://review.openstack.org/#/c/121693/

This was an attempt to improve port modelling enough for Juno while nova 
bug #1158684 [1] remains unfixed.


If we defaulted to replacement_policy:AUTO then we have the 2 issues 
when a server is replaced on stack update [3][1]


If we keep the current default then we have the symptoms of bug #1383709.

Both options suck and there is no way of always doing the right thing, 
which is why replacement_policy exists - to push this decision to the 
template author.


I've come to the conclusion that ports shouldn't be modelled as 
resources at all; they sometimes represent exclusive resources (fixed 
IPs) and their dependencies with servers sometimes goes both ways. To 
fix this properly I've written a Kilo spec for blueprint 
rich-network-prop [2]


Before we switch the default to AUTO maybe we could investigate getting 
REPLACE_ALWAYS to interact better with ResourceGroup (or the tripleo 
templates which use it)


[1] https://bugs.launchpad.net/nova/+bug/1158684
[2] https://review.openstack.org/#/c/130093/
[3] https://bugs.launchpad.net/heat/+bug/1301486
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat][nova] Changing default replacement_policy for Neutron port?

2014-10-28 Thread Steve Baker

On 29/10/14 09:28, Steve Baker wrote:

On 29/10/14 06:51, Steven Hardy wrote:

Hi all,

So I've been investigating bug #1383709, which has caused me to run into a
bad update pattern involving OS::Neutron::Port

http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Neutron::Port

I'm not quite clear on the history, but for some reason, we have a
"replacement_policy" property, unlike all other resources, and it defaults
to replacing the resource every time you update, unless you pass "AUTO" to
the property.

I'm sure there's a good reason for this, but on the face of it, it seems to
be a very unsafe and inconvenient default when considering updates?

The problem (which may actually be the cause of bug #1383709) is that the UUID
changes, so you don't only replace the port, you replace it and everything
that references it, which makes the Port resource a landmine of
HARestarter-esque proportions ;)

Can anyone (and in particular stevebaker, who initially wrote the code) shed
any light on this?  Can we just flip the default to AUTO, as it seems to be
a more desirable default for nearly all users?

Thanks!



The commit does a reasonable job of explaining the whole sorry situation

https://review.openstack.org/#/c/121693/

This was an attempt to improve port modelling enough for Juno while 
nova bug #1158684 [1] remains unfixed.


If we defaulted to replacement_policy:AUTO then we have the 2 issues 
when a server is replaced on stack update [3][1]


If we keep the current default then we have the symptoms of bug #1383709.

Both options suck and there is no way of always doing the right thing, 
which is why replacement_policy exists - to push this decision to the 
template author.


I've come to the conclusion that ports shouldn't be modelled as 
resources at all; they sometimes represent exclusive resources (fixed 
IPs) and their dependencies with servers sometimes goes both ways. To 
fix this properly I've written a Kilo spec for blueprint 
rich-network-prop [2]


Before we switch the default to AUTO maybe we could investigate 
getting REPLACE_ALWAYS to interact better with ResourceGroup (or the 
tripleo templates which use it)




I've looked at the tripleo templates now, and they create ports which 
are resources in their own right, so switching to 
replacement_policy:AUTO is entirely appropriate.  However in most 
templates the vast majority of port resources are just to define a 
simple server/port/floating-IP combo. Therefore I think there is a good 
argument for the default REPLACE_ALWAYS causing the least problems for 
the majority of cases.



[1] https://bugs.launchpad.net/nova/+bug/1158684
[2] https://review.openstack.org/#/c/130093/
[3] https://bugs.launchpad.net/heat/+bug/1301486




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][nova] New specs on routed networking

2014-10-28 Thread Rohit Agarwalla (roagarwa)
There isn't a mechanism for us to get a BoF scheduled in advance. So,
let's gather at the Neutron contributors meetup on Friday.
Hopefully, some of us will have already met each other at the Neutron
design sessions before Friday, and we can figure out a good time slot that
works for everyone interested.

Thanks
Rohit

On 10/27/14 2:20 AM, "Cory Benfield"  wrote:

>On Sun, Oct 26, 2014 at 19:05:43, Rohit Agarwalla (roagarwa) wrote:
>> Hi
>> 
>> I'm interested as well in this model. Curious to understand the routing
>> filters and their implementation that will enable isolation between
>> tenant networks.
>> Also, having a BoF session on "Virtual Networking using L3" may be
>> useful to get all interested folks together at the Summit.
>
>A BoF sounds great. I've also proposed a lightning talk for the summit.
>
>Cory
>
>___
>OpenStack-dev mailing list
>OpenStack-dev@lists.openstack.org
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][nova] New specs on routed networking

2014-10-28 Thread Rohit Agarwalla (roagarwa)
Agreed. The way I'm thinking about this is that tenants shouldn't care what the 
underlying implementation is - L2 or L3. As long as the connectivity 
requirements are met using the model/API, end users should be fine.
The data center network design should be an administrator's decision based on 
the implementation mechanism that has been configured for OpenStack.

Thanks
Rohit

From: Kevin Benton <blak...@gmail.com>
Reply-To: "OpenStack Development Mailing List (not for usage questions)"
<openstack-dev@lists.openstack.org>
Date: Tuesday, October 28, 2014 1:01 PM
To: "OpenStack Development Mailing List (not for usage questions)"
<openstack-dev@lists.openstack.org>
Subject: Re: [openstack-dev] [neutron][nova] New specs on routed networking

>1. Every packet L3 FIB Lookup : Radix Tree Search, instead of current L2 
>Hash/Index Lookup ?
>2. Will there be Hierarchical network ?  How much of the Routes will be 
>imported from external world ?
>3. Will there be  Separate routing domain for overlay network  ? Or it will be 
>mixed with external/underlay network ?

These are all implementation specific details. Different deployments and 
network backends can implement them however they want. What we need to discuss 
now is how this model will look to the end-user and API.

>4. What will be the basic use case of this ? Thinking of L3 switching to 
>support BGP-MPLS L3 VPN Scenario right from compute node ?

I think the simplest use case is just that a provider doesn't want to deal with 
extending L2 domains all over their datacenter.

On Tue, Oct 28, 2014 at 12:39 PM, A, Keshava
<keshav...@hp.com> wrote:
Hi Cory,

Yes that is the basic question I have.

OpenStack cloud  is ready to move away from Flat L2 network ?

1. Every packet L3 FIB Lookup : Radix Tree Search, instead of current L2 
Hash/Index Lookup ?
2. Will there be Hierarchical network ?  How much of the Routes will be 
imported from external world ?
3. Will there be  Separate routing domain for overlay network  ? Or it will be 
mixed with external/underlay network ?
4. What will be the basic use case of this ? Thinking of L3 switching to 
support BGP-MPLS L3 VPN Scenario right from compute node ?

Others can give their opinion also.

Thanks & Regards,
keshava

-Original Message-
From: Cory Benfield 
[mailto:cory.benfi...@metaswitch.com]
Sent: Tuesday, October 28, 2014 10:35 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [neutron][nova] New specs on routed networking

On Tue, Oct 28, 2014 at 07:44:48, A, Keshava wrote:
> Hi,
>
> Current Open-stack was built as flat network.
>
> With the introduction of the L3 lookup (by inserting the routing table
> in forwarding path) and separate 'VIF Route Type' interface:
>
> At what point of time in the packet processing  decision will be made
> to lookup FIB  during ? For each packet there will additional  FIB
> lookup ?
>
> How about the  impact on  'inter compute traffic', processed by  DVR  ?
> Here thinking  OpenStack cloud as hierarchical network instead of Flat
> network ?

Keshava,

It's difficult for me to answer in general terms: the proposed specs are 
general enough to allow multiple approaches to building purely-routed networks 
in OpenStack, and they may all have slightly different answers to some of these 
questions. I can, however, speak about how Project Calico intends to apply them.

For Project Calico, the FIB lookup is performed for every packet emitted by a 
VM and destined for a VM. Each compute host routes all the traffic to/from its 
guests. The DVR approach isn't necessary in this kind of network because it 
essentially already implements one: all packets are always routed, and no 
network node is ever required in the network.

The routed network approach doesn't add any hierarchical nature to an OpenStack 
cloud. The difference between the routed approach and the standard OVS approach 
is that packet processing happens entirely at layer 3. Put another way, in 
Project Calico-based networks a Neutron subnet no longer maps to a layer 2 
broadcast domain.

I hope that clarifies: please shout if you'd like more detail.

Cory

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



--
Kevin Benton
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] [Summit] proposed item for the crossproject and/ or Nova meetings in the Design summit

2014-10-28 Thread Jay Pipes

On 10/23/2014 07:57 PM, Elzur, Uri wrote:

We’d like to bring it up in the coming design summit. Where do you think
it needs to be discussed: the cross-project track? A scheduler discussion? Other?

I’ve just added a proposed item 17.1 to the
https://etherpad.openstack.org/p/kilo-crossproject-summit-topics

“present Application’s Network and Storage requirements, coupled with
infrastructure capabilities and status (e.g. up/dn, utilization levels)
and placement policy (e.g. proximity, HA) to get optimized placement
decisions accounting for all application elements (VMs, virt Network
appliances, Storage) vs. Compute only”


Hi again, Uri,

I'm afraid that there were not enough votes to get a cross-project 
scheduler session on the design summit agenda on the Tuesday (when 
cross-project sessions are held):


http://kilodesignsummit.sched.org/overview/type/cross-project+workshops#.VFAGDB8aekA

That said, there is a 90-minute (double) session on *Thursday, the 6th*, 
between 11:00 and 12:30 on the Nova scheduler and resource tracker:


http://kilodesignsummit.sched.org/overview/type/nova#.VFAGgh8aekA

I encourage you and like-minded folks to attend at least part of this 
session. It will be around technical details involved in the refactoring 
groundwork that needs to be accomplished in the Kilo timeframe, but I 
would welcome your input on the proposed scheduler interfaces involved 
in this refactoring, particularly in relation to how you perceive future 
placement decision requirements will affect those interfaces.


Best,
-jay

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [All] Finalizing cross-project design summit track

2014-10-28 Thread Russell Bryant
A draft schedule has been posted for the cross-project design summit track:

http://kilodesignsummit.sched.org/overview/type/cross-project+workshops#.VFAFFXVGjUa

If you have any schedule changes to propose for really bad conflicts,
please let me know.  We really tried to minimize conflicts, but it's
impossible to resolve them all.

The next steps are to identify session leads and get the leads to write
session descriptions to put on the schedule.  We're collecting both at
the top of the proposals etherpad:

https://etherpad.openstack.org/p/kilo-crossproject-summit-topics

If you were the proposer of one of these sessions and are not already
listed as the session lead, please add yourself.  If you'd like to
volunteer to lead a session that doesn't have a lead, please speak up.

For the sessions you are leading, please draft a description on the
etherpad that can be used for the session on sched.org.

Thank you!

-- 
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][nova] New specs on routed networking

2014-10-28 Thread Carl Baldwin
On Tue, Oct 28, 2014 at 2:01 PM, Kevin Benton  wrote:
> I think the simplest use case is just that a provider doesn't want to deal
> with extending L2 domains all over their datacenter.

This is similar to a goal behind [1] and [2].  I'm trying to figure
out where the commonalities and differences are with our respective
approaches.  One obvious difference is that the approach that I
referenced deals only with external networks where typically only
routers connect their gateway interfaces whereas your approach is an
ML2 driver.  As an ML2 driver, it could handle tenant networks too.
I'm curious to know how it works.  I will read through the blueprint
proposals.  The first question that pops in my mind is how (or if) it
supports isolated overlapping L3 address spaces between tenant
networks.  I imagine that it will support a restricted networking
model.

I look forward to meeting and discussing this at Summit.  Look for me.

Carl

[1] https://blueprints.launchpad.net/neutron/+spec/pluggable-ext-net
[2] https://blueprints.launchpad.net/neutron/+spec/bgp-dynamic-routing

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][nova] New specs on routed networking

2014-10-28 Thread Carl Baldwin
On Tue, Oct 28, 2014 at 3:07 PM, Rohit Agarwalla (roagarwa)
 wrote:
> Agreed. The way I'm thinking about this is that tenants shouldn't care what
> the underlying implementation is - L2 or L3. As long as the connectivity
> requirements are met using the model/API, end users should be fine.
> The data center network design should be an administrators decision based on
> the implementation mechanism that has been configured for OpenStack.

Many API users won't care about the L2 details.  This could be a
compelling alternative for them.  However, some do.  The L2 details
seem to matter an awful lot to many NFV use cases.  It might be that
this alternative is just not compelling for those.  Not to say it
isn't compelling overall though.

Carl

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [sahara] client release 0.7.5

2014-10-28 Thread Sergey Lukjanov
Hi folks,

we have sahara client 0.7.5 released today with the following changes:

* AZ support for Nova and Cinder
* Volume type support for Cinder

More info: https://launchpad.net/python-saharaclient/0.7.x/0.7.5

We need these changes to update the Horizon and Heat Sahara bindings.

Thanks.

-- 
Sincerely yours,
Sergey Lukjanov
Sahara Technical Lead
(OpenStack Data Processing)
Principal Software Engineer
Mirantis Inc.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [tuskar] Puppet module

2014-10-28 Thread Emilien Macchi
Hi,

I was looking at deploying Tuskar API with Puppet and I was wondering if
you guys have already worked on a Puppet module.

If not, I think we could start something in stackforge like we already
did for other OpenStack components.

Thanks,
-- 
Emilien Macchi



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][nova] New specs on routed networking

2014-10-28 Thread Clint Byrum
Excerpts from Cory Benfield's message of 2014-10-24 06:38:44 -0700:
> All,
> 
> Project Calico [1] is an open source approach to virtual networking based on 
> L3 routing as opposed to L2 bridging.  In order to accommodate this approach 
> within OpenStack, we've just submitted 3 blueprints that cover
> 
> -  minor changes to nova to add a new VIF type [2]
> -  some changes to neutron to add DHCP support for routed interfaces [3]
> -  an ML2 mechanism driver that adds support for Project Calico [4].
> 
> We feel that allowing for routed network interfaces is of general use within 
> OpenStack, which was our motivation for submitting [2] and [3].  We also 
> recognise that there is an open question over the future of 3rd party ML2 
> drivers in OpenStack, but until that is finally resolved in Paris, we felt 
> submitting our driver spec [4] was appropriate (not least to provide more 
> context on the changes proposed in [2] and [3]).
> 
> We're extremely keen to hear any and all feedback on these proposals from the 
> community.  We'll be around at the Paris summit in a couple of weeks and 
> would love to discuss with anyone else who is interested in this direction. 

I'm quite interested in this, as we've recently been looking at how to
scale OpenStack on bare metal servers beyond the limits of a single flat
L2 network. We have a blueprint for it in TripleO as well:

https://blueprints.launchpad.net/tripleo/+spec/l3-network-segmentation

Hopefully you will be at the summit and can attend our scheduled session,
which I believe will be some time on Wednesday.

We're basically just planning on having routers configured out of band
and writing a Nova filter to ensure that the compute host has a
property which matches the network names requested for a server.

But if we could automate that too, that would make for an even more
automatic and scalable solution.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [glance]

2014-10-28 Thread Jesse Cook


On 10/27/14, 6:08 PM, "Jay Pipes"  wrote:

>On 10/27/2014 06:18 PM, Jesse Cook wrote:
>> In the glance mini-summit there was a request for some documentation on
>> the architecture ideas I was discussing relating to: 1) removing data
>> consistency as a concern for glance 2) bootstraping vs baking VMs
>>
>> Here's a rough draft:
>>https://gist.github.com/CrashenX/8fc6d42ffc154ae0682b
>
>Hi Jesse!
>
>A few questions for you, since I wasn't at the mini-summit and I think
>don't have a lot of the context necessary here...
>
>1) In the High-Level Architecture diagram, I see Glance Middleware
>components calling to a "Router" component. Could you elaborate what
>this Router component is, in relation to what components currently exist
>in Glance and Nova? For instance, is the Router kind of like the
>existing Glance Registry component? Or is it something more like the
>nova.image.download modules in Nova? Or something entirely different?

It's a high-level abstraction. It's close to being equivalent to the cloud
icon you find in many architecture diagrams, but not quite that vague. If
I had to associate it with an existing OpenStack component, I'd probably
say nova-scheduler. There is much detail to be fleshed out here. I have
some additional thoughts and documentation that I'm working on that I will
share once it is more fleshed out. Ultimately, I would like to see a fully
documented prescriptive architecture that we can iterate over to address
some of the complexities and pain points within the system as a whole.

>
>2) The Glance Middleware. Do you mean WSGI middleware here? Or are you
>referring to something more like the existing nova.image.api module that
>serves as a shim over the Glance server communication?

At the risk of having something thrown at me, what I am suggesting is a
move away from Glance as a service to Glance as a purely functional API.
At some point caching would need to be discussed, but I am intentionally
neglecting caching and the existence of any data store as there is a risk
of complecting state. I want to avoid discussions on performance until
more important things can be addressed such as predictability,
reliability, scalability, consistency, maintainability, extensibility,
security, and simplicity (i.e. As defined by Rich Hickey).

>
>3) Images in Glance are already immutable, once the image bytes are
>actually uploaded to a backend block store. What conceptual differences
>are you introducing with the idea of object immutability?

I think the biggest difference is the object definitions themselves. These
objects are immutable, but contain history. Every "mutation" of the object
results in a new object with the history updated. Deleted binary data
(i.e. diff disks and overlays) would be truly deleted, but its previous
existence would be recorded.
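
As a rough sketch of that idea (the record shape and fields here are invented
purely for illustration, not an actual Glance design):

import collections

# Illustration only: an "image" record whose every mutation yields a new
# immutable record, with the change captured in its history.
ImageRecord = collections.namedtuple('ImageRecord', ['name', 'tags', 'history'])


def with_tag(record, tag):
    """Return a new record with the tag added; the old record is untouched."""
    event = ('add-tag', tag)
    return ImageRecord(
        name=record.name,
        tags=record.tags + (tag,),
        history=record.history + (event,),
    )


base = ImageRecord(name='fedora-20', tags=(), history=())
tagged = with_tag(base, 'qcow2')
print(base.tags)       # () -- the original object is unchanged
print(tagged.tags)     # ('qcow2',)
print(tagged.history)  # (('add-tag', 'qcow2'),) -- how the new object came to be

The same pattern would let a deleted overlay's bytes go away while the event
recording that deletion stays in the history.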

>
>4) How does the glance_store library play into your ideas, if at all?

I expect the glance_store library would contain much of the code that
would be required. However, what I have documented is at a higher level.

>
>5) How does the existing "image locations" collection in the Glance v2
>API work with your ideas? With an image uploaded to multiple locations
>(in Swift, Ceph clusters, wherever...), is the Router object in your
>architecture the thing that determines affinity for the best
>storage-locality to pull data from?

I don't know that where the data is stored is important in the context of
this conversation yet. I think it will become part of a more specific
conversation. Ultimately, though, I believe this is one of the values that
Glance as Middleware would provide. It transparently performs the
operations documented over the object store(s).

>
>All the best,
>-jay

There are many important points that this documentation has not yet
addressed. Ideas such as discoverability, priority, performance, etc. This
is not because they are not important or that they haven't been
considered. It is because they are secondary to the primary goal:
simplicity. The OpenStack architecture as a whole has many complexities
built into it. Issues such as data consistency, state, time, etc. are
interwoven through various pieces of the system. These things have reared
their ugly heads in many places. I would like to make a shift in
perception. One in which the design and architecture is more functional.
That doesn't mean a functional implementation. It does, however, mean that
there would be a real focus on unbraiding state from the components of the
system. Does this mean a massive rewrite? Well, no, but we might have to
slowly strangle parts of the system with stateless components.

Why Glance? The answer is three fold. Three major components of many use
cases are Nova, Glance, and the Object Store. Glance is right in the
middle. That's 1. I started in Glance. That's 2. Glance has been having
discussions about its mission statement, which means that it seems to be
open to the idea there should be some sort of change. That's 3.

A final note. So

Re: [openstack-dev] [neutron][nova] New specs on routed networking

2014-10-28 Thread Angus Lees
On Tue, 28 Oct 2014 09:07:03 PM Rohit Agarwalla wrote:
> Agreed. The way I'm thinking about this is that tenants shouldn't care what
> the underlying implementation is - L2 or L3. As long as the connectivity
> requirements are met using the model/API, end users should be fine. The
> data center network design should be an administrators decision based on
> the implementation mechanism that has been configured for OpenStack.

I don't know anything about Project Calico, but I have been involved with 
running a large cloud network previously that made heavy use of L3 overlays.  

Just because these points weren't raised earlier in this thread:  In my 
experience, a move to L3 involves losing:

- broadcast/multicast.  It's possible to do L3 multicast/IGMP/etc, but that's 
a whole can of worms - so perhaps best to just say up front that this is a 
non-broadcast network.

- support for other IP protocols.

- various "L2 games" like virtual MAC addresses, etc that NFV/etc people like.


We gain:

- the ability to have proper hierarchical addressing underneath (which is a 
big one for scaling a single "network").  This itself is a tradeoff however - 
an efficient/strict hierarchical addressing scheme means VMs can't choose their 
own IP addresses, and VM migration is messy/limited/impossible.

- hardware support for dynamic L3 routing is generally universal, through a 
small set of mostly-standard protocols (BGP, ISIS, etc).

- can play various "L3 games" like BGP/anycast, which is super useful for 
geographically diverse services.


It's certainly a useful tradeoff for many use cases.  Users lose some 
generality in return for more powerful cooperation with the provider around 
particular features, so I sort of think of it like a step halfway up the
IaaS->PaaS stack - except for networking.

 - Gus

> Thanks
> Rohit
> 
> From: Kevin Benton mailto:blak...@gmail.com>>
> Reply-To: "OpenStack Development Mailing List (not for usage questions)"
> mailto:openstack-dev@lists.openstack.org
> >> Date: Tuesday, October 28, 2014 1:01 PM
> To: "OpenStack Development Mailing List (not for usage questions)"
> mailto:openstack-dev@lists.openstack.org
> >> Subject: Re: [openstack-dev] [neutron][nova] New specs on routed
> networking
> >1. Every packet L3 FIB Lookup : Radix Tree Search, instead of current L2
> >Hash/Index Lookup ? 2. Will there be Hierarchical network ?  How much
> >of the Routes will be imported from external world ? 3. Will there be 
> >Separate routing domain for overlay network  ? Or it will be mixed with
> >external/underlay network ?
> These are all implementation specific details. Different deployments and
> network backends can implement them however they want. What we need to
> discuss now is how this model will look to the end-user and API.
> >4. What will be the basic use case of this ? Thinking of L3 switching to
> >support BGP-MPLS L3 VPN Scenario right from compute node ?
> I think the simplest use case is just that a provider doesn't want to deal
> with extending L2 domains all over their datacenter.
> 
> On Tue, Oct 28, 2014 at 12:39 PM, A, Keshava
> mailto:keshav...@hp.com>> wrote: Hi Cory,
> 
> Yes that is the basic question I have.
> 
> OpenStack cloud  is ready to move away from Flat L2 network ?
> 
> 1. Every packet L3 FIB Lookup : Radix Tree Search, instead of current L2
> Hash/Index Lookup ? 2. Will there be Hierarchical network ?  How much
> of the Routes will be imported from external world ? 3. Will there be 
> Separate routing domain for overlay network  ? Or it will be mixed with
> external/underlay network ? 4. What will be the basic use case of this ?
> Thinking of L3 switching to support BGP-MPLS L3 VPN Scenario right from
> compute node ?
> 
> Others can give their opinion also.
> 
> Thanks & Regards,
> keshava
> 
> -Original Message-
> From: Cory Benfield
> [mailto:cory.benfi...@metaswitch.com]
> Sent: Tuesday, October 28, 2014 10:35 PM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [neutron][nova] New specs on routed networking
> 
> On Tue, Oct 28, 2014 at 07:44:48, A, Keshava wrote:
> > Hi,
> > 
> > Current Open-stack was built as flat network.
> > 
> > With the introduction of the L3 lookup (by inserting the routing table
> > in forwarding path) and separate 'VIF Route Type' interface:
> > 
> > At what point of time in the packet processing  decision will be made
> > to lookup FIB  during ? For each packet there will additional  FIB
> > lookup ?
> > 
> > How about the  impact on  'inter compute traffic', processed by  DVR  ?
> > Here thinking  OpenStack cloud as hierarchical network instead of Flat
> > network ?
> 
> Keshava,
> 
> It's difficult for me to answer in general terms: the proposed specs are
> general enough to allow multiple approaches to building purely-routed
> networks in OpenStack, and they may all have slightly different answers to
> some of these questions.

Re: [openstack-dev] [neutron][nova] New specs on routed networking

2014-10-28 Thread Fred Baker (fred)

On Oct 28, 2014, at 4:59 PM, Angus Lees  wrote:

> On Tue, 28 Oct 2014 09:07:03 PM Rohit Agarwalla wrote:
>> Agreed. The way I'm thinking about this is that tenants shouldn't care what
>> the underlying implementation is - L2 or L3. As long as the connectivity
>> requirements are met using the model/API, end users should be fine. The
>> data center network design should be an administrators decision based on
>> the implementation mechanism that has been configured for OpenStack.
> 
> I don't know anything about Project Calico, but I have been involved with 
> running a large cloud network previously that made heavy use of L3 overlays.  
> 
> Just because these points weren't raised earlier in this thread:  In my 
> experience, a move to L3 involves losing:
> 
> - broadcast/multicast.  It's possible to do L3 multicast/IGMP/etc, but that's 
> a whole can of worms - so perhaps best to just say up front that this is a 
> non-broadcast network.
> 
> - support for other IP protocols.
> 
> - various "L2 games" like virtual MAC addresses, etc that NFV/etc people like.

I’m a little confused. IP supports multicast. It requires a routing protocol, 
and you have to “join” the multicast group, but it’s not out of the picture.

What other “IP” protocols do you have in mind? Are you thinking about 
IPX/CLNP/etc? Or are you thinking about new network layers?

I’m afraid the L2 games leave me a little cold. We have been there, such as 
with DECNET IV. I’d need to understand what you were trying to achieve before I 
would consider that a loss.

> We gain:
> 
> - the ability to have proper hierarchical addressing underneath (which is a 
> big one for scaling a single "network").  This itself is a tradeoff however - 
> an efficient/strict hierarchical addressing scheme means VMs can't choose 
> their 
> own IP addresses, and VM migration is messy/limited/impossible.

It does require some variation on a host route, and it leads us to ask about 
renumbering. The hard part of VM migration is at the application layer, not the 
network, and is therefore pretty much the same.

> - hardware support for dynamic L3 routing is generally universal, through a 
> small set of mostly-standard protocols (BGP, ISIS, etc).
> 
> - can play various "L3 games" like BGP/anycast, which is super useful for 
> geographically diverse services.
> 
> 
> It's certainly a useful tradeoff for many use cases.  Users lose some 
> generality in return for more powerful cooperation with the provider around 
> particular features, so I sort of think of it like a step halfway up the IaaS-
>> PaaS stack - except for networking.
> 
> - Gus
> 
>> Thanks
>> Rohit
>> 
>> From: Kevin Benton mailto:blak...@gmail.com>>
>> Reply-To: "OpenStack Development Mailing List (not for usage questions)"
>> mailto:openstack-dev@lists.openstack.org
 Date: Tuesday, October 28, 2014 1:01 PM
>> To: "OpenStack Development Mailing List (not for usage questions)"
>> mailto:openstack-dev@lists.openstack.org
 Subject: Re: [openstack-dev] [neutron][nova] New specs on routed
>> networking
>>> 1. Every packet L3 FIB Lookup : Radix Tree Search, instead of current L2
>>> Hash/Index Lookup ? 2. Will there be Hierarchical network ?  How much
>>> of the Routes will be imported from external world ? 3. Will there be 
>>> Separate routing domain for overlay network  ? Or it will be mixed with
>>> external/underlay network ?
>> These are all implementation specific details. Different deployments and
>> network backends can implement them however they want. What we need to
>> discuss now is how this model will look to the end-user and API.
>>> 4. What will be the basic use case of this ? Thinking of L3 switching to
>>> support BGP-MPLS L3 VPN Scenario right from compute node ?
>> I think the simplest use case is just that a provider doesn't want to deal
>> with extending L2 domains all over their datacenter.
>> 
>> On Tue, Oct 28, 2014 at 12:39 PM, A, Keshava
>> mailto:keshav...@hp.com>> wrote: Hi Cory,
>> 
>> Yes that is the basic question I have.
>> 
>> OpenStack cloud  is ready to move away from Flat L2 network ?
>> 
>> 1. Every packet L3 FIB Lookup : Radix Tree Search, instead of current L2
>> Hash/Index Lookup ? 2. Will there be Hierarchical network ?  How much
>> of the Routes will be imported from external world ? 3. Will there be 
>> Separate routing domain for overlay network  ? Or it will be mixed with
>> external/underlay network ? 4. What will be the basic use case of this ?
>> Thinking of L3 switching to support BGP-MPLS L3 VPN Scenario right from
>> compute node ?
>> 
>> Others can give their opinion also.
>> 
>> Thanks & Regards,
>> keshava
>> 
>> -Original Message-
>> From: Cory Benfield
>> [mailto:cory.benfi...@metaswitch.com]
>> Sent: Tuesday, October 28, 2014 10:35 PM
>> To: OpenStack Development Mailing List (not for usage questions)
>> Subject: Re: [openstack-dev] [neutron][nova] New specs on routed networking
>> 
>> 

Re: [openstack-dev] [Neutron][LBaaS][Octavia] Usage Requirements

2014-10-28 Thread Angus Lees
On Tue, 28 Oct 2014 04:42:27 PM Jorge Miramontes wrote:
> Thanks for the reply Angus,
> 
> DDoS attacks are definitely a concern we are trying to address here. My
> assumptions are based on a solution that is engineered for this type of
> thing. Are you more concerned with network I/O during a DoS attack or
> storing the logs? Under the idea I had, I wanted to make the amount of
> time logs are stored for configurable so that the operator can choose
> whether they want the logs after processing or not. The network I/O of
> pumping logs out is a concern of mine, however.

My primary concern was the generated network I/O, and the write bandwidth to 
storage media implied by that (not so much the accumulated volume of data).

We're in an era where 10Gb/s networking is now common for serving/loadbalancer 
infrastructure and as far as I can see the trend for networking is climbing 
more steeply that storage I/O, so it's only going to get worse.   10Gb/s of 
short-lived connections is a *lot* to try to write to reliable storage 
somewhere and later analyse.
It's a useful option for some users, but it would be a shame to have to limit 
loadbalancer throughput by the logging infrastructure just because we didn't 
have an alternative available.

I think you're right, that we don't have an obviously-correct choice here.  I 
think we need to expose both cheap sampling/polling of counters and more 
detailed logging of connections matching patterns (and indeed actual packet 
capture would be nice too).  Someone could then choose to base their billing 
on either datasource depending on their own accuracy-vs-cost-of-collection 
tradeoffs.  I don't see that either approach is going to be sufficiently 
universal to obsolete the other :(
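
As a very rough sketch of the cheap end of that spectrum (the stats source
here is entirely made up; a real implementation would read whatever counters
the backend, e.g. an haproxy stats socket, actually exposes):

import time

# Illustration only: poll a per-VIP connection counter every N seconds
# instead of logging every connection.  Collection cost is fixed per
# interval, no matter how many connections (or attack packets) pass
# through the load balancer.
SAMPLE_INTERVAL = 15  # seconds -- the "tunable knob"


def get_active_connections(vip_id):
    """Stand-in for querying the backend's counters."""
    return 0  # a real implementation would ask the load balancer here


def sample_forever(vip_id, emit):
    while True:
        emit({
            'vip_id': vip_id,
            'timestamp': time.time(),
            'active_connections': get_active_connections(vip_id),
        })  # emit() would hand the sample to a metering/billing pipeline
        time.sleep(SAMPLE_INTERVAL)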

Also: UDP.   Most providers are all about HTTP now, but there are still some 
people that need to bill for UDP, SIP, VPN, etc traffic.

 - Gus

> Sampling seems like the go-to solution for gathering usage but I was
> looking for something different as sampling can get messy and can be
> inaccurate for certain metrics. Depending on the sampling rate, this
> solution has the potential to miss spikes in traffic if you are gathering
> gauge metrics such as active connections/sessions. Using logs would be
> 100% accurate in this case. Also, I'm assuming LBaaS will have events so
> combining sampling with events (CREATE, UPDATE, SUSPEND, DELETE, etc.)
> gets complicated. Combining logs with events is arguably less complicated
> as the granularity of logs is high. Due to this granularity, one can split
> the logs based on the event times cleanly. Since sampling will have a
> fixed cadence you will have to perform a "manual" sample at the time of
> the event (i.e. add complexity).
> 
> At the end of the day there is no free lunch so more insight is
> appreciated. Thanks for the feedback.
> 
> Cheers,
> --Jorge
> 
> On 10/27/14 6:55 PM, "Angus Lees"  wrote:
> >On Wed, 22 Oct 2014 11:29:27 AM Robert van Leeuwen wrote:
> >> > I,d like to start a conversation on usage requirements and have a few
> >> > suggestions. I advocate that, since we will be using TCP and
> >>
> >>HTTP/HTTPS
> >>
> >> > based protocols, we inherently enable connection logging for load
> >> 
> >> > balancers for several reasons:
> >> Just request from the operator side of things:
> >> Please think about the scalability when storing all logs.
> >> 
> >> e.g. we are currently logging http requests to one load balanced
> >>
> >>application
> >>
> >> (that would be a fit for LBAAS) It is about 500 requests per second,
> >>
> >>which
> >>
> >> adds up to 40GB per day (in elasticsearch.) Please make sure whatever
> >> solution is chosen it can cope with machines doing 1000s of requests per
> >> second...
> >
> >And to take this further, what happens during DoS attack (either syn
> >flood or
> >full connections)?  How do we ensure that we don't lose our logging
> >system
> >and/or amplify the DoS attack?
> >
> >One solution is sampling, with a tunable knob for the sampling rate -
> >perhaps
> >tunable per-vip.  This still increases linearly with attack traffic,
> >unless you
> >use time-based sampling (1-every-N-seconds rather than 1-every-N-packets).
> >
> >One of the advantages of (eg) polling the number of current sessions is
> >that
> >the cost of that monitoring is essentially fixed regardless of the number
> >of
> >connections passing through.  Numerous other metrics (rate of new
> >connections,
> >etc) also have this property and could presumably be used for accurate
> >billing
> >- without amplifying attacks.
> >
> >I think we should be careful about whether we want logging or metrics for
> >more
> >accurate billing.  Both are useful, but full logging is only really
> >required
> >for ad-hoc debugging (important! but different).
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___

[openstack-dev] [Cinder] add a common cache mechanism for block device storage

2014-10-28 Thread yoo bright
Dear all,

We proposed a new blueprint (at https://review.openstack.org/128814)
for adding a common cache mechanism for block device storage.

All requirements, suggestions and comments are welcome.

Thank you!
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][nova] New specs on routed networking

2014-10-28 Thread Harshad Nakil
An L3 routed network can support:
1. broadcast/multicast
2. VRRP-style virtual MAC technology

For example, OpenContrail supports both of these in fully L3 routed
networks.

Regards
-Harshad

On Tue, Oct 28, 2014 at 4:59 PM, Angus Lees  wrote:

> On Tue, 28 Oct 2014 09:07:03 PM Rohit Agarwalla wrote:
> > Agreed. The way I'm thinking about this is that tenants shouldn't care
> what
> > the underlying implementation is - L2 or L3. As long as the connectivity
> > requirements are met using the model/API, end users should be fine. The
> > data center network design should be an administrators decision based on
> > the implementation mechanism that has been configured for OpenStack.
>
> I don't know anything about Project Calico, but I have been involved with
> running a large cloud network previously that made heavy use of L3
> overlays.
>
> Just because these points weren't raised earlier in this thread:  In my
> experience, a move to L3 involves losing:
>
> - broadcast/multicast.  It's possible to do L3 multicast/IGMP/etc, but
> that's
> a whole can of worms - so perhaps best to just say up front that this is a
> non-broadcast network.
>
> - support for other IP protocols.
>
> - various "L2 games" like virtual MAC addresses, etc that NFV/etc people
> like.
>
>
> We gain:
>
> - the ability to have proper hierarchical addressing underneath (which is a
> big one for scaling a single "network").  This itself is a tradeoff
> however -
> an efficient/strict hierarchical addressing scheme means VMs can't choose
> their
> own IP addresses, and VM migration is messy/limited/impossible.
>
> - hardware support for dynamic L3 routing is generally universal, through a
> small set of mostly-standard protocols (BGP, ISIS, etc).
>
> - can play various "L3 games" like BGP/anycast, which is super useful for
> geographically diverse services.
>
>
> It's certainly a useful tradeoff for many use cases.  Users lose some
> generality in return for more powerful cooperation with the provider around
> particular features, so I sort of think of it like a step halfway up the
> IaaS-
> >PaaS stack - except for networking.
>
>  - Gus
>
> > Thanks
> > Rohit
> >
> > From: Kevin Benton mailto:blak...@gmail.com>>
> > Reply-To: "OpenStack Development Mailing List (not for usage questions)"
> >  openstack-dev@lists.openstack.org
> > >> Date: Tuesday, October 28, 2014 1:01 PM
> > To: "OpenStack Development Mailing List (not for usage questions)"
> >  openstack-dev@lists.openstack.org
> > >> Subject: Re: [openstack-dev] [neutron][nova] New specs on routed
> > networking
> > >1. Every packet L3 FIB Lookup : Radix Tree Search, instead of current L2
> > >Hash/Index Lookup ? 2. Will there be Hierarchical network ?  How
> much
> > >of the Routes will be imported from external world ? 3. Will there be
> > >Separate routing domain for overlay network  ? Or it will be mixed with
> > >external/underlay network ?
> > These are all implementation specific details. Different deployments and
> > network backends can implement them however they want. What we need to
> > discuss now is how this model will look to the end-user and API.
> > >4. What will be the basic use case of this ? Thinking of L3 switching to
> > >support BGP-MPLS L3 VPN Scenario right from compute node ?
> > I think the simplest use case is just that a provider doesn't want to
> deal
> > with extending L2 domains all over their datacenter.
> >
> > On Tue, Oct 28, 2014 at 12:39 PM, A, Keshava
> > mailto:keshav...@hp.com>> wrote: Hi Cory,
> >
> > Yes that is the basic question I have.
> >
> > OpenStack cloud  is ready to move away from Flat L2 network ?
> >
> > 1. Every packet L3 FIB Lookup : Radix Tree Search, instead of current L2
> > Hash/Index Lookup ? 2. Will there be Hierarchical network ?  How much
> > of the Routes will be imported from external world ? 3. Will there be
> > Separate routing domain for overlay network  ? Or it will be mixed with
> > external/underlay network ? 4. What will be the basic use case of this ?
> > Thinking of L3 switching to support BGP-MPLS L3 VPN Scenario right from
> > compute node ?
> >
> > Others can give their opinion also.
> >
> > Thanks & Regards,
> > keshava
> >
> > -Original Message-
> > From: Cory Benfield
> > [mailto:cory.benfi...@metaswitch.com >]
> > Sent: Tuesday, October 28, 2014 10:35 PM
> > To: OpenStack Development Mailing List (not for usage questions)
> > Subject: Re: [openstack-dev] [neutron][nova] New specs on routed
> networking
> >
> > On Tue, Oct 28, 2014 at 07:44:48, A, Keshava wrote:
> > > Hi,
> > >
> > > Current Open-stack was built as flat network.
> > >
> > > With the introduction of the L3 lookup (by inserting the routing table
> > > in forwarding path) and separate 'VIF Route Type' interface:
> > >
> > > At what point of time in the packet processing  decision will be made
> > > to lookup FIB  during ? For each packet there will additional  FIB
> > > lookup ?
> > >
> > > Ho

Re: [openstack-dev] [glance]

2014-10-28 Thread Robert Collins
On 29 October 2014 11:18, Jesse Cook  wrote:

> At the risk of having something thrown at me, what I am suggesting is a
> move away from Glance as a service to Glance as a purely functional API.
> At some point caching would need to be discussed, but I am intentionally
> neglecting caching and the existence of any data store as there is a risk
> of complecting state. I want to avoid discussions on performance until
> more important things can be addressed such as predictability,
> reliability, scalability, consistency, maintainability, extensibility,
> security, and simplicity (i.e. As defined by Rich Hickey).

I won't throw anything at you; I might buy you a drink.

For folk that haven't seen it:
http://www.infoq.com/presentations/Simple-Made-Easy //
http://www.reddit.com/r/programming/comments/lirke/simple_made_easy_by_rich_hickey_video/

-- 
Robert Collins 
Distinguished Technologist
HP Converged Cloud

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [bashate] towards "inbox zero" on bashate changes, release?

2014-10-28 Thread Ian Wienand
On 10/14/2014 04:03 PM, Ian Wienand wrote:
> Maybe it is time for a release?  One thing: does the pre-release check
> run over TOT devstack and ensure there are no errors?  We don't want
> to release and then 10 minutes later gate jobs start failing.

Just to loop back on this ...

Our main goal here should be to get [1] merged so we don't further
regress on any checks.

TOT bashate currently passes against devstack; so we can release as
is.  Two extra changes we might consider as useful in a release:

 - https://review.openstack.org/131611 (Remove automagic file finder)
   I think we've agreed to just let test-frameworks find their own
   files, so get rid of this

 - https://review.openstack.org/131616 (Add man page)
   Doesn't say much, but it can't hurt

As future work, we can do things like add warning-level checks,
automatically generate the documentation on errors being checked, etc.

-i

[1] https://review.openstack.org/128809 (Fix up file-matching in bashate tox 
test)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [All] Finalizing cross-project design summit track

2014-10-28 Thread joehuang
Hello, Russell,

Is "cascading" included in the session "Approaches for scaling out" [1]?

From the selected topics listed in [2], the selected topic is
"plans for scaling out OpenStack using cells":
 * Session Lead: John Garbutt
 * Session Description:
 
But according to the TCs' comments [2], cells and cascading were merged into
this session; all of the negative scores on cascading were requests to merge
cells and cascading:
Could you please confirm whether cascading is included in "Approaches for
scaling out" or not. If yes, I would like to add myself as co-lead with John
Garbutt for this session.

The TCs' comments are as follows:

19. plans for scaling out OpenStack using cells: -1 / 3
19.1. (johnthetubaguy) Interested: jaypipes, edleafe
      (annegentle: I wonder if we can split up one slot for nova/glance/interaction?)
      +1 (ttx) if merged with cascading session (#21)
      +1 (dhellmann) Merge with cascading - let's pick one approach to this
      +1 (jeblair merge)
      +0 (mikal) how does this differ from the cells sessions in the nova track?
      -0 (sdague) honestly think that should just be in Nova track.
      -1 (russellb) We have 2 slots for this in the Nova track already. Nova
         needs to figure out if this is actually moving forward or not.

21. Introduce OpenStack cascading  for integrating multi-site / multi-vendor / 
multi-version OpenStack instances into one cloud with  OpenStack API exposed 
(Chaoyi Huang, joehu...@huawei.com): -4 / 2
21.1. -1 (ttx) merge with the cells session (#19) so that both approaches are compared
      (annegentle) merge in cells session
      -1 (dhellmann) merge
      +2 (devananda) merge with cells discussion, and discuss both
      -1 (jeblair merge)
      -1 (russellb) can be discussed as an alternative in the nova cells session.

[1] http://kilodesignsummit.sched.org/type/cross-project+workshops
[2] https://etherpad.openstack.org/p/kilo-crossproject-summit-topics


Best Regards

Chaoyi Huang ( joehuang )



From: Russell Bryant [rbry...@redhat.com]
Sent: 29 October 2014 5:22
To: OpenStack Development Mailing List
Subject: [openstack-dev] [All] Finalizing cross-project design summit track

A draft schedule has been posted for the cross-project design summit track:

http://kilodesignsummit.sched.org/overview/type/cross-project+workshops#.VFAFFXVGjUa

If you have any schedule changes to propose for really bad conflicts,
please let me know.  We really tried to minimize conflicts, but it's
impossible to resolve them all.

The next steps are to identify session leads and get the leads to write
session descriptions to put on the schedule.  We're collecting both at
the top of the proposals etherpad:

https://etherpad.openstack.org/p/kilo-crossproject-summit-topics

If you were the proposer of one of these sessions and are not already
listed as the session lead, please add yourself.  If you'd like to
volunteer to lead a session that doesn't have a lead, please speak up.

For the sessions you are leading, please draft a description on the
etherpad that can be used for the session on sched.org.

Thank you!

--
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][nova] New specs on routed networking

2014-10-28 Thread Angus Lees
On Wed, 29 Oct 2014 12:21:10 AM Fred Baker wrote:
> On Oct 28, 2014, at 4:59 PM, Angus Lees  wrote:
> > On Tue, 28 Oct 2014 09:07:03 PM Rohit Agarwalla wrote:
> >> Agreed. The way I'm thinking about this is that tenants shouldn't care what
> >> the underlying implementation is - L2 or L3. As long as the connectivity
> >> requirements are met using the model/API, end users should be fine. The
> >> data center network design should be an administrator's decision based on
> >> the implementation mechanism that has been configured for OpenStack.
> > 
> > I don't know anything about Project Calico, but I have been involved with
> > running a large cloud network previously that made heavy use of L3
> > overlays.
> > 
> > Just because these points weren't raised earlier in this thread:  In my
> > experience, a move to L3 involves losing:
> > 
> > - broadcast/multicast.  It's possible to do L3 multicast/IGMP/etc, but
> > that's a whole can of worms - so perhaps best to just say up front that
> > this is a non-broadcast network.
> > 
> > - support for other IP protocols.
> > 
> > - various "L2 games" like virtual MAC addresses, etc that NFV/etc people
> > like.
> I’m a little confused. IP supports multicast. It requires a routing
> protocol, and you have to “join” the multicast group, but it’s not out of
> the picture.

Agreed, you absolutely can do multicast and broadcast on an L3 overlay 
network.  I was just saying that IGMP support tends to be a lot more 
inconsistent and flaky across vendors compared with L2 multicast (which pretty 
much always works).
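
For reference, "joining the group" is a single socket option from the host's
side; the kernel then emits the IGMP membership report, and it is the handling
of that report by the switches and routers in between that tends to vary by
vendor. A minimal stand-alone receiver (plain Python, nothing OpenStack- or
vendor-specific; the group and port below are picked arbitrarily) looks
roughly like:

    # Minimal multicast receiver; the IP_ADD_MEMBERSHIP setsockopt is what
    # makes the kernel send an IGMP membership report for the group.
    import socket
    import struct

    GROUP = '239.1.1.1'   # arbitrary administratively-scoped group
    PORT = 5007

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind(('', PORT))

    # struct ip_mreq: group address + local interface (0.0.0.0 lets the
    # kernel choose the interface).
    mreq = struct.pack('4s4s', socket.inet_aton(GROUP),
                       socket.inet_aton('0.0.0.0'))
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)

    data, addr = sock.recvfrom(1500)
    print(addr, data)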

Further, if the goal of moving to routed L3 is to allow a "network" to span 
more geographically diverse underlying networks, then we might want to 
administratively prohibit broadcast due to its increased cost and it no longer 
being a hard requirement for basic functionality (no need for ARP/DHCP 
anymore!).

If we're foregoing an L2 abstraction and moving to L3, I was merely suggesting 
it might also be reasonable to say that broadcast/multicast are not supported 
and thus the requirements on the underlying infrastructure can be drastically 
reduced.  Non-broadcast L3 overlay networks are common and prove to be useful 
for just about every task except mDNS/WINS discovery, which everyone is rather 
happy to leave behind ;)

> What other “IP” protocols do you have in mind? Are you thinking about
> IPX/CLNP/etc? Or are you thinking about new network layers?

e.g. if the underlying L3 network only supported IPv4, then it would be 
impossible to run IPv6 (without yet another overlay network).  With an L2 
abstraction, theoretically any IP protocol can be used.

> I’m afraid the L2 games leave me a little cold. We have been there, such as
> with DECNET IV. I’d need to understand what you were trying to achieve
> before I would consider that a loss.

Sure, just listing it as one of the changes for completeness.

"Traditional" network devices often use VRRP or similar for HA failover, and 
so NFV-on-L3 would need to use some alternative (failover via overlapping BGP 
advertisements, for example, is easy and works well, so this isn't hard - just 
different).
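
To make the overlapping-advertisements idea concrete, here is a toy
longest-prefix-match sketch (plain Python 3 with the stdlib ipaddress module,
not any real routing daemon; the prefixes and next-hop names are made up). The
active instance announces a more-specific route; when that announcement is
withdrawn, the covering prefix from the standby takes over:

    # Toy FIB with longest-prefix match; just the failover-by-overlapping-
    # advertisements idea in a few lines, not a routing daemon.
    import ipaddress

    def lookup(fib, dest):
        """Return the next hop of the most specific prefix covering dest."""
        dest = ipaddress.ip_address(dest)
        matches = [(net, nh) for net, nh in fib.items() if dest in net]
        return max(matches, key=lambda m: m[0].prefixlen)[1] if matches else None

    fib = {
        ipaddress.ip_network('10.0.0.0/24'): 'standby-gw',  # covering prefix
        ipaddress.ip_network('10.0.0.5/32'): 'active-gw',   # more-specific host route
    }

    print(lookup(fib, '10.0.0.5'))                # -> active-gw
    del fib[ipaddress.ip_network('10.0.0.5/32')]  # active instance withdraws
    print(lookup(fib, '10.0.0.5'))                # -> standby-gw (failover)

In a real FIB the matching step is of course a radix-tree walk rather than a
linear scan, which is the per-packet lookup cost being asked about elsewhere
in this thread.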

> > We gain:
> > 
> > - the ability to have proper hierarchical addressing underneath (which is
> > a
> > big one for scaling a single "network").  This itself is a tradeoff
> > however - an efficient/strict hierarchical addressing scheme means VMs
> > can't choose their own IP addresses, and VM migration is
> > messy/limited/impossible.
> 
> It does require some variation on a host route, and it leads us to ask about
> renumbering. The hard part of VM migration is at the application layer, not
> the network, and is therefore pretty much the same.
> > - hardware support for dynamic L3 routing is generally universal, through
> > a
> > small set of mostly-standard protocols (BGP, ISIS, etc).
> > 
> > - can play various "L3 games" like BGP/anycast, which is super useful for
> > geographically diverse services.
> > 
> > 
> > It's certainly a useful tradeoff for many use cases.  Users lose some
> > generality in return for more powerful cooperation with the provider
> > around
> > particular features, so I sort of think of it like a step halfway up the
> > IaaS->PaaS stack - except for networking.
> > 
> > - Gus
> > 
> >> Thanks
> >> Rohit
> >> 
> >> From: Kevin Benton 
> >> Reply-To: "OpenStack Development Mailing List (not for usage questions)"
> >>  
> >> Date: Tuesday, October 28, 2014 1:01 PM
> >> To: "OpenStack Development Mailing List (not for usage questions)"
> >>  
> >> Subject: Re: [openstack-dev] [neutron][nova] New specs on routed
> >> networking
> >>
> >>> 1. Every packet L3 FIB Lookup : Radix Tree Search, instead of current L2
> >>> Hash/Index Lookup ? 2. Will there be Hierarchical network ?  How
> >>> much
> >>> of the Routes will be imported fro

Re: [openstack-dev] [neutron] Clear all flows when ovs agent start?why and how avoid?

2014-10-28 Thread Zebra
Hi, Kyle


> This is likely due to this bug [1] which was fixed in Juno. On agent
>restart, all flows are reprogrammed. We do this to ensure that
>everything is reprogrammed correctly and no stale flows are left.
 
If the neutron-openvswitch-agent restarts, the existing flows should not be
reprogrammed, because doing so causes a network outage that end users will
notice.


Also, clearing all flows in setup_tunnel_br/setup_physical_br/setup_integration_br
is not necessary, because the flows will be re-synchronized by the scan_ports
function.


So I think we should not call remove_all_flows in
setup_tunnel_br/setup_physical_br/setup_integration_br.


Besides, the delete-port operations in
setup_tunnel_br/setup_physical_br/setup_integration_br should also be removed
(or commented out) for the same reason.
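
To sketch the shape of this (the option name drop_flows_on_start and the
helper below are my own placeholders, not existing ovs_neutron_agent code;
it is only what the startup flag Kyle suggests further down might look like):

    # Rough sketch only: the option name (drop_flows_on_start) and the helper
    # are assumptions, not the current ovs_neutron_agent code.
    from oslo.config import cfg

    agent_opts = [
        cfg.BoolOpt('drop_flows_on_start', default=True,
                    help='Reset all OpenFlow rules when the agent starts. '
                         'Set to False to keep existing flows across a '
                         'restart/upgrade and avoid a dataplane outage.'),
    ]
    cfg.CONF.register_opts(agent_opts, 'AGENT')

    def maybe_reset_flows(bridge):
        """Clear a bridge's flows only when the operator asked for it."""
        if cfg.CONF.AGENT.drop_flows_on_start:
            bridge.remove_all_flows()
        # else: leave the installed flows alone; they get re-synced as ports
        # are (re)processed, so the dataplane never goes dark on restart.

    # setup_tunnel_br/setup_physical_br/setup_integration_br would then call
    # maybe_reset_flows(self.tun_br) etc. instead of unconditionally calling
    # self.tun_br.remove_all_flows().

Defaulting it to True keeps today's behaviour; operators doing a rolling
upgrade could set it to False just for that restart.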




Zebra



-- Original --
From:  "Kyle Mestery";
Date:  Tue, Oct 28, 2014 09:00 PM
To:  "OpenStack Development Mailing List (not for usage 
questions)"; 

Subject:  Re: [openstack-dev] [neutron] Clear all flows when ovs agent 
start?why and how avoid?

 
On Mon, Oct 27, 2014 at 10:01 PM, Damon Wang  wrote:
> Hi all,
>
> We suffered a long downtime when we upgraded our public cloud's neutron to
> the latest version (close to Juno RC2), because the ovs-agent cleaned all
> flows in br-tun when it started.
>
This is likely due to this bug [1] which was fixed in Juno. On agent
restart, all flows are reprogrammed. We do this to ensure that
everything is reprogrammed correctly and no stale flows are left.

[1] https://bugs.launchpad.net/tripleo/+bug/1290486
> I find that our current design is to remove all flows and then re-add them
> entry by entry; this means every network node will break all tunnels to the
> other network nodes and all compute nodes.
>
> ( plugins.openvswitch.agent.ovs_neutron_agent.OVSNeutronAgent.__init__ ->
> plugins.openvswitch.agent.ovs_neutron_agent.OVSNeutronAgent#setup_tunnel_br
> :
> self.tun_br.remove_all_flows() )
>
> Do we have any mechanism or ideas to avoid this, or should we rethink our
> current design? Comments welcome.
>
Perhaps a way around this would be to add a flag on agent startup
which would have it skip reprogramming flows. This could be used for
the upgrade case.

> Wei Wang
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


  1   2   >