Re: [openstack-dev] [devstack] keystone doesn't restart after ./unstack

2014-11-04 Thread Dean Troyer
On Tue, Nov 4, 2014 at 10:35 PM, JunJie Nan  wrote:

> I think it's a bug; rejoin should work after unstack. And stack.sh should
> be needed after clean.sh, not after unstack.sh.
>
As Chmouel said, rejoin-stack.sh is meant only to re-create the screen
sessions from the last stack.sh run.  As services are configured to run
under Apache's mod_wsgi, they will not be handled by rejoin-stack.sh.

dt

-- 

Dean Troyer
dtro...@gmail.com
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][TripleO] Clear all flows when ovs agent start? why and how avoid?

2014-11-04 Thread Salvatore Orlando
From what I gather from this thread and the related bug report, the change
introduced in the OVS agent is causing a data plane outage upon agent
restart, which is not desirable in most cases.

The rationale for the change that introduced this bug was, I believe,
cleaning up stale flows on the OVS agent, which also makes some sense.

Unless I'm missing something, I reckon the best way forward is actually
quite straightforward: we might add a startup flag to reset all flows, and
not reset them by default.
While I agree the "flow synchronisation" process proposed in the previous
post is valuable too, I hope we might be able to fix this with a simpler
approach.
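The simpler approach above can be sketched roughly as follows; the flag name `drop_flows_on_start` and the function are purely illustrative, not an actual Neutron option or API at the time of writing.

```python
# Sketch of the proposed behaviour: reset flows only when explicitly
# requested at startup. When the flag is off (the proposed default),
# existing flows are kept so the data plane survives an agent restart.

def startup_flows(current_flows, drop_flows_on_start=False):
    """Return the flow set the agent should start from.

    current_flows: flows already programmed in OVS (modelled as a set).
    drop_flows_on_start: True restores the old wipe-everything behaviour.
    """
    if drop_flows_on_start:
        return set()           # start clean; agent reprograms from scratch
    return set(current_flows)  # leave the data plane untouched
```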

Salvatore

On 5 November 2014 04:43, Germy Lure  wrote:

> Hi,
>
> Considering what triggers an agent restart, I think it's nothing but:
> 1) restarting only the agent
> 2) rebooting the host the agent is deployed on
>
> When the agent starts, OVS may:
> a. have all correct flows
> b. have nothing at all
> c. have partly correct flows; the others may need to be reprogrammed,
> deleted or added
>
> In any case, I think both users and developers would be happy to see the
> system recover ASAP after an agent restart. Ideally the agent would push
> only the incorrect flows and keep the correct ones. This ensures traffic
> using correct flows keeps working while the agent starts.
>
> So, I suggest two solutions:
> 1. After restarting, the agent gets all flows from OVS, compares them with
> its local flows, and corrects only the ones that differ.
> 2. Adapt both OVS and the agent: the agent pushes all flows (removing
> none) every time, and OVS keeps two flow tables and switches between them
> (like an RCU lock).
>
> 1 is recommended because of third-party vendors.
>
> BR,
> Germy
>
>
> On Fri, Oct 31, 2014 at 10:28 PM, Ben Nemec 
> wrote:
>
>> On 10/29/2014 10:17 AM, Kyle Mestery wrote:
>> > On Wed, Oct 29, 2014 at 7:25 AM, Hly  wrote:
>> >>
>> >>
>> >> Sent from my iPad
>> >>
>> >> On 2014-10-29, at 8:01 PM, Robert van Leeuwen <
>> robert.vanleeu...@spilgames.com> wrote:
>> >>
>> > I find our current design is to remove all flows and then add flows
>> > entry by entry; this will cause every network node to break off all
>> > tunnels to the other network nodes and all compute nodes.
>>  Perhaps a way around this would be to add a flag on agent startup
>>  which would have it skip reprogramming flows. This could be used for
>>  the upgrade case.
>> >>>
>> >>> I hit the same issue last week and filed a bug here:
>> >>> https://bugs.launchpad.net/neutron/+bug/1383674
>> >>>
>> >>> From an operator's perspective this is VERY annoying, since you also
>> >>> cannot push any config change that requires/triggers a restart of the
>> >>> agent.
>> >>> e.g. something simple like changing a log setting becomes a hassle.
>> >>> I would prefer the default behaviour to be to not clear the flows, or
>> >>> at the least a config option to disable it.
>> >>>
>> >>
>> >> +1, we also suffered from this even when a very little patch is done
>> >>
>> > I'd really like to get some input from the tripleo folks, because they
>> > were the ones who filed the original bug here and were hit by the
>> > agent NOT reprogramming flows on agent restart. It does seem fairly
>> > obvious that adding an option around this would be a good way forward,
>> > however.
>>
>> Since nobody else has commented, I'll put in my two cents (though I
>> might be overcharging you ;-).  I've also added the TripleO tag to the
>> subject, although with Summit coming up I don't know if that will help.
>>
>> Anyway, if the bug you're referring to is the one I think, then our
>> issue was just with the flows not existing.  I don't think we care
>> whether they get reprogrammed on agent restart or not as long as they
>> somehow come into existence at some point.
>>
>> It's possible I'm wrong about that, and probably the best person to talk
>> to would be Robert Collins since I think he's the one who actually
>> tracked down the problem in the first place.
>>
>> -Ben
>>
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron] [nfv]

2014-11-04 Thread A, Keshava
Hi,
I am thinking out loud here about NFV Service VMs and the OpenStack
infrastructure. Please let me know whether the scenario analysis below makes
sense.

NFV Service VMs are hosted on a cloud (OpenStack) where there are 2 tenants
with different service orders of execution.
(The service order I have mentioned here is just an example.)

* Does OpenStack control the order of Service execution for every packet?

* Will OpenStack have a different Service-Tag for each Service?

* If there are multiple features within a Service-VM, how is
Service-Execution controlled in that VM?

* After completion of a particular Service, how will the next Service be
invoked?

Will there be pre-configured flows from OpenStack to invoke the next service
for a tagged packet coming from a Service-VM?
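One way to picture the tag-driven chaining asked about above is a per-tenant ordered chain that maps the service a packet just left to the next hop. Everything below is illustrative; the names and the lookup are hypothetical, not an OpenStack API.

```python
# Hypothetical model of per-tenant service chains: the infrastructure would
# pre-program flows so that a packet returning from one Service-VM, tagged
# with the service it just completed, is steered to the next service.

SERVICE_CHAINS = {
    "tenant-a": ["firewall", "nat", "lb"],
    "tenant-b": ["lb", "firewall"],
}


def next_service(tenant, completed_service):
    """Given the service a tagged packet just left, return the next hop,
    or None when the chain is finished."""
    chain = SERVICE_CHAINS[tenant]
    idx = chain.index(completed_service)
    return chain[idx + 1] if idx + 1 < len(chain) else None
```

With this model, "different Service-Tags per Service" amounts to the tag identifying `completed_service`, and the pre-configured flows amount to the `next_service` lookup.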



Thanks & regards,
keshava




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ceilometer] unable to collect compute.node.cpu.* data

2014-11-04 Thread Lu, Lianhao
Hi Frank,

Could you try 'ceilometer sample-list' to see if the compute.node.cpu samples
are there?

-Lianhao

From: Du Jun [mailto:dj199...@gmail.com]
Sent: Wednesday, November 05, 2014 3:44 AM
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev] [ceilometer] unable to collect compute.node.cpu.* data

Hi all,

I attempt to collect compute.node.cpu as the following link mentions:

http://docs.openstack.org/developer/ceilometer/measurements.html#compute-nova

I set:

compute_monitors = ComputeDriverCPUMonitor

in /etc/nova/nova.conf and restart nova-compute, nova-scheduler, 
ceilometer-agent-notification, ceilometer-api, ceilometer-collector.

From ceilometer-agent-notification's log, I can see agent transform and publish 
data samples compute.node.cpu.*

What's more, from ceilometer database, I can see all the meters 
compute.node.cpu.*


mysql> select * from meter;

+----+---------------------------------+------------+------+
| id | name                            | type       | unit |
+----+---------------------------------+------------+------+
| 39 | compute.node.cpu.frequency      | gauge      | MHz  |
| 41 | compute.node.cpu.idle.percent   | gauge      | %    |
| 38 | compute.node.cpu.idle.time      | cumulative | ns   |
| 45 | compute.node.cpu.iowait.percent | gauge      | %    |
| 42 | compute.node.cpu.iowait.time    | cumulative | ns   |
| 36 | compute.node.cpu.kernel.percent | gauge      | %    |
| 44 | compute.node.cpu.kernel.time    | cumulative | ns   |
| 37 | compute.node.cpu.percent        | gauge      | %    |
| 43 | compute.node.cpu.user.percent   | gauge      | %    |
| 40 | compute.node.cpu.user.time      | cumulative | ns   |
+----+---------------------------------+------------+------+


However, when I type

ceilometer meter-list

It shows nothing about compute.node.cpu.*, so I wonder what's wrong with my 
steps.

--
Regards,
Frank
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack-Dev] [BarbicanClient] [Cinder] Cinder unit tests failing

2014-11-04 Thread ZhiQiang Fan
Actually, Alan Pevec reported a bug on 2014-11-01
(https://bugs.launchpad.net/cinder/+bug/1388461), and it is assigned to the
original spec developer, Brianna Poulos.



On Wed, Nov 5, 2014 at 6:33 AM, John Griffith 
wrote:

> Hey Everyone,
>
> So there's been a bit of activity around barbicanclient of late, due to
> version 3.0.0 causing unit test failures as a result of cliff
> dependencies in stable.
>
> Unfortunately, there's a detail that's been neglected here.  Looking
> at the logs for the unit tests, it turns out that barbicanclient removed
> a module.  That would be fine, but sadly a unit test was written in
> cinder that imported a module from python-barbicanclient directly
> (that's bad!!).  So as a result said unit tests now obviously fail.
>
> As a temporary solution I've removed
> cinder/tests/keymgr/test_barbican.py from Cinders unit tests. I'll
> look at rewriting the unit tests or maybe some of the barbican folks
> would be willing to step up and take a shot at cleaning this all up
> before I get a chance.
>
> For reference I've filed a cinder bug [1]
>
> Thanks,
> John
>
> [1]: https://bugs.launchpad.net/cinder/+bug/1389419
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
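A guard like the following would keep a test suite from failing hard when a client library drops a module: check for the module before the test class is collected, instead of importing it unconditionally at module scope. The path "barbicanclient.keys" is a placeholder, not the real module that was removed.

```python
# Sketch: skip, rather than crash, when an optional client module is gone.
import importlib.util
import unittest


def module_available(name):
    """True iff `name` can be found, without actually importing it."""
    try:
        return importlib.util.find_spec(name) is not None
    except ImportError:
        # find_spec raises if a parent package of a dotted name is missing
        return False


@unittest.skipUnless(module_available("barbicanclient.keys"),
                     "python-barbicanclient module not available")
class BarbicanKeyMgrTestCase(unittest.TestCase):
    def test_placeholder(self):
        self.assertTrue(True)
```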



-- 
blog: zqfan.github.com
git: github.com/zqfan
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ceilometer] unable to collect compute.node.cpu.* data

2014-11-04 Thread ZhiQiang Fan
is there any error or warning information related to that meter-list
request in ceilometer-api.log?

On Wed, Nov 5, 2014 at 10:43 AM, Du Jun  wrote:

> Hi all,
>
> I attempt to collect compute.node.cpu as the following link mentions:
>
>
> http://docs.openstack.org/developer/ceilometer/measurements.html#compute-nova
>
> I set:
>
> compute_monitors = ComputeDriverCPUMonitor
>
> in /etc/nova/nova.conf and restart nova-compute, nova-scheduler,
> ceilometer-agent-notification, ceilometer-api, ceilometer-collector.
>
> From ceilometer-agent-notification's log, I can see agent transform and
> publish data samples compute.node.cpu.*
>
> What's more, from ceilometer database, I can see all the meters
> compute.node.cpu.*
>
> mysql> select * from meter;
>
> +----+---------------------------------+------------+------+
> | id | name                            | type       | unit |
> +----+---------------------------------+------------+------+
> | 39 | compute.node.cpu.frequency      | gauge      | MHz  |
> | 41 | compute.node.cpu.idle.percent   | gauge      | %    |
> | 38 | compute.node.cpu.idle.time      | cumulative | ns   |
> | 45 | compute.node.cpu.iowait.percent | gauge      | %    |
> | 42 | compute.node.cpu.iowait.time    | cumulative | ns   |
> | 36 | compute.node.cpu.kernel.percent | gauge      | %    |
> | 44 | compute.node.cpu.kernel.time    | cumulative | ns   |
> | 37 | compute.node.cpu.percent        | gauge      | %    |
> | 43 | compute.node.cpu.user.percent   | gauge      | %    |
> | 40 | compute.node.cpu.user.time      | cumulative | ns   |
> +----+---------------------------------+------------+------+
>
>
> However, when I type
>
> ceilometer meter-list
>
> It shows nothing about compute.node.cpu.*, so I wonder what's wrong with
> my steps.
>
> --
> Regards,
> Frank
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
blog: zqfan.github.com
git: github.com/zqfan
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [devstack] keystone doesn't restart after ./unstack

2014-11-04 Thread JunJie Nan
I think it's a bug; rejoin should work after unstack. And stack.sh should be
needed after clean.sh, not after unstack.sh.
Hi,

If you do ./unstack.sh you probably want to do ./stack.sh again to restack;
./rejoin-stack.sh is for when your screen session has been killed and you
want to rejoin it without having to run the full ./stack.sh shenanigan
again.

Cheers,
Chmouel

On Tue, Nov 4, 2014 at 1:52 PM, Angelo Matarazzo <
angelo.matara...@dektech.com.au> wrote:

> Hi all,
>
> sometimes I use devstack (in a VM with Ubuntu installed) and I run
> ./unstack.sh to reset my environment.
>
> When I run rejoin-stack.sh the keystone endpoint doesn't work.
> Following the suggestion at
> http://www.gossamer-threads.com/lists/openstack/dev/41939
> I checked /etc/apache2/sites-enabled,
> and the symbolic link to
> ../sites-available/keystone.conf doesn't exist.
>
> If I recreate the symbolic link, keystone works.
>
> What is the correct workflow after I have performed ./unstack.sh?
> Should I run ./stack.sh, or is this a bug?
>
> Cheers,
> Angelo
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron]why FIP is integrated into router not as a separated service like XxxaaS?

2014-11-04 Thread Germy Lure
Hi,

Address translation (FIP, SNAT and DNAT) looks like an advanced service. Why
is it integrated into the L3 router? Actually, this is not how it's done in
practice: address translation is usually provided by a firewall device, not
a router.

What's the design concept?

Thanks & Regards,
Germy
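For context, a floating IP is essentially 1:1 NAT: inbound destination rewriting plus outbound source rewriting. A toy sketch of that translation, with purely illustrative addresses, makes the question concrete regardless of whether a router or a firewall performs it.

```python
# Floating IP as 1:1 NAT. The mapping would live wherever the service is
# implemented (L3 router namespace, firewall appliance, ...).

FIP_MAP = {"203.0.113.10": "10.0.0.5"}   # floating -> fixed
REVERSE = {v: k for k, v in FIP_MAP.items()}


def dnat(dst):
    """Inbound: rewrite a floating destination to the fixed address."""
    return FIP_MAP.get(dst, dst)


def snat(src):
    """Outbound: rewrite a fixed source back to its floating address."""
    return REVERSE.get(src, src)
```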
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][TripleO] Clear all flows when ovs agent start? why and how avoid?

2014-11-04 Thread Germy Lure
Hi,

Considering what triggers an agent restart, I think it's nothing but:
1) restarting only the agent
2) rebooting the host the agent is deployed on

When the agent starts, OVS may:
a. have all correct flows
b. have nothing at all
c. have partly correct flows; the others may need to be reprogrammed,
deleted or added

In any case, I think both users and developers would be happy to see the
system recover ASAP after an agent restart. Ideally the agent would push
only the incorrect flows and keep the correct ones. This ensures traffic
using correct flows keeps working while the agent starts.

So, I suggest two solutions:
1. After restarting, the agent gets all flows from OVS, compares them with
its local flows, and corrects only the ones that differ.
2. Adapt both OVS and the agent: the agent pushes all flows (removing none)
every time, and OVS keeps two flow tables and switches between them (like
an RCU lock).

1 is recommended because of third-party vendors.

BR,
Germy
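Solution 1 (diff-and-correct) might look roughly like this in outline. The flow representation is purely illustrative; a real agent would parse `ovs-ofctl dump-flows` output and apply the plan through the OVS library.

```python
# Sketch of solution 1: diff the flows actually present in OVS against the
# agent's desired state and touch only the differences, so correct flows
# (and the traffic using them) are left alone during restart.
# Flows are modelled as match -> actions mappings.

def plan_flow_sync(desired, actual):
    """Return (to_add, to_modify, to_delete) needed to make actual == desired."""
    to_add = {m: a for m, a in desired.items() if m not in actual}
    to_modify = {m: a for m, a in desired.items()
                 if m in actual and actual[m] != a}
    to_delete = [m for m in actual if m not in desired]
    return to_add, to_modify, to_delete
```

The "reset all flows" startup flag discussed earlier in the thread then degenerates to treating `actual` as empty.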


On Fri, Oct 31, 2014 at 10:28 PM, Ben Nemec  wrote:

> On 10/29/2014 10:17 AM, Kyle Mestery wrote:
> > On Wed, Oct 29, 2014 at 7:25 AM, Hly  wrote:
> >>
> >>
> >> Sent from my iPad
> >>
> >> On 2014-10-29, at 8:01 PM, Robert van Leeuwen <
> robert.vanleeu...@spilgames.com> wrote:
> >>
> > I find our current design is to remove all flows and then add flows
> > entry by entry; this will cause every network node to break off all
> > tunnels to the other network nodes and all compute nodes.
>  Perhaps a way around this would be to add a flag on agent startup
>  which would have it skip reprogramming flows. This could be used for
>  the upgrade case.
> >>>
> >>> I hit the same issue last week and filed a bug here:
> >>> https://bugs.launchpad.net/neutron/+bug/1383674
> >>>
> >>> From an operator's perspective this is VERY annoying, since you also
> >>> cannot push any config change that requires/triggers a restart of the
> >>> agent.
> >>> e.g. something simple like changing a log setting becomes a hassle.
> >>> I would prefer the default behaviour to be to not clear the flows, or
> >>> at the least a config option to disable it.
> >>>
> >>
> >> +1, we also suffered from this even when a very little patch is done
> >>
> > I'd really like to get some input from the tripleo folks, because they
> > were the ones who filed the original bug here and were hit by the
> > agent NOT reprogramming flows on agent restart. It does seem fairly
> > obvious that adding an option around this would be a good way forward,
> > however.
>
> Since nobody else has commented, I'll put in my two cents (though I
> might be overcharging you ;-).  I've also added the TripleO tag to the
> subject, although with Summit coming up I don't know if that will help.
>
> Anyway, if the bug you're referring to is the one I think, then our
> issue was just with the flows not existing.  I don't think we care
> whether they get reprogrammed on agent restart or not as long as they
> somehow come into existence at some point.
>
> It's possible I'm wrong about that, and probably the best person to talk
> to would be Robert Collins since I think he's the one who actually
> tracked down the problem in the first place.
>
> -Ben
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] devstack build is failing

2014-11-04 Thread Pradip Mukhopadhyay
Thanks Anish.


I guess the same happened to me: a mix of python packages. What I did was
'pip uninstall', followed by ./unstack.sh, then ./clean.sh, then ./stack.sh.
It worked. So far so good.



--pradip



On Wed, Nov 5, 2014 at 5:00 AM, Anish Bhatt  wrote:

>  I had similar errors due to a mixture of python packages installed via
> yum not playing too nicely with pip. I ended up nuking most of the install
> to fix it, so I don't have a recommendation beyond that; plus I was using
> RHEL 7. Maybe this is of interest to you:
>
> https://bugs.launchpad.net/oslo.config/+bug/1374741
>
>
>
> -Anish
>
>
>
> *From:* Pradip Mukhopadhyay [mailto:pradip.inte...@gmail.com]
> *Sent:* Tuesday, November 4, 2014 2:27 AM
> *To:* openstack-dev@lists.openstack.org
> *Subject:* [openstack-dev] devstack build is failing
>
>
>
> Hello,
>
>
>
> Trying first time to build a devstack in my Ubuntu-14.10 VM. Getting the
> following error:
>
>
> 2014-11-04 09:27:32.182 | + recreate_database_mysql nova latin1
> 2014-11-04 09:27:32.182 | + local db=nova
> 2014-11-04 09:27:32.182 | + local charset=latin1
> 2014-11-04 09:27:32.182 | + mysql -uroot -pstackdb -h127.0.0.1 -e 'DROP
> DATABASE IF EXISTS nova;'
> 2014-11-04 09:27:32.186 | + mysql -uroot -pstackdb -h127.0.0.1 -e 'CREATE
> DATABASE nova CHARACTER SET latin1;'
> 2014-11-04 09:27:32.189 | + /usr/local/bin/nova-manage db sync
> 2014-11-04 09:27:32.455 | Traceback (most recent call last):
> 2014-11-04 09:27:32.455 |   File "/usr/local/bin/nova-manage", line 6, in
> 
> 2014-11-04 09:27:32.455 | from nova.cmd.manage import main
> 2014-11-04 09:27:32.455 |   File
> "/home/ubuntu/devstack/nova/nova/cmd/manage.py", line 68, in 
> 2014-11-04 09:27:32.455 | from nova.api.ec2 import ec2utils
> 2014-11-04 09:27:32.455 |   File
> "/home/ubuntu/devstack/nova/nova/api/ec2/__init__.py", line 34, in 
> 2014-11-04 09:27:32.455 | from nova.api.ec2 import faults
> 2014-11-04 09:27:32.455 |   File
> "/home/ubuntu/devstack/nova/nova/api/ec2/faults.py", line 20, in 
> 2014-11-04 09:27:32.455 | from nova import utils
> 2014-11-04 09:27:32.455 |   File
> "/home/ubuntu/devstack/nova/nova/utils.py", line 39, in 
> 2014-11-04 09:27:32.456 | from oslo.concurrency import lockutils
> 2014-11-04 09:27:32.456 |   File
> "/usr/local/lib/python2.7/dist-packages/oslo/concurrency/lockutils.py",
> line 30, in 
> 2014-11-04 09:27:32.456 | from oslo.config import cfgfilter
> 2014-11-04 09:27:32.456 | ImportError: cannot import name cfgfilter
> 2014-11-04 09:27:32.471 | + exit_trap
> 2014-11-04 09:27:32.471 | + local r=1
> 2014-11-04 09:27:32.471 | ++ jobs -p
> 2014-11-04 09:27:32.472 | + jobs=
> 2014-11-04 09:27:32.472 | + [[ -n '' ]]
> 2014-11-04 09:27:32.472 | + kill_spinner
> 2014-11-04 09:27:32.472 | + '[' '!' -z '' ']'
> 2014-11-04 09:27:32.472 | + [[ 1 -ne 0 ]]
>
>
>
>  Any idea what is missing? Any help is highly appreciated.
>
>
>   Some more details may be helpful:
>
> ubuntu@ubuntu:~$ uname -a
>
> Linux ubuntu 3.13.0-39-generic #66-Ubuntu SMP Tue Oct 28 13:30:27 UTC 2014
> x86_64 x86_64 x86_64 GNU/Linux
>
>
>
> ubuntu@ubuntu:~$ python --version
>
> Python 2.7.6
>
> ubuntu@ubuntu:~$ which python
>
> /usr/bin/python
>
>
>
>
>
>
>
>
>
> Thanks,
>
> Pradip
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [ceilometer] unable to collect compute.node.cpu.* data

2014-11-04 Thread Du Jun
Hi all,

I attempt to collect compute.node.cpu as the following link mentions:

http://docs.openstack.org/developer/ceilometer/measurements.html#compute-nova

I set:

compute_monitors = ComputeDriverCPUMonitor

in /etc/nova/nova.conf and restart nova-compute, nova-scheduler,
ceilometer-agent-notification, ceilometer-api, ceilometer-collector.

From ceilometer-agent-notification's log, I can see agent transform and
publish data samples compute.node.cpu.*

What's more, from ceilometer database, I can see all the meters
compute.node.cpu.*

mysql> select * from meter;

+----+---------------------------------+------------+------+
| id | name                            | type       | unit |
+----+---------------------------------+------------+------+
| 39 | compute.node.cpu.frequency      | gauge      | MHz  |
| 41 | compute.node.cpu.idle.percent   | gauge      | %    |
| 38 | compute.node.cpu.idle.time      | cumulative | ns   |
| 45 | compute.node.cpu.iowait.percent | gauge      | %    |
| 42 | compute.node.cpu.iowait.time    | cumulative | ns   |
| 36 | compute.node.cpu.kernel.percent | gauge      | %    |
| 44 | compute.node.cpu.kernel.time    | cumulative | ns   |
| 37 | compute.node.cpu.percent        | gauge      | %    |
| 43 | compute.node.cpu.user.percent   | gauge      | %    |
| 40 | compute.node.cpu.user.time      | cumulative | ns   |
+----+---------------------------------+------------+------+


However, when I type

ceilometer meter-list

It shows nothing about compute.node.cpu.*, so I wonder what's wrong with my
steps.

--
Regards,
Frank
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] Adding support for iSCSI helper

2014-11-04 Thread Anish Bhatt
This is very helpful, thank you !  Is this planned for kilo ?

> -Original Message-
> From: John Griffith [mailto:john.griffi...@gmail.com]
> Sent: Tuesday, November 4, 2014 5:00 PM
> To: OpenStack Development Mailing List (not for usage questions)
> Cc: Karen Xie
> Subject: Re: [openstack-dev] [cinder] Adding support for iSCSI helper
> 
> On Wed, Nov 5, 2014 at 12:40 AM, Anish Bhatt  wrote:
> > Do the minimum driver features listed here
> > https://wiki.openstack.org/wiki/Cinder#Minimum_Driver_Features  still
> > hold if I'm trying to add support for something that is a drop-in
> > replacement for iet/tgt that has no volume support? Does this still
> > need to be shipped as a driver plugin?
> >
> >
> >
> > If the volume support stuff is a hard requirement, is it acceptable to
> > reuse the LVM backend already shipped with cinder (or brick, though
> > I’m not a 100% sure of the brick-cinder separation). Are there any
> > examples of this available at all ?
> >
> > -Anish
> >
> >
> >
> > One socket to bind them all.
> >
> >
> >
> >
> > ___
> > OpenStack-dev mailing list
> > OpenStack-dev@lists.openstack.org
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
> 
> Hi Anish,
> 
> No IMO those requirements are specific for Backend Storage Devices and the
> drivers submitted for them.  Target Helpers are a different category.  That
> being said, I've been trying to get us away from the current model we have
> of the Target Helper/Driver inheritance.  We almost had this in Juno but it
> was very late in the release and I wasn't comfortable releasing it right 
> before
> RC.
> 
> Since then a number of changes have gone in and I've had to do some
> rework of it.  You can get an idea of the direction I'm hoping to go here 
> [1].  If
> you have a new Target Helper you'd like to submit it would be great if you
> took a look at this model going forward, it should make life a good bit easier
> for you.  Feel free to grab me on IRC and maybe I can just roll your addition
> into the work I've got in progress already.
> 
> Note that the follow up patch that actually separates the Target from the
> Volume Driver has been abandoned and I'm going to need to rewrite it.
> 
> Thanks,
> John
> 
> [1]: https://review.openstack.org/#/c/131860/
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] Adding support for iSCSI helper

2014-11-04 Thread John Griffith
On Wed, Nov 5, 2014 at 12:40 AM, Anish Bhatt  wrote:
> Do the minimum driver features listed here
> https://wiki.openstack.org/wiki/Cinder#Minimum_Driver_Features  still hold
> if I'm trying to add support for something that is a drop-in replacement
> for iet/tgt that has no volume support? Does this still need to be shipped
> as a driver plugin?
>
>
>
> If the volume support stuff is a hard requirement, is it acceptable to reuse
> the LVM backend already shipped with cinder (or brick, though I’m not a 100%
> sure of the brick-cinder separation). Are there any examples of this
> available at all ?
>
> -Anish
>
>
>
> One socket to bind them all.
>
>
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

Hi Anish,

No IMO those requirements are specific for Backend Storage Devices and
the drivers submitted for them.  Target Helpers are a different
category.  That being said, I've been trying to get us away from the
current model we have of the Target Helper/Driver inheritance.  We
almost had this in Juno but it was very late in the release and I
wasn't comfortable releasing it right before RC.

Since then a number of changes have gone in and I've had to do some
rework of it.  You can get an idea of the direction I'm hoping to go
here [1].  If you have a new Target Helper you'd like to submit it
would be great if you took a look at this model going forward, it
should make life a good bit easier for you.  Feel free to grab me on
IRC and maybe I can just roll your addition into the work I've got in
progress already.

Note that the follow up patch that actually separates the Target from
the Volume Driver has been abandoned and I'm going to need to rewrite
it.

Thanks,
John

[1]: https://review.openstack.org/#/c/131860/
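The separation described above might look, in miniature, like composition instead of inheritance: the volume driver holds a target helper object, so a new iSCSI target (e.g. a drop-in tgt/iet replacement) plugs in without touching the LVM volume code. Class and method names below are illustrative, not the actual Cinder code.

```python
# Sketch: target helper injected into the volume driver rather than mixed
# in via inheritance, so targets and volume backends vary independently.

class TgtTarget:
    """A stand-in target helper; a tgt/iet replacement would be a sibling."""
    def create_export(self, volume):
        return "iqn.2014-11.org.example:%s" % volume


class LVMVolumeDriver:
    def __init__(self, target):
        self.target = target  # injected, not inherited

    def export(self, volume):
        return self.target.create_export(volume)
```

Swapping targets is then a one-line change at construction time, which is the point of moving away from the Target Helper/Driver inheritance model.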

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [cinder] Adding support for iSCSI helper

2014-11-04 Thread Anish Bhatt
Do the minimum driver features listed here
https://wiki.openstack.org/wiki/Cinder#Minimum_Driver_Features  still hold if
I'm trying to add support for something that is a drop-in replacement for
iet/tgt that has no volume support? Does this still need to be shipped as a
driver plugin?

If the volume support stuff is a hard requirement, is it acceptable to reuse
the LVM backend already shipped with cinder (or brick, though I'm not 100%
sure of the brick-cinder separation)? Are there any examples of this
available at all?
-Anish

One socket to bind them all.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] devstack build is failing

2014-11-04 Thread Anish Bhatt
I had similar errors due to a mixture of python packages installed via yum
not playing too nicely with pip. I ended up nuking most of the install to fix
it, so I don't have a recommendation beyond that; plus I was using RHEL 7.
Maybe this is of interest to you:
https://bugs.launchpad.net/oslo.config/+bug/1374741
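The failure reported in this thread reduces to a single import (`from oslo.config import cfgfilter`). A small stdlib-only helper like this can confirm whether the installed oslo.config actually provides that module before re-running stack.sh; the dotted name checked is taken from the traceback, everything else is a generic sketch.

```python
# Quick diagnostic: is a given module importable in this environment?
import importlib


def importable(name):
    try:
        importlib.import_module(name)
        return True
    except ImportError:
        return False

# e.g. importable("oslo.config.cfgfilter") returning False would point at
# the yum/pip package mix described above.
```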

-Anish

From: Pradip Mukhopadhyay [mailto:pradip.inte...@gmail.com]
Sent: Tuesday, November 4, 2014 2:27 AM
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev] devstack build is failing

Hello,

Trying first time to build a devstack in my Ubuntu-14.10 VM. Getting the 
following error:


2014-11-04 09:27:32.182 | + recreate_database_mysql nova latin1
2014-11-04 09:27:32.182 | + local db=nova
2014-11-04 09:27:32.182 | + local charset=latin1
2014-11-04 09:27:32.182 | + mysql -uroot -pstackdb -h127.0.0.1 -e 'DROP 
DATABASE IF EXISTS nova;'
2014-11-04 09:27:32.186 | + mysql -uroot -pstackdb -h127.0.0.1 -e 'CREATE 
DATABASE nova CHARACTER SET latin1;'
2014-11-04 09:27:32.189 | + /usr/local/bin/nova-manage db sync
2014-11-04 09:27:32.455 | Traceback (most recent call last):
2014-11-04 09:27:32.455 |   File "/usr/local/bin/nova-manage", line 6, in 

2014-11-04 09:27:32.455 | from nova.cmd.manage import main
2014-11-04 09:27:32.455 |   File 
"/home/ubuntu/devstack/nova/nova/cmd/manage.py", line 68, in 
2014-11-04 09:27:32.455 | from nova.api.ec2 import ec2utils
2014-11-04 09:27:32.455 |   File 
"/home/ubuntu/devstack/nova/nova/api/ec2/__init__.py", line 34, in 
2014-11-04 09:27:32.455 | from nova.api.ec2 import faults
2014-11-04 09:27:32.455 |   File 
"/home/ubuntu/devstack/nova/nova/api/ec2/faults.py", line 20, in 
2014-11-04 09:27:32.455 | from nova import utils
2014-11-04 09:27:32.455 |   File "/home/ubuntu/devstack/nova/nova/utils.py", 
line 39, in 
2014-11-04 09:27:32.456 | from oslo.concurrency import lockutils
2014-11-04 09:27:32.456 |   File 
"/usr/local/lib/python2.7/dist-packages/oslo/concurrency/lockutils.py", line 
30, in 
2014-11-04 09:27:32.456 | from oslo.config import cfgfilter
2014-11-04 09:27:32.456 | ImportError: cannot import name cfgfilter
2014-11-04 09:27:32.471 | + exit_trap
2014-11-04 09:27:32.471 | + local r=1
2014-11-04 09:27:32.471 | ++ jobs -p
2014-11-04 09:27:32.472 | + jobs=
2014-11-04 09:27:32.472 | + [[ -n '' ]]
2014-11-04 09:27:32.472 | + kill_spinner
2014-11-04 09:27:32.472 | + '[' '!' -z '' ']'
2014-11-04 09:27:32.472 | + [[ 1 -ne 0 ]]



Any idea what is missing? Any help is highly appreciated.


Some more details may be helpful:
ubuntu@ubuntu:~$ uname -a
Linux ubuntu 3.13.0-39-generic #66-Ubuntu SMP Tue Oct 28 13:30:27 UTC 2014 
x86_64 x86_64 x86_64 GNU/Linux

ubuntu@ubuntu:~$ python --version
Python 2.7.6
ubuntu@ubuntu:~$ which python
/usr/bin/python




Thanks,
Pradip

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] [nfv] VM-based VLAN trunking blueprints

2014-11-04 Thread Erik Moe

It sounds like this would match VLAN trunk.

Maybe it could be mapped to a trunk port also, but I have not really worked
with flat networks, so I am not sure what DHCP etc. looks like.

Is it desired to be able to control port membership of VLANs, or is it OK to
connect all VLANs to all ports?

/Erik
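The membership question above can be made concrete with a small model: one parent port plus a set of subports, each mapping a VLAN id on the wire to a network. Controlling VLAN membership then becomes adding or removing subports. This is purely a sketch, not the eventual Neutron API.

```python
# Toy model of a trunk port: parent port + vlan_id -> network mapping.

class TrunkPort:
    def __init__(self, parent_port):
        self.parent_port = parent_port
        self.subports = {}  # vlan_id -> network_id

    def add_subport(self, vlan_id, network_id):
        if vlan_id in self.subports:
            raise ValueError("VLAN %d already mapped" % vlan_id)
        self.subports[vlan_id] = network_id

    def network_for(self, vlan_id):
        """Which network a tagged frame belongs to; None if unmapped."""
        return self.subports.get(vlan_id)
```

Under this model, "connect all VLANs to all ports" would mean every port shares one subport table, while per-port membership means each TrunkPort keeps its own.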


From: Wuhongning [mailto:wuhongn...@huawei.com]
Sent: den 4 november 2014 03:41
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [neutron] [nfv] VM-based VLAN trunking blueprints

Is the trunk port use case like the super vlan?

Also there is another typical use case that may not be covered: an extended
flat network. Traffic on the port carries multiple VLANs, but these VLANs
are not necessarily managed by Neutron networks, so the port cannot be
classified as a trunk port. And they don't need a gateway to communicate
with other nodes in the physical provider network; what they expect Neutron
to do is much like what the flat network does (so I call it "extended
flat"): just keep the packets as-is, bidirectionally, between wire and vNIC.


From: Erik Moe [erik@ericsson.com]
Sent: Tuesday, November 04, 2014 5:36 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [neutron] [nfv] VM-based VLAN trunking blueprints

I created an etherpad and added use cases (so far just the ones in your email).

https://etherpad.openstack.org/p/tenant_vlans

/Erik


From: Erik Moe [mailto:erik@ericsson.com]
Sent: den 2 november 2014 23:12
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [neutron] [nfv] VM-based VLAN trunking blueprints



From: Ian Wells [mailto:ijw.ubu...@cack.org.uk]
Sent: den 31 oktober 2014 23:35
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [neutron] [nfv] VM-based VLAN trunking blueprints


On 31 October 2014 06:29, Erik Moe <erik@ericsson.com> wrote:


I thought Monday's network meeting agreed that "VLAN aware VMs", Trunk network 
+ L2GW were different use cases.

Still I get the feeling that the proposals are put up against each other.

I think we agreed they were different, or at least the light was beginning to 
dawn on the differences, but Maru's point was that if we really want to decide 
what specs we have we need to show use cases not just for each spec 
independently, but also include use cases where e.g. two specs are required and 
the third doesn't help, so as to show that *all* of them are needed.  In fact, 
I suggest that first we do that - here - and then we meet up one lunchtime and 
attack the specs in etherpad before submitting them.  In theory we could have 
them reviewed and approved by the end of the week.  (This theory may not be 
very realistic, but it's good to set lofty goals, my manager tells me.)
Ok, let's try. I hope your theory turns out to be realistic. :)
Here are some examples why bridging between Neutron internal networks using 
trunk network and L2GW IMO should be avoided. I am still fine with bridging to 
external networks.

Assuming VM with trunk port wants to use floating IP on specific VLAN. Router 
has to be created on a Neutron network behind L2GW since Neutron router cannot 
handle VLANs. (Maybe not too common use case, but just to show what kind of 
issues you can get into)
neutron floatingip-associate FLOATING_IP_ID INTERNAL_VM_PORT_ID
The code to check if the port is valid has to be able to traverse the L2GW. Handling of 
IP addresses of VM will most likely be affected since VM port is connected to 
several broadcast domains. Alternatively new API can be created.

Now, this is a very good argument for 'trunk ports', yes.  It's not actually an 
argument against bridging between networks.  I think the bridging case 
addresses use cases (generally NFV use cases) where you're not interested in 
Openstack managing addresses - often because you're forwarding traffic rather 
than being an endpoint, and/or you plan on disabling all firewalling for speed 
reasons, but perhaps because you wish to statically configure an address rather 
than use DHCP.  The point is that, in the absence of a need for address-aware 
functions, you don't really care much about ports, and in fact configuring 
ports with many addresses may simply be overhead.  Also, as you say, this 
doesn't address the external bridging use case where what you're bridging to is 
not necessarily in Openstack's domain of control.
I know that many NFVs currently prefer to manage everything themselves. At the 
same time, IMO, I think they should be encouraged to become Neutronified.
In "VLAN aware VMs" the trunk port MAC address has to be globally unique since 
it can be connected to any network; other ports still only have to be unique 
per network. But for L2GW all MAC addresses have to be globally unique since 
they might be bridged together at a later stage.

I'm not sure that that's particularly a problem - any VM with a port will have 
one globally unique MAC address.  I wonder if I'm missing the point here, 
though.

Re: [openstack-dev] [neutron] [nfv] VM-based VLAN trunking blueprints

2014-11-04 Thread Erik Moe

Hi,

I have reserved the last slot on Friday.

https://etherpad.openstack.org/p/neutron-kilo-meetup-slots

/Erik


From: Richard Woo [mailto:richardwoo2...@gmail.com]
Sent: den 3 november 2014 23:56
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [neutron] [nfv] VM-based VLAN trunking blueprints

Hello, will this topic be discussed in the design session?

Richard

On Mon, Nov 3, 2014 at 10:36 PM, Erik Moe <erik@ericsson.com> wrote:

I created an etherpad and added use cases (so far just the ones in your email).

https://etherpad.openstack.org/p/tenant_vlans

/Erik


From: Erik Moe [mailto:erik@ericsson.com]
Sent: den 2 november 2014 23:12

To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [neutron] [nfv] VM-based VLAN trunking blueprints



From: Ian Wells [mailto:ijw.ubu...@cack.org.uk]
Sent: den 31 oktober 2014 23:35
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [neutron] [nfv] VM-based VLAN trunking blueprints


On 31 October 2014 06:29, Erik Moe <erik@ericsson.com> wrote:


I thought Monday's network meeting agreed that “VLAN aware VMs”, Trunk network 
+ L2GW were different use cases.

Still I get the feeling that the proposals are put up against each other.

I think we agreed they were different, or at least the light was beginning to 
dawn on the differences, but Maru's point was that if we really want to decide 
what specs we have we need to show use cases not just for each spec 
independently, but also include use cases where e.g. two specs are required and 
the third doesn't help, so as to show that *all* of them are needed.  In fact, 
I suggest that first we do that - here - and then we meet up one lunchtime and 
attack the specs in etherpad before submitting them.  In theory we could have 
them reviewed and approved by the end of the week.  (This theory may not be 
very realistic, but it's good to set lofty goals, my manager tells me.)
Ok, let’s try. I hope your theory turns out to be realistic. ☺
Here are some examples why bridging between Neutron internal networks using 
trunk network and L2GW IMO should be avoided. I am still fine with bridging to 
external networks.

Assuming VM with trunk port wants to use floating IP on specific VLAN. Router 
has to be created on a Neutron network behind L2GW since Neutron router cannot 
handle VLANs. (Maybe not too common use case, but just to show what kind of 
issues you can get into)
neutron floatingip-associate FLOATING_IP_ID INTERNAL_VM_PORT_ID
The code to check if the port is valid has to be able to traverse the L2GW. Handling of 
IP addresses of VM will most likely be affected since VM port is connected to 
several broadcast domains. Alternatively new API can be created.

Now, this is a very good argument for 'trunk ports', yes.  It's not actually an 
argument against bridging between networks.  I think the bridging case 
addresses use cases (generally NFV use cases) where you're not interested in 
Openstack managing addresses - often because you're forwarding traffic rather 
than being an endpoint, and/or you plan on disabling all firewalling for speed 
reasons, but perhaps because you wish to statically configure an address rather 
than use DHCP.  The point is that, in the absence of a need for address-aware 
functions, you don't really care much about ports, and in fact configuring 
ports with many addresses may simply be overhead.  Also, as you say, this 
doesn't address the external bridging use case where what you're bridging to is 
not necessarily in Openstack's domain of control.
I know that many NFVs currently prefer to manage everything themselves. At the 
same time, IMO, I think they should be encouraged to become Neutronified.
In “VLAN aware VMs” the trunk port MAC address has to be globally unique since 
it can be connected to any network; other ports still only have to be unique 
per network. But for L2GW all MAC addresses have to be globally unique since 
they might be bridged together at a later stage.

I'm not sure that that's particularly a problem - any VM with a port will have 
one globally unique MAC address.  I wonder if I'm missing the point here, 
though.
Ok, this was probably too specific, sorry. Neutron can reuse MAC addresses 
among Neutron networks. But I guess this is configurable.
Also some implementations might not be able to take VID into account when doing 
mac address learning, forcing at least unique macs on a trunk network.

If an implementation struggles with VLANs then the logical thing to do would be 
not to implement them in that driver.  Which is fine: I would expect (for 
instance) LB-driver networking to work for this and leave OVS-driver networking 
to never work for this, because there's little point in fixing it.
Same as above, this is related to reuse of MAC addresses.
Benefits with “VLAN aware VMs” are integration with ex

Re: [openstack-dev] [Neutron] BGP - VPN BoF session in Kilo design summit

2014-11-04 Thread Richard Woo
Did we have room for this discussion?

On Tue, Nov 4, 2014 at 6:09 PM, Mathieu Rohon 
wrote:

> Hi,
>
> Thanks Jaume, it makes sense since those use cases need l3-agent
> refactoring.
> I've updated the BGPVPN etherpad [1] with materials used during
> today's techtalk.
>
> I hope this will help everyone to better understand our use cases.
>
> [1]https://etherpad.openstack.org/p/bgpvpn
>
> On Tue, Nov 4, 2014 at 2:43 PM, Jaume Devesa  wrote:
> > Hello,
> >
> > BoF will be Wednesday 5 at 15:00 pm at Design Summit building. We will
> have
> > the chance to talk about it into the Kilo L3 refactoring BoF
> >
> > https://etherpad.openstack.org/p/kilo-l3-refactoring
> >
> > Cheers,
> >
> > On 30 October 2014 07:28, Carl Baldwin  wrote:
> >>
> >> Yes, let's discuss this in the meeting on Thursday.
> >>
> >> Carl
> >>
> >> On Oct 29, 2014 5:27 AM, "Jaume Devesa"  wrote:
> >>>
> >>> Hello,
> >>>
> >>> it seems like the BGP dynamic routing it is in a good shape to be
> >>> included in Neutron during Kilo[1]. There is quite interest in offer
> BGP-VPN
> >>> too. Mathieu Rohon's spec[2] goes in this direction. Of course it makes
> >>> sense that his development leverages the BGP one.
> >>>
> >>> I would like to have a BoF session and invite anyone interested on
> these
> >>> blueprints to join us or even add a new related one. I've created an
> >>> etherpad[3] to share ideas and agree with session schedule. I propose
> >>> Wednesday afternoon.
> >>>
> >>> If Carl Baldwin agrees, we can talk about it also during the open
> >>> discussion of today's L3 subteam meeting.
> >>>
> >>> [1]: https://review.openstack.org/#/c/125401/
> >>> [2]: https://review.openstack.org/#/c/125401/
> >>> [3]: https://etherpad.openstack.org/p/bgp-vpn-dynamic-routing
> >>>
> >>> Cheers,
> >>> --
> >>> Jaume Devesa
> >>> Software Engineer at Midokura
> >>>
> >>> ___
> >>> OpenStack-dev mailing list
> >>> OpenStack-dev@lists.openstack.org
> >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >>>
> >>
> >> ___
> >> OpenStack-dev mailing list
> >> OpenStack-dev@lists.openstack.org
> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >>
> >
> >
> >
> > --
> > Jaume Devesa
> > Software Engineer at Midokura
> >
> > ___
> > OpenStack-dev mailing list
> > OpenStack-dev@lists.openstack.org
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [OpenStack-Dev] [BarbicanClient] [Cinder] Cinder unit tests failing

2014-11-04 Thread John Griffith
Hey Everyone,

So there's been a bit of activity around barbicanclient as of late due to
version 3.0.0 causing unit test failures as a result of cliff
dependencies in stable.

Unfortunately, there's a detail that's been neglected here.  Looking
at the logs for the unit test it turns out that barbicanclient removed
a module.  That would be fine, but sadly a unit test was written in
Cinder that imported a module from python-barbicanclient directly
(that's bad!!).  So as a result said unit tests now obviously fail.

As a temporary solution I've removed
cinder/tests/keymgr/test_barbican.py from Cinder's unit tests. I'll
look at rewriting the unit tests or maybe some of the barbican folks
would be willing to step up and take a shot at cleaning this all up
before I get a chance.

For reference I've filed a cinder bug [1]

Thanks,
John

[1]: https://bugs.launchpad.net/cinder/+bug/1389419

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Heat] no meeting this week

2014-11-04 Thread Angus Salkeld
Hi

As we are all (mostly) at summit, let's cancel this week's meeting.

Regards
Angus
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Keystone] Alternative federation mapping

2014-11-04 Thread David Chadwick
Hi John

there is another way of tackling this problem, based on the following
assumption "We know the person who should be authorised, but we don't
know what random set of attributes the IDP will provide for him (or if
we do know, they are as horrible as you indicated in your example below)".

We will demonstrate this solution this week at the keystone design
summit. It is based on the administrator sending the user (out of band)
the name of the group he is invited to join (specified as a Virtual
Organisation name and role) plus a secret. The user logs in with his
IDP, the IDP sends the unknown/horrible set of attributes to keystone,
the user asks to join the group and presents the secret, and our code
automatically adds a mapping rule specifically for this user to put him
in the group. The user can either be automatically added to the group,
or put in a pending queue for the administrator to OK at a later time.

Users who try to hack in are blacklisted if they present a wrong secret
three times.
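The flow described above can be sketched as a small state machine; everything here (class and method names, the attempt limit of three) is illustrative and is not the actual demo code:

```python
MAX_ATTEMPTS = 3

class GroupInvites:
    """Toy model of the invite flow: the admin issues (group, secret),
    the user presents the secret, and wrong guesses lead to a blacklist."""

    def __init__(self):
        self._invites = {}      # group -> secret
        self._members = {}      # group -> set of users
        self._failures = {}     # user -> failed attempt count
        self._blacklist = set()

    def invite(self, group, secret):
        self._invites[group] = secret
        self._members.setdefault(group, set())

    def join(self, user, group, secret):
        if user in self._blacklist:
            return "blacklisted"
        if self._invites.get(group) == secret:
            self._members[group].add(user)
            self._failures.pop(user, None)
            return "joined"   # or "pending", if admin approval is required
        self._failures[user] = self._failures.get(user, 0) + 1
        if self._failures[user] >= MAX_ATTEMPTS:
            self._blacklist.add(user)
            return "blacklisted"
        return "denied"

invites = GroupInvites()
invites.invite("vo-physics:member", "s3cret")
print(invites.join("alice", "vo-physics:member", "s3cret"))    # joined
print(invites.join("mallory", "vo-physics:member", "guess1"))  # denied
print(invites.join("mallory", "vo-physics:member", "guess2"))  # denied
print(invites.join("mallory", "vo-physics:member", "guess3"))  # blacklisted
```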

Would this solve your problem?

regards

David



On 04/11/2014 15:30, John Dennis wrote:
> On 11/04/2014 02:46 AM, David Chadwick wrote:
>> Hi John
> 
> Good morning David. I hope you're enjoying Paris and the summit is both
> productive and enjoyable. Wish I was there :-)
> 
>> It seems like your objective is somewhat different to what was intended
>> with the current mapping rules. You seem to want a general purpose
>> mapping engine that can map from any set of attribute values into any
>> other set of values, whereas the primary objective of the current
>> mapping rules is to map from any set of attribute values into a Keystone
>> group, so that we can use the existing Keystone functions to assign
>> roles (and hence permissions) to the group members. So the current
>> mapping rules provide a means of identifying and then authorising a
>> potentially random set of external users, who logically form a coherent
>> group of users for authorisation purposes.
> 
> O.K. group assignment is the final goal in Keystone. I suppose the
> relevant question then is whether the functionality in the current Keystone
> mapper is sufficiently rich such that you can present to it an arbitrary
> set of values and yield a group assignment? It's been a while since I
> looked at the mapper, things might have changed, but it seemed to me it
> had a lot of baked in assumptions about the data (assertion) it would
> receive. As long as those assumptions held true all is good.
> 
> My concern arose from real world experience where I saw a lot of "messy"
> data (plus I had a task that had some other requirements). There is
> often little consistency over how data is formatted and what data is
> included when you receive it from a foreign source. Now combine the
> messy data with complex rules dictated by management and you have an
> admin with a headache who is asked to make sure the rules are
> implemented. An admin might have to implement something like this:
> 
> "If the user is a member of domain D and has authenticated with
> mechanisms X,Y or Z and has IdP attribute A but does not have suffix S
> in their username and is not in a blacklist then assign them group G and
> transform their username by stripping the suffix, replacing all hyphens
> with underscores and lowercase it."
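The rule quoted above could be sketched as a single predicate-plus-transform; every name here (domain "D", mechanisms "X"/"Y"/"Z", attribute "A", the suffix, group "G") is a placeholder taken straight from the English rule, not a real Keystone API:

```python
def map_user(user, domain, auth_mechanism, idp_attrs, blacklist):
    """Hypothetical encoding of the quoted rule: returns the
    (group, transformed_username) pair, or None if no match."""
    suffix = "@legacy"  # illustrative stand-in for suffix "S"
    if (domain == "D"
            and auth_mechanism in ("X", "Y", "Z")
            and "A" in idp_attrs
            and not user.endswith(suffix)
            and user not in blacklist):
        # Transform: strip the suffix (a no-op here, since users carrying
        # it were already rejected), replace hyphens with underscores,
        # and lowercase.
        name = user.replace("-", "_").lower()
        return ("G", name)
    return None

print(map_user("Jane-Doe", "D", "X", {"A"}, set()))  # ('G', 'jane_doe')
print(map_user("Jane-Doe", "D", "W", {"A"}, set()))  # None
```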
> 
> I'll grant you this example is perhaps a bit contrived but it's not too
> far afield from the questions I've seen admins ask when trying to manage
> actual RADIUS deployments. BTW, where is that domain information coming
> from in the example? Usually it has to be extracted from the username in
> any one of a number of formats.
> 
> It's things like this that motivate me towards a more general purpose
> mechanism because at the end of the day the real world isn't pretty :-)
> 
> FWIW FreeRADIUS didn't start out with a policy language with
> capabilities like this, it grew one out of necessity.
> 
> I'm definitely not trying to say Keystone needs to switch mappers,
> instead what I'm offering is one approach you might want to consider
> before the current mapping syntax becomes entrenched and ugly real world
> problems begin to crop up. I don't have any illusions this solution is
> ideal, these things are difficult to spec out and write. One advantage
> is it's easy to extend in a backwards compatible manner with minimal
> effort (basically it's just adding a new function you can "call").
> 
> FWIW the ideal mapper in my mind is something written in a general
> purpose scripting language where you have virtually no limitations on
> how you transform values and enforce conditions, but as I indicated in
> my other document managing a script interpreter for this task has its
> own problems. Which is the lesser of two evils, a script interpreter or
> a custom policy "language"? I don't think I have the answer to that but
> came down on the side of the custom policy "language" as being the most
> palatable and portable.
> 
>> Am I right in assuming that
>> you will a

Re: [openstack-dev] [Keystone] Alternative federation mapping

2014-11-04 Thread John Dennis
On 11/04/2014 04:19 PM, David Stanek wrote:
> There are probably a few other assumptions, but the main one is that the
> mapper expects the incoming data to be a dictionary where the value is a
> string. If there are multiple values we expect them to be delimited with
> a semicolon in the string.

and ...

any value with a semicolon will be split regardless of the value's semantic
meaning. In other words, the semicolon character is illegal in any context
other than as a list separator. And you had better hope that any value
whose semantic meaning is a list will use the semicolon separator and not
space, tab, comma, etc. as the separator.
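The hazard being described is easy to reproduce. Assuming the mapper does a plain delimiter split on every string value (a sketch of the stated behavior, not the actual Keystone code):

```python
# Naive multi-value handling as described: every string value is split
# on the delimiter, regardless of what the value actually means.
DELIM = ";"

def explode(assertion):
    return {k: v.split(DELIM) for k, v in assertion.items()}

# A value that legitimately contains the delimiter is silently
# mangled into two "values":
assertion = {
    "groups": "admins;operators",   # intended as a list: OK
    "displayName": "Smith; John",   # a single value: mangled
}
print(explode(assertion))
# {'groups': ['admins', 'operators'], 'displayName': ['Smith', ' John']}
```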

-- 
John

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Keystone] Alternative federation mapping

2014-11-04 Thread David Stanek
On Tue, Nov 4, 2014 at 10:30 AM, John Dennis  wrote:

> O.K. group assignment is the final goal in Keystone. I suppose the
> relevant question then is whether the functionality in the current Keystone
> mapper is sufficiently rich such that you can present to it an arbitrary
> set of values and yield a group assignment? It's been a while since I
> looked at the mapper, things might have changed, but it seemed to me it
> had a lot of baked in assumptions about the data (assertion) it would
> receive. As long as those assumptions held true all is good.
>

There are probably a few other assumptions, but the main one is that the
mapper expects the incoming data to be a dictionary where the value is a
string. If there are multiple values we expect them to be delimited with a
semicolon in the string.


-- 
David
blog: http://www.traceback.org
twitter: http://twitter.com/dstanek
www: http://dstanek.com
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS][Octavia] Usage Requirements

2014-11-04 Thread Jorge Miramontes
Hi Susanne,

Thanks for the reply. As Angus pointed out, the one big item that needs to be 
addressed with this method is network I/O of raw logs. One idea to mitigate 
this concern is to store the data locally for the operator-configured 
granularity, process it and THEN send it to ceilometer, etc. If we can't 
engineer a way to deal with the high network I/O that will inevitably occur we 
may have to move towards a polling approach. Thoughts?
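The "store locally, aggregate, then ship" idea can be sketched as follows; the interval, counter names and the sink are all placeholders, not Octavia code:

```python
import time

class UsageAggregator:
    """Accumulate per-loadbalancer counters locally and flush a compact
    summary to a metering sink (e.g. ceilometer) every `interval` seconds,
    instead of streaming raw log lines over the network."""

    def __init__(self, sink, interval=3600, clock=time.time):
        self.sink = sink
        self.interval = interval
        self.clock = clock
        self.counters = {}
        self.last_flush = clock()

    def record(self, lb_id, bytes_out):
        self.counters[lb_id] = self.counters.get(lb_id, 0) + bytes_out
        if self.clock() - self.last_flush >= self.interval:
            self.flush()

    def flush(self):
        if self.counters:
            self.sink.append(dict(self.counters))  # one small payload
        self.counters = {}
        self.last_flush = self.clock()

# Fake clock so the example is deterministic:
now = [0.0]
sink = []
agg = UsageAggregator(sink, interval=10, clock=lambda: now[0])
agg.record("lb-1", 500)
agg.record("lb-1", 700)
now[0] = 11.0
agg.record("lb-2", 40)   # interval has elapsed -> flush
print(sink)              # [{'lb-1': 1200, 'lb-2': 40}]
```

The trade-off this illustrates: raw log lines never leave the host on the hot path, at the cost of billing data being up to one interval stale.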

Cheers,
--Jorge

From: Susanne Balle <sleipnir...@gmail.com>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
<openstack-dev@lists.openstack.org>
Date: Tuesday, November 4, 2014 11:10 AM
To: "OpenStack Development Mailing List (not for usage questions)" 
<openstack-dev@lists.openstack.org>
Subject: Re: [openstack-dev] [Neutron][LBaaS][Octavia] Usage Requirements

Jorge

I understand your use cases around capturing of metrics, etc.

Today we mine the logs for usage information on our Hadoop cluster. In the 
future we'll capture all the metrics via ceilometer.

IMHO the amphorae should have an interface that allows the logs to be moved 
to various backends such as Elasticsearch, Hadoop HDFS, Swift, etc., as well 
as, by default (but with the option to disable it), Ceilometer. Ceilometer is 
the de facto metering service for OpenStack so we need to support it. We would 
like the integration with Ceilometer to be based on notifications. I believe 
German sent a reference to that in another email. The pre-processing will need 
to be optional and the amount of data aggregation configurable.
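The pluggable-backend idea can be sketched as a simple fan-out dispatcher; the backend names and the ceilometer toggle below are illustrative only, not an Octavia interface:

```python
class LogForwarder:
    """Fan a log record out to whatever backends the operator configured
    (Swift, Elasticsearch, HDFS, ...), with ceilometer enabled by default
    but disableable, as described above."""

    def __init__(self, backends, ceilometer=None, ceilometer_enabled=True):
        self.backends = list(backends)
        if ceilometer is not None and ceilometer_enabled:
            self.backends.append(ceilometer)

    def forward(self, record):
        for backend in self.backends:
            backend(record)

# Stand-in backends just collect records:
swift, ceilo = [], []
fwd = LogForwarder([swift.append], ceilometer=ceilo.append)
fwd.forward({"lb": "lb-1", "bytes": 1234})
print(len(swift), len(ceilo))  # 1 1

fwd_off = LogForwarder([swift.append], ceilometer=ceilo.append,
                       ceilometer_enabled=False)
fwd_off.forward({"lb": "lb-1", "bytes": 99})
print(len(swift), len(ceilo))  # 2 1
```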

What you describe below to me is usage gathering/metering. The billing is 
independent since companies with private clouds might not want to bill but 
still need usage reports for capacity planning etc. Billing/Charging is just 
putting a monetary value on the various forms of usage.

I agree with all points.

> - Capture logs in a scalable way (i.e. capture logs and put them on a
> separate scalable store somewhere so that it doesn't affect the amphora).

> - Every X amount of time (every hour, for example) process the logs and
> send them on their merry way to ceilometer or whatever service an operator
> will be using for billing purposes.

"Keep the logs": This is what we would use log forwarding to either Swift or 
Elastic Search, etc.

>- Keep logs for some configurable amount of time. This could be anything
> from indefinitely to not at all. Rackspace is planning on keeping them for
> a certain period of time for the following reasons:

It looks like we are in agreement so I am not sure why it sounded like we were 
in disagreement on the IRC. I am not sure why but it sounded like you were 
talking about something else when you were talking about the real time 
processing. If we are just talking about moving the logs to your Hadoop cluster 
or any backend in a scalable way, we agree.

Susanne


On Thu, Oct 23, 2014 at 6:30 PM, Jorge Miramontes <jorge.miramon...@rackspace.com> wrote:
Hey German/Susanne,

To continue our conversation from our IRC meeting could you all provide
more insight into you usage requirements? Also, I'd like to clarify a few
points related to using logging.

I am advocating that logs be used for multiple purposes, including
billing. Billing requirements are different that connection logging
requirements. However, connection logging is a very accurate mechanism to
capture billable metrics and thus, is related. My vision for this is
something like the following:

- Capture logs in a scalable way (i.e. capture logs and put them on a
separate scalable store somewhere so that it doesn't affect the amphora).
- Every X amount of time (every hour, for example) process the logs and
send them on their merry way to ceilometer or whatever service an operator
will be using for billing purposes.
- Keep logs for some configurable amount of time. This could be anything
from indefinitely to not at all. Rackspace is planning on keeping them for
a certain period of time for the following reasons:

A) We have connection logging as a planned feature. If a customer turns
on the connection logging feature for their load balancer it will already
have a history. One important aspect of this is that customers (at least
ours) tend to turn on logging after they realize they need it (usually
after a tragic lb event). By already capturing the logs I'm sure customers
will be extremely happy to see that there are already X days worth of logs
they can immediately sift through.
B) Operators and their support teams can leverage logs when providing
service to their customers. This is huge for finding issues and resolving
them quickly.
C) Albeit a minor point, building support for logs from the get-go
mitigates capacity management uncertainty. My example earlier was the
extreme case of every customer turning on logging at the same time. While
unlikely, I would hate to manage that!

I agree that there are other ways to capture bill

Re: [openstack-dev] [neutron] [nfv] VM-based VLAN trunking blueprints

2014-11-04 Thread A, Keshava

Hi Ian/Erik,

Consider a Service-VM that contains multiple features through which packets 
need to be processed one after the other.
Example: when a packet enters the Service-VM from the external network via 
OpenStack, it should first be processed by vNAT, and after that the packet 
should be processed by the DPI functionality.

How should the chaining of execution be controlled for each packet entering 
the NFV Service-VM?



1.   Should each feature's execution in the Service-VM be controlled by 
OpenStack, by having nested Q-in-Q (where each Q maps to the corresponding 
feature in that Service/NFV VM)?

Or

2.   Should it be communicated by the service layer to the Service-VM (outside 
OpenStack), with the execution chain then handled internally by the Service-VM 
itself, transparent to OpenStack?

Or is the thinking different here?
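To make option 1 concrete: nested Q-in-Q means the Ethernet frame carries two 802.1Q tags, e.g. the outer one selecting the feature and the inner one the tenant network. A byte-level sketch (illustrative VIDs, and using TPID 0x8100 for both tags for simplicity, where real Q-in-Q typically uses 0x88a8 for the outer tag):

```python
import struct

def vlan_tag(vid, tpid=0x8100, pcp=0):
    """One 4-byte 802.1Q tag: 2-byte TPID + 2-byte (PCP/DEI/VID) TCI."""
    tci = (pcp << 13) | (vid & 0x0FFF)
    return struct.pack("!HH", tpid, tci)

def qinq_frame(dst, src, outer_vid, inner_vid, ethertype, payload):
    # dst/src are 6-byte MAC addresses
    return (dst + src
            + vlan_tag(outer_vid)   # outer tag: e.g. selects the feature
            + vlan_tag(inner_vid)   # inner tag: e.g. the tenant network
            + struct.pack("!H", ethertype)
            + payload)

frame = qinq_frame(b"\xff" * 6, b"\x02\x00\x00\x00\x00\x01",
                   outer_vid=100, inner_vid=42,
                   ethertype=0x0800, payload=b"...")
print(frame[12:14].hex(), frame[16:18].hex())  # 8100 8100 (the two tags)
```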

Thanks & regards,
Keshava

From: Erik Moe [mailto:erik@ericsson.com]
Sent: Monday, November 03, 2014 3:42 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [neutron] [nfv] VM-based VLAN trunking blueprints



From: Ian Wells [mailto:ijw.ubu...@cack.org.uk]
Sent: den 31 oktober 2014 23:35
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [neutron] [nfv] VM-based VLAN trunking blueprints


On 31 October 2014 06:29, Erik Moe <erik@ericsson.com> wrote:


I thought Monday's network meeting agreed that “VLAN aware VMs”, Trunk network 
+ L2GW were different use cases.
Still I get the feeling that the proposals are put up against each other.

I think we agreed they were different, or at least the light was beginning to 
dawn on the differences, but Maru's point was that if we really want to decide 
what specs we have we need to show use cases not just for each spec 
independently, but also include use cases where e.g. two specs are required and 
the third doesn't help, so as to show that *all* of them are needed.  In fact, 
I suggest that first we do that - here - and then we meet up one lunchtime and 
attack the specs in etherpad before submitting them.  In theory we could have 
them reviewed and approved by the end of the week.  (This theory may not be 
very realistic, but it's good to set lofty goals, my manager tells me.)
Ok, let’s try. I hope your theory turns out to be realistic. ☺
Here are some examples why bridging between Neutron internal networks using 
trunk network and L2GW IMO should be avoided. I am still fine with bridging to 
external networks.

Assuming VM with trunk port wants to use floating IP on specific VLAN. Router 
has to be created on a Neutron network behind L2GW since Neutron router cannot 
handle VLANs. (Maybe not too common use case, but just to show what kind of 
issues you can get into)
neutron floatingip-associate FLOATING_IP_ID INTERNAL_VM_PORT_ID
The code to check if the port is valid has to be able to traverse the L2GW. Handling of 
IP addresses of VM will most likely be affected since VM port is connected to 
several broadcast domains. Alternatively new API can be created.

Now, this is a very good argument for 'trunk ports', yes.  It's not actually an 
argument against bridging between networks.  I think the bridging case 
addresses use cases (generally NFV use cases) where you're not interested in 
Openstack managing addresses - often because you're forwarding traffic rather 
than being an endpoint, and/or you plan on disabling all firewalling for speed 
reasons, but perhaps because you wish to statically configure an address rather 
than use DHCP.  The point is that, in the absence of a need for address-aware 
functions, you don't really care much about ports, and in fact configuring 
ports with many addresses may simply be overhead.  Also, as you say, this 
doesn't address the external bridging use case where what you're bridging to is 
not necessarily in Openstack's domain of control.
I know that many NFVs currently prefer to manage everything themselves. At the 
same time, IMO, I think they should be encouraged to become Neutronified.
In “VLAN aware VMs” the trunk port MAC address has to be globally unique since 
it can be connected to any network; other ports still only have to be unique 
per network. But for L2GW all MAC addresses have to be globally unique since 
they might be bridged together at a later stage.

I'm not sure that that's particularly a problem - any VM with a port will have 
one globally unique MAC address.  I wonder if I'm missing the point here, 
though.
Ok, this was probably too specific, sorry. Neutron can reuse MAC addresses 
among Neutron networks. But I guess this is configurable.
Also some implementations might not be able to take VID into account when doing 
mac address learning, forcing at least unique macs on a trunk network.

If an implementation struggles with VLANs then the logical thing to do would be 
not to implement them in that driver.  Which is fine: I would expect (for 
instance) LB-driver networking to work for this and leave OVS-driver networking 
to never work for this, because there's little point in fixing it.

Re: [openstack-dev] [Neutron][LBaaS][Octavia] Usage Requirements

2014-11-04 Thread Susanne Balle
Jorge

I understand your use cases around capturing of metrics, etc.

Today we mine the logs for usage information on our Hadoop cluster. In the
future we'll capture all the metrics via ceilometer.

IMHO the amphorae should have an interface that allows the logs to be
moved to various backends such as an elastic search, hadoop HDFS, Swift,
etc as well as by default (but with the option to disable it) ceilometer.
Ceilometer is the metering defacto for OpenStack so we need to support it.
We would like the integration with Ceilometer to be based on Notifications.
I believe German sent a reference to that in another email. The
pre-processing will need to be optional and the amount of data aggregation
configurable.

What you describe below to me is usage gathering/metering. The billing is
independent since companies with private clouds might not want to bill but
still need usage reports for capacity planning etc. Billing/Charging is
just putting a monetary value on the various forms of usage.

I agree with all points.

> - Capture logs in a scalable way (i.e. capture logs and put them on a
> separate scalable store somewhere so that it doesn't affect the amphora).

> - Every X amount of time (every hour, for example) process the logs and
> send them on their merry way to Ceilometer or whatever service an operator
> will be using for billing purposes.

"Keep the logs": This is what we would use log forwarding to either Swift
or Elastic Search, etc.

>- Keep logs for some configurable amount of time. This could be anything
> from indefinitely to not at all. Rackspace is planing on keeping them for
> a certain period of time for the following reasons:

It looks like we are in agreement, so I am not sure why it sounded like we
were in disagreement on IRC. It sounded like you were talking about
something else when you mentioned real-time processing. If we are just
talking about moving the logs to your Hadoop cluster, or any backend, in a
scalable way, we agree.

Susanne


On Thu, Oct 23, 2014 at 6:30 PM, Jorge Miramontes <
jorge.miramon...@rackspace.com> wrote:

> Hey German/Susanne,
>
> To continue our conversation from our IRC meeting could you all provide
> more insight into you usage requirements? Also, I'd like to clarify a few
> points related to using logging.
>
> I am advocating that logs be used for multiple purposes, including
> billing. Billing requirements are different from connection logging
> requirements. However, connection logging is a very accurate mechanism to
> capture billable metrics and thus, is related. My vision for this is
> something like the following:
>
> - Capture logs in a scalable way (i.e. capture logs and put them on a
> separate scalable store somewhere so that it doesn't affect the amphora).
> - Every X amount of time (every hour, for example) process the logs and
> send them on their merry way to Ceilometer or whatever service an operator
> will be using for billing purposes.
> - Keep logs for some configurable amount of time. This could be anything
> from indefinitely to not at all. Rackspace is planning on keeping them for
> a certain period of time for the following reasons:
>
> A) We have connection logging as a planned feature. If a customer
> turns
> on the connection logging feature for their load balancer it will already
> have a history. One important aspect of this is that customers (at least
> ours) tend to turn on logging after they realize they need it (usually
> after a tragic lb event). By already capturing the logs I'm sure customers
> will be extremely happy to see that there are already X days worth of logs
> they can immediately sift through.
> B) Operators and their support teams can leverage logs when
> providing
> service to their customers. This is huge for finding issues and resolving
> them quickly.
> C) Albeit a minor point, building support for logs from the get-go
> mitigates capacity management uncertainty. My example earlier was the
> extreme case of every customer turning on logging at the same time. While
> unlikely, I would hate to manage that!
>
> I agree that there are other ways to capture billing metrics but, from my
> experience, those tend to be more complex than what I am advocating and
> without the added benefits listed above. An understanding of HP's desires
> on this matter will hopefully get this to a point where we can start
> working on a spec.
>
> Cheers,
> --Jorge
>
> P.S. Real-time stats is a different beast and I envision there being an
> API call that returns "real-time" data such as this ==>
> http://cbonte.github.io/haproxy-dconv/configuration-1.5.html#9.
>
>
> From:  , German 
> Reply-To:  "OpenStack Development Mailing List (not for usage questions)"
> 
> Date:  Wednesday, October 22, 2014 2:41 PM
> To:  "OpenStack Development Mailing List (not for usage questions)"
> 
> Subject:  Re: [openstack-dev] [Neutron][LBaaS][Octavia] Usage Requirements
>
>
> >Hi Jorge,
> >
> >

Re: [openstack-dev] [Neutron] BGP - VPN BoF session in Kilo design summit

2014-11-04 Thread Mathieu Rohon
Hi,

Thanks Jaume, it makes sense since those use cases need the l3-agent refactoring.
I've updated the BGPVPN etherpad [1] with materials used during
today's techtalk.

I hope this will help everyone to better understand our use cases.

[1]https://etherpad.openstack.org/p/bgpvpn

On Tue, Nov 4, 2014 at 2:43 PM, Jaume Devesa  wrote:
> Hello,
>
> BoF will be Wednesday 5 at 15:00 pm at Design Summit building. We will have
> the chance to talk about it into the Kilo L3 refactoring BoF
>
> https://etherpad.openstack.org/p/kilo-l3-refactoring
>
> Cheers,
>
> On 30 October 2014 07:28, Carl Baldwin  wrote:
>>
>> Yes, let's discuss this in the meeting on Thursday.
>>
>> Carl
>>
>> On Oct 29, 2014 5:27 AM, "Jaume Devesa"  wrote:
>>>
>>> Hello,
>>>
>>> it seems like BGP dynamic routing is in good shape to be
>>> included in Neutron during Kilo[1]. There is quite some interest in offering BGP-VPN
>>> too. Mathieu Rohon's spec[2] goes in this direction. Of course it makes
>>> sense that his development leverages the BGP one.
>>>
>>> I would like to have a BoF session and invite anyone interested on these
>>> blueprints to join us or even add a new related one. I've created an
>>> etherpad[3] to share ideas and agree with session schedule. I propose
>>> Wednesday afternoon.
>>>
>>> If Carl Baldwin agrees, we can talk about it also during the open
>>> discussion of today's L3 subteam meeting.
>>>
>>> [1]: https://review.openstack.org/#/c/125401/
>>> [2]: https://review.openstack.org/#/c/125401/
>>> [3]: https://etherpad.openstack.org/p/bgp-vpn-dynamic-routing
>>>
>>> Cheers,
>>> --
>>> Jaume Devesa
>>> Software Engineer at Midokura
>>>
>>> ___
>>> OpenStack-dev mailing list
>>> OpenStack-dev@lists.openstack.org
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
>
> --
> Jaume Devesa
> Software Engineer at Midokura
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [congress] deep dives at the summit, join us!

2014-11-04 Thread Sean Roberts
deep dive congress wed 11am data source drivers dev lounge
deep dive congress thur 11am delegation dev lounge
reference from the design session etherpad 
https://etherpad.openstack.org/p/par-kilo-congress-design-session 


~ sean



Re: [openstack-dev] [Keystone] Alternative federation mapping

2014-11-04 Thread John Dennis
On 11/04/2014 02:46 AM, David Chadwick wrote:
> Hi John

Good morning David. I hope you're enjoying Paris and the summit is both
productive and enjoyable. Wish I was there :-)

> Its seems like your objective is somewhat different to what was intended
> with the current mapping rules. You seem to want a general purpose
> mapping engine that can map from any set of attribute values into any
> other set of values, whereas the primary objective of the current
> mapping rules is to map from any set of attribute values into a Keystone
> group, so that we can use the existing Keystone functions to assign
> roles (and hence permissions) to the group members. So the current
> mapping rules provide a means of identifying and then authorising a
> potentially random set of external users, who logically form a coherent
> group of users for authorisation purposes.

O.K., group assignment is the final goal in Keystone. I suppose the
relevant question then is: is the functionality in the current Keystone
mapper sufficiently rich that you can present to it an arbitrary
set of values and yield a group assignment? It's been a while since I
looked at the mapper, things might have changed, but it seemed to me it
had a lot of baked-in assumptions about the data (assertion) it would
receive. As long as those assumptions hold true, all is good.

My concern arose from real world experience where I saw a lot of "messy"
data (plus I had a task that had some other requirements). There is
often little consistency over how data is formatted and what data is
included when you receive it from a foreign source. Now combine the
messy data with complex rules dictated by management and you have an
admin with a headache who is asked to make sure the rules are
implemented. An admin might have to implement something like this:

"If the user is a member of domain D and has authenticated with
mechanisms X,Y or Z and has IdP attribute A but does not have suffix S
in their username and is not in a blacklist then assign them group G and
transform their username by stripping the suffix, replacing all hyphens
with underscores and lowercase it."
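Expressed as code, a rule of that shape might look like the sketch below. To be clear, this is purely illustrative: the attribute names (`domain`, `auth_mech`, `idp_attrs`), the suffix, the blacklist, and the group value are all hypothetical, not part of any actual Keystone mapping API.

```python
# Hypothetical sketch of the admin rule quoted above. All attribute
# names and constant values are invented for illustration.
BLOCKED_SUFFIX = "-ext"  # assumed "suffix S" from the rule

def map_user(assertion, blacklist=frozenset()):
    """Return (group, transformed_username) if the rule matches, else None."""
    if assertion.get("domain") != "D":
        return None
    if assertion.get("auth_mech") not in {"X", "Y", "Z"}:
        return None
    if "A" not in assertion.get("idp_attrs", ()):
        return None
    user = assertion.get("username", "")
    if user.endswith(BLOCKED_SUFFIX) or user in blacklist:
        return None
    # Transform: strip any "@realm" part, hyphens -> underscores, lowercase.
    user = user.split("@", 1)[0].replace("-", "_").lower()
    return ("G", user)
```

Even this toy version shows how quickly the conjunction of conditions and transformations outgrows a purely declarative rule syntax.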

I'll grant you this example is perhaps a bit contrived but it's not too
far afield from the questions I've seen admins ask when trying to manage
actual RADIUS deployments. BTW, where is that domain information coming
from in the example? Usually it has to be extracted from the username in
any one of a number of formats.

It's things like this that motivate me towards a more general purpose
mechanism because at the end of the day the real world isn't pretty :-)

FWIW FreeRADIUS didn't start out with a policy language with
capabilities like this, it grew one out of necessity.

I'm definitely not trying to say Keystone needs to switch mappers,
instead what I'm offering is one approach you might want to consider
before the current mapping syntax becomes entrenched and ugly real world
problems begin to crop up. I don't have any illusions this solution is
ideal, these things are difficult to spec out and write. One advantage
is it's easy to extend in a backwards compatible manner with minimal
effort (basically it's just adding a new function you can "call").

FWIW the ideal mapper in my mind is something written in a general
purpose scripting language where you have virtually no limitations on
how you transform values and enforce conditions, but as I indicated in
my other document managing a script interpreter for this task has its
own problems. Which is the lesser of two evils, a script interpreter or
a custom policy "language"? I don't think I have the answer to that but
came down on the side of the custom policy "language" as being the most
palatable and portable.

> Am I right in assuming that
> you will also want this functionality after your general purpose mapping
> has taken place?

The mapper I designed does give you a lot of flexibility to assign a
foreign identity to a group (or multiple groups) with no additional
steps so I'm not sure I follow the above comment. There is no need for
any extra or multiple steps, it should be self contained.



-- 
John



Re: [openstack-dev] PCI/SR-IOV meet-up at the summit

2014-11-04 Thread Zhipeng Huang
Thanks for the info! I thought we were going to have a Nova NFV session for this.

On Tue, Nov 4, 2014 at 3:53 PM, Irena Berezovsky 
wrote:

>  Hi,
>
> We thought it would be a good idea to have some chat regarding further
> SR-IOV enhancement that we want to achieve during Kilo.
>
> If you are interested to discuss it, please join us Wednesday 5, at 13:15
> at the developers lounge.
>
> The list of topics raised till now can be found here:
>
> https://etherpad.openstack.org/p/kilo_sriov_pci_passthrough
>
>
>
> See you there,
>
> Irena
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Zhipeng Huang
Research Assistant
Mobile Ad-Hoc Network Lab, Calit2
University of California, Irvine
Email: zhipe...@uci.edu
Office: Calit2 Building Room 2402
OpenStack, OpenDaylight, OpenCompute aficionado


[openstack-dev] PCI/SR-IOV meet-up at the summit

2014-11-04 Thread Irena Berezovsky
Hi,
We thought it would be a good idea to have some chat regarding further SR-IOV 
enhancement that we want to achieve during Kilo.
If you are interested to discuss it, please join us Wednesday 5, at 13:15 at 
the developers lounge.
The list of topics raised till now can be found here:
https://etherpad.openstack.org/p/kilo_sriov_pci_passthrough

See you there,
Irena


[openstack-dev] [Fuel] fuel master monitoring

2014-11-04 Thread Przemyslaw Kaminski

Hello,

As an extension of my comment in bug [1], I'd like to discuss the 
possibility of adding Fuel master node monitoring. As I wrote in the 
comment, when the disk is full it might already be too late to perform any 
action, since for example Nailgun could be down because the DB shut itself 
down. So we should somehow warn the user that disk space is running low (in 
the UI, and on stderr in the Fuel CLI, for example) before it actually happens.


For now the only meaningful value to monitor would be disk usage -- do 
you have other suggestions? If not, then probably a simple API endpoint 
with statvfs calls would suffice. If you see other uses for this, then 
maybe it would be better to have some daemon collecting the stats we want.
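A minimal sketch of the statvfs-based check suggested above; the threshold and the shape of the returned report are illustrative assumptions, not Nailgun's actual API.

```python
import os

def disk_usage_pct(path="/"):
    """Percent of space used on the filesystem holding `path`."""
    st = os.statvfs(path)
    total = st.f_blocks * st.f_frsize
    free = st.f_bavail * st.f_frsize  # space available to unprivileged users
    return 100.0 * (total - free) / total

def check_disk(path="/", threshold=90.0):
    """Return a small report an API endpoint (or CLI warning) could emit."""
    used = disk_usage_pct(path)
    return {"path": path,
            "used_pct": round(used, 1),
            "warning": used >= threshold}
```

Such a check is cheap enough to run on every API call, which avoids the need for a separate daemon if disk usage is the only metric we care about.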


If we opted for a daemon, then I'm aware that the user can optionally 
install a Zabbix server, although looking at the blueprints in [2] I don't see 
anything about monitoring the Fuel master itself -- is it possible to do? 
The installation of Zabbix, though, is not mandatory, so it still 
doesn't completely solve the problem.


[1] https://bugs.launchpad.net/fuel/+bug/1371757
[2] https://blueprints.launchpad.net/fuel/+spec/monitoring-system

Przemek



Re: [openstack-dev] [Neutron] BGP - VPN BoF session in Kilo design summit

2014-11-04 Thread Jaume Devesa
Hello,

BoF will be Wednesday 5 at 15:00 pm at Design Summit building. We will have
the chance to talk about it into the Kilo L3 refactoring BoF

https://etherpad.openstack.org/p/kilo-l3-refactoring

Cheers,

On 30 October 2014 07:28, Carl Baldwin  wrote:

> Yes, let's discuss this in the meeting on Thursday.
>
> Carl
> On Oct 29, 2014 5:27 AM, "Jaume Devesa"  wrote:
>
>> Hello,
>>
>> it seems like BGP dynamic routing is in good shape to be
>> included in Neutron during Kilo[1]. There is quite some interest in offering
>> BGP-VPN too. Mathieu Rohon's spec[2] goes in this direction. Of course it
>> makes sense that his development leverages the BGP one.
>>
>> I would like to have a BoF session and invite anyone interested on these
>> blueprints to join us or even add a new related one. I've created an
>> etherpad[3] to share ideas and agree with session schedule. I propose
>> Wednesday afternoon.
>>
>> If Carl Baldwin agrees, we can talk about it also during the open
>> discussion of today's L3 subteam meeting.
>>
>> [1]: https://review.openstack.org/#/c/125401/
>> [2]: https://review.openstack.org/#/c/125401/
>> [3]: https://etherpad.openstack.org/p/bgp-vpn-dynamic-routing
>>
>> Cheers,
>> --
>> Jaume Devesa
>> Software Engineer at Midokura
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Jaume Devesa
Software Engineer at Midokura


Re: [openstack-dev] [nova] pci pass through turing complete config options?

2014-11-04 Thread melanie witt
On Nov 4, 2014, at 0:32, Doug Hellmann  wrote:

>>> I think this is reasonable, though do we actually support setting
>>> the same key twice ?
> 
> Yes, if it is registered in different groups.

I have found that for a MultiStrOpt, the same key can be set multiple times 
even in the same group, and the result is a list of values for that option [0].

[0] 
https://github.com/openstack/oslo.config/blob/11ecf18/oslo/config/cfg.py#L1011
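A minimal stand-in (not oslo.config itself) can make the behavior concrete: when the same key appears multiple times, each occurrence is appended to a list rather than overwriting the previous value. The parser below is an illustrative sketch, not the real implementation linked above.

```python
def parse_multistr(text):
    """Toy ini parser: repeated `key = value` lines collect into lists."""
    sections, current = {}, None
    for raw in text.splitlines():
        line = raw.strip()
        if not line or line.startswith("#"):
            continue
        if line.startswith("[") and line.endswith("]"):
            current = sections.setdefault(line[1:-1], {})
            continue
        key, _, value = line.partition("=")  # split on the first '=' only
        current.setdefault(key.strip(), []).append(value.strip())
    return sections

conf = parse_multistr("""
[DEFAULT]
pci_passthrough_whitelist = vendor_id=8086,product_id=1001
pci_passthrough_whitelist = vendor_id=1137,product_id=0071
""")
```

Here `conf["DEFAULT"]["pci_passthrough_whitelist"]` ends up as a two-element list, which is exactly the MultiStrOpt semantics being discussed.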




Re: [openstack-dev] [devstack] keystone doesn't restart after ./unstack

2014-11-04 Thread Chmouel Boudjnah
Hi,

If you do ./unstack.sh you probably want to do ./stack.sh back again to
restack, ./rejoin-stack.sh is here when you have your screen session killed
and want to rejoin it without having to ./stack.sh the full shenanigan
again.

Cheers,
Chmouel

On Tue, Nov 4, 2014 at 1:52 PM, Angelo Matarazzo <
angelo.matara...@dektech.com.au> wrote:

> Hi all,
>
> sometimes I use devstack (in a VM with Ubuntu installed) and I perform
> ./unstack command to reset my environment.
>
> When I perform rejoin-stack.sh, the keystone endpoint doesn't work.
> Following the http://www.gossamer-threads.com/lists/openstack/dev/41939
> suggestion,
> I checked /etc/apache2/sites-enabled, and the symbolic link to
> ../sites-available/keystone.conf doesn't exist.
>
> If I recreate the symbolic link, keystone works.
>
> What is the correct workflow after I have performed ./unstack.sh?
> Should I perform ./stack.sh, or is this a bug?
>
> Cheers,
> Angelo
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>


Re: [openstack-dev] [nova] pci pass through turing complete config options?

2014-11-04 Thread Robert Li (baoli)


On 11/3/14, 6:32 PM, "Doug Hellmann"  wrote:

>
>On Oct 31, 2014, at 9:27 PM, Robert Li (baoli)  wrote:
>
>> 
>> 
>> On 10/28/14, 11:01 AM, "Daniel P. Berrange"  wrote:
>> 
>>> On Tue, Oct 28, 2014 at 10:18:37AM -0400, Jay Pipes wrote:
 On 10/28/2014 07:44 AM, Daniel P. Berrange wrote:
> One option would be a more CSV-like syntax, e.g.
> 
>   pci_passthrough_whitelist =
 address=*0a:00.*,physical_network=physnet1
>   pci_passthrough_whitelist = vendor_id=1137,product_id=0071
> 
> But this gets confusing if we want to specifying multiple sets of
>data
> so might need to use semi-colons as first separator, and comma for
>list
> element separators
> 
>   pci_passthrough_whitelist = vendor_id=8085;product_id=4fc2,
 vendor_id=1137;product_id=0071
 
 What about this instead (with each being a MultiStrOpt, but no comma
or
 semicolon delimiters needed…)?
>
>This is easy for a developer to access, but not easy for a deployer to
>make sure they have configured correctly because they have to keep up
>with the order of the options instead of making sure there is a new group
>header for each set of options.
>
 
 [pci_passthrough_whitelist]
 # Any Intel PRO/1000 F Server Adapter
 vendor_id=8086
 product_id=1001
 address=*
 physical_network=*
 # Cisco VIC SR-IOV VF only on specified address and physical network
 vendor_id=1137
 product_id=0071
 address=*:0a:00.*
 physical_network=physnet1
>>> 
>>> I think this is reasonable, though do we actually support setting
>>> the same key twice ?
>
>Yes, if it is registered in different groups.
>
>>> 
>>> As an alternative we could just append an index for each "element"
>>> in the list, eg like this:
>>> 
>>> [pci_passthrough_whitelist]
>>> rule_count=2
>>> 
>>> # Any Intel PRO/1000 F Server Adapter
>>> vendor_id.0=8086
>
>Be careful about constructing the names. You can’t have “.” in them
>because then you can’t access them in python, for example:
>cfg.CONF.pci_passthrough_whitelist.vendor_id.0
>
>>> product_id.0=1001
>>> address.0=*
>>> physical_network.0=*
>>> 
>>> # Cisco VIC SR-IOV VF only on specified address and physical network
>>> vendor_id.1=1137
>>> product_id.1=0071
>>> address.1=*:0a:00.*
>>> physical_network.1=physnet1
>>> [pci_passthrough_whitelist]
>>> rule_count=2
>>> 
>>> # Any Intel PRO/1000 F Server Adapter
>>> vendor_id.0=8086
>>> product_id.0=1001
>>> address.0=*
>>> physical_network.0=*
>>> 
>>> # Cisco VIC SR-IOV VF only on specified address and physical network
>>> vendor_id.1=1137
>>> product_id.1=0071
>>> address.1=*:0a:00.*
>>> physical_network.1=physnet1
>>> 
>>> Or like this:
>>> 
>>> [pci_passthrough]
>>> whitelist_count=2
>>> 
>>> [pci_passthrough_rule.0]
>>> # Any Intel PRO/1000 F Server Adapter
>>> vendor_id=8086
>>> product_id=1001
>>> address=*
>>> physical_network=*
>>> 
>>> [pci_passthrough_rule.1]
>>> # Cisco VIC SR-IOV VF only on specified address and physical network
>>> vendor_id=1137
>>> product_id=0071
>>> address=*:0a:00.*
>>> physical_network=physnet1
>> 
>> Yeah, the last format (copied in below) is a good idea (without the
>> section for the count) to handle a list of dictionaries. I've seen similar
>> config examples in neutron code.
>> [pci_passthrough_rule.0]
>> # Any Intel PRO/1000 F Server Adapter
>> vendor_id=8086
>> product_id=1001
>> address=*
>> physical_network=*
>> 
>> [pci_passthrough_rule.1]
>> # Cisco VIC SR-IOV VF only on specified address and physical network
>> vendor_id=1137
>> product_id=0071
>> address=*:0a:00.*
>> physical_network=physnet1
>> 
>> Without direct oslo support, implementing it requires a small method that
>> uses oslo cfg's MultiConfigParser().
>
>I’m not sure what you mean needs new support? I think this would work,
>except for the “.” in the group name.

The group header is not fixed in this case. Let’s replace “.” with “:”,
then the user may have configured multiple groups such as
[pci_passthrough_rule:x]. With oslo, how would you register the group and
the options under it and access them as a list of dictionaries?
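Outside of oslo.config, the idea of collecting numbered `[pci_passthrough_rule:N]` groups into a list of dictionaries can be sketched with the stdlib `configparser`. This is only an illustration of the proposed config layout, not how oslo would actually register such dynamic groups.

```python
import configparser
import re

def load_rules(text, prefix="pci_passthrough_rule"):
    """Collect [prefix:N] sections into a list of dicts, ordered by N."""
    cp = configparser.ConfigParser()
    cp.read_string(text)
    pat = re.compile(r"^%s:(\d+)$" % re.escape(prefix))
    numbered = []
    for name in cp.sections():
        m = pat.match(name)
        if m:
            numbered.append((int(m.group(1)), dict(cp.items(name))))
    return [d for _, d in sorted(numbered, key=lambda t: t[0])]

rules = load_rules("""
[pci_passthrough_rule:0]
vendor_id = 8086
product_id = 1001

[pci_passthrough_rule:1]
vendor_id = 1137
product_id = 0071
""")
```

The resulting `rules` is a plain `list` of `dict`s, which is the shape the whitelist-matching code wants regardless of how the groups get registered.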

>
>> 
>> Now a few questions if we want to do it in Kilo:
>>  - Do we still need to be backward compatible in configuring the
>> whitelist? If we do, then we still need to be able to handle the json
>> docstring.
>
>If there is code released using that format, you need to support it. You
>can define options as being deprecated so the new options replace the old
>but the old are available if found in the config file.
>
>Doug
>
>>  - To support the new format in devstack, we can use a meta-section in
>> local.conf. How would we support the old format, which is still a json
>> docstring?  Is something like this
>> https://review.openstack.org/#/c/123599/ acceptable?
>>  - Do we allow old/new formats to coexist in the config file? Probably not.
>> 
>> 
>>> 
 Either that, or the YAML file that Sean suggested, would be my
 preference...
>>> 
>>> I think it is nice to hav

[openstack-dev] [devstack] keystone doesn't restart after ./unstack

2014-11-04 Thread Angelo Matarazzo

Hi all,

sometimes I use devstack (in a VM with Ubuntu installed) and I run the
./unstack command to reset my environment.


When I perform rejoin-stack.sh, the keystone endpoint doesn't work.
Following the http://www.gossamer-threads.com/lists/openstack/dev/41939
suggestion,
I checked /etc/apache2/sites-enabled, and the symbolic link to
../sites-available/keystone.conf doesn't exist.

If I recreate the symbolic link, keystone works.

What is the correct workflow after I have performed ./unstack.sh?
Should I perform ./stack.sh, or is this a bug?

Cheers,
Angelo



[openstack-dev] [Fuel] Power management in Cobbler

2014-11-04 Thread Dmitriy Shulyak
Not long ago, we discussed the necessity of a power management feature in
Fuel.

What is your opinion on power management support in Cobbler? I took a look
at the documentation [1] and the templates [2] that we have right now,
and it actually looks like we can make use of it.

The only issue is that the power address that the cobbler system is configured
with is wrong, because the provisioning serializer uses the one reported by
bootstrap; but it can be easily fixed.

Of course, another question is a separate network for power management, but
we can live with the admin network for now.

Please share your opinions on this matter. Thanks

[1] http://www.cobblerd.org/manuals/2.6.0/4/5_-_Power_Management.html
[2] http://paste.openstack.org/show/129063/


[openstack-dev] [Fuel] Dhcp and static pools for admin network

2014-11-04 Thread Dmitriy Shulyak
Hi guys,
I was trying to get rid of the static pool in Fuel, and there are a couple of
concerns that I want to
discuss with you.

1. Do we want to get rid of the static pool completely (e.g. remove any notion
of a static pool
in Nailgun and fuelmenu)? Or will it be enough to allow overlapping, and leave
the possibility to split the DHCP and static pools?

2. What is the preferable way to handle master upgrade?
In my opinion we should not touch the existing configuration for cobbler/dnsmasq
and support a single pool of addresses only for new environments (after 6.*).

Maybe you have other questions; I will appreciate it if you list them here.
Thank you for your time


Re: [openstack-dev] [Policy][Group-based Policy] Audio stream for GBP Design Session in Paris

2014-11-04 Thread Mandeep Dhami
As no one was online, I closed the webex session.

On Tue, Nov 4, 2014 at 10:07 AM, Mandeep Dhami 
wrote:

> Use this webex meeting for Audio streaming:
>
> https://cisco.webex.com/ciscosales/j.php?MTID=m210c77f6f51a6f313a7d130d19ee3e4d
>
>
> Topic: GBP Design Session
>
> Date: Tuesday, November 4, 2014
>
> Time: 12:15 pm, Europe Time (Amsterdam, GMT+01:00)
>
> Meeting Number: 205 658 563
>
> Meeting Password: gbp
>
> On Mon, Nov 3, 2014 at 5:48 PM, Gregory Lebovitz 
> wrote:
>
>> Hey all,
>>
>> I'm participating remotely this session. Any plan for audio stream of
>> Tuesday's session? I'll happily offer a GoToMeeting, if needed.
>>
>> Would someone be willing to scribe discussion in #openstack-gbp channel?
>>
>> --
>> 
>> Open industry-related email from
>> Gregory M. Lebovitz
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>


[openstack-dev] devstack build is failing

2014-11-04 Thread Pradip Mukhopadhyay
Hello,


Trying for the first time to build devstack in my Ubuntu 14.10 VM. I am
getting the following error:


2014-11-04 09:27:32.182 | + recreate_database_mysql nova latin1
2014-11-04 09:27:32.182 | + local db=nova
2014-11-04 09:27:32.182 | + local charset=latin1
2014-11-04 09:27:32.182 | + mysql -uroot -pstackdb -h127.0.0.1 -e 'DROP
DATABASE IF EXISTS nova;'
2014-11-04 09:27:32.186 | + mysql -uroot -pstackdb -h127.0.0.1 -e 'CREATE
DATABASE nova CHARACTER SET latin1;'
2014-11-04 09:27:32.189 | + /usr/local/bin/nova-manage db sync
2014-11-04 09:27:32.455 | Traceback (most recent call last):
2014-11-04 09:27:32.455 |   File "/usr/local/bin/nova-manage", line 6, in

2014-11-04 09:27:32.455 | from nova.cmd.manage import main
2014-11-04 09:27:32.455 |   File
"/home/ubuntu/devstack/nova/nova/cmd/manage.py", line 68, in 
2014-11-04 09:27:32.455 | from nova.api.ec2 import ec2utils
2014-11-04 09:27:32.455 |   File
"/home/ubuntu/devstack/nova/nova/api/ec2/__init__.py", line 34, in 
2014-11-04 09:27:32.455 | from nova.api.ec2 import faults
2014-11-04 09:27:32.455 |   File
"/home/ubuntu/devstack/nova/nova/api/ec2/faults.py", line 20, in 
2014-11-04 09:27:32.455 | from nova import utils
2014-11-04 09:27:32.455 |   File
"/home/ubuntu/devstack/nova/nova/utils.py", line 39, in 
2014-11-04 09:27:32.456 | from oslo.concurrency import lockutils
2014-11-04 09:27:32.456 |   File
"/usr/local/lib/python2.7/dist-packages/oslo/concurrency/lockutils.py",
line 30, in 
2014-11-04 09:27:32.456 | from oslo.config import cfgfilter
2014-11-04 09:27:32.456 | ImportError: cannot import name cfgfilter
2014-11-04 09:27:32.471 | + exit_trap
2014-11-04 09:27:32.471 | + local r=1
2014-11-04 09:27:32.471 | ++ jobs -p
2014-11-04 09:27:32.472 | + jobs=
2014-11-04 09:27:32.472 | + [[ -n '' ]]
2014-11-04 09:27:32.472 | + kill_spinner
2014-11-04 09:27:32.472 | + '[' '!' -z '' ']'
2014-11-04 09:27:32.472 | + [[ 1 -ne 0 ]]




Any idea what is missing? Any help is highly appreciated.



Some more details may be helpful:

ubuntu@ubuntu:~$ uname -a

Linux ubuntu 3.13.0-39-generic #66-Ubuntu SMP Tue Oct 28 13:30:27 UTC 2014
x86_64 x86_64 x86_64 GNU/Linux



ubuntu@ubuntu:~$ python --version

Python 2.7.6

ubuntu@ubuntu:~$ which python

/usr/bin/python







Thanks,
Pradip


Re: [openstack-dev] [neutron][lbaas] rescheduling meeting

2014-11-04 Thread Vijay Venkatachalam
Any day at 16:00 UTC is fine with me.
17:00 UTC or later is quite late in India.


-Original Message-
From: Doug Wiegley [mailto:do...@a10networks.com] 
Sent: 04 November 2014 08:42
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [neutron][lbaas] rescheduling meeting

Hi LBaaS (and others),

We’ve been talking about possibly rescheduling the LBaaS meeting to a time that 
is less crazy early for those in the US.  Alternately, we could also start 
alternating times.  For now, let’s see if we can find a slot that works every 
week.  Please respond with any time slots that you can NOT
attend:

Monday, 1600UTC
Monday, 1700UTC
Tuesday, 1600UTC (US pacific, 8am)
Tuesday, 1700UTC
Tuesday, 1800UTC
Wednesday, 1600UTC (US pacific, 8am)
Wednesday, 1700UTC
Wednesday, 1800UTC
Thursday, 1400UTC (US pacific, 6am)


Note that many of these slots will require the approval of the
#openstack-meeting-4 channel:

https://review.openstack.org/#/c/132629/

https://review.openstack.org/#/c/132630/


Thanks,
Doug



[openstack-dev] [sahara] no IRC meeting Nov 6 and Nov 13

2014-11-04 Thread Sergey Lukjanov
Hey Sahara folks,

just a friendly reminder that there will be no IRC meetings for Sahara on
both Nov 6 and Nov 13, because of the summit and a lot of folks who'll be
travelling / taking vacations after it.

We'll pick up the normal meeting schedule for Sahara on Nov 20.

Thanks.

-- 
Sincerely yours,
Sergey Lukjanov
Sahara Technical Lead
(OpenStack Data Processing)
Principal Software Engineer
Mirantis Inc.


Re: [openstack-dev] [sahara] changing host name and /etc/hosts in container

2014-11-04 Thread Sergey Lukjanov
Probably Designate could help us doing it?

On Fri, Oct 31, 2014 at 6:39 AM, Zhidong Yu  wrote:

> Hi Trevor,
>
> Thanks for your response! We use Sahara to launch virtual nodes in
> containers, i.e. replacing KVM with Docker in Nova.
>
> The nature of this problem is configuration management. This is
> actually a common issue in cloud-based, horizontally scaled apps. Sahara may
> need some more advanced implementation here. Not sure if confd/etcd/fleet
> could address this issue.
>
> Unfortunately I won't be at the summit. My colleague Zhongyue (who is Oslo
> core, by the way) will be there and he will attend the Sahara meetings.
>
> Thanks, Zhidong
>
>
> On Thu, Oct 30, 2014 at 10:18 PM, Trevor McKay  wrote:
>
>> Zhidong,
>>
>>  Thanks for your question.  I personally don't have an answer, but I
>> think we definitely should bring up the possibility of dockerization for
>> Sahara at the design summit next week.  It may be something we want to
>> formalize for Kilo.  Will you be at the summit?
>>
>> Just to be clear, are you running Sahara itself in a container, or
>> launching node instances in containers?
>>
>> I'll take a look and see if I can find anything useful about the ip
>> assignment/hostname sequence for node instances during launch.
>>
>> Best,
>>
>> Trevor
>>
>> On Thu, 2014-10-30 at 16:46 +0800, Zhidong Yu wrote:
>> > Hello hackers,
>> >
>> > We are experimenting Sahara with Docker container (nova-docker) and
>> > ran into an issue that Sahara needs to change the host name
>> > and /etc/hosts which is not allowed in container. I am wondering if
>> > there is any easy way to work around this by hacking into Sahara?
>> >
>> >
>> > thanks, Zhidong
>> >
>> > ___
>> > OpenStack-dev mailing list
>> > OpenStack-dev@lists.openstack.org
>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Sincerely yours,
Sergey Lukjanov
Sahara Technical Lead
(OpenStack Data Processing)
Principal Software Engineer
Mirantis Inc.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Introducing Project Cue

2014-11-04 Thread Vipul Sabhaya
Hello Everyone,

I would like to introduce Cue, a new Openstack project aimed at simplifying
the application developer responsibilities by providing a managed service
focused on provisioning and lifecycle management of message-oriented
middleware services like RabbitMQ.

Messaging is a common development pattern for building loosely coupled
distributed systems. Provisioning and supporting Messaging Brokers for an
individual application can be a time consuming and painful experience. This
product aims to simplify the provisioning and management of message
brokers, providing High Availability, management, and auto-healing
capabilities to the end user, while providing tenant-level isolation.

More details, including the scope of the project can be found here:
https://wiki.openstack.org/wiki/Cue

We’ve started writing code: https://github.com/vipulsabhaya/cue — the plan
is to make it a Stackforge project in the coming weeks.

I work for HP, and we’ve built a team within HP to build Cue.  I am in
Paris for the Summit, and would appreciate feedback either on the mailing
list or in person.

If you are interested in helping build Cue, or have any questions/concerns
around the project vision, I plan to host a meetup in the design summit
area of the Le Meridien on *Friday morning*.  More details to come.

Thanks!
-Vipul Sabhaya
HP
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][TripleO] Clear all flows when ovs agent start? why and how avoid?

2014-11-04 Thread Manish Godara

Clearing all flows upon agent restart is a major issue, imho. We should really
look at this with higher priority than the modular L2 agent, as the timeline of
the modular L2 agent refactor isn't clear. Whatever the original issue was, I
think we ought to be able to find a better solution that doesn't disrupt the
network. I agree that reconciling data after a restart is not straightforward
in all scenarios, but there should be an option to do just a basic sanity check
and not interrupt existing flows. I'd like to help out on this (if needed);
there is a blueprint [1] that was suggested, but I'm not sure who the owner is
or what the status is. If anyone is working on this and is at the summit this
week, please let me know. We can meet one of the days here at the summit.

thanks,
manish
[1] Adding an option of "Soft Restart" in neutron agent along with o... : 
Blueprints : neutron




  
On Friday, October 31, 2014 7:32 AM, Ben Nemec wrote:

 On 10/29/2014 10:17 AM, Kyle Mestery wrote:
> On Wed, Oct 29, 2014 at 7:25 AM, Hly  wrote:
>>
>>
>> Sent from my iPad
>>
>> On 2014-10-29, at 下午8:01, Robert van Leeuwen 
>>  wrote:
>>
> I find our current design removes all flows and then adds flows entry by
> entry; this causes every network node to break all tunnels to the other
> network nodes and all compute nodes.
 Perhaps a way around this would be to add a flag on agent startup
 which would have it skip reprogramming flows. This could be used for
 the upgrade case.
>>>
>>> I hit the same issue last week and filed a bug here:
>>> https://bugs.launchpad.net/neutron/+bug/1383674
>>>
>>> From an operator's perspective this is VERY annoying, since you also cannot 
>>> push any config change that requires/triggers a restart of the agent,
>>> e.g. something simple like changing a log setting becomes a hassle.
>>> I would prefer the default behaviour to be to not clear the flows, or at the 
>>> least a config option to disable it.
>>>
>>
>> +1, we also suffered from this even when a very little patch is done
>>
> I'd really like to get some input from the tripleo folks, because they
> were the ones who filed the original bug here and were hit by the
> agent NOT reprogramming flows on agent restart. It does seem fairly
> obvious that adding an option around this would be a good way forward,
> however.

Since nobody else has commented, I'll put in my two cents (though I
might be overcharging you ;-).  I've also added the TripleO tag to the
subject, although with Summit coming up I don't know if that will help.

Anyway, if the bug you're referring to is the one I think, then our
issue was just with the flows not existing.  I don't think we care
whether they get reprogrammed on agent restart or not as long as they
somehow come into existence at some point.

It's possible I'm wrong about that, and probably the best person to talk
to would be Robert Collins since I think he's the one who actually
tracked down the problem in the first place.

-Ben
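
The "skip reprogramming flows on startup" flag suggested in this thread can be
sketched roughly as follows. This is an illustration only: FakeBridge,
setup_bridge and drop_flows_on_start are hypothetical names, not the actual
neutron OVS agent API.

```python
# Illustrative sketch (hypothetical names, not the real neutron agent):
# a drop_flows_on_start flag, defaulting to False, so an agent restart
# keeps the data plane intact unless the operator explicitly opts in.

class FakeBridge:
    """Stand-in for an OVS bridge wrapper; just tracks installed flows."""

    def __init__(self):
        self.flows = []

    def remove_all_flows(self):
        self.flows = []

    def add_flow(self, flow):
        if flow not in self.flows:
            self.flows.append(flow)


def setup_bridge(bridge, drop_flows_on_start=False):
    # Only wipe existing flows when explicitly requested (e.g. to clean
    # up stale flows after an upgrade); by default leave them in place
    # and simply (re)program the flows the agent needs.
    if drop_flows_on_start:
        bridge.remove_all_flows()
    bridge.add_flow("priority=1,actions=normal")
    return bridge


br = FakeBridge()
br.flows = ["stale-tunnel-flow"]
setup_bridge(br)  # default: no data plane outage on restart
print(br.flows)
```

With the default, the pre-existing flow survives the restart; passing
drop_flows_on_start=True would reproduce today's clean-slate behaviour.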


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




Re: [openstack-dev] [neutron] what is the difference between ovs-ofctl and iptables? Can we use ovs-ofctl to NAT a floating ip into a fixed ip if we use the openvswitch agent?

2014-11-04 Thread Li Tianqing




At 2014-11-04 14:29:36, "loy wolfe"  wrote:

>maybe two reasons: performance caused by flow miss; feature parity


what do you mean `flow miss`? 


>
>L3+ flow tables destroy megaflow aggregation, so if your app has
>many concurrent sessions, like a web server, flow-miss upcalls would
>overwhelm vswitchd.


Then what is the main purpose of the flow table?
>
>iptables is already there; migrating it to the ovs flow table needs a lot
>of extra development, not to mention that some advanced features would be
>lost (for example, stateful firewall). However, ovs is considering adding
>some hooks to iptables, but that is at a very early stage yet. Even with
>that, it is not implemented by the ovs datapath flow table, but by iptables.


it makes sense...


>
>On Tue, Nov 4, 2014 at 1:07 PM, Li Tianqing  wrote:
>> ovs implements OpenFlow, and in ovs it can see L3, so why not use ovs?
>>
>> --
>> Best
>> Li Tianqing
>>
>> At 2014-11-04 11:55:46, "Damon Wang"  wrote:
>>
>> Hi,
>>
>> OVS mainly focuses on L2, while iptables mainly focuses on L3 or higher.
>>
>> Damon Wang
>>
>> 2014-11-04 11:12 GMT+08:00 Li Tianqing :
>>>
>>>
>>>
>>>
>>>
>>>
>>> --
>>> Best
>>> Li Tianqing
>>>
>>>
>>>
>>> ___
>>> OpenStack-dev mailing list
>>> OpenStack-dev@lists.openstack.org
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>
>>
>>
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>___
>OpenStack-dev mailing list
>OpenStack-dev@lists.openstack.org
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Policy][Group-based Policy] Audio stream for GBP Design Session in Paris

2014-11-04 Thread Mandeep Dhami
Use this webex meeting for Audio streaming:

https://cisco.webex.com/ciscosales/j.php?MTID=m210c77f6f51a6f313a7d130d19ee3e4d


Topic: GBP Design Session

Date: Tuesday, November 4, 2014

Time: 12:15 pm, Europe Time (Amsterdam, GMT+01:00)

Meeting Number: 205 658 563

Meeting Password: gbp

On Mon, Nov 3, 2014 at 5:48 PM, Gregory Lebovitz 
wrote:

> Hey all,
>
> I'm participating remotely this session. Any plan for audio stream of
> Tuesday's session? I'll happily offer a GoToMeeting, if needed.
>
> Would someone be willing to scribe discussion in #openstack-gbp channel?
>
> --
> 
> Open industry-related email from
> Gregory M. Lebovitz
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Octavia] Mid-cycle hack-a-thon

2014-11-04 Thread Stephen Balukoff
Howdy, folks!

We are planning to have a mid-cycle hack-a-thon in Seattle from the 8th
through the 12th of December. This will be at the HP corporate offices
located in the Seattle convention center.

During this week we will be concentrating on Octavia code and hope to make
significant progress toward our v0.5 milestone.

If you are interested in attending, please e-mail me. If you are interested
in participating but can't travel to Seattle that week, please also let me
know, and we will see about using other means to collaborate with you in
real time.

Thanks!
Stephen
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Fuel] Feature Freeze for Fuel 6.0 in action

2014-11-04 Thread Mike Scherbakov
Hi all,
according to the schedule [1], with a bit of delay, I'm notifying everyone
that we have entered the Feature Freeze [2] state.

We broke our BVTs [3] though. Let's stop all merges until we stabilize
master and make the BVTs pass - it has to be done as soon as possible, as we
are essentially blocking our QA and scaling teams from further testing on
the latest builds.

Feature leads, component leads and core reviewers - please help to sort the
blueprints out, as many have to be properly updated, and others (not started
/ not completed) moved to the next milestone.

PS. I'm attending the design summit in Paris, and have already heard very
positive feedback on Fuel from several people. Overall, the quality with
which we delivered 5.1 is considered to be very, very high. Let's make 6.0
even better!
Thanks all for the very focused and collaborative work!

[1] https://wiki.openstack.org/wiki/Fuel/6.0_Release_Schedule
[2] https://wiki.openstack.org/wiki/FeatureFreeze
[3] https://fuel-jenkins.mirantis.com/, 6.0 tab, last two builds
-- 
Mike Scherbakov
#mihgen
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] what is the difference between ovs-ofctl and iptables? Can we use ovs-ofctl to NAT a floating ip into a fixed ip if we use the openvswitch agent?

2014-11-04 Thread Baohua Yang
As I remember, ovs does not support rules bound on veth devices, hence for
now we might need tools like iptables. However, this might change in the
future.

As to the L3 part, it should be handled in a more efficient way, e.g., via NFV.


On Tue, Nov 4, 2014 at 2:29 PM, loy wolfe  wrote:

> maybe two reasons: performance caused by flow miss; feature parity
>
> L3+ flow tables destroy megaflow aggregation, so if your app has
> many concurrent sessions, like a web server, flow-miss upcalls would
> overwhelm vswitchd.
>
> iptables is already there; migrating it to the ovs flow table needs a lot
> of extra development, not to mention that some advanced features would be
> lost (for example, stateful firewall). However, ovs is considering adding
> some hooks to iptables, but that is at a very early stage yet. Even with
> that, it is not implemented by the ovs datapath flow table, but by iptables.
>
> On Tue, Nov 4, 2014 at 1:07 PM, Li Tianqing  wrote:
> > ovs implements OpenFlow, and in ovs it can see L3, so why not use
> ovs?
> >
> > --
> > Best
> > Li Tianqing
> >
> > At 2014-11-04 11:55:46, "Damon Wang"  wrote:
> >
> > Hi,
> >
> > OVS mainly focuses on L2, while iptables mainly focuses on L3 or higher.
> >
> > Damon Wang
> >
> > 2014-11-04 11:12 GMT+08:00 Li Tianqing :
> >>
> >>
> >>
> >>
> >>
> >>
> >> --
> >> Best
> >> Li Tianqing
> >>
> >>
> >>
> >> ___
> >> OpenStack-dev mailing list
> >> OpenStack-dev@lists.openstack.org
> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >>
> >
> >
> >
> >
> > ___
> > OpenStack-dev mailing list
> > OpenStack-dev@lists.openstack.org
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
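
A toy model of the "flow miss" concern quoted above (purely illustrative;
real OVS megaflow caching is far more involved): with exact-match per-session
flows every new session misses and triggers an upcall to vswitchd, while a
wildcarded L3 megaflow absorbs them.

```python
# Toy model: count datapath "upcalls" (flow-cache misses) with an
# exact L4 match vs. a wildcarded L3 megaflow. Illustrative only.

def count_upcalls(packets, match):
    cache = set()
    upcalls = 0
    for pkt in packets:
        key = match(pkt)
        if key not in cache:
            upcalls += 1  # miss: punt to vswitchd, install a flow
            cache.add(key)
    return upcalls

# 1000 web sessions from one client IP, each with a distinct source port.
packets = [("10.0.0.1", port) for port in range(1000)]

exact = count_upcalls(packets, match=lambda p: p)    # per-session flows
mega = count_upcalls(packets, match=lambda p: p[0])  # src-IP megaflow

print(exact, mega)  # 1000 upcalls vs. 1
```

This is why matching on L4 fields for many short-lived sessions can flood
vswitchd with upcalls, while coarser L2/L3 matches aggregate well.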



-- 
Best wishes!
Baohua
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][compute] propose to use a table to deal with the vm_state when _init_instance in compute

2014-11-04 Thread Alex Xu
+1, good idea!

2014-11-04 15:15 GMT+08:00 Eli Qiao :

>  hello all:
> In the current _init_instance function in the compute manager, there is a
> flood of 'and'/'or' logic to check the vm_state and task_state when
> initializing an instance during service start. This makes the code hard to
> read and hard to maintain, so I propose a new way to handle this.
>
> We can create a vm_state_table; by looking up the table we can find the
> action we need to take for the instance, and from the table you can clearly
> see which vm_state and task_state should trigger which action.
>
> For example, each entry has the form:
> {vm_states matcher: {task_states matcher: action}},
>
> each entry stands for an action, and we walk through the tuple,
> so the table should look like this:
>
> vm_state_table = (
>     {vm_states.SOFT_DELETE: {'ALL': ACTION_NONE}},
>     {vm_states.ERROR: {('NOT_IN', [task_states.RESIZE_MIGRATING,
>                                    task_states.DELETING]): ACTION_NONE}},
>     {vm_states.DELETED: {'ALL': _complete_partial_deletion}},
>     {vm_states.BUILDING: {'ALL': ACTION_ERROR}},
>     {'ALL': {('IN', [task_states.SCHEDULING,
>                      task_states.BLOCK_DEVICE_MAPPING,
>                      task_states.NETWORKING,
>                      task_states.SPAWNING]): ACTION_ERROR}},
>     {('IN', [vm_states.ACTIVE, vm_states.STOPPED]):
>         {('IN', [task_states.REBUILDING,
>                  task_states.REBUILD_BLOCK_DEVICE_MAPPING,
>                  task_states.REBUILD_SPAWNING]): ACTION_ERROR}},
>     {('NOT_IN', [vm_states.ERROR]):
>         {('IN', [task_states.IMAGE_SNAPSHOT_PENDING,
>                  task_states.IMAGE_PENDING_UPLOAD,
>                  task_states.IMAGE_UPLOADING,
>                  task_states.IMAGE_SNAPSHOT]): _post_interrupted_snapshot_cleanup}},
> )
>
> What do you think? Do we need a bp for this?
>
> --
> Thanks,
> Eli (Li Yong) Qiao
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
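
The table-driven dispatch proposed above could look roughly like this as
runnable code. This is a hedged sketch: the matcher format, lookup_action,
and the state strings are illustrative stand-ins, not Nova's actual
constants or API.

```python
# Hedged sketch of table-driven vm_state/task_state dispatch
# (illustrative names, not Nova's real vm_states/task_states).

ACTION_NONE = "none"
ACTION_ERROR = "error"

VM_STATE_TABLE = (
    # (vm_state matcher, task_state matcher, action)
    (("IN", ["soft-delete"]), ("ALL",), ACTION_NONE),
    (("IN", ["error"]),
     ("NOT_IN", ["resize_migrating", "deleting"]), ACTION_NONE),
    (("IN", ["building"]), ("ALL",), ACTION_ERROR),
)

def _matches(matcher, value):
    kind = matcher[0]
    if kind == "ALL":
        return True
    if kind == "IN":
        return value in matcher[1]
    if kind == "NOT_IN":
        return value not in matcher[1]
    raise ValueError("unknown matcher kind: %s" % kind)

def lookup_action(vm_state, task_state):
    # Walk the tuple in order; the first matching entry wins.
    for vm_match, task_match, action in VM_STATE_TABLE:
        if _matches(vm_match, vm_state) and _matches(task_match, task_state):
            return action
    return None  # fall through: no table entry applies

print(lookup_action("error", "spawning"))  # none
print(lookup_action("building", None))     # error
```

Each nested if/and/or chain in _init_instance becomes one declarative row,
which makes the state/action mapping auditable at a glance.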
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev