Re: [openstack-dev] [Openstack] Cinder-service connectivity issues

2015-03-30 Thread Kamsali, RaghavendraChari (Artesyn)
Hi,
The times are now in sync, but the issue still exists.


From: Vishvananda Ishaya [mailto:vishvana...@gmail.com]
Sent: Thursday, March 26, 2015 12:09 AM
To: OpenStack Development Mailing List (not for usage questions)
Cc: Kamsali, RaghavendraChari [ENGINEERING/IN]; Ritesh Nanda; 
openst...@lists.openstack.org
Subject: Re: [openstack-dev] [Openstack] Cinder-service connectivity issues

Based on the checkin times in your post, it looks like time is out of sync 
between your nodes. The one reporting down is reporting time in the future. I 
would install ntp and make sure the clocks are in sync.
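For readers hitting the same symptom: service liveness in Cinder is decided by comparing the service's last heartbeat timestamp against the local clock, so clock skew alone can flag a healthy service as down. A minimal sketch of how such a check works (the abs() comparison, names, and 60-second threshold are illustrative, not Cinder's exact code):

```python
from datetime import datetime, timedelta

# Illustrative threshold; cinder's service_down_time defaults to 60 seconds.
SERVICE_DOWN_TIME = timedelta(seconds=60)

def service_is_up(last_heartbeat, now):
    # A service counts as up when its last heartbeat is recent.
    # Skewed clocks break this: a node running behind writes heartbeats
    # that already look stale, one running ahead writes timestamps
    # "from the future".
    return abs(now - last_heartbeat) <= SERVICE_DOWN_TIME

now = datetime(2015, 3, 30, 12, 0, 0)
in_sync = now - timedelta(seconds=10)   # healthy node
behind = now - timedelta(minutes=5)     # clock 5 minutes behind
ahead = now + timedelta(minutes=5)      # clock 5 minutes ahead

print(service_is_up(in_sync, now))  # True
print(service_is_up(behind, now))   # False -> shown as 'down'
print(service_is_up(ahead, now))    # False -> shown as 'down'
```

Either direction of skew makes the report look wrong, which is why syncing with ntp is the first thing to check.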

Vish

On Mar 25, 2015, at 2:33 AM, Kamsali, RaghavendraChari (Artesyn) 
raghavendrachari.kams...@artesyn.com
 wrote:



Please find attached the c-api log from executing the command cinder create 1.


From: Kamsali, RaghavendraChari (Artesyn) 
[mailto:raghavendrachari.kams...@artesyn.com]
Sent: Wednesday, March 25, 2015 1:39 PM
To: Ritesh Nanda
Cc: 
openstack-dev@lists.openstack.org; openst...@lists.openstack.org
Subject: Re: [Openstack] Cinder-service connectivity issues

FYI,

From: Ritesh Nanda [mailto:riteshnand...@gmail.com]
Sent: Wednesday, March 25, 2015 1:09 PM
To: Kamsali, RaghavendraChari [ENGINEERING/IN]
Cc: openst...@lists.openstack.org; openstack-dev@lists.openstack.org
Subject: Re: [Openstack] Cinder-service connectivity issues

Can you run the cinder-scheduler and cinder-volume services in debug mode and paste the logs?

Regards,
Ritesh

On Wed, Mar 25, 2015 at 12:10 AM, Kamsali, RaghavendraChari (Artesyn) 
raghavendrachari.kams...@artesyn.com
 wrote:
Hi,

My setup, shown below, has three networks (management, storage,
data/virtual).


[attached diagram: image001.png]

I am facing an issue when I bring up the setup in the scenario shown above.
Could anyone help me figure out whether I have configured something
incorrectly or am doing anything wrong?

On Controller Node

SERVICES ENABLED: (c-sch,c-api)
Management- 192.168.21.108
Storage- 10.130.98.97

Cinder configurations:

my_ip : 10.130.98.97 (also tried 192.168.21.108)
glance_host:10.130.98.97 (also tried 192.168.21.108)
iscsi_ip_address: 10.130.98.97 (also tried 192.168.21.108)

[attachments: image002.jpg, image003.jpg]

On Storage Node

SERVICES ENABLED: (c-vol)
Management - 192.168.21.107
Storage - 10.130.98.136

my_ip : 10.130.98.97 (also tried 192.168.21.108)
glance_host:10.130.98.97 (also tried 192.168.21.108)
iscsi_ip_address: 10.130.98.97 (also tried 192.168.21.108)
lvmdriver-1.iscsi_ip_address   : 10.130.98.136 (also tried 192.168.21.107)
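For reference, a sketch of how options like these are typically laid out in cinder.conf with a named LVM backend (the section layout and driver path follow the usual multibackend convention; the values are illustrative, and note that my_ip is normally the address of the node the service itself runs on, which on the storage node would be 10.130.98.136 rather than the controller's address):

```ini
[DEFAULT]
my_ip = 10.130.98.136
glance_host = 10.130.98.97
enabled_backends = lvmdriver-1

[lvmdriver-1]
volume_driver = cinder.volume.drivers.lvm.LVMISCSIDriver
volume_backend_name = lvmdriver-1
iscsi_ip_address = 10.130.98.136
```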


[attachment: image004.jpg]


Thanks and Regards,
Raghavendrachari kamsali | Software Engineer II  | Embedded Computing
Artesyn Embedded Technologies | 5th Floor, Capella Block, The V, Madhapur| 
Hyderabad, AP 500081 India


___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openst...@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack



--
 With Regards
 Ritesh Nanda
[attachment: cinder-create-1.txt]
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Nominate Irina Povolotskaya for fuel-docs core

2015-03-30 Thread Nikolay Markov
+1
On 29 Mar 2015 at 20:42, Sergey Vasilenko svasile...@mirantis.com wrote:

 +1


 /sv

 On Fri, Mar 27, 2015 at 5:31 PM, Anastasia Urlapova 
 aurlap...@mirantis.com wrote:

 + 10

 On Fri, Mar 27, 2015 at 4:28 AM, Igor Zinovik izino...@mirantis.com
 wrote:

 +1

 On 26 March 2015 at 19:26, Fabrizio Soppelsa fsoppe...@mirantis.com
 wrote:
  +1 definitely
 
 
  On 03/25/2015 10:10 PM, Dmitry Borodaenko wrote:
 
  Fuelers,
 
  I'd like to nominate Irina Povolotskaya for the fuel-docs-core team.
  She has contributed thousands of lines of documentation to Fuel over
  the past several months, and has been a diligent reviewer:
 
 
 
 http://stackalytics.com/?user_id=ipovolotskaya&release=all&project_type=all&module=fuel-docs
 
  I believe it's time to grant her core reviewer rights in the fuel-docs
  repository.
 
  Core reviewer approval process definition:
  https://wiki.openstack.org/wiki/Governance/Approved/CoreDevProcess
 
 
 
 
 __
  OpenStack Development Mailing List (not for usage questions)
  Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 --
 Igor Zinovik
 Deployment Engineer at Mirantis, Inc
 izino...@mirantis.com


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron] Regarding neutron bug # 1432582

2015-03-30 Thread Sudipto Biswas

Someone from my team had installed the OS on bare metal with a wrong 'date'.
When this node was added to the OpenStack controller, the logs from the
neutron agent on the compute node showed 'AMQP connected', but the neutron
agent-list command would not list this agent at all.

I could figure out the problem once the neutron-server debug logs were
enabled; they vaguely pointed at the rejection of AMQP connections due to a
timestamp mismatch. The neutron-server was treating these requests as stale
because the node's timestamp was behind the neutron-server's. However,
there's no good way to detect this if the agent runs on a node whose clock
is ahead.

I recently raised a bug here: 
https://bugs.launchpad.net/neutron/+bug/1432582


And tried to resolve this with the review:
https://review.openstack.org/#/c/165539/

It went through quite a few +2s over some 15 patch sets, but we still have
not found common ground on how to address this situation.

My fix tries to log better and raise an exception to the neutron agent on
the agent's FIRST boot, for better detection of the problem.

I would like to get your thoughts on this fix: whether it seems legitimate
to have the fix as per the patch, or whether you could suggest another
approach to tackle this, or suggest just abandoning the change.



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [QA] How will Tempest discover/run tests migrated to specific projects?

2015-03-30 Thread Rohan Kanade
Since tests can now be removed from Tempest
(https://wiki.openstack.org/wiki/QA/Tempest-test-removal) and migrated to
their specific projects:

Does Tempest plan to discover/run these tests in Tempest gates? If yes, how
is that going to be done? Will there be a discovery mechanism in Tempest
to discover tests from individual projects?

Regards,
Rohan Kanade
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo.messaging][zeromq] introduce Request-Reply pattern to improve the stability

2015-03-30 Thread ozamiatin

Hi,

Sorry for taking so long to reply to the comments on [1].
I'm almost ready to return to the spec with updates.

The main shortcoming of the current zmq driver implementation is that
it manually implements REQ/REP on top of PUSH/PULL.
It results in:

1. PUSH/PULL is a one-way socket (a reply needs another connection),
   so we need to support a backwards socket pipeline (two pipelines).
   With REQ/REP we have it all in one socket pipeline.

2. Delivery of the reply has to be supported over the second pipeline
   (the REQ/REP state machine).


I would like to propose such a socket pipeline:
rpc_client(REQ(tcp)) -> proxy_frontend(ROUTER(tcp)) ->
proxy_backend(DEALER(ipc)) -> rpc_server(REP(ipc))


ROUTER and DEALER are asynchronous substitutes for REQ/REP for building
1-N and N-N topologies, and they don't break the pattern.

The recommended pipeline matches CALL nicely.
However, CAST can also be implemented over REQ/REP by using the reply as a
message delivery acknowledgement without returning it to the caller.
Listening for the CAST reply in a background thread keeps it asynchronous
as well.
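The reply-as-acknowledgement idea can be sketched with plain stdlib sockets (in the real driver these endpoints would be zmq REQ/REP sockets; the TCP plumbing here only stands in for them):

```python
import socket
import threading

def rpc_server(listener):
    # REP-style endpoint: every request gets exactly one reply,
    # which doubles as a delivery acknowledgement.
    conn, _ = listener.accept()
    with conn:
        msg = conn.recv(1024)
        conn.sendall(b"ACK:" + msg)

listener = socket.socket()
listener.bind(("127.0.0.1", 0))
listener.listen(1)
port = listener.getsockname()[1]
t = threading.Thread(target=rpc_server, args=(listener,))
t.start()

# REQ-style client: send, then block on the reply.
with socket.create_connection(("127.0.0.1", port)) as c:
    c.sendall(b"cast-message")
    reply = c.recv(1024)

t.join()
listener.close()
print(reply)  # b'ACK:cast-message'
```

For CALL the reply would carry the return value; for CAST the caller can collect the acknowledgement in a background thread and discard it, keeping the call non-blocking.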

Regards,
Oleksii Zamiatin

On 30.03.15 06:39, Li Ma wrote:

Hi all,

I'd like to propose a simple but straightforward method to improve the
stability of the current implementation.

Here's the current implementation:

receiver(PULL(tcp)) -- service(PUSH(tcp))
receiver(PUB(ipc)) -- service(SUB(ipc))
receiver(PUSH(ipc)) -- service(PULL(ipc))

Actually, as far as I know, the local IPC method is much more stable
than network. I'd like to switch PULL/PUSH to REP/REQ for TCP
communication.

The change is very simple but effective for stable network
communication. I cannot apply the patch to our production systems; I
tried it in my lab, and it works well.

I know there's another blueprint for REP/REQ pattern [1], but it's not
the same, I think.

I'd like to discuss it about how to take advantage of REP/REQ of zeromq.

[1] https://review.openstack.org/#/c/154094/2/specs/kilo/zmq-req-rep-call.rst

Best regards,



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack-operators] [Neutron] Deprecating the use_namespaces option - Now's the time to speak up!

2015-03-30 Thread Kevin Benton
What does fog do? Is it just a client to the Neutron HTTP API? If so, it
should not have broken like that, because the API has remained pretty
stable. If it's a deployment tool, then I could see that, because the
configuration options tend to suffer quite a bit of churn as the tools used
by the reference implementation evolve.

I agree that these changes are an unpleasant experience for the end users,
but that's what the deprecation timeline is for. This feature won't break
in L, it will just result in deprecation warnings. If we get feedback from
users that this serves an important use case that can't be addressed
another way, we can always stop the deprecation at that point.

On Sun, Mar 29, 2015 at 12:44 PM, George Shuklin george.shuk...@gmail.com
wrote:

 On 03/24/2015 09:21 PM, Assaf Muller wrote:

 Note that https://review.openstack.org/#/c/166888/ has been merged.
 This means that the option has been deprecated for K and will be
 removed in L. Anyone using the non-default value of False will be looking
 at errors in his logs.


 Well, I have nothing to do with that option, but every time I see how
 cruel OpenStack is toward users, I can't avoid comparing it to Linux. They
 keep code which is used by userspace forever, even if it causes programmers
 to feel like they need to work more.

 I understand that OpenStack is growing and there are many 'baby mistakes'
 in the past.

 But next time you are curious why someone is still sitting on Cactus,
 remember this case. Every new release of OpenStack is like new software
 where you need to learn everything from scratch.

 Compare this to modern Linux updates, where changes in the kernel are
 almost invisible to userspace and new versions are 'for new features', not
 for 'oh, now I need to rewrite my libraries to support the new version'.

 I'm not joking. Check out Ruby's fog: it simply does not work in a modern
 neutron-based network. Who is to blame? Users, obviously. Lazy bums without
 any respect for the great work of rewriting and obsoleting everything.



 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Kevin Benton
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] FWaaS iptables implementation

2015-03-30 Thread Miyashita, Kazuhiro
Hi,

I want to ask about the FWaaS iptables rule implementation.
Firewall rules are deployed as iptables rules on the network node, and an
ACCEPT target is set at the second rule (*).


Chain neutron-l3-agent-iv431d7bfbc (1 references)
pkts bytes target                        prot opt in  out  source          destination
   0     0 DROP                          all  --  *   *    0.0.0.0/0       0.0.0.0/0       state INVALID
   0     0 ACCEPT                        all  --  *   *    0.0.0.0/0       0.0.0.0/0       state RELATED,ESTABLISHED   (*)
   0     0 neutron-l3-agent-liA31d7bfbc  tcp  --  *   *    172.16.2.0/23   1.2.3.4         tcp spts:1025:65535 dpt:80
   0     0 neutron-l3-agent-liA31d7bfbc  tcp  --  *   *    172.16.6.0/24   1.2.3.4         tcp spts:1025:65535 dpt:80
   0     0 neutron-l3-agent-liA31d7bfbc  tcp  --  *   *    1.2.3.4         172.16.14.0/24  tcp spts:1025:65535 dpt:11051
   0     0 neutron-l3-agent-liA31d7bfbc  tcp  --  *   *    10.3.0.0/24     1.2.3.4         tcp spts:1025:65535 dpt:22
   0     0 neutron-l3-agent-liD31d7bfbc  all  --  *   *    0.0.0.0/0       0.0.0.0/0


Why is the ACCEPT rule set second in the iptables chain? Is it for
performance reasons (for ICMP or other protocols such as UDP/TCP)?

This causes some wrong scenarios, for example:

[outside openstack cloud] --- Firewall(FWaaS) -- [inside openstack cloud]

1) the admin creates a firewall and a firewall rule accepting ICMP requests
from outside the OpenStack cloud, and
2) ICMP request packets come in from outside to inside, and
3) someday, the admin detects that the ICMP rule is a security vulnerability
and creates a firewall rule blocking ICMP requests from outside.

But ICMP request packets still come in due to the ACCEPT rule (*), because
the ICMP connection still hits that second rule (*).
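The ordering is the standard conntrack optimization: one connection-state lookup is cheaper than walking the whole rule chain for every packet of an established flow. The side effect described above follows directly, as this toy model shows (flushing the relevant conntrack entries, e.g. with the conntrack utility, is the usual way operators cut already-established flows after tightening rules):

```python
# A toy stateful filter: established flows are accepted before the
# per-rule chain is consulted, mirroring the 'state ESTABLISHED -> ACCEPT'
# rule sitting near the top of the FWaaS chain.
established = set()          # stand-in for the kernel conntrack table
allow_rules = {"icmp"}       # current firewall rules

def filter_packet(flow, proto):
    if flow in established:
        return "ACCEPT"      # matched before the rules are evaluated
    if proto in allow_rules:
        established.add(flow)
        return "ACCEPT"
    return "DROP"

print(filter_packet("outside->inside", "icmp"))  # ACCEPT, flow now tracked
allow_rules.discard("icmp")                      # admin blocks ICMP
print(filter_packet("outside->inside", "icmp"))  # still ACCEPT: conntrack hit
print(filter_packet("new-flow", "icmp"))         # DROP: new flows see the rules
```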


Thanks.



kazuhiro MIYASHITA


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] Host maintenance notification

2015-03-30 Thread Balázs Gibizer
Hi, 

I have the following scenario: an application consisting of multiple VMs on
different compute hosts. The admin puts one of the hosts into maintenance
mode (nova-manage service disable ...) because there will be some maintenance
activity on that host in the near future. Is there a way to get a
notification from Nova when a host is put into maintenance mode?
If that is not the case today, would the nova community support such an
addition to Nova?

As a follow-up question: is there a way for an external system to listen for
such a notification published on the message bus?
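On the second question: if Nova emitted such a notification, an external consumer would typically attach an endpoint class to a notification listener on the message bus (with oslo.messaging, via its notification listener API). The sketch below only mimics the dispatch; the 'service.update' event type and the payload keys are assumptions for illustration, not something Nova is confirmed to emit today:

```python
# Sketch of an oslo.messaging-style notification endpoint. The listener
# framework would call info() for each notification received on the bus.
class MaintenanceWatcher(object):
    def __init__(self):
        self.disabled_hosts = []

    def info(self, ctxt, publisher_id, event_type, payload, metadata):
        # Hypothetical event type and payload shape -- verify against the
        # notifications your Nova version actually publishes.
        if event_type == "service.update" and payload.get("disabled"):
            self.disabled_hosts.append(payload["host"])

# Simulated dispatch of one message, standing in for the bus delivery.
watcher = MaintenanceWatcher()
watcher.info({}, "nova-api", "service.update",
             {"host": "compute-01", "disabled": True}, {})
print(watcher.disabled_hosts)  # ['compute-01']
```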

Thanks in advance.
Cheers,
Gibi



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Added gating jobs on merge actions for chosen repositories.

2015-03-30 Thread Pawel Brzozowski

Hello,

I would like to inform you that a few gating jobs have been added to the
Jenkins at https://fuel-jenkins.mirantis.com/:


gate-fuel-web
gate-fuel-astute
gate-fuel-library-python
gate-fuel-ostf
gate-fuel-tasks-validator
gate-python-fuelclient

As suggested by Sebastian, these jobs run CI tests on merge actions on the
master node and inform the patch owner about any failures. This, of course,
is to prevent a lack of information when master is broken.


Thank you for your attention.

--
Regards,
Pawel Brzozowski
DevOps

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [heat] Network name as a Server properties

2015-03-30 Thread BORTMAN, Limor (Limor)
Hi,
I noticed that we can't use a network name under OS::Neutron::Port (only
network_id) as a valid Neutron property, and I was wondering why.

I expected it to behave like the image property under OS::Nova::Server:
the property name should be network, and it should accept both an id and a
name.
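The name-or-id behaviour described above is usually implemented as a small resolution step in the resource handler: accept a UUID directly, otherwise look the name up against the Neutron API. A hypothetical sketch (the lookup table and function are illustrative, not Heat's code, where the lookup would be a Neutron client call):

```python
import uuid

# Stand-in for a Neutron name->id lookup; in Heat this would query the API.
networks_by_name = {"private": "a3f1c6d2-0000-4000-8000-0123456789ab"}

def resolve_network(name_or_id):
    try:
        uuid.UUID(name_or_id)
        return name_or_id                    # already a valid id
    except ValueError:
        return networks_by_name[name_or_id]  # fall back to a name lookup

print(resolve_network("private"))
print(resolve_network("a3f1c6d2-0000-4000-8000-0123456789ab"))
```

Both calls resolve to the same UUID, which is what lets a template author pass either form.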

Thanks 
Stotland Limor


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova][Neutron] Status of the nova-network to Neutron migration work

2015-03-30 Thread Jesse Pretorius
On 28 March 2015 at 00:41, Steve Wormley openst...@wormley.com wrote:

 So, I figured I'd weigh in on this as an employee of a nova-network using
 company.

 Nova-network allowed us to do a couple things simply.

 1. Attach openstack networks to our existing VLANs using our existing
 firewall/gateway and allow easy access to hardware such as database servers
 and storage on the same VLAN.
 2. Floating IPs managed at each compute node(multi-host) and via the
 standard nova API calls.
 3. Access to our instances via their private IP addresses from inside the
 company(see 1)

 Our forklift replacement to neutron(as we know we can't 'migrate') is at
 the following state.
 2 meant we can't use pure provider VLAN networks so we had to wait for DVR
 VLAN support to work.


I'm always confused when I see operators mention that provider VLANs can't
be used in a Neutron configuration. At my former employer we had that
setup with Grizzly; note also that any instance attached to a VLAN-tagged
tenant network did not go via the L3 agent: the traffic was tagged and sent
directly from the compute node onto the VLAN.

All we had to do to make this work was to allow VLAN-tagged networks, and
the cloud admin had to set up the provider network with the appropriate VLAN
tag.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [sahara] Last client release in Kilo

2015-03-30 Thread Sergey Lukjanov
Hi Sahara folks,

please share your thoughts about which changes should be included
in the last sahara client release in Kilo, so we can add this version to
global requirements and use it in Heat, Horizon, etc.

Here is an etherpad: https://etherpad.openstack.org/p/sahara-kilo-client

Thanks.

-- 
Sincerely yours,
Sergey Lukjanov
Sahara Technical Lead
(OpenStack Data Processing)
Principal Software Engineer
Mirantis Inc.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova][Neutron] Status of the nova-network to Neutron migration work

2015-03-30 Thread Assaf Muller


- Original Message -
 On 03/27/2015 11:48 AM, Assaf Muller wrote:
  
  
  - Original Message -
  On 03/27/2015 05:22 AM, Thierry Carrez wrote:
  snip
  Part of it is corner (or simplified) use cases not being optimally
  served by Neutron, and I think Neutron could more aggressively address
  those. But the other part is ignorance and convenience: that Neutron
  thing is a scary beast, last time I looked into it I couldn't make sense
  of it, and nova-network just works for me.
 
  That is why during the Ops Summit we discussed coming up with a
  migration guide that would explain the various ways you can use Neutron
  to cover nova-network use cases, demystify a few dark areas, and outline
  the step-by-step manual process you can go through to migrate from one
  to the other.
 
  We found a dev/ops volunteer for writing that migration guide but he was
  unfortunately not allowed to spend time on this. I heard we have new
  volunteers, but I'll let them announce what their plans are, rather than
  put words in their mouth.
 
  This migration guide can happen even if we follow the nova-net spinout
  plan (for the few who want to migrate to Neutron), so this is a
  complementary solution rather than an alternative. Personally I doubt
  there would suddenly be enough people interested in nova-net development
  to successfully spin it out and maintain it. I also agree with Russell
  that long-term fragmentation at this layer of the stack is generally not
  desirable.
 
  I think if you boil everything down, you end up with 3 really important
  differences.
 
  1) neutron is a fleet of services (it's very micro service) and every
  service requires multiple and different config files. Just configuring
  the fleet is a beast if it not devstack (and even if it is)
 
  2) neutron assumes a primary interesting thing to you is tenant secured
  self service networks. This is actually explicitly not interesting to a
  lot of deploys for policy, security, political reasons/restrictions.
 
  3) neutron open source backend defaults to OVS (largely because #2). OVS
  is its own complicated engine that you need to learn to debug. While
  Linux bridge has challenges, it's also something that anyone who's
  worked with Linux & Virtualization for the last 10 years has some
  experience with.
 
  (also, the devstack setup code for neutron is a rats nest, as it was
  mostly not paid attention to. This means it's been 0 help in explaining
  anything to people trying to do neutron. For better or worse devstack is
  our executable manual for a lot of these things)
 
  so that being said, I think we need to talk about minimum viable
  neutron as a model and figure out how far away that is from n-net. This
  week at the QA Sprint, Dean, Sean Collins, and I have spent some time
  hashing it out, hopefully with something to show the end of the week.
  This will be the new devstack code for neutron (the old lib/neutron is
  moved to lib/neutron-legacy).
 
  Default setup will be provider networks (which means no tenant
  isolation). For that you should only need neutron-api, -dhcp, and -l2.
  So #1 is made a bunch better. #2 not a thing at all. And for #3 we'd
  like to revert back to linux bridge for the base case (though first code
  will probably be OVS because that's the happy path today).
 
  
  Looking at the latest user survey, OVS looks to be 3 times as popular as
  Linux bridge for production deployments. Having LB as the default seems
  like an odd choice. You also wouldn't want to change the default before
  LB is tested at the gate.
 
 Sure, actually testing defaults is presumed here. I didn't think it
 needed to be called out separately.

Quick update about OVS vs LB:
Sean M. Collins pushed up a patch that runs CI on Tempest with LB:
https://review.openstack.org/#/c/168423/

So far it's failing pretty badly.

 
   -Sean
 
 --
 Sean Dague
 http://dague.net
 
 
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Fuel][MySQL][Galera] Improvements for the DB configuration

2015-03-30 Thread Bogdan Dobrelya
Hello.
There are several bugs and patches on review related to the MySQL with
Galera and HAproxy cluster-check script configuration. Please don't
hesitate to discuss them in this mail thread and in the review.

* MySQL O_DIRECT mode [0] - merged. Do we need some additional
performance test cases for this change? This is also about to
improve the memory allocation for the host OS as O_DIRECT eliminates
double buffering.

* Galera clustercheck script should use available_when_donor=1 [1].
This one should improve the UX for the case when all galera nodes went
down and cluster is being reassembled completely.

* Mysql deadlock duiring deployment [2]. While the fix for the
puppet-mysql provider issue [3] could be also nice to have, the related
fix is to increase wsrep_retry_autocommit to make deadlocks be tolerated
in a slightly better way from the server side as well.

* READ-COMMITED transactions isolation for MySQL [4]. There is an
original mail thread [5] explaining the related Oslo.db issue with
transactions semantics for different types of DB backend.

[0] https://bugs.launchpad.net/fuel/+bug/1378063
[1] https://bugs.launchpad.net/fuel/+bug/1437816
[2] https://bugs.launchpad.net/fuel/+bug/1431702
[3] https://tickets.puppetlabs.com/browse/MODULES-1852
[4] https://bugs.launchpad.net/fuel/+bug/1438107
[5]
http://lists.openstack.org/pipermail/openstack-dev/2015-February/056245.html


-- 
Best regards,
Bogdan Dobrelya,
Skype #bogdando_at_yahoo.com
Irc #bogdando

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [kolla] Re: Questions about kolla

2015-03-30 Thread Steven Dake (stdake)
Michal,

All great questions.  I have copied the OpenStack mailing list since these 
technical questions can benefit everyone.

From: Stachowski, Michal michal.stachow...@intel.com
Date: Monday, March 30, 2015 at 6:21 AM
To: Steven Dake std...@cisco.com
Cc: Chylinski, Arek arek.chylin...@intel.com
Subject: Questions about kolla

Hi Steven

I am interested in the Kolla project and I’ve got a few questions about it.
Could you tell me:

· What is the state of the whole project? What issues do you have?

Most issues we run into are actually docker issues.  For example, we had a bug 
where mariadb was not set to always-restart.  Rebooting a node would crater 
docker.  This is but one example of many.  To be fair the Docker community is 
super responsive to bug reports and debugging the problems I find :)


· What is the scope of Kolla? Which services do you plan to dockerize?

We plan to dockerize any service needed to deploy OpenStack.  We haven’t 
decided if that includes ceph, since ceph may already be dockerized by someone 
else.  But it does include the HA services we need as well as the rest of the 
OpenStack services.


· As I can see in the docker-registry you’re providing CentOS and Fedora
images, but in the files from the git repository you are using only CentOS.
Does that mean the Fedora images are deprecated?

I just don’t build Fedora images because they are larger. To build Fedora
images, in .buildconf set

PREFIX=fedora-rdo-

and it will build Fedora images.


· Do you plan to release Kolla as a part of the whole OpenStack (for
example, in the Liberty release)?

Kilo 2015.1.0 will be our first release


· Do you plan to support Mesos instead of Kubernetes?


Kubernetes has been deprecated because it doesn’t provide super privileged 
containers.


· Do you plan to support containers with bridge networking instead of host
network mode?


Docker-proxy adds about 20 microseconds to each network packet.  While that 
doesn’t sound like much it adds up.  The main reason we are not using 
docker-proxy and containers with bridging has to do with the fact we couldn’t 
get Neutron to work in this environment in the past (when we were using k8s).  
Neutron creates its own network namespaces, which requires host networking mode.

Regards
-steve

Thanks for Your help ☺

Kind regards
Michał Stachowski
Undergrad Intern Technical
Software Assurance Administrator – Cloud Platform Group – Data Center Group
Intel Corporation - Gdańsk


-
Intel Technology Poland sp. z o.o.
ul. Słowackiego 173 | 80-298 Gdańsk | Sąd Rejonowy Gdańsk Północ | VII Wydział 
Gospodarczy Krajowego Rejestru Sądowego - KRS 101882 | NIP 957-07-52-316 | 
Kapitał zakładowy 200.000 PLN.

This e-mail and any attachments may contain confidential material for the sole 
use of the intended recipient(s). If you are not the intended recipient, please 
contact the sender and delete all copies; any review or distribution by others 
is strictly prohibited.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla] Re: Questions about kolla

2015-03-30 Thread Chmouel Boudjnah
On Mon, Mar 30, 2015 at 3:42 PM, Steven Dake (stdake) std...@cisco.com
wrote:

 We plan to dockerize any service needed to deploy OpenStack.  We haven’t
 decided if that includes ceph, since ceph may already be dockerized by
 someone else.  But it does include the HA services we need as well as the
 rest of the OpenStack services.



It is, and it is available here: https://github.com/ceph/ceph-docker

(Seb, in Cc of this email, is the one who has been working on this.)

Chmouel
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] [scheduler] [gantt] Please stop using Gantt for discussing about Nova scheduler

2015-03-30 Thread Sylvain Bauza

Hi,

tl;dr: I used the [gantt] tag for this e-mail, but I would prefer that we
do so for the last time until we spin off the project.


As it is confusing for many people to understand the difference between
the future Gantt project and the Nova scheduler effort we're doing, I'm
proposing to stop using that name for all the efforts related to reducing
the technical debt and splitting out the scheduler. That includes, not
exhaustively, the topic name for our IRC weekly meetings on Tuesdays, any
ML thread related to the Nova scheduler, and any discussion related to the
scheduler happening on IRC.

Instead of using [gantt], please use [nova] [scheduler] tags.

That said, any discussion related to the real future of a cross-project 
scheduler based on the existing Nova scheduler makes sense to be tagged 
as Gantt, of course.



-Sylvain


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Sahara] Question about Sahara API v2

2015-03-30 Thread michael mccune

On 03/30/2015 07:02 AM, Sergey Lukjanov wrote:

My personal opinion for API 2.0: we should discuss the design of every
object and endpoint, review how they are used from Horizon or
python-saharaclient, and improve them as much as possible. For example,
that includes:

* get rid of tons of extra optional fields
* rename Job -> Job Template, Job Execution -> Job
* better support for Horizon needs
* hrefs

If you have any ideas about 2.0, please write them up; there is a
99% chance that we'll discuss an API 2.0 a lot at the Vancouver summit.


+1

i've started a pad that we can use to collect ideas for the discussion: 
https://etherpad.openstack.org/p/sahara-liberty-api-v2


things that i'd like to see from the v2 discussion

* a full endpoint review, some of the endpoints might need to be 
deprecated or adjusted slightly (for example, job-binary-internals)


* a technology review, should we consider Pecan or stay with Flask?

* proposals for more radical changes to the api; use of micro-versions 
akin to nova's plan, migrating the project id into the headers, possible 
use of swagger to aid in auto-generation of api definitions.
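the nova-style micro-version idea mentioned above boils down to parsing and range-checking a version header. a minimal stdlib-only sketch of that negotiation, where the function names, version bounds, and default behaviour are all illustrative assumptions rather than Sahara (or nova) code:

```python
# Hypothetical sketch of micro-version negotiation akin to nova's plan.
# The version bounds and names below are assumptions for illustration,
# not the actual Sahara API.

MIN_VERSION = (2, 0)
MAX_VERSION = (2, 3)

def parse_version(header_value):
    """Parse a 'major.minor' micro-version string into a tuple of ints."""
    try:
        major, minor = header_value.split('.')
        return int(major), int(minor)
    except (ValueError, AttributeError):
        raise ValueError('Invalid micro-version: %r' % header_value)

def negotiate(header_value=None):
    """Return the effective version for a request.

    Absent header -> oldest supported version; out-of-range versions
    are rejected (an HTTP 406 in a real API).
    """
    if header_value is None:
        return MIN_VERSION
    version = parse_version(header_value)
    if not (MIN_VERSION <= version <= MAX_VERSION):
        raise ValueError('Unsupported micro-version: %r' % header_value)
    return version
```

the point of the scheme is that clients not sending a version keep the old behaviour, while newer clients can opt in per-request.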


i think we will have a good amount to discuss and i will be migrating 
some of my local notes into the pad over this week and the next. i 
invite everyone to add their thoughts to the pad for ideas.


mike



Re: [openstack-dev] [ceilometer][FFE] Floating IP traffic statistics meters

2015-03-30 Thread gordon chung
if we can get the related driver implementation(s) for these meters ASAP, i'm
ok with supporting this feature.

cheers,
gord



 From: me...@juniper.net
 To: openstack-dev@lists.openstack.org
 Date: Sat, 28 Mar 2015 00:56:27 +
 Subject: [openstack-dev] [ceilometer][FFE] Floating IP traffic statistics 
 meters

 Hello,
 Apologies for the double post, forgot to include FFE in the subject:

 I’d like to request an exemption for the following to go into the Kilo 
 release.

 This work is crucial because:
 Cloud operators need to be able to bill customers based on floating IP 
 traffic statistics.

 Why does this need an FFE?
 It’s officially a new feature, adding 4 new meters.

 Status of the work:
 In summary the patch only introduces 4 new meters - 
 ip.floating.transmit.packets, ip.floating.transmit.bytes, 
 ip.floating.receive.packets, ip.floating.receive.bytes - and adds 2 new 
 functions to the neutron_client: a) get the list of all floating IPs, 
 and b) get information about a specific port.
 - The patch necessary for this is already submitted for the review - 
 https://review.openstack.org/#/c/166491/
 - The document impact patch has already been reviewed and is waiting for the 
 ceilometer commit to go through - https://review.openstack.org/#/c/166489/

 Thanks

 Megh
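the four proposed meters above could be assembled into samples roughly as follows. this is a stdlib-only sketch: the counter input format and the sample dict shape are illustrative assumptions, not the actual ceilometer pollster API from the patch under review.

```python
# Sketch: map raw floating-IP traffic counters onto the four proposed
# meter names. Input/output shapes are assumptions for illustration.
METERS = {
    'transmit_packets': 'ip.floating.transmit.packets',
    'transmit_bytes': 'ip.floating.transmit.bytes',
    'receive_packets': 'ip.floating.receive.packets',
    'receive_bytes': 'ip.floating.receive.bytes',
}

def make_samples(fip_id, counters):
    """Build one cumulative sample dict per meter for a floating IP."""
    samples = []
    for key, meter in METERS.items():
        samples.append({
            'name': meter,
            'type': 'cumulative',
            # packet counters vs byte counters get different units
            'unit': 'packet' if meter.endswith('packets') else 'B',
            'volume': counters[key],
            'resource_id': fip_id,
        })
    return samples
```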


Re: [openstack-dev] [QA] How will Tempest discover/run tests migrated to specific projects?

2015-03-30 Thread Boris Pavlovic
Rohan,

In Rally we are going to automate this work by extending the rally verify command.
You can read here in more details:
http://boris-42.me/rally-verfiy-as-the-control-plane-for-gabbi-tempest-in-tree-functional-tests/

You are also welcome to take part in the spec discussion here:
https://review.openstack.org/#/c/166487/


Best regards,
Boris Pavlovic

On Mon, Mar 30, 2015 at 9:51 AM, Rohan Kanade openst...@rohankanade.com
wrote:

 Since tests can now be removed from Tempest 
 https://wiki.openstack.org/wiki/QA/Tempest-test-removal and migrated to
 their specific projects.

 Does Tempest plan to discover/run these tests in tempest gates? If yes,
 how is that going to be done?  Will there be a discovery mechanism in
 Tempest to discover tests from individual projects?

 Regards,
 Rohan Kanade





Re: [openstack-dev] [TripleO] CI outage

2015-03-30 Thread Derek Higgins


Tl;dr: tripleo ci is back up and running; see below for more.

On 21/03/15 01:41, Dan Prince wrote:

Short version:

The RH1 CI region has been down since yesterday afternoon.

We have a misbehaving switch and have filed a support ticket with the
vendor to troubleshoot things further. We hope to know more this
weekend, or Monday at the latest.

Long version:

Yesterday afternoon we started seeing issues in scheduling jobs on the
RH1 CI cloud. We haven't made any OpenStack configuration changes
recently, and things have been quite stable for some time now (our
uptime was 365 days on the controller).

Initially we found a misconfigured Keystone URL which was preventing
some diagnostic queries via OS clients external to the rack. This
setting hadn't been changed recently, however, and didn't seem to bother
nodepool before, so I don't think it is the cause of the outage...

MySQL also got a bounce. It seemed happy enough after a restart as well.

After fixing the keystone setting and bouncing MySQL, instances appeared
to go ACTIVE, but we were still having connectivity issues getting
floating IPs and DHCP working on overcloud instances. After a good bit
of debugging we started looking at the switches. It turns out one of
them had high CPU usage (above the warning threshold) and MAC addresses
were also unstable (ports were moving around).

Until this is resolved RH1 is unavailable to host CI jobs. We will
post back here with an update once we have more information.


RH1 has been running as expected since last Thursday afternoon, which
means the cloud was down for almost a week. I'm left not entirely sure
what some of the problems were; at various times during the week we
tried a number of different interventions which may have caused (or
exposed) some of our problems, e.g.


At one stage we restarted openvswitch in an attempt to ensure nothing
had gone wrong with our ovs tunnels. Around the same time (and possibly
caused by the restart), we started getting progressively worse
connections to some of our servers, with lots of entries like this on
our bastion server:
Mar 20 13:22:49 host01-rack01 kernel: bond0.5: received packet with own
address as source address


Not linking the restart with the looping-packets message, and instead
thinking we might have a problem with the switch, we put in a call with
our switch vendor.


Continuing to chase down a problem on our own servers, we noticed that
tcpdump was at times reporting about 100,000 ARP packets per second
(sometimes more).


Various interventions stopped the excess broadcast traffic, e.g.:
  * Shutting down most of the compute nodes stopped the excess traffic, 
but the problem wasn't linked to any one particular compute node
  * Running the tripleo os-refresh-config script on each compute node 
stopped the excess traffic


But restarting the controller node caused the excess traffic to return

Eventually we got the cloud running without the flood of broadcast
traffic, with a small number of compute nodes, but instances still
weren't getting IP addresses. With nova and neutron in debug mode we saw
an error where nova was failing to mount the qcow image (iirc it was
attempting to resize the image).


Unable to figure out why this had worked in the past but now didn't, we
redeployed this single compute node using the original image that was
used (over a year ago). Instances on this compute node were booting but
failing to get an IP address; we noticed this was because of a
difference between the time on the controller and the time on the
compute node. After resetting the time, instances were booting and
networking was working as expected (this was now Wednesday evening).


Looking back at the error while mounting the qcow image, I believe this
was a red herring; it looks like this problem was always present on our
system, but we didn't have scary-looking tracebacks in the logs until we
switched to debug mode.


Now pretty confident we could get back to a running system by starting
up all the compute nodes again, ensuring the os-refresh-config scripts
were run, and then ensuring the times were set properly on each host, we
decided to remove any entropy that may have built up while debugging
problems on each compute node, so we redeployed all of our compute nodes
from scratch. This all went as expected but was a little time consuming,
as we verified each step as we went along. The steps went something
like this:


o with the exception of the overcloud controller, nova delete all of 
the hosts on the undercloud (31 hosts)


o We now have a problem: in tripleo the controller and compute nodes are 
tied together in a single heat template, so we need the heat template 
that was used a year ago to deploy the whole overcloud, along with the 
parameters that were passed into it. We had actually done this before 
when adding new compute nodes to the cloud, so it wasn't new territory.
   o Use heat template-show ci-overcloud to get the original heat 
template (a 

Re: [openstack-dev] [Nova][Neutron] Status of the nova-network to Neutron migration work

2015-03-30 Thread Anita Kuno
On 03/30/2015 09:25 AM, Assaf Muller wrote:
 
 
 - Original Message -
 On 03/27/2015 11:48 AM, Assaf Muller wrote:


 - Original Message -
 On 03/27/2015 05:22 AM, Thierry Carrez wrote:
 snip
 Part of it is corner (or simplified) use cases not being optimally
 served by Neutron, and I think Neutron could more aggressively address
 those. But the other part is ignorance and convenience: that Neutron
 thing is a scary beast, last time I looked into it I couldn't make sense
 of it, and nova-network just works for me.

 That is why during the Ops Summit we discussed coming up with a
 migration guide that would explain the various ways you can use Neutron
 to cover nova-network use cases, demystify a few dark areas, and outline
 the step-by-step manual process you can go through to migrate from one
 to the other.

 We found a dev/ops volunteer for writing that migration guide but he was
 unfortunately not allowed to spend time on this. I heard we have new
 volunteers, but I'll let them announce what their plans are, rather than
 put words in their mouth.

 This migration guide can happen even if we follow the nova-net spinout
 plan (for the few who want to migrate to Neutron), so this is a
 complementary solution rather than an alternative. Personally I doubt
 there would suddenly be enough people interested in nova-net development
 to successfully spin it out and maintain it. I also agree with Russell
 that long-term fragmentation at this layer of the stack is generally not
 desirable.

 I think if you boil everything down, you end up with 3 really important
 differences.

 1) neutron is a fleet of services (it's very micro service) and every
 service requires multiple and different config files. Just configuring
 the fleet is a beast if it is not devstack (and even if it is)

 2) neutron assumes a primary interesting thing to you is tenant secured
 self service networks. This is actually explicitly not interesting to a
 lot of deploys for policy, security, political reasons/restrictions.

 3) neutron open source backend defaults to OVS (largely because #2). OVS
 is it's own complicated engine that you need to learn to debug. While
 Linux bridge has challenges, it's also something that anyone who's
 worked with Linux  Virtualization for the last 10 years has some
 experience with.

 (also, the devstack setup code for neutron is a rat's nest, as it was
 mostly not paid attention to. This means it's been 0 help in explaining
 anything to people trying to do neutron. For better or worse devstack is
 our executable manual for a lot of these things)

 so that being said, I think we need to talk about minimum viable
 neutron as a model and figure out how far away that is from n-net. This
 week at the QA Sprint, Dean, Sean Collins, and I have spent some time
 hashing it out, hopefully with something to show the end of the week.
 This will be the new devstack code for neutron (the old lib/neutron is
 moved to lib/neutron-legacy).

 Default setup will be provider networks (which means no tenant
 isolation). For that you should only need neutron-api, -dhcp, and -l2.
 So #1 is made a bunch better. #2 not a thing at all. And for #3 we'd
 like to revert back to linux bridge for the base case (though first code
 will probably be OVS because that's the happy path today).


 Looking at the latest user survey, OVS looks to be 3 times as popular as
 Linux bridge for production deployments. Having LB as the default seems
 like an odd choice. You also wouldn't want to change the default before
 LB is tested at the gate.

 Sure, actually testing defaults is presumed here. I didn't think it
 needed to be called out separately.
 
 Quick update about OVS vs LB:
 Sean M. Collins pushed up a patch that runs CI on Tempest with LB:
 https://review.openstack.org/#/c/168423/
 
 So far it's failing pretty badly.
That is the nature of development.

Let's also note that is patchset 1 of a patch marked work in progress.

If we start to make decisions about whether or not a direction is a
reasonable one on a patch which is expected to fail this early in the
development process, we seriously injure our ability to foster
development.

Please understand and respect the development process prior to expecting
others to make decisions prematurely.

Thank you,
Anita.
 

  -Sean

 --
 Sean Dague
 http://dague.net




 



Re: [openstack-dev] [Nova][Neutron] Status of the nova-network to Neutron migration work

2015-03-30 Thread Anita Kuno
On 03/26/2015 06:31 PM, Michael Still wrote:
 Hi,
 
 I thought it would be a good idea to send out a status update for the
 migration from nova-network to Neutron, as there hasn't been as much
 progress as we'd hoped for in Kilo. There are a few issues which have
 been slowing progress down.
 
 First off, creating an all encompassing turn key upgrade is probably
 not possible. This was also never a goal of this effort -- to quote
 the spec for this work, Consequently, the process described here is
 not a “one size fits all” automated push-button tool but a series of
 steps that should be obvious to automate and customise to meet local
 needs [1]. The variety of deployment and configuration options
 available makes a turn key migration very hard to write, and possibly
 impossible to test. We therefore have opted for writing migration
 tools, which allow operators to plug components together in the way
 that makes sense for their deployment and then migrate using those.
 
 However, talking to operators at the Ops Summit, it has become clear
 that some operators simply aren't interested in moving to Neutron --
 largely because they either aren't interested in tenant networks, or
 have corporate network environments that make deploying Neutron very
 hard. So, even if we provide migration tools, it is still likely that
 we will end up with loyal nova-network users who aren't interested in
 moving. From the Nova perspective, the end goal of all of this effort
 is to delete the nova-network code, and if we can't do that because
 some people simply don't want to move, then what is gained by putting
 a lot of effort into migration tooling?
 
 Therefore, there is some talk about spinning nova-network out into its
 own project where it could continue to live on and be better
 maintained than the current Nova team is able to do. However, this is
 a relatively new idea and we haven't had a chance to determine how
 feasible it is given where we are in the release cycle. I assume that
 if we did this, we would need to find a core team passionate about
 maintaining nova-network, and we would still need to provide some
 migration tooling for operators who are keen to move to Neutron.
 However, that migration tooling would be less critical than it is now.
 
 Unfortunately, this has all come to a head at a time when the Nova
 team is heads down getting the Kilo release out the door. We simply
 don't have the time at the moment to properly consider these issues.
 So, I'd like to ask for us to put a pause on this current work until
 we have Kilo done. These issues are complicated and important, so I
 feel we shouldn't rush them at a time we are distracted.
 
 Finally, I want to reinforce that the position we currently find
 ourselves in isn't because of a lack of effort. Oleg, Angus and Anita
 have all worked very hard on this problem during Kilo, and it is
 frustrating that we haven't managed to find a magic bullet to solve
 all of these problems. I want to personally thank each of them for
 their efforts this cycle on this relatively thankless task.
 
 I'd appreciate other's thoughts on these issues.
 
 Michael
 
 
 1: 
 http://specs.openstack.org/openstack/neutron-specs/specs/kilo/migration-from-nova-net.html#impact-limitations
 
 
Thank you, Michael, for this post.

It is clear that we need some additional discussion and agreement here,
and I welcome the discussion.

It is disheartening to try to create an implementation that won't
achieve the goal.

I too would like to thank everyone who has worked hard to try to create
a migration path with the understanding we had been operating with, my
thanks to each of you.

I have placed the weekly nova-net to neutron migration meeting on
hold[0], pending the outcome of this or other discussions and some
additional direction.

Thank you to all participating,
Anita.

[0] https://wiki.openstack.org/wiki/Meetings/Nova-nettoNeutronMigration



Re: [openstack-dev] [nova] Host maintenance notification

2015-03-30 Thread Tim Gao
Hi, Balázs

I have the same scenario as you. AFAIK, notifications in Nova do not
support something like a service enable/disable action. These APIs do
nothing more than save the new value to the database. But I believe
supporting this kind of notification would not be very complicated.

To the second question, exactly yes. You can get inspiration from
OpenStack Ceilometer and StackTach. These projects already listen on the
OpenStack message bus to get notifications and then do some awesome work
with them. You can find something interesting here (
http://docs.openstack.org/admin-guide-cloud/content/section_telemetry-notifications.html
)
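The listener side can be sketched without oslo.messaging by modelling its endpoint-dispatch pattern with plain classes. Note the service.update event type used here is purely hypothetical: as discussed above, Nova does not emit a notification for service enable/disable today, which is exactly the gap in question.

```python
# Stdlib-only sketch of an oslo.messaging-style notification listener:
# endpoints expose priority handlers (info, error, ...) and a dispatcher
# routes incoming notifications by event_type. The 'service.update'
# event type is hypothetical -- Nova does not emit it for service
# enable/disable today.

class MaintenanceEndpoint:
    """Collects hosts reported as disabled (put into maintenance)."""
    def __init__(self):
        self.disabled_hosts = []

    def info(self, event_type, payload):
        if event_type == 'service.update' and payload.get('disabled'):
            self.disabled_hosts.append(payload['host'])

class Dispatcher:
    """Feed notifications (as dicts) to every registered endpoint."""
    def __init__(self, endpoints):
        self.endpoints = endpoints

    def dispatch(self, message):
        for ep in self.endpoints:
            # route by priority to the matching handler, if defined
            handler = getattr(ep, message.get('priority', 'info'), None)
            if handler:
                handler(message['event_type'], message['payload'])
```

In a real deployment the dispatcher role is played by the messaging library consuming from the notification topic on the bus.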

2015-03-30 18:16 GMT+08:00 Balázs Gibizer balazs.gibi...@ericsson.com:

 Hi,

 I have the following scenario. I have an application consisting of
 multiple VMs on different compute hosts. The admin puts one of the hosts
 into maintenance mode (nova-manage service disable ...) because there will
 be some maintenance activity on that host in the near future. Is there a
 way to  get a notification from Nova when a host is put into maintenance
 mode?
 If it is not the case today would the nova community support such an
 addition to Nova?

 As a subsequent question is there a way for an external system to listen
 to such a notification published on the message bus?

 Thanks in advance.
 Cheers,
 Gibi







-- 
Best regards,
Tim Gao


Re: [openstack-dev] [Sahara] Question about Sahara API v2

2015-03-30 Thread Sergey Lukjanov
Hi,

In a few words: we're finding some places that were not designed very
well (or our vision has changed), and so we'd like to update the API to
provide a much better interface for working with Sahara. The blueprint
you've listed was created in the Atlanta summit timeframe and so is no
longer current.

My personal opinion for API 2.0: we should discuss the design of all
objects and endpoints, review how they are used from Horizon or
python-saharaclient, and improve them as much as possible. For example,
that includes:

* get rid of tons of extra optional fields
* rename Job -> Job Template, Job Execution -> Job
* better support for Horizon needs
* hrefs

If you have any ideas about 2.0, please write them up; there is a
99% chance that we'll discuss an API 2.0 a lot at the Vancouver summit.

Thanks.

On Mon, Mar 30, 2015 at 5:34 AM, Chen, Ken ken.c...@intel.com wrote:

  Hi all,

 Recently I have read some content about the Sahara API v2 proposal, but I
 am still a bit confused about why we are doing this at this stage. I read
 the bp https://blueprints.launchpad.net/sahara/+spec/v2-api-impl and the
 involved gerrit reviews (although already abandoned). However, I did not
 find anything newer than the current v1+v1.1 APIs. So why do we want a v2
 API? Just to combine the v1 and v1.1 APIs? Is there any deeper requirement
 or background that requires us to do so? If yes, please let me know.

 Btw, I also see some comments that we may want to introduce Pecan to
 implement the Sahara APIs. Will that happen as soon as Liberty, or is
 that not decided yet?



 Thanks a lot.

 -Ken





-- 
Sincerely yours,
Sergey Lukjanov
Sahara Technical Lead
(OpenStack Data Processing)
Principal Software Engineer
Mirantis Inc.


Re: [openstack-dev] [os-ansible-deployment] Nominating Nolan Brubaker for core team

2015-03-30 Thread Jesse Pretorius
On 25 March 2015 at 15:24, Kevin Carter kevin.car...@rackspace.com wrote:

 I would like to nominate Nolan Brubaker (palendae on IRC) for the
 os-ansible-deployment-core team. Nolan has been involved with the project
 for the last few months and has been an active reviewer with solid reviews.
 IMHO, I think he is ready to receive core powers on the repository.

 References:
   [
 https://review.openstack.org/#/q/project:stackforge/os-ansible-deployment+reviewer:%22nolan+brubaker%253Cnolan.brubaker%2540rackspace.com%253E%22,n,z
 ]

 Please respond with +1/-1s or any other concerns.


+1. Nolan's been an active reviewer and has provided good feedback and
contributions.


[openstack-dev] [neutron] Liberty Specs are now open!

2015-03-30 Thread Kyle Mestery
Hi all!

The patch to open up Liberty specs in neutron-specs has now merged [1], so
we're now accepting specs for Neutron targeting the Liberty release. Please
note the process has changed slightly, as indicated in this patch [2]. If a
patch was submitted for Kilo and didn't make it, I've got a review out [3]
which moves these specs to a kilo-backlog directory. They will be preserved
there.

If you previously proposed a spec which didn't land in Kilo and you want to
propose it for Liberty, wait for this patch [3] to land (or rebase your
change on top of it) and you can simply propose to move your spec from
kilo-backlog into liberty. We'll review and try to fast-track that one into
Liberty.

Please note Liberty itself isn't open for development yet until we cut the
RC branch sometime soon. I'll send another note when that happens.

Happy coding!

Thanks
Kyle

[1] https://review.openstack.org/#/c/165116/
[2] https://review.openstack.org/#/c/168434/
[3] https://review.openstack.org/#/c/168351/


Re: [openstack-dev] [nova] meaning of 'Triaged' in bug tracker

2015-03-30 Thread Davanum Srinivas
+1 Sean!

On Mon, Mar 30, 2015 at 10:00 AM, Sean Dague s...@dague.net wrote:
 I've been attempting to clean up the bug tracker, one of the continued
 inconsistencies that are in the Nova tracker is the use of 'Triaged'.


 https://wiki.openstack.org/wiki/BugTriage

 If the bug contains the solution, or a patch, set the bug status to
 Triaged


 In OpenStack the Triaged state means the solution is provided in the bug
 at enough specification that a patch can be spun. We're ignoring the
 Launchpad language here specifically.

 We had about 180 bugs in Triaged this morning, some untouched for 2
 years. I'm moving everything that looks valid without a solution to
 'Confirmed'. A bunch of other issues look like they can be invalidated
 in the process (through code greps).

 In future, please be careful about putting things into Triaged that
 don't have a solution. Triaged should always end up as a pretty small
 number of things.

 -Sean

 --
 Sean Dague
 http://dague.net




-- 
Davanum Srinivas :: https://twitter.com/dims



Re: [openstack-dev] [Nova][Neutron] Status of the nova-network to Neutron migration work

2015-03-30 Thread Anita Kuno
On 03/26/2015 08:58 PM, Russell Bryant wrote:
 On 03/26/2015 06:31 PM, Michael Still wrote:
 Hi,

 I thought it would be a good idea to send out a status update for the
 migration from nova-network to Neutron, as there hasn't been as much
 progress as we'd hoped for in Kilo. There are a few issues which have
 been slowing progress down.
 
 Thanks for writing up the status!
 
 First off, creating an all encompassing turn key upgrade is probably
 not possible. This was also never a goal of this effort -- to quote
 the spec for this work, Consequently, the process described here is
 not a “one size fits all” automated push-button tool but a series of
 steps that should be obvious to automate and customise to meet local
 needs [1]. The variety of deployment and configuration options
 available makes a turn key migration very hard to write, and possibly
 impossible to test. We therefore have opted for writing migration
 tools, which allow operators to plug components together in the way
 that makes sense for their deployment and then migrate using those.
 
 Yes, I'm quite convinced that it will end up being a fairly custom
 effort for virtually all deployments complex enough where just starting
 over or cold migration isn't an option.
 
 However, talking to operators at the Ops Summit, it has become clear
 that some operators simply aren't interested in moving to Neutron --
 largely because they either aren't interested in tenant networks, or
 have corporate network environments that make deploying Neutron very
 hard. 
 
 I totally get point #1: nova-network has fewer features, but I don't
 need the rest, and nova-network is rock solid for me.
 
 I'm curious about the second point about Neutron being more difficult to
 deploy than nova-network.  That's interesting because it actually seems
 like Neutron is more flexible when it comes to integration with existing
 networks.  Do you know any more details?  If not, perhaps those with
 that concern could fill in with some detail here?
 
 So, even if we provide migration tools, it is still likely that
 we will end up with loyal nova-network users who aren't interested in
 moving. From the Nova perspective, the end goal of all of this effort
 is to delete the nova-network code, and if we can't do that because
 some people simply don't want to move, then what is gained by putting
 a lot of effort into migration tooling?
 
 To me it comes down to the reasons people don't want to move.  I'd like
 to dig into exactly why people don't want to use Neutron.  If there are
 legitimate reasons why nova-network will work better, then Neutron has
 not met parity and we're certainly not ready to deprecate nova-network.
 
 I still think getting down to a single networking project should be the
 end goal.  The confusion around networking choices has been detrimental
 to OpenStack.
I heartily agree.

Here is my problem. I am getting the feeling from the big tent
discussions (now this could be my fault, since I don't know whether it
is in the proposal or just something people are making up about it) that
we are allowing more than one networking project in OpenStack. I have
been disappointed with that impression, but that has been the impression
I have gotten.

I'm glad to hear you have a different perspective on this, Russell, and
would just like to clarify this point.

Are we saying that OpenStack has one networking option?

Thanks,
Anita.
 
 Therefore, there is some talk about spinning nova-network out into its
 own project where it could continue to live on and be better
 maintained than the current Nova team is able to do. However, this is
 a relatively new idea and we haven't had a chance to determine how
 feasible it is given where we are in the release cycle. I assume that
 if we did this, we would need to find a core team passionate about
 maintaining nova-network, and we would still need to provide some
 migration tooling for operators who are keen to move to Neutron.
 However, that migration tooling would be less critical than it is now.
 
 From a purely technical perspective, it seems like quite a bit of work.
  It reminds me of we'll just split the scheduler out, and we see how
 long that's taking in practice.  I really think all of that effort is
 better spent just improving Neutron.
 
 From a community perspective, I'm not thrilled about long term
 fragmentation for such a fundamental piece of our stack.  So, I'd really
 like to dig into the current state of gaps between Neutron and
 nova-network.  If there were no real gaps, there would be no sensible
 argument to keep the 2nd option.
 
 Unfortunately, this has all come to a head at a time when the Nova
 team is heads down getting the Kilo release out the door. We simply
 don't have the time at the moment to properly consider these issues.
 So, I'd like to ask for us to put a pause on this current work until
 we have Kilo done. These issues are complicated and important, so I
 feel we shouldn't rush them at a time we are 

Re: [openstack-dev] [QA] How will Tempest discover/run tests migrated to specific projects?

2015-03-30 Thread Matthew Treinish
On Mon, Mar 30, 2015 at 12:21:18PM +0530, Rohan Kanade wrote:
 Since tests can now be removed from Tempest 
 https://wiki.openstack.org/wiki/QA/Tempest-test-removal and migrated to
 their specific projects.
 
 Does Tempest plan to discover/run these tests in tempest gates? If yes, how
 is that going to be done?  Will there be a discovery mechanism in Tempest
 to discover tests from individual projects?
 

No, the idea behind that wiki page is to outline the procedure for finding
something that is out of scope and doesn't belong in tempest and is also safe
to remove from the tempest jobs. The point of going through that entire
procedure is that the test being removed should not be run in the tempest gates
anymore and will become the domain of the other project.

Also, IMO the moved test ideally won't follow the pattern of a tempest test
or have the same constraints, and would instead be more coupled
to the project under test's internals. So it wouldn't be appropriate to
include in a tempest run either.

For example, the first test we removed with that procedure was:

https://review.openstack.org/#/c/158852/

which removed the flavor negative tests from tempest. These were just testing
operations that would go no deeper than Nova's DB layer, which was something
we couldn't verify in tempest. They also didn't really belong in tempest because
they were just implicitly verifying Nova's DB layer through API responses. The
replacement tests:

http://git.openstack.org/cgit/openstack/nova/tree/nova/tests/functional/wsgi/test_flavor_manage.py

were able to verify the state of the DB was correct and ensure the correct
behavior both in the api and nova's internals. This kind of testing is something
which doesn't belong in tempest or any other external test suite. It is also
what I feel we should be aiming for with project-specific in-tree functional
testing, and the kind of thing we should be using the removal process on that
wiki page for.
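The distinction drawn above — black-box Tempest tests that only see API responses versus in-tree functional tests that can also assert on internal state — can be sketched with a toy example. This is not Nova code; the class and helper names are made up for illustration.

```python
# Toy sketch of the testing distinction described above. FlavorAPI is a
# stand-in for a service under test; its _db dict stands in for the DB
# layer that a black-box (Tempest-style) test cannot see.

class FlavorAPI:
    def __init__(self):
        self._db = {}  # internal state, invisible to external API tests

    def create_flavor(self, name, vcpus):
        if vcpus < 1:
            raise ValueError("vcpus must be a positive integer")
        self._db[name] = {"name": name, "vcpus": vcpus}
        return dict(self._db[name])  # the API response

def test_create_flavor_persists():
    api = FlavorAPI()
    resp = api.create_flavor("m1.small", 2)
    assert resp["vcpus"] == 2                  # all a black-box test can check
    assert api._db["m1.small"]["vcpus"] == 2   # an in-tree test can go deeper

def test_negative_flavor_rejected():
    api = FlavorAPI()
    try:
        api.create_flavor("bad", 0)
    except ValueError:
        return
    raise AssertionError("expected ValueError for vcpus=0")
```

The second test is only a weak negative check when done through the API alone; with access to the backing store, the first test can verify state directly, which is the point being made about the replacement flavor-manage tests.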


-Matt Treinish



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova][Neutron] Status of the nova-network to Neutron migration work

2015-03-30 Thread Assaf Muller


- Original Message -
 On 03/30/2015 09:25 AM, Assaf Muller wrote:
  
  
  - Original Message -
  On 03/27/2015 11:48 AM, Assaf Muller wrote:
 
 
  - Original Message -
  On 03/27/2015 05:22 AM, Thierry Carrez wrote:
  snip
  Part of it is corner (or simplified) use cases not being optimally
  served by Neutron, and I think Neutron could more aggressively address
  those. But the other part is ignorance and convenience: "that Neutron
  thing is a scary beast, last time I looked into it I couldn't make
  sense of it, and nova-network just works for me."
 
  That is why during the Ops Summit we discussed coming up with a
  migration guide that would explain the various ways you can use Neutron
  to cover nova-network use cases, demystify a few dark areas, and
  outline
  the step-by-step manual process you can go through to migrate from one
  to the other.
 
  We found a dev/ops volunteer for writing that migration guide but he
  was
  unfortunately not allowed to spend time on this. I heard we have new
  volunteers, but I'll let them announce what their plans are, rather
  than
  put words in their mouth.
 
  This migration guide can happen even if we follow the nova-net spinout
  plan (for the few who want to migrate to Neutron), so this is a
  complementary solution rather than an alternative. Personally I doubt
  there would suddenly be enough people interested in nova-net
  development
  to successfully spin it out and maintain it. I also agree with Russell
  that long-term fragmentation at this layer of the stack is generally
  not
  desirable.
 
  I think if you boil everything down, you end up with 3 really important
  differences.
 
  1) neutron is a fleet of services (it's very micro service) and every
  service requires multiple and different config files. Just configuring
  the fleet is a beast if it is not devstack (and even if it is)
 
  2) neutron assumes a primary interesting thing to you is tenant secured
  self service networks. This is actually explicitly not interesting to a
  lot of deploys for policy, security, political reasons/restrictions.
 
  3) neutron open source backend defaults to OVS (largely because #2). OVS
  is its own complicated engine that you need to learn to debug. While
  Linux bridge has challenges, it's also something that anyone who's
  worked with Linux & Virtualization for the last 10 years has some
  experience with.
 
  (also, the devstack setup code for neutron is a rat's nest, as it was
  mostly not paid attention to. This means it's been 0 help in explaining
  anything to people trying to do neutron. For better or worse devstack is
  our executable manual for a lot of these things)
 
  so that being said, I think we need to talk about minimum viable
  neutron as a model and figure out how far away that is from n-net. This
  week at the QA Sprint, Dean, Sean Collins, and I have spent some time
  hashing it out, hopefully with something to show the end of the week.
  This will be the new devstack code for neutron (the old lib/neutron is
  moved to lib/neutron-legacy).
 
  Default setup will be provider networks (which means no tenant
  isolation). For that you should only need neutron-api, -dhcp, and -l2.
  So #1 is made a bunch better. #2 not a thing at all. And for #3 we'd
  like to revert back to linux bridge for the base case (though first code
  will probably be OVS because that's the happy path today).
 
 
  Looking at the latest user survey, OVS looks to be 3 times as popular as
  Linux bridge for production deployments. Having LB as the default seems
  like an odd choice. You also wouldn't want to change the default before
  LB is tested at the gate.
 
  Sure, actually testing defaults is presumed here. I didn't think it
  needed to be called out separately.
  
  Quick update about OVS vs LB:
  Sean M. Collins pushed up a patch that runs CI on Tempest with LB:
  https://review.openstack.org/#/c/168423/
  
  So far it's failing pretty badly.
 That is the nature of development.
 
 Let's also note that is patchset 1 of a patch marked work in progress.
 
 If we start to make decisions about whether or not a direction is a
 reasonable direction on a patch which is expected to fail this early in
  the development process we seriously injure our ability to foster development.
 
 Please understand and respect the development process prior to expecting
 others to make decisions prematurely.
 

I was providing a status report, nothing more.

 Thank you,
 Anita.
  
 
 -Sean
 
  --
  Sean Dague
  http://dague.net
 
 

Re: [openstack-dev] [kolla] Re: Questions about kolla

2015-03-30 Thread Steven Dake (stdake)


From: Chmouel Boudjnah chmo...@chmouel.com
Reply-To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Date: Monday, March 30, 2015 at 6:52 AM
To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Cc: Chylinski, Arek arek.chylin...@intel.com, Stachowski, 
Michal michal.stachow...@intel.com
Subject: Re: [openstack-dev] [kolla] Re: Questions about kolla


On Mon, Mar 30, 2015 at 3:42 PM, Steven Dake (stdake) 
std...@cisco.com wrote:
We plan to dockerize any service needed to deploy OpenStack.  We haven’t 
decided if that includes ceph, since ceph may already be dockerized by someone 
else.  But it does include the HA services we need as well as the rest of the 
OpenStack services.


it is and available here, https://github.com/ceph/ceph-docker

Looks pretty good.  Too bad it uses Ubuntu 14.04 as a userspace – in Kolla we 
want to support both CentOS and Ubuntu as a userspace.  But this gap should be 
easy to solve.

I really dislike the bindmounting of /etc and /var/lib/ceph.  /etc should be 
passed via environment and /var/lib/ceph should be a data container to maintain 
the idempotency, immutability, and declarative nature of containers.

Thanks for the link!

Regards
-steve


(Seb in Cc of this email is the one who has been working on this)

Chmouel


[openstack-dev] [kolla] Kolla milestone 3 was busted for Ubuntu 14.04 - fixed in master

2015-03-30 Thread Steven Dake (stdake)
Hey folks,

We got a lot of complaints from folks trying to run Kolla on Ubuntu 14.04 with 
3.13 kernel.  Most of the problems were related to kernel bugs or docker bugs.  
We have worked around them, and now Kolla launches like a champ for me on 14.04.

Regards
-steve


Re: [openstack-dev] [os-ansible-deployment] Nominating Nolan Brubaker for core team

2015-03-30 Thread Kevin Carter
Please join me in welcoming Nolan Brubaker (palendae) to the 
os-ansible-deployment core team.

—

Kevin Carter


 On Mar 30, 2015, at 06:54, Jesse Pretorius jesse.pretor...@gmail.com wrote:
 
 On 25 March 2015 at 15:24, Kevin Carter kevin.car...@rackspace.com wrote:
 I would like to nominate Nolan Brubaker (palendae on IRC) for the 
 os-ansible-deployment-core team. Nolan has been involved with the project for 
 the last few months and has been an active reviewer with solid reviews. IMHO, 
 I think he is ready to receive core powers on the repository.
 
 References:
   [ 
 https://review.openstack.org/#/q/project:stackforge/os-ansible-deployment+reviewer:%22nolan+brubaker%253Cnolan.brubaker%2540rackspace.com%253E%22,n,z
  ]
 
 Please respond with +1/-1s or any other concerns.
 
 +1 Nolan's been an active reviewer, provided good feedback and contributions.






[openstack-dev] Barbican : Usage of mode attribute in storing and order the secret

2015-03-30 Thread Asha Seshagiri
Hi All ,

What is the use of the "mode" attribute? What does the value of this
attribute signify, and what are the possible values of this attribute?
For ex :Consider the order request to create the secret :

POST v1/orders

Header: content-type=application/json
X-Project-Id: {project_id}
{
  "type": "key",
  "meta": {
    "name": "secretname",
    "algorithm": "AES",
    "bit_length": 256,
    "mode": "cbc",
    "payload_content_type": "application/octet-stream"
  }
}


What does the "mode" value "cbc" indicate?
-- 
*Thanks and Regards,*
*Asha Seshagiri*


[openstack-dev] [Neutron] Core API vs extension: the subnet pool feature

2015-03-30 Thread Akihiro Motoki
Hi Neutron folks
(API folks may be interested on this)

We have another discussion on Core vs extension in the subnet pool
feature review
https://review.openstack.org/#/c/157597/.
We had a similar discussion on VLAN transparency and MTU for a
network model last week.
I would like to share my concerns on changing the core API directly.
I hope this help us make the discussion productive.
Note that I don't want to discuss the micro-versioning because it
mainly focuses on the Kilo FFE BP.

I would like to discuss this topic in today's neutron meeting,
but as I am not so confident I can get up in time, I would like to send this mail.


The extension mechanism in Neutron provides two points for extensibility:
- (a) visibility of features in API (users can know which features are
available through the API)
- (b) opt-in mechanism in plugins (plugin maintainers can decide to
support some feature after checking the detail)

My concerns mainly comes from the first point (a).
If we have no way to detect it, users (including Horizon) need to do a
dirty workaround
to determine whether some feature is available. I believe this is one
important point in API.

On the second point, my only concern (not so important) is that we are
making the core
API change at this moment of the release. Some plugins do not consume
db_base_plugin and
such plugins need to investigate the impact from now on.
On the other hand, if we use the extension mechanism all plugins need to update
their extension list at the last moment :-(


My vote at this moment is still to use an extension, but an extension
layer can be a shim.
The idea is that all the implementation can stay as-is and we just add an
extension module
so that the new feature is visible thru the extension list.
It is not perfect but I think it is a good compromise regarding the first point.
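A shim extension of the kind suggested here could be as small as the sketch below. The class shape follows the convention of Neutron's Kilo-era extension descriptors (get_name/get_alias/get_description/get_updated); the alias, description, and timestamp strings are illustrative, not the actual subnet pool patch.

```python
# Minimal "shim" extension sketch: it adds no resources or attributes,
# it only makes the feature discoverable via GET /v2.0/extensions.
# Method names mirror Neutron's extension descriptor interface; the
# string values here are illustrative placeholders.

class Subnet_allocation(object):

    @classmethod
    def get_name(cls):
        return "Subnet Allocation"

    @classmethod
    def get_alias(cls):
        return "subnet_allocation"

    @classmethod
    def get_description(cls):
        return "Enable allocation of subnets from a subnet pool"

    @classmethod
    def get_updated(cls):
        return "2015-03-30T10:00:00-00:00"

    def get_extended_resources(self, version):
        # Shim: the implementation stays in the core plugin as-is,
        # so there is nothing extra to expose here.
        return {}
```

Clients such as Horizon could then check for the alias in the extension list instead of probing the API to discover whether the feature is available.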


I know there was a suggestion to change this into the core API in the
spec review
and I didn't notice it at that time, but I would like to raise this
before releasing it.

For the longer term (and the Liberty cycle), we need to define a clearer guideline
on Core vs extension vs micro-versioning in spec reviews.

Thanks,
Akihiro



[openstack-dev] [Manila] FFE Request: glusterfs_native: negotiate volumes with glusterd

2015-03-30 Thread Csaba Henk
Hi,

I'm applying for an FFE for change

https://review.openstack.org/162542 ,
glusterfs_native: negotiate volumes with glusterd.

The change in question is in the grey zone between
a bugfix and a feature. So, having it discussed with
the Manila community and our PTL, Ben Swartzlander,
we decided to resolve the ambiguity by putting it
forward as an FFE.

While there is no explicit errant behavior with the
current glusterfs_native driver code that would be
addressed by this change, the situation is that the
current version of the driver is conceptually buggy
-- it does not meet the consensus expectations one
has of a driver. (It would be an overstatement
to call the current create_share implementation a stub,
but the issue is something similar.)

One aspect of the limitation of create_share is
captured by this bug:

https://bugs.launchpad.net/manila/+bug/1437176 ,
glusterfs_native: Unable to create shares using newly
available GlusterFS volumes without restarting manila
share service

We are submitting the change as a fix for this.

Impact: the code adds a new, more general mechanism for
picking GlusterFS volumes for backing shares. The new code
is basically contained in one function, _pop_gluster_vol();
the rest of the diff is refactor to adjust the driver to
the new internal API. The impact is isolated and limited
to the glusterfs_native driver. The operation logic of the
driver is not affected beyond the backing resource allocation
in create_share. Unit test coverage is good.

Csaba Henk
Red Hat, Inc.



[openstack-dev] [mistral] Team meeting minutes - 03/30/2015

2015-03-30 Thread Renat Akhmerov
Thanks for joining today’s team meeting!

Meeting minutes: 
http://eavesdrop.openstack.org/meetings/mistral/2015/mistral.2015-03-30-16.23.html
 
http://eavesdrop.openstack.org/meetings/mistral/2015/mistral.2015-03-30-16.23.html
Meeting log: 
http://eavesdrop.openstack.org/meetings/mistral/2015/mistral.2015-03-30-16.23.log.html
 
http://eavesdrop.openstack.org/meetings/mistral/2015/mistral.2015-03-30-16.23.log.html

The next meeting is scheduled for April 6 at 16.20 UTC (temporarily new time)

Renat Akhmerov
@ Mirantis Inc.





Re: [openstack-dev] Barbican : Usage of mode attribute in storing and order the secret

2015-03-30 Thread Asha Seshagiri
Any help would be appreciated!
Thanks in advance !

Thanks and Regards,
Asha Seshagiri

On Mon, Mar 30, 2015 at 12:45 PM, Asha Seshagiri asha.seshag...@gmail.com
wrote:

 Hi All ,

 What is the use of the "mode" attribute? What does the value of this
 attribute signify, and what are the possible values of this attribute?
 For ex :Consider the order request to create the secret :

 POST v1/orders

 Header: content-type=application/json
 X-Project-Id: {project_id}
 {
   "type": "key",
   "meta": {
     "name": "secretname",
     "algorithm": "AES",
     "bit_length": 256,
     "mode": "cbc",
     "payload_content_type": "application/octet-stream"
   }
 }


 What does the "mode" value "cbc" indicate?
 --
 *Thanks and Regards,*
 *Asha Seshagiri*




-- 
*Thanks and Regards,*
*Asha Seshagiri*


Re: [openstack-dev] Barbican : Usage of mode attribute in storing and order the secret

2015-03-30 Thread Douglas Mendizabal
Hi Asha,

Barbican Orders of type “key” are intended to generate keys suitable for 
encryption.  The metadata associated with the key order defines the encryption 
scheme in which the key will be used.  In the example you provided, the order 
is requesting a key that is suitable for use in a block cipher.  Specifically 
you’re requesting a key that will be used with the “AES” block cipher, so the 
“mode” describes the mode of operation to be used, which in this case is Cipher 
Block Chaining or “CBC”.

Acceptable values for “mode” are dependent on the value of the “algorithm” 
attribute.  When requesting orders for keys to be used in AES encryption, the 
values for “mode” correspond to the other possible modes of operation for AES, 
such as “ECB”, “CTR”, etc.
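As a rough sketch of how the mode constrains an order, the helper below builds the key-order body from the thread and rejects a mode that is not a recognized AES mode of operation. The helper and the allowed-mode set are illustrative, not Barbican code; Barbican itself validates the algorithm/mode combination server side.

```python
import json

# Illustrative helper (not part of Barbican): build a key-order body
# like the one in the thread. ALLOWED_AES_MODES is an example set of
# AES modes of operation, not Barbican's actual validation list.
ALLOWED_AES_MODES = {"cbc", "ctr", "ecb", "gcm"}

def make_key_order(name, algorithm="AES", bit_length=256, mode="cbc"):
    if algorithm.upper() == "AES" and mode.lower() not in ALLOWED_AES_MODES:
        raise ValueError("%r is not a recognized AES mode of operation" % mode)
    return json.dumps({
        "type": "key",
        "meta": {
            "name": name,
            "algorithm": algorithm,
            "bit_length": bit_length,
            "mode": mode,
            "payload_content_type": "application/octet-stream",
        },
    })

body = make_key_order("secretname")  # the example order from the thread
assert json.loads(body)["meta"]["mode"] == "cbc"
```

The resulting JSON string is what would be POSTed to v1/orders with the headers shown earlier in the thread.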

-Doug


Douglas Mendizábal
IRC: redrobot
PGP Key: 245C 7B6F 70E9 D8F3 F5D5  0CC9 AD14 1F30 2D58 923C

On Mar 30, 2015, at 12:46 PM, Asha Seshagiri 
asha.seshag...@gmail.com wrote:

Any help would be appreciated!
Thanks in advance !

Thanks and Regards,
Asha Seshagiri

On Mon, Mar 30, 2015 at 12:45 PM, Asha Seshagiri 
asha.seshag...@gmail.com wrote:
Hi All ,

What is the use of the "mode" attribute? What does the value of this attribute 
signify, and what are the possible values of this attribute?
For ex :Consider the order request to create the secret :


POST v1/orders

Header: content-type=application/json
X-Project-Id: {project_id}
{
  "type": "key",
  "meta": {
    "name": "secretname",
    "algorithm": "AES",
    "bit_length": 256,
    "mode": "cbc",
    "payload_content_type": "application/octet-stream"
  }
}

What does the "mode" value "cbc" indicate?
--
Thanks and Regards,
Asha Seshagiri



--
Thanks and Regards,
Asha Seshagiri



Re: [openstack-dev] [Manila] FFE Request: glusterfs_native: negotiate volumes with glusterd

2015-03-30 Thread Ben Swartzlander

On 03/30/2015 12:21 PM, Csaba Henk wrote:

Hi,

I'm applying for an FFE for change

https://review.openstack.org/162542 ,
glusterfs_native: negotiate volumes with glusterd.

The change in question is in the grey zone between
a bugfix and a feature. So, having it discussed with
the Manila community and our PTL, Ben Swartzlander,
we decided to resolve the ambiguity by putting it
forward as an FFE.

While there is no explicit errant behavior with the
current glusterfs_native driver code that would be
addressed by this change, the situation is that the
current version of the driver is conceptually buggy
-- it does not meet the consensus expectations one
has of a driver. (It would be an overstatement
to call the current create_share implementation a stub,
but the issue is something similar.)

One aspect of the limitation of create_share is
captured by this bug:

https://bugs.launchpad.net/manila/+bug/1437176 ,
glusterfs_native: Unable to create shares using newly
available GlusterFS volumes without restarting manila
share service

We are submitting the change as a fix for this.

Impact: the code adds a new, more general mechanism for
picking GlusterFS volumes for backing shares. The new code
is basically contained in one function, _pop_gluster_vol();
the rest of the diff is refactor to adjust the driver to
the new internal API. The impact is isolated and limited
to the glusterfs_native driver. The operation logic of the
driver is not affected beyond the backing resource allocation
in create_share. Unit test coverage is good.

Csaba Henk
Red Hat, Inc.


Thanks for going through the formal request process with this change.

One question I have that's not answered here is: what is the risk of 
delaying this fix to Liberty? Clearly it needs to be fixed eventually, 
but if we hold off and allow Kilo to ship as-is, will anything bad 
happen? From the description above it sounds like the driver is 
functional, and a somewhat awkward workaround (restarting the backend) 
is required to deal with bug 1437176.


Will users be subjected to any upgrade problems going from Kilo to 
Liberty if we don't fix this in Kilo? Will there be any significant 
maintenance problems in the Kilo code if we don't change it?






[openstack-dev] 6.1 Soft Code Freeze moved to April 7th

2015-03-30 Thread Eugene Bogdanov

Hello everyone,

We currently have 500+ medium priority bugs assigned to the 6.1 release 
(http://lp-reports.vm.mirantis.net/custom_report/6.1?status=New&status=Incomplete&status=Confirmed&status=Triaged&status=In%20Progress&importance=Medium). 
Obviously we 
won't be able to make a big difference within the next 24 hours, so 
let's move the Soft Code Freeze date (last date when medium bug fixes 
are accepted) [1] to April 7th. This will give us some time to apply 
important bug fixes and complete triaging as appropriate. The shift is 
only about Soft Code Freeze (no changes for Hard Code Freeze / GA 
dates). I have updated the release schedule [2] accordingly.


Thank you.

[1] Soft Code Freeze definition: 
https://wiki.openstack.org/wiki/Fuel/Soft_Code_Freeze
[2] Release schedule: 
https://wiki.openstack.org/wiki/Fuel/6.1_Release_Schedule


--
EugeneB




[openstack-dev] [Fuel] Re: 6.1 Soft Code Freeze moved to April 7th

2015-03-30 Thread Christopher Aedo
Note: Modifying the subject line to add the tag Fuel as this message
has gone out to the OpenStack Dev mailing list.

On Mon, Mar 30, 2015 at 10:50 AM, Eugene Bogdanov
ebogda...@mirantis.com wrote:
 Hello everyone,

 We currently have 500+ medium priority bugs assigned to 6.1 release.
 Obviously we won't be able to make a big difference within the next 24
 hours, so let's move the Soft Code Freeze date (last date when medium bug
 fixes are accepted) [1] to April 7th. This will give us some time to apply
 important bug fixes and complete triaging as appropriate. The shift is only
 about Soft Code Freeze (no changes for Hard Code Freeze / GA dates). I have
 updated the release schedule [2] accordingly.

 Thank you.

 [1] Soft Code Freeze definition:
 https://wiki.openstack.org/wiki/Fuel/Soft_Code_Freeze
 [2] Release schedule:
 https://wiki.openstack.org/wiki/Fuel/6.1_Release_Schedule

 --
 EugeneB





Re: [openstack-dev] [Nova][Neutron] Status of the nova-network to Neutron migration work

2015-03-30 Thread Russell Bryant
On 03/30/2015 10:34 AM, Anita Kuno wrote:
 On 03/26/2015 08:58 PM, Russell Bryant wrote:
 To me it comes down to the reasons people don't want to move.  I'd like
 to dig into exactly why people don't want to use Neutron.  If there are
 legitimate reasons why nova-network will work better, then Neutron has
 not met parity and we're certainly not ready to deprecate nova-network.

 I still think getting down to a single networking project should be the
 end goal.  The confusion around networking choices has been detrimental
 to OpenStack.
 I heartily agree.
 
 Here is my problem. I am getting the feeling from the big tent
 discussions (now this could be my fault since I don't know as it is in
 the proposal or just the stuff people are making up about it) that we
 are allowing more than one networking project in OpenStack. I have been
 disappointed with that impression but that has been the impression I
 have gotten.
 
 I'm glad to hear you have a different perspective on this, Russell, and
 would just like to clarify this point.
 
 Are we saying that OpenStack has one networking option?

I wouldn't say that exactly.  We clearly have two today.  :-)

I don't think anyone intended to have two for as long as we have, and I
feel that has been detrimental to the OpenStack mission.  I'm very
thankful for the ongoing efforts to rectify that situation.

My general feeling about overlap in OpenStack is that it's more costly
the lower we go in the stack.  If we think about the base compute set
of projects (like Nova, Glance, Neutron, Keystone, Cinder), I feel we
should resist overlap there more strongly than we might at the higher
layers.

I think lacking consensus around a networking direction is harmful to
our mission.  I will not say a new networking API should never happen,
but the bar should be high.

In fact, this very debate is happening right now on whether or not the
group based policy project should be accepted as an OpenStack project:

https://review.openstack.org/#/c/161902/

-- 
Russell Bryant



Re: [openstack-dev] [Nova][Neutron] Status of the nova-network to Neutron migration work

2015-03-30 Thread Steve Wormley
On Mon, Mar 30, 2015 at 4:49 AM, Jesse Pretorius jesse.pretor...@gmail.com
wrote:

 On 28 March 2015 at 00:41, Steve Wormley openst...@wormley.com wrote:

 2. Floating IPs managed at each compute node(multi-host) and via the
 standard nova API calls.



 2 meant we can't use pure provider VLAN networks so we had to wait for DVR
 VLAN support to work.


 I'm always confused when I see operators mention that provider VLANs can't
 be used in a Neutron configuration. While at my former employer we had that
 setup with Grizzly, and also note that any instance attached to a VLAN
 tagged tenant network did not go via the L3 agent... the traffic was tagged
 and sent directly from the compute node onto the VLAN.

 All we had to do to make this work was to allow VLAN tagged networks and
 the cloud admin had to setup the provider network with the appropriate VLAN
 tag.

As you say, provider networks and VLANs work fine. Provider networks, VLANs
and Openstack managed Floating IP addresses for the same instances do not.

-Steve wormley


Re: [openstack-dev] [Nova][Neutron] Status of the nova-network to Neutron migration work

2015-03-30 Thread Anita Kuno
On 03/30/2015 12:35 PM, Russell Bryant wrote:
 On 03/30/2015 10:34 AM, Anita Kuno wrote:
 On 03/26/2015 08:58 PM, Russell Bryant wrote:
 To me it comes down to the reasons people don't want to move.  I'd like
 to dig into exactly why people don't want to use Neutron.  If there are
 legitimate reasons why nova-network will work better, then Neutron has
 not met parity and we're certainly not ready to deprecate nova-network.

 I still think getting down to a single networking project should be the
 end goal.  The confusion around networking choices has been detrimental
 to OpenStack.
 I heartily agree.

 Here is my problem. I am getting the feeling from the big tent
 discussions (now this could be my fault since I don't know as it is in
 the proposal or just the stuff people are making up about it) that we
 are allowing more than one networking project in OpenStack. I have been
 disappointed with that impression but that has been the impression I
 have gotten.

 I'm glad to hear you have a different perspective on this, Russell, and
 would just like to clarify this point.

 Are we saying that OpenStack has one networking option?
 
 I wouldn't say that exactly.  We clearly have two today.  :-)
 
 I don't think anyone intended to have two for as long as we have, and I
 feel that has been detrimental to the OpenStack mission.  I'm very
 thankful for the ongoing efforts to rectify that situation.
 
 My general feeling about overlap in OpenStack is that it's more costly
 the lower we go in the stack.  If we think about the base compute set
 of projects (like Nova, Glance, Neutron, Keystone, Cinder), I feel we
 should resist overlap there more strongly than we might at the higher
 layers.
 
 I think lacking consensus around a networking direction is harmful to
 our mission.  I will not say a new networking API should never happen,
 but the bar should be high.
 
 In fact, this very debate is happening right now on whether or not the
 group based policy project should be accepted as an OpenStack project:
 
 https://review.openstack.org/#/c/161902/
 
Thank you, Russell. I agree with you and I am grateful that you took the
time to spell it out for the mailing list.

Lack of clarity hurts our users, every decision we make should keep our
users best interests in mind going forward, as you outline in your reply.

Thanks Russell,
Anita.



Re: [openstack-dev] [Nova][Neutron] Status of the nova-network to Neutron migration work

2015-03-30 Thread Steve Wormley
On Sun, Mar 29, 2015 at 6:45 AM, Kevin Benton blak...@gmail.com wrote:

 Does the decision about the floating IP have to be based on the use of the
 private IP in the original destination, or could you get by with rules on
 the L3 agent to avoid NAT just based on the destination being in a
 configured set of CIDRs?

 If you could get by with the latter it would be a much simpler problem to
 solve. However, I suspect you will want the former to be able to connect to
 floating IPs internally as well.

That's one issue: having systems like monitoring access both addresses.
The other is that, like many other large organizations, we have a fairly
large number of disjoint address spaces across all the groups accessing
our cloud. So trying to create and maintain that sort of list, short of a
routing protocol feed, is not easy.

-Steve Wormley
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Find your discount code for Vancouver and register *now*

2015-03-30 Thread Stefano Maffulli
Folks,

If you have received your discount code for the OpenStack Summit Vancouver,
stop whatever you're doing and register *now*.


Admission prices go up to $900 tomorrow and you'll have to pay the
difference. *There will be absolutely no exceptions*.

If you're in doubt whether you should have received an invite, check

https://ask.openstack.org/en/question/45531/atc-pass-for-the-openstack-summit/

and if you think you qualify for an invite, reply to
communitym...@openstack.org and provide a URL to your merged
contributions. No links to bugs or blueprints: *only merged contributions
matter*, with a URL to https://review.openstack.org

thanks,
stef


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Telco][NFV] Update to weekly meeting time commencing this week.

2015-03-30 Thread Steve Gordon
Hi all,

Since the Paris summit the Telco Working Group has been meeting on an 
alternating basis at 1400 UTC and 2200 UTC. As discussed in the last two 
meetings due to low attendance in the 2200 UTC slot, particularly as DST has 
now kicked in for many participants, we are going to trial a slightly earlier 
slot - 1900 UTC - starting this week. Hopefully this will help out those who 
try to attend both while also still catering to those who can't make the 
earlier meeting.

As a result the upcoming schedule is:

* Wednesday 1st April 2015  1900 UTC#openstack-meeting-alt
* Wednesday 8th April 2015  1400 UTC#openstack-meeting-alt
* Wednesday 15th April 2015 1900 UTC#openstack-meeting-alt

I have updated https://wiki.openstack.org/wiki/Meetings and 
https://wiki.openstack.org/wiki/TelcoWorkingGroup#Upcoming_Meetings with these 
details. Feel free to reach out either here or in #openstack-nfv if there are 
any concerns/questions.

Thanks,

Steve

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Nominate Irina Povolotskaya for fuel-docs core

2015-03-30 Thread Dmitry Borodaenko
I think it's safe to conclude that we have a strong consensus in
favor. Congratulations Irina!

All that's left is to merge https://review.openstack.org/168182 so
that we can actually assign core reviewers separately for fuel-docs.

On Sun, Mar 29, 2015 at 11:13 PM, Nikolay Markov nmar...@mirantis.com wrote:
 +1

 On 29 Mar 2015 at 20:42, Sergey Vasilenko
 svasile...@mirantis.com wrote:

 +1


 /sv

 On Fri, Mar 27, 2015 at 5:31 PM, Anastasia Urlapova
 aurlap...@mirantis.com wrote:

 + 10

 On Fri, Mar 27, 2015 at 4:28 AM, Igor Zinovik izino...@mirantis.com
 wrote:

 +1

 On 26 March 2015 at 19:26, Fabrizio Soppelsa fsoppe...@mirantis.com
 wrote:
  +1 definitely
 
 
  On 03/25/2015 10:10 PM, Dmitry Borodaenko wrote:
 
  Fuelers,
 
  I'd like to nominate Irina Povolotskaya for the fuel-docs-core team.
  She has contributed thousands of lines of documentation to Fuel over
  the past several months, and has been a diligent reviewer:
 
 
 
   http://stackalytics.com/?user_id=ipovolotskaya&release=all&project_type=all&module=fuel-docs
 
  I believe it's time to grant her core reviewer rights in the
  fuel-docs
  repository.
 
  Core reviewer approval process definition:
  https://wiki.openstack.org/wiki/Governance/Approved/CoreDevProcess
 
 
 
 
  __
  OpenStack Development Mailing List (not for usage questions)
  Unsubscribe:
  openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 --
 Igor Zinovik
 Deployment Engineer at Mirantis, Inc
 izino...@mirantis.com


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Dmitry Borodaenko

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [ceilometer] upgrades from juno to kilo

2015-03-30 Thread William M Edmonds


I tracked down the cause of the check-grenade-dsvm failure on
https://review.openstack.org/#/c/167370 . As I understand it, grenade is
taking the previous stable release, deploying it, then upgrading to the
current master (plus the proposed changeset) without changing any of the
config from the stable deployment. Thus the policy.json file used in that
test is the file from stable/juno. Looking at oslo_policy/policy.py, we see
that if the rule being looked up is missing, the default rule is used; if
that default rule is also missing, a KeyError is thrown. Since the default
rule was missing from ceilometer's policy.json file in Juno, that's what
would happen here. I assume that KeyError then gets turned into the 403
Forbidden that is causing the check-grenade-dsvm failure.
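The lookup-and-fallback behavior described above can be sketched as follows (a simplified model for illustration only, not the actual oslo_policy/policy.py code):

```python
# Simplified model of the oslo.policy rule lookup described above
# (illustration only, not the actual oslo_policy/policy.py code).

def check_rule(rules, rule_name):
    """Look up a rule; fall back to the 'default' rule if it is missing."""
    if rule_name in rules:
        return rules[rule_name]
    # Missing rule: fall back to the default rule. If "default" is also
    # absent, this raises KeyError -- the failure mode described above
    # for the grenade run using Juno's ceilometer policy.json.
    return rules["default"]

# Hypothetical Juno-era policy: no "default" rule and no "segregation" rule.
juno_policy = {"context_is_admin": "role:admin"}

try:
    check_rule(juno_policy, "segregation")
except KeyError:
    print("KeyError: no 'segregation' rule and no 'default' fallback")
```

With a "default" entry present in policy.json, the same lookup succeeds instead of raising, which is why the manual policy.json update on upgrade matters.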

I suspect the author of the already-merged
https://review.openstack.org/#/c/115717 did what they did in
ceilometer/api/rbac.py rather than what is proposed in
https://review.openstack.org/#/c/167370 just to get the grenade tests to
pass. I think they got lucky (unlucky for us), too, because I think they
actually did break what the grenade tests are meant to catch. The patch set
which was merged under https://review.openstack.org/#/c/115717 changed the
rule that is checked in get_limited_to() from context_is_admin to
segregation. But the segregation rule didn't exist in the Juno version
of ceilometer's policy.json, so if a method that calls get_limited_to() was
tested after an upgrade, I believe it would fail with a 403 Forbidden
tracing back to a KeyError looking for the segregation rule... very
similar to what we're seeing in https://review.openstack.org/#/c/167370

Am I on the right track here? How should we handle this? Is there a way to
maintain backward compatibility while fixing what is currently broken (as a
result of https://review.openstack.org/#/c/115717 ) and allowing for a fix
for https://bugs.launchpad.net/ceilometer/+bug/1435855 (the goal of
https://review.openstack.org/#/c/167370)? Or will we need to document in
the release notes that the manual step of modifying ceilometer's
policy.json is required when upgrading from Juno, and then correspondingly
modify grenade's upgrade_ceilometer file?


W. Matthew Edmonds
IBM Systems & Technology Group
Email: edmon...@us.ibm.com
Phone: (919) 543-7538 / Tie-Line: 441-7538
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Core API vs extension: the subnet pool feature

2015-03-30 Thread Salvatore Orlando
Akihiro,
thanks for sharing this on the mailing list.

I have some answers inline, but from a community process perspective I have
a feeling that a majority of contributors feel like well established
guidelines have been violated. This has been exacerbated by the fact that
these changes are landing at the end of the release cycle.
If there is a consensus that no change in our API evolution strategy should
occur in Kilo - because no change in this strategy has been agreed upon -
then there is little to discuss: any new addition should be an extension.

I reckon the shortcomings of this approach have been communicated enough so
far, but if we are in a stall condition regarding how to evolve the API
then we should resort to the only mechanism that we've used in the past:
extensions. This also means - in my opinion - that we should provisionally
revert or hide all core API changes until they're reproposed as extensions.
But your proposal about using the extension mechanism to mark that the
feature is enabled for the sake of the API client makes sense and is worth
exploring too.

Salvatore

On 30 March 2015 at 19:35, Akihiro Motoki amot...@gmail.com wrote:

 Hi Neutron folks
 (API folks may be interested on this)

 We have another discussion on Core vs extension in the subnet pool
 feature review
 https://review.openstack.org/#/c/157597/.
 We had a similar discussion on VLAN transparency and MTU for a
 network model last week.
 I would like to share my concerns on changing the core API directly.
 I hope this helps us make the discussion productive.
 Note that I don't want to discuss the micro-versioning because it
 mainly focuses on Kilo FFE BP.

 I would like to discuss this topic in today's neutron meeting,
 but I am not so confident I can get up in time, so I would like to send this
 mail.


 The extension mechanism in Neutron provides two points for extensibility:
 - (a) visibility of features in API (users can know which features are
 available through the API)
 - (b) opt-in mechanism in plugins (plugin maintainers can decide to
 support some feature after checking the detail)

 My concerns mainly come from the first point (a).
 If we have no way to detect it, users (including Horizon) need to do dirty
 workarounds to determine whether some feature is available. I believe this
 is an important point of the API.
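From a client's perspective, the detection described in point (a) is a single lookup against the advertised extension aliases; a rough sketch follows (the response body here is made-up sample data, and "subnet_allocation" is the assumed alias — a real client would issue GET /v2.0/extensions, e.g. via python-neutronclient):

```python
# Sketch of client-side feature detection via the Neutron extension list.
# The response body below is hypothetical sample data; a real client such
# as Horizon would fetch it with GET /v2.0/extensions.

def supported_aliases(extensions_response):
    """Extract the set of advertised extension aliases."""
    return {ext["alias"] for ext in extensions_response["extensions"]}

sample_response = {
    "extensions": [
        {"name": "Neutron L3 Router", "alias": "router"},
        {"name": "Subnet Allocation", "alias": "subnet_allocation"},
    ]
}

aliases = supported_aliases(sample_response)
if "subnet_allocation" in aliases:
    # Safe to expose the subnet-pool feature in the UI.
    print("subnet pools available")
```

Without such an advertised alias, the only alternative is a trial request and error handling, which is exactly the dirty workaround being objected to.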


This is true regarding VLAN transparency and MTU.
For the latter, it is clearly something that might be not supported. In the
absence of a better mechanism to detect enabled features, I agree it must
be an extension.
For VLAN transparency, the authors claim (from what I understand) that it's
always ok to set it. For deployments with ML2 an exception will be thrown
if no driver is available for implementing a vlan transparent network.
Plugins that do not support it should just ignore the setting. I don't know
how Horizon (or any other client for that matter) will ever realize that
the settings had no effect.

The subnetpool support instead has been implemented in the IPAM logic
contained in db_base_plugin_v2. This is why I supported its addition to the
core API. Basically, since it does not require specific plugin support, it
should always be available, and implemented by all plugins satisfying both
your criteria.
However, there is always an exception represented by plugins which either
override the base class or do not use it at all. When I reviewed the subnet
pool spec there was no plugin in the first category (at least no known
plugin), while I believe only a single plugin in the latter. My thought was
that I would not worry about plugins not included in the repository, but
now that most are no longer in openstack/neutron this probably does not
apply.
I still believe that it is ok to assume subnetpools are part of the core
API, but, as stated earlier, if we feel like we are unable to agree
anything about how evolve the API, then the only alternative is to keep
doing things as we've done today - only by extensions.




 On the second point, my only concern (not so important) is that we are
 making the core
 API change at this moment of the release. Some plugins do not consume
 db_base_plugin and
 such plugins need to investigate the impact from now on.
 On the other hand, if we use the extension mechanism all plugins need to
 update
 their extension list in the last moment :-(


Indeed it is always challenging when API changes land at the last milestone
- and it's probably even harder to handle now that the plugins have been
moved out of the main repo. I think this has been a failure of the drivers
and core team and we should address it with the appropriate changes for the
next release cycle.




 My vote at this moment is still to use an extension, but an extension
 layer can be a shim.
  The idea is that all implementations can stay as-is and we just add an
  extension module so that the new feature is visible through the extension
  list.

It is not perfect but I think it is a good compromise regarding the first point.

Re: [openstack-dev] [Neutron] [ML2] using binding:tun_ip instead of agent_ip for l2pop to support agentless backend

2015-03-30 Thread Mathieu Rohon
hi henry,

thanks for this interesting idea. It would be interesting to think about
how external gateway could leverage the l2pop framework.

Currently l2pop sends its fdb messages once the status of the port is
modified. AFAIK, this status is only modified by agents, which send
update_device_up/down().
This issue also has to be addressed if we want agentless equipment to be
announced through l2pop.

Another way to do it is to introduce some BGP speakers with E-VPN
capabilities at the control plane of ML2 (as a MD for instance). Bagpipe
[1] is an open-source BGP speaker which is able to do that.
BGP is standardized, so equipment might already have it embedded.

last summit, we talked about this kind of idea [2]. We were going further
by introducing the bgp speaker on each compute node, in use case B of [2].

[1]https://github.com/Orange-OpenSource/bagpipe-bgp
[2]http://www.slideshare.net/ThomasMorin1/neutron-and-bgp-vpns-with-bagpipe

On Thu, Mar 26, 2015 at 7:21 AM, henry hly henry4...@gmail.com wrote:

 Hi ML2er,

 Today we use agent_ip in L2pop to store endpoints for ports on a
 tunnel type network, such as vxlan or gre. However this has some
 drawbacks:

 1) It can only work with backends that have agents;
 2) Only one fixed IP is supported per agent;
 3) It is difficult to interact with other backends and the world outside
 of OpenStack.

 L2pop is already widely accepted and deployed in host-based overlays,
 however because it uses agent_ip to populate tunnel endpoints, it's very
 hard to co-exist and inter-operate with other vxlan backends,
 especially agentless MDs.

 A small change is suggested: the tunnel endpoint should not be an
 attribute of the *agent*, but an attribute of the *port*, so if we store
 it in something like *binding:tun_ip*, it is much easier for different
 backends to co-exist. The existing OVS and bridge agents need a small
 patch to put the local agent_ip into the port context binding fields
 when doing the port_up RPC.

 Several extra benefits may also be obtained by this way:

 1) we can easily and naturally create an *external vxlan/gre port* which
 is not attached to a Nova-booted VM, with binding:tun_ip set at creation
 time;
 2) we can develop some *proxy agent* which manages a bunch of remote
 external backends, without the restriction of a single agent_ip.
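Henry's proposal boils down to moving the tunnel endpoint from a per-agent attribute to a per-port binding field. A toy sketch of the data-model difference follows (field and function names are illustrative, not actual ML2 code; "binding:tun_ip" is the name from the proposal):

```python
# Toy comparison of agent-keyed vs port-keyed tunnel endpoints for l2pop.
# Nothing here is real Neutron code; it only illustrates the proposal.

# Today: the endpoint comes from the agent hosting the port, so every
# port on an agent shares one tunnel IP, and agentless ports have none.
agents = {"agent-1": {"tunnel_ip": "10.0.0.11"}}
ports_today = {"port-a": {"host_agent": "agent-1"}}

def endpoint_today(port_id):
    agent = agents[ports_today[port_id]["host_agent"]]
    return agent["tunnel_ip"]

# Proposed: the endpoint is a binding attribute of the port itself, so an
# external/agentless port can carry its own tunnel IP.
ports_proposed = {
    "port-a": {"binding:tun_ip": "10.0.0.11"},     # set by the OVS agent
    "ext-vtep": {"binding:tun_ip": "192.0.2.50"},  # external VXLAN endpoint
}

def endpoint_proposed(port_id):
    return ports_proposed[port_id]["binding:tun_ip"]

print(endpoint_proposed("ext-vtep"))  # no agent needed for this endpoint
```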

 Best Regards,
 Henry

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Barbican : Usage of mode attribute in storing and order the secret

2015-03-30 Thread Asha Seshagiri
Thanks a lot, Douglas, for your response. It helped me.
What are the possible values of the algorithm attribute? Is AES the only
supported algorithm type for Barbican, or does it support other algorithm types?

Thanks and Regards,
Asha Seshagiri

On Mon, Mar 30, 2015 at 1:04 PM, Douglas Mendizabal 
douglas.mendiza...@rackspace.com wrote:

  Hi Asha,

  Barbican Orders of type “key” are intended to generate keys suitable for
 encryption.  The metadata associated with the key order defines the
 encryption scheme in which the key will be used.  In the example you
 provided, the order is requesting a key that is suitable for use in a block
 cipher.  Specifically you’re requesting a key that will be used with the
  “AES” block cipher, so the “mode” describes the mode of operation to be
  used, which in this case is Cipher Block Chaining, or “CBC”.

  Acceptable values for “mode” are dependent on the value of the
 “algorithm” attribute.  When requesting orders for keys to be used in AES
 encryption, the values for “mode” correspond to the other possible modes of
 operation for AES, such as “ECB”, “CTR”, etc.
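For readers unfamiliar with modes of operation, here is a toy sketch of the CBC chaining idea. The "block cipher" below is a plain XOR with the key — NOT real AES — and exists only to illustrate why the mode matters: chaining makes identical plaintext blocks encrypt differently, unlike ECB.

```python
# Toy illustration of Cipher Block Chaining (CBC). The "block cipher" is a
# single XOR with the key -- NOT real AES -- the point is only to show how
# CBC chains blocks so repeated plaintext blocks do not repeat in the
# ciphertext (which they would under ECB).

def xor_bytes(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def toy_cbc_encrypt(key, iv, blocks):
    out, prev = [], iv
    for block in blocks:
        # CBC: XOR the plaintext block with the previous ciphertext block
        # (or the IV for the first block), then apply the block cipher.
        ct = xor_bytes(xor_bytes(block, prev), key)
        out.append(ct)
        prev = ct
    return out

key = b"\x0f" * 4
iv = b"\xaa" * 4
blocks = [b"SAME", b"SAME"]  # identical plaintext blocks

c1, c2 = toy_cbc_encrypt(key, iv, blocks)
print(c1 != c2)  # True: chaining hides the repetition
```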

  -Doug

 
 Douglas Mendizábal
 IRC: redrobot
 PGP Key: 245C 7B6F 70E9 D8F3 F5D5  0CC9 AD14 1F30 2D58 923C

  On Mar 30, 2015, at 12:46 PM, Asha Seshagiri asha.seshag...@gmail.com
 wrote:

 Any help would be appreciated.
 Thanks in advance!

  Thanks and Regards,
 Asha Seshagiri

 On Mon, Mar 30, 2015 at 12:45 PM, Asha Seshagiri asha.seshag...@gmail.com
  wrote:

 Hi All ,

  What is the use of the mode attribute ? what does the value of this
 attribute signify and what are the possible values of this attribute?
 For ex :Consider the order request to create the secret :

  POST v1/orders

 Header: content-type=application/json
 X-Project-Id: {project_id}
  {
    "type": "key",
    "meta": {
      "name": "secretname",
      "algorithm": "AES",
      "bit_length": 256,
      "mode": "cbc",
      "payload_content_type": "application/octet-stream"
    }
  }


  What does the mode value "cbc" indicate?
 --
  *Thanks and Regards,*
 *Asha Seshagiri*




  --
  *Thanks and Regards,*
 *Asha Seshagiri*





-- 
*Thanks and Regards,*
*Asha Seshagiri*
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Core API vs extension: the subnet pool feature

2015-03-30 Thread Carl Baldwin
Akihiro,

If we go with the empty extension you proposed in the patch, will that be
acceptable?

We've got to stop killing new functionality on the very last day like this.
It just kills progress.  This proposal isn't new.

Carl
On Mar 30, 2015 11:37 AM, Akihiro Motoki amot...@gmail.com wrote:

 Hi Neutron folks
 (API folks may be interested on this)

 We have another discussion on Core vs extension in the subnet pool
 feature review
 https://review.openstack.org/#/c/157597/.
 We had a similar discussion on VLAN transparency and MTU for a
 network model last week.
 I would like to share my concerns on changing the core API directly.
 I hope this helps us make the discussion productive.
 Note that I don't want to discuss the micro-versioning because it
 mainly focuses on Kilo FFE BP.

 I would like to discuss this topic in today's neutron meeting,
 but I am not so confident I can get up in time, so I would like to send this
 mail.


 The extension mechanism in Neutron provides two points for extensibility:
 - (a) visibility of features in API (users can know which features are
 available through the API)
 - (b) opt-in mechanism in plugins (plugin maintainers can decide to
 support some feature after checking the detail)

 My concerns mainly come from the first point (a).
 If we have no way to detect it, users (including Horizon) need to do dirty
 workarounds to determine whether some feature is available. I believe this
 is an important point of the API.

 On the second point, my only concern (not so important) is that we are
 making the core
 API change at this moment of the release. Some plugins do not consume
 db_base_plugin and
 such plugins need to investigate the impact from now on.
 On the other hand, if we use the extension mechanism all plugins need to
 update
 their extension list in the last moment :-(


 My vote at this moment is still to use an extension, but an extension
 layer can be a shim.
 The idea is that all implementations can stay as-is and we just add an
 extension module so that the new feature is visible through the extension
 list.
 It is not perfect but I think it is a good compromise regarding the first
 point.


 I know there was a suggestion to change this into the core API in the
 spec review
 and I didn't notice it at that time, but I would like to raise this
 before releasing it.

 For the longer term (and the Liberty cycle), we need to define clearer
 guidelines on Core vs extension vs micro-versioning in spec reviews.

 Thanks,
 Akihiro

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [puppet] Coordinator(s)/PTL tasks description

2015-03-30 Thread Emilien Macchi


On 03/26/2015 01:08 PM, Sebastien Badia wrote:
 Hi,
 
  Following our Tuesday meeting, we decided to create a sort of PTL /
  Coordinator task list in the spirit of
  https://wiki.openstack.org/wiki/PTL_Guide
 
  I list some points here, but feel free to discuss them; it's not (yet)
  written in stone.
 
 - Community (group) manager:
  The PTL keeps abreast of upcoming meetings (ops and others) where it would
  be interesting for our community to be represented. Maintain an iCal?
 
 - Meetings organisation
  We need a chair to ensure that the meeting is correctly orchestrated; the
  PTL also publishes a meeting agenda a minimum of 5 days before the meeting
  (see this example in openstack-tc¹). The PTL also publishes meeting notes
  on the ML + wiki (for archive / easy-search purposes).
 
 - Bug triage / management (bug squashing party ?)
  The subject was raised during the last meeting; maybe a good format would
  be something like a BSP (as we did in Debian), or a PR triage like the
  puppetlabs one². (This task need not necessarily be managed by the PTL.)
 
  - Maintain a list of active subjects and directions, like a backlog :)
  This is already managed by our Trello board, and gives a clear vision of
  where we are going.
 
  By stepping back, maybe we must elect a PTL to fit with OpenStack
  « standards » and act internally with something like a « scrum master »
  (changing every week)

What do you mean by 'changing every week'?

 firstly to distribute tasks, and secondly to involve everyone in the
 process.
 
 These points are just ideas, a kind of cornerstone to discuss.

I +1 all of them.

 
 Seb
 
 ¹http://lists.openstack.org/pipermail/openstack-tc/2015-March/000940.html
 ²https://github.com/puppet-community/community-triage/blob/master/core/notes/2015-03-25.md
 

-- 
Emilien Macchi



signature.asc
Description: OpenPGP digital signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack-operators][Openstack-dev][all] how to apply security/back-ported release to Icehouse production

2015-03-30 Thread Daniel Comnea
No thoughts?



On Sat, Mar 28, 2015 at 10:35 PM, Daniel Comnea comnea.d...@gmail.com
wrote:

 Hi all,

  Can anyone shed some light on how you upgrade / apply the
  security/back-ported patches?

  E.g., let's say I already have a production environment running Icehouse
  2014.1 as per the link [1] and I'd like to upgrade it to the latest
  Icehouse release, 2014.1.4.

  Also, do you have to go via a sequential process like

  2014.1 -> 2014.1.1 -> 2014.1.2 -> 2014.1.3 -> 2014.1.4, or can I jump from
  2014.1 to 2014.1.4?

  And the last question is: can I cherry-pick which bug fixes of a project
  to pull? Can I pull only one project, e.g. Heat, from the latest release
  2014.1.4?


 Thanks,
 Dani

 [1] https://wiki.openstack.org/wiki/Releases

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat] heat delete woes in Juno

2015-03-30 Thread Zane Bitter

On 26/03/15 15:30, Georgy Okrokvertskhov wrote:

I attached an example of the template which is hanging right now in my
Juno environment. I believe it hangs because of the floating IP and the
way it is attached to a VM.
It is autogenerated, so please don't be disturbed by strange resource names.


There were a bunch of bug fixes required around this, due to the Neutron 
API not being designed for orchestration:


https://bugs.launchpad.net/heat/+bug/1299259
https://bugs.launchpad.net/heat/+bug/1399699
https://bugs.launchpad.net/heat/+bug/1399702

All of those were eventually backported to stable/juno, so it may be 
worth checking that you have the latest stable release. If there are still 
issues then you may need to add dependencies to the template yourself - 
in particularly complex cases, Heat just can't get enough information to 
do the Right Thing.
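For reference, an explicit dependency in a HOT template looks roughly like this (resource names and properties here are illustrative, not taken from Georgy's template):

```yaml
# Hypothetical HOT fragment: tell Heat that the floating IP association
# must be created after (and deleted before) the router interface, since
# Heat cannot always infer this from the Neutron resources alone.
resources:
  router_iface:
    type: OS::Neutron::RouterInterface
    properties:
      router_id: { get_resource: router }
      subnet_id: { get_resource: subnet }

  fip_assoc:
    type: OS::Neutron::FloatingIPAssociation
    depends_on: router_iface
    properties:
      floatingip_id: { get_resource: fip }
      port_id: { get_param: port_id }
```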


cheers,
Zane.



Thanks
Gosha

On Thu, Mar 26, 2015 at 12:15 PM, Ala Rezmerita
ala.rezmer...@cloudwatt.com wrote:

Hi Matt

I had similar problems with heat, and the work-around that I used is
to abandon the stack (heat stack-abandon),
and then delete the created stack resources one by one.

Hope this helps.

Ala Rezmerita
Software Engineer || Cloudwatt
M: (+33) 06 77 43 23 91
Immeuble Etik
892 rue Yves Kermen
92100 Boulogne-Billancourt – France


*From: *Matt Fischer m...@mattfischer.com
*To: *openstack-dev@lists.openstack.org
*Sent: *Thursday, 26 March 2015 19:17:08
*Subject: *[openstack-dev] [heat] heat delete woes in Juno


Nobody on the operators list had any ideas on this, so re-posting here.

We've been having some issues with heat delete-stack in Juno. The
issues generally fall into three categories:

1) it takes multiple calls to heat to delete a stack. Presumably due
to heat being unable to figure out the ordering on deletion and
resources being in use.

2) undeleteable stacks. Stacks that refuse to delete, get stuck in
DELETE_FAILED state. In this case, they show up in stack-list and
stack-show, yet resource-list and stack-delete deny their existence.
This means I can't be sure whether they have any real resources very
easily.

3) As a corollary to item 1, stacks for which heat can never unwind
the dependencies and stay in DELETE_IN_PROGRESS forever.

Does anyone have any work-arounds for these or recommendations on
cleanup? My main worry is removing a stack from the database that is
still consuming the customer's resources. I also don't just want to
remove stacks from the database and leave orphaned records in the DB.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe:
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe:
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




--
Georgy Okrokvertskhov
Architect,
OpenStack Platform Products,
Mirantis
http://www.mirantis.com
Tel. +1 650 963 9828
Mob. +1 650 996 3284


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [barbican] python-barbicanclient 3.0.3 released

2015-03-30 Thread Douglas Mendizabal
The Barbican Project Team would like to announce the release of 
python-barbicanclient 3.0.3.

The release is available via PyPI

* https://pypi.python.org/pypi/python-barbicanclient

For detailed release notes, please visit the milestone page in Launchpad

* https://launchpad.net/python-barbicanclient/+milestone/3.0.3

Many thanks to all the contributors who made this release possible!

- Doug Mendizábal


Douglas Mendizábal
IRC: redrobot
PGP Key: 245C 7B6F 70E9 D8F3 F5D5  0CC9 AD14 1F30 2D58 923C



signature.asc
Description: Message signed with OpenPGP using GPGMail
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Core API vs extension: the subnet pool feature

2015-03-30 Thread Tidwell, Ryan
I will quickly spin another patch set with the shim extension.  Hopefully this 
will be all it takes to get subnet allocation merged.

-Ryan

-Original Message-
From: Akihiro Motoki [mailto:amot...@gmail.com] 
Sent: Monday, March 30, 2015 2:00 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Neutron] Core API vs extension: the subnet pool 
feature

Hi Carl,

I am now reading the details from Salvatore, but would like to respond to
this first.

I don't want to kill this useful feature either, and want to move things forward.
I am fine with the empty/shim extension approach.
The subnet pool is regarded as a part of the core API, so I think this extension
can always be enabled even if no plugin declares to use it.
Sorry for interrupting the work at the last stage, and thanks for understanding.
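The "empty/shim" extension being discussed is essentially an extension descriptor that adds no resources and exists only so the feature shows up in the extension list. A standalone sketch of that shape follows (modeled on Neutron's ExtensionDescriptor pattern, but written self-contained; a real shim would subclass neutron.api.extensions.ExtensionDescriptor, and the alias/name here are assumptions):

```python
# Self-contained sketch of a "shim" extension: it defines no new resources
# or attributes; it only advertises an alias so API clients can detect the
# feature. A real Neutron extension would subclass
# neutron.api.extensions.ExtensionDescriptor; this is illustrative only.

class SubnetAllocationShim(object):
    """Advertise subnet allocation (subnet pools) in the extension list."""

    def get_name(self):
        return "Subnet Allocation"

    def get_alias(self):
        return "subnet_allocation"

    def get_description(self):
        return "Enables allocation of subnets from a subnet pool"

    def get_updated(self):
        return "2015-03-30T10:00:00-00:00"

    def get_resources(self):
        # A shim adds nothing to the API surface.
        return []

ext = SubnetAllocationShim()
print(ext.get_alias())  # what GET /v2.0/extensions would advertise
```

Since the implementation lives in db_base_plugin_v2 anyway, the shim changes nothing at runtime; it only restores discoverability for clients.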

Akihiro

2015-03-31 5:28 GMT+09:00 Carl Baldwin c...@ecbaldwin.net:
 Akihiro,

 If we go with the empty extension you proposed in the patch, will that 
 be acceptable?

 We've got to stop killing new functionality on the very last day like this.
 It just kills progress.  This proposal isn't new.

 Carl

 On Mar 30, 2015 11:37 AM, Akihiro Motoki amot...@gmail.com wrote:

 Hi Neutron folks
 (API folks may be interested on this)

 We have another discussion on Core vs extension in the subnet pool 
 feature review https://review.openstack.org/#/c/157597/.
 We had a similar discussion on VLAN transparency and MTU for a 
 network model last week.
 I would like to share my concerns on changing the core API directly.
 I hope this helps us make the discussion productive.
 Note that I don't want to discuss the micro-versioning because it 
 mainly focuses on Kilo FFE BP.

 I would like to discuss this topic in today's neutron meeting, but I 
 am not so confident I can get up in time, so I would like to send this 
 mail.


 The extension mechanism in Neutron provides two points for extensibility:
 - (a) visibility of features in API (users can know which features 
 are available through the API)
 - (b) opt-in mechanism in plugins (plugin maintainers can decide to 
 support some feature after checking the detail)

 My concerns mainly come from the first point (a).
 If we have no way to detect a feature, users (including Horizon) need to
 resort to dirty workarounds to determine whether it is available. I
 believe this is an important aspect of the API.

 On the second point, my only concern (not so important) is that we
 are making a core API change at this point in the release. Some
 plugins do not consume db_base_plugin, and such plugins would need to
 investigate the impact from now on.
 On the other hand, if we use the extension mechanism, all plugins need
 to update their extension lists at the last moment :-(


 My vote at this moment is still to use an extension, but the extension
 layer can be a shim.
 The idea is that all implementations can stay as-is and we just add
 an extension module so that the new feature is visible through the
 extension list.
 It is not perfect, but I think it is a good compromise on the
 first point.
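As a rough illustration of the shim idea: an extension module that declares no new resources or attributes, so existing implementations stay as-is while the feature becomes discoverable via the extension list. The class name, alias, and stubbed base class below are illustrative, not the actual merged patch (real Neutron code would subclass neutron.api.extensions.ExtensionDescriptor):

```python
class ExtensionDescriptor(object):
    """Stub of Neutron's extension interface, for illustration only."""
    def get_resources(self):
        # A shim extension contributes no new API resources.
        return []


class Subnetallocation(ExtensionDescriptor):
    """Hypothetical shim making subnet allocation visible in /v2.0/extensions."""

    def get_name(self):
        return "Subnet Allocation"

    def get_alias(self):
        # The alias is what clients (e.g. Horizon) would probe for.
        return "subnet_allocation"

    def get_description(self):
        return "Enable allocation of subnets from a subnet pool"

    def get_updated(self):
        return "2015-03-30T10:00:00-00:00"


ext = Subnetallocation()
print(ext.get_alias())  # → subnet_allocation
```

The point is that the extension body is empty: clients gain a discovery mechanism without any plugin-facing behavior change.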


 I know there was a suggestion to change this into the core API in the 
 spec review and I didn't notice it at that time, but I would like to 
 raise this before releasing it.

 For the longer term (and the Liberty cycle), we need to define a clearer
 guideline on core vs. extension vs. micro-versioning in spec reviews.

 Thanks,
 Akihiro

 __________________________________________________________________________
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev






--
Akihiro Motoki amot...@gmail.com




Re: [openstack-dev] [OpenStack-Dev] [third-party-ci] Clarifications on the goal and skipping tests

2015-03-30 Thread John Griffith
On Mon, Mar 30, 2015 at 4:06 PM, Doug Wiegley doug...@parksidesoftware.com
wrote:

 A few reasons, I’m sure there are others:

 - Broken tests that hardcode something about the ref implementation. The
 test needs to be fixed, of course, but in the meantime, a constantly
 failing CI is worthless (hello, lbaas scenario test.)

Certainly... but that's relatively easy to fix (bug/patch to Tempest).
Although that's not actually the case in this particular context, as there
are a handful of third-party devices that run the full set of tests that
the ref driver runs with no additional skips or modifications.


 - Test relies on some “optional” feature, like overlapping IP subnets that
 the backend doesn’t support.  I’d argue it’s another case of broken tests
 if they require an optional feature, but it still needs skipping in the
 meantime.


This may be something specific to Neutron, perhaps?  In Cinder, LVM is
pretty much the lowest common denominator.  I'm not aware of any volume
tests in Tempest that rely on optional features that aren't picked up
automatically out of the config (like multi-backend, for example).


 - Some new feature added to an interface, in the presence of
 shims/decomposed drivers/plugins (e.g. adding TLS termination support to
 lbaas.) Those implementations will lag the feature commit, by definition.


Yeah, certainly I think this highlights some of the differences between
Cinder and Neutron, and the differences in complexity.
Thanks for the feedback... I don't disagree per se; however, Cinder is set
up a bit differently here in terms of expectations for base functionality
requirements and compatibility, but your points are definitely well taken.


 Thanks,
 doug


 On Mar 30, 2015, at 2:54 PM, John Griffith john.griffi...@gmail.com
 wrote:

 This may have already been raised/discussed, but I'm kinda confused so
 thought I'd ask on the ML here.  The whole point of third-party CI as I
 recall was to run the same tests that we run in the official Gate against
 third-party drivers.  To me that would imply that a CI system/device that
 marks itself as GOOD doesn't do things like add skips locally that aren't
 in the Tempest code already?

 In other words, it seems like cheating to say "My CI passes and all is good,
 except for the tests that don't work, which I skip... but pay no attention
 to those please."

 Did I miss something? Isn't the whole point of third-party CI to
 demonstrate that a third party's backend is tested and functions to the
 same degree that the reference implementations do? So the goal (using
 Cinder for example) was to be able to say that any API call that works on
 the LVM reference driver will work on the drivers listed in driverlog; and
 that we know this because they run the same Tempest API tests?

 Don't get me wrong, I'm certainly not saying there's malice or that things
 should be marked as "no good"... but if the practice is to skip what you
 can't do, then maybe that should be documented in the driverlog submission,
 as opposed to just stating "Yeah, we run CI successfully."

 Thanks,
 John







Re: [openstack-dev] [heat] Network name as a Server properties

2015-03-30 Thread Zane Bitter

On 30/03/15 03:51, BORTMAN, Limor (Limor) wrote:

Hi,
I noticed that we can't use a network name under OS::Neutron::Port (only
network_id) as a valid Neutron property, and I was wondering why.


IIRC it was something weird about how python-neutronclient worked at the 
time.



I expected it to be like image under OS::Nova::Server:
  The property name should be network, and it should accept both id and name


It is and it does as of the 2014.2 (Juno) release.

http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Neutron::Port-props
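For reference, the Juno behavior can be illustrated with a minimal HOT fragment; the network name `private` is an assumption here, not something from the thread:

```yaml
heat_template_version: 2014-10-16

resources:
  a_port:
    type: OS::Neutron::Port
    properties:
      # Since Juno, "network" accepts either a name or a UUID;
      # "network_id" remains available for backward compatibility.
      network: private   # assumed network name
```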

- ZB



Re: [openstack-dev] [Neutron] Core API vs extension: the subnet pool feature

2015-03-30 Thread Akihiro Motoki
Hi Carl,

I am now reading the details from Salvatore, but I would like to respond to
this first.

I don't want to kill this useful feature either; I want to move things forward.
I am fine with the empty/shim extension approach.
The subnet pool is regarded as part of the core API, so I think this
extension can always be enabled even if no plugin declares to use it.
Sorry for interrupting the work at the last stage, and thanks for your
understanding.

Akihiro

2015-03-31 5:28 GMT+09:00 Carl Baldwin c...@ecbaldwin.net:
 Akihiro,

 If we go with the empty extension you proposed in the patch, will that be
 acceptable?

 We've got to stop killing new functionality on the very last day like this.
 It just kills progress.  This proposal isn't new.

 Carl

 On Mar 30, 2015 11:37 AM, Akihiro Motoki amot...@gmail.com wrote:

 Hi Neutron folks
 (API folks may be interested in this)

 We have another discussion on core vs. extension in the subnet pool
 feature review
 https://review.openstack.org/#/c/157597/.
 We had a similar discussion on VLAN transparency and MTU for the
 network model last week.
 I would like to share my concerns on changing the core API directly.
 I hope this helps us make the discussion productive.
 Note that I don't want to discuss micro-versioning here, since this
 thread mainly focuses on the Kilo FFE BP.

 I would like to discuss this topic in today's neutron meeting,
 but I am not confident I can be up in time, so I am sending this mail.


 The extension mechanism in Neutron provides two points for extensibility:
 - (a) visibility of features in API (users can know which features are
 available through the API)
 - (b) opt-in mechanism in plugins (plugin maintainers can decide to
 support some feature after checking the detail)

 My concerns mainly come from the first point (a).
 If we have no way to detect a feature, users (including Horizon) need to
 resort to dirty workarounds
 to determine whether it is available. I believe this is an
 important aspect of the API.

 On the second point, my only concern (not so important) is that we are
 making a core API change at this point in the release. Some plugins do
 not consume db_base_plugin, and such plugins would need to investigate
 the impact from now on.
 On the other hand, if we use the extension mechanism, all plugins need
 to update their extension lists at the last moment :-(


 My vote at this moment is still to use an extension, but the extension
 layer can be a shim.
 The idea is that all implementations can stay as-is and we just add an
 extension module so that the new feature is visible through the
 extension list.
 It is not perfect, but I think it is a good compromise on the first
 point.


 I know there was a suggestion to change this into the core API in the
 spec review
 and I didn't notice it at that time, but I would like to raise this
 before releasing it.

 For the longer term (and the Liberty cycle), we need to define a clearer
 guideline on core vs. extension vs. micro-versioning in spec reviews.

 Thanks,
 Akihiro







-- 
Akihiro Motoki amot...@gmail.com



[openstack-dev] [OpenStack-Dev] [third-party-ci] Clarifications on the goal and skipping tests

2015-03-30 Thread John Griffith
This may have already been raised/discussed, but I'm kinda confused so
thought I'd ask on the ML here.  The whole point of third-party CI as I
recall was to run the same tests that we run in the official Gate against
third-party drivers.  To me that would imply that a CI system/device that
marks itself as GOOD doesn't do things like add skips locally that aren't
in the Tempest code already?

In other words, it seems like cheating to say "My CI passes and all is good,
except for the tests that don't work, which I skip... but pay no attention
to those please."

Did I miss something? Isn't the whole point of third-party CI to
demonstrate that a third party's backend is tested and functions to the
same degree that the reference implementations do? So the goal (using
Cinder for example) was to be able to say that any API call that works on
the LVM reference driver will work on the drivers listed in driverlog; and
that we know this because they run the same Tempest API tests?

Don't get me wrong, I'm certainly not saying there's malice or that things
should be marked as "no good"... but if the practice is to skip what you
can't do, then maybe that should be documented in the driverlog submission,
as opposed to just stating "Yeah, we run CI successfully."
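One concrete way to document skips along these lines would be a machine-readable manifest published with the driverlog entry. The format, driver name, test path, and bug URL below are entirely hypothetical — just a sketch of the idea that every local skip must carry a reason and a tracking bug:

```python
import json

# Hypothetical skip manifest a CI operator could publish alongside a
# driverlog entry; all names and URLs here are placeholders.
SKIP_MANIFEST = {
    "driver": "example-iscsi-driver",
    "skips": [
        {
            "test": "tempest.api.volume.test_volumes_snapshots",
            "reason": "backend lacks snapshot support",
            "bug": "https://bugs.launchpad.net/cinder/+bug/0000000",
        },
    ],
}


def undocumented_skips(manifest):
    """Return the tests whose skip entry lacks a reason or a tracking bug."""
    return [s["test"] for s in manifest["skips"]
            if not s.get("reason") or not s.get("bug")]


# A reviewable CI setup could reject manifests with undocumented skips:
print(json.dumps(undocumented_skips(SKIP_MANIFEST)))  # → []
```

The check is trivial, but it turns "we skip some tests" into an auditable statement of exactly which tests, and why.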

Thanks,
John


[openstack-dev] Barbican : Unable to reterieve asymmetric order request for rsa algorithm type

2015-03-30 Thread Asha Seshagiri
Hi All ,

I would like to know whether Barbican supports asymmetric order requests.
Please find below the curl commands and responses for creating the order and
retrieving the order:

root@barbican:~# curl -X POST -H 'content-type:application/json' -H
'X-Project-Id: 12345' -d '{"type": "asymmetric", "meta": {"name":
"secretnamepk2", "algorithm": "rsa", "bit_length": 256, "mode": "cbc",
"payload_content_type": "application/octet-stream"}}'
http://localhost:9311/v1/orders
{"order_ref":
"http://localhost:9311/v1/orders/f9870bb5-4ba3-4b19-9fe3-bb0c2a53557c"}
root@barbican:~#

root@barbican:~# curl -H 'Accept: application/json' -H 'X-Project-Id: 12345'
http://localhost:9311/v1/orders/f9870bb5-4ba3-4b19-9fe3-bb0c2a53557c
{"status": "ERROR", "updated": "2015-03-30T21:36:38.102832", "created":
"2015-03-30T21:36:38.083428", "order_ref":
"http://localhost:9311/v1/orders/f9870bb5-4ba3-4b19-9fe3-bb0c2a53557c",
"meta": {"name": "secretnamepk2", "algorithm": "rsa",
"payload_content_type": "application/octet-stream", "mode": "cbc",
"bit_length": 256, "expiration": null}, "error_status_code": 400,
"error_reason": "Process TypeOrder issue seen - No plugin was found that
could support your request.", "type": "asymmetric"}
root@barbican:~#
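One possible cause of the "no plugin" error — a guess, not confirmed Barbican behavior — is that the request mixes symmetric-key parameters into an RSA order: bit_length 256 and mode cbc are AES-style settings, while RSA keys are typically 1024/2048/4096 bits and have no cipher mode. A hypothetical client-side sanity check illustrating that reading:

```python
def check_asymmetric_order(order):
    """Hypothetical client-side checks before POSTing to Barbican's /v1/orders.

    The rules below are common RSA constraints, NOT Barbican's
    authoritative validation logic.
    """
    meta = order.get("meta", {})
    problems = []
    if order.get("type") != "asymmetric":
        problems.append("type must be 'asymmetric'")
    if meta.get("algorithm", "").lower() == "rsa":
        if meta.get("bit_length") not in (1024, 2048, 4096):
            problems.append("rsa bit_length is usually 1024/2048/4096 "
                            "(256 is a symmetric-key size)")
        if "mode" in meta:
            problems.append("'mode' (e.g. cbc) is a symmetric-cipher "
                            "parameter and does not apply to rsa")
    return problems


# The order from the mail above trips both RSA checks:
order = {"type": "asymmetric",
         "meta": {"name": "secretnamepk2", "algorithm": "rsa",
                  "bit_length": 256, "mode": "cbc",
                  "payload_content_type": "application/octet-stream"}}
print(len(check_asymmetric_order(order)))  # → 2
```

If that reading is right, retrying with bit_length 2048 and no mode would be a cheap experiment.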

Could anyone please help?
Thanks in advance.
-- 
*Thanks and Regards,*
*Asha Seshagiri*


Re: [openstack-dev] [OpenStack-Dev] [third-party-ci] Clarifications on the goal and skipping tests

2015-03-30 Thread Doug Wiegley
A few reasons, I’m sure there are others:

- Broken tests that hardcode something about the ref implementation. The test 
needs to be fixed, of course, but in the meantime, a constantly failing CI is 
worthless (hello, lbaas scenario test.)
- Test relies on some “optional” feature, like overlapping IP subnets that the 
backend doesn’t support.  I’d argue it’s another case of broken tests if they 
require an optional feature, but it still needs skipping in the meantime.
- Some new feature added to an interface, in the presence of shims/decomposed 
drivers/plugins (e.g. adding TLS termination support to lbaas.) Those 
implementations will lag the feature commit, by definition.

Thanks,
doug


 On Mar 30, 2015, at 2:54 PM, John Griffith john.griffi...@gmail.com wrote:
 
 This may have already been raised/discussed, but I'm kinda confused so
 thought I'd ask on the ML here.  The whole point of third-party CI as I
 recall was to run the same tests that we run in the official Gate against
 third-party drivers.  To me that would imply that a CI system/device that
 marks itself as GOOD doesn't do things like add skips locally that aren't
 in the Tempest code already?

 In other words, it seems like cheating to say "My CI passes and all is good,
 except for the tests that don't work, which I skip... but pay no attention to
 those please."
 
 Did I miss something? Isn't the whole point of third-party CI to demonstrate
 that a third party's backend is tested and functions to the same degree that
 the reference implementations do? So the goal (using Cinder for example) was
 to be able to say that any API call that works on the LVM reference driver
 will work on the drivers listed in driverlog; and that we know this because
 they run the same Tempest API tests?

 Don't get me wrong, I'm certainly not saying there's malice or that things
 should be marked as "no good"... but if the practice is to skip what you can't
 do, then maybe that should be documented in the driverlog submission, as
 opposed to just stating "Yeah, we run CI successfully."
 
 Thanks,
 John



[openstack-dev] [Fuel] Core groups split

2015-03-30 Thread Aleksandra Fedorova
Hi, everyone,

As a follow-up to discussion [1], we've added per-project core groups for
all main fuel-* repositories (see [2]):

fuel-astute-core
fuel-devops-core
fuel-docs-core
fuel-library-core
fuel-main-core
fuel-ostf-core
fuel-plugins-core
fuel-qa-core
fuel-stats-core
fuel-web-core

The original fuel-core group is included in each of those new groups, so
nothing has actually changed for now.

But as we are unblocked now, we can start reorganizing our core groups and
clarifying core reviewers' responsibilities.

[1]
http://lists.openstack.org/pipermail/openstack-dev/2015-January/055038.html
[2] https://review.openstack.org/#/c/168182/

-- 
Aleksandra Fedorova
Fuel Devops Engineer
bookwar


Re: [openstack-dev] [Nova][Neutron] Status of the nova-network to Neutron migration work

2015-03-30 Thread Sean M. Collins
 Quick update about OVS vs LB:
 Sean M. Collins pushed up a patch that runs CI on Tempest with LB:
 https://review.openstack.org/#/c/168423/
 
 So far it's failing pretty badly.


I haven't had a chance to debug the failures - it is my hope that
perhaps there are just more changes I need to make to DevStack to make
LinuxBridge work correctly. If anyone is successfully using LinuxBridge
with DevStack, please do review that patch and offer suggestions or
share their local.conf file. :)
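For anyone who wants to try, a rough local.conf fragment for a LinuxBridge/ML2 DevStack run might look like the following — a sketch from memory, not a verified config; the variable values are assumptions to adapt and should be checked against the DevStack docs:

```ini
[[local|localrc]]
# Swap the default OVS agent for LinuxBridge under ML2 (assumed values)
Q_PLUGIN=ml2
Q_AGENT=linuxbridge
Q_ML2_PLUGIN_MECHANISM_DRIVERS=linuxbridge
Q_ML2_TENANT_NETWORK_TYPE=vxlan
PHYSICAL_NETWORK=default
```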

-- 
Sean M. Collins



Re: [openstack-dev] [libvirt] [nova] The risk of hanging when shutdown instance.

2015-03-30 Thread Eric Blake
On 03/30/2015 06:08 AM, Michal Privoznik wrote:
 On 30.03.2015 11:28, zhang bo wrote:
 On 2015/3/28 18:06, Rui Chen wrote:

 snip/

   The description of the virDomainShutdown API is out of date; it's not correct.
   In fact, virDomainShutdown may or may not block, depending on its mode. If
 it's in *agent* mode, it blocks until qemu finds that the
 guest has actually gone down.
 Otherwise, if it's in *acpi* mode, it returns immediately.
   Thus, maybe further work needs to be done in OpenStack.

   What are your opinions, Michal and Daniel (from libvirt.org), and Chris
 (from openstack.org)? :)

 
 
 Yep, the documentation could be better in that respect. I've proposed a
 patch on the libvirt upstream list:
 
 https://www.redhat.com/archives/libvir-list/2015-March/msg01533.html

I don't think a doc patch is right.  If you don't pass any flags, then
it is up to the hypervisor which method it will attempt (agent or ACPI).
Yes, explicitly requesting an agent as the only method to attempt might
be justifiable as a reason to block, but the overall API contract is to
NOT block indefinitely.  I think that rather than a doc patch, we need
to fix the underlying bug, and guarantee that we return after a finite
time even when the agent is involved.
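Until such a guarantee exists in libvirt, a caller can defend itself by bounding the wait. A rough sketch of that pattern — not Nova's actual code, just an illustration of imposing a finite timeout on a potentially blocking call:

```python
import threading


def call_with_timeout(fn, timeout):
    """Run fn() in a daemon thread and wait at most `timeout` seconds.

    Sketch of how a caller could bound a potentially blocking shutdown
    call (e.g. virDomainShutdown in agent mode). If the call times out,
    the thread keeps running in the background and the caller can fall
    back to, say, ACPI shutdown or destroy().
    """
    result = {}

    def runner():
        result["value"] = fn()

    t = threading.Thread(target=runner, daemon=True)
    t.start()
    t.join(timeout)
    if t.is_alive():
        return False, None  # timed out
    return True, result.get("value")


ok, value = call_with_timeout(lambda: "shut down", timeout=1.0)
print(ok, value)  # → True shut down
```

The daemon-thread trick only bounds the wait, not the underlying operation — which is exactly why fixing the contract inside libvirt, as argued above, is the better long-term answer.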

-- 
Eric Blake   eblake redhat com+1-919-301-3266
Libvirt virtualization library http://libvirt.org





Re: [openstack-dev] Barbican : Use of consumer resource

2015-03-30 Thread Asha Seshagiri
Including Alee and Paul in the loop

Refining the above question:

The consumer resource allows clients to register with container
resources. Please find the command below:

POST v1/containers/888b29a4-c7cf-49d0-bfdf-bd9e6f26d718/consumers

Header: content-type=application/json
X-Project-Id: {project_id}
{
  "name": "foo-service",
  "URL": "https://www.fooservice.com/widgets/1234"
}

I would like to know the following:

1. Who does the "client" here refer to? OpenStack services, or any
other services as well?

2. Once the client gets registered through the consumer resource,
how does the client consume or use the container resource?
Any Help would be appreciated.

Thanks Asha.





On Mon, Mar 30, 2015 at 12:05 AM, Asha Seshagiri asha.seshag...@gmail.com
wrote:

 Hi All,

 Once the consumer resource registers with a container, how does the
 consumer consume the container resource?
 Is there any API supporting the above operation.

 Could any one please help on this?

 --
 *Thanks and Regards,*
 *Asha Seshagiri*




-- 
*Thanks and Regards,*
*Asha Seshagiri*


[openstack-dev] [stable] Preparing for 2014.2.3 -- branches freeze April 2nd

2015-03-30 Thread Adam Gandelman
Hi All-

We'll be freezing the stable/juno branches for integrated Juno projects this
Thursday April 2nd in preparation for the 2014.2.3 stable release on
Thursday April 9th.  You can view the current queue of proposed patches
on gerrit [1].  I'd like to request that all interested parties review current
bugs affecting Juno and help ensure any relevant fixes are proposed
soon and merged by Thursday, or notify the stable-maint-core team of
anything critical that may land late and require a freeze exception.

Thanks,
Adam

[1] https://review.openstack.org/#/q/status:open+branch:stable/juno,n,z


Re: [openstack-dev] [Neutron] Core API vs extension: the subnet pool feature

2015-03-30 Thread Carl Baldwin
Thanks for your support, Akihiro.  We will get this up for review very soon.

Carl

On Mon, Mar 30, 2015 at 2:59 PM, Akihiro Motoki amot...@gmail.com wrote:
 Hi Carl,

 I am now reading the details from Salvatore, but I would like to respond
 to this first.

 I don't want to kill this useful feature either; I want to move things forward.
 I am fine with the empty/shim extension approach.
 The subnet pool is regarded as part of the core API, so I think this
 extension can always be enabled even if no plugin declares to use it.
 Sorry for interrupting the work at the last stage, and thanks for your
 understanding.

 Akihiro

 2015-03-31 5:28 GMT+09:00 Carl Baldwin c...@ecbaldwin.net:
 Akihiro,

 If we go with the empty extension you proposed in the patch, will that be
 acceptable?

 We've got to stop killing new functionality on the very last day like this.
 It just kills progress.  This proposal isn't new.

 Carl

 On Mar 30, 2015 11:37 AM, Akihiro Motoki amot...@gmail.com wrote:

 Hi Neutron folks
 (API folks may be interested in this)

 We have another discussion on core vs. extension in the subnet pool
 feature review
 https://review.openstack.org/#/c/157597/.
 We had a similar discussion on VLAN transparency and MTU for the
 network model last week.
 I would like to share my concerns on changing the core API directly.
 I hope this helps us make the discussion productive.
 Note that I don't want to discuss micro-versioning here, since this
 thread mainly focuses on the Kilo FFE BP.

 I would like to discuss this topic in today's neutron meeting,
 but I am not confident I can be up in time, so I am sending this mail.


 The extension mechanism in Neutron provides two points for extensibility:
 - (a) visibility of features in API (users can know which features are
 available through the API)
 - (b) opt-in mechanism in plugins (plugin maintainers can decide to
 support some feature after checking the detail)

 My concerns mainly come from the first point (a).
 If we have no way to detect a feature, users (including Horizon) need to
 resort to dirty workarounds
 to determine whether it is available. I believe this is an
 important aspect of the API.

 On the second point, my only concern (not so important) is that we are
 making a core API change at this point in the release. Some plugins do
 not consume db_base_plugin, and such plugins would need to investigate
 the impact from now on.
 On the other hand, if we use the extension mechanism, all plugins need
 to update their extension lists at the last moment :-(


 My vote at this moment is still to use an extension, but the extension
 layer can be a shim.
 The idea is that all implementations can stay as-is and we just add an
 extension module so that the new feature is visible through the
 extension list.
 It is not perfect, but I think it is a good compromise on the first
 point.


 I know there was a suggestion to change this into the core API in the
 spec review
 and I didn't notice it at that time, but I would like to raise this
 before releasing it.

 For the longer term (and the Liberty cycle), we need to define a clearer
 guideline on core vs. extension vs. micro-versioning in spec reviews.

 Thanks,
 Akihiro







 --
 Akihiro Motoki amot...@gmail.com




Re: [openstack-dev] [OpenStack-Dev] [third-party-ci] Clarifications on the goal and skipping tests

2015-03-30 Thread Rochelle Grober
Top posting… I believe the main issue was a problem with snapshots that caused 
false negatives for most cinder drivers.  But, that got fixed.  Unfortunately, 
we haven’t yet established a good process to notify third parties when skipped 
tests are fixed and should be “unskipped”.  Maybe tagging the tests can help on 
this.  But, I really do think this round was a bit of first run gotchas and 
rookie mistakes on all sides.  A good post mortem on how to better communicate 
changes and deadlines may go a long way to smooth these out in the next round.

--Rocky

John Griffith on Monday, March 30, 2015 15:36 wrote:

On Mon, Mar 30, 2015 at 4:06 PM, Doug Wiegley doug...@parksidesoftware.com wrote:
A few reasons, I’m sure there are others:

- Broken tests that hardcode something about the ref implementation. The test 
needs to be fixed, of course, but in the meantime, a constantly failing CI is 
worthless (hello, lbaas scenario test.)
Certainly... but that's relatively easy to fix (bug/patch to Tempest).
Although that's not actually the case in this particular context, as there are a
handful of third-party devices that run the full set of tests that the ref
driver runs with no additional skips or modifications.

- Test relies on some “optional” feature, like overlapping IP subnets that the 
backend doesn’t support.  I’d argue it’s another case of broken tests if they 
require an optional feature, but it still needs skipping in the meantime.

This may be something specific to Neutron, perhaps?  In Cinder, LVM is pretty
much the lowest common denominator.  I'm not aware of any volume tests in
Tempest that rely on optional features that aren't picked up automatically
out of the config (like multi-backend, for example).

- Some new feature added to an interface, in the presence of shims/decomposed 
drivers/plugins (e.g. adding TLS termination support to lbaas.) Those 
implementations will lag the feature commit, by definition.

Yeah, certainly I think this highlights some of the differences between Cinder
and Neutron, and the differences in complexity.
Thanks for the feedback... I don't disagree per se; however, Cinder is set up a
bit differently here in terms of expectations for base functionality
requirements and compatibility, but your points are definitely well taken.

Thanks,
doug


On Mar 30, 2015, at 2:54 PM, John Griffith john.griffi...@gmail.com wrote:

This may have already been raised/discussed, but I'm kinda confused so thought
I'd ask on the ML here.  The whole point of third-party CI as I recall was to
run the same tests that we run in the official Gate against third-party
drivers.  To me that would imply that a CI system/device that marks itself as
GOOD doesn't do things like add skips locally that aren't in the Tempest code
already?

In other words, it seems like cheating to say "My CI passes and all is good,
except for the tests that don't work, which I skip... but pay no attention to
those please."

Did I miss something? Isn't the whole point of third-party CI to demonstrate
that a third party's backend is tested and functions to the same degree that
the reference implementations do? So the goal (using Cinder for example) was to
be able to say that any API call that works on the LVM reference driver will
work on the drivers listed in driverlog; and that we know this because they run
the same Tempest API tests?

Don't get me wrong, I'm certainly not saying there's malice or that things
should be marked as "no good"... but if the practice is to skip what you can't
do, then maybe that should be documented in the driverlog submission, as
opposed to just stating "Yeah, we run CI successfully."

Thanks,
John





Re: [openstack-dev] [nova] meaning of 'Triaged' in bug tracker

2015-03-30 Thread Tony Breeds
On Mon, Mar 30, 2015 at 10:00:44AM -0400, Sean Dague wrote:
 I've been attempting to clean up the bug tracker, one of the continued
 inconsistencies that are in the Nova tracker is the use of 'Triaged'.
 
 
 https://wiki.openstack.org/wiki/BugTriage
 
 If the bug contains the solution, or a patch, set the bug status to
 Triaged

So I'll stick my hand up and say that I was/am using:
"The bug comments contain a full analysis on how to properly fix the issue"
from: https://wiki.openstack.org/wiki/Bugs

to differentiate between Confirmed and Triaged.  Certainly there are a few bugs
I'm on that fall into the 'the bug has enough information to fix it' category
without having a patch attached.

So I'd like to suggest 2 things

1) We link directly to https://wiki.openstack.org/wiki/BugTriage from
   https://launchpad.net/~nova-bugs.  This is the path I used to get started
   with bug work.  Perhaps I'm strange ;P
2) [ and this is harder as it's project wide :/ ] We pick one of the two
preceding definitions and use it in both places.  We should probably check the
other state descriptions.

I'm not advocating for one definition or t'other, merely trying to reduce points of
confusion.

Yours Tony.




Re: [openstack-dev] [OpenStack-Dev] [third-party-ci] Clarifications on the goal and skipping tests

2015-03-30 Thread John Griffith
On Mon, Mar 30, 2015 at 7:26 PM, arkady_kanev...@dell.com wrote:

 Another scenario.

 The default LVM driver is local to the cinder service.  Thus, while it may work
 fine locally, as soon as you go outside the controller node it does not.

 We had a discussion on choosing different default driver and expect that
 discussion to continue.



 Not all drivers support all features. We have a table that lists which
 features each driver supports.



 The question I would ask is: is setting which tests to skip in the driver
 the right place?

 Why not specify it in the Tempest configuration that the driver runs against?

 Then we can set up rules for when drivers should remove themselves from that
 blackout list.

 That is easier to track, can be cleanly used by defcore and for tagging.



 Thanks,

 Arkady



 *From:* John Griffith [mailto:john.griffi...@gmail.com]
 *Sent:* Monday, March 30, 2015 8:12 PM
 *To:* OpenStack Development Mailing List (not for usage questions)
 *Subject:* Re: [openstack-dev] [OpenStack-Dev] [third-party-ci]
 Clarifications on the goal and skipping tests







 On Mon, Mar 30, 2015 at 6:21 PM, Rochelle Grober 
 rochelle.gro...@huawei.com wrote:

 Top posting… I believe the main issue was a problem with snapshots that
 caused false negatives for most cinder drivers.  But, that got fixed.
 Unfortunately, we haven’t yet established a good process to notify third
 parties when skipped tests are fixed and should be “unskipped”.  Maybe
 tagging the tests can help on this.  But, I really do think this round was
 a bit of first run gotchas and rookie mistakes on all sides.  A good post
 mortem on how to better communicate changes and deadlines may go a long way
 to smooth these out in the next round.



 --Rocky



 John Griffith on Monday, March 30, 2015 15:36 wrote:

 On Mon, Mar 30, 2015 at 4:06 PM, Doug Wiegley 
 doug...@parksidesoftware.com wrote:

 A few reasons, I’m sure there are others:



 - Broken tests that hardcode something about the ref implementation. The
 test needs to be fixed, of course, but in the meantime, a constantly
 failing CI is worthless (hello, lbaas scenario test.)

 ​Certainly... but that's relatively easy to fix (bug/patch to Tempest).
 Although that's not actually the case in this particular context as there
 are a handful of third party devices that run the full set of tests that
 the ref driver runs with no additional skips or modifications.

 ​



 - Test relies on some “optional” feature, like overlapping IP subnets that
 the backend doesn’t support.  I’d argue it’s another case of broken tests
 if they require an optional feature, but it still needs skipping in the
 meantime.



 ​This may be something specific to Neutron perhaps?  In Cinder LVM is
 pretty much the lowest common denominator.  I'm not aware of any volume
 tests in Tempest that rely on optional features that don't pick this up
 automatically out of the config (like multi-backend for example).

 ​



 - Some new feature added to an interface, in the presence of
 shims/decomposed drivers/plugins (e.g. adding TLS termination support to
 lbaas.) Those implementations will lag the feature commit, by definition.



 ​Yeah, certainly I think this highlights some of the differences between
 Cinder and Neutron perhaps and the differences in complexity.

 Thanks for the feedback... I don't disagree per se; Cinder is set
 up a bit differently here in terms of expectations for base functionality
 requirements and compatibility, but your points are definitely well taken.
 ​



 Thanks,

 doug





 On Mar 30, 2015, at 2:54 PM, John Griffith john.griffi...@gmail.com
 wrote:



 This may have already been raised/discussed, but I'm kinda confused so
 thought I'd ask on the ML here.  The whole point of third party CI as I
 recall was to run the same tests that we run in the official Gate against
 third party drivers.  To me that would imply that a CI system/device that
 marks itself as GOOD doesn't do things like add skips locally that aren't
 in the tempest code already?



 In other words, it seems like cheating to say "My CI passes and all is good,
 except for the tests that don't work, which I skip... but pay no attention
 to those please."



 Did I miss something? Isn't the whole point of third-party CI to
 demonstrate that a third party's backend is tested and functions to the
 same degree that the reference implementations do? So the goal (using
 Cinder for example) was to be able to say that any API call that works on
 the LVM reference driver will work on the drivers listed in driverlog; and
 that we know this because they run the same Tempest API tests?



 Don't get me wrong, certainly not saying there's malice or that things should
 be marked as no good... but if the practice is to skip what you can't do
 then maybe that should be documented in the driverlog submission, as
 opposed to just stating "Yeah, we run CI successfully."



 Thanks,

 John


Re: [openstack-dev] [Mistral] Proposal for the Resume Feature

2015-03-30 Thread Dmitri Zimine
Thanks Winson for the summary. 

@Lingxian Kong
  The context for a task is used
 internally. I know the aim of this feature is to make it very easy
 and convenient for users to see the details of the workflow execution,
 but what can users do next with the context? Do you have a plan to
 let users change that context for a task? If the answer is no, I think
 it is not very necessary to expose the context endpoint.

I think the answer is “yes, users will change the context”; this falls out of use 
case #3.
Let’s be specific: a create_vm task failed due to, say, a network connection issue.
As a user, I created the VM manually, and now want to continue the workflow.
The next step is to attach storage to the VM, which needs the VM ID published
variable. So a user needs to modify the outgoing context of the create_vm task.
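To make that concrete, the mechanics of such a resume boil down to overlaying user-supplied published variables on the failed task's outbound context. The sketch below is purely illustrative; the function and variable names are assumptions, not Mistral's actual data model or API:

```python
def build_resume_context(task_outbound, user_published):
    """Overlay user-supplied published vars on a failed task's outbound
    context, so downstream tasks (e.g. attach_storage needing the VM ID)
    see the values the user produced manually outside of Mistral."""
    merged = dict(task_outbound)      # start from what the task did publish
    merged.update(user_published)     # user-supplied values win
    return merged

# create_vm failed before publishing vm_id; the user created the VM by hand
# and now supplies its ID so the workflow can continue from attach_storage.
ctx = build_resume_context(
    task_outbound={'image': 'cirros', 'vm_id': None},
    user_published={'vm_id': 'a1b2c3d4'},
)
print(ctx)   # -> {'image': 'cirros', 'vm_id': 'a1b2c3d4'}
```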

Maybe use case 2 would be sufficient?
We are also likely to specify multiple tasks: in case a parallel execution of
two tasks (create VM, create DNS record) failed, again due to network conditions,
then when the network is back I want to continue, but re-run those exact two tasks.

Another point, maybe obvious, but let’s articulate it: we re-run the task, not an
individual action within the task.
In the context of with_items, retry, and repeat, this will lead to running actions
multiple times.

Finally, workflow execution traceability. We need to get to the point of 
tracing pause and resume as workflow events. 

@Lingxian Kong
  we can introduce the notification
 system to Mistral, which is heavily used in other OpenStack projects.
Care to elaborate? Thanks!

DZ  


On Mar 26, 2015, at 10:29 PM, Lingxian Kong anlin.k...@gmail.com wrote:

 On Fri, Mar 27, 2015 at 11:20 AM, W Chan m4d.co...@gmail.com wrote:
 We assume the WF is in a paused/errored state when 1) the user manually pauses the WF,
 2) pause is specified on transition (on-condition(s) such as on-error), and
 3) a task errored.
 
 The resume feature will support the following use cases.
 1) User resumes WF from manual pause.
 2) In the case of task failure, user fixed the problem manually outside of
 Mistral, and user wants to re-run the failed task.
 3) In the case of task failure, user fixed the problem manually outside of
 Mistral, and user wants to resume from the next task.
 this use case really does make sense to me.
 
 Resuming from #1 should be straightforward.
 Resuming from #2, user may want to change the inbound context.
 Resuming from #3, users are required to manually provide the published vars
 for the failed task(s).
 
 In our offline discussion, there's ambiguity with on-error clause and
 whether a task failure has already been addressed by the WF itself.  In many
 cases, the on-error tasks may just be logging, email notification, and/or
 other non-recovery procedures.  It's hard to determine that automatically,
 so we let users decide where to resume the WF instead.  Mistral will let
 user resume a WF from specific point. The resume function will determine the
 requirements needed to successfully resume.  If requirements are not met,
 then resume returns an error saying what requirements are missing.  In the
 case where there are failures in multiple parallel branches, the
 requirements may include more than one tasks.  For cases where user
 accidentally resume from an earlier task that is already successfully
 completed, the resume function should detect that and throw an exception.
 
 Also, the current change to separate task from action execution should be
 sufficient for traceability.
 
 We also want to expose an endpoint to let users view context for a task.
 This is to let user have a reference of the current task context to
 determine the delta they need to change for a successful resume.
 IMHO, I'm afraid I can't agree here. The context for a task is used
 internally. I know the aim of this feature is to make it very easy
 and convenient for users to see the details of the workflow execution,
 but what can users do next with the context? Do you have a plan to
 let users change that context for a task? If the answer is no, I think
 it is not very necessary to expose the context endpoint.
 
 However, considering the importance of context for the task
 execution (the resuming feature), we can introduce the notification
 system to Mistral, which is heavily used in other OpenStack projects.
 
 
 
 
 
 -- 
 Regards!
 ---
 Lingxian Kong
 


Re: [openstack-dev] [OpenStack-Dev] [third-party-ci] Clarifications on the goal and skipping tests

2015-03-30 Thread John Griffith
On Mon, Mar 30, 2015 at 6:21 PM, Rochelle Grober rochelle.gro...@huawei.com
 wrote:

  Top posting… I believe the main issue was a problem with snapshots that
 caused false negatives for most cinder drivers.  But, that got fixed.
 Unfortunately, we haven’t yet established a good process to notify third
 parties when skipped tests are fixed and should be “unskipped”.  Maybe
 tagging the tests can help on this.  But, I really do think this round was
 a bit of first run gotchas and rookie mistakes on all sides.  A good post
 mortem on how to better communicate changes and deadlines may go a long way
 to smooth these out in the next round.



 --Rocky



 John Griffith on Monday, March 30, 2015 15:36 wrote:

  On Mon, Mar 30, 2015 at 4:06 PM, Doug Wiegley 
 doug...@parksidesoftware.com wrote:

 A few reasons, I’m sure there are others:



 - Broken tests that hardcode something about the ref implementation. The
 test needs to be fixed, of course, but in the meantime, a constantly
 failing CI is worthless (hello, lbaas scenario test.)

 ​Certainly... but that's relatively easy to fix (bug/patch to Tempest).
 Although that's not actually the case in this particular context as there
 are a handful of third party devices that run the full set of tests that
 the ref driver runs with no additional skips or modifications.

 ​



  - Test relies on some “optional” feature, like overlapping IP subnets
 that the backend doesn’t support.  I’d argue it’s another case of broken
 tests if they require an optional feature, but it still needs skipping in
 the meantime.



 ​This may be something specific to Neutron perhaps?  In Cinder LVM is
 pretty much the lowest common denominator.  I'm not aware of any volume
 tests in Tempest that rely on optional features that don't pick this up
 automatically out of the config (like multi-backend for example).

 ​



  - Some new feature added to an interface, in the presence of
 shims/decomposed drivers/plugins (e.g. adding TLS termination support to
 lbaas.) Those implementations will lag the feature commit, by definition.



 ​Yeah, certainly I think this highlights some of the differences between
 Cinder and Neutron perhaps and the differences in complexity.

  Thanks for the feedback... I don't disagree per se; Cinder is set
 up a bit differently here in terms of expectations for base functionality
 requirements and compatibility, but your points are definitely well taken.
 ​



 Thanks,

 doug





   On Mar 30, 2015, at 2:54 PM, John Griffith john.griffi...@gmail.com
 wrote:



 This may have already been raised/discussed, but I'm kinda confused so
 thought I'd ask on the ML here.  The whole point of third party CI as I
 recall was to run the same tests that we run in the official Gate against
 third party drivers.  To me that would imply that a CI system/device that
 marks itself as GOOD doesn't do things like add skips locally that aren't
 in the tempest code already?



 In other words, it seems like cheating to say "My CI passes and all is good,
 except for the tests that don't work, which I skip... but pay no attention
 to those please."



 Did I miss something? Isn't the whole point of third-party CI to
 demonstrate that a third party's backend is tested and functions to the
 same degree that the reference implementations do? So the goal (using
 Cinder for example) was to be able to say that any API call that works on
 the LVM reference driver will work on the drivers listed in driverlog; and
 that we know this because they run the same Tempest API tests?



 Don't get me wrong, certainly not saying there's malice or that things should
 be marked as no good... but if the practice is to skip what you can't do
 then maybe that should be documented in the driverlog submission, as
 opposed to just stating "Yeah, we run CI successfully."



 Thanks,

 John



​Not top posting...

 I believe the main issue was a problem with snapshots that caused false
negatives for most cinder drivers.  But, that got fixed

​Huh?  What was the problem, where was the problem, who/what fixed it, was
there a bug logged somewhere, what comprises *most* Cinder drivers?


Re: [openstack-dev] [OpenStack-Dev] [third-party-ci] Clarifications on the goal and skipping tests

2015-03-30 Thread Arkady_Kanevsky
Another scenario.
The default LVM driver is local to the cinder service.  Thus, while it may work fine
locally, as soon as you go outside the controller node it does not.
We had a discussion on choosing different default driver and expect that 
discussion to continue.

Not all drivers support all features. We have a table that lists which features
each driver supports.

The question I would ask is: is setting which tests to skip in the driver the
right place?
Why not specify it in the Tempest configuration that the driver runs against?
Then we can set up rules for when drivers should remove themselves from that
blackout list.
That is easier to track, can be cleanly used by defcore and for tagging.

Thanks,
Arkady

From: John Griffith [mailto:john.griffi...@gmail.com]
Sent: Monday, March 30, 2015 8:12 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [OpenStack-Dev] [third-party-ci] Clarifications on 
the goal and skipping tests



On Mon, Mar 30, 2015 at 6:21 PM, Rochelle Grober rochelle.gro...@huawei.com wrote:
Top posting… I believe the main issue was a problem with snapshots that caused 
false negatives for most cinder drivers.  But, that got fixed.  Unfortunately, 
we haven’t yet established a good process to notify third parties when skipped 
tests are fixed and should be “unskipped”.  Maybe tagging the tests can help on 
this.  But, I really do think this round was a bit of first run gotchas and 
rookie mistakes on all sides.  A good post mortem on how to better communicate 
changes and deadlines may go a long way to smooth these out in the next round.

--Rocky

John Griffith on Monday, March 30, 2015 15:36 wrote:
On Mon, Mar 30, 2015 at 4:06 PM, Doug Wiegley doug...@parksidesoftware.com wrote:
A few reasons, I’m sure there are others:

- Broken tests that hardcode something about the ref implementation. The test 
needs to be fixed, of course, but in the meantime, a constantly failing CI is 
worthless (hello, lbaas scenario test.)
​Certainly... but that's relatively easy to fix (bug/patch to Tempest).  
Although that's not actually the case in this particular context as there are a 
handful of third party devices that run the full set of tests that the ref 
driver runs with no additional skips or modifications.
​

- Test relies on some “optional” feature, like overlapping IP subnets that the 
backend doesn’t support.  I’d argue it’s another case of broken tests if they 
require an optional feature, but it still needs skipping in the meantime.

​This may be something specific to Neutron perhaps?  In Cinder LVM is pretty 
much the lowest common denominator.  I'm not aware of any volume tests in 
Tempest that rely on optional features that don't pick this up automatically 
out of the config (like multi-backend for example).
​

- Some new feature added to an interface, in the presence of shims/decomposed 
drivers/plugins (e.g. adding TLS termination support to lbaas.) Those 
implementations will lag the feature commit, by definition.

​Yeah, certainly I think this highlights some of the differences between Cinder 
and Neutron perhaps and the differences in complexity.
Thanks for the feedback... I don't disagree per se; Cinder is set up a
bit differently here in terms of expectations for base functionality requirements
and compatibility, but your points are definitely well taken.

Thanks,
doug


On Mar 30, 2015, at 2:54 PM, John Griffith john.griffi...@gmail.com wrote:

This may have already been raised/discussed, but I'm kinda confused so thought 
I'd ask on the ML here.  The whole point of third party CI as I recall was to 
run the same tests that we run in the official Gate against third party 
drivers.  To me that would imply that a CI system/device that marks itself as 
GOOD doesn't do things like add skips locally that aren't in the tempest code 
already?

In other words, it seems like cheating to say "My CI passes and all is good,
except for the tests that don't work, which I skip... but pay no attention to
those please."

Did I miss something? Isn't the whole point of third-party CI to demonstrate
that a third party's backend is tested and functions to the same degree that
the reference implementations do? So the goal (using Cinder for example) was to
be able to say that any API call that works on the LVM reference driver will
work on the drivers listed in driverlog; and that we know this because they run
the same Tempest API tests?

Don't get me wrong, certainly not saying there's malice or that things should be
marked as no good... but if the practice is to skip what you can't do then
maybe that should be documented in the driverlog submission, as opposed to just
stating "Yeah, we run CI successfully."

Thanks,
John

Re: [openstack-dev] [nova] [scheduler] [gantt] Please stop using Gantt for discussing about Nova scheduler

2015-03-30 Thread Dugger, Donald D
I actually prefer to use the term Gantt; it neatly encapsulates the discussions,
and it doesn't take much effort to realize that Gantt refers to the scheduler.
If you feel there is confusion, we can clarify things in the wiki page to
emphasize the process: clean up the current scheduler interfaces and then split
off the scheduler.  The end goal will be the Gantt scheduler, and I'd prefer not
to change the discussion.

Bottom line is I don't see a need to drop the Gantt reference.

--
Don Dugger
Censeo Toto nos in Kansa esse decisse. - D. Gale
Ph: 303/443-3786

-Original Message-
From: Sylvain Bauza [mailto:sba...@redhat.com] 
Sent: Monday, March 30, 2015 8:17 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [nova] [scheduler] [gantt] Please stop using Gantt 
for discussing about Nova scheduler

Hi,

tl;dr: I used the [gantt] tag for this e-mail, but I would prefer if we could 
do this for the last time until we spin-off the project.

  As it is confusing for many people to understand the difference between
the future Gantt project and the Nova scheduler effort we're doing, I'm
proposing to stop using that name for all the efforts related to reducing the
technical debt and splitting out the scheduler. That includes, not
exhaustively, the topic name for our IRC weekly meetings on Tuesdays, any ML
thread related to the Nova scheduler, and any discussion related to the
scheduler happening on IRC.
Instead of using [gantt], please use the [nova] [scheduler] tags.

That said, any discussion related to the real future of a cross-project 
scheduler based on the existing Nova scheduler makes sense to be tagged as 
Gantt, of course.


-Sylvain




Re: [openstack-dev] [Neutron] initial OVN testing

2015-03-30 Thread Russell Bryant
On 03/26/2015 07:54 PM, Russell Bryant wrote:
 Gary and Kyle, I saw in my IRC backlog that you guys were briefly
 talking about testing the Neutron ovn ml2 driver.  I suppose it's time
 to add some more code to the devstack integration to install the current
 ovn branch and set up ovsdb-server to serve up the right database for
 this.  I'll try to work on that tomorrow.  Of course, note that all we
 can set up right now is the northbound database.  None of the code that
 reacts to updates to that database is merged yet.  We can still go ahead
 and test our code and make sure the expected data makes it there, though.

With help from Kyle Mestery, Gary Kotton, and Gal Sagie, some great
progress has been made over the last few days.  Devstack support has
merged and the ML2 driver seems to be doing the right thing.

After devstack runs, you can see that the default networks created by
devstack are in the OVN db:

 $ neutron net-list
 +--+-+--+
 | id   | name| subnets
   |
 +--+-+--+
 | 1c4c9a38-afae-40aa-a890-17cd460b314b | private | 
 115f27d1-5330-489e-b81f-e7f7da123a31 10.0.0.0/24 |
 | 69fc7d7c-6906-43e7-b5e2-77c059cf4143 | public  | 
 6b5c1597-4af8-4ad3-b28b-a4e83a07121b |
 +--+-+--+

 $ ovn-nbctl lswitch-list
 47135494-6b36-4db9-8ced-3bdc9b711ca9 
 (neutron-1c4c9a38-afae-40aa-a890-17cd460b314b)
 03494923-48cf-4af5-a391-ed48fe180c0b 
 (neutron-69fc7d7c-6906-43e7-b5e2-77c059cf4143)

 $ ovn-nbctl lswitch-get-external-id 
 neutron-1c4c9a38-afae-40aa-a890-17cd460b314b
 neutron:network_id=1c4c9a38-afae-40aa-a890-17cd460b314b
 neutron:network_name=private

 $ ovn-nbctl lswitch-get-external-id 
 neutron-69fc7d7c-6906-43e7-b5e2-77c059cf4143
 neutron:network_id=69fc7d7c-6906-43e7-b5e2-77c059cf4143
 neutron:network_name=public

You can also create ports and see those reflected in the OVN db:

 $ neutron port-create 1c4c9a38-afae-40aa-a890-17cd460b314b
 Created a new port:
 +---+-+
 | Field | Value   
 |
 +---+-+
 | admin_state_up| True
 |
 | allowed_address_pairs | 
 |
 | binding:vnic_type | normal  
 |
 | device_id | 
 |
 | device_owner  | 
 |
 | fixed_ips | {subnet_id: 
 115f27d1-5330-489e-b81f-e7f7da123a31, ip_address: 10.0.0.3} |
 | id| e7c080ad-213d-4839-aa02-1af217a6548c
 |
 | mac_address   | fa:16:3e:07:9e:68   
 |
 | name  | 
 |
 | network_id| 1c4c9a38-afae-40aa-a890-17cd460b314b
 |
 | security_groups   | be68fd4e-48d8-46f2-8204-8a916ea6f348
 |
 | status| DOWN
 |
 | tenant_id | ed782253a54c4e0a8b46e275480896c9
 |
 +---+-+

List ports on the logical switch named neutron-1c4c9a38...:

 $ ovn-nbctl lport-list neutron-1c4c9a38-afae-40aa-a890-17cd460b314b
 ...
 96432697-df3c-472a-b48a-9f844764d4bf 
 (neutron-e7c080ad-213d-4839-aa02-1af217a6548c)

We can also see that the proper MAC address was set on that port:

 $ ovn-nbctl lport-get-macs neutron-e7c080ad-213d-4839-aa02-1af217a6548c
 fa:16:3e:07:9e:68

-- 
Russell Bryant



[openstack-dev] [neutron] [devstack] neutron router ID is not set

2015-03-30 Thread Guo, Ruijing
Hi, All,

When Q_USE_NAMESPACE=False, router id is set by

create_neutron_initial_network
_neutron_configure_router_v4
_neutron_set_router_id

function _neutron_set_router_id {
if [[ $Q_USE_NAMESPACE == False ]]; then
iniset $Q_L3_CONF_FILE DEFAULT router_id $ROUTER_ID
fi
}

create_neutron_initial_network is called after the neutron service is enabled.

However, router_id in the l3 conf is expected to be set before the neutron
service is enabled.

Is it a bug?


Thanks,
-Ruijing


[openstack-dev] [cinder] Attaching extra-spec to vol-type using Cinder py-client

2015-03-30 Thread Pradip Mukhopadhyay
Hello,

I am trying to create a volume type and set some extra-spec parameters on it as
follows:

cinder type-create nfs
cinder type-key nfs set volume_backend_name=myNFSBackend

The same thing I want to achieve through python client.

I can create the type as follows:

from cinderclient import client
cinder = client.Client('2', 'admin', 'pw', 'demo',
'http://127.0.0.1:5000/v2.0', service_type='volumev2')
cinder.volume_types.create('nfs')

However, how can I associate the extra-spec with the 'nfs' volume type through
python-client code (the same effect as the CLI 'cinder type-key nfs set
volume_backend_name=myNFSBackend' has)?

The 'set_keys' etc. methods are there in the v2/volume_types.py in
python-cinderclient codebase. How to call it? (it's part of VolumeType
class, not VolumeTypeManager).
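For what it's worth, one way this typically works (in the Kilo-era python-cinderclient, `volume_types.create()` returns a `VolumeType` object, and `set_keys()`/`get_keys()` are methods on that object, so you call them on the return value of `create()`). The stand-in classes in the sketch below only mirror that shape so the pattern can run without a live cloud; they are not the real client:

```python
# Offline sketch of the call pattern for attaching extra-specs to a
# volume type.  With the real client the calls would look like:
#
#     vtype = cinder.volume_types.create('nfs')
#     vtype.set_keys({'volume_backend_name': 'myNFSBackend'})

class FakeVolumeType:
    def __init__(self, name):
        self.name = name
        self._extra_specs = {}

    def set_keys(self, metadata):        # mirrors VolumeType.set_keys()
        self._extra_specs.update(metadata)

    def get_keys(self):                  # mirrors VolumeType.get_keys()
        return dict(self._extra_specs)


class FakeVolumeTypeManager:
    def create(self, name):              # mirrors VolumeTypeManager.create()
        return FakeVolumeType(name)


volume_types = FakeVolumeTypeManager()
vtype = volume_types.create('nfs')
vtype.set_keys({'volume_backend_name': 'myNFSBackend'})
print(vtype.get_keys())   # -> {'volume_backend_name': 'myNFSBackend'}
```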

Any help would be great.

Thanks, Pradip


Re: [openstack-dev] Barbican : Use of consumer resource

2015-03-30 Thread John Wood
(Including Adam, who implemented this feature last year to make sure I'm not 
misspeaking here :)

Hello Asha,

The consumers feature allows clients/services to register 'interest' in a given 
secret or container. The URL provided is unrestricted. Clients that wish to 
delete a secret or container may add logic to hold off deleting if other 
services have registered their interest in the resource. However, for Barbican 
this data is only informational, with no business logic (such as rejecting 
delete attempts) associated with it.
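As an illustration of that client-side guard (a sketch with injected callables, not Barbican's API; a real client would fetch the list via a GET on the container's consumers sub-resource, and the function name here is made up):

```python
def delete_if_unused(list_consumers, delete, container_ref):
    """Delete a container only if no consumers registered interest.

    list_consumers/delete stand in for whatever client calls fetch the
    consumer list and issue the DELETE.  Barbican itself will not reject
    the delete; this guard is purely the client's own policy.
    """
    consumers = list_consumers(container_ref)
    if consumers:
        return False, consumers   # hold off: someone still cares
    delete(container_ref)
    return True, []

# Fake backend: one registered consumer, so our own delete is held off.
deleted, blockers = delete_if_unused(
    list_consumers=lambda ref: [{'name': 'foo-service',
                                 'URL': 'https://www.fooservice.com/widgets/1234'}],
    delete=lambda ref: None,
    container_ref='888b29a4-c7cf-49d0-bfdf-bd9e6f26d718',
)
print(deleted)   # -> False
```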

I hope that helps.

Thanks,
John


From: Asha Seshagiri asha.seshag...@gmail.com
Date: Monday, March 30, 2015 at 5:04 PM
To: openstack-dev openstack-dev@lists.openstack.org
Cc: John Wood john.w...@rackspace.com, Reller, Nathan S. 
nathan.rel...@jhuapl.edu, Douglas Mendizabal 
douglas.mendiza...@rackspace.com, a...@redhat.com, Paul Kehrer 
paul.keh...@rackspace.com
Subject: Re: Barbican : Use of consumer resource

Including Alee and Paul in the loop

Refining the above question :

The consumer resource allows the clients to register with container resources. 
Please find the command and response below


POST v1/containers/888b29a4-c7cf-49d0-bfdf-bd9e6f26d718/consumers

Header: content-type=application/json
X-Project-Id: {project_id}
{
    "name": "foo-service",
    "URL": "https://www.fooservice.com/widgets/1234"
}

I would like to know the following :

1. Who does the client here refer to? OpenStack services, or any other
services as well?

2. Once the client gets registered through the consumer resource, how does the
client consume or use the consumer resource?

Any Help would be appreciated.

Thanks Asha.




On Mon, Mar 30, 2015 at 12:05 AM, Asha Seshagiri asha.seshag...@gmail.com wrote:
Hi All,

Once a consumer registers with a container, how does the consumer actually
consume the container resource?
Is there any API supporting this operation?

Could any one please help on this?

--
Thanks and Regards,
Asha Seshagiri



--
Thanks and Regards,
Asha Seshagiri


[openstack-dev] [neutron]

2015-03-30 Thread wei hu
 Hi, all.
Recently, I have a requirement to establish an IPsec connection with a virtual
tunnel interface (VTI).
But I found that the existing VPNaaS in Neutron does not support IPsec with a
virtual tunnel interface.
Do we have a plan to let VPNaaS support IPsec with VTI, or GRE over IPsec?

With this feature, we can add route rules on the VPN gateways, so that each
IPsec connection is not limited to connecting just two private subnets; we can
connect any subnets simply by adding route rules on the gateway.

http://www.cisco.com/c/en/us/td/docs/solutions/Enterprise/WAN_and_MAN/P2P_GRE_IPSec/P2P_GRE_IPSec/2_p2pGRE_Phase2.html

http://www.cisco.com/c/en/us/td/docs/ios/sec_secure_connectivity/configuration/guide/15_0/sec_secure_connectivity_15_0_book/sec_ipsec_virt_tunnl.html


--
huwei@gmail.com


Re: [openstack-dev] [libvirt] [nova] The risk of hanging when shutdown instance.

2015-03-30 Thread zhang bo
On 2015/3/31 4:36, Eric Blake wrote:

 On 03/30/2015 06:08 AM, Michal Privoznik wrote:
 On 30.03.2015 11:28, zhang bo wrote:
 On 2015/3/28 18:06, Rui Chen wrote:

 snip/

   The API virDomainShutdown's description is out of date; it's not correct.
   In fact, whether virDomainShutdown blocks depends on its mode. If it's in
 mode *agent*, it blocks until QEMU finds that the guest has actually shut
 down. Otherwise, if it's in mode *acpi*, it returns immediately.
   Thus, maybe further work needs to be done in OpenStack.

   What's your opinions, Michal and Daniel (from libvirt.org), and Chris 
 (from openstack.org) :)



 Yep, the documentation could be better in that respect. I've proposed a
 patch on the libvirt upstream list:

 https://www.redhat.com/archives/libvir-list/2015-March/msg01533.html
 
 I don't think a doc patch is right.  If you don't pass any flags, then
 it is up to the hypervisor which method it will attempt (agent or ACPI).
  Yes, explicitly requesting an agent as the only method to attempt might
 be justifiable as a reason to block, but the overall API contract is to
 NOT block indefinitely.  I think that rather than a doc patch, we need
 to fix the underlying bug, and guarantee that we return after a finite
 time even when the agent is involved.
 

So, may we get to a final decision? :) Shall we time out in virDomainShutdown(),
or leave it to OpenStack?
The 2 solutions I can see are:
1) Time out in virDomainShutdown() and virDomainReboot(), in libvirt.
2) In OpenStack: spawn a new thread to monitor the guest's status, and if it is
   still not shut off a while after dom.shutdown(), call dom.destroy() to
   force it down.
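Option 2 can be sketched with the libvirt Python bindings. This is a minimal illustration of the polling idea only, not code from Nova or libvirt; `graceful_shutdown` and its defaults are made-up names, and `dom` stands in for a `libvirt.virDomain` handle:

```python
import time

def graceful_shutdown(dom, timeout=60.0, poll_interval=1.0):
    """Ask the guest to shut down; hard-stop it if it does not comply.

    `dom` is assumed to expose shutdown()/isActive()/destroy(), as a
    libvirt.virDomain does.
    """
    dom.shutdown()  # in ACPI mode this returns immediately
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if not dom.isActive():
            return "shutdown"   # guest powered off on its own
        time.sleep(poll_interval)
    dom.destroy()               # still running: pull the virtual plug
    return "destroyed"
```

Running this in a spawned thread (or greenthread) keeps the compute service from blocking on a guest whose agent never answers, which is exactly the hang described above.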




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] how to handle vendor-specific API microversions?

2015-03-30 Thread Jay Pipes

On 03/27/2015 03:03 PM, Chris Friesen wrote:

On 03/27/2015 12:44 PM, Dan Smith wrote:

To quote John from an earlier email in this thread:

Its worth noting, we do have the experimental flag:

The first header specifies the version number of the API which was
executed. Experimental is only returned if the operator has made a
modification to the API behaviour that is non standard. This is only
intended to be a transitional mechanism while some functionality used
by cloud operators is upstreamed and it will be removed within a small
number of releases.


So if you have an extension that gets accepted upstream you can use the
experimental flag until you migrate to the upstream version of the
extension.


Yes, but please note the last sentence in the quoted bit. This is to
help people clean their dirty laundry. Going forward, you shouldn't
expect to deliver features to your customers via this path.


That is *not* what I would call interoperability, this is exactly what
we do not want.


+1.


So for the case where a customer really wants some functionality, and
wants it *soon* rather than waiting for it to get merged upstream, what
is the recommended implementation path for a vendor?


Get it merged upstream and then backported to stable branches. Yes, this 
takes time. Yes, it's a pain. Yes, it takes nagging sometimes. Yes, it 
annoys product managers.



And what about stuff that's never going to get merged upstream because
it's too specialized or too messy or depends on proprietary stuff?


See below.


I ask this as an employee of a vendor that provides some modifications
that customers seem to find useful (using the existing extensions
mechanism to control them) and we want to do the right thing here.  Some
of the modifications could make sense upstream and we are currently
working on pushing those, but it's not at all clear how we're supposed
to handle the above scenarios once the existing extension code gets
removed.


Anything that affects the public compute API should be done from the 
start in the open in upstream.


True extensions or vendor add-ons should be done in an entirely separate 
REST API endpoint, IMO.


Best,
-jay

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack-operators] [Neutron] Deprecating the use_namespaces option - Now's the time to speak up!

2015-03-30 Thread George Shuklin



On 03/30/2015 11:18 AM, Kevin Benton wrote:
What does fog do? Is it just a client to the Neutron HTTP API? If so, 
it should not have broken like that because the API has remained 
pretty stable. If it's a deployment tool, then I could see that 
because the configuration options to tend to suffer quite a bit of 
churn as tools used by the reference implementation evolve.


As far as I understand (I'm not a Ruby guy, I'm an OpenStack guy, but I've 
been peeking at Ruby guys' attempts to use OpenStack with fog as a replacement 
for vagrant/virtualbox), the problem lies in the default network selection.


Fog expects to have one network and use it, and a network-rich Neutron 
environment is simply too complex for it. Maybe fog is to blame, but the 
result is simple: a user library that worked fine with nova-network is 
struggling after the update to Neutron.


Linux usually covers all those cases to make the transition between versions 
very smooth. OpenStack does not.


I agree that these changes are an unpleasant experience for the end 
users, but that's what the deprecation timeline is for. This feature 
won't break in L, it will just result in deprecation warnings. If we 
get feedback from users that this serves an important use case that 
can't be addressed another way, we can always stop the deprecation at 
that point.


In my opinion it happens too fast and is too cruel. For example: the option 
is deprecated in the 'L' release and will be kept only if 'L' users complain. 
But for that, many users would have to move from Havana straight to the newer 
version, and that's not what happens: many skip a few versions before moving 
to a new one.


OpenStack releases are too wild and untested to be used right after release 
(a simple example: the VLAN ID bug in Neutron, which completely broke hard 
reboots, was fixed only in the last update of Havana, which means every 
Havana was broken from the moment of release until the very end), so users 
wait until the bugs are fixed and deploy the new version after that. That 
puts something like half a year between a new version and its deployment. 
And no one wants to upgrade right after finishing a deployment, so add one 
or two more years. Only then do users find that everything is deprecated 
and removed, that OpenStack is new and shiny again, and that everyone needs 
to learn it from scratch. I'm exaggerating a bit, but it's true: the older 
and more mature the installation (like a big public cloud), the less it 
wants to upgrade every half year to the shiny new bugs.


TL;DR: The deprecation cycle should take at least a few years to get proper 
feedback from real heavy users.



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Barbican : Use of consumer resource

2015-03-30 Thread Adam Harwell
As John said, the URI is unrestricted (intentionally so) -- this could be 
'mailto:s...@person.com' just as easily as a reference to another OpenStack or 
external service. Originally, the idea was that Loadbalancers would need to use 
a Container for TLS purposes, so we'd put the LB's URI in there as a 
back-reference 
(https://loadbalancers.myservice.com/lbaas/v2/loadbalancers/12345). That way, 
you could easily show in Horizon that LB 12345 is using this container.

Registering with that POST has the side-effect of receiving the container's 
data as though you'd just done a GET - so, the design was that any time a 
service needed to GET the container data, it would do a POST to register 
instead - which would give you the data, but also mark interest. The 
registration action is idempotent, so you can register once, twice, or a 
hundred times and it has the same effect. The only tricky part is making sure 
that your service de-registers when you stop using the container.
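The register-instead-of-GET pattern above can be sketched as a small helper. This is an illustrative sketch only: the endpoint URL and project id are assumptions (a real deployment would discover the endpoint from the Keystone catalog), and `build_consumer_registration` is a hypothetical name; only the payload shape follows the POST shown later in this thread:

```python
import json

# Assumed Barbican endpoint; not from this thread.
BARBICAN = "http://localhost:9311/v1"

def build_consumer_registration(container_id, service_name, service_url):
    """Build the POST that registers a consumer on a container.

    Registering returns the container data as though a GET had been done,
    so a service can issue this POST in place of a plain GET and mark its
    interest at the same time. The call is idempotent, so repeating it is
    harmless.
    """
    endpoint = "{}/containers/{}/consumers".format(BARBICAN, container_id)
    body = json.dumps({"name": service_name, "URL": service_url})
    headers = {"Content-Type": "application/json",
               "X-Project-Id": "demo-project"}  # assumed project id
    return endpoint, body, headers
```

De-registration would be a DELETE against the same path identifying the same consumer, which is the cleanup step warned about above.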

--Adam


From: John Wood john.w...@rackspace.com
Date: Tuesday, March 31, 2015 12:06 AM
To: Asha Seshagiri asha.seshag...@gmail.com, 
openstack-dev openstack-dev@lists.openstack.org
Cc: Reller, Nathan S. nathan.rel...@jhuapl.edu, Douglas Mendizabal 
douglas.mendiza...@rackspace.com, a...@redhat.com, Paul Kehrer 
paul.keh...@rackspace.com, Adam Harwell adam.harw...@rackspace.com
Subject: Re: Barbican : Use of consumer resource

(Including Adam, who implemented this feature last year to make sure I'm not 
misspeaking here :)

Hello Asha,

The consumers feature allows clients/services to register 'interest' in a given 
secret or container. The URL provided is unrestricted. Clients that wish to 
delete a secret or container may add logic to hold off deleting if other 
services have registered their interest in the resource. However, for Barbican 
this data is only informational, with no business logic (such as rejecting 
delete attempts) associated with it.

I hope that helps.

Thanks,
John


From: Asha Seshagiri asha.seshag...@gmail.com
Date: Monday, March 30, 2015 at 5:04 PM
To: openstack-dev openstack-dev@lists.openstack.org
Cc: John Wood john.w...@rackspace.com, Reller, Nathan S. 
nathan.rel...@jhuapl.edu, Douglas Mendizabal 
douglas.mendiza...@rackspace.com, a...@redhat.com, Paul Kehrer 
paul.keh...@rackspace.com
Subject: Re: Barbican : Use of consumer resource

Including Alee and Paul in the loop

Refining the above question:

The consumer resource allows clients to register with container resources. 
Please find the command and response below:


POST v1/containers/888b29a4-c7cf-49d0-bfdf-bd9e6f26d718/consumers

Header: content-type=application/json
X-Project-Id: {project_id}

{
    "name": "foo-service",
    "URL": "https://www.fooservice.com/widgets/1234"
}

I would like to know the following:

1. Who does the client here refer to? OpenStack services, or other 
services as well?

2. Once a client has registered through the consumer resource, how does the 
client consume or use the registered resource?

Any Help would be appreciated.

Thanks Asha.




On Mon, Mar 30, 2015 at 12:05 AM, Asha Seshagiri 
asha.seshag...@gmail.com wrote:
Hi All,

Once a consumer resource registers with a container, how does the consumer 
consume the container resource?
Is there an API supporting the above operation?

Could anyone please help with this?

--
Thanks and Regards,
Asha Seshagiri



--
Thanks and Regards,
Asha Seshagiri
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] EC2 API Sub team weekly meeting

2015-03-30 Thread M Ranga Swami Reddy
We'll have the weekly EC2 API sub team meeting [1] at 1400UTC in
#openstack-meeting-3 today. We'll likely spend the majority of the
time going over current status along with critical bugs, as well as
covering BPs.

Please feel free to add other items to the agenda [2] section.

Thanks!
Swami

[1] https://wiki.openstack.org/wiki/Meetings/EC2API

[2] https://wiki.openstack.org/wiki/Meetings/EC2API#Agenda

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] how to handle vendor-specific API microversions?

2015-03-30 Thread Ken'ichi Ohmichi
2015-03-28 4:03 GMT+09:00 Chris Friesen chris.frie...@windriver.com:
 On 03/27/2015 12:44 PM, Dan Smith wrote:

 To quote John from an earlier email in this thread:

 Its worth noting, we do have the experimental flag:
 
 The first header specifies the version number of the API which was
 executed. Experimental is only returned if the operator has made a
 modification to the API behaviour that is non standard. This is only
 intended to be a transitional mechanism while some functionality used
 by cloud operators is upstreamed and it will be removed within a small
 number of releases.
 

 So if you have an extension that gets accepted upstream you can use the
 experimental flag until you migrate to the upstream version of the
 extension.


 Yes, but please note the last sentence in the quoted bit. This is to
 help people clean their dirty laundry. Going forward, you shouldn't
 expect to deliver features to your customers via this path.

 That is *not* what I would call interoperability, this is exactly what
 we do not want.


 +1.


 So for the case where a customer really wants some functionality, and wants
 it *soon* rather than waiting for it to get merged upstream, what is the
 recommended implementation path for a vendor?

 And what about stuff that's never going to get merged upstream because it's
 too specialized or too messy or depends on proprietary stuff?

 I ask this as an employee of a vendor that provides some modifications that
 customers seem to find useful (using the existing extensions mechanism to
 control them) and we want to do the right thing here.  Some of the
 modifications could make sense upstream and we are currently working on
 pushing those, but it's not at all clear how we're supposed to handle the
 above scenarios once the existing extension code gets removed.


Nova is just code; it is possible to extend the APIs as vendors want.
However, I'm not sure why the community/upstream needs to provide a
standard way to do vendor customization.
If your use case is a private cloud, interoperability is not so
important, and you can customize APIs without worrying about it.

Now we are trying to reject unexpected attributes (including vendor-specific
attributes) in Tempest [1], so RefStack, which uses Tempest, will reject
these customized APIs.

Thanks
Ken Ohmichi

---
[1]: 
http://lists.openstack.org/pipermail/openstack-dev/2015-February/057613.html

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [os-ansible-deployment] Nominating Nolan Brubaker for core team

2015-03-30 Thread Nolan Brubaker
Thanks Kevin, and thanks to the other cores for their votes of confidence.

On Mar 30, 2015, at 11:50 AM, Kevin Carter kevin.car...@rackspace.com wrote:

 Please join me in welcoming Nolan Brubaker (palendae) to the 
 os-ansible-deployment core team.
 
 —
 
 Kevin Carter
 
 
 On Mar 30, 2015, at 06:54, Jesse Pretorius jesse.pretor...@gmail.com wrote:
 
 On 25 March 2015 at 15:24, Kevin Carter kevin.car...@rackspace.com wrote:
 I would like to nominate Nolan Brubaker (palendae on IRC) for the 
 os-ansible-deployment-core team. Nolan has been involved with the project 
 for the last few months and has been an active reviewer with solid reviews. 
 IMHO, I think he is ready to receive core powers on the repository.
 
 References:
  [ 
 https://review.openstack.org/#/q/project:stackforge/os-ansible-deployment+reviewer:%22nolan+brubaker%253Cnolan.brubaker%2540rackspace.com%253E%22,n,z
  ]
 
 Please respond with +1/-1s or any other concerns.
 
 +1 Nolan's been an active reviewer, provided good feedback and contributions.
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Sahara] Question about Sahara API v2

2015-03-30 Thread Sergey Lukjanov
Agree with Mike, thx for the link.

On Mon, Mar 30, 2015 at 4:55 PM, michael mccune m...@redhat.com wrote:

 On 03/30/2015 07:02 AM, Sergey Lukjanov wrote:

 My personal opinion for API 2.0: we should discuss the design of all objects
 and endpoints, review how they are used from Horizon or
 python-saharaclient, and improve them as much as possible. For example,
 that includes:

 * get rid of tons of extra optional fields
 * rename Job - Job Template, Job Execution - Job
 * better support for Horizon needs
 * hrefs

 If you have any ideas about 2.0 - please write them up; there is a
 99% chance that we'll discuss API 2.0 a lot at the Vancouver summit.


 +1

 i've started a pad that we can use to collect ideas for the discussion:
 https://etherpad.openstack.org/p/sahara-liberty-api-v2

 things that i'd like to see from the v2 discussion

 * a full endpoint review, some of the endpoints might need to be
 deprecated or adjusted slightly (for example, job-binary-internals)

 * a technology review, should we consider Pecan or stay with Flask?

 * proposals for more radical changes to the api; use of micro-versions
 akin to nova's plan, migrating the project id into the headers, possible
 use of swagger to aid in auto-generation of api definitions.

 i think we will have a good amount to discuss and i will be migrating some
 of my local notes into the pad over this week and the next. i invite
 everyone to add their thoughts to the pad for ideas.

 mike


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Sincerely yours,
Sergey Lukjanov
Sahara Technical Lead
(OpenStack Data Processing)
Principal Software Engineer
Mirantis Inc.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev