Re: [openstack-dev] Hierarchical Multitenancy

2014-12-23 Thread Tim Bell

It would be great if we can get approval for the Hierarchical Quota handling in 
Nova too (https://review.openstack.org/#/c/129420/).

Tim

From: Morgan Fainberg [mailto:morgan.fainb...@gmail.com]
Sent: 23 December 2014 01:22
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] Hierarchical Multitenancy

Hi Raildo,

Thanks for putting this post together. I really appreciate all the work you 
guys have done (and continue to do) to get the Hierarchical Multitenancy code 
into Keystone. It’s great to have the base implementation merged into Keystone 
for the K1 milestone. I look forward to seeing the rest of the development land 
during the rest of this cycle and what the other OpenStack projects build 
around the HMT functionality.

Cheers,
Morgan



On Dec 22, 2014, at 1:49 PM, Raildo Mascena 
rail...@gmail.com wrote:

Hello folks, My team and I developed the Hierarchical Multitenancy concept for 
Keystone in Kilo-1. But what is Hierarchical Multitenancy? What have we 
implemented? What are the next steps for Kilo?
To answer these questions, I created a blog post: 
http://raildo.me/hierarchical-multitenancy-in-openstack/

If you have any questions, I'm available.

--
Raildo Mascena
Software Engineer.
Bachelor of Computer Science.
Distributed Systems Laboratory
Federal University of Campina Grande
Campina Grande, PB - Brazil

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][lbaas] Canceling lbaas meeting 12/16

2014-12-23 Thread Evgeny Fedoruk
Thanks Brandon

-Original Message-
From: Brandon Logan [mailto:brandon.lo...@rackspace.com] 
Sent: Sunday, December 21, 2014 7:50 PM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [neutron][lbaas] Canceling lbaas meeting 12/16

The extensions are remaining in neutron until the Neutron WSGI refactor is 
completed, so it's easier to test all extensions and ensure that nothing 
breaks.  I do believe the plan is to move the extensions into the service repos 
once this is completed.

Thanks,
Brandon
On Sun, 2014-12-21 at 10:14 +, Evgeny Fedoruk wrote:
 Hi Doug,
 How are you? 
 I have a question regarding the https://review.openstack.org/#/c/141247/ 
 change set. Extension changes are not part of this change. I also see that the 
 whole extension mechanism is out of the new repository.
 I may have missed something. Are we replacing the mechanism with something 
 else? Or will we add it separately in another change set?
 
 Thanks,
 Evg
 
 -Original Message-
 From: Doug Wiegley [mailto:do...@a10networks.com]
 Sent: Sunday, December 14, 2014 7:46 PM
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: [openstack-dev] [neutron][lbaas] Canceling lbaas meeting 
 12/16
 
 Unless someone has an urgent agenda item, and due to the mid-cycle for 
 Octavia, which has a bunch of overlap with the lbaas team, let’s cancel this 
 week. If you have post-split lbaas v2 questions, please find me in 
 #openstack-lbaas.
 
 The only announcement was going to be: If you are waiting to re-submit/submit 
 lbaasv2 changes for the new repo, please monitor this review, or make your 
 change dependent on it:
 
 https://review.openstack.org/#/c/141247/
 
 
 Thanks,
 Doug
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova][libvirt]How to customize cpu features in nova

2014-12-23 Thread CloudBeyond
Dear Developers,

Sorry for interrupting if I sent this to the wrong email group, but I have a
problem running Solaris 10 on Icehouse OpenStack.
I found that the CPU feature x2apic needs to be disabled so that the Solaris 10
NIC can work in KVM, as in the following snippet from libvirt.xml:

  <cpu mode='custom' match='exact'>
    <model>SandyBridge</model>
    <vendor>Intel</vendor>
    <feature policy='disable' name='x2apic'/>
  </cpu>

Without the line
  <feature policy='disable' name='x2apic'/>
the NIC in Solaris does not work.

I then tried to migrate the KVM libvirt XML to Nova, and found only two options
to control the result.

First I used the default setting cpu_mode = None in nova.conf; Solaris 10
kept rebooting before entering the desktop environment.

Then I set cpu_mode = custom, cpu_model = SandyBridge. Solaris 10 could
start up, but the NIC did not work.

I also set cpu_mode = host-model, cpu_model = None. Solaris 10 could work,
but the NIC still did not.

I read the code located in
/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py. Is it possible
to do some hacking to customize the CPU features?
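
For reference, a rough, untested sketch of what such a hack could look like,
assuming the LibvirtConfigGuestCPUFeature class in nova/virt/libvirt/config.py
and its 'policy' attribute; the helper name below is made up:

    from nova.virt.libvirt import config as vconfig

    def disable_cpu_features(cpu, names=('x2apic',)):
        # Append <feature policy='disable' name='...'/> entries to the
        # LibvirtConfigGuestCPU object before the domain XML is generated,
        # e.g. at the end of _get_guest_cpu_model_config() in driver.py.
        for name in names:
            feature = vconfig.LibvirtConfigGuestCPUFeature(name)
            feature.policy = 'disable'
            cpu.add_feature(feature)
        return cpu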

Thank you, and I am really looking forward to your reply.
Have a nice day and Merry Christmas!

Best Regards.
Elbert Wang
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova][ThirdPartyCI][PCI CI] comments to Nova

2014-12-23 Thread yongli he

Hi, Joe Gordon and all

Recently Intel has been setting up a HW-based third-party CI. It has already
been running a set of basic PCI test cases for several weeks, but it does not
send out comments yet; it only logs the results.

The log server and these test cases seem stable. Here is one sample log:

http://192.55.68.190/138795/6/

For now, the test cases live on GitHub:
https://github.com/intel-hw-ci/Intel-Openstack-Hardware-CI/tree/master/pci_testcases

To begin posting comments to the nova repository, what other necessary work
needs to be addressed?


Some notes:
* The test cases just cover basic PCI passthrough testing.
* After it begins working, more test cases will be added, including basic SR-IOV.


Thanks
Yongli He

More logs:
http://192.55.68.190/138795/6

http://192.55.68.190/74423/6

http://192.55.68.190/141115/6

http://192.55.68.190/142565/2

http://192.55.68.190/142835/3

http://192.55.68.190/74423/5

http://192.55.68.190/142835/2

http://192.55.68.190/140739/3

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Our idea for SFC using OpenFlow. RE: [NFV][Telco] Service VM v/s its basic framework

2014-12-23 Thread Vikram Choudhary
Hi Keshava,

Please find my answer inline…

From: A, Keshava [mailto:keshav...@hp.com]
Sent: 22 December 2014 20:10
To: Vikram Choudhary; Murali B
Cc: openstack-dev@lists.openstack.org; yuriy.babe...@telekom.de; 
stephen.kf.w...@gmail.com; Dhruv Dhody; Dongfeng (C); Kalyankumar Asangi; A, 
Keshava
Subject: RE: Our idea for SFC using OpenFlow. RE: [openstack-dev] [NFV][Telco] 
Service VM v/s its basic framework

Vikram,

1.   In this solution, is it assumed that all the OpenStack services are 
available/enabled on all the CNs?

Vikram: The SFC NB API should be independent of where the advanced services are 
deployed; the API should in fact hide this information by design.

In the BP, we have proposed the idea of a service pool which will be populated 
by the user with their respective advanced service instance(s). Our solution 
will just try to find the best service instance to use and dictate the flow 
path which the data traffic needs to follow to accomplish SFC.

Let’s say the service pool contains 2 LB and 2 FW services and the user wants 
an SFC of LB->FW. In such a scenario, a flow rule mentioning the details of the 
LB and FW instances, with flow path details, will be downloaded to the OVS.

Please note that the details of the advanced services (like IP details, i.e. 
where the service is running, etc.) will be fetched from the neutron db.
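
To illustrate (a rough sketch with hypothetical names, not the proposed API):
pick the best instance for each service type in the requested chain, then emit
an ordered hop list that the flow-rule builder can translate into OVS entries.

    def build_chain(requested_chain, service_pool):
        # requested_chain: e.g. ['LB', 'FW']
        # service_pool: {'LB': [{'id': ..., 'ip': ..., 'load': ...}, ...], ...}
        hops = []
        for service_type in requested_chain:
            candidates = service_pool[service_type]
            # naive "best instance" policy: pick the least-loaded instance
            hops.append(min(candidates, key=lambda inst: inst['load']))
        return hops

    pool = {'LB': [{'id': 'lb1', 'ip': '10.0.0.11', 'load': 3},
                   {'id': 'lb2', 'ip': '10.0.0.12', 'load': 1}],
            'FW': [{'id': 'fw1', 'ip': '10.0.0.21', 'load': 0}]}
    print([hop['id'] for hop in build_chain(['LB', 'FW'], pool)])  # ['lb2', 'fw1']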



2.   Consider a scenario: for a particular tenant's traffic, the flows are 
chained across a set of CNs.

Then, if one of the VMs (of that tenant) migrates to a new CN, where that 
tenant was not present earlier, what will be the impact?

How is the chaining of flows controlled in this kind of scenario, so that 
packets will reach that tenant's VM on the new CN?

Vikram: If the deployment of advanced services changes, the neutron db would be 
updated and corresponding actions taken (selection of an advanced service 
instance and the corresponding change in the OVS dataflow). This is hidden from 
the user.



Here, this tenant VM would be an NFV Service-VM (which should be transparent to 
OpenStack).

keshava



From: Vikram Choudhary [mailto:vikram.choudh...@huawei.com]
Sent: Monday, December 22, 2014 12:28 PM
To: Murali B
Cc: openstack-dev@lists.openstack.org; yuriy.babe...@telekom.de; A, Keshava; 
stephen.kf.w...@gmail.com; Dhruv Dhody; Dongfeng (C); Kalyankumar Asangi
Subject: RE: Our idea for SFC using OpenFlow. RE: [openstack-dev] [NFV][Telco] 
Service VM v/s its basic framework

Sorry for the inconvenience. We will sort out the issue at the earliest.
Please find the BP attached to the mail.

From: Murali B [mailto:mbi...@gmail.com]
Sent: 22 December 2014 12:20
To: Vikram Choudhary
Cc: openstack-dev@lists.openstack.org; yuriy.babe...@telekom.de; 
keshav...@hp.com; stephen.kf.w...@gmail.com; Dhruv Dhody; 
Dongfeng (C); Kalyankumar Asangi
Subject: Re: Our idea for SFC using OpenFlow. RE: [openstack-dev] [NFV][Telco] 
Service VM v/s its basic framework

Thank you Vikram,

Could you or somebody please provide access to the full specification document?

Thanks
-Murali

On Mon, Dec 22, 2014 at 11:48 AM, Vikram Choudhary 
vikram.choudh...@huawei.com wrote:
Hi Murali,

We have proposed a service function chaining idea using OpenFlow:
https://blueprints.launchpad.net/neutron/+spec/service-function-chaining-using-openflow

Will submit the same for review soon.

Thanks
Vikram

From: yuriy.babe...@telekom.de [mailto:yuriy.babe...@telekom.de]
Sent: 18 December 2014 19:35
To: openstack-dev@lists.openstack.org; stephen.kf.w...@gmail.com
Subject: Re: [openstack-dev] [NFV][Telco] Service VM v/s its basic framework

Hi,
In the IRC meeting yesterday we agreed to work on the use-case for service 
function chaining, as it seems to be important for a lot of participants [1].
We will prepare the first draft and share it in the TelcoWG Wiki for discussion.

There is one OpenStack blueprint on that in [2].


[1] 
http://eavesdrop.openstack.org/meetings/telcowg/2014/telcowg.2014-12-17-14.01.txt
[2] 
https://blueprints.launchpad.net/group-based-policy/+spec/group-based-policy-service-chaining

Kind regards/Mit freundlichen Grüßen
Yuriy Babenko

From: A, Keshava [mailto:keshav...@hp.com]
Sent: Wednesday, 10 December 2014 19:06
To: stephen.kf.w...@gmail.com; OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [NFV][Telco] Service VM v/s its basic framework

Hi Murali,

There are many unknowns w.r.t. ‘Service-VM’ and how it should work from an NFV 
perspective.
In my opinion it was not decided how the Service-VM framework should 

[openstack-dev] #PERSONAL# : Horizon -- File permission error in Horizon

2014-12-23 Thread Swati Shukla1
 Hi All,

I am getting this error when I run Horizon:

horizon.utils.secret_key.FilePermissionError: Insecure key file permissions!

Can you please guide me to debug this?
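
For context, a minimal sketch of the check involved (paraphrasing
horizon/utils/secret_key.py; the key file path below is an assumption and
varies per install) and the usual fix:

    import os
    import stat

    KEY_FILE = '/path/to/horizon/local/.secret_key_store'  # hypothetical path

    # Horizon refuses to use a key file that is readable by group/others.
    mode = stat.S_IMODE(os.stat(KEY_FILE).st_mode)
    if mode != 0o600:
        # This is the condition behind FilePermissionError; fixing it
        # amounts to:
        os.chmod(KEY_FILE, 0o600)  # i.e. chmod 600 .secret_key_store

Also make sure the file is owned by the user the web server runs as.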

Thanks and Regards,
Swati Shukla
Tata Consultancy Services
Mailto: swati.shuk...@tcs.com
Website: http://www.tcs.com



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Heat] cancel the next 2 weekly meetings

2014-12-23 Thread Angus Salkeld
Hi

Let's cancel the next 2 weekly meetings as they neatly fall on
Christmas Eve and New Year's Day.

Happy holidays!

Regards
Angus
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Cross distribution talks on Friday

2014-12-23 Thread Ihar Hrachyshka

On 23/12/14 08:17, Thomas Goirand wrote:
 On 12/19/2014 11:55 PM, Ihar Hrachyshka wrote:
 Note that OSLO_PACKAGE_VERSION is not public.
 
 Well, it used to be public, it has been added and discussed a few 
 years ago because of issues I had with packaging.
 
 Instead, we should use PBR_VERSION:
 
 http://docs.openstack.org/developer/pbr/packagers.html#versioning



 
 I don't mind switching, though it's going to be a slow process 
 (because I'm using OSLO_PACKAGE_VERSION in all packages).
 
 Are we at least *sure* that using OSLO_PACKAGE_VERSION is now 
 deprecated?

I haven't said anyone should go forward and switch all existing build
manifests. ;) I think Doug Hellmann should be able to answer your
question more formally. Adding him to CC.

 
 Thomas
 
 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [qa] host aggregate's availability zone

2014-12-23 Thread Robert Li (baoli)
Hi Danny,

check this link out.
https://wiki.openstack.org/wiki/Scheduler_Filters

Add the following into your /etc/nova/nova.conf before starting the nova 
service.

scheduler_default_filters = RetryFilter, AvailabilityZoneFilter, RamFilter, 
ComputeFilter, ComputeCapabilitiesFilter, ImagePropertiesFilter, 
ServerGroupAntiAffinityFilter, ServerGroupAffinityFilter, AvailabilityZoneFilter

Or, You can do so in your local.conf
[[post-config|$NOVA_CONF]]
[DEFAULT]
pci_alias={"name":"cisco","vendor_id":"8086","product_id":"10ed"}
scheduler_default_filters = RetryFilter, AvailabilityZoneFilter, RamFilter, 
ComputeFilter, ComputeCapabilitiesFilter, ImagePropertiesFilter, 
ServerGroupAntiAffinityFilter, ServerGroupAffinityFilter, AvailabilityZoneFilter


—Robert

On 12/22/14, 9:53 AM, Danny Choi (dannchoi) dannc...@cisco.com wrote:

Hi Joe,

No, I did not.  I’m not aware of this.

Can you tell me exactly what needs to be done?

Thanks,
Danny

--

Date: Sun, 21 Dec 2014 11:42:02 -0600
From: Joe Cropper cropper@gmail.com
To: OpenStack Development Mailing List (not for usage questions)
openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [qa] host aggregate's availability zone
Message-ID: b36d2234-bee0-4c7b-a2b2-a09cc9098...@gmail.com
Content-Type: text/plain; charset=utf-8

Did you enable the AvailabilityZoneFilter in nova.conf that the scheduler uses? 
 And enable the FilterScheduler?  These are two common issues related to this.

- Joe

On Dec 21, 2014, at 10:28 AM, Danny Choi (dannchoi) dannc...@cisco.com wrote:
Hi,
I have a multi-node setup with 2 compute hosts, qa5 and qa6.
I created 2 host aggregates, each with its own availability zone, and assigned 
one compute host to each:
localadmin@qa4:~/devstack$ nova aggregate-details host-aggregate-zone-1
+----+-----------------------+-------------------+-------+--------------------------+
| Id | Name                  | Availability Zone | Hosts | Metadata                 |
+----+-----------------------+-------------------+-------+--------------------------+
| 9  | host-aggregate-zone-1 | az-1              | 'qa5' | 'availability_zone=az-1' |
+----+-----------------------+-------------------+-------+--------------------------+
localadmin@qa4:~/devstack$ nova aggregate-details host-aggregate-zone-2
+----+-----------------------+-------------------+-------+--------------------------+
| Id | Name                  | Availability Zone | Hosts | Metadata                 |
+----+-----------------------+-------------------+-------+--------------------------+
| 10 | host-aggregate-zone-2 | az-2              | 'qa6' | 'availability_zone=az-2' |
+----+-----------------------+-------------------+-------+--------------------------+
My intent is to control at which compute host a VM is launched via the 
host-aggregate's availability-zone parameter.
To test, I specify --availability-zone az-1 for vm-1, and 
--availability-zone az-2 for vm-2:
localadmin@qa4:~/devstack$ nova boot --image cirros-0.3.2-x86_64-uec --flavor 1 
--nic net-id=5da9d715-19fd-47c7-9710-e395b5b90442 --availability-zone az-1 vm-1
+--------------------------------------+----------------+
| Property                             | Value          |
+--------------------------------------+----------------+
| OS-DCF:diskConfig                    | MANUAL         |
| OS-EXT-AZ:availability_zone          | nova           |
| OS-EXT-SRV-ATTR:host                 | -              |
| OS-EXT-SRV-ATTR:hypervisor_hostname  | -              |
| OS-EXT-SRV-ATTR:instance_name        | instance-0066  |
| OS-EXT-STS:power_state               | 0              |
| OS-EXT-STS:task_state                | -              |
| OS-EXT-STS:vm_state                  | building       |
| OS-SRV-USG:launched_at               | -              |
| OS-SRV-USG:terminated_at             | -              |
| accessIPv4                           |                |
| accessIPv6                           |                |
| adminPass                            | kxot3ZBZcBH6   |
| config_drive

Re: [openstack-dev] Cross distribution talks on Friday

2014-12-23 Thread Doug Hellmann

On Dec 23, 2014, at 2:17 AM, Thomas Goirand z...@debian.org wrote:

 On 12/19/2014 11:55 PM, Ihar Hrachyshka wrote:
 Note that OSLO_PACKAGE_VERSION is not public.
 
 Well, it used to be public, it has been added and discussed a few years
 ago because of issues I had with packaging.
 
 Instead, we should use
 PBR_VERSION:
 
 http://docs.openstack.org/developer/pbr/packagers.html#versioning
 
 I don't mind switching, though it's going to be a slow process (because
 I'm using OSLO_PACKAGE_VERSION in all packages).
 
 Are we at least *sure* that using OSLO_PACKAGE_VERSION is now deprecated?

It’s not marked as deprecated [1], but I think we added PBR_VERSION because the 
name OSLO_PACKAGE_VERSION made less sense to someone outside of OpenStack who 
isn’t familiar with the fact that the Oslo program manages pbr.

Doug

[1] http://git.openstack.org/cgit/openstack-dev/pbr/tree/pbr/packaging.py#n641

 
 Thomas
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [qa] host aggregate's availability zone

2014-12-23 Thread Sylvain Bauza


On 23/12/2014 15:42, Robert Li (baoli) wrote:

Hi Danny,

check this link out.
https://wiki.openstack.org/wiki/Scheduler_Filters

Add the following into your /etc/nova/nova.conf before starting the 
nova service.


scheduler_default_filters = RetryFilter, AvailabilityZoneFilter, 
RamFilter, ComputeFilter, ComputeCapabilitiesFilter, 
ImagePropertiesFilter, ServerGroupAntiAffinityFilter, 
ServerGroupAffinityFilter, AvailabilityZoneFilter


Or, You can do so in your local.conf
[[post-config|$NOVA_CONF]]
[DEFAULT]
pci_alias={"name":"cisco","vendor_id":"8086","product_id":"10ed"}
scheduler_default_filters = RetryFilter, AvailabilityZoneFilter, 
RamFilter, ComputeFilter, ComputeCapabilitiesFilter, 
ImagePropertiesFilter, ServerGroupAntiAffinityFilter, 
ServerGroupAffinityFilter, AvailabilityZoneFilter





That's weird because the default value for scheduler_default_filters is:

cfg.ListOpt('scheduler_default_filters',
default=[
  'RetryFilter',
  'AvailabilityZoneFilter',
  'RamFilter',
  'ComputeFilter',
  'ComputeCapabilitiesFilter',
  'ImagePropertiesFilter',
  'ServerGroupAntiAffinityFilter',
  'ServerGroupAffinityFilter',
  ],

The AZ filter is present, so I suspect something is wrong elsewhere.


Could you maybe paste your nova-scheduler log files?

Also, please stop posting to the -dev ML; I think this is more appropriate 
for the openstack@ ML.

We need more details before creating a bug.

-Sylvain



—Robert

On 12/22/14, 9:53 AM, Danny Choi (dannchoi) dannc...@cisco.com wrote:


Hi Joe,

No, I did not.  I’m not aware of this.

Can you tell me exactly what needs to be done?

Thanks,
Danny

--

Date: Sun, 21 Dec 2014 11:42:02 -0600
From: Joe Cropper cropper@gmail.com
To: OpenStack Development Mailing List (not for usage questions)
openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [qa] host aggregate's availability zone
Message-ID: b36d2234-bee0-4c7b-a2b2-a09cc9098...@gmail.com
Content-Type: text/plain; charset=utf-8

Did you enable the AvailabilityZoneFilter in nova.conf that the
scheduler uses?  And enable the FilterScheduler?  These are two
common issues related to this.

- Joe

On Dec 21, 2014, at 10:28 AM, Danny Choi (dannchoi)
dannc...@cisco.com wrote:
Hi,
I have a multi-node setup with 2 compute hosts, qa5 and qa6.
I created 2 host aggregates, each with its own availability
zone, and assigned one compute host to each:
localadmin@qa4:~/devstack$ nova aggregate-details host-aggregate-zone-1
+----+-----------------------+-------------------+-------+--------------------------+
| Id | Name                  | Availability Zone | Hosts | Metadata                 |
+----+-----------------------+-------------------+-------+--------------------------+
| 9  | host-aggregate-zone-1 | az-1              | 'qa5' | 'availability_zone=az-1' |
+----+-----------------------+-------------------+-------+--------------------------+
localadmin@qa4:~/devstack$ nova aggregate-details host-aggregate-zone-2
+----+-----------------------+-------------------+-------+--------------------------+
| Id | Name                  | Availability Zone | Hosts | Metadata                 |
+----+-----------------------+-------------------+-------+--------------------------+
| 10 | host-aggregate-zone-2 | az-2              | 'qa6' | 'availability_zone=az-2' |
+----+-----------------------+-------------------+-------+--------------------------+
My intent is to control at which compute host a VM is launched
via the host-aggregate's availability-zone parameter.
To test, I specify --availability-zone az-1 for vm-1, and
--availability-zone az-2 for vm-2:
localadmin@qa4:~/devstack$ nova boot --image cirros-0.3.2-x86_64-uec
--flavor 1 --nic net-id=5da9d715-19fd-47c7-9710-e395b5b90442
--availability-zone az-1 vm-1
+--------------------------------------+----------------+
| Property                             | Value          |
+--------------------------------------+----------------+
| OS-DCF:diskConfig                    | MANUAL         |
| OS-EXT-AZ:availability_zone          | nova           |
| OS-EXT-SRV-ATTR:host                 | -

Re: [openstack-dev] [nova][libvirt]How to customize cpu features in nova

2014-12-23 Thread Steve Gordon
- Original Message -
 From: CloudBeyond cloudbey...@gmail.com
 To: openstack-dev@lists.openstack.org
 
 Dear Developers,
 
 Sorry for interrupting if I sent this to the wrong email group, but I have a
 problem running Solaris 10 on Icehouse OpenStack.
 I found that the CPU feature x2apic needs to be disabled so that the Solaris 10
 NIC can work in KVM, as in the following snippet from libvirt.xml:
 
   <cpu mode='custom' match='exact'>
     <model>SandyBridge</model>
     <vendor>Intel</vendor>
     <feature policy='disable' name='x2apic'/>
   </cpu>
 
 Without the line
   <feature policy='disable' name='x2apic'/>
 the NIC in Solaris does not work.
 
 I then tried to migrate the KVM libvirt XML to Nova, and found only two options
 to control the result.
 
 First I used the default setting cpu_mode = None in nova.conf; Solaris 10
 kept rebooting before entering the desktop environment.
 
 Then I set cpu_mode = custom, cpu_model = SandyBridge. Solaris 10 could
 start up, but the NIC did not work.
 
 I also set cpu_mode = host-model, cpu_model = None. Solaris 10 could work,
 but the NIC still did not.
 
 I read the code located in
 /usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py. Is it possible
 to do some hacking to customize the CPU features?

It's possible, though as you note it requires modification of the driver. If 
you want to do this in a way that is compatible with other efforts to handle 
guest-OS-specific customizations, you might want to review this proposal:

https://review.openstack.org/#/c/133945/4

-Steve

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] No meetings on Christmas or New Year's Days

2014-12-23 Thread Gary Kotton
Two reasons to celebrate. It's Elvis's birthday!

On 12/22/14, 10:46 PM, Carl Baldwin c...@ecbaldwin.net wrote:

The L3 sub team meeting [1] will not be held until the 8th of January,
2015.  Enjoy your time off.  I will try to move some of the
refactoring patches along as I can but will be down to minimal hours.

Carl

[1] https://wiki.openstack.org/wiki/Meetings/Neutron-L3-Subteam

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




Re: [openstack-dev] [OpenStack-Infra] [ThirdPartyCI] Need help setting up CI

2014-12-23 Thread Asselin, Ramy
You should use 14.04 for the slave. The limitation for using 12.04 is only for 
the master since zuul’s apache configuration is WIP on 14.04 [1], and zuul does 
not run on the slave.
Ramy
[1] https://review.openstack.org/#/c/141518/
From: Punith S [mailto:punit...@cloudbyte.com]
Sent: Monday, December 22, 2014 11:37 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [OpenStack-Infra] [ThirdPartyCI] Need help setting 
up CI

Hi Asselin,

I'm following your README at https://github.com/rasselin/os-ext-testing
for setting up our CloudByte CI on two Ubuntu 12.04 VMs (master and slave).

So far the scripts and setup have gone fine, as described in the document.

Now both master and slave are connected successfully, but in order to run
the tempest integration tests against our proposed CloudByte cinder driver for
Kilo, we need to have devstack installed on the slave (in my understanding).

But when installing the master devstack on 12.04 I get permission issues
executing ./stack.sh, since master devstack requires Ubuntu 14.04 or 13.10. On
the other hand, running install_slave.sh fails on 13.10 due to a puppet
modules-not-found error.

Is there a way to get this to work?

Thanks in advance.

On Mon, Dec 22, 2014 at 11:10 PM, Asselin, Ramy ramy.asse...@hp.com wrote:
Eduard,

A few items you can try:

1. Double-check that the job is in Jenkins
   a. If not, then that's the issue
2. Check that the processes are running correctly
   a. ps -ef | grep zuul
      i. Should have 2 zuul-server & 1 zuul-merger
   b. ps -ef | grep jenkins
      i. Should have 1 /usr/bin/daemon --name=jenkins & 1 /usr/bin/java
3. In Jenkins: Manage Jenkins, Gearman Plugin Config, “Test Connection”
4. Stop Zuul & Jenkins, then start Zuul & Jenkins:
   a. service jenkins stop
   b. service zuul stop
   c. service zuul-merger stop
   d. service jenkins start
   e. service zuul start
   f. service zuul-merger start

Otherwise, I suggest you ask in #openstack-infra irc channel.

Ramy

From: Eduard Matei [mailto:eduard.ma...@cloudfounders.com]
Sent: Sunday, December 21, 2014 11:01 PM

To: Asselin, Ramy
Cc: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [OpenStack-Infra] [ThirdPartyCI] Need help setting 
up CI

Thanks Ramy,

Unfortunately I don't see dsvm-tempest-full in the status output.
Any idea how i can get it registered?

Thanks,
Eduard

On Fri, Dec 19, 2014 at 9:43 PM, Asselin, Ramy ramy.asse...@hp.com wrote:
Eduard,

If you run this command, you can see which jobs are registered:

telnet localhost 4730
status

There are 3 numbers per job: queued, running, and workers that can run the job.
Make sure the job is listed & the last number ('workers') is non-zero.
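
If you prefer scripting the check, here is a small sketch that speaks the
gearman admin protocol directly (assuming the default admin port 4730):

    import socket

    def gearman_status(host='localhost', port=4730):
        # Equivalent of "telnet localhost 4730" + "status": the admin
        # protocol returns one line per function, terminated by a "." line.
        sock = socket.create_connection((host, port))
        sock.sendall(b'status\n')
        data = b''
        while not data.endswith(b'.\n'):
            data += sock.recv(4096)
        sock.close()
        for line in data.decode().splitlines():
            if line == '.':
                break
            # each line: <function>\t<queued>\t<running>\t<workers>
            name, queued, running, workers = line.split('\t')
            print('%-50s queued=%s running=%s workers=%s'
                  % (name, queued, running, workers))

    gearman_status()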

To run the job again without submitting a patch set, leave a “recheck” comment 
on the patch & make sure your zuul layout.yaml is configured to trigger off 
that comment. For example [1].
Be sure to use the sandbox repository. [2]
I’m not aware of other ways.

Ramy

[1] 
https://github.com/openstack-infra/project-config/blob/master/zuul/layout.yaml#L20
[2] https://github.com/openstack-dev/sandbox




From: Eduard Matei [mailto:eduard.ma...@cloudfounders.com]
Sent: Friday, December 19, 2014 3:36 AM
To: Asselin, Ramy
Cc: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [OpenStack-Infra] [ThirdPartyCI] Need help setting 
up CI

Hi all,
After a little struggle with the config scripts I managed to get a working 
setup that is able to process openstack-dev/sandbox and run 
noop-check-communication.

Then I tried enabling the dsvm-tempest-full job, but it keeps returning 
NOT_REGISTERED:

2014-12-19 12:07:14,683 INFO zuul.IndependentPipelineManager: Change <Change 
0x7fe5ec029b50 139585,9> depends on changes []
2014-12-19 12:07:14,683 INFO zuul.Gearman: Launch job noop-check-communication 
for change <Change 0x7fe5ec029b50 139585,9> with dependent changes []
2014-12-19 12:07:14,693 INFO zuul.Gearman: Launch job dsvm-tempest-full for 
change <Change 0x7fe5ec029b50 139585,9> with dependent changes []
2014-12-19 12:07:14,694 ERROR zuul.Gearman: Job <gear.Job 0x7fe5ec2e2f10 
handle: None name: build:dsvm-tempest-full unique: 
a9199d304d1140a8bf4448dfb1ae42c1> is not registered with Gearman
2014-12-19 12:07:14,694 INFO zuul.Gearman: Build <gear.Job 0x7fe5ec2e2f10 
handle: None name: build:dsvm-tempest-full unique: 
a9199d304d1140a8bf4448dfb1ae42c1> complete, result NOT_REGISTERED
2014-12-19 12:07:14,765 INFO zuul.Gearman: Build <gear.Job 0x7fe5ec2e2d10 
handle: H:127.0.0.1:2 name: build:noop-check-communication 
unique: 333c6ea077324a788e3c37a313d872c5> started
2014-12-19 

Re: [openstack-dev] [nova] Evacuate instance which in server group with affinity policy

2014-12-23 Thread Alex Xu
2014-12-22 21:50 GMT+08:00 Sylvain Bauza sba...@redhat.com:


 Le 22/12/2014 13:37, Alex Xu a écrit :



 2014-12-22 10:36 GMT+08:00 Lingxian Kong anlin.k...@gmail.com:

 2014-12-22 9:21 GMT+08:00 Alex Xu sou...@gmail.com:
 
 
  2014-12-22 9:01 GMT+08:00 Lingxian Kong anlin.k...@gmail.com:
 

 
   but what if the compute node is back to normal? There will be
   instances in the same server group with affinity policy, but located
   on different hosts.
 
 
   If the operator decides to evacuate the instance from the failed host,
   we should fence the failed host first.

  Yes, actually. I mean the recommendation or prerequisite should be
  emphasized somewhere, e.g. the Operations Guide; otherwise it'll make
  things more confusing. But the issue you are working around is indeed a
  problem we should solve.


  Yea, you are right, we should document it if we think this makes sense. Thanks!



  As I said, I'm not in favor of adding more complexity to the instance
  group setup that is done in the conductor, for basic race condition reasons.


Emm... is there any way we can resolve it for now?



 If I understand correctly, the problem is when there is only one host for
 all the instances belonging to a group with affinity filter and this host
 is down, then the filter will deny any other host and consequently the
 request will fail while it should succeed.


Yes, you understand correctly. Thanks for explaining that; that's good for
other people to understand what we are talking about.



  Is this really a problem? I mean, it appears to me that's normal
  behaviour, because a filter is by definition a *hard* policy.


Yeah, it isn't a problem for the normal case, but it is a problem for VM HA.
So I want to ask whether we should tell users that if you use a *hard*
policy, you lose VM HA. If we choose that, maybe we should document it
somewhere to notify users. But if users can have a *hard* policy and VM HA
at the same time and we don't break anything (except slightly more complex
code), that sounds good for users.



  So, provided you would like to implement *soft* policies, that sounds more
  like a *weigher* that you would like to have: i.e. make sure that hosts
  running existing instances in the group are weighted more than other ones
  so they'll be chosen every time, but in case they're down, allow the
  scheduler to pick other hosts.


Yes, a soft policy doesn't have this problem.
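
A rough sketch of the kind of soft-affinity weigher you describe (not an
existing Nova class; it assumes the BaseHostWeigher interface and the
'group_hosts' entry the scheduler puts into the weighing properties):

    from nova.scheduler import weights

    class SoftAffinityWeigher(weights.BaseHostWeigher):
        # Prefer hosts already running members of the same server group,
        # but never exclude other hosts, so evacuation can still succeed.

        def _weigh_object(self, host_state, weight_properties):
            group_hosts = weight_properties.get('group_hosts') or []
            # Higher weight for hosts that already run group members; any
            # other host remains eligible with a lower weight.
            return 1.0 if host_state.host in group_hosts else 0.0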



 HTH,
 -Sylvain




 --
 Regards!
 ---
 Lingxian Kong

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev







 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Hyper-V meeting

2014-12-23 Thread Peter Pouliot
Hi All,

With the pending holidays we have a lack of quorum for today.  We'll resume 
meetings after the New Year.

p

Peter J. Pouliot CISSP
Microsoft Enterprise Cloud Solutions
C:\OpenStack
New England Research & Development Center
1 Memorial Drive
Cambridge, MA 02142
P: 1.(857).4536436
E: ppoul...@microsoft.commailto:ppoul...@microsoft.com

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Ironic] Cancelling next week's meeting

2014-12-23 Thread Devananda van der Veen
With the winter break coming up (or already here, for some folks) I am
cancelling next week's meeting on Dec 29.

I had not cancelled last night's meeting ahead of time, but very few people
attended, and with so few core reviewers present there wasn't much we could
get done. We did not have a formal meeting, and just hung out in channel
for about 15 minutes.

This means our next meeting will be Jan 6th at 0500 UTC (Jan 5th at 9pm US
west coast).

See you all again after the break!

Best,
Devananda
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Keystone] Bug in federation

2014-12-23 Thread David Chadwick
Hi guys

we now have the ABFAB federation protocol working with Keystone, using a
modified mod_auth_kerb plugin for Apache (available from the project
Moonshot web site). However, we did not change Keystone configuration
from its original SAML federation configuration, when it was talking to
SAML IDPs, using mod_shibboleth. Neither did we modify the Keystone code
(which I believe had to be done for OpenID connect.) We simply replaced
mod_shibboleth with mod_auth_kerb and talked to a completely different
IDP with a different protocol. And everything worked just fine.

Consequently Keystone is broken, since you can configure it to trust a
particular IDP, talking a particular protocol, but Apache will happily
talk to another IDP, using a different protocol, and Keystone cannot
tell the difference and will happily accept the authenticated user.
Keystone should reject any authenticated user who does not come from the
trusted IDP talking the correct protocol. Otherwise there is no point in
configuring Keystone with this information, if it is ignored by Keystone.

BTW, we are using the Juno release. We should fix this bug in Kilo.

As I have been saying for many months, Keystone does not know anything
about SAML or ABFAB or OpenID Connect protocols, so there is currently
no point in configuring this information into Keystone. Keystone is only
aware of environmental parameters coming from Apache. So this is the
protocol that Keystone recognises. If you want Keystone to try to
control the federation protocol and IDPs used by Apache, then you will
need the Apache plugins to pass the name of the IDP and the protocol
being used as environmental parameters to Keystone, and then Keystone
can check that the ones that it has been configured to trust, are
actually being used by Apache.
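
To sketch the idea (the variable names are illustrative assumptions, not an
existing Keystone API), the check could be as simple as:

    def assert_trusted_federation(environ, trusted_idps, expected_protocol):
        # Compare what Apache actually used against what Keystone was
        # configured to trust. 'Shib-Identity-Provider' is set by mod_shib;
        # the protocol variable below is an assumed name.
        idp = environ.get('Shib-Identity-Provider')
        protocol = environ.get('FEDERATION_PROTOCOL')
        if idp not in trusted_idps:
            raise Exception('Untrusted IdP: %r' % idp)
        if protocol != expected_protocol:
            raise Exception('Unexpected federation protocol: %r' % protocol)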

regards

David

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Hierarchical Multitenancy

2014-12-23 Thread Joe Gordon
On Dec 23, 2014 12:26 AM, Tim Bell tim.b...@cern.ch wrote:



 It would be great if we can get approval for the Hierarchical Quota
handling in Nova too (https://review.openstack.org/#/c/129420/).

Nova's spec deadline has passed, but I think this is a good candidate for
an exception.  We will announce the process for asking for a formal spec
exception shortly after new years.




 Tim



 From: Morgan Fainberg [mailto:morgan.fainb...@gmail.com]
 Sent: 23 December 2014 01:22
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] Hierarchical Multitenancy



 Hi Raildo,



 Thanks for putting this post together. I really appreciate all the work
you guys have done (and continue to do) to get the Hierarchical
Multitenancy code into Keystone. It’s great to have the base implementation
merged into Keystone for the K1 milestone. I look forward to seeing the
rest of the development land during the rest of this cycle and what the
other OpenStack projects build around the HMT functionality.



 Cheers,

 Morgan







 On Dec 22, 2014, at 1:49 PM, Raildo Mascena rail...@gmail.com wrote:



 Hello folks, My team and I developed the Hierarchical Multitenancy
concept for Keystone in Kilo-1. But what is Hierarchical Multitenancy? What
have we implemented? What are the next steps for Kilo?

 To answer these questions, I created a blog post:
http://raildo.me/hierarchical-multitenancy-in-openstack/



 If you have any questions, I'm available.



 --

 Raildo Mascena

 Software Engineer.

 Bachelor of Computer Science.

 Distributed Systems Laboratory
 Federal University of Campina Grande
 Campina Grande, PB - Brazil



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Murano] Murano Agent

2014-12-23 Thread Timur Nurlygayanov
Hi,

The Murano python client allows you to work with the Murano API from the
command line (instead of the Web UI).

To install Murano python client on Ubuntu you can execute the following
commands:

apt-get update
apt-get install python-pip
pip install -U pip setuptools
pip install python-muranoclient

On Tue, Dec 16, 2014 at 1:39 PM, raghavendra@accenture.com wrote:





  Hi Team,



 I am installing Murano on the Ubuntu 14.04 Juno setup and would like to
 know what components need to be installed in a separate VM for Murano
 agent.

 Please let me know why Murano-agent is required and the components that
 need to be installed in it.



 Warm Regards,

 *Raghavendra Lad*






 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 

Timur,
Senior QA Engineer
OpenStack Projects
Mirantis Inc

My OpenStack summit schedule:
http://kilodesignsummit.sched.org/timur.nurlygayanov#.VFSrD8mhhOI
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Fuel][Docs] Move fuel-web/docs to fuel-docs

2014-12-23 Thread Aleksandra Fedorova
Blueprint 
https://blueprints.launchpad.net/fuel/+spec/fuel-dev-docs-merge-fuel-docs
suggests us to move all documentation from fuel-web to fuel-docs
repository.

While I agree that moving Developer Guide to fuel-docs is a good idea,
there is an issue with autodocs which currently blocks the whole
process.

If we move the dev docs to fuel-docs as suggested by Christopher in [1], we
will make it impossible to build fuel-docs without cloning the fuel-web
repository and installing all nailgun dependencies into the current
environment. And this is bad from both the CI and the user point of view.

I think we should keep fuel-docs repository self-contained, i.e. one
should be able to build docs without any external code. We can add a
switch or separate make target to build 'addons' to this documentation
when explicitly requested, but it shouldn't be default behaviour.

Thus I think we need to split the documentation in the fuel-web repository
and move the static part to fuel-docs, but keep the dynamic
auto-generated part in the fuel-web repo. See patch [2].

Then, to move docs from fuel-web to fuel-docs, we need to perform the following steps:

1) Merge/abandon all docs-related patches to fuel-web, see full list [3]
2) Merge updated patch [2] which removes docs from fuel-web repo,
leaving autogenerated api docs only.
3) Disable docs CI for fuel-web
4) Add building of api docs to fuel-web/run_tests.sh.
5) Update fuel-docs repository with new data as in patch [4] but
excluding anything related to autodocs.
6) Implement additional make target in fuel-docs to download and build
autodocs from fuel-web repo as a separate chapter.
7) Add this make target in fuel-docs CI.


[1] https://review.openstack.org/#/c/124551/
[2] https://review.openstack.org/#/c/143679/
[3] 
https://review.openstack.org/#/q/project:stackforge/fuel-web+status:open+file:%255Edoc.*,n,z
[4] https://review.openstack.org/#/c/125234/

-- 
Aleksandra Fedorova
Fuel Devops Engineer
bookwar

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Keystone] Bug in federation

2014-12-23 Thread Adam Young

On 12/23/2014 11:34 AM, David Chadwick wrote:

Hi guys

we now have the ABFAB federation protocol working with Keystone, using a
modified mod_auth_kerb plugin for Apache (available from the project
Moonshot web site). However, we did not change Keystone configuration
from its original SAML federation configuration, when it was talking to
SAML IDPs, using mod_shibboleth. Neither did we modify the Keystone code
(which I believe had to be done for OpenID connect.) We simply replaced
mod_shibboleth with mod_auth_kerb and talked to a completely different
IDP with a different protocol. And everything worked just fine.

Consequently Keystone is broken, since you can configure it to trust a
particular IDP, talking a particular protocol, but Apache will happily
talk to another IDP, using a different protocol, and Keystone cannot
tell the difference and will happily accept the authenticated user.
Keystone should reject any authenticated user who does not come from the
trusted IDP talking the correct protocol. Otherwise there is no point in
configuring Keystone with this information, if it is ignored by Keystone.

The IDP and the protocol should be passed from HTTPD in env vars. Can
you confirm/deny that this is the case now?


On the Apache side we are looking to expand the set of variables set.
http://www.freeipa.org/page/Environment_Variables#Proposed_Additional_Variables


mod_shib does support Shib-Identity-Provider :


https://wiki.shibboleth.net/confluence/display/SHIB2/NativeSPAttributeAccess#NativeSPAttributeAccess-CustomSPVariables

Which should be sufficient: if the user is coming in via mod_shib, they 
are using SAML.






BTW, we are using the Juno release. We should fix this bug in Kilo.

As I have been saying for many months, Keystone does not know anything
about SAML or ABFAB or OpenID Connect protocols, so there is currently
no point in configuring this information into Keystone. Keystone is only
aware of environmental parameters coming from Apache. So this is the
protocol that Keystone recognises. If you want Keystone to try to
control the federation protocol and IDPs used by Apache, then you will
need the Apache plugins to pass the name of the IDP and the protocol
being used as environmental parameters to Keystone, and then Keystone
can check that the ones that it has been configured to trust, are
actually being used by Apache.

regards

David

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Hierarchical Multitenancy

2014-12-23 Thread Tim Bell
Joe,

Thanks… there seems to be good agreement on the spec, and the matching 
implementation is well advanced with BARC, so the risk is not too high.

Launching HMT with quota in Nova in the same release cycle would also provide a 
more complete end user experience.

For CERN, this functionality is very interesting as it allows the central cloud 
providers to delegate the allocation of quotas to the LHC experiments. Thus, 
from a central perspective, we are able to allocate N thousand cores to an 
experiment and delegate to their resource co-ordinator the prioritisation of 
work within the experiment. Currently, we have many manual helpdesk tickets 
with significant latency to adjust the quotas.
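
To illustrate the delegation model (a toy sketch with made-up numbers, not the
proposed Nova implementation):

    # The parent quota bounds the sum of its children's quotas, so a
    # delegated coordinator can redistribute cores without central
    # intervention.
    quotas = {'cern': 10000, 'cern/atlas': 4000, 'cern/cms': 4000}

    def can_grant(parent, child, new_child_quota):
        # Allow raising a child's quota only while the parent's total
        # quota still covers the sum of all child quotas after the change.
        siblings = [p for p in quotas
                    if p.startswith(parent + '/') and p != child]
        allocated = sum(quotas[s] for s in siblings) + new_child_quota
        return allocated <= quotas[parent]

    print(can_grant('cern', 'cern/atlas', 5000))  # True:  4000 + 5000 <= 10000
    print(can_grant('cern', 'cern/atlas', 7000))  # False: 4000 + 7000 >  10000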

Tim

From: Joe Gordon [mailto:joe.gord...@gmail.com]
Sent: 23 December 2014 17:35
To: OpenStack Development Mailing List
Subject: Re: [openstack-dev] Hierarchical Multitenancy


On Dec 23, 2014 12:26 AM, Tim Bell tim.b...@cern.ch wrote:



 It would be great if we can get approval for the Hierarchical Quota handling 
 in Nova too (https://review.openstack.org/#/c/129420/).

Nova's spec deadline has passed, but I think this is a good candidate for an 
exception.  We will announce the process for asking for a formal spec exception 
shortly after new years.




 Tim



 From: Morgan Fainberg [mailto:morgan.fainb...@gmail.com]
 Sent: 23 December 2014 01:22
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] Hierarchical Multitenancy



 Hi Raildo,



 Thanks for putting this post together. I really appreciate all the work you 
 guys have done (and continue to do) to get the Hierarchical Multitenancy code 
 into Keystone. It’s great to have the base implementation merged into 
 Keystone for the K1 milestone. I look forward to seeing the rest of the 
 development land during the rest of this cycle and what the other OpenStack 
 projects build around the HMT functionality.



 Cheers,

 Morgan







 On Dec 22, 2014, at 1:49 PM, Raildo Mascena rail...@gmail.com wrote:



 Hello folks, My team and I developed the Hierarchical Multitenancy concept 
 for Keystone in Kilo-1. But what is Hierarchical Multitenancy? What have we 
 implemented? What are the next steps for Kilo?

 To answer these questions, I created a blog post: 
 http://raildo.me/hierarchical-multitenancy-in-openstack/



 If you have any questions, I'm available.



 --

 Raildo Mascena

 Software Engineer.

 Bachelor of Computer Science.

 Distributed Systems Laboratory
 Federal University of Campina Grande
 Campina Grande, PB - Brazil



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [3rd-party-ci] Cinder CI and CI accounts

2014-12-23 Thread Alon Marx
Hi All,

In IBM we have several cinder drivers, with a number of CI accounts. In 
order to improve the CI management and maintenance, we decided to build a 
single Jenkins master that will run several jobs for the drivers we own. 
Adding the jobs to the jenkins master went ok, but we encountered a 
problem with the CI accounts. We have several drivers and several 
accounts, but in the Jenkins master, the Zuul configuration has only one 
gerrit account that reports.

So there are several questions:
1. Was this problem encountered by others? How did they solve it?
2. Is there a way to configure Zuul on the Jenkins master to report 
different jobs with different CI accounts?
3. If there is no way to configure the master to use several CI accounts, 
should we build a Jenkins master per driver? 
4. Or another alternative, should we use a single CI account for all 
drivers we own, and report all results under that account?

We'll appreciate any input.

Thanks,
Alon
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Keystone] Bug in federation

2014-12-23 Thread David Chadwick
Hi Adam

On 23/12/2014 17:34, Adam Young wrote:
 On 12/23/2014 11:34 AM, David Chadwick wrote:
 Hi guys

 we now have the ABFAB federation protocol working with Keystone, using a
 modified mod_auth_kerb plugin for Apache (available from the project
 Moonshot web site). However, we did not change Keystone configuration
 from its original SAML federation configuration, when it was talking to
 SAML IDPs, using mod_shibboleth. Neither did we modify the Keystone code
 (which I believe had to be done for OpenID connect.) We simply replaced
 mod_shibboleth with mod_auth_kerb and talked to a completely different
 IDP with a different protocol. And everything worked just fine.

 Consequently Keystone is broken, since you can configure it to trust a
 particular IDP, talking a particular protocol, but Apache will happily
 talk to another IDP, using a different protocol, and Keystone cannot
 tell the difference and will happily accept the authenticated user.
 Keystone should reject any authenticated user who does not come from the
 trusted IDP talking the correct protocol. Otherwise there is no point in
 configuring Keystone with this information, if it is ignored by Keystone.
 The IDP and the Protocol should be passed from HTTPD in env vars. Can
 you confirm/deny that this is the case now?

What is passed from Apache is the 'PATH_INFO' variable, and it is set to
the URL of Keystone that is being protected, which in our case is
/OS-FEDERATION/identity_providers/KentProxy/protocols/saml2/auth

There are also the following arguments passed to Keystone
'wsgiorg.routing_args': (<routes.util.URLGenerator object at
0x7ffaba339190>, {'identity_provider': u'KentProxy', 'protocol': u'saml2'})

and

'PATH_TRANSLATED':
'/var/www/keystone/main/v3/OS-FEDERATION/identity_providers/KentProxy/protocols/saml2/auth'

So Apache is telling Keystone that it has protected the URL that
Keystone has configured to be protected.

However, Apache has been configured to protect this URL with the ABFAB
protocol and the local Radius server, rather than the KentProxy IdP and
the SAML2 protocol. So we could say that Apache is lying to Keystone,
and because Keystone trusts Apache, then Keystone trusts Apache's lies
and wrongly thinks that the correct IDP and protocol were used.

The only sure way to protect Keystone from a wrongly or mal-configured
Apache is to have end to end security, where Keystone gets a token from
the IDP that it can validate, to prove that it is the trusted IDP that
it is talking to. In other words, if Keystone is given the original
signed SAML assertion from the IDP, it will know for definite that the
user was authenticated by the trusted IDP using the trusted protocol

regards

David

 
 On the Apache side we are looking to expand the set of variables set.
 http://www.freeipa.org/page/Environment_Variables#Proposed_Additional_Variables
 
 

 
 mod_shib does support Shib-Identity-Provider :
 
 
 https://wiki.shibboleth.net/confluence/display/SHIB2/NativeSPAttributeAccess#NativeSPAttributeAccess-CustomSPVariables
 
 
 Which should be sufficient: if the user is coming in via mod_shib, they
 are using SAML.
 
 
 

 BTW, we are using the Juno release. We should fix this bug in Kilo.

 As I have been saying for many months, Keystone does not know anything
 about SAML or ABFAB or OpenID Connect protocols, so there is currently
 no point in configuring this information into Keystone. Keystone is only
 aware of environmental parameters coming from Apache. So this is the
 protocol that Keystone recognises. If you want Keystone to try to
 control the federation protocol and IDPs used by Apache, then you will
 need the Apache plugins to pass the name of the IDP and the protocol
 being used as environmental parameters to Keystone, and then Keystone
 can check that the ones that it has been configured to trust, are
 actually being used by Apache.

 regards

 David

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Congress] simulate examples

2014-12-23 Thread Tim Hinrichs
Here's a description.  We need to get this added to the docs.


Below is a full description of how you might utilize the Action-centric version 
of simulate.  The idea is that if you describe the effects that an 
action/API-call will have on the basic tables of nova/neutron/etc. (below 
called an Action Description policy) then you can ask Congress to simulate the 
execution of that action and answer a query in the resulting state.  The only 
downside to the action-centric application of simulate is writing the Action 
Policy for all of the actions you care about.

The other way to utilize simulate is to give it the changes in 
nova/neutron/etc. directly that you’d like to make.  That is, instead of an 
action, you’ll tell simulate what rows should be inserted and which ones should 
be deleted.  An insertion is denoted with a plus (+) and deletion is denoted 
with a minus (-).

For example, to compute all the errors after

  1.  inserting a row into the nova:servers table with ID uuid1, 2TB of disk, 
and 10GB of memory (this isn’t the actual schema BTW) and
  2.  deleting the row from neutron:security_groups with the ID “uuid2” and 
name “alice_default_group” (again not the real schema),

you’d write something like the following.

openstack congress policy simulate classification 'error(x)' 
'nova:servers+("uuid1", "2TB", "10 GB") neutron:security_groups-("uuid2", 
"alice_default_group")' action

But I’d suggest reading the following to see some of the options.

=
1. CREATE ACTION DESCRIPTION POLICY
=

Suppose the table 'p' is a collection of key-value pairs:  p(key, value).

Suppose we have a single action 'set(key, newvalue)' that changes the existing 
value of 'key' to 'newvalue', or sets the value of 'key' to 'newvalue' if 'key' 
was not already assigned.  We can describe the effects of 'set' using the 
following 3 Datalog rules.

p+(x,y) :- set(x,y)
p-(x,oldy) :- set(x,y), p(x,oldy)
action(set)

The first thing we do is add each of these 3 rules to the policy named 'action'.

$ openstack congress policy rule create action 'p+(x,y) :- set(x,y)'
$ openstack congress policy rule create action 'p-(x,oldy) :- set(x,y), 
p(x,oldy)'
$ openstack congress policy rule create action 'action(set)'


=
2. ADD SOME KEY/VALUE PAIRS FOR TESTING
=

Here we’ll populate the ‘classification’ policy with a few key/value pairs.

$ openstack congress policy rule create classification 'p(101, 0)'
$ openstack congress policy rule create classification 'p(202, abc)'
$ openstack congress policy rule create classification 'p(302, 9)'


==
3. DEFINE POLICY
==

There's an error if a key's value is 9.

$ openstack congress policy rule create classification 'error(x) :- p(x, 9)'
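
To make these rules concrete before running any queries, here is a rough
Python sketch (illustrative only; this is not Congress internals) of what
the 'set' action rules and the error rule compute over the p table:

def apply_set(p_rows, key, newvalue):
    # p-(x, oldy) :- set(x, y), p(x, oldy): retract the old row for 'key'
    result = {(k, v) for (k, v) in p_rows if k != key}
    # p+(x, y) :- set(x, y): assert the new row
    result.add((key, newvalue))
    return result

def errors(p_rows):
    # error(x) :- p(x, 9)
    return {k for (k, v) in p_rows if v == 9}

p = {(101, 0), (202, 'abc'), (302, 9)}
simulated = apply_set(p, 101, 9)
print(errors(simulated))              # errors 101 and 302, as in example (c) below
print(errors(simulated) - errors(p))  # the delta: {101}, i.e. error+(101), as in (d)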


===
4. RUN SIMULATION QUERIES
===

Each of the following is an example of a simulation query you might want to run.

a) Simulate changing the value of key 101 to 5 and query the contents of p.

$ openstack congress policy simulate classification 'p(x,y)' 'set(101, 5)' 
action
p(101, 5)
p(202, abc)
p(302, 9)


b) Simulate changing the value of key 101 to 5 and query the error table

$ openstack congress policy simulate classification 'error(x)' 'set(101, 5)' 
action
error(302)


c) Simulate changing the value of key 101 to 9 and query the error table.

$ openstack congress policy simulate classification 'error(x)' 'set(101, 9)' 
action
error(302)
error(101)


d) Simulate changing the value of key 101 to 9 and query the *change* in the 
error table.

$ openstack congress policy simulate classification 'error(x)' 'set(101, 9)' 
action --delta
error+(101)


e) Simulate changing 101:9, 202:9, 302:1 and query the *change* in the error 
table.

$ openstack congress policy simulate classification 'error(x)' 'set(101, 9) 
set(202, 9) set(302, 1)' action --delta
error+(202)
error+(101)
error-(302)


f) Simulate changing 101:9, 202:9, 302:1, and finally 101:15 (in that order).  
Then query the *change* in the error table.

$ openstack congress policy simulate classification 'error(x)' 'set(101, 9) 
set(202, 9) set(302, 1) set(101, 15)' action --delta
error+(202)
error-(302)


g) Simulate changing 101:9 and query the *change* in the error table, while 
asking for a debug trace of the computation.

$ openstack congress policy simulate classification 'error(x)' 'set(101, 9)' 
action --delta --trace

error+(101)
RT: ** Simulate: Querying error(x)
Clas  : Call: error(x)
Clas  : | Call: p(x, 9)
Clas  : | Exit: p(302, 9)
Clas  : Exit: error(302)
Clas  : Redo: error(302)
Clas  : | Redo: p(302, 9)
Clas  : | Fail: p(x, 9)
Clas  : Fail: error(x)
Clas  : Found answer [error(302)]
RT: Original result of error(x) is [error(302)]
RT: ** Simulate: Applying sequence [set(101, 9)]
Action: Call: action(x)
...

Tim





Re: [openstack-dev] [Keystone] Bug in federation

2014-12-23 Thread Dolph Mathews
On Tue, Dec 23, 2014 at 1:33 PM, David Chadwick d.w.chadw...@kent.ac.uk
wrote:

 Hi Adam

 On 23/12/2014 17:34, Adam Young wrote:
  On 12/23/2014 11:34 AM, David Chadwick wrote:
  Hi guys
 
  we now have the ABFAB federation protocol working with Keystone, using a
  modified mod_auth_kerb plugin for Apache (available from the project
  Moonshot web site). However, we did not change Keystone configuration
  from its original SAML federation configuration, when it was talking to
  SAML IDPs, using mod_shibboleth. Neither did we modify the Keystone code
  (which I believe had to be done for OpenID connect.) We simply replaced
  mod_shibboleth with mod_auth_kerb and talked to a completely different
  IDP with a different protocol. And everything worked just fine.
 
  Consequently Keystone is broken, since you can configure it to trust a
  particular IDP, talking a particular protocol, but Apache will happily
  talk to another IDP, using a different protocol, and Keystone cannot
  tell the difference and will happily accept the authenticated user.
  Keystone should reject any authenticated user who does not come from the
  trusted IDP talking the correct protocol. Otherwise there is no point in
  configuring Keystone with this information, if it is ignored by
 Keystone.
  The IDP and the Protocol should be passed from HTTPD in env vars. Can
  you confirm/deny that this is the case now?

 What is passed from Apache is the 'PATH_INFO' variable, and it is set to
 the URL of Keystone that is being protected, which in our case is
 /OS-FEDERATION/identity_providers/KentProxy/protocols/saml2/auth

 There are also the following arguments passed to Keystone
 'wsgiorg.routing_args': (<routes.util.URLGenerator object at
 0x7ffaba339190>, {'identity_provider': u'KentProxy', 'protocol': u'saml2'})

 and

 'PATH_TRANSLATED':

 '/var/www/keystone/main/v3/OS-FEDERATION/identity_providers/KentProxy/protocols/saml2/auth'

 So Apache is telling Keystone that it has protected the URL that
 Keystone has configured to be protected.

 However, Apache has been configured to protect this URL with the ABFAB
 protocol and the local Radius server, rather than the KentProxy IdP and
 the SAML2 protocol. So we could say that Apache is lying to Keystone,
 and because Keystone trusts Apache, then Keystone trusts Apache's lies
 and wrongly thinks that the correct IDP and protocol were used.

 The only sure way to protect Keystone from a wrongly or mal-configured
 Apache is to have end to end security, where Keystone gets a token from
 the IDP that it can validate, to prove that it is the trusted IDP that
 it is talking to. In other words, if Keystone is given the original
 signed SAML assertion from the IDP, it will know for certain that the
 user was authenticated by the trusted IDP using the trusted protocol


So the bug is a misconfiguration, not an actual bug. The goal was to
trust and leverage httpd, not reimplement it and all its extensions.



 regards

 David

 
  On the Apache side we are looking to expand the set of variables set.
 
 http://www.freeipa.org/page/Environment_Variables#Proposed_Additional_Variables
 
 

 The original SAML assertion
 
  mod_shib does support Shib-Identity-Provider :
 
 
 
 https://wiki.shibboleth.net/confluence/display/SHIB2/NativeSPAttributeAccess#NativeSPAttributeAccess-CustomSPVariables
 
 
  Which should be sufficient: if the user is coming in via mod_shib, they
  are using SAML.
 
 
 
 
  BTW, we are using the Juno release. We should fix this bug in Kilo.
 
  As I have been saying for many months, Keystone does not know anything
  about SAML or ABFAB or OpenID Connect protocols, so there is currently
  no point in configuring this information into Keystone. Keystone is only
  aware of environmental parameters coming from Apache. So this is the
  protocol that Keystone recognises. If you want Keystone to try to
  control the federation protocol and IDPs used by Apache, then you will
  need the Apache plugins to pass the name of the IDP and the protocol
  being used as environmental parameters to Keystone, and then Keystone
  can check that the ones it has been configured to trust are actually
  being used by Apache.
 
  regards
 
  David
 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Glance] No meetings for two weeks.

2014-12-23 Thread Nikhil Komawar
Hi all,

In the spirit of the holiday season, the next two meetings for Glance have been 
cancelled, i.e. the ones on Dec 25th and Jan 1st [0]. Let's meet back on the 
7th. Of course, please feel free to ping me on IRC/email if you have any 
questions, concerns or suggestions.

Happy holidays!

[0] https://etherpad.openstack.org/p/glance-team-meeting-agenda

Cheers!
-Nikhil
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Keystone] Bug in federation

2014-12-23 Thread Morgan Fainberg

 On Dec 23, 2014, at 1:08 PM, Dolph Mathews dolph.math...@gmail.com wrote:
 
 
 On Tue, Dec 23, 2014 at 1:33 PM, David Chadwick d.w.chadw...@kent.ac.uk wrote:
 Hi Adam
 
 On 23/12/2014 17:34, Adam Young wrote:
  On 12/23/2014 11:34 AM, David Chadwick wrote:
  Hi guys
 
  we now have the ABFAB federation protocol working with Keystone, using a
  modified mod_auth_kerb plugin for Apache (available from the project
  Moonshot web site). However, we did not change Keystone configuration
  from its original SAML federation configuration, when it was talking to
  SAML IDPs, using mod_shibboleth. Neither did we modify the Keystone code
  (which I believe had to be done for OpenID connect.) We simply replaced
  mod_shibboleth with mod_auth_kerb and talked to a completely different
  IDP with a different protocol. And everything worked just fine.
 
  Consequently Keystone is broken, since you can configure it to trust a
  particular IDP, talking a particular protocol, but Apache will happily
  talk to another IDP, using a different protocol, and Keystone cannot
  tell the difference and will happily accept the authenticated user.
  Keystone should reject any authenticated user who does not come from the
  trusted IDP talking the correct protocol. Otherwise there is no point in
  configuring Keystone with this information, if it is ignored by Keystone.
  The IDP and the Protocol should be passed from HTTPD in env vars. Can
  you confirm/deny that this is the case now?
 
 What is passed from Apache is the 'PATH_INFO' variable, and it is set to
 the URL of Keystone that is being protected, which in our case is
 /OS-FEDERATION/identity_providers/KentProxy/protocols/saml2/auth
 
 There are also the following arguments passed to Keystone
  'wsgiorg.routing_args': (<routes.util.URLGenerator object at
  0x7ffaba339190>, {'identity_provider': u'KentProxy', 'protocol': u'saml2'})
 
 and
 
 'PATH_TRANSLATED':
 '/var/www/keystone/main/v3/OS-FEDERATION/identity_providers/KentProxy/protocols/saml2/auth'
 
 So Apache is telling Keystone that it has protected the URL that
 Keystone has configured to be protected.
 
 However, Apache has been configured to protect this URL with the ABFAB
 protocol and the local Radius server, rather than the KentProxy IdP and
 the SAML2 protocol. So we could say that Apache is lying to Keystone,
 and because Keystone trusts Apache, then Keystone trusts Apache's lies
 and wrongly thinks that the correct IDP and protocol were used.
 
 The only sure way to protect Keystone from a wrongly or mal-configured
 Apache is to have end to end security, where Keystone gets a token from
 the IDP that it can validate, to prove that it is the trusted IDP that
 it is talking to. In other words, if Keystone is given the original
  signed SAML assertion from the IDP, it will know for certain that the
 user was authenticated by the trusted IDP using the trusted protocol
 
 So the bug is a misconfiguration, not an actual bug. The goal was to trust 
  and leverage httpd, not reimplement it and all its extensions.

Fixing this “bug” would be moving towards Keystone needing to implement all of 
the various protocols to avoid “misconfigurations”. There are probably some 
more values that can be passed down from the Apache layer to help provide more 
confidence in the IDP that is being used. I don’t see a real tangible benefit 
to moving away from leveraging HTTPD for handling the heavy lifting of 
federated identity.
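
As a rough illustration of that kind of check, assuming Apache were
configured to export the IdP and protocol as WSGI environment variables
(the variable names below are hypothetical placeholders, not an agreed
interface), a minimal Python sketch:

def assert_trusted_federation(environ, trusted_idp, trusted_protocol):
    # FEDERATION_IDP / FEDERATION_PROTOCOL are hypothetical names that an
    # Apache auth module would have to be configured to set.
    idp = environ.get('FEDERATION_IDP')
    protocol = environ.get('FEDERATION_PROTOCOL')
    if (idp, protocol) != (trusted_idp, trusted_protocol):
        raise ValueError('authenticated via untrusted IdP/protocol: '
                         '%s/%s' % (idp, protocol))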

—Morgan

 
 regards
 
 David
 
 
  On the Apache side we are looking to expand the set of variables set.
  http://www.freeipa.org/page/Environment_Variables#Proposed_Additional_Variables
 
 
 
 The original SAML assertion
 
  mod_shib does support Shib-Identity-Provider :
 
 
  https://wiki.shibboleth.net/confluence/display/SHIB2/NativeSPAttributeAccess#NativeSPAttributeAccess-CustomSPVariables
 
 
  Which should be sufficient: if the user is coming in via mod_shib, they
  are using SAML.
 
 
 
 
  BTW, we are using the Juno release. We should fix this bug in Kilo.
 
  As I have been saying for many months, Keystone does not know anything
  about SAML or ABFAB or OpenID Connect protocols, so there is currently
  no point in configuring this information into Keystone. Keystone is only
  aware of environmental parameters coming from Apache. So this is the
  protocol that Keystone recognises. If you want Keystone to try to
  control the federation protocol and IDPs used by Apache, then you will
  need the Apache plugins to pass the name of the IDP and the protocol
  being used as environmental parameters to Keystone, and then Keystone
  can check that the ones it has been configured to trust are
  actually being used by Apache.
 
  regards
 
  David
 
  

[openstack-dev] [Keystone] Next IRC Meeting January 6th

2014-12-23 Thread Morgan Fainberg
The Keystone IRC meetings will be on hiatus over the holidays. They will resume 
as per normal on January 6th.

Have a good end of the year!

Cheers,
Morgan
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Mistral] Converging discussions on WF context and WF/task/action inputs

2014-12-23 Thread W Chan
After some online discussions with Renat, the following is a revision of
the proposal to address the following related blueprints.
* https://blueprints.launchpad.net/mistral/+spec/mistral-execution-environment
* https://blueprints.launchpad.net/mistral/+spec/mistral-global-context
* https://blueprints.launchpad.net/mistral/+spec/mistral-default-input-values
* https://blueprints.launchpad.net/mistral/+spec/mistral-runtime-context

Please refer to the following threads for background.
* http://lists.openstack.org/pipermail/openstack-dev/2014-December/052643.html
* http://lists.openstack.org/pipermail/openstack-dev/2014-December/052960.html
* http://lists.openstack.org/pipermail/openstack-dev/2014-December/052824.html


*Workflow Context Scope*
1. context to workflow is passed to all its subflows and subtasks/actions
(aka children) only explicitly via inputs
2. context is passed by value (copy.deepcopy) to children
3. change to context is passed to parent only when it's explicitly
published at the end of the child execution
4. change to context at the parent (after a publish from a child) is passed
to subsequent children
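
A tiny Python sketch of points 2-4 (illustrative only, not actual Mistral
code; the child's work is faked with a hard-coded 'records' value):

import copy

def run_child(parent_ctx, child_inputs, publish_keys):
    # 2. pass by value: the child works on a deep copy of the context
    child_ctx = copy.deepcopy(parent_ctx)
    child_ctx.update(child_inputs)
    child_ctx['records'] = ['row1', 'row2']  # stand-in for the child's work
    # 3. only explicitly published keys flow back to the parent
    return {k: child_ctx[k] for k in publish_keys if k in child_ctx}

parent = {'var1': 'a'}
parent.update(run_child(parent, {'query': 'SELECT 1'}, ['records']))
# 4. the published change is now visible to subsequent children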

*Environment Variables*
Solves the problem of quickly passing pre-defined inputs to a WF
execution.  In the WF spec, environment variables are referenced as
$.env.var1, $.env.var2, etc.  We should implement an API and DB model where
users can pre-define different environments with their own set of
variables.  An environment can be passed either by name from the DB or
ad hoc by dict in start_workflow.  On workflow execution, a copy of the
environment is saved with the execution object.  Action inputs are still
declared explicitly in the WF spec.  This does not solve the problem where
common inputs are specified over and over again.  So if there are multiple
SQL tasks in the WF, the WF author still needs to supply the conn_str
explicitly for each task.  In the example below, let's say we have a SQL
Query Action that takes a connection string and a query statement as
inputs.  The WF author can specify that the conn_str input is supplied from
the $.env.conn_str.

*Example:*

# Assume this SqlAction is registered as std.sql in Mistral's Action table.
class SqlAction(object):
    def __init__(self, conn_str, query):
        ...

...

version: 2.0
workflows:
    demo:
        type: direct
        input:
            - query
        output:
            - records
        tasks:
            query:
                action: std.sql conn_str={$.env.conn_str} query={$.query}
                publish:
                    records: $

...

my_adhoc_env = {
    conn_str: mysql://admin:secrete@localhost/test
}

...

# adhoc by dict
start_workflow(wf_name, wf_inputs, env=my_adhoc_env)

OR

# lookup by name from DB model
start_workflow(wf_name, wf_inputs, env=my_lab_env)


*Define Default Action Inputs as Environment Variables*
Solves the problem where we're specifying the same inputs to subflows and
subtasks/actions over and over again.  On command execution, if action
inputs are not explicitly supplied, then defaults will be looked up from the
environment.

*Example:*
Using the same example from above, the WF author can still supply both
conn_str and query inputs in the WF spec.  However, the author also has the
option to supply that as default action inputs.  An example environment
structure is below.  __actions should be reserved and immutable.  Users
can specify one or more default inputs for the sql action as a nested dict
under __actions.  Recursive YAQL eval should be supported in the env
variables.

version: 2.0
workflows:
    demo:
        type: direct
        input:
            - query
        output:
            - records
        tasks:
            query:
                action: std.sql query={$.query}
                publish:
                    records: $

...

my_adhoc_env = {
    sql_server: localhost,
    __actions: {
        std.sql: {
            conn_str: mysql://admin:secrete@{$.env.sql_server}/test
        }
    }
}


*Default Input Values Supplied Explicitly in WF Spec*
Please refer to this blueprint
https://blueprints.launchpad.net/mistral/+spec/mistral-default-input-values
for background.  This is a different use case.  To support it, we just need to
set the correct order of precedence in applying values (a minimal sketch
follows the list).
1. Input explicitly given to the sub flow/task in the WF spec
2. Default input supplied from env
3. Default input supplied at WF spec
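
A minimal sketch of that precedence (the function and argument names here
are illustrative, not a settled API):

def resolve_action_input(action_name, input_name, explicit_inputs, env,
                         spec_defaults):
    if input_name in explicit_inputs:       # 1. explicit in the WF spec
        return explicit_inputs[input_name]
    env_defaults = env.get('__actions', {}).get(action_name, {})
    if input_name in env_defaults:          # 2. default supplied from env
        return env_defaults[input_name]
    return spec_defaults.get(input_name)    # 3. default given in the WF spec

So, with the environment from the earlier example, a std.sql task that omits
conn_str would pick it up from env['__actions']['std.sql'].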

*Putting this together...*
At runtime, the WF context would be similar to the following example.  This
will be used to recursively evaluate the inputs for subflows/tasks/actions.

ctx = {
    var1: …,
    var2: …,
    my_server_ip: 10.1.23.250,
    env: {
        sql_server: localhost,
        __actions: {
            std.sql: {
                conn: mysql://admin:secrete@{$.env.sql_server}/test
            },
            my.action: {
                endpoint: http://{$.my_server_ip}/v1/foo
            }
        }
    }
}
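
The recursive evaluation could look roughly like the following, with a toy
{$.dotted.path} substitution standing in for the YAQL evaluation that
Mistral would actually perform:

import re

REF = re.compile(r'\{\$\.([\w.]+)\}')

def lookup(path, ctx):
    node = ctx
    for part in path.split('.'):  # e.g. 'env.sql_server'
        node = node[part]
    return node

def recursive_eval(value, ctx):
    if isinstance(value, dict):
        return dict((k, recursive_eval(v, ctx)) for k, v in value.items())
    if isinstance(value, list):
        return [recursive_eval(v, ctx) for v in value]
    if isinstance(value, str):
        return REF.sub(lambda m: str(lookup(m.group(1), ctx)), value)
    return value

# recursive_eval(ctx, ctx) would resolve {$.env.sql_server} and
# {$.my_server_ip} in the structure above.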

*Runtime Context*

[openstack-dev] [infra] [storyboard] Nominating Yolanda Robla for StoryBoard Core

2014-12-23 Thread Michael Krotscheck
Hello everyone!

StoryBoard is the much anticipated successor to Launchpad, and is a
component of the Infrastructure Program. The storyboard-core group is
intended to be a superset of the infra-core group, with additional
reviewers who specialize in the field.

Yolanda has been working on StoryBoard ever since the Atlanta Summit, and
has provided a diligent and cautious voice to our development effort. She
has consistently provided feedback on our reviews, and is neither afraid of
asking for clarification, nor of providing constructive criticism. In
return, she has been nothing but gracious and responsive when improvements
were suggested to her own submissions.

Furthermore, Yolanda has been quite active in the infrastructure team as a
whole, and provides valuable context for us in the greater realm of infra.

Please respond within this thread with either supporting commentary, or
concerns about her promotion. Since many western countries are currently
celebrating holidays, the review period will remain open until January 9th.
If the consensus is positive, we will promote her then!

Thanks,

Michael


References:
https://review.openstack.org/#/q/reviewer:%22yolanda.robla+%253Cinfo%2540ysoft.biz%253E%22,n,z

http://stackalytics.com/?user_id=yolanda.robla&metric=marks
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [3rd-party-ci] Cinder CI and CI accounts

2014-12-23 Thread John Griffith
On Tue, Dec 23, 2014 at 12:07 PM, Alon Marx alo...@il.ibm.com wrote:
 Hi All,

 In IBM we have several cinder drivers, with a number of CI accounts. In
 order to improve the CI management and maintenance, we decided to build a
 single Jenkins master that will run several jobs for the drivers we own.
 Adding the jobs to the jenkins master went ok, but we encountered a problem
 with the CI accounts. We have several drivers and several accounts, but in
 the Jenkins master, the Zuul configuration has only one gerrit account that
 reports.

 So there are several questions:
 1. Was this problem encountered by others? How did they solve it?
 2. Is there a way to configure Zuul on the Jenkins master to report
 different jobs with different CI accounts?
 3. If there is no way to configure the master to use several CI accounts,
 should we build a Jenkins master per driver?
 4. Or another alternative, should we use a single CI account for all drivers
 we own, and report all results under that account?

 We'll appreciate any input.

 Thanks,
 Alon



If you have a look at a review in gerrit you can see others appear to
have a single account with multiple tests/results submitted.  HP, EMC
and NetApp all appear to be pretty clear examples of how to go about
doing this.  My personal preference on this has always been a single
CI account anyway with the different drivers consolidated under it; if
nothing else it reduces clutter in the review posting and makes it
easier to find what you might be looking for.
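
For reference, the single reporting identity comes from Zuul's own
configuration: zuul.conf takes one Gerrit account per Zuul instance,
roughly like the following (the account name is made up, and exact keys
may vary by Zuul version):

[gerrit]
server=review.openstack.org
# the one account every job on this master reports under
user=some-vendor-ci
sshkey=/var/lib/zuul/ssh/id_rsa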

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Feature delivery rules and automated tests

2014-12-23 Thread Mike Scherbakov
Igor,
would that be possible?

On Mon, Dec 22, 2014 at 7:49 PM, Anastasia Urlapova aurlap...@mirantis.com
wrote:

 Mike, Dmitry, team,
 let me add 5 cents - tests per feature have to run on CI before SCF, which
 means that the jobs configuration should also be implemented.

 On Wed, Dec 17, 2014 at 7:33 AM, Mike Scherbakov mscherba...@mirantis.com
  wrote:

 I fully support the idea.

 The Feature Lead has to know that his feature is under threat if it's not
 yet covered by system tests (unit/integration tests are not enough!!!), and
 should proactively work with QA engineers to get tests implemented and
 passing before SCF.

 On Fri, Dec 12, 2014 at 5:55 PM, Dmitry Pyzhov dpyz...@mirantis.com
 wrote:

 Guys,

 we've done a good job in 6.0. Most of the features were merged before
 feature freeze. Our QA were involved in testing even earlier. It was much
 better than before.

 We had a discussion with Anastasia. There were several bug reports for
 features yesterday, far beyond HCF. So we still have a long way to be
 perfect. We should add one rule: we need to have automated tests before HCF.

 Actually, we should have results of these tests just after FF. It is
 quite challenging because we have a short development cycle. So my
 proposal is to require full deployment and run of automated tests for each
 feature before soft code freeze. And it needs to be tracked in checklists
 and on feature syncups.

 Your opinion?




 --
 Mike Scherbakov
 #mihgen








-- 
Mike Scherbakov
#mihgen
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Feature delivery rules and automated tests

2014-12-23 Thread Igor Shishkin
I believe yes.

With Jenkins Job Builder we could create jobs faster; QA can be involved in 
that or even create jobs on their own.

I think we have to try it during the next release cycle; currently I can’t see 
any blockers/problems here.
-- 
Igor Shishkin
DevOps



 On 24 Dec 2014, at 2:20 am GMT+3, Mike Scherbakov mscherba...@mirantis.com 
 wrote:
 
 Igor,
 would that be possible?
 
 On Mon, Dec 22, 2014 at 7:49 PM, Anastasia Urlapova aurlap...@mirantis.com 
 wrote:
 Mike, Dmitry, team,
 let me add 5 cents - tests per feature have to run on CI before SCF, which 
 means that the jobs configuration should also be implemented.
 
 On Wed, Dec 17, 2014 at 7:33 AM, Mike Scherbakov mscherba...@mirantis.com 
 wrote:
 I fully support the idea.
 
 The Feature Lead has to know that his feature is under threat if it's not yet 
 covered by system tests (unit/integration tests are not enough!!!), and 
 should proactively work with QA engineers to get tests implemented and passing 
 before SCF.
 
 On Fri, Dec 12, 2014 at 5:55 PM, Dmitry Pyzhov dpyz...@mirantis.com wrote:
 Guys,
 
 we've done a good job in 6.0. Most of the features were merged before feature 
 freeze. Our QA were involved in testing even earlier. It was much better than 
 before.
 
 We had a discussion with Anastasia. There were several bug reports for 
 features yesterday, far beyond HCF. So we still have a long way to be 
 perfect. We should add one rule: we need to have automated tests before HCF.
 
 Actually, we should have results of these tests just after FF. It is quite 
 challenging because we have a short development cycle. So my proposal is to 
 require full deployment and run of automated tests for each feature before 
 soft code freeze. And it needs to be tracked in checklists and on feature 
 syncups.
 
 Your opinion?
 
 
 
 
 -- 
 Mike Scherbakov
 #mihgen
 
 
 
 
 
 
 
 
 -- 
 Mike Scherbakov
 #mihgen
 


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [infra] [storyboard] Nominating Yolanda Robla for StoryBoard Core

2014-12-23 Thread Zaro
+1

On Tue, Dec 23, 2014 at 2:34 PM, Michael Krotscheck krotsch...@gmail.com
wrote:

 Hello everyone!

 StoryBoard is the much anticipated successor to Launchpad, and is a
 component of the Infrastructure Program. The storyboard-core group is
 intended to be a superset of the infra-core group, with additional
 reviewers who specialize in the field.

 Yolanda has been working on StoryBoard ever since the Atlanta Summit, and
 has provided a diligent and cautious voice to our development effort. She
 has consistently provided feedback on our reviews, and is neither afraid of
 asking for clarification, nor of providing constructive criticism. In
 return, she has been nothing but gracious and responsive when improvements
 were suggested to her own submissions.

 Furthermore, Yolanda has been quite active in the infrastructure team as a
 whole, and provides valuable context for us in the greater realm of infra.

 Please respond within this thread with either supporting commentary, or
 concerns about her promotion. Since many western countries are currently
 celebrating holidays, the review period will remain open until January 9th.
 If the consensus is positive, we will promote her then!

 Thanks,

 Michael


 References:

 https://review.openstack.org/#/q/reviewer:%22yolanda.robla+%253Cinfo%2540ysoft.biz%253E%22,n,z

 http://stackalytics.com/?user_id=yolanda.robla&metric=marks




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Relocation of freshly deployed OpenStack by Fuel

2014-12-23 Thread Andrew Woodward
Pawel,

I'm not sure that it's common at all to move the deployed cloud.
Hopefully fuel made it easy enough to deploy that you could simply
reset the cluster and re-deploy with the new network settings. I'd be
interested in understanding why this would be more painful than
re-configuring the public network settings.

Things that need to be changed:
all of the keystone public endpoints
all of the config files using the public endpoints, so anything that
speaks with another endpoint (usually nova [compute & controller],
neutron, possibly others)
corosync config for public vip
(6.0) corosync config for ping_public_gw
host-os NIC settings, i.e. /etc/network/interfaces.d/

now with all that said, I think rather than updating these by hand, we
could get puppet to update these values for us.
The non-repeatable way is to hack on /etc/astute.yaml and then
re-apply puppet (/etc/puppet/manifests/site.pp) for each role you
would have had in /etc/astute.yaml.

The more-repeatable way is to hack out the public range in the nailgun
database, as well as replace the public_vip value. Once these are
changed, you should be able to manually apply puppet using the deploy
API (fuelclient can call this): 'fuel --env 1 --node 1,2,3 --deploy'

I've never done this before, but it should be that simple, and puppet
will re-apply based on the current value in the database (as long as
you didn't upload custom node yaml prior to your initial deployment)

On Sat, Dec 20, 2014 at 11:27 AM, Skowron, Pawel
pawel.skow...@intel.com wrote:
 Need a little guidance with the Mirantis version of OpenStack.



 We want to move a freshly deployed cloud, without running instances but with
 the HA option, to another physical location.

 The other location means different public network ranges. And I really
 want to move my installation without redeploying the cloud.



 What I think needs to change is the public network settings. The public
 network settings can be divided into two different areas:

 1) Floating IP range for external access to running VM instances

 2) Fuel's reserved pool for service endpoints (virtual IPs and statically
 assigned IPs)



 The first one (1) is, I believe, _not a problem_, though I haven't tested
 that; any insight will be invaluable.

 I think it would be possible to change the floating network ranges as an
 admin in OpenStack itself. I would just add another network as the external
 network.



 But the second issue (2) is the one I am worried about. What I found is that
 the virtual IPs (VIPs) are assigned to one of the controllers (the primary
 role in HA)

 and written into the haproxy/pacemaker configuration. To allow access from
 the public network via these IPs I would probably need

 to reconfigure all HA support services which have the VIPs hardcoded in
 their configuration files, but it looks very complicated and fragile.



 I have even found that public_vip is used in nova.conf (to get access to
 glance). So the relocation will require reconfiguration of nova and maybe
 other openstack services.

 In the case of Keystone it would be a real problem (IPs are stored in the
 database).



 Does anyone have experience with this kind of scenario and would be so kind
 as to share it? Please help.



 I have used Fuel 6.0 technical preview.



 Pawel Skowron

 pawel.skow...@intel.com










-- 
Andrew
Mirantis
Ceph community

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] How can I write at milestone section of blueprint?

2014-12-23 Thread Yasunori Goto


 It's been discussed at several summits. We have settled on a general solution 
 using Zaqar,
 but no work has been done that I know of. I was just pointing out that 
 similar blueprints/specs
 exist and you may want to look through those to get some ideas about writing 
 your own and/or
 basing your proposal off of one of them.


I see. Thanks for your information.


-- 
Yasunori Goto y-g...@jp.fujitsu.com



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Hierarchical Multitenancy

2014-12-23 Thread Michael Dorman
+1 to Nova support for this getting in to Kilo.

We have a similar use case.  I’d really like to dole out quota at a department 
level, and let individual departments manage sub-projects and quotas on their 
own.  I agree that HMT has limited value without Nova support.

Thanks!
Mike


From: Tim Bell tim.b...@cern.ch
Reply-To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Date: Tuesday, December 23, 2014 at 11:01 AM
To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] Hierarchical Multitenancy

Joe,

Thanks… there seems to be good agreement on the spec and the matching 
implementation is well advanced with BARC so the risk is not too high.

Launching HMT with quota in Nova in the same release cycle would also provide a 
more complete end user experience.

For CERN, this functionality is very interesting as it allows the central cloud 
providers to delegate the allocation of quotas to the LHC experiments. Thus, 
from a central perspective, we are able to allocate N thousand cores to an 
experiment and delegate their resource co-ordinator to prioritise the work 
within the experiment. Currently, we have many manual helpdesk tickets with 
significant latency to adjust the quotas.

Tim

From: Joe Gordon [mailto:joe.gord...@gmail.com]
Sent: 23 December 2014 17:35
To: OpenStack Development Mailing List
Subject: Re: [openstack-dev] Hierarchical Multitenancy


On Dec 23, 2014 12:26 AM, Tim Bell 
tim.b...@cern.ch wrote:



 It would be great if we can get approval for the Hierarchical Quota handling 
 in Nova too (https://review.openstack.org/#/c/129420/).

Nova's spec deadline has passed, but I think this is a good candidate for an 
exception.  We will announce the process for asking for a formal spec exception 
shortly after new years.




 Tim



 From: Morgan Fainberg 
 [mailto:morgan.fainb...@gmail.com]
 Sent: 23 December 2014 01:22
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] Hierarchical Multitenancy



 Hi Raildo,



 Thanks for putting this post together. I really appreciate all the work you 
 guys have done (and continue to do) to get the Hierarchical Multitenancy code 
 into Keystone. It’s great to have the base implementation merged into 
 Keystone for the K1 milestone. I look forward to seeing the rest of the 
 development land during the rest of this cycle and what the other OpenStack 
 projects build around the HMT functionality.



 Cheers,

 Morgan







 On Dec 22, 2014, at 1:49 PM, Raildo Mascena 
 rail...@gmail.com wrote:



 Hello folks, My team and I developed the Hierarchical Multitenancy concept 
 for Keystone in Kilo-1 but What is Hierarchical Multitenancy? What have we 
 implemented? What are the next steps for kilo?

 To answers these questions, I created a blog post 
 http://raildo.me/hierarchical-multitenancy-in-openstack/



 Any question, I'm available.



 --

 Raildo Mascena

 Software Engineer.

 Bachelor of Computer Science.

 Distributed Systems Laboratory
 Federal University of Campina Grande
 Campina Grande, PB - Brazil








___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [3rd-party-ci] Cinder CI and CI accounts

2014-12-23 Thread Asselin, Ramy
I agree with John. Option 4: one ci account for all drivers.

The only valid reasons I'm aware of to use multiple accounts for a single 
vendor are if the hardware required to run the tests is not accessible from a 
'central' CI system, or if the CI systems are managed by different teams.

Otherwise, as you stated, it's more complicated to manage & maintain.

Ramy

-Original Message-
From: John Griffith [mailto:john.griffi...@gmail.com] 
Sent: Tuesday, December 23, 2014 3:04 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [3rd-party-ci] Cinder CI and CI accounts

On Tue, Dec 23, 2014 at 12:07 PM, Alon Marx alo...@il.ibm.com wrote:
 Hi All,

 In IBM we have several cinder drivers, with a number of CI accounts. 
 In order to improve the CI management and maintenance, we decided to 
 build a single Jenkins master that will run several jobs for the drivers we 
 own.
 Adding the jobs to the jenkins master went ok, but we encountered a 
 problem with the CI accounts. We have several drivers and several 
 accounts, but in the Jenkins master, the Zuul configuration has only 
 one gerrit account that reports.

 So there are several questions:
 1. Was this problem encountered by others? How did they solve it?
 2. Is there a way to configure Zuul on the Jenkins master to report 
 different jobs with different CI accounts?
 3. If there is no way to configure the master to use several CI 
 accounts, should we build a Jenkins master per driver?
 4. Or another alternative, should we use a single CI account for all 
 drivers we own, and report all results under that account?

 We'll appreciate any input.

 Thanks,
 Alon



If you have a look at a review in gerrit you can see others appear to have a 
single account with multiple tests/results submitted.  HP, EMC and NetApp all 
appear to be pretty clear examples of how to go about doing this.  My personal 
preference on this has always been a single CI account anyway with the 
different drivers consolidated under it; if nothing else it reduces clutter in 
the review posting and makes it easier to find what you might be looking for.


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [3rd-party-ci] Cinder CI and CI accounts

2014-12-23 Thread Jay S. Bryant

John and Ramy,

Thanks for the feedback.  So, we will create an IBM Storage CI Check 
account and slowly deprecate the multiple accounts as we consolidate the 
hardware.


Jay


On 12/23/2014 08:11 PM, Asselin, Ramy wrote:

I agree with John. Option 4: one ci account for all drivers.

The only valid reasons I'm aware of to use multiple accounts for a single 
vendor are if the hardware required to run the tests is not accessible from a 
'central' CI system, or if the CI systems are managed by different teams.

Otherwise, as you stated, it's more complicated to manage & maintain.

Ramy

-Original Message-
From: John Griffith [mailto:john.griffi...@gmail.com]
Sent: Tuesday, December 23, 2014 3:04 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [3rd-party-ci] Cinder CI and CI accounts

On Tue, Dec 23, 2014 at 12:07 PM, Alon Marx alo...@il.ibm.com wrote:

Hi All,

In IBM we have several cinder drivers, with a number of CI accounts.
In order to improve the CI management and maintenance, we decided to
build a single Jenkins master that will run several jobs for the drivers we own.
Adding the jobs to the jenkins master went ok, but we encountered a
problem with the CI accounts. We have several drivers and several
accounts, but in the Jenkins master, the Zuul configuration has only
one gerrit account that reports.

So there are several questions:
1. Was this problem encountered by others? How did they solve it?
2. Is there a way to configure Zuul on the Jenkins master to report
different jobs with different CI accounts?
3. If there is no way to configure the master to use several CI
accounts, should we build a Jenkins master per driver?
4. Or another alternative, should we use a single CI account for all
drivers we own, and report all results under that account?

We'll appreciate any input.

Thanks,
Alon



If you have a look at a review in gerrit you can see others appear to have a single 
account with multiple tests/results submitted.  HP, EMC and NetApp all appear to be 
pretty clear examples of how to go about doing this.  My personal preference on this has 
always been a single CI account anyway with the different drivers consolidated under it; 
if nothing else it reduces clutter in the review posting and makes it easier 
to find what you might be looking for.





___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Hierarchical Multitenancy

2014-12-23 Thread James Downs

On Dec 23, 2014, at 5:10 PM, Michael Dorman mdor...@godaddy.com wrote:

 +1 to Nova support for this getting in to Kilo.
 
 We have a similar use case.  I’d really like to dole out quota at a 
 department level, and let individual departments manage sub-projects and 
 quotas on their own.  I agree that HMT has limited value without Nova support.

+1, same for the use case.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Mistral] Converging discussions on WF context and WF/task/action inputs

2014-12-23 Thread Renat Akhmerov
Thanks Winson,

Since we discussed all this already I just want to confirm that I fully support 
this model; it will significantly help us make much more concise, readable and 
maintainable workflows. I spent a lot of time thinking about it and don’t see 
any problems with it. Nice job!

However, all additional comments and questions are more than welcomed!


Renat Akhmerov
@ Mirantis Inc.



 On 24 Dec 2014, at 04:32, W Chan m4d.co...@gmail.com wrote:
 
 After some online discussions with Renat, the following is a revision of the 
 proposal to address the following related blueprints.
 * https://blueprints.launchpad.net/mistral/+spec/mistral-execution-environment
 * https://blueprints.launchpad.net/mistral/+spec/mistral-global-context
 * https://blueprints.launchpad.net/mistral/+spec/mistral-default-input-values
 * https://blueprints.launchpad.net/mistral/+spec/mistral-runtime-context
 
 Please refer to the following threads for background.
 * http://lists.openstack.org/pipermail/openstack-dev/2014-December/052643.html
 * http://lists.openstack.org/pipermail/openstack-dev/2014-December/052960.html
 * http://lists.openstack.org/pipermail/openstack-dev/2014-December/052824.html
 
 
 Workflow Context Scope
 1. context to workflow is passed to all its subflows and subtasks/actions 
 (aka children) only explicitly via inputs
 2. context is passed by value (copy.deepcopy) to children
 3. change to context is passed to parent only when it's explicitly published 
 at the end of the child execution
 4. change to context at the parent (after a publish from a child) is passed 
 to subsequent children
 
 Environment Variables
 Solves the problem of quickly passing pre-defined inputs to a WF execution.  
 In the WF spec, environment variables are referenced as $.env.var1, 
 $.env.var2, etc.  We should implement an API and DB model where users can 
 pre-define different environments with their own set of variables.  An 
 environment can be passed either by name from the DB or ad hoc by dict in 
 start_workflow.  On workflow execution, a copy of the environment is saved 
 with the execution object.  Action inputs are still declared explicitly in 
 the WF spec.  This does not solve the problem where common inputs are 
 specified over and over again.  So if there are multiple SQL tasks in the WF, 
 the WF author still needs to supply the conn_str explicitly for each task.  
 In the example below, let's say we have a SQL Query Action that takes a 
 connection string and a query statement as inputs.  The WF author can specify 
 that the conn_str input is supplied from the $.env.conn_str.
 
 Example:
 
 # Assume this SqlAction is registered as std.sql in Mistral's Action table.
 class SqlAction(object):
      def __init__(self, conn_str, query):
          ...
 
 ...
 
 version: 2.0
 workflows:
 demo:
 type: direct
 input:
 - query
 output:
 - records
 tasks:
 query:
 action: std.sql conn_str={$.env.conn_str} query={$.query}
 publish:
 records: $
 
 ...
 
 my_adhoc_env = {
 conn_str: mysql://admin:secrete@localhost/test
 }
 
 ...
 
 # adhoc by dict
 start_workflow(wf_name, wf_inputs, env=my_adhoc_env)
 
 OR
 
 # lookup by name from DB model
 start_workflow(wf_name, wf_inputs, env=my_lab_env)
 
 Define Default Action Inputs as Environment Variables
 Solves the problem where we're specifying the same inputs to subflows and 
 subtasks/actions over and over again.  On command execution, if action inputs 
 are not explicitly supplied, then defaults will be looked up from the 
 environment.
 
 Example:
 Using the same example from above, the WF author can still supply both 
 conn_str and query inputs in the WF spec.  However, the author also has the 
 option to supply that as default action inputs.  An example environment 
 structure is below.  __actions should be reserved and immutable.  Users can 
 specify one or more default inputs for the sql action as a nested dict under 
 __actions.  Recursive YAQL eval should be supported in the env variables.
 
 version: 2.0
 workflows:
 demo:
 type: direct
 input:
 - query
 output:
 - records
 tasks:
 query:
 action: std.sql query={$.query}
 publish:
 records: $
 
 ...