Re: [openstack-dev] [nova] Do we need new nova configuration option to show compute nodes with shared_storage or not ?

2015-02-02 Thread Joshua Zhang
Changbo,
  Since you are familiar with oslo, could you help me look at this question:
https://answers.launchpad.net/oslo.config/+question/261550

On Tue, Jan 27, 2015 at 12:13 PM, ChangBo Guo  wrote:

> Hi ALL,
>
> I have been working on bug 1414432 [1] recently, and I would like some more
> discussion before doing further work.
>
> I'm talking about the libvirt driver (this may also fit other hypervisors).
> There are two kinds of storage used for an instance's root_disk:
> 1) non-shared storage on each compute node
> 2) shared storage between compute nodes
> 2.1 shared instance_path and root_disk (shared everything, like NFS)
> 2.2 shared root_disk without a shared instance_path (shared volume
> backend like Ceph)
>
> Currently we check whether storage is shared inside concrete actions like
> live-migration and evacuate, and these checks need the instance passed as
> an argument. I think whether storage is shared should be determined per
> compute node, not per instance.
> So do we need a new nova configuration option to indicate whether compute
> nodes use shared storage?
>
> benefits:
>
>    1) No need to check each instance when taking the action
>    2) Similar bugs like [1] can be handled easily at the API level.
>
> If yes,
> something like: compute-node-storage-type = [ non-shared | shared-all |
> shared-volume ] or part-shared
> In order to support deployments where some compute nodes have shared
> storage and others don't, another configuration option could be:
> shared_storage_compute_nodes = [node3, node5]
>
> Any thoughts ?
>
>
> [1]https://bugs.launchpad.net/nova/+bug/1414432
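The proposal above can be sketched as a small helper that decides shared
storage per compute node. This is only an illustration; the function name and
the option values are hypothetical, taken from the proposal rather than from
existing nova code:

```python
# Hypothetical values for the proposed per-node "compute-node-storage-type"
# option (names come from the proposal in this thread, not from nova itself).
NON_SHARED = 'non-shared'
SHARED_ALL = 'shared-all'        # shared instance_path and root_disk (e.g. NFS)
SHARED_VOLUME = 'shared-volume'  # shared root_disk only (e.g. a Ceph backend)


def node_has_shared_storage(node, storage_type, shared_nodes=()):
    """Decide shared storage per compute node instead of per instance.

    ``storage_type`` would come from the proposed compute-node-storage-type
    option; ``shared_nodes`` from the proposed shared_storage_compute_nodes
    list, for clouds where only some nodes sit on shared storage.
    """
    if storage_type in (SHARED_ALL, SHARED_VOLUME):
        return True
    # "part-shared" deployments: fall back to the explicit node list.
    return node in shared_nodes
```

With something like this, live-migration and evacuate could consult the
node's configuration once instead of probing each instance's storage.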
>
> --
> ChangBo Guo(gcb)
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Best Regards
Zhang Hua(张华)
Software Engineer | Canonical
IRC:  zhhuabj
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Do we need new nova configuration option to show compute nodes with shared_storage or not ?

2015-02-02 Thread Joshua Zhang
Sorry, that was my mistake; please ignore my reply above.

On Mon, Feb 2, 2015 at 4:04 PM, Joshua Zhang 
wrote:

> Changbo,
>   Since you are familiar with oslo, could you help me look at this question:
> https://answers.launchpad.net/oslo.config/+question/261550
>
> On Tue, Jan 27, 2015 at 12:13 PM, ChangBo Guo  wrote:
>
>> Hi ALL,
>>
>> I have been working on bug 1414432 [1] recently, and I would like some more
>> discussion before doing further work.
>>
>> I'm talking about the libvirt driver (this may also fit other hypervisors).
>> There are two kinds of storage used for an instance's root_disk:
>> 1) non-shared storage on each compute node
>> 2) shared storage between compute nodes
>> 2.1 shared instance_path and root_disk (shared everything, like NFS)
>> 2.2 shared root_disk without a shared instance_path (shared volume
>> backend like Ceph)
>>
>> Currently we check whether storage is shared inside concrete actions like
>> live-migration and evacuate, and these checks need the instance passed as
>> an argument. I think whether storage is shared should be determined per
>> compute node, not per instance.
>> So do we need a new nova configuration option to indicate whether compute
>> nodes use shared storage?
>>
>> benefits:
>>
>>    1) No need to check each instance when taking the action
>>    2) Similar bugs like [1] can be handled easily at the API level.
>>
>> If yes,
>> something like: compute-node-storage-type = [ non-shared | shared-all |
>> shared-volume ] or part-shared
>> In order to support deployments where some compute nodes have shared
>> storage and others don't, another configuration option could be:
>> shared_storage_compute_nodes = [node3, node5]
>>
>> Any thoughts ?
>>
>>
>> [1]https://bugs.launchpad.net/nova/+bug/1414432
>>
>> --
>> ChangBo Guo(gcb)
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
>
> --
> Best Regards
> Zhang Hua(张华)
> Software Engineer | Canonical
> IRC:  zhhuabj
>



-- 
Best Regards
Zhang Hua(张华)
Software Engineer | Canonical
IRC:  zhhuabj
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][ml2] How to get compute host details

2015-02-02 Thread Kevin Benton
ML2 makes the hostname available in the context it passes to the drivers
via the 'host' attribute.[1] This is the only thing Neutron knows about the
compute node using the port.

1.
https://github.com/openstack/neutron/blob/master/neutron/plugins/ml2/driver_api.py#L776
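As a sketch, a mechanism driver can read that attribute from the context it
receives in its hooks. The classes below are minimal stand-ins rather than
Neutron's real base classes, so the names only loosely mirror the driver API:

```python
class FakePortContext:
    """Stand-in for ml2's PortContext, exposing just the 'host' attribute."""

    def __init__(self, host):
        # In Neutron this is the compute node the port is bound on.
        self.host = host


class MyMechanismDriver:
    """Minimal sketch of a driver hook that uses the scheduled host."""

    def bind_port(self, context):
        # context.host identifies the compute node using the port.
        return 'binding port on %s' % context.host


driver = MyMechanismDriver()
ctx = FakePortContext(host='compute-1')
print(driver.bind_port(ctx))  # -> binding port on compute-1
```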

On Sun, Feb 1, 2015 at 10:11 PM, Harshada Kakad  wrote:

> Hi All,
>
> I am developing an ML2 driver and I want compute host details during port
> creation. That is, I have a multi-node setup, and when I launch a VM I want
> to find out, at port-creation time, which compute node the VM was launched
> on. Can anyone please help me with this?
>
> Thanks in Advance.
>
> --
> *Regards,*
> *Harshada Kakad*
> **
> *Sr. Software Engineer*
> *C3/101, Saudamini Complex, Right Bhusari Colony, Paud Road, Pune –
> 411013, India*
> *Mobile-9689187388*
> *Email-Id : harshada.ka...@izeltech.com *
> *website : www.izeltech.com *
>
> *Disclaimer*
> The information contained in this e-mail and any attachment(s) to this
> message are intended for the exclusive use of the addressee(s) and may
> contain proprietary, confidential or privileged information of Izel
> Technologies Pvt. Ltd. If you are not the intended recipient, you are
> notified that any review, use, any form of reproduction, dissemination,
> copying, disclosure, modification, distribution and/or publication of this
> e-mail message, contents or its attachment(s) is strictly prohibited and
> you are requested to notify us the same immediately by e-mail and delete
> this mail immediately. Izel Technologies Pvt. Ltd accepts no liability for
> virus infected e-mail or errors or omissions or consequences which may
> arise as a result of this e-mail transmission.
> *End of Disclaimer*
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Kevin Benton
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo.db] PyMySQL review

2015-02-02 Thread Julien Danjou
On Fri, Jan 30 2015, Yuriy Taraday wrote:

> That's great research! Prompted by it, I spent most of last evening reading
> the PyMySQL sources. It looks like what it needs right now is not so much C
> speedups as plain old Python optimizations. The protocol parsing code seems
> very inefficient (chained struct.unpack's interleaved with data copying,
> plus util method calls that redo the same struct.unpack with an unnecessary
> type check... wow...). That's a huge opportunity for improvement.
> I think it's worth spending time on my coming vacation to fix these
> slowdowns. We'll see whether they win back the 10% slowdown people are
> talking about.

With all due respect, you may be right, but I have to say it would be better
to profile and then optimize, rather than spend time rewriting random parts
of the code and hoping it ends up faster. :-)
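For what it's worth, the stdlib profiler is a quick way to see where the time
actually goes before touching anything. The parsing function here is only a
stand-in for the real PyMySQL protocol code, kept deliberately simple:

```python
import cProfile
import io
import pstats
import struct


def parse_packets(buf, count):
    """Stand-in for protocol parsing: many small chained struct.unpack calls."""
    total = 0
    for i in range(count):
        off = i * 4
        (value,) = struct.unpack('<I', buf[off:off + 4])
        total += value
    return total


buf = b'\x01\x00\x00\x00' * 10000

# Profile the hot loop, then print the top entries sorted by cumulative time.
profiler = cProfile.Profile()
profiler.enable()
parse_packets(buf, 10000)
profiler.disable()

stream = io.StringIO()
pstats.Stats(profiler, stream=stream).sort_stats('cumulative').print_stats(5)
print(stream.getvalue())  # shows which calls dominate before any rewriting
```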

-- 
Julien Danjou
// Free Software hacker
// http://julien.danjou.info


signature.asc
Description: PGP signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][ml2] How to get compute host details

2015-02-02 Thread Harshada Kakad
Thanks for the reply, Kevin.
But the 'host' attribute gives me the controller hostname, not the compute
host name. I have a multi-node setup, and I want to know the compute host
where the VM gets launched.

On Mon, Feb 2, 2015 at 2:19 PM, Kevin Benton  wrote:

> ML2 makes the hostname available in the context it passes to the drivers
> via the 'host' attribute.[1] This is the only thing Neutron knows about the
> compute node using the port.
>
> 1.
> https://github.com/openstack/neutron/blob/master/neutron/plugins/ml2/driver_api.py#L776
>
> On Sun, Feb 1, 2015 at 10:11 PM, Harshada Kakad <
> harshada.ka...@izeltech.com> wrote:
>
>> Hi All,
>>
>> I am developing an ML2 driver and I want compute host details during port
>> creation. That is, I have a multi-node setup, and when I launch a VM I want
>> to find out, at port-creation time, which compute node the VM was launched
>> on. Can anyone please help me with this?
>>
>> Thanks in Advance.
>>
>> --
>> *Regards,*
>> *Harshada Kakad*
>> **
>> *Sr. Software Engineer*
>> *C3/101, Saudamini Complex, Right Bhusari Colony, Paud Road, Pune –
>> 411013, India*
>> *Mobile-9689187388*
>> *Email-Id : harshada.ka...@izeltech.com *
>> *website : www.izeltech.com *
>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
>
> --
> Kevin Benton
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
*Regards,*
*Harshada Kakad*
**
*Sr. Software Engineer*
*C3/101, Saudamini Complex, Right Bhusari Colony, Paud Road, Pune – 411013,
India*
*Mobile-9689187388*
*Email-Id : harshada.ka...@izeltech.com *
*website : www.izeltech.com *

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Ironic] mid cycle etherpad

2015-02-02 Thread Devananda van der Veen
Hi folks!

I've tossed up an etherpad for folks at (or not at) the mid cycle sprints
to share ideas and such.

This might seem last-minute (and it is). I don't have a specific agenda --
aside from what's on launchpad, what's on gerrit, and what's on your mind.
Really, I'd like us to make progress towards the goals we set for ourselves
at the start of the cycle, and while I'm thrilled with the progress we've
made in the last few weeks, there's a bunch more to do.

So, here's the etherpad:
https://etherpad.openstack.org/p/kilo-ironic-midcycle

I've given it a little bit of structure, and then stuck some of the ideas
that I have at the bottom, in no particular order.

Looking forward to seeing some of you tomorrow, and some more of you in a
week!

Cheers,
Devananda
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Fwd: Image Upload error while installing devstack on CI slave machine.

2015-02-02 Thread Abhishek Shrivastava
Hi all,

For the past few days I have been hitting an image upload error during the
devstack installation in my CI. The devstack installation is triggered in CI
whenever someone checks in, and the failure is always the same.

Below is the log for the error:

*2015-01-30 11:48:46.204 | + [[ 0 -ne 0 ]]*
*2015-01-30 11:48:46.205 | +
image=/opt/stack/new/devstack/files/mysql.qcow2*
*2015-01-30 11:48:46.205 | + [[
http://tarballs.openstack.org/trove/images/ubuntu/mysql.qcow2
 =~ openvz
]]*
*2015-01-30 11:48:46.205 | + [[
http://tarballs.openstack.org/trove/images/ubuntu/mysql.qcow2
 =~ \.vmdk
]]*
*2015-01-30 11:48:46.205 | + [[
http://tarballs.openstack.org/trove/images/ubuntu/mysql.qcow2
 =~
\.vhd\.tgz ]]*
*2015-01-30 11:48:46.205 | + [[
http://tarballs.openstack.org/trove/images/ubuntu/mysql.qcow2
 =~
\.xen-raw\.tgz ]]*
*2015-01-30 11:48:46.205 | + local kernel=*
*2015-01-30 11:48:46.205 | + local ramdisk=*
*2015-01-30 11:48:46.206 | + local disk_format=*
*2015-01-30 11:48:46.206 | + local container_format=*
*2015-01-30 11:48:46.206 | + local unpack=*
*2015-01-30 11:48:46.206 | + local img_property=*
*2015-01-30 11:48:46.206 | + case "$image_fname" in*
*2015-01-30 11:48:46.210 | ++ basename
/opt/stack/new/devstack/files/mysql.qcow2 .qcow2*
*2015-01-30 11:48:46.212 | + image_name=mysql*
*2015-01-30 11:48:46.212 | + disk_format=qcow2*
*2015-01-30 11:48:46.212 | + container_format=bare*
*2015-01-30 11:48:46.212 | + is_arch ppc64*
*2015-01-30 11:48:46.215 | ++ uname -m*
*2015-01-30 11:48:46.219 | + [[ x86_64 == \p\p\c\6\4 ]]*
*2015-01-30 11:48:46.219 | + '[' bare = bare ']'*
*2015-01-30 11:48:46.219 | + '[' '' = zcat ']'*
*2015-01-30 11:48:46.219 | + openstack --os-token
ae76e3eb602749f4b2f1428fba21431e --os-url http://127.0.0.1:9292
 image create mysql --public --container-format=bare
--disk-format qcow2*

*2015-01-30 11:48:47.342 | ERROR: openstack *
*2015-01-30 11:48:47.342 |  *
*2015-01-30 11:48:47.342 |   401 Unauthorized*
*2015-01-30 11:48:47.342 |  *
*2015-01-30 11:48:47.342 |  *
*2015-01-30 11:48:47.342 |   401 Unauthorized*
*2015-01-30 11:48:47.343 |   This server could not verify that you are
authorized to access the document you requested. Either you supplied the
wrong credentials (e.g., bad password), or your browser does not understand
how to supply the credentials required.*
*2015-01-30 11:48:47.343 |*
*2015-01-30 11:48:47.343 |  *
*2015-01-30 11:48:47.343 |  (HTTP 401)*
*2015-01-30 11:48:47.381 | + exit_trap*
*2015-01-30 11:48:47.381 | + local r=1*
*2015-01-30 11:48:47.382 | ++ jobs -p*
*2015-01-30 11:48:47.398 | + jobs='29629*
*2015-01-30 11:48:47.398 | 956'*
*2015-01-30 11:48:47.398 | + [[ -n 29629*
*2015-01-30 11:48:47.398 | 956 ]]*
*2015-01-30 11:48:47.398 | + [[ -n
/opt/stack/new/devstacklog.txt.2015-01-30-155739 ]]*
*2015-01-30 11:48:47.398 | + [[ True == \T\r\u\e ]]*
*2015-01-30 11:48:47.399 | + echo 'exit_trap: cleaning up child processes'*
*2015-01-30 11:48:47.399 | exit_trap: cleaning up child processes*
*2015-01-30 11:48:47.399 | + kill 29629 956*
*2015-01-30 11:48:47.399 | ./stack.sh: line 434: kill: (956) - No such
process*
*2015-01-30 11:48:47.399 | + kill_spinner*
*2015-01-30 11:48:47.399 | + '[' '!' -z '' ']'*
*2015-01-30 11:48:47.399 | + [[ 1 -ne 0 ]]*
*2015-01-30 11:48:47.399 | + echo 'Error on exit'*
*2015-01-30 11:48:47.399 | Error on exit*
*2015-01-30 11:48:47.400 | + [[ -z /opt/stack/new ]]*
*2015-01-30 11:48:47.400 | + /opt/stack/new/devstack/tools/worlddump.py -d
/opt/stack/new*
*2015-01-30 11:48:47.438 | World dumping... see
/opt/stack/new/worlddump-2015-01-30-114847.txt for details*
*2015-01-30 11:48:47.440 | df: '/run/user/112/gvfs': Permission denied*
*2015-01-30 11:48:47.468 | ./stack.sh: line 427: 29629 Terminated
   _old_run_process "$service" "$command"*
*2015-01-30 11:48:47.469 | + exit 1*

So, if anyone knows the solution for this problem please do reply.

-- 


*Thanks & Regards,*
*Abhishek*
*Cloudbyte Inc. *
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Logs format on UI (High/6.0)

2015-02-02 Thread Simon Pasquier
Hello,
(resurrecting this old thread because I think I found the root cause)

The problem affects all OpenStack environments using Syslog, not only
Fuel-based installations: when use_syslog is true, the
logging_context_format_string and logging_default_format_string parameters
aren't taken into account (see [1] for details).
The issue is fixed in oslo.log but not in oslo-incubator/log (See [2]).
Depending on when the different projects synchronized with oslo-incubator
during the Juno timeframe, some of them are immune to the bug (from the
Fuel bug report: heat, glance and neutron). As such the bug will affect all
projects that don't switch to oslo.log during the Kilo cycle.

BR,
Simon

[1] https://bugs.launchpad.net/oslo.log/+bug/1399088
[2] https://review.openstack.org/#/c/151157/
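For reference, the options involved look like this in a service's
configuration file (the values below are only illustrative, not a
recommendation). On an affected Juno service, the two format strings are
silently ignored whenever use_syslog is enabled:

```ini
[DEFAULT]
use_syslog = true
syslog_log_facility = LOG_LOCAL0
# On services still carrying the buggy oslo-incubator log module, these two
# options have no effect while use_syslog is true:
logging_context_format_string = %(asctime)s %(levelname)s %(name)s [%(request_id)s] %(message)s
logging_default_format_string = %(asctime)s %(levelname)s %(name)s [-] %(message)s
```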

On Fri, Dec 12, 2014 at 7:35 PM, Dmitry Pyzhov  wrote:

> We have a high priority bug in 6.0:
> https://bugs.launchpad.net/fuel/+bug/1401852. Here is the story.
>
> Our OpenStack services used to send logs in a strange format with an extra
> copy of the timestamp and log level:
> ==> ./neutron-metadata-agent.log <==
> 2014-12-12T11:00:30.098105+00:00 info: 2014-12-12 11:00:30.003 14349 INFO
> neutron.common.config [-] Logging enabled!
>
> And we have a workaround for this. We hide extra timestamp and use second
> loglevel.
>
> In Juno some of the services updated oslo logging and now send logs in a
> simple format:
> ==> ./nova-api.log <==
> 2014-12-12T10:57:15.437488+00:00 debug: Loading app ec2 from
> /etc/nova/api-paste.ini
>
> In order to keep backward compatibility and deal with both formats we have
> a dirty workaround for our workaround:
> https://review.openstack.org/#/c/141450/
>
> As I see, our best choice here is to throw away all workarounds and show
> logs on UI as is. If service sends duplicated data - we should show
> duplicated data.
>
> The long-term fix here is to update oslo logging in all packages. We can do
> it in 6.1.
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Fwd: Image Upload error while installing devstack on CI slave machine.

2015-02-02 Thread Bob Ball
Hi Abhishek,

This is bug https://launchpad.net/bugs/1415795, introduced by
https://review.openstack.org/#/c/142967/ because Swift doesn't use oslo.config.

The fix is at https://review.openstack.org/#/c/151506/ which has not yet been 
approved, but if you can cherry-pick it for your CI it should get it working 
again.

Thanks,

Bob

From: Abhishek Shrivastava [mailto:abhis...@cloudbyte.com]
Sent: 02 February 2015 09:35
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] Fwd: Image Upload error while installing devstack on 
CI slave machine.

Hi all,

For the past few days I have been hitting an image upload error during the
devstack installation in my CI. The devstack installation is triggered in CI
whenever someone checks in, and the failure is always the same.

Below is the log for the error:

2015-01-30 11:48:46.204 | + [[ 0 -ne 0 ]]
2015-01-30 11:48:46.205 | + image=/opt/stack/new/devstack/files/mysql.qcow2
2015-01-30 11:48:46.205 | + [[ 
http://tarballs.openstack.org/trove/images/ubuntu/mysql.qcow2 =~ openvz ]]
2015-01-30 11:48:46.205 | + [[ 
http://tarballs.openstack.org/trove/images/ubuntu/mysql.qcow2 =~ \.vmdk ]]
2015-01-30 11:48:46.205 | + [[ 
http://tarballs.openstack.org/trove/images/ubuntu/mysql.qcow2 =~ \.vhd\.tgz ]]
2015-01-30 11:48:46.205 | + [[ 
http://tarballs.openstack.org/trove/images/ubuntu/mysql.qcow2 =~ \.xen-raw\.tgz 
]]
2015-01-30 11:48:46.205 | + local kernel=
2015-01-30 11:48:46.205 | + local ramdisk=
2015-01-30 11:48:46.206 | + local disk_format=
2015-01-30 11:48:46.206 | + local container_format=
2015-01-30 11:48:46.206 | + local unpack=
2015-01-30 11:48:46.206 | + local img_property=
2015-01-30 11:48:46.206 | + case "$image_fname" in
2015-01-30 11:48:46.210 | ++ basename /opt/stack/new/devstack/files/mysql.qcow2 
.qcow2
2015-01-30 11:48:46.212 | + image_name=mysql
2015-01-30 11:48:46.212 | + disk_format=qcow2
2015-01-30 11:48:46.212 | + container_format=bare
2015-01-30 11:48:46.212 | + is_arch ppc64
2015-01-30 11:48:46.215 | ++ uname -m
2015-01-30 11:48:46.219 | + [[ x86_64 == \p\p\c\6\4 ]]
2015-01-30 11:48:46.219 | + '[' bare = bare ']'
2015-01-30 11:48:46.219 | + '[' '' = zcat ']'
2015-01-30 11:48:46.219 | + openstack --os-token 
ae76e3eb602749f4b2f1428fba21431e --os-url http://127.0.0.1:9292 image create 
mysql --public --container-format=bare --disk-format qcow2
2015-01-30 11:48:47.342 | ERROR: openstack 
2015-01-30 11:48:47.342 |  
2015-01-30 11:48:47.342 |   401 Unauthorized
2015-01-30 11:48:47.342 |  
2015-01-30 11:48:47.342 |  
2015-01-30 11:48:47.342 |   401 Unauthorized
2015-01-30 11:48:47.343 |   This server could not verify that you are 
authorized to access the document you requested. Either you supplied the wrong 
credentials (e.g., bad password), or your browser does not understand how to 
supply the credentials required.
2015-01-30 11:48:47.343 |
2015-01-30 11:48:47.343 |  
2015-01-30 11:48:47.343 |  (HTTP 401)
2015-01-30 11:48:47.381 | + exit_trap
2015-01-30 11:48:47.381 | + local r=1
2015-01-30 11:48:47.382 | ++ jobs -p
2015-01-30 11:48:47.398 | + jobs='29629
2015-01-30 11:48:47.398 | 956'
2015-01-30 11:48:47.398 | + [[ -n 29629
2015-01-30 11:48:47.398 | 956 ]]
2015-01-30 11:48:47.398 | + [[ -n 
/opt/stack/new/devstacklog.txt.2015-01-30-155739 ]]
2015-01-30 11:48:47.398 | + [[ True == \T\r\u\e ]]
2015-01-30 11:48:47.399 | + echo 'exit_trap: cleaning up child processes'
2015-01-30 11:48:47.399 | exit_trap: cleaning up child processes
2015-01-30 11:48:47.399 | + kill 29629 956
2015-01-30 11:48:47.399 | ./stack.sh: line 434: kill: (956) - No such process
2015-01-30 11:48:47.399 | + kill_spinner
2015-01-30 11:48:47.399 | + '[' '!' -z '' ']'
2015-01-30 11:48:47.399 | + [[ 1 -ne 0 ]]
2015-01-30 11:48:47.399 | + echo 'Error on exit'
2015-01-30 11:48:47.399 | Error on exit
2015-01-30 11:48:47.400 | + [[ -z /opt/stack/new ]]
2015-01-30 11:48:47.400 | + /opt/stack/new/devstack/tools/worlddump.py -d 
/opt/stack/new
2015-01-30 11:48:47.438 | World dumping... see 
/opt/stack/new/worlddump-2015-01-30-114847.txt for details
2015-01-30 11:48:47.440 | df: '/run/user/112/gvfs': Permission denied
2015-01-30 11:48:47.468 | ./stack.sh: line 427: 29629 Terminated  
_old_run_process "$service" "$command"
2015-01-30 11:48:47.469 | + exit 1

So, if anyone knows the solution for this problem please do reply.

--
Thanks & Regards,
Abhishek
Cloudbyte Inc.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Deprecation of in tree EC2 API in Nova for Kilo release

2015-02-02 Thread Daniel P. Berrange
On Fri, Jan 30, 2015 at 04:38:44PM -0600, Matt Riedemann wrote:
> 
> 
> On 1/30/2015 3:16 PM, Soren Hansen wrote:
> >As I've said a couple of times in the past, I think the
> >architecturally sound approach is to keep this inside Nova.
> >
> >The two main reasons are:
> >  * Having multiple frontend API's keeps us honest in terms of
> >separation between the different layers in Nova.
> >  * Having the EC2 API inside Nova ensures the internal data model is
> >rich enough to "feed" the EC2 API. If some field's only use is to
> >enable the EC2 API and the EC2 API is a separate component, it's not
> >hard to imagine it being deprecated as well.
> >
> >I fear that deprecation is a one way street and I would like to ask
> >one more chance to resucitate it in its current home.
> >
> >I could be open to a discussion about putting it into a separate
> >repository, but having it functionally remain in its current place, if
> >that's somehow easier to swallow.
> >
> >
> >Soren Hansen | http://linux2go.dk/
> >Ubuntu Developer | http://www.ubuntu.com/
> >OpenStack Developer  | http://www.openstack.org/
> >
> >
> >2015-01-28 20:56 GMT+01:00 Sean Dague :
> >>The following review for Kilo deprecates the EC2 API in Nova -
> >>https://review.openstack.org/#/c/150929/
> >>
> >>There are a number of reasons for this. The EC2 API has been slowly
> >>rotting in the Nova tree, was never highly tested, implements a
> >>substantially older version of what AWS has, and currently can't work
> >>with any recent release of the boto library (due to implementing an
> >>extremely old version of auth). This has given the misunderstanding that
> >>it's a first-class supported feature in OpenStack, which it hasn't been
> >>in quite some time. Deprecating honestly communicates where we stand.
> >>
> >>There is a new stackforge project which is getting some activity now -
> >>https://github.com/stackforge/ec2-api. The intent and hope is that is
> >>the path forward for the portion of the community that wants this
> >>feature, and that efforts will be focused there.
> >>
> >>Comments are welcomed, but we've attempted to get more people engaged to
> >>address these issues over the last 18 months, and never really had
> >>anyone step up. Without some real maintainers of this code in Nova (and
> >>tests somewhere in the community) it's really no longer viable.
> >>
> >> -Sean
> >>
> >>--
> >>Sean Dague
> >>http://dague.net
> >>
> >>
> >>__
> >>OpenStack Development Mailing List (not for usage questions)
> >>Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> >>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >>
> >
> >__
> >OpenStack Development Mailing List (not for usage questions)
> >Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> >http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
> 
> Deprecation isn't a one-way street really, nova-network was deprecated for a
> couple of releases and then undeprecated and opened up again for feature
> development (at least for a short while until the migration to neutron is
> sorted out and implemented).

Nova-network was prematurely deprecated as the alternative was not fully
ready. That's a prime example of why we should not be deprecating EC2
right now either.

Deprecation is a mechanism by which you inform users that they should
stop using the current functionality and switch to $NEW-THING as the
replacement. In the case of nova-network they could not switch because
neutron did not offer feature parity at the time we were asking them
to switch (nor did it have an upgrade path for that matter). Likewise
in the case of the EC2 API, the alternative is not ready for users to
actually switch to at a production quality level.

What we are actually trying to tell users is that we think the out-of-tree
EC2 implementation is the long-term strategic direction for EC2 support
with Nova, and that the current in-tree impl is not being actively
developed. That's a sensible thing to tell our users, but deprecation is
the wrong mechanism for it; that is a task best suited for release notes.
Keep deprecation available as the mechanism for telling users that the time
has come for them to actively switch their deployments to the new impl.


Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:un

Re: [openstack-dev] Fwd: Image Upload error while installing devstack on CI slave machine.

2015-02-02 Thread Abhishek Shrivastava
Hi Bob,

Thanks for the reply; this is a great help to me.

On Mon, Feb 2, 2015 at 3:11 PM, Bob Ball  wrote:

>  Hi Abhishek,
>
>
>
> This is bug https://launchpad.net/bugs/1415795, introduced by
> https://review.openstack.org/#/c/142967/ because Swift doesn't use
> oslo.config.
>
>
>
> The fix is at https://review.openstack.org/#/c/151506/ which has not yet
> been approved, but if you can cherry-pick it for your CI it should get it
> working again.
>
>
>
> Thanks,
>
>
>
> Bob
>
>
>
> *From:* Abhishek Shrivastava [mailto:abhis...@cloudbyte.com]
> *Sent:* 02 February 2015 09:35
> *To:* OpenStack Development Mailing List (not for usage questions)
> *Subject:* [openstack-dev] Fwd: Image Upload error while installing
> devstack on CI slave machine.
>
>
>
> Hi all,
>
>
>
> For the past few days I have been hitting an image upload error during the
> devstack installation in my CI. The devstack installation is triggered in
> CI whenever someone checks in, and the failure is always the same.
>
>
>
> Below is the log for the error:
>
>
>
> 2015-01-30 11:48:46.204 | + [[ 0 -ne 0 ]]
> 2015-01-30 11:48:46.205 | + image=/opt/stack/new/devstack/files/mysql.qcow2
> 2015-01-30 11:48:46.205 | + [[ http://tarballs.openstack.org/trove/images/ubuntu/mysql.qcow2 =~ openvz ]]
> 2015-01-30 11:48:46.205 | + [[ http://tarballs.openstack.org/trove/images/ubuntu/mysql.qcow2 =~ \.vmdk ]]
> 2015-01-30 11:48:46.205 | + [[ http://tarballs.openstack.org/trove/images/ubuntu/mysql.qcow2 =~ \.vhd\.tgz ]]
> 2015-01-30 11:48:46.205 | + [[ http://tarballs.openstack.org/trove/images/ubuntu/mysql.qcow2 =~ \.xen-raw\.tgz ]]
> 2015-01-30 11:48:46.205 | + local kernel=
> 2015-01-30 11:48:46.205 | + local ramdisk=
> 2015-01-30 11:48:46.206 | + local disk_format=
> 2015-01-30 11:48:46.206 | + local container_format=
> 2015-01-30 11:48:46.206 | + local unpack=
> 2015-01-30 11:48:46.206 | + local img_property=
> 2015-01-30 11:48:46.206 | + case "$image_fname" in
> 2015-01-30 11:48:46.210 | ++ basename /opt/stack/new/devstack/files/mysql.qcow2 .qcow2
> 2015-01-30 11:48:46.212 | + image_name=mysql
> 2015-01-30 11:48:46.212 | + disk_format=qcow2
> 2015-01-30 11:48:46.212 | + container_format=bare
> 2015-01-30 11:48:46.212 | + is_arch ppc64
> 2015-01-30 11:48:46.215 | ++ uname -m
> 2015-01-30 11:48:46.219 | + [[ x86_64 == \p\p\c\6\4 ]]
> 2015-01-30 11:48:46.219 | + '[' bare = bare ']'
> 2015-01-30 11:48:46.219 | + '[' '' = zcat ']'
> 2015-01-30 11:48:46.219 | + openstack --os-token ae76e3eb602749f4b2f1428fba21431e --os-url http://127.0.0.1:9292 image create mysql --public --container-format=bare --disk-format qcow2
> 2015-01-30 11:48:47.342 | ERROR: openstack
> 2015-01-30 11:48:47.342 | 401 Unauthorized
> 2015-01-30 11:48:47.343 | This server could not verify that you are authorized to access the document you requested. Either you supplied the wrong credentials (e.g., bad password), or your browser does not understand how to supply the credentials required.
> 2015-01-30 11:48:47.343 | (HTTP 401)
> 2015-01-30 11:48:47.381 | + exit_trap
> 2015-01-30 11:48:47.381 | + local r=1
> 2015-01-30 11:48:47.382 | ++ jobs -p
> 2015-01-30 11:48:47.398 | + jobs='29629
> 2015-01-30 11:48:47.398 | 956'
> 2015-01-30 11:48:47.398 | + [[ -n 29629
> 2015-01-30 11:48:47.398 | 956 ]]
> 2015-01-30 11:48:47.398 | + [[ -n /opt/stack/new/devstacklog.txt.2015-01-30-155739 ]]
> 2015-01-30 11:48:47.398 | + [[ True == \T\r\u\e ]]
> 2015-01-30 11:48:47.399 | + echo 'exit_trap: cleaning up child processes'
> 2015-01-30 11:48:47.399 | exit_trap: cleaning up child processes
> 2015-01-30 11:48:47.399 | + kill 29629 956
> 2015-01-30 11:48:47.399 | ./stack.sh: line 434: kill: (956) - No such process
> 2015-01-30 11:48:47.399 | + kill_spinner
> 2015-01-30 11:48:47.399 | + '[' '!' -z '' ']'
> 2015-01-30 11:48:47.399 | + [[ 1 -ne 0 ]]
> 2015-01-30 11:48:47.399 | + echo 'Error on exit'
> 2015-01-30 11:48:47.399 | Error on exit
> 2015-01-30 11:48:47.400 | + [[ -z /opt/stack/new ]]
> 2015-01-30 11:48:47.400 | + /opt/stack/new/devstack/tools/worlddump.py -d /opt/stack/new
> 2015-01-30 11:48:47.438 | World dumping... see /opt/stack/new/worlddump-2015-01-30-114847.txt for details
> 2015-01-30 11:48:47.440 | df: '/run/user/112/

Re: [openstack-dev] [OpenStack Foundation] Finding people to work on the EC2 API in Nova

2015-02-02 Thread Daniel P. Berrange
On Fri, Jan 30, 2015 at 07:57:08PM +, Tim Bell wrote:
> Alex,
> 
> 
> 
> Many thanks for the constructive approach. I've added an item to the list for 
> the Ops meetup in March to see who would be interested to help.
> 
> 
> 
> As discussed on the change, it is likely that there would need to be some
> additional Nova APIs added to support the full EC2 semantics. Thus, there
> would need to be support from the Nova team to enable these additional
> functions. Having tables in the EC2 layer which get out of sync with those
> in the Nova layer would be a significant problem in production.

Adding new APIs to Nova to support an out-of-tree EC2 implementation is
perfectly reasonable. Indeed, if there is data needed by EC2 that Nova
doesn't provide already, chances are that providing this data would be
useful to other regular users / client apps too. It just really needs
someone to submit a spec with details of exactly which functionality is
missing. It shouldn't be hard for Nova cores to support it, given the
desire to see the out-of-tree EC2 implementation take over and the in-tree
implementation removed.

> I think this would merit a good slot in the Vancouver design sessions so we 
> can
> also discuss documentation, migration, packaging, configuration management,
> scaling, HA, etc.

I'd really strongly encourage the people working on this to submit the
detailed spec for the new APIs well before the Vancouver design summit.
Likewise, at least document somewhere the thoughts on upgrade path plans.
We need to at least discuss & iterate on this a few times online, so that
we can take advantage of the f2f time for any remaining harder parts of
the discussion.

Regards,
Daniel
-- 
|: http://berrange.com -o- http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org -o- http://virt-manager.org :|
|: http://autobuild.org -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org -o- http://live.gnome.org/gtk-vnc :|

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo.db][nova] Use of asynchronous slaves in Nova (was: Deprecating use_slave in Nova)

2015-02-02 Thread Matthew Booth
On 30/01/15 19:06, Mike Bayer wrote:
> 
> 
> Matthew Booth  wrote:
> 
>> At some point in the near future, hopefully early in L, we're intending
>> to update Nova to use the new database transaction management in
>> oslo.db's enginefacade.
>>
>> Spec:
>> http://git.openstack.org/cgit/openstack/oslo-specs/plain/specs/kilo/make-enginefacade-a-facade.rst
>>
>> Implementation:
>> https://review.openstack.org/#/c/138215/
>>
>> One of the effects of this is that we will always know when we are in a
>> read-only transaction, or a transaction which includes writes. We intend
>> to use this new contextual information to make greater use of read-only
>> slave databases. We are currently proposing that if an admin has
>> configured a slave database, we will use the slave for *all* read-only
>> transactions. This would make the use_slave parameter passed to some
>> Nova apis redundant, as we would always use the slave where the context
>> allows.
>>
>> However, using a slave database has a potential pitfall when mixed with
>> separate write transactions. A caller might currently:
>>
>> 1. start a write transaction
>> 2. update the database
>> 3. commit the transaction
>> 4. start a read transaction
>> 5. read from the database
>>
>> The client might expect data written in step 2 to be reflected in data
>> read in step 5. I can think of 3 cases here:
>>
>> 1. A short-lived RPC call is using multiple transactions
>>
>> This is a bug which the new enginefacade will help us eliminate. We
>> should not be using multiple transactions in this case. If the reads are
>> in the same transaction as the write: they will be on the master, they
>> will be consistent, and there is no problem. As a bonus, lots of these
>> will be race conditions, and we'll fix at least some.
>>
>> 2. A long-lived task is using multiple transactions between long-running
>> sub-tasks
>>
>> In this case, for example creating a new instance, we genuinely want
>> multiple transactions: we don't want to hold a database transaction open
>> while we copy images around. However, I can't immediately think of a
>> situation where we'd write data, then subsequently want to read it back
>> from the db in a read-only transaction. I think we will typically be
>> updating state, meaning it's going to be a succession of write transactions.
>>
>> 3. Separate RPC calls from a remote client
>>
>> This seems potentially problematic to me. A client makes an RPC call to
>> create a new object. The client subsequently tries to retrieve the
>> created object, and gets a 404.
>>
>> Summary: 1 is a class of bugs which we should be able to find fairly
>> mechanically through unit testing. 2 probably isn't a problem in
>> practise? 3 seems like a problem, unless consumers of cloud services are
>> supposed to expect that sort of thing.
>>
>> I understand that slave databases can occasionally get very behind. How
>> behind is this in practise?
>>
>> How do we use use_slave currently? Why do we need a use_slave parameter
>> passed in via rpc, when it should be apparent to the developer whether a
>> particular task is safe for out-of-date data.
>>
>> Any chance they have some kind of barrier mechanism? e.g. block until
>> the current state contains transaction X.
>>
>> General comments on the usefulness of slave databases, and the
>> desirability of making maximum use of them?
> 
> keep in mind that the big win we get from writer()/reader() is that
> writer() can remain pointing to one node in a Galera cluster, and
> reader() can point to the cluster as a whole. reader() by default should
> definitely refer to the cluster as a whole, that is, "use slave".
> 
> As for issue #3, galera cluster is synchronous replication. Slaves
> don't get "behind" at all. So to the degree that we need to
> transparently support some other kind of master/slave where slaves do
> get behind, perhaps there would be a reader(synchronous_required=True)
> kind of thing; based on configuration, it would be known that
> "synchronous" either means we don't care (using galera) or that we
> should use the writer (an asynchronous replication scheme).
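The writer()/reader() split described here can be sketched in miniature
(illustrative names only; this is not the actual oslo.db enginefacade API,
just the pattern it implements):

```python
import contextlib


class TransactionContext:
    """Toy model of the writer()/reader() split: writes are pinned to one
    node, reads may fan out to the cluster."""

    def __init__(self, master, slaves, slaves_are_synchronous=True):
        self.master = master
        self.slaves = slaves  # with Galera, the cluster as a whole
        self.slaves_are_synchronous = slaves_are_synchronous

    @contextlib.contextmanager
    def writer(self):
        # Write transactions always go to a single node, so a Galera
        # cluster never sees multi-master write conflicts.
        yield self.master

    @contextlib.contextmanager
    def reader(self, synchronous_required=False):
        # With synchronous (Galera) replication any node is safe to read
        # from; with asynchronous slaves, a caller needing read-your-writes
        # semantics has to fall back to the master.
        if synchronous_required and not self.slaves_are_synchronous:
            yield self.master
        else:
            yield self.slaves[0]
```

A caller that cares about issue #3 would ask for
reader(synchronous_required=True) and transparently get the master when the
configured slaves are asynchronous.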

This sounds like the crux of the matter to me. After some (admittedly
cursory) reading, it seems that galera can use both synchronous and
asynchronous replication. Up until Friday I had only ever considered
synchronous replication, which would not be a problem.

I think opportunistically using synchronous slaves whenever possible
could only be a win. Are there any unpleasant practicalities which might
mean this isn't the case?

However, it sounds to me like there is at least some OpenStack
deployment in production using asynchronous slaves, otherwise the issue
of 'getting behind' wouldn't have come up. We need to understand:

* Are people actually using asynchronous slaves?
* If so, why did they choose to do that, and
* what are they using them for?

> 
> All of this points to the fact that I really don't think the
> directives / flags should say anything about which specific database

Re: [openstack-dev] [neutron][ml2] How to get compute host details

2015-02-02 Thread Kevin Benton
Your VM must be launched on the controller node then. In a multi-node setup
the controller will also act as a compute node unless you have disabled the
n-cpu service. The 'host' attribute is specifically to indicate where a
port is being used. It's not for anything else.
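In a mechanism driver that looks roughly like the sketch below (the real
driver would subclass neutron's MechanismDriver base class; only the use of
the context's 'host' attribute is taken from this thread, the rest is
illustrative):

```python
class HostTrackingDriver:
    """Sketch of an ML2 mechanism driver hook: the context passed to port
    operations carries a 'host' attribute naming the node that binds the
    port, which is all Neutron knows about the compute side."""

    def __init__(self):
        self.ports_by_host = {}

    def create_port_postcommit(self, context):
        # context.host is the hostname reported when the port is bound;
        # for a VM scheduled to a compute node, that is the compute node,
        # not the controller.
        self.ports_by_host.setdefault(context.host, []).append(
            context.current["id"])
```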

On Mon, Feb 2, 2015 at 1:15 AM, Harshada Kakad 
wrote:

> Thanks Kevin for reply.
> But the 'host' attribute returns the controller hostname, not the compute
> host name. I have a multi-node setup, and I want to know the compute host
> where the VM gets launched.
>
> On Mon, Feb 2, 2015 at 2:19 PM, Kevin Benton  wrote:
>
>> ML2 makes the hostname available in the context it passes to the drivers
>> via the 'host' attribute.[1] This is the only thing Neutron knows about the
>> compute node using the port.
>>
>> 1.
>> https://github.com/openstack/neutron/blob/master/neutron/plugins/ml2/driver_api.py#L776
>>
>> On Sun, Feb 1, 2015 at 10:11 PM, Harshada Kakad <
>> harshada.ka...@izeltech.com> wrote:
>>
>>> Hi All,
>>>
>>> I am developing an ML2 driver and I want compute host details during
>>> port creation. That is, I have a multi-node setup, and when I launch a
>>> VM I want to know which compute node the VM got launched on at port
>>> creation time. Can anyone please help me with this.
>>>
>>> Thanks in Advance.
>>>
>>> --
>>> *Regards,*
>>> *Harshada Kakad*
>>> **
>>> *Sr. Software Engineer*
>>> *C3/101, Saudamini Complex, Right Bhusari Colony, Paud Road, Pune –
>>> 411013, India*
>>> *Mobile-9689187388*
>>> *Email-Id : harshada.ka...@izeltech.com *
>>> *website : www.izeltech.com *
>>>
>>> *Disclaimer*
>>> The information contained in this e-mail and any attachment(s) to this
>>> message are intended for the exclusive use of the addressee(s) and may
>>> contain proprietary, confidential or privileged information of Izel
>>> Technologies Pvt. Ltd. If you are not the intended recipient, you are
>>> notified that any review, use, any form of reproduction, dissemination,
>>> copying, disclosure, modification, distribution and/or publication of this
>>> e-mail message, contents or its attachment(s) is strictly prohibited and
>>> you are requested to notify us the same immediately by e-mail and delete
>>> this mail immediately. Izel Technologies Pvt. Ltd accepts no liability for
>>> virus infected e-mail or errors or omissions or consequences which may
>>> arise as a result of this e-mail transmission.
>>> *End of Disclaimer*
>>>
>>>
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe:
>>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>>
>>
>>
>> --
>> Kevin Benton
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Kevin Benton
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Cinder][nova] Cinder backend for ephemeral disks?

2015-02-02 Thread Tobias Engelbert
Hi,
It was not re-proposed for Kilo because there is foundational work ongoing in
Cinder to create a common library, Brick, that can be used by both Cinder and
Nova. There might be a chance to get it in during L*. It would be nice to get
some people together working on it.
/Tobi

-Original Message-
From: Michael Still [mailto:mi...@stillhq.com] 
Sent: Monday, February 02, 2015 12:40 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Cinder][nova] Cinder backend for ephemeral disks?

It looks like this was never re-proposed for Kilo. I am open to it being 
proposed for L* when that release opens for specs soon, but we need a developer 
to be advocating for it.

Michael

On Sun, Feb 1, 2015 at 5:22 PM, Adam Lawson  wrote:
> Question: it looks like this spec was abandoned; it is hard to tell if it
> is being addressed elsewhere. It was a good idea that received a -2 and
> was then ultimately abandoned due to the Juno freeze, I think.
>
> https://blueprints.launchpad.net/nova/+spec/nova-ephemeral-cinder
>
>
> __
>  OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: 
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



--
Rackspace Australia

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



Re: [openstack-dev] [nova][libvirt] RFC: ensuring live migration ends

2015-02-02 Thread Daniel P. Berrange
On Sat, Jan 31, 2015 at 03:55:23AM +0100, Vladik Romanovsky wrote:
> 
> 
> - Original Message -
> > From: "Daniel P. Berrange" 
> > To: openstack-dev@lists.openstack.org, 
> > openstack-operat...@lists.openstack.org
> > Sent: Friday, 30 January, 2015 11:47:16 AM
> > Subject: [openstack-dev] [nova][libvirt] RFC: ensuring live migration ends
> > 
> > In working on a recent Nova migration bug
> > 
> >   https://bugs.launchpad.net/nova/+bug/1414065
> > 
> > I had cause to refactor the way the nova libvirt driver monitors live
> > migration completion/failure/progress. This refactor has opened the
> > door for doing more intelligent active management of the live migration
> > process.
> > 
> > As it stands today, we launch live migration, with a possible bandwidth
> > limit applied and just pray that it succeeds eventually. It might take
> > until the end of the universe and we'll happily wait that long. This is
> > pretty dumb really and I think we really ought to do better. The problem
> > is that I'm not really sure what "better" should mean, except for ensuring
> > it doesn't run forever.
> > 
> > As a demo, I pushed a quick proof of concept showing how we could easily
> > just abort live migration after say 10 minutes
> > 
> >   https://review.openstack.org/#/c/151665/
> > 
> > There are a number of possible things to consider though...
> > 
> > First how to detect when live migration isn't going to succeed.
> > 
> >  - Could do a crude timeout, eg allow 10 minutes to succeed or else.
> > 
> >  - Look at data transfer stats (memory transferred, memory remaining to
> >transfer, disk transferred, disk remaining to transfer) to determine
> >if it is making forward progress.
> 
> I think this is a better option. We could define a timeout for the progress
> and cancel if there is no progress. IIRC there were similar debates about it
> in Ovirt, we could do something similar:
> https://github.com/oVirt/vdsm/blob/master/vdsm/virt/migration.py#L430

That looks like quite a good implementation to follow. They are monitoring
progress and if they see progress stalling, then they wait a configurable
time before aborting. That should avoid prematurely aborting migrations
that are actually working, while avoiding migrations getting stuck forever.
They also have a global timeout which is based on the number of GB of RAM
the guest has, which is also a good idea compared to a one-size-fits-all
timeout.
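That stall-based policy can be sketched as follows (a simplified model of
the oVirt approach; the class name, timeout value, and injectable clock are
illustrative):

```python
import time


class MigrationWatchdog:
    """Abort only if the amount of data remaining to transfer has not
    decreased for `stall_timeout` seconds, so migrations that are slow but
    progressing are never killed prematurely."""

    def __init__(self, stall_timeout=150, clock=time.monotonic):
        self.stall_timeout = stall_timeout
        self.clock = clock  # injectable for testing
        self.lowest_remaining = None
        self.last_progress_at = None

    def should_abort(self, data_remaining):
        now = self.clock()
        if (self.lowest_remaining is None
                or data_remaining < self.lowest_remaining):
            # Forward progress: record the new low-water mark and reset
            # the stall timer.
            self.lowest_remaining = data_remaining
            self.last_progress_at = now
            return False
        # No new progress: abort once we have stalled for too long.
        return (now - self.last_progress_at) > self.stall_timeout
```

A global cap scaled by guest RAM size, as oVirt also does, would sit on top
of this per-iteration check.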

> > Fourth there's a question of whether we should give the tenant user or
> > cloud admin further APIs for influencing migration
> > 
> >  - Add an explicit API for cancelling migration ?
> > 
> >  - Add APIs for setting tunables like downtime, bandwidth on the fly ?
> > 
> >  - Or drive some of the tunables like downtime, bandwidth, or policies
> >like cancel vs paused from flavour or image metadata properties ?
> > 
> >  - Allow operations like evacuate to specify a live migration policy
> >eg switch non-live migrate after 5 minutes ?
> > 
> IMHO, an explicit API for cancelling migration is very much needed.
> I remember cases when migrations took 8 or more hours, leaving the
> admins helpless :)

The oVirt heuristics should avoid that stuck scenario, but I do think
we need an API anyway.

> Also, I very much like the idea of having tunables and policy to set
> in the flavours and image properties.
> To allow the administrators to set these as a "template" in the flavour
> and also to let the users to update/override or "request" these options
> as they should know the best (hopefully) what is running in their guests.

We do need to make sure the administrators can always force migration
to succeed regardless of what the user might have configured, so they
can be ensured of emergency evacuation if needed.

Regards,
Daniel

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][libvirt] RFC: ensuring live migration ends

2015-02-02 Thread Daniel P. Berrange
On Mon, Feb 02, 2015 at 08:24:20AM +1300, Robert Collins wrote:
> On 31 January 2015 at 05:47, Daniel P. Berrange  wrote:
> > In working on a recent Nova migration bug
> >
> >   https://bugs.launchpad.net/nova/+bug/1414065
> >
> > I had cause to refactor the way the nova libvirt driver monitors live
> > migration completion/failure/progress. This refactor has opened the
> > door for doing more intelligent active management of the live migration
> > process.
> ...
> > What kind of things would be the biggest win from Operators' or tenants'
> > POV ?
> 
> Awesome. Couple thoughts from my perspective. Firstly, there's a bunch
> of situation dependent tuning. One thing Crowbar does really nicely is
> that you specify the host layout in broad abstract terms - e.g. 'first
> 10G network link' and so on : some of your settings above like whether
> to compress page are going to be heavily dependent on the bandwidth
> available (I doubt that compression is a win on a 100G link for
> instance, and would be suspect at 10G even). So it would be nice if
> there was a single dial or two to set and Nova would auto-calculate
> good defaults from that (with appropriate overrides being available).

I wonder how such an idea would fit into Nova, since it doesn't really
have that kind of knowledge about the network deployment characteristics.

> Operationally avoiding trouble is better than being able to fix it, so
> I quite like the idea of defaulting the auto-converge option on, or
> perhaps making it controllable via flavours, so that operators can
> offer (and identify!) those particularly performance sensitive
> workloads rather than having to guess which instances are special and
> which aren't.

I'll investigate the auto-converge further to find out what the
potential downsides of it are. If we can unconditionally enable
it, it would be simpler than adding yet more tunables.

> Being able to cancel the migration would be good. Relatedly being able
> to restart nova-compute while a migration is going on would be good
> (or put differently, a migration happening shouldn't prevent a deploy
> of Nova code: interlocks like that make continuous deployment much
> harder).
> 
> If we can't already, I'd like as a user to be able to see that the
> migration is happening (allows diagnosis of transient issues during
> the migration). Some ops folk may want to hide that of course.
> 
> I'm not sure that automatically rolling back after N minutes makes
> sense : if the impact on the cluster is significant then 1 minute vs
> 10 doesn't instrinsically matter: what matters more is preventing too
> many concurrent migrations, so that would be another feature that I
> don't think we have yet: don't allow more than some N inbound and M
> outbound live migrations to a compute host at any time, to prevent IO
> storms. We may want to log with NOTIFICATION migrations that are still
> progressing but appear to be having trouble completing. And of course
> an admin API to query all migrations in progress to allow API driven
> health checks by monitoring tools - which gives the power to manage
> things to admins without us having to write a probably-too-simple
> config interface.

Interesting, the point about concurrent migrations hadn't occurred to
me before, but it does of course make sense since migration is
primarily network bandwidth limited, though disk bandwidth is relevant
too if doing block migration.

Regards,
Daniel

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][libvirt] RFC: ensuring live migration ends

2015-02-02 Thread Daniel P. Berrange
On Sun, Feb 01, 2015 at 11:20:08AM -0800, Noel Burton-Krahn wrote:
> Thanks for bringing this up, Daniel.  I don't think it makes sense to have
> a timeout on live migration, but operators should be able to cancel it,
> just like any other unbounded long-running process.  For example, there's
> no timeout on file transfers, but they need an interface to report progress
> and to cancel them. That would imply an option to cancel evacuation too.

There has been periodic talk about a generic "tasks API" in Nova for managing
long running operations and getting information about their progress, but I
am not sure what the status of that is. It would obviously be applicable to
migration if that's a route we took.

Regards,
Daniel

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack-operators] [nova][libvirt] RFC: ensuring live migration ends

2015-02-02 Thread Daniel P. Berrange
On Sun, Feb 01, 2015 at 03:03:45PM -0700, David Medberry wrote:
> I'll second much of what Rob said:
> API that indicated how many live-migrations (l-m) were going would be good.
> API that told you what progress (and start time) a given l-m had made would
> be great.
> API to cancel a given l-m would also be great. I think this is a preferred
> approach over an auto timeout (it would give us the tools we need to
> implement an auto timeout though.)
> 
> I like the idea of trying auto-convergence (and agree it should be flavor
> feature and likely not the default.) I suspect this one needs some testing.
> It may be fine to automatically do this if it doesn't actually throttle the
> VM some 90-99% of the time.  (Presumably this could also increase the max
> downtime between cutover as well as throttling the vm.)

For reference, the auto-convergence code in QEMU is this commit:

  
http://git.qemu.org/?p=qemu.git;a=commit;h=7ca1dfad952d8a8655b32e78623edcc38a51b14a

If the migration operation is making good progress, it does not have any
impact on the guest. Periodically it checks the data transfer progress and
if the guest has dirtied more than 50% of the pages than were transferred
it'll start throttling. It throttles by simply preventing the guest
vCPUs from running for a period of time. So the guest will obviously get
a performance drop, but the migration is more likely (but not guaranteed)
to succeed.
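The per-period decision can be modelled like this (a sketch of the heuristic
described above; the 20-point throttle increment and 99% cap are
illustrative, not QEMU's exact values):

```python
def autoconverge_step(pages_transferred, pages_dirtied, throttle_pct):
    """If the guest dirtied more than 50% of the pages transferred in the
    last monitoring period, migration is losing the race, so throttle the
    vCPUs harder; otherwise leave the throttle alone."""
    if pages_dirtied * 2 > pages_transferred:
        return min(throttle_pct + 20, 99)  # illustrative step and cap
    return throttle_pct
```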

From the QEMU level you can actually enable this on the fly it seems, but
libvirt only lets it be set at startup of migration.

Regards,
Daniel

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [api][nova] Openstack HTTP error codes

2015-02-02 Thread Feodor Tersin
Hi Ken,

1. imageRef isn't the only attribute which could receive an image id. There
are kernelId, ramdiskId, and even bdm v2 as well. So we couldn't guess
which attribute has the invalid value.

2. Besides the NotFound case, there are other mixed cases. Take attaching a
volume: a mountpoint can be busy, or the volume can be in use by another
instance - both cases generate a conflict error. Do you suggest using a
specially formatted message in all such cases (where the same HTTP error
code has several possible causes)? But to make working with the Nova API
straightforward, all messages would have to follow this format, even in the
simplest cases.

3. How do you parse a localized message? A Nova API client shouldn't be
restricted to the en_US locale when talking to Nova, because it has to
display the messages generated by Nova to an end user.
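One way out of all three problems is a stable, machine-readable error code
alongside the localized message, so clients branch on the code and never
parse the text. A sketch (the field names and code values are illustrative,
not an agreed OpenStack format):

```python
import json


def make_error_body(status, code, message, details=None):
    """Build an error payload carrying a stable `code` that clients can
    match on, independent of the locale of the human-readable `message`."""
    body = {"status": status, "code": code, "message": message}
    if details:
        body["details"] = details
    return json.dumps(body)
```

A client can then distinguish a bad security-group name from a bad keypair
name by comparing `code`, with no message parsing at all.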



On Mon, Feb 2, 2015 at 8:28 AM, Ken'ichi Ohmichi 
wrote:

> 2015-01-30 18:13 GMT+09:00 Simon Pasquier :
> > On Fri, Jan 30, 2015 at 3:05 AM, Kenichi Oomichi <
> oomi...@mxs.nes.nec.co.jp>
> > wrote:
> >>
> >> > -Original Message-
> >> > From: Roman Podoliaka [mailto:rpodoly...@mirantis.com]
> >> > Sent: Friday, January 30, 2015 2:12 AM
> >> > To: OpenStack Development Mailing List (not for usage questions)
> >> > Subject: Re: [openstack-dev] [api][nova] Openstack HTTP error codes
> >> >
> >> > Hi Anne,
> >> >
> >> > I think Eugeniya refers to a problem, that we can't really distinguish
> >> > between two different  badRequest (400) errors (e.g. wrong security
> >> > group name vs wrong key pair name when starting an instance), unless
> >> > we parse the error description, which might be error prone.
> >>
> >> Yeah, current Nova v2 API (not v2.1 API) returns inconsistent messages
> >> in badRequest responses, because these messages are implemented at many
> >> places. But Nova v2.1 API can return consistent messages in most cases
> >> because its input validation framework generates messages
> >> automatically[1].
> >
> >
> > When you say "most cases", you mean JSON schema validation only, right?
> > IIUC, this won't apply to the errors described by the OP such as invalid
> key
> > name, unknown security group, ...
>
> Yeah, right.
> I implied that in "most cases" and I have one patch[1] for covering them.
> By the patch, the sample messages also will be based on the same
> format and be consistent.
The other choice we have is CamelCase exceptions, as in the first mail;
that is also interesting.
>
> Thanks
> Ken Ohmichi
>
> ---
> [1]: https://review.openstack.org/#/c/151954
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo.db] PyMySQL review

2015-02-02 Thread Yuriy Taraday
On Mon Feb 02 2015 at 11:49:31 AM Julien Danjou  wrote:

> On Fri, Jan 30 2015, Yuriy Taraday wrote:
>
> > That's great research! Under its impression I've spent most of last
> > evening reading the PyMySQL sources. It looks like it doesn't need C
> > speedups currently so much as plain old Python optimizations. The
> > protocol parsing code seems very inefficient (chained struct.unpack's
> > interleaved with data copying and util method calls that do the same
> > struct.unpack with an unnecessary type check... wow...). That's a huge
> > opportunity for improvement. I think it's worth spending time over the
> > coming vacation to fix these slowdowns. We'll see if they pay back the
> > 10% slowdown people are talking about.
>
> With all my respect, you may be right, but I need to say it'd be better
> to profile and then optimize, rather than spend time rewriting random
> parts of the code and hoping it's going to be faster. :-)
>

Don't worry, I do profile. Currently I'm using the mini-benchmark Mike
provided and optimizing the hottest methods. I'm already seeing a 25%
speedup in that case, and that's not the limit. I will be posting pull
requests to PyMySQL soon.
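The chained-unpack pattern criticized above can be sketched like this. The packet layout here is a toy example, not PyMySQL's actual wire format; the point is the difference between repeated `struct.unpack` calls on buffer slices versus a single `unpack_from` over the fixed-size prefix:

```python
import struct

# Hypothetical packet prefix: 3-byte little-endian length, 1-byte
# sequence id, 4-byte affected-rows count (illustrative only).
data = b"\x2a\x00\x00\x01\x05\x00\x00\x00"

def parse_chained(buf):
    # The style the thread criticizes: one struct.unpack per field,
    # each slicing the buffer (which copies bytes) and paying a
    # separate format-parsing and function-call cost.
    low, = struct.unpack("<H", buf[0:2])
    high, = struct.unpack("<B", buf[2:3])
    length = low | (high << 16)
    seq, = struct.unpack("<B", buf[3:4])
    rows, = struct.unpack("<I", buf[4:8])
    return length, seq, rows

def parse_single(buf):
    # One unpack_from call over the whole fixed-size prefix: no
    # intermediate byte-string copies and a single call overhead.
    low, high, seq, rows = struct.unpack_from("<HBBI", buf, 0)
    return low | (high << 16), seq, rows

# Both parsers must agree before any benchmarking is meaningful.
assert parse_chained(data) == parse_single(data)
```

Timing the two with `timeit` on a hot loop is the kind of micro-benchmark that shows where the pure-Python wins come from.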


Re: [openstack-dev] [python-novaclient][nova] future of --os-compute-api-version option and whole api versioning

2015-02-02 Thread Christopher Yeoh
On Sat, Jan 31, 2015 at 4:09 AM, Andrey Kurilin 
wrote:

> Thanks for the answer. Can I help with implementation of novaclient part?
>

Sure! Do you think it's something you can get proposed into Gerrit by the
end of the week, or very soon after?
That's the sort of timeframe we're looking for to get microversions
enabled ASAP. Just let me know if it turns out you don't have the time.

So I think a short summary of what is needed is:
- if os-compute-api-version is not supplied, don't send any header at all
- it is probably worth doing a bit of version parsing to check the value
  makes sense, e.g. matches the format r"^([1-9]\d*)\.([1-9]\d*|0)$" or
  the literal "latest"
- handle HTTPNotAcceptable if the user asked for a version which is not
  supported (a BadRequest is also possible if the version is badly
  formatted and got through the novaclient filter)
- show the version header information returned
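That checklist could be sketched roughly as follows on the client side. The function and constant names are illustrative, not actual novaclient internals; only the header name and the version regex come from this thread:

```python
import re

# Version strings must look like "2.1", "2.0", "12.3", ... or "latest".
VERSION_RE = re.compile(r"^([1-9]\d*)\.([1-9]\d*|0)$")
API_VERSION_HEADER = "X-OpenStack-Compute-API-Version"

def build_headers(os_compute_api_version=None):
    """Return the microversion headers (if any) for a request."""
    if os_compute_api_version is None:
        # No version supplied: send no header at all, so the server
        # runs its default behaviour.
        return {}
    if (os_compute_api_version != "latest"
            and not VERSION_RE.match(os_compute_api_version)):
        # Reject obviously malformed versions client-side rather than
        # letting the server return a BadRequest.
        raise ValueError(
            "Invalid API version format: %s" % os_compute_api_version)
    return {API_VERSION_HEADER: os_compute_api_version}

assert build_headers() == {}
assert build_headers("2.1") == {API_VERSION_HEADER: "2.1"}
```

A server-side HTTPNotAcceptable (the requested version is well-formed but unsupported) would still need to be caught around the actual request call.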

Regards,

Chris


> On Wed, Jan 28, 2015 at 11:50 AM, Christopher Yeoh 
> wrote:
>
>> On Fri, 23 Jan 2015 15:51:54 +0200
>> Andrey Kurilin  wrote:
>>
>> > Hi everyone!
>> > After removing nova V3 API from novaclient[1], implementation of v1.1
>> > client is used for v1.1, v2 and v3 [2].
>> > Since we moving to micro versions, I wonder, do we need such
>> > mechanism of choosing api version(os-compute-api-version) or we can
>> > simply remove it, like in proposed change - [3]?
>> > If we remove it, how micro version should be selected?
>> >
>>
>> So since v3 was never officially released I think we can re-use
>> os-compute-api-version for microversions which will map to the
>> X-OpenStack-Compute-API-Version header. See here for details on what
>> the header will look like. We need to also modify novaclient to handle
>> errors when a version requested is not supported by the server.
>>
>> If the user does not specify a version number then we should not send
>> anything at all. The server will run the default behaviour which for
>> quite a while will just be v2.1 (functionally equivalent to v.2)
>>
>>
>> http://specs.openstack.org/openstack/nova-specs/specs/kilo/approved/api-microversions.html
>>
>>
>> >
>> > [1] - https://review.openstack.org/#/c/138694
>> > [2] -
>> >
>> https://github.com/openstack/python-novaclient/blob/master/novaclient/client.py#L763-L769
>> > [3] - https://review.openstack.org/#/c/149006
>> >
>>
>>
>
>
> --
> Best regards,
> Andrey Kurilin.
>


Re: [openstack-dev] [OpenStack Foundation] Finding people to work on the EC2 API in Nova

2015-02-02 Thread Alexandre Levine

Daniel,

On 2/2/15 12:58 PM, Daniel P. Berrange wrote:

On Fri, Jan 30, 2015 at 07:57:08PM +, Tim Bell wrote:

Alex,



Many thanks for the constructive approach. I've added an item to the list for 
the Ops meetup in March to see who would be interested to help.



As discussed on the change, it is likely that some additional Nova APIs
would need to be added to support the full EC2 semantics. Thus, there
would need to be support from the Nova team to enable these additional
functions. Having tables in the EC2 layer which get out of sync with
those in the Nova layer would be a significant problem in production.

Adding new APIs to Nova to support the out-of-tree EC2 implementation is
perfectly reasonable. Indeed, if there is data needed by EC2 that Nova
doesn't provide already, chances are that providing this data would be
useful to other regular users / client apps too. It just really needs
someone to submit a spec with details of exactly which functionality is
missing. It shouldn't be hard for Nova cores to support it, given the
desire to see the out-of-tree EC2 implementation take over and the
in-tree implementation removed.


We'll do the spec shortly.



I think this would merit a good slot in the Vancouver design sessions so we can
also discuss documentation, migration, packaging, configuration management,
scaling, HA, etc.

I'd really strongly encourage the people working on this to submit the
detailed spec for the new APIs well before the Vancouver design summit.
Likewise, at least document somewhere the thoughts on upgrade path plans.
We need to discuss and iterate on this a few times online, so that we
can take advantage of the f2f time for any remaining harder parts of the
discussion.


We'll see about that also when all of the subjects we can think of or 
get questions about are covered somewhere in docs or specs. By the way - 
how do you usually do those online discussions? I mean what is the tooling?


Regards,
Daniel

Best regards,
  Alex Levine




Re: [openstack-dev] [cinder] K-2 Review-a-thon

2015-02-02 Thread Erlon Cruz
Hi Mike,

There are two features[1][2][3] of the HNAS driver that are still not
approved/targeted.
Is there anything missing for them to be approved?

Erlon


[1] https://blueprints.launchpad.net/cinder/+spec/hds-hnas-ssh-backend
[2] https://blueprints.launchpad.net/cinder/+spec/hds-hnas-pool-aware-sched
[3] https://bugs.launchpad.net/cinder/+bug/1402771

On Sat, Jan 31, 2015 at 7:30 PM, Mike Perez  wrote:

> * Why: We got a bit in the review queue. K-2 [1] cut is set to February
> 5th.
>
> * When: February 2nd at 2:00 UTC [2] to February 5th at 2:00 UTC [3]
> or sooner if we finish!
>
> * Where: #openstack-cinder on freenode IRC. There will also be a
> posted google hangout link in channel and etherpad [4] since that
> really worked out in previous hackathons. Remember there is a limit,
> so please join only if you're really going to be participating. You
> also don't have to be core.
>
> I'm encouraging two cores to sign up for a review in the etherpad [4].
> If there are already two people to a review, try to move onto
> something else to avoid getting burnt out on efforts already spent on
> a review.
>
> Patch owners will also be receiving an email directly from me to be
> aware of this prime time to respond back to feedback and post
> revisions if necessary.
>
> --
> Mike Perez
>
> [1] - https://launchpad.net/cinder/+milestone/kilo-2
> [2] -
> http://www.timeanddate.com/worldclock/fixedtime.html?iso=20150202T02&p1=1440
> [3] -
> http://www.timeanddate.com/worldclock/fixedtime.html?iso=20150205T02&p1=1440
> [4] - https://etherpad.openstack.org/p/cinder-k2-priorities
>


Re: [openstack-dev] [OpenStack Foundation] Finding people to work on the EC2 API in Nova

2015-02-02 Thread Daniel P. Berrange
On Mon, Feb 02, 2015 at 02:45:53PM +0300, Alexandre Levine wrote:
> Daniel,
> 
> On 2/2/15 12:58 PM, Daniel P. Berrange wrote:
> >On Fri, Jan 30, 2015 at 07:57:08PM +, Tim Bell wrote:
> >>Alex,
> >>
> >>
> >>
> >>Many thanks for the constructive approach. I've added an item to the list 
> >>for the Ops meetup in March to see who would be interested to help.
> >>
> >>
> >>
> >>As discussed on the change, it is likely that there would need to be some 
> >>additional
> >>Nova APIs added to support the full EC2 semantics. Thus, there would need 
> >>to support
> >>from the Nova team to enable these additional functions.  Having tables in 
> >>the EC2
> >>layer which get out of sync with those in the Nova layer would be a 
> >>significant
> >>problem in production.
> >Adding new APIs to Nova to support out of tree EC2 impl is perfectly 
> >reasonsable.
> >Indeed if there is data needed by EC2 that Nova doesn't provide already, 
> >chances
> >are that providing this data woudl be useful to other regular users / client 
> >apps
> >too. Just really needs someone to submit a spec with details of exactly which
> >functionality is missing. It shouldnt be hard for Nova cores to support it, 
> >given
> >the desire to see the out of tree EC2 impl take over & in tree impl removed.
> 
> We'll do the spec shortly.
> >
> >>I think this would merit a good slot in the Vancouver design sessions so we 
> >>can
> >>also discuss documentation, migration, packaging, configuration management,
> >>scaling, HA, etc.
> >I'd really strongly encourage the people working on this to submit the
> >detailed spec for the new APIs well before the Vancouver design summit.
> >Likewise at lesat document somewhere the thoughts on upgrade paths plans.
> >We need to at least discuss & iterate on this a few times online, so that
> >we can take advantage of the f2f time for any remaining harder parts of
> >the discussion.
> 
> We'll see about that also when all of the subjects we can think of or get
> questions about are covered somewhere in docs or specs. By the way - how do
> you usually do those online discussions? I mean what is the tooling?

I just mean discussions on this mailing list, or in the gerrit reviews
for the spec and/or patches

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|



Re: [openstack-dev] [neutron] high dhcp lease times in neutron deployments considered harmful (or not???)

2015-02-02 Thread Kevin Benton
>The only thing this discussion has convinced me of is that allowing users
to change the fixed IP address on a neutron port leads to a bad
user-experience.

Not as bad as having to delete a port and create another one on the same
network just to change addresses though...

>Even with an 8-minute renew time you're talking up to a 7-minute blackout
(87.5% of lease time before using broadcast).

I suggested 240 seconds renewal time, which is up to 4 minutes of
connectivity outage. This doesn't have anything to do with lease time and
unicast DHCP will work because the spoof rules allow DHCP client traffic
before restricting to specific IPs.

> Most would have rebooted long before then, true?  Cattle not pets, right?

Only in an ideal world that I haven't encountered with customer
deployments. Many enterprise deployments end up bringing pets along where
reboots aren't always free. The time taken to relaunch programs and restore
state can end up being 10 minutes+ if it's something like a VDI deployment
or dev environment where someone spends a lot of time working on one VM.

>Changing the lease time is just papering-over the real bug - neutron
doesn't support seamless changes in IP addresses on ports, since it totally
relies on the dhcp configuration settings a deployer has chosen.

It doesn't need to be seamless, but it certainly shouldn't be useless.
Connectivity interruptions can be expected with IP changes (e.g. I've
seen changes in elastic IPs on EC2 interrupt connectivity to an instance
for up to 2 minutes), but an entire day of downtime is awful.

One of the things I'm getting at is that a deployer shouldn't be choosing
such high lease times and we are encouraging it with a high default. You
are arguing for infrequent renewals to work around excessive logging, which
is just an implementation problem that should be addressed with a patch to
your logging collector (de-duplication) or to dnsmasq (don't log renewals).

>Documenting a VM reboot is necessary, or even deprecating this (you won't
like that) are sounding better to me by the minute.

If this is an approach you really want to go with, then we should at least
be consistent and deprecate the extra dhcp options extension (or at least
the ability to update ports' dhcp options). Updating subnet attributes like
gateway_ip, dns_nameserves, and host_routes should be thrown out as well.
All of these things depend on the DHCP server to deliver updated
information and are hindered by renewal times. Why discriminate against IP
updates on a port? A failure to receive many of those other types of
changes could result in just as severe of a connection disruption.


In summary, the information the DHCP server gives to clients is not
static. Unless we eliminate updates to everything in the Neutron API that
results in different DHCP lease information, my suggestion is that we
include a new option for the renewal interval and have its default set
below 5 minutes. We can leave the lease default at 1 day so the amount of
time a DHCP server can be offline without impacting running clients stays
the same.
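For a rough sense of the numbers being argued over: with the standard DHCP client timers (RFC 2131), a client first tries a unicast renew at T1 = 50% of the lease and only falls back to broadcast rebinding at T2 = 87.5%, so the worst-case window for picking up a changed IP scales with the lease (or renewal) interval:

```python
# Standard DHCP client timers per RFC 2131.
def dhcp_timers(lease_seconds):
    t1 = lease_seconds * 0.5      # RENEWING state (unicast) begins
    t2 = lease_seconds * 0.875    # REBINDING state (broadcast) begins
    return t1, t2

# With a 1-day lease, a changed port IP may go unnoticed for hours:
t1, t2 = dhcp_timers(86400)
assert (t1, t2) == (43200.0, 75600.0)   # renew after 12 h, rebind after 21 h

# A short renewal interval, decoupled from the lease, bounds the
# worst-case outage to roughly the interval itself:
renew_interval = 240                    # seconds, as suggested above
assert renew_interval / 60.0 == 4.0     # ~4 minutes worst case
```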

On Fri, Jan 30, 2015 at 8:00 AM, Brian Haley  wrote:

> Kevin,
>
> The only thing this discussion has convinced me of is that allowing users
> to
> change the fixed IP address on a neutron port leads to a bad
> user-experience.
> Even with an 8-minute renew time you're talking up to a 7-minute blackout
> (87.5%
> of lease time before using broadcast).  This is time that customers are
> paying
> for.  Most would have rebooted long before then, true?  Cattle not pets,
> right?
>
> Changing the lease time is just papering-over the real bug - neutron
> doesn't
> support seamless changes in IP addresses on ports, since it totally relies
> on
> the dhcp configuration settings a deployer has chosen.  Bickering over the
> lease
> time doesn't fix that non-deterministic recovery for the VM.  Documenting
> a VM
> reboot is necessary, or even deprecating this (you won't like that) are
> sounding
> better to me by the minute.
>
> Is there anyone else that has used, or has customers using, this part of
> the
> neutron API?  Can they share their experiences?
>
> -Brian
>
>
> On 01/30/2015 07:26 AM, Kevin Benton wrote:
> >>But they will if we document it well, which is what Salvatore suggested.
> >
> > I don't think this is a good approach, and it's a big part of why I
> started this
> > thread. Most of the deployers/operators I have worked with only read the
> bare
> > minimum documentation to get a Neutron deployment working and they only
> adjust
> > the settings necessary for basic functionality.
> >
> > We have an overwhelming amount of configuration options and adding a note
> > specifying that a particular setting for DHCP leases has been optimized
> to
> > reduce logging at the cost of long downtimes during port IP address
> updates is a
> > waste of time and effort on our part.
> >
> >>I think the current default value is also more indicative of something
> > you'd find in 

Re: [openstack-dev] [OpenStack Foundation] Finding people to work on the EC2 API in Nova

2015-02-02 Thread Alexandre Levine

Michael,

I'm rather new here, especially in regard to communication matters, so
I'd be glad to understand how it's usually done, and then I can drive it
if that's OK with everybody.
When you say "EC2 sub-team", whom do you have in mind? From my team,
three people are involved.

From the technical point of view the transition plan could look 
somewhat like this (sequence can be different):


1. Triage EC2 bugs and fix showstoppers in nova's EC2.
2. Contribute Tempest tests for EC2 functionality and employ them 
against nova's EC2.
3. Write spec for required API to be exposed from nova so that we get 
full info.

4. Triage and fix all of the existing nova's EC2 bugs worth fixing.
5. Set up Tempest testing of the stackforge/ec2 (if that's possible).
6. Communicate and discover all of the existing questions and 
problematic points for the switching from existing EC2 API to the new 
one. Provide solutions or decisions about them.
7. Do performance testing of the new stackforge/ec2 and provide fixes if 
any bottlenecks come up.
8. Have all of the above prepared for the Vancouver summit and discuss 
the situation there.


Michael, I am still wondering: who's going to be responsible for timely
reviews and approvals of the fixes and tests we're going to contribute
to Nova? So far this is the biggest risk. Is there any way to allow some
of us to participate in the process?


Best regards,
  Alex Levine

On 2/2/15 2:46 AM, Michael Still wrote:

So, its exciting to me that we seem to developing more forward
momentum here. I personally think the way forward is a staged
transition from the in-nova EC2 API to the stackforge project, with
testing added to ensure that we are feature complete between the two.
I note that Soren disagrees with me here, but that's ok -- I'd like to
see us work through that as a team based on the merits.

So... It sounds like we have an EC2 sub team forming. How do we get
that group meeting to come up with a transition plan?

Michael

On Sun, Feb 1, 2015 at 4:12 AM, Davanum Srinivas  wrote:

Alex,

Very cool. thanks.

-- dims

On Sat, Jan 31, 2015 at 1:04 AM, Alexandre Levine
 wrote:

Davanum,

Now that the picture with both EC2 API solutions has cleared up a bit, I
can say yes, we'll be adding the Tempest tests and doing the devstack
integration.

Best regards,
   Alex Levine

On 1/31/15 2:21 AM, Davanum Srinivas wrote:

Alexandre, Randy,

Are there plans afoot to add support to switch on stackforge/ec2-api
in devstack? add tempest tests etc? CI Would go a long way in
alleviating concerns i think.

thanks,
dims

On Fri, Jan 30, 2015 at 1:24 PM, Bias, Randy  wrote:

As you know we have been driving forward on the stackforge project and
it's our intention to continue to support it over time, plus reinvigorate
the GCE APIs when that makes sense. So we're supportive of deprecating
the EC2 API in Nova to focus on the standalone EC2 API. I also think it's
good for these APIs to be able to iterate outside of the standard release
cycle.



--Randy

VP, Technology, EMC Corporation
Formerly Founder & CEO, Cloudscaling (now a part of EMC)
+1 (415) 787-2253 [google voice]
TWITTER: twitter.com/randybias
LINKEDIN: linkedin.com/in/randybias
ASSISTANT: ren...@emc.com






On 1/29/15, 4:01 PM, "Michael Still"  wrote:


Hi,

as you might have read on openstack-dev, the Nova EC2 API
implementation is in a pretty sad state. I wont repeat all of those
details here -- you can read the thread on openstack-dev for detail.

However, we got here because no one is maintaining the code in Nova
for the EC2 API. This is despite repeated calls over the last 18
months (at least).

So, does the Foundation have a role here? The Nova team has failed to
find someone to help us resolve these issues. Can the board perhaps
find resources as the representatives of some of the largest
contributors to OpenStack? Could the Foundation employ someone to help
us our here?

I suspect the correct plan is to work on getting the stackforge
replacement finished, and ensuring that it is feature compatible with
the Nova implementation. However, I don't want to preempt the design
process -- there might be other ways forward here.

I feel that a continued discussion which just repeats the last 18
months wont actually fix the situation -- its time to "break out" of
that mode and find other ways to try and get someone working on this
problem.

Thoughts welcome.

Michael

--
Rackspace Australia

___
Foundation mailing list
foundat...@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/foundation









Re: [openstack-dev] [python-novaclient][nova] future of --os-compute-api-version option and whole api versioning

2015-02-02 Thread Andrey Kurilin
Thanks for the summary, I'll try to send a first patch (maybe WIP) in a few days.

On Mon, Feb 2, 2015 at 1:43 PM, Christopher Yeoh  wrote:

>
>
> On Sat, Jan 31, 2015 at 4:09 AM, Andrey Kurilin 
> wrote:
>
>> Thanks for the answer. Can I help with implementation of novaclient part?
>>
>
> Sure! Do you think its something you can get proposed into Gerrit by the
> end of the week or very soon after?
> Its the sort of timeframe we're looking for to get microversions enabled
> asap I guess just let me know if it
> turns out you don't have the time.
>
> So I think a short summary of what is needed is:
> - if os-compute-api-version is not supplied don't send any header at all
> - it is probably worth doing a bit version parsing to see if it makes
> sense eg of format:
>  r"^([1-9]\d*)\.([1-9]\d*|0)$" or latest
> - handle  HTTPNotAcceptable if the user asked for a version which is not
> supported
>   (can also get a badrequest if its badly formatted and got through the
> novaclient filter)
> - show the version header information returned
>
> Regards,
>
> Chris
>
>
>> On Wed, Jan 28, 2015 at 11:50 AM, Christopher Yeoh 
>> wrote:
>>
>>> On Fri, 23 Jan 2015 15:51:54 +0200
>>> Andrey Kurilin  wrote:
>>>
>>> > Hi everyone!
>>> > After removing nova V3 API from novaclient[1], implementation of v1.1
>>> > client is used for v1.1, v2 and v3 [2].
>>> > Since we moving to micro versions, I wonder, do we need such
>>> > mechanism of choosing api version(os-compute-api-version) or we can
>>> > simply remove it, like in proposed change - [3]?
>>> > If we remove it, how micro version should be selected?
>>> >
>>>
>>> So since v3 was never officially released I think we can re-use
>>> os-compute-api-version for microversions which will map to the
>>> X-OpenStack-Compute-API-Version header. See here for details on what
>>> the header will look like. We need to also modify novaclient to handle
>>> errors when a version requested is not supported by the server.
>>>
>>> If the user does not specify a version number then we should not send
>>> anything at all. The server will run the default behaviour which for
>>> quite a while will just be v2.1 (functionally equivalent to v.2)
>>>
>>>
>>> http://specs.openstack.org/openstack/nova-specs/specs/kilo/approved/api-microversions.html
>>>
>>>
>>> >
>>> > [1] - https://review.openstack.org/#/c/138694
>>> > [2] -
>>> >
>>> https://github.com/openstack/python-novaclient/blob/master/novaclient/client.py#L763-L769
>>> > [3] - https://review.openstack.org/#/c/149006
>>> >
>>>
>>>
>>
>>
>> --
>> Best regards,
>> Andrey Kurilin.
>>
>
>


-- 
Best regards,
Andrey Kurilin.


Re: [openstack-dev] [Manila] Manila driver for CephFS

2015-02-02 Thread Sebastien Han
I believe this will start somewhere after Kilo.

> On 28 Jan 2015, at 22:59, Valeriy Ponomaryov  wrote:
> 
> Hello Jake,
> 
> The main thing to mention is that the blueprint has no assignee. It was
> also created a long time ago, with no activity since.
> I have not heard of any intentions around it, nor have I seen any
> drafts.
> 
> So, I guess, it is open for volunteers.
> 
> Regards,
> Valeriy Ponomaryov
> 
> On Wed, Jan 28, 2015 at 11:30 PM, Jake Kugel  wrote:
> Hi,
> 
> I see there is a blueprint for a Manila driver for CephFS here [1].  It
> looks like it was opened back in 2013 but still in Drafting state.  Does
> anyone know more status about this one?
> 
> Thank you,
> -Jake
> 
> [1]  https://blueprints.launchpad.net/manila/+spec/cephfs-driver
> 
> 


Cheers.

Sébastien Han
Cloud Architect

"Always give 100%. Unless you're giving blood."

Phone: +33 (0)1 49 70 99 72
Mail: sebastien@enovance.com
Address : 11 bis, rue Roquépine - 75008 Paris
Web : www.enovance.com - Twitter : @enovance





Re: [openstack-dev] The API WG mission statement

2015-02-02 Thread Ryan Brown
On 01/30/2015 06:18 PM, Dean Troyer wrote:
> On Fri, Jan 30, 2015 at 4:57 PM, Everett Toews
>  wrote:
> 
> What is the API WG mission statement?
> 
> 
> It's more of a mantra than a Mission Statement(TM):
> 
> Identify existing and future best practices in OpenStack REST APIs to
> enable new and existing projects to evolve and converge.
> 

Identify existing and future pragmatic ideals in OpenStack REST APIs to
enable new and existing projects to evolve and converge.

I like it, but I'd like to get "pragmatic" in there somewhere. Just to
be clear we aren't just looking for pie-in-the-sky ideals, but ones that
can apply now/in the future.

> Tweetable, 126 chars!
> 
> Plus, buzzword-bingo-compatibile, would score 5 in my old corporate
> buzzwordlist...
> 
> dt
> 
> (Can you tell my flight has been delayed? ;)
> 
> -- 
> 
> Dean Troyer
> dtro...@gmail.com 
> 
> 
> 

-- 
Ryan Brown / Software Engineer, Openstack / Red Hat, Inc.



Re: [openstack-dev] The API WG mission statement

2015-02-02 Thread Chris Dent

On Fri, 30 Jan 2015, Everett Toews wrote:


To converge the OpenStack APIs to a consistent and pragmatic RESTful
design by creating guidelines that the projects should follow. The
intent is not to create backwards incompatible changes in existing
APIs, but to have new APIs and future versions of existing APIs
converge.


This is pretty good but I think it leaves unresolved the biggest
question I've had about this process: What's so great about
converging the APIs? If we can narrow or clarify that aspect, good
to go.

The implication with your statement above is that there is some kind
of ideal which maps, at least to some extent, across the rather
diverse set of resources, interactions and transactions that are
present in the OpenStack ecosystem. It may not be your intent but
the above sounds like "we want all the APIs to be kinda similar in
feel" or "when someone is using an OpenStack-related API they'll be
able to share some knowledge between them with regard to how stuff
works".

I'm not sure how realistic^Wuseful that is when we're in an
environment with APIs with such drastically different interactions
as (to just select three) Swift, Nova and Ceilometer.

We've seen this rather clearly in the recent debates about handling
metadata.

Now, there's nothing in what you say above that actually straight
out disagrees with my response, but I think there's got to be some
way we can remove the ambiguity or narrow the focus. The need to
remove ambiguity is why the discussion of having a mission statement
came up.

I think where we want to focus our attention is:

* strict adherence to correct HTTP
* proper use of response status codes
* effective (and correct) use of media types
* some guidance on how to deal with change/versioning
* and _maybe_ a standard for providing actionable error responses
* setting not standards but guidelines for anything else

For most of that there is prior art and/or active conversation going
on outside the OpenStack world which ought to be useful fodder.
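As one hypothetical illustration of the "actionable error responses" bullet — a made-up shape, not an adopted OpenStack format — an error body might carry a machine-readable code alongside the human-readable text:

```python
import json

# Hypothetical error payload: the status mirrors the HTTP response
# code, "code" is a stable machine-readable identifier, and "detail"
# tells the caller what to do about it.
error_body = {
    "errors": [{
        "status": 409,
        "code": "compute.flavor_in_use",        # machine-readable
        "title": "Flavor in use",                # short human summary
        "detail": "Flavor m1.small cannot be deleted while 3 "
                  "instances still reference it.",
        "links": [{"rel": "help",
                   "href": "http://example.com/docs/errors"}],
    }]
}

# The payload round-trips cleanly as JSON.
assert json.loads(json.dumps(error_body)) == error_body
```

The point of such a shape is that clients can branch on `code` without parsing prose, while operators still get a useful message.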

--
Chris Dent tw:@anticdent freenode:cdent
https://tank.peermore.com/tanks/cdent



Re: [openstack-dev] [Fuel] Cluster replaced deployment of provisioning information

2015-02-02 Thread Dmitriy Shulyak
> >> But why to add another interface when there is one already (rest api)?
>
> I'm OK if we decide to use the REST API, but of course there are
> problems we should solve, like versioning, which is much harder to
> support than versioning in core serializers. Also, do you have any
> ideas how it can be implemented?
>

We need to think about deployment serializers not as part of Nailgun
(the Fuel data inventory), but as part of another layer which uses the
Nailgun API to generate deployment information. Let's take Ansible as an
example, with its dynamic inventory feature [1]: the Nailgun API could
be used inside an Ansible dynamic inventory script to generate the
config that Ansible consumes during deployment.

Such approach will have several benefits:
- cleaner interface (ability to use ansible as main interface to control
deployment and all its features)
- deployment configuration will be tightly coupled with deployment code
- no limitation on what sources to use for configuration, and how to
compute additional values from requested data

I want to emphasize that I am not proposing Ansible as a solution for
Fuel; it serves only as an example of the architecture.
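As a concrete sketch of the dynamic-inventory idea in [1]: the script below fetches node descriptions from a REST inventory API and emits the group-by-role JSON that Ansible expects from `--list`. The endpoint URL and field names are invented for illustration, not the real Nailgun API.

```python
import json
try:
    from urllib.request import urlopen  # Python 3
except ImportError:
    from urllib2 import urlopen         # Python 2

NAILGUN_URL = "http://10.20.0.2:8000/api/nodes"  # assumed endpoint

def load_nodes(url=NAILGUN_URL):
    # Fetch node descriptions from the inventory API as JSON.
    return json.loads(urlopen(url).read().decode("utf-8"))

def build_inventory(nodes):
    """Group nodes by deployment role into Ansible's --list format."""
    inventory = {}
    for node in nodes:
        for role in node.get("roles", []):
            inventory.setdefault(role, {"hosts": []})
            inventory[role]["hosts"].append(node["ip"])
    return inventory

# Stubbed data standing in for a real API response:
sample = [{"ip": "10.20.0.3", "roles": ["controller"]},
          {"ip": "10.20.0.4", "roles": ["compute", "cinder"]}]
print(json.dumps(build_inventory(sample)))
```

The deployment tool then drives everything from this generated inventory, which is what keeps the serialization layer outside the data-inventory service itself.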


> You run some code which get the information from api on the master node and
> then sets the information in tasks? Or you are going to run this code on
> OpenStack
> nodes? As you mentioned in case of tokens, you should get the token right
> before
> you really need it, because of expiring problem, but in this case you don't
> need any serializers, get required token right in the task.
>

I think all information should be fetched before deployment.

>
>
>> What is your opinion about serializing additional information in plugins
> code? How it can be done, without exposing db schema?
>
> With exposing the data in more abstract way the way it's done right now
> for the current deployment logic.
>

I mean, what if a plugin wants to generate additional data, like
https://review.openstack.org/#/c/150782/? Will the schema still be exposed?

[1] http://docs.ansible.com/intro_dynamic_inventory.html


Re: [openstack-dev] [nova][libvirt] RFC: ensuring live migration ends

2015-02-02 Thread Vladik Romanovsky


- Original Message -
> From: "Daniel P. Berrange" 
> To: "Robert Collins" 
> Cc: "OpenStack Development Mailing List (not for usage questions)" 
> ,
> openstack-operat...@lists.openstack.org
> Sent: Monday, 2 February, 2015 5:56:56 AM
> Subject: Re: [openstack-dev] [nova][libvirt] RFC: ensuring live migration 
> ends
> 
> On Mon, Feb 02, 2015 at 08:24:20AM +1300, Robert Collins wrote:
> > On 31 January 2015 at 05:47, Daniel P. Berrange 
> > wrote:
> > > In working on a recent Nova migration bug
> > >
> > >   https://bugs.launchpad.net/nova/+bug/1414065
> > >
> > > I had cause to refactor the way the nova libvirt driver monitors live
> > > migration completion/failure/progress. This refactor has opened the
> > > door for doing more intelligent active management of the live migration
> > > process.
> > ...
> > > What kind of things would be the biggest win from Operators' or tenants'
> > > POV ?
> > 
> > Awesome. Couple thoughts from my perspective. Firstly, there's a bunch
> > of situation dependent tuning. One thing Crowbar does really nicely is
> > that you specify the host layout in broad abstract terms - e.g. 'first
> > 10G network link' and so on : some of your settings above like whether
> > to compress pages are going to be heavily dependent on the bandwidth
> > available (I doubt that compression is a win on a 100G link for
> > instance, and would be suspect at 10G even). So it would be nice if
> > there was a single dial or two to set and Nova would auto-calculate
> > good defaults from that (with appropriate overrides being available).
> 
> I wonder how such an idea would fit into Nova, since it doesn't really
> have that kind of knowledge about the network deployment characteristics.
> 
> > Operationally avoiding trouble is better than being able to fix it, so
> > I quite like the idea of defaulting the auto-converge option on, or
> > perhaps making it controllable via flavours, so that operators can
> > offer (and identify!) those particularly performance sensitive
> > workloads rather than having to guess which instances are special and
> > which aren't.
> 
> I'll investigate the auto-converge further to find out what the
> potential downsides of it are. If we can unconditionally enable
> it, it would be simpler than adding yet more tunables.
> 
> > Being able to cancel the migration would be good. Relatedly being able
> > to restart nova-compute while a migration is going on would be good
> > (or put differently, a migration happening shouldn't prevent a deploy
> > of Nova code: interlocks like that make continuous deployment much
> > harder).
> > 
> > If we can't already, I'd like as a user to be able to see that the
> > migration is happening (allows diagnosis of transient issues during
> > the migration). Some ops folk may want to hide that of course.
> > 
> > I'm not sure that automatically rolling back after N minutes makes
> > sense : if the impact on the cluster is significant then 1 minute vs
> > 10 doesn't instrinsically matter: what matters more is preventing too
> > many concurrent migrations, so that would be another feature that I
> > don't think we have yet: don't allow more than some N inbound and M
> > outbound live migrations to a compute host at any time, to prevent IO
> > storms. We may want to log with NOTIFICATION migrations that are still
> > progressing but appear to be having trouble completing. And of course
> > an admin API to query all migrations in progress to allow API driven
> > health checks by monitoring tools - which gives the power to manage
> > things to admins without us having to write a probably-too-simple
> > config interface.
> 
> Interesting, the point about concurrent migrations hadn't occurred to
> me before, but it does of course make sense since migration is
> primarily network bandwidth limited, though disk bandwidth is relevant
> too if doing block migration.

Indeed, a lot of time was spent investigating this topic (in oVirt, again),
and eventually it was decided to expose a config option and allow 3 concurrent
migrations by default.

https://github.com/oVirt/vdsm/blob/master/lib/vdsm/config.py.in#L126
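The N-inbound/M-outbound cap Robert describes could be sketched with a pair
of semaphores. The class, method names, and limits below are illustrative
(the limit of 3 only mirrors the oVirt default linked above); this is not
Nova or vdsm code:

```python
import threading

class MigrationLimiter:
    """Cap concurrent inbound/outbound live migrations on a compute host."""

    def __init__(self, max_inbound=3, max_outbound=3):
        self._sems = {
            "inbound": threading.BoundedSemaphore(max_inbound),
            "outbound": threading.BoundedSemaphore(max_outbound),
        }

    def try_start(self, direction):
        # Non-blocking acquire: refuse the migration instead of queueing,
        # so the scheduler can pick another destination host.
        return self._sems[direction].acquire(blocking=False)

    def finish(self, direction):
        self._sems[direction].release()
```

A scheduler would call `try_start("inbound")` before dispatching a migration
to a host and `finish("inbound")` when it completes or aborts, preventing
the IO storms mentioned above.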

> 
> Regards,
> Daniel
> --
> |: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
> |: http://libvirt.org  -o- http://virt-manager.org :|
> |: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
> |: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubs

Re: [openstack-dev] Deprecation of in tree EC2 API in Nova for Kilo release

2015-02-02 Thread Thierry Carrez
Daniel P. Berrange wrote:
> On Fri, Jan 30, 2015 at 04:38:44PM -0600, Matt Riedemann wrote:

>> Deprecation isn't a one-way street really, nova-network was deprecated for a
>> couple of releases and then undeprecated and opened up again for feature
>> development (at least for a short while until the migration to neutron is
>> sorted out and implemented).
> 
> Nova-network was prematurely deprecated as the alternative was not fully
> ready. That's a prime example of why we should not be deprecating EC2
> right now either.
> 
> Deprecation is a mechanism by which you inform users that they should
> stop using the current functionality and switch to $NEW-THING as the
> replacement. In the case of nova-network they could not switch because
> neutron did not offer feature parity at the time we were asking them
> to switch (nor did it have an upgrade path for that matter). Likewise
> in the case of the EC2 API, the alternative is not ready for users to
> actually switch to at a production quality level.
> 
> What we are actually trying to tell users is that we think the out of tree
> EC2 implementation is the long term strategic direction of the EC2
> support with Nova, and that the current in tree impl is not being actively
> developed. That's a sensible thing to tell our users, but deprecation is
> the wrong mechanism for this. It is a task best suited for release notes.
> Keep deprecation available as a mechanism for telling users that the time
> has come for them to actively switch their deployments to the new impl.

I'm with Daniel on that one. We shouldn't "deprecate" until we are 100%
sure that the replacement is up to the task and that strategy is solid.

Today, we are just figuring out the strategy between the mainline EC2
support and the separated EC2 support repository, and we have some
promised resources to work on the issue. We have been there before (a
few times), and if we had deprecated the EC2 support on that promise
back then, we would have put ourselves in an odd corner. Today is not
really the best moment to "deprecate". Announcing the proposed strategy,
however, is good information to send to our users.

-- 
Thierry Carrez (ttx)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [api] API Definition Formats

2015-02-02 Thread Chris Dent

On Thu, 29 Jan 2015, michael mccune wrote:

in a similar vein, i started to work on marking up the sahara and barbican 
code bases to produce swagger. for sahara this was a little easier as flask 
makes it simple to query the paths. for barbican i started a pecan-swagger[1] 
project to aid in marking up the code. it's still in its infancy but i have a few 
ideas.


pecan-swagger looks cool but presumably pecan has most of the info
you're putting in the decorators in itself already? So, given an
undecorated pecan app, would it be possible to provide it to a function
and have that function output all the paths?

--
Chris Dent tw:@anticdent freenode:cdent
https://tank.peermore.com/tanks/cdent

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron] Team meeting today

2015-02-02 Thread Kyle Mestery
Just a reminder, we'll have the weekly Neutron meeting [1] at 2100UTC in
#openstack-meeting today. We'll likely spend the majority of the time going
over any critical bugs, as well as covering BPs for Kilo-2 which have yet
to land this week. The other two standing items we'll discuss are the
nova-network to neutron migration, and the plugin decomposition.

Please feel free to add other items in the "On Demand" section of the
agenda [2].

Thanks!
Kyle

[1] https://wiki.openstack.org/wiki/Network/Meetings
[2] https://wiki.openstack.org/wiki/Network/Meetings#On_Demand_Agenda
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Deprecation of in tree EC2 API in Nova for Kilo release

2015-02-02 Thread Dan Smith
> I'm with Daniel on that one. We shouldn't "deprecate" until we are 100%
> sure that the replacement is up to the task and that strategy is solid.

My problem with this is: If there wasn't a stackforge project, what
would we do? Nova's in-tree EC2 support has been rotting for years now,
and despite several rallies for developers, no real progress has been
made to rescue it. I don't think that it's reasonable to say that if
there wasn't a stackforge project we'd just have to suck it up and
magically produce the developers to work on EC2; it's clear that's not
going to happen.

Thus, it seems to me that we need to communicate that our EC2 support is
going away. Hopefully the stackforge project will be at a point to
support users that want to keep the functionality. However, the fate of
our in-tree support seems clear regardless of how that turns out.

--Dan



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack Foundation] Finding people to work on the EC2 API in Nova

2015-02-02 Thread Sean Dague
On 02/02/2015 07:01 AM, Alexandre Levine wrote:
> Michael,
> 
> I'm rather new here, especially in regard to communication matters, so
> I'd also be glad to understand how it's done and then I can drive it if
> it's ok with everybody.
> By saying EC2 sub team - who did you keep in mind? From my team 3
> persons are involved.
> 
> From the technical point of view the transition plan could look somewhat
> like this (sequence can be different):
> 
> 1. Triage EC2 bugs and fix showstoppers in nova's EC2.
> 2. Contribute Tempest tests for EC2 functionality and employ them
> against nova's EC2.
> 3. Write spec for required API to be exposed from nova so that we get
> full info.
> 4. Triage and fix all of the existing nova's EC2 bugs worth fixing.
> 5. Set up Tempest testing of the stackforge/ec2 (if that's possible).
> 6. Communicate and discover all of the existing questions and
> problematic points for the switching from existing EC2 API to the new
> one. Provide solutions or decisions about them.
> 7. Do performance testing of the new stackforge/ec2 and provide fixes if
> any bottlenecks come up.
> 8. Have all of the above prepared for the Vancouver summit and discuss
> the situation there.
> 
> Michael, I am still wondering, who's going to be responsible for timely
> reviews and approvals of the fixes and tests we're going to contribute
> to nova? So far this is the biggest risk. Is there anyway to allow some
> of us to participate in the process?

I am happy to volunteer to shepherd these reviews. I'll try to keep an
eye on them, and if something is blocking please just ping me directly
on IRC in #openstack-nova or bring them forward to the weekly Nova meeting.

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
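Item 2 of the plan quoted above (Tempest tests for EC2 functionality) usually
reduces to a poll-until-state helper like the following sketch. The names are
illustrative and the describe callable is a stand-in; a real test would query
the EC2 endpoint (e.g. via boto) instead:

```python
import time

def wait_for_state(describe, target="running", timeout=10.0, interval=0.01):
    """Poll a describe() callable until it reports the target state,
    raising on an error state or on timeout -- the usual skeleton of an
    EC2 smoke test's wait loop."""
    deadline = time.time() + timeout
    while time.time() < deadline:
        state = describe()
        if state == target:
            return state
        if state == "error":
            raise RuntimeError("instance entered error state")
        time.sleep(interval)
    raise TimeoutError("instance never reached %r" % target)

# Simulated instance that becomes running on the third poll:
states = iter(["pending", "pending", "running"])
result = wait_for_state(lambda: next(states))
```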


Re: [openstack-dev] Deprecation of in tree EC2 API in Nova for Kilo release

2015-02-02 Thread Daniel P. Berrange
On Mon, Feb 02, 2015 at 07:44:24AM -0800, Dan Smith wrote:
> > I'm with Daniel on that one. We shouldn't "deprecate" until we are 100%
> > sure that the replacement is up to the task and that strategy is solid.
> 
> My problem with this is: If there wasn't a stackforge project, what
> would we do? Nova's in-tree EC2 support has been rotting for years now,
> and despite several rallies for developers, no real progress has been
> made to rescue it. I don't think that it's reasonable to say that if
> there wasn't a stackforge project we'd just have to suck it up and
> magically produce the developers to work on EC2; it's clear that's not
> going to happen.

I think that is exactly what we would have to do. We exist as a project
to serve the needs of our users and it seems pretty clear from the survey
results that users are deploying the EC2 impl in significant numbers,
so to just remove it would essentially be ignoring what our users want
from the project. If we're saying it is reasonable to ignore what our
users want, then this project is frankly doomed.

> Thus, it seems to me that we need to communicate that our EC2 support is
> going away. Hopefully the stackforge project will be at a point to
> support users that want to keep the functionality. However, the fate of
> our in-tree support seems clear regardless of how that turns out.

If the external EC2 support doesn't work out for whatever reason, then
I don't think the fate of the in-tree support is at all clear. I think
it would have a very strong case for continuing to exist.

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack Foundation] Finding people to work on the EC2 API in Nova

2015-02-02 Thread Sean Dague
On 02/02/2015 07:01 AM, Alexandre Levine wrote:
> Michael,
> 
> I'm rather new here, especially in regard to communication matters, so
> I'd also be glad to understand how it's done and then I can drive it if
> it's ok with everybody.
> By saying EC2 sub team - who did you keep in mind? From my team 3
> persons are involved.
> 
> From the technical point of view the transition plan could look somewhat
> like this (sequence can be different):
> 
> 1. Triage EC2 bugs and fix showstoppers in nova's EC2.
> 2. Contribute Tempest tests for EC2 functionality and employ them
> against nova's EC2.
> 3. Write spec for required API to be exposed from nova so that we get
> full info.
> 4. Triage and fix all of the existing nova's EC2 bugs worth fixing.
> 5. Set up Tempest testing of the stackforge/ec2 (if that's possible).
> 6. Communicate and discover all of the existing questions and
> problematic points for the switching from existing EC2 API to the new
> one. Provide solutions or decisions about them.
> 7. Do performance testing of the new stackforge/ec2 and provide fixes if
> any bottlenecks come up.
> 8. Have all of the above prepared for the Vancouver summit and discuss
> the situation there.
> 
> Michael, I am still wondering, who's going to be responsible for timely
> reviews and approvals of the fixes and tests we're going to contribute
> to nova? So far this is the biggest risk. Is there anyway to allow some
> of us to participate in the process?

It would also be really helpful if there were reviews from your team on
any EC2-touching code.

https://review.openstack.org/#/q/file:%255Enova/api/ec2.*+status:open,n,z

There are currently only a few open patches touching ec2 that are ec2
function/bug related, and most don't have any scored reviews.
Especially this series -
https://review.openstack.org/#/q/status:open+project:openstack/nova+branch:master+topic:bp/ec2-volume-and-snapshot-tags,n,z
- which is a month old with no scoring.

-Sean

> 
> Best regards,
>   Alex Levine
> 
> On 2/2/15 2:46 AM, Michael Still wrote:
>> So, its exciting to me that we seem to developing more forward
>> momentum here. I personally think the way forward is a staged
>> transition from the in-nova EC2 API to the stackforge project, with
>> testing added to ensure that we are feature complete between the two.
>> I note that Soren disagrees with me here, but that's ok -- I'd like to
>> see us work through that as a team based on the merits.
>>
>> So... It sounds like we have an EC2 sub team forming. How do we get
>> that group meeting to come up with a transition plan?
>>
>> Michael
>>
>> On Sun, Feb 1, 2015 at 4:12 AM, Davanum Srinivas 
>> wrote:
>>> Alex,
>>>
>>> Very cool. thanks.
>>>
>>> -- dims
>>>
>>> On Sat, Jan 31, 2015 at 1:04 AM, Alexandre Levine
>>>  wrote:
 Davanum,

 Now that the picture with both EC2 API solutions has cleared up
 a bit, I
 can say yes, we'll be adding the tempest tests and doing devstack
 integration.

 Best regards,
Alex Levine

 On 1/31/15 2:21 AM, Davanum Srinivas wrote:
> Alexandre, Randy,
>
> Are there plans afoot to add support to switch on stackforge/ec2-api
> in devstack? add tempest tests etc? CI Would go a long way in
> alleviating concerns i think.
>
> thanks,
> dims
>
> On Fri, Jan 30, 2015 at 1:24 PM, Bias, Randy 
> wrote:
>> As you know we have been driving forward on the stackforge
>> project and
>> it's our intention to continue to support it over time, plus
>> reinvigorate
>> the GCE APIs when that makes sense. So we're supportive of
>> deprecating
>> from Nova to focus on EC2 API in Nova.  I also think it's good for
>> these
>> APIs to be able to iterate outside of the standard release cycle.
>>
>>
>>
>> --Randy
>>
>> VP, Technology, EMC Corporation
>> Formerly Founder & CEO, Cloudscaling (now a part of EMC)
>> +1 (415) 787-2253 [google voice]
>> TWITTER: twitter.com/randybias
>> LINKEDIN: linkedin.com/in/randybias
>> ASSISTANT: ren...@emc.com
>>
>>
>>
>>
>>
>>
>> On 1/29/15, 4:01 PM, "Michael Still"  wrote:
>>
>>> Hi,
>>>
>>> as you might have read on openstack-dev, the Nova EC2 API
>>> implementation is in a pretty sad state. I wont repeat all of those
>>> details here -- you can read the thread on openstack-dev for detail.
>>>
>>> However, we got here because no one is maintaining the code in Nova
>>> for the EC2 API. This is despite repeated calls over the last 18
>>> months (at least).
>>>
>>> So, does the Foundation have a role here? The Nova team has
>>> failed to
>>> find someone to help us resolve these issues. Can the board perhaps
>>> find resources as the representatives of some of the largest
>>> contributors to OpenStack? Could the Foundation employ someone to
>>> help
>>> us out here?
>

Re: [openstack-dev] [Product] [all][log] Openstack HTTP error codes

2015-02-02 Thread Sean Dague
On 02/02/2015 12:54 AM, Christopher Yeoh wrote:
> 
> 
> On Sun, Feb 1, 2015 at 2:57 AM, Sean Dague wrote:
> 
> On 01/31/2015 05:24 AM, Duncan Thomas wrote:
> > Hi
> >
> > This discussion came up at the cinder mid-cycle last week too,
> > specifically in the context of 'Can we change the details text in an
> > existing error, or is that an unacceptable API change'.
> >
> > I have to second security / operational concerns about exposing
> too much
> > granularity of failure in these error codes.
> >
> > For cases where there is something wrong with the request (item out of
> > range, invalid names, feature not supported, etc) I totally agree that
> > we should have good, clear, parsable response, and standardisation
> would
> > be good. Having some fixed part of the response (whether a numeric
> code
> > or, as I tend to prefer, a CamelCaseDescription so that I don't
> have to
> > go look it up) and a human readable description section that is
> subject
> > to change seems sensible.
> >
> > What I would rather not see is leakage of information when something
> > internal to the cloud goes wrong, that the tenant can do nothing
> > against. We certainly shouldn't be leaking internal implementation
> > details like vendor details - that is what request IDs and logs
> are for.
> > The whole point of the cloud, to me, is that separation between the
> > things a tenant controls (what they want done) and what the cloud
> > provider controls (the details of how the work is done).
> >
> > For example, if a create volume request fails because cinder-scheduler
> > has crashed, all the tenant should get back is 'Things are broken, try
> > again later or pass request id 1234-5678-abcd-def0 to the cloud
> admin'.
> > They shouldn't need to or even be allowed to care about the details
> of the
> > failure, it is not their domain.
> 
> Sure, the value really is in determining things that are under the
> client's control to do differently. A concrete one is a multi hypervisor
> cloud with 2 hypervisors (say kvm and docker). The volume attach
> operation to a docker instance (which presumably is a separate set of
> instance types) can't work. The user should be told that that can't work
> with this instance_type if they try it.
> 
> That's actually user correctable information. And doesn't require a
> ticket to move forward.
> 
> I also think we could have a detail level knob, because I expect the
> level of information exposure might be considered different in public
> cloud use case vs. a private cloud at an org level or a private cloud at
> a dept level.
> 
> 
> That could turn into a major compatibility issue if what we returned
> could (and
> probably would between public/private) change between clouds? If we want
> to encourage
> people to parse this sort of thing I think we need to settle on whether
> we send the
> information back or not for everyone. 

Sure, it's a theoretical concern. We're not going to get anywhere rat-holing
on theoretical concerns though; let's get some concrete instances out there
to discuss.
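As one concrete instance, the fixed-code-plus-free-text error body discussed
earlier in the thread might look like the sketch below. The field names and
the detail knob are assumptions for discussion, not an agreed OpenStack
format:

```python
def build_error(code, title, detail, request_id, expose_detail=True):
    """Build an error body with a stable CamelCase code machines can
    match on, plus human-readable text that a deployment may withhold
    (e.g. a public cloud hiding internal failure detail)."""
    body = {
        "code": code,              # stable and parsable; never changes
        "title": title,            # short summary, also stable
        "request_id": request_id,  # lets operators correlate with logs
    }
    if expose_detail:
        body["detail"] = detail    # free text, subject to change
    return body

# A user-correctable failure, like the volume-attach-on-docker example:
err = build_error(
    code="VolumeAttachUnsupportedInstanceType",
    title="Volume attach is not supported for this instance type",
    detail="Instance type 'docker.small' has no block device support.",
    request_id="1234-5678-abcd-def0",
)
```

With `expose_detail=False` the tenant still gets the stable code and the
request id to hand to the cloud admin, matching Duncan's non-leaky case.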

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Python 3 is dead, long live Python 3

2015-02-02 Thread Jeremy Stanley
After a long wait and much testing, we've merged a change[1] which
moves the remainder of Python 3.3 based jobs to Python 3.4. This is
primarily in service of getting rid of the custom workers we
implemented to perform 3.3 testing more than a year ago, since we
can now run 3.4 tests on normal Ubuntu Trusty workers (with the
exception of a couple bugs[2][3] which have caused us to temporarily
suspend[4] Py3K jobs for oslo.messaging and oslo.rootwrap).

I've personally tested `tox -e py34` on every project hosted in our
infrastructure which was gating on Python 3.3 jobs and they all
still work, so you shouldn't see any issues arise from this change.
If you do, however, please let the Infrastructure team know about it
as soon as possible. Thanks!

[1] https://review.openstack.org/151713
[2] https://launchpad.net/bugs/1367907
[3] https://launchpad.net/bugs/1382607
[4] http://lists.openstack.org/pipermail/openstack-dev/2015-January/055270.html
-- 
Jeremy Stanley


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Product] [all][log] Openstack HTTP error codes

2015-02-02 Thread Sean Dague
On 02/01/2015 06:20 PM, Morgan Fainberg wrote:
> Putting on my "sorry-but-it-is-my-job-to-get-in-your-way" hat (aka security), 
> let's be careful how generous we are with the user and data we hand back. It 
> should give enough information to be useful but no more. I don't want to see 
> us opened to weird attack vectors because we're exposing internal state too 
> generously. 
> 
> In short let's aim for a slow roll of extra info in, and evaluate each data 
> point we expose (about a failure) before we do so. Knowing more about a 
> failure is important for our users. Allowing easy access to information that 
> could be used to attack / increase impact of a DOS could be bad. 
> 
> I think we can do it but it is important to not swing the pendulum too far 
> the other direction too fast (give too much info all of a sudden). 

Security by cloud obscurity?

I agree we should evaluate information sharing with security in mind.
However, the black boxing level we have today is bad for OpenStack. At a
certain point once you've added so many belts and suspenders, you can no
longer walk normally any more.

Anyway, let's stop having this discussion in the abstract and actually just
evaluate the cases in question that come up.

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][libvirt] RFC: ensuring live migration ends

2015-02-02 Thread Andrew Laski


On 02/02/2015 05:58 AM, Daniel P. Berrange wrote:
> On Sun, Feb 01, 2015 at 11:20:08AM -0800, Noel Burton-Krahn wrote:
>> Thanks for bringing this up, Daniel.  I don't think it makes sense to have
>> a timeout on live migration, but operators should be able to cancel it,
>> just like any other unbounded long-running process.  For example, there's
>> no timeout on file transfers, but they need an interface report progress
>> and to cancel them.  That would imply an option to cancel evacuation too.
> There has been periodic talk about a generic "tasks API" in Nova for managing
> long running operations and getting information about their progress, but I
> am not sure what the status of that is. It would obviously be applicable to
> migration if that's a route we took.

Currently the status of a tasks API is that it would happen after the
API v2.1 microversions work has created a suitable framework in which to
add tasks to the API.

> Regards,
> Daniel



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][libvirt] RFC: ensuring live migration ends

2015-02-02 Thread Daniel P. Berrange
On Mon, Feb 02, 2015 at 11:19:45AM -0500, Andrew Laski wrote:
> 
> On 02/02/2015 05:58 AM, Daniel P. Berrange wrote:
> >On Sun, Feb 01, 2015 at 11:20:08AM -0800, Noel Burton-Krahn wrote:
> >>Thanks for bringing this up, Daniel.  I don't think it makes sense to have
> >>a timeout on live migration, but operators should be able to cancel it,
> >>just like any other unbounded long-running process.  For example, there's
> >>no timeout on file transfers, but they need an interface report progress
> >>and to cancel them.  That would imply an option to cancel evacuation too.
> >There has been periodic talk about a generic "tasks API" in Nova for managing
> >long running operations and getting information about their progress, but I
> >am not sure what the status of that is. It would obviously be applicable to
> >migration if that's a route we took.
> 
> Currently the status of a tasks API is that it would happen after the API
> v2.1 microversions work has created a suitable framework in which to add
> tasks to the API.

So is all work on tasks blocked by the microversions support? I would have
thought that would only block places where we need to modify existing APIs.
Are we not able to add APIs for listing / cancelling tasks as new APIs
without such a dependency on microversions?
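To make the list/cancel semantics being discussed concrete, here is a toy
in-memory model. The operations, states, and field names are assumptions;
the real tasks API design was still an open question at this point:

```python
import itertools

class TaskRegistry:
    """Toy model of the listing/cancelling semantics a tasks API might
    offer for long-running operations such as live migration."""

    def __init__(self):
        self._tasks = {}
        self._ids = itertools.count(1)

    def start(self, kind):
        # A long-running operation registers itself and gets a task id.
        task_id = next(self._ids)
        self._tasks[task_id] = {"id": task_id, "kind": kind,
                                "state": "running"}
        return task_id

    def list(self, state=None):
        # Admin-facing query: all tasks, or only those in a given state.
        return [t for t in self._tasks.values()
                if state is None or t["state"] == state]

    def cancel(self, task_id):
        task = self._tasks[task_id]
        if task["state"] == "running":
            # The actual abort is asynchronous; we only mark the intent.
            task["state"] = "cancelling"
        return task
```

Monitoring tools could poll `list(state="running")` for API-driven health
checks, as suggested earlier in the thread.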

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Deprecation of in tree EC2 API in Nova for Kilo release

2015-02-02 Thread Sean Dague
On 02/02/2015 10:55 AM, Daniel P. Berrange wrote:
> On Mon, Feb 02, 2015 at 07:44:24AM -0800, Dan Smith wrote:
>>> I'm with Daniel on that one. We shouldn't "deprecate" until we are 100%
>>> sure that the replacement is up to the task and that strategy is solid.
>>
>> My problem with this is: If there wasn't a stackforge project, what
>> would we do? Nova's in-tree EC2 support has been rotting for years now,
>> and despite several rallies for developers, no real progress has been
>> made to rescue it. I don't think that it's reasonable to say that if
>> there wasn't a stackforge project we'd just have to suck it up and
>> magically produce the developers to work on EC2; it's clear that's not
>> going to happen.
> 
> I think that is exactly what we would have to do. We exist as a project
> to serve the needs of our users and it seems pretty clear from the survey
> results that users are deploying the EC2 impl in significant numbers,
> so to just remove it would essentially be ignoring what our users want
> from the project. If we're saying it is reasonable to ignore what our
> users want, then this project is frankly doomed.
> 
>> Thus, it seems to me that we need to communicate that our EC2 support is
>> going away. Hopefully the stackforge project will be at a point to
>> support users that want to keep the functionality. However, the fate of
>> our in-tree support seems clear regardless of how that turns out.
> 
> If the external EC2 support doesn't work out for whatever reason, then
> I don't think the fate of the in-tree support is at all clear. I think
> it would have a very strong case for continuing to exist.

It's really easy to say "someone should do this", but the problem is
that none of the core team is interested, and neither is anyone else. Most
of the people who were once interested are no longer active in OpenStack.

EC2 compatibility does not appear to be part of the long term strategy
for the project, and hasn't been for a while (looking at the level of
maintenance here). OK, so we should signal that, so that new and existing
users who believe it is a core supported feature realize it's not.

The fact that there is some plan to exist out of tree is a bonus,
however the fact that this is not a first class feature in Nova really
does need to be signaled. It hasn't been.

Maybe deprecation is the wrong tool for that, and marking EC2 as
experimental and non supported in the log message is more appropriate.

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [api] API Definition Formats

2015-02-02 Thread michael mccune

On 02/02/2015 10:26 AM, Chris Dent wrote:

pecan-swagger looks cool but presumably pecan has most of the info
you're putting in the decorators in itself already? So, given an
undecorated pecan app, would it be possible to provide it to a function
and have that function output all the paths?



you are correct, pecan is storing most of the information we want in 
its controller metadata. i am working on the next version of 
pecan-swagger now that will reduce the need for so many decorators, and 
instead pull the endpoint information out of the pecan based controller 
classes.


in terms of having a completely undecorated pecan app, i'm not sure 
that's possible just yet due to the object-dispatch routing used by 
pecan. in the next version of pecan-swagger i'm going to reduce the 
decorators to only be needed on controller classes, but i'm not sure 
it will be possible to reduce further, as there will need to be some 
way to learn the route path hierarchy.


i suppose in the future it might be advantageous to create a pecan 
controller base class that could help inform the routing structure, but 
this would still need to be added to current pecan projects.
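as a rough illustration of why the route hierarchy is discoverable from the class structure at all, here is a toy model of object-dispatch routing — not pecan's actual internals; the controller names and the discover_paths helper are invented:

```python
# Toy model (not real pecan code) of object-dispatch routing: each URL
# segment is looked up as an attribute on the current controller, so the
# route tree is implicit in the nested controller attributes.
class BooksController:
    def index(self):
        return "book list"

class RootController:
    books = BooksController()   # nested controller -> /books

    def index(self):
        return "root"

def discover_paths(controller, prefix=""):
    """Walk nested controllers and collect the URL paths they expose."""
    paths = [prefix or "/"]
    for name, attr in vars(type(controller)).items():
        # skip dunders and exposed methods; recurse into sub-controllers
        if name.startswith("_") or callable(attr):
            continue
        paths.extend(discover_paths(attr, prefix + "/" + name))
    return paths

print(discover_paths(RootController()))  # ['/', '/books']
```

a real introspector would also have to handle `_lookup`/`_route` methods and exposed-method metadata, which is where the remaining decorators come in.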



mike

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack Foundation] Finding people to work on the EC2 API in Nova

2015-02-02 Thread Alexandre Levine

Thank you Sean.

There'll be tons of EC2 Tempest tests coming for your attention shortly.
How would you prefer them? In several reviews, I believe, not in one, right?

Best regards,
  Alex Levine

On 2/2/15 6:55 PM, Sean Dague wrote:

On 02/02/2015 07:01 AM, Alexandre Levine wrote:

Michael,

I'm rather new here, especially in regard to communication matters, so
I'd also be glad to understand how it's done and then I can drive it if
it's ok with everybody.
By saying EC2 sub team - whom did you have in mind? From my team, 3
persons are involved.

From the technical point of view, the transition plan could look somewhat
like this (the sequence can be different):

1. Triage EC2 bugs and fix showstoppers in nova's EC2.
2. Contribute Tempest tests for EC2 functionality and employ them
against nova's EC2.
3. Write spec for required API to be exposed from nova so that we get
full info.
4. Triage and fix all of the existing nova's EC2 bugs worth fixing.
5. Set up Tempest testing of the stackforge/ec2 (if that's possible).
6. Communicate and discover all of the existing questions and
problematic points for the switching from existing EC2 API to the new
one. Provide solutions or decisions about them.
7. Do performance testing of the new stackforge/ec2 and provide fixes if
any bottlenecks come up.
8. Have all of the above prepared for the Vancouver summit and discuss
the situation there.

Michael, I am still wondering, who's going to be responsible for timely
reviews and approvals of the fixes and tests we're going to contribute
to nova? So far this is the biggest risk. Is there any way to allow some
of us to participate in the process?

I am happy to volunteer to shepherd these reviews. I'll try to keep an
eye on them, and if something is blocking please just ping me directly
on IRC in #openstack-nova or bring them forward to the weekly Nova meeting.

-Sean




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] problems with instance consoles and novnc

2015-02-02 Thread Chris Friesen

On 01/30/2015 06:26 AM, Jesse Pretorius wrote:

On 29 January 2015 at 04:57, Chris Friesen <chris.frie...@windriver.com> wrote:

On 01/28/2015 10:33 PM, Mathieu Gagné wrote:

On 2015-01-28 11:13 PM, Chris Friesen wrote:

Anyone have any suggestions on where to start digging?

We have a similar issue which has yet to be properly diagnosed on our 
side.

One workaround which looks to be working for us is enabling the "private
mode"
in the browser. If it doesn't work, try deleting your cookies.

Can you see if those workarounds work for you?


Neither of those seems to work for me.  I still get a multi-second delay and
then the red bar with "Connect timeout".

I suspect it's something related to websockify, but I can't figure out what.


In some versions of websockify, and the related noVNC versions that use it, I've
seen the same behaviour. This is due to the way websockify tries to detect the
protocol to use. It ends up doing a localhost connection and the browser rejects
it as an unsafe operation.

It was fixed in later versions of websockify.

Have you tried manually updating the NoVNC and websockify files to later
versions from source?


We were already using a fairly recent version of websockify, but it turns out 
that we needed to upversion the novnc package.


Chris

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][tc] SQL Schema Downgrades and Related Issues

2015-02-02 Thread Adam Young

On 01/30/2015 07:23 AM, Sandy Walsh wrote:


From: Johannes Erdfelt [johan...@erdfelt.com]
Sent: Thursday, January 29, 2015 9:18 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [all][tc] SQL Schema Downgrades and Related Issues

On Thu, Jan 29, 2015, Morgan Fainberg  wrote:

The concept that there is a utility that can (and in many cases
willfully) cause permanent, and in some cases irrevocable, data loss
from a simple command line interface sounds crazy when I try and
explain it to someone.

The more I work with the data stored in SQL, the more I think we
should really recommend the tried-and-true best practice when trying
to revert from a migration: restore your DB to a known good state.

You mean like restoring from backup?

Unless your code deploy fails before it has any chance of running, then
you could have had new instances started or instances changed and then
restoring from backups would lose data.

If you meant another way of restoring your data, then there are
some strategies that downgrades could employ that doesn't lose data,
but there is nothing that can handle 100% of cases.

All of that said, for the Rackspace Public Cloud, we have never rolled
back our deploy. We have always rolled forward for any fixes we needed.


From my perspective, I'd be fine with doing away with downgrades, but
I'm not sure how to document that deployers should roll forward if they
have any deploy problems.

JE

Yep ... downgrades simply aren't practical with a SQL-schema based
solution. Too coarse-grained.

We'd have to move to a schema-less model, per-record versioning and
up-down conversion at the Nova Objects layer. Or, possibly introduce
more nodes that can deal with older versions. Either way, that's a big
hairy change.


Horse pocky!  Schemaless means "implied contract instead of explicit."  
That would be madness.  Please take the "NoSQL good, SQL bad" approach out 
of the conversation, as absotutely (yes, absotutely) everything we have 
here is doubly true for NoSQL; we just don't hammer on it as much.  We 
don't even document the record formats in the NoSQL cases in Keystone, so 
we can break them both willy and nilly, but have often found that we are 
just stuck.  Usually we are only dealing with the token table, and so 
we just dump the old tokens and shake our heads sadly.







The upgrade code is still required, so removing the downgrades (and
tests, if any) is a relatively small change to the code base.

The bigger issue is the anxiety the deployer will experience until a
patch lands.

-S

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack Foundation] Finding people to work on the EC2 API in Nova

2015-02-02 Thread Sean Dague
On 02/02/2015 11:35 AM, Alexandre Levine wrote:
> Thank you Sean.
> 
> There'll be tons of EC2 Tempest tests coming for your attention shortly.
> How would you prefer them? In several reviews, I believe, not in one,
> right?
> 
> Best regards,
>   Alex Levine

So, honestly, I think we should probably look at getting the ec2
tests out of the Tempest tree as well and into a more dedicated place,
like the stackforge project tree, given that the right expertise would
be there as well. It could use tempest-lib for some of the common parts.

-Sean

> 
> On 2/2/15 6:55 PM, Sean Dague wrote:
>> On 02/02/2015 07:01 AM, Alexandre Levine wrote:
>>> Michael,
>>>
>>> I'm rather new here, especially in regard to communication matters, so
>>> I'd also be glad to understand how it's done and then I can drive it if
>>> it's ok with everybody.
>>> By saying EC2 sub team - who did you keep in mind? From my team 3
>>> persons are involved.
>>>
>>>  From the technical point of view the transition plan could look
>>> somewhat
>>> like this (sequence can be different):
>>>
>>> 1. Triage EC2 bugs and fix showstoppers in nova's EC2.
>>> 2. Contribute Tempest tests for EC2 functionality and employ them
>>> against nova's EC2.
>>> 3. Write spec for required API to be exposed from nova so that we get
>>> full info.
>>> 4. Triage and fix all of the existing nova's EC2 bugs worth fixing.
>>> 5. Set up Tempest testing of the stackforge/ec2 (if that's possible).
>>> 6. Communicate and discover all of the existing questions and
>>> problematic points for the switching from existing EC2 API to the new
>>> one. Provide solutions or decisions about them.
>>> 7. Do performance testing of the new stackforge/ec2 and provide fixes if
>>> any bottlenecks come up.
>>> 8. Have all of the above prepared for the Vancouver summit and discuss
>>> the situation there.
>>>
>>> Michael, I am still wondering, who's going to be responsible for timely
>>> reviews and approvals of the fixes and tests we're going to contribute
>>> to nova? So far this is the biggest risk. Is there anyway to allow some
>>> of us to participate in the process?
>> I am happy to volunteer to shepherd these reviews. I'll try to keep an
>> eye on them, and if something is blocking please just ping me directly
>> on IRC in #openstack-nova or bring them forward to the weekly Nova
>> meeting.
>>
>> -Sean
>>
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Cluster replaced deployment of provisioning information

2015-02-02 Thread Evgeniy L
Hi Dmitry,

I've read about inventories and I'm not sure it's what we really need:
an inventory gives you a kind of node-discovery mechanism, but what we
need is to take some abstract data and convert it into a more
task-friendly format.

In another thread I've mentioned Variables [1] in Ansible, probably it
fits more than inventory from architecture point of view.

With this functionality plugin will be able to get required information from
Nailgun via REST API and pass the information into specific task.

But it's not the way to go for the core deployment. I would like to remind
you what we had two years ago: Nailgun passed the information in format A
to the Orchestrator (Astute), and then the Orchestrator converted it into a
second format B. It was horrible from a debugging point of view; it's always
hard when you have to look in several places to figure out what you get
as a result. Your design suggestion is pretty similar: it divides
serialization logic between Nailgun and another layer in the task
scripts.

Thanks,

[1] http://docs.ansible.com/playbooks_variables.html#registered-variables
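For concreteness, the dynamic-inventory contract referenced below is just "an executable that prints JSON when called with --list". A minimal sketch of one that would wrap a Nailgun-style REST API — the endpoint and the response fields here are invented, and the HTTP call is stubbed out so the sketch is self-contained:

```python
#!/usr/bin/env python
# Sketch of an Ansible dynamic-inventory script fed by a Nailgun-like
# API. Ansible's contract: print a {group: {"hosts": [...]}} JSON dict
# on --list. The node data below is a hypothetical stand-in for e.g.
# urllib.request.urlopen("http://fuel-master/api/nodes").
import json
import sys

def fetch_nodes():
    # Stubbed API response; a real script would do an HTTP GET here.
    return [{"hostname": "node-3", "roles": ["controller"]},
            {"hostname": "node-5", "roles": ["compute"]}]

def build_inventory(nodes):
    # Group hosts by role, the shape Ansible expects from --list.
    inventory = {}
    for node in nodes:
        for role in node["roles"]:
            inventory.setdefault(role, {"hosts": []})["hosts"].append(
                node["hostname"])
    return inventory

if __name__ == "__main__":
    if len(sys.argv) > 1 and sys.argv[1] == "--list":
        print(json.dumps(build_inventory(fetch_nodes())))
```

The point of the architecture argument is where build_inventory() lives: in Nailgun's serializers, or in a separate layer that consumes the REST API.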

On Mon, Feb 2, 2015 at 5:05 PM, Dmitriy Shulyak 
wrote:

>
> >> But why to add another interface when there is one already (rest api)?
>>
>> I'm ok if we decide to use REST API, but of course there is a problem
>> which
>> we should solve, like versioning, which is much harder to support, than
>> versioning
>> in core-serializers. Also do you have any ideas how it can be implemented?
>>
>
> We need to think about deployment serializers not as part of nailgun (fuel
> data inventory), but - part of another layer which uses nailgun api to
> generate deployment information. Lets take ansible for example, and
> dynamic inventory feature [1].
> Nailgun API can be used inside of ansible dynamic inventory to generate
> config that will be consumed by ansible during deployment.
>
> Such approach will have several benefits:
> - cleaner interface (ability to use ansible as main interface to control
> deployment and all its features)
> - deployment configuration will be tightly coupled with deployment code
> - no limitation on what sources to use for configuration, and how to
> compute additional values from requested data
>
> I want to emphasize that i am not considering ansible as solution for
> fuel, it serves only as example of architecture.
>
>
>> You run some code which get the information from api on the master node
>> and
>> then sets the information in tasks? Or you are going to run this code on
>> OpenStack
>> nodes? As you mentioned in case of tokens, you should get the token right
>> before
>> you really need it, because of expiring problem, but in this case you
>> don't
>> need any serializers, get required token right in the task.
>>
>
> I think all information should be fetched before deployment.
>
>>
>>
> >> What is your opinion about serializing additional information in
>> plugins code? How it can be done, without exposing db schema?
>>
>> With exposing the data in more abstract way the way it's done right now
>> for the current deployment logic.
>>
>
> I mean what if plugin will want to generate additional data, like -
> https://review.openstack.org/#/c/150782/? Schema will be still exposed?
>
> [1] http://docs.ansible.com/intro_dynamic_inventory.html
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][tc] SQL Schema Downgrades and Related Issues

2015-02-02 Thread Adam Young

On 01/29/2015 03:11 PM, Mike Bayer wrote:


Morgan Fainberg  wrote:


Are downward migrations really a good idea for us to support? Is this downward 
migration path a sane expectation? In the real world, would any one really 
trust the data after migrating downwards?

It’s a good idea for a migration script to include a rudimentary downgrade 
operation to complement the upgrade operation, if feasible.  The purpose of 
this downgrade is from




Except that it is code we need to maintain and support.  I think we 
are making more work for ourselves than the value these scripts provide 
justifies.

  a practical standpoint helpful when locally testing a specific, typically 
small series of migrations.

A downgrade however typically only applies to schema objects, and not so much 
data.   It is often impossible to provide downgrades of data changes as it is 
likely that a data upgrade operation was destructive of some data.  Therefore, 
when dealing with a full series of real world migrations that include data 
migrations within them, downgrades are typically impossible.   I’m getting the 
impression that our migration scripts have data migrations galore in them.

So I am +1 on establishing a policy that the deployer of the application would 
not have access to any “downgrade” migrations, and -1 on removing “downgrade” 
entirely from individual migrations.   Specific migration scripts may return 
NotImplemented for their downgrade if its really not feasible, but for things 
like table and column changes where autogenerate has already rendered the 
downgrade, it’s handy to keep at least the smaller ones working.





__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] problems with huge pages and libvirt

2015-02-02 Thread Chris Friesen
Hi,

I'm trying to make use of huge pages as described in 
"http://specs.openstack.org/openstack/nova-specs/specs/kilo/implemented/virt-driver-large-pages.html".
  I'm running kilo as of Jan 27th.

I've allocated 1 2MB pages on a compute node.  "virsh capabilities" on that 
node contains:

    <cell id='0'>
      <memory unit='KiB'>67028244</memory>
      <pages unit='KiB' size='4'>16032069</pages>
      <pages unit='KiB' size='2048'>5000</pages>
      <pages unit='KiB' size='1048576'>1</pages>
...
    <cell id='1'>
      <memory unit='KiB'>67108864</memory>
      <pages unit='KiB' size='4'>16052224</pages>
      <pages unit='KiB' size='2048'>5000</pages>
      <pages unit='KiB' size='1048576'>1</pages>

I then restarted nova-compute, set "hw:mem_page_size=large" on a flavor, and 
then tried to boot up an instance with that flavor.  I got the error logs below 
in nova-scheduler.  Is this a bug?


Feb  2 16:23:10 controller-0 nova-scheduler Exception during message handling: 
Cannot load 'mempages' in the base class
2015-02-02 16:23:10.746 37521 TRACE oslo.messaging.rpc.dispatcher Traceback 
(most recent call last):
2015-02-02 16:23:10.746 37521 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/lib64/python2.7/site-packages/oslo/messaging/rpc/dispatcher.py", line 
134, in _dispatch_and_reply
2015-02-02 16:23:10.746 37521 TRACE oslo.messaging.rpc.dispatcher 
incoming.message))
2015-02-02 16:23:10.746 37521 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/lib64/python2.7/site-packages/oslo/messaging/rpc/dispatcher.py", line 
177, in _dispatch
2015-02-02 16:23:10.746 37521 TRACE oslo.messaging.rpc.dispatcher return 
self._do_dispatch(endpoint, method, ctxt, args)
2015-02-02 16:23:10.746 37521 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/lib64/python2.7/site-packages/oslo/messaging/rpc/dispatcher.py", line 
123, in _do_dispatch
2015-02-02 16:23:10.746 37521 TRACE oslo.messaging.rpc.dispatcher result = 
getattr(endpoint, method)(ctxt, **new_args)
2015-02-02 16:23:10.746 37521 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/lib64/python2.7/site-packages/oslo/messaging/rpc/server.py", line 139, in 
inner
2015-02-02 16:23:10.746 37521 TRACE oslo.messaging.rpc.dispatcher return 
func(*args, **kwargs)
2015-02-02 16:23:10.746 37521 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/lib64/python2.7/site-packages/nova/scheduler/manager.py", line 86, in 
select_destinations
2015-02-02 16:23:10.746 37521 TRACE oslo.messaging.rpc.dispatcher 
filter_properties)
2015-02-02 16:23:10.746 37521 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/lib64/python2.7/site-packages/nova/scheduler/filter_scheduler.py", line 
67, in select_destinations
2015-02-02 16:23:10.746 37521 TRACE oslo.messaging.rpc.dispatcher 
filter_properties)
2015-02-02 16:23:10.746 37521 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/lib64/python2.7/site-packages/nova/scheduler/filter_scheduler.py", line 
138, in _schedule
2015-02-02 16:23:10.746 37521 TRACE oslo.messaging.rpc.dispatcher 
filter_properties, index=num)
2015-02-02 16:23:10.746 37521 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/lib64/python2.7/site-packages/nova/scheduler/host_manager.py", line 391, 
in get_filtered_hosts
2015-02-02 16:23:10.746 37521 TRACE oslo.messaging.rpc.dispatcher hosts, 
filter_properties, index)
2015-02-02 16:23:10.746 37521 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/lib64/python2.7/site-packages/nova/filters.py", line 77, in 
get_filtered_objects
2015-02-02 16:23:10.746 37521 TRACE oslo.messaging.rpc.dispatcher list_objs 
= list(objs)
2015-02-02 16:23:10.746 37521 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/lib64/python2.7/site-packages/nova/filters.py", line 43, in filter_all
2015-02-02 16:23:10.746 37521 TRACE oslo.messaging.rpc.dispatcher if 
self._filter_one(obj, filter_properties):
2015-02-02 16:23:10.746 37521 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/lib64/python2.7/site-packages/nova/scheduler/filters/__init__.py", line 
27, in _filter_one
2015-02-02 16:23:10.746 37521 TRACE oslo.messaging.rpc.dispatcher return 
self.host_passes(obj, filter_properties)
2015-02-02 16:23:10.746 37521 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/lib64/python2.7/site-packages/nova/scheduler/filters/numa_topology_filter.py",
 line 45, in host_passes
2015-02-02 16:23:10.746 37521 TRACE oslo.messaging.rpc.dispatcher 
limits_topology=limits))
2015-02-02 16:23:10.746 37521 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/lib64/python2.7/site-packages/nova/virt/hardware.py", line 1161, in 
numa_fit_instance_to_host
2015-02-02 16:23:10.746 37521 TRACE oslo.messaging.rpc.dispatcher 
host_cell, instance_cell, limit_cell)
2015-02-02 16:23:10.746 37521 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/lib64/python2.7/site-packages/nova/virt/hardware.py", line 851, in 
_numa_fit_instance_cell
2015-02-02 16:23:10.746 37521 TRACE oslo.messaging.rpc.dispatcher 
host_cell, instance_cell)
2015-02-02 16:23:10.746 37521 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/lib64/python2.7/site-packages/nova/virt/hardware.py", line 692, in 
_numa_cell_supports_pagesize_request
2015-02-02 16:23:10.746 37521 TRACE oslo.messaging.rpc.dispatcher 
avail_pag

Re: [openstack-dev] [Heat][Keystone] Native keystone resources in Heat

2015-02-02 Thread Adam Young

On 01/30/2015 02:19 AM, Thomas Spatzier wrote:

From: Zane Bitter 
To: openstack Development Mailing List



Date: 29/01/2015 17:47
Subject: [openstack-dev] [Heat][Keystone] Native keystone resources in

Heat

I got a question today about creating keystone users/roles/tenants in
Heat templates. We currently support creating users via the
AWS::IAM::User resource, but we don't have a native equivalent.

IIUC keystone now allows you to add users to a domain that is otherwise
backed by a read-only backend (i.e. LDAP). If this means that it's now
possible to configure a cloud so that one need not be an admin to create
users then I think it would be a really useful thing to expose in Heat.
Does anyone know if that's the case?

I think roles and tenants are likely to remain admin-only, but we have
precedent for including resources like that in /contrib... this seems
like it would be comparably useful.

Thoughts?

I am really not a keystone expert,

I am!  But when I grow up, I want to be a fireman!

so don't know what the security
implications would be, but I have heard the requirement or wish to be able
to create users, roles etc. from a template many times.
Should be possible.  LDAP can be read-only, but these things can all go 
into SQL, and just have a loose coupling with the LDAP entities.




I've talked to
people who want to explore this for onboarding use cases, e.g. for
onboarding of lines of business in a company, or for onboarding customers
in a public cloud case. They would like to be able to have templates that
lay out the overall structure for authentication stuff, and then
parameterize it for each onboarding process.


Those domains, users, projects, etc. would all go into SQL.  The only 
case to use LDAP would be if the remote organization already had an 
LDAP system that contained users, and they wanted to reuse it.  There 
are issues there, and I suspect Federation (SAML) will be the mechanism 
of choice for these types of integrations, not LDAP.



If this is something to be enabled, that would be interesting to explore.

Regards,
Thomas


cheers,
Zane.



__

OpenStack Development Mailing List (not for usage questions)
Unsubscribe:

openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] problems with huge pages and libvirt

2015-02-02 Thread Jay Pipes
This is a bug that I discovered when fixing some of the NUMA related 
nova objects. I have a patch that should fix it up shortly.


This is what happens when we don't have any functional testing of stuff 
that is merged into master...


Best,
-jay

On 02/02/2015 11:44 AM, Chris Friesen wrote:

Hi,

I'm trying to make use of huge pages as described in 
"http://specs.openstack.org/openstack/nova-specs/specs/kilo/implemented/virt-driver-large-pages.html".
  I'm running kilo as of Jan 27th.

I've allocated 1 2MB pages on a compute node.  "virsh capabilities" on that 
node contains:

    <cell id='0'>
      <memory unit='KiB'>67028244</memory>
      <pages unit='KiB' size='4'>16032069</pages>
      <pages unit='KiB' size='2048'>5000</pages>
      <pages unit='KiB' size='1048576'>1</pages>
...
    <cell id='1'>
      <memory unit='KiB'>67108864</memory>
      <pages unit='KiB' size='4'>16052224</pages>
      <pages unit='KiB' size='2048'>5000</pages>
      <pages unit='KiB' size='1048576'>1</pages>


I then restarted nova-compute, I set "hw:mem_page_size=large" on a flavor, and 
then tried to boot up an instance with that flavor.  I got the error logs below in 
nova-scheduler.  Is this a bug?


Feb  2 16:23:10 controller-0 nova-scheduler Exception during message handling: 
Cannot load 'mempages' in the base class
2015-02-02 16:23:10.746 37521 TRACE oslo.messaging.rpc.dispatcher Traceback 
(most recent call last):
2015-02-02 16:23:10.746 37521 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/lib64/python2.7/site-packages/oslo/messaging/rpc/dispatcher.py", line 
134, in _dispatch_and_reply
2015-02-02 16:23:10.746 37521 TRACE oslo.messaging.rpc.dispatcher 
incoming.message))
2015-02-02 16:23:10.746 37521 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/lib64/python2.7/site-packages/oslo/messaging/rpc/dispatcher.py", line 
177, in _dispatch
2015-02-02 16:23:10.746 37521 TRACE oslo.messaging.rpc.dispatcher return 
self._do_dispatch(endpoint, method, ctxt, args)
2015-02-02 16:23:10.746 37521 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/lib64/python2.7/site-packages/oslo/messaging/rpc/dispatcher.py", line 
123, in _do_dispatch
2015-02-02 16:23:10.746 37521 TRACE oslo.messaging.rpc.dispatcher result = 
getattr(endpoint, method)(ctxt, **new_args)
2015-02-02 16:23:10.746 37521 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/lib64/python2.7/site-packages/oslo/messaging/rpc/server.py", line 139, in 
inner
2015-02-02 16:23:10.746 37521 TRACE oslo.messaging.rpc.dispatcher return 
func(*args, **kwargs)
2015-02-02 16:23:10.746 37521 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/lib64/python2.7/site-packages/nova/scheduler/manager.py", line 86, in 
select_destinations
2015-02-02 16:23:10.746 37521 TRACE oslo.messaging.rpc.dispatcher 
filter_properties)
2015-02-02 16:23:10.746 37521 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/lib64/python2.7/site-packages/nova/scheduler/filter_scheduler.py", line 
67, in select_destinations
2015-02-02 16:23:10.746 37521 TRACE oslo.messaging.rpc.dispatcher 
filter_properties)
2015-02-02 16:23:10.746 37521 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/lib64/python2.7/site-packages/nova/scheduler/filter_scheduler.py", line 
138, in _schedule
2015-02-02 16:23:10.746 37521 TRACE oslo.messaging.rpc.dispatcher 
filter_properties, index=num)
2015-02-02 16:23:10.746 37521 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/lib64/python2.7/site-packages/nova/scheduler/host_manager.py", line 391, 
in get_filtered_hosts
2015-02-02 16:23:10.746 37521 TRACE oslo.messaging.rpc.dispatcher hosts, 
filter_properties, index)
2015-02-02 16:23:10.746 37521 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/lib64/python2.7/site-packages/nova/filters.py", line 77, in 
get_filtered_objects
2015-02-02 16:23:10.746 37521 TRACE oslo.messaging.rpc.dispatcher list_objs 
= list(objs)
2015-02-02 16:23:10.746 37521 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/lib64/python2.7/site-packages/nova/filters.py", line 43, in filter_all
2015-02-02 16:23:10.746 37521 TRACE oslo.messaging.rpc.dispatcher if 
self._filter_one(obj, filter_properties):
2015-02-02 16:23:10.746 37521 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/lib64/python2.7/site-packages/nova/scheduler/filters/__init__.py", line 
27, in _filter_one
2015-02-02 16:23:10.746 37521 TRACE oslo.messaging.rpc.dispatcher return 
self.host_passes(obj, filter_properties)
2015-02-02 16:23:10.746 37521 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/lib64/python2.7/site-packages/nova/scheduler/filters/numa_topology_filter.py",
 line 45, in host_passes
2015-02-02 16:23:10.746 37521 TRACE oslo.messaging.rpc.dispatcher 
limits_topology=limits))
2015-02-02 16:23:10.746 37521 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/lib64/python2.7/site-packages/nova/virt/hardware.py", line 1161, in 
numa_fit_instance_to_host
2015-02-02 16:23:10.746 37521 TRACE oslo.messaging.rpc.dispatcher 
host_cell, instance_cell, limit_cell)
2015-02-02 16:23:10.746 37521 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/lib64/python2.7/site-packages/nova/virt/hardware.py", line 851, in 
_numa_fit_instance_cell
2015-02-02 16:23:10.746 37521 TRACE oslo.messaging.rpc.

Re: [openstack-dev] [neutron] high dhcp lease times in neutron deployments considered harmful (or not???)

2015-02-02 Thread Brian Haley
Kevin,

I think we are finally converging.  One of the points I've been trying to make
is that users are playing with fire when they start playing with some of these
port attributes, and given the tool we have to work with (DHCP), these changes
cannot be propagated to a VM seamlessly.  That's life
in the cloud, and most of these things can (and should) be designed around.

On 02/02/2015 06:48 AM, Kevin Benton wrote:
>> The only thing this discussion has convinced me of is that allowing users
> to change the fixed IP address on a neutron port leads to a bad
> user-experience.
> 
> Not as bad as having to delete a port and create another one on the same
> network just to change addresses though...
> 
>> Even with an 8-minute renew time you're talking up to a 7-minute blackout
> (87.5% of lease time before using broadcast).
> 
> I suggested 240 seconds renewal time, which is up to 4 minutes of
> connectivity outage. This doesn't have anything to do with lease time and
> unicast DHCP will work because the spoof rules allow DHCP client traffic
> before restricting to specific IPs.

The unicast DHCP will make it to the "wire", but if you've renumbered the subnet
either a) the DHCP server won't respond because its IP has changed as well; or
b) the DHCP server won't respond because there is no mapping for the VM on its
old subnet.
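For reference, the arithmetic behind the numbers in this thread — a sketch using RFC 2131's default client timers (renew at T1 = 50% of the lease, fall back to broadcast rebinding at T2 = 87.5%):

```python
# RFC 2131 default DHCP client timers. Worst case, a client keeps using
# a stale address until T2, which is where the "87.5% of lease time
# before using broadcast" figure in this thread comes from.
def dhcp_timers(lease_seconds):
    return {"T1_renew": lease_seconds * 0.5,
            "T2_rebind": lease_seconds * 0.875}

print(dhcp_timers(480))    # 8-minute lease: T2 at 420 s, the ~7-minute window
print(dhcp_timers(86400))  # 1-day lease: T2 at 75600 s, i.e. 21 hours
```

The 240-second renewal Kevin suggests bounds the stale window to minutes; a one-day lease bounds it to most of a working day.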

>> Most would have rebooted long before then, true?  Cattle not pets, right?
> 
> Only in an ideal world that I haven't encountered with customer deployments. 
> Many enterprise deployments end up bringing pets along where reboots aren't 
> always free. The time taken to relaunch programs and restore state can end
> up being 10 minutes+ if it's something like a VDI deployment or dev
> environment where someone spends a lot of time working on one VM.

This would happen if the AZ their VM was in went offline as well, at which point
they would change their design to be more cloud-aware than it was.  Let's not
heap all the blame on neutron - the user is tasked with vetting that their
decisions meet the requirements they desire by thoroughly testing it.

>> Changing the lease time is just papering-over the real bug - neutron
> doesn't support seamless changes in IP addresses on ports, since it totally 
> relies on the dhcp configuration settings a deployer has chosen.
> 
> It doesn't need to be seamless, but it certainly shouldn't be useless. 
> Connectivity interruptions can be expected with IP changes (e.g. I've seen
> changes in elastic IPs on EC2 interrupt connectivity to an instance for
> up to 2 minutes), but an entire day of downtime is awful.

Yes, I agree, an entire day of downtime is bad.

> One of the things I'm getting at is that a deployer shouldn't be choosing
> such high lease times and we are encouraging it with a high default. You are
> arguing for infrequent renewals to work around excessive logging, which is
> just an implementation problem that should be addressed with a patch to your
> logging collector (de-duplication) or to dnsmasq (don't log renewals).

My #1 deployment problem was around control-plane upgrade, not logging:

"During a control-plane upgrade or outage, having a short DHCP lease time will
take all your VMs offline.  The old value of 2 minutes is not a realistic value
for an upgrade, and I don't think 8 minutes is much better.  Yes, when DHCP is
down you can't boot a new VM, but as long as customers can get to their existing
VMs they're pretty happy and won't scream bloody murder."
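For concreteness, the arithmetic behind these renewal arguments can be sketched as follows (a rough model, assuming an RFC 2131 client that attempts unicast renewal at T1, which defaults to half the lease time unless the server hands out an explicit renewal option; the helper name is illustrative, not anything in neutron):

```python
def worst_case_outage_minutes(lease_s, renew_s=None):
    """Rough worst-case window (in minutes) before a well-behaved DHCP
    client re-contacts the server after its port's address was changed.
    Assumes the client renews at T1, which RFC 2131 defaults to 50% of
    the lease time unless an explicit renewal time is handed out."""
    t1 = renew_s if renew_s is not None else lease_s * 0.5
    return t1 / 60.0

# the two configurations argued over in this thread:
print(worst_case_outage_minutes(86400))       # 1-day lease -> 720.0
print(worst_case_outage_minutes(86400, 240))  # 240s renewal option -> 4.0
```

Under this model a day-long lease with no separate renewal option leaves a VM dark for up to 12 hours, which is the gap both sides of the thread agree is unacceptable.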

>> Documenting a VM reboot is necessary, or even deprecating this (you won't
>> like
> that) are sounding better to me by the minute.
> 
> If this is an approach you really want to go with, then we should at least
> be consistent and deprecate the extra dhcp options extension (or at least
> the ability to update ports' dhcp options). Updating subnet attributes like 
> gateway_ip, dns_nameserves, and host_routes should be thrown out as well. All
> of these things depend on the DHCP server to deliver updated information and
> are hindered by renewal times. Why discriminate against IP updates on a port?
> A failure to receive many of those other types of changes could result in
> just as severe of a connection disruption.

How about a big (*) next to all the things that could cause issues?  :)  We've
completely "loaded the gun" exposing all these attributes to the general user
when only the network-aware power-user should be playing with them.

(*) Changing these attributes could cause VMs to become unresponsive for a long
period of time depending on the deployment settings, and should be used with
caution.  Sometimes a VM reboot will be required to regain connectivity.

> In summary, the information the DHCP server gives to clients is not static. 
> Unless we eliminate updates to everything in the Neutron API that results in 
> different DHCP lease information, my suggestion is that we include a new
> option for the renewal interval and have the default se

Re: [openstack-dev] problems with huge pages and libvirt

2015-02-02 Thread Sahid Orentino Ferdjaoui
On Mon, Feb 02, 2015 at 10:44:09AM -0600, Chris Friesen wrote:
> Hi,
> 
> I'm trying to make use of huge pages as described in
> "http://specs.openstack.org/openstack/nova-specs/specs/kilo/implemented/virt-driver-large-pages.html".
> I'm running kilo as of Jan 27th.
> I've allocated 1 2MB pages on a compute node.  "virsh capabilities" on 
> that node contains:
> 
> <topology>
>   <cells num='2'>
>     <cell id='0'>
>       <memory unit='KiB'>67028244</memory>
>       <pages unit='KiB' size='4'>16032069</pages>
>       <pages unit='KiB' size='2048'>5000</pages>
>       <pages unit='KiB' size='1048576'>1</pages>
> ...
>     <cell id='1'>
>       <memory unit='KiB'>67108864</memory>
>       <pages unit='KiB' size='4'>16052224</pages>
>       <pages unit='KiB' size='2048'>5000</pages>
>       <pages unit='KiB' size='1048576'>1</pages>
>     </cell>
>   </cells>
> </topology>
> I then restarted nova-compute, I set "hw:mem_page_size=large" on a
> flavor, and then tried to boot up an instance with that flavor.  I
> got the error logs below in nova-scheduler.  Is this a bug?

Hello,

Launchpad would be a more appropriate place to
discuss something which looks like a bug.

  https://bugs.launchpad.net/nova/+filebug

According to your trace I would say you are running different versions
of Nova services.

BTW please verify your version of libvirt. Hugepages are supported
starting with 1.2.8 (but it should definitely not fail so badly like
that).
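For readers following along, the flavor's hw:mem_page_size request is matched against the per-NUMA-cell page sizes reported by "virsh capabilities". A much-simplified sketch of that matching (this is not nova's actual code; the function name and selection policy are illustrative assumptions):

```python
SMALL_KB = 4  # assumption: "small" means the base 4 KiB page


def pick_page_size(requested, host_pages):
    """Pick a page size for a guest.

    requested: 'small', 'large', 'any', or an explicit size in KiB.
    host_pages: dict mapping page size (KiB) -> free page count,
    as reported per NUMA cell by virsh capabilities."""
    if requested == 'small':
        candidates = [SMALL_KB]
    elif requested == 'large':
        candidates = [s for s in host_pages if s > SMALL_KB]
    elif requested == 'any':
        candidates = list(host_pages)
    else:
        candidates = [int(requested)]
    # prefer the largest candidate size that still has free pages
    for size in sorted(candidates, reverse=True):
        if host_pages.get(size, 0) > 0:
            return size
    return None  # no page size can satisfy the request


# the cell-0 numbers from the capabilities output above
host = {4: 16032069, 2048: 5000, 1048576: 1}
print(pick_page_size('large', host))  # -> 1048576
```

With hw:mem_page_size=large the scheduler should only consider cells that actually expose free pages larger than 4 KiB, which is why a version mismatch between services (where one side doesn't serialize the page data) fails in the filter.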

s.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack Foundation] Finding people to work on the EC2 API in Nova

2015-02-02 Thread Boris Pavlovic
On 02/02/2015 11:35 AM, Alexandre Levine wrote:
> Thank you Sean.
>
> We'll have tons of EC2 Tempest tests for your attention shortly.
> How would you prefer them? In several reviews, I believe. Not in one,
> right?
>
> Best regards,
>   Alex Levine

> So, honestly, I think that we should probably look at getting the ec2
> tests out of the Tempest tree as well and into a more dedicated place.
> Like as part of the stackforge project tree. Given that the right
> expertise would be there as well. It could use tempest-lib for some of
> the common parts.



The Rally team would be happy to accept some of the tests, and we also
support in-tree plugins.
So the tests that are only for hardcore functional testing (and not
reusable in real life) can stay in the ec2-api tree.

Best regards,
Boris Pavlovic


On Mon, Feb 2, 2015 at 7:39 PM, Sean Dague  wrote:

> On 02/02/2015 11:35 AM, Alexandre Levine wrote:
> > Thank you Sean.
> >
> > We'll have tons of EC2 Tempest tests for your attention shortly.
> > How would you prefer them? In several reviews, I believe. Not in one,
> > right?
> >
> > Best regards,
> >   Alex Levine
>
> So, honestly, I think that we should probably look at getting the ec2
> tests out of the Tempest tree as well and into a more dedicated place.
> Like as part of the stackforge project tree. Given that the right
> expertise would be there as well. It could use tempest-lib for some of
> the common parts.
>
> -Sean
>
> >
> > On 2/2/15 6:55 PM, Sean Dague wrote:
> >> On 02/02/2015 07:01 AM, Alexandre Levine wrote:
> >>> Michael,
> >>>
> >>> I'm rather new here, especially in regard to communication matters, so
> >>> I'd also be glad to understand how it's done and then I can drive it if
> >>> it's ok with everybody.
> >>> By saying EC2 sub team - who did you keep in mind? From my team 3
> >>> persons are involved.
> >>>
> >>>  From the technical point of view the transition plan could look
> >>> somewhat
> >>> like this (sequence can be different):
> >>>
> >>> 1. Triage EC2 bugs and fix showstoppers in nova's EC2.
> >>> 2. Contribute Tempest tests for EC2 functionality and employ them
> >>> against nova's EC2.
> >>> 3. Write spec for required API to be exposed from nova so that we get
> >>> full info.
> >>> 4. Triage and fix all of the existing nova's EC2 bugs worth fixing.
> >>> 5. Set up Tempest testing of the stackforge/ec2 (if that's possible).
> >>> 6. Communicate and discover all of the existing questions and
> >>> problematic points for the switching from existing EC2 API to the new
> >>> one. Provide solutions or decisions about them.
> >>> 7. Do performance testing of the new stackforge/ec2 and provide fixes
> if
> >>> any bottlenecks come up.
> >>> 8. Have all of the above prepared for the Vancouver summit and discuss
> >>> the situation there.
> >>>
> >>> Michael, I am still wondering, who's going to be responsible for timely
> >>> reviews and approvals of the fixes and tests we're going to contribute
> >>> to nova? So far this is the biggest risk. Is there anyway to allow some
> >>> of us to participate in the process?
> >> I am happy to volunteer to shepherd these reviews. I'll try to keep an
> >> eye on them, and if something is blocking please just ping me directly
> >> on IRC in #openstack-nova or bring them forward to the weekly Nova
> >> meeting.
> >>
> >> -Sean
> >>
> >
> >
> >
>
>
> --
> Sean Dague
> http://dague.net
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] problems with huge pages and libvirt

2015-02-02 Thread Sahid Orentino Ferdjaoui
On Mon, Feb 02, 2015 at 11:51:47AM -0500, Jay Pipes wrote:
> This is a bug that I discovered when fixing some of the NUMA related nova
> objects. I have a patch that should fix it up shortly.

Never seen this issue, could be great to have a bug repported.

> This is what happens when we don't have any functional testing of stuff that
> is merged into master...
> Best,
> -jay

Thanks,
s.


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [mistral] Team meeting minutes/log - 02/02/2015

2015-02-02 Thread Renat Akhmerov
Thanks for joining us today for team meeting!

Meeting minutes:
http://eavesdrop.openstack.org/meetings/mistral/2015/mistral.2015-02-02-16.00.html

Full log:
http://eavesdrop.openstack.org/meetings/mistral/2015/mistral.2015-02-02-16.00.log.html

The next meeting is scheduled on Feb 09

Renat Akhmerov
@ Mirantis Inc.



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack Foundation] Finding people to work on the EC2 API in Nova

2015-02-02 Thread Alexandre Levine


On 2/2/15 7:39 PM, Sean Dague wrote:

On 02/02/2015 11:35 AM, Alexandre Levine wrote:

Thank you Sean.

We'll have tons of EC2 Tempest tests for your attention shortly.
How would you prefer them? In several reviews, I believe. Not in one,
right?

Best regards,
   Alex Levine

So, honestly, I think that we should probably look at getting the ec2
tests out of the Tempest tree as well and into a more dedicated place.
Like as part of the stackforge project tree. Given that the right
expertise would be there as well. It could use tempest-lib for some of
the common parts.

-Sean
We tried to find out about tempest-lib, asked Keichi Ohmichi, but it
seems that's still work in progress. Can you point us to somewhere we
can understand how to employ this technology?

So the use cases will be:

1. Be able to run the suite against EC2 in nova.
2. Be able to run the suite against stackforge/EC2.
3. Use that for gating for both repos.

An additional complication here is that some of the tests will have to be
skipped because of missing functionality or because of bugs in nova's
EC2, but they should be employed against stackforge's version.


Could you advise how to achieve such effects?




On 2/2/15 6:55 PM, Sean Dague wrote:

On 02/02/2015 07:01 AM, Alexandre Levine wrote:

Michael,

I'm rather new here, especially in regard to communication matters, so
I'd also be glad to understand how it's done and then I can drive it if
it's ok with everybody.
By saying EC2 sub team - who did you keep in mind? From my team 3
persons are involved.

  From the technical point of view the transition plan could look
somewhat
like this (sequence can be different):

1. Triage EC2 bugs and fix showstoppers in nova's EC2.
2. Contribute Tempest tests for EC2 functionality and employ them
against nova's EC2.
3. Write spec for required API to be exposed from nova so that we get
full info.
4. Triage and fix all of the existing nova's EC2 bugs worth fixing.
5. Set up Tempest testing of the stackforge/ec2 (if that's possible).
6. Communicate and discover all of the existing questions and
problematic points for the switching from existing EC2 API to the new
one. Provide solutions or decisions about them.
7. Do performance testing of the new stackforge/ec2 and provide fixes if
any bottlenecks come up.
8. Have all of the above prepared for the Vancouver summit and discuss
the situation there.

Michael, I am still wondering, who's going to be responsible for timely
reviews and approvals of the fixes and tests we're going to contribute
to nova? So far this is the biggest risk. Is there anyway to allow some
of us to participate in the process?

I am happy to volunteer to shepherd these reviews. I'll try to keep an
eye on them, and if something is blocking please just ping me directly
on IRC in #openstack-nova or bring them forward to the weekly Nova
meeting.

 -Sean



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev





__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack Foundation] Finding people to work on the EC2 API in Nova

2015-02-02 Thread Alexandre Levine


On 2/2/15 7:04 PM, Sean Dague wrote:

On 02/02/2015 07:01 AM, Alexandre Levine wrote:

Michael,

I'm rather new here, especially in regard to communication matters, so
I'd also be glad to understand how it's done and then I can drive it if
it's ok with everybody.
By saying EC2 sub team - who did you keep in mind? From my team 3
persons are involved.

 From the technical point of view the transition plan could look somewhat
like this (sequence can be different):

1. Triage EC2 bugs and fix showstoppers in nova's EC2.
2. Contribute Tempest tests for EC2 functionality and employ them
against nova's EC2.
3. Write spec for required API to be exposed from nova so that we get
full info.
4. Triage and fix all of the existing nova's EC2 bugs worth fixing.
5. Set up Tempest testing of the stackforge/ec2 (if that's possible).
6. Communicate and discover all of the existing questions and
problematic points for the switching from existing EC2 API to the new
one. Provide solutions or decisions about them.
7. Do performance testing of the new stackforge/ec2 and provide fixes if
any bottlenecks come up.
8. Have all of the above prepared for the Vancouver summit and discuss
the situation there.

Michael, I am still wondering, who's going to be responsible for timely
reviews and approvals of the fixes and tests we're going to contribute
to nova? So far this is the biggest risk. Is there anyway to allow some
of us to participate in the process?

It would also be really helpful if there were reviews from you team on
any ec2 touching code.

https://review.openstack.org/#/q/file:%255Enova/api/ec2.*+status:open,n,z

There currently are only a few patches which touch ec2 that are ec2
function/bug related, and mostly don't have any scored reviews.
Especially this series -
https://review.openstack.org/#/q/status:open+project:openstack/nova+branch:master+topic:bp/ec2-volume-and-snapshot-tags,n,z


Which is a month old with no scoring.


Yes, we'll start looking there as well.


-Sean


Best regards,
   Alex Levine

On 2/2/15 2:46 AM, Michael Still wrote:

So, its exciting to me that we seem to developing more forward
momentum here. I personally think the way forward is a staged
transition from the in-nova EC2 API to the stackforge project, with
testing added to ensure that we are feature complete between the two.
I note that Soren disagrees with me here, but that's ok -- I'd like to
see us work through that as a team based on the merits.

So... It sounds like we have an EC2 sub team forming. How do we get
that group meeting to come up with a transition plan?

Michael

On Sun, Feb 1, 2015 at 4:12 AM, Davanum Srinivas 
wrote:

Alex,

Very cool. thanks.

-- dims

On Sat, Jan 31, 2015 at 1:04 AM, Alexandre Levine
 wrote:

Davanum,

Now that the picture with the both EC2 API solutions has cleared up
a bit, I
can say yes, we'll be adding the tempest tests and doing devstack
integration.

Best regards,
Alex Levine

On 1/31/15 2:21 AM, Davanum Srinivas wrote:

Alexandre, Randy,

Are there plans afoot to add support to switch on stackforge/ec2-api
in devstack? add tempest tests etc? CI Would go a long way in
alleviating concerns i think.

thanks,
dims

On Fri, Jan 30, 2015 at 1:24 PM, Bias, Randy 
wrote:

As you know we have been driving forward on the stack forge
project and
it's our intention to continue to support it over time, plus reinvigorate
the GCE APIs when that makes sense. So we're supportive of deprecating
from Nova to focus on EC2 API in Nova.  I also think it's good for
these
APIs to be able to iterate outside of the standard release cycle.



--Randy

VP, Technology, EMC Corporation
Formerly Founder & CEO, Cloudscaling (now a part of EMC)
+1 (415) 787-2253 [google voice]
TWITTER: twitter.com/randybias
LINKEDIN: linkedin.com/in/randybias
ASSISTANT: ren...@emc.com






On 1/29/15, 4:01 PM, "Michael Still"  wrote:


Hi,

as you might have read on openstack-dev, the Nova EC2 API
implementation is in a pretty sad state. I wont repeat all of those
details here -- you can read the thread on openstack-dev for detail.

However, we got here because no one is maintaining the code in Nova
for the EC2 API. This is despite repeated calls over the last 18
months (at least).

So, does the Foundation have a role here? The Nova team has
failed to
find someone to help us resolve these issues. Can the board perhaps
find resources as the representatives of some of the largest
contributors to OpenStack? Could the Foundation employ someone to
help
us our here?

I suspect the correct plan is to work on getting the stackforge
replacement finished, and ensuring that it is feature compatible
with
the Nova implementation. However, I don't want to preempt the design
process -- there might be other ways forward here.

I feel that a continued discussion which just repeats the last 18
months wont actually fix the situation -- its time to "break out" of
that mode and find other ways to try and get someone working on this
problem.

Thoughts welc

Re: [openstack-dev] [OpenStack Foundation] Finding people to work on the EC2 API in Nova

2015-02-02 Thread Matthew Treinish
On Mon, Feb 02, 2015 at 07:35:46PM +0300, Alexandre Levine wrote:
> Thank you Sean.
> 
> We'll have tons of EC2 Tempest tests for your attention shortly.
> How would you prefer them? In several reviews, I believe. Not in one, right?

Let's take a step back for a sec. How many tests and what kind are we talking
about here?

I'm thinking it might be better to not just try and dump all this stuff in
tempest. While in the past we've just dumped all of this in tempest, moving
forward I don't think that's what we want to be doing. The current ec2 tests
have always felt out of place to me in tempest and historically haven't been
maintained as well as the other tests. If we're talking about ramping up the ec2
testing we probably should look at migrating everything elsewhere, especially
given that it's essentially just nova testing. I see 2 better options here: we
either put the tests in the tree for the project with the ec2 implementation, or
we create a new repo like tempest-ec2 for testing this. In either case we'll
leverage tempest-lib to make sure the bits your existing testing is relying on
are consumable outside of the tempest repo.

-Matt Treinish


> On 2/2/15 6:55 PM, Sean Dague wrote:
> >On 02/02/2015 07:01 AM, Alexandre Levine wrote:
> >>Michael,
> >>
> >>I'm rather new here, especially in regard to communication matters, so
> >>I'd also be glad to understand how it's done and then I can drive it if
> >>it's ok with everybody.
> >>By saying EC2 sub team - who did you keep in mind? From my team 3
> >>persons are involved.
> >>
> >> From the technical point of view the transition plan could look somewhat
> >>like this (sequence can be different):
> >>
> >>1. Triage EC2 bugs and fix showstoppers in nova's EC2.
> >>2. Contribute Tempest tests for EC2 functionality and employ them
> >>against nova's EC2.
> >>3. Write spec for required API to be exposed from nova so that we get
> >>full info.
> >>4. Triage and fix all of the existing nova's EC2 bugs worth fixing.
> >>5. Set up Tempest testing of the stackforge/ec2 (if that's possible).
> >>6. Communicate and discover all of the existing questions and
> >>problematic points for the switching from existing EC2 API to the new
> >>one. Provide solutions or decisions about them.
> >>7. Do performance testing of the new stackforge/ec2 and provide fixes if
> >>any bottlenecks come up.
> >>8. Have all of the above prepared for the Vancouver summit and discuss
> >>the situation there.
> >>
> >>Michael, I am still wondering, who's going to be responsible for timely
> >>reviews and approvals of the fixes and tests we're going to contribute
> >>to nova? So far this is the biggest risk. Is there anyway to allow some
> >>of us to participate in the process?
> >I am happy to volunteer to shepherd these reviews. I'll try to keep an
> >eye on them, and if something is blocking please just ping me directly
> >on IRC in #openstack-nova or bring them forward to the weekly Nova meeting.
> >
> > -Sean
> >
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




[openstack-dev] [Swift] Swift 2.2.2 released today

2015-02-02 Thread John Dickinson
Everyone,

I'm happy to announce that today we have released Swift 2.2.2. (Yes, that's
2.2.2 on 2/2.) This release has a few very important features that came
directly from production clusters. I recommend that you upgrade so you can
take advantage of the new goodness.

As always, you can upgrade to this version of Swift with zero end-user
downtime.

So what's so great in this release? Below are some highlights, but please
read the full changelog at
https://github.com/openstack/swift/blob/master/CHANGELOG

* Data placement changes

  This release has several major changes to data placement in Swift in
  order to better handle different deployment patterns. First, with an
  unbalance-able ring, fewer partitions will move if the movement doesn't
  result in any better dispersion across failure domains. Also, empty
  (partition weight of zero) devices will no longer keep partitions after
  rebalancing when there is an unbalance-able ring.

  Second, the notion of "overload" has been added to Swift's rings. This
  allows devices to take some extra partitions (more than would normally
  be allowed by the device weight) so that smaller and unbalanced clusters
  will have less data movement between servers, zones, or regions if there
  is a failure in the cluster.

  Finally, rings have a new metric called "dispersion". This is the
  percentage of partitions in the ring that have too many replicas in a
  particular failure domain. For example, if you have three servers in a
  cluster but two replicas for a partition get placed onto the same
  server, that partition will count towards the dispersion metric. A
  lower value is better, and the value can be used to find the proper
  value for "overload".

  The overload and dispersion metrics have been exposed in the
  swift-ring-builder CLI tool.

  See http://swift.openstack.org/overview_ring.html
  for more info on how data placement works now.
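  As a rough illustration of the new metric, dispersion can be thought of
  as the share of partitions that have at least two replicas in the same
  failure domain (a simplified model for intuition only, not the actual
  swift-ring-builder computation, which accounts for tiers of regions,
  zones, and servers):

```python
def dispersion(assignments):
    """Percentage of partitions with a doubled-up failure domain.

    assignments: list with one entry per partition, each entry being the
    list of failure-domain ids holding that partition's replicas."""
    bad = sum(1 for domains in assignments
              if len(domains) != len(set(domains)))
    return 100.0 * bad / len(assignments)


# 3 replicas over 3 servers; one of 4 partitions doubled up on server 'a'
parts = [['a', 'b', 'c'], ['a', 'a', 'b'], ['b', 'c', 'a'], ['c', 'a', 'b']]
print(dispersion(parts))  # -> 25.0
```

  A lower value means replicas are better spread out, which is why the
  metric is useful for tuning "overload".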

* Improve container replication for large, out-of-date containers

* Added console logging to swift-drive-audit

* Changed ratelimiting to support whitelisting and blacklisting based on
  account metadata (sysmeta). Note that the existing config options continue
  to work.

This release is the combined work of 20 developers, including 3 first-time
Swift contributors:

* Harshit Chitalia
* Dhriti Shikhar
* Nicolas Trangez


Thank you to everyone who contributed: developers, support staff, and
operators alike--all of whom helped find and diagnose the problems solved in
this release.

--John











Re: [openstack-dev] [OpenStack Foundation] Finding people to work on the EC2 API in Nova

2015-02-02 Thread Matthew Treinish
On Mon, Feb 02, 2015 at 08:07:27PM +0300, Alexandre Levine wrote:
> 
> On 2/2/15 7:39 PM, Sean Dague wrote:
> >On 02/02/2015 11:35 AM, Alexandre Levine wrote:
> >>Thank you Sean.
> >>
> >>We'll have tons of EC2 Tempest tests for your attention shortly.
> >>How would you prefer them? In several reviews, I believe. Not in one,
> >>right?
> >>
> >>Best regards,
> >>   Alex Levine
> >So, honestly, I think that we should probably look at getting the ec2
> >tests out of the Tempest tree as well and into a more dedicated place.
> >Like as part of the stackforge project tree. Given that the right
> >expertise would be there as well. It could use tempest-lib for some of
> >the common parts.
> >
> > -Sean
> We tried to find out about tempest-lib, asked Keichi Ohmichi, but it seems
> that's still work in progress. Can you point us somewhere where we can
> understand how to employ this technology.

Tempest-lib is the effort to break out useful pieces from the tempest repo so
that they have stable interfaces and can easily be consumed externally.
Right now it only has some basic functionality in it, but we are working on
expanding it more constantly. If there is a needed feature from inside the
tempest repo which is currently missing from the lib we can work together on
migrating it over faster. 

> So the use cases will be:
> 
> 1. Be able to run the suite against EC2 in nova.
> 2. Be able to run the suite against stackforge/EC2.
> 3. Use that for gating for both repos.

These 3 things are really independent of tempest-lib. They're more about how
you configure the test suite to be run (in general and in the CI). Tempest-lib is
just a library which has the common functionality from tempest that is
generally useful outside of the tempest repo and won't help with how you
configure things to run.

But, if your tests are only interacting with things only through the API 1 and
2 should be as simple as pointing it at different endpoints.

> 
> An additional complication here is that some of the tests will have to be
> skipped because of missing functionality or because of bugs in nova's EC2,
> but they should be employed against stackforge's version.
> 
> Could you advise how to achieve such effects?

This also is just a matter of how you setup and configure your test jobs and the
test suite. It would be the same pretty much wherever the tests end up. When you
get a test suite setup I can help with setting things up to make this simpler.
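One simple way to set that up, sketched here with illustrative names rather than anything tempest or tempest-lib actually provides, is a decorator that consults a known-bug list only when the suite targets nova's in-tree EC2:

```python
import functools
import unittest

# illustrative: tests known to fail against nova's in-tree EC2 API
KNOWN_NOVA_EC2_BUGS = {'test_volume_tags'}


def skip_on_nova_ec2(test_name, target):
    """Skip a test against nova's EC2 when it hits a known bug, but still
    run it unchanged against the standalone stackforge implementation."""
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            if target == 'nova' and test_name in KNOWN_NOVA_EC2_BUGS:
                raise unittest.SkipTest('known bug in nova EC2')
            return fn(*args, **kwargs)
        return wrapper
    return decorator
```

The same test body then runs in both gate jobs, and the skip list shrinks as nova bugs get fixed.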

If you join the #openstack-qa channel on freenode, we can work through
exactly what you're trying to accomplish with higher throughput.

-Matt Treinish

> 
> >
> >>On 2/2/15 6:55 PM, Sean Dague wrote:
> >>>On 02/02/2015 07:01 AM, Alexandre Levine wrote:
> Michael,
> 
> I'm rather new here, especially in regard to communication matters, so
> I'd also be glad to understand how it's done and then I can drive it if
> it's ok with everybody.
> By saying EC2 sub team - who did you keep in mind? From my team 3
> persons are involved.
> 
>   From the technical point of view the transition plan could look
> somewhat
> like this (sequence can be different):
> 
> 1. Triage EC2 bugs and fix showstoppers in nova's EC2.
> 2. Contribute Tempest tests for EC2 functionality and employ them
> against nova's EC2.
> 3. Write spec for required API to be exposed from nova so that we get
> full info.
> 4. Triage and fix all of the existing nova's EC2 bugs worth fixing.
> 5. Set up Tempest testing of the stackforge/ec2 (if that's possible).
> 6. Communicate and discover all of the existing questions and
> problematic points for the switching from existing EC2 API to the new
> one. Provide solutions or decisions about them.
> 7. Do performance testing of the new stackforge/ec2 and provide fixes if
> any bottlenecks come up.
> 8. Have all of the above prepared for the Vancouver summit and discuss
> the situation there.
> 
> Michael, I am still wondering, who's going to be responsible for timely
> reviews and approvals of the fixes and tests we're going to contribute
> to nova? So far this is the biggest risk. Is there anyway to allow some
> of us to participate in the process?
> >>>I am happy to volunteer to shepherd these reviews. I'll try to keep an
> >>>eye on them, and if something is blocking please just ping me directly
> >>>on IRC in #openstack-nova or bring them forward to the weekly Nova
> >>>meeting.
> >>>
> >>> -Sean
> >>>
> >>
> >>__
> >>OpenStack Development Mailing List (not for usage questions)
> >>Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> >>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-

Re: [openstack-dev] problems with huge pages and libvirt

2015-02-02 Thread Chris Friesen

On 02/02/2015 11:00 AM, Sahid Orentino Ferdjaoui wrote:

On Mon, Feb 02, 2015 at 10:44:09AM -0600, Chris Friesen wrote:

Hi,

I'm trying to make use of huge pages as described in
"http://specs.openstack.org/openstack/nova-specs/specs/kilo/implemented/virt-driver-large-pages.html".
I'm running kilo as of Jan 27th.
I've allocated 1 2MB pages on a compute node.  "virsh capabilities" on that 
node contains:

<topology>
  <cells num='2'>
    <cell id='0'>
      <memory unit='KiB'>67028244</memory>
      <pages unit='KiB' size='4'>16032069</pages>
      <pages unit='KiB' size='2048'>5000</pages>
      <pages unit='KiB' size='1048576'>1</pages>
...
    <cell id='1'>
      <memory unit='KiB'>67108864</memory>
      <pages unit='KiB' size='4'>16052224</pages>
      <pages unit='KiB' size='2048'>5000</pages>
      <pages unit='KiB' size='1048576'>1</pages>
    </cell>
  </cells>
</topology>

I then restarted nova-compute, I set "hw:mem_page_size=large" on a
flavor, and then tried to boot up an instance with that flavor.  I
got the error logs below in nova-scheduler.  Is this a bug?


Hello,

Launchpad would be a more appropriate place to
discuss something which looks like a bug.

   https://bugs.launchpad.net/nova/+filebug


Just wanted to make sure I wasn't missing something.  Bug has been opened at 
https://bugs.launchpad.net/nova/+bug/1417201


I added some additional logs to the bug report of what the numa topology looks 
like on the compute node and in NUMATopologyFilter.host_passes().



According to your trace I would say you are running different versions
of Nova services.


nova should all be the same version.  I'm running juno versions of other 
openstack components though.



BTW please verify your version of libvirt. Hugepages are supported
starting with 1.2.8 (but it should definitely not fail so badly like
that).


Libvirt is 1.2.8.

Chris

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Resources owned by a project/tenant are not cleaned up after that project is deleted from keystone

2015-02-02 Thread Matt Riedemann
This came up in the operators mailing list back in June [1] but given 
the subject probably didn't get much attention.


Basically there is a really old bug [2] from Grizzly that is still a 
problem and affects multiple projects.  A tenant can be deleted in 
Keystone even though other resources in other projects are under that 
project, and those resources aren't cleaned up.


Keystone implemented event notifications back in Havana [3] but the 
other projects aren't listening on them to know when a project has been 
deleted and act accordingly.
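Listening for those events is mostly plumbing; a minimal sketch of the consuming side is below. The endpoint shape follows oslo.messaging's notification-listener convention, but the class name, the cleanup callback, and the assumption that keystone's basic notification payload carries the project id under 'resource_info' are illustrative, not a real nova implementation:

```python
class ProjectDeletedEndpoint(object):
    """Sketch of a notification endpoint that reacts to keystone's
    project-deletion events. The cleanup callback stands in for whatever
    the consuming service (nova, neutron, cinder, ...) would do."""

    def __init__(self, cleanup):
        self.cleanup = cleanup
        self.handled = []

    def info(self, ctxt, publisher_id, event_type, payload, metadata):
        # ignore everything except project deletions
        if event_type == 'identity.project.deleted':
            project_id = payload.get('resource_info')
            self.handled.append(project_id)
            self.cleanup(project_id)


# simulate a delivered notification
deleted = []
ep = ProjectDeletedEndpoint(deleted.append)
ep.info({}, 'identity.local', 'identity.project.deleted',
        {'resource_info': 'proj-123'}, {})
print(deleted)  # -> ['proj-123']
```

In a real service this endpoint would be registered with an oslo.messaging notification listener on keystone's notification topic.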


The bug has several people saying "we should talk about this at the 
summit" for several summits, but I can't find any discussion or summit 
sessions related back to the bug.


Given this is an operations and cross-project issue, I'd like to bring 
it up again for the Vancouver summit if there is still interest (which 
I'm assuming there is from operators).


There is a blueprint specifically for the tenant deletion case but it's 
targeted at only Horizon [4].


Is anyone still working on this? Is there sufficient interest in a 
cross-project session at the L summit?


Thinking out loud, even if nova doesn't listen to events from keystone, 
we could at least have a periodic task that looks for instances where 
the tenant no longer exists in keystone and then take some action (log a 
warning, shutdown/archive, reap, etc.).
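
A rough sketch of that periodic-task idea follows; all names here are
illustrative (this is not nova's actual periodic task machinery), and the
"action" is just recording a warning:

```python
# Illustrative sketch only: find instances whose project no longer exists
# in keystone and record a warning. Real code would live in a nova
# periodic task and use a keystone client to fetch the project list.

def find_orphaned_instances(instances, existing_project_ids):
    """Return instances whose project_id keystone no longer knows about.

    instances: iterable of dicts with 'uuid' and 'project_id' keys.
    existing_project_ids: set of project ids keystone still reports.
    """
    return [inst for inst in instances
            if inst['project_id'] not in existing_project_ids]


def run_periodic_check(instances, existing_project_ids, log):
    # Just record a warning for now; shutdown/archive/reap would be an
    # operator policy decision, as discussed above.
    for inst in find_orphaned_instances(instances, existing_project_ids):
        log.append('instance %s belongs to deleted project %s'
                   % (inst['uuid'], inst['project_id']))
    return log
```

Hooking something like this into nova's periodic task framework, and
rate-limiting the keystone lookups, would be the real work.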


There is also a spec for L to transfer instance ownership [5] which 
could maybe come into play, but I wouldn't depend on it.


[1] 
http://lists.openstack.org/pipermail/openstack-operators/2014-June/004559.html

[2] https://bugs.launchpad.net/nova/+bug/967832
[3] https://blueprints.launchpad.net/keystone/+spec/notifications
[4] https://blueprints.launchpad.net/horizon/+spec/tenant-deletion
[5] https://review.openstack.org/#/c/105367/

--

Thanks,

Matt Riedemann




Re: [openstack-dev] problems with huge pages and libvirt

2015-02-02 Thread Chris Friesen

On 02/02/2015 10:51 AM, Jay Pipes wrote:

This is a bug that I discovered when fixing some of the NUMA related nova
objects. I have a patch that should fix it up shortly.


Any chance you could point me at it or send it to me?


This is what happens when we don't have any functional testing of stuff that is
merged into master...


Indeed.  Does tempest support hugepages/NUMA/pinning?

Chris



Re: [openstack-dev] Resources owned by a project/tenant are not cleaned up after that project is deleted from keystone

2015-02-02 Thread Matt Riedemann



On 2/2/2015 11:46 AM, Matt Riedemann wrote:

This came up in the operators mailing list back in June [1] but given
the subject probably didn't get much attention.

Basically there is a really old bug [2] from Grizzly that is still a
problem and affects multiple projects.  A tenant can be deleted in
Keystone even though other resources in other projects are under that
project, and those resources aren't cleaned up.

Keystone implemented event notifications back in Havana [3] but the
other projects aren't listening on them to know when a project has been
deleted and act accordingly.

The bug has several people saying "we should talk about this at the
summit" for several summits, but I can't find any discussion or summit
sessions related back to the bug.

Given this is an operations and cross-project issue, I'd like to bring
it up again for the Vancouver summit if there is still interest (which
I'm assuming there is from operators).

There is a blueprint specifically for the tenant deletion case but it's
targeted at only Horizon [4].

Is anyone still working on this? Is there sufficient interest in a
cross-project session at the L summit?

Thinking out loud, even if nova doesn't listen to events from keystone,
we could at least have a periodic task that looks for instances where
the tenant no longer exists in keystone and then take some action (log a
warning, shutdown/archive/, reap, etc).

There is also a spec for L to transfer instance ownership [5] which
could maybe come into play, but I wouldn't depend on it.

[1]
http://lists.openstack.org/pipermail/openstack-operators/2014-June/004559.html

[2] https://bugs.launchpad.net/nova/+bug/967832
[3] https://blueprints.launchpad.net/keystone/+spec/notifications
[4] https://blueprints.launchpad.net/horizon/+spec/tenant-deletion
[5] https://review.openstack.org/#/c/105367/



I will apologize ahead of time for saying 'projects' for services like 
nova, glance, cinder, etc., while also talking about projects/tenants in 
keystone; I realize this is confusing. :)


--

Thanks,

Matt Riedemann




Re: [openstack-dev] problems with huge pages and libvirt

2015-02-02 Thread Matthew Treinish
On Mon, Feb 02, 2015 at 11:49:26AM -0600, Chris Friesen wrote:
> On 02/02/2015 10:51 AM, Jay Pipes wrote:
> >This is a bug that I discovered when fixing some of the NUMA related nova
> >objects. I have a patch that should fix it up shortly.
> 
> Any chance you could point me at it or send it to me?
> 
> >This is what happens when we don't have any functional testing of stuff that 
> >is
> >merged into master...
> 
> Indeed.  Does tempest support hugepages/NUMA/pinning?

The short answer is not explicitly. The longer answer is that there are 2
patches[1][2] up for review right now that add basic checks to tempest. But,
they haven't been able to merge because the nova support hasn't worked and the
tests fail...

Aside from those 2 basic checks I don't expect any other direct numa, hugepage,
etc. tests to be in tempest. Testing anything besides these basic cases would
require knowledge of the underlying hardware for the deployment, which is out of
scope for tempest. There really needs to be lower level functional testing of
these features.

That being said the other thing you could do using tempest is to configure
tempest to use flavors which are created to use numa. That would at least
implicitly test that the functionality would work. But, that really isn't a
replacement for the functional testing which is sorely needed here.

[1] https://review.openstack.org/143540
[2] https://review.openstack.org/#/c/143541/
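
For the flavor-based approach, something along these lines could be used;
the flavor name and ID are made up, and the extra specs are the ones defined
by the NUMA/large-pages specs (verify against your nova version):

```shell
# Create a flavor carrying NUMA/hugepage extra specs (names and IDs are
# illustrative); the host must actually be able to satisfy them.
nova flavor-create m1.numa 100 2048 20 2
nova flavor-key m1.numa set hw:numa_nodes=2 hw:mem_page_size=large

# Then point tempest at it in tempest.conf:
#   [compute]
#   flavor_ref = 100
```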


-Matt Treinish




[openstack-dev] [sahara] Spark CDH followup and questions related to DIB

2015-02-02 Thread Trevor McKay
Hello all,

  I tried a Spark image with the cdh5 element Daniele describes below,
but it did not fix the jackson version issue. The spark assembly still
depends on inconsistent versions.

  Looking into the spark git a little bit more, I discovered that in the
cdh5-1.2.0_5.3.0 branch the jackson version is settable. I built spark
on this branch with jackson 1.9.13 and was able to run Spark EDP without
any classpath manipulations. But, it doesn't appear to be released yet.

  A couple questions come out of this:

1) When do we move to cdh5.3 for spark images? Do we try to do this in
Kilo?

The work is already started, as noted below.  Daniele has done initial
work using cdh5 for the spark plugin and the Intel folks are working on 
cdh5 and cdh5.3 for the CDH plugin.

2) Do we carry a Spark assembly for Sahara ourselves, or wait for a
release tarball from CDH that uses this branch and sets a consistent
jackson version?  

I asked about any plans to release a tarball from this
branch on the apache spark users list, waiting for a response.

One alternative is for us to host our own spark build that we can use in
sahara-image-elements. The other idea is for us to wait for a release
tarball at http://archive.apache.org/dist/spark/ and continue to use the
classpath workaround in spark EDP for the time being.

3) Do we fix up sahara-image-elements to support multiple spark
versions? 

Historically sahara-image-elements only supports a single version for
spark images.  This is different from the other plugins.  Since we have
agreed to carry support for a release cycle of older versions after
introducing a new one, should we support both cdh4 and cdh5.x? This will
require changes in diskimage_create.sh.

4) Like #3, do we fix up the spark plugin in Sahara to handle multiple
versions? This is similar to the work the Intel folks are doing now to
separate cdh5 and cdh5.3 code in the cdh plugin.

I am wondering if the above 4 issues result in too much work to add to
kilo-3. Do we make an incremental improvement over Juno, having
spark-swift integration in EDP on cdh4 but without other changes and
address the above issues in L, or do we push on and try to resolve it
all for Kilo?

Best regards,

Trevor

On Wed, 2015-01-28 at 11:57 -0500, Trevor McKay wrote:
> Daniele,
> 
>   Excellent! I'll have to keep a closer eye on bigfoot activity :) I'll
> pursue this.
> 
> Best,
> 
> Trevor
> 
> On Wed, 2015-01-28 at 17:40 +0100, Daniele Venzano wrote:
> > Hello everyone,
> > 
> > there is already some code in our repository:
> > https://github.com/bigfootproject/savanna-image-elements
> > 
> > I did the necessary changes to have the Spark element use the cdh5
> > element. I updated also to Spark 1.2. The old cloudera HDFS-only
> > element is still needed for generating cdh4 images (but probably cdh4
> > support can be thrown away).
> > 
> > Unfortunately I do not have the time to do the necessary
> > testing/validation and submit for review. I also changed the CDH
> > element so that it can install only HDFS, if so required.
> > The changes I made are simple and all contained in the last commit on
> > the master branch of that repo.
> > 
> > The image generated with this code runs in Sahara without any further
> > changes. Feel free to take the code, clean it up and submit for review.
> > 
> > Dan
> > 
> > On Wed, Jan 28, 2015 at 10:43:30AM -0500, Trevor McKay wrote:
> > > Intel folks,
> > > 
> > > Belated welcome to Sahara!  Thank you for your recent commits.
> > > 
> > > Moving this thread to openstack-dev so others may contribute, cc'ing
> > > Daniele and Pietro who pioneered the Spark plugin.
> > > 
> > > I'll respond with another email about Oozie work, but I want to
> > > address the Spark/Swift issue in CDH since I have been working
> > > on it and there is a task which still needs to be done -- that
> > > is to upgrade the CDH version in the spark image and see if
> > > the situation improves (see below)
> > > 
> > > Relevant reviews are here:
> > > 
> > > https://review.openstack.org/146659
> > > https://review.openstack.org/147955
> > > https://review.openstack.org/147985
> > > https://review.openstack.org/146659
> > > 
> > > In the first review, you can see that we set an extra driver
> > > classpath to pull in '/usr/lib/hadoop/lib/jackson-core-asl-1.8.8.jar'.
> > > 
> > > This is because the spark-assembly JAR in CDH4 contains classes from
> > > jackson-mapper-asl-1.8.8 and jackson-core-asl-1.9.x. When the
> > > hadoop-swift.jar dereferences a Swift path, it calls into code
> > > from jackson-mapper-asl-1.8.8 which uses JsonClass.  But JsonClass
> > > was removed in jackson-core-asl-1.9.x, so there is an exception.
> > > 
> > > Therefore, we need to use the classpath to either upgrade the version of
> > > jackson-mapper-asl to 1.9.x or downgrade the version of jackson-core-asl
> > > to 1.8.8 (both work in my testing).  However, the first of these options
> > > requires us to bundle an extra jar.  Since /usr/lib/hadoop

Re: [openstack-dev] problems with huge pages and libvirt

2015-02-02 Thread Ian Wells
On 2 February 2015 at 09:49, Chris Friesen 
wrote:

> On 02/02/2015 10:51 AM, Jay Pipes wrote:
>
>> This is a bug that I discovered when fixing some of the NUMA related nova
>> objects. I have a patch that should fix it up shortly.
>>
>
> Any chance you could point me at it or send it to me?
>
>  This is what happens when we don't have any functional testing of stuff
>> that is
>> merged into master...
>>
>
> Indeed.  Does tempest support hugepages/NUMA/pinning?
>

This is a running discussion, but largely no - because this is tied to the
capabilities of the host, there's no guarantee for a given scenario what
result you would get (because Tempest will run on any hardware).

If you have test cases that should pass or fail on a NUMA-capable node, can
you write them up?  We're working on NUMA-specific testing right now
(though I'm not sure who, specifically, is working on the test case side of
that).


Re: [openstack-dev] Resources owned by a project/tenant are not cleaned up after that project is deleted from keystone

2015-02-02 Thread Matthew Treinish
On Mon, Feb 02, 2015 at 11:46:53AM -0600, Matt Riedemann wrote:
> This came up in the operators mailing list back in June [1] but given the
> subject probably didn't get much attention.
> 
> Basically there is a really old bug [2] from Grizzly that is still a problem
> and affects multiple projects.  A tenant can be deleted in Keystone even
> though other resources in other projects are under that project, and those
> resources aren't cleaned up.

I agree this probably can be a major pain point for users. We've had to work 
around it
in tempest by creating things like:

http://git.openstack.org/cgit/openstack/tempest/tree/tempest/cmd/cleanup_service.py
and
http://git.openstack.org/cgit/openstack/tempest/tree/tempest/cmd/cleanup.py

to ensure we aren't dangling resources after a run. But, this doesn't work in
all cases either. (like with tenant isolation enabled)

I also know there is a stackforge project that is attempting something similar
here:

http://git.openstack.org/cgit/stackforge/ospurge/

It would be much nicer if the burden for doing this was taken off users and this
was just handled cleanly under the covers.

> 
> Keystone implemented event notifications back in Havana [3] but the other
> projects aren't listening on them to know when a project has been deleted
> and act accordingly.
> 
> The bug has several people saying "we should talk about this at the summit"
> for several summits, but I can't find any discussion or summit sessions
> related back to the bug.
> 
> Given this is an operations and cross-project issue, I'd like to bring it up
> again for the Vancouver summit if there is still interest (which I'm
> assuming there is from operators).

I'd definitely support having a cross-project session on this.

> 
> There is a blueprint specifically for the tenant deletion case but it's
> targeted at only Horizon [4].
> 
> Is anyone still working on this? Is there sufficient interest in a
> cross-project session at the L summit?
> 
> Thinking out loud, even if nova doesn't listen to events from keystone, we
> could at least have a periodic task that looks for instances where the
> tenant no longer exists in keystone and then take some action (log a
> warning, shutdown/archive/, reap, etc).
> 
> There is also a spec for L to transfer instance ownership [5] which could
> maybe come into play, but I wouldn't depend on it.
> 
> [1] 
> http://lists.openstack.org/pipermail/openstack-operators/2014-June/004559.html
> [2] https://bugs.launchpad.net/nova/+bug/967832
> [3] https://blueprints.launchpad.net/keystone/+spec/notifications
> [4] https://blueprints.launchpad.net/horizon/+spec/tenant-deletion
> [5] https://review.openstack.org/#/c/105367/
> 

-Matt Treinish




[openstack-dev] [Neutron] unable to reproduce bug 1317363‏

2015-02-02 Thread bharath thiruveedula
Hi,
I am Bharath Thiruveedula. I am new to openstack neutron and networking. I am
trying to solve bug 1317363, but I am unable to reproduce it. The steps I
followed to reproduce it:

1) Created a network with external = True
2) Created a subnet for the above network with CIDR = 172.24.4.0/24 and gateway-ip = 172.24.4.5
3) Created the router
4) Set the gateway interface on the router
5) Tried to change the subnet gateway-ip but got this error:
   "Gateway ip 172.24.4.7 conflicts with allocation pool 172.24.4.6-172.24.4.254"
   I used this command for that: "neutron subnet-update ff9fe828-9ca2-42c4-9997-3743d8fc0b0c --gateway-ip 172.24.4.7"

Can you please help me with this issue?

-- Bharath Thiruveedula


Re: [openstack-dev] [nova][libvirt] RFC: ensuring live migration ends

2015-02-02 Thread Andrew Laski


On 02/02/2015 11:26 AM, Daniel P. Berrange wrote:

On Mon, Feb 02, 2015 at 11:19:45AM -0500, Andrew Laski wrote:

On 02/02/2015 05:58 AM, Daniel P. Berrange wrote:

On Sun, Feb 01, 2015 at 11:20:08AM -0800, Noel Burton-Krahn wrote:

Thanks for bringing this up, Daniel.  I don't think it makes sense to have
a timeout on live migration, but operators should be able to cancel it,
just like any other unbounded long-running process.  For example, there's
no timeout on file transfers, but they need an interface report progress
and to cancel them.  That would imply an option to cancel evacuation too.

There has been periodic talk about a generic "tasks API" in Nova for managing
long running operations and getting information about their progress, but I
am not sure what the status of that is. It would obviously be applicable to
migration if that's a route we took.

Currently the status of a tasks API is that it would happen after the API
v2.1 microversions work has created a suitable framework in which to add
tasks to the API.

So is all work on tasks blocked by the microversions support ? I would have
though that would only block places where we need to modify existing APIs.
Are we not able to add APIs for listing / cancelling tasks as new APIs
without such a dependency on microversions ?


Tasks work is certainly not blocked on waiting for microversions. There 
is a large amount of non API facing work that could be done to move 
forward the idea of a task driving state changes within Nova. I would 
very likely be working on that if I wasn't currently spending much of my 
time on cells v2.




Regards,
Daniel





[openstack-dev] UpgradeImpact: Replacing swift_enable_net with swift_store_endpoint

2015-02-02 Thread Jesse Cook
Configuration options will change (https://review.openstack.org/#/c/146972/4):

- Removed config option: "swift_enable_snet". The default value of
  "swift_enable_snet" was False [1]. The comments indicated not to change this
  default value unless you are Rackspace [2].

- Added config option "swift_store_endpoint". The default value of
  "swift_store_endpoint" is None, in which case the storage url from the auth
  response will be used. If set, the configured endpoint will be used. Example
  values: "swift_store_endpoint" = "https://www.example.com/v1/not_a_container"

1. 
https://github.com/openstack/glance/blob/fd5a55c7f386a9d9441d5f1291ff6a92f7e6cc1b/etc/glance-api.conf#L525
2. 
https://github.com/openstack/glance/blob/fd5a55c7f386a9d9441d5f1291ff6a92f7e6cc1b/etc/glance-api.conf#L520

If you are using "swift_enable_snet" (i.e. You changed the default config from 
False to True in your deployment) and you are not Rackspace, please respond to 
this thread. Note, this is very unlikely as it is a Rackspace only option and 
documented as such.
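
Concretely, an operator opting into a fixed endpoint would set something like
this in glance-api.conf (the URL is the placeholder example from the option
description; exact section placement depends on your glance/glance_store
version):

```ini
# Default is unset: the storage URL from the auth response is used.
swift_store_endpoint = https://www.example.com/v1/not_a_container
```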

Thanks,

Jesse


Re: [openstack-dev] [Openstack-operators] Resources owned by a project/tenant are not cleaned up after that project is deleted from keystone

2015-02-02 Thread Morgan Fainberg
I think the simple answer is "yes". We (keystone) should emit notifications. 
And yes other projects should listen. 

The only thing really in discussion should be:

1: soft delete or hard delete? Does the service mark it as orphaned, or just
delete (leave this to nova, cinder, etc. to discuss)

2: how to clean up when an event is missed (e.g. rabbit bus goes out to lunch).
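
A sketch of what "listening" could look like on the consuming side; the
endpoint class shape matches oslo.messaging's notification listener contract,
but the cleanup callback and wiring are illustrative, not any project's
actual code:

```python
# Shape of an oslo.messaging notification endpoint a service could
# register to react to keystone's project deletion events. The
# 'identity.project.deleted' event type is what keystone emits; the
# cleanup callback and everything else here is illustrative.

class ProjectDeletedEndpoint(object):
    def __init__(self, on_project_deleted):
        self.on_project_deleted = on_project_deleted

    def info(self, ctxt, publisher_id, event_type, payload, metadata):
        # keystone identity notifications carry the project id in the
        # payload's 'resource_info' field
        if event_type == 'identity.project.deleted':
            self.on_project_deleted(payload.get('resource_info'))

# With oslo.messaging this would be wired up roughly as:
#   listener = oslo_messaging.get_notification_listener(
#       transport, [oslo_messaging.Target(topic='notifications')],
#       [ProjectDeletedEndpoint(cleanup_callback)])
```

The missed-event problem in point 2 is exactly why a periodic reconciliation
pass would still be needed alongside the listener.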

--Morgan 

Sent via mobile

> On Feb 2, 2015, at 10:16, Matthew Treinish  wrote:
> 
>> On Mon, Feb 02, 2015 at 11:46:53AM -0600, Matt Riedemann wrote:
>> This came up in the operators mailing list back in June [1] but given the
>> subject probably didn't get much attention.
>> 
>> Basically there is a really old bug [2] from Grizzly that is still a problem
>> and affects multiple projects.  A tenant can be deleted in Keystone even
>> though other resources in other projects are under that project, and those
>> resources aren't cleaned up.
> 
> I agree this probably can be a major pain point for users. We've had to work 
> around it
> in tempest by creating things like:
> 
> http://git.openstack.org/cgit/openstack/tempest/tree/tempest/cmd/cleanup_service.py
> and
> http://git.openstack.org/cgit/openstack/tempest/tree/tempest/cmd/cleanup.py
> 
> to ensure we aren't dangling resources after a run. But, this doesn't work in
> all cases either. (like with tenant isolation enabled)
> 
> I also know there is a stackforge project that is attempting something similar
> here:
> 
> http://git.openstack.org/cgit/stackforge/ospurge/
> 
> It would be much nicer if the burden for doing this was taken off users and 
> this
> was just handled cleanly under the covers.
> 
>> 
>> Keystone implemented event notifications back in Havana [3] but the other
>> projects aren't listening on them to know when a project has been deleted
>> and act accordingly.
>> 
>> The bug has several people saying "we should talk about this at the summit"
>> for several summits, but I can't find any discussion or summit sessions
>> related back to the bug.
>> 
>> Given this is an operations and cross-project issue, I'd like to bring it up
>> again for the Vancouver summit if there is still interest (which I'm
>> assuming there is from operators).
> 
> I'd definitely support having a cross-project session on this.
> 
>> 
>> There is a blueprint specifically for the tenant deletion case but it's
>> targeted at only Horizon [4].
>> 
>> Is anyone still working on this? Is there sufficient interest in a
>> cross-project session at the L summit?
>> 
>> Thinking out loud, even if nova doesn't listen to events from keystone, we
>> could at least have a periodic task that looks for instances where the
>> tenant no longer exists in keystone and then take some action (log a
>> warning, shutdown/archive/, reap, etc).
>> 
>> There is also a spec for L to transfer instance ownership [5] which could
>> maybe come into play, but I wouldn't depend on it.
>> 
>> [1] 
>> http://lists.openstack.org/pipermail/openstack-operators/2014-June/004559.html
>> [2] https://bugs.launchpad.net/nova/+bug/967832
>> [3] https://blueprints.launchpad.net/keystone/+spec/notifications
>> [4] https://blueprints.launchpad.net/horizon/+spec/tenant-deletion
>> [5] https://review.openstack.org/#/c/105367/
> 
> -Matt Treinish
> ___
> OpenStack-operators mailing list
> openstack-operat...@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators



[openstack-dev] [Heat] Talk on Jinja Metatemplates for upcoming summit

2015-02-02 Thread Pratik Mallya
Hello Heat Developers,

As part of an internal development project at Rackspace, I implemented a 
mechanism to allow using Jinja templating system in heat templates. I was 
hoping to give a talk on the same for the upcoming summit (which will be the 
first summit after I started working on openstack). Have any of you worked/ are 
working on something similar? If so, could you please contact me and we can 
maybe propose a joint talk? :-)

Please let me know! It’s been interesting work and I hope the community will be 
excited to see it.

Thanks!
-Pratik 



Re: [openstack-dev] problems with huge pages and libvirt

2015-02-02 Thread Chris Friesen

On 02/02/2015 12:13 PM, Ian Wells wrote:

On 2 February 2015 at 09:49, Chris Friesen 


Indeed.  Does tempest support hugepages/NUMA/pinning?


This is a running discussion, but largely no - because this is tied to the
capabilities of the host, there's no guarantee for a given scenario what result
you would get (because Tempest will run on any hardware).

If you have test cases that should pass or fail on a NUMA-capable node, can you
write them up?  We're working on NUMA-specific testing right now (though I'm not
sure who, specifically, is working on the test case side of that).


I don't really have time to write up individual testcases right now, but I think 
a good start would be to test the following features:



http://specs.openstack.org/openstack/nova-specs/specs/juno/implemented/virt-driver-numa-placement.html

http://specs.openstack.org/openstack/nova-specs/specs/juno/implemented/virt-driver-vcpu-topology.html

http://specs.openstack.org/openstack/nova-specs/specs/kilo/implemented/virt-driver-large-pages.html

http://specs.openstack.org/openstack/nova-specs/specs/kilo/approved/input-output-based-numa-scheduling.html

http://specs.openstack.org/openstack/nova-specs/specs/kilo/implemented/virt-driver-cpu-pinning.html

Chris



Re: [openstack-dev] [OpenStack Foundation] Finding people to work on the EC2 API in Nova

2015-02-02 Thread Alexandre Levine


On 2/2/15 8:30 PM, Matthew Treinish wrote:

On Mon, Feb 02, 2015 at 07:35:46PM +0300, Alexandre Levine wrote:

Thank you Sean.

There'll be tons of EC2 Tempest tests for your attention shortly.
How would you prefer them? In several reviews, I believe. Not in one, right?

Let's take a step back for a sec. How many tests and what kind are we talking
about here?
We've got our root in /tempest/thirdparty/aws/ec2 (which we considered a
better name than boto) and it works via botocore (so no boto in any case).

12 files with 79 API tests.
However we've additionally got a number of complex scenario tests as
well, unfortunately using boto, not botocore. Most of them, though, are
about VPC stuff, so we'll run those against our stackforge EC2 only.


Please let us know where and how to put it.


I'm thinking it might be better to not just try and dump all this stuff in
tempest. While in the past we've just dumped all of this in tempest, moving
forward I don't think that's what we want to be doing. The current ec2 tests
have always felt out of place to me in tempest and historically haven't been
maintained as well as the other tests. If we're talking about ramping up the ec2
testing we probably should look at migrating everything elsewhere, especially
given that it just essentially nova testing. I see 2 better options here: we
either put the tests in the tree for the project with the ec2 implementation, or
we create a new repo like tempest-ec2 for testing this. In either case we'll
leverage tempest-lib to make sure the bits your existing testing is relying on
are consumable outside of the tempest repo.

-Matt Treinish



On 2/2/15 6:55 PM, Sean Dague wrote:

On 02/02/2015 07:01 AM, Alexandre Levine wrote:

Michael,

I'm rather new here, especially in regard to communication matters, so
I'd also be glad to understand how it's done and then I can drive it if
it's ok with everybody.
By saying EC2 sub team - who did you keep in mind? From my team 3
persons are involved.

 From the technical point of view the transition plan could look somewhat
like this (sequence can be different):

1. Triage EC2 bugs and fix showstoppers in nova's EC2.
2. Contribute Tempest tests for EC2 functionality and employ them
against nova's EC2.
3. Write spec for required API to be exposed from nova so that we get
full info.
4. Triage and fix all of the existing nova's EC2 bugs worth fixing.
5. Set up Tempest testing of the stackforge/ec2 (if that's possible).
6. Communicate and discover all of the existing questions and
problematic points for the switching from existing EC2 API to the new
one. Provide solutions or decisions about them.
7. Do performance testing of the new stackforge/ec2 and provide fixes if
any bottlenecks come up.
8. Have all of the above prepared for the Vancouver summit and discuss
the situation there.

Michael, I am still wondering, who's going to be responsible for timely
reviews and approvals of the fixes and tests we're going to contribute
to nova? So far this is the biggest risk. Is there anyway to allow some
of us to participate in the process?

I am happy to volunteer to shephard these reviews. I'll try to keep an
eye on them, and if something is blocking please just ping me directly
on IRC in #openstack-nova or bring them forward to the weekly Nova meeting.

-Sean





[openstack-dev] [Keystone] Sample Config Update (until final decision by larger thread occurs)

2015-02-02 Thread Morgan Fainberg
I am making a quick change in how Keystone is handling updates to the sample 
config files until all of those discussion points are addressed in the big 
thread of “how do we handle sample configs".

These changes are just to help limit rebase issues and make contributions a bit 
easier to manage:

1. Please do not update the sample configuration in your main patch chain. 
Update the sample configuration outside (once your changes merge) or at the end 
of the chain.

2. I’ll start -1ing anything that is dependent on a sample.config change, this 
is so that we can avoid rebase nightmares because a lot of things touch the 
sample config.

3. I or one of the keystone core will be attempting to update the sample config 
on a regular basis to catch any updates that were otherwise missed.

4. Please do not add a -1 to a Keystone review for not updating the sample 
config. I’m asking the core team to ignore these -1s (only there because a 
sample config was not updated).

I hope this helps to keep code moving into the repository with fewer headaches. 
Once all the discussion around where sample config files go has been resolved 
(OpenStack wide) these policies are subject to change.

Cheers,
Morgan

-- 
Morgan Fainberg


[openstack-dev] TaskFlow 0.7.0 released

2015-02-02 Thread Joshua Harlow

The Oslo team is pleased to announce the release of:

TaskFlow 0.7.0: taskflow structured state management library.

For more details, please see the git log history below and:

http://launchpad.net/taskflow/+milestone/0.7.0

Please report issues through launchpad:

http://bugs.launchpad.net/taskflow/

Notable changes


* Using non-deprecated oslo.utils and oslo.serialization imports.
* Added note(s) about publicly consumable types into docs.
* Increase robustness of WBE producer/consumers by supporting and using
  the kombu provided feature to retry/ensure on transient/recoverable
  failures (such as timeouts).
* Move the jobboard/job bases to a jobboard/base module and
  move the persistence base to the parent directory (standardizes how
  all pluggable types now have a similar base module in a similar
  location, making the layout of taskflow's codebase easier to
  understand/follow).
* Add executor statistics; using taskflow.futures executors now provides
  a useful way to learn the following about those executors:
  +-----------+------------------------------------------------------------+
  | Statistic | What it is                                                 |
  +-----------+------------------------------------------------------------+
  | failures  | How many submissions ended up raising exceptions           |
  | executed  | How many submissions were executed (failed or not)         |
  | runtime   | Total runtime of all submissions executed (failed or not)  |
  | cancelled | How many submissions were cancelled before executing       |
  +-----------+------------------------------------------------------------+
* The taskflow logger module does not provide a logging adapter [bug]
* Use monotonic time when/if available for stopwatches (py3.3+ natively
  supports this) and other time.time usage (where the usage of time.time
  only cares about the duration between two points in time).
* Make all/most usage of type errors follow a similar pattern (exception
  cleanup).
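The statistics feature above can be approximated with a thin wrapper around a
standard executor. This is a hypothetical sketch (the StatsExecutor name and
its layout are invented here), not taskflow's actual implementation; it also
uses the monotonic clock for durations, as the release notes suggest:

```python
import time
from concurrent.futures import ThreadPoolExecutor

class StatsExecutor:
    """Hypothetical sketch of an executor tracking failures/executed/
    runtime/cancelled; not taskflow's real implementation."""

    def __init__(self):
        # A single worker keeps the done-callbacks serialized in this sketch.
        self._executor = ThreadPoolExecutor(max_workers=1)
        self.failures = 0
        self.executed = 0
        self.runtime = 0.0
        self.cancelled = 0

    def submit(self, fn, *args, **kwargs):
        # Monotonic clock: immune to wall-clock adjustments between samples.
        start = time.monotonic()
        fut = self._executor.submit(fn, *args, **kwargs)

        def _record(f):
            if f.cancelled():
                self.cancelled += 1
                return
            self.executed += 1
            self.runtime += time.monotonic() - start
            if f.exception() is not None:
                self.failures += 1

        fut.add_done_callback(_record)
        return fut

    def shutdown(self):
        self._executor.shutdown()

ex = StatsExecutor()
ex.submit(lambda: 1 + 1)
ex.submit(lambda: 1 / 0)  # raises inside the worker -> counted as a failure
ex.shutdown()
print(ex.executed, ex.failures, ex.cancelled)  # 2 1 0
```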

Changes in /homes/harlowja/dev/os/taskflow 0.6.1..0.7.0
---

NOTE: Skipping requirement commits...

19f9674 Abstract out the worker finding from the WBE engine
99b92ae Add and use a nicer kombu message formatter
df6fb03 Remove duplicated 'do' in types documentation
43d70eb Use the class defined constant instead of raw strings
344b3f6 Use kombu socket.timeout alias instead of socket.timeout
d5128cf Stopwatch usage cleanup/tweak
2e43b67 Add note about publicly consumable types
e9226ca Add docstring to wbe proxy to denote not for public use
80888c6 Use monotonic time when/if available
7fe2945 Link WBE docs together better (especially around arguments)
f3a1dcb Emit a warning when no routing keys provided on publish()
802bce9 Center SVG state diagrams
97797ab Use importutils.try_import for optional eventlet imports
84d44fa Shrink the WBE request transition SVG image size
ca82e20 Add a thread bundle helper utility + tests
e417914 Make all/most usage of type errors follow a similar pattern
2f04395 Leave use-cases out of WBE developer documentation
e3e2950 Allow just specifying 'workers' for WBE entrypoint
66fc2df Add comments to runner state machine reaction functions
35745c9 Fix coverage environment
fc9cb88 Use explicit WBE worker object arguments (instead of kwargs)
0672467 WBE documentation tweaks/adjustments
55ad11f Add a WBE request state diagram + explanation
45ef595 Tidy up the WBE cache (now WBE types) module
1469552 Fix leftover/remaining 'oslo.utils' usage
93d73b8 Show the failure discarded (and the future intention)
5773fb0 Use a class provided logger before falling back to module
addc286 Use explicit WBE object arguments (instead of kwargs)
342c59e Fix persistence doc inheritance hierarchy
072210a The gathered runtime is for failures/not failures
410efa7 add clarification re parallel engine
cb27080 Increase robustness of WBE producer/consumers
bb38457 Move implementation(s) to there own sections
f14ee9e Move the jobboard/job bases to a jobboard/base module
ac5345e Have the serial task executor shutdown/restart its executor
426484f Mirror the task executor methods in the retry action
d92c226 Add back a 'eventlet_utils' helper utility module
1ed0f22 Use constants for runner state machine event names
bfc1136 Remove 'SaveOrderTask' and test state in class variables
22eef96 Provide the stopwatch elapsed method a maximum
3968508 Fix unused and conflicting variables
2280f9a Switch to using 'oslo_serialization' vs 'oslo.serialization'
d748db9 Switch to using 'oslo_utils' vs 'oslo.utils'
9c15eff Add executor statistics
bf2f205 Use oslo.utils reflection for class name
9fe99ba Add split time capturing to the stop watch
42a665d Use platform neutral line separator(s)
eb536da Create and use a multiprocessing sync manager subclass
4c756ef Use a single sender
778e210 Include the 'old_state' in all currently provided listeners
c07a96b Update the README.rst with accurate requirements
2f7d86a Include docstrings for parallel engine types/strings supported
0d602a

Re: [openstack-dev] About Sahara Oozie plan

2015-02-02 Thread Trevor McKay
Hi,

  Thanks for your patience.  I have been consumed with spark-swift, but
I can start to address these questions now :)

On (1) (a) below, I will try to reproduce and look at how we can better
support classpath in EDP. I'll let you know what I find.
We may need to add some configuration options for EDP or change how it
works.

On (1) (b) below, in the edp-move-examples.rst spec for Juno we
described a directory structure that could be used
for separating hadoop1 vs hadoop2 specific directories.  Maybe we can do
something similar based on plugins

For instance, if we have some hbase examples, we can make subdirectories
for each plugin.  Common parts can be
shared, plugin-specific files can be stored in the subdirectories.

(and perhaps the "hadoop2" example already there should just be a
subdirectory under "edp-java")

Best,

Trevor

--

Hi McKay
Thx for your support
I will talk details of these items as below:

(1) EDP job in Java action

   The background is that we want to write integration test cases for
newly added services like HBase and ZooKeeper, just like the existing
edp-examples do (sample code under sahara/etc/edp-examples/). So I
thought I could write an example EDP job using a Java action to test the
HBase service. I wrote HBaseTest.java, packaged it as a jar file, and
ran the jar manually with the command "java -cp `hbase classpath`
HBaseTest.jar HBaseTest"; it works well in the VM (provisioned by sahara
with the cdh plugin):
“/usr/lib/jvm/java-7-oracle-cloudera/bin/java -cp "HBaseTest.jar:`hbase
classpath`" HBaseTest”
So I wanted to run this job via horizon on the sahara job execution
page, but found no place to pass the `hbase classpath` parameter (I have
tried java_opts, configuration, and args; all failed). When I pass “-cp
`hbase classpath`” to java_opts on the horizon job execution page, Oozie
raises the error below:

“2015-01-15 16:43:26,074 WARN
org.apache.oozie.action.hadoop.JavaActionExecutor:
SERVER[hbase-master-copy-copy-001.novalocal] USER[hdfs] GROUP[-] TOKEN[]
APP[job-wf] JOB[045-150105050354389-oozie-oozi-W]
ACTION[045-150105050354389-oozie-oozi-W@job-node] LauncherMapper
died, check Hadoop LOG for job
[hbase-master-copy-copy-001.novalocal:8032:job_1420434100219_0054]
2015-01-15 16:43:26,172 INFO
org.apache.oozie.command.wf.ActionEndXCommand:
SERVER[hbase-master-copy-copy-001.novalocal] USER[hdfs] GROUP[-] TOKEN[]
APP[job-wf] JOB[045-150105050354389-oozie-oozi-W]
ACTION[045-150105050354389-oozie-oozi-W@job-node] ERROR is
considered as FAILED for SLA”

So I am stuck with this issue; I can't write the integration test in
sahara (I could not pass the classpath parameter). I have checked the
oozie official site,
https://cwiki.apache.org/confluence/display/OOZIE/Java+Cookbook, and
found no helpful info.
 
   So about the EDP job in Java, I have two problems right now:
a)  How to pass a classpath to the Java action, as I mentioned before.
This also reminds me that we could allow the user to modify or upload
their own workflow.xml; then we could provide more options for the user.
b)  I am concerned that it is hard to have a common edp-example for
HBase for all plugins (cdh, hdp), because the example code depends on
third-party jars (for example hbase-client.jar…) and different platforms
(CDH, HDP) may have different versions of hbase-client.jar; for example,
cdh uses hbase-client-0.98.6-cdh5.2.1.jar.

attached is a zip file which contains HBaseTest.jar and the source code.


Re: [openstack-dev] problems with huge pages and libvirt

2015-02-02 Thread Steve Gordon
- Original Message -
> From: "Ian Wells" 
> To: "OpenStack Development Mailing List (not for usage questions)" 
> 
>
> On 2 February 2015 at 09:49, Chris Friesen 
> wrote:
> 
> > On 02/02/2015 10:51 AM, Jay Pipes wrote:
> >
> >> This is a bug that I discovered when fixing some of the NUMA related nova
> >> objects. I have a patch that should fix it up shortly.
> >>
> >
> > Any chance you could point me at it or send it to me?
> >
> >  This is what happens when we don't have any functional testing of stuff
> >> that is
> >> merged into master...
> >>
> >
> > Indeed.  Does tempest support hugepages/NUMA/pinning?
> >
> 
> This is a running discussion, but largely no - because this is tied to the
> capabilities of the host, there's no guarantee for a given scenario what
> result you would get (because Tempest will run on any hardware).
> 
> If you have test cases that should pass or fail on a NUMA-capable node, can
> you write them up?  We're working on NUMA-specific testing right now
> (though I'm not sure who, specifically, is working on the test case side of
> that).

Vladik and Sean (CC'd) are working on these.

Thanks,

Steve



Re: [openstack-dev] [Neutron] unable to reproduce bug 1317363‏

2015-02-02 Thread Kevin Benton
The mailing list isn't a great place to discuss reproducing a bug. Post
this comment on the bug report instead of the mailing list. That way the
person who reported it and the ones who triaged it can see this information
and respond. They might not be watching the dev mailing list as closely.



On Mon, Feb 2, 2015 at 10:17 AM, bharath thiruveedula <
bharath_...@hotmail.com> wrote:

> Hi,
>
> I am Bharath Thiruveedula. I am new to openstack neutron and networking. I
> am trying to solve the bug 1317363. But I am unable to reproduce that bug.
> The steps I have done to reproduce:
>
> 1)I have created with network with external = True
> 2)Created a subnet for the above network with CIDR=172.24.4.0/24 with
> gateway-ip =172.24.4.5
> 3)Created the router
> 4)Set the gateway interface to the router
> 5)Tried to change subnet gateway-ip but got this error
> "Gateway ip 172.24.4.7 conflicts with allocation pool
> 172.24.4.6-172.24.4.254"
> I used this command for that
> "neutron subnet-update ff9fe828-9ca2-42c4-9997-3743d8fc0b0c --gateway-ip
> 172.24.4.7"
>
> Can you please help me with this issue?
>
>
> -- Bharath Thiruveedula
>
>
>
>
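For reference, the error quoted above fires because the requested gateway
address falls inside the existing allocation pool. A simplified Python
illustration of that check (not Neutron's actual validation code):

```python
import ipaddress

def gateway_in_pool(gateway, pool_start, pool_end):
    # A subnet update is rejected when the requested gateway address
    # falls inside an existing allocation pool.
    g = ipaddress.ip_address(gateway)
    return ipaddress.ip_address(pool_start) <= g <= ipaddress.ip_address(pool_end)

print(gateway_in_pool("172.24.4.7", "172.24.4.6", "172.24.4.254"))  # True
print(gateway_in_pool("172.24.4.5", "172.24.4.6", "172.24.4.254"))  # False
```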


-- 
Kevin Benton


[openstack-dev] [Infra] Meeting Tuesday February 3rd at 19:00 UTC

2015-02-02 Thread Elizabeth K. Joseph
Hi everyone,

The OpenStack Infrastructure (Infra) team is having our next weekly
meeting on Tuesday February 3rd, at 19:00 UTC in #openstack-meeting

Meeting agenda available here:
https://wiki.openstack.org/wiki/Meetings/InfraTeamMeeting (anyone is
welcome to add agenda items)

Everyone interested in infrastructure and process surrounding
automated testing and deployment is encouraged to attend.

And in case you missed it or would like a refresher, meeting logs and
minutes from our last meeting are available here:

Minutes: 
http://eavesdrop.openstack.org/meetings/infra/2015/infra.2015-01-27-19.06.html
Minutes (text):
http://eavesdrop.openstack.org/meetings/infra/2015/infra.2015-01-27-19.06.txt
Log: 
http://eavesdrop.openstack.org/meetings/infra/2015/infra.2015-01-27-19.06.log.html

-- 
Elizabeth Krumbach Joseph || Lyz || pleia2



Re: [openstack-dev] About Sahara Oozie plan

2015-02-02 Thread Trevor McKay
Answers to other questions:

2) (first part) Yes, I think Oozie shell actions are a great idea. I can
help work on a spec for this.

In general, Sahara should be able to support any kind of Oozie action.
Each will require a new job type, changes to the Oozie engine, and a UI
form to handle submission. We talked about shell actions once upon a
time. I don't think a spec for that will be too difficult.

Typically when adding new Oozie actions, I start by running things with
the Oozie command line to figure out what's possible and what the
workflow.xml looks like in general.


We also talked about allowing a user to upload raw workflows -- the
difficulty there is figuring out what Sahara generates vs what the user
writes, so this may be a more complicated topic. I think it will have to
wait for another cycle.

2) (error information)

Yes, the lack of good error information is a big problem in my opinion,
but we have no plan for it at this time.

The OpenStack approach seems to be to look through lots of log files to
identify errors.  For EDP, we may need to support a similar approach by
allowing job logs to be easily retrieved from clusters and written
somewhere a user can parse through them for error information.  Any
ideas on how to do this are welcome.

Trevor

-- 

(2) Sahara oozie plan

So when I searched for a solution for the HBase test case, I found
http://archive.cloudera.com/cdh5/cdh/5/oozie/DG_ShellActionExtension.html ; it
talks about the oozie shell action job type. I believe my first issue, the EDP
job in a java action, can be solved by a shell action, because I can set the
`hbase classpath` in workflow.xml, just like the way I run
this jar in the vm console by command. So I raised a bp for adding an oozie
shell action: https://blueprints.launchpad.net/sahara/+spec/add-edp-shell-action
I will do further research on the bp/specs and update the spec. In today's
meeting you mentioned allowing the user to upload his own workflow.xml; I am
interested in this, and we can provide our support for this part, so can you
provide some bp/specs or other docs for me? Then we can discuss more.

Furthermore, is there any plan to provide EDP job error info to the user?
I think this is also important; currently we just have a "killed" label,
with no more information.




Re: [openstack-dev] problems with instance consoles and novnc

2015-02-02 Thread Mathieu Gagné

On 2015-02-02 11:36 AM, Chris Friesen wrote:

On 01/30/2015 06:26 AM, Jesse Pretorius wrote:


Have you tried manually updating the NoVNC and websockify files to later
versions from source?


We were already using a fairly recent version of websockify, but it
turns out that we needed to upversion the novnc package.



Which version are you using?

--
Mathieu



Re: [openstack-dev] problems with instance consoles and novnc

2015-02-02 Thread Chris Friesen

On 02/02/2015 01:27 PM, Mathieu Gagné wrote:

On 2015-02-02 11:36 AM, Chris Friesen wrote:

On 01/30/2015 06:26 AM, Jesse Pretorius wrote:


Have you tried manually updating the NoVNC and websockify files to later
versions from source?


We were already using a fairly recent version of websockify, but it
turns out that we needed to upversion the novnc package.



Which version are you using?


Pretty sure we're on 0.5.1 now.

Chris




Re: [openstack-dev] [Heat] Talk on Jinja Metatemplates for upcoming summit

2015-02-02 Thread Pavlo Shchelokovskyy
Hi Pratik,

what would be the aim for this templating? I ask since we in Heat try to
keep the imperative logic like e.g. if-else out of heat templates, leaving
it to other services. Plus there is already a spec for a heat template
function to repeat pieces of template structure [1].

I can definitely say that some other OpenStack projects that are consumers
of Heat will be interested - Trove already tries to use Jinja templates to
create Heat templates [2], and possibly Sahara and Murano might be
interested as well (I suspect though the latter already uses YAQL for that).

[1] https://review.openstack.org/#/c/140849/
[2]
https://github.com/openstack/trove/blob/master/trove/templates/default.heat.template
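A rough sketch of that Trove approach — rendering a Heat template from a
Jinja2 template. The template text below is invented for illustration; Trove's
real templates live in the repository linked above:

```python
import jinja2

# Invented Heat-template fragment for illustration only; not Trove's
# actual default.heat.template.
template = jinja2.Template("""\
heat_template_version: 2013-05-23
resources:
{% for name in volumes %}
  {{ name }}:
    type: OS::Cinder::Volume
    properties:
      size: {{ size }}
{% endfor %}
""")

print(template.render(volumes=["data0", "data1"], size=10))
```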

Best regards,

Pavlo Shchelokovskyy
Software Engineer
Mirantis Inc
www.mirantis.com

On Mon, Feb 2, 2015 at 8:29 PM, Pratik Mallya 
wrote:

> Hello Heat Developers,
>
> As part of an internal development project at Rackspace, I implemented a
> mechanism to allow using Jinja templating system in heat templates. I was
> hoping to give a talk on the same for the upcoming summit (which will be
> the first summit after I started working on openstack). Have any of you
> worked/ are working on something similar? If so, could you please contact
> me and we can maybe propose a joint talk? :-)
>
> Please let me know! It’s been interesting work and I hope the community
> will be excited to see it.
>
> Thanks!
> -Pratik
>
>


Re: [openstack-dev] [Manila] Manila driver for CephFS

2015-02-02 Thread Jake Kugel
OK, thanks Sebastien and Valeriy.

Jake


Sebastien Han  wrote on 02/02/2015 06:51:10 
AM:

> From: Sebastien Han 
> To: "OpenStack Development Mailing List (not for usage questions)" 
> 
> Date: 02/02/2015 06:54 AM
> Subject: Re: [openstack-dev] [Manila] Manila driver for CephFS
> 
> I believe this will start somewhere after Kilo.
> 
> > On 28 Jan 2015, at 22:59, Valeriy Ponomaryov 
>  wrote:
> > 
> > Hello Jake,
> > 
> > The main thing that should be mentioned is that the blueprint has no
> > assignee. Also, it was created a long time ago without any activity
> > after it. I did not hear any intentions about it; moreover, I did not
> > see any drafts, at least.
> > 
> > So, I guess, it is open for volunteers.
> > 
> > Regards,
> > Valeriy Ponomaryov
> > 
> > On Wed, Jan 28, 2015 at 11:30 PM, Jake Kugel  
wrote:
> > Hi,
> > 
> > I see there is a blueprint for a Manila driver for CephFS here [1]. It
> > looks like it was opened back in 2013 but still in Drafting state. 
Does
> > anyone know more status about this one?
> > 
> > Thank you,
> > -Jake
> > 
> > [1]  https://blueprints.launchpad.net/manila/+spec/cephfs-driver
> > 
> > 
> > 
> 
> 
> Cheers.
> 
> Sébastien Han
> Cloud Architect
> 
> "Always give 100%. Unless you're giving blood."
> 
> Phone: +33 (0)1 49 70 99 72
> Mail: sebastien@enovance.com
> Address : 11 bis, rue Roquépine - 75008 Paris
> Web : www.enovance.com - Twitter : @enovance
> 
> 



Re: [openstack-dev] The API WG mission statement

2015-02-02 Thread Stefano Maffulli
On Fri, 2015-01-30 at 23:05 +, Everett Toews wrote:
> To converge the OpenStack APIs to a consistent and pragmatic RESTful
> design by creating guidelines that the projects should follow. The
> intent is not to create backwards incompatible changes in existing
> APIs, but to have new APIs and future versions of existing APIs
> converge.

It's looking good already. I think it would be good also to mention the
end-recipients of the consistent and pragmatic RESTful design so that
whoever reads the mission is reminded why that's important. Something
like:

To improve developer experience converging the OpenStack API to
a consistent and pragmatic RESTful design. The working group
creates guidelines that all OpenStack projects should follow,
avoids introducing backwards incompatible changes in existing
APIs and promotes convergence of new APIs and future versions of
existing APIs.

more or less...

/stef




[openstack-dev] [Trove] Schedule for Trove Mid-Cycle Sprint

2015-02-02 Thread Nikhil Manchanda
Hi folks:

I've updated the schedule for the Trove Mid-Cycle Sprint at
https://wiki.openstack.org/wiki/Sprints/TroveKiloSprint#Schedule
and have linked the slots on the time-table to the etherpads that we're
planning on using to track the discussion.

I've also updated the page with some more information about remote
participation in case you're not able to make it to the mid-cycle
location (Seattle, WA) in person.

Hope to see many of you tomorrow at the mid-cycle sprint.

Cheers,
Nikhil


[openstack-dev] [Neutron] Multiple template libraries being used in tree - Switch to using only Jinja2?

2015-02-02 Thread Sean M. Collins
Hi,

During my review of the full-stack tests framework[1], I noticed that
Mako was being added as an explicit dependency. I know that in the code
for creating radvd configs for IPv6, we use Jinja, but I did a quick
git grep and see that we have one file[2] that uses Mako for templating.

My intention is to replace the one file that uses Mako with Jinja2, to
keep things consistent.

Thoughts?

[1]: https://review.openstack.org/#/c/128259/
[2]: 
http://git.openstack.org/cgit/openstack/neutron/tree/neutron/db/migration/alembic_migrations/script.py.mako
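For what it's worth, the conversion is mostly mechanical — Mako renders
expressions as ${...} while Jinja2 uses {{ ... }}. A minimal sketch (the
template string here is illustrative, not the actual contents of
script.py.mako):

```python
import jinja2

# Mako would express this template as:  revision = '${up_revision}'
# The Jinja2 spelling of the same thing:
template = jinja2.Template("revision = '{{ up_revision }}'")

print(template.render(up_revision="27cc183af192"))
# revision = '27cc183af192'
```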
-- 
Sean M. Collins



Re: [openstack-dev] [Neutron] Multiple template libraries being used in tree

2015-02-02 Thread Sean M. Collins
Sorry, I should have done a bit more grepping before I sent the e-mail,
since it appears that Mako is being used by alembic.

http://alembic.readthedocs.org/en/latest/tutorial.html

So, should we switch the radvd templating over to Mako instead?

-- 
Sean M. Collins



Re: [openstack-dev] [OpenStack Foundation] Finding people to work on the EC2 API in Nova

2015-02-02 Thread Michael Still
On Mon, Feb 2, 2015 at 11:01 PM, Alexandre Levine
 wrote:
> Michael,
>
> I'm rather new here, especially in regard to communication matters, so I'd
> also be glad to understand how it's done and then I can drive it if it's ok
> with everybody.
> By saying EC2 sub team - who did you keep in mind? From my team 3 persons
> are involved.

I see the sub team as the way of keeping the various organisations who
have expressed interest in helping pulling in the same direction. I'd
suggest you pick a free slot on our meeting calendar and run an irc
meeting there weekly to track overall progress.

> From the technical point of view the transition plan could look somewhat
> like this (sequence can be different):
>
> 1. Triage EC2 bugs and fix showstoppers in nova's EC2.
> 2. Contribute Tempest tests for EC2 functionality and employ them against
> nova's EC2.
> 3. Write spec for required API to be exposed from nova so that we get full
> info.
> 4. Triage and fix all of the existing nova's EC2 bugs worth fixing.
> 5. Set up Tempest testing of the stackforge/ec2 (if that's possible).
> 6. Communicate and discover all of the existing questions and problematic
> points for the switching from existing EC2 API to the new one. Provide
> solutions or decisions about them.
> 7. Do performance testing of the new stackforge/ec2 and provide fixes if any
> bottlenecks come up.
> 8. Have all of the above prepared for the Vancouver summit and discuss the
> situation there.

This sounds really good to me -- this is the sort of thing you'd be
tracking against in that irc meeting, although presumably you'd
negotiate as a group exactly what the steps are and who is working on
what.

Do you see transitioning users to the external EC2 implementation as a
final step in this list? I know you've only gone as far as Vancouver
here, but I want to be explicit about the intended end goal.

> Michael, I am still wondering, who's going to be responsible for timely
> reviews and approvals of the fixes and tests we're going to contribute to
> nova? So far this is the biggest risk. Is there anyway to allow some of us
> to participate in the process?

Sean has offered here, for which I am grateful. Your team as it forms
should also start reviewing each other's work, as that will reduce the
workload somewhat for Sean and other cores.

I think given the level of interest here we can have a serious
discussion at Vancouver about if EC2 should be nominated as a priority
task for the L release, which is our more formal way of cementing this
at the beginning of a release cycle.

Thanks again to everyone who has volunteered to help out with this.
35% of our users are grateful!

Michael


> On 2/2/15 2:46 AM, Michael Still wrote:
>>
>> So, its exciting to me that we seem to developing more forward
>> momentum here. I personally think the way forward is a staged
>> transition from the in-nova EC2 API to the stackforge project, with
>> testing added to ensure that we are feature complete between the two.
>> I note that Soren disagrees with me here, but that's ok -- I'd like to
>> see us work through that as a team based on the merits.
>>
>> So... It sounds like we have an EC2 sub team forming. How do we get
>> that group meeting to come up with a transition plan?
>>
>> Michael
>>
>> On Sun, Feb 1, 2015 at 4:12 AM, Davanum Srinivas 
>> wrote:
>>>
>>> Alex,
>>>
>>> Very cool. thanks.
>>>
>>> -- dims
>>>
>>> On Sat, Jan 31, 2015 at 1:04 AM, Alexandre Levine
>>>  wrote:

 Davanum,

 Now that the picture with the both EC2 API solutions has cleared up a
 bit, I
 can say yes, we'll be adding the tempest tests and doing devstack
 integration.

 Best regards,
Alex Levine

 On 1/31/15 2:21 AM, Davanum Srinivas wrote:
>
> Alexandre, Randy,
>
> Are there plans afoot to add support to switch on stackforge/ec2-api
> in devstack? add tempest tests etc? CI Would go a long way in
> alleviating concerns i think.
>
> thanks,
> dims
>
> On Fri, Jan 30, 2015 at 1:24 PM, Bias, Randy 
> wrote:
>>
>> As you know we have been driving forward on the stack forge project
>> and
>> it¹s our intention to continue to support it over time, plus
>> reinvigorate
>> the GCE APIs when that makes sense. So we¹re supportive of deprecating
>> from Nova to focus on EC2 API in Nova.  I also think it¹s good for
>> these
>> APIs to be able to iterate outside of the standard release cycle.
>>
>>
>>
>> --Randy
>>
>> VP, Technology, EMC Corporation
>> Formerly Founder & CEO, Cloudscaling (now a part of EMC)
>> +1 (415) 787-2253 [google voice]
>> TWITTER: twitter.com/randybias
>> LINKEDIN: linkedin.com/in/randybias
>> ASSISTANT: ren...@emc.com
>>
>>
>>
>>
>>
>>
>> On 1/29/15, 4:01 PM, "Michael Still"  wrote:
>>
>>> Hi,
>>>
>>> as you might have read on openstack-dev, the Nova EC2 API

Re: [openstack-dev] [nova][libvirt] RFC: ensuring live migration ends

2015-02-02 Thread Daniel P. Berrange
On Mon, Feb 02, 2015 at 01:21:31PM -0500, Andrew Laski wrote:
> 
> On 02/02/2015 11:26 AM, Daniel P. Berrange wrote:
> >On Mon, Feb 02, 2015 at 11:19:45AM -0500, Andrew Laski wrote:
> >>On 02/02/2015 05:58 AM, Daniel P. Berrange wrote:
> >>>On Sun, Feb 01, 2015 at 11:20:08AM -0800, Noel Burton-Krahn wrote:
> Thanks for bringing this up, Daniel.  I don't think it makes sense to have
> a timeout on live migration, but operators should be able to cancel it,
> just like any other unbounded long-running process.  For example, there's
> no timeout on file transfers, but they need an interface report progress
> and to cancel them.  That would imply an option to cancel evacuation too.
> >>>There has been periodic talk about a generic "tasks API" in Nova for 
> >>>managing
> >>>long running operations and getting information about their progress, but I
> >>>am not sure what the status of that is. It would obviously be applicable to
> >>>migration if that's a route we took.
> >>Currently the status of a tasks API is that it would happen after the API
> >>v2.1 microversions work has created a suitable framework in which to add
> >>tasks to the API.
> >So is all work on tasks blocked by the microversions support ? I would have
> >though that would only block places where we need to modify existing APIs.
> >Are we not able to add APIs for listing / cancelling tasks as new APIs
> >without such a dependency on microversions ?
> 
> Tasks work is certainly not blocked on waiting for microversions. There is a
> large amount of non API facing work that could be done to move forward the
> idea of a task driving state changes within Nova. I would very likely be
> working on that if I wasn't currently spending much of my time on cells v2.

Ok, thanks for the info. So from the POV of migration, I'll focus on the
non-API stuff, and expect the tasks work to provide the API mechanisms.

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|



Re: [openstack-dev] [OpenStack Foundation] Finding people to work on the EC2 API in Nova

2015-02-02 Thread Alexandre Levine


On 2/2/15 11:15 PM, Michael Still wrote:

On Mon, Feb 2, 2015 at 11:01 PM, Alexandre Levine
 wrote:

Michael,

I'm rather new here, especially with regard to communication matters, so I'd
be glad to understand how it's done, and then I can drive it if that's ok
with everybody.
By "EC2 sub team" - whom did you have in mind? From my team, three people
are involved.

I see the sub team as the way of keeping the various organisations who
have expressed interest in helping pulling in the same direction. I'd
suggest you pick a free slot on our meeting calendar and run an irc
meeting there weekly to track overall progress.


I'll do that when I've got myself acquainted with the weekly meetings 
procedure (haven't actually bumped into it before) :)



 From the technical point of view the transition plan could look somewhat
like this (sequence can be different):

1. Triage EC2 bugs and fix showstoppers in nova's EC2.
2. Contribute Tempest tests for EC2 functionality and employ them against
nova's EC2.
3. Write a spec for the API that needs to be exposed from nova so that we get
full information.
4. Triage and fix all of the existing nova's EC2 bugs worth fixing.
5. Set up Tempest testing of the stackforge/ec2 (if that's possible).
6. Identify and discuss all of the open questions and problem areas in
switching from the existing EC2 API to the new one. Provide solutions or
decisions for them.
7. Do performance testing of the new stackforge/ec2 and provide fixes if any
bottlenecks come up.
8. Have all of the above prepared for the Vancouver summit and discuss the
situation there.

This sounds really good to me -- this is the sort of thing you'd be
tracking against in that irc meeting, although presumably you'd
negotiate as a group exactly what the steps are and who is working on
what.

Do you see transitioning users to the external EC2 implementation as a
final step in this list? I know you've only gone as far as Vancouver
here, but I want to be explicit about the intended end goal.


Yes, that's correct. The very final step, though, would be cleaning the 
EC2 stuff out of nova. But you're right, the major goal would be to 
make the external EC2 API production-ready and to have all of the necessary 
means for users to transition seamlessly (no downtime, no instance 
recreation required).

So I can point to at least three distinct major milestones here:

1. EC2 API in nova is back and revived (no showstoppers, all of the 
currently employed functionality safe and sound, new tests added to 
check and ensure that).

2. External EC2 API is production-ready.
3. Nova is relieved of the EC2 stuff.

Vancouver is somewhere in between 1 and 3.



Michael, I am still wondering: who's going to be responsible for timely
reviews and approvals of the fixes and tests we're going to contribute to
nova? So far this is the biggest risk. Is there any way to allow some of us
to participate in the process?

Sean has offered here, for which I am grateful. Your team as it forms
should also start reviewing each other's work, as that will reduce the
workload somewhat for Sean and other cores.


We've already started.


I think given the level of interest here we can have a serious
discussion at Vancouver about whether EC2 should be nominated as a priority
task for the L release, which is our more formal way of cementing this
at the beginning of a release cycle.

Thanks again to everyone who has volunteered to help out with this.
35% of our users are grateful!

Michael



On 2/2/15 2:46 AM, Michael Still wrote:

So, it's exciting to me that we seem to be developing more forward
momentum here. I personally think the way forward is a staged
transition from the in-nova EC2 API to the stackforge project, with
testing added to ensure that we are feature complete between the two.
I note that Soren disagrees with me here, but that's ok -- I'd like to
see us work through that as a team based on the merits.

So... It sounds like we have an EC2 sub team forming. How do we get
that group meeting to come up with a transition plan?

Michael

On Sun, Feb 1, 2015 at 4:12 AM, Davanum Srinivas 
wrote:

Alex,

Very cool. thanks.

-- dims

On Sat, Jan 31, 2015 at 1:04 AM, Alexandre Levine
 wrote:

Davanum,

Now that the picture with both EC2 API solutions has cleared up a
bit, I
can say yes, we'll be adding the tempest tests and doing devstack
integration.

Best regards,
Alex Levine

On 1/31/15 2:21 AM, Davanum Srinivas wrote:

Alexandre, Randy,

Are there plans afoot to add support for enabling stackforge/ec2-api
in devstack, add tempest tests, etc.? CI would go a long way toward
alleviating concerns, I think.

thanks,
dims

On Fri, Jan 30, 2015 at 1:24 PM, Bias, Randy 
wrote:

As you know we have been driving forward on the stackforge project and
it's our intention to continue to support it over time, plus reinvigorate
the GCE APIs when that makes sense. So we're supportive of deprecating
the EC2 API from Nova to focus on the standalone EC2 API.  I also think
it's good for these
APIs to b

Re: [openstack-dev] [OpenStack Foundation] Finding people to work on the EC2 API in Nova

2015-02-02 Thread Jeremy Stanley
On 2015-02-02 23:29:55 +0300 (+0300), Alexandre Levine wrote:
> I'll do that when I've got myself acquainted with the weekly meetings
> procedure (haven't actually bumped into it before) :)
[...]

Start from the https://wiki.openstack.org/wiki/Meetings page
preamble and follow the instructions linked from it.
-- 
Jeremy Stanley


