Re: [openstack-dev] [openstack-infra] [CI] [tempest] Tempest tests failing with SSH timeout.

2015-07-19 Thread Abhishek Shrivastava
Hi Ramy,

Thanks for the suggestion. Since I am not currently including the neutron project,
will downloading and enabling it require any additional configuration
in devstack-gate or not?

On Sat, Jul 18, 2015 at 11:41 PM, Asselin, Ramy ramy.asse...@hp.com wrote:

  We ran into this issue as well. I never found the root cause, but I
 found a work-around: Use neutron-networking instead of the default
 nova-networking.



 If you’re using devstack-gate, it’s as  simple as:

 export DEVSTACK_GATE_NEUTRON=1



 Then run the job as usual.



 Ramy



 *From:* Abhishek Shrivastava [mailto:abhis...@cloudbyte.com]
 *Sent:* Friday, July 17, 2015 9:15 AM
 *To:* OpenStack Development Mailing List (not for usage questions)
 *Subject:* [openstack-dev] [openstack-infra] [CI] [tempest] Tempest tests
 failing with SSH timeout.



 Hi Folks,



 In my CI I see the following tempest test failures for the past couple of
 days.

 ·
 tempest.scenario.test_minimum_basic.TestMinimumBasicScenario.test_minimum_basic_scenario
  [361.274316s] ... FAILED

 ·
 tempest.scenario.test_volume_boot_pattern.TestVolumeBootPattern.test_volume_boot_pattern
  [320.122458s] ... FAILED

 ·
 tempest.scenario.test_volume_boot_pattern.TestVolumeBootPatternV2.test_volume_boot_pattern
  [317.399342s] ... FAILED

 ·
 tempest.thirdparty.boto.test_ec2_instance_run.InstanceRunTest.test_compute_with_volumes
  [257.858272s] ... FAILED

  The failure logs are the same every time, i.e.:



  *03:34:09* 2015-07-17 03:21:13,256 9505 ERROR
 [tempest.scenario.manager] (TestVolumeBootPattern:test_volume_boot_pattern) 
 Initializing SSH connection to 172.24.5.1 failed. Error: Connection to the 
 172.24.5.1 via SSH timed out.

 *03:34:09* User: cirros, Password: None

 *03:34:09* 2015-07-17 03:21:13.256 9505 ERROR tempest.scenario.manager 
 Traceback (most recent call last):

 *03:34:09* 2015-07-17 03:21:13.256 9505 ERROR tempest.scenario.manager   
 File tempest/scenario/manager.py, line 312, in get_remote_client

 *03:34:09* 2015-07-17 03:21:13.256 9505 ERROR tempest.scenario.manager
  linux_client.validate_authentication()

 *03:34:09* 2015-07-17 03:21:13.256 9505 ERROR tempest.scenario.manager   
 File tempest/common/utils/linux/remote_client.py, line 62, in 
 validate_authentication

 *03:34:09* 2015-07-17 03:21:13.256 9505 ERROR tempest.scenario.manager
  self.ssh_client.test_connection_auth()

 *03:34:09* 2015-07-17 03:21:13.256 9505 ERROR tempest.scenario.manager   
 File 
 /opt/stack/new/tempest/.tox/all/local/lib/python2.7/site-packages/tempest_lib/common/ssh.py,
  line 151, in test_connection_auth

 *03:34:09* 2015-07-17 03:21:13.256 9505 ERROR tempest.scenario.manager
  connection = self._get_ssh_connection()

 *03:34:09* 2015-07-17 03:21:13.256 9505 ERROR tempest.scenario.manager   
 File 
 /opt/stack/new/tempest/.tox/all/local/lib/python2.7/site-packages/tempest_lib/common/ssh.py,
  line 87, in _get_ssh_connection

 *03:34:09* 2015-07-17 03:21:13.256 9505 ERROR tempest.scenario.manager
  password=self.password)

 *03:34:09* 2015-07-17 03:21:13.256 9505 ERROR tempest.scenario.manager 
 SSHTimeout: Connection to the 172.24.5.1 via SSH timed out.

 *03:34:09* 2015-07-17 03:21:13.256 9505 ERROR tempest.scenario.manager 
 User: cirros, Password: None

 *03:34:09* 2015-07-17 03:21:13.256 9505 ERROR tempest.scenario.manager

 *03:34:09* 2015-07-17 03:21:14,377 9505 INFO [tempest_lib.common.re



 Because of these failures every job is failing, so if someone can help me with
 this, please do reply.



 --

   *Thanks & Regards,*

 *Abhishek*

 *Cloudbyte Inc. http://www.cloudbyte.com*

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 


*Thanks & Regards,*
*Abhishek*
*Cloudbyte Inc. http://www.cloudbyte.com*
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron] 'routed' network type, DHCP agent + devstack support - review requested

2015-07-19 Thread Neil Jerram

Hi Neutron folk!

I'd like to give an update on and encourage wide review of my work on
a type of network that connects VMs through IP routing instead of
through bridging and tunneling at L2.  I believe the core Neutron
pieces of this are now complete and ready for detailed review and
potential merging.

The change at [1] creates and describes a new 'routed' value for
provider:network_type.  It means that a compute host handles data
to/from the relevant TAP interfaces by routing it, and specifically
that those TAP interfaces are not bridged.  It is the job of a
particular mechanism driver and agent implementation to set up the
required routing rules, and my team's Calico project [2] is one
example of that, although not the only possible example.

The DHCP agent needs enhancement to provide DHCP service to routed TAP
interfaces, and the change for that is at [3].

A devstack plugin is included in the Calico repository at [4]. Using
this it is possible to see 'routed' networking in action, using the
Calico mechanism driver and agent, simply by running devstack with the
following in local.conf:

  enable_plugin calico https://github.com/Metaswitch/calico routed

Demonstration steps once stack.sh completes are suggested at [5].
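
If it helps, a minimal local.conf along these lines should be enough to try it
out (only the enable_plugin line above comes from the Calico repo; the other
settings are standard devstack boilerplate with illustrative passwords):

  [[local|localrc]]
  ADMIN_PASSWORD=devstack
  DATABASE_PASSWORD=$ADMIN_PASSWORD
  RABBIT_PASSWORD=$ADMIN_PASSWORD
  SERVICE_PASSWORD=$ADMIN_PASSWORD
  enable_plugin calico https://github.com/Metaswitch/calico routed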

[1] https://review.openstack.org/#/c/198439/
[2] http://projectcalico.org/
[3] https://review.openstack.org/#/c/197578/
[4] https://github.com/Metaswitch/calico/tree/routed
[5] https://github.com/Metaswitch/calico/blob/routed/devstack/README.rst

FYI I also plan to propose a networking-calico project (or continue
proposing, given [6]), to contain the Calico mechanism driver and
devstack plugin pieces that are currently in [4], so that all of
Calico's OpenStack-specific code is under the Neutron big tent.  But I
believe that can be decoupled from review of the core Neutron changes
proposed above.

[6] https://review.openstack.org/#/c/194709/

Please do let me know if you have thoughts or comments on this.

Many thanks,
 Neil


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum][bp] Power Magnum to run on metal with Hyper

2015-07-19 Thread Peng Zhao
Thanks Jay.


Hongbin, yes, it will be a scheduling system, either swarm, k8s or mesos. I 
just think bay isn't a must in this case, and we don't need nova to provision 
BM hosts, which makes things more complicated imo.


Peng
 


-- Original --
From:  Jay Lau jay.lau@gmail.com;
Date:  Sun, Jul 19, 2015 10:36 AM
To:  OpenStack Development Mailing List (not for usage 
questions) openstack-dev@lists.openstack.org; 

Subject:  Re: [openstack-dev] [magnum][bp] Power Magnum to run on metal 
withHyper

 
Hong Bin,


I had some online discussion with Peng; it seems Hyper is now integrating with 
Kubernetes and also has plans to integrate with mesos for scheduling. Once the 
mesos integration is finished, we can treat mesos+hyper as another kind of bay.


Thanks


2015-07-19 4:15 GMT+08:00 Hongbin Lu hongbin...@huawei.com:
   
Peng,
 
 
 
Several questions here. You mentioned that HyperStack is a single big “bay”. 
Then, who is doing the multi-host scheduling, Hyper or something else? Were you 
suggesting to integrate Hyper with Magnum directly? Or were you suggesting to 
integrate Hyper with Magnum indirectly (i.e. through k8s, mesos and/or Nova)?
 
 
 
Best regards,
 
Hongbin
 
 
  
From: Peng Zhao [mailto:p...@hyper.sh] 
 Sent: July-17-15 12:34 PM
 To: OpenStack Development Mailing List (not for usage questions)
 
Subject: Re: [openstack-dev] [magnum][bp] Power Magnum to run on metal with 
Hyper



 
 
 
  
Hi, Adrian, Jay and all,
 
  
 
 
  
There could be a much longer version of this, but let me try to explain in a 
minimalist way.
 
  
 
 
  
Bay currently has two modes: VM-based, BM-based. In both cases, Bay helps to 
isolate different tenants' containers. In other words, bay is single-tenancy. 
For BM-based bay, the single tenancy is a worthy tradeoff, given the 
performance  merits of LXC vs VM. However, for a VM-based bay, there is no 
performance gain, but single tenancy seems a must, due to the lack of isolation 
in container. Hyper, as a hypervisor-based substitute for container, brings the 
much-needed isolation, and therefore  enables multi-tenancy. In HyperStack, we 
don't really need Ironic to provision multiple Hyper bays. On the other hand,  
the entire HyperStack cluster is a single big bay. Pretty similar to how Nova 
works.
 
  
 
 
  
Also, HyperStack is able to leverage Cinder, Neutron for SDS/SDN functionality. 
So when someone submits a Docker Compose app, HyperStack would launch HyperVMs 
and call Cinder/Neutron to setup the volumes and network. The architecture is  
quite simple.
 
  
 
 
  
Here is a blog I'd like to recommend: 
https://hyper.sh/blog/post/2015/06/29/docker-hyper-and-the-end-of-guest-os.html
 
 

  
 
 
  
Let me know your questions.
 
  
 
 
  
Thanks,
 
  
Peng
 
  
 
 
 

  
-- Original --
 
   
From:  Adrian Otto adrian.o...@rackspace.com;
Date:  Thu, Jul 16, 2015 11:02 PM
To:  OpenStack Development Mailing List (not for usage 
questions) openstack-dev@lists.openstack.org; 
 
  
Subject:  Re: [openstack-dev] [magnum][bp] Power Magnum to run onmetalwith Hyper
 
 
  
 
 
 
Jay, 
  
 
 
  
Hyper is a substitute for a Docker host, so I expect it could work equally well 
for all of the current bay types. Hyper's idea of a “pod” and a Kubernetes 
“pod” are similar, but different. I'm not yet convinced that integrating Hyper 
host creation directly with Magnum (and completely bypassing nova) is a good 
idea. It probably makes more sense to use nova with the ironic virt driver to 
provision Hyper hosts so we can use those as substitutes for Bay nodes in our 
various Bay types. This would fit in the place where we use Fedora Atomic 
today. We could still rely on nova to do all of the machine instance management 
and accounting like we do today, but produce bays that use Hyper instead of a 
Docker host. Everywhere we currently offer CoreOS as an option we could also 
offer Hyper as an alternative, with some caveats. 
 
  
 
 
  
There may be some caveats/drawbacks to consider before committing to a Hyper 
integration. I’ll be asking those of Peng also on this thread, so keep an eye 
out.
 
  
 
 
  
Thanks,
 
  
 
 
  
Adrian
 
 

 
 

On Jul 16, 2015, at 3:23 AM, Jay Lau jay.lau@gmail.com wrote:
 
 
 
 

 
Thanks Peng, then I can see two integration points for Magnum and Hyper:
 
 
1) Once the Hyper and k8s integration is finished, we can deploy k8s in two 
modes: docker mode and hyper mode, and the end user can select which mode they 
want to use. In that case we do not need to create a new bay, but we may need 
some enhancements to the current k8s bay.
 
 
2) After the mesos and hyper integration, we can treat mesos and hyper as a new 
bay type in magnum, just like what we are doing now for mesos+marathon.
 
 
Thanks!
 
 

 
 
  
2015-07-16 17:38 GMT+08:00 Peng Zhao p...@hyper.sh:
 

  
Hi Jay,
 
  
 
 
  
Yes, we are working with the community to integrate Hyper with Mesos and K8S. 
Since Hyper uses Pod as the 

Re: [openstack-dev] [magnum][bp] Power Magnum to run on metal with Hyper

2015-07-19 Thread Peng Zhao
Hi Jay,


My idea is that if someone wants an IaaS solution, go Nova+Cinder+Neutron. For 
a private CaaS solution, go K8S/Mesos+Cinder+Neutron(libnetwork?)+Docker; for a 
public CaaS, go K8S/Mesos+Cinder+Neutron+Hyper.


By doing this, we could clearly deliver the message to the community and the 
market. What you suggested is more of a hybrid cluster. It is of course a valid 
case, though I think it belongs to a more advanced stage. 


Currently, most CaaS offerings are deployed on top of some IaaS and are viewed 
by many as an extension of IaaS. With HyperStack, we could redefine the cloud 
by introducing a native, secure, multi-tenant CaaS. And all of this can be done 
within the OpenStack framework. 


Best,
Peng
 
-- Original --
From:  Jay Lau jay.lau@gmail.com;
Date:  Sun, Jul 19, 2015 10:32 AM
To:  OpenStack Development Mailing List (not for usage 
questions) openstack-dev@lists.openstack.org; 

Subject:  Re: [openstack-dev] [magnum][bp] Power Magnum to run on metal 
withHyper

 
Hi Peng,


Please check some of my understandings in line.


Thanks


2015-07-18 0:33 GMT+08:00 Peng Zhao p...@hyper.sh:
Hi, Adrian, Jay and all,


There could be a much longer version of this, but let me try to explain in a 
minimalist way.


Bay currently has two modes: VM-based, BM-based. In both cases, Bay helps to 
isolate different tenants' containers. In other words, bay is single-tenancy. 
For BM-based bay, the single tenancy is a worthy tradeoff, given the 
performance merits of LXC vs VM. However, for a VM-based bay, there is no 
performance gain, but single tenancy seems a must, due to the lack of isolation 
in container. Hyper, as a hypervisor-based substitute for container, brings the 
much-needed isolation, and therefore enables multi-tenancy. In HyperStack, we 
don't really need Ironic to provision multiple Hyper bays. On the other hand,  
the entire HyperStack cluster is a single big bay. Pretty similar to how Nova 
works.
IMHO, only creating one big bay might not fit the Magnum user scenario well; 
putting the entire HyperStack cluster into a single big bay is more like a 
public cloud case. For some private cloud cases, there are different users and 
tenants, and different tenants might want to set up their own HyperStack bay on 
their own resources.



Also, HyperStack is able to leverage Cinder, Neutron for SDS/SDN functionality. 
So when someone submits a Docker Compose app, HyperStack would launch HyperVMs 
and call Cinder/Neutron to setup the volumes and network. The architecture is 
quite simple. 

This is cool! 



Here is a blog I'd like to recommend: 
https://hyper.sh/blog/post/2015/06/29/docker-hyper-and-the-end-of-guest-os.html
 
Let me know your questions.


Thanks,
Peng


-- Original --
From:  Adrian Otto adrian.o...@rackspace.com;
Date:  Thu, Jul 16, 2015 11:02 PM
To:  OpenStack Development Mailing List (not for usage 
questions) openstack-dev@lists.openstack.org; 

Subject:  Re: [openstack-dev] [magnum][bp] Power Magnum to run onmetalwith  
Hyper



 
 Jay, 
 
 Hyper is a substitute for a Docker host, so I expect it could work equally 
well for all of the current bay types. Hyper’s idea of a “pod” and a Kubernetes 
“pod” are similar, but different. I'm not yet convinced that integrating Hyper 
host creation directly with Magnum (and completely bypassing nova) is a good 
idea. It probably makes more sense to use nova with the ironic virt driver to 
provision Hyper hosts so we can use those as substitutes for Bay 
nodes in our various Bay types. This would fit in the place where we use Fedora 
Atomic today. We could still rely on nova to do all of the machine instance 
management and accounting like we do today, but produce bays that use Hyper 
instead of a Docker host. Everywhere we currently offer CoreOS as an  option we 
could also offer Hyper as an alternative, with some caveats. 
 
 
 There may be some caveats/drawbacks to consider before committing to a Hyper 
integration. I’ll be asking those of Peng also on this thread, so keep an eye 
out.
 
 
 Thanks,
 
 
 Adrian
 
   On Jul 16, 2015, at 3:23 AM, Jay Lau jay.lau@gmail.com wrote:
 
 Thanks Peng, then I can see two integration points for Magnum and Hyper:
 
 
 1) Once the Hyper and k8s integration is finished, we can deploy k8s in two 
modes: docker mode and hyper mode, and the end user can select which mode they 
want to use. In that case we do not need to create a new bay, but we may need 
some enhancements to the current k8s bay.
 
 
 2) After the mesos and hyper integration, we can treat mesos and hyper as a new 
bay type in magnum, just like what we are doing now for mesos+marathon.
 
 
 Thanks!
 
 
 2015-07-16 17:38 GMT+08:00 Peng Zhao  p...@hyper.sh:
Hi Jay,
 
 
 Yes, we are working with the community to integrate Hyper with Mesos and K8S. 
Since Hyper uses Pod as the default job unit, it is quite easy to integrate 
with K8S. Mesos takes a bit more effort, but it is still straightforward.
 

Re: [openstack-dev] [cinder]Question for availability_zone of cinder

2015-07-19 Thread Duncan Thomas
So this has come up a few times. My question is, does having one node
serving several backends really form multiple AZs? Not really, the c-vol
node becomes a single point of failure.

There might be value in moving the AZ setting into the per-backend
configurables, if it doesn't work there already, for testing if nothing
else, but I do worry that it encourages people to misunderstand or, worse,
intentionally fake multiple AZs.
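
For context, a multi-backend cinder.conf today looks roughly like the sketch
below; storage_availability_zone lives in [DEFAULT] and applies to the whole
c-vol service, and the per-backend option shown is purely hypothetical, only
to illustrate where such a setting would have to go:

  [DEFAULT]
  enabled_backends = lvm1,lvm2
  storage_availability_zone = az1   # applies to every backend on this node

  [lvm1]
  volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
  volume_backend_name = lvm1
  # hypothetical: availability_zone = az1

  [lvm2]
  volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
  volume_backend_name = lvm2
  # hypothetical: availability_zone = az2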


On 19 July 2015 at 05:19, hao wang sxmatch1...@gmail.com wrote:

 Hi  stackers,

 I found that cinder can currently configure only one storage_availability_zone
 for cinder-volume. If using multi-backend in one cinder-volume node, could we
 have a different AZ for each backend, so that we can treat each backend as an
 AZ and create volumes in that AZ?

 Regards,
 Wang Hao

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
-- 
Duncan Thomas
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Openstack][Horizon] Sharable Angular mock

2015-07-19 Thread Chen, Shaoquan
Currently, random mock objects are created inside spec files to make writing 
unit tests easier. This approach works but has some drawbacks:

  *   mocks are not sharable between specs.
  *   mocks are usually too simple.
  *   mocks may not be consistent between spec files.
  *   mocks are not tested themselves.
  *   mocks are hard to maintain and manage.

In order to make it easy for developers to write high-quality unit tests and 
e2e tests, we want a set of high-quality mock objects. To make them easy to 
maintain:

  *   mocks should reside in ``.mock.js`` files and be loaded only in the test 
runner page.
  *   mocks should be loaded after production code files, before spec code 
files.
  *   mocks should be sharable between specs.
  *   mocks do not belong to specs; they should have a 1:1 relationship with 
the object they try to mock.
  *   mocks should be at a level as low as possible, maybe where JavaScript 
cannot reach directly.
  *   mocks must be tested themselves.
  *   mocks should be easy to find, use and manage.

I drafted a simple BP at 
https://blueprints.launchpad.net/horizon/+spec/horizon-angular-mocks to 
summarize the issues I see in Horizon and how I think they could be fixed. I 
also set up two patches to prove the concept:

  *   https://review.openstack.org/#/c/202817/
  *   https://review.openstack.org/#/c/202830/

This mock is for the window object, and another could be OpenStackHttpBackend, 
which would mimic all the behaviors of the Horizon web services. As you can 
see, the window mock object can be easily applied to a bug Tyr fixed recently, 
so that the spec code does not need to create an inline mock object. Because 
these shared mocks are independent from specs, they can be more neutral and can 
have many more features than an over-simplified mock for a specific test.

Please let me know if this interests you.

Thanks,
Sean

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Barbican : Unable to store the secret when Barbican was Integrated with SafeNet HSM

2015-07-19 Thread John Vrbanac
Don't include the curly brackets on the script arguments. The documentation is 
just using them to indicate that those are placeholders for real values.
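
For example, the command below would become something like this (same values,
just without the braces; adjust the library path, passphrase and slot to your
environment):

  python pkcs11-key-generation --library-path /usr/lib/libCryptoki2_64.so \
      --passphrase test123 --slot-id 1 mkek --length 32 --label 'an_mkek'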


John Vrbanac

From: Asha Seshagiri asha.seshag...@gmail.com
Sent: Sunday, July 19, 2015 2:15 PM
To: OpenStack Development Mailing List (not for usage questions)
Cc: Reller, Nathan S.
Subject: Re: [openstack-dev] Barbican : Unable to store the secret when 
Barbican was Integrated with SafeNet HSM

Hi John ,

Thanks for pointing me to the right script.
I appreciate your help.

I tried running the script with the following command:

[root@HSM-Client bin]# python pkcs11-key-generation --library-path 
{/usr/lib/libCryptoki2_64.so} --passphrase {test123} --slot-id 1  mkek --length 
32 --label 'an_mkek'
Traceback (most recent call last):
  File pkcs11-key-generation, line 120, in module
main()
  File pkcs11-key-generation, line 115, in main
kg = KeyGenerator()
  File pkcs11-key-generation, line 38, in __init__
ffi=ffi
  File /root/barbican/barbican/plugin/crypto/pkcs11.py, line 315, in __init__
self.lib = self.ffi.dlopen(library_path)
  File /usr/lib64/python2.7/site-packages/cffi/api.py, line 127, in dlopen
lib, function_cache = _make_ffi_library(self, name, flags)
  File /usr/lib64/python2.7/site-packages/cffi/api.py, line 572, in 
_make_ffi_library
backendlib = _load_backend_lib(backend, libname, flags)
  File /usr/lib64/python2.7/site-packages/cffi/api.py, line 561, in 
_load_backend_lib
return backend.load_library(name, flags)
OSError: cannot load library {/usr/lib/libCryptoki2_64.so}: 
{/usr/lib/libCryptoki2_64.so}: cannot open shared object file: No such file or 
directory

Unable to run the script since the library libCryptoki2_64.so cannot be opened.

I tried the following solution:

  *   vi /etc/ld.so.conf
  *   Added both paths of libCryptoki2_64.so (found with the command 
find / -name libCryptoki2_64.so) to the /etc/ld.so.conf file:
 *   /usr/safenet/lunaclient/lib/libCryptoki2_64.so
 *   /usr/lib/libCryptoki2_64.so
  *   sudo ldconfig
  *   ldconfig -p

But the above solution failed and I am getting the same error.

Any help would be highly appreciated.
Thanks in advance!

Thanks and Regards,
Asha Seshagiri

On Sat, Jul 18, 2015 at 11:12 PM, John Vrbanac 
john.vrba...@rackspace.commailto:john.vrba...@rackspace.com wrote:

Asha,

It looks like you don't have your mkek label correctly configured. Make sure 
that the mkek_label and hmac_label values in your config correctly reflect the 
keys that you've generated on your HSM.

The plugin will cache the key handle to the mkek and hmac when the plugin 
starts, so if it cannot find them, it'll fail to load the plugin altogether.


If you need help generating your mkek and hmac, refer to 
http://docs.openstack.org/developer/barbican/api/quickstart/pkcs11keygeneration.html
 for instructions on how to create them using a script.


As far as who uses HSMs, I know we (Rackspace) use them with Barbican.


John Vrbanac

From: Asha Seshagiri asha.seshag...@gmail.commailto:asha.seshag...@gmail.com
Sent: Saturday, July 18, 2015 8:47 PM
To: openstack-dev
Cc: Reller, Nathan S.
Subject: [openstack-dev] Barbican : Unable to store the secret when Barbican 
was Integrated with SafeNet HSM

Hi All ,

I have configured Barbican to integrate with a SafeNet HSM.
I installed the SafeNet client libraries, registered the Barbican machine to 
point to the HSM server, and also assigned an HSM partition.

The following changes were made in the barbican.conf file:


# = Secret Store Plugin ===
[secretstore]
namespace = barbican.secretstore.plugin
enabled_secretstore_plugins = store_crypto

# = Crypto plugin ===
[crypto]
namespace = barbican.crypto.plugin
enabled_crypto_plugins = p11_crypto

[p11_crypto_plugin]
# Path to vendor PKCS11 library
library_path = '/usr/lib/libCryptoki2_64.so'
# Password to login to PKCS11 session
login = 'test123'
# Label to identify master KEK in the HSM (must not be the same as HMAC label)
mkek_label = 'an_mkek'
# Length in bytes of master KEK
mkek_length = 32
# Label to identify HMAC key in the HSM (must not be the same as MKEK label)
hmac_label = 'my_hmac_label'
# HSM Slot id (Should correspond to a configured PKCS11 slot). Default: 1
slot_id = 1

Unable to store the secret when Barbican was integrated with HSM.

[root@HSM-Client crypto]# curl -X POST -H 'content-type:application/json' -H 
'X-Project-Id:12345' -d '{"payload": "my-secret-here", "payload_content_type": 
"text/plain"}' http://localhost:9311/v1/secrets
{"code": 500, "description": "Secret creation failure seen - please contact 
site administrator.", "title": "Internal Server Error"}[root@HSM-Client crypto]#


Please find the logs below :

2015-07-18 17:15:32.642 29838 ERROR barbican.plugin.util.utils 
[req-354affce-b3d6-41fd-b050-5e5c604004eb - 12345 - - -] Problem seen creating 
plugin: 'p11_crypto'
2015-07-18 

Re: [openstack-dev] [magnum]

2015-07-19 Thread Kai Qiang Wu
My thoughts:

I think we'd better check what Google will do after such an official
announcement. The community changes fast, and we'd really welcome someone
contributing consistently and actively.



Thanks


Best Wishes,

Kai Qiang Wu (吴开强  Kennan)
IBM China System and Technology Lab, Beijing

E-mail: wk...@cn.ibm.com
Tel: 86-10-82451647
Address: Building 28(Ring Building), ZhongGuanCun Software Park,
 No.8 Dong Bei Wang West Road, Haidian District Beijing P.R.China
100193

Follow your heart. You are miracle!



From:   Daneyon Hansen (danehans) daneh...@cisco.com
To: OpenStack Development Mailing List (not for usage questions)
openstack-dev@lists.openstack.org
Date:   07/18/2015 12:46 AM
Subject:[openstack-dev]  [magnum]



All,

Does anyone have insight into Google's plans for contributing to containers
within OpenStack?

http://googlecloudplatform.blogspot.tw/2015/07/Containers-Private-Cloud-Google-Sponsors-OpenStack-Foundation.html

Regards,
Daneyon Hansen
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum][bp] Power Magnum to run on metal with Hyper

2015-07-19 Thread Kai Qiang Wu
Hi Peng,

As @Adrian pointed it out:

My first suggestion is to find a way to make a nova virt driver for Hyper,
which could allow it to be used with all of our current Bay types in
Magnum.


I remember that you or some other people in your company proposed a bp about a
nova virt driver for Hyper. What's the status of that bp now?
Has it been accepted by the nova project, or was it cancelled?


Thanks

Best Wishes,

Kai Qiang Wu (吴开强  Kennan)
IBM China System and Technology Lab, Beijing

E-mail: wk...@cn.ibm.com
Tel: 86-10-82451647
Address: Building 28(Ring Building), ZhongGuanCun Software Park,
 No.8 Dong Bei Wang West Road, Haidian District Beijing P.R.China
100193

Follow your heart. You are miracle!



From:   Adrian Otto adrian.o...@rackspace.com
To: OpenStack Development Mailing List (not for usage questions)
openstack-dev@lists.openstack.org
Date:   07/19/2015 11:18 PM
Subject:Re: [openstack-dev] [magnum][bp] Power Magnum to run on
metal   withHyper



Peng,

You are not the first to think this way, and it's one of the reasons we did
not integrate Containers with OpenStack in a meaningful way a full year
earlier. Please pay close attention.

1) OpenStack's key influencers care about two personas: 1.1) Cloud Operators
1.2) Cloud Consumers. If you only think in terms of 1.2, then your idea
will get killed. Operators matter.

2) Cloud Operators need a consistent way to bill for the IaaS services they
provide. Nova emits all of the RPC messages needed to do this. Having a
second nova that does this slightly differently is a really annoying
problem that will make Operators hate the software. It's better to use
nova, have things work consistently, and plug in virt drivers to it.

3) Creation of a host is only part of the problem. That's the easy part.
Nova also does a bunch of other things too. For example, say you want to
live migrate a guest from one host to another. There is already
functionality in Nova for doing that.

4) Resources need to be capacity managed. We call this scheduling. Nova has
a pluggable scheduler to help with the placement of guests on hosts. Magnum
will not.

5) Hosts in a cloud need to integrate with a number of other services, such
as an image service, messaging, networking, storage, etc. If you only think
in terms of host creation, and do something without nova, then you need to
re-integrate with all of these things.

Now, I probably left out examples of lots of other things that Nova does.
What I have mentioned is enough to make my point that there are a lot of
things that Magnum is intentionally NOT doing that we expect to get from
Nova, and I will block all code that gratuitously duplicates functionality
that I believe belongs in Nova. I promised our community I would not
duplicate existing functionality without a very good reason, and I will
keep that promise.

Let's find a good way to fit Hyper with OpenStack in a way that best
leverages what exists today, and is least likely to be rejected. Please
note that the proposal needs to be changed from where it is today to
achieve this fit.

My first suggestion is to find a way to make a nova virt driver for Hyper,
which could allow it to be used with all of our current Bay types in
Magnum.

Thanks,

Adrian


 Original message 
From: Peng Zhao p...@hyper.sh
Date: 07/19/2015 5:36 AM (GMT-08:00)
To: OpenStack Development Mailing List (not for usage questions)
openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [magnum][bp] Power Magnum to run on metal
withHyper

Thanks Jay.

Hongbin, yes, it will be a scheduling system, either swarm, k8s or mesos. I
just think bay isn't a must in this case, and we don't need nova to
provision BM hosts, which makes things more complicated imo.

Peng


-- Original --
From:  Jay Lau jay.lau@gmail.com;
Date:  Sun, Jul 19, 2015 10:36 AM
To:  OpenStack Development Mailing List (not for usage
questions) openstack-dev@lists.openstack.org;
Subject:  Re: [openstack-dev] [magnum][bp] Power Magnum to run on metal
withHyper

Hong Bin,

I have some online discussion with Peng, seems hyper is now integrating
with Kubernetes and also have plan integrate with mesos for scheduling.
Once mesos integration finished, we can treat mesos+hyper as another kind
of bay.

Thanks

2015-07-19 4:15 GMT+08:00 Hongbin Lu hongbin...@huawei.com:
  Peng,





  Several questions Here. You mentioned that HyperStack is a single big
  “bay”. Then, who is doing the multi-host scheduling, Hyper or something
  else? Were you suggesting to integrate Hyper with Magnum directly? Or you
  were suggesting to integrate Hyper with Magnum indirectly (i.e. through
  k8s, mesos and/or Nova)?





  Best regards,


  Hongbin





  From: Peng Zhao [mailto:p...@hyper.sh]
  Sent: July-17-15 12:34 PM
  To: OpenStack 

[openstack-dev] [Fuel][Plugins] additionnal categories for plugins

2015-07-19 Thread Samuel Bartel
Hi all,

I think we are missing a category for plugins. I was thinking of the following
plugins:
- TLS plugin, related to security. For example, everything related to TLS
access to the dashboard/vnc and the APIs
- Plugin to deploy freezer with fuel in order to achieve backup and restore
(ongoing)
- Plugin to set up availability zones (ongoing)

The actual categories are:
monitoring
storage
storage-cinder
storage-glance
network
hypervisor

These plugins do not match any of those categories.
Should we leave the category field empty, as requested in the fuel plugin
documentation, or can we consider adding additional categories?

Regards

Samuel
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc][all] Tags, explain like I am five?

2015-07-19 Thread Anita Kuno
On 07/16/2015 05:13 AM, Thierry Carrez wrote:
 anyone with a stake in the game and their cat will upvote or
 downvote for no reason

I think we are overloaded on skewed information already and I'm not in
support of a structure that would likely offer gamed information.

Thanks,
Anita.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc][all] Tags, explain like I am five?

2015-07-19 Thread Ed Leafe
On Jul 16, 2015, at 4:13 AM, Thierry Carrez thie...@openstack.org wrote:

 I don't really like the idea of a popularity contest to define HA or
 scales -- anyone with a stake in the game and their cat will upvote or
 downvote for no reason. I prefer to define HA in clear terms and have
 some group maintain the tag across the set of projects.

Hmmm… I was thinking of concepts like restaurant or movie reviews: since those 
depend on the tastes of the reviewer, they may or may not match your tastes. So 
we might consider tags that are backed by different groups of operators, each 
of whom may have a completely different environment. New operators coming into 
OpenStack will soon recognize which tags are relevant to their environment, and 
give them much more credence when it comes to evaluating the state of the 
various projects.

-- Ed Leafe







__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [fuel][puppet] The state of collaboration: 5 weeks

2015-07-19 Thread Emilien Macchi
I'm currently on holiday, but I could not resist taking some time to reply.

On 07/18/2015 06:32 PM, Dmitry Borodaenko wrote:
 It has been 5 weeks since Emilien has asked Fuel developers to
 contribute more
 actively to Puppet OpenStack project [0]. We had a lively discussion on
 openstack-dev, myself and other Fuel developers proposed some concrete
 steps
 that could be done to reconcile the two projects, the whole thread was
 reported
 and discussed on Linux Weekly News [1], and the things were looking up.
 
 [0]
 http://lists.openstack.org/pipermail/openstack-dev/2015-June/066544.html
 [1] https://lwn.net/Articles/648331/
 
 And now, 5 weeks later, Emilien has reported to the Technical Committee
 that
 there has been no progress on reconciliation between Fuel and Puppet
 OpenStack,
 and used his authority as a PTL to request that the Fuel's proposal to join
 OpenStack [2] is rejected.

1) What you're writing here is not correct.
Please read again my first comment on the Governance patch:

They are making progress and I'm sure they are doing their best.

I've been following all your actions since our *thread*, and so far all
of this seems good to me, I'm honestly satisfied to know that you guys
have plans.

2) The word 'authority' is also wrong. I'm not part of TC and my vote
does not count at all for the final decision. If Fuel has to be an
OpenStack project, TC is free to vote by themselves without me and it
seems they already negatively voted before my thoughts.

3) I -1'd the governance patch because, to me, it seems too early to have
Fuel be part of OpenStack. We had that discussion 5 weeks ago, and a lot
of actions have been taken, so I hope the TC will wait for actual results
(mainly highlighted in the review itself).
Let me quote my first comment on the patch, which clearly shows I'm not
against the idea:
I sincerely hope they'll realize what they plan to do, which is being
part of our community like other projects already succeed to do.


 [2] https://review.openstack.org/199232
 
 In further comments on the same review, Emilien has claimed that there's
 clearly less contribution to Puppet OpenStack from Fuel developers than
 before, and even brought up an example of a review in puppet-horizon
 that was
 proposed and then abandoned by Fuel team [3]. Jay went as far as calling
 that
 example an obvious failure of working with the upstream Puppet-OpenStack
 community.

Andrew picked the metric most in your favor... the review metric,
which is something available to anyone who has signed the CLA.

I would rather focus on patchsets, commits, bug fixes, IRC, ML, etc., which
really show collaboration in a group.
And I honestly think it's making progress, whatever the numbers say.

 [3] https://review.openstack.org/198119
 
 Needless to say, I found all these claims deeply disturbing, and had to
 look
 closely into what's going on.
 
 The case of the puppet-horizon commit has turned out to be surprisingly
 obvious.
 
 Even before looking into the review comments, I could see a technical
 reason
 for abandoning the commit: if there is a bug in a component, fixing that
 bug in
 the package is preferrable to fixing it in puppet, because it allows
 anybody to
 benefit from the fix, not just the people deploying that package with
 puppet.

You are not providing official Ubuntu packaging, but your own packages,
mainly used by Fuel, while the Puppet OpenStack modules are widely used by
the OpenStack community.
Fixing that bug in Fuel packaging is the shortest & easiest way for you
to fix it, while we are really doing something wrong in puppet-horizon
with the 'compress' option.
So Fuel is now fixed and puppet-horizon is still broken.

 And if you do look at the review in question, you'll find that
 immediately (14
 minutes, and that at 6pm on Friday night!) after Jay has asked in the
 comments
 to the review why it was abandoned, the commit author from the Fuel team
 has
 explained that this patch was a workaround for a packaging problem, and
 that
 this was pointed out in the review by a Horizon core reviewer more than
 a week
 ago, and later corroborated by a Puppet OpenStack core reviewer. Further
 confirming that fixing this in the package instead of in puppet-horizon
 was an
 example of Fuel developers agreeing with other Puppet OpenStack
 contributors
 and doing the right thing.

This kind of bug has to be fixed between the Horizon & Puppet teams, and not
with some workaround in a packaging tool.
We don't really want workarounds, I guess, do we?

 Emilien has clearly found this case important enough to report to the
 TC, and
 yet didn't find it important enough to simply ask Fuel developers why they
 chose to abandon the commit. I guess you can call that an obvious
 failure to
 work together.
 
 Looking further into Fuel team's reviews for puppet-horizon, I found
 another
 equally disturbing example [4].
 
 [4] https://review.openstack.org/190548
 
 Here's what I see in this review:
 
 a) Fuel team has spent more than a month (since June 11) on 

Re: [openstack-dev] [openstack-infra] [CI] [tempest] Tempest tests failing with SSH timeout.

2015-07-19 Thread Abhishek Shrivastava
Hi Ramy,

Thanks for the suggestion. One more thing I need to ask: I have set up one
more CI, so is there any way to decide dynamically that only the required
projects get downloaded and installed during the devstack installation? I
don't see anything in the devstack-gate scripts that would let me achieve
this scenario.

On Sun, Jul 19, 2015 at 8:38 PM, Asselin, Ramy ramy.asse...@hp.com wrote:

  Just the export I mentioned:

 export DEVSTACK_GATE_NEUTRON=1

 Devstack-gate scripts will do the right thing when it sees that set. You
 can see plenty of examples here [1].



 Ramy



 [1]
 http://git.openstack.org/cgit/openstack-infra/project-config/tree/jenkins/jobs/devstack-gate.yaml#n467



 *From:* Abhishek Shrivastava [mailto:abhis...@cloudbyte.com]
 *Sent:* Sunday, July 19, 2015 2:24 AM
 *To:* OpenStack Development Mailing List (not for usage questions)
 *Subject:* Re: [openstack-dev] [openstack-infra] [CI] [tempest] Tempest
 tests failing with SSH timeout.



 Hi Ramy,



 Thanks for the suggestion but since I am not including the neutron
 project, so downloading and including it will require any additional
 configuration in devstack-gate or not?



 On Sat, Jul 18, 2015 at 11:41 PM, Asselin, Ramy ramy.asse...@hp.com
 wrote:

  We ran into this issue as well. I never found the root cause, but I
 found a work-around: Use neutron-networking instead of the default
 nova-networking.



 If you’re using devstack-gate, it’s as  simple as:

 export DEVSTACK_GATE_NEUTRON=1



 Then run the job as usual.



 Ramy



 *From:* Abhishek Shrivastava [mailto:abhis...@cloudbyte.com]
 *Sent:* Friday, July 17, 2015 9:15 AM
 *To:* OpenStack Development Mailing List (not for usage questions)
 *Subject:* [openstack-dev] [openstack-infra] [CI] [tempest] Tempest tests
 failing with SSH timeout.



 Hi Folks,



 In my CI I see the following tempest tests failure for a past couple of
 days.

 ·
 tempest.scenario.test_minimum_basic.TestMinimumBasicScenario.test_minimum_basic_scenario
  [361.274316s] ... FAILED

 ·
 tempest.scenario.test_volume_boot_pattern.TestVolumeBootPattern.test_volume_boot_pattern
  [320.122458s] ... FAILED

 ·
 tempest.scenario.test_volume_boot_pattern.TestVolumeBootPatternV2.test_volume_boot_pattern
  [317.399342s] ... FAILED

 ·
 tempest.thirdparty.boto.test_ec2_instance_run.InstanceRunTest.test_compute_with_volumes
  [257.858272s] ... FAILED

  The failure logs are always the same every time, i.e;



  *03:34:09* 2015-07-17 03:21:13,256 9505 ERROR
 [tempest.scenario.manager] (TestVolumeBootPattern:test_volume_boot_pattern) 
 Initializing SSH connection to 172.24.5.1 failed. Error: Connection to the 
 172.24.5.1 via SSH timed out.

 *03:34:09* User: cirros, Password: None

 *03:34:09* 2015-07-17 03:21:13.256 9505 ERROR tempest.scenario.manager 
 Traceback (most recent call last):

 *03:34:09* 2015-07-17 03:21:13.256 9505 ERROR tempest.scenario.manager   
 File tempest/scenario/manager.py, line 312, in get_remote_client

 *03:34:09* 2015-07-17 03:21:13.256 9505 ERROR tempest.scenario.manager
  linux_client.validate_authentication()

 *03:34:09* 2015-07-17 03:21:13.256 9505 ERROR tempest.scenario.manager   
 File tempest/common/utils/linux/remote_client.py, line 62, in 
 validate_authentication

 *03:34:09* 2015-07-17 03:21:13.256 9505 ERROR tempest.scenario.manager
  self.ssh_client.test_connection_auth()

 *03:34:09* 2015-07-17 03:21:13.256 9505 ERROR tempest.scenario.manager   
 File 
 /opt/stack/new/tempest/.tox/all/local/lib/python2.7/site-packages/tempest_lib/common/ssh.py,
  line 151, in test_connection_auth

 *03:34:09* 2015-07-17 03:21:13.256 9505 ERROR tempest.scenario.manager
  connection = self._get_ssh_connection()

 *03:34:09* 2015-07-17 03:21:13.256 9505 ERROR tempest.scenario.manager   
 File 
 /opt/stack/new/tempest/.tox/all/local/lib/python2.7/site-packages/tempest_lib/common/ssh.py,
  line 87, in _get_ssh_connection

 *03:34:09* 2015-07-17 03:21:13.256 9505 ERROR tempest.scenario.manager
  password=self.password)

 *03:34:09* 2015-07-17 03:21:13.256 9505 ERROR tempest.scenario.manager 
 SSHTimeout: Connection to the 172.24.5.1 via SSH timed out.

 *03:34:09* 2015-07-17 03:21:13.256 9505 ERROR tempest.scenario.manager 
 User: cirros, Password: None

 *03:34:09* 2015-07-17 03:21:13.256 9505 ERROR tempest.scenario.manager

 *03:34:09* 2015-07-17 03:21:14,377 9505 INFO [tempest_lib.common.re



 Because of these every job is failing, so if someone can help me regarding 
 this please do reply.



 --

   *Thanks & Regards,*

 *Abhishek*

 *Cloudbyte Inc. http://www.cloudbyte.com*


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 

Re: [openstack-dev] [openstack-infra] [CI] [tempest] Tempest tests failing with SSH timeout.

2015-07-19 Thread Asselin, Ramy
There are two ways that I know of to customize what services are run:

1.  Setup your own feature matrix [1]

2.  Override enabled services [2]

Option 2 is probably what you’re looking for.

[1] 
http://git.openstack.org/cgit/openstack-infra/devstack-gate/tree/devstack-vm-gate-wrap.sh#n152
[2] 
http://git.openstack.org/cgit/openstack-infra/devstack-gate/tree/devstack-vm-gate.sh#n76
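
Roughly, assuming the override variable is the one used in the script linked
at [2] (the exact service list is illustrative and depends on what your CI
needs), that would look something like:

export DEVSTACK_GATE_NEUTRON=1
export OVERRIDE_ENABLED_SERVICES=mysql,rabbit,key,g-api,g-reg,n-api,n-cpu,n-cond,n-sch,q-svc,q-agt,q-dhcp,q-l3,q-meta,c-api,c-sch,c-vol,tempest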

From: Abhishek Shrivastava [mailto:abhis...@cloudbyte.com]
Sent: Sunday, July 19, 2015 10:37 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [openstack-infra] [CI] [tempest] Tempest tests 
failing with SSH timeout.

Hi Ramy,

Thanks for the suggestion. One more thing I need to ask: I have set up one more
CI, so is there any way to decide dynamically that only the required projects
get downloaded and installed during the devstack installation? I don't see
anything in the devstack-gate scripts that would let me achieve this scenario.

On Sun, Jul 19, 2015 at 8:38 PM, Asselin, Ramy 
ramy.asse...@hp.commailto:ramy.asse...@hp.com wrote:
Just the export I mentioned:
export DEVSTACK_GATE_NEUTRON=1
Devstack-gate scripts will do the right thing when it sees that set. You can 
see plenty of examples here [1].

Ramy

[1] 
http://git.openstack.org/cgit/openstack-infra/project-config/tree/jenkins/jobs/devstack-gate.yaml#n467

From: Abhishek Shrivastava 
[mailto:abhis...@cloudbyte.commailto:abhis...@cloudbyte.com]
Sent: Sunday, July 19, 2015 2:24 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [openstack-infra] [CI] [tempest] Tempest tests 
failing with SSH timeout.

Hi Ramy,

Thanks for the suggestion but since I am not including the neutron project, so 
downloading and including it will require any additional configuration in 
devstack-gate or not?

On Sat, Jul 18, 2015 at 11:41 PM, Asselin, Ramy 
ramy.asse...@hp.commailto:ramy.asse...@hp.com wrote:
We ran into this issue as well. I never found the root cause, but I found a 
work-around: Use neutron-networking instead of the default nova-networking.

If you’re using devstack-gate, it’s as  simple as:
export DEVSTACK_GATE_NEUTRON=1

Then run the job as usual.

Ramy

From: Abhishek Shrivastava 
[mailto:abhis...@cloudbyte.commailto:abhis...@cloudbyte.com]
Sent: Friday, July 17, 2015 9:15 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [openstack-infra] [CI] [tempest] Tempest tests failing 
with SSH timeout.

Hi Folks,

In my CI I see the following tempest tests failure for a past couple of days.

•
tempest.scenario.test_minimum_basic.TestMinimumBasicScenario.test_minimum_basic_scenario
 [361.274316s] ... FAILED

•
tempest.scenario.test_volume_boot_pattern.TestVolumeBootPattern.test_volume_boot_pattern
 [320.122458s] ... FAILED

•
tempest.scenario.test_volume_boot_pattern.TestVolumeBootPatternV2.test_volume_boot_pattern
 [317.399342s] ... FAILED

•
tempest.thirdparty.boto.test_ec2_instance_run.InstanceRunTest.test_compute_with_volumes
 [257.858272s] ... FAILED

The failure logs are always the same every time, i.e;



03:34:09 2015-07-17 03:21:13,256 9505 ERROR[tempest.scenario.manager] 
(TestVolumeBootPattern:test_volume_boot_pattern) Initializing SSH connection to 
172.24.5.1 failed. Error: Connection to the 172.24.5.1 via SSH timed out.

03:34:09 User: cirros, Password: None

03:34:09 2015-07-17 03:21:13.256 9505 ERROR tempest.scenario.manager 
Traceback (most recent call last):

03:34:09 2015-07-17 03:21:13.256 9505 ERROR tempest.scenario.manager   File 
tempest/scenario/manager.py, line 312, in get_remote_client

03:34:09 2015-07-17 03:21:13.256 9505 ERROR tempest.scenario.manager 
linux_client.validate_authentication()

03:34:09 2015-07-17 03:21:13.256 9505 ERROR tempest.scenario.manager   File 
tempest/common/utils/linux/remote_client.py, line 62, in 
validate_authentication

03:34:09 2015-07-17 03:21:13.256 9505 ERROR tempest.scenario.manager 
self.ssh_client.test_connection_auth()

03:34:09 2015-07-17 03:21:13.256 9505 ERROR tempest.scenario.manager   File 
/opt/stack/new/tempest/.tox/all/local/lib/python2.7/site-packages/tempest_lib/common/ssh.py,
 line 151, in test_connection_auth

03:34:09 2015-07-17 03:21:13.256 9505 ERROR tempest.scenario.manager 
connection = self._get_ssh_connection()

03:34:09 2015-07-17 03:21:13.256 9505 ERROR tempest.scenario.manager   File 
/opt/stack/new/tempest/.tox/all/local/lib/python2.7/site-packages/tempest_lib/common/ssh.py,
 line 87, in _get_ssh_connection

03:34:09 2015-07-17 03:21:13.256 9505 ERROR tempest.scenario.manager 
password=self.password)

03:34:09 2015-07-17 03:21:13.256 9505 ERROR tempest.scenario.manager 
SSHTimeout: Connection to the 172.24.5.1 via SSH timed out.

03:34:09 2015-07-17 03:21:13.256 9505 ERROR tempest.scenario.manager User: 
cirros, 

Re: [openstack-dev] [magnum][bp] Power Magnum to run on metal with Hyper

2015-07-19 Thread Adrian Otto
Peng,

You are not the first to think this way, and it's one of the reasons we did not 
integrate Containers with OpenStack in a meaningful way a full year earlier. 
Please pay close attention.

1) OpenStack's key influencers care about two personas: 1.1) Cloud Operators 
1.2) Cloud Consumers. If you only think in terms of 1.2, then your idea will 
get killed. Operators matter.

2) Cloud Operators need a consistent way to bill for the IaaS services they 
provide. Nova emits all of the RPC messages needed to do this. Having a second 
nova that does this slightly differently is a really annoying problem that will 
make Operators hate the software. It's better to use nova, have things work 
consistently, and plug in virt drivers to it.

3) Creation of a host is only part of the problem. That's the easy part. Nova 
also does a bunch of other things too. For example, say you want to live 
migrate a guest from one host to another. There is already functionality in 
Nova for doing that.

4) Resources need to be capacity managed. We call this scheduling. Nova has a 
pluggable scheduler to help with the placement of guests on hosts. Magnum will 
not.

5) Hosts in a cloud need to integrate with a number of other services, such as 
an image service, messaging, networking, storage, etc. If you only think in 
terms of host creation, and do something without nova, then you need to 
re-integrate with all of these things.

Now, I probably left out examples of lots of other things that Nova does. What 
I have mentioned is enough to make my point that there are a lot of things that 
Magnum is intentionally NOT doing that we expect to get from Nova, and I will 
block all code that gratuitously duplicates functionality that I believe 
belongs in Nova. I promised our community I would not duplicate existing 
functionality without a very good reason, and I will keep that promise.

Let's find a good way to fit Hyper with OpenStack in a way that best leverages 
what exists today, and is least likely to be rejected. Please note that the 
proposal needs to be changed from where it is today to achieve this fit.

My first suggestion is to find a way to make a nova virt driver for Hyper, which 
could allow it to be used with all of our current Bay types in Magnum.

Thanks,

Adrian


 Original message 
From: Peng Zhao p...@hyper.sh
Date: 07/19/2015 5:36 AM (GMT-08:00)
To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [magnum][bp] Power Magnum to run on metal withHyper

Thanks Jay.

Hongbin, yes, it will be a scheduling system, either swarm, k8s or mesos. I 
just think bay isn't a must in this case, and we don't need nova to provision 
BM hosts, which makes things more complicated imo.

Peng


-- Original --
From:  Jay Lau jay.lau@gmail.com;
Date:  Sun, Jul 19, 2015 10:36 AM
To:  OpenStack Development Mailing List (not for usage 
questions) openstack-dev@lists.openstack.org;
Subject:  Re: [openstack-dev] [magnum][bp] Power Magnum to run on metal 
withHyper

Hong Bin,

I have some online discussion with Peng, seems hyper is now integrating with 
Kubernetes and also have plan integrate with mesos for scheduling. Once mesos 
integration finished, we can treat mesos+hyper as another kind of bay.

Thanks

2015-07-19 4:15 GMT+08:00 Hongbin Lu 
hongbin...@huawei.commailto:hongbin...@huawei.com:
Peng,

Several questions Here. You mentioned that HyperStack is a single big “bay”. 
Then, who is doing the multi-host scheduling, Hyper or something else? Were you 
suggesting to integrate Hyper with Magnum directly? Or you were suggesting to 
integrate Hyper with Magnum indirectly (i.e. through k8s, mesos and/or Nova)?

Best regards,
Hongbin

From: Peng Zhao [mailto:p...@hyper.shmailto:p...@hyper.sh]
Sent: July-17-15 12:34 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [magnum][bp] Power Magnum to run on metal with 
Hyper

Hi, Adrian, Jay and all,

There could be a much longer version of this, but let me try to explain in a 
minimalist way.

Bay currently has two modes: VM-based, BM-based. In both cases, Bay helps to 
isolate different tenants' containers. In other words, bay is single-tenancy. 
For BM-based bay, the single tenancy is a worthy tradeoff, given the 
performance merits of LXC vs VM. However, for a VM-based bay, there is no 
performance gain, but single tenancy seems a must, due to the lack of isolation 
in container. Hyper, as a hypervisor-based substitute for container, brings the 
much-needed isolation, and therefore enables multi-tenancy. In HyperStack, we 
don't really need Ironic to provision multiple Hyper bays. On the other hand,  
the entire HyperStack cluster is a single big bay. Pretty similar to how Nova 
works.

Also, HyperStack is able to leverage Cinder, Neutron for SDS/SDN functionality. 
So when someone submits a Docker Compose app, HyperStack 

Re: [openstack-dev] 7/17 state of the gate (you know, fires)

2015-07-19 Thread Thierry Carrez
Matt Riedemann wrote:
 
 I think we're good now, let the rechecks begin!

Thanks so much for driving this, Matt. Don't burn out working on
weekends on it, though :)

(yes I realize the irony of posting that on a Sunday)

-- 
Thierry Carrez (ttx)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc][all] Tags, explain like I am five?

2015-07-19 Thread Jay Pipes

On 07/15/2015 03:27 PM, John Griffith wrote:

It's relatively limited right now I think, and part of the reason for
that is we've tried to ensure that the information that we put on tags
is subjective


I think you mean objective here, and throughout this, not 
subjective. We try as much as possible for the tag definitions to have 
a clear list of requirements -- preferably driven by some data source 
like a script that queries stackalytics -- that can be used in applying 
the tag to a project.


, and at the same time doesn't give any false impression

that something is good or bad.  We just wanted to have tags to
easily convey some general information about a project to help people
gain at least a little insight into a project, how it's managed, what
sort of community is contributing to it etc.

As we move into things like the compute starter kit it gets a bit less
subjective, but not really too much.


It got a bit *more* subjective, not less subjective :) Or perhaps you 
meant prescriptive?


Best,
-jay

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Exposing provider networks in network_data.json

2015-07-19 Thread Jim Rollenhagen
On Sat, Jul 18, 2015 at 08:39:23PM +0800, Sam Stoelinga wrote:
 +1 on Kevin Benton's comments.
 Ironic should have integration with switches where the switches are SDN
 compatible. The individual bare metal node should not care which vlan,
 vxlan or other translation is programmed at the switch. The individual bare
 metal nodes just knows I have 2 nics and and these are on Neutron network
 x. The SDN controller is responsible for making sure the baremetal node
 only has access to Neutron Network x through changing the switch
 configuration dynamically.
 
 Making an individual baremetal node have access to several vlans, and letting
 the baremetal node configure a vlan tag itself, is a big security risk and
 should not be supported, unless an operator specifically configures a
 baremetal node to be a vlan trunk.

Right, trunking is the main use case here, and I think we should support
it. :)

To be clear, I'm not advocating that we always send the VLAN to the
instance. I agree that this patch isn't the right way to do it. But I
do think we need to consider that there are cases where we do need to
expose the VLAN, and we should support this.

For a little background, this patch came from code that is running in
production today, where we're trunking two VLANs down to the host -- it
isn't a theoretical use case.
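
To make that concrete, the kind of thing being exposed is roughly a vlan link
entry in network_data.json, sketched here as a Python dict with made-up values
(field names are my reading of the format, not taken from the patch):

# Illustrative only, not the actual patch: a trunked VLAN would surface to the
# instance as a "vlan" link entry referencing the underlying bond/NIC link.
vlan_link = {
    "id": "vlan101",
    "type": "vlan",
    "vlan_id": 101,
    "vlan_link": "bond0",
    "vlan_mac_address": "fa:16:3e:00:00:01",
}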

// jim

 
 Sam Stoelinga
 
 On Sat, Jul 18, 2015 at 5:10 AM, Kevin Benton blak...@gmail.com wrote:
 
   which requires VLAN info to be pushed to the host. I keep hearing bare
  metal will never need to know about VLANs so I want to quash that ASAP.
 
  That's leaking implementation details though if the bare metal host only
  needs to be on one network. It also creates a security risk if the bare
  metal node is untrusted.
 
  If the tagging is to make it so it can access multiple networks, then that
  makes sense for now but it should ultimately be replaced by the vlan trunk
  ports extension being worked on this cycle that decouples the underlying
  network transport from what gets tagged to the VM/bare metal.
  On Jul 17, 2015 11:47 AM, Jim Rollenhagen j...@jimrollenhagen.com
  wrote:
 
  On Fri, Jul 17, 2015 at 10:56:36AM -0600, Kevin Benton wrote:
   Check out my comments on the review. Only Neutron knows whether or not
  an
   instance needs to do manual tagging based on the plugin/driver loaded.
  
   For example, Ironic/bare metal ports can be bound by neutron with a
  correct
   driver so they shouldn't get the VLAN information at the instance level
  in
   those cases. Nova has no way to know whether Neutron is configured this
  way
   so Neutron should have an explicit response in the port binding
  information
   indicating that an instance needs to tag.
 
  Agree. However, I just want to point out that there are neutron drivers
  that exist today[0] that support bonded NICs with trunked VLANs, which
  requires VLAN info to be pushed to the host. I keep hearing bare metal
  will never need to know about VLANs so I want to quash that ASAP.
 
  As far as Neutron sending the flag to decide whether the instance should
  tag packets, +1, I think that should work.
 
  // jim
  
   On Fri, Jul 17, 2015 at 9:51 AM, Jim Rollenhagen 
  j...@jimrollenhagen.com
   wrote:
  
On Fri, Jul 17, 2015 at 01:06:46PM +0100, John Garbutt wrote:
 On 17 July 2015 at 11:23, Sean Dague s...@dague.net wrote:
  On 07/16/2015 06:06 PM, Sean M. Collins wrote:
  On Thu, Jul 16, 2015 at 01:23:29PM PDT, Mathieu Gagné wrote:
  So it looks like there is a missing part in this feature. There
should
  be a way to hide this information if the instance does not
  require
to
  configure vlan interfaces to make network functional.
 
  I just commented on the review, but the provider network API
  extension
  is admin only, most likely for the reasons that I think someone
  has
   already mentioned, that it exposes details of the physical
  network
  layout that should not be exposed to tenants.
 
  So, clearly, under some circumstances the network operator wants
  to
  expose this information, because there was the request for that
feature.
  The question in my mind is what circumstances are those, and what
  additional information needs to be provided here.
 
  There is always a balance between the private cloud case which
  wants to
  enable more self service from users (and where the users are
  often also
  the operators), and the public cloud case where the users are
  outsiders
  and we want to hide as much as possible from them.
 
  For instance, would an additional attribute on a provider network
  that
   says "this is cool to tell people about" be an acceptable
  approach? Is
  there some other creative way to tell our infrastructure that
  these
  artifacts are meant to be exposed in this installation?
 
  Just kicking around ideas, because I know a pile of gate hardware
  for
  

Re: [openstack-dev] Barbican : Unable to store the secret when Barbican was Integrated with SafeNet HSM

2015-07-19 Thread Asha Seshagiri
Hi John,

Thanks for pointing me to the right script.
I appreciate your help.

I tried running the script with the following command :

[root@HSM-Client bin]# python pkcs11-key-generation --library-path
{/usr/lib/libCryptoki2_64.so} --passphrase {test123} --slot-id 1  mkek
--length 32 --label 'an_mkek'
Traceback (most recent call last):
  File "pkcs11-key-generation", line 120, in <module>
    main()
  File "pkcs11-key-generation", line 115, in main
    kg = KeyGenerator()
  File "pkcs11-key-generation", line 38, in __init__
    ffi=ffi
  File "/root/barbican/barbican/plugin/crypto/pkcs11.py", line 315, in __init__
    self.lib = self.ffi.dlopen(library_path)
  File "/usr/lib64/python2.7/site-packages/cffi/api.py", line 127, in dlopen
    lib, function_cache = _make_ffi_library(self, name, flags)
  File "/usr/lib64/python2.7/site-packages/cffi/api.py", line 572, in _make_ffi_library
    backendlib = _load_backend_lib(backend, libname, flags)
  File "/usr/lib64/python2.7/site-packages/cffi/api.py", line 561, in _load_backend_lib
    return backend.load_library(name, flags)
*OSError: cannot load library {/usr/lib/libCryptoki2_64.so}:
{/usr/lib/libCryptoki2_64.so}: cannot open shared object file: No such file
or directory*

*Unable to run the script since the library libCryptoki2_64.so cannot be
opened.*

Tried the following solution:

   - vi /etc/ld.so.conf
   - Added both paths of libCryptoki2_64.so (found with the command
     find / -name libCryptoki2_64.so) to the /etc/ld.so.conf file:
      - /usr/safenet/lunaclient/lib/libCryptoki2_64.so
      - /usr/lib/libCryptoki2_64.so
   - sudo ldconfig
   - ldconfig -p

But the above solution failed and I am getting the same error.
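
For reference, the failing step can be reproduced outside the script with a
couple of lines of Python (the library path below is just my local assumption):

# Reproduces only the ffi.dlopen() call that pkcs11.py performs. Note that
# braces around the path, as in the command above, make dlopen look for a
# literal '{...}' file name.
from cffi import FFI

ffi = FFI()
lib = ffi.dlopen('/usr/lib/libCryptoki2_64.so')  # raises OSError if it cannot be loaded
print(lib)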

Any help would be highly appreciated.
Thanks in advance!

Thanks and Regards,
Asha Seshagiri

On Sat, Jul 18, 2015 at 11:12 PM, John Vrbanac john.vrba...@rackspace.com
wrote:

  Asha,

 It looks like you don't have your mkek label correctly configured. Make
 sure that the mkek_label and hmac_label values in your config correctly
 reflect the keys that you've generated on your HSM.

 The plugin will cache the key handle to the mkek and hmac when the plugin
 starts, so if it cannot find them, it'll fail to load the plugin altogether.


  If you need help generating your mkek and hmac, refer to
 http://docs.openstack.org/developer/barbican/api/quickstart/pkcs11keygeneration.html
 for instructions on how to create them using a script.


  As far as who uses HSMs, I know we (Rackspace) use them with Barbican.


 John Vrbanac
  --
 *From:* Asha Seshagiri asha.seshag...@gmail.com
 *Sent:* Saturday, July 18, 2015 8:47 PM
 *To:* openstack-dev
 *Cc:* Reller, Nathan S.
 *Subject:* [openstack-dev] Barbican : Unable to store the secret when
 Barbican was Integrated with SafeNet HSM

  Hi All ,

  I have configured Barbican to integrate with SafeNet  HSM.
 Installed safenet client libraries , registered the barbican machine to
 point to HSM server  and also assigned HSM partition.

  The following were the changes done in barbican.conf file


  # = Secret Store Plugin ===
 [secretstore]
 namespace = barbican.secretstore.plugin
 enabled_secretstore_plugins = store_crypto

  # = Crypto plugin ===
 [crypto]
 namespace = barbican.crypto.plugin
 enabled_crypto_plugins = p11_crypto

 [p11_crypto_plugin]
 # Path to vendor PKCS11 library
 library_path = '/usr/lib/libCryptoki2_64.so'
 # Password to login to PKCS11 session
 login = 'test123'
 # Label to identify master KEK in the HSM (must not be the same as HMAC label)
 mkek_label = 'an_mkek'
 # Length in bytes of master KEK
 mkek_length = 32
 # Label to identify HMAC key in the HSM (must not be the same as MKEK label)
 hmac_label = 'my_hmac_label'
 # HSM Slot id (Should correspond to a configured PKCS11 slot). Default: 1
 slot_id = 1

  Unable to store the secret when Barbican was integrated with HSM.

  [root@HSM-Client crypto]# curl -X POST -H 'content-type:application/json' \
      -H 'X-Project-Id:12345' \
      -d '{"payload": "my-secret-here", "payload_content_type": "text/plain"}' \
      http://localhost:9311/v1/secrets
 *{"code": 500, "description": "Secret creation failure seen - please
 contact site administrator.", "title": "Internal Server
 Error"}[root@HSM-Client crypto]#*


 Please find the logs below :

  2015-07-18 17:15:32.642 29838 ERROR barbican.plugin.util.utils
 [req-354affce-b3d6-41fd-b050-5e5c604004eb - 12345 - - -] Problem seen
 creating plugin: 'p11_crypto'
 2015-07-18 17:15:32.642 29838 ERROR barbican.plugin.util.utils Traceback
 (most recent call last):
 2015-07-18 17:15:32.642 29838 ERROR barbican.plugin.util.utils   File
 /root/barbican/barbican/plugin/util/utils.py, line 42, in
 instantiate_plugins
 2015-07-18 17:15:32.642 29838 ERROR barbican.plugin.util.utils
 plugin_instance = ext.plugin(*invoke_args, **invoke_kwargs)
 2015-07-18 17:15:32.642 29838 ERROR barbican.plugin.util.utils   File
 

Re: [openstack-dev] [openstack-infra] [CI] [tempest] Tempest tests failing with SSH timeout.

2015-07-19 Thread Asselin, Ramy
Just the export I mentioned:
export DEVSTACK_GATE_NEUTRON=1
The devstack-gate scripts will do the right thing when they see that set. You can 
see plenty of examples here [1].

Ramy

[1] 
http://git.openstack.org/cgit/openstack-infra/project-config/tree/jenkins/jobs/devstack-gate.yaml#n467

From: Abhishek Shrivastava [mailto:abhis...@cloudbyte.com]
Sent: Sunday, July 19, 2015 2:24 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [openstack-infra] [CI] [tempest] Tempest tests 
failing with SSH timeout.

Hi Ramy,

Thanks for the suggestion but since I am not including the neutron project, so 
downloading and including it will require any additional configuration in 
devstack-gate or not?

On Sat, Jul 18, 2015 at 11:41 PM, Asselin, Ramy 
ramy.asse...@hp.commailto:ramy.asse...@hp.com wrote:
We ran into this issue as well. I never found the root cause, but I found a 
work-around: Use neutron-networking instead of the default nova-networking.

If you’re using devstack-gate, it’s as  simple as:
export DEVSTACK_GATE_NEUTRON=1

Then run the job as usual.

Ramy

From: Abhishek Shrivastava 
[mailto:abhis...@cloudbyte.commailto:abhis...@cloudbyte.com]
Sent: Friday, July 17, 2015 9:15 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [openstack-infra] [CI] [tempest] Tempest tests failing 
with SSH timeout.

Hi Folks,

In my CI I see the following tempest tests failure for a past couple of days.

•
tempest.scenario.test_minimum_basic.TestMinimumBasicScenario.test_minimum_basic_scenario
 [361.274316s] ... FAILED

•
tempest.scenario.test_volume_boot_pattern.TestVolumeBootPattern.test_volume_boot_pattern
 [320.122458s] ... FAILED

•
tempest.scenario.test_volume_boot_pattern.TestVolumeBootPatternV2.test_volume_boot_pattern
 [317.399342s] ... FAILED

•
tempest.thirdparty.boto.test_ec2_instance_run.InstanceRunTest.test_compute_with_volumes
 [257.858272s] ... FAILED

The failure logs are always the same every time, i.e;



03:34:09 2015-07-17 03:21:13,256 9505 ERROR[tempest.scenario.manager] 
(TestVolumeBootPattern:test_volume_boot_pattern) Initializing SSH connection to 
172.24.5.1 failed. Error: Connection to the 172.24.5.1 via SSH timed out.

03:34:09 User: cirros, Password: None

03:34:09 2015-07-17 03:21:13.256 9505 ERROR tempest.scenario.manager 
Traceback (most recent call last):

03:34:09 2015-07-17 03:21:13.256 9505 ERROR tempest.scenario.manager   File 
tempest/scenario/manager.py, line 312, in get_remote_client

03:34:09 2015-07-17 03:21:13.256 9505 ERROR tempest.scenario.manager 
linux_client.validate_authentication()

03:34:09 2015-07-17 03:21:13.256 9505 ERROR tempest.scenario.manager   File 
tempest/common/utils/linux/remote_client.py, line 62, in 
validate_authentication

03:34:09 2015-07-17 03:21:13.256 9505 ERROR tempest.scenario.manager 
self.ssh_client.test_connection_auth()

03:34:09 2015-07-17 03:21:13.256 9505 ERROR tempest.scenario.manager   File 
/opt/stack/new/tempest/.tox/all/local/lib/python2.7/site-packages/tempest_lib/common/ssh.py,
 line 151, in test_connection_auth

03:34:09 2015-07-17 03:21:13.256 9505 ERROR tempest.scenario.manager 
connection = self._get_ssh_connection()

03:34:09 2015-07-17 03:21:13.256 9505 ERROR tempest.scenario.manager   File 
/opt/stack/new/tempest/.tox/all/local/lib/python2.7/site-packages/tempest_lib/common/ssh.py,
 line 87, in _get_ssh_connection

03:34:09 2015-07-17 03:21:13.256 9505 ERROR tempest.scenario.manager 
password=self.password)

03:34:09 2015-07-17 03:21:13.256 9505 ERROR tempest.scenario.manager 
SSHTimeout: Connection to the 172.24.5.1 via SSH timed out.

03:34:09 2015-07-17 03:21:13.256 9505 ERROR tempest.scenario.manager User: 
cirros, Password: None

03:34:09 2015-07-17 03:21:13.256 9505 ERROR tempest.scenario.manager

03:34:09 2015-07-17 03:21:14,377 9505 INFO 
[tempest_lib.common.rehttp://tempest_lib.common.re



Because of these every job is failing, so if someone can help me regarding this 
please do reply.

--
Thanks & Regards,
Abhishek
Cloudbyte Inc. http://www.cloudbyte.com




--
Thanks & Regards,
Abhishek
Cloudbyte Inc. http://www.cloudbyte.com

Re: [openstack-dev] [magnum][bp] Power Magnum to run on metal withHyper

2015-07-19 Thread Jay Lau
The nova guys propose moving Hyper to Magnum rather than Nova, as Hyper cannot
fit into the nova virt driver model well.

As Hyper is now integrating with Kubernetes, I think that the integration
point may be creating a Kubernetes Hyper bay with the Ironic driver.

Thanks

2015-07-20 10:00 GMT+08:00 Kai Qiang Wu wk...@cn.ibm.com:

 Hi Peng,

 As @Adrian pointed it out:

  *My first suggestion is to find a way to make a nova virt driver for Hyper,
 which could allow it to be used with all of our current Bay types in
 Magnum.*


 I remember you or other guys in your company proposed a bp about a nova
 virt driver for Hyper. What's the status of the bp now?
 Is it accepted by the nova project or cancelled?


 Thanks

 Best Wishes,

 
 Kai Qiang Wu (吴开强  Kennan)
 IBM China System and Technology Lab, Beijing

 E-mail: wk...@cn.ibm.com
 Tel: 86-10-82451647
 Address: Building 28(Ring Building), ZhongGuanCun Software Park,
 No.8 Dong Bei Wang West Road, Haidian District Beijing P.R.China
 100193

 
 Follow your heart. You are miracle!


 From: Adrian Otto adrian.o...@rackspace.com
 To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 Date: 07/19/2015 11:18 PM
 Subject: Re: [openstack-dev] [magnum][bp] Power Magnum to run on metal
 withHyper
 --



 Peng,

 You are not the first to think this way, and it's one of the reasons we
 did not integrate Containers with OpenStack in a meaningful way a full year
 earlier. Please pay attention closely.

 1) OpenStack's key influences care about two personas: 1.1) Cloud
 Operators 1.2) Cloud Consumers. If you only think in terms of 1.2, then
 your idea will get killed. Operators matter.

  2) Cloud Operators need a consistent way to bill for the IaaS services they
 provide. Nova emits all of the RPC messages needed to do this. Having a
 second nova that does this slightly differently is a really annoying
 problem that will make Operators hate the software. It's better to use
 nova, have things work consistently, and plug in virt drivers to it.

 3) Creation of a host is only part of the problem. That's the easy part.
 Nova also does a bunch of other things too. For example, say you want to
 live migrate a guest from one host to another. There is already
 functionality in Nova for doing that.

 4) Resources need to be capacity managed. We call this scheduling. Nova
 has a pluggable scheduler to help with the placement of guests on hosts.
 Magnum will not.

 5) Hosts in a cloud need to integrate with a number of other services,
 such as an image service, messaging, networking, storage, etc. If you only
 think in terms of host creation, and do something without nova, then you
 need to re-integrate with all of these things.

 Now, I probably left out examples of lots of other things that Nova does.
  What I have mentioned is enough to make my point that there are a lot of
 things that Magnum is intentionally NOT doing that we expect to get from
 Nova, and I will block all code that gratuitously duplicates functionality
 that I believe belongs in Nova. I promised our community I would not
 duplicate existing functionality without a very good reason, and I will
 keep that promise.

 Let's find a good way to fit Hyper with OpenStack in a way that best
 leverages what exists today, and is least likely to be rejected. Please
 note that the proposal needs to be changed from where it is today to
 achieve this fit.

  My first suggestion is to find a way to make a nova virt driver for Hyper,
 which could allow it to be used with all of our current Bay types in Magnum.

 Thanks,

 Adrian


  Original message 
 From: Peng Zhao p...@hyper.sh
 Date: 07/19/2015 5:36 AM (GMT-08:00)
 To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 Subject: Re: [openstack-dev] [magnum][bp] Power Magnum to run on metal
 withHyper

 Thanks Jay.

 Hongbin, yes, it will be a scheduling system, either swarm, k8s or mesos.
 I just think bay isn't a must in this case, and we don't need nova to
 provision BM hosts, which makes things more complicated imo.

 Peng


 -- Original --
 *From: * Jay Laujay.lau@gmail.com;
 *Date: * Sun, Jul 19, 2015 10:36 AM
 *To: * OpenStack Development Mailing List (not for usage questions)
 openstack-dev@lists.openstack.org;
 *Subject: * Re: [openstack-dev] [magnum][bp] Power Magnum to run on metal
 withHyper

 Hong Bin,

 I have some online discussion with Peng, seems hyper is now integrating
 

[openstack-dev] periodic-stable job reports (was Re: [Openstack-stable-maint] Stable check of $THINGS failed)

2015-07-19 Thread James Polley
As exciting as these emails are, I find them a bit unexciting. They don't
give me any context to let me know whether this was a one-off or if
something has been broken for a while.

Long term, having a dashboard (see https://review.openstack.org/#/c/192253/
for the spec) will help us get better visibility into what these jobs are
doing.

But purely to scratch my own itch to get a better handle on what's
happening right now, I grabbed some code Derek H has been using for a
dashboard for the TripleO jobs and bashed at it a bit to make it a bit more
generic. I've put the reports up at http://bruce.jamezpolley.com/reports/

This is *not* a first draft of the dashboard described in the spec, nor a
prototype; it's purely something I pulled together to help me understand
what data we can already pull out of Jenkins to help show the history of
our jobs. But since I've got it online, I figured I may as well share it in
case it's useful to anyone else as well.
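
The data behind it is just what we can already pull out of Jenkins, presumably
via something like the standard Jenkins JSON API; a sketch of that kind of
query (the Jenkins URL and job name are placeholders, not the real infra
hosts) would be:

# Sketch only: pull recent build results for one job via the Jenkins JSON API.
import requests

JENKINS_URL = 'https://jenkins.example.org'
JOB = 'periodic-ironic-python27-juno'

resp = requests.get(
    '%s/job/%s/api/json' % (JENKINS_URL, JOB),
    params={'tree': 'builds[number,result,timestamp]'})
for build in resp.json().get('builds', []):
    print(build['number'], build['result'])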

If you think having some kind of dashboard would be more useful than these
emails, please go check out that spec so that we can start to make progress
towards something useful.

On Sun, Jul 19, 2015 at 4:22 PM A mailing list for the OpenStack Stable
Branch test reports. openstack-stable-ma...@lists.openstack.org wrote:

 Build failed.

 - periodic-ironic-docs-juno
 http://logs.openstack.org/periodic-stable/periodic-ironic-docs-juno/943be60/
 : FAILURE in 5m 51s
 - periodic-ironic-python26-juno
 http://logs.openstack.org/periodic-stable/periodic-ironic-python26-juno/0d25bc4/
 : FAILURE in 6m 36s
 - periodic-ironic-python27-juno
 http://logs.openstack.org/periodic-stable/periodic-ironic-python27-juno/b8bcec1/
 : FAILURE in 6m 51s
 - periodic-ironic-docs-kilo
 http://logs.openstack.org/periodic-stable/periodic-ironic-docs-kilo/47ee329/
 : SUCCESS in 6m 06s
 - periodic-ironic-python27-kilo
 http://logs.openstack.org/periodic-stable/periodic-ironic-python27-kilo/c426b36/
 : SUCCESS in 8m 17s




[openstack-dev] [nova-scheduler] Scheduler sub-group IRC meeting - cancel this week

2015-07-19 Thread Dugger, Donald D
As discussed at the last meeting we'll cancel this week (blame it on travel for 
the mid-cycle meetup).

--
Don Dugger
Censeo Toto nos in Kansa esse decisse. - D. Gale
Ph: 303/443-3786



Re: [openstack-dev] [magnum][bp] Power Magnum to run onmetal withHyper

2015-07-19 Thread Peng Zhao
Adrian,


Let's say someone creates a Hyper bay. The bay will be something like 
BM+Hyper+Cinder+Neutron+k8s/mesos/swarm. This is exactly a mini HyperStack. 
What nova does in this scenario is to provision the Hyper+BM hosts. Things like 
LiveMigration, Multi-tenancy, Billing, VPC, Volume, etc., are handled by 
HyperStack, not nova. Therefore, a second core besides nova is inevitable. 
Speaking of duplication, HyperStack leverages Cinder and Neutron, which 
protects ROI.


Looking at the overall puzzle, one of the biggest missing pieces is a solution 
of the native CaaS. And HyperStack wants to fill that gap. Hyper bay is a valid 
 case, but more for someone who wants to provide CaaS within their IaaS (nova) 
platform.


We plan to present a working beta of HyperStack at the Tokyo summit. The next step 
is to integrate HyperStack with bay for more advanced deployment.


Best,
Peng

 
 
-- Original --
From:  Adrian Ottoadrian.o...@rackspace.com;
Date:  Sun, Jul 19, 2015 11:11 PM
To:  OpenStack Development Mailing List (not for usage 
questions)openstack-dev@lists.openstack.org; 

Subject:  Re: [openstack-dev] [magnum][bp] Power Magnum to run onmetal  
withHyper

 
 Peng,
 
 
 You are not the first to think this way, and it's one of the reasons we did 
not integrate Containers with OpenStack in a meaningful way a full year 
earlier. Please pay attention closely.
 
 
 1) OpenStack's key influences care about two personas: 1.1) Cloud Operators 
1.2) Cloud Consumers. If you only think in terms of 1.2, then your idea will 
get killed. Operators matter.
 
 
 2) Cloud Operators need a consistent way to bill for the IaaS services they 
provide. Nova emits all of the RPC messages needed to do this. Having a second 
nova that does this slightly differently is a really annoying problem that will 
make Operators hate  the software. It's better to use nova, have things work 
consistently, and plug in virt drivers to it.
 
 
 3) Creation of a host is only part of the problem. That's the easy part. Nova 
also does a bunch of other things too. For example, say you want to live 
migrate a guest from one host to another. There is already functionality in 
Nova for doing that.
 
 
 4) Resources need to be capacity managed. We call this scheduling. Nova has a 
pluggable scheduler to help with the placement of guests on hosts. Magnum will 
not.
 
 
 5) Hosts in a cloud need to integrate with a number of other services, such as 
an image service, messaging, networking, storage, etc. If you only think in 
terms of host creation, and do something without nova, then you need to 
re-integrate with all of  these things.
 
 
 Now, I probably left out examples of lots of other things that Nova does. What 
I have mentioned is enough to make my point that there are a lot of things that 
Magnum is intentionally NOT doing that we expect to get from Nova, and I will 
block all code  that gratuitously duplicates functionality that I believe 
belongs in Nova. I promised our community I would not duplicate existing 
functionality without a very good reason, and I will keep that promise.
 
 
 Let's find a good way to fit Hyper with OpenStack in a way that best leverages 
what exists today, and is least likely to be rejected. Please note that the 
proposal needs to be changed from where it is today to achieve this fit.
 
 
  My first suggestion is to find a way to make a nova virt driver for Hyper, 
which could allow it to be used with all of our current Bay types in Magnum.
 
 
 Thanks,
 
 
 Adrian
 
 
  Original message 
 From: Peng Zhao p...@hyper.sh 
 Date: 07/19/2015 5:36 AM (GMT-08:00) 
 To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org 
 Subject: Re: [openstack-dev] [magnum][bp] Power Magnum to run on metal 
withHyper 
 
 Thanks Jay.
 
 
 Hongbin, yes, it will be a scheduling system, either swarm, k8s or mesos. I 
just think bay isn't a must in this case, and we don't need nova to provision 
BM hosts, which makes things more complicated imo.
 
 
 Peng
   
 
 
  -- Original --
  From:  Jay Laujay.lau@gmail.com;
 Date:  Sun, Jul 19, 2015 10:36 AM
 To:  OpenStack Development Mailing List (not for usage 
questions)openstack-dev@lists.openstack.org; 
 
 Subject:  Re: [openstack-dev] [magnum][bp] Power Magnum to run on metal 
withHyper
 
  
   Hong Bin,
 
 
 I have some online discussion with Peng, seems hyper is now integrating with 
Kubernetes and also have plan integrate with mesos for scheduling. Once mesos 
integration finished, we can treat mesos+hyper as another kind of bay.
 
 
 Thanks
 
 
 2015-07-19 4:15 GMT+08:00 Hongbin Lu hongbin...@huawei.com:

Peng,
 
 
 
Several questions Here. You mentioned that HyperStack is a single big “bay”. 
Then, who is doing the multi-host scheduling, Hyper or something else? Were you 
suggesting to integrate Hyper with  Magnum directly? Or you were suggesting to 
integrate Hyper 

Re: [openstack-dev] [CI] How to set a proxy for zuul.

2015-07-19 Thread Tang Chen

Hi Asselin, Abhishek,

Thanks for the reply. :)


On 07/19/2015 02:41 AM, Asselin, Ramy wrote:


HI Abhi  Tang,

Sorry I missed this thread. Let me know if you've resolved your issues.

My repo is undergoing migrations to reuse components in 
openstack-infra/puppet-openstackci.


For single-use-nodes, the file you need has been removed here [1]: But 
I see now that it is still needed, or a different function is needed 
based on this version used by infra: [2]. I will explore a solution.


A couple other notes, please use ci-sandbox  [3] instead of sandbox.



OK.

Zuul use behind a proxy: seems you got past this? Could you share your 
solution?




The root cause is that zuul uses a library named paramiko to create the
connection with low-level socket APIs, and paramiko doesn't provide proxy
functionality.

I tried tools like proxychains to redirect the zuul connection through my
proxy, but that doesn't work: if I run the zuul service under proxychains, it
doesn't output anything to the log file and the service dies soon after.

I think there are two solutions:
1. Add proxy functionality to paramiko.
2. Add proxy functionality to zuul, which would probably mean zuul creating
the connection itself rather than through paramiko.

Solution 1 is much simpler, so for now, I just modified the source code
of paramiko.


I'm using the python-socksipy package, and modified
/usr/local/lib/python2.7/dist-packages/paramiko/client.py like this:

diff --git a/client.py b/client.py
index 15ff696..d7225ed 100644
--- a/client.py
+++ b/client.py
@@ -24,6 +24,7 @@ from binascii import hexlify
 import getpass
 import os
 import socket
+import socks
 import warnings
 import pdb

@@ -235,6 +236,7 @@ class SSHClient (ClosingContextManager):
         ``gss_deleg_creds`` and ``gss_host`` arguments.
         """
+
         if not sock:
             for (family, socktype, proto, canonname, sockaddr) in socket.getaddrinfo(
                     hostname, port, socket.AF_UNSPEC, socket.SOCK_STREAM):
                 if socktype == socket.SOCK_STREAM:
@@ -251,6 +253,13 @@ class SSHClient (ClosingContextManager):
                 except:
                     pass
             retry_on_signal(lambda: sock.connect(addr))
+
+        if not sock:
+            sock = socks.socksocket()
+            sock.setproxy(socks.PROXY_TYPE_SOCKS5, MY_PROXY_IP, MY_PROXY_PORT, username='XXX', password='XXX')
+            # This is review.openstack.org
+            addr = ('104.130.159.134', 29418)
+            retry_on_signal(lambda: sock.connect(addr))

         t = self._transport = Transport(sock, gss_kex=gss_kex, gss_deleg_creds=gss_deleg_creds)
         t.use_compression(compress=compress)


Of course, this is just a draft. It is only for my case, not for all.
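
An alternative I have not tried, for callers that can pass a socket in, would
be to build the SOCKS socket outside paramiko and hand it to
SSHClient.connect() through its sock argument; a rough, untested sketch (proxy
address, username and key path are placeholders) looks like this:

# Untested sketch: open the SOCKS5 tunnel ourselves and give paramiko the
# resulting socket, instead of patching client.py. Uses python-socksipy.
import paramiko
import socks

PROXY_HOST = 'proxy.example.com'   # placeholder
PROXY_PORT = 1080                  # placeholder

proxied = socks.socksocket()
proxied.setproxy(socks.PROXY_TYPE_SOCKS5, PROXY_HOST, PROXY_PORT)
proxied.connect(('review.openstack.org', 29418))

client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect('review.openstack.org', port=29418,
               username='my-ci-user',            # placeholder
               key_filename='/path/to/ci_key',   # placeholder
               sock=proxied)

Zuul itself would still need a change to pass such a socket in, but at least
paramiko stays stock.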


BTW, I'm now working on the Fujitsu CI system, and really want to join the 
development of openstack-infra.
I think the proxy functionality is necessary for many companies, so if 
you are planning to add proxy support, I think I can help.

Thanks. :)

Also, feel free to join the 3rd party CI IRC meetings on freenode [4]. 
It's a great place to ask questions and meet others setting up or 
maintaining these systems.


Thanks,

Ramy

IRC: asselin

[1] 
https://github.com/rasselin/os-ext-testing/commit/dafe822be7813522a6c7361993169da20b37ffb7


[2] 
https://github.com/openstack-infra/project-config/blob/master/zuul/openstack_functions.py


[3] http://git.openstack.org/cgit/openstack-dev/ci-sandbox/

[4] http://eavesdrop.openstack.org/#Third_Party_Meeting

*From:*Abhishek Shrivastava [mailto:abhis...@cloudbyte.com]
*Sent:* Monday, July 13, 2015 11:51 PM
*To:* OpenStack Development Mailing List (not for usage questions)
*Subject:* Re: [openstack-dev] [CI] How to set a proxy for zuul.

Also if you want to change it you will need to talk with Asselin Ramy 
who is the owner of the repo you followed.


On Tue, Jul 14, 2015 at 12:18 PM, Abhishek Shrivastava 
abhis...@cloudbyte.com mailto:abhis...@cloudbyte.com wrote:


Basically it is not required, and if you look in the
/etc/jenkins_jobs/config folder you will find a
dsvm-cinder-tempest.yaml which is the one to be used, not
examples.yaml. So it's not an issue.

On Tue, Jul 14, 2015 at 12:07 PM, Tang Chen
tangc...@cn.fujitsu.com mailto:tangc...@cn.fujitsu.com wrote:

On 07/14/2015 01:46 PM, Abhishek Shrivastava wrote:

Instead of it use reusable_node option.


Thanks. Problem resolved. :)

BTW, single_use_node is written in layout.yaml by default.
If it doesn't exist anymore, do we need a patch to fix it?

For someone who uses CI for the first time, it is really a
problem.

And also, if I want to post a patch for zuul, where should I
post the patch?

Thanks.




On Tue, Jul 14, 2015 at 9:12 AM, Tang Chen
tangc...@cn.fujitsu.com mailto:tangc...@cn.fujitsu.com
wrote:

Hi Abhishek, All,

  

Re: [openstack-dev] [magnum][bp] Power Magnum to runon metal withHyper

2015-07-19 Thread Peng Zhao
It looks like the Nova team has no plan to accept either the nova-docker driver or 
nova-hyper. The focus of Nova is server-like instances, not app-centric 
containers. That is fine. It's best to let Nova be Nova, and build something else 
for containers. After all: different use cases, different needs, different 
solutions.
 
Peng


-- Original --
From:  Kai Qiang Wuwk...@cn.ibm.com;
Date:  Mon, Jul 20, 2015 10:00 AM
To:  OpenStack Development Mailing List (not for usage 
questions)openstack-dev@lists.openstack.org; 

Subject:  Re: [openstack-dev] [magnum][bp] Power Magnum to runonmetal   
withHyper

 
 
Hi Peng,
 
 As @Adrian pointed it out:
 
 My first suggestion is to find a way to make a nova virt driver for Hyper, 
which could allow it to be used with all of our current Bay types in Magnum. 
 
 
 I remembered you or other guys in your company proposed one bp about nova virt 
driver for Hyper. What's the status of the bp now?
 Is it accepted by nova projects or cancelled ?
 
 
 Thanks
 
 Best Wishes,
 

 Kai Qiang Wu (吴开强  Kennan)
 IBM China System and Technology Lab, Beijing
 
 E-mail: wk...@cn.ibm.com
 Tel: 86-10-82451647
 Address: Building 28(Ring Building), ZhongGuanCun Software Park,  
  No.8 Dong Bei Wang West Road, Haidian District Beijing P.R.China 
100193
 

 Follow your heart. You are miracle! 
 
 
 From:  Adrian Otto adrian.o...@rackspace.com
 To:OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
 Date:  07/19/2015 11:18 PM
 Subject:   Re: [openstack-dev] [magnum][bp] Power Magnum to run on metal   
withHyper
 


 
 
 Peng,
 
 You are not the first to think this way, and it's one of the reasons we did 
not integrate Containers with OpenStack in a meaningful way a full year 
earlier. Please pay attention closely.
 
 1) OpenStack's key influences care about two personas: 1.1) Cloud Operators 
1.2) Cloud Consumers. If you only think in terms of 1.2, then your idea will 
get killed. Operators matter.
 
 2) Cloud Operators need a consistent way to bill for the IaaS services they 
provide. Nova emits all of the RPC messages needed to do this. Having a second 
nova that does this slightly differently is a really annoying problem that will 
make Operators hate the software. It's better to use nova, have things work 
consistently, and plug in virt drivers to it.
 
 3) Creation of a host is only part of the problem. That's the easy part. Nova 
also does a bunch of other things too. For example, say you want to live 
migrate a guest from one host to another. There is already functionality in 
Nova for doing that.
 
 4) Resources need to be capacity managed. We call this scheduling. Nova has a 
pluggable scheduler to help with the placement of guests on hosts. Magnum will 
not.
 
 5) Hosts in a cloud need to integrate with a number of other services, such as 
an image service, messaging, networking, storage, etc. If you only think in 
terms of host creation, and do something without nova, then you need to 
re-integrate with all of these things.
 
 Now, I probably left out examples of lots of other things that Nova does. What 
I have mentioned is enough to make my point that there are a lot of things that 
Magnum is intentionally NOT doing that we expect to get from Nova, and I will 
block all code that gratuitously duplicates functionality that I believe 
belongs in Nova. I promised our community I would not duplicate existing 
functionality without a very good reason, and I will keep that promise.
 
 Let's find a good way to fit Hyper with OpenStack in a way that best leverages 
what exists today, and is least likely to be rejected. Please note that the 
proposal needs to be changed from where it is today to achieve this fit.
 
 My first suggestion is to find a way to make a nova virt driver for Hyper, 
which could allow it to be used with all of our current Bay types in Magnum.
 
 Thanks,
 
 Adrian
 
 
  Original message 
 From: Peng Zhao p...@hyper.sh 
 Date: 07/19/2015 5:36 AM (GMT-08:00) 
 To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org 
 Subject: Re: [openstack-dev] [magnum][bp] Power Magnum to run on metal 
withHyper 
 
 Thanks Jay.
 
 Hongbin, yes, it will be a scheduling system, either swarm, k8s or mesos. I 
just think bay isn't a must in this case, and we don't need nova to provision 
BM hosts, which makes things more complicated imo.
 
 Peng
  
 
 -- Original --
 From:  Jay Laujay.lau@gmail.com;
 Date:  Sun, Jul 19, 2015 10:36 AM
 To:  OpenStack Development Mailing List (not for usage 

[openstack-dev] [Neutron] HELP CONFIRM OR DISCUSS:create a port when network contain ipv4 subnets and ipv6 subnets, allocate ipv6 address to the port.

2015-07-19 Thread zhaobo
Hi ,
Could anyone please check the bug below?
https://bugs.launchpad.net/neutron/+bug/1467791


The bug description:
The created network contains one ipv4 subnet and an ipv6 subnet which has
slaac or stateless mode turned on.
When I create a port with a command like:
neutron port-create --fixed-ip subnet_id=$[ipv4_subnet_id] $[network_id/name]
the specified fixed IP is on the ipv4 subnet, but the returned port also
contains an address from the ipv6 subnet.




If the user just wants a port with ipv4, why does the returned port have an
ipv6 address allocated?
I know this is a designed behavior, per
http://specs.openstack.org/openstack/neutron-specs/specs/kilo/multiple-ipv6-prefixes.html#proposed-change
but we are still confused by this operation.
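
The same behavior can be reproduced through the API; a minimal sketch with
python-neutronclient (credentials and UUIDs below are placeholders) would be:

# Request a fixed IP on the IPv4 subnet only, then inspect fixed_ips on the
# returned port.
from neutronclient.v2_0 import client

neutron = client.Client(username='admin', password='secret',
                        tenant_name='demo',
                        auth_url='http://controller:5000/v2.0')

port = neutron.create_port({'port': {
    'network_id': 'NETWORK_UUID',
    'fixed_ips': [{'subnet_id': 'IPV4_SUBNET_UUID'}],
}})
print(port['port']['fixed_ips'])  # unexpectedly also lists an IPv6 address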


Thank you to anyone who can help confirm this issue; we would appreciate a
reply as soon as possible.


ZhaoBo



Re: [openstack-dev] [magnum][bp] Power Magnum to run on metalwithHyper

2015-07-19 Thread Peng Zhao
Hi, Jay, Adrian and Wu,


I have some problems with my mail server to reply Adrian's message. So let me 
write here.


Let's say someone creates a Hyper bay. The bay will be something like 
BM+Hyper+Cinder+Neutron+k8s/mesos/swarm. This is exactly a mini HyperStack. 
What nova does in this scenario is to provision the Hyper+BM hosts. Things like 
LiveMigration, Multi-tenancy, Billing, VPC, Volume, etc., are handled by 
HyperStack, not nova. Therefore, a second core besides nova is inevitable. 
Speaking of duplication, HyperStack leverages Cinder and Neutron, which 
protects ROI.


Looking at the overall puzzle, one of the biggest missing pieces is a solution 
of the native CaaS. And HyperStack wants to fill that gap. Hyper bay is a valid 
 case, but more for someone who wants to provide CaaS within their IaaS (nova) 
platform.


We plan to present a working beta of HyperStack at the Tokyo summit. The next step 
is to integrate HyperStack with bay for more advanced deployment.


Best,
Peng


 

 
 
-- Original --
From:  Jay Laujay.lau@gmail.com;
Date:  Mon, Jul 20, 2015 11:18 AM
To:  OpenStack Development Mailing List (not for usage 
questions)openstack-dev@lists.openstack.org; 

Subject:  Re: [openstack-dev] [magnum][bp] Power Magnum to run on metalwithHyper

 
The nova guys propose moving Hyper to Magnum rather than Nova, as Hyper cannot fit 
into the nova virt driver well.


As Hyper is now integrating with Kubernetes, I think that the integration point 
may be creating a kubernetes hyper bay with ironic driver.


Thanks


2015-07-20 10:00 GMT+08:00 Kai Qiang Wu wk...@cn.ibm.com:
 
Hi Peng,
 
 As @Adrian pointed it out:
 
 My first suggestion is to find a way to make a nova virt driver for Hyper, 
which could allow it to be used with all of our current Bay types in Magnum. 
 
 
 I remembered you or other guys in your company proposed one bp about nova virt 
driver for Hyper. What's the status of the bp now?
 Is it accepted by nova projects or cancelled ?
 
 
 Thanks
 
 Best Wishes,
 

 Kai Qiang Wu (吴开强  Kennan)
 IBM China System and Technology Lab, Beijing
 
 E-mail: wk...@cn.ibm.com
 Tel: 86-10-82451647
 Address: Building 28(Ring Building), ZhongGuanCun Software Park,  
  No.8 Dong Bei Wang West Road, Haidian District Beijing P.R.China 
100193
 

 Follow your heart. You are miracle! 
 
 
 From:  Adrian Otto adrian.o...@rackspace.com
 To:OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
 Date:  07/19/2015 11:18 PM
 Subject:   Re: [openstack-dev] [magnum][bp] Power Magnum to run on metal   
withHyper
 


 
 
 Peng,
 
 You are not the first to think this way, and it's one of the reasons we did 
not integrate Containers with OpenStack in a meaningful way a full year 
earlier. Please pay attention closely.
 
 1) OpenStack's key influences care about two personas: 1.1) Cloud Operators 
1.2) Cloud Consumers. If you only think in terms of 1.2, then your idea will 
get killed. Operators matter.
 
 2) Cloud Operators need a consistent way to bill for the IaaS services they 
provide. Nova emits all of the RPC messages needed to do this. Having a second 
nova that does this slightly differently is a really annoying problem that will 
make Operators hate the software. It's better to use nova, have things work 
consistently, and plug in virt drivers to it.
 
 3) Creation of a host is only part of the problem. That's the easy part. Nova 
also does a bunch of other things too. For example, say you want to live 
migrate a guest from one host to another. There is already functionality in 
Nova for doing that.
 
 4) Resources need to be capacity managed. We call this scheduling. Nova has a 
pluggable scheduler to help with the placement of guests on hosts. Magnum will 
not.
 
 5) Hosts in a cloud need to integrate with a number of other services, such as 
an image service, messaging, networking, storage, etc. If you only think in 
terms of host creation, and do something without nova, then you need to 
re-integrate with all of these things.
 
 Now, I probably left out examples of lots of other things that Nova does. What 
I have mentioned is enough to make my point that there are a lot of things that 
Magnum is intentionally NOT doing that we expect to get from Nova, and I will 
block all code that gratuitously duplicates functionality that I believe 
belongs in Nova. I promised our community I would not duplicate existing 
functionality without a very good reason, and I will keep that promise.
 
 Let's find a good way to fit Hyper with OpenStack in a way that best leverages 
what exists today, and is least likely to be rejected. Please note that 

Re: [openstack-dev] [magnum][bp] Power Magnum to run onmetal withHyper

2015-07-19 Thread Peng Zhao
Adrian,


Let's say someone creates a Hyper bay. The bay will be sth. like 
BM+Hyper+Cinder+Neutron+k8s/mesos/swarm. This is exactly a mini HyperStack. 
What nova does in this scenario is to provision the Hyper+BM hosts. Things like 
LiveMigration, Multi-tenancy, Billing, etc., are handled by HyperStack, not 
nova. Therefore, a second core besides nova is inevitable. 


Looking at the overall puzzle, one of the biggest missing pieces is a solution 
of the native CaaS. And HyperStack wants to fill that gap. Hyper bay is a valid 
case, but more for someone who wants to provides CaaS within their IaaS (nova) 
platform. 


We plan to present a working beta of HyperStack on Tokyo summit. The next step 
is to integrate HyperStack with bay for more advanced deployment.


Best,
Peng
 
-- Original --
From:  Adrian Ottoadrian.o...@rackspace.com;
Date:  Sun, Jul 19, 2015 11:11 PM
To:  OpenStack Development Mailing List (not for usage 
questions)openstack-dev@lists.openstack.org; 

Subject:  Re: [openstack-dev] [magnum][bp] Power Magnum to run onmetal  
withHyper

 
 Peng,
 
 
 You are not the first to think this way, and it's one of the reasons we did 
not integrate Containers with OpenStack in a meaningful way a full year 
earlier. Please pay attention closely.
 
 
 1) OpenStack's key influences care about two personas: 1.1) Cloud Operators 
1.2) Cloud Consumers. If you only think in terms of 1.2, then your idea will 
get killed. Operators matter.
 
 
 2) Cloud Operators need a consistent way to bill for the IaaS services they 
provide. Nova emits all of the RPC messages needed to do this. Having a second 
nova that does this slightly differently is a really annoying problem that will 
make Operators hate  the software. It's better to use nova, have things work 
consistently, and plug in virt drivers to it.
 
 
 3) Creation of a host is only part of the problem. That's the easy part. Nova 
also does a bunch of other things too. For example, say you want to live 
migrate a guest from one host to another. There is already functionality in 
Nova for doing that.
 
 
 4) Resources need to be capacity managed. We call this scheduling. Nova has a 
pluggable scheduler to help with the placement of guests on hosts. Magnum will 
not.
 
 
 5) Hosts in a cloud need to integrate with a number of other services, such as 
an image service, messaging, networking, storage, etc. If you only think in 
terms of host creation, and do something without nova, then you need to 
re-integrate with all of  these things.
 
 
 Now, I probably left out examples of lots of other things that Nova does. What 
I have mentioned is enough to make my point that there are a lot of things that 
Magnum is intentionally NOT doing that we expect to get from Nova, and I will 
block all code  that gratuitously duplicates functionality that I believe 
belongs in Nova. I promised our community I would not duplicate existing 
functionality without a very good reason, and I will keep that promise.
 
 
 Let's find a good way to fit Hyper with OpenStack in a way that best leverages 
what exists today, and is least likely to be rejected. Please note that the 
proposal needs to be changed from where it is today to achieve this fit.
 
 
 My first suggestion is to find a way to make a nova virt driver for Hyper, 
which could allow it to be used with all of our current Bay types in Magnum.
 
 
 Thanks,
 
 
 Adrian
 
 
  Original message 
 From: Peng Zhao p...@hyper.sh 
 Date: 07/19/2015 5:36 AM (GMT-08:00) 
 To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org 
 Subject: Re: [openstack-dev] [magnum][bp] Power Magnum to run on metal 
withHyper 
 
 Thanks Jay.
 
 
 Hongbin, yes, it will be a scheduling system, either swarm, k8s or mesos. I 
just think bay isn't a must in this case, and we don't need nova to provision 
BM hosts, which makes things more complicated imo.
 
 
 Peng
   
 
 
  -- Original --
  From:  Jay Laujay.lau@gmail.com;
 Date:  Sun, Jul 19, 2015 10:36 AM
 To:  OpenStack Development Mailing List (not for usage 
questions)openstack-dev@lists.openstack.org; 
 
 Subject:  Re: [openstack-dev] [magnum][bp] Power Magnum to run on metal 
withHyper
 
  
   Hong Bin,
 
 
 I have some online discussion with Peng, seems hyper is now integrating with 
Kubernetes and also have plan integrate with mesos for scheduling. Once mesos 
integration finished, we can treat mesos+hyper as another kind of bay.
 
 
 Thanks
 
 
 2015-07-19 4:15 GMT+08:00 Hongbin Lu hongbin...@huawei.com:

Peng,
 
 
 
Several questions Here. You mentioned that HyperStack is a single big “bay”. 
Then, who is doing the multi-host scheduling, Hyper or something else? Were you 
suggesting to integrate Hyper with  Magnum directly? Or you were suggesting to 
integrate Hyper with Magnum indirectly (i.e. through k8s, mesos and/or Nova)?
 
 
 
Best regards,
 
Hongbin
 
 
  

Re: [openstack-dev] [openstack-infra] [CI] [tempest] Tempest tests failing with SSH timeout.

2015-07-19 Thread Abhishek Shrivastava
This is OK for the services it will install, but how can we also restrict
the downloading of all the projects (i.e., download only the required
projects)?

On Sun, Jul 19, 2015 at 11:39 PM, Asselin, Ramy ramy.asse...@hp.com wrote:

  There are two ways that I know of to customize what services are run:

  1.  Setup your own feature matrix [1]

 2.  Override enabled services [2]



 Option 2 is probably what you’re looking for.



 [1]
 http://git.openstack.org/cgit/openstack-infra/devstack-gate/tree/devstack-vm-gate-wrap.sh#n152

 [2]
 http://git.openstack.org/cgit/openstack-infra/devstack-gate/tree/devstack-vm-gate.sh#n76



 *From:* Abhishek Shrivastava [mailto:abhis...@cloudbyte.com]
 *Sent:* Sunday, July 19, 2015 10:37 AM

 *To:* OpenStack Development Mailing List (not for usage questions)
 *Subject:* Re: [openstack-dev] [openstack-infra] [CI] [tempest] Tempest
 tests failing with SSH timeout.



 Hi Ramy,



 Thanks for the suggestion. One more thing I need to ask: as I have set up
 one more CI, is there any way we can decide that only the required
 projects get downloaded and installed during the devstack installation
 dynamically? I don't see anything in the devstack-gate scripts that would
 achieve this scenario.



 On Sun, Jul 19, 2015 at 8:38 PM, Asselin, Ramy ramy.asse...@hp.com
 wrote:

  Just the export I mentioned:

 export DEVSTACK_GATE_NEUTRON=1

 Devstack-gate scripts will do the right thing when it sees that set. You
 can see plenty of examples here [1].



 Ramy



 [1]
 http://git.openstack.org/cgit/openstack-infra/project-config/tree/jenkins/jobs/devstack-gate.yaml#n467



 *From:* Abhishek Shrivastava [mailto:abhis...@cloudbyte.com]
 *Sent:* Sunday, July 19, 2015 2:24 AM
 *To:* OpenStack Development Mailing List (not for usage questions)
 *Subject:* Re: [openstack-dev] [openstack-infra] [CI] [tempest] Tempest
 tests failing with SSH timeout.



 Hi Ramy,



 Thanks for the suggestion but since I am not including the neutron
 project, so downloading and including it will require any additional
 configuration in devstack-gate or not?



 On Sat, Jul 18, 2015 at 11:41 PM, Asselin, Ramy ramy.asse...@hp.com
 wrote:

  We ran into this issue as well. I never found the root cause, but I
 found a work-around: Use neutron-networking instead of the default
 nova-networking.



 If you’re using devstack-gate, it’s as  simple as:

 export DEVSTACK_GATE_NEUTRON=1



 Then run the job as usual.



 Ramy



 *From:* Abhishek Shrivastava [mailto:abhis...@cloudbyte.com]
 *Sent:* Friday, July 17, 2015 9:15 AM
 *To:* OpenStack Development Mailing List (not for usage questions)
 *Subject:* [openstack-dev] [openstack-infra] [CI] [tempest] Tempest tests
 failing with SSH timeout.



 Hi Folks,



 In my CI I see the following tempest tests failure for a past couple of
 days.

 ·
 tempest.scenario.test_minimum_basic.TestMinimumBasicScenario.test_minimum_basic_scenario
  [361.274316s] ... FAILED

 ·
 tempest.scenario.test_volume_boot_pattern.TestVolumeBootPattern.test_volume_boot_pattern
  [320.122458s] ... FAILED

 ·
 tempest.scenario.test_volume_boot_pattern.TestVolumeBootPatternV2.test_volume_boot_pattern
  [317.399342s] ... FAILED

 ·
 tempest.thirdparty.boto.test_ec2_instance_run.InstanceRunTest.test_compute_with_volumes
  [257.858272s] ... FAILED

  The failure logs are always the same every time, i.e;



  *03:34:09* 2015-07-17 03:21:13,256 9505 ERROR
 [tempest.scenario.manager] (TestVolumeBootPattern:test_volume_boot_pattern) 
 Initializing SSH connection to 172.24.5.1 failed. Error: Connection to the 
 172.24.5.1 via SSH timed out.

 *03:34:09* User: cirros, Password: None

 *03:34:09* 2015-07-17 03:21:13.256 9505 ERROR tempest.scenario.manager 
 Traceback (most recent call last):

 *03:34:09* 2015-07-17 03:21:13.256 9505 ERROR tempest.scenario.manager   
 File tempest/scenario/manager.py, line 312, in get_remote_client

 *03:34:09* 2015-07-17 03:21:13.256 9505 ERROR tempest.scenario.manager
  linux_client.validate_authentication()

 *03:34:09* 2015-07-17 03:21:13.256 9505 ERROR tempest.scenario.manager   
 File tempest/common/utils/linux/remote_client.py, line 62, in 
 validate_authentication

 *03:34:09* 2015-07-17 03:21:13.256 9505 ERROR tempest.scenario.manager
  self.ssh_client.test_connection_auth()

 *03:34:09* 2015-07-17 03:21:13.256 9505 ERROR tempest.scenario.manager   
 File 
 /opt/stack/new/tempest/.tox/all/local/lib/python2.7/site-packages/tempest_lib/common/ssh.py,
  line 151, in test_connection_auth

 *03:34:09* 2015-07-17 03:21:13.256 9505 ERROR tempest.scenario.manager
  connection = self._get_ssh_connection()

 *03:34:09* 2015-07-17 03:21:13.256 9505 ERROR tempest.scenario.manager   
 File 
 /opt/stack/new/tempest/.tox/all/local/lib/python2.7/site-packages/tempest_lib/common/ssh.py,
  line 87, in _get_ssh_connection

 *03:34:09* 2015-07-17 

Re: [openstack-dev] [magnum][bp] Power Magnum to run on metal with Hyper

2015-07-19 Thread Peng Zhao
I had some problems with my email server today, so you may see several identical 
messages from me on the ML. Please ignore them, and sorry about that.


Peng
 
 
-- Original --
From:  Peng Zhao p...@hyper.sh;
Date:  Mon, Jul 20, 2015 11:41 AM
To:  OpenStack Development Mailing List (not for usage questions) openstack-dev@lists.openstack.org; 

Subject:  Re: [openstack-dev] [magnum][bp] Power Magnum to run on metal with Hyper

 


Hi, Jay, Adrian and Wu,


I had some problems with my mail server when replying to Adrian's message, so let 
me write here.


Let's say someone creates a Hyper bay. The bay will be something like 
BM+Hyper+Cinder+Neutron+k8s/mesos/swarm, which is exactly a mini HyperStack. 
All nova does in this scenario is provision the Hyper+BM hosts. Things like 
live migration, multi-tenancy, billing, VPC, volumes, etc., are handled by 
HyperStack, not nova. Therefore, a second core besides nova is inevitable. 
Speaking of duplication, HyperStack leverages Cinder and Neutron, which 
protects the ROI.


Looking at the overall puzzle, one of the biggest missing pieces is a native 
CaaS solution, and HyperStack wants to fill that gap. A Hyper bay is a valid 
case, but more for someone who wants to provide CaaS within their IaaS (nova) 
platform.


We plan to present a working beta of HyperStack at the Tokyo summit. The next 
step is to integrate HyperStack with bays for more advanced deployments.


Best,
Peng


 

 
 
-- Original --
From:  Jay Lau jay.lau@gmail.com;
Date:  Mon, Jul 20, 2015 11:18 AM
To:  OpenStack Development Mailing List (not for usage questions) openstack-dev@lists.openstack.org; 

Subject:  Re: [openstack-dev] [magnum][bp] Power Magnum to run on metal with Hyper

 
The nova guys proposed moving Hyper to Magnum rather than Nova, as Hyper cannot 
fit into the nova virt driver model well.


As Hyper is now integrating with Kubernetes, I think the integration point 
may be creating a Kubernetes Hyper bay with the ironic driver.
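(Purely an illustrative sketch of that integration point, using the Magnum CLI 
of that era; the image, keypair and flavor names are made up, and ironic-backed 
Hyper bays are an assumption here, not an existing feature:)

  # hypothetical: a Kubernetes bay whose nodes land on a bare-metal (ironic) flavor
  magnum baymodel-create --name k8s-hyper-bm \
    --image-id fedora-21-atomic-5 \
    --keypair-id testkey \
    --external-network-id public \
    --flavor-id my-baremetal-flavor \
    --coe kubernetes
  magnum bay-create --name k8s-hyper-bay --baymodel k8s-hyper-bm --node-count 2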


Thanks


2015-07-20 10:00 GMT+08:00 Kai Qiang Wu wk...@cn.ibm.com:
 
Hi Peng,
 
 As @Adrian pointed out:
 
 My first suggestion is to find a way to make a nova virt driver for Hyper, 
which could allow it to be used with all of our current Bay types in Magnum. 
 
 
 I remember that you or other folks at your company proposed a bp for a nova virt 
driver for Hyper. What's the status of that bp now?
 Has it been accepted by the nova project, or cancelled?
 
 
 Thanks
 
 Best Wishes,
 

 Kai Qiang Wu (吴开强  Kennan)
 IBM China System and Technology Lab, Beijing
 
 E-mail: wk...@cn.ibm.com
 Tel: 86-10-82451647
 Address: Building 28(Ring Building), ZhongGuanCun Software Park,  
  No.8 Dong Bei Wang West Road, Haidian District Beijing P.R.China 
100193
 

 Follow your heart. You are miracle! 
 
 Adrian Otto ---07/19/2015 11:18:02 PM---Peng, You are not the first to think 
this way, and it's one of the reasons we did not integrate Cont
 
 From:  Adrian Otto adrian.o...@rackspace.com
 To:  OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
 Date:  07/19/2015 11:18 PM
 Subject:  Re: [openstack-dev] [magnum][bp] Power Magnum to run on metal with Hyper
 


 
 
 Peng,
 
 You are not the first to think this way, and it's one of the reasons we did 
not integrate containers with OpenStack in a meaningful way a full year 
earlier. Please pay close attention.
 
 1) OpenStack's key influencers care about two personas: 1.1) Cloud Operators and 
1.2) Cloud Consumers. If you only think in terms of 1.2, then your idea will 
get killed. Operators matter.
 
 2) Cloud Operators need a consistent way to bill for the IaaS services they 
provide. Nova emits all of the RPC messages needed to do this. Having a second 
nova that does this slightly differently is a really annoying problem that will 
make Operators hate the software. It's better to use nova, have things work 
consistently, and plug in virt drivers to it (see the configuration sketch 
after this list).
 
 3) Creation of a host is only part of the problem. That's the easy part. Nova 
also does a bunch of other things too. For example, say you want to live 
migrate a guest from one host to another. There is already functionality in 
Nova for doing that.
 
 4) Resources need to be capacity managed. We call this scheduling. Nova has a 
pluggable scheduler to help with the placement of guests on hosts. Magnum will 
not.
 
 5) Hosts in a cloud need to integrate with a number of other services, such as 
an image service, messaging, networking, storage, etc. If you only think in 
terms of host creation, and do something without nova, then you need to 
re-integrate with all of these things.
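 (A small sketch of the "plug in virt drivers" point above: an operator selects 
the compute driver with a single nova.conf option. The libvirt value is nova's 
stock in-tree example, while the Hyper driver class and the use of crudini are 
assumptions for illustration only:)

   # point nova-compute at a virt driver (Kilo-era option name)
   crudini --set /etc/nova/nova.conf DEFAULT compute_driver libvirt.LibvirtDriver
   # a hypothetical out-of-tree Hyper driver would plug in the same way:
   # crudini --set /etc/nova/nova.conf DEFAULT compute_driver hyper.HyperDriver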
 
 Now, I probably left out examples of lots of other things that Nova does. What 
I have mentioned is enough to make my point that there are a lot of things that 

Re: [openstack-dev] [magnum][bp] Power Magnum to run on metal with Hyper

2015-07-19 Thread Peng Zhao
Adrian,


Let's say someone creates a Hyper bay. The bay will be something like 
BM+Hyper+Cinder+Neutron+k8s/mesos/swarm, which is exactly a mini HyperStack. 
All nova does in this scenario is provision the Hyper+BM hosts. Things like 
live migration, multi-tenancy, billing, VPC, volumes, etc., are handled by 
HyperStack, not nova. Therefore, a second core besides nova is inevitable. 
Speaking of duplication, HyperStack leverages Cinder and Neutron, which 
protects the ROI.


Looking at the overall puzzle, one of the biggest missing pieces is a native 
CaaS solution, and HyperStack wants to fill that gap. A Hyper bay is a valid 
case, but more for someone who wants to provide CaaS within their IaaS (nova) 
platform.


We plan to present a working beta of HyperStack at the Tokyo summit. The next 
step is to integrate HyperStack with bays for more advanced deployments.


Best,
Peng


 
-- Original --
From:  Adrian Otto adrian.o...@rackspace.com;
Date:  Sun, Jul 19, 2015 11:11 PM
To:  OpenStack Development Mailing List (not for usage questions) openstack-dev@lists.openstack.org; 

Subject:  Re: [openstack-dev] [magnum][bp] Power Magnum to run on metal with Hyper

 
 Peng,
 
 
  You are not the first to think this way, and it's one of the reasons we did 
not integrate containers with OpenStack in a meaningful way a full year 
earlier. Please pay close attention.
 
 
  1) OpenStack's key influencers care about two personas: 1.1) Cloud Operators and 
1.2) Cloud Consumers. If you only think in terms of 1.2, then your idea will 
get killed. Operators matter.
 
 
  2) Cloud Operators need a consistent way to bill for the IaaS services they 
provide. Nova emits all of the RPC messages needed to do this. Having a second 
nova that does this slightly differently is a really annoying problem that will 
make Operators hate the software. It's better to use nova, have things work 
consistently, and plug in virt drivers to it.
 
 
 3) Creation of a host is only part of the problem. That's the easy part. Nova 
also does a bunch of other things too. For example, say you want to live 
migrate a guest from one host to another. There is already functionality in 
Nova for doing that.
 
 
 4) Resources need to be capacity managed. We call this scheduling. Nova has a 
pluggable scheduler to help with the placement of guests on hosts. Magnum will 
not.
 
 
 5) Hosts in a cloud need to integrate with a number of other services, such as 
an image service, messaging, networking, storage, etc. If you only think in 
terms of host creation, and do something without nova, then you need to 
re-integrate with all of  these things.
 
 
  Now, I probably left out examples of lots of other things that Nova does. What 
I have mentioned is enough to make my point that there are a lot of things that 
Magnum is intentionally NOT doing that we expect to get from Nova, and I will 
block all code that gratuitously duplicates functionality that I believe 
belongs in Nova. I promised our community I would not duplicate existing 
functionality without a very good reason, and I will keep that promise.
 
 
 Let's find a good way to fit Hyper with OpenStack in a way that best leverages 
what exists today, and is least likely to be rejected. Please note that the 
proposal needs to be changed from where it is today to achieve this fit.
 
 
  My first suggestion is to find a way to make a nova virt driver for Hyper, 
which could allow it to be used with all of our current Bay types in Magnum.
 
 
 Thanks,
 
 
 Adrian
 
 
  Original message 
 From: Peng Zhao p...@hyper.sh 
 Date: 07/19/2015 5:36 AM (GMT-08:00) 
 To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org 
 Subject: Re: [openstack-dev] [magnum][bp] Power Magnum to run on metal with Hyper 
 
 Thanks Jay.
 
 
 Hongbin, yes, it will be a scheduling system, either swarm, k8s or mesos. I 
just think a bay isn't a must in this case, and we don't need nova to provision 
the BM hosts, which makes things more complicated, IMO.
 
 
 Peng
   
 
 
  -- Original --
  From:  Jay Lau jay.lau@gmail.com;
 Date:  Sun, Jul 19, 2015 10:36 AM
 To:  OpenStack Development Mailing List (not for usage questions) openstack-dev@lists.openstack.org; 

 Subject:  Re: [openstack-dev] [magnum][bp] Power Magnum to run on metal with Hyper
 
  
   Hong Bin,
 
 
 I have had some online discussion with Peng; it seems Hyper is now integrating with 
Kubernetes and also plans to integrate with Mesos for scheduling. Once the Mesos 
integration is finished, we can treat Mesos+Hyper as another kind of bay.
 
 
 Thanks
 
 
 2015-07-19 4:15 GMT+08:00 Hongbin Lu hongbin...@huawei.com:

Peng,
 
 
 
Several questions here. You mentioned that HyperStack is a single big “bay”. 
Then who is doing the multi-host scheduling, Hyper or something else? Were you 
suggesting integrating Hyper with Magnum directly? Or were you suggesting 
integrating Hyper