[openstack-dev] [OpenStack][InstanceGroup] metadetails and metadata in instance_group.py

2014-08-10 Thread Jay Lau
Hi,

Does anyone know why, in instance_group.py, we have the following logic for
transferring metadetails to metadata? Why not pass metadata directly from the
client?

https://github.com/openstack/nova/blob/master/nova/objects/instance_group.py#L99-L101
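
Roughly, the logic in question looks like this (a paraphrased sketch of the
key renaming, not the exact nova code; the helper name is made up):

    def _translate_metadetails(updates):
        # The API-facing payload says "metadetails", but the object field
        # is "metadata", so the key is renamed before the object is filled.
        if 'metadetails' in updates:
            updates['metadata'] = updates.pop('metadetails')
        return updates

    print(_translate_metadetails({'metadetails': {'k': 'v'}}))
    # -> {'metadata': {'k': 'v'}}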

-- 
Thanks,

Jay


Re: [openstack-dev] [OpenStack][InstanceGroup] metadetails and metadata in instance_group.py

2014-08-10 Thread Jay Lau
I am asking this because I got a -2 on
https://review.openstack.org/109505 ; I just want to know why this new term
metadetails was invented when we already have details, metadata,
system_metadata, instance_metadata, and properties (on images and
volumes).

Thanks!


2014-08-11 10:09 GMT+08:00 Jay Lau jay.lau@gmail.com:

 Hi,

 Does anyone know why in instance_group.py, why do we have the following
 logic for transferring metadetails to metadata? Why not transfer metadata
 directly from client?


 https://github.com/openstack/nova/blob/master/nova/objects/instance_group.py#L99-L101

 --
 Thanks,

 Jay




-- 
Thanks,

Jay


Re: [openstack-dev] [OpenStack][InstanceGroup] metadetails and metadata in instance_group.py

2014-08-11 Thread Jay Lau
Thanks, Jay Pipes! I see, but setting metadata on a server group might be
more flexible for handling all of the policy cases, such as hard
affinity/anti-affinity, soft affinity/anti-affinity, topology
affinity/anti-affinity, etc. We may have more use cases related to server
group metadata in the future.

Regarding getting rid of the instance_group table: yes, having near,
not-near, hard, and soft as launch modifiers is a good idea, but it is a big
change to the current nova server group design, and I'm not sure we can
reach a clear conclusion within the coming one or two releases.
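
For illustration, the metadata-flag approach we are prototyping internally
looks roughly like this (an assumed payload shape only, not the final
design):

    # Hypothetical server group carrying a hard/soft flag in its metadata.
    server_group = {
        'name': 'web-tier',
        'policies': ['anti-affinity'],
        'metadata': {'mode': 'soft'},  # 'hard' or 'soft' enforcement
    }

    def is_hard_policy(group):
        # Default to hard enforcement when no flag is set.
        return group.get('metadata', {}).get('mode', 'hard') == 'hard'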

Thanks!


2014-08-12 7:01 GMT+08:00 Jay Pipes jaypi...@gmail.com:

 On 08/11/2014 05:58 PM, Jay Lau wrote:

 I think the metadata in server group is an important feature and it
 might be used by
 https://blueprints.launchpad.net/nova/+spec/soft-affinity-
 for-server-group

 Actually, we are now doing an internal development for above bp and want
 to contribute this back to community later. We are now setting hard/soft
 flags in server group metadata to identify if the server group want
 hard/soft affinity.

 I prefer Dan's first suggestion, what do you think?
 =
 If we care to have this functionality, then I propose we change the
 attribute on the object (we can handle this with versioning) and reflect
 it as metadata in the API.
 =


 -1

 If hard and soft is something that really needs to be supported, then this
 should be a field in the instance_groups table, not some JSON blob in a
 random metadata field.

 Better yet, get rid of the instance_groups table altogether and have
 near, not-near, hard, and soft be launch modifiers similar to the
 instance type. IMO, there's really no need to store a named group at all,
 but that goes back to my original ML post about the server groups topic:

 https://www.mail-archive.com/openstack-dev@lists.openstack.
 org/msg23055.html

 Best,
 -jay






-- 
Thanks,

Jay


Re: [openstack-dev] [OpenStack][Docker][HEAT] Cloud-init and docker container

2014-08-12 Thread Jay Lau
Thanks Qiming. ;-)

Yes, this is one solution for running user data when using a docker
container in Heat. I see that the properties include almost all of the
parameters used in docker run.

Do you know whether a docker container supports cloud-init in an image? My
understanding is that it does not, as I did not see a userdata property on
the Docker resource.



2014-08-12 16:21 GMT+08:00 Qiming Teng teng...@linux.vnet.ibm.com:


 Hi,

 Are you aware of the docker_container resource type
 (DockerInc::Docker::Container) in the Heat contrib directory? I am seeing a
 'CMD' property, which is a list of commands to run after the container is
 spawned.

 Is that what you want?

 Regards,
   Qiming

 On Tue, Aug 12, 2014 at 02:27:39PM +0800, Jay Lau wrote:
  Hi,
 
  I'm now doing some investigation into docker + Heat integration, and I
  have come up with one question I'd like your help with.
 
  What is the best way for a docker container to run some user data once the
  docker container is provisioned?
 
  I think there are two ways: using cloud-init or the CMD section in the
  Dockerfile, right? Just wondering, does anyone have experience with
  cloud-init for docker containers, and is the configuration the same as
  with a VM?
 
  --
  Thanks,
 
  Jay







-- 
Thanks,

Jay


Re: [openstack-dev] [OpenStack][Docker][HEAT] Cloud-init and docker container

2014-08-12 Thread Jay Lau
I don't have the environment set up right now, but from reviewing the code,
I think the logic should be as follows:
1) When using the nova docker driver, we can use cloud-init and/or CMD in
docker images to run post-install scripts.
myapp:
  Type: OS::Nova::Server
  Properties:
    flavor: m1.small
    image: my-app:latest   # docker image
    user-data: ...

2) When using the Heat docker driver, we can only use CMD in the docker
image or the heat template to run post-install scripts.
wordpress:
  type: DockerInc::Docker::Container
  depends_on: [database]
  properties:
    image: wordpress
    links:
      db: mysql
    port_bindings:
      80/tcp: [{HostPort: 80}]
    docker_endpoint:
      str_replace:
        template: http://host:2345/
        params:
          host: {get_attr: [docker_host, networks, private, 0]}
    cmd: /bin/bash



2014-08-12 17:11 GMT+08:00 Qiming Teng teng...@linux.vnet.ibm.com:

 Don't have an answer to this.  You may try it though.

 Regards,
   Qiming

 On Tue, Aug 12, 2014 at 04:52:58PM +0800, Jay Lau wrote:
  Thanks Qiming. ;-)
 
  Yes, this is one solution for running user data when using docker
 container
  in HEAT. I see that the properties include almost all of the parameters
  used in docker run.
 
  Do you know if docker container support cloud-init in a image? My
  understanding is NOT as I did not see userdata in docker property.
 
 
 
  2014-08-12 16:21 GMT+08:00 Qiming Teng teng...@linux.vnet.ibm.com:
 
  
   Hi,
  
   Are you aware of the docker_container resource type
   (DockerInc::Docker::Container) in the Heat contrib directory? I am seeing a
   'CMD' property, which is a list of commands to run after the container is
   spawned.
  
   Is that what you want?
  
   Regards,
 Qiming
  
   On Tue, Aug 12, 2014 at 02:27:39PM +0800, Jay Lau wrote:
Hi,
   
I'm now doing some investigation for docker + HEAT integration and
 come
   up
one question want to get your help.
   
What is the best way for a docker container to run some user data
 once
   the
docker container was provisioned?
   
I think there are two ways: using cloud-init or the CMD section in
Dockerfile, right? just wondering does anyone has some experience
 with
cloud-init for docker container, does the configuration same with VM?
   
--
Thanks,
   
Jay
  
  
  
  
 
 
 
  --
  Thanks,
 
  Jay







-- 
Thanks,

Jay


Re: [openstack-dev] [OpenStack][Docker][HEAT] Cloud-init and docker container

2014-08-12 Thread Jay Lau
Thanks Eric for the confirmation ;-)


2014-08-12 23:30 GMT+08:00 Eric Windisch ewindi...@docker.com:




 On Tue, Aug 12, 2014 at 5:53 AM, Jay Lau jay.lau@gmail.com wrote:

 I did not have the environment set up now, but by reviewing code, I think
 that the logic should be as following:
 1) When using nova docker driver, we can use cloud-init or/and CMD in
 docker images to run post install scripts.
 myapp:
 Type: OS::Nova::Server
 Properties:
 flavor: m1.small
 image: my-app:latest   docker image
 user-data:  

 2) When using heat docker driver, we can only use CMD in docker image or
 heat template to run post install scripts.
 wordpress:
 type: DockerInc::Docker::Container
 depends_on: [database]
 properties:
   image: wordpress
   links:
 db: mysql
   port_bindings:
 80/tcp: [{HostPort: 80}]
   docker_endpoint:
 str_replace:
   template: http://host:2345/
   params:
 host: {get_attr: [docker_host, networks, private, 0]}
 cmd: /bin/bash 



 I can confirm this is correct for both use-cases. Currently, using Nova,
 one may only specify the CMD in the image itself, or as glance metadata.
 The cloud metadata service should be accessible and usable from Docker.

 The Heat plugin allows setting the CMD as a resource property. The
 user-data is only passed to the instance that runs Docker, not the
 containers. Configuring the CMD and/or environment variables for the
 container is the correct approach.

 --
 Regards,
 Eric Windisch





-- 
Thanks,

Jay


[openstack-dev] [OpenStack][Docker] Run OpenStack Service in Docker Container

2014-08-14 Thread Jay Lau
I see a few mentions of OpenStack services themselves being containerized
in Docker. Is this a serious trend in the community?

http://allthingsopen.com/2014/02/12/why-containers-for-openstack-services/

-- 
Thanks,

Jay


Re: [openstack-dev] Live Migration Bug in vmware VCDriver

2014-08-18 Thread Jay Lau
It seems that the VCDriver does not support live migration so far.

I recall that at the ATL summit, the VMware team planned some enhancements
to enable live migration:
1) Make sure one nova-compute can only manage one cluster or resource pool;
this ensures that VMs in different clusters/resource pools can migrate to
each other.
2) So far, the VCDriver does not implement
check_can_live_migrate_destination(), which causes live migration to always
fail when using the VCDriver.
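
For reference, this is the driver hook I mean, as declared on the
ComputeDriver base class (signature approximate for this era of nova); a
driver that does not override it cannot pass the destination check:

    class ComputeDriver(object):
        def check_can_live_migrate_destination(self, ctxt, instance,
                                               src_compute_info,
                                               dst_compute_info,
                                               block_migration=False,
                                               disk_over_commit=False):
            # Drivers must override this to validate the destination host;
            # the base implementation just raises.
            raise NotImplementedError()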

Thanks.


2014-08-18 16:14 GMT+08:00 한승진 yongi...@gmail.com:

 Is there anybody working on the bug below?

 https://bugs.launchpad.net/nova/+bug/1192192

 The comments end at 2014-03-26.

 I guess we should fix the VCDriver source code.

 If someone is working on it now, can you share how to solve the problem?





-- 
Thanks,

Jay


Re: [openstack-dev] [OpenStack][Docker] Run OpenStack Service in Docker Container

2014-08-18 Thread Jay Lau
I see that there are some OpenStack docker images in the public Docker
registry; perhaps you can check them on GitHub to see how to use them.
[root@db03b04 ~]# docker search openstack
NAME                                     DESCRIPTION                                      STARS   OFFICIAL   AUTOMATED
ewindisch/dockenstack                    OpenStack development environment (using D...   6                  [OK]
jyidiego/openstack-client                An ubuntu 12.10 LTS image that has nova, s...   1
dkuffner/docker-openstack-stress         A docker container for openstack which pro...   0                  [OK]
garland/docker-openstack-keystone                                                         0                  [OK]
mpaone/openstack                                                                          0
nirmata/openstack-base                                                                    0
balle/openstack-ipython2-client          Features Python 2.7.5, Ipython 2.1.0 and H...   0
booleancandy/openstack_clients                                                            0                  [OK]
leseb/openstack-keystone                                                                  0
raxcloud/openstack-client                                                                 0
paulczar/openstack-agent                                                                  0
booleancandy/openstack-clients                                                            0
jyidiego/openstack-client-rumm-ansible                                                    0
bodenr/jumpgate                          SoftLayer Jumpgate WSGi OpenStack REST API...   0                  [OK]
sebasmagri/docker-marconi                Docker images for the Marconi Message Queu...   0                  [OK]
chamerling/openstack-client                                                               0                  [OK]
centurylink/openstack-cli-wetty          This image provides a Wetty terminal with ...   0                  [OK]


2014-08-18 16:47 GMT+08:00 Philip Cheong philip.che...@elastx.se:

 I think it's a very interesting test for docker. I too have been thinking
 about this for some time, to try and dockerise OpenStack services, but as
 the usual story goes, I have plenty of things I'd love to try, and there
 are only so many hours in a day...

 Would definitely be interested to hear if anyone has attempted this and
 what the outcome was.

 Any suggestions on what the most appropriate service would be to begin
 with?


 On 14 August 2014 14:54, Jay Lau jay.lau@gmail.com wrote:

 I see a few mentions of OpenStack services themselves being containerized
 in Docker. Is this a serious trend in the community?

 http://allthingsopen.com/2014/02/12/why-containers-for-openstack-services/

 --
 Thanks,

 Jay





 --
 *Philip Cheong*
 *Elastx *| Public and Private PaaS
 email: philip.che...@elastx.se
 office: +46 8 557 728 10
 mobile: +46 702 8170 814
 twitter: @Elastx https://twitter.com/Elastx
 http://elastx.se





-- 
Thanks,

Jay


Re: [openstack-dev] Live Migration Bug in vmware VCDriver

2014-08-18 Thread Jay Lau
So far, live migration is not supported by the VCDriver in either Juno or
Icehouse.

For Icehouse, yes, one nova-compute can manage multiple clusters, but live
migration will fail in that case, as the target host and the source host
will be considered the same host (there is only one nova-compute).


2014-08-18 19:32 GMT+08:00 한승진 yongi...@gmail.com:

 Thanks for the reply, Jay!

 Since Icehouse, one nova-compute can manage multiple clusters, I think.

 In this case, how should we proceed in order to achieve the live migration
 functionality?

 Thanks.

 John Haan.


 2014-08-18 19:00 GMT+09:00 Jay Lau jay.lau@gmail.com:

 It seems that VCDriver do not support live migration till now.

 I recalled in ATL summit, the VMWare team is going to do some enhancement
 to enable live migration:
 1) Make sure one nova compute can only manage one cluster or resource
 pool, this can make sure VMs in different cluster/resource pool can migrate
 to each other.
 2) Till now, I see that VCDriver did not implement
 check_can_live_migrate_destination(), this caused live migration will
 always be failed when using VCDriver.

 Thanks.


 2014-08-18 16:14 GMT+08:00 한승진 yongi...@gmail.com:

  Is there anybody working on below bug?

 https://bugs.launchpad.net/nova/+bug/1192192

 The comments are ends 2014-03-26

 I guess we should fix the VCDriver source codes.

 If someone is doing now, can you share how to solve the problem?





 --
 Thanks,

 Jay








-- 
Thanks,

Jay


Re: [openstack-dev] [OpenStack][Docker] Run OpenStack Service in Docker Container

2014-08-18 Thread Jay Lau
2014-08-19 4:11 GMT+08:00 Eric Windisch ewindi...@docker.com:




 On Mon, Aug 18, 2014 at 8:49 AM, Jyoti Ranjan jran...@gmail.com wrote:

 I believe that not everything can go into a docker container, e.g.:

 1. compute nodes
 2. baremetal provisioning
 3. L3 router etc


 Containers are a good solution for all of the above, for some value of
 container. There is some terminology overloading here, however.


Hi Eric, one more question: I don't quite understand what you mean by
"Containers are a good solution for all of the above". Do you mean a docker
container can handle all three of the above? How? Can you please share more
details? Thanks!


 There are Linux namespaces, capability sets, and cgroups which may not be
 appropriate for using around some workloads. These, however, are granular.
 For instance, one may run a container without networking namespaces,
 allowing the container to directly manipulate host networking. Such a
 container would still see nothing outside its own chrooted filesystem, PID
 namespace, etc.

 Docker in particular offers a number of useful features around filesystem
 management, images, etc. These features make it easier to deploy and manage
 systems, even if many of the Linux containers features are disabled for
 one reason or another.

 --
 Regards,
 Eric Windisch





-- 
Thanks,

Jay


Re: [openstack-dev] [OpenStack][Docker] Run OpenStack Service in Docker Container

2014-08-18 Thread Jay Lau
Thanks, Eric, for the detailed explanation; that's clear. I will check the
related links further, thanks!


2014-08-19 7:09 GMT+08:00 Eric Windisch ewindi...@docker.com:


 On Mon, Aug 18, 2014 at 8:49 AM, Jyoti Ranjan jran...@gmail.com wrote:

 I believe that not everything can go into a docker container, e.g.:

 1. compute nodes
 2. baremetal provisioning
 3. L3 router etc


 Containers are a good solution for all of the above, for some value of
 container. There is some terminology overloading here, however.


 Hi Eric, one more question, not quite understand what you mean for
 Containers are a good solution for all of the above, you mean docker
 container can manage all of three above? How? Can you please show more
 details? Thanks!


 I'm not sure this is the right forum for a nuanced explanation of every
 use-case and every available option, but I can give some examples. Keep in
 mind, again, that even in absence of security constraints offered by
 Docker, that Docker provides imaging facilities and server management
 solutions that are highly useful. For instance, there are use-cases of
 Docker that might leverage it simply for attestation or runtime artifact
 management.

 First, in the case of an L3 router or baremetal provisioning where host
 networking is required, one might specify 'docker run --net host' to allow
 the process(es) running inside of the container to operate as if running on
 the host, but only as it pertains to networking. Essentially, it would
 uncontain the networking aspect of the process(es).

 As of Docker 1.2, to be released this week, one may also specify 'docker
 run --cap-add' to provide granular control of the addition of Linux
 capabilities that might be needed by processes (see
 http://linux.die.net/man/7/capabilities). This allows granular loosening of
 restrictions which might allow container-breakout, without fully opening
 the gates.  From a security perspective, I'd rather provide some
 restrictions than none at all.

 On compute nodes, it should be possible to run qemu/kvm inside of a
 container. The nova-compute program does many things on a host and it may
 be difficult to provide a simplified set of restrictions for it without
 running a privileged container (or one with many --cap-add statements,
 --net host, etc). Again, while containment might be minimized, the
 deployment facilities of Docker are still very useful.  That said, all of
 the really interesting things done by Nova that require privileges are
 done by rootwrap... a rootwrap which leveraged Docker would make
 containerization of Nova more meaningful and would be a boon for Nova
 security overall.

 --
 Regards,
 Eric Windisch





-- 
Thanks,

Jay


Re: [openstack-dev] [Congress] Policy Enforcement logic

2014-08-21 Thread Jay Lau
I know that Congress is still under development, but it would be better if
it provided some "How to use it" information, just like Docker
(https://wiki.openstack.org/wiki/Docker); this might attract more people to
contribute to it.


2014-08-21 22:07 GMT+08:00 Madhu Mohan mmo...@mvista.com:

 Hi,

 I am quite new to the Congress and Openstack as well and this question may
 seem very trivial and basic.

 I am trying to figure out the policy enforcement logic.

 Can somebody help me understand how exactly a policy enforcement action
 is taken?

 From the example policy there is an action defined as:



  action(disconnect_network)
  nova:network-(vm, network) :- disconnect_network(vm, network)
  I assume that this statement, when applied, would translate to the
  deletion of an entry in the database.

  But how does this affect the actual setup, i.e., how is this database
  update translated into an actual disconnection of the VM from the network?
  How does nova know that it has to disconnect the VM from the network?

 Thanks and Regards,
 Madhu Mohan








-- 
Thanks,

Jay


Re: [openstack-dev] New feature on Nova

2014-08-21 Thread Jay Lau
There is already a blueprint tracking KVM host maintenance:
https://blueprints.launchpad.net/nova/+spec/host-maintenance , but I think
that nova will not handle automatic live migration off a host under
maintenance; that should be a use case for Congress:
https://wiki.openstack.org/wiki/Congress


2014-08-21 23:00 GMT+08:00 thomas.pessi...@orange.com:

 Hello,



 Sorry if I am not on the right mailing list. I would like to get some
 information.



 I would like to know: if I am a company that wants to add a feature to an
 OpenStack module, how do we have to proceed? And what is the way for this
 new feature to be adopted by the community?

 The feature is the maintenance mode, that is to say: disable a compute
 node and live-migrate all the instances which are running on the host.

 I know we can do an evacuate, but evacuate restarts the instances. I have
 already written a shell script to do this using the CLI commands.



 Regards,

 _

 This message and its attachments may contain confidential or privileged 
 information that may be protected by law;
 they should not be distributed, used or copied without authorisation.
 If you have received this email in error, please notify the sender and delete 
 this message and its attachments.
 As emails may be altered, Orange is not liable for messages that have been 
 modified, changed or falsified.
 Thank you.






-- 
Thanks,

Jay


Re: [openstack-dev] [Congress] Policy Enforcement logic

2014-08-21 Thread Jay Lau
Hi Tim,

That's great! Has the tutorial been uploaded to Gerrit for review?

Thanks.


2014-08-21 23:56 GMT+08:00 Tim Hinrichs thinri...@vmware.com:

  Hi Jay,

  We have a tutorial in review right now.  It should be merged in a couple
 of days.  Thanks for the suggestion!

  Tim


  On Aug 21, 2014, at 7:54 AM, Jay Lau jay.lau@gmail.com wrote:

  I know that Congress is still under development, but it is better that
 it can provide some info for How to use it just like docker
 https://wiki.openstack.org/wiki/Docker , this might attract more people
 contributing to it.


 2014-08-21 22:07 GMT+08:00 Madhu Mohan mmo...@mvista.com:

 Hi,

  I am quite new to the Congress and Openstack as well and this question
 may seem very trivial and basic.

 I am trying to figure out the policy enforcement logic,

  Can some body help me understand how exactly, a policy enforcement
 action is taken.

  From the example policy there is an action defined as:



 *action(disconnect_network) nova:network-(vm, network) :-
 disconnect_network(vm, network) *
  I assume that this statement when applied would translate to deletion of
 entry in the database.

  But, how does this affect the actual setup (i.e) How is this database
 update translated to actual disconnection of the VM from the network.
  How does nova know that it has to disconnect the VM from the network ?

  Thanks and Regards,
  Madhu Mohan








 --
  Thanks,

  Jay







-- 
Thanks,

Jay


Re: [openstack-dev] [Congress] Upcoming alpha release

2014-08-21 Thread Jay Lau
I just went through the tutorial; it is very clear. Thanks, Tim and the
Congress team.

In the README.rst, the link http://docs.openstack.org/developer/congress
cannot be opened.

One minor comment: I noticed that you are using Neutron for the example;
can you please add another case without Neutron? Not all developers enable
Neutron when installing with devstack. Thanks.



2014-08-22 4:14 GMT+08:00 Tim Hinrichs thinri...@vmware.com:

 Hi all,

 We're aiming for an alpha release of Congress tomorrow (Friday).  If you
 have a spare server and a little time, it’d be great if you could try it
 out: install it, write some policies, run tests, etc.  If you could send
 some feedback along the following lines, that would be helpful.

 1. What problems did you run into?  File a bug if you like, or drop me an
 email.

 2. Which operating system did you use, and was the install successful or
 not?

 Here are some docs that we hope are enough to get you started.

 - README with install instructions:
 https://github.com/stackforge/congress/blob/master/README.rst

 - Tutorial in the form of an end-to-end example:

 https://github.com/stackforge/congress/blob/master/doc/source/tutorial-tenant-sharing.rst

 - Troubleshooting guide:

 https://github.com/stackforge/congress/blob/master/doc/source/troubleshooting.rst

 Thanks!
 Tim






-- 
Thanks,

Jay


Re: [openstack-dev] [Nova] - do we need .start and .end notifications in all cases ?

2014-09-22 Thread Jay Lau
Hi Jay,

There was actually a discussion about filing a blueprint for object
notifications: http://markmail.org/message/ztehzx2wc6dacnk2

But for patch https://review.openstack.org/#/c/107954/ , I'd like to keep
it as it is for now, to satisfy the requirement of server group
notifications for third-party clients.
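
For what it's worth, the three-notification pattern discussed below boils
down to something like this (a minimal sketch assuming a plain callable
notifier, not the actual nova notification API):

    import contextlib

    @contextlib.contextmanager
    def notify_span(notifier, event):
        # Emit <event>.start, then <event>.end on success or
        # <event>.abort on failure.
        notifier(event + '.start')
        try:
            yield
        except Exception:
            notifier(event + '.abort')
            raise
        notifier(event + '.end')

    # Usage:
    # with notify_span(emit, 'servergroup.create'):
    #     ...do the work...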

Thanks.

2014-09-22 22:41 GMT+08:00 Jay Pipes jaypi...@gmail.com:

 On 09/22/2014 07:24 AM, Daniel P. Berrange wrote:

 On Mon, Sep 22, 2014 at 11:03:02AM +, Day, Phil wrote:

 Hi Folks,

 I'd like to get some opinions on the use of pairs of notification
 messages for simple events.   I get that for complex operations on
 an instance (create, rebuild, etc) a start and end message are useful
 to help instrument progress and how long the operations took. However
 we also use this pattern for things like aggregate creation, which is
 just a single DB operation - and it strikes me as kind of overkill and
 probably not all that useful to any external system compared to a
 single event .create event after the DB operation.


 A start + end pair is not solely useful for timing, but also potentially
 useful for detecting whether it completed successfully; e.g., if you
 receive an end event notification you know it has completed. That said, if
 this is a use case we want to target, then ideally we'd have a third
 notification for the failure case, so consumers don't have to wait and
 time out to detect errors.

  There is a change up for review to add notifications for service groups
 which is following this pattern
 (https://review.openstack.org/#/c/107954/) - the author isn't doing
 anything wrong in that they're just following that pattern, but it made me
 wonder if we shouldn't have some better guidance on when to use a single
 notification rather than a .start/.end pair.

 Does anyone else have thoughts on this, or know of external systems that
 would break if we restricted .start and .end usage to long-lived instance
 operations?


 I think we should aim to /always/ have 3 notifications using a pattern of

try:
   ...notify start...

   ...do the work...

   ...notify end...
except:
   ...notify abort...


 Precisely my viewpoint as well. Unless we standardize on the above, our
 notifications are less than useful, since they will be open to
 interpretation by the consumer as to what precisely they mean (and the
 consumer will need to go looking into the source code to determine when an
 event actually occurred...)

 Smells like a blueprint to me. Anyone have objections to me writing one up
 for Kilo?

 Best,
 -jay






-- 
Thanks,

Jay


[openstack-dev] [OpenStack][Nova][API] Still need to update v2 API in Icehouse release

2013-12-26 Thread Jay Lau
Hi,

In Icehouse development, do we have any guidelines for nova API changes? If
I want to make some changes to the nova API, do I need to update both v2
and v3, or just v3?

There are some patches related to this; I hope to get some comments from you.

https://review.openstack.org/#/c/52733/
https://review.openstack.org/#/c/52867/
https://review.openstack.org/#/c/63853/

Thanks,

Jay


Re: [openstack-dev] [OpenStack][Nova][API] Still need to update v2 API in Icehouse release

2013-12-27 Thread Jay Lau
Thanks Joe.

Still another question: in which cases would we need to add a new extension
to the v2 API? If we freeze the v2 API at icehouse-2, does that mean we are
not allowed to make any changes to the v2 API, and that all API changes
should go directly to v3?

Thanks,

Jay


2013/12/27 Joe Gordon joe.gord...@gmail.com




 On Thu, Dec 26, 2013 at 1:03 AM, Jay Lau jay.lau@gmail.com wrote:

 Hi,

 In Icehouse development, do we have some guidelines for nova api change?
 If I want to make some changes for nova api, do I need to update both v2
 and v3 or just v3?


 For every new extension to the v2 API we require the equivalent change to
 the v3 api, so that there is nothing in v2 that V3 doesn't support. But
 requiring the opposite doesn't make any sense to me, and seems like a waste
 of human resources.

 https://etherpad.openstack.org/p/icehouse-summit-nova-v3-api says we want
 to freeze the v2 api at icehouse-2, a plan which I fully support.



 There are some patches related to this, hope can get some comments from
 you.

 https://review.openstack.org/#/c/52733/
 https://review.openstack.org/#/c/52867/
 https://review.openstack.org/#/c/63853/

 Thanks,

 Jay








[openstack-dev] [OpenStack][HEAT][Template][Dashboard] HEAT editing dashboard

2013-12-30 Thread Jay Lau
Hi,

I noticed that there is a blueprint,
https://blueprints.launchpad.net/horizon/+spec/heat-template-management ,
which aims to improve the UI for Heat templates. Is anyone working on this?

If not, does anyone have a mock-up dashboard for editing Heat templates?

Thanks,

Jay


Re: [openstack-dev] [nova]Should we add the api to move the removed or damaged hosts

2014-01-02 Thread Jay Lau
It is a duplicate of
https://blueprints.launchpad.net/nova/+spec/remove-nova-compute

Thanks,

Jay


2014/1/3 黎林果 lilinguo8...@gmail.com

 Hi,
All

   Should we add an API to delete hosts that have been removed or are damaged?


See also:
 https://blueprints.launchpad.net/nova/+spec/add-delete-host-api



 Best regards!

 Lee




[openstack-dev] [OpenStack][Gantt][filters] AvailabilityZoneFilter in Gantt and Oslo

2014-01-05 Thread Jay Lau
Greetings,

Here come a question related to AvailabilityZoneFilter.

A new project, Gantt, which is a common scheduler for OpenStack, is now
under incubation, and most of its code currently comes from the nova
scheduler.

I'm planning to make Gantt use the common scheduler code from oslo; I think
this is the right direction, and it is better to do this at this stage so
that Gantt starts from a good code base. But AvailabilityZoneFilter has
different logic in oslo and Gantt.

In oslo, AvailabilityZoneFilter only handles availability_zone from
request_spec; but in Gantt, AvailabilityZoneFilter can handle
availability_zone from both request_spec and aggregate metadata, so we
cannot sync AvailabilityZoneFilter from oslo to Gantt.

What about splitting the AvailabilityZoneFilter in Gantt into two filters:
one, AvailabilityZoneFilter, with the same logic as oslo; the other, which
we can name AggregateAvailabilityZoneFilter, handling only availability_zone
from aggregate metadata? This would let Gantt sync AvailabilityZoneFilter
from oslo and make AvailabilityZoneFilter a common scheduler filter for both
Gantt and Cinder. What do you think?
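
To make the proposal concrete, the split would look roughly like this (a
sketch with assumed host_state/filter_properties interfaces, not actual
oslo or Gantt code):

    class AvailabilityZoneFilter(object):
        """Match only the availability_zone in the request spec (oslo logic)."""
        def host_passes(self, host_state, filter_properties):
            spec = filter_properties.get('request_spec', {})
            requested = spec.get('availability_zone')
            return requested is None or requested == host_state.availability_zone

    class AggregateAvailabilityZoneFilter(object):
        """Match only availability zones defined in aggregate metadata."""
        def host_passes(self, host_state, filter_properties):
            spec = filter_properties.get('request_spec', {})
            requested = spec.get('availability_zone')
            if requested is None:
                return True
            zones = host_state.aggregate_metadata.get('availability_zone', set())
            return requested in zones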

Thanks,

Jay


Re: [openstack-dev] [OpenStack][Gantt][filters] AvailabilityZoneFilter in Gantt and Oslo

2014-01-05 Thread Jay Lau
Thanks Jay Pipes and Boris for the comments.

@Jay Pipes, agreed, I can name it AvailabilityZoneAggregateFilter.

@Boris,

For your first question, we may want to get an answer from @Robert
Collins.
For the second question, perhaps you can refer to
https://blueprints.launchpad.net/nova/+spec/forklift-scheduler-breakout ;
per this bp specification, Gantt's goal is to deprecate nova-scheduler in I
and remove it in J, so perhaps your bp can be implemented in Gantt?
For the third question, the goal of Gantt is to be a common scheduler for
OpenStack; you can refer to
https://blueprints.launchpad.net/nova/+spec/forklift-scheduler-breakout for
more detail.

Thanks,

Jay



2014/1/6 Boris Pavlovic bpavlo...@mirantis.com

 Hi Jay,

 I have  3 points:

 First of all:
 https://github.com/openstack/gantt/
 Why this project has all history of Nova? It seems very odd way to create
 new project aka clone Nova remove all from Nova..

 Second:
  This blueprint,
  https://blueprints.launchpad.net/nova/+spec/no-db-scheduler , should be
  implemented before switching to a separate scheduler-as-a-service.
  The main reason is that the scheduler business logic is deeply connected
  with host states, which are deeply connected with db models; that makes
  building a common scheduler really hard (or impossible; we already
  tried).

 Third:
  Why is this project, which is actually just a Nova copy-paste, under
  openstack?


 Best regards,
 Boris Pavlovic



 On Sun, Jan 5, 2014 at 6:13 PM, Jay Lau jay.lau@gmail.com wrote:

 Greetings,

 Here come a question related to AvailabilityZoneFilter.

 A new project Gantt which is a common scheduler for OpenStack is now
 under incubation, and now most of the code are get from nova scheduler.

 I'm planning to make Gantt use common scheduler from oslo, and I think
 that this is the right direction and it is better do this at this stage for
 Gantt to make sure it has a good code base. But  AvailabilityZoneFilter has
 different logic in oslo and Gantt.

 In oslo, AvailabilityZoneFilter only handles availability_zone from
 request_spec; but in Gantt, AvailabilityZoneFilter can handle
 availability_zone from both request_spec and aggregation metadata, we
 cannot sync AvailabilityZoneFilterfrom oslo to Gantt.

 What about split the AvailabilityZoneFilter in Gantt to two filters, one
 is AvailabilityZoneFilter which has same logic with oslo, the other we can
 name it as AggregateAvailabilityZoneFilter which will only handle
 availability_zone from aggregation metadata, this can make sure Gantt can
 sync AvailabilityZoneFilter from oslo and make AvailabilityZoneFilter a
 common scheduler filter for both Gantt and Cinder. What do you think?

 Thanks,

 Jay








Re: [openstack-dev] [nova][infra] nova py27 unit test failures in libvirt

2014-01-07 Thread Jay Lau
A bug was filed: https://bugs.launchpad.net/nova/+bug/1266711
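
Until the fix lands, a guard along these lines avoids the hard dependency
on the binding API (a sketch only; see the bug for the real resolution):

    import libvirt  # requires the libvirt python binding

    def register_close_callback(conn, cb):
        # Newer/older bindings may not expose registerCloseCallback on
        # virConnect, so probe for it instead of assuming it exists.
        if hasattr(conn, 'registerCloseCallback'):
            conn.registerCloseCallback(cb, None)

    # e.g.: conn = libvirt.open('qemu:///system')
    #       register_close_callback(conn, my_callback)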

Thanks,

Jay


2014/1/7 Lu, Lianhao lianhao...@intel.com

 Hi guys,

 This afternoon I suddenly found that there are quite a lot of nova py27
 unit test failures on Jenkins, like
 http://logs.openstack.org/15/62815/5/gate/gate-nova-python27/82d5d52/console.html
 .

 It seems to me that the registerCloseCallback method is not available any
 more in virConnect class. I'm not sure whether this is caused by a new
 version of libvirt python binding?

 Any comments?

 -Lianhao




Re: [openstack-dev] [Nova] libvirt unit test errors

2014-01-07 Thread Jay Lau
Gary,

Please search for the email titled "[openstack-dev] [nova][infra] nova py27
unit test failures in libvirt"; a bug has been filed for this:
https://bugs.launchpad.net/nova/+bug/1266711

Thanks,

Jay



2014/1/7 Gary Kotton gkot...@vmware.com

 Hi,
 Anyone aware of the following:

 2014-01-07 11:59:47.428 | Requirement already satisfied (use --upgrade to upgrade): markupsafe in ./.tox/py27/lib/python2.7/site-packages (from Jinja2=2.3-sphinx=1.1.2,1.2)
 2014-01-07 11:59:47.429 | Cleaning up...
 2014-01-07 12:01:32.134 | Unimplemented block at ../../relaxng.c:3824
 2014-01-07 12:01:33.893 | Unimplemented block at ../../relaxng.c:3824
 2014-01-07 12:10:25.292 | libvirt:  error : internal error: could not initialize domain event timer
 2014-01-07 12:11:32.783 | running=OS_STDOUT_CAPTURE=${OS_STDOUT_CAPTURE:-1} \
 2014-01-07 12:11:32.783 | OS_STDERR_CAPTURE=${OS_STDERR_CAPTURE:-1} \
 2014-01-07 12:11:32.783 | OS_TEST_TIMEOUT=${OS_TEST_TIMEOUT:-160} \
 2014-01-07 12:11:32.784 | ${PYTHON:-python} -m subunit.run discover -t ./ ./nova/tests --list
 [... several identical subunit.run worker invocations with --load-list trimmed;
 full console log:
 http://logs.openstack.org/10/60010/2/check/gate-nova-python27/ebd53ea/console.html ...]

[openstack-dev] [OpenStack][Nova][cold migration] Why we need confirm resize after cold migration

2014-01-08 Thread Jay Lau
Greetings,

I have a question related to cold migration.

Now in OpenStack nova, we support live migration, cold migration and resize.

For live migration, we do not need to confirm after the live migration
finishes.

For resize, we need to confirm, as we want to give the end user an
opportunity to roll back.

The problem is cold migration. Because cold migration and resize share the
same code path, once I submit a cold migration request and the cold
migration finishes, the VM goes into the VERIFY_RESIZE state and I need to
confirm the resize. I am a bit confused by this: why do I need to verify a
resize for a cold migration operation? Why not reset the VM to its original
state directly after cold migration?

Also, I think we probably need to split compute.api.resize() into two APIs,
one for resize and the other for cold migration (see the sketch after the
list below):

1) The VM state can be either ACTIVE or STOPPED for a resize operation.
2) The VM state must be STOPPED for a cold migrate operation.
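
A rough sketch of what I mean (hypothetical API shapes, not current nova
code):

    ACTIVE, STOPPED = 'active', 'stopped'

    def _migrate_and_resize(context, instance, flavor):
        """Placeholder for the shared migration/resize code path."""

    def resize(context, instance, new_flavor):
        # Resize accepts both ACTIVE and STOPPED instances.
        assert instance['vm_state'] in (ACTIVE, STOPPED)
        _migrate_and_resize(context, instance, flavor=new_flavor)

    def cold_migrate(context, instance):
        # Cold migration would require the instance to be STOPPED first.
        assert instance['vm_state'] == STOPPED
        _migrate_and_resize(context, instance, flavor=None)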

Any comments?

Thanks,

Jay


Re: [openstack-dev] [OpenStack][Nova][cold migration] Why we need confirm resize after cold migration

2014-01-08 Thread Jay Lau
Thanks, Russell. OK, I will file a bug for the first issue.

For the second question, I want to share some of my comments here. I think
that we should disable cold migration for an ACTIVE VM, as cold migration
will first destroy the VM and then re-create it when using KVM; I do not
see a use case where someone would want to do that.

Furthermore, this might confuse end users; it is really strange that both
cold migration and live migration can migrate an ACTIVE VM. Cold migration
should only target STOPPED instances.

What do you think?

Thanks,

Jay



2014/1/8 Russell Bryant rbry...@redhat.com

 On 01/08/2014 04:52 AM, Jay Lau wrote:
  Greetings,
 
  I have a question related to cold migration.
 
  Now in OpenStack nova, we support live migration, cold migration and
 resize.
 
  For live migration, we do not need to confirm after live migration
 finished.
 
  For resize, we need to confirm, as we want to give end user an
  opportunity to rollback.
 
  The problem is cold migration, because cold migration and resize share
  same code path, so once I submit a cold migration request and after the
  cold migration finished, the VM will goes to verify_resize state, and I
  need to confirm resize. I felt a bit confused by this, why do I need to
  verify resize for a cold migration operation? Why not reset the VM to
  original state directly after cold migration?

 The confirm step definitely makes more sense for the resize case.  I'm
 not sure if there was a strong reason why it was also needed for cold
 migration.

 If nobody comes up with a good reason to keep it, I'm fine with removing
 it.  It can't be changed in the v2 API, though.  This would be a v3 only
 change.

  Also, I think that probably we need split compute.api.resize() to two
  apis: one is for resize and the other is for cold migrations.
 
  1) The VM state can be either ACTIVE and STOPPED for a resize operation
  2) The VM state must be STOPPED for a cold migrate operation.

 I'm not sure why would require different states here, though.  ACTIVE
 and STOPPED are allowed now.

 --
 Russell Bryant




Re: [openstack-dev] [OpenStack][Nova][cold migration] Why we need confirm resize after cold migration

2014-01-08 Thread Jay Lau
2014/1/8 John Garbutt j...@johngarbutt.com

 On 8 January 2014 10:02, David Xie david.script...@gmail.com wrote:
  In nova/compute/api.py#2289, function resize, there's a parameter named
  flavor_id, if it is None, it is considered as cold migration. Thus, nova
  should skip resize verifying. However, it doesn't.
 
  Like Jay said, we should skip this step during cold migration, does it
 make
  sense?

 Not sure.

  On Wed, Jan 8, 2014 at 5:52 PM, Jay Lau jay.lau@gmail.com wrote:
 
  Greetings,
 
  I have a question related to cold migration.
 
  Now in OpenStack nova, we support live migration, cold migration and
  resize.
 
  For live migration, we do not need to confirm after live migration
  finished.
 
  For resize, we need to confirm, as we want to give end user an
 opportunity
  to rollback.
 
  The problem is cold migration, because cold migration and resize share
  same code path, so once I submit a cold migration request and after the
 cold
  migration finished, the VM will goes to verify_resize state, and I need
 to
  confirm resize. I felt a bit confused by this, why do I need to verify
  resize for a cold migration operation? Why not reset the VM to original
  state directly after cold migration?

 I think the idea was to allow users/admins to check everything went OK,
 and only delete the original VM when they have confirmed the move went
 OK.

 I thought there was an auto_confirm setting. Maybe you want
 auto_confirm cold migrate, but not auto_confirm resize?







[Jay] John, yes, that can also achieve my goal. Currently we only have
resize_confirm_window to handle auto-confirm, without distinguishing whether
it is a resize or a cold migration:

    # Automatically confirm resizes after N seconds. Set to 0 to
    # disable. (integer value)
    #resize_confirm_window=0

Perhaps we can add another parameter, say cold_migrate_confirm_window, to
handle confirmation for cold migration.
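
Registering it would be trivial, along these lines (a sketch using
oslo.config; cold_migrate_confirm_window is the proposed, hypothetical
option):

    from oslo.config import cfg  # oslo.config as packaged at the time

    interval_opts = [
        cfg.IntOpt('resize_confirm_window', default=0,
                   help='Automatically confirm resizes after N seconds. '
                        'Set to 0 to disable.'),
        # Proposed (hypothetical) counterpart for cold migration:
        cfg.IntOpt('cold_migrate_confirm_window', default=0,
                   help='Automatically confirm cold migrations after N '
                        'seconds. Set to 0 to disable.'),
    ]

    cfg.CONF.register_opts(interval_opts)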


  Also, I think that probably we need split compute.api.resize() to two
  apis: one is for resize and the other is for cold migrations.
 
  1) The VM state can be either ACTIVE and STOPPED for a resize operation
  2) The VM state must be STOPPED for a cold migrate operation.

 We just stop the VM them perform the migration.
 I don't think we need to require its stopped first.
 Am I missing something?

[Jay] Yes, but I'm just curious why someone would want to cold migrate an
ACTIVE VM. They can use live migration instead, which also makes sure the
VM migrates seamlessly.


 Thanks,
 John




Re: [openstack-dev] [OpenStack][Nova][cold migration] Why we need confirm resize after cold migration

2014-01-08 Thread Jay Lau
2014/1/9 Russell Bryant rbry...@redhat.com

 On 01/08/2014 09:53 AM, John Garbutt wrote:
  On 8 January 2014 10:02, David Xie david.script...@gmail.com wrote:
  In nova/compute/api.py#2289, function resize, there's a parameter named
  flavor_id, if it is None, it is considered as cold migration. Thus, nova
  should skip resize verifying. However, it doesn't.
 
  Like Jay said, we should skip this step during cold migration, does it
 make
  sense?
 
  Not sure.
 
  On Wed, Jan 8, 2014 at 5:52 PM, Jay Lau jay.lau@gmail.com wrote:
 
  Greetings,
 
  I have a question related to cold migration.
 
  Now in OpenStack nova, we support live migration, cold migration and
  resize.
 
  For live migration, we do not need to confirm after live migration
  finished.
 
  For resize, we need to confirm, as we want to give end user an
 opportunity
  to rollback.
 
  The problem is cold migration, because cold migration and resize share
  same code path, so once I submit a cold migration request and after
 the cold
  migration finished, the VM will goes to verify_resize state, and I
 need to
  confirm resize. I felt a bit confused by this, why do I need to verify
  resize for a cold migration operation? Why not reset the VM to original
  state directly after cold migration?
 
  I think the idea was allow users/admins to check everything went OK,
  and only delete the original VM when the have confirmed the move went
  OK.
 
  I thought there was an auto_confirm setting. Maybe you want
  auto_confirm cold migrate, but not auto_confirm resize?

 I suppose we could add an API parameter to auto-confirm these things.
 That's probably a good compromise.

OK, will use auto-confirm to handle this.



  Also, I think that probably we need split compute.api.resize() to two
  apis: one is for resize and the other is for cold migrations.
 
  1) The VM state can be either ACTIVE and STOPPED for a resize operation
  2) The VM state must be STOPPED for a cold migrate operation.
 
  We just stop the VM them perform the migration.
  I don't think we need to require its stopped first.
  Am I missing something?

 Don't think so ... I think we should leave it as is.

OK, will leave this as it is for now.


 --
 Russell Bryant




[openstack-dev] [OpenStack][Oslo] What is the policy of sync up from Oslo to other projects

2014-01-12 Thread Jay Lau
Hi,

I just want to know: do we have a policy for syncing code from Oslo to
other projects?

I have often noticed important changes in Oslo whose code was not synced to
other projects on time.

Do we want to sync from Oslo to other projects patch by patch, or sync
batches of patches at one time?

Thanks,

Jay


Re: [openstack-dev] [OpenStack][Oslo] What is the policy of sync up from Oslo to other projects

2014-01-12 Thread Jay Lau
Thanks Doug, clear now.

Regards,

Jay


2014/1/13 Doug Hellmann doug.hellm...@dreamhost.com




 On Sun, Jan 12, 2014 at 8:59 AM, Jay Lau jay.lau@gmail.com wrote:

 Hi,

 Just want to know do we have some policy to sync up from Oslo to other
 projects?

 I often noticed that there are some important changes to Oslo but the
 code did not sync up to other projects on time.

 Do we want to sync up from Oslo to other projects one by one or sync up
 batch patches at one time?


 Hi, Jay,

 The expectations are described under
 https://wiki.openstack.org/wiki/Oslo#Syncing_Code_from_Incubator

 We've fallen behind, so batch updates probably make the most sense at this
 point. We have a blueprint to add a feature to the update script to make it
 easy to include the git log info from the incubator repository in a commit
 message when syncing to another project (
 https://blueprints.launchpad.net/oslo/+spec/improve-update-script).

 Doug




 Thanks,

 Jay




 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [gantt] Patches to sync gantt up to the current nova tree

2014-01-12 Thread Jay Lau
According to
https://www.mail-archive.com/openstack-dev@lists.openstack.org/msg12973.html,
it was said that we should not stop nova scheduler development for now.

BTW: I think we may need to consider the following patches first, to make
Nova use the scheduler code from Oslo and reduce future work before syncing
up to Gantt.

https://review.openstack.org/#/c/66105/
https://review.openstack.org/#/c/65418/
https://review.openstack.org/#/c/65424/

Thanks,

Jay


2014/1/13 Clint Byrum cl...@fewbar.com

 Excerpts from Doug Hellmann's message of 2014-01-12 14:45:11 -0800:
  On Sun, Jan 12, 2014 at 5:30 PM, Dugger, Donald D 
 donald.d.dug...@intel.com
   wrote:
 
So I have 25 patches that I need to push to backport changes that have
   been made to the nova tree that apply to the gantt tree.  The problem
 is
   how do we want to approve these patches?  Given that they have already
 been
   reviewed and approved in the nova tree do we have to go through the
   overhead of doing new reviews in the gantt tree and, if not, how do we
   bypass that mechanism?
  
 
  Why is code being copied from nova directly?
 

 I suspect because gantt forked a while ago, but development has been
 allowed to continue in nova's scheduler code. Seems like that should be
 stopped at some point soon to reduce the extra sync effort.

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [gantt] Patches to sync gantt up to the current nova tree

2014-01-12 Thread Jay Lau
Cool! Thanks Don.

Regards,

Jay


2014/1/13 Dugger, Donald D donald.d.dug...@intel.com

  Jay-



 Those patches are not a problem.  Once they are approved and pushed into
 the nova tree I will backport them over to gantt.  To re-iterate, the idea
 is to keep the current development going in nova and only when the gantt
 tree is functional and provides the same functionality as the scheduler
 inside nova will we consider moving all development to gantt.



 --

 Don Dugger

 Censeo Toto nos in Kansa esse decisse. - D. Gale

 Ph: 303/443-3786



 *From:* Jay Lau [mailto:jay.lau@gmail.com]
 *Sent:* Sunday, January 12, 2014 5:39 PM
 *To:* OpenStack Development Mailing List (not for usage questions)
 *Subject:* Re: [openstack-dev] [gantt[ Patches to sync gantt up to the
 current nova tree



 According to
 https://www.mail-archive.com/openstack-dev@lists.openstack.org/msg12973.html,
 it was said that we should not stop nova scheduler development for now.

 BTW: I think we may need to consider the following patches first, to make
 Nova use the scheduler code from Oslo and reduce future work before
 syncing up to Gantt.

 https://review.openstack.org/#/c/66105/
 https://review.openstack.org/#/c/65418/
 https://review.openstack.org/#/c/65424/


 Thanks,

 Jay



 2014/1/13 Clint Byrum cl...@fewbar.com

 Excerpts from Doug Hellmann's message of 2014-01-12 14:45:11 -0800:

  On Sun, Jan 12, 2014 at 5:30 PM, Dugger, Donald D 
 donald.d.dug...@intel.com
   wrote:
 
So I have 25 patches that I need to push to backport changes that have
   been made to the nova tree that apply to the gantt tree.  The problem
 is
   how do we want to approve these patches?  Given that they have already
 been
   reviewed and approved in the nova tree do we have to go through the
   overhead of doing new reviews in the gantt tree and, if not, how do we
   bypass that mechanism?
  
 
  Why is code being copied from nova directly?
 

 I suspect because gantt forked a while ago, but development has been
 allowed to continue in nova's scheduler code. Seems like that should be
 stopped at some point soon to reduce the extra sync effort.


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [OpenStack][Nova][Cold Migration] What about enable cold migration with target host

2014-01-13 Thread Jay Lau
Greetings,

Now cold migration does not support migrating a VM instance to a specified
target host; what about adding this feature to enable cold migration with a
target host?

I encountered this issue because I was creating an HA service that monitors
for host failures, and the HA service lets customers write plugins to
predict host status.

If a host is going down, the customized plugin reports the status to the HA
service. The HA service then does live migration for VMs in ACTIVE state
and cold migration for VMs in STOPPED state. The problem is that the HA
service selects the target host for both cold migration and live migration.
Live migration supports migrating a VM to a target host, so I can pass the
target host returned by the HA service to live migration; cold migration
does not support migrating to a target host.

So what about adding this feature to nova?
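
A sketch of the proposed call next to the existing one (python-novaclient
of this era; the host parameter on migrate is the feature being requested
and does not exist yet):

    from novaclient.v1_1 import client

    # credentials and endpoint are placeholders
    nova = client.Client('admin', 'secret', 'admin',
                         'http://127.0.0.1:5000/v2.0')
    server = nova.servers.find(name='vm0001')

    # live migration already accepts a target host today:
    nova.servers.live_migrate(server, host='host2',
                              block_migration=False,
                              disk_over_commit=False)

    # the proposal is the cold-migration analogue (hypothetical; this
    # keyword does not exist on migrate in the current client):
    # nova.servers.migrate(server, host='host2')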

Thanks,

Jay
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack][Nova][Cold Migration] What about enable cold migration with target host

2014-01-13 Thread Jay Lau
Thanks Russell, will add this to the V3 API and leave the V2 API as it is.

Regards,

Jay


2014/1/13 Russell Bryant rbry...@redhat.com

 On 01/13/2014 03:16 AM, Jay Lau wrote:
  Greetings,
 
   Now cold migration does not support migrating a VM instance to a target
   host; what about adding this feature to enable cold migration with a
   target host?

 Sounds reasonable to me.

 --
 Russell Bryant

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack][Nova][Cold Migration] What about enable cold migration with target host

2014-01-14 Thread Jay Lau
The bug was filed: https://bugs.launchpad.net/nova/+bug/1268622

The HA service is still under development and has not been open sourced.

Thanks,

Jay



2014/1/14 Lingxian Kong anlin.k...@gmail.com

 hi Jay:

 Could you send me the related blueprint or bug reference if you have
 proposed one? Thanks very much!

 And by the way, is the code of the HA service you have implemented open
 source?


 2014/1/13 Jay Lau jay.lau@gmail.com

 Thanks Russell, will add this to the V3 API and leave the V2 API as it is.

 Regards,

 Jay


 2014/1/13 Russell Bryant rbry...@redhat.com

 On 01/13/2014 03:16 AM, Jay Lau wrote:
  Greetings,
 
  Now cold migration does not support migrating a VM instance to a target
  host; what about adding this feature to enable cold migration with a
  target host?

 Sounds reasonable to me.

 --
 Russell Bryant

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




 --
 *---*
 *Lingxian Kong*
 Huawei Technologies Co.,LTD.
 IT Product Line CloudOS PDU
 China, Xi'an
 Mobile: +86-18602962792
 Email: konglingx...@huawei.com; anlin.k...@gmail.com

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack][HEAT][Dashboard] Edit JSON template on OpenStack Dashboard

2014-01-15 Thread Jay Lau
Thanks Tim and Liz, comments in line.



2014/1/15 Tim Schnell tim.schn...@rackspace.com

 On 1/15/14 9:01 AM, Liz Blanchard lsure...@redhat.com wrote:


 
 On Jan 9, 2014, at 4:41 AM, Jay Lau jay.lau@gmail.com wrote:
 
 
 My bad, the image cannot be viewed.
 
 
 Upload again.
 
 
 
 Thanks,
 
 
 Jay
 
 
 
 2014/1/9 Jay Lau jay.lau@gmail.com
 
 Hi,
 
 
 Now when using OpenStack dashboard to launch a stack, we need to first
 import the template then create the stack, but there is no way to enable
 admin to view/modify the template.

 Hi Jay,

 Sorry I meant to respond to this a few days ago. Currently, in the
 dashboard you have the ability to copy/paste a template into a text area
 and then edit it before you attempt the stack-create. This seems to solve
 the use case that you have mentioned although I agree that there is room
 for improvement in the user experience.

 The reason I bring up the distinction is because the template that you
 reference below has a much broader scope. It includes storing and managing
 templates for future use. If you are intending to add all of this
 functionality into the Dashboard then I would suggest waiting for a
 template storage solution to get done. There is currently a blueprint and
 discussion happening in Glance to take on this ability and then I would
 imagine that the Dashboard can consume it.

 see:
 https://blueprints.launchpad.net/glance/+spec/metadata-artifact-repository

 If you are intending to simply add an additional page for editing the
 template if the user chooses to retrieve it via the URL option then my
 only suggestion would be to have a flag that gets stored in the session
 that allows the user to bypass the editing step for launching templates
 since some users will never need to edit templates.

@Tim, I think it might be OK for the admin to still be able to see the
overall JSON/YAML template on the edit page even if s/he does not want to
edit it. ;-)
I'm planning to add both edit and create pages to Heat, and I noticed that
AWS already has such a feature:
http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/cfn-using-cloudformer.html,
so I'm now checking whether we can leverage some of it.
So I think the dashboard should probably add a new button named create.
Yes, we cannot store templates now, so my thinking is that once the admin
finishes creating a template, s/he can simply export it and save it locally
for future use.


 
 
 What about add a new page between import template and launch stack to
 give admin an opportunity to edit the template?
 
 
 I noticed that we already have a blueprint tracing this:
  https://blueprints.launchpad.net/horizon/+spec/heat-template-management
 
 
 
  I did some investigation; it seems we can leverage
  http://jsoneditoronline.org/ to enable the OpenStack dashboard to have
  this ability (only for JSON templates).
 
 
 
 Hi Jay,
 
 
 I really like the idea of allowing users to edit these templates easily
 right through the dashboard. Are all template files in JSON? I would say
 we should try to find a solution to add to the dashboard that would
 support all (or as many as possible) formats
  of template that we can.

 I agree with Liz here, templates can be in JSON or YAML format so if we do
 add a javascript library to provide syntax highlighting and things then I
 would want it to work with YAML as well as JSON.

 -Tim

 
 
 Best,
 Liz
 
 
 
 
 
 
 
 
 
 Thanks,
 
 
 Jay
 
 
 
 
 
 
 json.png___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 
 


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [OpenStack][Nova][compute] Why prune all compute node stats when sync up compute nodes

2014-01-15 Thread Jay Lau
Greeting,

In compute/manager.py there is a periodic task named
update_available_resource(); it updates the resources for each compute node
periodically.

    @periodic_task.periodic_task
    def update_available_resource(self, context):
        """See driver.get_available_resource()

        Periodic process that keeps that the compute host's understanding of
        resource availability and usage in sync with the underlying
        hypervisor.

        :param context: security context
        """
        new_resource_tracker_dict = {}
        nodenames = set(self.driver.get_available_nodes())
        for nodename in nodenames:
            rt = self._get_resource_tracker(nodename)
            rt.update_available_resource(context)  # <== update here
            new_resource_tracker_dict[nodename] = rt

In resource_tracker.py,
https://github.com/openstack/nova/blob/master/nova/compute/resource_tracker.py#L384

self._update(context, resources, prune_stats=True)

It always sets prune_stats to True, and this caused some problems for me. I
am now putting some metrics into the compute_node_stats table; those
metrics do not change frequently, so I do not update them frequently. But
the periodic task always prunes the new metrics that I added.

What about adding a configuration parameter in nova.conf to make
prune_stats configurable?
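
A minimal sketch of that proposal, assuming an oslo.config boolean option
(the option name and default are assumptions, not merged code):

    from oslo.config import cfg

    prune_opts = [
        cfg.BoolOpt('prune_compute_node_stats',
                    default=True,
                    help='Whether the resource tracker prunes stale '
                         'compute node stats on each periodic sync.'),
    ]
    CONF = cfg.CONF
    CONF.register_opts(prune_opts)

    # The resource tracker would then honor the flag instead of
    # hardcoding True:
    # self._update(context, resources,
    #              prune_stats=CONF.prune_compute_node_stats)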

Thanks,

Jay
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] why don't we deal with claims when live migrating an instance?

2014-01-16 Thread Jay Lau
Hi Scott,

I'm now trying to fix this issue at
https://blueprints.launchpad.net/nova/+spec/auto-confirm-cold-migration

After the fix, we do not need to confirm the cold migration.

http://lists.openstack.org/pipermail/openstack-dev/2014-January/023726.html

Thanks,

Jay


2014/1/17 Scott Devoid dev...@anl.gov

 Related question: Why does resize get called (and the VM put in RESIZE
 VERIFY state) when migrating from one machine to another, keeping the same
 flavor?


 On Thu, Jan 16, 2014 at 9:54 AM, Brian Elliott bdelli...@gmail.comwrote:


 On Jan 15, 2014, at 4:34 PM, Clint Byrum cl...@fewbar.com wrote:

  Hi Chris. Your thread may have gone unnoticed as it lacked the Nova tag.
  I've added it to the subject of this reply... that might attract them.
  :)
 
  Excerpts from Chris Friesen's message of 2014-01-15 12:32:36 -0800:
  When we create a new instance via _build_instance() or
  _build_and_run_instance(), in both cases we call instance_claim() to
  reserve and test for resources.
 
  During a cold migration I see us calling prep_resize() which calls
  resize_claim().
 
  How come we don't need to do something like this when we live migrate
 an
  instance?  Do we track the hypervisor overhead somewhere in the
 instance?
 
  Chris
 

 It is a good point and it should be done.  It is effectively a bug.

 Brian
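
 For context, a short sketch of the claim calls being discussed (method
 names as they appear in nova's resource tracker in this era; treat this
 as an illustration, not exact code):

     # Boot path (_build_instance / _build_and_run_instance):
     with rt.instance_claim(context, instance):
         ...  # spawn the instance

     # Resize / cold migration path (prep_resize):
     with rt.resize_claim(context, instance, instance_type):
         ...  # migrate the disk and re-create the instance

     # Live migration performs neither claim today, which is the gap
     # being discussed.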

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova]can someone help me? when I use cmd nova migration-list error.

2014-01-19 Thread Jay Lau
It is being fixed https://review.openstack.org/#/c/61717/

Thanks,

Jay


2014/1/20 li zheming lizhemin...@gmail.com

 hi all:
   When I use the cmd nova migration-list, it returns an error, like this:

 openstack@devstack:/home$ nova migration-list
 ERROR: 'unicode' object has no attribute 'iteritems'

I stepped through the code and found it has an error.


python-novaclient/novaclient/base.py

class Manager(utils.HookableMixin):
    ...
    def _list(self, url, response_key, obj_class=None, body=None):
        if body:
            _resp, body = self.api.client.post(url, body=body)
        else:
            _resp, body = self.api.client.get(url)

        if obj_class is None:
            obj_class = self.resource_class

        data = body[response_key]
        # NOTE(ja): keystone returns values as list as {'values': [ ... ]}
        # unlike other services which just return the list...
        if isinstance(data, dict):
            try:
                data = data['values']
            except KeyError:
                pass

        with self.completion_cache('human_id', obj_class, mode="w"):
            with self.completion_cache('uuid', obj_class, mode="w"):
                return [obj_class(self, res, loaded=True)
                        for res in data if res]

 I set a breakpoint at data = data['values'] and found that data is
 {u'objects': []}; it has no key named 'values'.

 It raises a KeyError, which is passed.

 If we then reach "for res in data if res", each res is a unicode object,
 and this causes the error in the next function (see the snippet below).
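
 A minimal reproduction of the failure described above (the data shape is
 taken from the pasted breakpoint value):

     # Python 2: iterating a dict yields its keys, so each res becomes
     # the unicode key u'objects' rather than a migration record; the
     # next call that expects a mapping then fails with
     # "'unicode' object has no attribute 'iteritems'".
     data = {u'objects': []}
     print([res for res in data if res])  # -> [u'objects']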

 Have you met this issue? And does someone know why the comment says
 keystone returns values as a list, as {'values': [ ... ]}?

 But I think this is not relevant to keystone. Maybe I misunderstand this
 code; please give me more info about it.

   thank you very much!






 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack][Nova][cold migration] Why we need confirm resize after cold migration

2014-01-19 Thread Jay Lau
Hey,

I have to bring up this topic again. A patch,
https://review.openstack.org/#/c/66101/, has been uploaded for review to
resolve the cold migration auto-confirm issue.

One question I want to get some input from you guys on: do we need to
distinguish the V2 and V3 APIs for this behavior? My thinking is that we do
not need to distinguish V2 and V3 for the auto-confirm feature, for the
following reasons:
1) It is a new feature for Icehouse; for cold migration, Icehouse will
auto-confirm all cold migration operations.
2) We cannot know whether the instance was cold migrated via V2 or V3; if
we really want to distinguish, we may need to add some data to the system
metadata to mark which API the VM was cold migrated by.

Any comments?

Thanks,

Jay



2014/1/9 John Garbutt j...@johngarbutt.com

 On 8 January 2014 15:29, Jay Lau jay.lau@gmail.com wrote:
  2014/1/8 John Garbutt j...@johngarbutt.com
 
  On 8 January 2014 10:02, David Xie david.script...@gmail.com wrote:
   In nova/compute/api.py#2289, function resize, there's a parameter
 named
   flavor_id, if it is None, it is considered as cold migration. Thus,
 nova
   should skip resize verifying. However, it doesn't.
  
   Like Jay said, we should skip this step during cold migration, does it
   make
   sense?
 
  Not sure.
 
   On Wed, Jan 8, 2014 at 5:52 PM, Jay Lau jay.lau@gmail.com
 wrote:
  
   Greetings,
  
   I have a question related to cold migration.
  
   Now in OpenStack nova, we support live migration, cold migration and
   resize.
  
   For live migration, we do not need to confirm after live migration
   finished.
  
   For resize, we need to confirm, as we want to give end user an
   opportunity
   to rollback.
  
   The problem is cold migration, because cold migration and resize
 share
   same code path, so once I submit a cold migration request and after
 the
   cold
   migration finished, the VM will go to the verify_resize state, and I
 need
   to
   confirm resize. I felt a bit confused by this, why do I need to
 verify
   resize for a cold migration operation? Why not reset the VM to
 original
   state directly after cold migration?
 
  I think the idea was to allow users/admins to check everything went OK,
  and only delete the original VM when they have confirmed the move went
  OK.
 
  I thought there was an auto_confirm setting. Maybe you want
  auto_confirm cold migrate, but not auto_confirm resize?
 
  [Jay] John, yes, that can also reach my goal. Now we only have
  resize_confirm_window to handle auto confirm without considering it is
  resize or cold migration.
  # Automatically confirm resizes after N seconds. Set to 0 to
  # disable. (integer value)
  #resize_confirm_window=0
 
  Perhaps we can add another parameter say cold_migrate_confirm_window to
  handle confirm for cold migration.

 I like Russell's suggestion, but maybe implement it as always doing
 auto_confirm for cold migrate in v3 API, and leaving it as is for
 resize.

 See if people like that, I should check with our ops guys.

   Also, I think that probably we need split compute.api.resize() to two
   apis: one is for resize and the other is for cold migrations.
  
   1) The VM state can be either ACTIVE and STOPPED for a resize
 operation
   2) The VM state must be STOPPED for a cold migrate operation.
 
  We just stop the VM, then perform the migration.
  I don't think we need to require its stopped first.
  Am I missing something?
 
  [Jay] Yes, but just curious why someone want to cold migrate an ACTIVE
 VM?
  They can use live migration instead and this can also make sure the VM
  migrate seamlessly.

 If a disk is failing, people like to turn off the VMs to reduce load
 on that host while performing the migrations.

 And live-migrate (sadly) does not yet work in all configurations yet,
 so its useful where live-migrate is not possible.

 Also live-migrate with block_migration can use quite a lot more
 network bandwidth than cold migration, at least in the XenServer case.

 John

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack][Nova][cold migration] Why we need confirm resize after cold migration

2014-01-21 Thread Jay Lau
2014/1/22 Christopher Yeoh cbky...@gmail.com

 On Mon, Jan 20, 2014 at 6:10 PM, Jay Lau jay.lau@gmail.com wrote:

 Hey,

 I have to bring up this topic again. A patch
 https://review.openstack.org/#/c/66101/ has been uploaded for review to
 resolve the cold migration auto confirm issue.

 One question want to get some input from you guys: Do we need to
 distinguish V2 and V3 API for this behavior? My thinking is that we do not
 need to distinguish this for V2 and V3 API for the auto confirm feature,
 the reason is as following:
 1) It is a new feature for Icehouse, for cold migration, Icehouse will
 auto confirm all cold migration operations.
 2) We cannot know if the instance was cold migrated by V2 or V3, if
 really want to distinguish, we may need to add some data in system metadata
 to mark if the VM was cold migrated by V2 or V3 API.


 I think we do need to distinguish between the V2 and V3 API. Once
 released, the APIs have to remain stable in their behaviour even between
 openstack releases. So the behaviour for the V2 API has to remain the same
 otherwise we risk breaking existing applications which use the V2 API
 (which should not have to care if they are running against Havana or
 Icehouse).


@Chris, thanks for your comments. So do you have any comments on how we can
distinguish between the V2 and V3 APIs? My current thinking is that perhaps
we can add some system metadata to do this.
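
A minimal sketch of that idea (the metadata key name is an assumption, not
an agreed convention):

    # Inside the cold-migrate path, stamp which API version initiated
    # the migration so the confirm path can tell them apart later
    # (hypothetical key name):
    instance.system_metadata['cold_migrate_api_version'] = 'v3'
    instance.save()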




 Any comments?

 Thanks,

 Jay



 2014/1/9 John Garbutt j...@johngarbutt.com

 On 8 January 2014 15:29, Jay Lau jay.lau@gmail.com wrote:
  2014/1/8 John Garbutt j...@johngarbutt.com
 
  On 8 January 2014 10:02, David Xie david.script...@gmail.com wrote:
   In nova/compute/api.py#2289, function resize, there's a parameter
 named
   flavor_id, if it is None, it is considered as cold migration. Thus,
 nova
   should skip resize verifying. However, it doesn't.
  
   Like Jay said, we should skip this step during cold migration, does
 it
   make
   sense?
 
  Not sure.
 
   On Wed, Jan 8, 2014 at 5:52 PM, Jay Lau jay.lau@gmail.com
 wrote:
  
   Greetings,
  
   I have a question related to cold migration.
  
   Now in OpenStack nova, we support live migration, cold migration
 and
   resize.
  
   For live migration, we do not need to confirm after live migration
   finished.
  
   For resize, we need to confirm, as we want to give end user an
   opportunity
   to rollback.
  
   The problem is cold migration, because cold migration and resize
 share
   same code path, so once I submit a cold migration request and
 after the
   cold
   migration finished, the VM will go to the verify_resize state, and I
 need
   to
   confirm resize. I felt a bit confused by this, why do I need to
 verify
   resize for a cold migration operation? Why not reset the VM to
 original
   state directly after cold migration?
 
  I think the idea was to allow users/admins to check everything went OK,
  and only delete the original VM when they have confirmed the move went
  OK.
 
  I thought there was an auto_confirm setting. Maybe you want
  auto_confirm cold migrate, but not auto_confirm resize?
 
  [Jay] John, yes, that can also reach my goal. Now we only have
  resize_confirm_window to handle auto confirm without considering it is
  resize or cold migration.
  # Automatically confirm resizes after N seconds. Set to 0 to
  # disable. (integer value)
  #resize_confirm_window=0
 
  Perhaps we can add another parameter say cold_migrate_confirm_window to
  handle confirm for cold migration.

 I like Russell's suggestion, but maybe implement it as always doing
 auto_confirm for cold migrate in v3 API, and leaving it as is for
 resize.

 See if people like that, I should check with our ops guys.

   Also, I think that probably we need split compute.api.resize() to
 two
   apis: one is for resize and the other is for cold migrations.
  
   1) The VM state can be either ACTIVE and STOPPED for a resize
 operation
   2) The VM state must be STOPPED for a cold migrate operation.
 
  We just stop the VM, then perform the migration.
  I don't think we need to require its stopped first.
  Am I missing something?
 
  [Jay] Yes, but just curious why someone want to cold migrate an ACTIVE
 VM?
  They can use live migration instead and this can also make sure the VM
  migrate seamlessly.

 If a disk is failing, people like to turn off the VMs to reduce load
 on that host while performing the migrations.

 And live-migrate (sadly) does not yet work in all configurations yet,
 so its useful where live-migrate is not possible.

 Also live-migrate with block_migration can use quite a lot more
 network bandwidth than cold migration, at least in the XenServer case.

 John

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

Re: [openstack-dev] [OpenStack][Nova][cold migration] Why we need confirm resize after cold migration

2014-01-21 Thread Jay Lau
Thanks Chris and Alex :-)

@Chris, OK, I will update patch then.

@Alex, yes, John also give some explanation as following:

=
If a disk is failing, people like to turn off the VMs to reduce load
on that host while performing the migrations.

And live-migrate (sadly) does not yet work in all configurations yet,
so its useful where live-migrate is not possible.

Also live-migrate with block_migration can use quite a lot more
network bandwidth than cold migration, at least in the XenServer case.
=



2014/1/22 Alex Xu x...@linux.vnet.ibm.com

  On 2014-01-08 23:12, Jay Lau wrote:

  Thanks Russell, OK, will file a bug for first issue.

 For the second question, I want to share some of my comments here. I think
 we should disable cold migration for an ACTIVE VM, as cold migration will
 first destroy the VM and then re-create it when using KVM; I did not see a
 use case where someone would want to do that.

 Even further, this might confuse the end user; it is really strange that
 both cold migration and live migration can migrate an ACTIVE VM. Cold
 migration should only target STOPPED VM instances.


 I think cold migrating an ACTIVE VM is OK. The difference between cold
 migration and live migration is that with live migration there is no
 downtime for the VM; cold migration brings the VM down first, then
 migrates it.



 What do you think?

  Thanks,

  Jay



 2014/1/8 Russell Bryant rbry...@redhat.com

 On 01/08/2014 04:52 AM, Jay Lau wrote:
  Greetings,
 
  I have a question related to cold migration.
 
  Now in OpenStack nova, we support live migration, cold migration and
 resize.
 
  For live migration, we do not need to confirm after live migration
 finished.
 
  For resize, we need to confirm, as we want to give end user an
  opportunity to rollback.
 
  The problem is cold migration, because cold migration and resize share
  same code path, so once I submit a cold migration request and after the
  cold migration finished, the VM will go to the verify_resize state, and I
  need to confirm resize. I felt a bit confused by this, why do I need to
  verify resize for a cold migration operation? Why not reset the VM to
  original state directly after cold migration?

  The confirm step definitely makes more sense for the resize case.  I'm
 not sure if there was a strong reason why it was also needed for cold
 migration.

 If nobody comes up with a good reason to keep it, I'm fine with removing
 it.  It can't be changed in the v2 API, though.  This would be a v3 only
 change.

  Also, I think that probably we need split compute.api.resize() to two
  apis: one is for resize and the other is for cold migrations.
 
  1) The VM state can be either ACTIVE and STOPPED for a resize operation
  2) The VM state must be STOPPED for a cold migrate operation.

  I'm not sure why would require different states here, though.  ACTIVE
 and STOPPED are allowed now.

 --
 Russell Bryant

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




 ___
 OpenStack-dev mailing 
 listOpenStack-dev@lists.openstack.orghttp://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [OpenStack][Nova][VCDriver] VMWare VCDriver problems

2014-02-12 Thread Jay Lau
Greetings,

I am now doing some integration with the VMware VCDriver and have some
questions from the integration work.

1) At the Hong Kong Summit it was mentioned that the ESXDriver will be
dropped; do we have any plan for when to drop this driver?
2) Many good VMware features are not supported by the VCDriver, such as
live migration, cold migration and resize within one vSphere cluster; also,
we cannot get individual ESX server details via the VCDriver.

Do we have any plans to make those features work?

-- 
Thanks,

Jay
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack][Nova][VCDriver] VMWare VCDriver problems

2014-02-14 Thread Jay Lau
Cool, thanks Gary.

Do you have any bugs or blueprints filed in Launchpad to track those issues?


2014-02-14 17:11 GMT+08:00 Gary Kotton gkot...@vmware.com:

 Hi,
 We are currently looking into that.
 Thanks
 Gary

 From: Jay Lau jay.lau@gmail.com
 Reply-To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 Date: Thursday, February 13, 2014 11:14 AM

 To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 Subject: Re: [openstack-dev] [OpenStack][Nova][VCDriver] VMWare VCDriver
 problems

 Thanks Gary.

What about live migration with the VCDriver? Currently I cannot do live
migration between ESX servers in one cluster.

 2014-02-13 16:47 GMT+08:00 Gary Kotton gkot...@vmware.com:

 Hi,
 The commit
 https://github.com/openstack/nova/commit/c4bf32c03283cbedade9ab8ca99e5b13b9b86ccb
  added
 a warning that the ESX driver is not tested. My understanding is that there
 are a number of people using the ESX driver so it should not be deprecated.
 In order to get the warning removed we will need to have CI on the driver.
 As far as I know there is no official decision to deprecate it.
 Thanks
 Gary

 From: Jay Lau jay.lau@gmail.com
 Reply-To: OpenStack Development Mailing List (not for usage questions)
 openstack-dev@lists.openstack.org
 Date: Thursday, February 13, 2014 4:00 AM
 To: OpenStack Development Mailing List openstack-dev@lists.openstack.org
 
 Subject: [openstack-dev] [OpenStack][Nova][VCDriver] VMWare VCDriver
 problems

 Greetings,

 I was now doing some integration with VMWare VCDriver and have some
 questions during the integration work.

 1) At the Hong Kong Summit it was mentioned that the ESXDriver will be
 dropped; do we have any plan for when to drop this driver?
 2) Many good VMware features are not supported by the VCDriver, such as
 live migration, cold migration and resize within one vSphere cluster;
 also, we cannot get individual ESX server details via the VCDriver.

 Do we have any plans to make those features work?

 --
 Thanks,

 Jay

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




 --
 Thanks,

 Jay

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Thanks,

Jay
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Nova][VMWare] VMwareVCDriver related to resize/cold migration

2014-02-15 Thread Jay Lau
Hey,

I have one question related to the OpenStack vmwareapi.VMwareVCDriver
resize/cold migration.

The following is my configuration:

 DC
|
|Cluster1
|  |
|  |9.111.249.56
|
|Cluster2
   |
   |9.111.249.49

*Scenario 1:*
I started two nova computes manage the two clusters:
1) nova-compute1.conf
cluster_name=Cluster1

2) nova-compute2.conf
cluster_name=Cluster2

3) Start up two nova computes on host1 and host2 separately
4) Create one VM instance and the VM instance was booted on Cluster2 node
9.111.249.49
| OS-EXT-SRV-ATTR:host | host2 |
| OS-EXT-SRV-ATTR:hypervisor_hostname  | domain-c16(Cluster2) |
5) Cold migrate the VM instance
6) After migration finished, the VM goes to VERIFY_RESIZE status, and nova
show indicates that the VM now located on host1:Cluster1
| OS-EXT-SRV-ATTR:host | host1 |
| OS-EXT-SRV-ATTR:hypervisor_hostname  | domain-c12(Cluster1) |
7) But the vSphere client indicates that the VM was still running on
Cluster2
8) Try to confirm the resize; the confirm fails. The root cause is that the
nova compute on host2 has no knowledge of domain-c12(Cluster1)

2014-02-16 07:10:17.166 12720 TRACE nova.openstack.common.rpc.amqp   File
/usr/lib/python2.6/site-packages/nova/compute/manager.py, line 2810, in
do_confirm_resize
2014-02-16 07:10:17.166 12720 TRACE nova.openstack.common.rpc.amqp
migration=migration)
2014-02-16 07:10:17.166 12720 TRACE nova.openstack.common.rpc.amqp   File
/usr/lib/python2.6/site-packages/nova/compute/manager.py, line 2836, in
_confirm_resize
2014-02-16 07:10:17.166 12720 TRACE nova.openstack.common.rpc.amqp
network_info)
2014-02-16 07:10:17.166 12720 TRACE nova.openstack.common.rpc.amqp   File
/usr/lib/python2.6/site-packages/nova/virt/vmwareapi/driver.py, line 420,
in confirm_migration
2014-02-16 07:10:17.166 12720 TRACE nova.openstack.common.rpc.amqp
_vmops = self._get_vmops_for_compute_node(instance['node'])
2014-02-16 07:10:17.166 12720 TRACE nova.openstack.common.rpc.amqp   File
/usr/lib/python2.6/site-packages/nova/virt/vmwareapi/driver.py, line 523,
in _get_vmops_for_compute_node
2014-02-16 07:10:17.166 12720 TRACE nova.openstack.common.rpc.amqp
resource = self._get_resource_for_node(nodename)
2014-02-16 07:10:17.166 12720 TRACE nova.openstack.common.rpc.amqp   File
/usr/lib/python2.6/site-packages/nova/virt/vmwareapi/driver.py, line 515,
in _get_resource_for_node
2014-02-16 07:10:17.166 12720 TRACE nova.openstack.common.rpc.amqp
raise exception.NotFound(msg)
2014-02-16 07:10:17.166 12720 TRACE nova.openstack.common.rpc.amqp
NotFound: NV-3AB798A The resource domain-c12(Cluster1) does not exist
2014-02-16 07:10:17.166 12720 TRACE nova.openstack.common.rpc.amqp


*Scenario 2:*

1) Started two nova computes managing the two clusters, but the two
computes have the same nova.conf.
1) nova-compute1.conf
cluster_name=Cluster1
cluster_name=Cluster2

2) nova-compute2.conf
cluster_name=Cluster1
cluster_name=Cluster2

3) Then create and resize/cold migrate a VM, it can always succeed.


*Questions:*
For multi-cluster management, does VMware require all nova computes to have
the same cluster configuration to make sure resize/cold migration can
succeed?

-- 
Thanks,

Jay
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova][VMWare] VMwareVCDriver related to resize/cold migration

2014-02-16 Thread Jay Lau
Thanks Gary, clear now. ;-)


2014-02-16 21:40 GMT+08:00 Gary Kotton gkot...@vmware.com:

 Hi,
 There are two issues here.
 The first is a bug fix that is in review:
 - https://review.openstack.org/#/c/69209/ (this is where they have the
 same configuration)
 The second is WIP:
 - https://review.openstack.org/#/c/69262/ (we need to restore)
 Thanks
 Gary

 From: Jay Lau jay.lau@gmail.com
 Reply-To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 Date: Sunday, February 16, 2014 6:39 AM
 To: OpenStack Development Mailing List openstack-dev@lists.openstack.org
 Subject: [openstack-dev] [Nova][VMWare] VMwareVCDriver related to
 resize/cold migration

 Hey,

 I have one question related with OpenStack vmwareapi.VMwareVCDriver
 resize/cold migration.

 The following is my configuration:

  DC
 |
 |Cluster1
 |  |
 |  |9.111.249.56
 |
 |Cluster2
|
|9.111.249.49

 *Scenario 1:*
 I started two nova computes manage the two clusters:
 1) nova-compute1.conf
 cluster_name=Cluster1

 2) nova-compute2.conf
 cluster_name=Cluster2

 3) Start up two nova computes on host1 and host2 separately
 4) Create one VM instance and the VM instance was booted on Cluster2 node
 9.111.249.49
 | OS-EXT-SRV-ATTR:host | host2 |
 | OS-EXT-SRV-ATTR:hypervisor_hostname  | domain-c16(Cluster2) |
 5) Cold migrate the VM instance
 6) After migration finished, the VM goes to VERIFY_RESIZE status, and
 nova show indicates that the VM now located on host1:Cluster1
 | OS-EXT-SRV-ATTR:host | host1 |
 | OS-EXT-SRV-ATTR:hypervisor_hostname  | domain-c12(Cluster1) |
 7) But the vSphere client indicates that the VM was still running on
 Cluster2
 8) Try to confirm the resize; the confirm fails. The root cause is that
 the nova compute on host2 has no knowledge of domain-c12(Cluster1)

 2014-02-16 07:10:17.166 12720 TRACE nova.openstack.common.rpc.amqp   File
 /usr/lib/python2.6/site-packages/nova/compute/manager.py, line 2810, in
 do_confirm_resize
 2014-02-16 07:10:17.166 12720 TRACE nova.openstack.common.rpc.amqp
 migration=migration)
 2014-02-16 07:10:17.166 12720 TRACE nova.openstack.common.rpc.amqp   File
 /usr/lib/python2.6/site-packages/nova/compute/manager.py, line 2836, in
 _confirm_resize
 2014-02-16 07:10:17.166 12720 TRACE nova.openstack.common.rpc.amqp
 network_info)
 2014-02-16 07:10:17.166 12720 TRACE nova.openstack.common.rpc.amqp   File
 /usr/lib/python2.6/site-packages/nova/virt/vmwareapi/driver.py, line 420,
 in confirm_migration
 2014-02-16 07:10:17.166 12720 TRACE nova.openstack.common.rpc.amqp
 _vmops = self._get_vmops_for_compute_node(instance['node'])
 2014-02-16 07:10:17.166 12720 TRACE nova.openstack.common.rpc.amqp   File
 /usr/lib/python2.6/site-packages/nova/virt/vmwareapi/driver.py, line 523,
 in _get_vmops_for_compute_node
 2014-02-16 07:10:17.166 12720 TRACE nova.openstack.common.rpc.amqp
 resource = self._get_resource_for_node(nodename)
 2014-02-16 07:10:17.166 12720 TRACE nova.openstack.common.rpc.amqp   File
 /usr/lib/python2.6/site-packages/nova/virt/vmwareapi/driver.py, line 515,
 in _get_resource_for_node
 2014-02-16 07:10:17.166 12720 TRACE nova.openstack.common.rpc.amqp
 raise exception.NotFound(msg)
 2014-02-16 07:10:17.166 12720 TRACE nova.openstack.common.rpc.amqp
 NotFound: NV-3AB798A The resource domain-c12(Cluster1) does not exist
 2014-02-16 07:10:17.166 12720 TRACE nova.openstack.common.rpc.amqp


 *Scenario 2:*

 1) Started two nova computes managing the two clusters, but the two
 computes have the same nova.conf.
 1) nova-compute1.conf
 cluster_name=Cluster1
 cluster_name=Cluster2

 2) nova-compute2.conf
 cluster_name=Cluster1
 cluster_name=Cluster2

 3) Then create and resize/cold migrate a VM, it can always succeed.


 *Questions:*
 For multi-cluster management, does VMware require all nova computes to
 have the same cluster configuration to make sure resize/cold migration can
 succeed?

 --
 Thanks,

 Jay

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Thanks,

Jay
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova][VMWare] VMwareVCDriver related to resize/cold migration

2014-02-16 Thread Jay Lau
Hi Gary,

One more question: when using the VCDriver, I can use it in the following
two ways:
1) start up many nova computes that manage the same vCenter clusters;
2) start up many nova computes that manage different vCenter clusters.

Do we have a best practice for the above two scenarios, or can you please
provide some best practices for the VCDriver in general? I did not get much
info from the admin guide.

Thanks,

Jay


2014-02-16 23:01 GMT+08:00 Jay Lau jay.lau@gmail.com:

 Thanks Gary, clear now. ;-)


 2014-02-16 21:40 GMT+08:00 Gary Kotton gkot...@vmware.com:

 Hi,
 There are two issues here.
 The first is a bug fix that is in review:
 - https://review.openstack.org/#/c/69209/ (this is where they have the
 same configuration)
 The second is WIP:
 - https://review.openstack.org/#/c/69262/ (we need to restore)
 Thanks
 Gary

 From: Jay Lau jay.lau@gmail.com
 Reply-To: OpenStack Development Mailing List (not for usage questions)
 openstack-dev@lists.openstack.org
 Date: Sunday, February 16, 2014 6:39 AM
 To: OpenStack Development Mailing List openstack-dev@lists.openstack.org
 
 Subject: [openstack-dev] [Nova][VMWare] VMwareVCDriver related to
 resize/cold migration

 Hey,

 I have one question related with OpenStack vmwareapi.VMwareVCDriver
 resize/cold migration.

 The following is my configuration:

  DC
 |
 |Cluster1
 |  |
 |  |9.111.249.56
 |
 |Cluster2
|
|9.111.249.49

 *Scenario 1:*
 I started two nova computes manage the two clusters:
 1) nova-compute1.conf
 cluster_name=Cluster1

 2) nova-compute2.conf
 cluster_name=Cluster2

 3) Start up two nova computes on host1 and host2 separately
 4) Create one VM instance and the VM instance was booted on Cluster2
 node  9.111.249.49
 | OS-EXT-SRV-ATTR:host | host2 |
 | OS-EXT-SRV-ATTR:hypervisor_hostname  | domain-c16(Cluster2) |
 5) Cold migrate the VM instance
 6) After migration finished, the VM goes to VERIFY_RESIZE status, and
 nova show indicates that the VM now located on host1:Cluster1
 | OS-EXT-SRV-ATTR:host | host1 |
 | OS-EXT-SRV-ATTR:hypervisor_hostname  | domain-c12(Cluster1) |
 7) But the vSphere client indicates that the VM was still running on
 Cluster2
 8) Try to confirm the resize; the confirm fails. The root cause is that
 the nova compute on host2 has no knowledge of domain-c12(Cluster1)

 2014-02-16 07:10:17.166 12720 TRACE nova.openstack.common.rpc.amqp   File
 /usr/lib/python2.6/site-packages/nova/compute/manager.py, line 2810, in
 do_confirm_resize
 2014-02-16 07:10:17.166 12720 TRACE nova.openstack.common.rpc.amqp
 migration=migration)
 2014-02-16 07:10:17.166 12720 TRACE nova.openstack.common.rpc.amqp   File
 /usr/lib/python2.6/site-packages/nova/compute/manager.py, line 2836, in
 _confirm_resize
 2014-02-16 07:10:17.166 12720 TRACE nova.openstack.common.rpc.amqp
 network_info)
 2014-02-16 07:10:17.166 12720 TRACE nova.openstack.common.rpc.amqp   File
 /usr/lib/python2.6/site-packages/nova/virt/vmwareapi/driver.py, line 420,
 in confirm_migration
 2014-02-16 07:10:17.166 12720 TRACE nova.openstack.common.rpc.amqp
 _vmops = self._get_vmops_for_compute_node(instance['node'])
 2014-02-16 07:10:17.166 12720 TRACE nova.openstack.common.rpc.amqp   File
 /usr/lib/python2.6/site-packages/nova/virt/vmwareapi/driver.py, line 523,
 in _get_vmops_for_compute_node
 2014-02-16 07:10:17.166 12720 TRACE nova.openstack.common.rpc.amqp
 resource = self._get_resource_for_node(nodename)
 2014-02-16 07:10:17.166 12720 TRACE nova.openstack.common.rpc.amqp   File
 /usr/lib/python2.6/site-packages/nova/virt/vmwareapi/driver.py, line 515,
 in _get_resource_for_node
 2014-02-16 07:10:17.166 12720 TRACE nova.openstack.common.rpc.amqp
 raise exception.NotFound(msg)
 2014-02-16 07:10:17.166 12720 TRACE nova.openstack.common.rpc.amqp
 NotFound: NV-3AB798A The resource domain-c12(Cluster1) does not exist
 2014-02-16 07:10:17.166 12720 TRACE nova.openstack.common.rpc.amqp


 *Scenario 2:*

 1) Started two nova computes managing the two clusters, but the two
 computes have the same nova.conf.
 1) nova-compute1.conf
 cluster_name=Cluster1
 cluster_name=Cluster2

 2) nova-compute2.conf
 cluster_name=Cluster1
 cluster_name=Cluster2

 3) Then create and resize/cold migrate a VM, it can always succeed.


 *Questions:*
 For multi-cluster management, does VMware require all nova computes to
 have the same cluster configuration to make sure resize/cold migration can
 succeed?

 --
 Thanks,

 Jay

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




 --
 Thanks,

 Jay




-- 
Thanks,

Jay
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

[openstack-dev] [Network] Allocate MAC and IP address for a VM instance

2014-02-18 Thread Jay Lau
Greetings,

Not sure if it is suitable to ask this question on the openstack-dev list.
Here comes a question related to networking; I want to get some input or
comments from you experts.

My case is this: for security reasons, I want to put both MAC and internal
IP addresses into a pool, and when creating a VM, take a MAC and its mapped
IP address from the pool and assign them to the VM.

For example, suppose I have following MAC and IP pool:
1) 78:2b:cb:af:78:b0, 192.168.0.10
2) 78:2b:cb:af:78:b1, 192.168.0.11
3) 78:2b:cb:af:78:b2, 192.168.0.12
4) 78:2b:cb:af:78:b3, 192.168.0.13

Then I can create four VMs using the above MAC and IP addresses; each row
above maps to one VM.

Does any of you have any idea for the solution of this?

-- 
Thanks,

Jay
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Network] Allocate MAC and IP address for a VM instance

2014-02-18 Thread Jay Lau
Thanks Dong for the great help; it does work with the command line!

This seems not to be available via the dashboard, right?

Thanks,

Jay



2014-02-19 1:11 GMT+08:00 Dong Liu willowd...@gmail.com:

 Hi Jay,

  In the neutron API, you can create a port with a specified mac_address
  and fixed IP, and then create the VM with this port.
  But you need to manage the mapping between them yourself (see the sketch
  below).
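
 A minimal sketch of this flow with the Python clients of this era
 (python-neutronclient and python-novaclient; credentials, network and
 image IDs are placeholders):

     from neutronclient.v2_0 import client as neutron_client
     from novaclient.v1_1 import client as nova_client

     neutron = neutron_client.Client(
         username='admin', password='secret', tenant_name='admin',
         auth_url='http://127.0.0.1:5000/v2.0')
     nova = nova_client.Client(
         'admin', 'secret', 'admin', 'http://127.0.0.1:5000/v2.0')

     # 1) pre-create a port that pins one MAC/IP pair from the pool
     port = neutron.create_port({'port': {
         'network_id': 'NETWORK_UUID',
         'mac_address': '78:2b:cb:af:78:b0',
         'fixed_ips': [{'ip_address': '192.168.0.10'}],
     }})['port']

     # 2) boot the VM on the pre-created port
     nova.servers.create('vm0001', 'IMAGE_UUID', 'FLAVOR_ID',
                         nics=[{'port-id': port['id']}])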


 On 2014-02-18 22:41, Jay Lau jay.lau@gmail.com wrote:

  Greetings,
 
  Not sure if it is suitable to ask this question in openstack-dev list.
 Here come a question related to network and want to get some input or
 comments from you experts.
 
  My case is as this: For some security issue, I want to put both MAC and
 internal IP address to a pool and when create VM, I can get MAC and its
 mapped IP address and assign the MAC and IP address to the VM.
 
  For example, suppose I have following MAC and IP pool:
  1) 78:2b:cb:af:78:b0, 192.168.0.10
  2) 78:2b:cb:af:78:b1, 192.168.0.11
  3) 78:2b:cb:af:78:b2, 192.168.0.12
  4) 78:2b:cb:af:78:b3, 192.168.0.13
 
  Then I can create four VMs using above MAC and IP address, each row in
 above can be mapped to a VM.
 
  Does any of you have any idea for the solution of this?
 
  --
  Thanks,
 
  Jay
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Thanks,

Jay
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Network] Allocate MAC and IP address for a VM instance

2014-02-18 Thread Jay Lau
| ...                                  | nova                             |
| config_drive                         |                                  |
| status                               | BUILD                            |
| updated                              | 2014-02-19T00:07:20Z             |
| hostId                               |                                  |
| OS-EXT-SRV-ATTR:host                 | None                             |
| OS-SRV-USG:terminated_at             | None                             |
| key_name                             | adminkey                         |
| OS-EXT-SRV-ATTR:hypervisor_hostname  | None                             |
| name                                 | vm0001                           |
| adminPass                            | 6zHF9aXBHs5t                     |
| tenant_id                            | f181a9c2b1b4443dbd91b1b7de716185 |
| created                              | 2014-02-19T00:07:20Z             |
| os-extended-volumes:volumes_attached | []                               |
| metadata                             | {}                               |
+--------------------------------------+----------------------------------+

Thanks,

Jay



2014-02-19 8:11 GMT+08:00 Jay Lau jay.lau@gmail.com:

 Thanks Dong for the great help, it does worked with command line!

 This seems not available via dashboard, right?

 Thanks,

 Jay



 2014-02-19 1:11 GMT+08:00 Dong Liu willowd...@gmail.com:

 Hi Jay,

 In neutron API, you could create port with specified mac_address and
 fix_ip, and then create vm with this port.
 But the mapping of them need to manage by yourself.


 On 2014-02-18 22:41, Jay Lau jay.lau@gmail.com wrote:

  Greetings,
 
  Not sure if it is suitable to ask this question in openstack-dev list.
 Here come a question related to network and want to get some input or
 comments from you experts.
 
  My case is as this: For some security issue, I want to put both MAC and
 internal IP address to a pool and when create VM, I can get MAC and its
 mapped IP address and assign the MAC and IP address to the VM.
 
  For example, suppose I have following MAC and IP pool:
  1) 78:2b:cb:af:78:b0, 192.168.0.10
  2) 78:2b:cb:af:78:b1, 192.168.0.11
  3) 78:2b:cb:af:78:b2, 192.168.0.12
  4) 78:2b:cb:af:78:b3, 192.168.0.13
 
  Then I can create four VMs using above MAC and IP address, each row in
 above can be mapped to a VM.
 
  Does any of you have any idea for the solution of this?
 
  --
  Thanks,
 
  Jay
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




 --
 Thanks,

 Jay




-- 
Thanks,

Jay
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Network] Allocate MAC and IP address for a VM instance

2014-02-18 Thread Jay Lau
Thanks Liu Dong.

In case that you may not get my previous question, so here just post it
again to see if you can give a help.

Is it possible to bind MAC to a FLOATING IP?

Thanks,

Jay



2014-02-19 10:38 GMT+08:00 Dong Liu willowd...@gmail.com:

 yes, it does not work via the dashboard

 Dong Liu

 On 2014-02-19 8:11, Jay Lau wrote:

 Thanks Dong for the great help; it does work with the command line!

 This seems not to be available via the dashboard, right?

 Thanks,

 Jay



 2014-02-19 1:11 GMT+08:00 Dong Liu willowd...@gmail.com:


 Hi Jay,

 In neutron API, you could create port with specified mac_address and
 fix_ip, and then create vm with this port.
 But the mapping of them need to manage by yourself.


 On 2014-02-18 22:41, Jay Lau jay.lau@gmail.com wrote:


   Greetings,
  
   Not sure if it is suitable to ask this question in openstack-dev
 list. Here come a question related to network and want to get some
 input or comments from you experts.
  
   My case is as this: For some security issue, I want to put both
 MAC and internal IP address to a pool and when create VM, I can get
 MAC and its mapped IP address and assign the MAC and IP address to
 the VM.
  
   For example, suppose I have following MAC and IP pool:
   1) 78:2b:cb:af:78:b0, 192.168.0.10
   2) 78:2b:cb:af:78:b1, 192.168.0.11
   3) 78:2b:cb:af:78:b2, 192.168.0.12
   4) 78:2b:cb:af:78:b3, 192.168.0.13
  
   Then I can create four VMs using above MAC and IP address, each
 row in above can be mapped to a VM.
  
   Does any of you have any idea for the solution of this?
  
   --
   Thanks,
  
   Jay
   ___
   OpenStack-dev mailing list
   OpenStack-dev@lists.openstack.org

   http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




 --
 Thanks,

 Jay


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Thanks,

Jay
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Network] Allocate MAC and IP address for a VM instance

2014-02-19 Thread Jay Lau
Thanks Liu Dong.

It is a VM MAC address, so do you have any idea how I can make sure the VM
MAC address can be bound to a floating IP address?

Also, what do you mean by a floating IP MAC?

Thanks very much for your kind help; it has really helped me a lot!

Thanks,

Jay



2014-02-19 16:21 GMT+08:00 Dong Liu willowd...@gmail.com:

 Jay, what does the MAC belong to? Is it a VM MAC, or the MAC of a floating
 IP? If it is a VM MAC, you can associate any floating IP to the VM port.
 If it is a floating IP MAC, I have no idea.

 On 2014-02-19 11:44, Jay Lau wrote:

 Thanks Liu Dong.

 In case you did not get my previous question, I am posting it again to
 see if you can help.

 Is it possible to bind MAC to a FLOATING IP?

 Thanks,

 Jay



 2014-02-19 10:38 GMT+08:00 Dong Liu willowd...@gmail.com:


 yes, it does not work via the dashboard

 Dong Liu

 On 2014-02-19 8:11, Jay Lau wrote:

 Thanks Dong for the great help; it does work with the command line!

 This seems not to be available via the dashboard, right?

 Thanks,

 Jay



 2014-02-19 1:11 GMT+08:00 Dong Liu willowd...@gmail.com:



  Hi Jay,

  In the neutron API, you can create a port with a specified mac_address
  and fixed IP, and then create the VM with this port.
  But you need to manage the mapping between them yourself.


  On 2014-02-18 22:41, Jay Lau jay.lau@gmail.com wrote:



Greetings,
   
Not sure if it is suitable to ask this question in
 openstack-dev
  list. Here come a question related to network and want to
 get some
  input or comments from you experts.
   
My case is as this: For some security issue, I want to
 put both
  MAC and internal IP address to a pool and when create VM, I
 can get
  MAC and its mapped IP address and assign the MAC and IP
 address to
  the VM.
   
For example, suppose I have following MAC and IP pool:
1) 78:2b:cb:af:78:b0, 192.168.0.10
2) 78:2b:cb:af:78:b1, 192.168.0.11
3) 78:2b:cb:af:78:b2, 192.168.0.12
4) 78:2b:cb:af:78:b3, 192.168.0.13
   
   Then I can create four VMs using the above MAC and IP
 addresses; each row above maps to one VM.

   Do any of you have an idea for a solution to this?
   
--
Thanks,
   
Jay


-- 
Thanks,

Jay

Re: [openstack-dev] [Network] Allocate MAC and IP address for a VM instance

2014-02-19 Thread Jay Lau
Thanks Liu Dong. Clear now! ;-)


2014-02-19 20:17 GMT+08:00 Dong Liu willowd...@gmail.com:

 Sorry for replying so late.

 Yes, that is what I mean. By the way, if you only need the floating IP to
 bind to the VM MAC, you do not need to specify --fixed-ip; just specifying
 --mac-address is OK.

 What I mean by the floating IP MAC is: when you create a floating IP,
 neutron automatically creates a port using that public IP, and this
 port has a MAC address of its own; that is the one I meant.
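 
 By the way, the same flow can be scripted. Below is a minimal sketch with
 python-neutronclient (the credentials and network UUIDs are placeholders,
 not values from this thread):
 
 # Sketch: pre-create a port with a fixed MAC, then point a floating IP
 # at it; a VM booted with this port inherits the MAC binding.
 from neutronclient.v2_0 import client
 
 neutron = client.Client(username='admin', password='secret',
                         tenant_name='admin',
                         auth_url='http://keystone:5000/v2.0')
 
 port = neutron.create_port(
     {'port': {'network_id': 'NETWORK_UUID',
               'mac_address': 'fa:16:3e:9d:e9:11'}})['port']
 
 neutron.create_floatingip(
     {'floatingip': {'floating_network_id': 'EXT_NET_UUID',
                     'port_id': port['id']}})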


 On 19 Feb 2014, at 18:22, Jay Lau jay.lau@gmail.com wrote:

 Hi Liu Dong,

 Just found a solution for this, shown below; the method uses a fixed IP
 as a bridge between the MAC and the floating IP.

 Can you please check whether this is the way you meant? If not, can you
 please give some suggestions on your idea?

 Thanks,

 Jay

 ==My steps==
 Suppose I want to bind MAC fa:16:3e:9d:e9:11 to floating IP 9.21.52.22; I
 did the following:

 *1) Create a port for fixed ip with the MAC address fa:16:3e:9d:e9:11*
 [root@db01b05 ~(keystone_admin)]# neutron port-create IntAdmin --mac-address fa:16:3e:9d:e9:11 --fixed-ip ip_address=10.0.1.2
 Created a new port:
 +-----------------------+----------------------------------------------------------------------------------+
 | Field                 | Value                                                                            |
 +-----------------------+----------------------------------------------------------------------------------+
 | admin_state_up        | True                                                                             |
 | allowed_address_pairs |                                                                                  |
 | binding:capabilities  | {"port_filter": true}                                                            |
 | binding:host_id       |                                                                                  |
 | binding:vif_type      | ovs                                                                              |
 | device_id             |                                                                                  |
 | device_owner          |                                                                                  |
 | fixed_ips             | {"subnet_id": "0fff20f4-142a-4e89-add1-5c5a79c6d54d", "ip_address": "10.0.1.2"} |
 | id                    | b259770d-7f9c-485a-8f84-bf7b1bbc5706                                             |
 | mac_address           | fa:16:3e:9d:e9:11                                                                |
 | name                  |                                                                                  |
 | network_id            | fb1a75f9-e468-408b-a172-5d2b3802d862                                             |
 | security_groups       | aa3f3025-ba71-476d-a126-25a9e3b34c9a                                             |
 | status                | DOWN                                                                             |
 | tenant_id             | f181a9c2b1b4443dbd91b1b7de716185                                                 |
 +-----------------------+----------------------------------------------------------------------------------+
 [root@db01b05 ~(keystone_admin)]# neutron port-list | grep 10.0.1.2
 | b259770d-7f9c-485a-8f84-bf7b1bbc5706 |  | fa:16:3e:9d:e9:11 | {"subnet_id": "0fff20f4-142a-4e89-add1-5c5a79c6d54d", "ip_address": "10.0.1.2"} |

 *2) Create a floating ip with the port id created in step 1)*
 [root@db01b05 ~(keystone_admin)]# neutron floatingip-create --port-id
 b259770d-7f9c-485a-8f84-bf7b1bbc5706 Ex
 Created a new floatingip:
 +-+--+
 | Field   | Value|
 +-+--+
 | fixed_ip_address| 10.0.1.2 |
 | floating_ip_address | 9.21.52.22   |
 | floating_network_id | 9b758062-2be8-4244-a5a9-3f878f74e006 |
 | id  | 7c0db4ff-8378-4b91-9a6e-87ec06016b0f |
 | port_id | b259770d-7f9c-485a-8f84-bf7b1bbc5706 |
 | router_id   | 43ceb267-2a4b-418a-bc9a-08d39623d3c0 |
 | tenant_id   | f181a9c2b1b4443dbd91b1b7de716185 |
 +-+--+

 *3) Boot the VM with the port id in step 1)*
 [root@db01b05 ~(keystone_admin)]# nova boot --image centos64-x86_64-cfntools --flavor 2 --key-name adminkey --nic port-id=b259770d-7f9c-485a-8f84-bf7b1bbc5706 vm0001
 +--------------------------------------+--------------------------------------+
 | Property                             | Value                                |
 +--------------------------------------+--------------------------------------+
 | OS-EXT-STS:task_state                | scheduling                           |
 | image                                | centos64-x86_64-cfntools             |
 | OS-EXT-STS:vm_state                  | building                             |
 | OS-EXT-SRV-ATTR:instance_name        | instance-0026                        |
 | OS-SRV-USG:launched_at               | None                                 |
 | flavor                               | m1.small                             |
 | id                                   | c0cebd6b-94ae-4305-8619-c013d45f0727 |
 | security_groups                      | [{u'name': u'default'}]              |
 | user_id                              | 345dd87da2364fa78ffe97ed349bb71b     |
 | OS-DCF:diskConfig                    | MANUAL                               |
 | accessIPv4                           |                                      |
 | accessIPv6                           |                                      |
 | progress                             | 0                                    |
 | OS-EXT-STS:power_state               | 0                                    |
 | OS-EXT-AZ:availability_zone          | nova                                 |
[openstack-dev] [OpenStack-dev][HEAT][Windows] Does HEAT support provisioning windows cluster

2014-02-20 Thread Jay Lau
Hi,

Does Heat support provisioning a Windows cluster?  If so, can I also use
user-data to do some post-install work on the Windows instances? Is there any
example template for this?

Thanks,

Jay
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack-dev][HEAT][Windows] Does HEAT support provisioning windows cluster

2014-02-20 Thread Jay Lau
Just noticed that there is already a bp tracking this, but no milestone was
set for it and the bp has been there for a year.

https://blueprints.launchpad.net/heat/+spec/windows-instances

Do we have any plan to finish this? Many customers are using Windows
clusters; it would be really cool if we could support provisioning Windows
clusters with Heat.

Thanks,

Jay



2014-02-20 18:02 GMT+08:00 Jay Lau jay.lau@gmail.com:


 Hi,

  Does Heat support provisioning a Windows cluster?  If so, can I also use
  user-data to do some post-install work on the Windows instances? Is there
  any example template for this?

 Thanks,

 Jay




-- 
Thanks,

Jay
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack-dev][HEAT][Windows] Does HEAT support provisioning windows cluster

2014-02-21 Thread Jay Lau
Thanks Serg and Alessandro for the detailed explanations, very helpful!

I will try to see if I can leverage something from
http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/cfn-helper-scripts-reference.html
for Windows support.

Thanks,

Jay



2014-02-22 0:44 GMT+08:00 Alessandro Pilotti 
apilo...@cloudbasesolutions.com:

  Hi guys,

  Windows Heat templates are currently supported by using Cloudbase-Init.

  Here's the wiki document that I attached some weeks ago to the blueprint
 referenced in this thread: http://wiki.cloudbase.it/heat-windows
 There are a few open points that IMO require some discussion.

  One topic that deserves attention is what to do with the cfn-tools: for the
 moment we opted to use the AWS version ported to Heat, since those
 already contain the required Windows integration, but we are willing to
 contribute to the cfn-tools project if that still makes sense.

  Talking about Windows clusters, the main issue is related to the fact
 that the typical Windows cluster configuration requires shared storage for
 the quorum and Nova / Cinder don't allow attaching volumes to multiple
 instances, although there's a BP targeting this potential feature:
 https://wiki.openstack.org/wiki/Cinder/blueprints/multi-attach-volume

  There are solutions to work around this issue that we are putting in
 place in the templates, but shared volumes are an important requirement for
 providing proper support for most advanced Windows workloads on OpenStack.

  Talking about specific workloads, we are going to release very soon an
 initial set of templates with support for Active Directory, SQL Server,
 Exchange, Sharepoint and IIS.


  Alessandro



  On 20 Feb 2014, at 12:24, Alexander Tivelkov ativel...@mirantis.com
 wrote:

  Hi Jay,

  Windows support in Heat is being developed, but is not complete yet,
 afaik. You may already use Cloudbase-Init to do the post-deploy actions on
 Windows - check [1] for the details.

  Meanwhile, running a Windows cluster is a much more complicated task
 than just deploying a number of Windows instances (if I understand you
 correctly and you are speaking about a Microsoft Failover Cluster, see [2]): to
 build it in the cloud you will have to execute quite a complex workflow
 after the nodes are actually deployed, which is not possible with Heat (at
 least for now).

  The Murano project ([3]) does this on top of Heat, as it was initially
 designed as Windows Data Center as a Service, so I suggest you take a
 look at it. You may also check this video ([4]), which demonstrates how
 Murano is used to deploy a failover cluster of Windows 2012 with a
 clustered MS SQL Server on top of it.


  [1] http://wiki.cloudbase.it/heat-windows
 [2] http://technet.microsoft.com/library/hh831579
 [3] https://wiki.openstack.org/Murano
 [4] http://www.youtube.com/watch?v=Y_CmrZfKy18

  --
  Regards,
 Alexander Tivelkov


 On Thu, Feb 20, 2014 at 2:02 PM, Jay Lau jay.lau@gmail.com wrote:


  Hi,

  Does Heat support provisioning a Windows cluster?  If so, can I also use
 user-data to do some post-install work on the Windows instances? Is there any
 example template for this?

  Thanks,

  Jay


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


  ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Thanks,

Jay
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][libvirt] Is there anything blocking the libvirt driver from implementing the host_maintenance_mode API?

2014-02-22 Thread Jay Lau
So there is no need to implement the host_maintenance_mode API in the
libvirt driver, as host maintenance mode is mainly for VMware and
XenServer; instead we can use evacuate and os-services/disable for libvirt
host maintenance, right?
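
For reference, a minimal sketch of that disable-then-drain flow with
python-novaclient (host names and credentials below are placeholders):

# Take the host out of scheduling, then move its instances away.
from novaclient.v1_1 import client

nova = client.Client('admin', 'secret', 'admin',
                     'http://keystone:5000/v2.0')

host = 'compute-1'
nova.services.disable(host, 'nova-compute')

for server in nova.servers.list(search_opts={'host': host,
                                             'all_tenants': 1}):
    # Planned maintenance: live migrate and let the scheduler pick a
    # target; for a host that is already down, servers.evacuate() is
    # the call to use instead.
    nova.servers.live_migrate(server, None, False, False)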

Thanks,

Jay



2014-02-23 5:22 GMT+08:00 Bruce Montague bruce_monta...@symantec.com:


 On Fri Feb 21 21:14:56 UTC 2014 Joe Gordon joe.gordon0 at gmail.com
 wrote:

  On Thu, Feb 20, 2014 at 9:38 AM, Matt Riedemann mriedem at
 linux.vnet.ibm.com wrote:
 
 
  On 2/19/2014 4:05 PM, Matt Riedemann wrote:
 
  The os-hosts OS API extension [1] showed up before I was working on the
  project and I see that only the VMware and XenAPI drivers implement it,
  but was wondering why the libvirt driver doesn't - either no one wants
  it, or there is some technical reason behind not implementing it for
  that driver?
 
 
  If I remember correctly, maintenance mode is a special thing in Xen.


 Maintenance mode is pretty heavily used with VMware vCenter. When an
 environment supports universal live migration of all VMs, it makes sense
 to migrate all VMs running on a physical machine off of that machine before
 bringing it down for maintenance, such as upgrading the hardware. This gives
 some classes of end-users a more 24x7x365 experience.


  [1]
 
 
 http://docs.openstack.org/api/openstack-compute/2/content/PUT_os-hosts-v2_updateHost_v2__tenant_id__os-hosts__host_name__ext-os-hosts.html
 
 
 
  By the way, am I missing something when I think that this extension is
  already covered if you're:
 
  1. Looking to get the node out of the scheduling loop, you can just
 disable
  it with os-services/disable?
 
  2. Looking to evacuate instances off a failed host (or one that's in
  maintenance mode), just use the evacuate server action.
 
  I don't think your missing anything.
 
 
 
  --
 
  Thanks,
 
  Matt Riedemann
 
 



 -bruce



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Thanks,

Jay
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Network] Allocate MAC and IP address for a VM instance

2014-02-22 Thread Jay Lau
Sorry to bring this up again; just one more question. Currently, I can only
use neutron to bind the IP and MAC, and cannot achieve this via nova-network,
right?

Thanks,

Jay



2014-02-19 21:05 GMT+08:00 Jay Lau jay.lau@gmail.com:

 Thanks Liu Dong. Clear now! ;-)


 2014-02-19 20:17 GMT+08:00 Dong Liu willowd...@gmail.com:

 Sorry for replying so late.

 Yes, that is what I mean. By the way, if you only need the floating IP to
 bind to the VM MAC, you do not need to specify --fixed-ip; just specifying
 --mac-address is OK.

 What I mean by the floating IP MAC is: when you create a floating IP,
 neutron automatically creates a port using that public IP, and this
 port has a MAC address of its own; that is the one I meant.


 On 19 Feb 2014, at 18:22, Jay Lau jay.lau@gmail.com wrote:

 Hi Liu Dong,

 Just found a solution for this, shown below; the method uses a fixed IP
 as a bridge between the MAC and the floating IP.

 Can you please check whether this is the way you meant? If not, can you
 please give some suggestions on your idea?

 Thanks,

 Jay

 ==My steps==
 Suppose I want to bind MAC fa:16:3e:9d:e9:11 to floating IP 9.21.52.22; I
 did the following:

 *1) Create a port for fixed ip with the MAC address fa:16:3e:9d:e9:11*
 [root@db01b05 ~(keystone_admin)]# neutron port-create IntAdmin --mac-address fa:16:3e:9d:e9:11 --fixed-ip ip_address=10.0.1.2
 Created a new port:
 +-----------------------+----------------------------------------------------------------------------------+
 | Field                 | Value                                                                            |
 +-----------------------+----------------------------------------------------------------------------------+
 | admin_state_up        | True                                                                             |
 | allowed_address_pairs |                                                                                  |
 | binding:capabilities  | {"port_filter": true}                                                            |
 | binding:host_id       |                                                                                  |
 | binding:vif_type      | ovs                                                                              |
 | device_id             |                                                                                  |
 | device_owner          |                                                                                  |
 | fixed_ips             | {"subnet_id": "0fff20f4-142a-4e89-add1-5c5a79c6d54d", "ip_address": "10.0.1.2"} |
 | id                    | b259770d-7f9c-485a-8f84-bf7b1bbc5706                                             |
 | mac_address           | fa:16:3e:9d:e9:11                                                                |
 | name                  |                                                                                  |
 | network_id            | fb1a75f9-e468-408b-a172-5d2b3802d862                                             |
 | security_groups       | aa3f3025-ba71-476d-a126-25a9e3b34c9a                                             |
 | status                | DOWN                                                                             |
 | tenant_id             | f181a9c2b1b4443dbd91b1b7de716185                                                 |
 +-----------------------+----------------------------------------------------------------------------------+
 [root@db01b05 ~(keystone_admin)]# neutron port-list | grep 10.0.1.2
 | b259770d-7f9c-485a-8f84-bf7b1bbc5706 |  | fa:16:3e:9d:e9:11 | {"subnet_id": "0fff20f4-142a-4e89-add1-5c5a79c6d54d", "ip_address": "10.0.1.2"} |

 *2) Create a floating ip with the port id created in step 1)*
 [root@db01b05 ~(keystone_admin)]# neutron floatingip-create --port-id
 b259770d-7f9c-485a-8f84-bf7b1bbc5706 Ex
 Created a new floatingip:
 +-+--+
 | Field   | Value|
 +-+--+
 | fixed_ip_address| 10.0.1.2 |
 | floating_ip_address | 9.21.52.22   |
 | floating_network_id | 9b758062-2be8-4244-a5a9-3f878f74e006 |
 | id  | 7c0db4ff-8378-4b91-9a6e-87ec06016b0f |
 | port_id | b259770d-7f9c-485a-8f84-bf7b1bbc5706 |
 | router_id   | 43ceb267-2a4b-418a-bc9a-08d39623d3c0 |
 | tenant_id   | f181a9c2b1b4443dbd91b1b7de716185 |
 +-+--+

 *3) Boot the VM with the port id in step 1)*
 [root@db01b05 ~(keystone_admin)]# nova boot --image centos64-x86_64-cfntools --flavor 2 --key-name adminkey --nic port-id=b259770d-7f9c-485a-8f84-bf7b1bbc5706 vm0001
 +--------------------------------------+--------------------------------------+
 | Property                             | Value                                |
 +--------------------------------------+--------------------------------------+
 | OS-EXT-STS:task_state                | scheduling                           |
 | image                                | centos64-x86_64-cfntools             |
 | OS-EXT-STS:vm_state                  | building                             |
 | OS-EXT-SRV-ATTR:instance_name        | instance-0026                        |
 | OS-SRV-USG:launched_at               | None                                 |
 | flavor                               | m1.small                             |
 | id                                   | c0cebd6b-94ae-4305-8619-c013d45f0727 |
 | security_groups                      | [{u'name': u'default'}]              |
 | user_id                              | 345dd87da2364fa78ffe97ed349bb71b     |
 | OS-DCF:diskConfig                    | MANUAL                               |
 | accessIPv4                           |                                      |
 | accessIPv6
[openstack-dev] [OpenStack][Runtime Policy] A proposal for OpenStack run time policy to manage compute/storage resource

2014-02-25 Thread Jay Lau
Greetings,

Here I want to bring up an old topic and get some input from
you experts.

Currently in nova and cinder, we only have some initial placement policies
to help customers deploy a VM instance or create a volume on a specified
host, but after the VM or the volume is created, there is no policy to
monitor the hypervisors or the storage servers and take action in the
following cases:

1) Load Balance Policy: If the load of one server is too heavy, then
probably we need to migrate some VMs from high-load servers to some idle
servers automatically, to make sure the system resource usage is
balanced.
2) HA Policy: If one server goes down due to hardware failure or any other
reason, there is no policy to make sure the VMs are evacuated or live
migrated (migrating the VMs before the server goes down) to other
available servers, so that customer applications are not affected too much.
3) Energy Saving Policy: If a single host's load is lower than a configured
threshold, then lower the CPU frequency to save energy;
otherwise, increase the CPU frequency. If the average load is lower than
a configured threshold, then shut down some hypervisors to save energy;
otherwise, power some hypervisors on to balance the load.  Before powering
off a hypervisor host, the energy policy needs to live migrate all VMs on
that hypervisor to other available hypervisors; after powering on a
hypervisor host, the Load Balance Policy will help live migrate some VMs to
the newly powered-on hypervisor.
4) Customized Policy: Customers can also define customized policies
based on their specific requirements.
5) Some run-time policies for block storage or even networking.

I borrowed the idea from VMware DRS (thanks, VMware DRS), and there are
indeed many customers who want such features.

I filed a bp for this [1] long ago, but after some discussion with
Russell, we think that this should not belong in nova but in some other
project. So far I have not found a good place to put it; can any of
you offer some comments?

[1]
https://blueprints.launchpad.net/nova/+spec/resource-optimization-service
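
To make the intent concrete, here is a rough sketch (not a design) of what
the Load Balance Policy loop could look like with python-novaclient; the
threshold, credentials and the naive target selection are all made up:

# Sketch of a load-balance watcher: when a hypervisor's vCPU usage
# crosses a high watermark, live migrate one instance to the idlest host.
import time
from novaclient.v1_1 import client

nova = client.Client('admin', 'secret', 'admin',
                     'http://keystone:5000/v2.0')
HIGH_WATERMARK = 0.8  # assumed vCPU usage ratio that triggers a move

while True:
    hypervisors = nova.hypervisors.list()
    idlest = min(hypervisors, key=lambda h: float(h.vcpus_used) / h.vcpus)
    for h in hypervisors:
        if h is idlest or float(h.vcpus_used) / h.vcpus <= HIGH_WATERMARK:
            continue
        servers = nova.servers.list(
            search_opts={'host': h.hypervisor_hostname, 'all_tenants': 1})
        if servers:
            # Move one VM per cycle; a real policy needs smarter selection.
            nova.servers.live_migrate(servers[0],
                                      idlest.hypervisor_hostname,
                                      False, False)
    time.sleep(60)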

-- 
Thanks,

Jay
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack][Runtime Policy] A proposal for OpenStack run time policy to manage compute/storage resource

2014-02-25 Thread Jay Lau
Thanks Sylvain and Tim for the great sharing.

@Tim, I also went through Congress and have the same feeling as
Sylvain: it is likely that Congress is doing something similar to Gantt,
providing a holistic way of deploying. What I want to do is to provide
some functions very similar to VMware DRS that can do
adaptive scheduling automatically.

@Sylvain, can you please explain in more detail what the "Pets vs. Cattle"
analogy means?


2014-02-26 9:11 GMT+08:00 Sylvain Bauza sylvain.ba...@gmail.com:

 Hi Tim,

 As I read your design document, it sounds more closely related to
 what the Solver Scheduler subteam is trying to focus on, i.e.
 intelligent, agnostic resource placement done in a holistic way [1].
 IIRC, Jay is talking more about adaptive scheduling decisions based
 on feedback, with potential counter-measures that can be taken to decrease
 load and preserve the QoS of nodes.

 That said, maybe I'm wrong ?

 [1]https://blueprints.launchpad.net/nova/+spec/solver-scheduler


 2014-02-26 1:09 GMT+01:00 Tim Hinrichs thinri...@vmware.com:

 Hi Jay,

 The Congress project aims to handle something similar to your use cases.
  I just sent a note to the ML with a Congress status update with the tag
 [Congress].  It includes links to our design docs.  Let me know if you have
 trouble finding it or want to follow up.

 Tim

 - Original Message -
 | From: Sylvain Bauza sylvain.ba...@gmail.com
 | To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 | Sent: Tuesday, February 25, 2014 3:58:07 PM
 | Subject: Re: [openstack-dev] [OpenStack][Runtime Policy] A proposal for
 OpenStack run time policy to manage
 | compute/storage resource
 |
 |
 |
 | Hi Jay,
 |
 |
 | Currently, the Nova scheduler only acts upon user request (either
 | live migration or boot an instance). IMHO, that's something Gantt
 | should scope later on (or at least there could be some space within
 | the Scheduler) so that Scheduler would be responsible for managing
 | resources on a dynamic way.
 |
 |
 | I'm thinking of the Pets vs. Cattles analogy, and I definitely think
 | that Compute resources could be treated like Pets, provided the
 | Scheduler does a move.
 |
 |
 | -Sylvain
 |
 |
 |
 | 2014-02-26 0:40 GMT+01:00 Jay Lau  jay.lau@gmail.com  :
 |
 |
 |
 |
 | Greetings,
 |
 |
 | Here I want to bring up an old topic here and want to get some input
 | from you experts.
 |
 |
 | Currently in nova and cinder, we only have some initial placement
 | polices to help customer deploy VM instance or create volume storage
 | to a specified host, but after the VM or the volume was created,
 | there was no policy to monitor the hypervisors or the storage
 | servers to take some actions in the following case:
 |
 |
 | 1) Load Balance Policy: If the load of one server is too heavy, then
 | probably we need to migrate some VMs from high load servers to some
 | idle servers automatically to make sure the system resource usage
 | can be balanced.
 |
 | 2) HA Policy: If one server get down for some hardware failure or
 | whatever reasons, there is no policy to make sure the VMs can be
 | evacuated or live migrated (Make sure migrate the VM before server
 | goes down) to other available servers to make sure customer
 | applications will not be affect too much.
 |
 | 3) Energy Saving Policy: If a single host load is lower than
 | configured threshold, then low down the frequency of the CPU to save
 | energy; otherwise, increase the CPU frequency. If the average load
 | is lower than configured threshold, then shutdown some hypervisors
 | to save energy; otherwise, power on some hypervisors to load
 | balance. Before power off a hypervisor host, the energy policy need
 | to live migrate all VMs on the hypervisor to other available
 | hypervisors; After Power on a hypervisor host, the Load Balance
 | Policy will help live migrate some VMs to the new powered
 | hypervisor.
 |
 | 4) Customized Policy: Customer can also define some customized
 | policies based on their specified requirement.
 |
 | 5) Some run-time policies for block storage or even network.
 |
 |
 |
 | I borrow the idea from VMWare DRS (Thanks VMWare DRS), and there
 | indeed many customers want such features.
 |
 |
 |
 | I have filed a bp here [1] long ago, but after some discussion with
 | Russell, we think that this should not belong to nova but other
 | projects. Till now, I did not find a good place where we can put
 | this in, can any of you show some comments?
 |
 |
 |
 | [1]
 |
 https://blueprints.launchpad.net/nova/+spec/resource-optimization-service
 |
 | --
 |
 |
 | Thanks,
 |
 | Jay
 |
 | ___
 | OpenStack-dev mailing list
 | OpenStack-dev@lists.openstack.org
 | http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 |
 |
 |
 | ___
 | OpenStack-dev mailing list
 | OpenStack-dev@lists.openstack.org

Re: [openstack-dev] [OpenStack][Runtime Policy] A proposal for OpenStack run time policy to manage compute/storage resource

2014-02-25 Thread Jay Lau
@Zhangleiqiang, thanks for the info. Yes, it does provide load balancing and
DPM.

What I want to do covers not only those two policies but also HA and some
customized policies, just like the OpenStack nova filters; I also hope that
this policy engine can manage not only compute resources, but also storage,
network, etc.





2014-02-26 12:16 GMT+08:00 Zhangleiqiang zhangleiqi...@huawei.com:

  Hi, Jay and Sylvain:

 I found that the OpenStack Neat project (http://openstack-neat.org/) already
 aims to do things similar to DRS and DPM.

 Hope it will be helpful.





 --

 Leiqzhang



 Best Regards



 *From:* Sylvain Bauza [mailto:sylvain.ba...@gmail.com]
 *Sent:* Wednesday, February 26, 2014 9:11 AM

 *To:* OpenStack Development Mailing List (not for usage questions)
 *Subject:* Re: [openstack-dev] [OpenStack][Runtime Policy] A proposal for
 OpenStack run time policy to manage compute/storage resource



 Hi Tim,



 As per I'm reading your design document, it sounds more likely related to
 something like Solver Scheduler subteam is trying to focus on, ie.
 intelligent agnostic resources placement on an holistic way [1]

 IIRC, Jay is more likely talking about adaptive scheduling decisions based
 on feedback with potential counter-measures that can be done for decreasing
 load and preserving QoS of nodes.



 That said, maybe I'm wrong ?



 [1]https://blueprints.launchpad.net/nova/+spec/solver-scheduler



 2014-02-26 1:09 GMT+01:00 Tim Hinrichs thinri...@vmware.com:

 Hi Jay,

 The Congress project aims to handle something similar to your use cases.
  I just sent a note to the ML with a Congress status update with the tag
 [Congress].  It includes links to our design docs.  Let me know if you have
 trouble finding it or want to follow up.

 Tim


 - Original Message -
 | From: Sylvain Bauza sylvain.ba...@gmail.com
 | To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 | Sent: Tuesday, February 25, 2014 3:58:07 PM
 | Subject: Re: [openstack-dev] [OpenStack][Runtime Policy] A proposal for
 OpenStack run time policy to manage
 | compute/storage resource
 |
 |
 |
 | Hi Jay,
 |
 |
 | Currently, the Nova scheduler only acts upon user request (either
 | live migration or boot an instance). IMHO, that's something Gantt
 | should scope later on (or at least there could be some space within
 | the Scheduler) so that Scheduler would be responsible for managing
 | resources on a dynamic way.
 |
 |
 | I'm thinking of the Pets vs. Cattles analogy, and I definitely think
 | that Compute resources could be treated like Pets, provided the
 | Scheduler does a move.
 |
 |
 | -Sylvain
 |
 |
 |
 | 2014-02-26 0:40 GMT+01:00 Jay Lau  jay.lau@gmail.com  :
 |
 |
 |
 |
 | Greetings,
 |
 |
 | Here I want to bring up an old topic here and want to get some input
 | from you experts.
 |
 |
 | Currently in nova and cinder, we only have some initial placement
 | polices to help customer deploy VM instance or create volume storage
 | to a specified host, but after the VM or the volume was created,
 | there was no policy to monitor the hypervisors or the storage
 | servers to take some actions in the following case:
 |
 |
 | 1) Load Balance Policy: If the load of one server is too heavy, then
 | probably we need to migrate some VMs from high load servers to some
 | idle servers automatically to make sure the system resource usage
 | can be balanced.
 |
 | 2) HA Policy: If one server get down for some hardware failure or
 | whatever reasons, there is no policy to make sure the VMs can be
 | evacuated or live migrated (Make sure migrate the VM before server
 | goes down) to other available servers to make sure customer
 | applications will not be affect too much.
 |
 | 3) Energy Saving Policy: If a single host load is lower than
 | configured threshold, then low down the frequency of the CPU to save
 | energy; otherwise, increase the CPU frequency. If the average load
 | is lower than configured threshold, then shutdown some hypervisors
 | to save energy; otherwise, power on some hypervisors to load
 | balance. Before power off a hypervisor host, the energy policy need
 | to live migrate all VMs on the hypervisor to other available
 | hypervisors; After Power on a hypervisor host, the Load Balance
 | Policy will help live migrate some VMs to the new powered
 | hypervisor.
 |
 | 4) Customized Policy: Customer can also define some customized
 | policies based on their specified requirement.
 |
 | 5) Some run-time policies for block storage or even network.
 |
 |
 |
 | I borrow the idea from VMWare DRS (Thanks VMWare DRS), and there
 | indeed many customers want such features.
 |
 |
 |
 | I have filed a bp here [1] long ago, but after some discussion with
 | Russell, we think that this should not belong to nova but other
 | projects. Till now, I did not find a good place where we can put
 | this in, can any of you show some comments?
 |
 |
 |
 | [1]
 |
 https

Re: [openstack-dev] [OpenStack][Runtime Policy] A proposal for OpenStack run time policy to manage compute/storage resource

2014-02-26 Thread Jay Lau
Hi Tim,

I'm not sure whether we can put resource monitoring and adjustment into the
solver scheduler (Gantt), but I have proposed this for the Gantt design [1];
you can refer to [1] and search for jay-lau-513.

IMHO, Congress does monitoring and also takes actions, but the actions seem
mainly aimed at adjusting a single VM's network or storage. It does not
consider migrating VMs according to hypervisor load.

Not sure if this topic deserves a design session at the coming
summit, but I will try to propose one.

[1] https://etherpad.openstack.org/p/icehouse-external-scheduler

Thanks,

Jay

2014-02-27 1:48 GMT+08:00 Tim Hinrichs thinri...@vmware.com:

 Hi Jay and Sylvain,

 The solver-scheduler sounds like a good fit to me as well.  It clearly
 provisions resources in accordance with policy.  Does it monitor those
 resources and adjust them if the system falls out of compliance with the
 policy?

 I mentioned Congress for two reasons. (i) It does monitoring.  (ii) There
 was mention of compute, networking, and storage, and I couldn't tell if the
 idea was for policy that spans OS components or not.  Congress was designed
 for policies spanning OS components.

 Tim

 - Original Message -
 | From: Jay Lau jay.lau@gmail.com
 | To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 | Sent: Tuesday, February 25, 2014 10:13:14 PM
 | Subject: Re: [openstack-dev] [OpenStack][Runtime Policy] A proposal for
 OpenStack run time policy to manage
 | compute/storage resource
 |
 |
 |
 |
 |
 | Thanks Sylvain and Tim for the great sharing.
 |
 | @Tim, I also go through with Congress and have the same feeling with
 | Sylvai, it is likely that Congress is doing something simliar with
 | Gantt providing a holistic way for deploying. What I want to do is
 | to provide some functions which is very similar with VMWare DRS that
 | can do some adaptive scheduling automatically.
 |
 | @Sylvain, can you please show more detail for what Pets vs. Cattles
 | analogy means?
 |
 |
 |
 |
 | 2014-02-26 9:11 GMT+08:00 Sylvain Bauza  sylvain.ba...@gmail.com  :
 |
 |
 |
 | Hi Tim,
 |
 |
 | As per I'm reading your design document, it sounds more likely
 | related to something like Solver Scheduler subteam is trying to
 | focus on, ie. intelligent agnostic resources placement on an
 | holistic way [1]
 | IIRC, Jay is more likely talking about adaptive scheduling decisions
 | based on feedback with potential counter-measures that can be done
 | for decreasing load and preserving QoS of nodes.
 |
 |
 | That said, maybe I'm wrong ?
 |
 |
 | [1] https://blueprints.launchpad.net/nova/+spec/solver-scheduler
 |
 |
 |
 | 2014-02-26 1:09 GMT+01:00 Tim Hinrichs  thinri...@vmware.com  :
 |
 |
 |
 |
 | Hi Jay,
 |
 | The Congress project aims to handle something similar to your use
 | cases. I just sent a note to the ML with a Congress status update
 | with the tag [Congress]. It includes links to our design docs. Let
 | me know if you have trouble finding it or want to follow up.
 |
 | Tim
 |
 |
 |
 | - Original Message -
 | | From: Sylvain Bauza  sylvain.ba...@gmail.com 
 | | To: OpenStack Development Mailing List (not for usage questions)
 | |  openstack-dev@lists.openstack.org 
 | | Sent: Tuesday, February 25, 2014 3:58:07 PM
 | | Subject: Re: [openstack-dev] [OpenStack][Runtime Policy] A proposal
 | | for OpenStack run time policy to manage
 | | compute/storage resource
 | |
 | |
 | |
 | | Hi Jay,
 | |
 | |
 | | Currently, the Nova scheduler only acts upon user request (either
 | | live migration or boot an instance). IMHO, that's something Gantt
 | | should scope later on (or at least there could be some space within
 | | the Scheduler) so that Scheduler would be responsible for managing
 | | resources on a dynamic way.
 | |
 | |
 | | I'm thinking of the Pets vs. Cattles analogy, and I definitely
 | | think
 | | that Compute resources could be treated like Pets, provided the
 | | Scheduler does a move.
 | |
 | |
 | | -Sylvain
 | |
 | |
 | |
 | | 2014-02-26 0:40 GMT+01:00 Jay Lau  jay.lau@gmail.com  :
 | |
 | |
 | |
 | |
 | | Greetings,
 | |
 | |
 | | Here I want to bring up an old topic here and want to get some
 | | input
 | | from you experts.
 | |
 | |
 | | Currently in nova and cinder, we only have some initial placement
 | | polices to help customer deploy VM instance or create volume
 | | storage
 | | to a specified host, but after the VM or the volume was created,
 | | there was no policy to monitor the hypervisors or the storage
 | | servers to take some actions in the following case:
 | |
 | |
 | | 1) Load Balance Policy: If the load of one server is too heavy,
 | | then
 | | probably we need to migrate some VMs from high load servers to some
 | | idle servers automatically to make sure the system resource usage
 | | can be balanced.
 | |
 | | 2) HA Policy: If one server get down for some hardware failure or
 | | whatever reasons, there is no policy to make sure the VMs can be
 | | evacuated or live

Re: [openstack-dev] [Heat] Thoughts on adding a '--progress' option?

2014-02-28 Thread Jay Lau
Do heat resource-list and heat event-list help?

[gyliu@drsserver hadoop_heat(keystone_admin)]$ heat resource-list a1
+----------------------------+-------------------------------------------+--------------------+----------------------+
| logical_resource_id        | resource_type                             | resource_status    | updated_time         |
+----------------------------+-------------------------------------------+--------------------+----------------------+
| CfnUser                    | AWS::IAM::User                            | CREATE_COMPLETE    | 2014-02-28T16:50:11Z |
| HadoopM                    | AWS::EC2::Instance                        | CREATE_IN_PROGRESS | 2014-02-28T16:50:11Z |
| HadoopMasterWaitHandle     | AWS::CloudFormation::WaitConditionHandle  | CREATE_COMPLETE    | 2014-02-28T16:50:11Z |
| HadoopSlaveKeys            | AWS::IAM::AccessKey                       | CREATE_IN_PROGRESS | 2014-02-28T16:50:11Z |
| HadoopMasterWaitCondition  | AWS::CloudFormation::WaitCondition        | INIT_COMPLETE      | 2014-02-28T16:50:31Z |
| LaunchConfig               | AWS::AutoScaling::LaunchConfiguration     | INIT_COMPLETE      | 2014-02-28T16:50:31Z |
| HadoopSGroup               | AWS::AutoScaling::AutoScalingGroup        | INIT_COMPLETE      | 2014-02-28T16:50:52Z |
| HadoopSlaveScaleDownPolicy | AWS::AutoScaling::ScalingPolicy           | INIT_COMPLETE      | 2014-02-28T16:50:52Z |
| HadoopSlaveScaleUpPolicy   | AWS::AutoScaling::ScalingPolicy           | INIT_COMPLETE      | 2014-02-28T16:50:52Z |
| MEMAlarmHigh               | AWS::CloudWatch::Alarm                    | INIT_COMPLETE      | 2014-02-28T16:50:52Z |
| MEMAlarmLow                | AWS::CloudWatch::Alarm                    | INIT_COMPLETE      | 2014-02-28T16:50:52Z |
+----------------------------+-------------------------------------------+--------------------+----------------------+

[gyliu@drsserver hadoop_heat(keystone_admin)]$ heat event-list -r HadoopMasterWaitCondition a1
+---------------------------+-------+------------------------+--------------------+----------------------+
| logical_resource_id       | id    | resource_status_reason | resource_status    | event_time           |
+---------------------------+-------+------------------------+--------------------+----------------------+
| HadoopMasterWaitCondition | 37389 | state changed          | CREATE_IN_PROGRESS | 2014-02-28T16:51:07Z |
| HadoopMasterWaitCondition | 37390 | state changed          | CREATE_COMPLETE    | 2014-02-28T16:52:46Z |
+---------------------------+-------+------------------------+--------------------+----------------------+
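
If something closer to a live progress display is wanted, the same
information can be polled client-side. A sketch with python-heatclient
(the endpoint and token are placeholders):

# Sketch: print per-resource status changes until the stack settles.
import time
from heatclient.client import Client

heat = Client('1', endpoint='http://heat-api:8004/v1/TENANT_ID',
              token='AUTH_TOKEN')

def watch(stack_id):
    seen = {}
    while True:
        for res in heat.resources.list(stack_id):
            if seen.get(res.logical_resource_id) != res.resource_status:
                seen[res.logical_resource_id] = res.resource_status
                print('%s %s' % (res.logical_resource_id,
                                 res.resource_status))
        if not heat.stacks.get(stack_id).stack_status.endswith('IN_PROGRESS'):
            break
        time.sleep(5)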

Thanks,

Jay


2014-02-28 15:28 GMT+08:00 Qiming Teng teng...@linux.vnet.ibm.com:


 The creation of a stack is usually a time-costly process, considering that
 there are cases where software packages need to be installed and
 configured.

 There are also cases where a stack consists of more than one VM instance,
 with dependencies between the instances.  The instances may have to be
 created one by one.

 Are Heat people considering adding some progress updates during the
 deployment?  For example, a simple log that can be printed by heatclient
 telling the user what progress has been made:

 Refreshing known resources types
 Receiving template ...
 Validating template ...
 Creating resource my_lb [AWS::EC2:LoadBalancer]
 Creating resource lb_instance1 [AWS::EC2::Instance]
 Creating resource latency_watcher [AWS::CloudWatch::Alarm]
 
 ...


 This would be useful for users to 'debug' their templates, especially
 when the template syntax is okay but its behavior is not what was
 intended.

 Do we have to rely on heat-cfn-api to get these notifications?

 Any thoughts?

   - Qiming

 Research Staff Member
 IBM Research - China
 tengqim AT cn DOT ibm DOT com


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Thanks,

Jay
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack][Runtime Policy] A proposal for OpenStack run time policy to manage compute/storage resource

2014-02-28 Thread Jay Lau
Hi Yathiraj and Tim,

Really appreciate your comments here ;-)

I will prepare some detailed slides or documents before the summit and we
can review them then. It would be great if OpenStack could provide DRS
features.

Thanks,

Jay



2014-03-01 6:00 GMT+08:00 Tim Hinrichs thinri...@vmware.com:

 Hi Jay,

 I think the Solver Scheduler is a better fit for your needs than Congress
 because you know what kinds of constraints and enforcement you want.  I'm
 not sure this topic deserves an entire design session--maybe just talking a
 bit at the summit would suffice (I *think* I'll be attending).

 Tim

 - Original Message -
 | From: Jay Lau jay.lau@gmail.com
 | To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 | Sent: Wednesday, February 26, 2014 6:30:54 PM
 | Subject: Re: [openstack-dev] [OpenStack][Runtime Policy] A proposal for
 OpenStack run time policy to manage
 | compute/storage resource
 |
 |
 |
 |
 |
 |
 | Hi Tim,
 |
 | I'm not sure if we can put resource monitor and adjust to
 | solver-scheduler (Gantt), but I have proposed this to Gantt design
 | [1], you can refer to [1] and search jay-lau-513.
 |
 | IMHO, Congress does monitoring and also take actions, but the actions
 | seems mainly for adjusting single VM network or storage. It did not
 | consider migrating VM according to hypervisor load.
 |
 | Not sure if this topic deserved to be a design session for the coming
 | summit, but I will try to propose.
 |
 |
 |
 |
 | [1] https://etherpad.openstack.org/p/icehouse-external-scheduler
 |
 |
 |
 | Thanks,
 |
 |
 | Jay
 |
 |
 |
 | 2014-02-27 1:48 GMT+08:00 Tim Hinrichs  thinri...@vmware.com  :
 |
 |
 | Hi Jay and Sylvain,
 |
 | The solver-scheduler sounds like a good fit to me as well. It clearly
 | provisions resources in accordance with policy. Does it monitor
 | those resources and adjust them if the system falls out of
 | compliance with the policy?
 |
 | I mentioned Congress for two reasons. (i) It does monitoring. (ii)
 | There was mention of compute, networking, and storage, and I
 | couldn't tell if the idea was for policy that spans OS components or
 | not. Congress was designed for policies spanning OS components.
 |
 |
 | Tim
 |
 | - Original Message -
 |
 | | From: Jay Lau  jay.lau@gmail.com 
 | | To: OpenStack Development Mailing List (not for usage questions)
 | |  openstack-dev@lists.openstack.org 
 |
 |
 | | Sent: Tuesday, February 25, 2014 10:13:14 PM
 | | Subject: Re: [openstack-dev] [OpenStack][Runtime Policy] A proposal
 | | for OpenStack run time policy to manage
 | | compute/storage resource
 | |
 | |
 | |
 | |
 | |
 | | Thanks Sylvain and Tim for the great sharing.
 | |
 | | @Tim, I also go through with Congress and have the same feeling
 | | with
 | | Sylvai, it is likely that Congress is doing something simliar with
 | | Gantt providing a holistic way for deploying. What I want to do is
 | | to provide some functions which is very similar with VMWare DRS
 | | that
 | | can do some adaptive scheduling automatically.
 | |
 | | @Sylvain, can you please show more detail for what Pets vs.
 | | Cattles
 | | analogy means?
 | |
 | |
 | |
 | |
 | | 2014-02-26 9:11 GMT+08:00 Sylvain Bauza  sylvain.ba...@gmail.com 
 | | :
 | |
 | |
 | |
 | | Hi Tim,
 | |
 | |
 | | As per I'm reading your design document, it sounds more likely
 | | related to something like Solver Scheduler subteam is trying to
 | | focus on, ie. intelligent agnostic resources placement on an
 | | holistic way [1]
 | | IIRC, Jay is more likely talking about adaptive scheduling
 | | decisions
 | | based on feedback with potential counter-measures that can be done
 | | for decreasing load and preserving QoS of nodes.
 | |
 | |
 | | That said, maybe I'm wrong ?
 | |
 | |
 | | [1] https://blueprints.launchpad.net/nova/+spec/solver-scheduler
 | |
 | |
 | |
 | | 2014-02-26 1:09 GMT+01:00 Tim Hinrichs  thinri...@vmware.com  :
 | |
 | |
 | |
 | |
 | | Hi Jay,
 | |
 | | The Congress project aims to handle something similar to your use
 | | cases. I just sent a note to the ML with a Congress status update
 | | with the tag [Congress]. It includes links to our design docs. Let
 | | me know if you have trouble finding it or want to follow up.
 | |
 | | Tim
 | |
 | |
 | |
 | | - Original Message -
 | | | From: Sylvain Bauza  sylvain.ba...@gmail.com 
 | | | To: OpenStack Development Mailing List (not for usage
 | | | questions)
 | | |  openstack-dev@lists.openstack.org 
 | | | Sent: Tuesday, February 25, 2014 3:58:07 PM
 | | | Subject: Re: [openstack-dev] [OpenStack][Runtime Policy] A
 | | | proposal
 | | | for OpenStack run time policy to manage
 | | | compute/storage resource
 | | |
 | | |
 | | |
 | | | Hi Jay,
 | | |
 | | |
 | | | Currently, the Nova scheduler only acts upon user request (either
 | | | live migration or boot an instance). IMHO, that's something Gantt
 | | | should scope later on (or at least there could be some space
 | | | within

Re: [openstack-dev] [nova] Automatic Evacuation

2014-03-03 Thread Jay Lau
Yes, it would be great if we could have a simple framework for future
run-time policy plugins. ;-)
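
To illustrate, a hypothetical shape for such a framework (this interface
does not exist anywhere; it only sketches the plugin idea, with
python-novaclient calls standing in for the actions):

# Each run-time policy is a plugin with a common check/apply interface.
import abc


class RuntimePolicy(object):
    __metaclass__ = abc.ABCMeta

    @abc.abstractmethod
    def check(self, nova):
        """Inspect the current state and return planned actions."""

    @abc.abstractmethod
    def apply(self, nova, actions):
        """Execute the planned actions."""


class AutoEvacuationPolicy(RuntimePolicy):
    def check(self, nova):
        # Hosts whose nova-compute service is reported down.
        return [svc.host
                for svc in nova.services.list(binary='nova-compute')
                if svc.state == 'down']

    def apply(self, nova, dead_hosts):
        for host in dead_hosts:
            for server in nova.servers.list(
                    search_opts={'host': host, 'all_tenants': 1}):
                # Assumes shared storage; otherwise rebuild elsewhere.
                nova.servers.evacuate(server, None, True)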

2014-03-03 23:12 GMT+08:00 laserjetyang laserjety...@gmail.com:

 There are a lot of rules for HA or LB, so I think it might be a better
 idea to scope the framework and leave the policies as plugins.


 On Mon, Mar 3, 2014 at 10:30 PM, Andrew Laski andrew.la...@rackspace.com wrote:

 On 03/01/14 at 07:24am, Jay Lau wrote:

 Hey,

 Sorry to bring this up again. There are also some discussions here:
 http://markmail.org/message/5zotly4qktaf34ei

 You can also search [Runtime Policy] in your email list.

 Not sure if we can put this to Gantt and enable Gantt provide both
 initial
 placement and rum time polices like HA, load balance etc.


 I don't have an opinion at the moment as to whether or not this sort of
 functionality belongs in Gantt, but there's still a long way to go just to
 get the scheduling functionality we want out of Gantt and I would like to
 see the focus stay on that.





 Thanks,

 Jay



 2014-02-21 21:31 GMT+08:00 Russell Bryant rbry...@redhat.com:

  On 02/20/2014 06:04 PM, Sean Dague wrote:
  On 02/20/2014 05:32 PM, Russell Bryant wrote:
  On 02/20/2014 05:05 PM, Costantino, Leandro I wrote:
  Hi,
 
  Would like to know if there's any interest on having
  'automatic evacuation' feature when a compute node goes down. I
  found 3 bps related to this topic: [1] Adding a periodic task
  and using ServiceGroup API for compute-node status [2] Using
  ceilometer to trigger the evacuate api. [3] Include some kind
  of H/A plugin  by using a 'resource optimization service'
 
  Most of those BP's have comments like 'this logic should not
  reside in nova', so that's why i am asking what should be the
  best approach to have something like that.
 
  Should this be ignored, and just rely on external monitoring
  tools to trigger the evacuation? There are complex scenarios
  that require lot of logic that won't fit into nova nor any
  other OS component. (For instance: sometimes it will be faster
  to reboot the node or compute-nova than starting the
  evacuation, but if it fail X times then trigger an evacuation,
  etc )
 
  Any thought/comment// about this?
 
  Regards Leandro
 
  [1]
 
 https://blueprints.launchpad.net/nova/+spec/vm-auto-ha-when-host-broken
 
 
 [2]
 
  https://blueprints.launchpad.net/nova/+spec/evacuate-instance-automatically
 
 
 [3]
 
  https://blueprints.launchpad.net/nova/+spec/resource-optimization-service
 
 
 
 My opinion is that I would like to see this logic done outside of Nova.
 
  Right now Nova is the only service that really understands the
  compute topology of hosts, though it's understanding of liveness is
  really not sufficient to handle this kind of HA thing anyway.
 
  I think that's the real problem to solve. How to provide
  notifications to somewhere outside of Nova on host death. And the
  question is, should Nova be involved in just that part, keeping
  track of node liveness and signaling up for someone else to deal
  with it? Honestly that part I'm more on the fence about. Because
  putting another service in place to just handle that monitoring
  seems overkill.
 
  I 100% agree that all the policy, reacting, logic for this should
  be outside of Nova. Be it Heat or somewhere else.

 I think we agree.  I'm very interested in continuing to enhance Nova
 to make sure that the thing outside of Nova has all of the APIs it
 needs to get the job done.

 --
 Russell Bryant

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




 --
 Thanks,

 Jay


  ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Thanks,

Jay
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Updates to Juno blueprint review process

2014-03-20 Thread Jay Lau
It would be better if we had a workflow diagram, just like the
Gerrit_Workflow page https://wiki.openstack.org/wiki/Gerrit_Workflow, to show
the new process.

Thanks!


2014-03-21 4:23 GMT+08:00 Dolph Mathews dolph.math...@gmail.com:


 On Thu, Mar 20, 2014 at 10:49 AM, Russell Bryant rbry...@redhat.com wrote:

 We recently discussed the idea of using gerrit to review blueprint
 specifications [1].  There was a lot of support for the idea so we have
 proceeded with putting this together before the start of the Juno
 development cycle.

 We now have a new project set up, openstack/nova-specs.  You submit
 changes to it just like any other project in gerrit.  Find the README
 and a template for specifications here:

   http://git.openstack.org/cgit/openstack/nova-specs/tree/README.rst

   http://git.openstack.org/cgit/openstack/nova-specs/tree/template.rst


 This is great! This is the same basic process we've used for API-impacting
 changes in keystone and it has worked really well for us, and we're eager
 to adopt the same thing on a more general level.

 The process seems overly complicated to me, however. As a blueprint
 proposer, I find it odd that I have to propose my blueprint as part of
 approved/ -- why not just have a single directory to file things away that
 have been implemented? Is it even necessary to preserve them? (why not just
 git rm when implemented?) Gerrit already provides a permalink (to the
 review).




 The blueprint process wiki page has also been updated to reflect that we
 will be using this for Nova:

   https://wiki.openstack.org/wiki/Blueprints#Nova

 Note that *all* Juno blueprints, including ones that were previously
 approved, must go through this new process.  This will help ensure that
 blueprints previously approved still make sense, as well as ensure that
 all Juno specs follow a more complete and consistent format.

 Before the flood of spec reviews start, we would really like to get
 feedback on the content of the spec template.  It includes things like
 deployer impact which could use more input.  Feel free to provide
 feedback on list, or just suggest updates via proposed changes in gerrit.

 I suspect this process to evolve a bit throughout Juno, but I'm very
 excited about the positive impact it is likely to have on our overall
 result.

 Thanks!

 [1]
 http://lists.openstack.org/pipermail/openstack-dev/2014-March/029232.html

 --
 Russell Bryant

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Thanks,

Jay
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [OpenStack-dev][NOVA][VCDriver][live-migration] VCDriver live migration problem

2014-03-21 Thread Jay Lau
Hi,

Currently we cannot do live migration with the VCDriver in nova; live
migration is a really important feature, so is there any plan to fix this?

I noticed that there is already a bug tracking this, but there seems to have
been no progress since last November: https://bugs.launchpad.net/nova/+bug/1192192

I am just bringing this problem up to see if there is any plan to fix it.
After some investigation, I think that this might deserve to be a blueprint
rather than a bug.

We may need to resolve issues for the following cases:
1) How to live migrate with only one nova-compute? (One nova-compute can
manage multiple clusters, and there can be multiple hosts in one cluster.)
2) Support live migration between clusters
3) Support live migration between resource pools
4) Support live migration between hosts
5) Support live migration between a cluster and a host
6) Support live migration between a cluster and a resource pool
7) Support live migration between a resource pool and a host
8) There might be more cases.

Please share your comments, and correct me if anything is incorrect.

-- 
Thanks,

Jay
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [NOVA][VMWare][live-migration] VCDriver live migration problem

2014-03-22 Thread Jay Lau
Thanks Shawn, I have updated the title with [VMWare].

Yes, I know that live migration works. But the problem is that when a cluster
admin wants to live migrate a VM instance, s/he will not know the target
host to migrate to: s/he cannot get the target host from nova-compute,
because currently the VCDriver can only report a cluster or resource pool as
the hypervisor host, not the ESX servers.

IMHO, the VCDriver should support live migration between clusters, resource
pools and ESX hosts, so we may need at least the following enhancements:
1) Enable live migration with even one nova-compute. My current thinking is
to enhance the target host as host:node when live migrating a VM instance;
see the hypothetical example below.
2) Enable the VCDriver to report all ESX servers.
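
To illustrate the host:node idea, a hypothetical snippet (this target
syntax is NOT supported by the VCDriver today; the instance name and
host:node value are made up):

# What targeting a concrete ESX host behind one nova-compute might look
# like if live migration accepted a host:node destination.
from novaclient.v1_1 import client

nova = client.Client('admin', 'secret', 'admin',
                     'http://keystone:5000/v2.0')

server = nova.servers.find(name='vm0001')
# 'computeA' is the single nova-compute service; 'esx-host-2' would name
# the destination ESX server inside the cluster it manages.
nova.servers.live_migrate(server, 'computeA:esx-host-2', False, False)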

We can discuss more during next week's IRC meeting.

Thanks!


2014-03-22 17:13 GMT+08:00 Shawn Hartsock harts...@acm.org:

 Hi Jay. We usually use [vmware] to tag discussion of VMware things. I
 almost didn't see this message.

 In short, there is a plan and we're currently blocked because we have
 to address several other pressing issues in the driver before we can
 address this one. Part of this is due to the fact that we can't press
 harder on blueprints or changes to the VCDriver right now.

 I actually reported this bug and we've discussed this at
 https://wiki.openstack.org/wiki/Meetings/VMwareAPI the basic problem
 is that live-migration actually works but you can't presently
 formulate a command that activates the feature from the CLI under some
 configurations. That's because of the introduction of clusters in the
 VCDriver in Havana.

 To fix this, we have to come up with a way to target a host inside the
 cluster (as I pointed out in the bug) or we have to have some way for
 a live migration to occur between clusters and a way to validate that
 this can happen first.

 As for the priority of this bug, it's been set to Medium which puts it
 well behind many of the Critical or High tasks on our radar. As for
 fixing the bug, no new outward behaviors or API are going to be
 introduced and this was working at one point and now it's stopped. To
 call this a new feature seems a bit strange.

 So, moving forward... perhaps we need to re-evaluate the priority
 order on some of these things. I tabled Juno planning during the last
 VMwareAPI subteam meeting but I plan on starting the discussion next
 week. We have a priority order for blueprints that we set as a team
 and these are publicly recorded in our meeting logs and on the wiki.
 I'll try to do better advertising these things. You are of course
 invited... and yeah... if you're interested in what we're fixing next
 in the VCDriver that next IRC meeting is where we'll start the
 discussion.

 On Sat, Mar 22, 2014 at 1:18 AM, Jay Lau jay.lau@gmail.com wrote:
  Hi,
 
  Currently we cannot do live migration with VCDriver in nova, live
 migration
  is really an important feature, so any plan to fix this?
 
  I noticed that there is already bug tracing this but seems no progress
 since
  last year's November: https://bugs.launchpad.net/nova/+bug/1192192
 
  Here just bring this problem up to see if there are any plan to fix this.
  After some investigation, I think that this might deserve to be a
 blueprint
  but not a bug.
 
  We may need to resolve issues for the following cases:
  1) How to live migrate with only one nova-compute? (One nova-compute can
  manage multiple clusters, and there can be multiple hosts in one cluster.)
  2) Support live migration between clusters
  3) Support live migration between resource pools
  4) Support live migration between hosts
  5) Support live migration between cluster and host
  6) Support live migration between cluster and resource pool
  7) Support live migration between resource pool and host
  8) Might be more cases.
 
  Please show your comments if any and correct me if anything is not
 correct.
 
  --
  Thanks,
 
  Jay
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 



 --
 # Shawn.Hartsock - twitter: @hartsock - plus.google.com/+ShawnHartsock

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Thanks,

Jay
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [NOVA][VMWare][live-migration] VCDriver live migration problem

2014-03-22 Thread Jay Lau
Thanks Shawn, what you proposed is exactly what I want ;-) Cool!

We can discuss more during the IRC meeting.

Thanks!


2014-03-22 20:22 GMT+08:00 Jay Lau jay.lau@gmail.com:

 Thanks Shawn, I have updated the title with [VMWare].

 Yes, I know that live migration works. But the problem is that when a cluster
 admin wants to live migrate a VM instance, s/he will not know the target
 host to migrate to: s/he cannot get the target host from nova-compute,
 because currently the VCDriver can only report a cluster or resource pool as
 the hypervisor host, not the ESX servers.

 IMHO, the VCDriver should support live migration between clusters, resource
 pools and ESX hosts, so we may need at least the following enhancements:
 1) Enable live migration with even one nova-compute. My current thinking
 is to enhance the target host as host:node when live migrating a VM instance.
 2) Enable the VCDriver to report all ESX servers.

 We can discuss more during next week's IRC meeting.

 Thanks!


 2014-03-22 17:13 GMT+08:00 Shawn Hartsock harts...@acm.org:

 Hi Jay. We usually use [vmware] to tag discussion of VMware things. I
 almost didn't see this message.

 In short, there is a plan and we're currently blocked because we have
 to address several other pressing issues in the driver before we can
 address this one. Part of this is due to the fact that we can't press
 harder on blueprints or changes to the VCDriver right now.

 I actually reported this bug and we've discussed this at
 https://wiki.openstack.org/wiki/Meetings/VMwareAPI the basic problem
 is that live-migration actually works but you can't presently
 formulate a command that activates the feature from the CLI under some
 configurations. That's because of the introduction of clusters in the
 VCDriver in Havana.

 To fix this, we have to come up with a way to target a host inside the
 cluster (as I pointed out in the bug) or we have to have some way for
 a live migration to occur between clusters and a way to validate that
 this can happen first.

 As for the priority of this bug, it's been set to Medium which puts it
 well behind many of the Critical or High tasks on our radar. As for
 fixing the bug, no new outward behaviors or API are going to be
 introduced and this was working at one point and now it's stopped. To
 call this a new feature seems a bit strange.

 So, moving forward... perhaps we need to re-evaluate the priority
 order on some of these things. I tabled Juno planning during the last
 VMwareAPI subteam meeting but I plan on starting the discussion next
 week. We have a priority order for blueprints that we set as a team
 and these are publicly recorded in our meeting logs and on the wiki.
 I'll try to do better advertising these things. You are of course
 invited... and yeah... if you're interested in what we're fixing next
 in the VCDriver that next IRC meeting is where we'll start the
 discussion.

 On Sat, Mar 22, 2014 at 1:18 AM, Jay Lau jay.lau@gmail.com wrote:
  Hi,
 
  Currently we cannot do live migration with the VCDriver in nova. Live
  migration is really an important feature, so is there any plan to fix this?
 
  I noticed that there is already a bug tracking this, but there seems to be
  no progress since last November: https://bugs.launchpad.net/nova/+bug/1192192
 
  I'm just bringing this problem up to see if there is any plan to fix it.
  After some investigation, I think that this might deserve to be a blueprint
  rather than a bug.
 
  We may need to resolve issues for the following cases:
  1) How to live migrate with only one nova compute? (one nova compute can
  manage multiple clusters, and there can be multiple hosts in one cluster)
  2) Support live migration between clusters
  3) Support live migration between resource pools
  4) Support live migration between hosts
  5) Support live migration between cluster and host
  6) Support live migration between cluster and resource pool
  7) Support live migration between resource pool and host
  8) There might be more cases.
 
  Please share your comments, and correct me if anything is wrong.
 
  --
  Thanks,
 
  Jay
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 



 --
 # Shawn.Hartsock - twitter: @hartsock - plus.google.com/+ShawnHartsock

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




 --
 Thanks,

 Jay




-- 
Thanks,

Jay
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [OpenStack-Dev][Nova][VMWare] Enable live migration with one nova compute

2014-03-31 Thread Jay Lau
Hi,

Currently with the VMware VCDriver, one nova compute can manage multiple
clusters/RPs. This means a cluster admin cannot do live migration between
clusters/RPs if those clusters/RPs are managed by one nova compute, as the
current live migration logic requires at least two nova computes.

A bug [1] was also filed to track the VMware live migration issue.

I'm now trying the following solution to see if it is acceptable as a fix;
the fix aims to enable live migration with one nova compute:
1) When live migration checks whether the hosts are the same, check both
host and node for the VM instance.
2) When the nova scheduler selects a destination for live migration, the
live migration task should put (host, node) into attempted hosts.
3) The nova scheduler needs to be enhanced to support ignored_nodes.
4) Nova compute needs to be enhanced to check host and node when doing live
migration.

I also uploaded a WIP patch [2] for you to review the idea of the fix, and I
hope to get some comments from you.

[1] https://bugs.launchpad.net/nova/+bug/1192192
[2] https://review.openstack.org/#/c/84085
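
For illustration, a minimal sketch of the check in point 1 (the names here
are mine, not the actual patch; see the WIP review above for the real code):

    # Sketch: with the VCDriver, two different ESX hosts can sit behind one
    # nova-compute, so comparing the service host alone is no longer enough.
    def is_same_place(instance, dest_host, dest_node):
        return (instance['host'] == dest_host and
                instance['node'] == dest_node)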

-- 
Thanks,

Jay
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack-Dev][Nova][VMWare] Enable live migration with one nova compute

2014-03-31 Thread Jay Lau
Thanks Solly and Alessandro!

@Solly,

Yes, I also want to make the code change not VMware-specific, but perhaps we
can consider what @Alessandro said: we are going to have Hyper-V cluster
support in the next cycle, and maybe Power HMC in the future. All of them
can be managed at the cluster level, and one cluster can have multiple
hypervisors.

So I think it might be time to enhance live migration to handle not only the
case of a single hypervisor but also multiple hypervisors managed as one
cluster.

Hope we can also get some comments from VMWare guys.

Thanks.


2014-04-01 6:57 GMT+08:00 Alessandro Pilotti 
apilo...@cloudbasesolutions.com:


 On 31 Mar 2014, at 18:13, Solly Ross sr...@redhat.com wrote:

  Building on what John said, I'm a bit wary of introducing semantics into
 the Conductor's live migration code
  that are VMWare-specific.  The conductor's live-migration code is
 supposed to be driver-agnostic.  IMHO, it
  would be much better if we could handle this at a level where the code
 was already VMWare-specific.
 

 In terms of driver-specific features, we're evaluating cluster support for
 Hyper-V in the next cycle, which would encounter the same issue for live
 migration.
 Hyper-V does not require clustering for supporting live migration (it's
 already available since Grizzly), but various users are requesting Windows
 clustering support
 for supporting specific scenarios, which requires a separate Nova Hyper-V
 failover clustering driver with resemblances to the VCenter driver in terms
 of
 cells / hosts management. Note: this is not related to Microsoft System
 Center.

 Evaluating such a feature solely on the basis of blueprints that are barely
 in draft for other drivers, and that were never discussed for approval, is
 obviously not required, but it might be useful to consider the possibility
 that VMware's might not be the only Nova driver with this requirement in
 the relatively short-term future.

 Thanks,

 Alessandro


  Best Regards,
  Solly Ross
 
  - Original Message -
  From: Jay Lau jay.lau@gmail.com
  To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
  Sent: Monday, March 31, 2014 10:36:17 AM
  Subject: Re: [openstack-dev] [OpenStack-Dev][Nova][VMWare] Enable live
 migration with one nova compute
 
  Thanks John. Yes, I also think that this should be a bp, as it is going to
 make some changes to enable live migration with only one nova compute; I
 will file a blueprint later.
 
  For your proposal to specify the same host as the instance: this can
 resolve the issue of live migration with a target host, but what about the
 case of live migration without a target host? If we still allow specifying
 the same host as the instance, the live migration will go into a dead loop.
 
  So it seems we definitely need to find a way to specify the node for live
 migration; I hope someone else can shed some light here.
 
  Of course, I will file a bp and go through the new bp review process for
 this feature.
 
  Thanks!
 
 
  2014-03-31 21:02 GMT+08:00 John Garbutt  j...@johngarbutt.com  :
 
 
 
  On 31 March 2014 10:11, Jay Lau  jay.lau@gmail.com  wrote:
  Hi,
 
  Currently with VMWare VCDriver, one nova compute can manage multiple
  clusters/RPs, this caused cluster admin cannot do live migration between
  clusters/PRs if those clusters/PRs managed by one nova compute as the
  current live migration logic request at least two nova computes.
 
  A bug [1] was also filed to trace VMWare live migration issue.
 
  I'm now trying the following solution to see if it is acceptable for a
 fix,
  the fix wants enable live migration with one nova compute:
  1) When live migration check if host are same, check both host and node
 for
  the VM instance.
  2) When nova scheduler select destination for live migration, the live
  migration task should put (host, node) to attempted hosts.
  3) Nova scheduler needs to be enhanced to support ignored_nodes.
  4) nova compute need to be enhanced to check host and node when doing
 live
  migration.
 
  I also uploaded a WIP patch [2] for you to review the idea of the fix
 and
  hope can get some comments from you.
 
  [1] https://bugs.launchpad.net/nova/+bug/1192192
  [2] https://review.openstack.org/#/c/84085
 
  Long term, finding a way to unify how cells and the VMware driver
  manage multiple hosts seems like the best way forward. It would be a
  shame for this API to be different between cells and VMware, although
  right now, that might not work too well :(
 
  A better short term fix, might be to allow you to specify the same
  host as the instance, and the scheduling of the node could be
  delegated to the VMware driver, which might just delegate that to
  vCenter. I assume we still need some way to specify the node, and I
  can't immediately think of a good way forward.
 
  I feel this should really be treated as a blueprint, and go through
  the new blueprint review process. That should help decide the right

Re: [openstack-dev] [OpenStack-Dev][Nova][VMWare] Enable live migration with one nova compute

2014-04-03 Thread Jay Lau
Thanks Jay and Chris for the comments!

@Jay Pipes, I think that we still need to enable live migration with one
nova compute, as one nova compute can manage multiple clusters, and VMs
should be able to migrate between those clusters managed by that one nova
compute. For cells, IMHO, each cell can be treated as a small cloud but not
as a compute; each cell cloud should be able to handle VM operations in the
small cloud itself. Please correct me if I am wrong.

@Chris, OS-EXT-SRV-ATTR:host is the host where nova compute is running,
and OS-EXT-SRV-ATTR:hypervisor_hostname is the hypervisor host where the
VM is running. Live migration currently uses host. What I want to do is
enable migration with one host, where that host manages multiple hypervisors.
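
For example, against a VCDriver deployment the two attributes can look like
this (illustrative values only):

    $ nova show vm1 | grep -E 'host|hypervisor'
    | OS-EXT-SRV-ATTR:host                 | compute-1             |
    | OS-EXT-SRV-ATTR:hypervisor_hostname  | domain-c7(Cluster1)   |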

I'm planning to draft a bp for review which depend on
https://blueprints.launchpad.net/nova/+spec/vmware-auto-inventory

Thanks!


2014-04-04 8:03 GMT+08:00 Chris Friesen chris.frie...@windriver.com:

 On 04/03/2014 05:48 PM, Jay Pipes wrote:

 On Mon, 2014-03-31 at 17:11 +0800, Jay Lau wrote:

 Hi,

 Currently with VMWare VCDriver, one nova compute can manage multiple
 clusters/RPs, this caused cluster admin cannot do live migration
 between clusters/PRs if those clusters/PRs managed by one nova compute
 as the current live migration logic request at least two nova
 computes.


 A bug [1] was also filed to trace VMWare live migration issue.

 I'm now trying the following solution to see if it is acceptable for a
 fix, the fix wants enable live migration with one nova compute:
 1) When live migration check if host are same, check both host and
 node for the VM instance.
 2) When nova scheduler select destination for live migration, the live
 migration task should put (host, node) to attempted hosts.
 3) Nova scheduler needs to be enhanced to support ignored_nodes.
 4) nova compute need to be enhanced to check host and node when doing
 live migration.


 What precisely is the point of live migrating an instance to the exact
 same host as it is already on? The failure domain is the host, so moving
 the instance from one cluster to another, but on the same host is kind
 of a silly use case IMO.


 Here is where precise definitions of compute node, OS-EXT-SRV-ATTR:host,
 OS-EXT-SRV-ATTR:hypervisor_hostname, and host as understood by novaclient
 would be nice.

 Currently the nova live-migration command takes a host argument. It's
 not clear which of the above this corresponds to.

 My understanding is that one nova-compute process can manage multiple
 VMWare physical hosts.  So it could make sense to support live migration
 between separate VMWare hosts even if they're managed by a single
 nova-compute process.

 Chris


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Thanks,

Jay
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack-Dev][Nova][VMWare] Enable live migration with one nova compute

2014-04-03 Thread Jay Lau
2014-04-04 12:46 GMT+08:00 Jay Pipes jaypi...@gmail.com:

 On Fri, 2014-04-04 at 11:08 +0800, Jay Lau wrote:
  Thanks Jay and Chris for the comments!
 
  @Jay Pipes, I think that we still need to enable one nova compute
  live migration as one nova compute can manage multiple clusters and
  VMs can be migrated between those clusters managed by one nova
  compute.

 Why, though? That is what I am asking... seems to me like this is an
 anti-feature. What benefit does the user get from moving an instance
 from one VCenter cluster to another VCenter cluster if the two clusters
 are on the same physical machine?

@Jay Pipes, for VMware, one physical machine (ESX server) can only belong
to one vCenter cluster, so we may have the following scenario:

DC
 |
 |---Cluster1
 |      |
 |      |---host1
 |
 |---Cluster2
        |
        |---host2

Then when using the VCDriver, I can use one nova compute to manage both
Cluster1 and Cluster2, which means I cannot migrate a VM from host2 to
host1 ;-(

The bp was introduced by
https://blueprints.launchpad.net/nova/+spec/multiple-clusters-managed-by-one-service


 Secondly, why is it that a single nova-compute manages multiple VCenter
 clusters? This seems like a hack to me... perhaps someone who wrote the
 code for this or knows the decision behind it could chime in here?

   For cell, IMHO, each cell can be treated as a small cloud but not
  a compute, each cell cloud should be able to handle VM operations
  in the small cloud itself. Please correct me if I am wrong.

 Yes, I agree with you that a cell is not a compute. Not sure if I said
 otherwise in my previous response. Sorry if it was confusing! :)

 Best,
 -jay

  @Chris, OS-EXT-SRV-ATTR:host is the host where nova compute is
  running and OS-EXT-SRV-ATTR:hypervisor_hostname is the hypervisor
  host where the VM is running. Live migration is now using host for
  live migration. What I want to do is enable migration with one host
  and the host managing multiple hypervisors.
 
 
  I'm planning to draft a bp for review which depend on
  https://blueprints.launchpad.net/nova/+spec/vmware-auto-inventory
 
 
  Thanks!
 
 
 
  2014-04-04 8:03 GMT+08:00 Chris Friesen chris.frie...@windriver.com:
  On 04/03/2014 05:48 PM, Jay Pipes wrote:
  On Mon, 2014-03-31 at 17:11 +0800, Jay Lau wrote:
  Hi,
 
  Currently with VMWare VCDriver, one nova
  compute can manage multiple
  clusters/RPs, this caused cluster admin cannot
  do live migration
  between clusters/PRs if those clusters/PRs
  managed by one nova compute
  as the current live migration logic request at
  least two nova
  computes.
 
 
  A bug [1] was also filed to trace VMWare live
  migration issue.
 
  I'm now trying the following solution to see
  if it is acceptable for a
  fix, the fix wants enable live migration with
  one nova compute:
  1) When live migration check if host are same,
  check both host and
  node for the VM instance.
  2) When nova scheduler select destination for
  live migration, the live
  migration task should put (host, node) to
  attempted hosts.
  3) Nova scheduler needs to be enhanced to
  support ignored_nodes.
  4) nova compute need to be enhanced to check
  host and node when doing
  live migration.
 
  What precisely is the point of live migrating an
  instance to the exact
  same host as it is already on? The failure domain is
  the host, so moving
  the instance from one cluster to another, but on the
  same host is kind
  of a silly use case IMO.
 
 
  Here is where precise definitions of compute node,
  OS-EXT-SRV-ATTR:host, and
  OS-EXT-SRV-ATTR:hypervisor_hostname, and host as
  understood by novaclient would be nice.
 
  Currently the nova live-migration command takes a host
  argument. It's not clear which of the above this corresponds
  to.
 
  My understanding is that one nova-compute process can manage
  multiple VMWare physical hosts.  So it could make sense to
  support live migration between separate VMWare hosts even if
  they're managed by a single nova-compute process.
 
  Chris

Re: [openstack-dev] [OpenStack-Dev][Nova][VMWare] Enable live migration with one nova compute

2014-04-05 Thread Jay Lau
Thanks Jay Pipes.

If we go back to having a single nova-compute managing a single vCenter
cluster, then there might be problems in a large-scale vCenter deployment.
There are still problems that we cannot handle:
1) The VCDriver can also manage multiple resource pools with a single nova
compute. The resource pool is another concept: we can create multiple
resource pools in one vCenter cluster, or multiple resource pools in one
ESX host. In a large-scale cluster, there can be thousands of resource
pools, and the configuration would drive the admin crazy. ;-)
2) How do we manage an ESX host which does not belong to any cluster or
resource pool? Such as the following case:
DC
 |
 |--- ESX host1
 |
 |--- ESX host2

3) There is another bp,
https://blueprints.launchpad.net/nova/+spec/vmware-auto-inventory, filed by
Shawn. This bp wants to report all resources, including clusters, resource
pools, and ESX hosts, and can be treated as the base for the VCDriver: if
the VCDriver can report all resources, then it would be very easy to do
what we want.

Thanks!


2014-04-06 4:32 GMT+08:00 Jay Pipes jaypi...@gmail.com:

 On Fri, 2014-04-04 at 13:30 +0800, Jay Lau wrote:
 
 
 
  2014-04-04 12:46 GMT+08:00 Jay Pipes jaypi...@gmail.com:
  On Fri, 2014-04-04 at 11:08 +0800, Jay Lau wrote:
   Thanks Jay and Chris for the comments!
  
   @Jay Pipes, I think that we still need to enable one nova
  compute
   live migration as one nova compute can manage multiple
  clusters and
   VMs can be migrated between those clusters managed by one
  nova
   compute.
 
 
  Why, though? That is what I am asking... seems to me like this
  is an
  anti-feature. What benefit does the user get from moving an
  instance
  from one VCenter cluster to another VCenter cluster if the two
  clusters
  are on the same physical machine?
  @Jay Pipes, for VMWare, one physical machine (ESX server) can only
  belong to one VCenter cluster, so we may have following scenarios.
 
  DC
   |
 
   |---Cluster1
   |  |
 
   |  |---host1
   |
 
   |---Cluster2
  |
 
  |---host2
 
 
  Then when using VCDriver, I can use one nova compute manage both
  Cluster1 and Cluster2, this caused me cannot migrate VM from host2 to
  host1 ;-(
 
 
  The bp was introduced by
 
 https://blueprints.launchpad.net/nova/+spec/multiple-clusters-managed-by-one-service

 Well, it seems to me that the problem is the above blueprint and the
 code it introduced. This is an anti-feature IMO, and probably the best
 solution would be to remove the above code and go back to having a
 single nova-compute managing a single vCenter cluster, not multiple
 ones.

 -jay



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Thanks,

Jay
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack-Dev][Nova][VMWare] Enable live migration with one nova compute

2014-04-06 Thread Jay Lau
Hi Divakar,

Can I say that bare metal provisioning is now using a kind of Parent-Child
compute mode? I was also thinking that we can use host:node to identify a
kind of Parent-Child or hierarchical compute. So can you please explain the
difference between your Parent-Child compute node concept and bare metal
provisioning?

Thanks!


2014-04-06 14:59 GMT+08:00 Nandavar, Divakar Padiyar 
divakar.padiyar-nanda...@hp.com:

  Well, it seems to me that the problem is the above blueprint and the
 code it introduced. This is an anti-feature IMO, and probably the best
 solution would be to remove the above code and go back to having a single
  nova-compute managing a single vCenter cluster, not multiple ones.

 The problem is not introduced by managing multiple clusters from a single
 nova-compute proxy node. Internally this proxy driver is still presenting a
 compute-node for each of the clusters it is managing. What we need to think
 about is the applicability of the live migration use case when a cluster is
 modelled as a compute. Since the cluster is modelled as a compute, it is
 assumed that the typical live-move use case is taken care of by the
 underlying cluster itself. With this, there are other use cases which are
 no-ops today, like host maintenance mode, live move, setting instance
 affinity, etc. In order to resolve this, I was thinking of a way to expose
 operations on individual ESX hosts (putting a host in maintenance mode,
 live move, instance affinity, etc.) by introducing a Parent-Child compute
 node concept. Scheduling can be restricted to the Parent compute node, and
 the Child compute node can be used to provide more drill-down on compute
 and also enable additional compute operations. Any thoughts on this?

 Thanks,
 Divakar


 -Original Message-
 From: Jay Pipes [mailto:jaypi...@gmail.com]
 Sent: Sunday, April 06, 2014 2:02 AM
 To: openstack-dev@lists.openstack.org
 Subject: Re: [openstack-dev] [OpenStack-Dev][Nova][VMWare] Enable live
 migration with one nova compute
 Importance: High

 On Fri, 2014-04-04 at 13:30 +0800, Jay Lau wrote:
 
 
 
  2014-04-04 12:46 GMT+08:00 Jay Pipes jaypi...@gmail.com:
  On Fri, 2014-04-04 at 11:08 +0800, Jay Lau wrote:
   Thanks Jay and Chris for the comments!
  
   @Jay Pipes, I think that we still need to enable one nova
  compute
   live migration as one nova compute can manage multiple
  clusters and
   VMs can be migrated between those clusters managed by one
  nova
   compute.
 
 
  Why, though? That is what I am asking... seems to me like this
  is an
  anti-feature. What benefit does the user get from moving an
  instance
  from one VCenter cluster to another VCenter cluster if the two
  clusters
  are on the same physical machine?
  @Jay Pipes, for VMWare, one physical machine (ESX server) can only
  belong to one VCenter cluster, so we may have following scenarios.
 
  DC
   |
 
   |---Cluster1
   |  |
 
   |  |---host1
   |
 
   |---Cluster2
  |
 
  |---host2
 
 
  Then when using VCDriver, I can use one nova compute manage both
  Cluster1 and Cluster2, this caused me cannot migrate VM from host2 to
  host1 ;-(
 
 
  The bp was introduced by
  https://blueprints.launchpad.net/nova/+spec/multiple-clusters-managed-
  by-one-service

 Well, it seems to me that the problem is the above blueprint and the code
 it introduced. This is an anti-feature IMO, and probably the best solution
 would be to remove the above code and go back to having a single
 nova-compute managing a single vCenter cluster, not multiple ones.

 -jay



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Thanks,

Jay
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack] Does openstack have a notification system that will let us know when a server changes state ?

2013-10-20 Thread Jay Lau
Please get more details from https://wiki.openstack.org/wiki/SystemUsageData
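
For what it's worth, here is a minimal consumer sketch (my own example, not
from that wiki page; it assumes the default rabbit transport, the 'nova'
topic exchange, and the 'notifications.info' routing key, so adjust all
three to your deployment):

    from kombu import Connection, Exchange, Queue

    nova_exchange = Exchange('nova', type='topic', durable=False)
    queue = Queue('state-watcher', exchange=nova_exchange,
                  routing_key='notifications.info', durable=False)

    def on_message(body, message):
        # compute.instance.update fires on state transitions,
        # e.g. building -> active
        if body.get('event_type') == 'compute.instance.update':
            payload = body['payload']
            print(payload.get('instance_id'), payload.get('state'))
        message.ack()

    with Connection('amqp://guest:guest@localhost:5672//') as conn:
        with conn.Consumer(queue, callbacks=[on_message]):
            while True:
                conn.drain_events()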

Thanks,

Jay


2013/10/19 Gabriel Hurley gabriel.hur...@nebula.com

  The answer is “sort of”. Most projects (including Nova) publish to an
 RPC “notifications” channel (e.g. in rabbitMQ or whichever you use in your
 deployment). This is how Ceilometer gets some of its data.

 There is common code for connecting to the notification queue in Oslo (the
 “rpc” and “notifier” modules, particularly), but the exercise of actually
 setting up your consumer is left up to you, and there are various gotchas
 that aren't well-documented. Ceilometer's code is a reasonable starting
 point for building your own.

 As this is an area I've been experimenting with lately I'll say that once
 you get it all working it is certainly functional and will deliver exactly
 what you're asking for, but it can be a fair bit of engineering effort if
 you're not familiar with how these things work already.

 This is an area I hope can be improved in OpenStack in future releases.

 Hope that helps,

 - Gabriel

 From: openstack learner [mailto:openstacklea...@gmail.com]
 Sent: Friday, October 18, 2013 11:57 AM
 To: openst...@lists.openstack.org; openstack-dev@lists.openstack.org
 Subject: [openstack-dev] Does openstack have a notification system that
 will let us know when a server changes state ?

 Hi all,

 I am using the openstack python api. After I boot an instance, I will keep
 polling the instance status to check if its status changes from BUILD to
 ACTIVE.

 My question is:

 Does openstack have a notification system that will let us know when a VM
 changes state (e.g. goes into the ACTIVE state)? Then we won't have to
 keep polling it when we need to know of a change in the machine state.

 Thanks

 xin

 ___
 Mailing list:
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
 Post to : openst...@lists.openstack.org
 Unsubscribe :
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][cinder][oslo][scheduler] How to leverage oslo schduler/filters for nova and cinder

2013-11-10 Thread Jay Lau
I noticed that there is already a bp in oslo tracking what I want to do:
https://blueprints.launchpad.net/oslo/+spec/oslo-scheduler

Thanks,

Jay



2013/11/9 Jay Lau jay.lau@gmail.com

 Greetings,

 Now in oslo we already have some scheduler filter/weight logic, and cinder
 is using the oslo scheduler filter/weight logic; it seems we want both nova
 and cinder to use this logic in the future.

 I found the following problems:
 1) In cinder, some filter/weight logic resides in
 cinder/openstack/common/scheduler and some in cinder/scheduler. This is not
 consistent and will also confuse cinder hackers: where shall I put a new
 scheduler filter/weight?
 2) Nova is not using the filter/weight logic from oslo, and is also not
 using entry points to handle all filters/weights.
 3) There are not enough filters in oslo; we may need to add more there,
 such as a same-host filter, different-host filter, retry filter, etc.

 So my proposal is as follows:
 1) Add more filters to oslo, such as a same-host filter, different-host
 filter, retry filter, etc.
 2) Move all filter/weight logic in cinder from cinder/scheduler to
 cinder/openstack/common/scheduler.
 3) Enable nova to use the filter/weight logic from oslo (move all filter
 logic to nova/openstack/common/scheduler), and also use entry points to
 handle all filter/weight logic.

 Comments?
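
 For illustration, a filter living in the shared oslo code could look
 roughly like this (a sketch under my own assumptions; the class name and
 hint handling are mine, not existing code):

     from openstack.common.scheduler import filters

     class SameHostFilter(filters.BaseHostFilter):
         """Pass only the host named in the 'same_host' scheduler hint."""

         def host_passes(self, host_state, filter_properties):
             hints = filter_properties.get('scheduler_hints') or {}
             wanted = hints.get('same_host')
             if not wanted:
                 return True  # no hint given, so the filter is a no-op
             return host_state.host == wanted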

 Thanks,

 Jay

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Do we have some guidelines for mock, stub, mox when writing unit test?

2013-11-11 Thread Jay Lau
Got it. Thanks.

Jay


2013/11/11 Davanum Srinivas dava...@gmail.com

 Feedback from some of the Nova sessions were,

  If you are writing new tests, try to use mock.
  Writing new tests to cover more code (esp. drivers) is preferable
  to any effort that just converts from mox to mock.

 -- dims

 On Sun, Nov 10, 2013 at 11:25 PM, Noorul Islam K M noo...@noorul.com
 wrote:
  Jay Lau jay.lau@gmail.com writes:
 
  Hi,
 
  I noticed that we are now using mock, mox and stubs for unit tests. Just
  curious, do we have any guidelines for this? Under which conditions shall
  we use mock, mox or stubs?
 
 
  There is already a blueprint [1] in Nova project to replace Mox with
 mock.
 
  Also it has a link to ML thread [2].
 
  Regards,
  Noorul
 
  [1] https://blueprints.launchpad.net/nova/+spec/mox-to-mock-conversion
  [2]
 http://lists.openstack.org/pipermail/openstack-dev/2013-July/012484.html
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 --
 Davanum Srinivas :: http://davanum.wordpress.com

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [heat][hadoop][template] Does anyone has a hadoop template

2013-11-28 Thread Jay Lau
Hi,

I'm now trying to deploy a Hadoop cluster with Heat; just wondering if
someone has a Heat template that can help me do the work.

Thanks,

Jay
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [OpenStack][DevStack] Failed to install OpenStack with DevStack

2014-07-02 Thread Jay Lau
Hi,

Has anyone encountered this error when installing devstack? How did you
resolve the issue?

+ [[ 1 -ne 0 ]]
+ echo 'Error on exit'
Error on exit
+ ./tools/worlddump.py -d
usage: worlddump.py [-h] [-d DIR]
worlddump.py: error: argument -d/--dir: expected one argument
317.292u 180.092s 14:40.93 56.4%0+0k 195042+2987608io 1003pf+0w

BTW: I was using ubuntu 12.04

 gy...@mesos014.eng.platformlab.ibm.com-84: cat  /etc/*release
DISTRIB_ID=Ubuntu
DISTRIB_RELEASE=12.04
DISTRIB_CODENAME=precise
DISTRIB_DESCRIPTION=Ubuntu 12.04.1 LTS

-- 
Thanks,

Jay
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack][DevStack] Failed to install OpenStack with DevStack

2014-07-02 Thread Jay Lau
Thanks Ken'ichi, it's working for me ;-)

Eli, perhaps you can try again with Ken'ichi's solution.
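
For reference, the failure appears to happen because stack.sh's exit trap
runs "./tools/worlddump.py -d $LOGDIR", and LOGDIR ends up empty when no
LOGFILE is set, hence the "expected one argument" error. A minimal localrc
with the workaround in place might look like this (the passwords are just
example values):

    LOGFILE=/opt/stack/logs/stack.sh.log
    ADMIN_PASSWORD=secret
    DATABASE_PASSWORD=$ADMIN_PASSWORD
    RABBIT_PASSWORD=$ADMIN_PASSWORD
    SERVICE_PASSWORD=$ADMIN_PASSWORD
    SERVICE_TOKEN=a-service-token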


2014-07-02 14:35 GMT+08:00 Ken'ichi Ohmichi ken1ohmi...@gmail.com:

 Hi Jay,

 I faced the same problem and can get past it by adding the following line
 into localrc:

 LOGFILE=/opt/stack/logs/stack.sh.log

 Thanks
 Ken'ichi Ohmichi

 ---
 2014-07-02 14:58 GMT+09:00 Jay Lau jay.lau@gmail.com:
  Hi,
 
  Does any one encounter this error when install devstack? How did you
 resolve
  this issue?
 
  + [[ 1 -ne 0 ]]
  + echo 'Error on exit'
  Error on exit
  + ./tools/worlddump.py -d
  usage: worlddump.py [-h] [-d DIR]
  worlddump.py: error: argument -d/--dir: expected one argument
  317.292u 180.092s 14:40.93 56.4%0+0k 195042+2987608io 1003pf+0w
 
  BTW: I was using ubuntu 12.04
 
   gy...@mesos014.eng.platformlab.ibm.com-84: cat  /etc/*release
  DISTRIB_ID=Ubuntu
  DISTRIB_RELEASE=12.04
  DISTRIB_CODENAME=precise
  DISTRIB_DESCRIPTION=Ubuntu 12.04.1 LTS
 
  --
  Thanks,
 
  Jay
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Thanks,

Jay
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [OpenStack][Nova][Scheduler] Prompt select_destination as a REST API

2014-07-21 Thread Jay Lau
Now in OpenStack Nova, select_destination is used when
creating/rebuilding/migrating/evacuating a VM to select the target host for
those operations.

There is a requirement from some customers to get the list of possible hosts
when they create/rebuild/migrate/evacuate a VM, so as to create a resource
plan for those operations. But currently select_destination is not a REST
API; is it possible that we prompt this API to be a REST API?
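
For context, today this lives only on the scheduler's internal RPC
interface; roughly (a sketch from memory, so treat the exact names as
approximate):

    # internal RPC, not reachable by end users:
    #   dests = scheduler_rpcapi.SchedulerAPI().select_destinations(
    #       ctxt, request_spec, filter_properties)
    # returns something like:
    #   [{'host': 'compute1', 'nodename': 'node1', 'limits': {...}}]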

-- 
Thanks,

Jay
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack][Nova][Scheduler] Promote select_destination as a REST API

2014-07-21 Thread Jay Lau
Sorry, correct one typo. I mean Promote select_destination as a REST API


2014-07-21 23:49 GMT+08:00 Jay Lau jay.lau@gmail.com:

 Now in OpenStack Nova, select_destination is used by
 create/rebuild/migrate/evacuate VM when selecting target host for those
 operations.

 There is one requirement that some customers want to get the possible host
 list when create/rebuild/migrate/evacuate VM so as to create a resource
 plan for those operations, but currently select_destination is not a REST
 API, is it possible that we promote this API to be a REST API?

 --
 Thanks,

 Jay




-- 
Thanks,

Jay
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack][Nova][Scheduler] Promote select_destination as a REST API

2014-07-21 Thread Jay Lau
Thanks Chris and Sylvain.

@Chris, yes, my case is to do a select_destination call, and then call
create/rebuild/migrate/evacuate while specifying the selected destination.

@Sylvain, I was also thinking of Gantt, but as you said, Gantt might only be
available in K or L, which might be a bit late; that's why I said I want to
first do it in nova and then migrate it to Gantt. OK, I agree with you:
considering the spec freeze is in effect now, I will revisit this in K or L
and find a workaround for now. ;-)

Thanks.


2014-07-22 1:13 GMT+08:00 Sylvain Bauza sba...@redhat.com:

  On 21/07/2014 17:52, Jay Lau wrote:

 Sorry, correct one typo. I mean Promote select_destination as a REST API



 -1 to it. During last Summit, we agreed on externalizing current Scheduler
 code into a separate project called Gantt. For that, we agreed on first
 doing necessary changes within the Scheduler before recreating a new
 repository.

 By providing select_destinations as a new API endpoint, it would create a
 disruptive change where the Scheduler would have a new entrypoint.

 As this change would need a spec anyway and as there is a Spec Freeze now
 for Juno, I propose to delay this proposal until Gantt is created and
 propose a REST API for Gantt instead (in Kilo or L)

 -Sylvain


 2014-07-21 23:49 GMT+08:00 Jay Lau jay.lau@gmail.com:

  Now in OpenStack Nova, select_destination is used by
 create/rebuild/migrate/evacuate VM when selecting target host for those
 operations.

  There is one requirement that some customers want to get the possible
 host list when create/rebuild/migrate/evacuate VM so as to create a
 resource plan for those operations, but currently select_destination is not
 a REST API, is it possible that we promote this API to be a REST API?

 --
  Thanks,

  Jay




 --
  Thanks,

  Jay


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev





-- 
Thanks,

Jay
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack][Nova][Scheduler] Promote select_destination as a REST API

2014-07-21 Thread Jay Lau
Hi Jay,

There are indeed some customers in China who want this feature, because
before they do some operations they want to check the action plan, such as
where the VM will be migrated or created; they want an interactive mode for
some operations to make sure there are no errors.

Thanks.


2014-07-22 10:23 GMT+08:00 Jay Pipes jaypi...@gmail.com:

 On 07/21/2014 07:45 PM, Jay Lau wrote:

 There is one requirement that some customers want to get the possible
 host list when create/rebuild/migrate/evacuate VM so as to create a
 resource plan for those operations, but currently select_destination is
 not a REST API, is it possible that we promote this API to be a REST API?


 Which customers want to get the possible host list?

 /me imagines someone asking Amazon for a REST API that returned all the
 possible servers that might be picked for placement... and what answer
 Amazon might give to the request.

 If by customer, you are referring to something like IBM Smart Cloud
 Orchestrator, then I don't really see the point of supporting something
 like this. Such a customer would only need to create a resource plan for
 those operations if it was wholly supplanting large pieces of OpenStack
 infrastructure, including parts of Nova and much of Heat.

 Best,
 -jay


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Thanks,

Jay
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack][Nova][Scheduler] Promote select_destination as a REST API

2014-07-23 Thread Jay Lau
Thanks Alex and Jay Pipes.

@Alex, I want a common interface for all VM operations to get the target
host list; it seems that only adding a new API 'confirm_before_migration'
is not enough to handle this? ;-)

@Jay Pipes, I will try to see if we can expose this in K or L via Gantt.

Thanks.


2014-07-23 17:14 GMT+08:00 Alex Xu x...@linux.vnet.ibm.com:

 Maybe we can implement this goal another way, by adding a new API
 'confirm_before_migration' that's similar to 'confirm_resize'. This can
 also resolve Chris Friesen's concern.


  On 2014-07-23 00:13, Jay Pipes wrote:

 On 07/21/2014 11:16 PM, Jay Lau wrote:

 Hi Jay,

 There are indeed some China customers want this feature because before
 they do some operations, they want to check the action plan, such as
 where the VM will be migrated or created, they want to use some
 interactive mode do some operations to make sure no errors.


 This isn't something that normal tenants should have access to, IMO. The
 scheduler is not like a database optimizer that should give you a query
 plan for a SQL statement. The information the scheduler is acting on
 (compute node usage records, aggregate records, deployment configuration,
 etc) are absolutely NOT something that should be exposed to end-users.

 I would certainly support a specification that intended to add detailed
 log message output from the scheduler that recorded how it made its
 decisions, so that an operator could evaluate the data and decision, but
 I'm not in favour of exposing this information via a tenant-facing API.

 Best,
 -jay

  2014-07-22 10:23 GMT+08:00 Jay Pipes jaypi...@gmail.com:

 On 07/21/2014 07:45 PM, Jay Lau wrote:

 There is one requirement that some customers want to get the
 possible
 host list when create/rebuild/migrate/evacuate VM so as to
 create a
 resource plan for those operations, but currently
 select_destination is
 not a REST API, is it possible that we promote this API to be a
 REST API?


 Which customers want to get the possible host list?

 /me imagines someone asking Amazon for a REST API that returned all
 the possible servers that might be picked for placement... and what
 answer Amazon might give to the request.

 If by customer, you are referring to something like IBM Smart
 Cloud Orchestrator, then I don't really see the point of supporting
 something like this. Such a customer would only need to create a
 resource plan for those operations if it was wholly supplanting
 large pieces of OpenStack infrastructure, including parts of Nova
 and much of Heat.

 Best,
 -jay


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




 --
 Thanks,

 Jay


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev





 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Thanks,

Jay
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [InstanceGroup] Why instance group API extension do not support setting metadata

2014-07-24 Thread Jay Lau
Hi,

I see that the instance_group object already supports instance group
metadata, so why do we filter out metadata in the instance group API
extension? Can we enable this?

-- 
Thanks,

Jay
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] objects notifications

2014-07-29 Thread Jay Lau
It's a good idea to have a generic way to handle object notifications.
Considering that different objects might have different payloads and
different logic for handling them, we may need a clear design for this.
It seems a bp is needed. Thanks.


2014-07-30 2:49 GMT+08:00 Mike Spreitzer mspre...@us.ibm.com:

 Gary Kotton gkot...@vmware.com wrote on 07/29/2014 12:43:08 PM:

  Hi,
  When reviewing https://review.openstack.org/#/c/107954/ it occurred
  to me that maybe we should consider having some kind of generic
  object wrapper that could do notifications for objects. Any thoughts on
 this?

 I am not sure what that would look like, but I agree that we have a
 problem with too many things not offering notifications.  If there were
 some generic way to solve that problem, it would indeed be great.

 Thanks,
 Mike


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Thanks,

Jay
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] objects notifications

2014-07-30 Thread Jay Lau
So we need to create a decorator for create(), save(), destroy(), etc., as
follows?

  NOTIFICATION_FIELDS = ['host', 'metadata', ...]

  @notify_on_save(NOTIFICATION_FIELDS)
  @base.remotable
  def save(self, context):
      ...

  @notify_on_create(NOTIFICATION_FIELDS)
  @base.remotable
  def create(self, context):
      ...

Or can we just make the decorator as generic as possible, as follows:

  @notify(NOTIFICATION_FIELDS)
  @base.remotable
  def save(self, context):
      ...

  @notify(NOTIFICATION_FIELDS)
  @base.remotable
  def create(self, context):
      ...

In the latter case, the single notify() decorator can handle all cases,
including create, delete, update, etc.

Comments?
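
For illustration, a generic notify() decorator might be shaped roughly like
this (a sketch only; 'emit_notification' is a placeholder for whatever
notifier call we settle on, and the rest leans on the existing
NovaObject.obj_what_changed() bookkeeping):

  import functools

  def notify(fields):
      def decorator(fn):
          @functools.wraps(fn)
          def wrapper(self, context, *args, **kwargs):
              # capture the dirty fields before the wrapped call resets them
              changed = set(self.obj_what_changed())
              result = fn(self, context, *args, **kwargs)
              touched = changed & set(fields)
              if touched:
                  payload = dict((f, getattr(self, f)) for f in touched)
                  # e.g. event type 'instance.save' with only watched fields
                  emit_notification(context,
                                    '%s.%s' % (self.obj_name(), fn.__name__),
                                    payload)
              return result
          return wrapper
      return decorator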




2014-07-30 12:26 GMT+08:00 Dan Smith d...@danplanet.com:

  When reviewing https://review.openstack.org/#/c/107954/ it occurred to
  me that maybe we should consider having some kind of generic object
  wrapper that could do notifications for objects. Any thoughts on this?

 I think it might be good to do this in a repeatable, but perhaps not
 totally automatic way. I can see that any time instance gets changed in
 certain ways, that we'd want a notification about it. However, there are
 probably some cases that don't fit that. For example,
 instance.system_metadata is mostly private to nova I think, so I'm not
 sure we'd want to emit a notification for that. Plus, we'd probably end
 up with some serious duplication if we just do it implicitly.

 What if we provided a way to declare the fields of an object that we
 want to trigger a notification? Something like:

   NOTIFICATION_FIELDS = ['host', 'metadata', ...]

   @notify_on_save(NOTIFICATION_FIELDS)
   @base.remotable
   def save(self, context):
       ...

 --Dan


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Thanks,

Jay
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [OpenStack][Summit] Please vote if you are interested

2014-08-05 Thread Jay Lau
Hi,

We submitted three simple but very interesting topics for the Paris Summit;
please check them out and vote if you are interested.

1) Holistic Resource Scheduling:
https://www.openstack.org/vote-paris/Presentation/schedule-multiple-tiers-enterprise-application-in-openstack-environment-prs-a-holistic-scheduler-for-both-application-orchestrator-and-infrastructure

2) China OpenStack Meetup Summary:
https://www.openstack.org/vote-paris/Presentation/organizing-openstack-meet-ups-in-china

3) How does one China Customer use OpenStack:
https://www.openstack.org/vote-paris/Presentation/an-application-driven-approach-to-openstack-another-way-to-engage-enterprises?sthash.sNSBxEVS.mjjo

-- 
Thanks,

Jay
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Enabling ServerGroup filters by default (was RE: [nova] Server Groups are not an optional element, bug or feature ?)

2014-04-08 Thread Jay Lau
2014-04-08 20:08 GMT+08:00 Russell Bryant rbry...@redhat.com:

 On 04/08/2014 06:16 AM, Day, Phil wrote:
  https://bugs.launchpad.net/nova/+bug/1303983
 
  --
  Russell Bryant
 
  Wow - was there really a need to get that change merged within 12 hours
 and before others had a chance to review and comment on it ?

 It was targeted against RC2 which we're trying to get out ASAP.  The
 change is harmless.

  I see someone has already queried (post the merge) if there isn't a
 performance impact.

 The commit message indicates that when the API is not used, the
 scheduler filters are a no-op. There is no noticeable performance impact.

Thanks Russell, I asked the performance question in the gerrit review. Just
checked the logic again and did not find any potential performance issues.
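
For reference, the no-op shape is roughly the following (paraphrased from
memory, not the exact Nova filter code):

  class GroupAntiAffinityFilter(filters.BaseHostFilter):
      def host_passes(self, host_state, filter_properties):
          group_hosts = filter_properties.get('group_hosts') or []
          if not group_hosts:
              # no server group involved: constant-time pass-through
              return True
          return host_state.host not in group_hosts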


  I've raised this point before - but apart from non-urgent security fixes
 shouldn't there be a minimum review period to make sure that all relevant
 feedback can be given ?

 Separate topic, but no, I do not think there should be any rules on
 this.  I think in the majority of cases, people do the right thing.

 In this case, the patch was incredibly trivial and has no performance
 impact, so I don't see anything wrong.

 --
 Russell Bryant

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Thanks,

Jay
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack-Dev][Nova][VMWare] Enable live migration with one nova compute

2014-04-09 Thread Jay Lau
Hi Kevin,

Thanks for the contribution.

Shawn from VMware already filed a bp to export those resources,
https://blueprints.launchpad.net/nova/+spec/vmware-auto-inventory , but
this bp might need some redesign, as we need to decide how we will handle
configuration and convention when it comes to vSphere inventory.

Yes, I think that we need to support a host:node mode for live migration.

Divakar gave some ideas about a Parent-Child compute mode; I'm just
wondering what the difference is between that mode and the bare-metal
host:node mode.



2014-04-09 14:07 GMT+08:00 Chen CH Ji jiche...@cn.ibm.com:

 We used to have one compute service corresponding to multiple hypervisors
 (like the host and nodes concept). Our major issue on our platform is that
 we can't run the nova-compute service on the hypervisor, and we need to
 find another place to run nova-compute in order to talk to the hypervisor
 management API through a REST API. This means we have to run multiple
 compute services outside of our hypervisors, and it was hard for us to
 control the compute services at that time. But we had no choice, since in
 nova a migration can only target another host instead of a node, so we
 implemented accordingly. If we can support host + node, then it might be
 helpful for hypervisors with a different architecture.

 The point is whether we are able to expose the internals (say, not only
 the host concept but also the node concept) to the outside. I guess live
 migration is an admin-only feature; can we expose the node concept to the
 admin and let the admin decide?

 Best Regards!

 Kevin (Chen) Ji 纪 晨

 Engineer, zVM Development, CSTL
 Notes: Chen CH Ji/China/IBM@IBMCN   Internet: jiche...@cn.ibm.com
 Phone: +86-10-82454158
 Address: 3/F Ring Building, ZhongGuanCun Software Park, Haidian District,
 Beijing 100193, PRC


 From: Jay Lau jay.lau@gmail.com
 To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org,
 Date: 04/06/2014 07:02 PM

 Subject: Re: [openstack-dev] [OpenStack-Dev][Nova][VMWare] Enable live
 migration with one nova compute
 --



 Hi Divakar,

 Can I say that bare metal provisioning is now using a kind of Parent-Child
 compute mode? I was also thinking that we can use host:node to identify a
 kind of Parent-Child or hierarchical compute. So can you please explain the
 difference between your Parent-Child compute node concept and bare metal
 provisioning?

 Thanks!


 2014-04-06 14:59 GMT+08:00 Nandavar, Divakar Padiyar 
 divakar.padiyar-nanda...@hp.com:

 Well, it seems to me that the problem is the above blueprint and
the code it introduced. This is an anti-feature IMO, and probably the best
solution would be to remove the above code and go back to having a single
 nova-compute managing a single vCenter cluster, not multiple ones.

Problem is not introduced by managing multiple clusters from single
nova-compute proxy node.  Internally this proxy driver is still presenting
the compute-node for each of the cluster its managing.What we need to
think about is applicability of the live migration use case when a
cluster is modelled as a compute.   Since the cluster is modelled as a
compute, it is assumed that a typical use case of live-move is taken care
by the underlying cluster itself.   With this there are other use
cases which are no-op today like host maintenance mode, live move, setting
instance affinity etc., In order to resolve this I was thinking of
A way to expose operations on individual ESX Hosts like Putting host
in maintenance mode,  live move, instance affinity etc., by introducing
Parent - Child compute node concept.   Scheduling can be restricted to
Parent compute node and Child compute node can be used for providing more
drill down on compute and also enable additional compute operations.
 Any thoughts on this?

Thanks,
Divakar


-Original Message-
From: Jay Pipes [mailto:jaypi...@gmail.com]
Sent: Sunday, April 06, 2014 2:02 AM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [OpenStack-Dev][Nova][VMWare] Enable live
migration with one nova compute
Importance: High

On Fri, 2014-04-04 at 13:30 +0800, Jay Lau wrote:



 2014-04-04 12:46 GMT+08:00 Jay Pipes jaypi...@gmail.com:
 On Fri, 2014-04-04 at 11:08 +0800, Jay Lau wrote:
  Thanks Jay and Chris for the comments!
 
  @Jay Pipes, I think that we still need to enable one nova
 compute
  live migration as one nova

Re: [openstack-dev] [OpenStack-Dev][Nova][VMWare] Enable live migration with one nova compute

2014-04-09 Thread Jay Lau
2014-04-09 19:04 GMT+08:00 Matthew Booth mbo...@redhat.com:

 On 09/04/14 07:07, Chen CH Ji wrote:
  we used to have one compute service corresponding to multiple
  hypervisors (like host and nodes concept )
  our major issue on our platform is we can't run nova-compute service on
  the hypervisor and we need to find another place to run the nova-compute
  in order to talk to
  hypervisor management API through REST API

 It may not be directly relevant to this discussion, but I'm interested
 to know what constraint prevents you running nova-compute on the
 hypervisor.

Actually, VMware has two drivers: one is the ESXDriver and the other is the
VCDriver.

When using the ESXDriver, one nova compute can only manage one ESX host,
but the ESXDriver does not support some advanced features such as live
migration, resize, etc. And this driver has been deprecated.

We are now talking about the VCDriver, which talks to vCenter via its WSDL
API. The VCDriver is intended to support all VM operations, but we need
some enhancements to make the VCDriver work well for some advanced features
such as live migration.


 Matt

 --
 Matthew Booth, RHCA, RHCSS
 Red Hat Engineering, Virtualisation Team

 GPG ID:  D33C3490
 GPG FPR: 3733 612D 2D05 5458 8A8A 1600 3441 EA19 D33C 3490

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Thanks,

Jay
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack-Dev][Nova][VMWare] Enable live migration with one nova compute

2014-04-09 Thread Jay Lau
@Divakar, yes, the Proxy Compute model is not new, but I'm not sure if this
model can be accepted by the community to manage both VMs and PMs. Anyway,
I will try to file a bp and get more comments. Thanks.


2014-04-09 22:52 GMT+08:00 Nandavar, Divakar Padiyar 
divakar.padiyar-nanda...@hp.com:

 Hi Jay,
 Managing multiple clusters using the Compute Proxy is not new, right?
 Prior to this, the nova baremetal driver already used this model. Also,
 this Proxy Compute model gives the flexibility to deploy as many computes
 as required. For example, one can set up one proxy compute node to manage
 a set of clusters and another proxy compute to manage a separate set of
 clusters, or launch a compute node for each of the clusters.
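
 For example, with the VMware driver this is just configuration; one proxy
 compute node's nova.conf can list several clusters (illustrative values,
 and my recollection of the option names, so please double-check):

     [DEFAULT]
     compute_driver = vmwareapi.VMwareVCDriver

     [vmware]
     host_ip = 10.0.0.10
     host_username = administrator
     host_password = secret
     cluster_name = Cluster1
     cluster_name = Cluster2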

 Thanks,
 Divakar

 -Original Message-
 From: Jay Pipes [mailto:jaypi...@gmail.com]
 Sent: Wednesday, April 09, 2014 6:23 PM
 To: openstack-dev@lists.openstack.org
 Subject: Re: [openstack-dev] [OpenStack-Dev][Nova][VMWare] Enable live
 migration with one nova compute
 Importance: High

 Hi Juan, thanks for your response. Comments inline.

 On Mon, 2014-04-07 at 10:22 +0200, Juan Manuel Rey wrote:
  Hi,
 
  I'm fairly new to this list, actually this is my first email sent, and
  to OpenStack in general, but I'm not new at all to VMware so I'll try
  to give you my point of view about possible use case here.
 
  Jay you are saying that by using Nova to manage ESXi hosts we don't
  need vCenter because they basically overlap in their capabilities.

 Actually, no, this is not my main point. My main point is that Nova should
 not change its architecture to fit the needs of one particular host
 management platform (vCenter).

 Nova should, as much as possible, communicate with vCenter to perform some
 operations -- in the same way that Nova communicates with KVM or XenServer
 to perform some operations. But Nova should not be re-architected (and I
 believe that is what has gone on here with the code change to have one
 nova-compute worker talking to multiple vCenter
 clusters) just so that one particular host management scheduler/platform
 (vCenter) can have all of its features exposed to Nova.

   I agree with you to some extent, Nova may have similar capabilities
  as vCenter Server but as you know OpenStack as a full cloud solution
  adds a lot more features that vCenter lacks, like multitenancy just to
  name one.

 Sure, however, my point is that Nova shouldn't need to be re-architected
 just to adhere to one particular host management platform's concepts of an
 atomic provider of compute resources.

  Also, in any vSphere environment, managing ESXi hosts individually (that
  is, without vCenter) is completely out of the question. vCenter is the
  enabler of many vSphere features. And precisely that's is, IMHO, the
  use case of using Nova to manage vCenter to manage vSphere. Without
  vCenter we only have a bunch of hypervisors and none of the HA or DRS
  (dynamic resource balancing) capabilities that a vSphere cluster
  provides, this in my experience with vSphere users/customers is a no
  go scenario.

 Understood. Still doesn't change my opinion though :)

 Best,
 -jay

  I don't know why the decision to manage vCenter with Nova was made, but
  based on the above I understand the reasoning.
 
 
  Best,
  ---
  Juan Manuel Rey
 
  @jreypo
 
 
  On Mon, Apr 7, 2014 at 7:20 AM, Jay Pipes jaypi...@gmail.com wrote:
  On Sun, 2014-04-06 at 06:59 +, Nandavar, Divakar Padiyar
  wrote:
Well, it seems to me that the problem is the above
  blueprint and the code it introduced. This is an anti-feature
  IMO, and probably the best solution would be to remove the
  above code and go back to having a single  nova-compute
  managing a single vCenter cluster, not multiple ones.
  
    The problem is not introduced by managing multiple clusters from a
   single nova-compute proxy node.
 
 
  I strongly disagree.
 
    Internally this proxy driver is still presenting a
   compute-node for each of the clusters it is managing.
 
 
  In what way?
 
    What we need to think about is the applicability of the live
   migration use case when a cluster is modelled as a compute node.
   Since the cluster is modelled as a compute node, it is assumed
   that the typical live-move use case is taken care of by the
   underlying cluster itself.   With this, there are other
   use cases which are no-ops today, like host maintenance mode,
   live move, setting instance affinity, etc. In order to
   resolve this I was thinking of
    a way to expose operations on individual ESX hosts, like
   putting a host in maintenance mode, live move, instance
   affinity, etc., by introducing a Parent - Child compute node
   concept.   Scheduling can be restricted to the Parent compute
   node and the Child compute node can be used 

Re: [openstack-dev] [nova] Dynamic scheduling

2014-04-09 Thread Jay Lau
@Oleg, so far I'm not sure about the target of Gantt: is it for initial
placement policy, runtime policy, or both? Can you help clarify?

@Henrique, I'm not sure if you know of IBM PRS (Platform Resource
Scheduler) [1]; we have finished the dynamic scheduler in our Icehouse
version (PRS 2.2), and it has exactly the same feature as you described.
We are planning a live demo of this feature at the Atlanta Summit. I'm
also writing a document on runtime policy which will cover more runtime
policies for OpenStack, but it is not finished yet (my apologies for the
slow progress). The related blueprint is [2], and you can also find some
discussion at [3]

[1]
http://www-01.ibm.com/common/ssi/cgi-bin/ssialias?infotype=AN&subtype=CA&htmlfid=897/ENUS213-590&appname=USN
[2]
https://blueprints.launchpad.net/nova/+spec/resource-optimization-service
[3] http://markmail.org/~jaylau/OpenStack-DRS

Thanks.


2014-04-09 23:21 GMT+08:00 Oleg Gelbukh ogelb...@mirantis.com:

 Henrique,

 You should check out the Gantt project [1]; it could be exactly the place
 to implement such features. It is a generic cross-project
 Scheduler-as-a-Service recently forked from Nova.

 [1] https://github.com/openstack/gantt

 --
 Best regards,
 Oleg Gelbukh
 Mirantis Labs


 On Wed, Apr 9, 2014 at 6:41 PM, Henrique Truta 
 henriquecostatr...@gmail.com wrote:

 Hello, everyone!

 I am currently a graduate student and a member of a group of contributors
 to OpenStack. We believe that a dynamic scheduler could improve the
 efficiency of an OpenStack cloud, either by rebalancing nodes to maximize
 performance or by minimizing the number of active hosts in order to reduce
 energy costs. Therefore, we would like to propose a dynamic scheduling
 mechanism for Nova. The main idea is to use Ceilometer information (e.g.
 RAM, CPU, disk usage) through the ceilometer-client and dynamically decide
 whether an instance should be live migrated.

 This might be done as a Nova periodic task, executed at a configurable
 interval, or as a new independent project. In both cases, the current
 Nova scheduler will not be affected, since this new scheduler will be
 pluggable. We have done a search and found no such initiative in the
 OpenStack BPs. Outside the community, we found only a recent IBM
 announcement of a similar feature in one of its cloud products.
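
 As a rough illustration, one way such a check could be wired in as a Nova
 periodic task (using the Icehouse-era oslo-incubator decorator; the class
 name and spacing below are hypothetical):

     from nova.openstack.common import periodic_task

     class DynamicSchedulerManager(periodic_task.PeriodicTasks):
         @periodic_task.periodic_task(spacing=600)  # every 10 minutes
         def _rebalance(self, context):
             # Collect per-instance meters and decide on live
             # migrations; see the flow sketched below.
             pass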

 A possible flow is: in the new scheduler, we periodically call Nova to
 get the instance list from a specific host and, for each instance, we
 call the ceilometer-client (e.g. $ ceilometer statistics -m cpu_util -q
 resource=$INSTANCE_ID); then, according to specific parameters configured
 by the user, we analyze the meters and perform the appropriate
 migrations.
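
 A minimal sketch of that loop (assuming python-novaclient and
 python-ceilometerclient; CPU_THRESHOLD and pick_target_host() are
 hypothetical placeholders, and error handling is omitted):

     from ceilometerclient import client as ceilo_client
     from novaclient.v1_1 import client as nova_client

     CPU_THRESHOLD = 80.0  # hypothetical cpu_util limit, in percent

     def rebalance_host(host, user, password, tenant, auth_url):
         nova = nova_client.Client(user, password, tenant, auth_url)
         ceilo = ceilo_client.get_client(2, os_username=user,
                                         os_password=password,
                                         os_tenant_name=tenant,
                                         os_auth_url=auth_url)
         # Python equivalent of:
         #   ceilometer statistics -m cpu_util -q resource=$INSTANCE_ID
         for server in nova.servers.list(search_opts={'host': host,
                                                      'all_tenants': 1}):
             query = [{'field': 'resource_id', 'op': 'eq',
                       'value': server.id}]
             stats = ceilo.statistics.list(meter_name='cpu_util', q=query)
             if stats and stats[-1].avg > CPU_THRESHOLD:
                 # pick_target_host() stands in for the user-configured
                 # placement policy
                 server.live_migrate(host=pick_target_host(nova, server))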

 Do you have any comments or suggestions?

 --
 Ítalo Henrique Costa Truta



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Thanks,

Jay
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Enable live migration with one nova compute

2014-04-09 Thread Jay Lau
@Divakar, exactly, we want to do ESX-host-level live migrations with
vCenter (VCDriver) by leveraging the nova scheduler. Thanks.


2014-04-09 23:36 GMT+08:00 Nandavar, Divakar Padiyar 
divakar.padiyar-nanda...@hp.com:

 Steve,
 The problem with live-migrate support would still exist even if we decide
 to manage only one cluster from a compute node, unless one is OK with
 live-migrate functionality only between clusters.  The main debate
 started with supporting live migration between the ESX hosts in the same
 cluster.

 Thanks,
 Divakar

 -Original Message-
 From: Steve Gordon [mailto:sgor...@redhat.com]
 Sent: Wednesday, April 09, 2014 8:38 PM
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [OpenStack-Dev][Nova][VMWare] Enable live
 migration with one nova compute
 Importance: High

 - Original Message -
  I'm not writing off vCenter or its capabilities. I am arguing that the
  bar for modifying a fundamental design decision in Nova -- that of
  being horizontally scalable by having a single nova-compute worker
  responsible for managing a single provider of compute resources -- was
  WAY too low, and that this decision should be revisited in the future
  (and possibly as part of the vmware driver refactoring efforts
  currently underway by the good folks at RH and VMWare).

 +1, this is my main concern about having more than one ESX cluster under a
 single nova-compute agent as well. Currently it works, but it doesn't seem
 particularly advisable, as on face value such an architecture seems to
 break a number of the Nova design guidelines around high availability and
 fault tolerance. To me it seems like such an architecture effectively
 elevates nova-compute into being part of the control plane, where it needs
 to have high availability (when discussing this on IRC yesterday it seemed
 like this *may* be possible today, but more testing is required to shake
 out any bugs).

 Now, it may well be that the right approach *is* to make some changes to
 these expectations about Nova, but I think it's disingenuous to suggest
 that what is being proposed here isn't a significant re-architecting to
 resolve issues resulting from earlier hacks that allowed this
 functionality to work in the first place. Should be an interesting summit
 session.

 -Steve

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Thanks,

Jay
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Dynamic scheduling

2014-04-10 Thread Jay Lau
2014-04-10 0:57 GMT+08:00 Susanne Balle sleipnir...@gmail.com:

 Ditto. I am interested in contributing as well.

  Does Gantt work with Devstack? I am assuming the link will give me
  directions on how to test it and contribute to the project.

You can refer to
http://lists.openstack.org/pipermail/openstack-dev/2014-February/026330.html
for how to enable Gantt with devstack.


 Susanne


 On Wed, Apr 9, 2014 at 12:44 PM, Henrique Truta 
 henriquecostatr...@gmail.com wrote:

 @Oleg, @Sylvain, @Leandro, thanks. I'll check the Gantt project and the
 blueprint.


 2014-04-09 12:59 GMT-03:00 Sylvain Bauza sylvain.ba...@gmail.com:




 2014-04-09 17:47 GMT+02:00 Jay Lau jay.lau@gmail.com:

 @Oleg, so far I'm not sure about the target of Gantt: is it for initial
 placement policy, runtime policy, or both? Can you help clarify?


 I don't want to talk on behalf of Oleg, but Gantt is targeted to be the
 forklift of the current Nova scheduler. So, a placement decision based on
 dynamic metrics would be worth it.
 That said, as Gantt is not targeted to be delivered until Juno at least
 (with Nova sched deprecated), I think any progress on a BP should target
 Nova with respect to the forklift efforts, so it would automatically be
 ported to Gantt once the actual fork happens.

 -Sylvain

 Jay

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




 --
 --
 Ítalo Henrique Costa Truta



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Thanks,

Jay
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Dynamic scheduling

2014-04-10 Thread Jay Lau
Hi Shane,

IBM PRS is not open source; it's an add-on product for OpenStack.

Regarding your framework, Ceilometer can only report some VM metrics; how
do you do resource optimization based on VM metrics alone?

Thanks.


2014-04-10 14:57 GMT+08:00 Wang, Shane shane.w...@intel.com:

  Ditto, I am also interested in that area. We're implementing a framework
 to monitor different metrics from Ceilometer, apply predefined policies
 from administrators, and take actions if certain conditions are met, for
 resource optimization or SLA purposes.



 Jay, is IBM PRS open source?



 Thanks.

 --

 Shane

 *From:* Susanne Balle [mailto:sleipnir...@gmail.com]
 *Sent:* Thursday, April 10, 2014 12:57 AM
 *To:* OpenStack Development Mailing List (not for usage questions)
 *Subject:* Re: [openstack-dev] [nova] Dynamic scheduling



 Ditto. I am interested in contributing as well.



  Does Gantt work with Devstack? I am assuming the link will give me
  directions on how to test it and contribute to the project.



 Susanne



 On Wed, Apr 9, 2014 at 12:44 PM, Henrique Truta 
 henriquecostatr...@gmail.com wrote:

  @Oleg, @Sylvain, @Leandro, thanks. I'll check the Gantt project and the
  blueprint.



 2014-04-09 12:59 GMT-03:00 Sylvain Bauza sylvain.ba...@gmail.com:







 2014-04-09 17:47 GMT+02:00 Jay Lau jay.lau@gmail.com:



  @Oleg, so far I'm not sure about the target of Gantt: is it for initial
  placement policy, runtime policy, or both? Can you help clarify?



 I don't want to talk on behalf of Oleg, but Gantt is targeted to be the
 forklift of the current Nova scheduler. So, a placement decision based on
 dynamic metrics would be worth it.

 That said, as Gantt is not targeted to be delivered until Juno at least
 (with Nova sched deprecated), I think any progress on a BP should target
  Nova with respect to the forklift efforts, so it would automatically be
  ported to Gantt once the actual fork happens.



 -Sylvain



  Jay


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




 --

 --
 Ítalo Henrique Costa Truta


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Thanks,

Jay
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Dynamic scheduling

2014-04-10 Thread Jay Lau
2014-04-11 3:54 GMT+08:00 Steve Gordon sgor...@redhat.com:

 - Original Message -
  Is it the same thing as the openstack-neat project?
 
  http://openstack-neat.org/
 
  I am curious why Neat was not accepted previously.
 
  -Hao

 Did anyone ever try to submit it? Looks useful though.

I submitted a cross-project session which might already cover this:
http://summit.openstack.org/cfp/details/262


 -Steve

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Thanks,

Jay
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

