Re: [openstack-dev] Question regarding openstack heat

2015-06-20 Thread Nandavar, Divakar Padiyar
You need to use https://ask.openstack.org/en/questions/ to post such queries

Thanks
Divakar

On 21 Jun 2015 12:11, Swaroop Jayanthi  wrote:
Hi All,

I am trying to understand the Heat architecture, and I have a few basic questions in this regard.

1) Who provides the templates to the cloud admin?

2) Is the template defined by the customer or the end user?

3) What is the overall flow when an end user submits a template request?


Can you please let me know your thoughts...
--
Thanks and Regards,

--Swaroop Jayanthi
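
For context on the questions above: a Heat template is a YAML document that
either a cloud operator or an end user can author and hand to the Heat API.
A minimal HOT sketch; the image and flavor names are illustrative:

    heat_template_version: 2013-05-23

    description: Minimal illustrative template

    resources:
      my_server:
        type: OS::Nova::Server
        properties:
          image: cirros-0.3.4-x86_64
          flavor: m1.tiny

The user submits this to Heat, e.g. "heat stack-create -f server.yaml
my_stack" with the CLI of that era, and Heat orchestrates the underlying
Nova calls.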



Re: [openstack-dev] [Heat] Using Job Queues for timeout ops

2014-11-13 Thread Nandavar, Divakar Padiyar
> Most stuff in OpenStack gets around this by doing synchronous calls across
> oslo.messaging, where there is an end-to-end ack. We don't want that here
> though. We'll probably have to make do with having ways to recover after a
> failure (kick off another update with the same data is always an option). The
> hard part is that if something dies we don't really want to wait until the
> stack timeout to start recovering.

We should be able to address this in convergence without having to wait for 
the stack timeout.  This scenario would be similar to initiating a stack 
update while another large stack update is still in progress.  We are looking 
into addressing this scenario.

Thanks,
Divakar

-Original Message-
From: Zane Bitter [mailto:zbit...@redhat.com] 
Sent: Thursday, November 13, 2014 11:26 PM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Heat] Using Job Queues for timeout ops

On 13/11/14 09:58, Clint Byrum wrote:
> Excerpts from Zane Bitter's message of 2014-11-13 05:54:03 -0800:
>> On 13/11/14 03:29, Murugan, Visnusaran wrote:
>>> Hi all,
>>>
>>> Convergence-POC distributes stack operations by sending resource 
>>> actions over RPC for any heat-engine to execute. Entire stack 
>>> lifecycle will be controlled by worker/observer notifications. This 
>>> distributed model has its own advantages and disadvantages.
>>>
>>> Any stack operation has a timeout and a single engine will be 
>>> responsible for it. If that engine goes down, timeout is lost along 
>>> with it. So a traditional way is for other engines to recreate 
>>> timeout from scratch. Also a missed resource action notification 
>>> will be detected only when stack operation timeout happens.
>>>
>>> To overcome this, we will need the following capability:
>>>
>>> 1.Resource timeout (can be used for retry)
>>
>> I don't believe this is strictly needed for phase 1 (essentially we 
>> don't have it now, so nothing gets worse).
>>
>
> We do have a stack timeout, and it stands to reason that we won't have 
> a single box with a timeout greenthread after this, so a strategy is 
> needed.

Right, that was 2, but I was talking specifically about the resource retry. I 
think we agree on both points.

>> For phase 2, yes, we'll want it. One thing we haven't discussed much 
>> is that if we used Zaqar for this then the observer could claim a 
>> message but not acknowledge it until it had processed it, so we could 
>> have guaranteed delivery.
>>
>
> Frankly, if oslo.messaging doesn't support reliable delivery then we 
> need to add it.

That is straight-up impossible with AMQP. Either you ack the message and risk 
losing it if the worker dies before processing is complete, or you don't ack 
the message until it's processed and you become a blocker for every other 
worker trying to pull jobs off the queue. It works fine when you have only one 
worker; otherwise not so much. This is the crux of the whole "why isn't Zaqar 
just Rabbit" debate.

Most stuff in OpenStack gets around this by doing synchronous calls across 
oslo.messaging, where there is an end-to-end ack. We don't want that here 
though. We'll probably have to make do with having ways to recover after a 
failure (kick off another update with the same data is always an option). The 
hard part is that if something dies we don't really want to wait until the 
stack timeout to start recovering.



> Zaqar should have nothing to do with this and is, IMO, a poor choice 
> at this stage, though I like the idea of using it in the future so 
> that we can make Heat more of an outside-the-cloud app.

I'm inclined to agree that it would be hard to force operators to deploy Zaqar 
in order to be able to deploy Heat, and that we should probably be cautious for 
that reason.

That said, from a purely technical point of view it's not a poor choice at all 
- it has *exactly* the semantics we want (unlike AMQP), and at least to the 
extent that the operator wants to offer Zaqar to users anyway it completely 
eliminates a whole backend that they would otherwise have to deploy. It's a 
tragedy that all of OpenStack has not been designed to build upon itself in 
this way and it causes me physical pain to know that we're about to perpetuate 
it.
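
For concreteness, the claim/ack semantics being referred to, sketched against
the Zaqar v2 REST API with the requests library (the endpoint, queue name and
handle() are illustrative, and the exact request shapes should be checked
against the Zaqar docs):

    import requests

    BASE = "http://zaqar.example.com:8888"            # illustrative endpoint
    HEADERS = {"Client-ID": "observer-1", "X-Auth-Token": "..."}

    def handle(body):
        ...  # hypothetical message handler

    # Claiming makes messages invisible to other claimants for ttl seconds
    # but does not remove them: if this worker dies, the claim expires and
    # another worker can claim the same messages.
    resp = requests.post(BASE + "/v2/queues/timeouts/claims?limit=5",
                         json={"ttl": 300, "grace": 60}, headers=HEADERS)
    messages = resp.json() if resp.status_code == 201 else []  # 204: empty

    for msg in messages:
        handle(msg["body"])
        # Deleting the claimed message is the acknowledgement, issued only
        # after processing succeeded -- the guaranteed-delivery pattern.
        requests.delete(BASE + msg["href"], headers=HEADERS)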

>>> 2.Recover from engine failure (loss of stack timeout, resource action
>>> notification)
>>>
>>> Suggestion:
>>>
>>> 1.Use task queue like celery to host timeouts for both stack and resource.
>>
>> I believe Celery is more or less a non-starter as an OpenStack 
>> dependency because it uses Kombu directly to talk to the queue, vs.
>> oslo.messaging which is an abstraction layer over Kombu, Qpid, ZeroMQ 
>> and maybe others in the future. i.e. requiring Celery means that some 
>> users would be forced to install Rabbit for the first time.
>>
>> One option would be to fork Celery and replace Kombu with 
>> oslo.messaging as its abstraction layer. Good luck getting that 
>> maintained though, since Celery _invented_ Kombu to be its abstraction
>> layer in the first place.
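
To make suggestion 1 above concrete, the Celery pattern being proposed looks
roughly like this (a sketch only -- whether Celery is acceptable as a
dependency is exactly what is being debated, and mark_stack_failed() is an
invented hook):

    from celery import Celery

    app = Celery("heat_timeouts", broker="amqp://localhost")

    def mark_stack_failed(stack_id):
        ...  # hypothetical recovery hook, not a real Heat API

    @app.task
    def stack_timeout(stack_id):
        # Executes on whichever worker picks it up, so the timeout
        # survives the death of the engine that scheduled it.
        mark_stack_failed(stack_id)

    # Arm the timeout when the stack operation starts...
    result = stack_timeout.apply_async(args=["stack-123"], countdown=3600)
    # ...and disarm it if the operation completes in time.
    result.revoke()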

Re: [openstack-dev] Enable live migration with one nova compute

2014-04-09 Thread Nandavar, Divakar Padiyar
Steve,
The problem with live-migrate support would still exist even if we decide to 
manage only one cluster from a compute node, unless one is OK with 
live-migrate functionality only between clusters.  The main debate started 
with supporting live-migrate between the ESX hosts in the same cluster.

Thanks,
Divakar

-Original Message-
From: Steve Gordon [mailto:sgor...@redhat.com] 
Sent: Wednesday, April 09, 2014 8:38 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [OpenStack-Dev][Nova][VMWare] Enable live 
migration with one nova compute

- Original Message -
> I'm not writing off vCenter or its capabilities. I am arguing that the 
> bar for modifying a fundamental design decision in Nova -- that of 
> being horizontally scalable by having a single nova-compute worker 
> responsible for managing a single provider of compute resources -- was 
> WAY too low, and that this decision should be revisited in the future 
> (and possibly as part of the vmware driver refactoring efforts 
> currently underway by the good folks at RH and VMWare).

+1, this is my main concern about having more than one ESX cluster under a 
single nova-compute agent as well. Currently it works, but on face value it 
doesn't seem particularly advisable, as such an architecture appears to break 
a number of the Nova design guidelines around high availability and fault 
tolerance. To me it seems like such an architecture effectively elevates 
nova-compute into being part of the control plane, where it needs to have high 
availability (when discussing this on IRC yesterday it seemed like this *may* 
be possible today, but more testing is required to shake out any bugs).

Now it may well be that the right approach *is* to make some changes to these 
expectations about Nova, but I think it's disingenuous to suggest that what is 
being proposed here isn't a significant re-architecting to resolve issues 
resulting from earlier hacks that allowed this functionality to work in the 
first place. Should be an interesting summit session.

-Steve



Re: [openstack-dev] [OpenStack-Dev][Nova][VMWare] Enable live migration with one nova compute

2014-04-09 Thread Nandavar, Divakar Padiyar
Hi Jay,
Managing multiple clusters using a "Compute Proxy" is not new, right?   The 
"nova baremetal" driver has already used this model.   This "Proxy Compute" 
model also gives the flexibility to deploy as many compute services as 
required: for example, one can set up one proxy compute node to manage one set 
of clusters and another proxy compute to manage a separate set of clusters, or 
launch a compute node for each cluster.

Thanks,
Divakar
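
As an illustration of the proxy model described above, a nova.conf fragment
of roughly that era (treat the exact section and option names as assumptions
to verify against the release documentation):

    [DEFAULT]
    compute_driver = vmwareapi.VMwareVCDriver

    [vmware]
    host_ip = vcenter.example.com
    host_username = administrator
    host_password = secret
    # Repeated cluster_name entries: one proxy nova-compute manages both
    # clusters, while each cluster is still reported as its own
    # compute-node.
    cluster_name = Cluster1
    cluster_name = Cluster2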

-Original Message-
From: Jay Pipes [mailto:jaypi...@gmail.com] 
Sent: Wednesday, April 09, 2014 6:23 PM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [OpenStack-Dev][Nova][VMWare] Enable live 
migration with one nova compute

Hi Juan, thanks for your response. Comments inline.

On Mon, 2014-04-07 at 10:22 +0200, Juan Manuel Rey wrote:
> Hi,
> 
> I'm fairly new to this list (actually this is my first email sent) and 
> to OpenStack in general, but I'm not new at all to VMware, so I'll try 
> to give you my point of view about a possible use case here.
> 
> Jay, you are saying that by using Nova to manage ESXi hosts we don't 
> need vCenter because they basically overlap in their capabilities.

Actually, no, this is not my main point. My main point is that Nova should not 
change its architecture to fit the needs of one particular host management 
platform (vCenter).

Nova should, as much as possible, communicate with vCenter to perform some 
operations -- in the same way that Nova communicates with KVM or XenServer to 
perform some operations. But Nova should not be re-architected (and I believe 
that is what has gone on here with the code change to have one nova-compute 
worker talking to multiple vCenter
clusters) just so that one particular host management scheduler/platform
(vCenter) can have all of its features exposed to Nova.

>  I agree with you to some extent: Nova may have similar capabilities 
> to vCenter Server, but as you know, OpenStack as a full cloud solution 
> adds a lot more features that vCenter lacks, like multitenancy just to 
> name one.

Sure, however, my point is that Nova shouldn't need to be re-architected just 
to adhere to one particular host management platform's concepts of an atomic 
provider of compute resources.

> Also, in any vSphere environment, managing ESXi hosts individually, that 
> is without vCenter, is completely out of the question. vCenter is the 
> enabler of many vSphere features. And precisely that is, IMHO, the use 
> case for using Nova to manage vCenter to manage vSphere. Without 
> vCenter we only have a bunch of hypervisors and none of the HA or DRS 
> (dynamic resource balancing) capabilities that a vSphere cluster 
> provides; this, in my experience with vSphere users/customers, is a 
> no-go scenario.

Understood. Still doesn't change my opinion though :)

Best,
-jay

> I don't know why the decision to manage vCenter with Nova was made but 
> based on the above I understand the reasoning.
> 
> 
> Best,
> ---
> Juan Manuel Rey
> 
> @jreypo
> 
> 
> On Mon, Apr 7, 2014 at 7:20 AM, Jay Pipes  wrote:
> > On Sun, 2014-04-06 at 06:59 +, Nandavar, Divakar Padiyar wrote:
> > > >> Well, it seems to me that the problem is the above blueprint and
> > > >> the code it introduced. This is an anti-feature IMO, and probably
> > > >> the best solution would be to remove the above code and go back
> > > >> to having a single nova-compute managing a single vCenter
> > > >> cluster, not multiple ones.
> > >
> > > Problem is not introduced by managing multiple clusters from a
> > > single nova-compute proxy node.
> >
> > I strongly disagree.
> >
> > > Internally this proxy driver is still presenting the "compute-node"
> > > for each of the clusters it is managing.
> >
> > In what way?
> >
> > > What we need to think about is the applicability of the live
> > > migration use case when a "cluster" is modelled as a compute node.
> > > Since the "cluster" is modelled as a compute node, it is assumed
> > > that a typical live-move use case is taken care of by the underlying
> > > "cluster" itself. With this there are other use cases which are
> > > no-ops today, like host maintenance mode, live move, setting
> > > instance affinity etc. In order to resolve this I was thinking of
> > > "a way to expose operations on individual ESX hosts, like putting a
> > > host in maintenance mode, live move, instance affinity etc., by
> > > introducing a Parent - Child compute node concept."

Re: [openstack-dev] [OpenStack-Dev][Nova][VMWare] Enable live migration with one nova compute

2014-04-06 Thread Nandavar, Divakar Padiyar
>> Well, it seems to me that the problem is the above blueprint and the code
>> it introduced. This is an anti-feature IMO, and probably the best solution
>> would be to remove the above code and go back to having a single
>> nova-compute managing a single vCenter cluster, not multiple ones.

Problem is not introduced by managing multiple clusters from a single 
nova-compute proxy node.  Internally this proxy driver is still presenting the 
"compute-node" for each of the clusters it is managing.  What we need to think 
about is the applicability of the live migration use case when a "cluster" is 
modelled as a compute node.  Since the "cluster" is modelled as a compute 
node, it is assumed that a typical live-move use case is taken care of by the 
underlying "cluster" itself.  With this there are other use cases which are 
no-ops today, like host maintenance mode, live move, setting instance affinity 
etc.  In order to resolve this I was thinking of "a way to expose operations 
on individual ESX hosts, like putting a host in maintenance mode, live move, 
instance affinity etc., by introducing a Parent - Child compute node concept.  
Scheduling can be restricted to the Parent compute node, and the Child compute 
node can be used to provide more drill-down on compute and also enable 
additional compute operations."  Any thoughts on this?

Thanks,
Divakar


-Original Message-
From: Jay Pipes [mailto:jaypi...@gmail.com] 
Sent: Sunday, April 06, 2014 2:02 AM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [OpenStack-Dev][Nova][VMWare] Enable live 
migration with one nova compute

On Fri, 2014-04-04 at 13:30 +0800, Jay Lau wrote:
> 
> 
> 
> 2014-04-04 12:46 GMT+08:00 Jay Pipes :
> On Fri, 2014-04-04 at 11:08 +0800, Jay Lau wrote:
> > Thanks Jay and Chris for the comments!
> >
> > @Jay Pipes, I think that we still need to enable "one nova compute
> > live migration" as one nova compute can manage multiple clusters and
> > VMs can be migrated between those clusters managed by one nova
> > compute.
> 
> Why, though? That is what I am asking... seems to me like this is an
> anti-feature. What benefit does the user get from moving an instance
> from one vCenter cluster to another vCenter cluster if the two
> clusters are on the same physical machine?
> @Jay Pipes, for VMware, one physical machine (ESX server) can only
> belong to one vCenter cluster, so we may have the following scenario:
> 
> DC
>  |
>  |---Cluster1
>  |      |
>  |      |---host1
>  |
>  |---Cluster2
>         |
>         |---host2
> 
> Then when using the VCDriver, I can use one nova compute to manage both
> Cluster1 and Cluster2, but this means I cannot migrate a VM from host2
> to host1 ;-(
> 
> 
> The bp was introduced by
> https://blueprints.launchpad.net/nova/+spec/multiple-clusters-managed-by-one-service

Well, it seems to me that the problem is the above blueprint and the code it 
introduced. This is an anti-feature IMO, and probably the best solution would 
be to remove the above code and go back to having a single nova-compute 
managing a single vCenter cluster, not multiple ones.

-jay





Re: [openstack-dev] [nova][cinder] non-persistent storage(after stopping VM, data will be rollback automatically), do you think we shoud introduce this feature?

2014-03-03 Thread Nandavar, Divakar Padiyar
>>> Why reboot an instance? What is wrong with deleting it and creating a new one?

You generally use non-persistent disk mode when you are testing new software 
or experimenting with settings.   If something goes wrong, you just reboot and 
you are back to a clean state, ready to start over again.   I feel it's more 
convenient to handle this with just a reboot rather than recreating the 
instance.

Thanks,
Divakar
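
For reference, the libvirt <transient/> disk element discussed in the quoted
mail below looks like this in domain XML (the path and target are
illustrative; per [1], writes are discarded when the guest stops, and at the
time of this thread the qemu driver did not support the element):

    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2'/>
      <source file='/var/lib/libvirt/images/sandbox.qcow2'/>
      <target dev='vda' bus='virtio'/>
      <transient/>
    </disk>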

-Original Message-
From: Joe Gordon [mailto:joe.gord...@gmail.com] 
Sent: Tuesday, March 04, 2014 10:41 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [nova][cinder] non-persistent storage(after 
stopping VM, data will be rollback automatically), do you think we shoud 
introduce this feature?

On Mon, Mar 3, 2014 at 8:13 PM, Zhangleiqiang  wrote:
>>
>> This sounds like ephemeral storage plus snapshots.  You build a base 
>> image, snapshot it then boot from the snapshot.
>
>
> Non-persistent storage/disk is useful for sandbox-like environments, and 
> this feature has existed in VMware ESX since version 4.1. The implementation 
> in ESX is the same as what you said: boot from a snapshot of the disk/volume, 
> but it will also *automatically* delete the transient snapshot after the 
> instance reboots or shuts down. I think the whole procedure should be 
> controlled by OpenStack rather than by the user's manual operations.

Why reboot an instance? What is wrong with deleting it and creating a new one?

>
> As far as I know, libvirt already defines a corresponding <transient/> 
> element in the domain XML for non-persistent disks ( [1] ), but it cannot 
> specify the location of the transient snapshot. Although qemu-kvm has 
> provided support for this feature via the "-snapshot" command argument, 
> which will create the transient snapshot under the /tmp directory, the qemu 
> driver of libvirt doesn't support the <transient/> element currently.
>
> I think the steps of creating and deleting the transient snapshot may be 
> better done by Nova/Cinder rather than waiting for <transient/> support to 
> be added to libvirt, as the location of the transient snapshot should be 
> specified by Nova.
>
>
> [1] http://libvirt.org/formatdomain.html#elementsDisks
> --
> zhangleiqiang
>
> Best Regards
>
>
>> -Original Message-
>> From: Joe Gordon [mailto:joe.gord...@gmail.com]
>> Sent: Tuesday, March 04, 2014 11:26 AM
>> To: OpenStack Development Mailing List (not for usage questions)
>> Cc: Luohao (brian)
>> Subject: Re: [openstack-dev] [nova][cinder] non-persistent 
>> storage(after stopping VM, data will be rollback automatically), do 
>> you think we shoud introduce this feature?
>>
>> On Mon, Mar 3, 2014 at 6:00 PM, Yuzhou (C)  wrote:
>> > Hi stackers,
>> >
>> > As far as I know, there are two types of storage used by VMs in
>> > OpenStack: ephemeral storage and persistent storage.
>> > Data on ephemeral storage ceases to exist when the instance it is
>> > associated with is terminated. Rebooting the VM or restarting the host
>> > server, however, will not destroy ephemeral data.
>> > Persistent storage means that the storage resource outlives any other
>> > resource and is always available, regardless of the state of a running
>> > instance.
>> >
>> > There is a use case that may need a new type of storage; maybe we can
>> > call it non-persistent storage.
>> > The use case is that VMs are assigned to the public ephemerally in
>> > public areas.
>> > After the VM is used, new data on the storage of the VM ceases to
>> > exist when the instance it is associated with is stopped.
>> > It means that when the VM is stopped, non-persistent storage used by
>> > the VM will be rolled back automatically.
>> >
>> > Are there any other suggestions? Or any BPs about this use case?
>> >
>>
>> This sounds like ephemeral storage plus snapshots.  You build a base 
>> image, snapshot it then boot from the snapshot.
>>
>> > Thanks!
>> >
>> > Zhou Yu