Re: [openstack-dev] [nova] nova backup not working in stable/icehouse?

2014-08-31 Thread laserjetyang
I tend to say 2) is the best option. There is plenty of open-source and
commercial backup software, for both VMs and volumes.
If we do option 1), it amounts to implementing something similar to the
VMware approach, and it would make nova really heavy.
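
For illustration, a rough Python sketch of what option 2) could look like: a
small common interface that vendor backup products plug into. All names here
(BackupDriver, register_driver, etc.) are hypothetical, not existing
OpenStack code.

import abc


class BackupDriver(object):
    """Common interface each vendor backup implementation would provide."""
    __metaclass__ = abc.ABCMeta  # 2014-era Python 2 idiom

    @abc.abstractmethod
    def backup_instance(self, instance_id):
        """Back up one instance; return an opaque backup id."""

    @abc.abstractmethod
    def restore_instance(self, backup_id):
        """Restore a backup; return the new instance id."""


_drivers = {}


def register_driver(name, driver):
    # A vendor registers its implementation; the operator selects one via
    # configuration, while users always invoke the same API.
    _drivers[name] = driver


def backup(instance_id, driver_name='default'):
    return _drivers[driver_name].backup_instance(instance_id)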


On Sun, Aug 31, 2014 at 4:04 AM, Preston L. Bannister pres...@bannister.us
wrote:

 You are thinking of written-for-cloud applications. For those, state
 should not persist with the instance.

 There are a very large number of existing applications, not written to the
 cloud model, but which could be deployed in a cloud. Those applications are
 not all going to get re-written (as the cost is often greater than the
 benefit). Those applications need some ready and efficient means of backup.

 The benefits of the cloud-application model and the cloud-deployment model
 are distinct.

 The existing nova backup (if it worked) is an inefficient snapshot. Not
 really useful at scale.

 There are two basic paths forward here.  1) Build a complete common
 backup implementation that everyone can use. Or 2) define a common API for
 invoking backup, allow vendors to supply differing implementations, and add
 to OpenStack the APIs needed by backup implementations.

 Given past history, there does not seem to be enough focus or resources to
 get (1) done.

 That makes (2) much more likely. Reasonably sure we can find the interest
 and resources for this path. :)






 On Fri, Aug 29, 2014 at 10:55 PM, laserjetyang laserjety...@gmail.com
 wrote:

 I think nova VMs are not meant for persistent usage and should be treated
 as stateless. However, there are use cases where VMs replace bare-metal
 applications, and those require the same coverage, which I think VMware
 covers pretty well.
 The nova backup is really just a snapshot, so it should be re-implemented
 to fit into a real backup solution.


 On Sat, Aug 30, 2014 at 1:14 PM, Preston L. Bannister 
 pres...@bannister.us wrote:

 The current backup APIs in OpenStack do not really make sense (and
 apparently do not work ... which perhaps says something about usage and
 usability). So in that sense, they could be removed.

 Wrote out a bit as to what is needed:

 http://bannister.us/weblog/2014/08/21/cloud-application-backup-and-openstack/

 At the same time, to do efficient backup at cloud scale, OpenStack is
 missing a few primitives needed for backup. We need to be able to quiesce
 instances, and collect changed-block lists, across hypervisors and
 filesystems. There is some relevant work in this area - for example:

 https://wiki.openstack.org/wiki/Nova/InstanceLevelSnapshots

 Switching hats - as a cloud developer, on AWS there is an excellent
 means of backup-through-snapshots, which is very quick and is charged based
 on changed blocks. (The performance and cost both reflect the use of
 changed-block tracking underneath.)
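
(To make the missing primitives concrete, here is a sketch of how quiesce
plus changed-block collection could combine into an efficient incremental
backup. quiesce(), unquiesce(), create_snapshot(), all_blocks() and
changed_blocks() are hypothetical primitives, not existing Nova or Cinder
APIs.)

def incremental_backup(instance, store, previous=None):
    quiesce(instance)                   # flush guest I/O: consistent point
    snapshot = create_snapshot(instance)
    unquiesce(instance)
    if previous is None:
        extents = all_blocks(snapshot)  # first run: full copy
    else:
        # Only blocks written since the previous snapshot cross the wire,
        # which is what makes backup cheap and fast at cloud scale.
        extents = changed_blocks(previous, snapshot)
    for offset, data in extents:
        store.write(offset, data)
    return snapshot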

 If OpenStack completely lacks any equivalent API, then the platform is
 less competitive.

 Are you thinking about backup as performed by the cloud infrastructure
 folk, or as a service used by cloud developers in deployed applications?
 The first might do behind-the-scenes backups. The second needs an API.




 On Fri, Aug 29, 2014 at 11:16 AM, Jay Pipes jaypi...@gmail.com wrote:

 On 08/29/2014 02:48 AM, Preston L. Bannister wrote:

 Looking to put a proper implementation of instance backup into
 OpenStack. Started by writing a simple set of baseline tests and
 running
 against the stable/icehouse branch. They failed!

 https://github.com/dreadedhill-work/openstack-backup-scripts

 Scripts and configuration are in the above. Simple tests.

 At first I assumed there was a configuration error in my Devstack ...
 but at this point I believe the errors are in fact in OpenStack. (Also I
 have rather more colorful things to say about what is and is not logged.)

 Try to back up bootable Cinder volumes attached to instances ... and all
 fail. Try to back up instances booted from images, and all but one fail
 (with no logged errors, so far as I can see).

 Was concerned about preserving existing behaviour (as I am currently
 hacking the Nova backup API), but ... if the existing is badly broken,
 this may not be a concern. (Makes my job a bit simpler.)

 If someone is using nova backup successfully (more than one backup at
 a time), I *would* rather like to know!

 Anyone with different experience?


 IMO, the create_backup API extension should be removed from the Compute
 API. It's completely unnecessary and backups should be the purview of
 external (to Nova) scripts or configuration management modules. This API
 extension is essentially trying to be a Cloud Cron, which is inappropriate
 for the Compute API, IMO.

 -jay





___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



Re: [openstack-dev] [Nova] nova-compute deadlock

2014-06-05 Thread laserjetyang
  Will this Python patch fix your problem? http://bugs.python.org/issue7213
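
A minimal sketch of the mitigation that patch points at: spawn external
commands with close_fds=True so the child does not inherit pipe fds opened
concurrently by other threads (for example, the tpool thread running
libguestfs). Illustrative only, not the actual Nova or eventlet code.

import subprocess


def execute(*cmd):
    proc = subprocess.Popen(cmd,
                            stdout=subprocess.PIPE,
                            stderr=subprocess.PIPE,
                            close_fds=True)  # drop inherited fds in the child
    out, err = proc.communicate()
    return out, err, proc.returncode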

On Wed, Jun 4, 2014 at 10:41 PM, Qin Zhao chaoc...@gmail.com wrote:

  Hi Zhu Zhu,

 Thank you for reading my diagram!   I need to clarify that this problem
 does not occur during data injection.  Before creating the ISO, the driver
 code will extend the disk. Libguestfs is invoked in that time frame.

 And now I think this problem may occur at any time, if the code uses tpool
 to invoke libguestfs and an external command is executed in another green
 thread simultaneously.  Please correct me if I am wrong.

 I think one simple solution for this issue is to call the libguestfs routine
 in a green thread rather than in another native thread. But that would hurt
 performance badly, so I do not think it is an acceptable solution.



  On Wed, Jun 4, 2014 at 12:00 PM, Zhu Zhu bjzzu...@gmail.com wrote:

   Hi Qin Zhao,

 Thanks for raising this issue and the analysis. According to the issue
 description and the scenario in which it happens (
 https://docs.google.com/drawings/d/1pItX9urLd6fmjws3BVovXQvRg_qMdTHS-0JhYfSkkVc/pub?w=960&h=720
 ), if multiple concurrent KVM instance spawns (with both config drive and
 data injection enabled) are triggered, the issue is very likely to happen.
 In the libvirt/driver.py _create_image method, right after making the ISO
 via cdb.make_drive, the driver attempts data injection, which launches
 libguestfs in another thread.

 It looks like there were also a couple of libguestfs hang issues on
 Launchpad, listed below. I am not sure whether libguestfs itself could have
 some mechanism to free/close the fds inherited from the parent process,
 instead of requiring an explicit teardown call. Maybe open a defect against
 libguestfs to see what they think?

  https://bugs.launchpad.net/nova/+bug/1286256
 https://bugs.launchpad.net/nova/+bug/1270304

 --
  Zhu Zhu
 Best Regards


  From: Qin Zhao chaoc...@gmail.com
 Date: 2014-05-31 01:25
  To: OpenStack Development Mailing List (not for usage questions)
 openstack-dev@lists.openstack.org
 Subject: [openstack-dev] [Nova] nova-compute deadlock
Hi all,

 When I run Icehouse code, I encountered a strange problem: the
 nova-compute service gets stuck when I boot instances. I reported this
 bug in https://bugs.launchpad.net/nova/+bug/1313477.

 After thinking about it for several days, I feel I know the root cause.
 This bug should be a deadlock caused by pipe fd leaking.  I drew a diagram
 to illustrate the problem:
 https://docs.google.com/drawings/d/1pItX9urLd6fmjws3BVovXQvRg_qMdTHS-0JhYfSkkVc/pub?w=960&h=720

 However, I have not found a good solution to prevent this deadlock.
 The problem involves the Python runtime, libguestfs, and eventlet, so the
 situation is a little complicated. Is there any expert who can help me
 look for a solution? I would appreciate your help!

 --
 Qin Zhao






 --
 Qin Zhao



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] How to use OVS in Nova networking without Neutron?

2014-04-28 Thread laserjetyang
I don't think this is supported by the OpenStack community right now; however,
I think nova-network with some modification may work that way, though such
code would have little chance of landing upstream.


On Tue, Apr 29, 2014 at 12:21 PM, ZhengLingyun konghuaru...@163.com wrote:

  Hi list,

 I want to use OVS instead of the Linux bridge in Nova networking without
 Neutron.
 How can I do that?
 I use OpenStack Icehouse, deployed by DevStack on a single host.

 Thanks.

 Dustin Zheng





___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Cinder] cinder not support query volume/snapshot with regular expression

2014-04-28 Thread laserjetyang
It looks to me like the Nova API would then be a dangerous source of DoS
attacks due to the regexps?
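
For illustration, a toy example of the catastrophic backtracking Duncan warns
about below, next to the glob-style matching he suggests as the safer
alternative. Purely illustrative, not Nova or Cinder code.

import fnmatch
import re

# Nested quantifiers force the regex engine to backtrack exponentially:
evil = re.compile(r'^(a+)+$')
payload = 'a' * 40 + 'b'
# evil.match(payload)  # one crafted query like this can pin a CPU for hours

# Glob matching does not suffer the same blow-up:
volumes = ['vol-backup-01', 'vol-scratch', 'vol-backup-02']
print(fnmatch.filter(volumes, 'vol-backup-*'))  # ['vol-backup-01', 'vol-backup-02']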


On Mon, Apr 28, 2014 at 7:04 PM, Duncan Thomas duncan.tho...@gmail.com wrote:

 Regex matching in APIs can be a dangerous source of DoS attacks - see
 http://en.wikipedia.org/wiki/ReDoS. Unless this is mitigated sensibly,
 I will continue to resist any cinder patch that adds them.

 Glob matches might be safer?

 On 26 April 2014 05:02, Zhangleiqiang (Trump) zhangleiqi...@huawei.com
 wrote:
  Hi, all:
 
  I see Nova allows searching instances by the name, ip and ip6 fields,
 which can be a normal string or a regular expression:
 
  [stack@leiqzhang-stack cinder]$ nova help list
 
  List active servers.
 
  Optional arguments:
  --ip <ip-regexp>               Search with regular expression match by
                                 IP address (Admin only).
  --ip6 <ip6-regexp>             Search with regular expression match by
                                 IPv6 address (Admin only).
  --name <name-regexp>           Search with regular expression match by name.
  --instance-name <name-regexp>  Search with regular expression match by
                                 server name (Admin only).
 
  I think this is also needed for Cinder when querying
 volumes/snapshots/backups by name. Any advice?
 
  --
  zhangleiqiang (Trump)
 
  Best Regards
 
 



 --
 Duncan Thomas


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [NOVA][VMWare][live-migration] VCDriver live migration problem

2014-03-22 Thread laserjetyang
I think we might also need to discuss the VMware driver refactoring progress
in the IRC meeting, and what our overall plan is. We keep seeing the VMware
code broken.


On Sun, Mar 23, 2014 at 7:49 AM, Jay Lau jay.lau@gmail.com wrote:

 Thanks Shawn, what you proposed is exactly what I want ;-) Cool!

 We can discuss more during the IRC meeting.

 Thanks!


 2014-03-22 20:22 GMT+08:00 Jay Lau jay.lau@gmail.com:

 Thanks Shawn, I have updated the title with VMWare.

 Yes, I know that live migration works. But the problem is that when a
 cluster admin wants to live migrate a VM instance, s/he will not know the
 target host to migrate to, as s/he cannot get the target host from nova
 compute: currently the VCDriver can only report a cluster or resource pool
 as the hypervisor host, not the ESX server.

 IMHO, the VCDriver should support live migration between clusters,
 resource pools and ESX hosts, so we may need to make at least the following
 enhancements (see the sketch below):
 1) Enable live migration even with one nova compute. My current thinking
 is to extend the target host to host:node when live migrating a VM instance.
 2) Enable the VCDriver to report all ESX servers.
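
A rough client-side sketch of what 1) might look like, assuming a
python-novaclient handle. The host:node target is the proposal above, not
something novaclient accepts today, and exact signatures vary by release.

from novaclient.v1_1 import client  # module path varies by novaclient release

nova = client.Client('admin', 'secret', 'demo',
                     'http://controller:5000/v2.0')
server = nova.servers.find(name='my-vm')

# Today the target can only be the cluster/resource-pool "hypervisor host";
# the proposal would let the admin name the ESX node explicitly:
server.live_migrate(host='cluster1:esx-host-03',
                    block_migration=False,
                    disk_over_commit=False)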

 We can discuss more during next week's IRC meeting.

 Thanks!


 2014-03-22 17:13 GMT+08:00 Shawn Hartsock harts...@acm.org:

 Hi Jay. We usually use [vmware] to tag discussion of VMware things. I
 almost didn't see this message.

 In short, there is a plan and we're currently blocked because we have
 to address several other pressing issues in the driver before we can
 address this one. Part of this is due to the fact that we can't press
 harder on blueprints or changes to the VCDriver right now.

 I actually reported this bug and we've discussed it at
 https://wiki.openstack.org/wiki/Meetings/VMwareAPI. The basic problem
 is that live migration actually works, but you can't presently
 formulate a command that activates the feature from the CLI under some
 configurations. That's because of the introduction of clusters in the
 VCDriver in Havana.

 To fix this, we have to come up with a way to target a host inside the
 cluster (as I pointed out in the bug) or we have to have some way for
 a live migration to occur between clusters and a way to validate that
 this can happen first.

 As for the priority of this bug, it's been set to Medium which puts it
 well behind many of the Critical or High tasks on our radar. As for
 fixing the bug, no new outward behaviors or API are going to be
 introduced and this was working at one point and now it's stopped. To
 call this a new feature seems a bit strange.

 So, moving forward... perhaps we need to re-evaluate the priority
 order on some of these things. I tabled Juno planning during the last
 VMwareAPI subteam meeting but I plan on starting the discussion next
 week. We have a priority order for blueprints that we set as a team
 and these are publicly recorded in our meeting logs and on the wiki.
 I'll try to do better advertising these things. You are of course
 invited... and yeah... if you're interested in what we're fixing next
 in the VCDriver that next IRC meeting is where we'll start the
 discussion.

 On Sat, Mar 22, 2014 at 1:18 AM, Jay Lau jay.lau@gmail.com wrote:
  Hi,
 
  Currently we cannot do live migration with the VCDriver in nova. Live
  migration is really an important feature, so is there any plan to fix this?
 
  I noticed that there is already a bug tracking this, but there seems to be
  no progress since last November: https://bugs.launchpad.net/nova/+bug/1192192
 
  Here I am just bringing this problem up to see if there is any plan to fix
  it. After some investigation, I think this might deserve to be a blueprint
  rather than a bug.
 
  We may need to resolve issues for the following cases:
  1) How to live migrate with only one nova compute? (one nova compute can
  manage multiple clusters, and there can be multiple hosts in one cluster)
  2) Support live migration between clusters
  3) Support live migration between resource pools
  4) Support live migration between hosts
  5) Support live migration between a cluster and a host
  6) Support live migration between a cluster and a resource pool
  7) Support live migration between a resource pool and a host
  8) There might be more cases.
 
  Please share your comments, and correct me if anything is wrong.
 
  --
  Thanks,
 
  Jay
 
 



 --
 # Shawn.Hartsock - twitter: @hartsock - plus.google.com/+ShawnHartsock





 --
 Thanks,

 Jay




 --
 Thanks,

 Jay

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

Re: [openstack-dev] [nova] a question about instance snapshot

2014-03-11 Thread laserjetyang
The live snapshot has some issues on KVM, and I think that is a problem of the
KVM hypervisor. On VMware, live snapshot is quite mature, so I think starting
with VMware live snapshot is a good way to go.


On Tue, Mar 11, 2014 at 1:37 PM, Qin Zhao chaoc...@gmail.com wrote:

 Hi Jay,
 When users move from old tools to new cloud tools, they also hope the new
 tools inherit some good and well-known capabilities. Sometimes, assuming
 users can change their habits is dangerous (e.g. removing the Windows Start
 button). Live snapshot is indeed a very useful hypervisor feature, and it
 has been widely used for several years (especially on VMware). I think it is
 not harmful to the existing Nova structure and workflow, and it will let
 more people adopt OpenStack more easily.


 On Tue, Mar 11, 2014 at 6:15 AM, Jay Pipes jaypi...@gmail.com wrote:

 On Mon, 2014-03-10 at 15:52 -0600, Chris Friesen wrote:
  On 03/10/2014 02:58 PM, Jay Pipes wrote:
   On Mon, 2014-03-10 at 16:30 -0400, Shawn Hartsock wrote:
   While I understand the general argument about pets versus cattle, the
   question is: would you be willing to poke a few holes in the strict
   cattle abstraction for the sake of pragmatism? Few shops are going
   to make the direct transition in one move. Poking a hole in the cattle
   abstraction, allowing them to keep a pet VM, might be very valuable to
   some shops making a migration.
  
   Poking holes in cattle aside, my experience with shops that prefer the
   pets approach is that they are either:
 
   * Not managing their servers themselves at all and just relying on some
   IT operations organization to manage everything for them, including all
   aspects of backing up their data as well as failing over and balancing
   servers, or,
   * Hiding behind rationales of needing to be secure or needing 100%
   uptime or needing no customer disruption in order to avoid any change
   to the status quo. This is because the incentives inside legacy IT
   application development and IT operations groups are typically towards
   not rocking the boat in order to satisfy unrealistic expectations and
   outdated interface agreements that are forced upon them by management
   chains that haven't crawled out of the waterfall project management funk
   of the 1980s.
 
   Adding pet-based features to Nova would, IMO, just perpetuate the above
   scenarios and incentives.
 
  What about the cases where it's not a preference but rather just the
  inertia of pre-existing systems and procedures?

 You mean what I wrote in the second bullet point above?

  If we can get them in the door with enough support for legacy stuff,
  then they might be easier to convince to do things the cloud way in
  the future.

 Yes, fair point, and that's what Shawn was saying as well. Just noting
 that in my experience, the second part of the above sentence just
 doesn't happen. Once you bring them over and offer them the tools from
 their legacy environment, they aren't interested in changing. :)

  If we stick with the hard-line cattle-only approach we run the risk of
  alienating them completely since redoing everything at once is generally
  not feasible.

 Yes, I understand that. I'm actually fine with including functionality
 like memory snapshotting, but only if it under no circumstances negatively
 impacts the compute service for other tenants/users, and does not
 negatively impact the scaling factor of Nova either.

 I'm just not as optimistic as you are that once legacy IT folks have
 their old tools, they will consider changing their habits. ;)

 Best,
 -jay






 --
 Qin Zhao



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova][Cinder] Feature about volume delete protection

2014-03-11 Thread laserjetyang
I think workflow management might be a better place to solve your
problem, if I understand correctly.


On Tue, Mar 11, 2014 at 4:29 PM, Huang Zhiteng winsto...@gmail.com wrote:

 On Tue, Mar 11, 2014 at 11:38 AM, Zhangleiqiang
 zhangleiqi...@huawei.com wrote:
  Hi all,
 
 
 
  Besides the soft-delete state for volumes, I think there is a need to
  introduce another fake-delete state for volumes which have snapshots.
 
 
 
  Currently OpenStack refuses delete requests for volumes which have
  snapshots. However, we have no way to limit users to using only a
  specific snapshot rather than the original volume, because the original
  volume is always visible to the users.
 
 
 
  So I think we can permit users to delete volumes which have snapshots,
  and mark the volume with a fake-delete state. When all of the snapshots
  of the volume have been deleted, the original volume will be removed
  automatically.
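
For concreteness, a minimal sketch of the proposed fake-delete semantics,
using hypothetical volume/snapshot objects rather than actual Cinder code.

def delete_volume(volume):
    if volume.snapshots:
        volume.status = 'fake-deleted'  # hidden from the user, kept on disk
    else:
        volume.destroy()                # no snapshots: really delete


def delete_snapshot(snapshot):
    volume = snapshot.volume
    snapshot.destroy()
    # Last snapshot gone: reap the fake-deleted parent automatically.
    if volume.status == 'fake-deleted' and not volume.snapshots:
        volume.destroy()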
 
 Can you describe the actual use case for this?  I am not sure I follow
 why an operator would want to limit the owner of a volume to using only
 a specific snapshot.  It sounds like you are adding another
 layer.  If that's the case, the problem should be solved at the upper
 layer instead of in Cinder.
 
 
 
 
  Any thoughts? Any advice is welcome.
 
 
 
 
 
 
 
  --
 
  zhangleiqiang
 
 
 
  Best Regards
 
 
 
  From: John Griffith [mailto:john.griff...@solidfire.com]
  Sent: Thursday, March 06, 2014 8:38 PM
 
 
  To: OpenStack Development Mailing List (not for usage questions)
  Subject: Re: [openstack-dev] [Nova][Cinder] Feature about volume delete
  protection
 
 
 
 
 
 
 
  On Thu, Mar 6, 2014 at 9:13 PM, John Garbutt j...@johngarbutt.com
 wrote:
 
  On 6 March 2014 08:50, zhangyu (AI) zhangy...@huawei.com wrote:
  It seems to be an interesting idea. In fact, a China-based public IaaS,
  QingCloud, has provided a similar feature for their virtual servers.
  Within 2 hours after a virtual server is deleted, the server owner can
  decide whether or not to cancel the deletion and recycle the deleted
  virtual server.
 
  People make mistakes, and such a feature helps in urgent cases. Any ideas
  here?
 
  Nova has soft_delete and restore for servers. That sounds similar?
 
  John
 
 
 
  -Original Message-
  From: Zhangleiqiang [mailto:zhangleiqi...@huawei.com]
  Sent: Thursday, March 06, 2014 2:19 PM
  To: OpenStack Development Mailing List (not for usage questions)
  Subject: [openstack-dev] [Nova][Cinder] Feature about volume delete
  protection
 
  Hi all,
 
  Currently OpenStack provides a delete-volume function to the user,
  but it seems there is no protection against a mistaken delete operation.
 
  As we know, the data in a volume may be very important and valuable,
  so it's better to provide the user with a method to avoid deleting a
  volume by mistake.
 
  For example:
  We can provide a safe delete for the volume.
  The user can specify how long the volume's deletion will be delayed
  (before it is actually deleted) when deleting the volume.
  Before the volume is actually deleted, the user can cancel the delete
  operation and get the volume back.
  After the specified time, the volume will be actually deleted by the
  system.
 
  Any thoughts? Any advice is welcome.
 
  Best regards to you.
 
 
  --
  zhangleiqiang
 
  Best Regards
 
 
 
 
 
 
  I think a soft-delete for Cinder sounds like a neat idea.  You should
 file a
  BP that we can target for Juno.
 
 
 
  Thanks,
 
  John
 
 
 
 
 



 --
 Regards
 Huang Zhiteng


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Automatic Evacuation

2014-03-03 Thread laserjetyang
There are a lot of rules for HA or LB, so I think it might be a better idea
to scope the framework and leave the policies as plugins.
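
For illustration, such a framework could be as small as a monitor loop that
delegates the reaction to a pluggable policy. The novaclient calls are
approximate and signatures vary by release; everything else is hypothetical,
not an existing OpenStack service.

import time

from novaclient.v1_1 import client as nova_client


def evacuate_policy(nova, service):
    # One possible plugin: evacuate everything off the dead host.
    for server in nova.servers.list(
            search_opts={'host': service.host, 'all_tenants': 1}):
        server.evacuate(on_shared_storage=True)


def monitor(nova, policy, interval=30):
    while True:
        for svc in nova.services.list(binary='nova-compute'):
            if svc.state == 'down':
                policy(nova, svc)  # swap in reboot/retry/HA policies here
        time.sleep(interval)


def main():
    nova = nova_client.Client('admin', 'secret', 'admin',
                              'http://controller:5000/v2.0')
    monitor(nova, evacuate_policy)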


On Mon, Mar 3, 2014 at 10:30 PM, Andrew Laski andrew.la...@rackspace.com wrote:

 On 03/01/14 at 07:24am, Jay Lau wrote:

 Hey,

 Sorry to bring this up again. There are also some discussions here:
 http://markmail.org/message/5zotly4qktaf34ei

 You can also search [Runtime Policy] in your email list.

 Not sure if we can put this into Gantt and enable Gantt to provide both
 initial placement and run-time policies like HA, load balancing, etc.


 I don't have an opinion at the moment as to whether or not this sort of
 functionality belongs in Gantt, but there's still a long way to go just to
 get the scheduling functionality we want out of Gantt and I would like to
 see the focus stay on that.





 Thanks,

 Jay



 2014-02-21 21:31 GMT+08:00 Russell Bryant rbry...@redhat.com:

  On 02/20/2014 06:04 PM, Sean Dague wrote:
  On 02/20/2014 05:32 PM, Russell Bryant wrote:
  On 02/20/2014 05:05 PM, Costantino, Leandro I wrote:
  Hi,
 
  Would like to know if there's any interest in having an
  'automatic evacuation' feature when a compute node goes down. I
  found 3 bps related to this topic: [1] Adding a periodic task
  and using the ServiceGroup API for compute-node status [2] Using
  ceilometer to trigger the evacuate api. [3] Including some kind
  of H/A plugin by using a 'resource optimization service'
 
  Most of those BPs have comments like 'this logic should not
  reside in nova', so that's why I am asking what the best
  approach would be to have something like that.
 
  Should this be ignored, relying instead on external monitoring
  tools to trigger the evacuation? There are complex scenarios
  that require a lot of logic that won't fit into nova or any
  other OpenStack component. (For instance, sometimes it will be
  faster to reboot the node or nova-compute than to start the
  evacuation, but if that fails X times then trigger an
  evacuation, etc.)
 
  Any thought/comment// about this?
 
  Regards Leandro
 
  [1]
  https://blueprints.launchpad.net/nova/+spec/vm-auto-ha-when-host-broken
 
  [2]
  https://blueprints.launchpad.net/nova/+spec/evacuate-instance-automatically
 
  [3]
  https://blueprints.launchpad.net/nova/+spec/resource-optimization-service
 
 
 
 My opinion is that I would like to see this logic done outside of Nova.
 
   Right now Nova is the only service that really understands the
   compute topology of hosts, though its understanding of liveness is
   really not sufficient to handle this kind of HA thing anyway.
 
   I think that's the real problem to solve: how to provide
  notifications to somewhere outside of Nova on host death. And the
  question is, should Nova be involved in just that part, keeping
  track of node liveness and signaling up for someone else to deal
  with it? Honestly that part I'm more on the fence about. Because
  putting another service in place to just handle that monitoring
  seems overkill.
 
  I 100% agree that all the policy, reacting, logic for this should
  be outside of Nova. Be it Heat or somewhere else.

 I think we agree.  I'm very interested in continuing to enhance Nova
 to make sure that the thing outside of Nova has all of the APIs it
 needs to get the job done.

 --
 Russell Bryant





 --
 Thanks,

 Jay






___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] [Ironic] Get power and temperature via IPMI

2013-12-23 Thread laserjetyang
I think this could prompt a more general discussion on how to collect
physical equipment information, and which information to collect.
Right now, ceilometer only tracks at the VM level, and when we use Ironic,
we expect it to give us good information on the deployed
physical machines.
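
For illustration, collecting such readings could be as simple as wrapping
ipmitool. The 'ipmitool sensor' output format varies by vendor, so the
parsing below is a sketch only, not Ceilometer or Ironic code.

import subprocess


def read_sensors():
    out = subprocess.check_output(['ipmitool', 'sensor']).decode()
    readings = {}
    for line in out.splitlines():
        # Typical row: "Ambient Temp | 25.000 | degrees C | ok | ..."
        fields = [f.strip() for f in line.split('|')]
        if len(fields) >= 3 and fields[1] not in ('', 'na'):
            try:
                readings[fields[0]] = (float(fields[1]), fields[2])
            except ValueError:
                pass  # skip non-numeric (discrete) sensors
    return readings

# e.g. {'Ambient Temp': (25.0, 'degrees C'), 'Avg Power': (120.0, 'Watts')}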


On Mon, Dec 23, 2013 at 10:17 AM, Gao, Fengqian fengqian@intel.com wrote:

  Hi, Pradipta,

 From personal experience, I think lm-sensors is not as good as IPMI. I have
 to configure it manually, and the sensor data it can get is also less than
 IPMI's.
 
 So I prefer to use IPMI. Have you used it before? Maybe you can share your
 experience.



 Best wishes



 --fengqian



 From: Pradipta Banerjee [mailto:bprad...@yahoo.com]
 Sent: Friday, December 20, 2013 10:52 PM
 To: openstack-dev@lists.openstack.org
 
 Subject: Re: [openstack-dev] [Nova] [Ironic] Get power and temperature
 via IPMI



 On 12/19/2013 12:30 AM, Devananda van der Veen wrote:

   On Tue, Dec 17, 2013 at 10:00 PM, Gao, Fengqian fengqian@intel.com
 wrote:

  Hi, all,

 I am planning to extend the bp
 https://blueprints.launchpad.net/nova/+spec/utilization-aware-scheduling
 with power and temperature. In other words, power and temperature can be
 collected and used by the nova-scheduler just like CPU utilization.

   This is a good idea and has definite use cases where one might want to
 optimize provisioning based on power consumption.

 I have a question here. As you know, IPMI is used to get power and
 temperature, and baremetal implements the IPMI functions in Nova. But the
 baremetal driver is being split out of nova, so if I want to change
 something in the IPMI support, which part should I choose now? Nova or
 Ironic?





 Hi!



 A few thoughts... Firstly, new features should be geared towards Ironic,
 not the nova baremetal driver as it will be deprecated soon (
 https://blueprints.launchpad.net/nova/+spec/deprecate-baremetal-driver).
 That being said, I actually don't think you want to use IPMI for what
 you're describing at all, but maybe I'm wrong.



 When scheduling VMs with Nova, in many cases there is already an agent
 running locally, e.g. nova-compute, and this agent is already supplying
 information to the scheduler. I think this is where the facilities for
 gathering power/temperature/etc. (e.g. via lm-sensors) should be placed,
 and the readings can be reported back to the scheduler along with other
 usage statistics.

 +1

 Using lm-sensors or equivalent seems better.
 Have a look at the following blueprint
 https://blueprints.launchpad.net/nova/+spec/extensible-resource-tracking



 If you think there's a compelling reason to use Ironic for this instead of
 lm-sensors, please clarify.



 Cheers,

 Devananda










  --

 Regards,

 Pradipta




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] future fate of nova-network?

2013-11-25 Thread laserjetyang
"I've moved my cloud to neutron already, but while it provides many advanced
features it still really falls down on providing simple solutions for
simple use cases"
That is my feeling as well.
I am not able to easily get neutron running smoothly; there are
a lot of tricks. Compared to nova-network, it is getting more and more
advanced features, which is good, but it looks to me like the code is
heavier and even harder to keep stable.


On Sat, Nov 23, 2013 at 12:17 AM, Jonathan Proulx j...@jonproulx.com wrote:

 To add to the screams of others: removing features from nova-network to
 achieve parity with neutron is a non-starter, and it rather scares me
 to hear it suggested.

 I do try not to rant in public, especially about things I'm not
 competent to really help fix, but I can't really contain this one any
 longer:

 rant
 As an operator I've moved my cloud to neutron already, but while it
 provides many advanced features it still really falls down on
 providing simple solutions for simple use cases.  Most operators I've
 talked to informally hate it for that and don't want to go near it, and
 for new users, even those with advanced skill sets, neutron causes by
 far the most cursing and rage-quits I've seen (again, just my
 subjective observation) on IRC, Twitter, and the mailing lists.

 Providing feature parity and an easy cutover *should have been* priority
 1 when quantum split out of nova, as it was for cinder (which was a
 delightful and completely unnoticeable transition).

 We need feature parity and complexity parity with nova-network for the
 use cases it covers.  The failure to do so, or even to have a reasonable
 plan to do so, is currently the worst thing about OpenStack.
 /rant

 I do appreciate the work being done on advanced networking features in
 neutron (I'm even using some of them); just someone please bring the
 focus back to the basics.

 -Jon


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev