[openstack-dev] [Glance] image.exists events

2015-11-09 Thread Cristian Tomoiaga
Hello,

I recently created a script to generate image.exists events similar to what
is described here:
https://blueprints.launchpad.net/glance/+spec/glance-exists-notification

I am wondering if it's a good idea to finish the implementation described
in that spec?

image.exists events are especially useful for resource accounting/billing
(for example: what images were active for a tenant 7 months ago).

(See Nova's instance.exists events; Cinder, Neutron and probably other
projects emit similar events.)
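For illustration, the payload for such an event might look like the sketch
below. The field names are my assumption, modeled loosely on Nova's
instance.exists payload, not taken from the spec:

```python
from datetime import datetime, timezone


def build_image_exists_payload(image, period_start, period_end):
    """Build the payload for a periodic image.exists audit event.

    Field names here are an assumption, modeled loosely on Nova's
    instance.exists payload; the blueprint may specify different ones.
    """
    return {
        "image_id": image["id"],
        "owner": image["owner"],            # tenant being billed
        "size": image["size"],              # bytes, for capacity billing
        "status": image["status"],
        "audit_period_beginning": period_start.isoformat(),
        "audit_period_ending": period_end.isoformat(),
    }
```

A periodic task would emit one such payload per image, per audit period,
through the usual notification bus.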

-- 
Cristian Tomoiaga
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] Nova and LVM thin support

2014-04-20 Thread Cristian Tomoiaga
Hello everyone,

Before going any further with my implementation, I would like to ask the
community about LVM thin support in Nova (not Cinder).
The current LVM backend does not support thin LVs.
Does anyone believe it is a good idea to add support for this in Nova? (I
plan on adding it to my own deployment anyway.)
I would also like to know where Red Hat stands on this, since they do most
of the LVM development.
LVM thin appears to be supported in RHEL 7 (?), so we may consider the
thin target stable enough for production in Juno (Cinder has supported it
since last year).

I know there was ongoing work to bring a common storage library to oslo or
directly into Nova (Cinder's Brick library), but I have heard nothing new
for some time. Maybe John Griffith has some thoughts on this.

The reasons why LVM thin support would be a nice addition should be well
known, especially to people working with LVM.
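For anyone less familiar with thin provisioning, the two lvcreate
invocations involved look roughly like this (command construction only,
with made-up volume group and LV names; see lvcreate(8) for the exact
flags):

```python
def thin_pool_cmd(vg, pool, size_gb):
    # Carve a thin pool LV out of the volume group; this is physical space.
    return ["lvcreate", "--type", "thin-pool",
            "-L", "%dG" % size_gb, "-n", pool, vg]


def thin_lv_cmd(vg, pool, name, virtual_gb):
    # Thin LVs get a *virtual* size (-V); the sum of virtual sizes may
    # exceed the pool's physical size, which is the whole point.
    return ["lvcreate", "--thin", "-V", "%dG" % virtual_gb,
            "-n", name, "%s/%s" % (vg, pool)]
```

The instance disks would then be thin LVs in a per-host pool instead of
fully allocated LVs, which is what makes fast snapshots and overcommit
possible.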

Another question relates to how Nova handles snapshots when LVM is used as
a backend (I hope I didn't miss anything in the code):
Right now, if we can't do a live snapshot, the instance state (memory) is
saved (libvirt virDomainManagedSave), qemu-img is used to back up the
instance disk(s), and after that we resume the instance.
Can we insert code to snapshot the instance disk, so that the instance is
offline only for the memory dump, and copy the disk contents from the
snapshot afterwards?
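The flow I have in mind could be sketched roughly as follows (illustrative
pseudocode, not Nova's actual code path; `dom` stands in for a libvirt
domain and `run` for a command runner, and all sizes and paths are made
up):

```python
def snapshot_with_lvm(dom, vg, lv, run):
    """Sketch of a cold-snapshot flow for an LVM-backed instance.

    The guest is offline only for the managed save plus an (almost
    instant) LVM COW snapshot; qemu-img then copies the disk out of
    the snapshot while the instance is already running again.
    """
    dom.managedSave(0)                      # save memory state, guest stops
    run(["lvcreate", "-s", "-L", "10G",     # COW snapshot of the root LV
         "-n", lv + "_snap", "%s/%s" % (vg, lv)])
    dom.create()                            # resume the instance immediately
    run(["qemu-img", "convert", "-O", "qcow2",
         "/dev/%s/%s_snap" % (vg, lv), "/tmp/%s.qcow2" % lv])
    run(["lvremove", "-f", "%s/%s_snap" % (vg, lv)])
```

The key difference from the current behaviour is that qemu-img runs
against the snapshot LV after the resume, not against the live LV while
the instance is down.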

-- 
Regards,
Cristian Tomoiaga
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Debian Jessie freeze date announced: 5th of November 2014

2013-10-19 Thread Cristian Tomoiaga
Hello Thomas,

I am sorry for replying a little late on this. I plan on using Debian for
my OpenStack setups (right now I am on a RHEL-based setup) and I would
really like the latest OpenStack release to be available.
I was initially planning to set up my own mirrors, since I always seem to
need features from the next OpenStack release. For example, Grizzly looks
too old to me, and some features that were supposed to land in Havana are
now scheduled for Icehouse.
Given this, I would very much like to have the J release in Debian Jessie.
I'm not sure how to approach this, or whether it's worth the effort on
your part, given the latest issues you reported for Havana and since some
features in the K release will most likely make me switch to separate
mirrors anyway.
However, taking into account the rapid development of OpenStack, my guess
is that the J release should land in Jessie if possible.
I will also try to find some time to help out as much as I can with this.
Let me know what you decide, probably after the summit.



On Mon, Oct 14, 2013 at 7:34 AM, Thomas Goirand z...@debian.org wrote:

 Hi,

 The Debian release team has announced the release date for Jessie:
 https://lists.debian.org/debian-devel-announce/2013/10/msg4.html

 This means that for this release, we will *not* have any kind of sync
 with Ubuntu LTS (last time for Wheezy, we froze a few months after the
 2012.04 LTS).

 I haven't made up my mind yet if we should release Debian Jessie with
 OpenStack Icehouse (to have the same release as the Ubuntu LTS), or with
 the J release. I'd be happy to gather comments and suggestions about
 it, and discuss about it at the HK summit.

 Cheers,

 Thomas Goirand (zigo)

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Regards,
Cristian Tomoiaga


Re: [openstack-dev] [nova] RFC: adding on_shared_storage field to instance

2013-10-04 Thread Cristian Tomoiaga
Hello Chris,

Just a note regarding this: I was thinking of using local plus shared
storage for an instance (e.g. the root disk local and another disk as a
Cinder volume).
If I understand this correctly, flagging the instance as having local
storage may not be such a good idea in this particular case, right?
Maybe root_on_local?

Regards



-- 
Regards,
Cristian Tomoiaga


[openstack-dev] [openstack][neutron] Handling issues unlikely to happen with the current code

2013-10-01 Thread Cristian Tomoiaga
Hello,

I am wondering whether we should handle potential issues (unlikely to
happen with the current code) now, or wait and see how the code evolves.
What I am referring to can be seen, for example, in
neutron/db/db_base_plugin_v2.py in _recycle_ip, if we try to recycle the
same IP twice.
Looking at where and how _recycle_ip is called, it is unlikely that the
same IP will be recycled twice, but if that happens, two allocation pools
will be created containing the same IP address, leading to all sorts of
issues afterwards.
There is no check for whether the IP already exists as a single entry in
the allocation pools table; an extra DB query should be added to avoid
this.
The IP handling code will change in the future, but I am guessing the same
function will continue to exist.
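The missing check is essentially the following (an illustrative sketch
over an in-memory set standing in for the allocation pools table, not the
actual SQLAlchemy query):

```python
def recycle_ip(available_ips, ip):
    """Return an IP to the availability set, refusing a double recycle.

    In Neutron this would be an extra DB query against the allocation
    pools table before inserting the single-IP pool entry; here an
    in-memory set stands in for the table.
    """
    if ip in available_ips:
        # Recycling the same IP twice would otherwise create two
        # overlapping allocation pool entries for it.
        raise ValueError("IP %s is already recycled" % ip)
    available_ips.add(ip)
```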
Waiting for some feedback on this, thank you!

-- 
Regards,
Cristian Tomoiaga


Re: [openstack-dev] [Neutron][Climate] bp:configurable-ip-allocation

2013-09-26 Thread Cristian Tomoiaga
Hello Nikolay,

Looking at this bp, it seems it has been targeted for icehouse-1 :(
I have been waiting for this too, for some time now.

Mark, I may be able to help if needed (will this use the same logic as the
abandoned code?).

I am working on something similar to floating IPs, but for normal IPs: I
need to be able to allow project owners to reserve specific IPs and
allocate them to VMs as needed (targeted at flat provider networks where
project owners need to keep their IP addresses).

-- 
Regards,
Cristian Tomoiaga


[openstack-dev] [openstack][neutron] Reserved fixed IPs

2013-07-15 Thread Cristian Tomoiaga
Hello everyone,

I am working on implementing fixed IP reservation for tenants. My goal is
to be able to reserve fixed IPs for a tenant and to avoid, as much as
possible, the ephemeral nature of an IP.

A basic workflow would look like this:

A tenant or admin reserves one or more fixed IPs. He will then be able to
use one or more of those reserved IPs on his instances (assign them to
ports, with support for multiple IPs per port).
If no fixed IPs (or not enough) are reserved, fall back to the current
IPAM implementation; otherwise let the tenant select from his reserved
IPs and then go through the current IPAM.
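The allocation side of that workflow could be sketched like this (plain
Python over in-memory structures; all names are invented for
illustration):

```python
def allocate_ip(tenant, reserved, current_ipam, requested=None):
    """Allocate a fixed IP for `tenant`.

    `reserved` maps tenant -> set of reserved-but-unused IPs, and
    `current_ipam` is a callable standing in for the existing IPAM.
    A requested IP is honored only if the tenant reserved it;
    otherwise we fall back to the current IPAM behaviour.
    """
    pool = reserved.get(tenant, set())
    if requested is not None and requested in pool:
        pool.discard(requested)       # the reservation is now in use
        return requested
    if requested is None and pool:
        return pool.pop()             # any reserved IP will do
    return current_ipam(tenant)       # no usable reservation: current IPAM
```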

I am using fixed routable and non-routable IPs for public and private
networks (provider networks, no NAT and no tagging). I will also use
floating IPs for LB, DNS and so on.

I have a few questions regarding the development of this, since the
documentation is still being worked on and I have to dig through the code
a lot to understand a few things:

1. nova reserve-fixed-ip: this belongs to the now-obsolete nova-network,
right?
2. I thought of creating a new model (mainly a DB table) to hold the IPs
and the tenant IDs in order to keep the association. I've done this for
the openvswitch plugin in ovs_models_v2 by adding a new model. I can
probably do this globally in /db directly, right (especially if I plan on
supporting multiple plugins)?
3. I was planning on adding to Neutron the API calls Nova has for fixed
IPs (e.g. fixed-ip-get, reserve, unreserve). Does this seem right? I am
asking because I believe there is some work towards a new IPAM
implementation and I would like to get some thoughts. I am also asking
because it seems a little confusing to me that Nova can also manage IPs,
and I am not sure which functions are obsolete there.
4. This should go in as an extension first (as far as I understand from
the docs): add the extension to extend the Neutron API and modify the
current IPAM, right?


-- 
Regards,
Cristian Tomoiaga