Re: [Openstack-doc-core] [Openstack-docs] Troubleshooting docs: in one chapter or across multiple chapters?

2012-07-12 Thread Russell Bryant
On 07/12/2012 10:45 AM, Lorin Hochstein wrote:
 (crossposting to doc-core in case not everyone there has moved over to
 openstack-docs yet).
 
 A question came up on this merge proposal: https://review.openstack.org/9494
 
 In that proposal, I had created a troubleshooting section in the
 Networking chapter in the Compute Admin guide. Tom pointed out that we
 already have a whole Troubleshooting chapter and asked whether it made
 sense to just keep it in one place or split it across sections. I wanted
 to bring it up to the list to see what people thought.
 
 I liked having it by section so that the Networking troubleshooting docs sit
 closer to the rest of the networking content. But I can imagine
 troubleshooting issues that span multiple sections (e.g., some interaction
 between scheduling and VMs), in which case splitting the content would make
 it harder to find, since a reader might miss the troubleshooting section that
 solves their problem.

I like having it with the Networking chapter.  I think there are a
couple of general cases where people are going to be reading this
information:

1) They are reading through all of the docs to learn about OpenStack.
In that case, I think the troubleshooting information about a particular
topic is most useful with the rest of the information on the same topic.

2) Someone has hit a problem so they started putting it into Google.  In
this case, it doesn't matter that much where it is.  However, I still
think it makes more sense to have it in the Networking chapter.  Once
someone gets finished troubleshooting their networking problem, they
would likely be interested in reviewing other information on the
networking topic rather than other unrelated troubleshooting bits.

-- 
Russell Bryant





Re: [Openstack] Nova-Compute and Libvirt

2012-07-12 Thread Trinath Somanchi
Self-resolved.

Since no memory resources were available, this error was shown.


On Thu, Jul 12, 2012 at 11:06 AM, Trinath Somanchi 
trinath.soman...@gmail.com wrote:

 Hi-

 I have set up two machines: a Controller (all OpenStack modules) and a Node
 (only nova-compute).

 In the first case, the VM is created on the Controller, becomes Active, and
 I am able to log in.

 In the second case, the VM is created on the Node but stops with a spawning
 error: Libvirt error: unable to read from monitor: connection reset by peer.

 To check this error, I moved the file set of the existing, active instance
 over to the NODE machine and started it from the virsh console. This
 particular instance, which had started and was active on the Controller
 machine, gave the same error on the NODE machine: Libvirt error: unable to
 read from monitor: connection reset by peer.

 Is this error machine-specific or installation-specific?

 How does libvirt communicate between the Controller and the Node?

 Please help me in resolving this issue.



 --
 Regards,
 --
 Trinath Somanchi,
 +91 9866 235 130




-- 
Regards,
--
Trinath Somanchi,
+91 9866 235 130


Re: [Openstack] Nova Scheduler issue

2012-07-12 Thread Christian Wittwer
You can just stop nova-compute on the essex-1 node.

2012/7/12 Trinath Somanchi trinath.soman...@gmail.com

 Hi-

 I have installed OpenStack on a machine, ESSEX-1, which is the controller,
 and nova-compute on another machine, ESSEX-2, which acts as an agent/node.

 Whenever I start a new instance, the VM instance is created on the
 CONTROLLER (ESSEX-1) machine, not on the ESSEX-2 machine, the agent/node.

 Please help me with how to restrict/instruct the controller to create
 VM instances only on the agent/nodes.

 Thank you for the help.


 --
 Regards,
 --
 Trinath Somanchi,
 +91 9866 235 130




Re: [Openstack] Nova Scheduler issue

2012-07-12 Thread Sébastien Han
$ sudo nova-manage service disable --host=ESSEX-1 --service nova-compute

It's also good to read the documentation before asking questions.

http://docs.openstack.org/essex/openstack-compute/admin/content/managing-the-cloud.html#d6e6254
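
For reference, a quick way to check the result afterwards (a sketch, not part
of the documented procedure; host names follow this thread):

# After disabling the service, confirm the scheduler will skip the controller:
sudo nova-manage service list
# nova-compute on ESSEX-1 should now show as disabled; new instances
# should land on ESSEX-2.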

Cheers.

On Thu, Jul 12, 2012 at 9:14 AM, Christian Wittwer wittwe...@gmail.comwrote:

 You can just stop nova-compute on the essex-1 node.

 2012/7/12 Trinath Somanchi trinath.soman...@gmail.com

 Hi-

 I have installed Openstack on a Machine ESSEX-1 which is the controller
 and Nova-Compute on another machine ESSEX-2 which acts as an agent/node.

 When ever I start an new instance, the VM instance is getting created in
 the CONTROLLER (ESSEX-1) machine not in the ESSEX-2 machine, the agent/node.

 Please kindly help me on how to restrict/instruct the controller to
 create the VM instances only on the Agent/Nodes only.

 Thanking you for the help.


 --
 Regards,
 --
 Trinath Somanchi,
 +91 9866 235 130




Re: [Openstack] Openstack Folsom testing packages available for Ubuntu 12.04 and 12.10.

2012-07-12 Thread Thierry Carrez
Adam Gandelman wrote:
 I'd like to announce the availability of Folsom trunk testing PPAs for
 Ubuntu 12.04 and 12.10.  We've spent a considerable amount of time
 this cycle expanding our test infrastructure + coverage and packaging
 efforts in order to support a single OpenStack release across two Ubuntu
 releases.  We're not *entirely* there yet, but anyone interested in
 testing a packaged version of Folsom is encouraged to check out the PPA
 [1].
 
 https://launchpad.net/~openstack-ubuntu-testing/+archive/folsom-trunk-testing
 [...]
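
For anyone who wants to try it, enabling the PPA on Ubuntu is the usual
sequence (a sketch only; the PPA name is taken from the URL above, and which
package you install depends on what you want to test):

sudo add-apt-repository ppa:openstack-ubuntu-testing/folsom-trunk-testing
sudo apt-get update
sudo apt-get install nova-compute    # or any other OpenStack package of interest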

Great news! Note that this effort replaces the previous testing PPAs
that were loosely maintained by Ubuntu-minded folks in the CI/Release team.

The wiki page has been updated to match, and will be updated again once
the Cloud Archive is available:

http://wiki.openstack.org/PPAs

Regards,

-- 
Thierry Carrez (ttx)
Release Manager, OpenStack




Re: [Openstack] Nova-Compute and Libvirt

2012-07-12 Thread Trinath Somanchi
Hi Neo-

I have configured nova-network to use the FlatDHCPManager driver.

I'm able to start instances on the NODES.

But sometimes these instances run on the Controller itself. I'm investigating
how to restrict this.

Hence I posted the issue to the list.

On Thu, Jul 12, 2012 at 12:28 PM, Nguyen Son Tung tungns@gmail.comwrote:

 Hi Trinath,

 I'm Neo.
 I am trying to install nova on 2 nodes like you, but I'm stuck with the
 nova-network config.
 How about your installation?

 Could you run the instances on compute-nodes?
 How about your nova network setup?

 Thank you!
 Regards,

 Neo

 On Thu, Jul 12, 2012 at 1:50 PM, Trinath Somanchi
 trinath.soman...@gmail.com wrote:
  Self Resolved 
 
  since no memory resources are available, this error is shown.
 
 
 
  On Thu, Jul 12, 2012 at 11:06 AM, Trinath Somanchi
  trinath.soman...@gmail.com wrote:
 
  Hi-
 
  I have set up two machine a Controller (All Openstack Modules) and Node
  (Only Nova-Compute).
 
  In the first instance, the VM  is getting created in the Controller and
 is
  Active and able to a login.
 
  In the Second Instance the VM is getting created in the Node but stops
  with an Spawning error. Libvirt error: unable to read from monitor :
  connection reset by peer.
 
  To check this error, I have moved the existing and active instance file
  set to the NODE machine and from the VIRSH console, started the same.
 This
  particular instance which started and active in the Controller machine
 gave
  an error in the NODE machine Libvirt error: unable to read from
 monitor :
  connection reset by peer.
 
  Is this error Machine specific or Installation specific.
 
  How libvirt communicate in both Controller and Node.
 
  Please help me in resolving the issue.
 
 
 
  --
  Regards,
  --
  Trinath Somanchi,
  +91 9866 235 130
 
 
 
 
  --
  Regards,
  --
  Trinath Somanchi,
  +91 9866 235 130
 
 
 



 --
 Nguyen Son Tung
 Cellphone: 0942312007
 Skype: neophilo




-- 
Regards,
--
Trinath Somanchi,
+91 9866 235 130


Re: [Openstack] Nova Scheduler issue

2012-07-12 Thread Trinath Somanchi
Thanks for the reply.

If I stop nova-compute on the ESSEX-1 machine, then how will the Controller
and the Nodes communicate via libvirt, which nova-compute in turn handles?


On Thu, Jul 12, 2012 at 1:00 PM, Sébastien Han han.sebast...@gmail.comwrote:

 $ sudo nova-manage service disable --host=ESSEX-1 --service nova-compute

 It's also good to read the documentation before asking questions.


 http://docs.openstack.org/essex/openstack-compute/admin/content/managing-the-cloud.html#d6e6254

 Cheers.

 On Thu, Jul 12, 2012 at 9:14 AM, Christian Wittwer wittwe...@gmail.comwrote:

 You can just stop nova-compute on the essex-1 node.

 2012/7/12 Trinath Somanchi trinath.soman...@gmail.com

 Hi-

 I have installed Openstack on a Machine ESSEX-1 which is the controller
 and Nova-Compute on another machine ESSEX-2 which acts as an agent/node.

 When ever I start an new instance, the VM instance is getting created in
 the CONTROLLER (ESSEX-1) machine not in the ESSEX-2 machine, the agent/node.

 Please kindly help me on how to restrict/instruct the controller to
 create the VM instances only on the Agent/Nodes only.

 Thanking you for the help.


 --
 Regards,
 --
 Trinath Somanchi,
 +91 9866 235 130







-- 
Regards,
--
Trinath Somanchi,
+91 9866 235 130


Re: [Openstack] [Quantum] Public Network spec proposal

2012-07-12 Thread Gary Kotton

Hi,
1. Is this also applicable to the agents? Say for example a user wants 
to ensure that a public network is attached to network interface em1 and 
the private network attached to em2. Is this something that will be 
addressed by the blueprint?
2. I prefer option #3. This seems to be a cleaner approach for the user 
interface.

Thanks
Gary

On 07/12/2012 01:52 AM, Salvatore Orlando wrote:

Hi,

A proposal for the implementation of the public networks feature has 
been published.

It can be reached from the quantum-v2-public-networks blueprint page [1].
Feedback is more than welcome!

Regards,
Salvatore

[1]: 
https://blueprints.launchpad.net/quantum/+spec/quantum-v2-public-networks





Re: [Openstack] Nova-Compute and Libvirt

2012-07-12 Thread Trinath Somanchi
Thanks a lot for the help in the replies.


I have disabled nova-compute on the Controller. It worked fine.
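
For reference, a rough sketch of the database cleanup Kiall describes below
(illustrative only: it assumes the default MySQL "nova" database and an
Essex-era schema; back up the database first and substitute your controller's
real hostname for CONTROLLER_HOST):

mysql -u root -p nova <<'SQL'
DELETE FROM compute_nodes
 WHERE service_id = (SELECT id FROM services
                      WHERE host = 'CONTROLLER_HOST'
                        AND `binary` = 'nova-compute');
DELETE FROM services
 WHERE host = 'CONTROLLER_HOST' AND `binary` = 'nova-compute';
SQL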

On Thu, Jul 12, 2012 at 3:48 PM, Kiall Mac Innes ki...@managedit.ie wrote:

 Are you saying you don't want instances to run on the controller?

 If so, remove nova-compute from the controller and remove its record from
 the compute_nodes table, and its nova-compute row from the services table.

 That will prevent nova from launching instances on that server.

 Thanks,
 Kiall

 Sent from my mobile...
 On Jul 12, 2012 10:17 AM, Trinath Somanchi trinath.soman...@gmail.com
 wrote:

 Hi Neo-

 I have configured Nova network to use FLATDHCPMANAGER driver.

 I'm able to start the instances at NODES.

 But some times these instances run at Controller it self. I'm
 investigating the terms on how to restrict this.

 Hence posted the issue in the list.

 On Thu, Jul 12, 2012 at 12:28 PM, Nguyen Son Tung 
 tungns@gmail.comwrote:

 Hi Trinath,

 I'm Neo.
 I am trying to install nova in 2 nodes like you, but I stuck with
 nova-network config.
 How about your installation?

 Could you run the instances on compute-nodes?
 How about your nova network setup?

 Thank you!
 Regrads,

 Neo

 



 --
 Nguyen Son Tung
 Cellphone: 0942312007
 Skype: neophilo




 --
 Regards,
 --
 Trinath Somanchi,
 +91 9866 235 130






-- 
Regards,
--
Trinath Somanchi,
+91 9866 235 130


[Openstack] Fwd: [Quantum] Public Network spec proposal

2012-07-12 Thread Endre Karlson
Why not just --public or not? Why do you need --public True? That just
adds confusion...

Endre.


2012/7/12 Gary Kotton gkot...@redhat.com

 Hi,
 1. Is this also applicable to the agents? Say for example a user wants to
 ensure that a public network is attached to network interface em1 and the
 private network attached to em2. Is this something that will be addressed
 by the blueprint?
 2. I prefer option #3. This seems to be a cleaner approach for the user
 interface.
 Thanks
 Gary


 On 07/12/2012 01:52 AM, Salvatore Orlando wrote:

 Hi,

  A proposal for the implementation of the public networks feature has
 been published.
 It can be reached from the quantum-v2-public-networks blueprint page [1].
 Feedback is more than welcome!

  Regards,
 Salvatore

  [1]:
 https://blueprints.launchpad.net/quantum/+spec/quantum-v2-public-networks




[Openstack] Fwd: Re: [Quantum] Public Network spec proposal

2012-07-12 Thread Gary Kotton

Forwarding to the list

 Original Message 
Subject:Re: [Openstack] [Quantum] Public Network spec proposal
Date:   Thu, 12 Jul 2012 13:47:13 +0200
From:   Endre Karlson endre.karl...@gmail.com
To: gkot...@redhat.com



Why not just --public or not ? Why do you need --public True ? That just 
adds confusion...


Endre.

2012/7/12 Gary Kotton gkot...@redhat.com mailto:gkot...@redhat.com

   Hi,
   1. Is this also applicable to the agents? Say for example a user
   wants to ensure that a public network is attached to network
   interface em1 and the private network attached to em2. Is this
   something that will be addressed by the blueprint?
   2. I prefer option #3. This seems to be a cleaner approach for the
   user interface.
   Thanks
   Gary


   On 07/12/2012 01:52 AM, Salvatore Orlando wrote:

Hi,

A proposal for the implementation of the public networks feature
has been published.
It can be reached from the quantum-v2-public-networks blueprint
page [1].
Feedback is more than welcome!

Regards,
Salvatore

[1]:
https://blueprints.launchpad.net/quantum/+spec/quantum-v2-public-networks






Re: [Openstack] Networking issues in Essex

2012-07-12 Thread Jonathan Proulx

I've only deployed openstack for the first time a couple weeks ago,
but FWIW...

I had similar symptoms on my Essex test deployment (on Ubuntu 12.04).
Turned out my problem was that while the br100 bridge was up and
configured, the underlying eth1 physical interface was down, so the bits
went nowhere.  'ifconfig eth1 up' fixed it all, followed of course by
fixing /etc/network/interfaces as well so this happens on its own
in future.
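
A minimal sketch of the kind of /etc/network/interfaces stanza that raises
the physical interface persistently on Ubuntu (the interface name follows
this thread; the exact layout depends on how your bridge is configured, so
treat it as illustrative only):

# Bring the raw interface up at boot; addressing is handled by the bridge.
auto eth1
iface eth1 inet manual
    up ip link set dev eth1 up
    down ip link set dev eth1 down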

-Jon

On Thu, Jul 12, 2012 at 02:56:57PM +1000, Michael Chapman wrote:
:Hi all, I'm hoping I could get some assistance figuring out my networking
:problems with a small Essex test cluster. I have a small Diablo cluster
:running without any problems but have hit a wall in deploying Essex.
:
:I can launch VMs without issue and access them from the compute host, but
:from there I can't access anything except the host, DNS services, and other
:VMs.
:
:I have separate machines running keystone, glance, postgresql, rabbit-mq
:and nova-api. They're all on the .os domain with 172.22.1.X IPs
:
:I have one machine running nova-compute, nova-network and nova-api, with a
:public address 192.43.239.175 and also an IP on the 172.22.1.X subnet in
:the .os domain. It has the following nova/conf:
:
:--dhcpbridge_flagfile=/etc/nova/nova.conf
:--dhcpbridge=/usr/bin/nova-dhcpbridge
:--logdir=/var/log/nova
:--state_path=/var/lib/nova
:--lock_path=/var/lock/nova
:--force_dhcp_release
:--iscsi_helper=tgtadm
:--libvirt_use_virtio_for_bridges
:--connection_type=libvirt
:--root_helper=sudo nova-rootwrap
:--verbose
:--ec2_private_dns_show_ip
:
:--network_manager=nova.network.manager.FlatDHCPManager
:--rabbit_host=os-amqp.os
:--sql_connection=postgresql://[user]:[password]@os-sql.os/nova
:--image_service=nova.image.glance.GlanceImageService
:--glance_api_servers=os-glance.os:9292
:--auth_strategy=keystone
:--scheduler_driver=nova.scheduler.simple.SimpleScheduler
:--keystone_ec2_url=http://os-key.os:5000/v2.0/ec2tokens
:
:--api_paste_config=/etc/nova/api-paste.ini
:
:--my_ip=192.43.239.175
:--flat_interface=eth0
:--public_interface=eth1
:--multi_host=True
:--routing_source_ip=192.43.239.175
:--network_host=192.43.239.175
:
:--dmz_cidr=$my_ip
:
:--ec2_host=192.43.239.175
:--ec2_dmz_host=192.43.239.175
:
:I believe I'm seeing a natting issue of some sort - my VMs cannot ping
:external IPs, though DNS seems to work.
:ubuntu@monday:~$ ping www.google.com
:PING www.l.google.com (74.125.237.148) 56(84) bytes of data.
:AWKWARD SILENCE
:
:When I do a tcpdump on the compute host things seem fairly normal, even
:though nothing is getting back to the VM
:
:root@ncios1:~# tcpdump icmp -i br100
:tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
:listening on br100, link-type EN10MB (Ethernet), capture size 65535 bytes
:14:35:28.046416 IP 10.0.0.8  syd01s13-in-f20.1e100.net: ICMP echo request,
:id 5002, seq 9, length 64
:14:35:28.051477 IP syd01s13-in-f20.1e100.net  10.0.0.8: ICMP echo reply,
:id 5002, seq 9, length 64
:14:35:29.054505 IP 10.0.0.8  syd01s13-in-f20.1e100.net: ICMP echo request,
:id 5002, seq 10, length 64
:14:35:29.059556 IP syd01s13-in-f20.1e100.net  10.0.0.8: ICMP echo reply,
:id 5002, seq 10, length 64
:
:I've pored over the iptables nat rules and can't see anything amiss apart
:from the masquerades that are automatically added: (I've cut out some empty
:chains for brevity)
:
:root@ncios1:~# iptables -L -t nat -v
:Chain PREROUTING (policy ACCEPT 22 packets, 2153 bytes)
: pkts bytes target prot opt in out source
:destination
:   22  2153 nova-network-PREROUTING  all  --  anyany anywhere
:  anywhere
:   22  2153 nova-compute-PREROUTING  all  --  anyany anywhere
:  anywhere
:   22  2153 nova-api-PREROUTING  all  --  anyany anywhere
:  anywhere
:
:Chain INPUT (policy ACCEPT 12 packets, 1573 bytes)
: pkts bytes target prot opt in out source
:destination
:
:Chain OUTPUT (policy ACCEPT 31 packets, 2021 bytes)
: pkts bytes target prot opt in out source
:destination
:   31  2021 nova-network-OUTPUT  all  --  anyany anywhere
:  anywhere
:   31  2021 nova-compute-OUTPUT  all  --  anyany anywhere
:  anywhere
:   31  2021 nova-api-OUTPUT  all  --  anyany anywhere
:anywhere
:
:Chain POSTROUTING (policy ACCEPT 30 packets, 1961 bytes)
: pkts bytes target prot opt in out source
:destination
:   31  2021 nova-network-POSTROUTING  all  --  anyany anywhere
:anywhere
:   30  1961 nova-compute-POSTROUTING  all  --  anyany anywhere
:anywhere
:   30  1961 nova-api-POSTROUTING  all  --  anyany anywhere
:anywhere
:   30  1961 nova-postrouting-bottom  all  --  anyany anywhere
:  anywhere
:0 0 MASQUERADE  tcp  --  anyany 192.168.122.0/24!
:192.168.122.0/24 masq ports: 1024-65535
:0 0 MASQUERADE  udp  --  anyany 192.168.122.0/24!
:192.168.122.0/24 masq ports: 1024-65535
:0 0 MASQUERADE  all  -- 

Re: [Openstack] Nova Scheduler issue

2012-07-12 Thread Nguyen Son Tung
Oh, don't worry.
The nova-scheduler and the other nova components will take care of all of it.
Just run the disable nova-compute command and launch another instance. You'll see.
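
A quick way to confirm where the new instance actually lands (a sketch only;
the image and flavor are placeholders, and the host attribute requires admin
credentials with the extended server attributes extension enabled):

nova boot --image <image-id> --flavor m1.tiny test-placement
nova show test-placement | grep -i host
# The host fields should now point at ESSEX-2 rather than the controller.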

On Thu, Jul 12, 2012 at 4:16 PM, Trinath Somanchi
trinath.soman...@gmail.com wrote:
 Thanks for the reply.

 If I stop the Nova-Compute in Essex-1 machine. then How will Controller and
 Nodes communicate via libvirt where in turn Nova-compute handles it.?



 On Thu, Jul 12, 2012 at 1:00 PM, Sébastien Han han.sebast...@gmail.com
 wrote:

 $ sudo nova-manage service disable --host=ESSEX-1 --service nova-compute

 It's also good to read the documentation before asking questions.


 http://docs.openstack.org/essex/openstack-compute/admin/content/managing-the-cloud.html#d6e6254

 Cheers.

 On Thu, Jul 12, 2012 at 9:14 AM, Christian Wittwer wittwe...@gmail.com
 wrote:

 You can just stop nova-compute on the essex-1 node.

 2012/7/12 Trinath Somanchi trinath.soman...@gmail.com

 Hi-

 I have installed Openstack on a Machine ESSEX-1 which is the controller
 and Nova-Compute on another machine ESSEX-2 which acts as an agent/node.

 When ever I start an new instance, the VM instance is getting created in
 the CONTROLLER (ESSEX-1) machine not in the ESSEX-2 machine, the 
 agent/node.

 Please kindly help me on how to restrict/instruct the controller to
 create the VM instances only on the Agent/Nodes only.

 Thanking you for the help.


 --
 Regards,
 --
 Trinath Somanchi,
 +91 9866 235 130







 --
 Regards,
 --
 Trinath Somanchi,
 +91 9866 235 130






-- 
Nguyen Son Tung
Cellphone: 0942312007
Skype: neophilo



[Openstack] Fwd: [Quantum] Public Network spec proposal

2012-07-12 Thread Yong Sheng Gong
If we just use one flag, it can represent just two values: True or False. If
we want to represent three values (True, False, or not specified), we have to
use --public True, --public False, or nothing at all. So it is a three-valued
logic.

-----openstack-bounces+gongysh=cn.ibm@lists.launchpad.net wrote: -----
To: openstack@lists.launchpad.net
From: Endre Karlson
Sent by: openstack-bounces+gongysh=cn.ibm@lists.launchpad.net
Date: 07/12/2012 07:53PM
Subject: [Openstack] Fwd: [Quantum] Public Network spec proposal

Why not just --public or not? Why do you need --public True? That just adds
confusion...

Endre.



[Openstack] [Swift][Object-server] Why that arp_cache consumes memory followed with uploading objects?

2012-07-12 Thread Kuo Hugo
Hi all

I found that the arp_cache in slabinfo on the object-server grows along
with the number of uploaded objects.

Is any code using it?

  OBJS   ACTIVE  USE OBJ SIZE  SLABS OBJ/SLAB CACHE SIZE NAME
2352000 1329606  56%    0.06K  36750       64    147000K kmalloc-64
1566617 1257226  80%    0.21K  42341       37    338728K xfs_ili
1539808 1257748  81%    1.00K  48119       32   1539808K xfs_inode
 538432  470882  87%    0.50K  16826       32    269216K kmalloc-512
 403116  403004  99%    0.19K   9598       42     76784K dentry
 169250  145824  86%    0.31K   6770       25     54160K arp_cache


Could it cause any performance concern?

Btw, how could I flush the memory of the arp_cache that is being used by XFS (Swift)?


-- 
+Hugo Kuo+
tonyt...@gmail.com
+ tonyt...@gmail.com886 935004793


Re: [Openstack] [nova] [cinder] Nova-volume vs. Cinder in Folsom

2012-07-12 Thread Thomas, Duncan
We've got volumes in production, and while I'd be more comfortable with option 
2 for the reasons you list below, plus the fact that cinder is fundamentally 
new code with totally new HA and reliability work needing to be done 
(particularly for the API endpoint), it sounds like the majority is strongly 
favouring option 1...

--
Duncan Thomas
HP Cloud Services, Galway

From: openstack-bounces+duncan.thomas=hp@lists.launchpad.net 
[mailto:openstack-bounces+duncan.thomas=hp@lists.launchpad.net] On Behalf 
Of Flavia Missi
Sent: 11 July 2012 20:56
To: Renuka Apte
Cc: Openstack (openstack@lists.launchpad.net) (openstack@lists.launchpad.net)
Subject: Re: [Openstack] [nova] [cinder] Nova-volume vs. Cinder in Folsom

For me it's +1 to 1, but...

Here at Globo.com we're already deploying clouds based on OpenStack (not in
production yet, we have dev and lab), and it's really painful when OpenStack
just forces us to change; I mean, sysadmins are not that happy, so I think
it's more polite if we warn them in Folsom and remove everything next. Maybe
this way nobody's going to fear the update. It also makes us lose the chain of
thought: you're learning, and suddenly you have to change something for an
update, and then you come back to what you were doing...
--
Flavia



Re: [Openstack] [keystone] Rate limit middleware

2012-07-12 Thread Jay Pipes
On 07/11/2012 07:28 PM, Rafael Durán Castañeda wrote:
 Thank you guys for the info, I didn't know about some of the projects.
 However, writing my own in-house stuff is not what I was considering,
 but rather adding a middleware into Keystone; nothing fancy, but extensible
 so it covers at least the most basic use cases, pretty much like the nova
 middleware. So, would you like to see something like that in Keystone,
 or not?

I think that's what Kevin was trying to say you didn't need to do, since
Turnstile can already do that for you :) You simply insert the Turnstile
Python WSGI middleware into the Paste deploy pipeline of Keystone, and
then you get rate limiting in Keystone.

You'd just add this into the Keystone paste.ini file:

[filter:turnstile]
paste.filter_factory = turnstile.middleware:turnstile_filter
redis.host = your Redis database host name or IP

And then insert the turnstile middleware in the Keystone pipeline, like so:

[pipeline:public_api]
pipeline = stats_monitoring url_normalize token_auth admin_token_auth
xml_body json_body debug ec2_extension turnstile public_service

The above should be a single line of course...

And then configure Turnstile to your needs. See:

http://code.activestate.com/pypm/turnstile/

If you wanted to do some custom stuff, check out the custom Nova
Turnstile middleware for an example:

http://code.activestate.com/pypm/nova-limits/

All the best,
-jay



Re: [Openstack] Time for a UK Openstack User Group meeting ?

2012-07-12 Thread Nick Barcet
On 07/05/2012 07:02 PM, Nick Barcet wrote:
 On 07/04/2012 05:38 PM, Day, Phil wrote:
 Hi All,

 I’m thinking it’s about time we had an OpenStack User Group meeting in
 the UK , and would be interested in hearing from anyone interested in
 attending, presenting, helping to organise, etc.

 London would seem the obvious choice, but we could also host here in HP
 Bristol if that works for people.

 Reply here or e-mail me directly (phil@hp.com), and if there’s
 enough interest I’ll pull something together.

 
 Canonical would be delighted to host that meeting in our new london
 office.  We have capacity for up to 100 people and are currently
 checking availability, but are looking for a date in the last week of July.
  We have started the OpenStack-London meetup [1] so anyone interested
 can subscribe.
 
 [1] http://www.meetup.com/Openstack-London/

The room has now been booked:

http://www.meetup.com/Openstack-London/events/55354582/

When: Wednesday, July 25, 2012 6:30 PM

Where: IPC Media Building (Bluefin)
110 Southwark Street
SE1 0SU

RSVP limit: 100 Yes RSVPs

Cheers,
--
Nick Barcet nick.bar...@canonical.com
aka: nijaba, nicolas





[Openstack] About glance API v2.0 spec

2012-07-12 Thread Yong Sheng Gong
 Hi,
 Who can tell me where glance API v2.0 spec is?
 Thanks
 Yong Sheng Gong



Re: [Openstack] [nova] [cinder] Nova-volume vs. Cinder in Folsom

2012-07-12 Thread Jay Pipes
On 07/12/2012 10:36 AM, Thomas, Duncan wrote:
 We’ve got volumes in production, and while I’d be more comfortable with
 option 2 for the reasons you list below, plus the fact that cinder is
 fundamentally new code with totally new HA and reliability work needing
 to be done (particularly for the API endpoint), it sounds like the
 majority is strongly favouring option 1…

Actually, I believe Cinder is essentially a bit-for-bit copy of
nova-volumes. John G, is that correct?

It's this similarity that really makes option 1 feasible. If the
codebases (and API) were radically different, removal like this would be
much more difficult IMHO.

Best,
-jay



Re: [Openstack] About glance API v2.0 spec

2012-07-12 Thread Brian Waldon
Here's the latest draft: 
https://docs.google.com/document/d/1jPdgmQUzTuSt9iqcgPy74fbkzafCvqfQpvzRhwf2Kic/edit

It is still a work in progress as we have found several tweaks we want to make 
as we actually implement it. I will spend some time updating the google doc 
sometime soon.


On Jul 12, 2012, at 8:25 AM, Yong Sheng Gong wrote:

 Hi,
  Who can tell me where glance API v2.0 spec is?
 
 Thanks
 Yong Sheng Gong


[Openstack] [keystone] - V3 API implementation folsom goals

2012-07-12 Thread Joseph Heck
During Tuesday's keystone meeting 
(http://eavesdrop.openstack.org/meetings/openstack-meeting/2012/openstack-meeting.2012-07-10-18.01.html
 if you're curious), we reviewed the work that we'd originally lined up against 
Folsom and did a check against the time remaining in the development cycle 
(through August 16th - http://wiki.openstack.org/FolsomReleaseSchedule).

It was pretty clear that while we had high hopes, there's a lot remaining that 
isn't going to get done. Related to that, I've unlinked a significant number of 
blueprints from the folsom-3 milestone. Based on the current trajectory, I 
suspect they won't make it for those blueprints. If we start making significant 
traction on the development to get those features and components done, then 
we'll relink them.

The V3 API has been nailed down sufficiently to start implementation, and that is, 
in my view, our highest focus outside of the PKI work that Adam Young has been 
pushing into place. I am hopeful that we can have a V3 API fully implemented 
and operational by the end of the Folsom-3 milestone (in a few weeks), but I 
feel it's far too late in the development cycle to ask any other OpenStack core 
projects to rely on that effort for a Folsom release. Regardless of hope, I've 
unlinked the V3 API blueprint and the pieces dependent on it from the folsom 
series until it's clear that we're going to be in a position to actually 
deliver those items.

Related to the V3 API work, I've done a quick breakdown, and we're abusing the 
bug mechanism in launchpad to help coordinate and track. The bugs related to 
the work breakdown have all been tagged with v3api, and have been 
prioritized. The general breakdown is set to implement tests that represent the 
V3 API, and then fill in the implementation to make those tests work. 

-joe



Re: [Openstack] Fwd: [Quantum] Public Network spec proposal

2012-07-12 Thread Salvatore Orlando
Thank you again for your feedback.

On the discussion about two- or three-way logic, I understand Yong's point
about being able to fetch public and private networks in one call, but I also
agree with Endre that using a boolean flag for something which is
actually Yes/No/Whatever sounds confusing and is different from what the
OpenStack CLI usually does.

Hence, if we have broad agreement on the need to be able to specify
whether we want public networks, private networks or both, I'd go for
approach #3 in the design proposal, as suggested by Gary, and the CLI
option would become something like --network_type={public|private|both}.

On the agent issue raised by Gary - I'm afraid I don't understand. Gary,
could you please elaborate more?

Regards,
Salvatore

On 12 July 2012 05:37, Yong Sheng Gong gong...@cn.ibm.com wrote:


 If we just use one flag, it can represent just two values True or False.
 If we want to represent three values True, False or not specified, we have
 to use --public True or --public False or nothing at all.

 So it is a three-values logic.


 -openstack-bounces+gongysh=cn.ibm@lists.launchpad.net wrote: -
 To: openstack@lists.launchpad.net
 From: Endre Karlson **
 Sent by: openstack-bounces+gongysh=cn.ibm@lists.launchpad.net
 Date: 07/12/2012 07:53PM
 Subject: [Openstack] Fwd: [Quantum] Public Network spec proposal


 Why not just --public or not ? Why do you need --public True ? That just
 adds confusion...

 Endre.


 2012/7/12 Gary Kotton gkot...@redhat.com

 Hi,
 1. Is this also applicable to the agents? Say for example a user wants to
 ensure that a public network is attached to network interface em1 and the
 private network attached to em2. Is this something that will be addressed
 by the blueprint?
 2. I prefer option #3. This seems to be a cleaner approach for the user
 interface.
 Thanks
 Gary


 On 07/12/2012 01:52 AM, Salvatore Orlando wrote:

 Hi,

  A proposal for the implementation of the public networks feature has
 been published.
 It can be reached from the quantum-v2-public-networks blueprint page [1].
 Feedback is more than welcome!

  Regards,
 Salvatore

  [1]:
 https://blueprints.launchpad.net/quantum/+spec/quantum-v2-public-networks



Re: [Openstack] [nova] [cinder] Nova-volume vs. Cinder in Folsom

2012-07-12 Thread John Griffith
On Thu, Jul 12, 2012 at 9:28 AM, Jay Pipes jaypi...@gmail.com wrote:
 On 07/12/2012 10:36 AM, Thomas, Duncan wrote:
 We’ve got volumes in production, and while I’d be more comfortable with
 option 2 for the reasons you list below, plus the fact that cinder is
 fundamentally new code with totally new HA and reliability work needing
 to be done (particularly for the API endpoint), it sounds like the
 majority is strongly favouring option 1…

 Actually, I believe Cinder is essentially a bit-for-bit copy of
 nova-volumes. John G, is that correct?


Yes, that's correct, and as you state it's really the only reason that
option 1 is feasible and also why in my opinion it's the best option.

 It's this similarity that really makes option 1 feasible. If the
 codebases (and API) were radically different, removal like this would be
 much more difficult IMHO.

 Best,
 -jay



Re: [Openstack] [keystone] Rate limit middleware

2012-07-12 Thread Rafael Durán Castañeda

On 07/12/2012 04:48 PM, Jay Pipes wrote:

On 07/11/2012 07:28 PM, Rafael Durán Castañeda wrote:

Thank you guys for the info, I didn't know about some of the projects.
However writing my on-house own stuff is not what I was considering
but adding a middleware into Keystone, nothing fancy but extensible so
it covers at least most basic use cases, pretty much like nova
middleware. So , would you like to see something like that into keystone
or you don't?

I think that's what Kevin was trying to say you didn't need to do, since
Turnstile can already do that for you :) You simply insert the Turnstile
Python WSGI middleware into the Paste deploy pipeline of Keystone, and
then you get rate limiting in Keystone.

You'd just add this into the Keystone paste.ini file:

[filter:turnstile]
paste.filter_factory = turnstile.middleware:turnstile_filter
redis.host = your Redis database host name or IP

And then insert the turnstile middleware in the Keystone pipeline, like so:

[pipeline:public_api]
pipeline = stats_monitoring url_normalize token_auth admin_token_auth
xml_body json_body debug ec2_extension turnstile public_service

The above should be a single line of course...

And then configure Turnstile to your needs. See:

http://code.activestate.com/pypm/turnstile/

If you wanted to do some custom stuff, check out the custom Nova
Turnstile middleware for an example:

http://code.activestate.com/pypm/nova-limits/

All the best,
-jay

Unless I'm missing something, nova_limits is not applicable to Keystone 
since it takes the tenant_id from 'nova.context', which obviously is not 
available for Keystone; though adapting/extending it for Keystone should be 
trivial and is probably the way to go.


Regards,
Rafael




Re: [Openstack] [nova] [cinder] Nova-volume vs. Cinder in Folsom

2012-07-12 Thread Chuck Thier
We currently have a large deployment that is based on nova-volume as it is
in trunk today, and just ripping it out will be quite painful.  For us,
option #2 is the only suitable option.

We need a smooth migration path, and time to successfully migrate to Cinder.
Since there is no clear migration path between Openstack Nova releases, we
have to track very close to trunk.  The rapid change of nova and nova-volume
trunk has already been a very difficult task.  Ripping nova-volume out of nova
would bring us to a standstill until we could migrate to Cinder.

Cinder has made great strides to get where it is today, but
I really hope we, the Openstack community, will take the time to consider the
ramifications, and make sure that we take the time needed to ensure both
a successful release of Cinder and a successful transition from nova-volume
to Cinder.

--
Chuck



Re: [Openstack] [Nova] resource free -vs- allocated utilization?

2012-07-12 Thread Jonathan Proulx
For posterity: yes, the info isn't hard to find in the database:

mysql select id,vcpus,vcpus_used,memory_mb,memory_mb_used  from compute_nodes;

I'm not terribly keen on SQL as an interface, guess if it bothers me
enough I'll implement a different interface...

On Wed, Jul 11, 2012 at 10:34 PM, Jonathan Proulx j...@jonproulx.com wrote:
 On Wed, Jul 11, 2012 at 8:58 PM, Chris Behrens cbehr...@codestud.com wrote:
 Hi Jon,

 There's actually a review up right now proposing to add an OS API extension
 to be able to give some of this data:

 https://review.openstack.org/#/c/9544/

 that seems to be based on quota limits, whereas what I'm looking for just now
 is overall physical limits for all users.

 I don't know how you may be looking to query it, but it's not too difficult
 to get it directly from the instances table in the database, either.

 I guess I'm looking for something like:

 eucalyptus-describe-availibility-zones verbose
 AVAILABILITYZONE |- vm types free / max cpu ram disk
 AVAILABILITYZONE |- m1.small  /  1 128 2
 AVAILABILITYZONE |- c1.medium  /  1 256 5
 AVAILABILITYZONE |- m1.large  /  2 512 10
 AVAILABILITYZONE |- m1.xlarge  /  2 1024 20
 AVAILABILITYZONE |- c1.xlarge  /  4 2048 20

 (well not with the zeros, but it's the first example I could find)

 whereas with euca2ools and openstack I get essentially the output of
 'nova-manage service list', which is useful but not for the same
 things.  Guess I'll dig into the database; it shouldn't be too hard to get
 close to what I want. The eucalyptus output also takes into account
 fragmentation, which is nice, since if I have 100 free cpu slots but on
 100 different compute nodes I'm in a bit more trouble than if they are
 only spread across 10, since in the latter case multicore instances can
 start (for a while anyway).

 Thanks,
 -Jon



Re: [Openstack] [nova] [cinder] Nova-volume vs. Cinder in Folsom

2012-07-12 Thread George Reese
This community just doesn't give a rat's ass about compatibility, does it?

-George

On Jul 11, 2012, at 10:26 AM, Vishvananda Ishaya wrote:

 Hello Everyone,
 
 Now that the PPB has decided to promote Cinder to core for the Folsom
 release, we need to decide what happens to the existing Nova Volume
 code. As far as I can see it there are two basic strategies. I'm going
 to give an overview of each here:
 
 Option 1 -- Remove Nova Volume
 ==
 
 Process
 ---
 * Remove all nova-volume code from the nova project
 * Leave the existing nova-volume database upgrades and tables in
   place for Folsom to allow for migration
 * Provide a simple script in cinder to copy data from the nova
   database to the cinder database (The schema for the tables in
   cinder are equivalent to the current nova tables)
 * Work with package maintainers to provide a package based upgrade
   from nova-volume packages to cinder packages
 * Remove the db tables immediately after Folsom
 
 Disadvantages
 -
 * Forces deployments to go through the process of migrating to cinder
   if they want to use volumes in the Folsom release
 
 Option 2 -- Deprecate Nova Volume
 =
 
 Process
 ---
 * Mark the nova-volume code deprecated but leave it in the project
   for the folsom release
 * Provide a migration path at folsom
 * Backport bugfixes to nova-volume throughout the G-cycle
 * Provide a second migration path at G
 * Package maintainers can decide when to migrate to cinder
 
 Disadvantages
 -
 * Extra maintenance effort
 * More confusion about storage in openstack
 * More complicated upgrade paths need to be supported
 
 Personally I think Option 1 is a much more manageable strategy because
 the volume code doesn't get a whole lot of attention. I want to keep
 things simple and clean with one deployment strategy. My opinion is that
 if we choose option 2 we will be sacrificing significant feature
 development in G in order to continue to maintain nova-volume for another
 release.
 
 But we really need to know if this is going to cause major pain to existing
 deployments out there. If it causes a bad experience for deployers we
 need to take our medicine and go with option 2. Keep in mind that it
 shouldn't make any difference to end users whether cinder or nova-volume
 is being used. The current nova-client can use either one.
 
 Vish
 
 

--
George Reese - Chief Technology Officer, enStratus
e: george.re...@enstratus.com    Skype: nspollution    t: @GeorgeReese    p: +1.207.956.0217
enStratus: Enterprise Cloud Management - @enStratus - http://www.enstratus.com
To schedule a meeting with me: http://tungle.me/GeorgeReese



Re: [Openstack] [keystone] Rate limit middleware

2012-07-12 Thread Kevin L. Mitchell
On Thu, 2012-07-12 at 18:26 +0200, Rafael Durán Castañeda wrote:
 Unless I'm missing something, nova_limits is not applicable to Keystone 
 since it takes the tenant_id from 'nova.context', which obiously is not 
 available for Keystone; thought adapt/extend it to keystone should be 
 trivial and probably is the way to go.

You are correct, you would not take nova_limits and use it with
Keystone.  What you likely would do is use nova_limits as a model to
develop your own shim with the additional capabilities you need.  I
expect that you would not need much of what nova_limits does, by the
way…
-- 
Kevin L. Mitchell kevin.mitch...@rackspace.com




Re: [Openstack] [metering] Meeting agenda for Thursday at 16:00 UTC (July 12th, 2012)

2012-07-12 Thread Nick Barcet
On 07/11/2012 02:34 PM, Nick Barcet wrote:
 Hi,
 
 The metering project team holds a meeting in #openstack-meeting,
 Thursdays at 1600 UTC
 http://www.timeanddate.com/worldclock/fixedtime.html?hour=16min=0sec=0.
 
 Everyone is welcome.
 
 Agenda:
 http://wiki.openstack.org/Meetings/MeteringAgenda
 
  * Review last week's actions
- nijaba to send an email to the PPB for Incubation application
- nijaba to send call for candidate to the general mailing list
- jd to setup opa voting system to start on 26th and end on Aug 3rd
- nijaba to prime a roadmap page and invite others to populate it
- jd handle counter/meter type for real
- nijaba to document external API use with a clear warning about the
  limitation of the sum and duration function
 
  * Discuss and define actions from the PPB discussion on Tue regarding
our incubation.
-
 http://eavesdrop.openstack.org/meetings/openstack-meeting/2012/openstack-meeting.2012-07-10-20.01.html
-
 http://eavesdrop.openstack.org/meetings/openstack-meeting/2012/openstack-meeting.2012-07-10-20.01.log.html
 
  * Open discussion
 
 If you are not able to attend or have additional topic you would like to
 cover, please update the agenda on the wiki.
 
 Cheers,

The meeting took place, here is the summary:

==
#openstack-meeting: Ceilometer
==


Meeting started by nijaba at 16:00:01 UTC.  The full logs are available
at
http://eavesdrop.openstack.org/meetings/openstack-meeting/2012/openstack-meeting.2012-07-12-16.00.log.html


Meeting summary
---

* LINK: http://wiki.openstack.org/Meetings/MeteringAgenda  (nijaba,
  16:00:01)
* actions from previous meeting  (nijaba, 16:02:08)

* nijaba to send an email to the PPB for Incubation application
  (nijaba, 16:02:20)

* nijaba to send call for candidate to the general mailing list
  (nijaba, 16:02:31)

* jd__ to setup opa voting system to start on 26th and end on Aug 3rd
  (nijaba, 16:04:07)
  * ACTION: jd to setup opa voting system to start on 26th and end on
Aug 3rd  (nijaba, 16:04:27)

* nijaba to prime a roadmap page and invite others to populate it
  (nijaba, 16:04:40)
  * LINK: http://wiki.openstack.org/EfficientMetering/RoadMap  (nijaba,
16:04:40)
  * ACTION: nijaba to post roadmap to mailing list, asking for feedback
and volunteers?  (nijaba, 16:05:48)

* jd__ handle counter/meter type for real  (nijaba, 16:06:12)
  * ACTION: jd___ adding the type of meters in ceilometer meter code
(nijaba, 16:07:57)

* nijaba to document external API use with a clear warning about the
  limitation of the sum and duration function  (nijaba, 16:08:13)
  * LINK:
http://wiki.openstack.org/EfficientMetering/APIProposalv1#Limitations
(nijaba, 16:08:13)

* Discuss and define actions from the PPB discussion on Tue regarding
  our incubation.  (nijaba, 16:10:13)
  * LINK:

http://eavesdrop.openstack.org/meetings/openstack-meeting/2012/openstack-meeting.2012-07-10-20.01.html
(nijaba, 16:10:13)
  * LINK:

http://eavesdrop.openstack.org/meetings/openstack-meeting/2012/openstack-meeting.2012-07-10-20.01.log.html
(nijaba, 16:10:13)
  * ACTION: nijaba to add a table showing the state of integration for
each OpenStack project  (nijaba, 16:13:53)
  * ACTION: nijaba to adjust the proposal to reflect a longer incubation
period  (nijaba, 16:15:45)
  * ACTION: dhellmann to get some feedback now about the sorts of meters
users want from the mailing list  (nijaba, 16:19:57)
  * ACTION: dhellmann to open a bug and work on devstack integration
(nijaba, 16:21:39)

* Open Discussion  (nijaba, 16:26:42)
  * ACTION: dhellmann create a diagram of ceilometer architecture
(dhellmann, 16:35:35)
  * ACTION: dhellmann write a walk-through of setting up ceilometer and
collecting data  (dhellmann, 16:36:02)



Meeting ended at 16:47:27 UTC.



Action items, by person
---

* dhellmann
  * dhellmann to get some feedback now about the sorts of meters users
want from the mailing list
  * dhellmann to open a bug and work on devstack integration
  * dhellmann create a diagram of ceilometer architecture
  * dhellmann write a walk-through of setting up ceilometer and
collecting data
* jd___
  * jd___ adding the type of meters in ceilometer meter code
* nijaba
  * nijaba to post roadmap to mailing list, asking for feedback and
    volunteers?
  * nijaba to add a table showing the state of integration for each
OpenStack project
  * nijaba to adjust the proposal to reflect a longer incubation period




People present (lines said)
---

* nijaba (114)
* dhellmann (65)
* jd___ (34)
* flacoste (13)
* gmb (9)
* heckj (8)
* DanD (8)
* openstack (3)
* uvirtbot` (1)



___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp

Re: [Openstack] [Nova] resource free -vs- allocated utilization?

2012-07-12 Thread Kevin L. Mitchell
On Thu, 2012-07-12 at 12:31 -0400, Jonathan Proulx wrote:
 for posterity yes the info isn't hard to find in the database:
 
 mysql> select id, vcpus, vcpus_used, memory_mb, memory_mb_used from compute_nodes;
 
 I'm not terribly keen on SQL as an interface, guess if it bothers me
 enough I'll implement a different interface...

Check out the hypervisors extension and related novaclient addition, now
in trunk; I made all the information from the compute_nodes table
available via the API.
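For illustration, a rough sketch of pulling the same numbers through the API
with python-novaclient (assuming the hypervisors extension and the v1.1 client
from this era; exact attribute names may vary between versions):

# Hedged sketch only: list per-hypervisor resource usage via the API rather
# than querying the compute_nodes table directly. Credentials are placeholders.
from novaclient.v1_1 import client

nova = client.Client('admin', 'secret', 'admin', 'http://127.0.0.1:5000/v2.0/')
for hv in nova.hypervisors.list():
    print('%s: %s/%s vCPUs used, %s/%s MB RAM used' % (
        hv.hypervisor_hostname, hv.vcpus_used, hv.vcpus,
        hv.memory_mb_used, hv.memory_mb))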
-- 
Kevin L. Mitchell kevin.mitch...@rackspace.com


___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] [keystone] Rate limit middleware

2012-07-12 Thread Jay Pipes
On 07/12/2012 12:26 PM, Rafael Durán Castañeda wrote:
 Unless I'm missing something, nova_limits is not applicable to Keystone
 since it takes the tenant_id from 'nova.context', which obviously is not
 available for Keystone; though adapting/extending it for Keystone should be
 trivial and is probably the way to go.

Sure, though I'm pointing out that this could/should be an external
project (like nova_limits) and not something to be proposed for merging
into Keystone core...
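For what a standalone Keystone-oriented variant might look like, here is a
minimal WSGI middleware sketch (illustrative only, not the nova_limits code;
it assumes the auth middleware has already set an X-Tenant-Id header on the
request, and it omits any per-interval counter reset):

# Illustrative only: per-tenant request counting in plain WSGI, keyed on a
# request header instead of 'nova.context'.
class SimpleRateLimit(object):
    def __init__(self, app, limit=60):
        self.app = app
        self.limit = limit
        self.counts = {}

    def __call__(self, environ, start_response):
        tenant = environ.get('HTTP_X_TENANT_ID', 'anonymous')
        self.counts[tenant] = self.counts.get(tenant, 0) + 1
        if self.counts[tenant] > self.limit:
            start_response('413 Request Entity Too Large',
                           [('Content-Type', 'text/plain')])
            return ['Too many requests for this tenant\n']
        return self.app(environ, start_response)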

Best,
-jay

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


[Openstack] [nova] VM stucks in deleting task state

2012-07-12 Thread Hien Phan
Hello list,

I've just installed OpenStack Essex on Ubuntu 12.04. Everything works well
except for one problem I just hit: when I try to terminate a VM in the
dashboard, it gets stuck in the 'deleting' task state (for a few hours now).
I cannot connect to the VM. I tried rebooting with the nova command and got
this error:

root@openstack-node01:~# nova reboot hien-vm02
ERROR: Cannot 'reboot' while instance is in task_state deleting (HTTP 409)
root@openstack-node01:~# nova reboot hien-vm01
ERROR: Cannot 'reboot' while instance is in task_state deleting (HTTP 409)
root@openstack-node01:~# nova list
+--------------------------------------+-----------+--------+-------------------------------------+
| ID                                   | Name      | Status | Networks                            |
+--------------------------------------+-----------+--------+-------------------------------------+
| b924a325-b07f-480b-9a31-3049736fbfde | hien-vm02 | ACTIVE | private=172.16.1.35, 192.168.255.34 |
| e7908096-83e6-480d-9131-efa4ea73ca0d | hien-vm01 | ACTIVE | private=172.16.1.34, 192.168.255.33 |
+--------------------------------------+-----------+--------+-------------------------------------+



Openstack Dashboard screenshot image: http://i.imgur.com/7e4cf.png

How can I delete the VMs completely?
Thanks in advance.
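For reference, one Essex-era workaround (an assumption, not something
confirmed in this thread) is to clear the stuck task_state in the nova
database and then retry the delete. A minimal sketch, with table and column
names taken from the Essex schema and placeholder credentials:

# Hedged sketch only: reset a stuck 'deleting' task_state so the delete can
# be retried with the normal nova CLI afterwards.
import MySQLdb

db = MySQLdb.connect(host='localhost', user='nova', passwd='secret', db='nova')
cur = db.cursor()
cur.execute("UPDATE instances SET task_state = NULL WHERE uuid = %s",
            ('b924a325-b07f-480b-9a31-3049736fbfde',))
db.commit()
# afterwards: nova delete b924a325-b07f-480b-9a31-3049736fbfde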
-- 
Best regards,
Phan Quoc Hien

http://www.mrhien.info
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] [nova] [cinder] Nova-volume vs. Cinder in Folsom

2012-07-12 Thread Brian Waldon
We actually care a hell of a lot about compatibility. We also recognize there 
are times when we have to sacrifice compatibility so we can move forward at a 
reasonable pace.

If you think we are handling anything the wrong way, we would love to hear your 
suggestions. If you just want to make comments like this, I would suggest you 
keep them to yourself.

Have a great day!
Brian Waldon

On Jul 12, 2012, at 9:32 AM, George Reese wrote:

 This community just doesn't give a rat's ass about compatibility, does it?
 
 -George
 
 On Jul 11, 2012, at 10:26 AM, Vishvananda Ishaya wrote:
 
 Hello Everyone,
 
 Now that the PPB has decided to promote Cinder to core for the Folsom
 release, we need to decide what happens to the existing Nova Volume
 code. As far as I can see it there are two basic strategies. I'm going
 to give an overview of each here:
 
 Option 1 -- Remove Nova Volume
 ==
 
 Process
 ---
 * Remove all nova-volume code from the nova project
 * Leave the existing nova-volume database upgrades and tables in
   place for Folsom to allow for migration
 * Provide a simple script in cinder to copy data from the nova
   database to the cinder database (The schema for the tables in
   cinder are equivalent to the current nova tables)
 * Work with package maintainers to provide a package based upgrade
   from nova-volume packages to cinder packages
 * Remove the db tables immediately after Folsom
 
 Disadvantages
 -
 * Forces deployments to go through the process of migrating to cinder
   if they want to use volumes in the Folsom release
 
 Option 2 -- Deprecate Nova Volume
 =
 
 Process
 ---
 * Mark the nova-volume code deprecated but leave it in the project
   for the folsom release
 * Provide a migration path at folsom
 * Backport bugfixes to nova-volume throughout the G-cycle
 * Provide a second migration path at G
 * Package maintainers can decide when to migrate to cinder
 
 Disadvantages
 -
 * Extra maintenance effort
 * More confusion about storage in openstack
 * More complicated upgrade paths need to be supported
 
 Personally I think Option 1 is a much more manageable strategy because
 the volume code doesn't get a whole lot of attention. I want to keep
 things simple and clean with one deployment strategy. My opinion is that
 if we choose option 2 we will be sacrificing significant feature
 development in G in order to continue to maintain nova-volume for another
 release.
 
 But we really need to know if this is going to cause major pain to existing
 deployments out there. If it causes a bad experience for deployers we
 need to take our medicine and go with option 2. Keep in mind that it
 shouldn't make any difference to end users whether cinder or nova-volume
 is being used. The current nova-client can use either one.
 
 Vish
 
 
 ___
 Mailing list: https://launchpad.net/~openstack
 Post to : openstack@lists.launchpad.net
 Unsubscribe : https://launchpad.net/~openstack
 More help   : https://help.launchpad.net/ListHelp
 
 --
 George Reese - Chief Technology Officer, enStratus
 e: george.re...@enstratus.comSkype: nspollutiont: @GeorgeReesep: 
 +1.207.956.0217
 enStratus: Enterprise Cloud Management - @enStratus - http://www.enstratus.com
 To schedule a meeting with me: http://tungle.me/GeorgeReese
 
 ___
 Mailing list: https://launchpad.net/~openstack
 Post to : openstack@lists.launchpad.net
 Unsubscribe : https://launchpad.net/~openstack
 More help   : https://help.launchpad.net/ListHelp

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] [nova] [cinder] Nova-volume vs. Cinder in Folsom

2012-07-12 Thread George Reese
Well, I think overall OpenStack has done an absolute shit job of compatibility 
and I had hoped (and made a huge point of this at the OpenStack conference) 
Diablo - Essex would be the end of this compatibility bullshit.

But the attitudes in this thread and with respect to the whole Cinder question 
in general suggest to me that this cavalier attitude towards forward migration 
hasn't changed.

So you can kiss my ass.

-George

On Jul 12, 2012, at 12:11 PM, Brian Waldon wrote:

 We actually care a hell of a lot about compatibility. We also recognize there 
 are times when we have to sacrifice compatibility so we can move forward at a 
 reasonable pace.
 
 If you think we are handling anything the wrong way, we would love to hear 
 your suggestions. If you just want to make comments like this, I would 
 suggest you keep them to yourself.
 
 Have a great day!
 Brian Waldon
 
 On Jul 12, 2012, at 9:32 AM, George Reese wrote:
 
 This community just doesn't give a rat's ass about compatibility, does it?
 
 -George
 
 On Jul 11, 2012, at 10:26 AM, Vishvananda Ishaya wrote:
 
 Hello Everyone,
 
 Now that the PPB has decided to promote Cinder to core for the Folsom
 release, we need to decide what happens to the existing Nova Volume
 code. As far as I can see it there are two basic strategies. I'm going
 to give an overview of each here:
 
 Option 1 -- Remove Nova Volume
 ==
 
 Process
 ---
 * Remove all nova-volume code from the nova project
 * Leave the existing nova-volume database upgrades and tables in
   place for Folsom to allow for migration
 * Provide a simple script in cinder to copy data from the nova
   database to the cinder database (The schema for the tables in
   cinder are equivalent to the current nova tables)
 * Work with package maintainers to provide a package based upgrade
   from nova-volume packages to cinder packages
 * Remove the db tables immediately after Folsom
 
 Disadvantages
 -
 * Forces deployments to go through the process of migrating to cinder
   if they want to use volumes in the Folsom release
 
 Option 2 -- Deprecate Nova Volume
 =
 
 Process
 ---
 * Mark the nova-volume code deprecated but leave it in the project
   for the folsom release
 * Provide a migration path at folsom
 * Backport bugfixes to nova-volume throughout the G-cycle
 * Provide a second migration path at G
 * Package maintainers can decide when to migrate to cinder
 
 Disadvantages
 -
 * Extra maintenance effort
 * More confusion about storage in openstack
 * More complicated upgrade paths need to be supported
 
 Personally I think Option 1 is a much more manageable strategy because
 the volume code doesn't get a whole lot of attention. I want to keep
 things simple and clean with one deployment strategy. My opinion is that
 if we choose option 2 we will be sacrificing significant feature
 development in G in order to continue to maintain nova-volume for another
 release.
 
 But we really need to know if this is going to cause major pain to existing
 deployments out there. If it causes a bad experience for deployers we
 need to take our medicine and go with option 2. Keep in mind that it
 shouldn't make any difference to end users whether cinder or nova-volume
 is being used. The current nova-client can use either one.
 
 Vish
 
 
 ___
 Mailing list: https://launchpad.net/~openstack
 Post to : openstack@lists.launchpad.net
 Unsubscribe : https://launchpad.net/~openstack
 More help   : https://help.launchpad.net/ListHelp
 
 --
 George Reese - Chief Technology Officer, enStratus
 e: george.re...@enstratus.comSkype: nspollutiont: @GeorgeReesep: 
 +1.207.956.0217
 enStratus: Enterprise Cloud Management - @enStratus - 
 http://www.enstratus.com
 To schedule a meeting with me: http://tungle.me/GeorgeReese
 
 ___
 Mailing list: https://launchpad.net/~openstack
 Post to : openstack@lists.launchpad.net
 Unsubscribe : https://launchpad.net/~openstack
 More help   : https://help.launchpad.net/ListHelp
 

--
George Reese - Chief Technology Officer, enStratus
e: george.re...@enstratus.com    Skype: nspollution    t: @GeorgeReese    p: +1.207.956.0217
enStratus: Enterprise Cloud Management - @enStratus - http://www.enstratus.com
To schedule a meeting with me: http://tungle.me/GeorgeReese

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] [nova] [cinder] Nova-volume vs. Cinder in Folsom

2012-07-12 Thread Jay Pipes
On 07/12/2012 12:32 PM, George Reese wrote:
 This community just doesn't give a rat's ass about compatibility, does it?

a) Please don't be inappropriate on the mailing list
b) Vish sent the email below to the mailing list *precisely because* he
cares about compatibility. He wants to discuss the options with the
community and come up with a reasonable action plan with the Cinder PTL,
John Griffith for the move

Now, would you care to be constructive with your criticism?

Thanks,
-jay

 On Jul 11, 2012, at 10:26 AM, Vishvananda Ishaya wrote:
 
 Hello Everyone,

 Now that the PPB has decided to promote Cinder to core for the Folsom
 release, we need to decide what happens to the existing Nova Volume
 code. As far as I can see it there are two basic strategies. I'm going
 to give an overview of each here:

 Option 1 -- Remove Nova Volume
 ==

 Process
 ---
 * Remove all nova-volume code from the nova project
 * Leave the existing nova-volume database upgrades and tables in
   place for Folsom to allow for migration
 * Provide a simple script in cinder to copy data from the nova
   database to the cinder database (The schema for the tables in
   cinder are equivalent to the current nova tables)
 * Work with package maintainers to provide a package based upgrade
   from nova-volume packages to cinder packages
 * Remove the db tables immediately after Folsom

 Disadvantages
 -
 * Forces deployments to go through the process of migrating to cinder
   if they want to use volumes in the Folsom release

 Option 2 -- Deprecate Nova Volume
 =

 Process
 ---
 * Mark the nova-volume code deprecated but leave it in the project
   for the folsom release
 * Provide a migration path at folsom
 * Backport bugfixes to nova-volume throughout the G-cycle
 * Provide a second migration path at G
 * Package maintainers can decide when to migrate to cinder

 Disadvantages
 -
 * Extra maintenance effort
 * More confusion about storage in openstack
 * More complicated upgrade paths need to be supported

 Personally I think Option 1 is a much more manageable strategy because
 the volume code doesn't get a whole lot of attention. I want to keep
 things simple and clean with one deployment strategy. My opinion is that
 if we choose option 2 we will be sacrificing significant feature
 development in G in order to continue to maintain nova-volume for another
 release.

 But we really need to know if this is going to cause major pain to
 existing
 deployments out there. If it causes a bad experience for deployers we
 need to take our medicine and go with option 2. Keep in mind that it
 shouldn't make any difference to end users whether cinder or nova-volume
 is being used. The current nova-client can use either one.

 Vish


 ___
 Mailing list: https://launchpad.net/~openstack
 Post to : openstack@lists.launchpad.net
 Unsubscribe : https://launchpad.net/~openstack
 More help   : https://help.launchpad.net/ListHelp
 
 --
 George Reese - Chief Technology Officer, enStratus
 e: george.re...@enstratus.com    Skype: nspollution    t: @GeorgeReese    p: +1.207.956.0217
 enStratus: Enterprise Cloud Management - @enStratus - http://www.enstratus.com
 To schedule a meeting with me: http://tungle.me/GeorgeReese
 
 
 
 ___
 Mailing list: https://launchpad.net/~openstack
 Post to : openstack@lists.launchpad.net
 Unsubscribe : https://launchpad.net/~openstack
 More help   : https://help.launchpad.net/ListHelp
 


___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


[Openstack] Some clarifications about Cinder

2012-07-12 Thread John Griffith
Hi Everyone,

Throughout the email thread regarding how to proceed with
cinder/nova-volumes and a number of IRC conversations I thought I
should try and clarify a few things about Cinder.

First, it should be clear that Cinder is literally a direct copy of
the existing nova-volume code.  This was intentional in order to
maintain compatibility and provide a 'sane' transition.  The goal for
Folsom from the very start was clearly stated as providing a
functional equivalent and as near a compatible version as possible (I
won't say 100% because there's always room for interpretation with
regard to what compatibility means).  There are a number of things
that were considered and done to make things better for the community:

1. For the most part it's the same code
2. The usage semantics are the same
3. You can still use novaclient just as you did before
4. You can use euca2ools to the same extent that you did before
5. You can also use the new cinderclient

The only thing we're really changing at this point is that the
volume service is now its own project.  This of course means that there
is some up-front configuration regarding a different endpoint etc.,
but that really is the bulk of it.  You'll notice for example that all
of the existing devstack tests etc work exactly the same with cinder
as they do with nova-volume, we're not suggesting anybody replace
nova-volume with an incompatible interface.
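To make the endpoint change concrete, here is a rough sketch of talking to
the new service with python-cinderclient (assuming a 'volume' endpoint is
registered in the keystone catalog; exact module paths may differ between
versions):

# Hedged sketch: the cinder client mirrors the old nova volume calls, so
# listing volumes looks essentially the same as before. Placeholder credentials.
from cinderclient.v1 import client

cinder = client.Client('admin', 'secret', 'admin', 'http://127.0.0.1:5000/v2.0/')
for vol in cinder.volumes.list():
    print('%s %s %sGB' % (vol.id, vol.status, vol.size))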

I want to be very clear that I, and just about everybody else I've
talked with and worked with, DO in fact care about compatibility as
well as customer impacts.  I know one concern is migration; that is
critical, and if Cinder doesn't have a robust and clean migration
mechanism in place by F3, I don't think we would ask anybody to switch
from what they already have.

Thanks,
John

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


[Openstack] Nova Cells

2012-07-12 Thread Michael J Fork


Outside of the Etherpad (http://etherpad.openstack.org/FolsomComputeCells)
and presentation referenced there (http://comstud.com/FolsomCells.pdf), are
there additional details available on the architecture / implementation of
Cells?

Thanks.

Michael

-
Michael Fork
Cloud Architect, Emerging Solutions
IBM Systems & Technology Group
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] [nova] [cinder] Nova-volume vs. Cinder in Folsom

2012-07-12 Thread Christopher B Ferris
This level of response is unnecessary. 

That said, the perspectives which influenced the decision seemed somewhat 
weighted to the development community. I could be wrong, but I did not see much 
input from the operations community as to the impact.

Clearly, going forward, we want to be more deliberate about changes that may 
have an impact on operations and the broader ecosystem that bases its efforts on 
assumptions established at the start of a release cycle, rather than on changes 
introduced late in the cycle.

Cheers

Chris

Sent from my iPad

On Jul 12, 2012, at 2:24 PM, George Reese george.re...@enstratus.com wrote:

 Well, I think overall OpenStack has done an absolute shit job of 
 compatibility and I had hoped (and made a huge point of this at the OpenStack 
 conference) Diablo - Essex would be the end of this compatibility bullshit.
 
 But the attitudes in this thread and with respect to the whole Cinder 
 question in general suggest to me that this cavalier attitude towards forward 
 migration hasn't changed.
 
 So you can kiss my ass.
 
 -George
 
 On Jul 12, 2012, at 12:11 PM, Brian Waldon wrote:
 
 We actually care a hell of a lot about compatibility. We also recognize 
 there are times when we have to sacrifice compatibility so we can move 
 forward at a reasonable pace.
 
 If you think we are handling anything the wrong way, we would love to hear 
 your suggestions. If you just want to make comments like this, I would 
 suggest you keep them to yourself.
 
 Have a great day!
 Brian Waldon
 
 On Jul 12, 2012, at 9:32 AM, George Reese wrote:
 
 This community just doesn't give a rat's ass about compatibility, does it?
 
 -George
 
 On Jul 11, 2012, at 10:26 AM, Vishvananda Ishaya wrote:
 
 Hello Everyone,
 
 Now that the PPB has decided to promote Cinder to core for the Folsom
 release, we need to decide what happens to the existing Nova Volume
 code. As far as I can see it there are two basic strategies. I'm going
 to give an overview of each here:
 
 Option 1 -- Remove Nova Volume
 ==
 
 Process
 ---
 * Remove all nova-volume code from the nova project
 * Leave the existing nova-volume database upgrades and tables in
   place for Folsom to allow for migration
 * Provide a simple script in cinder to copy data from the nova
   database to the cinder database (The schema for the tables in
   cinder are equivalent to the current nova tables)
 * Work with package maintainers to provide a package based upgrade
   from nova-volume packages to cinder packages
 * Remove the db tables immediately after Folsom
 
 Disadvantages
 -
 * Forces deployments to go through the process of migrating to cinder
   if they want to use volumes in the Folsom release
 
 Option 2 -- Deprecate Nova Volume
 =
 
 Process
 ---
 * Mark the nova-volume code deprecated but leave it in the project
   for the folsom release
 * Provide a migration path at folsom
 * Backport bugfixes to nova-volume throughout the G-cycle
 * Provide a second migration path at G
 * Package maintainers can decide when to migrate to cinder
 
 Disadvantages
 -
 * Extra maintenance effort
 * More confusion about storage in openstack
 * More complicated upgrade paths need to be supported
 
 Personally I think Option 1 is a much more manageable strategy because
 the volume code doesn't get a whole lot of attention. I want to keep
 things simple and clean with one deployment strategy. My opinion is that
 if we choose option 2 we will be sacrificing significant feature
 development in G in order to continue to maintain nova-volume for another
 release.
 
 But we really need to know if this is going to cause major pain to existing
 deployments out there. If it causes a bad experience for deployers we
 need to take our medicine and go with option 2. Keep in mind that it
 shouldn't make any difference to end users whether cinder or nova-volume
 is being used. The current nova-client can use either one.
 
 Vish
 
 
 ___
 Mailing list: https://launchpad.net/~openstack
 Post to : openstack@lists.launchpad.net
 Unsubscribe : https://launchpad.net/~openstack
 More help   : https://help.launchpad.net/ListHelp
 
 --
 George Reese - Chief Technology Officer, enStratus
 e: george.re...@enstratus.comSkype: nspollutiont: @GeorgeReese
 p: +1.207.956.0217
 enStratus: Enterprise Cloud Management - @enStratus - 
 http://www.enstratus.com
 To schedule a meeting with me: http://tungle.me/GeorgeReese
 
 ___
 Mailing list: https://launchpad.net/~openstack
 Post to : openstack@lists.launchpad.net
 Unsubscribe : https://launchpad.net/~openstack
 More help   : https://help.launchpad.net/ListHelp
 
 
 --
 George Reese - Chief Technology Officer, enStratus
 e: george.re...@enstratus.comSkype: nspollutiont: @GeorgeReesep: 
 +1.207.956.0217
 enStratus: Enterprise Cloud 

Re: [Openstack] [nova] [cinder] Nova-volume vs. Cinder in Folsom

2012-07-12 Thread George Reese
I certainly wasn't picking on Vish, but instead the entire community so eagerly 
interested in option #1. You see, the OpenStack community has a perfect record 
of making sure stuff like that ends up breaking everyone between upgrades.

So, if you take offense by my comments… err, well, I'm not at all sorry. It's 
time for this community to grow the hell up and make sure systems upgrade 
nicely now and forever and that OpenStack environments are actually compatible 
with one another. Hell, I still find Essex environments that aren't even API 
compatible with one another. You have the Rackspace CTO wandering around 
conferences talking about how the value proposition of OpenStack is 
interoperability among clouds and yet you can't even get interoperability 
within the same OpenStack distribution of the same OpenStack version.

I smell a pile of bullshit and the community just keeps shoveling.

-George

On Jul 12, 2012, at 12:22 PM, Jay Pipes wrote:

 On 07/12/2012 12:32 PM, George Reese wrote:
 This community just doesn't give a rat's ass about compatibility, does it?
 
 a) Please don't be inappropriate on the mailing list
 b) Vish sent the email below to the mailing list *precisely because* he
 cares about compatibility. He wants to discuss the options with the
 community and come up with a reasonable action plan with the Cinder PTL,
 John Griffith for the move
 
 Now, would you care to be constructive with your criticism?
 
 Thanks,
 -jay
 
 On Jul 11, 2012, at 10:26 AM, Vishvananda Ishaya wrote:
 
 Hello Everyone,
 
 Now that the PPB has decided to promote Cinder to core for the Folsom
 release, we need to decide what happens to the existing Nova Volume
 code. As far as I can see it there are two basic strategies. I'm going
 to give an overview of each here:
 
 Option 1 -- Remove Nova Volume
 ==
 
 Process
 ---
 * Remove all nova-volume code from the nova project
 * Leave the existing nova-volume database upgrades and tables in
  place for Folsom to allow for migration
 * Provide a simple script in cinder to copy data from the nova
  database to the cinder database (The schema for the tables in
  cinder are equivalent to the current nova tables)
 * Work with package maintainers to provide a package based upgrade
  from nova-volume packages to cinder packages
 * Remove the db tables immediately after Folsom
 
 Disadvantages
 -
 * Forces deployments to go through the process of migrating to cinder
  if they want to use volumes in the Folsom release
 
 Option 2 -- Deprecate Nova Volume
 =
 
 Process
 ---
 * Mark the nova-volume code deprecated but leave it in the project
  for the folsom release
 * Provide a migration path at folsom
 * Backport bugfixes to nova-volume throughout the G-cycle
 * Provide a second migration path at G
 * Package maintainers can decide when to migrate to cinder
 
 Disadvantages
 -
 * Extra maintenance effort
 * More confusion about storage in openstack
 * More complicated upgrade paths need to be supported
 
 Personally I think Option 1 is a much more manageable strategy because
 the volume code doesn't get a whole lot of attention. I want to keep
 things simple and clean with one deployment strategy. My opinion is that
 if we choose option 2 we will be sacrificing significant feature
 development in G in order to continue to maintain nova-volume for another
 release.
 
 But we really need to know if this is going to cause major pain to
 existing
 deployments out there. If it causes a bad experience for deployers we
 need to take our medicine and go with option 2. Keep in mind that it
 shouldn't make any difference to end users whether cinder or nova-volume
 is being used. The current nova-client can use either one.
 
 Vish
 
 
 ___
 Mailing list: https://launchpad.net/~openstack
 Post to : openstack@lists.launchpad.net
 Unsubscribe : https://launchpad.net/~openstack
 More help   : https://help.launchpad.net/ListHelp
 
 --
 George Reese - Chief Technology Officer, enStratus
 e: george.re...@enstratus.com    Skype: nspollution    t: @GeorgeReese    p: +1.207.956.0217
 enStratus: Enterprise Cloud Management - @enStratus - http://www.enstratus.com
 To schedule a meeting with me: http://tungle.me/GeorgeReese
 
 
 
 ___
 Mailing list: https://launchpad.net/~openstack
 Post to : openstack@lists.launchpad.net
 Unsubscribe : https://launchpad.net/~openstack
 More help   : https://help.launchpad.net/ListHelp
 
 
 
 ___
 Mailing list: https://launchpad.net/~openstack
 Post to : openstack@lists.launchpad.net
 Unsubscribe : https://launchpad.net/~openstack
 More help   : https://help.launchpad.net/ListHelp

--
George Reese - Chief Technology Officer, enStratus

Re: [Openstack] [Swift][Object-server] Why that arp_cache consumes memory followed with uploading objects?

2012-07-12 Thread Rick Jones

On 07/12/2012 06:36 AM, Kuo Hugo wrote:


Hi all

I found that the arp_cache entry in slabinfo on the object-server keeps
growing along with the number of uploaded objects.


Does any code use it?


The code which maps from IP to Ethernet addresses does.  That mapping is 
what enables sending IP datagrams to their next-hop destination (which 
may be the final hop, depending) on an Ethernet network.



   OBJS  ACTIVE  USE OBJ SIZE  SLABS OBJ/SLAB CACHE SIZE NAME
2352000 1329606  56%    0.06K  36750       64    147000K kmalloc-64
1566617 1257226  80%    0.21K  42341       37    338728K xfs_ili
1539808 1257748  81%    1.00K  48119       32   1539808K xfs_inode
 538432  470882  87%    0.50K  16826       32    269216K kmalloc-512
 403116  403004  99%    0.19K   9598       42     76784K dentry
 169250  145824  86%    0.31K   6770       25     54160K arp_cache


Could it cause any performance concerns?


I believe that is one of those "it depends" kinds of questions.


Btw, how could I flush the arp_cache memory that is being used (by XFS/Swift)?


You can use the classic arp command to manipulate the ARP cache. It 
can also show you how many entries there are.  I suspect that a web 
search on linux flush arp cache may yield some helpful results as well.
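For a quick count of what the kernel currently holds (a minimal sketch,
independent of slabinfo):

# Minimal sketch: count current ARP/neighbour entries from /proc/net/arp.
with open('/proc/net/arp') as f:
    entries = f.readlines()[1:]   # skip the header line
print('%d ARP cache entries' % len(entries))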


rick jones
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] [nova] [cinder] Nova-volume vs. Cinder in Folsom

2012-07-12 Thread George Reese
You evidently have not had to live with the interoperability nightmare known as 
OpenStack in the same way I have. Otherwise, you would find responses like 
Brian's much more offensive.

-George

On Jul 12, 2012, at 1:48 PM, Christopher B Ferris wrote:

 This level of response is unnecessary. 
 
 That said, the perspectives which influenced the decision seemed somewhat 
 weighted to the development community. I could be wrong, but I did not see 
 much input from the operations community as to the impact.
 
 Clearly, going forward, we want to be more deliberate about changes that may 
 have impact on operations and he broader ecosystem that bases its efforts on 
 assumptions established at the start of a release cycle, rather than on 
 changes introduced late in the cycle.
 
 Cheers
 
 Chris
 
 Sent from my iPad
 
 On Jul 12, 2012, at 2:24 PM, George Reese george.re...@enstratus.com 
 wrote:
 
 Well, I think overall OpenStack has done an absolute shit job of 
 compatibility and I had hoped (and made a huge point of this at the 
 OpenStack conference) Diablo - Essex would be the end of this compatibility 
 bullshit.
 
 But the attitudes in this thread and with respect to the whole Cinder 
 question in general suggest to me that this cavalier attitude towards 
 forward migration hasn't changed.
 
 So you can kiss my ass.
 
 -George
 
 On Jul 12, 2012, at 12:11 PM, Brian Waldon wrote:
 
 We actually care a hell of a lot about compatibility. We also recognize 
 there are times when we have to sacrifice compatibility so we can move 
 forward at a reasonable pace.
 
 If you think we are handling anything the wrong way, we would love to hear 
 your suggestions. If you just want to make comments like this, I would 
 suggest you keep them to yourself.
 
 Have a great day!
 Brian Waldon
 
 On Jul 12, 2012, at 9:32 AM, George Reese wrote:
 
 This community just doesn't give a rat's ass about compatibility, does it?
 
 -George
 
 On Jul 11, 2012, at 10:26 AM, Vishvananda Ishaya wrote:
 
 Hello Everyone,
 
 Now that the PPB has decided to promote Cinder to core for the Folsom
 release, we need to decide what happens to the existing Nova Volume
 code. As far as I can see it there are two basic strategies. I'm going
 to give an overview of each here:
 
 Option 1 -- Remove Nova Volume
 ==
 
 Process
 ---
 * Remove all nova-volume code from the nova project
 * Leave the existing nova-volume database upgrades and tables in
   place for Folsom to allow for migration
 * Provide a simple script in cinder to copy data from the nova
   database to the cinder database (The schema for the tables in
   cinder are equivalent to the current nova tables)
 * Work with package maintainers to provide a package based upgrade
   from nova-volume packages to cinder packages
 * Remove the db tables immediately after Folsom
 
 Disadvantages
 -
 * Forces deployments to go through the process of migrating to cinder
   if they want to use volumes in the Folsom release
 
 Option 2 -- Deprecate Nova Volume
 =
 
 Process
 ---
 * Mark the nova-volume code deprecated but leave it in the project
   for the folsom release
 * Provide a migration path at folsom
 * Backport bugfixes to nova-volume throughout the G-cycle
 * Provide a second migration path at G
 * Package maintainers can decide when to migrate to cinder
 
 Disadvantages
 -
 * Extra maintenance effort
 * More confusion about storage in openstack
 * More complicated upgrade paths need to be supported
 
 Personally I think Option 1 is a much more manageable strategy because
 the volume code doesn't get a whole lot of attention. I want to keep
 things simple and clean with one deployment strategy. My opinion is that
 if we choose option 2 we will be sacrificing significant feature
 development in G in order to continue to maintain nova-volume for another
 release.
 
 But we really need to know if this is going to cause major pain to 
 existing
 deployments out there. If it causes a bad experience for deployers we
 need to take our medicine and go with option 2. Keep in mind that it
 shouldn't make any difference to end users whether cinder or nova-volume
 is being used. The current nova-client can use either one.
 
 Vish
 
 
 ___
 Mailing list: https://launchpad.net/~openstack
 Post to : openstack@lists.launchpad.net
 Unsubscribe : https://launchpad.net/~openstack
 More help   : https://help.launchpad.net/ListHelp
 
 --
 George Reese - Chief Technology Officer, enStratus
 e: george.re...@enstratus.comSkype: nspollutiont: @GeorgeReese
 p: +1.207.956.0217
 enStratus: Enterprise Cloud Management - @enStratus - 
 http://www.enstratus.com
 To schedule a meeting with me: http://tungle.me/GeorgeReese
 
 ___
 Mailing list: https://launchpad.net/~openstack
 Post to : openstack@lists.launchpad.net
 

Re: [Openstack] [nova] [cinder] Nova-volume vs. Cinder in Folsom

2012-07-12 Thread Brian Waldon
What exactly was so offensive about what I said? Communities like OpenStack are 
built on top of people *doing* things, not *talking* about things. I'm just 
asking you to contribute code or design help rather than slanderous commentary.

Brian "Offensive" Waldon

On Jul 12, 2012, at 11:59 AM, George Reese wrote:

 You evidently have not had to live with the interoperability nightmare known 
 as OpenStack in the same way I have. Otherwise, you would find responses like 
 Brian's much more offensive.
 
 -George
 
 On Jul 12, 2012, at 1:48 PM, Christopher B Ferris wrote:
 
 This level of response is unnecessary. 
 
 That said, the perspectives which influenced the decision seemed somewhat 
 weighted to the development community. I could be wrong, but I did not see 
 much input from the operations community as to the impact.
 
 Clearly, going forward, we want to be more deliberate about changes that may 
 have impact on operations and he broader ecosystem that bases its efforts on 
 assumptions established at the start of a release cycle, rather than on 
 changes introduced late in the cycle.
 
 Cheers
 
 Chris
 
 Sent from my iPad
 
 On Jul 12, 2012, at 2:24 PM, George Reese george.re...@enstratus.com 
 wrote:
 
 Well, I think overall OpenStack has done an absolute shit job of 
 compatibility and I had hoped (and made a huge point of this at the 
 OpenStack conference) Diablo - Essex would be the end of this 
 compatibility bullshit.
 
 But the attitudes in this thread and with respect to the whole Cinder 
 question in general suggest to me that this cavalier attitude towards 
 forward migration hasn't changed.
 
 So you can kiss my ass.
 
 -George
 
 On Jul 12, 2012, at 12:11 PM, Brian Waldon wrote:
 
 We actually care a hell of a lot about compatibility. We also recognize 
 there are times when we have to sacrifice compatibility so we can move 
 forward at a reasonable pace.
 
 If you think we are handling anything the wrong way, we would love to hear 
 your suggestions. If you just want to make comments like this, I would 
 suggest you keep them to yourself.
 
 Have a great day!
 Brian Waldon
 
 On Jul 12, 2012, at 9:32 AM, George Reese wrote:
 
 This community just doesn't give a rat's ass about compatibility, does it?
 
 -George
 
 On Jul 11, 2012, at 10:26 AM, Vishvananda Ishaya wrote:
 
 Hello Everyone,
 
 Now that the PPB has decided to promote Cinder to core for the Folsom
 release, we need to decide what happens to the existing Nova Volume
 code. As far as I can see it there are two basic strategies. I'm going
 to give an overview of each here:
 
 Option 1 -- Remove Nova Volume
 ==
 
 Process
 ---
 * Remove all nova-volume code from the nova project
 * Leave the existing nova-volume database upgrades and tables in
   place for Folsom to allow for migration
 * Provide a simple script in cinder to copy data from the nova
   database to the cinder database (The schema for the tables in
   cinder are equivalent to the current nova tables)
 * Work with package maintainers to provide a package based upgrade
   from nova-volume packages to cinder packages
 * Remove the db tables immediately after Folsom
 
 Disadvantages
 -
 * Forces deployments to go through the process of migrating to cinder
   if they want to use volumes in the Folsom release
 
 Option 2 -- Deprecate Nova Volume
 =
 
 Process
 ---
 * Mark the nova-volume code deprecated but leave it in the project
   for the folsom release
 * Provide a migration path at folsom
 * Backport bugfixes to nova-volume throughout the G-cycle
 * Provide a second migration path at G
 * Package maintainers can decide when to migrate to cinder
 
 Disadvantages
 -
 * Extra maintenance effort
 * More confusion about storage in openstack
 * More complicated upgrade paths need to be supported
 
 Personally I think Option 1 is a much more manageable strategy because
 the volume code doesn't get a whole lot of attention. I want to keep
 things simple and clean with one deployment strategy. My opinion is that
 if we choose option 2 we will be sacrificing significant feature
 development in G in order to continue to maintain nova-volume for another
 release.
 
 But we really need to know if this is going to cause major pain to 
 existing
 deployments out there. If it causes a bad experience for deployers we
 need to take our medicine and go with option 2. Keep in mind that it
 shouldn't make any difference to end users whether cinder or nova-volume
 is being used. The current nova-client can use either one.
 
 Vish
 
 
 ___
 Mailing list: https://launchpad.net/~openstack
 Post to : openstack@lists.launchpad.net
 Unsubscribe : https://launchpad.net/~openstack
 More help   : https://help.launchpad.net/ListHelp
 
 --
 George Reese - Chief Technology Officer, enStratus
 e: george.re...@enstratus.comSkype: nspollutiont: 

Re: [Openstack] Nova Cells

2012-07-12 Thread Chris Behrens
Partially developed.  This probably isn't much use, but I'll throw it out 
there: http://comstud.com/cells.pdf

ATM the messy code speaks for itself here:

https://github.com/comstud/nova/tree/cells_service

The basic architecture is:

Top level cell with API service has DB, rabbit, and the nova-cells service.  
API's compute_api_class is overridden to use a new class that shoves every 
action on an instance into the nova-cells service, telling it which cell to 
route the request to based on instance['cell_name'].  The nova-cells service 
routes the request to the correct cell as requested… one hop at a time, via the 
nova-cells service in each child.

(Each child runs this new nova-cells service also)

If the nova-cells service gets a message destined for itself, it'll make the 
appropriate compute_api call in the child.

DB updates are hooked in the child and pushed up to parent cells.

New instance creation is slightly different.  API will create the DB entry up 
front… and pass the uuid and all of the same data to the nova-cells service, 
which will pick a cell for the instance.  When it is decided to use the 
'current cell' in some child, it will create the DB entry there as well… push a 
notification upward… and cast the message over to the host scheduler (current 
scheduler).  And the build continues as normal from there (host is picked, and 
message is cast to the host, etc.).

There's some code to sync instances in case of lost DB updates.. but there are 
improvements to make yet..
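As a rough illustration of the hop-by-hop routing described above (a toy
sketch only, not the cells_service code; the '!'-separated cell path and the
names here are made up for the example):

# Toy illustration: route a message toward a target cell one hop at a time.
class Cell(object):
    def __init__(self, name):
        self.name = name
        self.children = {}                    # child cell name -> Cell

    def add_child(self, child):
        self.children[child.name] = child

    def route(self, path, message):
        hops = path.split('!')
        if hops[-1] == self.name:
            print('%s: handling %r locally' % (self.name, message))
            return
        next_hop = hops[hops.index(self.name) + 1]
        print('%s: forwarding %r to %s' % (self.name, message, next_hop))
        self.children[next_hop].route(path, message)

top, child, grandchild = Cell('top'), Cell('child'), Cell('grandchild')
top.add_child(child)
child.add_child(grandchild)
top.route('top!child!grandchild', {'method': 'reboot_instance'})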

Sorry… that's very quick.  I'm going to be AFK for a couple days..

- Chris


On Jul 12, 2012, at 10:39 AM, Michael J Fork wrote:

 Outside of the Etherpad (http://etherpad.openstack.org/FolsomComputeCells) 
 and presentation referenced there (http://comstud.com/FolsomCells.pdf), are 
 there additional details available on the architecture / implementation of 
 Cells?  
 
 Thanks.
 
 Michael
 
 -
 Michael Fork
 Cloud Architect, Emerging Solutions
 IBM Systems  Technology Group
 
 ___
 Mailing list: https://launchpad.net/~openstack
 Post to : openstack@lists.launchpad.net
 Unsubscribe : https://launchpad.net/~openstack
 More help   : https://help.launchpad.net/ListHelp

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] [nova] [cinder] Nova-volume vs. Cinder in Folsom

2012-07-12 Thread George Reese
So if I'm not coding, I should shut up?

I think you answered your own question.

Sent from my iPhone

On Jul 12, 2012, at 14:10, Brian Waldon brian.wal...@rackspace.com wrote:

What exactly was so offensive about what I said? Communities like OpenStack
are built on top of people *doing* things, not *talking* about things. I'm
just asking you to contribute code or design help rather than slanderous
commentary.

Brian  Offensive  Waldon

On Jul 12, 2012, at 11:59 AM, George Reese wrote:

You evidently have not had to live with the interoperability nightmare
known as OpenStack in the same way I have. Otherwise, you would find
responses like Brian's much more offensive.

-George

On Jul 12, 2012, at 1:48 PM, Christopher B Ferris wrote:

This level of response is unnecessary.

That said, the perspectives which influenced the decision seemed somewhat
weighted to the development community. I could be wrong, but I did not see
much input from the operations community as to the impact.

Clearly, going forward, we want to be more deliberate about changes that
may have impact on operations and he broader ecosystem that bases its
efforts on assumptions established at the start of a release cycle, rather
than on changes introduced late in the cycle.

Cheers

Chris

Sent from my iPad

On Jul 12, 2012, at 2:24 PM, George Reese george.re...@enstratus.com
wrote:

Well, I think overall OpenStack has done an absolute shit job of
compatibility and I had hoped (and made a huge point of this at the
OpenStack conference) Diablo - Essex would be the end of this
compatibility bullshit.

But the attitudes in this thread and with respect to the whole Cinder
question in general suggest to me that this cavalier attitude towards
forward migration hasn't changed.

So you can kiss my ass.

-George

On Jul 12, 2012, at 12:11 PM, Brian Waldon wrote:

We actually care a hell of a lot about compatibility. We also recognize
there are times when we have to sacrifice compatibility so we can move
forward at a reasonable pace.

If you think we are handling anything the wrong way, we would love to hear
your suggestions. If you just want to make comments like this, I would
suggest you keep them to yourself.

Have a great day!
Brian Waldon

On Jul 12, 2012, at 9:32 AM, George Reese wrote:

This community just doesn't give a rat's ass about compatibility, does it?

-George

On Jul 11, 2012, at 10:26 AM, Vishvananda Ishaya wrote:

Hello Everyone,

Now that the PPB has decided to promote Cinder to core for the Folsom
release, we need to decide what happens to the existing Nova Volume
code. As far as I can see it there are two basic strategies. I'm going
to give an overview of each here:

Option 1 -- Remove Nova Volume
==

Process
---
* Remove all nova-volume code from the nova project
* Leave the existing nova-volume database upgrades and tables in
  place for Folsom to allow for migration
* Provide a simple script in cinder to copy data from the nova
  database to the cinder database (The schema for the tables in
  cinder are equivalent to the current nova tables)
* Work with package maintainers to provide a package based upgrade
  from nova-volume packages to cinder packages
* Remove the db tables immediately after Folsom

Disadvantages
-
* Forces deployments to go through the process of migrating to cinder
  if they want to use volumes in the Folsom release

Option 2 -- Deprecate Nova Volume
=

Process
---
* Mark the nova-volume code deprecated but leave it in the project
  for the folsom release
* Provide a migration path at folsom
* Backport bugfixes to nova-volume throughout the G-cycle
* Provide a second migration path at G
* Package maintainers can decide when to migrate to cinder

Disadvantages
-
* Extra maintenance effort
* More confusion about storage in openstack
* More complicated upgrade paths need to be supported

Personally I think Option 1 is a much more manageable strategy because
the volume code doesn't get a whole lot of attention. I want to keep
things simple and clean with one deployment strategy. My opinion is that
if we choose option 2 we will be sacrificing significant feature
development in G in order to continue to maintain nova-volume for another
release.

But we really need to know if this is going to cause major pain to existing
deployments out there. If it causes a bad experience for deployers we
need to take our medicine and go with option 2. Keep in mind that it
shouldn't make any difference to end users whether cinder or nova-volume
is being used. The current nova-client can use either one.

Vish


___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


--
George Reese - Chief Technology Officer, enStratus
e: george.re...@enstratus.comSkype: 

Re: [Openstack] [nova] [cinder] Nova-volume vs. Cinder in Folsom

2012-07-12 Thread Brian Waldon
Planning the development of the projects is valuable as well as contributing 
code. If you review my last message, you'll see the words '... or design help', 
which I intended to represent non-code contribution. You seem to have strong 
opinions on how things should be done, but I don't see your voice in any of the 
community discussions.

Moving forward, I wish you would share your expertise in a constructive manner. 
Keep in mind this list reaches 2200 people. Let's not waste anyone's time.

WALDON


On Jul 12, 2012, at 12:14 PM, George Reese wrote:

 So if Im not coding, I should shut up? 
 
 I think you answered your own question. 
 
 Sent from my iPhone
 
 On Jul 12, 2012, at 14:10, Brian Waldon brian.wal...@rackspace.com wrote:
 
 What exactly was so offensive about what I said? Communities like OpenStack 
 are built on top of people *doing* things, not *talking* about things. I'm 
 just asking you to contribute code or design help rather than slanderous 
 commentary.
 
 Brian  Offensive  Waldon
 
 On Jul 12, 2012, at 11:59 AM, George Reese wrote:
 
 You evidently have not had to live with the interoperability nightmare 
 known as OpenStack in the same way I have. Otherwise, you would find 
 responses like Brian's much more offensive.
 
 -George
 
 On Jul 12, 2012, at 1:48 PM, Christopher B Ferris wrote:
 
 This level of response is unnecessary. 
 
 That said, the perspectives which influenced the decision seemed somewhat 
 weighted to the development community. I could be wrong, but I did not see 
 much input from the operations community as to the impact.
 
 Clearly, going forward, we want to be more deliberate about changes that 
 may have impact on operations and he broader ecosystem that bases its 
 efforts on assumptions established at the start of a release cycle, rather 
 than on changes introduced late in the cycle.
 
 Cheers
 
 Chris
 
 Sent from my iPad
 
 On Jul 12, 2012, at 2:24 PM, George Reese george.re...@enstratus.com 
 wrote:
 
 Well, I think overall OpenStack has done an absolute shit job of 
 compatibility and I had hoped (and made a huge point of this at the 
 OpenStack conference) Diablo - Essex would be the end of this 
 compatibility bullshit.
 
 But the attitudes in this thread and with respect to the whole Cinder 
 question in general suggest to me that this cavalier attitude towards 
 forward migration hasn't changed.
 
 So you can kiss my ass.
 
 -George
 
 On Jul 12, 2012, at 12:11 PM, Brian Waldon wrote:
 
 We actually care a hell of a lot about compatibility. We also recognize 
 there are times when we have to sacrifice compatibility so we can move 
 forward at a reasonable pace.
 
 If you think we are handling anything the wrong way, we would love to 
 hear your suggestions. If you just want to make comments like this, I 
 would suggest you keep them to yourself.
 
 Have a great day!
 Brian Waldon
 
 On Jul 12, 2012, at 9:32 AM, George Reese wrote:
 
 This community just doesn't give a rat's ass about compatibility, does 
 it?
 
 -George
 
 On Jul 11, 2012, at 10:26 AM, Vishvananda Ishaya wrote:
 
 Hello Everyone,
 
 Now that the PPB has decided to promote Cinder to core for the Folsom
 release, we need to decide what happens to the existing Nova Volume
 code. As far as I can see it there are two basic strategies. I'm going
 to give an overview of each here:
 
 Option 1 -- Remove Nova Volume
 ==
 
 Process
 ---
 * Remove all nova-volume code from the nova project
 * Leave the existing nova-volume database upgrades and tables in
   place for Folsom to allow for migration
 * Provide a simple script in cinder to copy data from the nova
   database to the cinder database (The schema for the tables in
   cinder are equivalent to the current nova tables)
 * Work with package maintainers to provide a package based upgrade
   from nova-volume packages to cinder packages
 * Remove the db tables immediately after Folsom
 
 Disadvantages
 -
 * Forces deployments to go through the process of migrating to cinder
   if they want to use volumes in the Folsom release
 
 Option 2 -- Deprecate Nova Volume
 =
 
 Process
 ---
 * Mark the nova-volume code deprecated but leave it in the project
   for the folsom release
 * Provide a migration path at folsom
 * Backport bugfixes to nova-volume throughout the G-cycle
 * Provide a second migration path at G
 * Package maintainers can decide when to migrate to cinder
 
 Disadvantages
 -
 * Extra maintenance effort
 * More confusion about storage in openstack
 * More complicated upgrade paths need to be supported
 
 Personally I think Option 1 is a much more manageable strategy because
 the volume code doesn't get a whole lot of attention. I want to keep
 things simple and clean with one deployment strategy. My opinion is 
 that
 if we choose option 2 we will be sacrificing significant feature
 development in G in order to continue to maintain 

Re: [Openstack] [nova] [cinder] Nova-volume vs. Cinder in Folsom

2012-07-12 Thread George Reese
This ain't the first time I've had a run-in with you where your response was 
essentially "if you don't like it, go code it."

And obviously you missed the entire constructive point in my response. It's 
this:

The proposed options suck. It's too late to do anything about that as this ship 
has sailed.

What you need to understand going forward is that this community has an abysmal 
history when it comes to compatibility and interoperability.

Abysmal.

Not checkered. Not patchy. Not lacking. Abysmal.

Horizontally. Vertically. Abysmal.

Actually, shockingly abysmal.

You saw one public response laughing at me for expecting this community to care 
about compatibility. I also received private responses with the same sentiment.

If you guys really think you care about compatibility, you need to go sit in a 
corner and do some heavy thinking. Because the history of this project and this 
thread in particular suggest otherwise.

In case you missed it again, here it is in a single sentence: 

The constructive point I am making is that it's time to wake up and get serious 
about compatibility and interoperability.

-George

On Jul 12, 2012, at 2:27 PM, Brian Waldon wrote:

 Planning the development of the projects is valuable as well as contributing 
 code. If you review my last message, you'll see the words '... or design 
 help', which I intended to represent non-code contribution. You seem to have 
 strong opinions on how things should be done, but I don't see your voice in 
 any of the community discussions.
 
 Moving forward, I wish you would share your expertise in a constructive 
 manner. Keep in mind this list reaches 2200 people. Let's not waste anyone's 
 time.
 
 WALDON
 
 
 On Jul 12, 2012, at 12:14 PM, George Reese wrote:
 
 So if Im not coding, I should shut up? 
 
 I think you answered your own question. 
 
 Sent from my iPhone
 
 On Jul 12, 2012, at 14:10, Brian Waldon brian.wal...@rackspace.com wrote:
 
 What exactly was so offensive about what I said? Communities like OpenStack 
 are built on top of people *doing* things, not *talking* about things. I'm 
 just asking you to contribute code or design help rather than slanderous 
 commentary.
 
 Brian  Offensive  Waldon
 
 On Jul 12, 2012, at 11:59 AM, George Reese wrote:
 
 You evidently have not had to live with the interoperability nightmare 
 known as OpenStack in the same way I have. Otherwise, you would find 
 responses like Brian's much more offensive.
 
 -George
 
 On Jul 12, 2012, at 1:48 PM, Christopher B Ferris wrote:
 
 This level of response is unnecessary. 
 
 That said, the perspectives which influenced the decision seemed somewhat 
 weighted to the development community. I could be wrong, but I did not 
 see much input from the operations community as to the impact.
 
 Clearly, going forward, we want to be more deliberate about changes that 
 may have impact on operations and he broader ecosystem that bases its 
 efforts on assumptions established at the start of a release cycle, 
 rather than on changes introduced late in the cycle.
 
 Cheers
 
 Chris
 
 Sent from my iPad
 
 On Jul 12, 2012, at 2:24 PM, George Reese george.re...@enstratus.com 
 wrote:
 
 Well, I think overall OpenStack has done an absolute shit job of 
 compatibility and I had hoped (and made a huge point of this at the 
 OpenStack conference) Diablo - Essex would be the end of this 
 compatibility bullshit.
 
 But the attitudes in this thread and with respect to the whole Cinder 
 question in general suggest to me that this cavalier attitude towards 
 forward migration hasn't changed.
 
 So you can kiss my ass.
 
 -George
 
 On Jul 12, 2012, at 12:11 PM, Brian Waldon wrote:
 
 We actually care a hell of a lot about compatibility. We also recognize 
 there are times when we have to sacrifice compatibility so we can move 
 forward at a reasonable pace.
 
 If you think we are handling anything the wrong way, we would love to 
 hear your suggestions. If you just want to make comments like this, I 
 would suggest you keep them to yourself.
 
 Have a great day!
 Brian Waldon
 
 On Jul 12, 2012, at 9:32 AM, George Reese wrote:
 
 This community just doesn't give a rat's ass about compatibility, does 
 it?
 
 -George
 
 On Jul 11, 2012, at 10:26 AM, Vishvananda Ishaya wrote:
 
 Hello Everyone,
 
 Now that the PPB has decided to promote Cinder to core for the Folsom
 release, we need to decide what happens to the existing Nova Volume
 code. As far as I can see it there are two basic strategies. I'm going
 to give an overview of each here:
 
 Option 1 -- Remove Nova Volume
 ==
 
 Process
 ---
 * Remove all nova-volume code from the nova project
 * Leave the existing nova-volume database upgrades and tables in
   place for Folsom to allow for migration
 * Provide a simple script in cinder to copy data from the nova
   database to the cinder database (The schema for the tables in
   cinder are equivalent to the current nova tables)
 * Work 

Re: [Openstack] [nova] [cinder] Nova-volume vs. Cinder in Folsom

2012-07-12 Thread Vishvananda Ishaya

On Jul 12, 2012, at 11:48 AM, Christopher B Ferris wrote:

 This level of response is unnecessary. 
 
 That said, the perspectives which influenced the decision seemed somewhat 
 weighted to the development community. I could be wrong, but I did not see 
 much input from the operations community as to the impact.

Agreed, I'm a developer, so I'm clearly biased towards what is easier for 
developers. It will be a significant effort to have to maintain the nova-volume 
code, so I want to be sure it is necessary. End users really shouldn't care 
about this, so the other community members who are impacted are operators.

I really would like more feedback on how painful it will be for operators if we 
force them to migrate. We have one clear response from Chuck, which is very 
helpful. Is there anyone else out there running nova-volume that would prefer 
to keep it when they move to folsom?

Vish


___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] [nova] [cinder] Nova-volume vs. Cinder in Folsom

2012-07-12 Thread Jon Mittelhauser
George,

I am relatively new to this mailing list so I assume that there is some history 
that is prompting the vehemence but I do not understand what you are trying to 
accomplish.

Vish sent out two proposed ways for dealing with the migration.  Most of the 
early voting (including mine) has been for option #1 (happy to explain why if 
desired) but it isn't like the discussion is over.  If you believe that option 
#2 is better, please explain why you believe that.  If you believe that there 
is a 3rd option, please explain it to us.

You are complaining without offering a counter proposal.  That is simply not 
effective and makes semi-neutral folks (like me) tend to discard your point of 
view (which I assume is not your objective).

-Jon

From: George Reese george.re...@enstratus.com
Date: Thursday, July 12, 2012 10:14 AM
To: Brian Waldon brian.wal...@rackspace.com
Cc: Openstack (openstack@lists.launchpad.net) (openstack@lists.launchpad.net) openstack@lists.launchpad.net
Subject: Re: [Openstack] [nova] [cinder] Nova-volume vs. Cinder in Folsom

Well, I think overall OpenStack has done an absolute shit job of compatibility 
and I had hoped (and made a huge point of this at the OpenStack conference) 
Diablo -> Essex would be the end of this compatibility bullshit.

But the attitudes in this thread and with respect to the whole Cinder question 
in general suggest to me that this cavalier attitude towards forward migration 
hasn't changed.

So you can kiss my ass.

-George

On Jul 12, 2012, at 12:11 PM, Brian Waldon wrote:

We actually care a hell of a lot about compatibility. We also recognize there 
are times when we have to sacrifice compatibility so we can move forward at a 
reasonable pace.

If you think we are handling anything the wrong way, we would love to hear your 
suggestions. If you just want to make comments like this, I would suggest you 
keep them to yourself.

Have a great day!
Brian Waldon

On Jul 12, 2012, at 9:32 AM, George Reese wrote:

This community just doesn't give a rat's ass about compatibility, does it?

-George

On Jul 11, 2012, at 10:26 AM, Vishvananda Ishaya wrote:

Hello Everyone,

Now that the PPB has decided to promote Cinder to core for the Folsom
release, we need to decide what happens to the existing Nova Volume
code. As far as I can see it there are two basic strategies. I'm going
to give an overview of each here:

Option 1 -- Remove Nova Volume
==

Process
---
* Remove all nova-volume code from the nova project
* Leave the existing nova-volume database upgrades and tables in
  place for Folsom to allow for migration
* Provide a simple script in cinder to copy data from the nova
  database to the cinder database (The schema for the tables in
  cinder are equivalent to the current nova tables)
* Work with package maintainers to provide a package based upgrade
  from nova-volume packages to cinder packages
* Remove the db tables immediately after Folsom

Disadvantages
-
* Forces deployments to go through the process of migrating to cinder
  if they want to use volumes in the Folsom release

Option 2 -- Deprecate Nova Volume
=

Process
---
* Mark the nova-volume code deprecated but leave it in the project
  for the folsom release
* Provide a migration path at folsom
* Backport bugfixes to nova-volume throughout the G-cycle
* Provide a second migration path at G
* Package maintainers can decide when to migrate to cinder

Disadvantages
-
* Extra maintenance effort
* More confusion about storage in openstack
* More complicated upgrade paths need to be supported

Personally I think Option 1 is a much more manageable strategy because
the volume code doesn't get a whole lot of attention. I want to keep
things simple and clean with one deployment strategy. My opinion is that
if we choose option 2 we will be sacrificing significant feature
development in G in order to continue to maintain nova-volume for another
release.

But we really need to know if this is going to cause major pain to existing
deployments out there. If it causes a bad experience for deployers we
need to take our medicine and go with option 2. Keep in mind that it
shouldn't make any difference to end users whether cinder or nova-volume
is being used. The current nova-client can use either one.

Vish
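
A minimal sketch of what that "simple script" to copy volume data from the nova
database to the cinder database might look like, assuming the Folsom-era cinder
tables really do mirror the nova volume tables as described above. The connection
settings and the table list are illustrative assumptions, not the actual cinder
tooling:

import MySQLdb

# Illustrative connection settings -- adjust for a real deployment.
NOVA = dict(host='localhost', user='nova', passwd='secret', db='nova')
CINDER = dict(host='localhost', user='cinder', passwd='secret', db='cinder')

# Hypothetical set of volume-related tables; the real list would come from
# the cinder migrations.
TABLES = ['volumes', 'snapshots', 'iscsi_targets', 'volume_types',
          'volume_type_extra_specs', 'volume_metadata']

def copy_table(name, src, dst):
    src_cur = src.cursor()
    src_cur.execute('SELECT * FROM %s' % name)
    columns = [col[0] for col in src_cur.description]
    rows = src_cur.fetchall()
    if not rows:
        return
    insert = 'INSERT INTO %s (%s) VALUES (%s)' % (
        name, ', '.join(columns), ', '.join(['%s'] * len(columns)))
    dst_cur = dst.cursor()
    dst_cur.executemany(insert, rows)
    dst.commit()

if __name__ == '__main__':
    nova_db = MySQLdb.connect(**NOVA)
    cinder_db = MySQLdb.connect(**CINDER)
    for table in TABLES:
        copy_table(table, nova_db, cinder_db)

Because the schemas are equivalent, nothing has to be translated column by
column; whether the real migration can stay this simple is exactly what the
rest of the thread asks operators to confirm.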


___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp

--
George Reese - Chief Technology Officer, enStratus
e: george.re...@enstratus.com  Skype: nspollution  t: @GeorgeReese  p: +1.207.956.0217

Re: [Openstack] [nova] [cinder] Nova-volume vs. Cinder in Folsom

2012-07-12 Thread Matt Joyce
To a certain extent I agree with George's sentiment.

Recent example... we're changing "tenants" to "projects" in the keystone api.

Yes we maintain v2 api compatibility but there will be a cost to users in
the confusion of decisions like this.  George is right to be calling for
openstack to grow up.

That's my personal opinion.

-Matt
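
As an illustration of that cost: a client that wants to keep working across both
namings ends up carrying a small shim like the one below. This is a sketch only;
the field names are assumptions about how a response might be shaped, not the
actual Keystone API:

def tenant_or_project(resource):
    # Accept either the old tenant naming or the newer project naming so
    # callers do not have to care which API version produced the data.
    for key in ('project', 'projectId', 'project_id',
                'tenant', 'tenantId', 'tenant_id'):
        if key in resource:
            return resource[key]
    raise KeyError('no tenant/project field in %r' % resource)

# Both of these return '123'.
tenant_or_project({'tenantId': '123'})
tenant_or_project({'project_id': '123'})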


On Thu, Jul 12, 2012 at 11:55 AM, George Reese
george.re...@enstratus.comwrote:

 I certainly wasn't picking on Vish, but instead the entire community so
 eagerly interested in option #1. You see, the OpenStack community has a
 perfect record of making sure stuff like that ends up breaking everyone
 between upgrades.

 So, if you take offense by my comments… err, well, I'm not at all sorry.
 It's time for this community to grow the hell up and make sure systems
 upgrade nicely now and forever and that OpenStack environments are actually
 compatible with one another. Hell, I still find Essex environments that
 aren't even API compatible with one another. You have the Rackspace CTO
 wandering around conferences talking about how the value proposition of
 OpenStack is interoperability among clouds and yet you can't even get
 interoperability within the same OpenStack distribution of the same
 OpenStack version.

 I smell a pile of bullshit and the community just keeps shoveling.

 -George

 On Jul 12, 2012, at 12:22 PM, Jay Pipes wrote:

 On 07/12/2012 12:32 PM, George Reese wrote:

 This community just doesn't give a rat's ass about compatibility, does it?


 a) Please don't be inappropriate on the mailing list
 b) Vish sent the email below to the mailing list *precisely because* he
 cares about compatibility. He wants to discuss the options with the
 community and come up with a reasonable action plan with the Cinder PTL,
 John Griffith, for the move.

 Now, would you care to be constructive with your criticism?

 Thanks,
 -jay

 On Jul 11, 2012, at 10:26 AM, Vishvananda Ishaya wrote:


 Hello Everyone,


 Now that the PPB has decided to promote Cinder to core for the Folsom

 release, we need to decide what happens to the existing Nova Volume

 code. As far as I can see it there are two basic strategies. I'm going

 to give an overview of each here:


 Option 1 -- Remove Nova Volume

 ==


 Process

 ---

 * Remove all nova-volume code from the nova project

 * Leave the existing nova-volume database upgrades and tables in

  place for Folsom to allow for migration

 * Provide a simple script in cinder to copy data from the nova

  database to the cinder database (The schema for the tables in

  cinder are equivalent to the current nova tables)

 * Work with package maintainers to provide a package based upgrade

  from nova-volume packages to cinder packages

 * Remove the db tables immediately after Folsom


 Disadvantages

 -

 * Forces deployments to go through the process of migrating to cinder

  if they want to use volumes in the Folsom release


 Option 2 -- Deprecate Nova Volume

 =


 Process

 ---

 * Mark the nova-volume code deprecated but leave it in the project

  for the folsom release

 * Provide a migration path at folsom

 * Backport bugfixes to nova-volume throughout the G-cycle

 * Provide a second migration path at G

 * Package maintainers can decide when to migrate to cinder


 Disadvantages

 -

 * Extra maintenance effort

 * More confusion about storage in openstack

 * More complicated upgrade paths need to be supported


 Personally I think Option 1 is a much more manageable strategy because

 the volume code doesn't get a whole lot of attention. I want to keep

 things simple and clean with one deployment strategy. My opinion is that

 if we choose option 2 we will be sacrificing significant feature

 development in G in order to continue to maintain nova-volume for another

 release.


 But we really need to know if this is going to cause major pain to

 existing

 deployments out there. If it causes a bad experience for deployers we

 need to take our medicine and go with option 2. Keep in mind that it

 shouldn't make any difference to end users whether cinder or nova-volume

 is being used. The current nova-client can use either one.


 Vish



 ___

 Mailing list: https://launchpad.net/~openstack

 Post to : openstack@lists.launchpad.net

 Unsubscribe : https://launchpad.net/~openstack

 More help   : https://help.launchpad.net/ListHelp


 --

 George Reese - Chief Technology Officer, enStratus

 e: george.re...@enstratus.com  Skype: nspollution  t: @GeorgeReese  p: +1.207.956.0217

 enStratus: Enterprise Cloud Management - @enStratus - http://www.enstratus.com

 To schedule a meeting with me: http://tungle.me/GeorgeReese




 

Re: [Openstack] Nova Cells

2012-07-12 Thread Nathanael Burton
That's a good question. I'm also interested in an update on cells. How
is progress on cells going? Is there a blueprint for it? Is it
targeted to a folsom milestone?

Thanks,

Nate

On Thu, Jul 12, 2012 at 1:39 PM, Michael J Fork mjf...@us.ibm.com wrote:
 Outside of the Etherpad (http://etherpad.openstack.org/FolsomComputeCells)
 and presentation referenced there (http://comstud.com/FolsomCells.pdf), are
 there additional details available on the architecture / implementation of
 Cells?

 Thanks.

 Michael

 -
 Michael Fork
 Cloud Architect, Emerging Solutions
 IBM Systems & Technology Group


 ___
 Mailing list: https://launchpad.net/~openstack
 Post to : openstack@lists.launchpad.net
 Unsubscribe : https://launchpad.net/~openstack
 More help   : https://help.launchpad.net/ListHelp


___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] [nova] [cinder] Nova-volume vs. Cinder in Folsom

2012-07-12 Thread George Reese
You are mistaking me for caring about the answer to this question.

This ship has sailed. We are faced with two shitty choices as a result of 
continued lack of concern by this community for compatibility.

History? I've been pounding my head against the OpenStack wall for years on 
compatibility and we end up AGAIN in a situation like this where we have two 
shitty options.

I'm not offering an opinion or a third option because I just don't give a damn 
what option is picked since both will suck.

I'm trying to get everyone to get their heads out of their asses and not stick 
us yet again in this situation in the future.

You can discard my position if you want. I really don't give a damn. I just 
happen to work with a wider variety of OpenStack environments than most others 
on the list. 

But whatever.

-George

On Jul 12, 2012, at 2:40 PM, Jon Mittelhauser wrote:

 George,
 
 I am relatively new to this mailing list so I assume that there is some 
 history that is prompting the vehemence but I do not understand what you are 
 trying to accomplish.
 
 Vish sent out two proposed ways for dealing with the migration.  Most of the 
 early voting (including mine) has been for option #1 (happy to explain why if 
 desired) but it isn't like the discussion is over.  If you believe that 
 option #2 is better, please explain why you believe that.  If you believe 
 that there is a 3rd option, please explain it to us.
 
 You are complaining without offering a counter proposal.  That is simply not 
 effective and makes semi-neutral folks (like me) tend to discard your point 
 of view (which I assume is not your objective).
 
 -Jon
 
 From: George Reese george.re...@enstratus.com
 Date: Thursday, July 12, 2012 10:14 AM
 To: Brian Waldon brian.wal...@rackspace.com
 Cc: Openstack (openstack@lists.launchpad.net) 
 (openstack@lists.launchpad.net) openstack@lists.launchpad.net
 Subject: Re: [Openstack] [nova] [cinder] Nova-volume vs. Cinder in Folsom
 
 Well, I think overall OpenStack has done an absolute shit job of 
 compatibility and I had hoped (and made a huge point of this at the OpenStack 
 conference) Diablo -> Essex would be the end of this compatibility bullshit.
 
 But the attitudes in this thread and with respect to the whole Cinder 
 question in general suggest to me that this cavalier attitude towards forward 
 migration hasn't changed.
 
 So you can kiss my ass.
 
 -George
 
 On Jul 12, 2012, at 12:11 PM, Brian Waldon wrote:
 
 We actually care a hell of a lot about compatibility. We also recognize 
 there are times when we have to sacrifice compatibility so we can move 
 forward at a reasonable pace.
 
 If you think we are handling anything the wrong way, we would love to hear 
 your suggestions. If you just want to make comments like this, I would 
 suggest you keep them to yourself.
 
 Have a great day!
 Brian Waldon
 
 On Jul 12, 2012, at 9:32 AM, George Reese wrote:
 
 This community just doesn't give a rat's ass about compatibility, does it?
 
 -George
 
 On Jul 11, 2012, at 10:26 AM, Vishvananda Ishaya wrote:
 
 Hello Everyone,
 
 Now that the PPB has decided to promote Cinder to core for the Folsom
 release, we need to decide what happens to the existing Nova Volume
 code. As far as I can see it there are two basic strategies. I'm going
 to give an overview of each here:
 
 Option 1 -- Remove Nova Volume
 ==
 
 Process
 ---
 * Remove all nova-volume code from the nova project
 * Leave the existing nova-volume database upgrades and tables in
   place for Folsom to allow for migration
 * Provide a simple script in cinder to copy data from the nova
   database to the cinder database (The schema for the tables in
   cinder are equivalent to the current nova tables)
 * Work with package maintainers to provide a package based upgrade
   from nova-volume packages to cinder packages
 * Remove the db tables immediately after Folsom
 
 Disadvantages
 -
 * Forces deployments to go through the process of migrating to cinder
   if they want to use volumes in the Folsom release
 
 Option 2 -- Deprecate Nova Volume
 =
 
 Process
 ---
 * Mark the nova-volume code deprecated but leave it in the project
   for the folsom release
 * Provide a migration path at folsom
 * Backport bugfixes to nova-volume throughout the G-cycle
 * Provide a second migration path at G
 * Package maintainers can decide when to migrate to cinder
 
 Disadvantages
 -
 * Extra maintenance effort
 * More confusion about storage in openstack
 * More complicated upgrade paths need to be supported
 
 Personally I think Option 1 is a much more manageable strategy because
 the volume code doesn't get a whole lot of attention. I want to keep
 things simple and clean with one deployment strategy. My opinion is that
 if we choose option 2 we will be sacrificing significant feature
 development in G in order to continue to maintain nova-volume for another
 

Re: [Openstack] [nova] VM stucks in deleting task state

2012-07-12 Thread Sébastien Han
http://www.sebastien-han.fr/blog/2012/07/10/delete-a-vm-in-an-error-state/


On Thu, Jul 12, 2012 at 8:34 PM, Tong Li liton...@us.ibm.com wrote:

  Hi, Hien,
  I had the same problem. The only way I could get rid of it was to remove
 the record for that instance from the following 3 mysql db tables in the
 following order.

  security_group_instance_association
  instance_info_caches
  instances

  hope that helps.

 Tong Li
 Emerging Technologies & Standards
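
A sketch of that cleanup as a small script, assuming a MySQL-backed nova database
and Essex-era column names (both child tables keyed by the numeric instance id);
the connection settings are placeholders. Note that this is a hard delete, so back
up the database first:

import sys
import MySQLdb

# Placeholder connection settings for the nova database.
DB = dict(host='localhost', user='nova', passwd='secret', db='nova')

def purge_instance(uuid):
    conn = MySQLdb.connect(**DB)
    cur = conn.cursor()
    cur.execute('SELECT id FROM instances WHERE uuid = %s', (uuid,))
    row = cur.fetchone()
    if row is None:
        raise SystemExit('no instance with uuid %s' % uuid)
    instance_id = row[0]
    # Delete in the order given above so the child rows go first.
    cur.execute('DELETE FROM security_group_instance_association '
                'WHERE instance_id = %s', (instance_id,))
    cur.execute('DELETE FROM instance_info_caches '
                'WHERE instance_id = %s', (instance_id,))
    cur.execute('DELETE FROM instances WHERE id = %s', (instance_id,))
    conn.commit()

if __name__ == '__main__':
    purge_instance(sys.argv[1])   # e.g. b924a325-b07f-480b-9a31-3049736fbfde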


 From: Hien Phan phanquoch...@gmail.com
 To: openstack@lists.launchpad.net
 Date: 07/12/2012 02:08 PM
 Subject: [Openstack] [nova] VM stucks in deleting task state
 Sent by: openstack-bounces+litong01=us.ibm@lists.launchpad.net
 --



 Hello list,

 I've just installed Openstack Essex on Ubuntu 12.04. Everything works well
 except for one problem I just faced:
 When I try to terminate a VM in the dashboard, it gets stuck in the deleting
 task state for hours. I cannot connect to the VM. I tried rebooting it with the
 nova command and got this error:

 root@openstack-node01:~# nova reboot hien-vm02
 ERROR: Cannot 'reboot' while instance is in task_state deleting (HTTP 409)
 root@openstack-node01:~# nova reboot hien-vm01
 ERROR: Cannot 'reboot' while instance is in task_state deleting (HTTP 409)
 root@openstack-node01:~# nova list

 +--------------------------------------+-----------+--------+-------------------------------------+
 |                  ID                  |    Name   | Status |               Networks              |
 +--------------------------------------+-----------+--------+-------------------------------------+
 | b924a325-b07f-480b-9a31-3049736fbfde | hien-vm02 | ACTIVE | private=172.16.1.35, 192.168.255.34 |
 | e7908096-83e6-480d-9131-efa4ea73ca0d | hien-vm01 | ACTIVE | private=172.16.1.34, 192.168.255.33 |
 +--------------------------------------+-----------+--------+-------------------------------------+



 Openstack Dashboard screenshot image: http://i.imgur.com/7e4cf.png

 How i can delete VMs completely ?
 Thanks in advance.
 --
 Best regards,
 Phan Quoc Hien
 http://www.mrhien.info/
 ___
 Mailing list: https://launchpad.net/~openstack
 Post to : openstack@lists.launchpad.net
 Unsubscribe : https://launchpad.net/~openstack
 More help   : https://help.launchpad.net/ListHelp


 ___
 Mailing list: https://launchpad.net/~openstack
 Post to : openstack@lists.launchpad.net
 Unsubscribe : https://launchpad.net/~openstack
 More help   : https://help.launchpad.net/ListHelp


___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] [nova] [cinder] Nova-volume vs. Cinder in Folsom

2012-07-12 Thread Jon Mittelhauser
How can I disregard a position that you don't have?  (or at least I don't 
understand yet)  You have failed to provide a position.

Like I said, I'm fairly new to OpenStack….but I am *very* experienced in open 
source and operating very large and complex production systems… so I am trying 
to come up to speed and understand your position…

Separating out the volume code from the compute code seems like a no-brainer 
thing that needed to be done.

Do you disagree with that basic premise (e.g. That Cinder should exist)?
Do you disagree with the way that it was done (e.g. How Cinder is written)?
Or do you disagree with the migration strategies proposed (which is what Vish's 
email was opening discussion about)?

Or…??

-Jon

From: George Reese george.re...@enstratus.com
Date: Thursday, July 12, 2012 12:47 PM
To: Jon Mittelhauser jon.mittelhau...@nebula.com
Cc: Brian Waldon brian.wal...@rackspace.com, Openstack (openstack@lists.launchpad.net) (openstack@lists.launchpad.net) openstack@lists.launchpad.net
Subject: Re: [Openstack] [nova] [cinder] Nova-volume vs. Cinder in Folsom

You are mistaking me for caring about the answer to this question.

This ship has sailed. We are faced with two shitty choices as a result of 
continued lack of concern by this community for compatibility.

History? I've been pounding my head against the OpenStack wall for years on 
compatibility and we end up AGAIN in a situation like this where we have two 
shitty options.

I'm not offering an opinion or a third option because I just don't give a damn 
what option is picked since both will suck.

I'm trying to get everyone to get their heads out of their asses and not stick 
us yet again in this situation in the future.

You can discard my position if you want. I really don't give a damn. I just 
happen to work with a wider variety of OpenStack environments than most others 
on the list.

But whatever.

-George

On Jul 12, 2012, at 2:40 PM, Jon Mittelhauser wrote:

George,

I am relatively new to this mailing list so I assume that there is some history 
that is prompting the vehemence but I do not understand what you are trying to 
accomplish.

Vish sent out two proposed ways for dealing with the migration.  Most of the 
early voting (including mine) has been for option #1 (happy to explain why if 
desired) but it isn't like the discussion is over.  If you believe that option 
#2 is better, please explain why you believe that.  If you believe that there 
is a 3rd option, please explain it to us.

You are complaining without offering a counter proposal.  That is simply not 
effective and makes semi-neutral folks (like me) tend to discard your point of 
view (which I assume is not your objective).

-Jon

From: George Reese george.re...@enstratus.com
Date: Thursday, July 12, 2012 10:14 AM
To: Brian Waldon brian.wal...@rackspace.com
Cc: Openstack (openstack@lists.launchpad.net) (openstack@lists.launchpad.net) openstack@lists.launchpad.net
Subject: Re: [Openstack] [nova] [cinder] Nova-volume vs. Cinder in Folsom

Well, I think overall OpenStack has done an absolute shit job of compatibility 
and I had hoped (and made a huge point of this at the OpenStack conference) 
Diablo -> Essex would be the end of this compatibility bullshit.

But the attitudes in this thread and with respect to the whole Cinder question 
in general suggest to me that this cavalier attitude towards forward migration 
hasn't changed.

So you can kiss my ass.

-George

On Jul 12, 2012, at 12:11 PM, Brian Waldon wrote:

We actually care a hell of a lot about compatibility. We also recognize there 
are times when we have to sacrifice compatibility so we can move forward at a 
reasonable pace.

If you think we are handling anything the wrong way, we would love to hear your 
suggestions. If you just want to make comments like this, I would suggest you 
keep them to yourself.

Have a great day!
Brian Waldon

On Jul 12, 2012, at 9:32 AM, George Reese wrote:

This community just doesn't give a rat's ass about compatibility, does it?

-George

On Jul 11, 2012, at 10:26 AM, Vishvananda Ishaya wrote:

Hello Everyone,

Now that the PPB has decided to promote Cinder to core for the Folsom
release, we need to decide what happens to the existing Nova Volume
code. As far as I can see it there are two basic strategies. I'm going
to give an overview of each here:

Option 1 -- Remove Nova Volume
==

Process
---
* Remove all nova-volume code from the nova project
* Leave the existing nova-volume database upgrades and tables in
  place for Folsom to allow for migration
* 

Re: [Openstack] [nova] [cinder] Nova-volume vs. Cinder in Folsom

2012-07-12 Thread George Reese
I don't think Cinder should exist.

Sometimes you have to live with the technical debt because that's the best way 
to preserve the investment your customers have made in your product.

Or if you're very smart, you find a way to refactor that technical debt 
invisibly to customers.

But you don't make the customer carry the burden of your refactoring technical 
debt.

-George

On Jul 12, 2012, at 2:52 PM, Jon Mittelhauser wrote:

 How can I disregard a position that you don't have?  (or at least I don't 
 understand yet)  You have failed to provide a position.
 
 Like I said, I'm fairly new to OpenStack….but I am *very* experienced in open 
 source and operating very large and complex production systems… so I am 
 trying to come up to speed and understand your position…
 
 Separating out the volume code from the compute code seems like a no-brainer 
 thing that needed to be done.  
 
 Do you disagree with that basic premise (e.g. That Cinder should exist)?  
 Do you disagree with the way that it was done (e.g. How Cinder is written)?  
 Or do you disagree with the migration strategies proposed (which is what 
 Vish's email was opening discussion about)?  
 
 Or…??
 
 -Jon
 
 From: George Reese george.re...@enstratus.com
 Date: Thursday, July 12, 2012 12:47 PM
 To: Jon Mittelhauser jon.mittelhau...@nebula.com
 Cc: Brian Waldon brian.wal...@rackspace.com, Openstack 
 (openstack@lists.launchpad.net) (openstack@lists.launchpad.net) 
 openstack@lists.launchpad.net
 Subject: Re: [Openstack] [nova] [cinder] Nova-volume vs. Cinder in Folsom
 
 You are mistaking me for caring about the answer to this question.
 
 This ship has sailed. We are faced with two shitty choices as a result of 
 continued lack of concern by this community for compatibility.
 
 History? I've been pounding my head against the OpenStack wall for years on 
 compatibility and we end up AGAIN in a situation like this where we have two 
 shitty options.
 
 I'm not offering an opinion or a third option because I just don't give a 
 damn what option is picked since both will suck.
 
 I'm trying to get everyone to get their heads out of their asses and not 
 stick us yet again in this situation in the future.
 
 You can discard my position if you want. I really don't give a damn. I just 
 happen to work with a wider variety of OpenStack environments than most 
 others on the list. 
 
 But whatever.
 
 -George
 
 On Jul 12, 2012, at 2:40 PM, Jon Mittelhauser wrote:
 
 George,
 
 I am relatively new to this mailing list so I assume that there is some 
 history that is prompting the vehemence but I do not understand what you are 
 trying to accomplish.
 
 Vish sent out two proposed ways for dealing with the migration.  Most of the 
 early voting (including mine) has been for option #1 (happy to explain why 
 if desired) but it isn't like the discussion is over.  If you believe that 
 option #2 is better, please explain why you believe that.  If you believe 
 that there is a 3rd option, please explain it to us.
 
 You are complaining without offering a counter proposal.  That is simply not 
 effective and makes semi-neutral folks (like me) tend to discard your point 
 of view (which I assume is not your objective).
 
 -Jon
 
 From: George Reese george.re...@enstratus.com
 Date: Thursday, July 12, 2012 10:14 AM
 To: Brian Waldon brian.wal...@rackspace.com
 Cc: Openstack (openstack@lists.launchpad.net) 
 (openstack@lists.launchpad.net) openstack@lists.launchpad.net
 Subject: Re: [Openstack] [nova] [cinder] Nova-volume vs. Cinder in Folsom
 
 Well, I think overall OpenStack has done an absolute shit job of 
 compatibility and I had hoped (and made a huge point of this at the 
 OpenStack conference) Diablo -> Essex would be the end of this compatibility 
 bullshit.
 
 But the attitudes in this thread and with respect to the whole Cinder 
 question in general suggest to me that this cavalier attitude towards 
 forward migration hasn't changed.
 
 So you can kiss my ass.
 
 -George
 
 On Jul 12, 2012, at 12:11 PM, Brian Waldon wrote:
 
 We actually care a hell of a lot about compatibility. We also recognize 
 there are times when we have to sacrifice compatibility so we can move 
 forward at a reasonable pace.
 
 If you think we are handling anything the wrong way, we would love to hear 
 your suggestions. If you just want to make comments like this, I would 
 suggest you keep them to yourself.
 
 Have a great day!
 Brian Waldon
 
 On Jul 12, 2012, at 9:32 AM, George Reese wrote:
 
 This community just doesn't give a rat's ass about compatibility, does it?
 
 -George
 
 On Jul 11, 2012, at 10:26 AM, Vishvananda Ishaya wrote:
 
 Hello Everyone,
 
 Now that the PPB has decided to promote Cinder to core for the Folsom
 release, we need to decide what happens to the existing Nova Volume
 code. As far as I can see it there are two basic strategies. I'm going
 to give an overview of each here:
 
 Option 1 -- Remove Nova Volume
 ==
 
 

Re: [Openstack] [nova] [cinder] Nova-volume vs. Cinder in Folsom

2012-07-12 Thread John Griffith
On Thu, Jul 12, 2012 at 1:14 PM, George Reese
george.re...@enstratus.com wrote:
 So if I'm not coding, I should shut up?

 I think you answered your own question.

 Sent from my iPhone

 On Jul 12, 2012, at 14:10, Brian Waldon brian.wal...@rackspace.com wrote:

 What exactly was so offensive about what I said? Communities like OpenStack
 are built on top of people *doing* things, not *talking* about things. I'm
 just asking you to contribute code or design help rather than slanderous
 commentary.

 Brian "Offensive" Waldon

 On Jul 12, 2012, at 11:59 AM, George Reese wrote:

 You evidently have not had to live with the interoperability nightmare known
 as OpenStack in the same way I have. Otherwise, you would find responses
 like Brian's much more offensive.

 -George

 On Jul 12, 2012, at 1:48 PM, Christopher B Ferris wrote:

 This level of response is unnecessary.

 That said, the perspectives which influenced the decision seemed somewhat
 weighted to the development community. I could be wrong, but I did not see
 much input from the operations community as to the impact.

 Clearly, going forward, we want to be more deliberate about changes that may
 have an impact on operations and the broader ecosystem that bases its efforts on
 assumptions established at the start of a release cycle, rather than on
 changes introduced late in the cycle.

 Cheers

 Chris

 Sent from my iPad

 On Jul 12, 2012, at 2:24 PM, George Reese george.re...@enstratus.com
 wrote:

 Well, I think overall OpenStack has done an absolute shit job of
 compatibility and I had hoped (and made a huge point of this at the
 OpenStack conference) Diablo -> Essex would be the end of this compatibility
 bullshit.

 But the attitudes in this thread and with respect to the whole Cinder
 question in general suggest to me that this cavalier attitude towards
 forward migration hasn't changed.

 So you can kiss my ass.

 -George

 On Jul 12, 2012, at 12:11 PM, Brian Waldon wrote:

 We actually care a hell of a lot about compatibility. We also recognize
 there are times when we have to sacrifice compatibility so we can move
 forward at a reasonable pace.

 If you think we are handling anything the wrong way, we would love to hear
 your suggestions. If you just want to make comments like this, I would
 suggest you keep them to yourself.

 Have a great day!
 Brian Waldon

 On Jul 12, 2012, at 9:32 AM, George Reese wrote:

 This community just doesn't give a rat's ass about compatibility, does it?

 -George

 On Jul 11, 2012, at 10:26 AM, Vishvananda Ishaya wrote:

 Hello Everyone,

 Now that the PPB has decided to promote Cinder to core for the Folsom
 release, we need to decide what happens to the existing Nova Volume
 code. As far as I can see it there are two basic strategies. I'm going
 to give an overview of each here:

 Option 1 -- Remove Nova Volume
 ==

 Process
 ---
 * Remove all nova-volume code from the nova project
 * Leave the existing nova-volume database upgrades and tables in
   place for Folsom to allow for migration
 * Provide a simple script in cinder to copy data from the nova
   database to the cinder database (The schema for the tables in
   cinder are equivalent to the current nova tables)
 * Work with package maintainers to provide a package based upgrade
   from nova-volume packages to cinder packages
 * Remove the db tables immediately after Folsom

 Disadvantages
 -
 * Forces deployments to go through the process of migrating to cinder
   if they want to use volumes in the Folsom release

 Option 2 -- Deprecate Nova Volume
 =

 Process
 ---
 * Mark the nova-volume code deprecated but leave it in the project
   for the folsom release
 * Provide a migration path at folsom
 * Backport bugfixes to nova-volume throughout the G-cycle
 * Provide a second migration path at G
 * Package maintainers can decide when to migrate to cinder

 Disadvantages
 -
 * Extra maintenance effort
 * More confusion about storage in openstack
 * More complicated upgrade paths need to be supported

 Personally I think Option 1 is a much more manageable strategy because
 the volume code doesn't get a whole lot of attention. I want to keep
 things simple and clean with one deployment strategy. My opinion is that
 if we choose option 2 we will be sacrificing significant feature
 development in G in order to continue to maintain nova-volume for another
 release.

 But we really need to know if this is going to cause major pain to existing
 deployments out there. If it causes a bad experience for deployers we
 need to take our medicine and go with option 2. Keep in mind that it
 shouldn't make any difference to end users whether cinder or nova-volume
 is being used. The current nova-client can use either one.

 Vish


 ___
 Mailing list: https://launchpad.net/~openstack
 Post to : openstack@lists.launchpad.net
 Unsubscribe : 

Re: [Openstack] Nova Cells

2012-07-12 Thread Eric Windisch


On Thursday, July 12, 2012 at 15:13 PM, Chris Behrens wrote:

 Partially developed. This probably isn't much use, but I'll throw it out 
 there: http://comstud.com/cells.pdf
 
We're going to have to sync once more on removing _to_server calls from the RPC 
layer.

With the matchmaker upstream now, we should be able to use this to provide 
N-broker support to the AMQP drivers, although we'd need to work this into 
ampq.py.  Also, since the design summit, I should note the code has moved in 
the direction of providing a Bindings/Exchanges metaphor, which I hope should 
be easier to digest from the perspective of the queue-server buffs.

Let me know when you're ready to have a chat about it, it might do better to do 
this on the phone or IRC than by email.

-- 
Eric Windisch




___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] [nova] [cinder] Nova-volume vs. Cinder in Folsom

2012-07-12 Thread Gabriel Hurley
The stated and agreed-upon goal from Essex forward is to make the core 
OpenStack projects N+1 compatible (e.g. Essex->Folsom, Folsom->Grizzly), and to 
make the clients capable of talking to every API version forever.

Anything standing in the way of that should be considered a release-blocking 
bug, and should be filed against the appropriate projects. I for one intend to 
see to that as best I can.

That said, there *is* a grey area around migration steps like Nova Volume - 
Cinder. If the migration path is clear, stable, well-documented, uses the same 
schemas and same APIs... I'd say that *may* still fall into the category of N+1 
compatible. It sounds like that's the idea here, but that we need to thoroughly 
vet the practicality of that assertion. I don't think we can decide this 
without proof that the clean transition is 100% possible.

Code isn't the only thing of value; constructively and respectfully shaping 
design decisions is great, testing and filing bugs is also fantastic. Profanity 
and disrespect are not acceptable. Ever.

All the best,


-  Gabriel

From: openstack-bounces+gabriel.hurley=nebula@lists.launchpad.net 
[mailto:openstack-bounces+gabriel.hurley=nebula@lists.launchpad.net] On 
Behalf Of George Reese
Sent: Thursday, July 12, 2012 12:15 PM
To: Brian Waldon
Cc: Openstack (openstack@lists.launchpad.net) (openstack@lists.launchpad.net)
Subject: Re: [Openstack] [nova] [cinder] Nova-volume vs. Cinder in Folsom

So if I'm not coding, I should shut up?

I think you answered your own question.

Sent from my iPhone

On Jul 12, 2012, at 14:10, Brian Waldon brian.wal...@rackspace.com wrote:
What exactly was so offensive about what I said? Communities like OpenStack are 
built on top of people *doing* things, not *talking* about things. I'm just 
asking you to contribute code or design help rather than slanderous commentary.

Brian "Offensive" Waldon

On Jul 12, 2012, at 11:59 AM, George Reese wrote:


You evidently have not had to live with the interoperability nightmare known as 
OpenStack in the same way I have. Otherwise, you would find responses like 
Brian's much more offensive.

-George

On Jul 12, 2012, at 1:48 PM, Christopher B Ferris wrote:


This level of response is unnecessary.

That said, the perspectives which influenced the decision seemed somewhat 
weighted to the development community. I could be wrong, but I did not see much 
input from the operations community as to the impact.

Clearly, going forward, we want to be more deliberate about changes that may 
have an impact on operations and the broader ecosystem that bases its efforts on 
assumptions established at the start of a release cycle, rather than on changes 
introduced late in the cycle.

Cheers

Chris

Sent from my iPad

On Jul 12, 2012, at 2:24 PM, George Reese george.re...@enstratus.com wrote:
Well, I think overall OpenStack has done an absolute shit job of compatibility 
and I had hoped (and made a huge point of this at the OpenStack conference) 
Diablo -> Essex would be the end of this compatibility bullshit.

But the attitudes in this thread and with respect to the whole Cinder question 
in general suggest to me that this cavalier attitude towards forward migration 
hasn't changed.

So you can kiss my ass.

-George

On Jul 12, 2012, at 12:11 PM, Brian Waldon wrote:


We actually care a hell of a lot about compatibility. We also recognize there 
are times when we have to sacrifice compatibility so we can move forward at a 
reasonable pace.

If you think we are handling anything the wrong way, we would love to hear your 
suggestions. If you just want to make comments like this, I would suggest you 
keep them to yourself.

Have a great day!
Brian Waldon

On Jul 12, 2012, at 9:32 AM, George Reese wrote:


This community just doesn't give a rat's ass about compatibility, does it?

-George

On Jul 11, 2012, at 10:26 AM, Vishvananda Ishaya wrote:


Hello Everyone,

Now that the PPB has decided to promote Cinder to core for the Folsom
release, we need to decide what happens to the existing Nova Volume
code. As far as I can see it there are two basic strategies. I'm going
to give an overview of each here:

Option 1 -- Remove Nova Volume
==

Process
---
* Remove all nova-volume code from the nova project
* Leave the existing nova-volume database upgrades and tables in
  place for Folsom to allow for migration
* Provide a simple script in cinder to copy data from the nova
  database to the cinder database (The schema for the tables in
  cinder are equivalent to the current nova tables)
* Work with package maintainers to provide a package based upgrade
  from nova-volume packages to cinder packages
* Remove the db tables immediately after Folsom

Disadvantages
-
* Forces deployments to go through the process of migrating to cinder
  if they want to use volumes in the Folsom release

Option 2 -- 

Re: [Openstack] [nova] [cinder] Nova-volume vs. Cinder in Folsom

2012-07-12 Thread Stefano Maffulli
George,

your opinion is best conveyed if it comes with a polite choice of words.
Please refrain from adding more of your references to excrements and
help the community make a decision.

/stef


On 07/12/2012 12:14 PM, George Reese wrote:
 So if I'm not coding, I should shut up? 
 
 I think you answered your own question. 
 
 Sent from my iPhone
 
 On Jul 12, 2012, at 14:10, Brian Waldon brian.wal...@rackspace.com wrote:
 
 What exactly was so offensive about what I said? Communities like
 OpenStack are built on top of people *doing* things, not *talking*
 about things. I'm just asking you to contribute code or design help
 rather than slanderous commentary.

 Brian "Offensive" Waldon

 On Jul 12, 2012, at 11:59 AM, George Reese wrote:

 You evidently have not had to live with the interoperability
 nightmare known as OpenStack in the same way I have. Otherwise, you
 would find responses like Brian's much more offensive.

 -George

 On Jul 12, 2012, at 1:48 PM, Christopher B Ferris wrote:

 This level of response is unnecessary. 

 That said, the perspectives which influenced the decision seemed
 somewhat weighted to the development community. I could be wrong,
 but I did not see much input from the operations community as to the
 impact.

 Clearly, going forward, we want to be more deliberate about changes
 that may have an impact on operations and the broader ecosystem that
 bases its efforts on assumptions established at the start of a
 release cycle, rather than on changes introduced late in the cycle.

 Cheers

 Chris

 Sent from my iPad

 On Jul 12, 2012, at 2:24 PM, George Reese george.re...@enstratus.com wrote:

 Well, I think overall OpenStack has done an absolute shit job of
 compatibility and I had hoped (and made a huge point of this at the
 OpenStack conference) Diablo -> Essex would be the end of this
 compatibility bullshit.

 But the attitudes in this thread and with respect to the whole
 Cinder question in general suggest to me that this cavalier
 attitude towards forward migration hasn't changed.

 So you can kiss my ass.

 -George

 On Jul 12, 2012, at 12:11 PM, Brian Waldon wrote:

 We actually care a hell of a lot about compatibility. We also
 recognize there are times when we have to sacrifice compatibility
 so we can move forward at a reasonable pace.

 If you think we are handling anything the wrong way, we would love
 to hear your suggestions. If you just want to make comments like
 this, I would suggest you keep them to yourself.

 Have a great day!
 Brian Waldon

 On Jul 12, 2012, at 9:32 AM, George Reese wrote:

 This community just doesn't give a rat's ass about compatibility,
 does it?

 -George

 On Jul 11, 2012, at 10:26 AM, Vishvananda Ishaya wrote:

 Hello Everyone,

 Now that the PPB has decided to promote Cinder to core for the
 Folsom
 release, we need to decide what happens to the existing Nova Volume
 code. As far as I can see it there are two basic strategies. I'm
 going
 to give an overview of each here:

 Option 1 -- Remove Nova Volume
 ==

 Process
 ---
 * Remove all nova-volume code from the nova project
 * Leave the existing nova-volume database upgrades and tables in
   place for Folsom to allow for migration
 * Provide a simple script in cinder to copy data from the nova
   database to the cinder database (The schema for the tables in
   cinder are equivalent to the current nova tables)
 * Work with package maintainers to provide a package based upgrade
   from nova-volume packages to cinder packages
 * Remove the db tables immediately after Folsom

 Disadvantages
 -
 * Forces deployments to go through the process of migrating to
 cinder
   if they want to use volumes in the Folsom release

 Option 2 -- Deprecate Nova Volume
 =

 Process
 ---
 * Mark the nova-volume code deprecated but leave it in the project
   for the folsom release
 * Provide a migration path at folsom
 * Backport bugfixes to nova-volume throughout the G-cycle
 * Provide a second migration path at G
 * Package maintainers can decide when to migrate to cinder

 Disadvantages
 -
 * Extra maintenance effort
 * More confusion about storage in openstack
 * More complicated upgrade paths need to be supported

 Personally I think Option 1 is a much more manageable strategy
 because
 the volume code doesn't get a whole lot of attention. I want to keep
 things simple and clean with one deployment strategy. My opinion
 is that
 if we choose option 2 we will be sacrificing significant feature
 development in G in order to continue to maintain nova-volume
 for another
 release.

 But we really need to know if this is going to cause major pain
 to existing
 deployments out there. If it causes a bad experience for
 deployers we
 need to take our medicine and go with option 2. Keep in mind that it
 shouldn't make any difference to end users whether 

Re: [Openstack] [nova] [cinder] Nova-volume vs. Cinder in Folsom

2012-07-12 Thread Dolph Mathews
On Thu, Jul 12, 2012 at 2:37 PM, George Reese george.re...@enstratus.comwrote:

 This ain't the first time I've had a run in with you where your response
 was essentially "if you don't like it, go code it".

 And obviously you missed the entire constructive point in my response.
 It's this:

 The proposed options suck. It's too late to do anything about that as this
 ship has sailed.


Perhaps my English is not the best, but what exactly is constructive
about this?



 What you need to understand going forward is that this community has an
 abysmal history when it comes to compatibility and interoperability.

 Abysmal.

 Not checkered. Not patchy. Not lacking. Abysmal.

 Horizontally. Vertically. Abysmal.

 Actually, shockingly abysmal.

 You saw one public response laughing at me for expecting this community to
 care about compatibility. I also received private responses with the same
 sentiment.

 If you guys really think you care about compatibility, you need to go sit
 in a corner and do some heavy thinking. Because the history of this project
 and this thread in particular suggest otherwise.

 In case you missed it again, here it is in a single sentence:

 The constructive point I am making is that it's time to wake up and get
 serious about compatibility and interoperability.

 -George

 On Jul 12, 2012, at 2:27 PM, Brian Waldon wrote:

 Planning the development of the projects is valuable as well as
 contributing code. If you review my last message, you'll see the words
 '... or design help', which I intended to represent non-code contribution.
 You seem to have strong opinions on how things should be done, but I don't
 see your voice in any of the community discussions.

 Moving forward, I wish you would share your expertise in a constructive
 manner. Keep in mind this list reaches 2200 people. Let's not waste
 anyone's time.

 WALDON


 On Jul 12, 2012, at 12:14 PM, George Reese wrote:

 So if I'm not coding, I should shut up?

 I think you answered your own question.

 Sent from my iPhone

 On Jul 12, 2012, at 14:10, Brian Waldon brian.wal...@rackspace.com
 wrote:

 What exactly was so offensive about what I said? Communities like
 OpenStack are built on top of people *doing* things, not *talking* about
 things. I'm just asking you to contribute code or design help rather than
 slanderous commentary.

 Brian "Offensive" Waldon

 On Jul 12, 2012, at 11:59 AM, George Reese wrote:

 You evidently have not had to live with the interoperability nightmare
 known as OpenStack in the same way I have. Otherwise, you would find
 responses like Brian's much more offensive.

 -George

 On Jul 12, 2012, at 1:48 PM, Christopher B Ferris wrote:

 This level of response is unnecessary.

 That said, the perspectives which influenced the decision seemed somewhat
 weighted to the development community. I could be wrong, but I did not see
 much input from the operations community as to the impact.

 Clearly, going forward, we want to be more deliberate about changes that
 may have an impact on operations and the broader ecosystem that bases its
 efforts on assumptions established at the start of a release cycle, rather
 than on changes introduced late in the cycle.

 Cheers

 Chris

 Sent from my iPad

 On Jul 12, 2012, at 2:24 PM, George Reese george.re...@enstratus.com
 wrote:

 Well, I think overall OpenStack has done an absolute shit job of
 compatibility and I had hoped (and made a huge point of this at the
 OpenStack conference) Diablo -> Essex would be the end of this
 compatibility bullshit.

 But the attitudes in this thread and with respect to the whole Cinder
 question in general suggest to me that this cavalier attitude towards
 forward migration hasn't changed.

 So you can kiss my ass.

 -George

 On Jul 12, 2012, at 12:11 PM, Brian Waldon wrote:

 We actually care a hell of a lot about compatibility. We also recognize
 there are times when we have to sacrifice compatibility so we can move
 forward at a reasonable pace.

 If you think we are handling anything the wrong way, we would love to hear
 your suggestions. If you just want to make comments like this, I would
 suggest you keep them to yourself.

 Have a great day!
 Brian Waldon

 On Jul 12, 2012, at 9:32 AM, George Reese wrote:

 This community just doesn't give a rat's ass about compatibility, does it?

 -George

 On Jul 11, 2012, at 10:26 AM, Vishvananda Ishaya wrote:

 Hello Everyone,

 Now that the PPB has decided to promote Cinder to core for the Folsom
 release, we need to decide what happens to the existing Nova Volume
 code. As far as I can see it there are two basic strategies. I'm going
 to give an overview of each here:

 Option 1 -- Remove Nova Volume
 ==

 Process
 ---
 * Remove all nova-volume code from the nova project
 * Leave the existing nova-volume database upgrades and tables in
   place for Folsom to allow for migration
 * Provide a simple script in cinder to copy data from the nova
   database to 

Re: [Openstack] [nova] [cinder] Nova-volume vs. Cinder in Folsom

2012-07-12 Thread Stefano Maffulli
On 07/12/2012 12:37 PM, George Reese wrote:
 It's too late to do anything about that as
 this ship has sailed.

This is wrong. You and anybody that believes options #1 and #2 proposed
by Vish and John are sub-optimal still have time to make a proposal.
Please, take time to write it down.

Cheers,
stef

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


[Openstack] [Nova] After rebooting an LXC instance, no longer be able to SSH to it

2012-07-12 Thread Sajith Kariyawasam
Hi all,

I have faced a situation where, if an LXC instance is rebooted by logging
into it as the root user, I am no longer able to remotely log into it (SSH).

But I can still ping the instance.

Is this a known issue? If so, I would like to know whether there is a workaround.

-- 
Best Regards
Sajith
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] [nova] [cinder] Nova-volume vs. Cinder in Folsom

2012-07-12 Thread Narayan Desai
On Thu, Jul 12, 2012 at 2:38 PM, Vishvananda Ishaya
vishvana...@gmail.com wrote:

 Agreed, I'm a developer, so I'm clearly biased towards what is easier for 
 developers. It will be a significant effort to have to maintain the 
 nova-volume code, so I want to be sure it is necessary. End users really 
 shouldn't care about this, so the other community members who are impacted 
 are operators.

 I really would like more feedback on how painful it will be for operators if 
 we force them to migrate. We have one clear response from Chuck, which is 
 very helpful. Is there anyone else out there running nova-volume that would 
 prefer to keep it when they move to folsom?

I think that the long term maintenance or removal of nova-volume in
its existing form is orthogonal to the actual issue of continuity from
one release to the next.

At this point, we've now run cactus, diablo and are in testing with
essex. Each of these has effectively been a flag day for us; we build
the new system, migrate users, images, etc, and let users do a bunch
of manual migration of volume data, etc, while running both systems in
parallel. This hasn't been as painful as it sounds because our
understanding of best practices for running openstack is moving pretty
quickly and each system has been quite different from the previous.

The lack of an effective process to move from one major release to the
next is the major issue here in my mind. It would be fantastic if
(some day, ha ha ha) you could apt-get upgrade from folsom to grizzly,
but i think that is likely to be more trouble than it is worth. A
reasonable compromise would be a well documented process as well as
tools to aid in the process. Each real deployment will have a
substantial set of local customizations, particularly if they are
running at any sort of scale. While it won't be feasible to support
any upgrade with these customizations, tools for the process (which
 may only be used as a straw man in complex cases) would go a long way.
 -nld

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


[Openstack] Release Upgrades (was [nova] [cinder] Nova-volume vs. Cinder in Folsom)

2012-07-12 Thread Vishvananda Ishaya

On Jul 12, 2012, at 2:22 PM, Narayan Desai wrote:

 I think that the long term maintenance or removal of nova-volume in
 its existing form is orthogonal to the actual issue of continuity from
 one release to the next.

Agreed. Discussion of the volume/cinder strategy is a separate topic. I've
taken the liberty of updating the subject to keep the discussions on point.

 
 At this point, we've now run cactus, diablo and are in testing with
 essex. Each of these has effectively been a flag day for us; we build
 the new system, migrate users, images, etc, and let users do a bunch
 of manual migration of volume data, etc, while running both systems in
 parallel. This hasn't been as painful as it sounds because our
 understanding of best practices for running openstack is moving pretty
 quickly and each system has been quite different from the previous.

Upgrading has been painful and we are striving to improve this process
as much as possible.

 
 The lack of an effective process to move from one major release to the
 next is the major issue here in my mind. It would be fantastic if
 (some day, ha ha ha) you could apt-get upgrade from folsom to grizzly,
 but i think that is likely to be more trouble than it is worth. A
 reasonable compromise would be a well documented process as well as
 tools to aid in the process. Each real deployment will have a
 substantial set of local customizations, particularly if they are
 running at any sort of scale. While it won't be feasible to support
 any upgrade with these customizations, tools for the process (which
 may only be used a straw man in complex cases) would go a long way.

I would like to take this a bit further. Documentation is a great first step,
but I would actually like to have an actual Jenkins test that does the upgrade
from essex to Folsom with live resources created. I think this is the only way
that we can be sure that the upgrade is working properly.

The first version of this doesn't even have to be on a full cluster. I'm 
thinking
something as simple as:

* configure devstack to checkout stable/essex from all of the projects
* run the system, launch some instances and volumes
* terminate the workers
* upgrade all of the code to folsom
* do any manual upgrade steps (nova-manage db sync, cinder migrate, etc.)
* run all the workers
* make sure the existing data still works and new api commands run

The manual upgrade steps should be contained in a single script so that the
distros can use it to help make their package upgrades and deployers can
use it for reference when upgrading their clusters.
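
A rough sketch of what such a test script could look like is below. Everything in
it is illustrative: the paths, branch names, image/flavor names and the upgrade.sh
wrapper are assumptions, not the actual gate job.

#!/usr/bin/env python
# Sketch of an essex -> folsom upgrade smoke test along the lines above.
import subprocess

BASE = '/opt/stack'  # assumed working directory

def run(cmd, cwd=None):
    # Echo and run a shell command; any non-zero exit fails the test run.
    print('+ ' + cmd)
    subprocess.check_call(cmd, shell=True, cwd=cwd)

def main():
    # 1. deploy the old release with devstack pinned to stable/essex
    run('git clone -b stable/essex https://github.com/openstack-dev/devstack.git essex', cwd=BASE)
    run('./stack.sh', cwd=BASE + '/essex')
    # 2. create live resources to carry across the upgrade
    run('nova boot --image cirros --flavor 1 upgrade-smoke')  # assumed image/flavor
    run('nova volume-create 1')                               # 1 GB test volume
    # 3. terminate the workers
    run('./unstack.sh', cwd=BASE + '/essex')
    # 4. upgrade the code, then run the manual steps from one script
    run('git clone https://github.com/openstack-dev/devstack.git folsom', cwd=BASE)
    run('./upgrade.sh', cwd=BASE + '/folsom')  # hypothetical wrapper: db sync, cinder migrate, ...
    # 5. run all the workers again and verify old data plus new API calls
    run('./stack.sh', cwd=BASE + '/folsom')
    run('nova show upgrade-smoke')  # the pre-upgrade instance should still exist
    run('nova volume-list')

if __name__ == '__main__':
    main()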

This is something we can start working on today and we can run after each
commit. Then we will immediately know if we do something that breaks
upgradability, and we will have a testable documented process for upgrading.

Vish



___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] [nova] [cinder] Nova-volume vs. Cinder in Folsom

2012-07-12 Thread Vishvananda Ishaya

On Jul 12, 2012, at 2:36 PM, David Mortman wrote:

 On Thu, Jul 12, 2012 at 3:38 PM, Vishvananda Ishaya
 vishvana...@gmail.com wrote:
 
 Two thoughts:
 
 1) I think this is the wrong forum to poll operators on their
 preferences in general
 
 2) We don't yet even have a fully laid out set of requirements and
 steps for how someone would convert or how hard it will be.
 Historically (with openstack and software in general), it is _always_
 harder to upgrade than we think it will be. I'm an optimist and I
 think it will be a disaster...


Excellent points. Let me make the following proposal:

1) Leave the code in nova-volume for now.
2) Document and test a clear migration path to cinder.
3) Take the working example upgrade to the operators list and ask them for 
opinions.
4) Decide based on their feedback whether it is acceptable to cut the 
nova-volume code out for folsom.
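
For step 2, the data-copy half of the migration could be as small as the sketch
below. The connection URLs and the table list are assumptions, and it uses the
SQLAlchemy style of the time; the real cinder migration tooling may well differ.

# Copy volume-related rows from the nova database to the cinder database,
# relying on the claim elsewhere in the thread that the cinder tables mirror
# the corresponding nova tables.
from sqlalchemy import create_engine, MetaData, Table

NOVA_DB = 'mysql://nova:secret@localhost/nova'        # assumed credentials
CINDER_DB = 'mysql://cinder:secret@localhost/cinder'
TABLES = ['volumes', 'snapshots', 'volume_types']     # assumed table set

def copy_table(name, src, dst):
    # Reflect the table on both sides and bulk-insert every row.
    src_table = Table(name, MetaData(bind=src), autoload=True)
    dst_table = Table(name, MetaData(bind=dst), autoload=True)
    rows = [dict(row) for row in src.execute(src_table.select())]
    if rows:
        dst.execute(dst_table.insert(), rows)

if __name__ == '__main__':
    nova_engine = create_engine(NOVA_DB)
    cinder_engine = create_engine(CINDER_DB)
    for table_name in TABLES:
        copy_table(table_name, nova_engine, cinder_engine)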

Vish
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] [nova] [cinder] Nova-volume vs. Cinder in Folsom

2012-07-12 Thread Brian Waldon
tl;dr: I vote for option 2 as it's the only reasonable path from a deployer's 
point of view

With my deployer hat on, I think option 1 isn't really valid. It's completely 
unfair to force deployers to use Cinder before they can upgrade to Folsom. 
There are real deployments using nova-volumes, let's not screw them.

With my developer hat on, I don't want to support two forks of the same 
slowly-diverging codebase. I definitely want to make sure our stuff is 
consumable, but we can't be expected to support everything forever. How about 
we leave nova-volumes in for the Grizzly release, but with a deprecation 
warning and a notice that we will only maintain it as we would a stable release 
branch (no features)?

Waldon


On Jul 11, 2012, at 8:26 AM, Vishvananda Ishaya wrote:

 Hello Everyone,
 
 Now that the PPB has decided to promote Cinder to core for the Folsom
 release, we need to decide what happens to the existing Nova Volume
 code. As far as I can see it there are two basic strategies. I'm going
 to give an overview of each here:
 
 Option 1 -- Remove Nova Volume
 ==
 
 Process
 ---
 * Remove all nova-volume code from the nova project
 * Leave the existing nova-volume database upgrades and tables in
   place for Folsom to allow for migration
 * Provide a simple script in cinder to copy data from the nova
   database to the cinder database (The schema for the tables in
   cinder are equivalent to the current nova tables)
 * Work with package maintainers to provide a package based upgrade
   from nova-volume packages to cinder packages
 * Remove the db tables immediately after Folsom
 
 Disadvantages
 -
 * Forces deployments to go through the process of migrating to cinder
   if they want to use volumes in the Folsom release
 
 Option 2 -- Deprecate Nova Volume
 =
 
 Process
 ---
 * Mark the nova-volume code deprecated but leave it in the project
   for the folsom release
 * Provide a migration path at folsom
 * Backport bugfixes to nova-volume throughout the G-cycle
 * Provide a second migration path at G
 * Package maintainers can decide when to migrate to cinder
 
 Disadvantages
 -
 * Extra maintenance effort
 * More confusion about storage in openstack
 * More complicated upgrade paths need to be supported
 
 Personally I think Option 1 is a much more manageable strategy because
 the volume code doesn't get a whole lot of attention. I want to keep
 things simple and clean with one deployment strategy. My opinion is that
 if we choose option 2 we will be sacrificing significant feature
 development in G in order to continue to maintain nova-volume for another
 release.
 
 But we really need to know if this is going to cause major pain to existing
 deployments out there. If it causes a bad experience for deployers we
 need to take our medicine and go with option 2. Keep in mind that it
 shouldn't make any difference to end users whether cinder or nova-volume
 is being used. The current nova-client can use either one.
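
(As an illustration of that last point: with the python-novaclient of the time, an
end user's code looks the same either way; the snippet below is only a sketch,
with the endpoint and credentials made up.)

from novaclient.v1_1 import client

# Same client, same call, whether nova-volume or cinder answers on the backend.
nova = client.Client('demo', 'secret', 'demo',
                     'http://keystone.example.com:5000/v2.0/')  # assumed endpoint
for vol in nova.volumes.list():
    print('%s %s' % (vol.id, vol.status))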
 
 Vish
 
 
 ___
 Mailing list: https://launchpad.net/~openstack
 Post to : openstack@lists.launchpad.net
 Unsubscribe : https://launchpad.net/~openstack
 More help   : https://help.launchpad.net/ListHelp


___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] [nova] [cinder] Nova-volume vs. Cinder in Folsom

2012-07-12 Thread John Postlethwait
So, in short, your entire purpose here is to troll everyone? Nice… : /

You obviously care. You keep responding… You have been asked numerous times 
what we can do to NOT stick us yet again in this situation in the future.
Why is that such a difficult question to answer? Do you have an answer? Is your 
answer to not change anything, ever? That is not likely or reasonable – so 
what can be done here? Have you seen the other thread about what this 
cinder/nova-volume change entails?

There ARE people here willing to hear it out if you have an answer, or an 
actionable suggestion, or process, or SOMETHING besides get your heads out of 
your asses, which is hardly actionable, as it is vague and hopefully not a 
literal belief/suggestion…

So, George: What do you want from us here? You likely have some legitimate 
pain-points, concerns, and reasons to be upset, but they are absolutely lost in 
your angry and personally offensive responses. Can you maybe elaborate on what 
pain THIS change would cause, and how we might assuage that?

John Postlethwait
Nebula, Inc.


On Thursday, July 12, 2012 at 12:47 PM, George Reese wrote:

 You are mistaking me for caring about the answer to this question.
  
 This ship has sailed. We are faced with two shitty choices as a result of 
 continued lack of concern by this community for compatibility.
  
 History? I've been pounding my head against the OpenStack wall for years on 
 compatibility and we end up AGAIN in a situation like this where we have two 
 shitty options.
  
 I'm not offering an opinion or a third option because I just don't give a 
 damn what option is picked since both will suck.
  
 I'm trying to get everyone to get their heads out of their asses and not 
 stick us yet again in this situation in the future.
  
 You can discard my position if you want. I really don't give a damn. I just 
 happen to work with a wider variety of OpenStack environments that most 
 others on the list.  
  
 But whatever.
  
 -George
  
 On Jul 12, 2012, at 2:40 PM, Jon Mittelhauser wrote:
  George,  
   
  I am relatively new to this mailing list so I assume that there is some 
  history that is prompting the vehemence but I do not understand what you 
  are trying to accomplish.  
   
  Vish sent out two proposed ways for dealing with the migration.  Most of 
  the early voting (including mine) has been for option #1 (happy to explain 
  why if desired) but it isn't like the discussion is over.  If you believe 
  that option #2 is better, please explain why you believe that.  If you 
  believe that there is a 3rd option, please explain it to us.  
   
  You are complaining without offering a counter proposal.  That is simply 
  not effective and makes semi-neutral folks (like me) tend to discard your 
  point of view (which I assume is not your objective).  
   
  -Jon  
   
  From: George Reese george.re...@enstratus.com 
  (mailto:george.re...@enstratus.com)
  Date: Thursday, July 12, 2012 10:14 AM
  To: Brian Waldon brian.wal...@rackspace.com 
  (mailto:brian.wal...@rackspace.com)
  Cc: Openstack (openstack@lists.launchpad.net 
  (mailto:openstack@lists.launchpad.net)) (openstack@lists.launchpad.net 
  (mailto:openstack@lists.launchpad.net)) openstack@lists.launchpad.net 
  (mailto:openstack@lists.launchpad.net)
  Subject: Re: [Openstack] [nova] [cinder] Nova-volume vs. Cinder in Folsom
   
  Well, I think overall OpenStack has done an absolute shit job of 
  compatibility and I had hoped (and made a huge point of this at the 
  OpenStack conference) Diablo -> Essex would be the end of this 
  compatibility bullshit.  
   
  But the attitudes in this thread and with respect to the whole Cinder 
  question in general suggest to me that this cavalier attitude towards 
  forward migration hasn't changed.  
   
  So you can kiss my ass.  
   
  -George  
   
  On Jul 12, 2012, at 12:11 PM, Brian Waldon wrote:  
   We actually care a hell of a lot about compatibility. We also recognize 
   there are times when we have to sacrifice compatibility so we can move 
   forward at a reasonable pace.  

   If you think we are handling anything the wrong way, we would love to 
   hear your suggestions. If you just want to make comments like this, I 
   would suggest you keep them to yourself.

   Have a great day!  
   Brian Waldon

   On Jul 12, 2012, at 9:32 AM, George Reese wrote:  
This community just doesn't give a rat's ass about compatibility, does 
it?  
 
-George  
 
On Jul 11, 2012, at 10:26 AM, Vishvananda Ishaya wrote:  
 Hello Everyone,
  
 Now that the PPB has decided to promote Cinder to core for the Folsom
 release, we need to decide what happens to the existing Nova Volume
 code. As far as I can see it there are two basic strategies. I'm going
 to give an overview of each here:
  
 Option 1 -- Remove Nova Volume
 ==
  
 Process
 ---
 * Remove all 

Re: [Openstack] [keystone] Rate limit middleware

2012-07-12 Thread Rafael Durán Castañeda

El 12/07/12 18:59, Jay Pipes escribió:

On 07/12/2012 12:26 PM, Rafael Durán Castañeda wrote:

Unless I'm missing something, nova_limits is not applicable to Keystone
since it takes the tenant_id from 'nova.context', which obiously is not
available for Keystone; thought adapt/extend it to keystone should be
trivial and probably is the way to go.

Sure, though I'm pointing out that this could/should be an external
project (like nova_limits) and not something to be proposed for merging
into Keystone core...

Best,
-jay

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp

Ok, I think I will do that.


___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] [nova] [cinder] Nova-volume vs. Cinder in Folsom

2012-07-12 Thread Michael Basnight
On Thu, Jul 12, 2012 at 2:38 PM, Vishvananda Ishaya
vishvana...@gmail.com wrote:

 On Jul 12, 2012, at 11:48 AM, Christopher B Ferris wrote:

 This level of response is unnecessary.

 That said, the perspectives which influenced the decision seemed somewhat 
 weighted to the development community. I could be wrong, but I did not see 
 much input from the operations community as to the impact.

 Agreed, I'm a developer, so I'm clearly biased towards what is easier for 
 developers. It will be a significant effort to have to maintain the 
 nova-volume code, so I want to be sure it is necessary. End users really 
 shouldn't care about this, so the other community members who are impacted 
 are operators.

 I really would like more feedback on how painful it will be for operators if 
 we force them to migrate. We have one clear response from Chuck, which is 
 very helpful. Is there anyone else out there running nova-volume that would 
 prefer to keep it when they move to folsom?

Us reddwarfers are running a grunch of nova volumes in multiple DCs.
While the developer in me says let's go w/ option 1, the
release/operator in me says let's wait to do this on a formal release
(#2?). It seems like we are too far gone in the Folsom release to do
this w/o people having to scramble. Everyone will have to eventually
migrate from old to new, and while I understand that there will be a
clear path to do this, it's going to be painful for some companies.
If we rip things out during Grizzly, it will at least give companies
that are not rolling on trunk the ability to decide when to migrate.
If you do it in Folsom, some companies who are depending on (or
wanting) landed features, but may not be able to do this migration,
could suffer. At least now deferring to El Oso will give companies
time to brace themselves, and successfully migrate to Folsom w/o any
major issues. In general it makes sense to do sweeping changes between
major releases, communicated at the beginning of a release cycle
rather than in the middle, to give operators/companies a decision to
upgrade if they want the features vs. stay on old if they don't want to
migrate. At the end of the day, openstack depends on operators to
function. I'd rather piss off us developers than piss off the people
that run the infrastructure we create!

That being said, I'm not worried about the migration, given that it's
just a datastore/service/package/installation based migration. We will
likely roll to cinder much sooner than the Grizzly release, assuming
everything is stable (which I'm sure it will be). :)

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] [nova] [cinder] Nova-volume vs. Cinder in Folsom

2012-07-12 Thread Michael Basnight
On Thu, Jul 12, 2012 at 2:56 PM, George Reese
george.re...@enstratus.com wrote:
 I don't think Cinder should exist.

 Sometimes you have to live with the technical debt because that's the best
 way to preserve the investment your customers have made in your product.

 Or if you're very smart, you find a way to refactor that technical debt
 invisibly to customers.

 But you don't make the customer carry the burden of your refactoring
 technical debt.

I hate to fan the fire, but what would have happened in cassandra if they
_never_ updated their data structures. Or hadoop, or any other open
source project like that. I understand where you are coming from, but
I would like to find an example of a project that's _never_ updated
their datastore or caused some sort of migration, be it configuration
or data based. I _do_ feel we should roll this migration in the least
painful way possible though.

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] [nova] [cinder] Nova-volume vs. Cinder in Folsom

2012-07-12 Thread Stefano Maffulli
[Launchpad is slow at delivering messages to the list. Please keep that in
mind, everybody, and slow down your replies to give people the chance to
comment, too.]

On 07/12/2012 12:47 PM, Matt Joyce wrote:
 Yes we maintain v2 api compatibility but there will be a cost to users
 in the confusion of decisions like this.  

Any change has costs, all decisions are the result of compromises. I'm
sure you all know this, it's a fact of life. We can't change that fact
but we can change how we get to an agreement of what compromise is
acceptable.

From what I understand, some people are regretting the decision to
create cinder. We should start from the beginning then:

  How many people regret the decision to start cinder?

  Where were you when the decision was taken?

  What prevented you from speaking up then?

I'd appreciate your answers to these questions and your suggestions on
how to modify the decision-making process (if you think it's broken).

thanks,
stef

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] [nova] [cinder] Nova-volume vs. Cinder in Folsom

2012-07-12 Thread George Reese

On Jul 12, 2012, at 5:08 PM, John Postlethwait wrote:

 So, in short, your entire purpose here is to troll everyone? Nice… : /
 

If you think that, you're likely part of the problem.

 You obviously care. You keep responding… You have been asked numerous times 
 what we can do to NOT stick us yet again in this situation in the future. 
  Why is that such a difficult question to answer? Do you have an answer? Is 
 your answer to not change anything, ever? That is not likely or reasonable 
 – so what can be done here? Have you seen the other thread about what this 
 cinder/nova-volume change entails?
 

This is an idiotic question. What can I suggest everyone do about as yet 
unproposed changes to OpenStack? Seriously?

 There ARE people here willing to hear it out if you have an answer, or an 
 actionable suggestion, or process, or SOMETHING besides get your heads out 
 of your asses, which is hardly actionable, as it is vague and hopefully not 
 a literal belief/suggestion…
 
 So, George: What do you want from us here? You likely have some legitimate 
 pain-points, concerns, and reasons to be upset, but they are absolutely lost 
 in your angry and personally offensive responses. Can you maybe elaborate on 
 what pain THIS change would cause, and how we might assuage that?
 

This community needs offending.

How many years has it been? How many OpenStack upgrades can you point to that 
have been painless? How many interoperable, multi-vendor OpenStack clouds? How 
reliable is the API as an appropriate abstract representation of an OpenStack 
implementation that can be used to build an ecosystem?

The answers to those questions are:

- 3
- 0
- 0
- not at all

It is very clear that compatibility and upgradability is a huge issue. And a 
number of people, obviously including you, don't seem to grasp that.

We have silly comments like Michael's "I hate to fan the fire, but what would 
have happened in cassandra if they _never_ updated their data structures."

1. Obviously I am not talking about never changing anything. Any suggestion 
otherwise is being willfully obtuse.
2. There's a big difference between systems like Cassandra that generally can 
have maintenance windows and environments like clouds which should NEVER have 
maintenance windows
3. Every single other cloud platform on the planet manages to support a much 
less painful upgrade path with a much higher level of interoperability.

In all of yours and Jon's and Brian's nonsense, I don't see any actual defense 
of the compatibility and interoperability of OpenStack deployments. I can only 
assume that's because you can't actually defend it, yet you nevertheless have 
your head stuck in the sand.

-George

--
George Reese - Chief Technology Officer, enStratus
e: george.re...@enstratus.com  Skype: nspollution  t: @GeorgeReese  p: +1.207.956.0217
enStratus: Enterprise Cloud Management - @enStratus - http://www.enstratus.com
To schedule a meeting with me: http://tungle.me/GeorgeReese

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Release Upgrades (was [nova] [cinder] Nova-volume vs. Cinder in Folsom)

2012-07-12 Thread Monty Taylor


On 07/12/2012 04:36 PM, Vishvananda Ishaya wrote:
 
 On Jul 12, 2012, at 2:22 PM, Narayan Desai wrote:
 
 I think that the long term maintenance or removal of nova-volume in
 its existing form is orthogonal to the actual issue of continuity from
 one release to the next.
 
 Agreed. Discussing the volume/cinder strategy is a separate topic. I've
 taken the liberty of updating the subject to keep the discussion on point.
 

 At this point, we've now run cactus, diablo and are in testing with
 essex. Each of these has effectively been a flag day for us; we build
 the new system, migrate users, images, etc, and let users do a bunch
 of manual migration of volume data, etc, while running both systems in
 parallel. This hasn't been as painful as it sounds because our
 understanding of best practices for running openstack is moving pretty
 quickly and each system has been quite different from the previous.
 
 Upgrading has been painful and we are striving to improve this process
 as much as possible.
 

 The lack of an effective process to move from one major release to the
 next is the major issue here in my mind. It would be fantastic if
 (some day, ha ha ha) you could apt-get upgrade from folsom to grizzly,
 but I think that is likely to be more trouble than it is worth. A
 reasonable compromise would be a well documented process as well as
 tools to aid in the process. Each real deployment will have a
 substantial set of local customizations, particularly if they are
 running at any sort of scale. While it won't be feasible to support
 any upgrade with these customizations, tools for the process (which
 may only be used as a straw man in complex cases) would go a long way.
 
 I would like to take this a bit further. Documentation is a great first step,
 but I would actually like to have an actual Jenkins test that does the upgrade
 from essex to Folsom with live resources created. I think this is the only way
 that we can be sure that the upgrade is working properly.

++

 The first version of this doesn't even have to be on a full cluster. I'm 
 thinking
 something as simple as:
 
 * configure devstack to checkout stable/essex from all of the projects
 * run the system, launch some instances and volumes
 * terminate the workers
 * upgrade all of the code to folsom
 * do any manual upgrade steps (nova-manage db sync, cinder migrate, etc.)
 * run all the workers
 * make sure the existing data still works and new api commands run
 
 The manual upgrade steps should be contained in a single script so that the
 distros can use it to help make their package upgrades and deployers can
 use it for reference when upgrading their clusters.

Yes - especially if it's a self contained thing like devstack is currently.

For the "upgrade all the code to folsom" step - let's chat about making
sure that we get the right hooks/env vars in there so that we can make
that upgrade to tip of trunk in most projects + proposed patch in one
of them  - same as we do for devstack runs today.

 This is something we can start working on today and we can run after each
 commit. Then we will immediately know if we do something that breaks
 upgradability, and we will have a testable documented process for upgrading.

The creation of the self-contained script devstack was a HUGE step
forward for us for integration testing. I think a similar thing for
upgradability would be just as huge.


___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] [nova] [cinder] Nova-volume vs. Cinder in Folsom

2012-07-12 Thread Eric Windisch
 
 
 Excellent points. Let me make the following proposal:
 
 1) Leave the code in nova-volume for now.
 2) Document and test a clear migration path to cinder.
 3) Take the working example upgrade to the operators list and ask them for 
 opinions.
 4) Decide based on their feedback whether it is acceptable to cut the 
 nova-volume code out for folsom.
 
Finally something I can put a +1 against.

-- 
Eric Windisch




___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] [nova] [cinder] Nova-volume vs. Cinder in Folsom

2012-07-12 Thread Federico Lucifredi

On Jul 12, 2012, Christopher B Ferris wrote:

 Clearly, going forward, we want to be more deliberate about changes

Funny how compatibility is always a popular "going forward" item. 

Best -Federico

_
-- 'Problem' is a bleak word for challenge - Richard Fish
(Federico L. Lucifredi) - federico at canonical.com - GnuPG 0x4A73884C





___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] [nova] [cinder] Nova-volume vs. Cinder in Folsom

2012-07-12 Thread Joe Topjian
Hello,

I'm not an OpenStack developer nor any type of developer. I am, however,
heavily involved with operations for a few production OpenStack
environments. I understand the debate going on and wanted to add an
administrator's point of view.

For admins, OpenStack is not our job, but a tool we use in our job. It's
terribly frustrating when that tool drastically changes every six months.

I find Gabriel's reply interesting and sane. I think if it was agreed upon
to ensure N+1 compatibility, then OpenStack should adhere to that.

The change being discussed involves storage volumes. This is dead serious.
If the migration goes awry, there's potential for production data loss. If
the badly-migrated OpenStack environment is used to offer services for
outside customers, we've just lost data for those customers. It's one of
the worst scenarios for admins.

If upgrading from one version of OpenStack to the next is too dangerous due
to the possibility of getting into situations such as described above, then
it needs to be clearly announced. There's a reason why major RHEL releases
are maintained in parallel for so long.

With regard to Option 1, I understand the benefits of making this change.
If Option 1 was chosen, IMO, the best-case scenario would be if the extra
work involved with upgrading to Cinder/Folsom was just a schema migration
and everything else still worked as it did with Essex.

If this were to happen, though, I would spend /weeks/ testing and planning
the Folsom upgrade. I'd estimate that my production environments would make
it to Folsom 3 months after it was released. But then what major change am
I going to have to worry about in another 3 months?

Thanks,
Joe


On Thu, Jul 12, 2012 at 2:48 PM, Gabriel Hurley
gabriel.hur...@nebula.com wrote:

  The stated and agreed-upon goal from Essex forward is to make the core
 OpenStack projects N+1 compatible (e.g. Essex -> Folsom, Folsom -> Grizzly),
 and to make the clients capable of talking to every API version forever.

 Anything standing in the way of that should be considered a
 release-blocking bug, and should be filed against the appropriate projects.
 I for one intend to see to that as best I can.

 That said, there *is* a grey area around “migration” steps like Nova
 Volume -> Cinder. If the migration path is clear, stable, well-documented,
 uses the same schemas and same APIs… I’d say that *may* still fall into
 the category of N+1 compatible. It sounds like that’s the idea here, but
 that we need to thoroughly vet the practicality of that assertion. I don’t
 think we can decide this without proof that the clean transition is 100%
 possible.

 Code isn’t the only thing of value; constructively and respectfully
 shaping design decisions is great, testing and filing bugs is also
 fantastic. Profanity and disrespect are not acceptable. Ever.

 All the best,

 - Gabriel


-- 
Joe Topjian
Systems Administrator
Cybera Inc.

www.cybera.ca

Cybera is a not-for-profit organization that works to spur and support
innovation, for the economic benefit of Alberta, through the use
of cyberinfrastructure.
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] [nova] [cinder] Nova-volume vs. Cinder in Folsom

2012-07-12 Thread Doug Davis
On the flip side - to refresh people's memory it might be useful to send 
out a link to some of the email threads (or wikis) that explained why this 
move is critical to OS's success.  Perhaps some of those reasons aren't as 
valid any more given the impact people are now seeing it will have.  Never 
hurts to measure again before you cut  :-)

thanks
-Doug

STSM |  Standards Architect  |  IBM Software Group
(919) 254-6905  |  IBM 444-6905  |  d...@us.ibm.com
The more I'm around some people, the more I like my dog.



Stefano Maffulli stef...@openstack.org 
Sent by: openstack-bounces+dug=us.ibm@lists.launchpad.net
07/12/2012 06:38 PM

To
openstack@lists.launchpad.net
cc

Subject
Re: [Openstack] [nova] [cinder] Nova-volume vs. Cinder in Folsom






[Launchpad is slow at delivering messages to the list. Please keep that in
mind, everybody, and slow down your replies to give people the chance to
comment, too.]

On 07/12/2012 12:47 PM, Matt Joyce wrote:
 Yes we maintain v2 api compatibility but there will be a cost to users
 in the confusion of decisions like this. 

Any change has costs, all decisions are the result of compromises. I'm
sure you all know this, it's a fact of life. We can't change that fact
but we can change how we get to an agreement of what compromise is
acceptable.

From what I understand, some people are regretting the decision to
create cinder. We should start from the beginning then:

  How many people regret the decision to start cinder?

  Where were you when the decision was taken?

  What prevented you from speaking up then?

I'd appreciate your answers to these questions and your suggestions on
how to modify the decision-making process (if you think it's broken).

thanks,
stef

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


[Openstack] Questions about ceilometer

2012-07-12 Thread ZhangJialong
Dear all,
 
Since the project named ceilometer appeared, I have paid close attention to it.
Following the ceilometer docs, I deployed it in an OpenStack Essex
environment. However, I cannot start the ceilometer collector and agent.

The following are the steps I performed.

1. Install OpenStack Nova, MongoDB, and ceilometer.
2. Configure Nova, MongoDB, and ceilometer.

The /etc/nova/nova.conf file is here:
http://pastebin.com/sW5d8eRv 

The /etc/ceilometer-collector.conf file is here:
http://pastebin.com/u5vH22Lh 

And the /etc/mongodb.conf is here:
http://pastebin.com/D5GMkLsb 

3. Start OpenStack Nova and MongoDB.

4. Then I start the ceilometer-collector:
/usr/bin/ceilometer-collector start
However, some errors occurred:

2012-07-11 05:25:35 INFO ceilometer.storage [-] Loaded mongodb storage 
engine EntryPoint.parse('mongodb = 
ceilometer.storage.impl_mongodb:MongoDBStorage')
2012-07-11 05:25:35 INFO ceilometer.storage [-] Loaded mongodb storage 
engine EntryPoint.parse('mongodb = 
ceilometer.storage.impl_mongodb:MongoDBStorage')
2012-07-11 05:25:35 INFO ceilometer.storage.impl_mongodb [-] connecting to 
MongoDB on localhost:27017
2012-07-11 05:25:35 INFO ceilometer.collector.dispatcher [-] attempting to 
load notification handler for ceilometer.collector.compute:instance
2012-07-11 05:25:35 INFO ceilometer.collector.dispatcher [-] subscribing 
instance handler to compute.instance.create.end events
2012-07-11 05:25:35 INFO ceilometer.collector.dispatcher [-] subscribing 
instance handler to compute.instance.exists events
2012-07-11 05:25:35 INFO ceilometer.collector.dispatcher [-] subscribing 
instance handler to compute.instance.delete.start events
Traceback (most recent call last):
  File /usr/lib/python2.6/site-packages/eventlet/hubs/poll.py, line 97, 
in wait
readers.get(fileno, noop).cb(fileno)
  File /usr/lib/python2.6/site-packages/eventlet/greenthread.py, line 
192, in main
result = function(*args, **kwargs)
  File /usr/lib/python2.6/site-packages/nova/service.py, line 101, in 
run_server
server.start()
  File /usr/lib/python2.6/site-packages/nova/service.py, line 162, in 
start
self.manager.init_host()
  File 
/usr/lib/python2.6/site-packages/ceilometer-0-py2.6.egg/ceilometer/collector/manager.py,
 line 69, in init_host
topic='%s.info' % flags.FLAGS.notification_topics[0],
  File /usr/lib/python2.6/site-packages/nova/openstack/common/cfg.py, 
line 867, in __getattr__
return self._substitute(self._get(name))
  File /usr/lib/python2.6/site-packages/nova/openstack/common/cfg.py, 
line 1070, in _get
info = self._get_opt_info(name, group)
  File /usr/lib/python2.6/site-packages/nova/openstack/common/cfg.py, 
line 1161, in _get_opt_info
raise NoSuchOptError(opt_name, group)
NoSuchOptError: no such option: notification_topics
Removing descriptor: 12
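
(For reference, the NoSuchOptError above means flags.FLAGS.notification_topics
is being read before any module has registered that option. Purely as an
illustration, not verified against this ceilometer revision, a registration
with the cfg API shown in the traceback would look roughly like this:)

from nova import flags
from nova.openstack.common import cfg

# Assumed option definition; the default of ['notifications'] mirrors the
# usual nova notification topic.
notification_opts = [
    cfg.ListOpt('notification_topics',
                default=['notifications'],
                help='AMQP topic(s) used for OpenStack notifications'),
]

flags.FLAGS.register_opts(notification_opts)

(Equivalently, importing whichever nova module already registers this option
before init_host runs should avoid the error; the point is simply that the
option has to be registered before it is looked up.)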
 
Can anyone help me? I'm waiting for your reply. Thanks!
 
   --
   Best Regards
  
 ZhangJialong
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Nova Cells

2012-07-12 Thread Chris Behrens
Sorry about this.  I've had other priorities at Rackspace lately, but I have a 
functioning implementation that I hope to start merging ASAP.

I'm on vacation for a couple days, so I can provide a better update on Monday.

On Jul 12, 2012, at 4:29 PM, Jaesuk Ahn bluejay@gmail.com wrote:

 +1 here. 
 
 I am also highly interested in an update on cells implementation since the 
 last design summit. 
 I have not seen any blueprints or implementation progress updates from the 
 community. 
 
 We have been reviewing all the cell-related docs and info and have been 
 trying to put together potential reference use cases.
 It would be super helpful if someone could point me to any kind of update 
 regarding cells.
 Thanks in advance. 
 
 
 -- 
 Jaesuk Ahn, Ph.D.
 Team Leader | Cloud OS Dev. Team
 Cloud Infrastructure Department
 KT (Korea Telecom)
 T. +82-10-9888-0328 | F. +82-303-0993-5340
 Active member on OpenStack Korea Community
 
 
 On Fri, Jul 13, 2012 at 4:47 AM, Nathanael Burton 
 nathanael.i.bur...@gmail.com wrote:
 That's a good question. I'm also interested in an update on cells. How
 is progress on cells going? Is there a blueprint for it? Is it
 targeted to a folsom milestone?
 
 Thanks,
 
 Nate
 
 On Thu, Jul 12, 2012 at 1:39 PM, Michael J Fork mjf...@us.ibm.com wrote:
  Outside of the Etherpad (http://etherpad.openstack.org/FolsomComputeCells)
  and presentation referenced there (http://comstud.com/FolsomCells.pdf), are
  there additional details available on the architecture / implementation of
  Cells?
 
  Thanks.
 
  Michael
 
  -
  Michael Fork
  Cloud Architect, Emerging Solutions
  IBM Systems  Technology Group
 
 
  ___
  Mailing list: https://launchpad.net/~openstack
  Post to : openstack@lists.launchpad.net
  Unsubscribe : https://launchpad.net/~openstack
  More help   : https://help.launchpad.net/ListHelp
 
 
 ___
 Mailing list: https://launchpad.net/~openstack
 Post to : openstack@lists.launchpad.net
 Unsubscribe : https://launchpad.net/~openstack
 More help   : https://help.launchpad.net/ListHelp
 
 
 
 ___
 Mailing list: https://launchpad.net/~openstack
 Post to : openstack@lists.launchpad.net
 Unsubscribe : https://launchpad.net/~openstack
 More help   : https://help.launchpad.net/ListHelp
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Release Upgrades (was [nova] [cinder] Nova-volume vs. Cinder in Folsom)

2012-07-12 Thread Narayan Desai
On Thu, Jul 12, 2012 at 4:36 PM, Vishvananda Ishaya
vishvana...@gmail.com wrote:

 Upgrading has been painful and we are striving to improve this process
 as much as possible.

I think that this needs to be a core value of the developer community,
if Openstack is going to become pervasive.

 I would like to take this a bit further. Documentation is a great first step,
 but I would actually like to have an actual Jenkins test that does the upgrade
 from essex to Folsom with live resources created. I think this is the only way
 that we can be sure that the upgrade is working properly.

I don't want to dampen enthusiasm around this issue at all, but I
think this goal is pretty difficult to achieve, just due to the
existing complexity in real deployments. I'm also worried that this
would take away from a high level upgrade documentation goal.


 The first version of this doesn't even have to be on a full cluster. I'm 
 thinking
 something as simple as:

 * configure devstack to checkout stable/essex from all of the projects
 * run the system, launch some instances and volumes
 * terminate the workers
 * upgrade all of the code to folsom
 * do any manual upgrade steps (nova-manage db sync, cinder migrate, etc.)
 * run all the workers
 * make sure the existing data still works and new api commands run

 The manual upgrade steps should be contained in a single script so that the
 distros can use it to help make their package upgrades and deployers can
 use it for reference when upgrading their clusters.

 This is something we can start working on today and we can run after each
 commit. Then we will immediately know if we do something that breaks
 upgradability, and we will have a testable documented process for upgrading.

Having a testable process for upgrading is definitely a start. I guess
that I am more or less resigned to upgrades being pretty effort
intensive, at least in the near term, so my personal bias is towards
getting pretty extreme documentation done first.
 -nld

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Testing NOVA-OVS-Quantum setup

2012-07-12 Thread Trinath Somanchi
Hi-

With respect to your document on OpenStack, OVS and Quantum:

I'm unable to understand the setup of OVS and Quantum on the
ESSEX-2/compute-node machine.

In the document https://lists.launchpad.net/openstack/pdfuNjHGvU5UA.pdf,
page 17/19, in the section Open-vSwitch & Quantum-agent, do we need to install
OVS and Quantum on the compute nodes too?



On Wed, Jun 20, 2012 at 5:38 PM, Emilien Macchi emilien.mac...@stackops.com
 wrote:

 Hi,

 I wrote a documentation about installation of Essex with Quantum, OVS in
 multi-node architecture.

 You can read it here :

 https://github.com/EmilienM/doc-openstack



 Regards




 On Wed, Jun 20, 2012 at 1:30 PM, Joseph Suh j...@isi.edu wrote:

 Trinath,

 I found the following Quantun admin guide was useful for that purpose:


 http://www.google.com/url?sa=trct=jq=esrc=ssource=webcd=4ved=0CHsQFjADurl=http%3A%2F%2Fdocs.openstack.org%2Ftrunk%2Fopenstack-network%2Fadmin%2Fquantum-admin-guide-trunk.pdfei=prHhT-SMMMa70QG_uJTwAwusg=AFQjCNEq2fuo4dQrvFQT0zw8v05zMdIFWwsig2=6eAgFutMS_VLrhpR4Lhy2w

 Thanks,

 Joseph

 
 (w) 703-248-6160
 (f) 703-812-3712
 3811 N. Fairfax Drive Suite 200
 Arlington, VA, 22203, USA
 http://www.east.isi.edu/~jsuh

 - Original Message -
 From: Trinath Somanchi trinath.soman...@gmail.com
 To: openstack@lists.launchpad.net
 Sent: Wednesday, June 20, 2012 7:04:21 AM
 Subject: [Openstack] Testing NOVA-OVS-Quantum setup


 Hi-


 I have installed and configured a NOVA-OVS-Quantum based setup using the guides
 provided by OpenStack and OVS.


 I have an instance up and running.


 I'm new to Openstack.


 Can anyone help me out with testing/validating whether the instance is up
 with OVS and Quantum?


 Thanking you..


 --
 Regards,
 --
 Trinath Somanchi,
 +91 9866 235 130

 ___
 Mailing list: https://launchpad.net/~openstack
 Post to : openstack@lists.launchpad.net
 Unsubscribe : https://launchpad.net/~openstack
 More help   : https://help.launchpad.net/ListHelp

 ___
 Mailing list: https://launchpad.net/~openstack
 Post to : openstack@lists.launchpad.net
 Unsubscribe : https://launchpad.net/~openstack
 More help   : https://help.launchpad.net/ListHelp




 --
 Emilien Macchi
 *SysAdmin (Intern)*
 *www.stackops.com* | emilien.mac...@stackops.com**
 *

 *

 * PRIVILEGED AND CONFIDENTIAL 
 We hereby inform you, as addressee of this message, that e-mail and
 Internet do not guarantee the confidentiality, nor the completeness or
 proper reception of the messages sent and, thus, STACKOPS TECHNOLOGIES S.L.
 does not assume any liability for those circumstances. Should you not agree
 to the use of e-mail or to communications via Internet, you are kindly
 requested to notify us immediately. This message is intended exclusively
 for the person to whom it is addressed and contains privileged and
 confidential information protected from disclosure by law. If you are not
 the addressee indicated in this message, you should immediately delete it
 and any attachments and notify the sender by reply e-mail. In such case,
 you are hereby notified that any dissemination, distribution, copying or
 use of this message or any attachments, for any purpose, is strictly
 prohibited by law.




-- 
Regards,
--
Trinath Somanchi,
+91 9866 235 130
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Networking issues in Essex

2012-07-12 Thread Michael Chapman
Thanks for the tip, unfortunately the interfaces are already up.

 - Michael

On Thu, Jul 12, 2012 at 10:15 PM, Jonathan Proulx j...@csail.mit.edu wrote:


 I've only deployed openstack for the first time a couple weeks ago,
 but FWIW...

 I had similar symptoms on my Essex test deployment (on Ubuntu 12.04).
 It turned out my problem was that while the br100 bridge was up and
 configured, the underlying eth1 physical interface was down, so the bits
 went nowhere.  'ifconfig eth1 up' fixed it all, followed of course by
 fixing /etc/network/interfaces as well so this happens on its own
 in future.

 -Jon

 On Thu, Jul 12, 2012 at 02:56:57PM +1000, Michael Chapman wrote:
 :Hi all, I'm hoping I could get some assistance figuring out my networking
 :problems with a small Essex test cluster. I have a small Diablo cluster
 :running without any problems but have hit a wall in deploying Essex.
 :
 :I can launch VMs without issue and access them from the compute host, but
 :from there I can't access anything except the host, DNS services, and other
 :VMs.
 :
 :I have separate machines running keystone, glance, postgresql, rabbit-mq
 :and nova-api. They're all on the .os domain with 172.22.1.X IPs
 :
 :I have one machine running nova-compute, nova-network and nova-api, with a
 :public address 192.43.239.175 and also an IP on the 172.22.1.X subnet in
 :the .os domain. It has the following nova.conf:
 :
 :--dhcpbridge_flagfile=/etc/nova/nova.conf
 :--dhcpbridge=/usr/bin/nova-dhcpbridge
 :--logdir=/var/log/nova
 :--state_path=/var/lib/nova
 :--lock_path=/var/lock/nova
 :--force_dhcp_release
 :--iscsi_helper=tgtadm
 :--libvirt_use_virtio_for_bridges
 :--connection_type=libvirt
 :--root_helper=sudo nova-rootwrap
 :--verbose
 :--ec2_private_dns_show_ip
 :
 :--network_manager=nova.network.manager.FlatDHCPManager
 :--rabbit_host=os-amqp.os
 :--sql_connection=postgresql://[user]:[password]@os-sql.os/nova
 :--image_service=nova.image.glance.GlanceImageService
 :--glance_api_servers=os-glance.os:9292
 :--auth_strategy=keystone
 :--scheduler_driver=nova.scheduler.simple.SimpleScheduler
 :--keystone_ec2_url=http://os-key.os:5000/v2.0/ec2tokens
 :
 :--api_paste_config=/etc/nova/api-paste.ini
 :
 :--my_ip=192.43.239.175
 :--flat_interface=eth0
 :--public_interface=eth1
 :--multi_host=True
 :--routing_source_ip=192.43.239.175
 :--network_host=192.43.239.175
 :
 :--dmz_cidr=$my_ip
 :
 :--ec2_host=192.43.239.175
 :--ec2_dmz_host=192.43.239.175
 :
 :I believe I'm seeing a natting issue of some sort - my VMs cannot ping
 :external IPs, though DNS seems to work.
 :ubuntu@monday:~$ ping www.google.com
 :PING www.l.google.com (74.125.237.148) 56(84) bytes of data.
 :AWKWARD SILENCE
 :
 :When I do a tcpdump on the compute host things seem fairly normal, even
 :though nothing is getting back to the VM
 :
 :root@ncios1:~# tcpdump icmp -i br100
 :tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
 :listening on br100, link-type EN10MB (Ethernet), capture size 65535 bytes
 :14:35:28.046416 IP 10.0.0.8 > syd01s13-in-f20.1e100.net: ICMP echo request,
 :id 5002, seq 9, length 64
 :14:35:28.051477 IP syd01s13-in-f20.1e100.net > 10.0.0.8: ICMP echo reply,
 :id 5002, seq 9, length 64
 :14:35:29.054505 IP 10.0.0.8 > syd01s13-in-f20.1e100.net: ICMP echo request,
 :id 5002, seq 10, length 64
 :14:35:29.059556 IP syd01s13-in-f20.1e100.net > 10.0.0.8: ICMP echo reply,
 :id 5002, seq 10, length 64
 :
 :I've pored over the iptables nat rules and can't see anything amiss apart
 :from the masquerades that are automatically added: (I've cut out some empty
 :chains for brevity)
 :
 :root@ncios1:~# iptables -L -t nat -v
 :Chain PREROUTING (policy ACCEPT 22 packets, 2153 bytes)
 : pkts bytes target                    prot opt in   out  source    destination
 :   22  2153 nova-network-PREROUTING  all  --  any  any  anywhere  anywhere
 :   22  2153 nova-compute-PREROUTING  all  --  any  any  anywhere  anywhere
 :   22  2153 nova-api-PREROUTING      all  --  any  any  anywhere  anywhere
 :
 :Chain INPUT (policy ACCEPT 12 packets, 1573 bytes)
 : pkts bytes target  prot opt in   out  source  destination
 :
 :Chain OUTPUT (policy ACCEPT 31 packets, 2021 bytes)
 : pkts bytes target                prot opt in   out  source    destination
 :   31  2021 nova-network-OUTPUT  all  --  any  any  anywhere  anywhere
 :   31  2021 nova-compute-OUTPUT  all  --  any  any  anywhere  anywhere
 :   31  2021 nova-api-OUTPUT      all  --  any  any  anywhere  anywhere
 :
 :Chain POSTROUTING (policy ACCEPT 30 packets, 1961 bytes)
 : pkts bytes target                     prot opt in   out  source    destination
 :   31  2021 nova-network-POSTROUTING   all  --  any  any  anywhere  anywhere
 :   30  1961 nova-compute-POSTROUTING   all  --  any  any  anywhere  anywhere
 :   30  1961 nova-api-POSTROUTING       all  --  any  any  anywhere  anywhere
 :   30  1961 nova-postrouting-bottom    all  --  any  any  anywhere
 

[Openstack-ubuntu-testing-notifications] Build Still Failing: quantal_folsom_horizon_trunk #44

2012-07-12 Thread openstack-testing-bot
Title: quantal_folsom_horizon_trunk
General Information
  Build result:   BUILD FAILURE
  Build URL:      https://jenkins.qa.ubuntu.com/job/quantal_folsom_horizon_trunk/44/
  Project:        quantal_folsom_horizon_trunk
  Date of build:  Thu, 12 Jul 2012 19:01:53 -0400
  Build duration: 2 min 35 sec
  Build cause:    Started by an SCM change
  Built on:       pkg-builder

Health Report
  Build stability: All recent builds failed. (score: 0)

Changes
  Allow arbitrarily setting the entry point in a workflow. (by gabriel)
    edit horizon/workflows/base.py
    edit horizon/workflows/views.py
    edit horizon/tests/workflows_tests.py

Console Output
[...truncated 1385 lines...]
Applying patch turn-off-debug.patch
patching file openstack_dashboard/local/local_settings.py.example
Applying patch use-memcache.patch
patching file openstack_dashboard/local/local_settings.py.example
Applying patch fix-ubuntu-tests.patch
patching file horizon/tests/testsettings.py
Hunk #1 succeeded at 91 (offset 2 lines).
patching file run_tests.sh
Hunk #1 FAILED at 267.
1 out of 1 hunk FAILED -- rejects in file run_tests.sh
Patch fix-ubuntu-tests.patch does not apply (enforce with -f)
ERROR:root:Error occurred during package creation/build
ERROR:root:Command '['/usr/bin/schroot', '-r', '-c', 'quantal-amd64-fef681a0-faee-45f2-b349-ae99977987c9', '-u', 'jenkins', '--', 'bzr', 'builddeb', '-S', '--', '-sa', '-us', '-uc']' returned non-zero exit status 3
INFO:root:Complete command log:
INFO:root:Destroying schroot.
git archive master --format tar --prefix horizon-2012.2-201207121901/
git archive master --format tar --prefix horizon-2012.2-201207121901/
git log -n1 --no-merges --pretty=format:%H
git log f3dc3b93c49d922505b2d15e6b064ba9ed716413..HEAD --no-merges --pretty=format:[%h] %s
bzr branch lp:~openstack-ubuntu-testing/horizon/quantal-folsom-proposed horizon
bzr merge lp:~openstack-ubuntu-testing/horizon/quantal-folsom --force
dch -b -D quantal --newversion 2012.2+git201207121901~quantal-0ubuntu1 Automated Ubuntu testing build:
dch -b -D quantal --newversion 2012.2+git201207121901~quantal-0ubuntu1 Automated Ubuntu testing build:
debcommit
bzr builddeb -S -- -sa -us -uc
mk-build-deps -i -r -t apt-get -y /tmp/tmpKdQKEl/horizon/debian/control
bzr builddeb -S -- -sa -us -uc
Traceback (most recent call last):
  File "/var/lib/jenkins/tools/openstack-ubuntu-testing/bin/build-package", line 135, in <module>
    raise e
subprocess.CalledProcessError: Command '['/usr/bin/schroot', '-r', '-c', 'quantal-amd64-fef681a0-faee-45f2-b349-ae99977987c9', '-u', 'jenkins', '--', 'bzr', 'builddeb', '-S', '--', '-sa', '-us', '-uc']' returned non-zero exit status 3
Error in sys.excepthook:
Traceback (most recent call last):
  File "/usr/lib/python2.7/dist-packages/apport_python_hook.py", line 68, in apport_excepthook
    binary = os.path.realpath(os.path.join(os.getcwdu(), sys.argv[0]))
OSError: [Errno 2] No such file or directory
Original exception was:
Traceback (most recent call last):
  File "/var/lib/jenkins/tools/openstack-ubuntu-testing/bin/build-package", line 135, in <module>
    raise e
subprocess.CalledProcessError: Command '['/usr/bin/schroot', '-r', '-c', 'quantal-amd64-fef681a0-faee-45f2-b349-ae99977987c9', '-u', 'jenkins', '--', 'bzr', 'builddeb', '-S', '--', '-sa', '-us', '-uc']' returned non-zero exit status 3
Build step 'Execute shell' marked build as failure
Email was triggered for: Failure
Sending email for trigger: Failure
-- 
Mailing list: https://launchpad.net/~openstack-ubuntu-testing-notifications
Post to : openstack-ubuntu-testing-notifications@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack-ubuntu-testing-notifications
More help   : https://help.launchpad.net/ListHelp