Re: [Openstack] Please approve Allow [:print:] chars for security group names

2012-06-01 Thread Vishvananda Ishaya
It doesn't have two +2s from core members, and it has a -1 from Sandy, which will 
block it.  Hopefully he can pop in and change it.

Vish
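For illustration, a standalone version of a [:print:] check like the one under review might look like this (a hypothetical helper, not the actual patch; for ASCII, [:print:] corresponds to the range 0x20-0x7E):

```python
import re

# Printable ASCII (space through tilde) approximates POSIX [[:print:]]
# in the C locale; this validator is a sketch for illustration only,
# not the code from the review.
PRINTABLE = re.compile(r'^[ -~]+$')

def is_valid_security_group_name(name):
    """Return True if name is non-empty and contains only printable ASCII."""
    return bool(PRINTABLE.match(name))
```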

On Jun 1, 2012, at 12:09 AM, Alexey Ababilov wrote:

> Hi!
> Please, could someone approve https://review.openstack.org/#/c/7584/ ? It 
> passed two code reviews and is verified.
> It's about validating security group names in EC2 API.
> 
> -- 
> Alessio Ababilov
> Software Engineer
> Grid Dynamics
> 
> ___
> Mailing list: https://launchpad.net/~openstack
> Post to : openstack@lists.launchpad.net
> Unsubscribe : https://launchpad.net/~openstack
> More help   : https://help.launchpad.net/ListHelp



Re: [Openstack] Understanding shutdown VM behavior

2012-06-01 Thread Vishvananda Ishaya
I did some cleanup of stop and power_off in the review here.

https://review.openstack.org/#/c/8021/

I removed the weird shutdown_terminate handling. Honestly, I feel like
that is compatibility we don't need. It should be up to the provider whether
a stop_instances counts as a terminate. In my mind they are two different 
things.

Comments welcome on the review.

Vish
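For reference, the divergent behavior Yun describes below could be sketched as follows (a hypothetical helper for illustration only, not actual nova code):

```python
def resolve_stop_action(api, shutdown_terminate):
    """Sketch of the behavior described in the thread: the EC2 API honors
    the shutdown_terminate flag, while the OS API always just stops."""
    if api == "ec2" and shutdown_terminate:
        return "terminate"
    return "stop"
```

So the same user action can terminate an instance via the EC2 API but merely stop it via the OS API, which is the inconsistency under discussion.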

On May 31, 2012, at 6:40 PM, Yun Mao wrote:

> shutdown, stop, and power_off are synonyms in this discussion. They all
> mean stopping the VM from running while keeping the disk image and network,
> so that the VM can be started up again.
> 
> There are three ways to do it: 1) using EC2 stop-instance API. 2) use
> OS API stop-server. 3) inside the VM, execute halt or equivalent.
> However, the devil is in the details.
> 
> In EC2 API, a shutdown_terminate flag is checked when a stop-instance
> call is issued. If it's true, then stop-instances actually means
> terminate instances. The flag is true by default unless there is block
> device mapping provided, and it doesn't appear to be configurable by a
> user.
> 
> In the OS API, it's defined in v1.1; neither the specification nor the
> implementation checks the shutdown_terminate flag at all. It will
> always stop instead of terminate.
> 
> So, when shutdown_terminate is true (default), the OS API and the EC2
> API will behave differently. If we accept this, it might still be
> acceptable. After all they are different APIs and could have different
> behavior. But the pickle is the case where a user initiates a shutdown
> inside the VM. What's the expected behavior after it's detected?
> Should it respect the shutdown_terminate flag or work more like the OS
> API?  Right now when a shutdown in a VM is detected, the vm state is
> updated to SHUTOFF and that's pretty much it.
> 
> To summarize, there are 3 ways of doing the same thing, each now has a
> different behavior. I'd vote to patch the code to be a little more
> consistent. But what should be the right behavior?
> 
> Yun
> 



Re: [Openstack] question about security

2012-06-01 Thread Vishvananda Ishaya
Generally I handle this by using a different eth device (or vlan) for the 
instance network.  Then you make sure that no services on compute are listening 
on 0.0.0.0.

If you have only one interface, for example, you can run three vlans across it:

eth0:10 -> public network, for routing and floating ips and such. Nothing
should listen here
eth0:11 -> management network <192.168.0.0/24 range>. Rabbit and mysql run on
this network. All services (ssh, etc.) run here
eth0:12 -> vm network <10.0.0.0/8 range> for vms. Nothing should listen here
(except dnsmasq, obviously)
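To keep services off the public and vm networks, you bind them explicitly to the management address rather than 0.0.0.0. As a rough sketch (the 192.168.0.1 address and file paths are assumptions for the layout above):

```ini
# /etc/mysql/my.cnf -- bind mysql to the management network only
[mysqld]
bind-address = 192.168.0.1

# /etc/rabbitmq/rabbitmq-env.conf -- same idea for rabbit
NODE_IP_ADDRESS=192.168.0.1
```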

Vish

On May 31, 2012, at 7:35 PM, William Herry wrote:

> We use FlatDHCP network mode and everything works fine; each instance gets a 
> 10.0.0.x ip with 10.0.0.1 as its gateway.
> Our problem is that the services (most of the time on the compute node) are 
> barely restricted from the instances: 
> an instance can see a lot of open ports on the services. I am wondering whether 
> this is a security problem.
> 
> Restricting the services on the compute node so they don't listen on the 
> 10.0.0.x ip is the only way I can think of to solve this. Are there any other ways?
> 
> Thanks
> 
> -- 
> 
> 
> 
> William Herry
> 
> williamherrych...@gmail.com
> 



Re: [Openstack] inter vm communication issue

2012-06-01 Thread Vishvananda Ishaya
Ideas inline.

Vish

On May 31, 2012, at 1:41 PM, Bram De Wilde wrote:

> Hi all,
> 
> Can I request some help in resolving a vlan networking issue we are 
> encountering in the final stages of our openstack installation?
> 
> We have installed a multi host vlan network configuration on 3 hosts, all 
> running ubuntu 12.04 (openstack essex).
> 
> One of these hosts is a "public" host running the compute and network 
> services, the other 2 hosts are on a private vlan and are running compute and 
> network as well as all other components of the openstack installation.
> All physical hosts have 2 nic's in a bond (for redundancy) configured with an 
> ip in the 10.0.0.0/24 range as a private network.
> 
> The vm networks we have created are in the 192.168.0.0/16 range and the 
> appropriate vlan tagged networks have been created on the switch.
> 
> All openstack components are running fine as we can create, run and live 
> migrate instances with no issues. All vm's can contact all physical hosts in 
> the 10.0.0.0/24 range as well as the outside world using a proxy running on 
> the 10.0.0.254 ip.
> 
> The problem arises when we try to communicate between vm's running on 
> different hosts:
> - name resolution is not working for vm's running on different physical hosts 
> ( I suppose dns should work, no? )

This is expected in multihost mode. The copy of dnsmasq that runs on each host 
only knows about its own vms.  You will need to set up a shared dns if you 
really need this to work.

> - all packets of communication performed using the ip of the vm directly ( 
> ping, ssh, ...) are arriving on the bridge interface of the physical host 
> running the vm we are trying to reach, but the vm itself is not picking up or 
> responding to the requests...

Have you set up security group rules to allow the traffic? That is the only 
reason I can think of that packets wouldn't be getting into the vnet if it is 
showing up on the bridge.  There is also a possibility that bonding + bridging + 
vlans has some sort of an issue.

> 
> The weird thing is, when we start 2 vm's on the same physical host, name 
> resolution and networking are working fine. When we then live-migrate one of 
> the vm's to a new physical host, the networking will continue to work for a 
> varying amount of time after the live migration has completed! A variable 
> amount of the packets start getting lost until we end up with no 
> communication being possible between the virtual machines. ( after new 
> dhcp lease? arp table getting flushed?... )
> 
> As no errors are appearing in any of the nova logs (all on verbose...) or in 
> the syslog (from the dnsmasq) I really have no clue as to what might be 
> causing this issue... or is it a bug?
> 
> My feeling is the per physical host vm gateway is not performing as it should 
> and not routing the packets correctly between physical hosts, but I have 
> no idea how to check this other than capturing the packets on the bridge 
> interface and observing the requests not getting answered...
> Another option is the problem residing with the 2 physical interfaces in the 
> network bond... but wireshark is showing all packets are arriving on the 
> bridge interface where the vm we are trying to reach resides, so this 
> seems unlikely?
> 
> I have included the nova.conf the ifconfig and the iptables (+nat) of one of 
> the physical hosts in this mail but can provide any other output if this 
> might be helpful.
> 
> Kind regards,
> Bram
> 
> ###
> #  /etc/nova/nova.conf
> ###
> 
> --dhcpbridge_flagfile=/etc/nova/nova.conf
> --dhcpbridge=/usr/bin/nova-dhcpbridge
> --logdir=/var/log/nova
> --state_path=/var/lib/nova
> --lock_path=/var/lock/nova
> ##--force_dhcp_release
> ##--iscsi_helper=tgtadm
> --libvirt_use_virtio_for_bridges
> --connection_type=libvirt
> --root_helper=sudo nova-rootwrap
> --verbose
> --ec2_private_dns_show_ip
> --auth_strategy=keystone
> --rabbit_host=10.0.0.100
> --nova_url=http://10.0.0.100:8774/v1.1/
> --floating_range=999.999.999.0/24
> --fixed_range=192.168.0.0/16
> --routing_source_ip=10.0.0.103
> --sql_connection=postgresql://clouddbadmin:password@10.0.0.100/nova
> --glance_api_servers=10.0.0.100:9292
> --image_service=nova.image.glance.GlanceImageService
> --network_manager=nova.network.manager.VlanManager
> --vlan_interface=bond0
> --public_interface=eth0
> --multi-host=true
> 
> ###
> #  ifconfig
> ###
> 
> bond0 Link encap:Ethernet  HWaddr bc:30:5b:dd:0c:8a  
>  inet addr:10.0.0.103  Bcast:10.0.0.255  Mask:255.255.255.0
>  inet6 addr: fe80::be30:5bff:fedd:c8a/64 Scope:Link
>  UP BROADCAST RUNNING MASTER MULTICAST  MTU:1500  Metric:1
>  RX packets:1400289 errors:0 dropped:67725 overruns:0 frame:0
>  TX packets:2414277 errors:0 dropped:0 overruns:0 carrier:0
>  collisions:0 txqueuelen:0 
>  RX bytes:1288957456 (1.2 GB)  TX bytes:3217320483 (3.2 G

Re: [Openstack] dhcp is not leasing an ip address in vlan mode

2012-05-31 Thread Vishvananda Ishaya
Do you see sent and received packets on the vlan?  I would suspect that you 
actually don't have the vlans trunked on the ports, so the packets aren't making 
it across the switch.

Vish

On May 31, 2012, at 9:53 AM, Vijay wrote:

> Thanks for the reply. Network controller assigns a private ip address to the 
> vm launched on compute node. However, I still cannot ping this ip address 
> from the network(controller node). I am running nova-network service only on 
> the controller.
>  
> Thanks,
> -vj
> From: Narayan Desai 
> To: Vijay  
> Cc: "openstack@lists.launchpad.net"  
> Sent: Wednesday, May 30, 2012 5:28 PM
> Subject: Re: [Openstack] dhcp is not leasing an ip address in vlan mode
> 
> This sounds like it might be working properly. In VLAN mode, all
> instances are connected to one of the project vlans. The .1 address
> (gateway, dhcp, etc) exists on an interface on the nova-network node
> (or one of them, in the case that you are running multiple). This
> interface is bridged to a tagged interface on the appropriate vlan
> tag. On the nova-compute nodes, a vnet interface for the instance is
> bridged to the vlan tagged interface. On the compute node, there isn't
> an IP interface on this network, so the private IP for instances isn't
> reachable, even if the instance is running on the same node.
> 
> The canonical test for correct network function is if an instance is
> reachable via ping from the nova-network server that is currently
> serving the instance's project network.
> hth
> -nld
> 
> On Wed, May 30, 2012 at 5:42 PM, Vijay  wrote:
> > Hello,
> > I am trying install Essex in VLAN mode on multiple compute nodes.
> >
> > I am able to lauch instances on controller (which also runs nova-compute)
> > and ping/ssh those instances.
> > I am able to launch instances on compute only node. However, I cannot ping
> > the VM launched  on compute only node.
> > When i did the euca-get-console-output on that instance, I see that it is
> > not getting an IP leased from DHCP .. Because of that it is not able to
> > reach metadata server.
> > Any help is appreciated.
> >
> > Console output is
> > udhcpc (v1.17.2) started
> > Sending discover...
> > Sending discover...
> > Sending discover...
> > No lease, forking to background
> > starting DHCP forEthernet interface eth0  [  OK  ]
> > cloud-setup: checking
> > http://169.254.169.254/2009-04-04/meta-data/instance-id
> > wget: can't connect to remote host (169.254.169.254): Network is unreachable
> > cloud-setup: failed 1/30: up 17.71. request failed
> > nova.conf:
> > --dhcpbridge_flagfile=/etc/nova/nova.conf
> > --dhcpbridge=/usr/local/bin/nova-dhcpbridge
> > --logdir=/var/log/nova
> > --state_path=/var/lib/nova
> > --lock_path=/var/lock/nova
> > --force_dhcp_release=True
> > --use_deprecated_auth
> > --iscsi_helper=tgtadm
> > --verbose
> > --vncserver_listen=0.0.0.0
> > --sql_connection=mysql://novadbadmin:novasecret@192.168.198.85/nova
> > --daemonize
> > --s3_host=192.168.198.85
> > --rabbit_host=192.168.198.85
> > --cc_host=192.168.198.85
> > --ospi_host=192.168.198.85
> > --ec2_host=192.168.198.85
> > --ec2_url=http://192.168.198.85:8773/services/Cloud
> > --nova_url=http://192.168.198.85:8774/v1.1/
> >
> > # VLAN mode
> > --flat_interface=eth1
> > --flat_injected=False
> > --flat_network_bridge=br100
> > --flat_network_dhcp_start=192.168.4.2
> >
> > --network_manager=nova.network.manager.VlanManager
> > --vlan_interface=eth1
> > --public_interface=vlan100
> > --allow_same_net_traffic=True
> > --fixed_range=192.168.4.0/24
> > --network_size=256
> > --FAKE_subdomain=ec2
> > --routing_source_ip=192.168.198.85
> > --glance_api_servers=192.168.198.85:9292
> > --image_service=nova.image.glance.GlanceImageService
> > --iscsi_ip_prefix=192.168.
> > --connection_type=libvirt
> > --libvirt_type=qemu
> >
> > # Keystone
> > --auth_strategy=keystone
> > --api_paste_config=/etc/nova/api-paste.ini
> > --keystone_ec2_url=http://192.168.198.85:5000/v2.0/ec2tokens
> >
> >
> >
> >
> > Thanks,
> > -vj
> >



Re: [Openstack] [OpenStack][Nova] deference between live-migration and migrate

2012-05-31 Thread Vishvananda Ishaya

On May 25, 2012, at 2:36 AM, John Garbutt wrote:

> I have been meaning to draft a blueprint around this.
>  
> What we have today:
> · Migrate: moves a VM from one server to another; it reboots across the 
> move (I think) and the destination is picked by the scheduler
> · LiveMigration: moves a VM from one server to another; the VM doesn't 
> appear to reboot, but you need to specify the destination
>  
> I propose we extend the Migrate API (thinking about the nova CLI here really) to 
> include:
> · An optional flag to force non-live migration, defaulting to live migration
> · An optional destination host, by default letting the scheduler choose
> · Deprecating the existing live migration API and CLI calls
> What do people think?

+1

Keep in mind that we actually have three options:

live migration on shared storage
live migration without shared storage (block migration)
resize/migrate

Yun actually suggested that resize/migrate be simplified to do the following 
instead of scp'ing the file over:
 * snapshot to glance
 * boot new image from snapshot

This would definitely simplify the code, unfortunately it could have 
billing/metering repercussions.

Vish

>  
> I am in the process of adding Live migration support to XenServer:
> https://blueprints.launchpad.net/nova/+spec/xenapi-live-migration
>  
> If people like the idea, I should get chance to draft and implement this for 
> Folsom, but I am happy for others to do this if they are also interested.
>  
> Cheers,
> John



Re: [Openstack] dnsmasq

2012-05-30 Thread Vishvananda Ishaya
This is correct.

Vish
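The flow in the quoted message can be sketched roughly like this (hypothetical helper names and a simplified hosts-file format; nova's actual implementation differs in detail):

```python
import os
import signal

def dhcp_host_line(mac, hostname, ip):
    """One entry in dnsmasq's --dhcp-hostsfile format: MAC,hostname,IP."""
    return "%s,%s,%s" % (mac, hostname, ip)

def add_dhcp_host(conf_path, mac, hostname, ip, dnsmasq_pid):
    """Append the MAC -> IP mapping and send SIGHUP so dnsmasq rereads
    the file, which is conceptually what nova does on instance boot."""
    with open(conf_path, "a") as f:
        f.write(dhcp_host_line(mac, hostname, ip) + "\n")
    os.kill(dnsmasq_pid, signal.SIGHUP)
```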

On May 24, 2012, at 9:45 PM, William Herry wrote:

> Hi
> 
> sorry for so many silly questions
> 
> I am interested in how nova uses dnsmasq in FlatDHCP network mode
> 
> I guess it works like this:
> nova writes the MAC -> IP info to /var/lib/nova/networks/nova-br1.conf when it 
> boots an instance
> 
> nova tells dnsmasq to reread /var/lib/nova/networks/nova-br1.conf 
> 
> the instance boots up and sends a DHCP request
> 
> dnsmasq receives the DHCP request and gives the instance the proper IP
> 
> I know this may be totally wrong and I want someone to correct me; more details 
> would be great
> 
> Thanks in advance
> 
> Regards
> 
> -- 
> 
> ===
> William Herry
> 
> williamherrych...@gmail.com
> 



Re: [Openstack] release fixed_ip

2012-05-30 Thread Vishvananda Ishaya
If you are using flatdhcp or vlan mode where nova is managing the leases, do 
not set the value of fixed_ip_disassociate_timeout lower than the 
dhcp_lease_time, or you might end up giving an ip to an instance and having 
dnsmasq fail to hand it out.  The best way to reuse ips quickly is to set the 
config:

force_dhcp_release=true

This will force nova to send out a release packet when it destroys the vm so 
the ip is immediately released. Just be aware that it requires some extra 
binaries that come with dnsmasq that may not exist on all linux distributions.

Vish

On May 29, 2012, at 2:42 AM, Vaze, Mandar wrote:

>> I want to know if there is a timeout for a fixed ip to be reused, how long 
>> this time is
> 
> Parameter you are looking for is fixed_ip_disassociate_timeout. Set it to low 
> value like "1" (This is number of seconds)
> Default value is 600 i.e. 10 minutes.
> 
> -Mandar
> 
> 




Re: [Openstack] Why is an image required when booting from volume

2012-05-26 Thread Vishvananda Ishaya
If a separate kernel and ramdisk are needed for the boot from volume, they are 
pulled from the image properties.  Otherwise the image is basically useless.

Vish

On May 26, 2012, at 8:22 AM, Lorin Hochstein wrote:

> I'm trying to figure out boot from volume, both so I can use it and so I can 
> add it to the docs. 
> 
> 
>  It seems that when calling "nova boot" or using Horizon, you need to specify 
> an image. Why is that?
> 
> I naively tried to create a volume image by creating a volume and then doing 
> on my volume server:
> 
> dd if=/tmp/precise-server-cloudimg-amd64-disk1.img 
> of=/dev/nova-volumes/volume-000d
> 
> Then I tried this:
> 
> $ nova boot --flavor 2 --key_name lorin --block_device_mapping 
> /dev/vda=13:::0 test
> 
> Which generated an error:
> 
> Invalid imageRef provided. (HTTP 400)
> 
> If I try to specify an image, it at least attempts to boot:
> 
> $ nova boot --flavor 2 --key_name lorin --block_device_mapping 
> /dev/vda=13:::0 --image 7d6923d9-1c13-4405-ba0c-41c7487dd6bc test
> 
> I noticed that the devstack example specifies an image: 
> https://github.com/openstack-dev/devstack/blob/master/exercises/boot_from_volume.sh:
> 
> VOL_VM_UUID=`nova boot --flavor $INSTANCE_TYPE --image $IMAGE 
> --block_device_mapping vda=$VOLUME_ID:::0 --security_groups=$SECGROUP 
> --key_name $KEY_NAME $VOL_INSTANCE_NAME | grep ' id ' | get_field 2`
> 
> Looking at nova/api/openstack/compute/servers.py, it does look like 
> _image_uuid_from_href() is called regardless of whether we are booting from 
> volume or not. What is "--image" used for when booting from volume?
> 
> 
> Take care,
> 
> Lorin
> --
> Lorin Hochstein
> Lead Architect - Cloud Services
> Nimbis Services, Inc.
> www.nimbisservices.com
> 
> 
> 
> 
> 



Re: [Openstack] Third Party APIs

2012-05-18 Thread Vishvananda Ishaya

On May 18, 2012, at 11:00 AM, Doug Davis wrote:

> 
> Vish wrote on 05/17/2012 02:18:19 PM:
> ... 
> > 3 Feature Branch in Core 
> > 
> > We are doing some work to support Feature and Subsystem branches in 
> > our CI system. 3rd party apis could live in a feature branch so that
> > they can be tested using our CI infrastructure. This is very similar
> > to the above solution, and gives us a temporary place to do 
> > development until the internal apis are more stable. Changes to 
> > internal apis and 3rd party apis could be done concurrently in the 
> > branch and tested. 
> 
> can you elaborate on this last sentence?  When you say "changes to internal 
> apis" do you mean "in general" or only when in the context of those 
> 3rd party APIs needing a change?  I can't see the core developers wanting 
> to do internal API changes in a 3rd party api branch.  I would expect 
> 3rd party api branches to mainly include just stuff that sits on top of 
> the internal APIs and (hopefully very few) internal API tweaks. 
> Which to me means that these 3rd party API branches should be continually 
> rebased off of the trunk to catch breaking changes immediately.


I agree.  I was suggesting that initially internal api changes could be made in 
the feature branch in order to enable the new top level apis, tested, and then 
proposed for merging back into core.  This is generally easier than trying to 
make changes in two separate repositories to support a feature (as we have to 
do frequently in openstack).

> 
> If I understand it correctly, of those options, I like option 3 because 
> then the CI stuff will detect breakages in the 3rd party APIs right away 
> and not until some later date when it'll be harder to fix (or undo) those 
> internal API changes.

Well it won't automatically do so, but it should allow an easy way for 
third party developers to run CI tests without setting up their own 
infrastructure.

Vish



Re: [Openstack] [Dashboard] Can't access images/snapshots

2012-05-18 Thread Vishvananda Ishaya
I think you need to update your endpoint to:

http://192.168.111.202:8776/v1/%(tenant_id)s

Note that the volume endpoint should be v1, not v2.
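The %(tenant_id)s part of the endpoint is a template that gets filled in per tenant using Python string interpolation; a quick illustration (the tenant id here is made up):

```python
endpoint_template = "http://192.168.111.202:8776/v1/%(tenant_id)s"

# Keystone substitutes the tenant's id into the template:
url = endpoint_template % {"tenant_id": "b5f0af51"}
```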

Vish

On May 18, 2012, at 6:01 AM, Leander Bessa Beernaert wrote:

> Ok, I've removed swift from the endpoints and services. Nova volume is 
> running with a 2GB file as the volume on disk, and the log files seem ok. However, 
> I still keep getting this error for volume-list 
> (http://paste.openstack.org/show/17991/) and this error for snapshot-list 
> (http://paste.openstack.org/show/17992/).
> 
> On Thu, May 17, 2012 at 7:39 PM, Gabriel Hurley  
> wrote:
> Two points:
> 
>  
> 
> Nova Volume is a required service for Essex Horizon. That’s documented, and 
> there are plans to make it optional for Folsom. However, not having it should 
> yield a pretty error message in the dashboard, not a KeyError in novaclient, 
> which leads me to my second point…
> 
>  
> 
> It sounds like your Keystone service catalog is misconfigured. If you’re 
> seeing Swift (AKA Object Store) in the dashboard, that means it’s in your 
> keystone service catalog. Swift is a completely optional component and is 
> triggered on/off by the presence of an “object-store” endpoint returned by 
> Keystone.
> 
>  
> 
> I’d check and make sure the services listed in Keystone’s catalog are correct 
> for what’s actually running in your environment.
> 
>  
> 
> All the best,
> 
>  
> 
> -  Gabriel
> 
>  
> 
> From: openstack-bounces+gabriel.hurley=nebula@lists.launchpad.net 
> [mailto:openstack-bounces+gabriel.hurley=nebula@lists.launchpad.net] On 
> Behalf Of Leander Bessa Beernaert
> Sent: Thursday, May 17, 2012 8:45 AM
> To: Sébastien Han
> Cc: openstack@lists.launchpad.net
> Subject: Re: [Openstack] [Dashboard] Can't access images/snapshots
> 
>  
> 
> Now I made sure nova-volume is installed and running. I still keep running 
> into the same problem. It also happens from the command line tool. This is 
> the output produced: http://paste.openstack.org/show/17929/
> 
> On Thu, May 17, 2012 at 11:17 AM, Leander Bessa Beernaert 
>  wrote:
> 
> I have no trouble from the command line. One thing I find peculiar is that I 
> haven't installed swift and nova-volume yet, and they show up as enabled 
> services in the dashboard. Is that normal?
> 
>  
> 
> On Wed, May 16, 2012 at 11:39 PM, Sébastien Han  
> wrote:
> 
> Hi,
> 
>  
> 
> Do you also have an error when retrieving from the command line?
> 
> 
> 
> ~Cheers!
> 
> 
> 
> 
> On Wed, May 16, 2012 at 5:38 PM, Leander Bessa Beernaert 
>  wrote:
> 
> Hello,
> 
>  
> 
> I keep running into this error when i try to list the images/snapshot in 
> dashboard: http://paste.openstack.org/show/17820/
> 
>  
> 
> This is my local_settings.py file: http://paste.openstack.org/show/17822/ , 
> am i missing something?
> 
>  
> 
> Regards,
> 
>  
> 
> Leander 
> 
>  
> 
> 
>  
> 
>  
> 
>  
> 
> 



Re: [Openstack] RFC - dynamically loading virt drivers

2012-05-17 Thread Vishvananda Ishaya

On May 17, 2012, at 1:52 PM, Sean Dague wrote:
> 
> What I'm mostly looking for is comments on approach. Is importutils the 
> preferred way to go about this (which is the nova.volume approach) now, or 
> should this be using utils.LazyPluggable as in nova.db.api, or some other 
> approach entirely? Comments, redirections, appreciated.

-1 to LazyPluggable

So we already have pluggability by just specifying a different compute_driver 
config option.  I don't like that we defer another level in compute and call 
get_connection.  IMO the best cleanup would be to remove get_connection 
altogether and just construct the driver directly based on compute_driver.

The main issue with changing this is breaking existing installs.

So I guess this would be my strategy:

a) remove get_connection from the drivers (and just have it construct the 
'connection' class directly)
b) modify the global get_connection to construct the drivers for backwards 
compatibility
c) modify the documentation to suggest changing drivers by specifying the full 
path to the driver instead of connection_type
d) rename the connection classes to something reasonable representing drivers 
(libvirt.driver:LibvirtDriver() vs libvirt.connection.LibvirtConnection)
e) bonus points if it could be done with a short path for ease of use 
(compute_driver=libvirt.LibvirtDriver vs 
compute_driver=nova.virt.libvirt.driver.LibvirtDriver)
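For context, loading a driver class from a dotted config value is roughly what importutils does; a minimal self-contained sketch (simplified, not nova's actual code):

```python
import importlib

def import_class(dotted_path):
    """Import a class from a dotted path like 'package.module.ClassName'."""
    module_name, _, class_name = dotted_path.rpartition(".")
    module = importlib.import_module(module_name)
    return getattr(module, class_name)

# With this, compute_driver could name any driver class directly, e.g.
# import_class("nova.virt.libvirt.driver.LibvirtDriver") in a nova tree.
```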

> 
> * one test fails for Fake in test_virt_drivers, but only when it's run as part 
> of the full unit test suite, not when run on its own. It looks like it has to do 
> with FakeConnection.instance() caching, which actually confuses me a bit, as I 
> would have assumed one unit test file couldn't affect another (i.e. they 
> started with a clean env each time).

Generally breakage like this is due to some global state that is not cleaned 
up, so if FakeConnection is caching globally, then this could happen.

Vish




Re: [Openstack] Third Party APIs

2012-05-17 Thread Vishvananda Ishaya
On May 17, 2012, at 11:57 AM, Andy Edmonds wrote:

> Hey Vish,
> In the case of option 2, can such a isolated component remain under the 
> 'nova' module (e.g. the ec2 module under nova/api/) on the nova repository or 
> must it be on a separate repository and yet be, via some means, made easily 
> integrated to the master branch code?

Remaining under the nova module would be option 3 (doing it in a feature 
branch). Option 2 is putting it in a separate repository.

Vish

> 
> Cheers,
> 
> Andy
> andy.edmonds.be
> 
> 
> On Thu, May 17, 2012 at 8:18 PM, Vishvananda Ishaya  
> wrote:
> Hello Everyone,
> 
> In the ppb meeting last week[1] we discussed third party apis and decided 
> that the policy is not to include them in core.  Specifically the motion that 
> passed is:
> 
An OpenStack project will support an official API in its core implementation 
(the OpenStack API). Other APIs will be implemented external to core. The 
core project will expose stable, complete, performant interfaces so that 3rd 
party APIs can be implemented in a complete and performant manner.
> 
> So now that we have settled on a long term goal for third party apis, we need 
> to deal with the short term. We do have a stable interface in Nova in the 
> form of the OpenStack API but it remains to be seen whether it is complete 
> and performant enough to allow other apis to be layered on top of it.
> 
> Ultimately, I would like to see a stable internal python api that the other 
> apis could speak through (including the OpenStack api layer), but it will 
> probably take a while to get there. In the short term I see three 
> possibilities for third party apis.
> 
> 1 Proxy Layer
> 
> This is the approach being taken by AWSOME, and it is definitely the easiest 
> to maintain. It has some big advantages, like allowing new apis to be deployed in a 
> completely decoupled manner. The main potential drawbacks are performance and 
> an incomplete mapping of concepts from one api to another. This will most 
> likely require adding OpenStack api extensions to support some of the extra 
> features in other apis.
> 
> 2 Separate Project that talks to internal apis
> 
> It is possible to write a separate component that imports the compute.api in 
> nova and uses it directly.  This will deal with the performance issues of the 
> above approach, but it runs the risk of being broken if the compute.api 
> changes over the course of the release. The advantage of this approach is it 
> will drive requirements for having a stable/versioned internal api. In this 
> model, automated testing would be necessary to alert any breakages.
> 
> 3 Feature Branch in Core
> 
> We are doing some work to support Feature and Subsystem branches in our CI 
> system. 3rd party apis could live in a feature branch so that they can be 
> tested using our CI infrastructure. This is very similar to the above 
> solution, and gives us a temporary place to do development until the internal 
> apis are more stable. Changes to internal apis and 3rd party apis could be 
> done concurrently in the branch and tested. Once the branch has stabilized, 
> the updates could be pushed into the internal apis in nova, and the 3rd party 
> api could grow up into its own project like option 2
> 
> 
> It may be that there are other options that I haven't thought of, but 
> regardless of the approach taken by the various 3rd party apis, I think it is 
> valuable for us all to work together on stabilizing the internal apis.  I'd 
> like the ec2 api to be able to live separately as well.
> 
> Vish
> 
> [1] 
> http://eavesdrop.openstack.org/meetings/openstack-meeting/2012/openstack-meeting.2012-05-08-20.00.log.txt
> 
> 
> 

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


[Openstack] Third Party APIs

2012-05-17 Thread Vishvananda Ishaya
Hello Everyone,

In the ppb meeting last week[1] we discussed third party apis and decided that 
the policy is not to include them in core.  Specifically the motion that passed 
is:

An OpenStack project will support an official API in its core implementation 
(the OpenStack API). Other APIs will be implemented external to core. The core 
project will expose stable, complete, performant interfaces so that 3rd party 
APIs can be implemented in a complete and performant manner.

So now that we have settled on a long term goal for third party apis, we need 
to deal with the short term. We do have a stable interface in Nova in the form 
of the OpenStack API but it remains to be seen whether it is complete and 
performant enough to allow other apis to be layered on top of it.

Ultimately, I would like to see a stable internal python api that the other 
apis could speak through (including the OpenStack api layer), but it will 
probably take a while to get there. In the short term I see three possibilities 
for third party apis.

1 Proxy Layer

This is the approach being taken by AWSOME, and it is definitely the easiest to 
maintain. It has some big advantages, like allowing new apis deployed in a 
completely decoupled manner. The main potential drawbacks are performance and 
an incomplete mapping of concepts from one api to another. This will most 
likely require adding OpenStack api extensions to support some of the extra 
features in other apis

2 Separate Project that talks to internal apis

It is possible to write a separate component that imports the compute.api in 
nova and uses it directly.  This will deal with the performance issues of the 
above approach, but it runs the risk of being broken if the compute.api changes 
over the course of the release. The advantage of this approach is it will drive 
requirements for having a stable/versioned internal api. In this model, 
automated testing would be necessary to alert any breakages.

3 Feature Branch in Core

We are doing some work to support Feature and Subsystem branches in our CI 
system. 3rd party apis could live in a feature branch so that they can be 
tested using our CI infrastructure. This is very similar to the above solution, 
and gives us a temporary place to do development until the internal apis are 
more stable. Changes to internal apis and 3rd party apis could be done 
concurrently in the branch and tested. Once the branch has stabilized, the 
updates could be pushed into the internal apis in nova, and the 3rd party api 
could grow up into its own project like option 2
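To make the idea of a stable internal surface concrete, here is a minimal sketch — entirely hypothetical, with illustrative names rather than actual nova code — of a versioned facade that third-party API layers could target:

```python
class InternalComputeAPI:
    """Hypothetical versioned facade: third-party API layers (EC2, OCCI,
    ...) would call this instead of reaching into nova internals, so
    internal refactors don't break them."""
    VERSION = "1.0"

    def __init__(self, backend):
        self._backend = backend

    def run_instance(self, image_id, flavor_id):
        # Delegate to whatever the current internal implementation is;
        # only this signature is promised to stay stable.
        return self._backend.run(image_id, flavor_id)


class FakeBackend:
    """Stand-in for nova's internal compute implementation."""
    def run(self, image_id, flavor_id):
        return {"image": image_id, "flavor": flavor_id, "state": "building"}


api = InternalComputeAPI(FakeBackend())
print(api.run_instance("ami-1", "m1.small")["state"])
```

A third-party API process would pin itself to `InternalComputeAPI.VERSION`, which is what would make option 2 viable without mid-release breakage.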


It may be that there are other options that I haven't thought of, but 
regardless of the approach taken by the various 3rd party apis, I think it is 
valuable for us all to work together on stabilizing the internal apis.  I'd 
like the ec2 api to be able to live separately as well.

Vish

[1] 
http://eavesdrop.openstack.org/meetings/openstack-meeting/2012/openstack-meeting.2012-05-08-20.00.log.txt

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


[Openstack] [Nova] Blueprint and core cleanup

2012-05-17 Thread Vishvananda Ishaya
Hello Everyone,

I've implemented both of the cleanups I mentioned last week. More info below.

Core Cleanup

Nova Core is now down to seventeen members.  I want to take a minute to thank 
all of the core folks for the incredible work they have done over the last 
couple of years.  Some of these people have moved on to other things, and I 
hope that they are equally successful in their new projects. There are others 
on the list that are simply too busy to review consistently, and may be 
returning to core-duty at some point. Normally a nova-core member needs to be 
recommended by an existing core member, but I'd like to propose an alternative 
method for former core members.

If a former core member has time to start participating in reviews again, I 
think he should be able to review for a couple of weeks and send an email to 
the list saying, "Hey, I've got time to review again, can I be added back 
in".  If we don't hear any -1 votes from other core members for three days 
we will bring them back.  In other words, former members can be accelerated 
back into core.  Sound reasonable?

Blueprint Cleanup

As I mentioned in my previous email, I've now obsoleted all blueprints not 
targeted to folsom. The blueprint system has been used for "feature requests", 
and I don't think it is working because there is no one grabbing unassigned 
blueprints. I think it has to be up to the drafter of the blueprint to find a 
person/team to actually implement the blueprint or it will just sit there. 
Therefore I've removed all of the "good idea" blueprints. This was kind of sad, 
because there were some really good ideas there.

If I've inadvertently removed a blueprint of yours that is actually being 
worked on, please let me know and I will resurrect it and target it to folsom.

Unassigned Blueprints

There are still a few blueprints targeted to folsom[1] with no assignee.  If 
there is anyone looking for important things to work on, that is a good place 
to start.  Config Drive v2[2] is an important one, and so is db-threadpool[3].  
Any takers?

[1] https://blueprints.launchpad.net/nova/folsom

[2] https://blueprints.launchpad.net/nova/+spec/config-drive-v2

[3] https://blueprints.launchpad.net/nova/+spec/db-threadpool

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] vlan network question

2012-05-16 Thread Vishvananda Ishaya

On May 16, 2012, at 6:12 PM, Paras pradhan wrote:

> Thanks for the reply.
> 
> Another question :). To have multiple compute nodes (say 5), I can
> either use vlan or FlatDHCP mode right?

correct

> Also, with FlatDHCP can I
> have multiple tenants or I need to stick with VLAN to have multiple
> tenants?

you can have multiple tenants in both.  VLAN mode just creates a separate 
network/vlan per tenant

Vish


___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] vlan network question

2012-05-16 Thread Vishvananda Ishaya
Yes it is possible to do everything over one interface, but it is probably 
better to separate traffic out over multiple vlans if you can.  You could have 
rabbit and mysql on one vlan (call it the management vlan), and the public 
internet traffic/api on a separate vlan.  Finally you could put all of the vms 
on their own vlan (or if you are using vlan mode as you say, then each project 
will get its own vlan).

Vish

On May 16, 2012, at 10:24 AM, Paras pradhan wrote:

> Hi,
> 
> I have 2 servers. One is the controller that runs everything except
> nova-compute. Another is a compute node that runs nova-compute only.
> Both of them has only one NIC (eth0) . In VLAN mode, do I need the
> second NIC (eth1) ? If yes in both nodes?
> 
> This is what I am tyring to acheive.  Install only one nic in both
> server and via bridge from eth0, route the instances. Is this doable?
> 
> OS: ubuntu 12.04, Openstack: Essex.
> 
> 
> Thanks in Adv!
> Paras.
> 


___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] [client] Hacking on the client

2012-05-16 Thread Vishvananda Ishaya
can't you use python setup.py develop?

That is the general way of setting stuff up in dev mode.

Vish

On May 16, 2012, at 9:12 AM, Lorin Hochstein wrote:

> If I want to hack on python-openstackclient, how should I set things up so I 
> don't need to install the egg to run it? I know how to install it, but I'd 
> like to be able to make changes and run them without going through an install 
> cycle. 
> 
> I tried to do this:
> 
> export PYTHONPATH=~/python-openstackclient
> alias openstack="python ~/python-openstackclient/openstackclient/shell.py"
> 
> And it sort of works, but I get an unpleasant warning whenever I run things:
> 
> /Users/lorin/.virtualenvs/client/lib/python2.7/site-packages/cliff/commandmanager.py:6:
>  UserWarning: Module argparse was already imported from 
> /System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/argparse.pyc,
>  but /Users/lorin/.virtualenvs/client/lib/python2.7/site-packages is being 
> added to sys.path
> 
> 
> So I assume there's a better way than what I'm doing.
> 
> 
> Take care,
> 
> Lorin
> --
> Lorin Hochstein
> Lead Architect - Cloud Services
> Nimbis Services, Inc.
> www.nimbisservices.com
> 
> 
> 
> 
> 

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Unable to create a floating ip

2012-05-16 Thread Vishvananda Ishaya
to create an individual ip, use the full ip address without /32. The floating 
ip create code ignores the network and broadcast address for a range, so you 
will end up with no usable ips if you specify a /32 or /31
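The range arithmetic can be sketched like this (a simplified model of the behavior described above, not nova's actual code):

```python
import ipaddress

def usable_floating_ips(cidr):
    # Simplified model: the network and broadcast addresses of the range
    # are skipped, so a /32 (or /31) leaves nothing usable.
    net = ipaddress.ip_network(cidr, strict=False)
    return [str(ip) for ip in net
            if ip not in (net.network_address, net.broadcast_address)]

print(usable_floating_ips("94.23.0.0/30"))  # two usable addresses
print(usable_floating_ips("94.23.0.1/32"))  # [] -- nothing usable
```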

Vish

On May 16, 2012, at 8:05 AM, Alessandro Tagliapietra wrote:

> Hello guys,
> 
> this is my nova.conf (network part)
> 
> --network_manager=nova.network.manager.FlatDHCPManager
> --public_interface=eth0
> --flat_interface=eth2
> --flat_network_bridge=br100
> --fixed_range=192.168.4.1/27
> --network_size=32
> --flat_network_dhcp_start=192.168.4.33
> --flat_injected=False
> --force_dhcp_release=true
> 
> i'm trying to add a floating ip with the command
> 
> nova-manage floating create --ip_range=94.23.x.x/32
> 
> and i get no output in logs or cmdline, then when i do nova-manage floating 
> list i get  "no floating ip addresses have been defined" and in logs and 
> cmdline
> 
> 2012-05-16 16:48:55 DEBUG nova.utils 
> [req-58cd4a1d-1801-4380-8472-81dc55ed1b86 None None] backend <module 
> 'nova.db.sqlalchemy.api' from 
> '/usr/lib/python2.7/dist-packages/nova/db/sqlalchemy/api.pyc'> from 
> (pid=12824) __get_backend /usr/lib/python2.7/dist-packages/nova/utils.py:658
> 
> Any idea on what's the problem?
> 
> Best Regards


___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Problem with attaching disks to an instance

2012-05-15 Thread Vishvananda Ishaya
Yes that code is unused and the removal is under review here:

https://review.openstack.org/#/c/7450/

Vish

On May 15, 2012, at 8:18 PM, Lorin Hochstein wrote:

> On May 15, 2012, at 1:27 PM, Vishvananda Ishaya wrote:
> 
>> FYI iscsi_ip_prefix doesn't exist in essex.  
> 
> That flag is referenced in the XenAPI code in essex: 
> https://github.com/openstack/nova/blob/stable/essex/nova/virt/xenapi/volume_utils.py#L408
> 
> However, it doesn't appear anywhere else in essex. Is this a bug?
> 
> Take care,
> 
> Lorin
> --
> Lorin Hochstein
> Lead Architect - Cloud Services
> Nimbis Services, Inc.
> www.nimbisservices.com
> 
> 
> 
> 

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] [Nova] Server UUID from metadata service?

2012-05-15 Thread Vishvananda Ishaya
AFAIK there isn't a way to get the uuid from the metadata server in essex. We 
were also discussing that it might be valuable for the ec2 api to tag the uuid 
onto the instance, but that doesn't help for essex either.

Vish

On May 15, 2012, at 3:40 PM, Martin Packman wrote:

> For juju, I need a snippet of shell that cloud-init can use to get the
> server id on startup. For the ec2 provider, the following is used:
> 
>$(curl http://169.254.169.254/1.0/meta-data/instance-id)
> 
> Is there any way of getting the server's uuid rather than the ec2
> style i-08x version? Requests against the openstack api with the
> integer form work, but not for comparing id values. Using the api to
> resolve the integer to a uuid would require reauthenticating on the
> instance.
> 
> There was some discussion about exposing openstack specific values via
> the metadata service as well for folsom, but is there a method that
> would work with essex?
> 
> Martin
> 


___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Problem with attaching disks to an instance

2012-05-15 Thread Vishvananda Ishaya
FYI iscsi_ip_prefix doesn't exist in essex.  The ip is passed back to the 
compute node based on what it has stored in the database, so the compute node 
no longer finds it through discovery and matching to the prefix.  You should 
only need iscsi_ip_address on the volume node to make sure that the db entry 
is created properly.

Vish

On May 15, 2012, at 12:25 AM, Razique Mahroua wrote:

> In fact, it looks like the service is not able to retrieve the nova-volume's 
> IP; as if there were some issue parsing the flag or something like that.
> Could you try by commenting that entry (--iscsi_ip_address) on all your 
> servers and only keep the prefix?
> 
> Razique
> 
> Shashank Sahni
> 14 mai 2012 18:22
> 
> Hi,
> 
> Oh! They are the same. I just masked the values before pasting the 
> configuration files. Although, now that I think of it, it's pretty harmless. 
> Here are the originals.
> 
> controller node - http://paste.openstack.org/show/17513/
> compute node - http://paste.openstack.org/show/17514/
> volume node - http://paste.openstack.org/show/17515/
> 
> As per my understanding, I just need to figure out how the volume node is 
> identified. Thank you for replying.
> 
> Regards,
> Shashank Sahni
> 
> -- 
> Nuage & Co - Razique Mahroua 
> razique.mahr...@gmail.com

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Problem with attaching disks to an instance

2012-05-14 Thread Vishvananda Ishaya
you have to set iscsi_ip_address on the volume node.  The volume node is the 
one that creates this db entry.

Vish

On May 14, 2012, at 1:54 PM, Shashank Sahni wrote:

> Hi Vish,
> 
> Yeah you are right. I checked the settings in the volume database and for all 
> volumes the entry is similar to following.
> 
> controller_node:3260,3 iqn.2010-10.org.openstack:volume-0004 
> 
> But it seems these entries are being generated automatically, i.e. as soon as 
> I issue volume creation command. How do I fix this? I am already using 
> properly configured iscsi_ip_address option in the controller's nova.conf 
> file.
> 
> Regards,
> Shashank Sahni
> 
> On 05/15/2012 02:15 AM, Vishvananda Ishaya wrote:
>> 
>> It should be getting the connection properties via the call to the volume 
>> node.  Is it possible your volume in the database has incorrect properties 
>> stored in provider_location?
>> 
>> It is set from the config iscsi_ip_address, so if you have not set that 
>> configuration option to a routable ip from compute -> volume then it will 
>> not work.  Also, changing the config option will not change the existing 
>> values in the db, so you might have to change those manually.
>> 
>> Vish
>> 
>> On May 14, 2012, at 7:28 AM, Shashank Sahni wrote:
>> 
>>> Hi,
>>> 
>>> Thanks for the reply.
>>> 
>>> Yes, I've gone through the document. Volume creation and deletion are 
>>> working perfectly fine. When I run "iscsiadm -m discovery -t st -p 
>>> volume_node" on the compute node, I can see the volumes. But somehow the 
>>> compute node is being misinformed about the volume node after giving the 
>>> attach command.
>>> 
>>> I'm not using iscsitarget as per that document. Installation of nova-volume 
>>> on ubuntu precise automatically took care of it using tgt.
>>> 
>>> Kind Regards,
>>> Shashank Sahni
>>> 
>>> On 05/14/2012 07:34 PM, raja.me...@wipro.com wrote:
>>>> 
>>>> Hi Shashank ,
>>>>  
>>>> I preassume that the steps outlined in the link below has been followed.
>>>>  
>>>> http://docs.openstack.org/trunk/openstack-compute/admin/content/managing-volumes.html
>>>>  
>>>>  
>>>> Thanks
>>>> Meena Raja
>>>>  
>>>>  
>>>>  
>>>>  
>>>> From: openstack-bounces+raja.meena=wipro@lists.launchpad.net 
>>>> [mailto:openstack-bounces+raja.meena=wipro@lists.launchpad.net] On 
>>>> Behalf Of Shashank Sahni
>>>> Sent: Monday, May 14, 2012 6:23 PM
>>>> To: Razique Mahroua
>>>> Cc: openstack@lists.launchpad.net
>>>> Subject: Re: [Openstack] Problem with attaching disks to an instance
>>>>  
>>>> Hi,
>>>> 
>>>> I set this option in the configuration files of both compute and 
>>>> controller. Restarted the service, but unfortunately same result.
>>>> 
>>>> Regards,
>>>> Shashank Sahni
>>>> 
>>>> On 05/14/2012 05:58 PM, Razique Mahroua wrote:
>>>> Hi,
>>>> do you have the flag iscsi_ip_prefix configured in your nova.conf ?
>>>> Razique
>>>> 
>>>> 
>>>> 
>>>> Shashank Sahni
>>>> 14 mai 2012 14:22
>>>> Hi everyone,
>>>> 
>>>> I'm trying to configure a multi-node installation. Here is a brief 
>>>> overview.
>>>> 
>>>> 1) controller - api+network+scheduler+novnc+glance+keystone (nova.conf - 
>>>> http://paste.openstack.org/show/17470/)
>>>> 2) compute node (nova.conf - http://paste.openstack.org/show/17469)
>>>> 3) volume node(single)
>>>> 
>>>> Compute and vnc are working fine. I'm able to create and delete volumes. 
>>>> Iscsi discovery from the compute nodes is working too. But when I try to 
>>>> attach a volume, the compute node tries to connect to the controller node 
>>>> instead of volume and hence crashes with the following error.
>>>> 
>>>> 2012-05-14 17:32:13 TRACE nova.rpc.amqp ProcessExecutionError: Unexpected 
>>>> error while running command.
>>>> 2012-05-14 17:32:13 TRACE nova.rpc.amqp Command: sudo nova-rootwrap 
>>>> iscsiadm -m node -T iqn.2010-10.org.openstack:volume-0003 -p 
>>>> controller:3260 --rescan
>>>> 2012-05-14 17:32:13 TRACE nova.rpc.amqp Exit code: 255

Re: [Openstack] Dhcp lease errors in vlan mode

2012-05-14 Thread Vishvananda Ishaya
Thanks, Lorin!

Vish

On May 14, 2012, at 12:59 PM, Lorin Hochstein wrote:

> 
> On May 14, 2012, at 1:46 PM, Vishvananda Ishaya wrote:
> 
>> TL;DR
>> 
>> To fix issues with failed dhcp leases in vlan mode, upgrade to dnsmasq 
>> 2.61[1]
>> 
> 
> I attempted to document this issue in the docs: 
> https://review.openstack.org/7403
> 
> (As an aside, we're using VLAN mode at Nimbis).
> 
> 
> Take care,
> 
> Lorin
> --
> Lorin Hochstein
> Lead Architect - Cloud Services
> Nimbis Services, Inc.
> www.nimbisservices.com
> 
> 
> 
> 

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Problem with attaching disks to an instance

2012-05-14 Thread Vishvananda Ishaya
It should be getting the connection properties via the call to the volume node. 
 Is it possible your volume in the database has incorrect properties stored in 
provider_location?

It is set from the config iscsi_ip_address, so if you have not set that 
configuration option to a routable ip from compute -> volume then it will not 
work.  Also, changing the config option will not change the existing values in 
the db, so you might have to change those manually.
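For illustration, the provider_location entries quoted in this thread have the shape built by something like the following (hypothetical helper; the port and portal-group values are just the ones visible in the pasted entry):

```python
def provider_location(iscsi_ip, iqn, port=3260, target_portal_group=3):
    # Mirrors entries like
    # "controller_node:3260,3 iqn.2010-10.org.openstack:volume-0004".
    # If iscsi_ip is recorded wrongly at create time, every later attach
    # built from this string points at the wrong host.
    return "%s:%d,%d %s" % (iscsi_ip, port, target_portal_group, iqn)

print(provider_location("volume_node", "iqn.2010-10.org.openstack:volume-0004"))
```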

Vish

On May 14, 2012, at 7:28 AM, Shashank Sahni wrote:

> Hi,
> 
> Thanks for the reply.
> 
> Yes, I've gone through the document. Volume creation and deletion are working 
> perfectly fine. When I run "iscsiadm -m discovery -t st -p volume_node" on 
> the compute node, I can see the volumes. But somehow the compute node is 
> being misinformed about the volume node after giving the attach command.
> 
> I'm not using iscsitarget as per that document. Installation of nova-volume 
> on ubuntu precise automatically took care of it using tgt.
> 
> Kind Regards,
> Shashank Sahni
> 
> On 05/14/2012 07:34 PM, raja.me...@wipro.com wrote:
>> 
>> Hi Shashank ,
>>  
>> I preassume that the steps outlined in the link below has been followed.
>>  
>> http://docs.openstack.org/trunk/openstack-compute/admin/content/managing-volumes.html
>>  
>>  
>> Thanks
>> Meena Raja
>>  
>>  
>>  
>>  
>> From: openstack-bounces+raja.meena=wipro@lists.launchpad.net 
>> [mailto:openstack-bounces+raja.meena=wipro@lists.launchpad.net] On 
>> Behalf Of Shashank Sahni
>> Sent: Monday, May 14, 2012 6:23 PM
>> To: Razique Mahroua
>> Cc: openstack@lists.launchpad.net
>> Subject: Re: [Openstack] Problem with attaching disks to an instance
>>  
>> Hi,
>> 
>> I set this option in the configuration files of both compute and controller. 
>> Restarted the service, but unfortunately same result.
>> 
>> Regards,
>> Shashank Sahni
>> 
>> On 05/14/2012 05:58 PM, Razique Mahroua wrote:
>> Hi,
>> do you have the flag iscsi_ip_prefix configured in your nova.conf ?
>> Razique
>> 
>> 
>> 
>> Shashank Sahni
>> 14 mai 2012 14:22
>> Hi everyone,
>> 
>> I'm trying to configure a multi-node installation. Here is a brief overview.
>> 
>> 1) controller - api+network+scheduler+novnc+glance+keystone (nova.conf - 
>> http://paste.openstack.org/show/17470/)
>> 2) compute node (nova.conf - http://paste.openstack.org/show/17469)
>> 3) volume node(single)
>> 
>> Compute and vnc are working fine. I'm able to create and delete volumes. 
>> Iscsi discovery from the compute nodes is working too. But when I try to 
>> attach a volume, the compute node tries to connect to the controller node 
>> instead of volume and hence crashes with the following error.
>> 
>> 2012-05-14 17:32:13 TRACE nova.rpc.amqp ProcessExecutionError: Unexpected 
>> error while running command.
>> 2012-05-14 17:32:13 TRACE nova.rpc.amqp Command: sudo nova-rootwrap iscsiadm 
>> -m node -T iqn.2010-10.org.openstack:volume-0003 -p controller:3260 
>> --rescan
>> 2012-05-14 17:32:13 TRACE nova.rpc.amqp Exit code: 255
>> 2012-05-14 17:32:13 TRACE nova.rpc.amqp Stdout: ''
>> 2012-05-14 17:32:13 TRACE nova.rpc.amqp Stderr: 'iscsiadm: No portal 
>> found.\n'
>> 
>> Any suggestions?
>> 
>> Kind Regards,
>> Shashank Sahni
>>  
>> -- 
>> Nuage & Co - Razique Mahroua 
>> razique.mahr...@gmail.com
>> 
>> 
>>  
>> 
>>  
>> Please do not print this email unless it is absolutely necessary.
>> 
>> The information contained in this electronic message and any attachments to 
>> this message are intended for the exclusive use of the addressee(s) and may 
>> contain proprietary, confidential or privileged information. If you are not 
>> the intended recipient, you should not disseminate, distribute or copy this 
>> e-mail. Please notify the sender immediately and destroy all copies of this 
>> message and any attachments.
>> 
>> WARNING: Computer viruses can be transmitted via email. The recipient should 
>> check this email and any attachments for the presence of viruses. The 
>> company accepts no liability for any damage caused by any virus transmitted 
>> by this email.
>> 
>> www.wipro.com
>> 
> 

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


[Openstack] Dhcp lease errors in vlan mode

2012-05-14 Thread Vishvananda Ishaya
TL;DR

To fix issues with failed dhcp leases in vlan mode, upgrade to dnsmasq 2.61[1]

THE LONG VERSION

There is an issue with the way nova uses dnsmasq in VLAN mode. It starts up a 
single copy of dnsmasq for each vlan on the network host (or on every host in 
multi_host mode). The problem is in the way that dnsmasq binds to an ip address 
and port[2]. Both copies can respond to broadcast packet, but unicast packets 
can only be answered by one of the copies.

In nova this means that guests from only one project will get responses to 
their unicast dhcp renew requests.  Unicast requests from guests in other 
projects get ignored. What happens next is different depending on the guest os. 
 Linux generally will send a broadcast packet out after the unicast fails, and 
so the only effect is a small (tens of ms) hiccup while interface is 
reconfigured.  It can be much worse than that, however. I have seen cases where 
Windows just gives up and ends up with a non-configured interface.

This bug was first noticed by some users of openstack who rolled their own fix. 
Basically, on linux, if you set the SO_BINDTODEVICE socket option, it will 
allow different daemons to share the port and respond to unicast packets, as 
long as they listen on different interfaces. I managed to communicate with 
Simon Kelley, the maintainer of dnsmasq and he has integrated a fix[3] for the 
issue in the current version[1] of dnsmasq.
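The underlying port conflict can be illustrated with plain UDP sockets — a deliberately simplified model (arbitrary unprivileged port, no broadcast or SO_REUSEADDR), not dnsmasq's actual socket setup:

```python
import socket

a = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
a.bind(("127.0.0.1", 0))        # first daemon takes some free port
port = a.getsockname()[1]

b = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
try:
    b.bind(("0.0.0.0", port))   # second daemon, same port: collides
    second_bind_ok = True
except OSError:
    second_bind_ok = False
finally:
    a.close()
    b.close()

# SO_BINDTODEVICE scopes each socket to one interface, which is what lets
# multiple dnsmasq copies coexist and all answer unicast.
print("second bind ok:", second_bind_ok)
```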

I don't know how many users out there are using vlan mode, but you should be 
able to deal with this issue by upgrading dnsmasq. It would be great if the 
various distributions could upgrade as well, or at least try to patch in the 
fix[3]. If upgrading dnsmasq is out of the question, a possible workaround is 
to minimize lease renewals with something like the following combination of 
config options.

# release leases immediately on terminate
force_dhcp_release=true
# one week lease time
dhcp_lease_time=604800
# two week disassociate timeout
fixed_ip_disassociate_timeout=1209600

Vish

[1] http://www.thekelleys.org.uk/dnsmasq/dnsmasq-2.61.tar.gz

[2] http://lists.thekelleys.org.uk/pipermail/dnsmasq-discuss/2011q3/005233.html

[3] 
http://thekelleys.org.uk/gitweb/?p=dnsmasq.git;a=commitdiff;h=9380ba70d67db6b69f817d8e318de5ba1e990b12

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] ERROR: Resize requires a change in size

2012-05-14 Thread Vishvananda Ishaya
If you have actually specified a different flavor with the command, then it 
sounds like this is a bug.  I see the check here:

    if (current_memory_mb == new_memory_mb) and flavor_id:
        raise exception.CannotResizeToSameSize()

And I don't see any reason for it. It seems like it should be checking if the 
flavor_id is different.
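A sketch of what the corrected check might look like (hypothetical names, not the actual nova code or a real patch):

```python
class CannotResizeToSameFlavor(Exception):
    pass

def check_resize(current_flavor_id, new_flavor_id):
    # Compare the flavors themselves, so a disk-only resize whose memory
    # happens to match the current flavor is still allowed.
    if new_flavor_id and new_flavor_id == current_flavor_id:
        raise CannotResizeToSameFlavor()

check_resize("m1.small", "m1.small.bigdisk")  # ok: different flavor
```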

Vish

On May 13, 2012, at 10:11 AM, Jimmy Tsai wrote:

> Hi all,
> 
> I only want to resize just the disk space with root_gb or ephemeral_gb.
> If I run a "nova resize" command to change the disk space of an instance, and 
> leave the vcpu and memory untouched,
> I'll get the error message like this : 
> ERROR: Resize requires a change in size. (HTTP 400)
> The resize procedure seems to require a change in memory (a change in vcpu 
> is not required).
> Is there any nova config option to do with this? 
> 
> Thanks,
> -Jimmy

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Nova subsystem branches and feature branches

2012-05-11 Thread Vishvananda Ishaya

On May 11, 2012, at 2:04 PM, Mark McLoughlin wrote:
> 
> I'm guessing we could easily flick a switch in gerrit to cause it to
> rebase instead of merge.
> 
> I don't remember any debate about it, but I'm also guessing there aren't
> any hugely strong opinions in OpenStack about which is better.
> 
> The thing we'd lose is the context of which parent commit a patch was
> written against. If I was to go by some of Linus's rants I'd think this
> was a cardinal sin ("NEVER destroy other people's history") yet kernel
> folks do this all the time by emailing around patches.
> 
> On balance, I think I'd prefer if we did switch over to rebasing.

I would prefer a rebase as well; the merge commits make it hard to figure out 
via grep exactly where a fix/feature hit master. I actually suggested this on 
irc the other day. There was some concern that it would cause more merges to be 
rejected because they don't rebase cleanly, although it is a little tough for 
me to come up with a situation where a merge commit applies cleanly but a 
rebase fails.

Vish




Re: [Openstack] Swift Object Storage ACLs with KeyStone

2012-05-11 Thread Vishvananda Ishaya
I'm not totally sure about this, but you might have to use the project_id
from keystone instead of the project_name when setting up ACLs. The same
may be true of user_id.

Vish

On Fri, May 11, 2012 at 12:51 AM, 张家龙  wrote:

>
> Hello, everyone.
>
> I encountered some problems when i set permissions (ACLs) on Openstack
> Swift containers.
> I installed swift-1.4.8(essex) and use keystone-2012.1 as
> authentication system on CentOS 6.2 .
>
> My swift proxy-server.conf and keystone.conf are here:
> http://pastebin.com/dUnHjKSj
>
> Then,I use the script named opensatck_essex_data.sh(
> http://pastebin.com/LWGVZrK0) to
> initialize keystone.
>
> After these operations,I got the token of demo:demo and newuser:newuser
>
> curl -s -H 'Content-type: application/json' \
> -d '{"auth": {"tenantName": "demo", "passwordCredentials":
> {"username": "demo", "password": "admin"}}}' \
> http://127.0.0.1:5000/v2.0/tokens | python -mjson.tool
>
> curl -s -H 'Content-type: application/json' \
> -d '{"auth": {"tenantName": "newuser", "passwordCredentials":
> {"username": "newuser", "password": "admin"}}}' \
> http://127.0.0.1:5000/v2.0/tokens | python -mjson.tool
>
> Then,enable read access to newuser:newuser
>
> curl -X PUT -i \
> -H "X-Auth-Token: " \
> -H "X-Container-Read: newuser:newuser" \
>
> http://127.0.0.1:8080/v1/AUTH_f1723800c821453d9f22d42d1fbb334b/demodirc
>
> Check the permission of the container:
>
> curl -k -v -H 'X-Auth-Token:' \
>
> http://127.0.0.1:8080/v1/AUTH_f1723800c821453d9f22d42d1fbb334b/demodirc
>
> This is the reply of the operation:
>
> HTTP/1.1 200 OK
> X-Container-Object-Count: 1
> X-Container-Read: newuser:newuser
> X-Container-Bytes-Used: 2735
> Accept-Ranges: bytes
> Content-Length: 24
> Content-Type: text/plain; charset=utf-8
> Date: Fri, 11 May 2012 07:30:23 GMT
>
> opensatck_essex_data.sh
>
> Now,the user newuser:newuser visit the container of demo:demo
>
> curl -k -v -H 'X-Auth-Token:' \
>
> http://127.0.0.1:8080/v1/AUTH_f1723800c821453d9f22d42d1fbb334b/demodirc
>
> While,I got 403 error.Can someone help me?
>
> **
> --
> Best Regards
>
> ZhangJialong
> **
>
>
>
>


Re: [Openstack] 'admin' role hard-coded in keystone and nova, and policy.json

2012-05-11 Thread Vishvananda Ishaya
Most of nova is configurable via policy.json, but there is the issue with
context.is_admin checks that still exist in a few places. We definitely
need to modify that.

Joshua, the idea is that policy.json will ultimately be managed in keystone
as well. Currently the policy.json is checked for modifications, so it
would be possible to throw it on shared storage and modify it for every
node at once without having to restart the nodes.  This is an interim
solution until we allow for creating and retrieving policies inside of
keystone.
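
As an illustration of the interim file-based approach, a deployment wanting a 'myadmin' role could express it in policy.json roughly like this (the rule and action names below are illustrative, not nova's shipped defaults):

```json
{
    "admin_or_myadmin": [["role:admin"], ["role:myadmin"]],
    "compute:create": [["rule:admin_or_myadmin"]],
    "compute:delete": [["rule:admin_or_myadmin"]]
}
```

Since the file is re-checked for modifications, putting a file like this on shared storage would apply the change on every node without restarting services.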

Vish

On Thu, May 10, 2012 at 7:13 PM, Joshua Harlow wrote:

>  I was also wondering about this, it seems there are lots of policy.json
> files with hard coded roles in them, which is weird since keystone supports
> the creation of roles and such, but if you create a role which isn’t in a
> policy.json then you have just caused yourself a problem, which isn’t very
> apparent...
>
>
> On 5/10/12 2:32 PM, "Salman A Baset"  wrote:
>
> It seems that the 'admin' role is hard-coded across nova and horizon. As a
> result if I want to define 'myadmin' role, and grant it all the admin
> privileges, it does not seem possible. Is this a recognized limitation?
>
> Further, is there some good documentation on policy.json for nova,
> keystone, and glance?
>
> Thanks.
>
> Best Regards,
>
> Salman A. Baset
> Research Staff Member, IBM T. J. Watson Research Center
> Tel: +1-914-784-6248
>
>
>
>
>


Re: [Openstack] Improving Xen support in the libvirt driver

2012-05-10 Thread Vishvananda Ishaya

On May 9, 2012, at 10:08 PM, Jim Fehlig wrote:

> Hi,
> 
> I've been tinkering with improving Xen support in the libvirt driver and
> wanted to discuss a few issues before submitting patches.

Awesome!

> 
> Even the latest upstream release of Xen (4.1.x) contains a rather old
> qemu, version 0.10.2, which rejects qcow2 images with cluster size >
> 64K.  The libvirt driver creates the COW image with cluster size of 2M. 
> Is this for performance reasons?  Any objections to removing that option
> and going with 'qemu-img create' default of 64K?

As per other email, 64K seems correct.
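
As a sketch of what is at stake, the create command nova issues looks roughly like the helper below (the helper name and paths are illustrative, not nova's actual code); leaving cluster_size out lets qemu-img fall back to its 64K default:

```python
def qcow2_create_cmd(path, backing_file, size_gb, cluster_size=None):
    """Build a qemu-img invocation for a COW image (illustrative helper)."""
    opts = ["backing_file=%s" % backing_file]
    if cluster_size:
        # e.g. "2M" -- the old nova setting; Xen 4.1's qemu 0.10.2
        # rejects qcow2 images with cluster size > 64K
        opts.append("cluster_size=%s" % cluster_size)
    return ["qemu-img", "create", "-f", "qcow2",
            "-o", ",".join(opts), path, "%dG" % size_gb]

# With cluster_size omitted, qemu-img uses its recommended 64K default.
print(qcow2_create_cmd("/var/lib/nova/instances/x/disk", "/base/abc", 10))
```

Dropping the explicit cluster_size option is the whole change being proposed here.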
> 
> In a setup with both Xen and KVM compute nodes, I've found a few options
> for controlling scheduling of an instance to the correct node.  One
> option uses availability zones, e.g.
> 
> # nova.conf on Xen compute nodes
> node_availability_zone=xen-hosts
> 
> # launching a Xen PV instance
> nova boot --image  --availability_zone xen-hosts ...
> 
> The other involves a recent commit adding additional capabilities for
> compute nodes [1] and the vm_mode image property [2] used by the
> XenServer driver to distinguish HVM vs PV images.  E.g.
> 
> # nova.conf on Xen compute nodes
> additional_compute_capabilities="pv,hvm"
> 
> # Set vm_mode property on Xen image
> glance update  vm_mode=pv
> 
> I prefer that latter approach since vm_mode will be needed in the
> libvirt driver anyhow to create proper config for PV vs HVM instances. 
> Currently, the driver creates usable config for PV instances, but needs
> some adjustments for HVM.

Agree that this is best. Once general host aggregates[1] is done, the 
capabilities and the availability zone will move into aggregate metadata and it 
will just be making sure that we have reasonable image properties to help the 
scheduler place the guest correctly.

Vish

[1] https://blueprints.launchpad.net/nova/+spec/general-host-aggregates


Re: [Openstack] Improving Xen support in the libvirt driver

2012-05-10 Thread Vishvananda Ishaya

On May 10, 2012, at 1:56 AM, Daniel P. Berrange wrote:

> On Thu, May 10, 2012 at 09:06:58AM +0100, Daniel P. Berrange wrote:
> 
> I had a quick chat with Kevin Wolf who's the upstream QEMU qcow2 maintainer
> and he said that 64k is the current recommended cluster size for qcow2.
> Above this size, the cost of COW becomes higher causing an overall
> drop in performance.
> 
> Looking at GIT history, Nova has used cluster_size=2M since Vish first
> added qcow2 support, and there's no mention of why in the commit message.
> So unless further info comes to light, I'd say we ought to just switch
> to use qemu-img's default setting of 64K for both Xen and KVM.
> 

This is good info.  Sounds like we should switch to 64K

Vish


Re: [Openstack] 'nova flavor-list' fails with "ERROR: string indices must be integers, not str", but 'nova-manage flavor list' succeeds.

2012-05-09 Thread Vishvananda Ishaya
Is there a traceback from nova-api?


Re: [Openstack] questions on the dynamic loading of virt drivers in nova

2012-05-09 Thread Vishvananda Ishaya
No this is mostly just legacy stuff that was never refactored.

Vish
On May 9, 2012 3:33 PM, "Sean Dague"  wrote:

> I'm familiarizing myself with the nova code and trying to reconcile that
> while there is dynamic class based loading in ComputeManager using
> import_utils in __init__() there is also a defaulting to the
> nova.virt.connection.get_connection function.
>
> That's actually got a big if / else statement of string literals of known
> virt drivers, and then loads specific virt drivers from there.
>
> Is there a reason for both approaches? Can we refactor to a point where we
> don't need need of a common file with driver specific imports and string
> literals? Is there a reason not to?
>
> Thanks,
>
>-Sean
>
> --
> Sean Dague
> IBM Linux Technology Center
> email: sda...@linux.vnet.ibm.com
> alt-email: slda...@us.ibm.com
>
>
>


Re: [Openstack] Floating IPs don't get dissociated after delete

2012-05-09 Thread Vishvananda Ishaya
This definitely sounds like a bug. Floating IPs should be automatically 
disassociated on delete.

Vish

On May 9, 2012, at 8:26 AM, Steven Dake wrote:

> On 05/09/2012 07:20 AM, Bilel Msekni wrote:
>> Hi , 
>> 
>> I am having this problem just like many others.
>> 
>> Each time I delete a VM, the floating IP doesn't get automatically
>> dissociated, has anyone encountred this problem and solved it ?
>> 
>> 
> 
> Bilel,
> 
> We had this problem in openstack during our floating ip implementation
> of heat (http://www.heat-api.org).  We solved it by using the floating
> ip.delete api in nova.  See:
> 
> https://github.com/heat-api/heat/blob/master/heat/engine/eip.py
> line 118
> 
> The program flow on delete is:
> delete an instance
> wait for instance to disappear
> delete floating ip
> 
> Regards
> -steve
> 




Re: [Openstack] Nova Core Cleanup

2012-05-08 Thread Vishvananda Ishaya

On May 8, 2012, at 3:19 PM, Brian Lamar wrote:

> I'll be the second to admit I haven't been doing a ton of reviews lately, but 
> that doesn't mean I'm not dedicated to making the project the best it can be. 
> That is not to say your email implies any judgement but I'd love a couple 
> clarifications.
> 
> What are the minimum requirements, in your mind (since you're proposing 
> members for removal), for keeping core membership? (Your answer could be just 
> pointing me to the wiki page where guidelines for minimum flair have been 
> described.)

There isn't an exact set of requirements, but I feel that people with rights to 
approve things into trunk should be people that consistently do reviews.  I 
personally average > 2 hours a day doing reviews. I expect to take a larger 
load than most because of my role as PTL, but I think it is reasonable to 
expect people to average 1 review per work day each month or ~20 reviews.  The 
numbers in the previous mail were since Feb. 22nd, which means ~70 days.

> Is the re-joining of nova-core the same process as joining in the first place?

That was my assumption, but we could come up with something different.

> 
> What are the benefits of having a smaller team in your mind? Many people on 
> this list have dedicated countless hours to the project and have contributed 
> thousands of lines of code. One huge benefit I see of being nova-core is the 
> ability to -2 code which  you believe is going to be detrimental to the 
> project as a whole, perhaps because you have worked with or have contributed 
> that code in the past.

The -2 issue is a good point. I personally treat a -1 (or +1) from the author 
of a given piece of code quite strongly when I do reviews, but you're right 
that the -1 could be more trivially overridden. The removal is primarily to 
keep core a manageable size.  We currently have 25 core members and still have 
many patches that are not being quickly reviewed.  Giving too many people the 
ability to approve patches leads to inconsistency in code and the review 
process.  It seems like overkill to have > 20 people. I expect this number to 
decrease further if our plans to create subsystem branches materialize.

Vish


Re: [Openstack] [Nova] Live migration libvirt authentication error.

2012-05-08 Thread Vishvananda Ishaya
I haven't tried SASL, so hopefully someone else has an idea. I have successfully 
used qemu+ssh with ssh keys set up, though.

Vish

On May 8, 2012, at 1:13 AM, Szymon Grzybowski wrote:

> Hey,
> 
> I'm trying to migrate machine from HostA to HostB, but I have "
> virtNetSASLSessionClientStart:484: authentication failed:  Failed to start 
> SASL negotiation: -20" in /var/log/libvirt.log when i'm trying to 
> "nova live-migration  HostB"
> 
> I have SASL enabled on both machines. I've tried manually connect to remote 
> libvirt through virsh and defined user (ex. "virsh --connect 
> qemu+tcp://HostA/system list") and everything works fine, but i have to give 
> proper credentials (username/password). I saw that nova's connection string 
> is also "qemu+tcp://HostA/system", but i can't see any user there. How does 
> it work in nova and how can I fix it?
> 
> Cheers,
> 
> -- 
> Semy
> 
> 



Re: [Openstack] [Glance] Replication implementations

2012-05-08 Thread Vishvananda Ishaya
Alternatively, we could just consider the ec2 mapping layer to be global data 
that must be replicated somehow across the system.  I don't think we can really 
ensure no collisions mapping from uuid -> ec2_id deterministically, and I don't 
see a clear path forward when we do get a collision.
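
A deterministic mapping of the kind being discussed might look like the sketch below (the function name and the 32-bit truncation are assumptions, not nova code); the collision worry follows from the birthday bound on the truncated hash:

```python
import hashlib

def ec2_id_for_uuid(image_uuid, prefix="ami"):
    # Hash the glance UUID and keep 32 bits, so every region derives the
    # same EC2-style id for the same image -- at the cost of possible
    # collisions (roughly even odds after ~77k images with 32 bits).
    digest = hashlib.sha1(image_uuid.encode("utf-8")).hexdigest()
    return "%s-%08x" % (prefix, int(digest[:8], 16))

print(ec2_id_for_uuid("c5cecc17-295c-4ebc-9019-2ccc222d3f52"))
```

The mapping is reproducible across regions, but as noted above there is no obvious recovery path once two UUIDs do hash to the same id.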

Vish

On May 8, 2012, at 12:24 AM, Michael Still wrote:

> On 04/05/12 20:31, Eoghan Glynn wrote:
> 
> Sorry for the slow reply, I've been trapped in meetings.
> 
> [snip]
> 
>> So the way things currently stand, the EC2 image ID isn't really capable of
>> migration.
>> 
>> I was thinking however that we should change the EC2 image generation logic,
>> so that there is a reproducible glance UUID -> EC2 mapping (with a small
>> chance of collision). This change would allow the same EC2 ID to be generated
>> in multiple regions for a given glance UUID (modulo collisions).
>> 
>> Would that be helpful in your migration use-case?
> 
> I do think this is a good idea. Or even if the column wasn't
> auto-increment, but just picked a random number or something (because
> that would be marginally less likely to clash). Without somehow making
> these ec2 ids more global, replication between regions is going to
> suffer from ec2 api users having to somehow perform a lookup out of band.
> 
> Now, my use case is a bit special, because I can enforce that images are
> only ever uploaded to one master region, and then copied to all others.
> I think that's probably not true for other users though.
> 
> Mikal
> 




[Openstack] Nova Core Cleanup

2012-05-08 Thread Vishvananda Ishaya
Hey Everyone,

There appear to be a number of people on nova-core that no longer have 
sufficient time to participate in reviews.  In order to facilitate the review 
process, I'd like to remove some of these people and consider if we may need to 
add some new core members.

By my count (from 
http://nova.openstack.org/~soren/stats/nova-review-stats.html) the following 
people have been very low on reviews over the past few months:

Brian Lamar (12)
Jesse Andrews (12)
Joshua McKenty (0)
Monsyne Dragon (12)
Monty Taylor (4)
Paul Voccio (7)
Soren Hansen (10)
termie (0)
Todd Willey (0)
Lorin Hochstein (2)

I would like to propose that we remove the above 10 people from nova-core.  
When they have time in busy schedule to start reviewing again, we can bring 
them back in. If I don't hear any dissenting votes in the next week, I will 
remove them.

Please keep in mind that this isn't intended to question the amazing work that 
all of these people have done for nova.  Nova-core is a review team, so we have 
to make sure the people on it actually have time to do reviews.

Vish





Re: [Openstack] proposal for Russell Bryant to be added to Nova Core

2012-05-08 Thread Vishvananda Ishaya
More than a week has passed without -1s, and I count 7 votes. Welcome to core, 
Russell!

Vish

On Apr 27, 2012, at 8:09 AM, Dan Prince wrote:

> Russell Bryant wrote the Nova Qpid rpc implementation and is a member of the 
> Nova security team. He has been helping chipping away at reviews and 
> contributing to discussions for some time now.
> 
> I'd like to seem him Nova core so he can help out w/ reviews... definitely 
> the RPC ones.
> 
> Dan
> 




[Openstack] [Nova] Blueprints for Folsom

2012-05-07 Thread Vishvananda Ishaya
Hello everyone,

The number of blueprints for nova has gotten entirely out-of-hand.  I've 
obsoleted about 40 blueprints and there are still about 150 blueprints for 
nova. Many of these are old, or represent features that are cool ideas, but 
haven't had any activity in a long time. I've attempted to target all of the 
relevant blueprints to Folsom.  You can see the progress here:

https://blueprints.launchpad.net/nova/folsom

I would like to get our nova blueprints cleaned up as much as possible.  In one 
week, I am going to mark all blueprints that are not targeted to folsom 
Obsolete. This will allow us to start over from a clean slate. So here is what 
I need from everyone:

1. If you see a blueprint on the main nova list that is not targeted to folsom 
that should stay around, please let me know ASAP or it will get deleted:

https://blueprints.launchpad.net/nova

2. Operational Support Team, there are a bunch of blueprints that are not 
targeted to folsom, so please either target them or mark them obsolete by next 
week.

3. Orchestration Team, there are a whole bunch of blueprints relating to 
orchestration; some seem like duplicates, and I'm tempted to just delete them all 
and start over with one or two simple blueprints with clear objectives.  I 
don't really know which ones are current, so please help with this.

4. There are a bunch of blueprints targeted to folsom that don't have people 
assigned yet. If you want to help with coding, there is a lot of opportunity 
there, so let me know if I can assign one of the blueprints to you.

5. If there is any work being done as a result of the summit that doesn't have 
an associated blueprint, please make one and let me know so I can target it.

6. If you are a blueprint assignee, please let me know when you can have the 
work completed so I can finish assigning blueprints to milestones

I'm currently working on prioritizing the targeted blueprints, so hopefully we 
have a decent list of priorities by the meeting tomorrow.

Thanks for the help,
Vish


Re: [Openstack] Problem with security_groups quota exceeded.

2012-05-07 Thread Vishvananda Ishaya
Sorry for the confusion, but if you are using essex with keystone, you actually 
will need to use the tenant_id for your quota changes, not your tenant name. 
Nova has no way of mapping names to ids since that data is in keystone now.

try nova-manage project quota 

Vish

On May 7, 2012, at 7:07 AM, Jorge Luiz Correa wrote:

> Hi! I would like some help with security group quotas. I'm using juju with 
> Essex, all from 12.04 repos. 
> 
> I have two charms to create a hadoop cluster. Everything works fine up to 6 
> instances, then juju can't instantiate no one more. 
> 
> #!/bin/bash
> clear
> juju bootstrap
> sleep 60;
> juju deploy --repository ~/charms local:hadoop-master
> juju deploy --repository ~/charms local:hadoop-slave
> sleep 200;
> juju add-relation hadoop-slave hadoop-master
> juju expose hadoop-master
> sleep 10;
> for i in {1..10} ; do 
>   juju add-unit hadoop-slave; sleep 20;
> done
> 
> The problem is in "juju add-unit hadoop-slave; sleep 20;" call, when 6 
> instances have already been instantiated. 
> 
> 
> The error in /var/log/nova/nova.api.log is:
> 
> 2012-05-07 10:42:29 INFO nova.api.ec2 
> [req-f6d4cb5d-0e78-42b6-9ec9-3576ea8e882d f542658cb19a45319b765d58e7dcd320 
> 31861e37c6be41b797ea9454c758f5a1] 0.207494s 172.16.0.2 GET /services/Cloud 
> CloudController:DescribeSecurityGroups 200 [Twisted PageGetter] text/plain 
> text/xml
> 2012-05-07 10:42:30 DEBUG nova.api.ec2 
> [req-6cb8c3ea-87d7-411d-9f9a-780f56a9c5f4 f542658cb19a45319b765d58e7dcd320 
> 31861e37c6be41b797ea9454c758f5a1] action: CreateSecurityGroup from (pid=9798) 
> __call__ /usr/lib/python2.7/dist-packages/nova/api/ec2/__init__.py:435
> 2012-05-07 10:42:30 DEBUG nova.api.ec2 
> [req-6cb8c3ea-87d7-411d-9f9a-780f56a9c5f4 f542658cb19a45319b765d58e7dcd320 
> 31861e37c6be41b797ea9454c758f5a1] arg: GroupName  val: 
> juju-sample-8 from (pid=9798) __call__ 
> /usr/lib/python2.7/dist-packages/nova/api/ec2/__init__.py:437
> 2012-05-07 10:42:30 DEBUG nova.api.ec2 
> [req-6cb8c3ea-87d7-411d-9f9a-780f56a9c5f4 f542658cb19a45319b765d58e7dcd320 
> 31861e37c6be41b797ea9454c758f5a1] arg: GroupDescription   val: 
> juju group for sample machine 8 from (pid=9798) __call__ 
> /usr/lib/python2.7/dist-packages/nova/api/ec2/__init__.py:437
> 2012-05-07 10:42:30 AUDIT nova.api.ec2.cloud 
> [req-6cb8c3ea-87d7-411d-9f9a-780f56a9c5f4 f542658cb19a45319b765d58e7dcd320 
> 31861e37c6be41b797ea9454c758f5a1] Create Security Group juju-sample-8
> 2012-05-07 10:42:30 ERROR nova.api.ec2 
> [req-6cb8c3ea-87d7-411d-9f9a-780f56a9c5f4 f542658cb19a45319b765d58e7dcd320 
> 31861e37c6be41b797ea9454c758f5a1] EC2APIError raised: Quota exceeded, too 
> many security groups.
> 2012-05-07 10:42:30 TRACE nova.api.ec2 Traceback (most recent call last):
> 2012-05-07 10:42:30 TRACE nova.api.ec2   File 
> "/usr/lib/python2.7/dist-packages/nova/api/ec2/__init__.py", line 582, in 
> __call__
> 2012-05-07 10:42:30 TRACE nova.api.ec2 result = 
> api_request.invoke(context)
> 2012-05-07 10:42:30 TRACE nova.api.ec2   File 
> "/usr/lib/python2.7/dist-packages/nova/api/ec2/apirequest.py", line 81, in 
> invoke
> 2012-05-07 10:42:30 TRACE nova.api.ec2 result = method(context, **args)
> 2012-05-07 10:42:30 TRACE nova.api.ec2   File 
> "/usr/lib/python2.7/dist-packages/nova/api/ec2/cloud.py", line 797, in 
> create_security_group
> 2012-05-07 10:42:30 TRACE nova.api.ec2 raise exception.EC2APIError(msg)
> 2012-05-07 10:42:30 TRACE nova.api.ec2 EC2APIError: Quota exceeded, too many 
> security groups.
> 2012-05-07 10:42:30 TRACE nova.api.ec2
> 2012-05-07 10:42:30 ERROR nova.api.ec2 
> [req-6cb8c3ea-87d7-411d-9f9a-780f56a9c5f4 f542658cb19a45319b765d58e7dcd320 
> 31861e37c6be41b797ea9454c758f5a1] EC2APIError: Quota exceeded, too many 
> security groups.
> 
> ---
> 
> The quotas have already been changed. 
> 
> root@044:~# nova-manage project quota admin
> 2012-05-07 10:57:17 DEBUG nova.utils 
> [req-c516e88b-f184-4def-8106-9f1e884ddc8d None None] backend  'nova.db.sqlalchemy.api' from 
> '/usr/lib/python2.7/dist-packages/nova/db/sqlalchemy/api.pyc'> from 
> (pid=27673) __get_backend /usr/lib/python2.7/dist-packages/nova/utils.py:658
> metadata_items: 128
> volumes: 10
> gigabytes: 1000
> ram: 51200
> security_group_rules: 500 <<
> instances: 50
> security_groups: 100 <
> injected_file_content_bytes: 10240
> floating_ips: 62
> injected_files: 20
> cores: 24
> 
> 
> Analyzing the security groups, less than 10:
> 
> root@044:/var/lib/nova# nova secgroup-list
> +---+-+
> |  Name |   Description   |
> +---+-+
> | default   | default |
> | juju-sample   | juju group for sample   |
> | juju-sample-0 | juju group for sample machine 0 |
> | juju-sample-1 | juju group for sample machine 1 |
> | juju-sample-2 | juju group for sample machine 2 |
> | juju-sample-3 | juju group for sample machin

Re: [Openstack] Provisioning performance in Essex

2012-05-07 Thread Vishvananda Ishaya

On May 6, 2012, at 11:48 PM, Salman A Baset wrote:

> Hello folks,
> 
> I was looking into the provisioning process in OpenStack. If force_raw_images 
> flag is set to False, the provisioning process looks like:
> 
> (1) Copy the image 
> (2) Create a copy of the copied image to an appropriate flavor name
> (3) Start the VM
> 
> while if it is set to True, the provisioning process looks like:
> 
> (1) Copy the image
> (2) Convert image to raw
> (3) Create a copy of the converted raw image to an appropriate flavor name
> (4) Start the VM
> 
> For the former, copying the image to a flavor name is a redundant process. 
> Can that be eliminated? Any thoughts?
> 
> 

The copy is to resize the image to the size requested by the user.  Are you 
suggesting that it could be streamed from glance and resized to the proper size 
on the fly?  It seems like that would be possible although it would probably 
make future launches of the same instances of a different size a bit slower.
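
The two-step flow being discussed (cache the fetched image, then copy it and grow the copy to the flavor's size) can be sketched roughly like this; the helper and the file-based "resize" stand in for nova's actual copy + qemu-img step:

```python
import os
import shutil

def prepare_disk(cached_image, instance_disk, flavor_size_bytes):
    """Copy the cached base image, then grow the copy to the flavor size.

    Illustrative stand-in for nova's copy + resize step; a raw image is
    modeled as a plain file extended sparsely with truncate().
    """
    shutil.copyfile(cached_image, instance_disk)
    if os.path.getsize(instance_disk) < flavor_size_bytes:
        with open(instance_disk, "r+b") as f:
            f.truncate(flavor_size_bytes)  # sparse grow, like a raw resize
    return os.path.getsize(instance_disk)
```

Streaming from glance and resizing on the fly would fold the two steps into one, at the cost of re-fetching the image for later launches at a different size.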

Vish



Re: [Openstack] Questions on VM Lifecycle

2012-05-04 Thread Vishvananda Ishaya

On May 3, 2012, at 4:50 PM, Mark Moseley wrote:

> 
> * What is the right way to stop/start a persistent instance? So far
> I've been using 'nova boot' to start and 'nova delete' to stop. E.g.:
> 
>> nova boot --flavor=1 --image c5cecc17-295c-4ebc-9019-2ccc222d3f52 
>> --key_name=key3 --nic 
>> net-id=63743ee0-f8a1-45f8-888a-cce38d09cca2,v4-fixed-ip=192.168.1.2 
>> --block_device_mapping vda=11 --block_device_mapping vdb=12 
>> --nic=net-id=63743ee0-f8a1-45f8-888a-cce38d09cca2 myvm1

You might try 

nova suspend/resume

(does a hypervisor-level suspend where all memory is written to disk and the vm 
is stopped)

or

nova pause/unpause

(stops emulation of the vm)

or

api.openstack.org -- stop/start

(this one is not exposed to novaclient, but it will shut down the instance 
without deleting any files)
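
Since stop/start lives only in the OS API, using it means POSTing the server action directly. The sketch below only builds the request; the "os-stop" action name is assumed from the compute API's start/stop extension, and auth token and endpoint handling are left to the caller:

```python
import json

def stop_server_request(server_id):
    # Build the (method, path, body) triple for the stop action; the
    # caller supplies the auth token header and the compute endpoint.
    path = "/v2/servers/%s/action" % server_id
    return ("POST", path, json.dumps({"os-stop": None}))

print(stop_server_request("11"))
```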

> 
> * Not really a question, but if I'm using persistent storage, it'd be
> nice if I didn't have to specify "--image", since I'm not actually
> using the image.

the image should only be necessary for the initial boot

> 
> 
> * Is there a way to add an interface, but without Nova trying to
> configure it? That is, add an eth1 to the vm but do nothing else (yes,
> I realize I'll have to muck with ebtables/iptables on the compute
> node). Our imaging process will take care of the network
> configuration, so it'd be one less thing to add to Nova's management
> overhead. I can probably accomplish the same thing with a dummy
> network on the VM's eth1, since it's not actually being injected.

not now

> 
> 
> * Any plans to add a mac= subarg to the "--nic" option in 'nova boot'?

This was discussed at some point, but no active development that I know of.





Re: [Openstack] Any issue with pycurl?

2012-05-04 Thread Vishvananda Ishaya
unknown, but your httplib issues might be solved with from eventlet.green 
import httplib

Vish

On May 3, 2012, at 3:55 PM, Ken Thomas wrote:

> Hi all,
> 
> We're working on a custom plugin where we make a web service call. We're 
> having some issues with urllib2 and httplib that we're trying to track down.  
> In the meantime, we've discovered that it all works fine if we use pycurl.
> 
> If we don't suss out the problem, does anybody know if there are any 
> interaction issues with pycurl being used in a nova plugin?
> 
> Thanks!
> 
> Ken
> 




Re: [Openstack] Nova subsystem branches and feature branches

2012-05-04 Thread Vishvananda Ishaya
Apologies for top posting.  Just wanted to say +1

This all makes sense to me.

Vish

On May 3, 2012, at 4:08 AM, Mark McLoughlin wrote:

> Hey,
> 
> We discussed this during the "baking area for features" design summit
> session. I found that discussion fairly frustrating because there were
> so many of us involved and we all were either wanting to discuss
> slightly different things or had a slightly different understanding of
> what we were discussing. So, here's my attempt to put some more
> structure on the discussion.
> 
> tl;dr - subsystem branches are managed by trusted domain experts and
> feature branches are just temporary rebasing branches on personal github
> forks. We've got a tonne of work to do figuring out how this would all
> work. We should probably pick a single subsystem and start with that.
> 
> ...
> 
> Firstly, problem definition:
> 
>  - Nova is big, complex and has a fairly massive rate of churn. While 
>the nova-core team is big, there isn't enough careful review going 
>on by experts in particular areas and there's a consistently large
>backlog of reviews.
> 
>  - Developers working on features are very keen to have their work 
>land somewhere and this leads to half-finished features being 
>merged onto master rather than developers collaborating to get a 
>feature to a level of completeness and polish before merging into 
>master.
> 
> Some assumptions about the solution:
> 
>  - There should be a small number of domain experts who can approve 
>changes to each of major subsystems. This will encourage 
>specialization and give more clear lines of responsibility.
> 
>  - There should be a small number of project dictators who have final 
>approval on merge proposals, but who are not expected to review 
>every patch in great detail. This is good because we need someone 
>with an overall view of the project who can make sure efforts in 
>the various subsystems are coordinated, without that someone being 
>massively overloaded.
> 
>  - New features should be developed on a branch and brought to a level 
>of completeness before being merged into master. This is good 
>because we don't want half-baked stuff in master but also because 
>it encourages developers to break their features into stages where 
>each stage of the work can be brought to completion and merged 
>before moving on to the next stage.
> 
>  - In essence, we're assuming some variation of the kernel distributed 
>development model.
> 
>(FWIW, my instinct is to avoid the kernel model on projects. Mostly 
>because it's extremely complex and massive overkill for most 
>projects. Looking at the kernel history with gitk is enough to send 
>anyone screaming for the hills. However, Nova seems to be big 
>enough that we're experiencing the same pressures that drove the 
>kernel to adopt their model)
> 
> Ok, what are "subsystem branches" and how would they work?
> 
>  - Subsystem branches would have a small number of maintainers who can 
>approve a change. These would be domain experts providing strong 
>oversight over a particular area.
> 
>(In gerrit, this is a branch with a small team or single person who 
>can +1 approve a review)
> 
>  - Project dictators don't need to do detailed reviews of merge 
>proposals from subsystem maintainers. The dictator's role is mostly 
>just to sign off on the merge proposal. However, the dictator can 
>comment in the proposal on things which could have been done better 
>and the subsystem maintainer should take note of these comments and 
>perhaps retroactively fix them up. Ultimately, though, the dictator 
> can exercise a veto if the merge proposal is unacceptable or 
>if the subsystem maintainer is consistently making the same 
>mistakes.
> 
>  - It would be up to the project dictators to help drive patches 
>through the right subsystem branches - e.g. they might object if 
>one subsystem maintainer merged a patch that inappropriately cut 
>into another subsystem or they might refuse to merge a given patch
>into the main branch unless it went through the appropriate 
>subsystem branch.
> 
>(In gerrit, this would mean a small team or single person who can 
>+1 approve merge proposals on master. They would -1 proposals
>submitted against master which should have been submitted against a 
>subsystem branch.)
> 
>  - Subsystem branches might not necessarily be blessed centrally. It 
>might be a case that anyone can create such a branch and, over 
>time, establish trust with the project dictators. Subsystem 
>branches would come and go. This is the mechanism by which 
>subsystem maintainership is transferred between people over time.
> 
>(In gerrit, this means people need to easily be able to create 
>their own branches)
> 
>(What's more difficult to imagine in gerrit is how

Re: [Openstack] Nova compute manager: trying to understand rationale for kpartx atop qemu-nbd

2012-05-02 Thread Vishvananda Ishaya
That seems like a reasonable approach. It would be nice to work with packagers to 
verify that the packages are properly installing nbd. I'm pretty sure I used 
kpartx because I didn't know about the max_part parameter.

Vish

On May 2, 2012, at 11:37 AM, Lee Schermerhorn wrote:

> With diablo plus some of our own changes, we've discovered our compute
> nodes in some of our test nova environments are littered with
> orphaned /dev/mapper/nbd* links to /dev/dm-* devices that are holding
> the respective nbd devices.  Of course, this causes injection failure
> for VMs that attempt to reuse those wedged nbd devices that appear
> available by the method that nova uses to determine nbd device
> availability.
> 
> This could well be self-inflicted, and we haven't gotten down to the
> root cause.  However, in looking at it, we're wondering why the compute
> manager uses kpartx, specifically atop nbd devices.  It does appear to be
> rather fragile and nbd devices do support partitioned images if the
> module is installed with max_part > 0, where zero is the unfortunate
> default.
> 
> We're thinking that maybe we can dispense with kpartx and, ensuring that
> the nbd module is installed with max_part > 0 on our compute nodes, use
> the resulting /dev/nbdXpY devices directly.   But, before we charge off
> down that path, we want to understand the rationale for using kpartx
> atop nbd.  We've searched the wiki and git logs and the wider 'net for
> enlightenment.  Finding none, we turn to the collective wisdom of the
> Community.
> 
> Is there some problem with qemu-nbd and partitioned images that argues
> against this approach?  Perhaps qemu-nbd doesn't recognize/support all
> of the partition table types that kpartx does?  Something more
> insidious?
> 
> Anyone know?
> 
> Regards,
> Lee Schermerhorn
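The max_part approach Lee proposes can be sketched in Python. This is a hypothetical illustration, not nova code; it assumes the nbd module was loaded with max_part > 0 (e.g. modprobe nbd max_part=16), and the free-device heuristic (no pid file under /sys/block) is only an approximation of how availability could be detected:

```python
import os
import re


def partition_devices(nbd_dev, max_part=16):
    """Return the per-partition nodes (/dev/nbd0p1, ...) the kernel
    creates when nbd is loaded with max_part > 0 -- the devices that
    could be used directly instead of layering kpartx mappings."""
    return ['%sp%d' % (nbd_dev, n) for n in range(1, max_part + 1)]


def free_nbd_devices(dev_dir='/dev', sys_block='/sys/block'):
    """Heuristic: an nbd device with no pid file under /sys/block is
    not currently attached to a qemu-nbd process."""
    free = []
    for name in sorted(os.listdir(dev_dir)):
        if re.match(r'nbd\d+$', name) and \
                not os.path.exists(os.path.join(sys_block, name, 'pid')):
            free.append(os.path.join(dev_dir, name))
    return free
```

With this, the compute node would mount /dev/nbd0p1 directly after qemu-nbd attaches the image, with no kpartx mapping to leak.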
> 
> 
> 
> 




Re: [Openstack] proposal for Russell Bryant to be added to Nova Core

2012-05-02 Thread Vishvananda Ishaya
+1 From me!

On Apr 28, 2012, at 11:41 PM, Mark McLoughlin wrote:

> Definite +1
> 
> Mark.
> 
> On Fri, 2012-04-27 at 11:09 -0400, Dan Prince wrote:
>> Russell Bryant wrote the Nova Qpid rpc implementation and is a member of the 
>> Nova security team. He has been helping chipping away at reviews and 
>> contributing to discussions for some time now.
>> 
>> I'd like to seem him Nova core so he can help out w/ reviews... definitely 
>> the RPC ones.
>> 
>> Dan
>> 
> 
> 
> 




Re: [Openstack] [client] creating blueprints for the unified CLI project

2012-05-02 Thread Vishvananda Ishaya
Going into interactive mode when no args are specified works well for virsh.

Vish
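The virsh-style behavior can be sketched as follows: with arguments, run a single command and exit; with no arguments, drop into a read-eval loop. The command table and output strings are invented for illustration and do not reflect cliff's actual API:

```python
import sys

# Hypothetical command table; real CLIs would register subcommands.
COMMANDS = {
    'server-list': lambda: 'listing servers',
    'help': lambda: 'commands: %s' % ', '.join(sorted(COMMANDS)),
}


def run_command(argv):
    """Execute a single command line such as ['server-list']."""
    if not argv or argv[0] not in COMMANDS:
        return 'unknown command: %s' % ' '.join(argv)
    return COMMANDS[argv[0]]()


def main(argv=None, stdin=sys.stdin):
    argv = sys.argv[1:] if argv is None else argv
    if argv:                      # one-shot mode: cli server-list
        print(run_command(argv))
        return
    for line in stdin:            # interactive mode: no args given
        words = line.split()
        if not words:
            continue
        if words[0] in ('quit', 'exit'):
            break
        print(run_command(words))
```

A flag to opt into interactive mode would just move the `if argv:` test onto a parsed option instead of the argument count.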

On May 1, 2012, at 8:08 PM, Doug Hellmann wrote:

> I thought having it run like that by default made sense, but if the list 
> agrees we want a flag I'm happy to change it.
> 
> On Tue, May 1, 2012 at 8:06 PM, Matt Joyce  wrote:
> So.  I see that now.  ( got a devstack vm setup with trunk build )
> 
> Do we want the interactive mode to engage by default when executing
> without parameters?  Or do we want a flag to engage that ( people can
> alias on their own if they want ).
> 
> -Matt
> 
> On Tue, May 1, 2012 at 4:52 PM, Doug Hellmann
>  wrote:
> > I borrowed Guido's time machine and added an interactive mode to cliff this
> > weekend.
> >
> > http://cliff.readthedocs.org/en/latest/index.html
> >
> >
> > On Tue, May 1, 2012 at 7:48 PM, Matt Joyce  wrote:
> >>
> >> Question do we want to consider an interactive mode on the CLI?  Or is
> >> the shell enough for our use case?
> >>
> >> Maybe just ignore that question until we've got something beyond the
> >> basics.
> >>
> >> -Matt
> >>
> >> On Tue, May 1, 2012 at 4:34 PM, Matt Joyce  wrote:
> >> > Awesome. will do.
> >> >
> >> > On Tue, May 1, 2012 at 1:47 PM, Doug Hellmann
> >> >  wrote:
> >> >> I have started creating blueprints from my notes about activities we
> >> >> need to
> >> >> complete for the unified CLI. Please check the list
> >> >> at https://blueprints.launchpad.net/python-openstackclient/ and make
> >> >> sure I
> >> >> haven't missed anything that has been discussed so far, and open a
> >> >> blueprint
> >> >> if I have.
> >> >>
> >> >> Thanks,
> >> >> Doug
> >> >>
> >> >>
> >
> >
> 



Re: [Openstack] Nova volume service and the nova client command

2012-05-02 Thread Vishvananda Ishaya
Correct.

Vish

On May 1, 2012, at 3:01 PM, Lillie Ross-CDSR11 wrote:

> Once again, I think I'm answering my own question.
> 
> nova-volume works in conjunction with nova-api to talk to any number of iSCSI 
> targets that you might have configured in your cloud.  Each target runs an 
> instance of nova-volume.  The URL for the volume service should point at the 
> host(s) running the nova-api front end.  Nova-volume listens to the 
> appropriate AMQP channel to perform the necessary LVM commands.
> 
> Am I missing anything?
> 
> Sorry for all questions.  I just needed to think things through a bit more…
> 
> /ross
> 
> On May 1, 2012, at 3:15 PM, Lillie Ross-CDSR11 wrote:
> 
>> I've configured the nova-volume service using the Ubuntu 12.04 LTS 
>> packages, and am able to create/delete volumes using the euca2ools package.  
>> However, dashboard is not able to retrieve volume info or perform volume 
>> operations with the nova client.  If I issue the 'nova volume-list' command, 
>> I receive a 400 error response.  For example
>> 
>> root@essex1:/etc/nova# nova --debug volume-list
>> connect: (essex1, 5000)
>> send: 'POST /v2.0/tokens HTTP/1.1\r\nHost: essex1:5000\r\nContent-Length: 
>> 100\r\ncontent-type: application/json\r\naccept-encoding: gzip, 
>> deflate\r\naccept: application/json\r\nuser-agent: 
>> python-novaclient\r\n\r\n{"auth": {"tenantName": "admin", 
>> "passwordCredentials": {"username": "admin", "password": "admin"}}}'
>> reply: 'HTTP/1.1 200 OK\r\n'
>> header: Content-Type: application/json
>> header: Vary: X-Auth-Token
>> header: Date: Tue, 01 May 2012 20:06:39 GMT
>> header: Transfer-Encoding: chunked
>> connect: (essex4, 8776)
>> connect fail: (u'essex4', 8776)
>> DEBUG (shell:416) n/a (HTTP 400)
>> Traceback (most recent call last):
>> File "/usr/lib/python2.7/dist-packages/novaclient/shell.py", line 413, in 
>> main
>>   OpenStackComputeShell().main(sys.argv[1:])
>> File "/usr/lib/python2.7/dist-packages/novaclient/shell.py", line 364, in 
>> main
>>   args.func(self.cs, args)
>> File "/usr/lib/python2.7/dist-packages/novaclient/v1_1/shell.py", line 858, 
>> in do_volume_list
>>   volumes = cs.volumes.list()
>> File "/usr/lib/python2.7/dist-packages/novaclient/v1_1/volumes.py", line 79, 
>> in list
>>   return self._list("/volumes/detail", "volumes")
>> File "/usr/lib/python2.7/dist-packages/novaclient/base.py", line 71, in _list
>>   resp, body = self.api.client.get(url)
>> File "/usr/lib/python2.7/dist-packages/novaclient/client.py", line 136, in 
>> get
>>   return self._cs_request(url, 'GET', **kwargs)
>> File "/usr/lib/python2.7/dist-packages/novaclient/client.py", line 124, in 
>> _cs_request
>>   **kwargs)
>> File "/usr/lib/python2.7/dist-packages/novaclient/client.py", line 107, in 
>> request
>>   raise exceptions.from_response(resp, body)
>> BadRequest: n/a (HTTP 400)
>> ERROR: n/a (HTTP 400)
>> root@essex1:/etc/nova# 
>> 
>> Node essex1 is the cloud controller.  essex4 is the volume controller.  To 
>> verify the connection failure, if I try to telnet to port 8776 I also get a 
>> connect refused.  My nova-volume endpoint is specified in keystone as 
>> 'http://essex4:8776/v1/$(tenant_id)s'. With the above connection failure, 
>> nothing is written to the nova-volume.log (which makes sense if the server 
>> isn't listening to port 8776). Obviously, none of the other nova volume 
>> commands work.
>> 
>> Anything obvious that I'm missing?  Thanks in advance
>> 
>> Regards,
>> Ross
> 
> 
> 
> 




Re: [Openstack] [Nova] RPC API Versioning Prototype

2012-04-30 Thread Vishvananda Ishaya
Looking good.

A few points:

a) Can we just do hasattr dispatch instead of isinstance? It seems more 
Pythonic than forcing the use of the dispatcher base class.

b) It seems like we should make the dispatcher pick version 1.0 instead of 
failing if version is not passed in; that way a new dispatcher could handle 
unversioned messages. Or did I miss some other magic way this is happening?

Vish
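The hasattr-style dispatch in (a), with unversioned messages defaulting to 1.0 as in (b), could look roughly like this. The names (RPC_API_VERSION, ComputeAPI) and the compatibility rule are illustrative assumptions, not the actual prototype code:

```python
class RpcDispatcher(object):
    """Duck-typed dispatcher: any object exposing callable methods and an
    optional RPC_API_VERSION attribute can serve an API -- no common base
    class, just getattr/callable checks."""

    def __init__(self, *apis):
        self.apis = apis

    @staticmethod
    def _can_serve(requested, advertised):
        # Compatible when majors match and the API's minor >= requested.
        r_major, r_minor = [int(p) for p in requested.split('.')]
        a_major, a_minor = [int(p) for p in advertised.split('.')]
        return r_major == a_major and a_minor >= r_minor

    def dispatch(self, method, version=None, **kwargs):
        version = version or '1.0'   # unversioned messages default to 1.0
        for api in self.apis:
            advertised = getattr(api, 'RPC_API_VERSION', '1.0')
            func = getattr(api, method, None)
            if callable(func) and self._can_serve(version, advertised):
                return func(**kwargs)
        raise RuntimeError('no handler for %s v%s' % (method, version))


class ComputeAPI(object):          # stand-in for a manager class
    RPC_API_VERSION = '1.1'

    def ping(self, arg):
        return 'pong:%s' % arg
```

An unversioned message (version=None) is routed as 1.0 and still reaches ComputeAPI, since an API advertising 1.1 can serve 1.0 callers.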

On Apr 30, 2012, at 3:31 PM, Russell Bryant wrote:

> Greetings,
> 
> I held a session on adding version numbers to the RPC APIs at the last
> design summit.  The idea was fairly non-controversial.  The next step
> was to do some prototyping to nail down what it should look like.  This
> will end up touching quite a bit of code, so it's important to get some
> consensus around what it will look like up front.
> 
> I've made it far enough that there is enough to look at and provide
> feedback on.
> 
> The code is in this branch:
> 
>https://github.com/russellb/nova/tree/bp/versioned-rpc-apis
> 
> The best place to start is in this doc:
> 
> 
> https://github.com/russellb/nova/blob/bp/versioned-rpc-apis/README-versioned-rpc-apis.rst
> 
> There may be room for some additional code around helping managers
> support more than one version of an API.  I figure that can shake out on
> an as-needed basis as existing code gets converted and APIs get changed.
> 
> Feedback welcome!
> 
> Thanks,
> 
> -- 
> Russell Bryant
> 




Re: [Openstack] nova-manage vpn run not available issue

2012-04-30 Thread Vishvananda Ishaya
The vpn commands were moved to APIs and are now launched by the nova CLI tool.  
As an admin user:

nova cloudpipe-create 

Or, using the API, look for cloudpipe on:

http://api.openstack.org/


On Apr 30, 2012, at 2:34 PM, Vijay wrote:

> Hello,
> I am trying to launch a cloudpipe image. I created the cloudpipe image and 
> uploaded into glance.  I am testing with Essex Release candidate version 
> (RC4) that was released before the final Essex release.
> When I run the command
> nova-manage vpn run  , nova-manage says the only option 
> available is
> nova-manage vpn change and "run" is not available.
> Does any package need to be installed?
> Thanks,
> -VJ



Re: [Openstack] Periodic clean-up of fixed_ip addresses in multi-host DHCP mode

2012-04-30 Thread Vishvananda Ishaya
Hey Phil. I think you have a case of old-code-itis.

This was modified to query properly in multi-host mode before the Essex release:

def fixed_ip_disassociate_all_by_timeout(context, host, time):
    session = get_session()
    # NOTE(vish): only update fixed ips that "belong" to this
    #             host; i.e. the network host or the instance
    #             host matches. Two queries necessary because
    #             join with update doesn't work.
    host_filter = or_(and_(models.Instance.host == host,
                           models.Network.multi_host == True),
                      models.Network.host == host)
    result = session.query(models.FixedIp.id).\
                     filter(models.FixedIp.deleted == False).\
                     filter(models.FixedIp.allocated == False).\
                     filter(models.FixedIp.updated_at < time).\
                     join((models.Network,
                           models.Network.id == models.FixedIp.network_id)).\
                     join((models.Instance,
                           models.Instance.id == models.FixedIp.instance_id)).\
                     filter(host_filter).\
                     all()
    fixed_ip_ids = [fip[0] for fip in result]
    if not fixed_ip_ids:
        return 0
    result = model_query(context, models.FixedIp, session=session).\
                     filter(models.FixedIp.id.in_(fixed_ip_ids)).\
                     update({'instance_id': None,
                             'leased': False,
                             'updated_at': utils.utcnow()},
                            synchronize_session='fetch')
    return result

On Apr 27, 2012, at 11:30 AM, Day, Phil wrote:

> Hi Folks,
>  
> In multi-host mode the “host” field of a network never seems to get set (as 
> only IPs are allocated, not networks)
>  
> However the periodic revovery task in NetworkManager uses the host field to 
> filter what addresses it should consider cleaning up (to catch the case where 
> the message from dnsmasq is either never sent or not delivered for some 
> reason)
>  
> if self.timeout_fixed_ips:
> now = utils.utcnow()
> timeout = FLAGS.fixed_ip_disassociate_timeout
> time = now - datetime.timedelta(seconds=timeout)
> num = self.db.fixed_ip_disassociate_all_by_timeout(context,
>self.host,
>time)
> if num:
> LOG.debug(_('Dissassociated %s stale fixed ip(s)'), num)
>  
>  
> Where “db.fixed_ip_disassociate_all_by_timeout”   is:
>  
> def fixed_ip_disassociate_all_by_timeout(_context, host, time):
> session = get_session()
> inner_q = session.query(models.Network.id).\
>   filter_by(host=host).\
>   subquery()
> result = session.query(models.FixedIp).\
>  filter(models.FixedIp.network_id.in_(inner_q)).\
>  filter(models.FixedIp.updated_at < time).\
>  filter(models.FixedIp.instance_id != None).\
>  filter_by(allocated=False).\
>  update({'instance_id': None,
>  'leased': False,
>  'updated_at': utils.utcnow()},
>  synchronize_session='fetch')
> return result
>  
>  
> So what this seems to do to me is:
> -  Find all of the fixed_ips which are:
> o   on networks assigned to this host
> o   Were last updated more that “Timeout” seconds ago
> o   Are associated to an instance
> o   Are not allocated
>  
> Because in multi-host mode the network host field is always null, this query 
> does nothing apart from giving the DB a good workout every 10 seconds – so 
> there could be a slow leakage of IP addresses.
>  
> Has anyone else spotted this – and if so do you have a good strategy for 
> dealing with it ?
>  
> It seems that running this on every network_manager every 10 seconds is 
> excessive – so what about still running it on all network_managers but using a 
> long random sleep between runs in multi-host mode?
>  
> Thoughts ?
>  
> Cheers,
> Phil
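The long random sleep Phil floats could be as simple as jittering the interval before each cleanup pass. This is a hypothetical sketch; the 600-second base and 50% jitter are illustrative values, not anything nova ships:

```python
import random


def next_cleanup_delay(base=600.0, jitter=0.5, rng=random):
    """Delay before the next disassociate pass: base +/- jitter*base.
    Spreading the runs out means multi-host nodes stop hitting the
    database in lockstep every 10 seconds."""
    return rng.uniform(base * (1.0 - jitter), base * (1.0 + jitter))
```

Each network_manager would sleep next_cleanup_delay() between passes instead of a fixed periodic_interval.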



Re: [Openstack] database migration cleanup

2012-04-26 Thread Vishvananda Ishaya
+1.  Might be nice to have some kind of test to verify that the new migration 
leaves the tables in exactly the same state as the old migrations.

Vish
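The verification test Vish suggests could compare schema signatures produced by the two migration paths. A minimal sketch using sqlite (real nova testing would reflect MySQL/PostgreSQL schemas too; the helper name is invented):

```python
import sqlite3


def schema_signature(conn):
    """Summarize every table as (column name, declared type, not-null),
    so the schema produced by the compacted migration can be asserted
    equal to the one produced by running all historical migrations."""
    sig = {}
    rows = conn.execute(
        "SELECT name FROM sqlite_master WHERE type = 'table'").fetchall()
    for (table,) in rows:
        info = conn.execute('PRAGMA table_info(%s)' % table).fetchall()
        # PRAGMA table_info rows: (cid, name, type, notnull, default, pk)
        sig[table] = sorted((c[1], c[2], c[3]) for c in info)
    return sig
```

The test would run the old migration chain into one database, the compacted migration into another, and assert the signatures match.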

On Apr 26, 2012, at 12:24 PM, Dan Prince wrote:

> The OpenStack Essex release had 82 database migrations. As these grow in 
> number it seems reasonable to clean house from time to time. Now seems as 
> good a time as any.
> 
> I came up with a first go at it here:
> 
> https://review.openstack.org/#/c/6847/
> 
> The idea is that we would:
> 
> * Do this early in the release cycle to minimize risk.
> 
> * Compact all pre-Folsom migrations into a single migration. This migration 
> would be used for new installations.
> 
> * New migrations during the Folsom release cycle would proceed as normal.
> 
> * Migrations added during Folsom release cycle could be compacted during "E" 
> release cycle. TBD if/when we do the next compaction.
> 
> * Users upgrading from pre-Essex would need to upgrade to Essex first. Then 
> Folsom.
> 
> --
> 
> I think this scheme would support users who follow stable releases as well as 
> users who follow trunk very closely.
> 
> We talked about this at the conference but I thought this issue might be near 
> and dear to some of our end users so it was worth discussing on the list.
> 
> What are general thoughts on this approach?
> 
> Dan (dprince)
> 




Re: [Openstack] questions about IP addressing and network config

2012-04-26 Thread Vishvananda Ishaya

On Apr 25, 2012, at 7:31 PM, Jimmy Tsai wrote:

> 
> Hi everyone,
> 
> I'm running with Essex 2012.1, 
> and have some questions about the nova network operation, 
> 
> 1. Is it possible to manually assign an IP address to a launched instance? My 
> situation is:
> after instance boot up (OS: CentOS 6.2), I changed the 
> /etc/sysconfig/network-scripts/ifcfg-eth0 setting 
> from dhcp to static (the same subnet as created by command : nova-manage 
> create network), and restart the network service, 
> And then I couldn't ssh or ping the instance from other server with the same 
> subnet.
> What is the problem ?  I checked the iptables policies on the compute host, 
> and find nothing about the DROP packets.
> I also tried changing the record in the nova.fixed_ips table and the libvirt.xml 
> of the instance, then rebooting the instance; it still didn't work.
> I used FlatDHCP  as my network manager.

You can't do this.  Libvirt sets up no-mac-spoofing and no-ip-spoofing filters, so 
the IP address needs to match the DHCP'd one. You should be able to switch to 
static and use the same info that you get from DHCP, though.
> 
> 2. According to the first question, I have another requirement to set up a 
> loopback IP address (lo:0) on a running instance; after the setting was 
> completed, I couldn't ping or ssh to the loopback IP from the same subnet, and I 
> tried to set an alias IP address with eth0:0, but it still didn't work.
> Any ideas with this ?

Not sure

> 
> 3. Is there any way to use 2 NICs with different subnets on instances? I want 
> to separate the network traffic.  
> Now I'm running with one bridged interface (br100), and it works well.  In 
> order to backup the large log files,
> I'm planning to use 2 NICs for the compute hosts; I want to use 2 vNICs on 
> instance, one for web service and the other for log backup,
> I think I should create a new network for the second bridged interface, but I 
> can't find any document to guide me.

This is definitely possible with FlatManager (You could use cloud_config drive 
and some version of contrib/openstack-config converted to work with centos to 
set up the interfaces)

It was possible at one point with FlatDHCPManager as well by creating multiple 
networks and using a specific combination of config options like 
use_single_default_gateway. I don't know if anyone has tried this for a while 
so there may be issues with it. You might try creating a second network and 
setting use_single_default_gateway and see what happens.

There are plans underway to support this by only dhcping the first interface 
and allowing a guest agent to set up the other interfaces, but it isn't in 
place yet.

Vish


Re: [Openstack] Using VMWare ESXi with openstack

2012-04-26 Thread Vishvananda Ishaya

On Apr 25, 2012, at 7:44 PM, Michael March wrote:

> I just curious. Is anyone using the VMware functionality in OpenStack?  
> 
> I'm getting the feeling that it is more of a 'check box' thing of "yeah, we 
> have that hypervisor covered" than something that's seriously being used.
> 
> If my feeling is wrong, I'd like to know. 

I am hoping some commercial interests take over the ESX hypervisor.  It is 
definitely behind KVM and Xen, and I think this is only because no one is 
committing development resources to improve it.  The other hypervisors seem to 
get more support.  Hyper-V will be coming back because Microsoft has made a big 
commitment to it. Hopefully vmware or some other commercial entity cares enough 
to start really putting development effort into it.

Vish




Re: [Openstack] Using VMWare ESXi with openstack

2012-04-25 Thread Vishvananda Ishaya
On Apr 25, 2012, at 7:22 PM, Atul Gosain wrote:

> Hi 
> 
>  Thanks a lot for the responses. I still have some clarifications. 
> Openstack documentation states 
> (http://docs.openstack.org/trunk/openstack-compute/admin/content/hypervisor-configuration-basics.html)
> Hypervisor Configuration Basics
> 
> The node where the nova-compute service is installed and running is the 
> machine that runs all the virtual machines, referred to as the compute node 
> in this guide.
> 
> This gave me the impression that the nova-compute service has to be installed on 
> the hypervisor (and not in a VM). 
> 
> Though from the links below, it seems that nothing needs to be installed on the 
> ESX server for OpenStack to manage it; from the documentation 
> (http://wiki.openstack.org/XenServer/GettingStarted), it seems this 
> package needs to be built on Ubuntu and then installed on XenServer for it to 
> be managed by OpenStack. 
> 
> Sorry for my confusion, but I couldn't figure this out from the documentation. 
> 
> -
> 
> Secondly, what I mainly wanted to know was how OpenStack assigns IP 
> addresses to the new VMs it creates. Looking at the APIs of individual 
> hypervisors (Xen, Hyper-V, etc.), none provides a direct method of setting the IP 
> address on the host, though they do allow setting the MAC address. CloudStack 
> uses an internal DHCP server to set the IP addresses in VMs for the assigned 
> MAC. 
> 
> I looked into the OpenStack code, and it seems like the network configuration is 
> injected into the VM images. Is that true? As I understand it, these images are 
> stored on the Glance server. How are these images installed on hypervisors by 
> OpenStack? It needs to have some software running on the hypervisors to which 
> the controller can push the image of the VM. 
> 

It depends on the network mode, but the most common configuration uses a dhcp 
server as well.
> Where does the network information get injected inside the image ? Is it on 
> the glance server or hypervisor ?
> 
In the injected version (not recommended) it is injected after the image is 
downloaded from glance on the compute host.
> 
> Thanks
> 
> Atul


Re: [Openstack] How does everyone build OpenStack disk images?

2012-04-25 Thread Vishvananda Ishaya
IDEA:

Add PXE boot support to nova (which seems interesting on its own!), and PXE 
boot from an installer image, then snapshot it.

OR:

Modify boot from iso image to allow the iso to attach separately (currently it 
replaces the root drive in KVM) so that you could boot from an iso but still 
have a snapshot-able disk to install to.

When we discussed this before, we thought you could do it by adding some 
metadata in glance about whether or not the image should replace the root drive 
or be attached separately {'attach_separately': True}

(Sorry for hijacking your thread with potential ways to do it in the future 
instead of current approaches)

Vish


On Apr 25, 2012, at 6:14 PM, Justin Santa Barbara wrote:

> How does everyone build OpenStack disk images?  The official documentation 
> describes a manual process (boot VM with ISO), which is sub-optimal in terms 
> of repeatability / automation / etc.  I'm hoping we can do better!
> 
> I posted how I do it on my blog, here: 
> http://blog.justinsb.com/blog/2012/04/25/creating-an-openstack-image/
> 
> Please let me know the many ways in which I'm doing it wrong :-) 
> 
> I'm thinking we can have a discussion here, and then I can then compile the 
> responses into a wiki page and/or a nice script...
> 
> Justin
> 
> 



Re: [Openstack] Using Foreign Keys

2012-04-25 Thread Vishvananda Ishaya
The main issue is when the relevant tables are moved into a separate service a 
la quantum or cinder.  We can't keep referential integrity across multiple 
databases, so the foreign keys in this case need to be removed. It leads to an 
odd situation when there is still an internal implementation in addition to the 
external implementation because the internal implementation no longer has 
foreign keys.

As an example, we used to have foreign key relationships between instances and 
networks.  We can no longer have these because we support networks declared 
externally.  The internal network management now has no referential integrity, 
but this is the price we pay for separation of concerns.  We are going through 
a similar set of relationship-breaking with the volume code.

Vish
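The instance/network example can be made concrete: once networks are managed by a separate service, the column is a plain identifier and lookups go through a client rather than a SQL join. A hypothetical sketch (the table layout and lookup_network callable are invented for illustration, not nova's actual schema):

```python
import sqlite3

# With networks in the same database, network_id could carry
# REFERENCES networks(id); with networks declared externally it
# becomes a bare integer with no constraint to enforce.
conn = sqlite3.connect(':memory:')
conn.execute('CREATE TABLE fixed_ips ('
             ' id INTEGER PRIMARY KEY,'
             ' address VARCHAR(39),'
             ' network_id INTEGER)')   # no FOREIGN KEY: networks live elsewhere
conn.execute("INSERT INTO fixed_ips VALUES (1, '10.0.0.2', 42)")


def network_for(fixed_ip_id, lookup_network):
    """lookup_network stands in for a remote network-service client
    (hypothetical); a None result models a dangling reference that the
    database can no longer prevent."""
    row = conn.execute('SELECT network_id FROM fixed_ips WHERE id = ?',
                       (fixed_ip_id,)).fetchone()
    return lookup_network(row[0]) if row else None
```

The application-level None check is exactly the integrity work that the dropped foreign key used to do for free.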

On Apr 25, 2012, at 11:02 AM, Doug Hellmann wrote:

> 
> 
> On Wed, Apr 25, 2012 at 7:38 AM, Andrew Hutchings  
> wrote:
> On 12/04/12 13:35, J. Daniel Schmidt wrote:
> > While testing our SUSE OpenStack packages we hit a nasty bug and reported it
> > as:  https://bugs.launchpad.net/keystone/+bug/972502
> >
> > We found out that the underlying cause was a lack of referential 
> > integrity[1]
> > using sqlite or mysql. When we tried to reproduce this issue on postgresql 
> > the
> > usage of foreign keys greatly helped to find the cause.
> 
> From a MySQL perspective that is probably more of an argument to use
> transactions, not foreign keys.
> 
> Transactions and referential integrity are related, but not equivalent. 
> Without referential integrity it's quite easy to commit a transaction that 
> leaves the database in a logically inconsistent state (it sounds like that's 
> what was happening in the case described by the OP).
> 
> Is there a technical reason to disable strict referential integrity checking 
> with MySQL?
> 
> Doug
> 



Re: [Openstack] Nova and external NFS

2012-04-25 Thread Vishvananda Ishaya
There was discussion on the list recently of a similar problem using nfs4. 
Perhaps their solution will work for you:

http://www.mail-archive.com/openstack@lists.launchpad.net/msg09440.html

Vish

On Apr 25, 2012, at 8:18 AM, Sergio Ariel de la Campa Saiz wrote:

> Hello:
>  
> I have found some examples to configure NFS in Nova. But all of them use one 
> of the hosts as NFS server. In my case, I need to use an external NFS server, 
> which do not have installed OpenStack.
> I have been working on it but I have found some problems. For example: when I 
> created a new instance these are the files that were created:
>  
> root@opst1:~# ll /nfs/NOVA-INST-DIR/instances/instance-0015/
> total 29284
> drwxrwxr-x 2 nobody nogroup 4096 abr 25  2012 ./
> drwxrwxrwx 4 root   root4096 abr 25  2012 ../
> -rw-rw 1 root   root   0 abr 25  2012 console.log
> -rw-r--r-- 1 root   root25165824 abr 25  2012 disk
> -rw-r--r-- 1 root   root 6291968 abr 25  2012 disk.local
> -rw-rw-r-- 1 root   root 4790624 abr 25  2012 kernel
> -rw-rw-r-- 1 nobody nogroup 1856 abr 25  2012 libvirt.xml
> Instead of libvirt-qemu and kvm, root is the owner of the files.
>  
> Please.. help me about it
>  
> Thanks a lot...
>  
>  
> Sergio Ariel
> de la Campa Saiz
> GMV-SES Infraestructura / 
> GMV-SES Infrastructure
>  
>  
>  
> GMV
> Isaac Newton, 11
> P.T.M. Tres Cantos
> E-28760 Madrid
> Tel.
> +34 91 807 21 00
> Fax
> +34 91 807 21 99
>  www.gmv.com
>  
> 
>  
> 
>  
> This message including any attachments may contain confidential information, 
> according to our Information Security Management System, and intended solely 
> for a specific individual to whom they are addressed. Any unauthorised copy, 
> disclosure or distribution of this message is strictly forbidden. If you have 
> received this transmission in error, please notify the sender immediately and 
> delete it.



Re: [Openstack] raw or qcow2

2012-04-25 Thread Vishvananda Ishaya

On Apr 25, 2012, at 7:44 AM, Lorin Hochstein wrote:

> Since we're talking snapshots, quick doc-related snapshot questions:
> 
> - Are snapshots only supported on qemu/kvm, or do they work with other 
> hypervisors as well? (Does Xen support qcow2 images?)

I was speaking about libvirt/kvm, but the xenapi driver also supports 
snapshotting and uses VDIs internally.
> 
> - Does OpenStack do anything with snapshots other than using them to generate 
> new images? I was a little confused by the existence of the "Snapshots" pane 
> in Diablo Horizon. I originally thought snapshotting was just a qemu/kvm 
> implementation detail about how nova created a new image from a running 
> instance, so I didn't understand why there was a "Snapshots" pane in addition 
> to an "Images" pane.

In this case snapshotting refers specifically to turning a running instance 
back into an image so that it can be used to launch new vms. The internal parts 
are implementation details of this process.

Vish





Re: [Openstack] raw or qcow2

2012-04-24 Thread Vishvananda Ishaya
?

Did you mistype your comment or misread mine?  Raw does NOT work for snapshots; 
snapshots only work for qcow2. Implementing snapshotting with raw would be 
possible. Logic just needs to be added to skip the internal snapshot step and 
just use the entire file when uploading to glance.  This would be pretty darn 
slow for large images though.

If you are asking about differencing images in glance that is a different 
question and one that we haven't addressed. It has a lot of implications and 
needs changes in both nova and glance to be useful. Logic needs to be added 
around dependency chains and coalescing. Plus it has implications when trying 
to migrate and resize instances, so there is a lot to consider.

As caitlin mentioned, something will be implemented in the volume service 
anyway, so it might be better to wait and see what happens there.

Vish

On Apr 24, 2012, at 4:30 PM, Joshua Harlow wrote:

> What changes would be needed to make qcow2 files work as snapshots?
> Some type of image “dependency” management in glance (and failure cases) and 
> the corresponding “dependency” fetching in nova (and failure cases)?
> Might be something pretty useful to have, instead of forcing raw for 
> snapshots?
> 
> On 4/24/12 3:51 PM, "Vishvananda Ishaya"  wrote:
> 
> On Apr 17, 2012, at 2:04 AM, William Herry wrote:
> 
> > so, what changes should I make if I want use raw in openstack, I didn't 
> > find some configure option in nova.conf.sample
> >
> > I also try to modify the source code in nova/virt/libvirt/utils.py, and 
> > didn't succeed
> >
> > I noticed that the type of snapshot is same as the instance's image by 
> > default, does this right, and what about the type of model image that 
> > uploaded to glance, does it affect the disk type I use?
> >
> > Thanks
> 
> snapshots will not work with raw images.  To make openstack use raw images, 
> you simply have to set:
> 
> use_cow_images=false
> 
> you can upload to glance in qcow or raw, it will be decoded to raw when the 
> image is downloaded to the compute host.
> 
> Vish
> 
> 
> 



Re: [Openstack] raw or qcow2

2012-04-24 Thread Vishvananda Ishaya
On Apr 17, 2012, at 2:04 AM, William Herry wrote:

> so, what changes should I make if I want use raw in openstack, I didn't find 
> some configure option in nova.conf.sample
> 
> I also try to modify the source code in nova/virt/libvirt/utils.py, and 
> didn't succeed
> 
> I noticed that the type of snapshot is same as the instance's image by 
> default, does this right, and what about the type of model image that 
> uploaded to glance, does it affect the disk type I use?
> 
> Thanks

snapshots will not work with raw images.  To make openstack use raw images, you 
simply have to set:

use_cow_images=false

you can upload to glance in qcow or raw, it will be decoded to raw when the 
image is downloaded to the compute host.

Vish




Re: [Openstack] Ubuntu packaging - cactus, older releases

2012-04-20 Thread Vishvananda Ishaya
Not sure why it isn't working, but releases are also tagged on github:

https://github.com/openstack/nova/tarball/2011.2

Vish

On Apr 20, 2012, at 1:10 PM, Razique Mahroua wrote:

> Yah I see that... I've all the packages if you want to for Ubuntu server 
> (Just made a copy of my /var/cache/apt/ for the nova packaes)
> Does someone know why it is closed now ?
> thanks 
> 
> Nuage & Co - Razique Mahroua 
> razique.mahr...@gmail.com
> 
> 
> 
> Le 20 avr. 2012 à 21:59, Leandro Reox a écrit :
> 
>> I need 2011.2 (cactus) razique , it shows permission denied on the archive 
>> page
>> 
>> On Fri, Apr 20, 2012 at 4:53 PM, Razique Mahroua  
>> wrote:
>> Hey Leandro,
>> try that 
>> https://launchpad.net/~openstack-release/+archive/2011.3
>> Nuage & Co - Razique Mahroua 
>> razique.mahr...@gmail.com
>> 
>> 
>> 
>> Le 20 avr. 2012 à 20:09, Leandro Reox a écrit :
>> 
>>> Hi all, 
>>> 
>>> Is there any ppa, or repo where we can get older estables releases like 
>>> cactus or diablo ? the old  http://wiki.openstack.org/2011.2 is not working 
>>> anymore and i saw on the wiki that only the last releases will be 
>>> available, am i right ? If i not were can i find them ? (cactus 
>>> especifically)
>>> 
>>> Regards
>> 
>> 
> 



Re: [Openstack] Using Foreign Keys

2012-04-19 Thread Vishvananda Ishaya
On Apr 19, 2012, at 8:59 PM, Vaze, Mandar wrote:

> +1 for data integrity  ...
> 
> Here is an example that could use data integrity check :
> 
> tenant information is managed in keystone DB
> ovs_quantum DB has tenant_id column for networks table.
> When I use stack.sh - it puts a string "default" in tenant_id column - when 
> it creates network via "nova-manage network create" and it WORKS  

> 
> I see two problems here :
> 
> 1. tenant_id are uuid - so string "default" should be rejected with check 
> _is_like_uuid - but that is only partial solution.

tenant_ids are strings. It is an implementation detail that keystone uses uuids.
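For illustration only, a shape check along the lines of the `_is_like_uuid` idea mentioned above could be written with the stdlib `uuid` module (the function name is hypothetical, borrowed from the message); as noted, it only proves the string is UUID-shaped, not that any such tenant exists in keystone:

```python
import uuid

def is_like_uuid(value):
    """Return True if value parses as a UUID; this is a shape check only."""
    try:
        uuid.UUID(value)
    except (TypeError, ValueError, AttributeError):
        return False
    return True

print(is_like_uuid("default"))  # False: not UUID-shaped
print(is_like_uuid("b9f2c1a0-0000-4000-8000-000000000000"))  # True, even if no such tenant exists
```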

> 2. tenant_id should be valid ID from keystone.tenants

This would require nova-manage to have logic to connect to keystone, 
which it doesn't have.  One of the drawbacks of having decoupled services is 
everything isn't in one database where you can support foreign keys. We could 
in theory add logic to nova to allow it to verify things inside of keystone, 
but I'm not sure this makes sense from a security perspective. It would require 
nova to have administrative access to keystone to find out what tenants exist.

Alternatively we could force administrative commands like network create to be 
done through the api using the context of the intended network. This has a 
drawback as well of making things administratively more difficult. An admin 
would have to get an administrative token for the intended tenant somehow 
before making the call.

Vish




Re: [Openstack] image_service=nova.image.s3.S3ImageService???

2012-04-19 Thread Vishvananda Ishaya
Correct, S3ImageService is a wrapper, you can't specify it in the image_service 
config option.

Vish

On Apr 19, 2012, at 8:31 AM, Lorin Hochstein wrote:

> I'm updating the documentation for this page: 
> http://docs.openstack.org/trunk/openstack-compute/admin/content/configuring-compute-to-use-the-image-service.html
> 
> My question is: is there any use case where you would configure nova to do:
> 
> image_service=nova.image.s3.S3ImageService
> 
> Looking at the code, it seems like this would not even work. The 
> S3ImageService defers several methods (e.g., index, create, delete) to the 
> image service specified by the image_service flag, so you'd get an infinite 
> recursion. It appears that the S3ImageService can only be used as a wrapper 
> around the default image service, and can't be a default image service on its 
> own.
> 
> I'm going to zap this as a valid option from the documentation (unless 
> someone sets me straight here).
> 
> Take care,
> 
> Lorin
> --
> Lorin Hochstein
> Lead Architect - Cloud Services
> Nimbis Services, Inc.
> www.nimbisservices.com
> 
> 
> 
> 
> 



Re: [Openstack] question about configuring FlatDHCPManager

2012-04-19 Thread Vishvananda Ishaya
Unfortunately there is a bug where deleting a network does not delete 
associated fixed ips:

https://bugs.launchpad.net/nova/+bug/754900

The fix has landed in trunk and is proposed for backport into stable/essex

https://review.openstack.org/6664

To work around this issue, you will have to delete the fixed ips manually from 
the database or drop and recreate the database.

Vish

On Apr 19, 2012, at 9:00 AM, Xin Zhao wrote:

> Hello,
> 
> I run nova compute (diablo) on RHEL6. Following the instruction, I configured 
> the network as following, and it works: 
> 
> $>nova-manage network create ostester 10.0.0.0/24 1 256 --bridge=br0 
> --bridge_interface=em1
> 
> Now I want to change it to use a new fixed_range, like 10.1.1.0/24, so I 
> delete the network, then redefine it following the same 
> command format.  But it doesn't work, the instance starts still with the 
> 10.0.0.X ip, and of course, network doesn't work in the 
> instances. 
> 
> What do I miss here? 
> 
> Thanks,
> Xin



Re: [Openstack] Unable to ping launched Instance

2012-04-15 Thread Vishvananda Ishaya

On Apr 14, 2012, at 10:27 PM, Marton Kiss wrote:

> In that case, check the libvirt type value
> /opt/stack/etc/nova/nova.conf, and check the value inside the log file
> of nova-compute after starting.
> 
> M.

Devstack puts conf files in /etc/nova/nova.conf now, just like most packages.




Re: [Openstack] The Scheduling feature when reboot the instance

2012-04-12 Thread Vishvananda Ishaya
Nova has stayed out of doing this kind of auto-migration so far.  You are 
requesting a very specific type of overallocation of resources, as in don't 
count machines that are shut off when scheduling, but then automatically move 
them if they are turned back on.  It is an interesting idea, but not something 
we have discussed implementing.

You can generally over-allocate quite a bit, assuming you have swap enabled, 
and if some of your machines are not fully allocating their ram internally 
(which is usually the case) you should be fine. In deployments I have done when 
performance is really important, we simply do not over-allocate at all.

Keep in mind that the main use case of OpenStack is to run a cloud system, not 
a virtualization system. In a cloud, you very rarely stop machines and start 
them later, you generally delete the machine completely when you are done and 
launch a new one when you need it.  For this reason the particular use-case you 
are looking for hasn't really been considered.

Vish

On Apr 12, 2012, at 9:50 PM, Văn Đình Phúc wrote:

> Ignoring issues of Ram. I have a number of questions as the following model:
>  
> My node-1's resource as the following:
> CPU : 4 core
> RAM: 4 GB.
> Now , My Node-1 is hosting 4 instances:
> 
> 3 instances (each VM: 1 core CPU, 2 GB ram). 
> 2 instances (i-0001, i-0002) are turned off and 1 instance (i-0003) is 
> running.
> 1 instance (1 core CPU, 1 GB ram). 
> It is running. (i-0004)
> I request a new instance by the euca-run-instance command. My new instance ( 
> i-0005) is located on the node 1. This is the problem for the 
> nova-scheduler.
> 
> What happens if I reboot 2 instances: i-0001, i-0002?
> 
> How do we manage this problem in Openstack ?
> 
> I think that my 2 instances (i-0001, i-0002) should be migrated to 
> the remaining available resources, such as node-2 and node-3.
> 
> If not, it's actually very difficult to manage and reboot the instances 
> which are turned off, and their allocated resources will be reused for the 
> new instances.
> 
> - Original message: -
> From : Vishvananda Ishaya [vishvana...@gmail.com]
> Sent : 12/04/2012 11:43 PM
> To   : openstack@lists.launchpad.net;vishvana...@gmail.com
> Subject : Re: [Openstack] The Scheduling feature when reboot the instance
>  
> No this does not exist.  If you are going to overprovision ram, you should 
> probably make sure that you have swap enabled on your host.
> 
> Vish
> 
> On Apr 12, 2012, at 4:10 AM, Văn Đình Phúc wrote:
> 
>> Hi.
>> I'm using Openstack (2011.3 
>> (2011.3-nova-milestone-tarball:tarmac-20110922115702-k9nkvxqzhj130av2) with 
>> 3 Compute nodes.
>> My scheduling option is default.
>> I have the problem as the following steps:
>> Compute node 1 is installed the modules: nova-api, nova-compute and 
>> nova-network.
>> The resource of node-1 at max is fit for 3 instances (m1.small): i-0001,   
>> i-0002, i-0003.
>> I turn off  i-0001.
>> and then I request a new instance by the euca-run-instance command. My new 
>> instance ( i-0004) is located on the node 1.
>> The resources: RAM , CPU,.. of node-2 and node-3 is available.
>> finally, I reboot the i-0001.What is happen? the i-0001 is rebooted 
>> on the node-1 , and It's shutdown off after reboot.I checked the libvirt's 
>> log :
>> 
>> "Failed to allocate 2147483648 B: Cannot allocate memory
>> 2012-04-12 16:11:50.530: shutting down"
>> So, does Openstack have a scheduling feature when rebooting the instance?
>> 
>> Example: As above , My instance (i-0001) will be migrated to node-2 or 
>> node-3 when the resource of node-1 is not available.
>> 
>> 
>>  
>> Văn Đình Phúc
>> Manager - Cloud Team
>> Techno Department
>> Bkav Specialization Division
>>  
>> Office:  Bkav Building - Yen Hoa New Town, Cau Giay Dist, Hanoi
>> Tel:   (84-4) 3763.2552
>> Mobile: 0169 4702 388 
>> 
>> Do your best, the rest will come !  
>> Hãy làm việc hết mình, những điều tốt đẹp sẽ đến với bạn !
>>  
>> Disclaimer: This e-mail and any files transmitted with it are confidential 
>> and may contain privileged information. It is intended solely for the use of 
>> the individual to whom it is addressed and others authorized to receive it. 
>> If you are not the intended recipient you are notified that disclosing, 
>> copying, distributing or taking any action in reliance on the contents of 
> this information is strictly prohibited. If you have received this message in 
> error, please notify the sender immediately by reply e-mail and delete 
> completely this e-mail from your system, without reproducing, distributing or 
> retaining copies.

Re: [Openstack] minimal IaaS openstack installation FROM SOURCE on CentOS

2012-04-12 Thread Vishvananda Ishaya
Devstack just gained support for Fedora, so you could try using it. You might 
have to make some modifications, but it is just a shell script so it should be 
easy to read.

(From devstack.org) try:

git clone git://github.com/openstack-dev/devstack.git
cd devstack; ./stack.sh

On Apr 12, 2012, at 6:31 AM, Mauch, Viktor (SCC) wrote:

> Hi,
>  
> I need to install a minimal/simple Openstack IaaS Deployment framework on 
> CentOS 6.x from Source.
>  
> Is there anywhere a nice howto tutorial?
>  
> And yes: I know there are packeges, but I need it from source.
>  
> Cheers Viktor



Re: [Openstack] The Scheduling feature when reboot the instance

2012-04-12 Thread Vishvananda Ishaya
No this does not exist.  If you are going to overprovision ram, you should 
probably make sure that you have swap enabled on your host.

Vish

On Apr 12, 2012, at 4:10 AM, Văn Đình Phúc wrote:

> Hi.
> I'm using Openstack (2011.3 
> (2011.3-nova-milestone-tarball:tarmac-20110922115702-k9nkvxqzhj130av2) with 3 
> Compute nodes.
> My scheduling option is default.
> I have the problem as the following steps:
> Compute node 1 is installed the modules: nova-api, nova-compute and 
> nova-network.
> The resource of node-1 at max is fit for 3 instances (m1.small): i-0001,   
> i-0002, i-0003.
> I turn off  i-0001.
> and then I request a new instance by the euca-run-instance command. My new 
> instance ( i-0004) is located on the node 1.
> The resources: RAM , CPU,.. of node-2 and node-3 is available.
> finally, I reboot the i-0001.What is happen? the i-0001 is rebooted 
> on the node-1 , and It's shutdown off after reboot.I checked the libvirt's 
> log :
> 
> "Failed to allocate 2147483648 B: Cannot allocate memory
> 2012-04-12 16:11:50.530: shutting down"
> So, does Openstack have a scheduling feature when rebooting the instance?
> 
> Example: As above , My instance (i-0001) will be migrated to node-2 or 
> node-3 when the resource of node-1 is not available.
> 
> 
>  
> Văn Đình Phúc
> Manager - Cloud Team
> Techno Department
> Bkav Specialization Division
>  
> Office:  Bkav Building - Yen Hoa New Town, Cau Giay Dist, Hanoi
> Tel:   (84-4) 3763.2552
> Mobile: 0169 4702 388 
> 
> Do your best, the rest will come !  
> Hãy làm việc hết mình, những điều tốt đẹp sẽ đến với bạn !
>  
> Disclaimer: This e-mail and any files transmitted with it are confidential 
> and may contain privileged information. It is intended solely for the use of 
> the individual to whom it is addressed and others authorized to receive it. 
> If you are not the intended recipient you are notified that disclosing, 
> copying, distributing or taking any action in reliance on the contents of 
> this information is strictly prohibited. If you have received this message in 
> error, please notify the sender immediately by reply e-mail and delete 
> completely this e-mail from your system, without reproducing, distributing or 
> retaining copies.
>  



Re: [Openstack] DevStack stable/essex branch

2012-04-11 Thread Vishvananda Ishaya
Yay! Awesome work Dean!

Monty/Jim: Has the ci infrastructure been updated to use the stable/essex 
branch for integration tests on the stable/essex merges?

Vish

On Apr 11, 2012, at 1:57 PM, Dean Troyer wrote:

> The stable/essex branch of DevStack has been created in GitHub
> (https://github.com/openstack-dev/devstack/tree/stable/essex) with
> stackrc pre-configured to pull from stable/essex branches of the
> project repos as appropriate.
> 
> http://devstack.org has been updated to reflect the current state.
> 
> dt
> 
> -- 
> 
> Dean Troyer
> dtro...@gmail.com
> 




Re: [Openstack] Management API Blueprint

2012-04-11 Thread Vishvananda Ishaya
FYI there were existing blueprints covering some of this functionality here:

https://blueprints.launchpad.net/nova/+spec/admin-cli
https://blueprints.launchpad.net/nova/+spec/admin-service-actions

I like the detailed features in the wiki.  A few notes:

a) managing one administrative api across multiple projects is going to be 
difficult, might be good to focus on each project individually, perhaps pushing 
some common code into openstack-common.
b) users/projects/roles is out of scope for nova, this can be administered 
through keystone + keystone-client
c) a lot of administrative features are already in the nova cli (you can delete 
networks but not create them yet)


I think there will be some serious weight behind the operational support team 
in this cycle.  Would love to see nova-manage + talking directly to the db go 
boom. For now i will target this blueprint to the nova-operations team to sort 
out vs other blueprints.

https://blueprints.launchpad.net/~nova-operations


On Apr 11, 2012, at 12:55 PM, Wilkinson, Lyle wrote:

> Hi folks,
>  
> We’ve got some significant interest in creating a pattern for OpenStack 
> management APIs.  We’ve created a blueprint to capture some of our thoughts 
> around how to do this for Nova.
>  
> https://blueprints.launchpad.net/nova/+spec/management-api
>  
> We’re hoping to discuss this at the design summit next week.  We’ve created 
> an etherpad page to get the discussion going, so if you have questions, 
> suggestions, etc., feel free to contribute there.
>  
> http://etherpad.openstack.org/Management-API
>  
> Thanks in advance!
>  
> Lyle Wilkinson
>  



Re: [Openstack] Metadata and File Injection (code summit session?)

2012-04-10 Thread Vishvananda Ishaya
On Apr 10, 2012, at 4:24 PM, Justin Santa Barbara wrote:

> One advantage of a network metadata channel is it allows for communication 
> with cloud provider services without having to put a key into the vm. In 
> other words, the vm can be authenticated via its ipv6 address.
> 
> Did you have a use case in mind here?  It seems that Keystone could use the 
> IPV6 address to authenticate an instance without having to upload 
> credentials, which would indeed be useful (e.g. for auto-scaling), but I 
> don't see why that needs any special metadata support (?)

Arbitrarily allowing keystone to authenticate ipv6 would be vulnerable to 
spoofing. You need a direct guest-host-keystone channel to be sure.  I 
think authentication is the main concern, because if auth is over a secure 
channel, then you can do all of the other communication over the regular 
internet. The vm could connect to the control domain for a service by 
subscribing to a message queue (for example) via a public ip.

You could also secure the channel by having a private network attached to the 
vm and only putting the control domain for the service on the private network. 
Having keystone validate ipv6 only over that network might be an option.

Vish



Re: [Openstack] Metadata and File Injection (code summit session?)

2012-04-10 Thread Vishvananda Ishaya

On Apr 10, 2012, at 3:04 PM, Justin Santa Barbara wrote:

> Having the ability to read config data from a runtime changeable
> metadata server (rather then a config file on an injected disk) serves a
> use case I am interested in.  The only problem is horizontal scalability
> of the metadata server in this model which may not be a problem with
> some tinkering.
> 
> Can you please share that use case?  I'm especially interested in finding use 
> cases that would not better be better served by e.g. SSH or Zookeeper or 
> Corosync..

One advantage of a network metadata channel is it allows for communication with 
cloud provider services without having to put a key into the vm. In other 
words, the vm can be authenticated via its ipv6 address.

Vish



Re: [Openstack] Image API v2 Draft 4

2012-04-10 Thread Vishvananda Ishaya
On Apr 10, 2012, at 2:26 AM, Thierry Carrez wrote:

> Jay Pipes wrote:
>>> I take it you didn't attend the glorious JSON debate of a couple of
>>> summits ago :-)
>> 
>> Glorious it was indeed.
> 
> I think the key quote was something like:
> "Please don't bastardize my JSON with your XML crap"

According to my twitter, the actual quote was: "Don't bring your XML filth into 
my JSON"

Vish





Re: [Openstack] [OpenStack] preallocation

2012-04-10 Thread Vishvananda Ishaya
The limitation is due to what is supported by the qemu-img snapshot -c command.
AFAIK this works with qcow only, but perhaps other formats are supported.

use_cow_images=true # always works

use_cow_images=false # only works if force_raw_images=false AND 
original_image_format = qcow 
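The rule above can be summarized as a small truth-table helper (a sketch for clarity, not actual nova code; the flag names follow the config options discussed in this thread):

```python
def snapshot_supported(use_cow_images, force_raw_images, original_format):
    """Apply the compatibility rule stated above.

    Cow backing files can always be snapshotted; without them the
    on-disk file is only qcow if the original image was qcow AND
    force_raw_images is disabled.
    """
    if use_cow_images:
        return True
    return original_format == "qcow" and not force_raw_images

# The combinations Lorin asked about:
print(snapshot_supported(True, True, "raw"))     # True: cow always works
print(snapshot_supported(False, False, "qcow"))  # True
print(snapshot_supported(False, True, "qcow"))   # False: image forced to raw
print(snapshot_supported(False, False, "raw"))   # False
```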

Vish

On Apr 10, 2012, at 10:42 AM, Lorin Hochstein wrote:

> Vish:
> 
> For documentation purposes, if the user wants to be able to do snapshots, 
> what combinations of the following three variables are allowed?
> 
> 1. original image format (qcow2 | raw)
> 2. use_cow_image flag (true | false)
> 3. force_raw_images flag (true | false)
> 
> 
> Take care,
> 
> Lorin
> --
> Lorin Hochstein
> Lead Architect - Cloud Services
> Nimbis Services, Inc.
> www.nimbisservices.com
> 
> 
> 
> 
> 
> On Apr 10, 2012, at 1:32 AM, Vishvananda Ishaya wrote:
> 
>> You can disable using backing files with the following config:
>> use_cow_images=false
>> 
>> You should be aware that you likely won't be able to snapshot images unless 
>> you make sure to upload them all in qcow format and also set:
>> force_raw_images=false
>> 
>> On Apr 9, 2012, at 9:37 PM, William Herry wrote:
>> 
>>> Hi
>>> 
>>> I read an article that said using preallocation can improve disk I/O 
>>> performance in KVM, but when I add it to OpenStack, this error comes up:
>>> 
>>> (nova.rpc.amqp): TRACE: Stderr: 'Backing file and preallocation cannot be 
>>> used at the same time\nqemu-img: 
>>> /usr/local/lib/python2.7/dist-packages/nova-2012.1-py2.7.egg/instances/instance-000e/disk:
>>>  error while creating qcow2: Invalid argument\n'
>>> 
>>> I added it to utils.py in the virt/libvirt directory (line 77)
>>> 
>>> def create_cow_image(backing_file, path):
>>> """Create COW image
>>> 
>>> Creates a COW image with the given backing file
>>> 
>>> :param backing_file: Existing image on which to base the COW image
>>> :param path: Desired location of the COW image
>>> """
>>> execute(FLAGS.qemu_img, 'create', '-f', 'qcow2', '-o',
>>>  'preallocation=metadata,cluster_size=2M,backing_file=%s' %
>>>   backing_file, path)
>>> 
>>> here is the article: 
>>> http://itscblog.tamu.edu/improve-disk-io-performance-in-kvm/
>>> 
>>> so what is the backing file for? Can I disable it in order to use 
>>> preallocation, since I can't have both?
>>> 
>>> Thanks
>>> 
>>> -- 
>>> 
>>> ===
>>> William Herry
>>> 
>>> williamherrych...@gmail.com
>>> 
>> 
> 



Re: [Openstack] [OpenStack] preallocation

2012-04-09 Thread Vishvananda Ishaya
You can disable using backing files with the following config:
use_cow_images=false

You should be aware that you likely won't be able to snapshot images unless you 
make sure to upload them all in qcow format and also set:
force_raw_images=false

On Apr 9, 2012, at 9:37 PM, William Herry wrote:

> Hi
> 
> I read an article that said using preallocation can improve disk I/O 
> performance in KVM, but when I add it to OpenStack, this error comes up:
> 
> (nova.rpc.amqp): TRACE: Stderr: 'Backing file and preallocation cannot be 
> used at the same time\nqemu-img: 
> /usr/local/lib/python2.7/dist-packages/nova-2012.1-py2.7.egg/instances/instance-000e/disk:
>  error while creating qcow2: Invalid argument\n'
> 
> I added it to utils.py in the virt/libvirt directory (line 77)
> 
> def create_cow_image(backing_file, path):
> """Create COW image
> 
> Creates a COW image with the given backing file
> 
> :param backing_file: Existing image on which to base the COW image
> :param path: Desired location of the COW image
> """
> execute(FLAGS.qemu_img, 'create', '-f', 'qcow2', '-o',
>  'preallocation=metadata,cluster_size=2M,backing_file=%s' %
>   backing_file, path)
> 
> here is the article: 
> http://itscblog.tamu.edu/improve-disk-io-performance-in-kvm/
> 
> so what is the backing file for? Can I disable it in order to use 
> preallocation, since I can't have both?
> 
> Thanks
> 
> -- 
> 
> ===
> William Herry
> 
> williamherrych...@gmail.com
> 



Re: [Openstack] [Nova] removing nova-direct-api

2012-04-09 Thread Vishvananda Ishaya
+1 to removal.  I just tested to see if it still works, and due to our policy 
checking and loading objects before sending them into compute.api, it no longer 
functions. Probably wouldn't be too hard to fix it, but clearly no one is using 
it, so let's axe it.

Vish

On Apr 9, 2012, at 11:19 AM, Joe Gordon wrote:

> Hi All,
> 
> The other day I noticed that in addition to EC2 and OpenStack APIs there is a 
> third API type: "nova-direct-api."  As best I can tell, this was used early 
> on for development/testing before the EC2 and OpenStack APIs were mature.
> 
> My question is, since most of the code hasn't been touched in over a year and 
> we have two mature documented APIs, is anyone using this?  If not, I propose 
> to remove it.
> 
> 
> Proposed Change:  https://review.openstack.org/6375
> 
> 
> best,
> Joe



Re: [Openstack] [OpenStack] boot from iso image

2012-04-09 Thread Vishvananda Ishaya
Assuming you are using kvm, the iso replaces the root drive of the system, so 
the next disk will be the ephemeral drive.  Are you sure the ephemeral drive in 
the flavor/instance_type isn't 20G?

Vish

On Apr 8, 2012, at 10:37 PM, William Herry wrote:

> Hi
> 
> I am trying OpenStack's new feature of booting from an ISO image, and after 
> installation it boots up a working system; it is really awesome.
> 
> But I have a little question: I gave a 10G disk when I created this 
> instance, but during the installation and after boot-up I got almost 20G. 
> What's going on here? Did I misconfigure something? I am using Essex RC1.
> 
> Thanks 
> 
> -- 
> 
> ===
> William Herry
> 
> williamherrych...@gmail.com
> 



Re: [Openstack] can not start VM instance with specific image

2012-04-05 Thread Vishvananda Ishaya
I would try this again. 

1. Delete all instances on the host.
2. Clean out the _base directory.
3. Restart nova-compute
4. Try to run the instance again.

If that doesn't work, I would suspect a bad sector on your hard drive that is 
getting reused.
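
Step 2 amounts to emptying the image cache directory, which defaults to /var/lib/nova/instances/_base (the path in the traceback below). A minimal Python sketch, using a temporary directory so it is safe to run anywhere; on a real node you would point `base` at the actual cache path and stop nova-compute first:

```python
import glob
import os
import tempfile

# Simulated cache directory; on a compute node this would be
# /var/lib/nova/instances/_base (stop nova-compute before cleaning it)
base = os.path.join(tempfile.mkdtemp(), "_base")
os.makedirs(base)

# leftover artifacts like the .part/.converted files from the traceback
for name in ("abc123.part", "abc123.converted", "abc123"):
    open(os.path.join(base, name), "w").close()

# Step 2: remove every cached base image so the next boot re-fetches
# the image from glance instead of reusing a possibly corrupt copy
for path in glob.glob(os.path.join(base, "*")):
    os.remove(path)

print(os.listdir(base))  # []
```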

Vish
 
On Apr 5, 2012, at 2:43 AM, yuanke wei wrote:

> hi all,
> 
> prob1:
> I deployed OpenStack with one controller and N compute nodes. All the 
> compute nodes seem to work well and the Windows 2008 Server image can be 
> spawned successfully, except on one compute node, where I encountered the 
> following errors. Can someone tell me what the problem might be?
> Even after deleting all the files under the "_base" dir, the error is still there.
> If needed, further info can be provided.
> 
> 2012-04-05 09:30:03,874 DEBUG nova.rpc [-] Making asynchronous cast on 
> network... from (pid=3068) cast /var/lib/nova/nova/rpc/impl_kombu.py:756
> 2012-04-05 09:30:03,932 ERROR nova.rpc [-] Exception during message handling
> (nova.rpc): TRACE: Traceback (most recent call last):
> (nova.rpc): TRACE:   File "/var/lib/nova/nova/rpc/impl_kombu.py", line 620, 
> in _process_data
> (nova.rpc): TRACE: rval = node_func(context=ctxt, **node_args)
> (nova.rpc): TRACE:   File "/var/lib/nova/nova/exception.py", line 100, in 
> wrapped
> (nova.rpc): TRACE: return f(*args, **kw)
> (nova.rpc): TRACE:   File "/var/lib/nova/nova/compute/manager.py", line 485, 
> in run_instance
> (nova.rpc): TRACE: self._run_instance(context, instance_id, **kwargs)
> (nova.rpc): TRACE:   File "/var/lib/nova/nova/compute/manager.py", line 481, 
> in _run_instance
> (nova.rpc): TRACE: _cleanup()
> (nova.rpc): TRACE:   File "/var/lib/nova/nova/compute/manager.py", line 406, 
> in _cleanup
> (nova.rpc): TRACE: _deallocate_network()
> (nova.rpc): TRACE:   File "/usr/lib/python2.6/contextlib.py", line 23, in 
> __exit__
> (nova.rpc): TRACE: self.gen.next()
> (nova.rpc): TRACE:   File "/var/lib/nova/nova/compute/manager.py", line 459, 
> in _run_instance
> (nova.rpc): TRACE: network_info, block_device_info)
> (nova.rpc): TRACE:   File "/var/lib/nova/nova/exception.py", line 100, in 
> wrapped
> (nova.rpc): TRACE: return f(*args, **kw)
> (nova.rpc): TRACE:   File "/var/lib/nova/nova/virt/libvirt/connection.py", 
> line 629, in spawn
> (nova.rpc): TRACE: block_device_info=block_device_info)
> (nova.rpc): TRACE:   File "/var/lib/nova/nova/virt/libvirt/connection.py", 
> line 896, in _create_image
> (nova.rpc): TRACE: size=size)
> (nova.rpc): TRACE:   File "/var/lib/nova/nova/virt/libvirt/connection.py", 
> line 788, in _cache_image
> (nova.rpc): TRACE: call_if_not_exists(base, fn, *args, **kwargs)
> (nova.rpc): TRACE:   File "/var/lib/nova/nova/utils.py", line 687, in inner
> (nova.rpc): TRACE: retval = f(*args, **kwargs)
> (nova.rpc): TRACE:   File "/var/lib/nova/nova/virt/libvirt/connection.py", 
> line 786, in call_if_not_exists
> (nova.rpc): TRACE: fn(target=base, *args, **kwargs)
> (nova.rpc): TRACE:   File "/var/lib/nova/nova/virt/libvirt/connection.py", 
> line 800, in _fetch_image
> (nova.rpc): TRACE: images.fetch_to_raw(context, image_id, target, 
> user_id, project_id)
> (nova.rpc): TRACE:   File "/var/lib/nova/nova/virt/images.py", line 88, in 
> fetch_to_raw
> (nova.rpc): TRACE: path_tmp, staged)
> (nova.rpc): TRACE:   File "/var/lib/nova/nova/utils.py", line 190, in execute
> (nova.rpc): TRACE: cmd=' '.join(cmd))
> (nova.rpc): TRACE: ProcessExecutionError: Unexpected error while running 
> command.
> (nova.rpc): TRACE: Command: qemu-img convert -O raw 
> /var/lib/nova/instances/_base/bc33ea4e26e5e1af1408321416956113a4658763.part 
> /var/lib/nova/instances/_base/bc33ea4e26e5e1af1408321416956113a4658763.converted
> (nova.rpc): TRACE: Exit code: 1
> (nova.rpc): TRACE: Stdout: ''
> (nova.rpc): TRACE: Stderr: 'qemu-img: error while reading\n'
> (nova.rpc): TRACE:
> 
> prob2:
> since the problem may be that the cached image on the compute node is 
> broken, how can I force the compute node to abandon its locally cached 
> images and fetch from the remote image server?
> Simply deleting all the files under the "_base" dir doesn't seem to work; I 
> see no data transfer between the compute node and the image server.
> 
> thks in advance!
> 
> 
> 
> 
> 
> -
> 韦远科 
> wei yuanke(wei)
> gtalk: weiyuanke...@gmail.com
> msn: weiyuanke...@hotmail.com
> 



Re: [Openstack] Dashboard VNC Console failed to connect to server

2012-04-03 Thread Vishvananda Ishaya
It is working!

You are in the bios screen, so you probably just need to wait (software mode 
booting can take a while)

If the vm doesn't ever actually boot, you may be attempting to boot a 
non-bootable image.


Re: [Openstack] Limit flavors to specific hosts

2012-04-03 Thread Vishvananda Ishaya

On Apr 3, 2012, at 6:45 AM, Day, Phil wrote:

> Hi John,
>  
> Maybe the problem with host aggregates is that it too quickly became 
> something that was linked to hypervisor capability, rather than being the 
> more general mechanism of which one form of aggregate could be linked to 
> hypervisor capabilities ?
>  
> Should we have a “host aggregates 2.0” session at the Design Summit ?

+ 1

I think the primary use case is associating metadata with groups of hosts that 
can be interpreted by the scheduler.  Obviously, this same metadata can be used 
to create pools/etc. in the hypervisor, but we can't forget about the 
scheduler.  Modifying flags on the hosts for capabilities is ugly.
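
One way to picture "metadata interpreted by the scheduler" is a host filter keyed on aggregate metadata. Everything below is a hypothetical sketch: the names and data structures are illustrative, not nova's actual scheduler or aggregates API.

```python
def hosts_matching(hosts, aggregates, key, value):
    """Return the hosts belonging to at least one aggregate whose
    metadata maps key to value (illustrative sketch, not nova code)."""
    matched = set()
    for agg in aggregates:
        if agg["metadata"].get(key) == value:
            matched.update(agg["hosts"])
    # preserve the candidate ordering the scheduler handed us
    return [h for h in hosts if h in matched]


aggregates = [
    {"hosts": {"node1", "node2"}, "metadata": {"ssd": "true"}},
    {"hosts": {"node3"}, "metadata": {"ssd": "false"}},
]
print(hosts_matching(["node1", "node2", "node3"], aggregates, "ssd", "true"))
# ['node1', 'node2']
```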

Vish



Re: [Openstack] nova-api start failed in multi_host compute nodes.

2012-04-03 Thread Vishvananda Ishaya
Your api-paste.ini is very out of date.  Here is the section from the current 
version:


# Metadata #

[composite:metadata]
use = egg:Paste#urlmap
/: metaversions
/latest: meta
/1.0: meta
/2007-01-19: meta
/2007-03-01: meta
/2007-08-29: meta
/2007-10-10: meta
/2007-12-15: meta
/2008-02-01: meta
/2008-09-01: meta
/2009-04-04: meta

[pipeline:metaversions]
pipeline = ec2faultwrap logrequest metaverapp

[pipeline:meta]
pipeline = ec2faultwrap logrequest metaapp

[app:metaverapp]
paste.app_factory = nova.api.metadata.handler:Versions.factory

[app:metaapp]
paste.app_factory = nova.api.metadata.handler:MetadataRequestHandler.factory

-

Try updating to the paste included in etc/nova/api-paste.ini

FYI you can also run just the metadata server by using the binary:

nova-api-metadata

Instead of using nova-api and changing enabled_apis

Vish

On Apr 3, 2012, at 1:19 AM, 한승진 wrote:

> Hi all
> 
> I am trying to start nova-api on my compute node in order to use metadata.
> 
> I haven't succeeded yet. I found this in the nova-api log:
> 
> 2012-04-03 15:18:43,908 CRITICAL nova [-] Could not load paste app 'metadata' 
> from /etc/nova/api-paste.ini
>  36 (nova): TRACE: Traceback (most recent call last):
>  37 (nova): TRACE:   File "/usr/local/bin/nova-api", line 51, in 
>  38 (nova): TRACE: servers.append(service.WSGIService(api))
>  39 (nova): TRACE:   File 
> "/usr/local/lib/python2.7/dist-packages/nova/service.py", line 299, in 
> __init__
>  40 (nova): TRACE: self.app = self.loader.load_app(name)
>  41 (nova): TRACE:   File 
> "/usr/local/lib/python2.7/dist-packages/nova/wsgi.py", line 414, in load_app
>  42 (nova): TRACE: raise exception.PasteAppNotFound(name=name, 
> path=self.config_path)
>  43 (nova): TRACE: PasteAppNotFound: Could not load paste app 'metadata' from 
> /etc/nova/api-paste.ini
>  44 (nova): TRACE:
>  45 2012-04-03 15:20:43,786 ERROR nova.wsgi [-] No section 'metadata' 
> (prefixed by 'app' or 'application' or 'composite' or 'composit' or 
> 'pipeline' or 'filter-app') found in config /etc/nova/api-paste.ini
> 
> I added the flag in my nova.conf
> 
> --enabled_apis=metadata
> 
> Here is my api-paste.ini
> 
> ###
> # EC2 #
> ###
> 
> [composite:ec2]
> use = egg:Paste#urlmap
> /: ec2versions
> /services/Cloud: ec2cloud
> /services/Admin: ec2admin
> /latest: ec2metadata
> /2007-01-19: ec2metadata
> /2007-03-01: ec2metadata
> /2007-08-29: ec2metadata
> /2007-10-10: ec2metadata
> /2007-12-15: ec2metadata
> /2008-02-01: ec2metadata
> /2008-09-01: ec2metadata
> /2009-04-04: ec2metadata
> 
> [pipeline:ec2cloud]
> pipeline = logrequest ec2noauth cloudrequest authorizer ec2executor
> # NOTE(vish): use the following pipeline for deprecated auth
> #pipeline = logrequest authenticate cloudrequest authorizer ec2executor
> 
> [pipeline:ec2admin]
> pipeline = logrequest ec2noauth adminrequest authorizer ec2executor
> # NOTE(vish): use the following pipeline for deprecated auth
> #pipeline = logrequest authenticate adminrequest authorizer ec2executor
> 
> [pipeline:ec2metadata]
> pipeline = logrequest ec2md
> 
> [pipeline:ec2versions]
> pipeline = logrequest ec2ver
> 
> [filter:logrequest]
> paste.filter_factory = nova.api.ec2:RequestLogging.factory
> 
> [filter:ec2lockout]
> paste.filter_factory = nova.api.ec2:Lockout.factory
> 
> [filter:ec2noauth]
> paste.filter_factory = nova.api.ec2:NoAuth.factory
> 
> [filter:authenticate]
> paste.filter_factory = nova.api.ec2:Authenticate.factory
> 
> [filter:cloudrequest]
> controller = nova.api.ec2.cloud.CloudController
> paste.filter_factory = nova.api.ec2:Requestify.factory
> 
> [filter:adminrequest]
> controller = nova.api.ec2.admin.AdminController
> paste.filter_factory = nova.api.ec2:Requestify.factory
> 
> [filter:authorizer]
> paste.filter_factory = nova.api.ec2:Authorizer.factory
> 
> [app:ec2executor]
> paste.app_factory = nova.api.ec2:Executor.factory
> 
> [app:ec2ver]
> paste.app_factory = nova.api.ec2:Versions.factory
> 
> [app:ec2md]
> paste.app_factory = 
> nova.api.ec2.metadatarequesthandler:MetadataRequestHandler.factory
> 
> #
> # Openstack #
> #
> 
> [composite:osapi]
> use = call:nova.api.openstack.urlmap:urlmap_factory
> /: osversions
> /v1.1: openstackapi11
> 
> [pipeline:openstackapi11]
> pipeline = faultwrap noauth ratelimit serialize extensions osapiapp11
> # NOTE(vish): use the following pipeline for deprecated auth
> # pipeline = faultwrap auth ratelimit serialize extensions osapiapp11
> 
> [filter:faultwrap]
> paste.filter_factory = nova.api.openstack:FaultWrapper.factory
> 
> [filter:auth]
> paste.filter_factory = nova.api.openstack.auth:AuthMiddleware.factory
> 
> [filter:noauth]
> paste.filter_factory = nova.api.openstack.auth:NoAuthMiddleware.factory
> 
> [filter:ratelimit]
> paste.filter_factory = 
> nova.api.openstack.limits:RateLimitingMiddleware.factory
> 
> [filter:serialize]
> paste.filter_factory = 
> nova.api.openstack.wsgi:LazySerializationMiddleware.factory
> 
> [f

Re: [Openstack] multiple floating ip pools

2012-04-02 Thread Vishvananda Ishaya
Not yet. The extensions aren't in the api docs yet. Trying to address that
this week.
On Apr 2, 2012 8:40 AM, "Lorin Hochstein"  wrote:

> Vish:
>
> Are floating IP pools (--pool) documented anywhere? I did a quick look but
> couldn't find it in the main docs.
>
> Take care,
>
> Lorin
>
>
> Take care,
>
> Lorin
> --
> Lorin Hochstein
> Lead Architect - Cloud Services
> Nimbis Services, Inc.
> www.nimbisservices.com
>
>
>
>
> On Mar 30, 2012, at 2:18 PM, Vishvananda Ishaya wrote:
>
> Floating ip pools allow you to specify a different ip range and bind
> interface for sets of ips, so it will work for segregation purposes.
>
> There isn't policy acl on which pool the ip comes from but it could be
> added. The policy wrapping in the network layer is very basic right now.
>  The underlying objects aren't passed in so we can't set policies based on
> (for example) pool name.  If/when the policy wrapping is improved to
> include more information that is a possibility.
>
> Vish
>
> On Mar 30, 2012, at 6:23 AM, Kevin Jackson wrote:
>
> I'm also interested in providing multiple floating IP pools.  Is this
> something that is achievable or conceived?
>
> My use case is as follows:
>
> Each tenant gets its own private VLAN and address space, so
> intercommunication between each tenant is able to be segregated.
> On assignment of public floating IPs though this segregation breaks down.
>
> To put this into context, I'd like to be able to have, say, a "Production"
> tenant and a "Development" tenant.  Inter-communication between the two
> should be prohibited.
> As soon as I assign a floating IP address, this model breaks down.
>
> I noticed that nova-manage floating create has a  ' --pool=
> Optional pool ' option.  How is this used?  Does this help solve my problem?
>
> Cheers,
>
> Kev
>
>
> On 6 February 2012 18:46, Xu (Simon) Chen  wrote:
>
>> Hi all,
>>
>> I am running devstack and got a dev instance of OpenStack running.
>>
>> I am happy to see the concept of multiple floating IP pools, and the
>> per-floating-ip interface in the trunk, which I consider a very good basis
>> for my blueprint proposal here:
>>
>> https://blueprints.launchpad.net/nova/+spec/multi-network-without-multi-nic
>>
>> I have a quick question. Is there a plan (or maybe it's already there)
>> for access control whether a project is allowed to take floating IPs from a
>> pool?
>>
>> Thanks!
>> -Simon
>>
>>
>>
>>
>
>
> --
> Kevin Jackson
> @itarchitectkev
>
>
>
>
>


Re: [Openstack] multiple floating ip pools

2012-03-30 Thread Vishvananda Ishaya
Floating ip pools allow you to specify a different ip range and bind interface 
for sets of ips, so it will work for segregation purposes.

There isn't policy acl on which pool the ip comes from but it could be added. 
The policy wrapping in the network layer is very basic right now.  The 
underlying objects aren't passed in so we can't set policies based on (for 
example) pool name.  If/when the policy wrapping is improved to include more 
information that is a possibility.

Vish

On Mar 30, 2012, at 6:23 AM, Kevin Jackson wrote:

> I'm also interested in providing multiple floating IP pools.  Is this 
> something that is achievable or conceived?
> 
> My use case is as follows:
> 
> Each tenant gets its own private VLAN and address space, so 
> intercommunication between each tenant is able to be segregated.
> On assignment of public floating IPs though this segregation breaks down.
> 
> To put this into context, I'd like to be able to have, say, a "Production" 
> tenant and a "Development" tenant.  Inter-communication between the two 
> should be prohibited.
> As soon as I assign a floating IP address, this model breaks down.
> 
> I noticed that nova-manage floating create has a  ' --pool= 
> Optional pool ' option.  How is this used?  Does this help solve my problem?
> 
> Cheers,
> 
> Kev
> 
> 
> On 6 February 2012 18:46, Xu (Simon) Chen  wrote:
> Hi all,
> 
> I am running devstack and got a dev instance of OpenStack running. 
> 
> I am happy to see the concept of multiple floating IP pools, and the 
> per-floating-ip interface in the trunk, which I consider a very good basis 
> for my blueprint proposal here:
> https://blueprints.launchpad.net/nova/+spec/multi-network-without-multi-nic
> 
> I have a quick question. Is there a plan (or maybe it's already there) for 
> access control whether a project is allowed to take floating IPs from a pool?
> 
> Thanks!
> -Simon
> 
> 
> 
> 
> 
> 
> -- 
> Kevin Jackson
> @itarchitectkev



Re: [Openstack] "Admin"-ness in Keystone, Nova, et. al.

2012-03-30 Thread Vishvananda Ishaya
On Mar 30, 2012, at 7:41 AM, Julien Danjou wrote:

> On Fri, Mar 30 2012, Gabriel Hurley wrote:
> 
>> In practice today, Keystone no longer has global roles, and RBAC
>> implementation isn't fully there yet across the ecosystem. So projects have
>> adopted inconsistent means of determining when and how to grant
>> "admin"-level privileges to that user. This isn't something individual
>> projects can decide, though. It has to be agreed upon and consistent.
>> 
>> I don't have a great solution for this problem since it's so very late in
>> the Essex release cycle. However, I'm hoping we can perhaps do *something*
>> other than to simply document that "users with admin-level permissions
>> should only ever be granted admin permissions on a single admin tenant, and
>> no other users should be granted an admin role anywhere."
>> 
>> All that said, I'm deeply concerned about the security implications of
>> real deployments being unaware of the unintended consequences of
>> granting what appears to be a scoped "admin" role.
> 
> Correct me if I'm wrong, but it seems to me that the problem is simply
> that the default policy used in keystone and nova says that "admin is
> anybody with role `admin' on any tenant", as you can see in their
> respective policy.json files.
> 
> I think that this rule should probably be set to something else by
> default, like the user is admin if "it has role admin on a specific
> tenant (like a tenant named `admin')". That would allow emulating the
> old "global" admin role, just by using a specific tenant.

I think this is a reasonable workaround.  Devstack creates service tenants, so 
that seems like a good place to put them. Unfortunately that means we have to 
keep track of an admin tenant id in nova, and it complicates things by having 
to create the tenant and put the id into a config. Perhaps we could use 
differently named roles to minimize confusion and keep the config simpler.

system_admin vs tenant_admin or some such
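
The difference between the current default and the proposed workaround can be sketched as two checks over a user's per-tenant roles. This is illustrative Python only, not the actual policy.json semantics:

```python
def is_admin_current(roles_by_tenant):
    """Current default: the `admin` role on ANY tenant grants admin."""
    return any("admin" in roles for roles in roles_by_tenant.values())


def is_admin_proposed(roles_by_tenant, admin_tenant="admin"):
    """Proposed: only the `admin` role on a designated admin tenant counts."""
    return "admin" in roles_by_tenant.get(admin_tenant, set())


user = {"projectA": {"admin"}}  # admin on an ordinary tenant
print(is_admin_current(user))   # True  (the surprising global-admin case)
print(is_admin_proposed(user))  # False (scoped as intended)
```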




Re: [Openstack] [OpenStack] Xen Hypervisor

2012-03-29 Thread Vishvananda Ishaya
Public interface is the interface used for adding floating (natted) ips.  If 
this is generally eth0 in the domU then ignore my previous message.

Vish

On Mar 29, 2012, at 11:20 AM, John Garbutt wrote:

> The public interface, I thought was the interface of the DomU running the 
> service, and it attached its own bridges inside the DomU?
>  
> I have tried to describe all this here:
> http://wiki.openstack.org/XenServer/NetworkingFlags
>  
> Would be cool if people can check that for me, and I can push it into the 
> manuals.
>  
> Cheers,
> John
>  
> From: Vishvananda Ishaya [mailto:vishvana...@gmail.com] 
> Sent: 29 March 2012 19:15
> To: John Garbutt
> Cc: Salvatore Orlando; Alexandre Leites; Ewan Mellor; 
> openstack@lists.launchpad.net; todd.desh...@xen.org
> Subject: Re: [Openstack] [OpenStack] Xen Hypervisor
>  
>  
> On Mar 29, 2012, at 7:40 AM, John Garbutt wrote:
> 
> 
> If you want all your traffic going through a single nic (Management, 
> Instance, Public), it might be possible using these settings:
>  
> public_interface=eth0
>  
> I don't think this will work unless the implementation is very different in 
> xen.  xenbr0 will be bridged into eth0, so you actually want to be adding ips 
> to the bridge not the raw eth device.  I would suggest
>  
> public_interface=xenbr0
>  
> Xen experts, please correct me if I'm wrong.
> 
> 
> flat_interface=eth0
> flat_network_bridge=xenbr0



Re: [Openstack] [OpenStack] Xen Hypervisor

2012-03-29 Thread Vishvananda Ishaya

On Mar 29, 2012, at 7:40 AM, John Garbutt wrote:

> If you want all your traffic going through a single nic (Management, 
> Instance, Public), it might be possible using these settings:
>  
> public_interface=eth0

I don't think this will work unless the implementation is very different in 
xen.  xenbr0 will be bridged into eth0, so you actually want to be adding ips 
to the bridge not the raw eth device.  I would suggest

public_interface=xenbr0

Xen experts, please correct me if I'm wrong.

> flat_interface=eth0
> flat_network_bridge=xenbr0



Re: [Openstack] wrong IP given by flat network dhcp

2012-03-29 Thread Vishvananda Ishaya
Your network and fixed ips in the database are from the wrong range.  I would 
suggest you delete your db and start over.  Make sure you specify the correct 
range when you create your network.
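
For reference, the interfaces file quoted below configures eth1 with netmask 255.255.255.224, i.e. 10.18.9.0/27, whose usable addresses run 10.18.9.1 through 10.18.9.30. Whatever range was intended, Python's ipaddress module makes the mismatch easy to spot before recreating the network:

```python
import ipaddress

net = ipaddress.ip_network("10.18.9.0/27")  # netmask 255.255.255.224
print(net.netmask)                          # 255.255.255.224
print(net.broadcast_address)                # 10.18.9.31
hosts = list(net.hosts())
print(hosts[0], hosts[-1])                  # 10.18.9.1 10.18.9.30
print(ipaddress.ip_address("10.18.9.33") in net)  # False
```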

Vish

On Mar 29, 2012, at 12:59 AM, Michaël Van de Borne wrote:

> Well, I changed /etc/network/interfaces accordingly (see below) and didn't 
> make any change in nova.conf, but the problem still remains. New VMs are given 
> IPs from 10.18.9.2 (and not 10.18.9.33). What else can I check?
> 
> 
> $ cat /etc/network/interfaces
> # This file describes the network interfaces available on your system
> # and how to activate them. For more information, see interfaces(5).
> 
> # The loopback network interface
> auto lo
> iface lo inet loopback
> 
> # The primary network interface
> auto eth0
> #iface eth0 inet dhcp
> iface eth0 inet static
> address 172.22.22.1
> gateway 172.22.0.1
> netmask 255.255.0.0
> network 172.22.0.0
> broadcast 172.22.255.255
> 
> auto eth1
> iface eth1 inet static
> address 10.18.9.1
> netmask 255.255.255.224
> network 10.18.9.0
> broadcast 10.18.9.31
> 
> Here's the output of ifconfig:
> 
> localadmin@openstack1:~$ ifconfig
> br100 Link encap:Ethernet  HWaddr 5c:f3:fc:1d:4a:42
>  inet addr:10.18.9.1  Bcast:10.18.9.31  Mask:255.255.255.224
>  inet6 addr: fe80::bc06:73ff:fe85:2081/64 Scope:Link
>  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
>  RX packets:189404 errors:0 dropped:1001 overruns:0 frame:0
>  TX packets:6063 errors:0 dropped:0 overruns:0 carrier:0
>  collisions:0 txqueuelen:0
>  RX bytes:12010565 (12.0 MB)  TX bytes:1100566 (1.1 MB)
> 
> eth0  Link encap:Ethernet  HWaddr 5c:f3:fc:1d:4a:40
>  inet addr:172.22.22.1  Bcast:172.22.255.255  Mask:255.255.0.0
>  inet6 addr: fe80::5ef3:fcff:fe1d:4a40/64 Scope:Link
>  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
>  RX packets:473034 errors:0 dropped:1133 overruns:0 frame:0
>  TX packets:168957 errors:0 dropped:0 overruns:0 carrier:0
>  collisions:0 txqueuelen:1000
>  RX bytes:321962311 (321.9 MB)  TX bytes:14763893 (14.7 MB)
>  Interrupt:30 Memory:fa00-fa012800
> 
> eth1  Link encap:Ethernet  HWaddr 5c:f3:fc:1d:4a:42
>  inet6 addr: fe80::5ef3:fcff:fe1d:4a42/64 Scope:Link
>  UP BROADCAST RUNNING PROMISC MULTICAST  MTU:1500  Metric:1
>  RX packets:890 errors:0 dropped:132 overruns:0 frame:0
>  TX packets:18 errors:0 dropped:0 overruns:0 carrier:0
>  collisions:0 txqueuelen:1000
>  RX bytes:69871 (69.8 KB)  TX bytes:1980 (1.9 KB)
>  Interrupt:37 Memory:9200-92012800
> 
> loLink encap:Local Loopback
>  inet addr:127.0.0.1  Mask:255.0.0.0
>  inet6 addr: ::1/128 Scope:Host
>  UP LOOPBACK RUNNING  MTU:16436  Metric:1
>  RX packets:572149 errors:0 dropped:0 overruns:0 frame:0
>  TX packets:572149 errors:0 dropped:0 overruns:0 carrier:0
>  collisions:0 txqueuelen:0
>  RX bytes:817303115 (817.3 MB)  TX bytes:817303115 (817.3 MB)
> 
> virbr0Link encap:Ethernet  HWaddr 8a:ae:91:b4:11:6b
>  inet addr:192.168.122.1  Bcast:192.168.122.255  Mask:255.255.255.0
>  UP BROADCAST MULTICAST  MTU:1500  Metric:1
>  RX packets:0 errors:0 dropped:0 overruns:0 frame:0
>  TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
>  collisions:0 txqueuelen:0
>  RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)
> 
> vnet0 Link encap:Ethernet  HWaddr fe:16:3e:63:b4:35
>  inet6 addr: fe80::fc16:3eff:fe63:b435/64 Scope:Link
>  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
>  RX packets:150 errors:0 dropped:0 overruns:0 frame:0
>  TX packets:792 errors:0 dropped:0 overruns:0 carrier:0
>  collisions:0 txqueuelen:500
>  RX bytes:15341 (15.3 KB)  TX bytes:61254 (61.2 KB)
> 
> 
> 
> Michaël Van de Borne
> R&D Engineer, SOA team, CETIC
> Phone: +32 (0)71 49 07 45 Mobile: +32 (0)472 69 57 16, Skype: mikemowgli
> www.cetic.be, rue des Frères Wright, 29/3, B-6041 Charleroi
> 
> 
> Le 28/03/12 18:50, Vishvananda Ishaya a écrit :
>> You have to create your network with the same range
>> it looks like you created with something like 10.18.9.0/24
>> 
>> On Mar 28, 2012, at 8:33 AM, Michaël Van de Borne wrote:
>> 
>>> Hello,
>>> 
>>> I installed Essex on Ubuntu 12.04 Server, and I had a problem with VM IP.
>>> 
>>> The IP given to the VM (10.18.9.2) isn't in the fixed_range. Here's a 
>>> portion of nova.conf:
>>> 
>>>

Re: [Openstack] Validation of floating IP opertaions in Essex codebase ?

2012-03-28 Thread Vishvananda Ishaya

On Mar 28, 2012, at 10:04 AM, Day, Phil wrote:

> Hi Folks,
>  
> At the risk of looking lazy in my first question by following up with a 
> second:
>  
> So I tracked this down in the code and can see that the validation has moved 
> into network/manager.py, and what was a validation/cast in network/api.py has 
> been replaced with a call – but that seems to make the system more tightly 
> coupled across components (i.e. if my there is a problem getting the message 
> to the Network Manager then even an invalid request will be blocked until the 
> call returns or times out).

This is a side effect of trying to decouple compute and network; see the 
explanation below.

>  
> It also looks as if the validation for disassociate_floating_ip has also been 
> moved to the manager, but this is still a cast from the api layer – so those 
> error messages never get back to the user.

Good point.  This probably needs to be a call with the current model.
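The cast/call distinction driving this thread can be reduced to a toy sketch (nothing like nova's real rpc layer, just the semantics being discussed):

```python
# Toy semantics only -- not nova's rpc code.  A cast is fire-and-forget,
# so manager-side errors never reach the API caller; a call blocks and
# propagates the result or exception back to the caller.

def cast(manager, method, **kwargs):
    """Fire-and-forget: any exception stays on the manager side."""
    try:
        getattr(manager, method)(**kwargs)
    except Exception:
        pass  # logged server-side; the user already got an async 202
    return None

def call(manager, method, **kwargs):
    """Blocking: the return value or exception travels back."""
    return getattr(manager, method)(**kwargs)
```

With a cast, a bad disassociate request "succeeds" from the user's point of view even though the manager raised; with a call, the error surfaces.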

>  
> Coming from Diablo it all feels kind of odd to me – I thought we were trying 
> to validate what we could of requests in the API server and return immediate 
> errors at that stage and then cast into the system (so that only internal 
> errors can stop something from working at this stage). Was there a 
> deliberate design policy around this at some stage ?

There are a few things going on here.

First we have spent a lot of time decoupling network and compute.  Ultimately 
network will be an external service, so we can't depend on having access to the 
network database on the compute api side. We can do some checks in 
compute_api to make sure that it isn't attached to another instance that we 
know about, but ultimately the network service has to be responsible for saying 
what can happen with the ip address.

So the second part is about why it is happening in network_manager vs 
network_api.  This is a side-effect of the decision to plug in 
quantum/melange/etc. at the manager layer instead of the api layer.  The api 
layer is therefore being very dumb, just passing requests on to the manager.

So that explains where we are.  Here is the plan (as I understand) for the 
future:

a) move the quantum plugin to the api layer
(At this point we could move validation into the api if necessary.)

b) define a more complete network api which includes all of the necessary 
features that are currently compute extensions

c) make a client to talk to the api

d) make compute talk through the client to the api instead of using rabbit 
messages
(this decouples network completely, allowing us to deploy and run network as a 
completely separate service if need be.  At this point the quantum-api-plugin 
could be part of quantum or a new shared NaaS project.  More to decide at the 
summit here)

In general, we are hoping to switch to quantum as the default by Folsom, and 
not have to touch the legacy network code very much.  If there are serious 
performance issues we could make some optimizations by doing checks in 
network-api, but these will quickly become moot if we are moving towards using 
a client and talking through a rest interface.

So it looks like the following could be done in the meantime:

a) switch disassociate from a cast to a call -> I would consider this one a 
bug and would appreciate someone verifying that it fails and reporting it

b) add some validation in compute api -> I'm not sure what we can assert here.  
Perhaps we could use the network_info cache and check for duplicates etc.

c) if we have serious performance issues, we could add another layer of checks 
in the compute_api, but we may have to make sure it is ignored for quantum.
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] problem ssh-ing into vms

2012-03-28 Thread Vishvananda Ishaya

On Mar 28, 2012, at 8:01 AM, Pierre Amadio wrote:

> Was my assumption wrong or is there something special to do to have the
> metadata service available without running nova-api ?

You can run the metadata service by itself using bin/nova-api-metadata.  For 
performance reasons, I prefer this option.

Alternatively you can leave it running on the api node but you have to make 
sure config is set on your compute and network hosts to tell the system where 
to forward to.  You do this via a config option in nova.conf

## (StrOpt) the ip for the metadata api server
# metadata_host="$my_ip"

Also you have to make sure that packets are not SNATed when they leave the 
network host if they are going to the metadata server. You can do this via a 
config option as well:

## (StrOpt) dmz range that should be accepted
# dmz_cidr="10.128.0.0/24"

So setting the following:
metadata_host=
dmz_cidr=/32

should work with nova-api running separately
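Concretely, on the compute and network hosts the two settings end up looking like this (the 10.0.0.10 address below is purely a placeholder for wherever the metadata api actually listens):

```
# /etc/nova/nova.conf fragment on compute and network hosts
# (illustrative values only -- substitute your metadata host's IP)
--metadata_host=10.0.0.10
--dmz_cidr=10.0.0.10/32
```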









Re: [Openstack] wrong IP given by flat network dhcp

2012-03-28 Thread Vishvananda Ishaya
You have to create your network with the same range;
it looks like you created it with something like 10.18.9.0/24.
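(For anyone hitting the same thing: the mismatch is easy to confirm with a couple of lines of Python -- illustrative only, using the stdlib ipaddress module.)

```python
import ipaddress

# fixed_range from nova.conf vs the addresses dnsmasq actually leased
fixed_range = ipaddress.ip_network('10.18.9.32/27')
for leased in ('10.18.9.2', '10.18.9.3'):
    print(leased, 'in', fixed_range, '->',
          ipaddress.ip_address(leased) in fixed_range)
```

Both lines print False: the leases are coming from the network object stored in the db, not from the fixed_range flag.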

On Mar 28, 2012, at 8:33 AM, Michaël Van de Borne wrote:

> Hello,
> 
> I installed Essex on Ubuntu 12.04 Server, and I had a problem with VM IP.
> 
> The IP given to the VM (10.18.9.2) isn't in the fixed_range. Here's a portion 
> of nova.conf:
> 
> # cat /etc/nova/nova.conf
> [...]
> --network_manager=nova.network.manager.FlatDHCPManager
> --public_interface=eth0
> --flat_interface=eth1
> --flat_network_bridge=br100
> --fixed_range=10.18.9.32/27
> --floating_range=172.22.22.32/27
> --network_size=32
> --flat_network_dhcp_start=10.18.9.33
> 
> and here's the host network config:
> 
> # cat /etc/network/interfaces
> auto eth0
> iface eth0 inet static
> address 172.22.22.1
> gateway 172.22.0.1
> netmask 255.255.0.0
> network 172.22.0.0
> broadcast 172.22.255.255
> 
> auto eth1
> iface eth1 inet static
> address 10.18.9.1
> netmask 255.255.0.0
> network 10.18.0.0
> broadcast 10.18.255.255
> 
> Then I deployed another VM, and I was given IP 10.18.9.3. Here's an excerpt 
> of /var/log/nova/nova-network.log when I deployed this machine:
> 
> 2012-03-28 17:29:16 DEBUG nova.rpc.amqp [-] received {u'_context_roles': 
> [u'admin'], u'_context_request_id': 
> u'req-aa77b53c-be43-4b4e-bf82-98f62bff7f52', u'_context_read_deleted': u'no', 
> u'args': {u'address': u'10.18.9.3'}, u'_context_auth_token': None, 
> u'_context_is_admin': True, u'_context_project_id': None, 
> u'_context_timestamp': u'2012-03-28T15:29:16.569136', u'_context_user_id': 
> None, u'method': u'lease_fixed_ip', u'_context_remote_address': None} from 
> (pid=17249) _safe_log /usr/lib/python2.7/dist-packages/nova/rpc/common.py:144
> 2012-03-28 17:29:16 DEBUG nova.rpc.amqp 
> [req-aa77b53c-be43-4b4e-bf82-98f62bff7f52 None None] unpacked context: 
> {'request_id': u'req-aa77b53c-be43-4b4e-bf82-98f62bff7f52', 'user_id': None, 
> 'roles': [u'admin'], 'timestamp': '2012-03-28T15:29:16.569136', 'is_admin': 
> True, 'auth_token': None, 'project_id': None, 'remote_address': None, 
> 'read_deleted': u'no'} from (pid=17249) unpack_context 
> /usr/lib/python2.7/dist-packages/nova/rpc/amqp.py:188
> 2012-03-28 17:29:16 DEBUG nova.network.manager 
> [req-aa77b53c-be43-4b4e-bf82-98f62bff7f52 None None] Leased IP |10.18.9.3| 
> from (pid=17249) lease_fixed_ip 
> /usr/lib/python2.7/dist-packages/nova/network/manager.py:1222
> 2012-03-28 17:29:16 DEBUG nova.rpc.amqp [-] received {u'_context_roles': 
> [u'admin'], u'_msg_id': u'ba410b9f98c04023b68f6ea6e95d775e', 
> u'_context_read_deleted': u'no', u'_context_request_id': 
> u'req-11a728d5-d7f5-4c65-a2ce-6d958585e574', u'args': {u'address': 
> u'10.18.9.3'}, u'_context_auth_token': None, u'_context_is_admin': True, 
> u'_context_project_id': None, u'_context_timestamp': 
> u'2012-03-28T15:29:16.820602', u'_context_user_id': None, u'method': 
> u'get_fixed_ip_by_address', u'_context_remote_address': None} from 
> (pid=17249) _safe_log /usr/lib/python2.7/dist-packages/nova/rpc/common.py:144
> 2012-03-28 17:29:16 DEBUG nova.rpc.amqp 
> [req-11a728d5-d7f5-4c65-a2ce-6d958585e574 None None] unpacked context: 
> {'request_id': u'req-11a728d5-d7f5-4c65-a2ce-6d958585e574', 'user_id': None, 
> 'roles': [u'admin'], 'timestamp': '2012-03-28T15:29:16.820602', 'is_admin': 
> True, 'auth_token': None, 'project_id': None, 'remote_address': None, 
> 'read_deleted': u'no'} from (pid=17249) unpack_context 
> /usr/lib/python2.7/dist-packages/nova/rpc/amqp.py:188
> 2012-03-28 17:29:40 DEBUG nova.rpc.amqp [-] received {u'_context_roles': 
> [u'admin'], u'_msg_id': u'50999ff995cd4f7e8e441885d3ada543', 
> u'_context_read_deleted': u'no', u'_context_request_id': 
> u'req-563547e8-6f6f-43b1-9449-681ff43dad33', u'args': {u'instance_id': 3, 
> u'instance_uuid': u'b7ec8623-e002-4682-a37b-50e1d047c134', u'host': 
> u'openstack1', u'project_id': u'0a9e24ccc1ac46eea75154c85bb4052e', 
> u'rxtx_factor': 1.0}, u'_context_auth_token': None, u'_context_is_admin': 
> True, u'_context_project_id': None, u'_context_timestamp': 
> u'2012-03-28T15:29:31.151885', u'_context_user_id': None, u'method': 
> u'get_instance_nw_info', u'_context_remote_address': None} from (pid=17249) 
> _safe_log /usr/lib/python2.7/dist-packages/nova/rpc/common.py:144
> 
> 
> what am I doing wrong?
> 
> 
> -- 
> Michaël Van de Borne
> R&D Engineer, SOA team, CETIC
> Phone: +32 (0)71 49 07 45 Mobile: +32 (0)472 69 57 16, Skype: mikemowgli
> www.cetic.be, rue des Frères Wright, 29/3, B-6041 Charleroi
> 
> 



Re: [Openstack] KVM crash.

2012-03-28 Thread Vishvananda Ishaya
This was discussed on the mailing list earlier I believe:

http://www.mail-archive.com/openstack@lists.launchpad.net/msg08475.html

The solution appears to be to upgrade to a newer libvirt.

Vish

On Mar 28, 2012, at 6:01 AM, Guilherme Birk wrote:

> No one is having this issue?
> 
> From: guib...@hotmail.com
> To: openstack@lists.launchpad.net
> Date: Fri, 23 Mar 2012 19:48:04 +
> Subject: [Openstack] KVM crash.
> 
> I'm having problems with KVM on a single node installation. My problem is 
> like these ones: 
> 
> https://bugzilla.kernel.org/show_bug.cgi?id=42703
> http://www.spinics.net/lists/kvm/msg67635.html
> 
> The KVM normally crashes when I'm doing a load test with a large number of 
> connections on the VM's.
> 
> This appears to not happen using a dual node installation, with the 
> nova-compute running with a dedicated machine.
> 


Re: [Openstack] Generic method to make the OpenStack services daemonize / run as different user

2012-03-27 Thread Vishvananda Ishaya

On Mar 27, 2012, at 11:57 AM, Martin Gerhard Loschwitz wrote:

> Hi Folks,
> 
> i'm looking for a generic way to make the OpenStack components (keystone-all,
> glance-api / glance-registry, nova-*) daemonize. I had expected the scripts
> to have such an option out of the box, but apparently that isn't so. I don't
> want to use upstart (as that is Ubuntu-specific) or start-stop-daemon (as 
> that is mostly Debian-specific). Is there any recommended official method for
> this?

Most components do not support daemonization. There are countless tools for 
daemonizing and monitoring processes, so we didn't think it was useful to 
include our own with our own special syntax.  Swift is an exception: it has 
swift-init. There was also glance-control, but I think most packages are 
running it via some other process management tool.




Re: [Openstack] Confusing about the nova authentication and keystone authentication

2012-03-26 Thread Vishvananda Ishaya
The commands in nova are deprecated and will be removed.  They are still there 
to allow people to upgrade from old internal auth to keystone.

Vish

On Mar 26, 2012, at 2:52 AM, 下一个傻子 wrote:

> 
> Hello ,every one:
> Here is a question that confusing me for days.
> 
> I saw there is an authentication mechanism in keystone that is responsible 
> for creating users, projects, roles, role-user relationship management, 
> etc. And also, I found there is such a
> mechanism in nova, which also can create users, projects, roles... things that 
> keystone does.
> 
> so, what's the relationship between the two? Can anyone please help me out 
> of here :) Thank you.
> -- 
> 
> 


Re: [Openstack] MySQL connection gone away handling in OpenStack projects

2012-03-22 Thread Vishvananda Ishaya
This looks like a much better solution than MySQLPingListener.  It would be 
good to get this into common / nova, especially if we can verify that it works 
with postgres as well.
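For readers who haven't seen it, the Glance-side approach being compared here -- retrying the db operation -- boils down to a decorator like this (a sketch with a stand-in exception class, not Glance's actual code):

```python
import functools
import time

class DBConnectionError(Exception):
    """Stand-in for the driver's 'server has gone away' error."""

def retry_on_disconnect(retries=3, delay=0.1):
    """Re-run a db operation whose connection was dropped mid-call."""
    def wrap(func):
        @functools.wraps(func)
        def inner(*args, **kwargs):
            for attempt in range(retries):
                try:
                    return func(*args, **kwargs)
                except DBConnectionError:
                    if attempt == retries - 1:
                        raise
                    time.sleep(delay)  # give the server a moment to recover
        return inner
    return wrap
```

The ping-listener approach instead tests the connection on pool checkout, so the operation itself never sees the dead connection.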

Vish

On Mar 22, 2012, at 7:48 AM, Unmesh Gurjar wrote:

> Hi,
>  
> The current handling of the 'SQL server has gone away' error is different across 
> OpenStack projects (e.g. Nova uses MySQLPingListener, whereas Glance retries 
> the db operation to recover the connection).
> I am curious to know if the fix implemented in 
> https://review.openstack.org/5552 will be used across all projects (through 
> openstack-common) ?
>  
> Thanks & Regards,
> Unmesh G.


Re: [Openstack] Caching strategies in Nova ...

2012-03-22 Thread Vishvananda Ishaya

On Mar 22, 2012, at 8:06 AM, Sandy Walsh wrote:

> o/
> 
> Vek and myself are looking into caching strategies in and around Nova.
> 
> There are essentially two approaches: in-process and external (proxy).
> The in-process schemes sit in with the python code while the external
> ones basically proxy the HTTP requests.

We may need http caches as well in some cases, but we already use memcached in 
a few places, so I think we need internal caching as well.

> 
> There are some obvious pro's and con's to each approach. The external is
> easier for operations to manage, but in-process allows us greater
> control over the caching (for things like caching db calls and not just
> HTTP calls). But, in-memory also means more code, more memory usage on
> the servers, monolithic services, limited to python based solutions,
> etc. In-process also gives us access to tools like Tach
> https://github.com/ohthree/tach for profiling performance.
> 
> I see Jesse recently landed a branch that touches on the in-process
> approach:
> https://github.com/openstack/nova/commit/1bcf5f5431d3c9620596f5329d7654872235c7ee#nova/common/memorycache.py
> 
> I don't know if people think putting caching code inside nova is a good
> or bad idea. If we do continue down this road, it would be nice to make
> it a little more modular/plug-in-based (YAPI .. yet another plug-in).
> Perhaps a hybrid solution is required?

openstack-common is where jesse was planning on putting memorycache

> 
> We're looking at tools like memcache, beaker, varnish, etc.
> 

I kind of like keeping our caching simple: just talk to something that 
replicates the python-memcached api so that we can swap out an in-memory 
cache, actual memcached, a db cache, etc...


This has a bit of promise:

http://code.google.com/p/python-cache/

Vish



Re: [Openstack] nova-manage quota --headroom

2012-03-22 Thread Vishvananda Ishaya
Seems like this would be much more useful as part of the quotas extension.  
This feature is small enough for a bug I think.
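The computation itself is trivial -- a sketch of the headroom calculation such a feature would expose (illustrative only, not the quotas extension API):

```python
def quota_headroom(quotas, usage):
    """Remaining headroom per quota: limit minus current usage."""
    return dict((name, limit - usage.get(name, 0))
                for name, limit in quotas.items())
```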

Vish

On Mar 22, 2012, at 6:36 AM, Eoghan Glynn wrote:

> 
> Folks,
> 
> One thing that's been on my wishlist since hitting a bunch of
> quota exceeded issues when first running Tempest and also on
> the Fedora17 openstack test day.
> 
> It's ability to easily see the remaining headroom for each
> per-project quota, e.g.
> 
> $ nova-manage quota --headroom --project=admin
> ...
> instances:10  (of which 1 remaining)
> floating_ips: 10  (of which 8 remaining)
> ...
> 
> This would give an immediate indication of an impending resource
> starvation issue - "shoot, I'll have to clean out some of those
> old instances before spinning up two more, else increase the quota".
> 
> It would only really be useful for quotas that represent a
> threshold on overall resource usage that may grow or shrink over
> time, as opposed to some fixed limit (think, max instances versus
> max injected files per instance).
> 
> So the question is whether there's already a means to achieve this
> in one fell swoop? 
> 
> And if not, would it be best tracked with a small-scale nova blueprint,
> or as an enhancement request expressed in a launchpad bug?
> 
> Cheers,
> Eoghan
> 


Re: [Openstack] Scalability issue in nova-dhcpbridge

2012-03-18 Thread Vishvananda Ishaya
I believe it is safe to ignore the old leases.  If nova-network has been down 
for a while it could potentially be nice to refresh all of the leases that it 
knows about, but I don't think it will harm anything if you remove it.
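If ignoring them is indeed safe, the change being discussed amounts to making old_lease a no-op -- a sketch under that assumption, not a reviewed nova patch:

```python
import logging

LOG = logging.getLogger(__name__)

def old_lease(mac, ip_address):
    """Ignore leases that dnsmasq re-reports.

    dnsmasq calls the dhcpbridge script once per existing lease every
    time it is HUPed, so re-adding each one is O(running instances)
    work per boot; dropping them avoids that.
    """
    LOG.debug('Ignoring old lease for %s (%s)', ip_address, mac)
```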

Are you running flatdhcp with a single network host on a large install?  I 
would think that multi_host would be a better choice in that case.

There is also a potentially nasty performance issue in linux_net where it 
creates all of the leases.  It is a very expensive operation and needs to be 
reoptimized after the foreign keys were removed from the network tables. 
Currently it is doing 2 database queries for every active instance in the db.

Vish

On Mar 18, 2012, at 10:42 PM, Anton Blanchard wrote:

> 
> Hi,
> 
> We are seeing severe boot and delete performance issues with
> FlatDHCPManager and a lot of instances.
> 
> If I have 200 instances running and boot 1 new instance a kill -HUP
> dnsmasq calls nova-dhcpbridge 201 times, 200 for the existing leases and
> once for the new one.
> 
> The 200 events for existing leases end up here in nova-dhcpbridge:
> 
> def old_lease(mac, ip_address):
>"""Update just as add lease."""
>LOG.debug(_("Adopted old lease or got a change of mac"))
>add_lease(mac, ip_address)
> 
> I'm not sure why we need to do this at all. The comment mentions
> tracking a change of MAC and yet add_lease doesn't seem to do anything
> with the MAC parameter.
> 
> I can fix my performance problem by ignoring any old leases, but I was
> hoping someone could explain what the purpose is here.
> 
> Anton
> 

