Re: [Openstack] Problem when Scheduling across zones

2011-10-04 Thread Pedro Navarro Pérez
the nova zone-list output:

nova zone-list
+----+------+-----------+----------------------------------+---------------+--------------+
| ID | Name | Is Active | API URL                          | Weight Offset | Weight Scale |
+----+------+-----------+----------------------------------+---------------+--------------+
| 1  | h1   | True      | http://192.168.124.53:8774/v1.1/ |               |              |
+----+------+-----------+----------------------------------+---------------+--------------+

Thanks for your help!

On Mon, Oct 3, 2011 at 8:44 PM, Sandy Walsh sandy.wa...@rackspace.com wrote:
 You seem to be doing things correctly.

 Can you paste the output from 'nova zone-list' in the parent zone please?

 -Sandy
 
 From: openstack-bounces+sandy.walsh=rackspace@lists.launchpad.net 
 [openstack-bounces+sandy.walsh=rackspace@lists.launchpad.net] on behalf 
 of Pedro Navarro Pérez [pedn...@gmail.com]
 Sent: Monday, October 03, 2011 8:30 AM
 To: openstack@lists.launchpad.net
 Subject: [Openstack] Problem when Scheduling across zones

 Hi all,

 I'm about to test the scheduling across zones functionality in diablo,
 but the run instance command does not propagate correctly across the
 child zones.

 My environment:

 3 VM's with diablo installed.

 PARENT ZONE: Europe1 [192.168.124.47]
                               |
                               |
       CHILD ZONE: Huddle1 [192.168.124.53]
                               |
                               |
               HOST: Machine1 [192.168.124.44]

 Configuration and commands in Machine1:

 --dhcpbridge_flagfile=/etc/nova/nova.conf
 --dhcpbridge=/usr/bin/nova-dhcpbridge
 --logdir=/var/log/nova
 --state_path=/var/lib/nova
 --lock_path=/var/lock/nova
 --flagfile=/etc/nova/nova-compute.conf
 --force_dhcp_release=True
 --use_deprecated_auth
 --verbose
 --sql_connection=mysql://novadbuser:novaDBsekret@192.168.124.53/nova
 --network_manager=nova.network.manager.FlatDHCPManager
 --flat_network_bridge=br100
 --flat_injected=False
 --flat_interface=eth3
 --public_interface=eth3
 --vncproxy_url=http://192.168.124.53:6080
 --daemonize=1
 --rabbit_host=192.168.124.53
 --osapi_host=192.168.124.53
 --ec2_host=192.168.124.53
 --image_service=nova.image.glance.GlanceImageService
 --glance_api_servers=192.168.124.53:9292
 --use_syslog
 --libvirt_type=qemu

 Configuration and commands in Huddle1:

 --dhcpbridge_flagfile=/etc/nova/nova.conf
 --dhcpbridge=/usr/bin/nova-dhcpbridge
 --logdir=/var/log/nova
 --state_path=/var/lib/nova
 --lock_path=/var/lock/nova
 --flagfile=/etc/nova/nova-compute.conf
 --force_dhcp_release=True
 --use_deprecated_auth
 --verbose
 --sql_connection=mysql://novadbuser:novaDBsekret@192.168.124.53/nova
 --network_manager=nova.network.manager.FlatDHCPManager
 --flat_network_bridge=br100
 --flat_injected=False
 --flat_interface=eth3
 --public_interface=eth3
 --vncproxy_url=http://192.168.124.53:6080
 --daemonize=1
 --rabbit_host=192.168.124.53
 --osapi_host=192.168.124.53
 --ec2_host=192.168.124.53
 --image_service=nova.image.glance.GlanceImageService
 --glance_api_servers=192.168.124.53:9292
 --use_syslog
 --libvirt_type=qemu
 --allow_admin_api=true
 --enable_zone_routing=true
 --zone_name=h1
 --build_plan_encryption_key=c286696d887c9aa0611bbb3e2025a478
 --scheduler_driver=nova.scheduler.base_scheduler.BaseScheduler
 --default_host_filter=nova.scheduler.filters.AllHostsFilter

 sudo nova-manage service disable h1.ostack.ds nova-compute

 Configuration and commands in Europe1:

 --dhcpbridge_flagfile=/etc/nova/nova.conf
 --dhcpbridge=/usr/bin/nova-dhcpbridge
 --logdir=/var/log/nova
 --state_path=/var/lib/nova
 --lock_path=/var/lock/nova
 --flagfile=/etc/nova/nova-compute.conf
 --force_dhcp_release=True
 --use_deprecated_auth
 --verbose
 --sql_connection=mysql://novadbuser:novaDBsekret@192.168.124.47/nova
 --network_manager=nova.network.manager.FlatDHCPManager
 --flat_network_bridge=br100
 --flat_injected=False
 --flat_interface=eth2
 --public_interface=eth2
 --vncproxy_url=http://192.168.124.47:6080
 --daemonize=1
 --rabbit_host=192.168.124.47
 --osapi_host=192.168.124.47
 --ec2_host=192.168.124.47
 --image_service=nova.image.glance.GlanceImageService
 --glance_api_servers=192.168.124.47:9292
 --use_syslog
 --libvirt_type=qemu
 --allow_admin_api=true
 --enable_zone_routing=true
 --zone_name=Europe1
 --build_plan_encryption_key=on3u4jvvbtnpkvi075vmcu88wzgpgnyp
 --scheduler_driver=nova.scheduler.base_scheduler.BaseScheduler

 nova zone-add --zone_username cloudroot --password 
 bf22b691-2581-4b2c-80e3-808fdd5dad4c http://192.168.124.53:8774/v1.1/

 nova zone-boot --image 3 --flavor 1 test

 The nova-scheduler.log shows that:

 1. The zone has been successfully detected:

 2011-10-03 13:16:02,009 DEBUG nova [-] Polling zone:
 http://192.168.124.53:8774/v1.1/ from (pid=1118) _poll_zone
 /usr/lib/python2.7/dist-packages/nova/scheduler/zone_manager.py:100
 2011-10-03 13:16:02,047 DEBUG novaclient.client [-] REQ: curl 

[Openstack] Testing / Development Private Cloud in 1 single Box

2011-10-04 Thread Khairul Aizat Kamarudzzaman
Hi,
is it possible to use this script:
wget https://github.com/uksysadmin/OpenStackInstaller/raw/master/OSinstall.sh
to set up OpenStack on a single box with 16GB of RAM, an i5 CPU, and a 500GB HD?

or is there any other easier/good script that I can use to build OpenStack?


 

Regards,

Khairul Aizat Kamarudzzaman
http://launchpad.net/~fenris
fen...@ubuntu.com
+6012.659.5675

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Testing / Development Private Cloud in 1 single Box

2011-10-04 Thread Tomasz 'Zen' Napierała

On 4 Oct 2011, at 10:45, Khairul Aizat Kamarudzzaman wrote:

 Hi,
 is it possible to use this script:
 wget https://github.com/uksysadmin/OpenStackInstaller/raw/master/OSinstall.sh
 to set up OpenStack on a single box with 16GB of RAM, an i5 CPU, and a 500GB HD?
 
 or is there any other easier/good script that I can use to build OpenStack?
 

Looks like it's doing a good job, but I've never tried it. You can always contact 
the author; he should be helpful.

Regards,
-- 
Tomasz 'Zen' Napierała








Re: [Openstack] access to openstack swift cluster

2011-10-04 Thread Khaled Ben Bahri

Hi,

Thanks for your help
I succeeded in making a client for swift :)

Best regards
Khaled

 Subject: Re: [Openstack] access to openstack swift cluster
 From: tom...@napierala.org
 Date: Mon, 3 Oct 2011 19:02:35 +0200
 CC: btorch...@zeroaccess.org; openstack@lists.launchpad.net
 To: khaled-...@hotmail.com
 
 
  On 3 Oct 2011, at 17:49, Khaled Ben Bahri wrote:
 
  Hi,
  
   these commands are executed when I'm logged in on the proxy server, but I want 
   to manage files from outside the proxy, and these commands don't work from 
   another computer
   
   I don't know if there are any commands or configuration that make it possible 
   to manage files from outside the proxy
 
  First of all, you have to install the swift tool. Then you have to change your 
  request to use the IP address instead of the variable PROXY_LOCAL_NET_IP, so it 
  looks like:
 swift -A https://157.159.103.40:8080/auth/v1.0 -U system:root -K testpass list
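For reference, a minimal Python sketch of the first step that swift command performs — the v1.0 auth request. The host, port, and credentials are the example values from this thread, and `build_auth_request` is a hypothetical helper name, not part of the swift client:

```python
# Sketch of the Swift v1.0 auth request the `swift` CLI issues first.
# Values below are the example values from this thread.
def build_auth_request(host, port, user, key):
    """Return (url, headers) for a Swift v1.0 auth GET request."""
    url = "https://%s:%d/auth/v1.0" % (host, port)
    headers = {"X-Auth-User": user, "X-Auth-Key": key}
    return url, headers

url, headers = build_auth_request("157.159.103.40", 8080, "system:root", "testpass")
# A successful GET to `url` with these headers returns X-Storage-Url and
# X-Auth-Token response headers, which later container/object requests use.
```

This only builds the request; actually sending it requires a proxy reachable from your client machine, which was the original problem in this thread.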
 
 Regards,
 -- 
 Tomasz 'Zen' Napierała
 
 
 
 
 


Re: [Openstack] Testing / Development Private Cloud in 1 single Box

2011-10-04 Thread Khairul Aizat Kamarudzzaman
thanks for the advice ... btw, with that hardware spec, is it OK to build 
it on 1 machine?

Regards,

Khairul Aizat Kamarudzzaman
http://launchpad.net/~fenris
fen...@ubuntu.com
+6012.659.5675

On Oct 4, 2011, at 5:10 PM, Tomasz 'Zen' Napierała wrote:

 
 On 4 Oct 2011, at 10:45, Khairul Aizat Kamarudzzaman wrote:
 
 Hi,
 is it possible to use this script:
 wget https://github.com/uksysadmin/OpenStackInstaller/raw/master/OSinstall.sh
 to set up OpenStack on a single box with 16GB of RAM, an i5 CPU, and a 500GB HD?
 
 or is there any other easier/good script that I can use to build OpenStack?
 
 
 Looks like it's doing a good job, but I've never tried it. You can always contact 
 the author; he should be helpful.
 
 Regards,
 -- 
 Tomasz 'Zen' Napierała
 
 
 
 
 




Re: [Openstack] Testing / Development Private Cloud in 1 single Box

2011-10-04 Thread Tomasz 'Zen' Napierała

On 4 Oct 2011, at 11:26, Khairul Aizat Kamarudzzaman wrote:

 thanks for the advice ... btw, with that hardware spec, is it OK to 
 build it on 1 machine?


It's OK for testing/development, but naturally not for production. 

Regards,
-- 
Tomasz 'Zen' Napierała








Re: [Openstack] OpenStack + RDMA + Infiniband

2011-10-04 Thread Masanori ITOH
Hi,

I was thinking about exactly the same thing that Caitlin pointed out.

Also, there is an implementation enabling RDMA handling from python:
  https://github.com/jgunthorpe/python-rdma
Thus, I was wondering if I can do something more using the above.


BTW, I'm in Boston attending the OpenStack Design Summit/Conference.
Are any of you in Boston? Caitlin, Narayan, Nick


-Masanori

From: Caitlin Bestler caitlin.best...@nexenta.com
Subject: Re: [Openstack] OpenStack + RDMA + Infiniband
Date: Mon, 3 Oct 2011 14:21:23 -0700

 
 
 Narayan Desai wrote:
 
 
  I suspect that the original poster was looking for instance access
 (mediated in some way) to IB gear.
  When we were trying to figure out how to best use our IB gear inside
 of openstack, we decided that
  it was too risky to try exposing IB at the verbs layer to instances
 directly, since the security model
  doesn't appear to have a good way to prevent administrative commands
 from being issued from
  untrusted instances.
 
  We decided to use IB as fast plumbing for data movement (using
 IPoIB) and have ended up with
  pretty nice I/O performance to the volume service, etc. We haven't
 managed to use it for much more
  than that at this point.
 
 There's no reason to expect use of IPoIB to end up providing better
 TCP/IP service for large bulk data
 transfer than you would get from a quality Ethernet NIC. But if you have
 an existing IB infrastructure
 it is certainly worth considering. You should experiment to see whether
 you get better performance
  under load from IPoIB in connected mode as opposed to trying SDP.
 
 Either IPoIB or SDP should be accessible via a standard sockets
  interface, meaning they could be
 plugged in without modifying the Python code or Python libraries.
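To illustrate the point about the standard sockets interface: since IPoIB just exposes an ordinary IP address, plain socket code works unchanged. A hedged sketch — the 10.10.0.5 address is a made-up IPoIB interface address, not from this thread:

```python
import socket

# IPoIB presents as a normal IP interface, so plain TCP sockets work
# unchanged; only the bind/connect address selects the IB fabric.
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
# On a real host you would pin traffic to the fabric like this
# (10.10.0.5 is a hypothetical IPoIB interface address):
# s.bind(("10.10.0.5", 0))
assert s.family == socket.AF_INET  # same API as any Ethernet NIC
s.close()
```

This is why no change to the Python code or libraries is needed — the choice of network happens entirely in address configuration.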
 
 The response to congestion by an IB network is different than the
 response from a TCP network,
 and the response of a TCP network simulated over IPoIB is something else
 entirely. So you'd want
 to do your evaluation with realistic traffic patterns.
 
 
 



[Openstack] Error in live-migration

2011-10-04 Thread Roman Sokolkov
Hi! I use the Diablo release with glance. And when I want to migrate my VM with

# nova-manage vm live_migration --ec2_id=i-0003 --dest=cloud-n1

I receive an error in compute.log:

(nova.rpc): TRACE: Traceback (most recent call last):
(nova.rpc): TRACE:   File
/usr/lib/python2.7/site-packages/nova/rpc/impl_kombu.py, line 628, in
_process_data
(nova.rpc): TRACE: ctxt.reply(None, None)
(nova.rpc): TRACE:   File
/usr/lib/python2.7/site-packages/nova/rpc/impl_kombu.py, line 673, in
reply
(nova.rpc): TRACE: msg_reply(self.msg_id, *args, **kwargs)
(nova.rpc): TRACE:   File
/usr/lib/python2.7/site-packages/nova/rpc/impl_kombu.py, line 781, in
msg_reply
(nova.rpc): TRACE: conn.direct_send(msg_id, msg)
(nova.rpc): TRACE:   File
/usr/lib/python2.7/site-packages/nova/rpc/impl_kombu.py, line 562, in
__exit__
(nova.rpc): TRACE: self._done()
(nova.rpc): TRACE:   File
/usr/lib/python2.7/site-packages/nova/rpc/impl_kombu.py, line 547, in
_done
(nova.rpc): TRACE: self.connection.reset()
(nova.rpc): TRACE:   File
/usr/lib/python2.7/site-packages/nova/rpc/impl_kombu.py, line 382, in
reset
(nova.rpc): TRACE: self.channel.close()
(nova.rpc): TRACE:   File
/usr/lib/python2.7/site-packages/kombu/transport/pyamqplib.py, line 196,
in close
(nova.rpc): TRACE: super(Channel, self).close()
(nova.rpc): TRACE:   File
/usr/lib/python2.7/site-packages/amqplib/client_0_8/channel.py, line 194,
in close
(nova.rpc): TRACE: (20, 41),# Channel.close_ok
(nova.rpc): TRACE:   File
/usr/lib/python2.7/site-packages/amqplib/client_0_8/abstract_channel.py,
line 105, in wait
(nova.rpc): TRACE: return amqp_method(self, args)
(nova.rpc): TRACE:   File
/usr/lib/python2.7/site-packages/amqplib/client_0_8/channel.py, line 273,
in _close
(nova.rpc): TRACE: (class_id, method_id))
(nova.rpc): TRACE: AMQPChannelException: (404, u"NOT_FOUND - no exchange
'5f9fb614443640b28285a4eacfcaa88e' in vhost '/'", (60, 40),
'Channel.basic_publish')

Regards


Re: [Openstack] Testing / Development Private Cloud in 1 single Box

2011-10-04 Thread Khairul Aizat Kamarudzzaman
Sorry,
maybe I didn't make it clear ...

The purpose of building the cloud is to use instances for app development.
Would that cloud be considered dev or production?

At the same time, it's my 1st time building/testing a cloud; the reason I'm
using 1 machine is that it's the hardware I currently have for app development.
Please advise.


Regards,

Khairul Aizat Kamarudzzaman
http://launchpad.net/~fenris
fen...@ubuntu.com
+6012.659.5675

On Oct 4, 2011, at 5:48 PM, Tomasz 'Zen' Napierała wrote:

 
 On 4 Oct 2011, at 11:26, Khairul Aizat Kamarudzzaman wrote:
 
 thanks for the advice ... btw, with that hardware spec, is it OK to 
 build it on 1 machine?
 
 
 It's OK for testing/development, but naturally not for production. 
 
 Regards,
 -- 
 Tomasz 'Zen' Napierała
 
 
 
 
 




Re: [Openstack] Testing / Development Private Cloud in 1 single Box

2011-10-04 Thread shake chen
http://cloudbuilders.github.com/devstack/

That script is better; you can try it.



On Tue, Oct 4, 2011 at 4:45 PM, Khairul Aizat Kamarudzzaman 
fen...@ubuntu.com wrote:

 Hi,
 is it possible to use this script:
 wget https://github.com/uksysadmin/OpenStackInstaller/raw/master/OSinstall.sh
 to set up OpenStack on a single box with 16GB of RAM, an i5 CPU, and a 500GB HD?

 or is there any other easier/good script that I can use to build OpenStack?




 Regards,

 Khairul Aizat Kamarudzzaman
 http://launchpad.net/~fenris
 fen...@ubuntu.com
 +6012.659.5675






-- 
陈沙克
Mobile: 13661187180
msn:shake.c...@hotmail.com


Re: [Openstack] Testing / Development Private Cloud in 1 single Box

2011-10-04 Thread Tomasz 'Zen' Napierała

On 4 Oct 2011, at 11:58, Khairul Aizat Kamarudzzaman wrote:

 Sorry,
 maybe I didn't make it clear ...
 
 The purpose of building the cloud is to use instances for app development.
 Would that cloud be considered dev or production?
 
 At the same time, it's my 1st time building/testing a cloud; the reason I'm
 using 1 machine is that it's the hardware I currently have for app development.
 Please advise.


It sounds rather like a production purpose. Generally, one of the biggest pros of 
cloud computing is spreading the load and points of failure across many machines. 
You will lose that, but still get the agility of launching new servers. It might 
work for you anyway, but it's your risk to take.

Regards,
-- 
Tomasz 'Zen' Napierała








Re: [Openstack] Yum repositories for diablo openstack

2011-10-04 Thread Dmitry Maslennikov
On Mon, Oct 3, 2011 at 8:05 PM, Dmitry Maslennikov
dmaslenni...@griddynamics.com wrote:
 On Mon, Oct 3, 2011 at 5:46 PM, Fabrice Bacchella
 fbacche...@spamcop.net wrote:
 I hope it's not too late, but a lot of configuration files are not tagged as 
 such in the spec files.

 So if I try a yum erase, they just vanish. Or they can be overridden by 
 a yum update. That's a big problem for production servers.
 Thank you for the note. We will fix it.
Sorry, but we did not find any configuration files that are not
marked properly. Could you clarify this and give us examples of such
files?

-- 
Dmitry Maslennikov
Principal Software Engineer, Grid Dynamics
SkypeID: maslennikovdm
E-mail: dmaslenni...@griddynamics.com
www.griddynamics.com



[Openstack] vnc on diablo

2011-10-04 Thread Carlo Impagliazzo
Hi guys
I have a working diablo stack.

Using the dashboard I've tried to launch the VNC console; the result is the 
noVNC page saying the server disconnected.

In nova-vnc.log I have

(nova.rpc): TRACE: AMQPChannelException: (404, u"NOT_FOUND - no 
exchange '9077e1d93d3e41ed91d0a551afd3013f' in vhost '/'", (60, 
40), 'Channel.basic_publish')
(nova.rpc): TRACE:
2011-10-04 16:21:05,422 nova.rpc: Returning exception (404, u"NOT_FOUND - no 
exchange '9077e1d93d3e41ed91d0a551afd3013f' in vhost '/'", (60, 
40), 'Channel.basic_publish') to caller

Any suggestions?

The full log trace is here:
http://paste.openstack.org/show/2628/

Thanks!
Carlo




Re: [Openstack] OpenStack + RDMA + Infiniband

2011-10-04 Thread Narayan Desai
On Mon, Oct 3, 2011 at 4:21 PM, Caitlin Bestler
caitlin.best...@nexenta.com wrote:


 Narayan Desai wrote:


 I suspect that the original poster was looking for instance access
 (mediated in some way) to IB gear.
 When we were trying to figure out how to best use our IB gear inside
 of openstack, we decided that
 it was too risky to try exposing IB at the verbs layer to instances
 directly, since the security model
 doesn't appear to have a good way to prevent administrative commands
 from being issued from
 untrusted instances.

 We decided to use IB as fast plumbing for data movement (using
 IPoIB) and have ended up with
 pretty nice I/O performance to the volume service, etc. We haven't
 managed to use it for much more
 than that at this point.

 There's no reason to expect use of IPoIB to end up providing better
 TCP/IP service for large bulk data
 transfer than you would get from a quality Ethernet NIC. But if you have
 an existing IB infrastructure
 it is certainly worth considering. You should experiment to see whether
 you get better performance
 under load from IPoIB in connected mode as opposed to trying SDP.

I suppose that is true, if your link speeds are the same. We're
getting (without much effort) 3 GB/s over IPoIB (connected mode, etc).

 Either IPoIB or SDP should be accessible via a standard sockets
 interface, meaning they could be
 plugged in without modifying the Python code or Python libraries.

Yeah, that is exactly what we did. We used addresses on the IPoIB
layer 3 network to get all of our I/O traffic going over that instead
of ethernet.

 The response to congestion by an IB network is different than the
 response from a TCP network,
 and the response of a TCP network simulated over IPoIB is something else
 entirely. So you'd want
 to do your evaluation with realistic traffic patterns.

Yeah, in our case, the system was specced like an HPC cluster, so the
management network is pretty anemic compared with QDR.
 -nld



[Openstack] CentOS build

2011-10-04 Thread Dmitry Maslennikov
We have prepared CentOS packages of the latest OpenStack release:

http://openstackgd.wordpress.com/2011/10/04/diablo-centos-build/

RHEL build was published a couple of days ago, but was not mentioned
in this list:

http://openstackgd.wordpress.com/2011/10/03/openstack-2011-3-release/

-- 
Dmitry Maslennikov
Principal Software Engineer, Grid Dynamics
SkypeID: maslennikovdm
E-mail: dmaslenni...@griddynamics.com
www.griddynamics.com



Re: [Openstack] Xen image starts Kernel Panic in Diablo

2011-10-04 Thread Rogério Vinhal Nunes
That pretty much solved the disk image problem, thanks. You really should
put that in the official documentation; there is no trace of that option in
it.

But it's still not working. After that I had to change the
libvirt.xml.template to use sda instead of xvda in the root option. There
should be sda in the root option or xvda in the disk option; mixing both of
them, as is happening right now, will never work.

Even after this small setup, it won't work. ttylinux image boots and
complains about not being able to reach IP 169.254.169.254 (I've done the
PREROUTING iptables configuration in the compute nodes), and stalls just
after the setting shared object cache with no console opened:

---
startup crond  [  OK  ]
wget: can't connect to remote host (169.254.169.254): Network is unreachable
cloud-userdata: failed to read instance id
= cloud-final: system completely up in 31.15 seconds 
wget: can't connect to remote host (169.254.169.254): Network is unreachable
wget: can't connect to remote host (169.254.169.254): Network is unreachable
wget: can't connect to remote host (169.254.169.254): Network is unreachable
  instance-id:
  public-ipv4:
  local-ipv4 :
= First-Boot Sequence:
setting shared object cache [running ldconfig]  [  OK  ]
---

The ubuntu image does almost the same, but nothing to do with networking, it
mounts the ext4 filesystem and then just hangs with no console:

---
[0.160477] md: ... autorun DONE.
[0.160659] EXT3-fs (sda): error: couldn't mount because of unsupported
optional features (240)
[0.160909] EXT2-fs (sda): error: couldn't mount because of unsupported
optional features (240)
[0.161908] EXT4-fs (sda): mounted filesystem with ordered data mode.
Opts: (null)
[0.161930] VFS: Mounted root (ext4 filesystem) readonly on device 202:0.
[0.178088] devtmpfs: mounted
[0.178195] Freeing unused kernel memory: 828k freed
[0.178425] Write protecting the kernel read-only data: 10240k
[0.183386] Freeing unused kernel memory: 308k freed
[0.184074] Freeing unused kernel memory: 1612k freed
mountall: Disconnected from Plymouth


Both of them are assigned IPs for the configured nova-network in the
euca-describe-instances, but none of them ping back.

I feel I'm getting really close to getting this working. If you guys could lend
me a little more help, it would be very much appreciated.

On 4 October 2011 at 09:12, Vishvananda Ishaya
vishvana...@gmail.com wrote:

 You may need to set --nouse_cow_images
 Sounds like your image might be a copy on write qcow2 with a backing file.
 You can verify that with qemu-img info /var/lib/nova/instances/disk
 That kind of image won't work with xen.
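The qemu-img check suggested above can be automated; a hedged sketch that parses `qemu-img info` text output (the sample below is illustrative, not captured from this system):

```python
# Detect a qcow2 backing file from `qemu-img info` text output, e.g. from:
#   qemu-img info /var/lib/nova/instances/<instance>/disk
def has_backing_file(info_output):
    """True if any line of qemu-img info output declares a backing file."""
    return any(line.strip().startswith("backing file:")
               for line in info_output.splitlines())

# Illustrative sample output for a COW image that would break under Xen:
sample = """image: disk
file format: qcow2
virtual size: 10G (10737418240 bytes)
backing file: /var/lib/nova/instances/_base/abc123"""
assert has_backing_file(sample)
```

If the check is true, the disk is a copy-on-write overlay, which is what `--nouse_cow_images` avoids.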
 On Oct 3, 2011 9:44 AM, Rogério Vinhal Nunes roge...@dcc.ufmg.br
 wrote:
  Hey guys, I'm still trying to get this working, but I still don't
 understand
  what's happening.
 
  In the ttylinux busybox I do a fdisk -l and it says the disk is only 18
 MB
  large and doesn't have a valid partition table:
 
  
  / # fdisk -l
 
  Disk /dev/sda: 18 MB, 18874368 bytes
  255 heads, 63 sectors/track, 2 cylinders
  Units = cylinders of 16065 * 512 = 8225280 bytes
 
  Disk /dev/sda doesn't contain a valid partition table
  
 
  When looking in the instance directory as I said before, the image is
 only
  18 MB large (while I think ttylinux should be 24 MB), this may be a
 problem.
  I'm using glance as a image server and mounted the /var/lib/instances
 using
  NFS from the cloud controller.
 
  What can I do to get more information? I need to get this configuration
  working.
 
   On 28 September 2011 at 15:50, Rogério Vinhal Nunes
   roge...@dcc.ufmg.br wrote:
 
  Yes, I've tried the ttylinux right now, it starts the instance, but it
  booted up a busybox, probably a recover from initrd (see output in the
 end
  of this e-maill). I can access the instance by doing a xl console in the
  host, describe-instances shows the status running test.
 
  I've successfully booted a separated vm with an old image I used with
 Xen
  with Xen + libvirt just changing the openstack's libvirt.xml. It just
 works
  fine.
 
  The instance dir in /var/lib/nova/instances files look like this:
 
  0 -rw-r- 1 nova nogroup 0 2011-09-28 15:43 console.log
  15M -rw-r--r-- 1 nova nogroup 18M 2011-09-28 15:43 disk
  4,3M -rw-r--r-- 1 nova nogroup 4,3M 2011-09-28 15:43 kernel
  4,0K -rw-r--r-- 1 nova nogroup 1,3K 2011-09-28 15:43 libvirt.xml
  5,7M -rw-r--r-- 1 nova nogroup 5,7M 2011-09-28 15:43 ramdisk
 
  this is the last output I get when I get into the instance console:
 
  [ 0.078066] blkfront: sda: barriers enabled
  [ 0.078394] sda: unknown partition table
  [ 0.170040] XENBUS: Device with no driver: device/vkbd/0
  [ 0.170051] XENBUS: Device with no driver: device/vfb/0
  [ 0.170056] XENBUS: Device with no driver: device/console/0
  [ 0.170074] Magic number: 1:252:3141
  [ 0.170114] /build/buildd/linux-2.6.35/drivers/rtc/hctosys.c: unable 

Re: [Openstack] Xen image starts Kernel Panic in Diablo

2011-10-04 Thread Vishvananda Ishaya
It looks like your dhcp is failing for some reason.  There are a number of 
things that could theoretically cause this.   You might start using tcpdump to 
find out if the dhcp request packet is coming out of the vm and if it is being 
responded to by dnsmasq on the nova-network host. I'm not sure about the 
ubuntu image; is it expecting xvda there?

On Oct 4, 2011, at 12:02 PM, Rogério Vinhal Nunes wrote:

 That pretty much solved the disk image problem, thanks. You really should put 
 that in the official documentation; there is no trace of that option in it.
 
 But it's still not working. After that I had to change the 
 libvirt.xml.template to use sda instead of xvda in the root option. There 
 should be sda in the root option or xvda in the disk option; mixing both of 
 them, as is happening right now, will never work.
 
 Even after this small setup, it won't work. ttylinux image boots and 
 complains about not being able to reach IP 169.254.169.254 (I've done the 
 PREROUTING iptables configuration in the compute nodes), and stalls just 
 after the setting shared object cache with no console opened:
 
 ---
 startup crond  [  OK  ]
 wget: can't connect to remote host (169.254.169.254): Network is unreachable
 cloud-userdata: failed to read instance id
 = cloud-final: system completely up in 31.15 seconds 
 wget: can't connect to remote host (169.254.169.254): Network is unreachable
 wget: can't connect to remote host (169.254.169.254): Network is unreachable
 wget: can't connect to remote host (169.254.169.254): Network is unreachable
   instance-id: 
   public-ipv4: 
   local-ipv4 : 
 = First-Boot Sequence:
 setting shared object cache [running ldconfig]  [  OK  ]
 ---
 
 The ubuntu image does almost the same, but nothing to do with networking, it 
 mounts the ext4 filesystem and then just hangs with no console:
 
 ---
 [0.160477] md: ... autorun DONE.
 [0.160659] EXT3-fs (sda): error: couldn't mount because of unsupported 
 optional features (240)
 [0.160909] EXT2-fs (sda): error: couldn't mount because of unsupported 
 optional features (240)
 [0.161908] EXT4-fs (sda): mounted filesystem with ordered data mode. 
 Opts: (null)
 [0.161930] VFS: Mounted root (ext4 filesystem) readonly on device 202:0.
 [0.178088] devtmpfs: mounted
 [0.178195] Freeing unused kernel memory: 828k freed
 [0.178425] Write protecting the kernel read-only data: 10240k
 [0.183386] Freeing unused kernel memory: 308k freed
 [0.184074] Freeing unused kernel memory: 1612k freed
 mountall: Disconnected from Plymouth
 
 
 Both of them are assigned IPs for the configured nova-network in the 
 euca-describe-instances, but none of them ping back.
 
 I feel I'm getting really close to getting this working. If you guys could lend 
 me a little more help, it would be very much appreciated.
 
 On 4 October 2011 at 09:12, Vishvananda Ishaya vishvana...@gmail.com 
 wrote:
 You may need to set --nouse_cow_images
 Sounds like your image might be a copy on write qcow2 with a backing file. 
 You can verify that with qemu-img info /var/lib/nova/instances/disk 
 That kind of image won't work with xen.
 
 On Oct 3, 2011 9:44 AM, Rogério Vinhal Nunes roge...@dcc.ufmg.br wrote:
  Hey guys, I'm still trying to get this working, but I still don't understand
  what's happening.
  
  In the ttylinux busybox I do a fdisk -l and it says the disk is only 18 MB
  large and doesn't have a valid partition table:
  
  
  / # fdisk -l
  
  Disk /dev/sda: 18 MB, 18874368 bytes
  255 heads, 63 sectors/track, 2 cylinders
  Units = cylinders of 16065 * 512 = 8225280 bytes
  
  Disk /dev/sda doesn't contain a valid partition table
  
  
  When looking in the instance directory as I said before, the image is only
  18 MB large (while I think ttylinux should be 24 MB), this may be a problem.
  I'm using glance as a image server and mounted the /var/lib/instances using
  NFS from the cloud controller.
  
  What can I do to get more information? I need to get this configuration
  working.
  
   On 28 September 2011 at 15:50, Rogério Vinhal Nunes
   roge...@dcc.ufmg.br wrote:
  
  Yes, I've tried the ttylinux right now, it starts the instance, but it
  booted up a busybox, probably a recover from initrd (see output in the end
  of this e-maill). I can access the instance by doing a xl console in the
  host, describe-instances shows the status running test.
 
  I've successfully booted a separated vm with an old image I used with Xen
  with Xen + libvirt just changing the openstack's libvirt.xml. It just works
  fine.
 
  The instance dir in /var/lib/nova/instances files look like this:
 
  0 -rw-r- 1 nova nogroup 0 2011-09-28 15:43 console.log
  15M -rw-r--r-- 1 nova nogroup 18M 2011-09-28 15:43 disk
  4,3M -rw-r--r-- 1 nova nogroup 4,3M 2011-09-28 15:43 kernel
  4,0K -rw-r--r-- 1 nova nogroup 1,3K 2011-09-28 15:43 libvirt.xml
  5,7M -rw-r--r-- 1 nova 

Re: [Openstack] Xen image starts Kernel Panic in Diablo

2011-10-04 Thread Rogério Vinhal Nunes
I don't think the ubuntu image is expecting xvda anywhere. The disk is sda,
and after I changed the libvirt.xml.template, so is the root= option. The
image is the ubuntu local image mentioned in the documentation.

I've tried to telnet 169.254.169.254 32 on the compute node and it didn't
find anything, while telnet 10.0.254.6 8773 does respond. This is the part
of the iptables-save output that handles nova's routing; did the Diablo
configuration also change in this regard?

-A PREROUTING -j nova-compute-PREROUTING
-A PREROUTING -d 169.254.169.254/32 -p tcp -m tcp --dport 80 -j DNAT
--to-destination 10.0.254.6:8773
-A POSTROUTING -j nova-compute-POSTROUTING

I understand that it should forward anything destined to
169.254.169.254/32 port 80 to 10.0.254.6:8773, but that isn't happening even
with telnet. Does the nova-compute-PREROUTING rule play a part in this? The
configuration I've done is just running iptables -A PREROUTING -d
169.254.169.254/32 -p tcp -m tcp --dport 80 -j DNAT --to-destination
10.0.254.6:8773 .
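One way to sanity-check that the metadata DNAT rule is actually installed is to grep the nat table dump for it. The sketch below uses a heredoc standing in for `iptables-save -t nat` output (the sample lines mirror the rules quoted above; running the real command requires root):

```shell
# Sample nat-table dump stands in for: iptables-save -t nat
cat <<'EOF' > /tmp/nat-rules
-A PREROUTING -j nova-compute-PREROUTING
-A PREROUTING -d 169.254.169.254/32 -p tcp -m tcp --dport 80 -j DNAT --to-destination 10.0.254.6:8773
-A POSTROUTING -j nova-compute-POSTROUTING
EOF

# -F: match the rule text literally (dots in the IP are not regex wildcards).
if grep -qF -- '-d 169.254.169.254/32 -p tcp -m tcp --dport 80 -j DNAT' /tmp/nat-rules; then
  echo "metadata DNAT rule present"
else
  echo "metadata DNAT rule missing"
fi
```

Note that a PREROUTING DNAT rule only rewrites packets arriving on the box where it is installed; traffic originating from the guest must actually reach that host (which is why DHCP/routing has to work first).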

On 4 October 2011 16:22, Vishvananda Ishaya
vishvana...@gmail.com wrote:

 It looks like your dhcp is failing for some reason.  There are a number of
 things that could theoretically cause this.   You might start using tcpdump
 to find out if the dhcp request packet is coming out of the vm and if it is
 being responded to by dnsmasq on the nova-network host.  I'm not sure about
 the ubuntu image, is it expecting xvda there?
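A concrete way to follow that suggestion (a sketch; the bridge name and the log excerpt below are assumed examples, not taken from this thread): capture DHCP traffic with tcpdump, then check whether dnsmasq on the nova-network host answered.

```shell
# On the nova-network host, watch for the guest's DHCP exchange (needs root):
#   tcpdump -i br100 -n port 67 or port 68
# Then confirm dnsmasq answered. The excerpt below stands in for /var/log/syslog.
cat <<'EOF' > /tmp/dnsmasq-excerpt.log
Oct  4 12:01:01 host dnsmasq-dhcp[1234]: DHCPDISCOVER(br100) 02:16:3e:aa:bb:cc
Oct  4 12:01:01 host dnsmasq-dhcp[1234]: DHCPOFFER(br100) 10.0.0.3 02:16:3e:aa:bb:cc
EOF

# A DHCPDISCOVER with no matching DHCPOFFER means dnsmasq saw the request
# but did not hand out a lease; no DHCPDISCOVER at all means the request
# never left the VM or never crossed the bridge.
if grep -q 'DHCPOFFER' /tmp/dnsmasq-excerpt.log; then
  echo "dnsmasq offered a lease"
else
  echo "no DHCPOFFER seen - DHCP is the thing to fix first"
fi
```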

 On Oct 4, 2011, at 12:02 PM, Rogério Vinhal Nunes wrote:

 That pretty much solved the disk image problem, thanks. You really should
 put that in the official documentation; there is no trace of that option in
 it.

 But it's still not working. After that I had to change the
 libvirt.xml.template to use sda instead of xvda in the root option. There
 should be sda in both the root option and the disk option, or xvda in both;
 mixing them, as is happening right now, will never work.
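As a sketch, the consistent pairing described above would look like this in the generated libvirt.xml (paths and the bus attribute are illustrative assumptions; the key point is that root= and the disk target name must agree):

```xml
<!-- Hypothetical excerpt: root= in the kernel cmdline must match the
     disk <target dev=...>. Here both use sda; using xvda in both places
     would also be consistent. -->
<os>
  <kernel>/var/lib/nova/instances/instance-00000001/kernel</kernel>
  <cmdline>root=/dev/sda console=ttyS0</cmdline>
</os>
<disk type='file'>
  <source file='/var/lib/nova/instances/instance-00000001/disk'/>
  <target dev='sda' bus='scsi'/>
</disk>
```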

 Even after this small fix, it still won't work. The ttylinux image boots,
 complains about not being able to reach IP 169.254.169.254 (I've done the
 PREROUTING iptables configuration on the compute nodes), and stalls just
 after "setting shared object cache", with no console opened:

 ---
 startup crond  [  OK  ]
 wget: can't connect to remote host (169.254.169.254): Network is
 unreachable
 cloud-userdata: failed to read instance id
 = cloud-final: system completely up in 31.15 seconds 
 wget: can't connect to remote host (169.254.169.254): Network is
 unreachable
 wget: can't connect to remote host (169.254.169.254): Network is
 unreachable
 wget: can't connect to remote host (169.254.169.254): Network is
 unreachable
   instance-id:
   public-ipv4:
   local-ipv4 :
 = First-Boot Sequence:
 setting shared object cache [running ldconfig]  [  OK  ]
 ---

 The ubuntu image does almost the same, though with nothing network-related:
 it mounts the ext4 filesystem and then just hangs with no console:

 ---
 [0.160477] md: ... autorun DONE.
 [0.160659] EXT3-fs (sda): error: couldn't mount because of unsupported
 optional features (240)
 [0.160909] EXT2-fs (sda): error: couldn't mount because of unsupported
 optional features (240)
 [0.161908] EXT4-fs (sda): mounted filesystem with ordered data mode.
 Opts: (null)
 [0.161930] VFS: Mounted root (ext4 filesystem) readonly on device
 202:0.
 [0.178088] devtmpfs: mounted
 [0.178195] Freeing unused kernel memory: 828k freed
 [0.178425] Write protecting the kernel read-only data: 10240k
 [0.183386] Freeing unused kernel memory: 308k freed
 [0.184074] Freeing unused kernel memory: 1612k freed
 mountall: Disconnected from Plymouth
 

 Both of them are assigned IPs by the configured nova-network in
 euca-describe-instances, but neither of them pings back.

 I feel I'm getting really close to getting this working. If you guys could
 lend me a little more help, it would be very much appreciated.

 On 4 October 2011 09:12, Vishvananda Ishaya
 vishvana...@gmail.com wrote:

 You may need to set --nouse_cow_images
 Sounds like your image might be a copy on write qcow2 with a backing file.
 You can verify that with qemu-img info /var/lib/nova/instances/disk
 That kind of image won't work with xen.

Re: [Openstack] Xen image starts Kernel Panic in Diablo

2011-10-04 Thread Vishvananda Ishaya
Yes, that is the rule. But that rule is not going to work if you don't receive 
an IP address via DHCP, so you need to make sure the DHCP piece is working.  
My guess is that once you get DHCP working, the metadata rule will work, since 
it looks like it is being created correctly.

Vish


Re: [Openstack-poc] Meeting tomorrow

2011-10-04 Thread John Purrier
OK.

-Original Message-
From: openstack-poc-bounces+john=openstack@lists.launchpad.net
[mailto:openstack-poc-bounces+john=openstack@lists.launchpad.net] On
Behalf Of Jonathan Bryce
Sent: Monday, October 03, 2011 1:38 PM
To: openstack-poc@lists.launchpad.net
Subject: [Openstack-poc] Meeting tomorrow

How do you all feel about doing our meeting tomorrow over lunch from
1:00-2:00? This seems to be about the only time that's clear. John Dickinson
had the idea of putting it on sched as well so anyone interested can attend.

Let me know if this works for you all. Thanks,

Jonathan.
___
Mailing list: https://launchpad.net/~openstack-poc
Post to : openstack-poc@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack-poc
More help   : https://help.launchpad.net/ListHelp

