Re: [Openstack] Can't ping private or floating IP

2013-02-21 Thread Sylvain Bauza

On 20/02/2013 23:04, Chathura M. Sarathchandra Magurawalage wrote:

Thanks.

I would be more concerned about the SIOCDELRT error above. Are you
trying to manually remove a network route at bootup? It seems the
'route del' is failing because the route does not exist.

I am not doing anything that I am aware of.


As already said, you absolutely need VNC support for
investigating. Could you please fix your VNC setup, which is
incorrect?


But VNC works fine. It's just that the VM hangs on boot-up and won't
come to the login prompt, so I can't log into it.  :(





Ahuh. Sorry for bugging you, I hadn't understood: is your VM failing 
to boot up?

Which distro is your VM based on?

From my POV, your console.log is fine: your VM is booting, getting a 
DHCP lease, trying to contact the metadata server, failing to contact it, 
and that's it. You should get a prompt when logging in through VNC.


Please clarify: either the VM is failing to boot up properly (in that case, 
try a small cloud-ready distro like CirrOS for testing, or try runlevel 3 
or interactive startup), or your VM starts properly but without network 
connectivity (in that case, you have to log in and run more diagnostic 
tools like ping/tcpdump).
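For the second case, a minimal diagnostic sketch run from the compute node (the interface name vnet0 and the fixed IP 10.5.5.3 are taken from earlier in this thread; adjust them to your instance):

```shell
# On the compute node: check that the instance's bridge and tap device exist
brctl show                    # the qbrXXXX bridge should contain a tap/vnet device
ip a show vnet0               # interface name as pasted earlier in the thread

# Watch whether ICMP echo requests actually reach the instance's interface
tcpdump -ni vnet0 icmp &
ping -c 3 10.5.5.3            # the fixed IP reported by 'nova list'
kill %1                       # stop the background tcpdump
```

If tcpdump shows the echo requests arriving on the tap device but the VM never answers, the problem is inside the guest (or its security group rules), not in the host networking.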



Thanks,
-Sylvain
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Can't ping private or floating IP

2013-02-20 Thread Sylvain Bauza

On 20/02/2013 14:04, Chathura M. Sarathchandra Magurawalage wrote:
There are apparently two instances running on the compute node but 
nova sees only one. Probably when I deleted an instance earlier it 
was not removed properly.


root@controller:~# nova list
+--------------------------------------+--------+--------+-------------------+
| ID                                   | Name   | Status | Networks          |
+--------------------------------------+--------+--------+-------------------+
| 42e18cd5-de6f-4181-b238-320fe37ef6f1 | master | ACTIVE | demo-net=10.5.5.3 |
+--------------------------------------+--------+--------+-------------------+


virsh -c qemu+ssh://root@computenode/system list
root@computenode's password:
 Id    Name               State
----------------------------------------------------
 14    instance-002c      running
 18    instance-001e      running




You should have looked at 'sudo virsh list --all', plus checked 
/etc/libvirt/qemu/*.xml, to see how many instances were defined.
I also suspect that, for some reason (probably nova-compute being down), 
the clean-up of 2c didn't happen. Anyway, this is fixed as you 
mention.
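For reference, a sketch of removing a libvirt domain that nova has lost track of (the instance name comes from the listing above; only do this once you are sure nova no longer manages it):

```shell
# List all domains libvirt knows about, including shut-off ones
virsh list --all

# Force off the stale guest and delete its definition
virsh destroy instance-002c      # stops the running domain
virsh undefine instance-002c     # removes /etc/libvirt/qemu/instance-002c.xml
```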



Then I deleted all instances and created a new one. But I still 
can't ping or ssh to the new VM.


<interface type='bridge'>
  <mac address='fa:16:3e:a2:6e:02'/>
  <source bridge='qbrff8933bf-ba'/>
  <model type='virtio'/>
  <filterref filter='nova-instance-instance-0035-fa163ea26e02'>
    <parameter name='DHCPSERVER' value='10.5.5.2'/>
    <parameter name='IP' value='10.5.5.3'/>
    <parameter name='PROJMASK' value='255.255.255.0'/>
    <parameter name='PROJNET' value='10.5.5.0'/>
  </filterref>
  <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
</interface>

Starting network...
udhcpc (v1.18.5) started
Sending discover...
Sending select for 10.5.5.3...
Lease of 10.5.5.3 obtained, lease time 120
deleting routers
route: SIOCDELRT: No such process
adding dns 8.8.8.8




The DHCP reply is correctly received by the instance, from the network 
node to the compute node. This is not a network issue (at least for IP 
assignment).
I would be more concerned about the SIOCDELRT error above. Are you trying 
to manually remove a network route at bootup? It seems the 'route del' 
is failing because the route does not exist.
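Note that with BusyBox udhcpc (as in the console log above), the sample default.script typically prints 'deleting routers' and calls 'route del' in a loop until it fails, so a single SIOCDELRT at first boot is usually harmless. If you still want to rule out a stray 'route del' of your own, a quick grep inside the guest (the paths are common Debian/Ubuntu locations, an assumption here):

```shell
# Inside the guest: look for explicit 'route del' calls in boot scripts
grep -rn "route del" /etc/network /etc/rc.local /etc/init.d 2>/dev/null
```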



As already said, you absolutely need VNC support for investigating. 
Could you please fix your VNC setup, which is incorrect?


<graphics type='vnc' port='-1' autoport='yes' listen='0.0.0.0' keymap='en-us'>
  <listen type='address' address='0.0.0.0'/>
</graphics>


Try in nova-compute.conf:
vncserver_proxyclient_address=<compute node mgmt IP>
vncserver_listen=<compute node mgmt IP>
and in nova.conf:
novncproxy_base_url=http://<controller node mgmt IP>:6080/vnc_auto.html

and restart nova-compute.
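A sketch of applying those settings (the 192.168.2.x addresses are placeholders I've assumed, not values confirmed for this deployment, and the Ubuntu service name is also an assumption):

```shell
# On the compute node: append the VNC settings (substitute your mgmt IPs)
cat >> /etc/nova/nova-compute.conf <<'EOF'
vncserver_proxyclient_address=192.168.2.152
vncserver_listen=192.168.2.152
EOF

# On the controller, in /etc/nova/nova.conf:
#   novncproxy_base_url=http://192.168.2.225:6080/vnc_auto.html

# Restart the compute service so the new flags take effect
service nova-compute restart
```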



On 20 February 2013 11:57, Sylvain Bauza sylvain.ba...@digimind.com wrote:


Could you please paste :
 - /etc/libvirt/qemu/your_instance_id.xml
 - ip a show vnet0
 - brctl show

Sounds like your virtual device is not created. Could you please
launch a new VM and paste /var/log/nova/nova-compute.log ?

Thanks,
-Sylvain





Re: [Openstack] Can't ping private or floating IP

2013-02-20 Thread Chathura M. Sarathchandra Magurawalage
Thanks.

I would be more concerned about the SIOCDELRT error above. Are you trying to
 manually remove a network route at bootup? It seems the 'route del' is
 failing because the route does not exist.

 I am not doing anything that I am aware of.


 As already said, you absolutely need VNC support for investigating. Could
 you please fix your VNC setup, which is incorrect?


But VNC works fine. It's just that the VM hangs on boot-up and won't come
to the login prompt, so I can't log into it.  :(



Re: [Openstack] Can't ping private or floating IP

2013-02-18 Thread Guilherme Russi
Hello Chathura,

 Have you succeeded with your network? I'm having problems with mine too.

Thanks.

Guilherme.


2013/2/17 Chathura M. Sarathchandra Magurawalage 77.chath...@gmail.com

 Hope you had a good night's sleep :)

 Yes sure I will be on irc. my nickname is chathura77

 Thanks

 On 17 February 2013 13:15, Jean-Baptiste RANSY 
 jean-baptiste.ra...@alyseo.com wrote:

  ping

 Are you on IRC ?

 JB



 On 02/17/2013 04:07 AM, Jean-Baptiste RANSY wrote:

 Add Cirros Image to Glance :)

 Username: cirros
 Password: cubswin:)


 http://docs.openstack.org/trunk/openstack-compute/install/apt/content/uploading-to-glance.html

 To reach your VM, it's a bit dirty but you can:
 - put your computer in the same subnet as your controller (192.168.2.0/24
 )
 - then add a static route to the subnet of your VMs (ip route add
 10.5.5.0/24 via 192.168.2.151)
 (192.168.2.151 is the quantum gateway)

 I'm going to sleep, we will continue tomorrow.

 JB

 PS : You also should get some sleep :)


 On 02/17/2013 03:53 AM, Chathura M. Sarathchandra Magurawalage wrote:

  oh that's weird.

  I still get this error. Couldn't this be because I cannot ping the VM in
 the first place? Because as far as I know, metadata takes care of ssh keys.
 But what if you can't reach the VM in the first place?

  no instance data found in start-local

 ci-info: lo: 1 127.0.0.1   255.0.0.0   .

 ci-info: eth0  : 1 10.5.5.3255.255.255.0   fa:16:3e:a7:28:25

 ci-info: route-0: 0.0.0.0 10.5.5.10.0.0.0 eth0   UG

 ci-info: route-1: 10.5.5.00.0.0.0 255.255.255.0   eth0   U

 cloud-init start running: Sun, 17 Feb 2013 02:45:35 +. up 3.51 seconds

 2013-02-17 02:48:25,840 - util.py[WARNING]: 
 'http://169.254.169.254/2009-04-04/meta-data/instance-id' failed [50/120s]: 
 url error [timed out]

 2013-02-17 02:49:16,893 - util.py[WARNING]: 
 'http://169.254.169.254/2009-04-04/meta-data/instance-id' failed [101/120s]: 
 url error [timed out]

 2013-02-17 02:49:34,912 - util.py[WARNING]: 
 'http://169.254.169.254/2009-04-04/meta-data/instance-id' failed [119/120s]: 
 url error [timed out]

 2013-02-17 02:49:35,913 - DataSourceEc2.py[CRITICAL]: giving up on md after 
 120 seconds



 no instance data found in start

 Skipping profile in /etc/apparmor.d/disable: usr.sbin.rsyslogd

  * Starting AppArmor profiles                                        [ OK ]



  On 17 February 2013 02:41, Jean-Baptiste RANSY 
 jean-baptiste.ra...@alyseo.com wrote:

  For me, it's normal that you are not able to curl 169.254.169.254 from
 your compute and controller nodes: same thing on my side, but my VMs get
 their metadata.

 Try to launch an instance.

 JB



 On 02/17/2013 03:35 AM, Chathura M. Sarathchandra Magurawalage wrote:

  root@computernode:~# curl -v  http://169.254.169.254
 * About to connect() to 169.254.169.254 port 80 (#0)
 *   Trying 169.254.169.254...

  root@controller:~# curl -v  http://169.254.169.254
 * About to connect() to 169.254.169.254 port 80 (#0)
 *   Trying 169.254.169.254...


 root@athena:~# iptables -L -n -v
 Chain INPUT (policy ACCEPT 59009 packets, 22M bytes)
  pkts bytes target     prot opt in     out     source               destination
 59493   22M quantum-l3-agent-INPUT  all  --  *      *       0.0.0.0/0            0.0.0.0/0
 59493   22M nova-api-INPUT  all  --  *      *       0.0.0.0/0            0.0.0.0/0
   484 73533 ACCEPT     47   --  *      *       0.0.0.0/0            0.0.0.0/0

 Chain FORWARD (policy ACCEPT 707 packets, 47819 bytes)
  pkts bytes target     prot opt in     out     source               destination
   707 47819 quantum-filter-top  all  --  *      *       0.0.0.0/0            0.0.0.0/0
   707 47819 quantum-l3-agent-FORWARD  all  --  *      *       0.0.0.0/0            0.0.0.0/0
   707 47819 nova-filter-top  all  --  *      *       0.0.0.0/0            0.0.0.0/0
   707 47819 nova-api-FORWARD  all  --  *      *       0.0.0.0/0            0.0.0.0/0

 Chain OUTPUT (policy ACCEPT 56022 packets, 22M bytes)
  pkts bytes target     prot opt in     out     source               destination
 56022   22M quantum-filter-top  all  --  *      *       0.0.0.0/0            0.0.0.0/0
 56022   22M quantum-l3-agent-OUTPUT  all  --  *      *       0.0.0.0/0            0.0.0.0/0
 56022   22M nova-filter-top  all  --  *      *       0.0.0.0/0            0.0.0.0/0
 56022   22M nova-api-OUTPUT  all  --  *      *       0.0.0.0/0            0.0.0.0/0

 Chain nova-api-FORWARD (1 references)
  pkts bytes target     prot opt in     out     source               destination

 Chain nova-api-INPUT (1 references)
  pkts bytes target     prot opt in     out     source               destination
     0     0 ACCEPT     tcp  --  *      *       0.0.0.0/0            192.168.2.225        tcp dpt:8775

 Chain nova-api-OUTPUT (1 references)
  pkts bytes target     prot opt in     out     source               destination

 Chain nova-api-local (1 references)
  pkts bytes target     prot opt in     out     source               destination

 Chain nova-filter-top (2 references)
  pkts bytes target     prot opt in     out     source               destination
 56729   22M nova-api-local  all  --  *      *

Re: [Openstack] Can't ping private or floating IP

2013-02-18 Thread Guilherme Russi
How did you install your controller node? I mean, mine has 2 NICs and I
installed the network node on the same physical machine.


2013/2/18 Chathura M. Sarathchandra Magurawalage 77.chath...@gmail.com

 Hello Guilherme,

 No, I am still having the problem :(



Re: [Openstack] Can't ping private or floating IP

2013-02-18 Thread Chathura M. Sarathchandra Magurawalage
I have only got one NIC, but two virtual interfaces for two different
networks. I have the network node on the same physical machine too.



Re: [Openstack] Can't ping private or floating IP

2013-02-18 Thread Guilherme Russi
Got it, I have one virtual interface too, for the management and VM
configuration part. If you find anything, please let me know.

Thanks.


Re: [Openstack] Can't ping private or floating IP

2013-02-18 Thread Chathura M. Sarathchandra Magurawalage
Yes, definitely, I will post it here for future reference for anybody.


Re: [Openstack] Cant ping private or floating IP

2013-02-17 Thread Jean-Baptiste RANSY
ping

Are you on IRC ?

JB


On 02/17/2013 04:07 AM, Jean-Baptiste RANSY wrote:
 Add Cirros Image to Glance :)

 Username: cirros
 Password: cubswin:)

 http://docs.openstack.org/trunk/openstack-compute/install/apt/content/uploading-to-glance.html

 To reach your VM, it's a bit dirty but you can:
 - put your computer in the same subnet as your controller (192.168.2.0/24)
 - then add a static route to the subnet of your VM (ip route add
 10.5.5.0/24 gw 192.168.2.151)
 (192.168.2.151 is the quantum gateway)

 I'm going to sleep, we will continue tomorrow.

 JB

 PS : You also should get some sleep :)


 On 02/17/2013 03:53 AM, Chathura M. Sarathchandra Magurawalage wrote:
 oh that's weird.

 I still get this error. Couldn't this be because I cannot ping the VM
 in the first place? As far as I know, metadata takes care of the
 ssh keys. But what if you can't reach the VM in the first place?

 no instance data found in start-local

 ci-info: lo: 1 127.0.0.1   255.0.0.0   .

 ci-info: eth0  : 1 10.5.5.3255.255.255.0   fa:16:3e:a7:28:25

 ci-info: route-0: 0.0.0.0 10.5.5.10.0.0.0 eth0   UG

 ci-info: route-1: 10.5.5.00.0.0.0 255.255.255.0   eth0   U

 cloud-init start running: Sun, 17 Feb 2013 02:45:35 +. up 3.51 seconds

 2013-02-17 02:48:25,840 - util.py[WARNING]: 
 'http://169.254.169.254/2009-04-04/meta-data/instance-id' failed [50/120s]: 
 url error [timed out]

 2013-02-17 02:49:16,893 - util.py[WARNING]: 
 'http://169.254.169.254/2009-04-04/meta-data/instance-id' failed [101/120s]: 
 url error [timed out]

 2013-02-17 02:49:34,912 - util.py[WARNING]: 
 'http://169.254.169.254/2009-04-04/meta-data/instance-id' failed [119/120s]: 
 url error [timed out]

 2013-02-17 02:49:35,913 - DataSourceEc2.py[CRITICAL]: giving up on md after 
 120 seconds



 no instance data found in start

 Skipping profile in /etc/apparmor.d/disable: usr.sbin.rsyslogd

  * Starting AppArmor profiles   [ OK ]
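The timeouts in that console.log can be reproduced with a small loop — a sketch only, not cloud-init's actual code. The URL below deliberately points at a closed local port so the loop gives up quickly; on a real instance the same loop against http://169.254.169.254 tells you whether the metadata path works at all.

```shell
# Sketch of cloud-init's metadata poll: retry with a short timeout, give up
# after N attempts. The URL points at a closed local port on purpose.
url='http://127.0.0.1:1/2009-04-04/meta-data/instance-id'
attempts=0
body=''
while [ "$attempts" -lt 3 ]; do
  if body=$(curl -sf --max-time 1 "$url" 2>/dev/null); then
    break
  fi
  attempts=$((attempts + 1))
done
[ -z "$body" ] && echo "giving up on metadata after $attempts attempts"
```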


  On 17 February 2013 02:41, Jean-Baptiste RANSY
 jean-baptiste.ra...@alyseo.com wrote:

 For me, it's normal that you are not able to curl 169.254.169.254
 from your compute and controller nodes : Same thing on my side,
 but my VM get their metadata.

 Try to launch an instance.

 JB



 On 02/17/2013 03:35 AM, Chathura M. Sarathchandra Magurawalage wrote:
 root@computernode:~# curl -v  http://169.254.169.254
 * About to connect() to 169.254.169.254 port 80 (#0)
 *   Trying 169.254.169.254... 

 root@controller:~# curl -v  http://169.254.169.254
 * About to connect() to 169.254.169.254 port 80 (#0)
 *   Trying 169.254.169.254... 


 root@athena:~# iptables -L -n -v
 Chain INPUT (policy ACCEPT 59009 packets, 22M bytes)
  pkts bytes target prot opt in out source   destination
 59493   22M quantum-l3-agent-INPUT  all  --  *  *   0.0.0.0/0    0.0.0.0/0
 59493   22M nova-api-INPUT  all  --  *  *   0.0.0.0/0    0.0.0.0/0
   484 73533 ACCEPT 47   --  *  *   0.0.0.0/0    0.0.0.0/0

 Chain FORWARD (policy ACCEPT 707 packets, 47819 bytes)
  pkts bytes target prot opt in out source   destination
   707 47819 quantum-filter-top  all  --  *  *   0.0.0.0/0    0.0.0.0/0
   707 47819 quantum-l3-agent-FORWARD  all  --  *  *   0.0.0.0/0    0.0.0.0/0
   707 47819 nova-filter-top  all  --  *  *   0.0.0.0/0    0.0.0.0/0
   707 47819 nova-api-FORWARD  all  --  *  *   0.0.0.0/0    0.0.0.0/0

 Chain OUTPUT (policy ACCEPT 56022 packets, 22M bytes)
  pkts bytes target prot opt in out source   destination
 56022   22M quantum-filter-top  all  --  *  *   0.0.0.0/0    0.0.0.0/0
 56022   22M quantum-l3-agent-OUTPUT  all  --  *  *   0.0.0.0/0    0.0.0.0/0
 56022   22M nova-filter-top  all  --  *  *   0.0.0.0/0    0.0.0.0/0
 56022   22M nova-api-OUTPUT  all  --  *  *   0.0.0.0/0    0.0.0.0/0

 Chain nova-api-FORWARD (1 references)
  pkts bytes target prot opt in out source   destination


Re: [Openstack] Cant ping private or floating IP

2013-02-16 Thread Chathura M. Sarathchandra Magurawalage
Hello guys,

The problem still exists. Any ideas?

Thanks

On 15 February 2013 14:37, Sylvain Bauza sylvain.ba...@digimind.com wrote:

 Metadata API allows to fetch SSH credentials when booting (pubkey I mean).
 If a VM is unable to reach metadata service, then it won't be able to get
 its public key, so you won't be able to connect, unless you specifically go
 thru a Password authentication (provided password auth is enabled in
 /etc/ssh/sshd_config, which is not the case with Ubuntu cloud archive).
 There is also a side effect, the boot process is longer as the instance is
 waiting for the curl timeout (60sec.) to finish booting up.

 Re: Quantum, the metadata API is actually DNAT'd from Network node to the
 Nova-api node (here 172.16.0.1 as internal management IP) :
 Chain quantum-l3-agent-PREROUTING (1 references)

 target prot opt source   destination
 DNAT   tcp  --  0.0.0.0/0169.254.169.254  tcp dpt:80
 to:172.16.0.1:8775
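That DNAT rule can be checked mechanically. A sketch — the rule text below is an inline sample; on the real network node you would pipe `iptables-save -t nat` output into the same grep:

```shell
# Look for the metadata DNAT rule in an iptables-save dump. The dump here is
# an inline sample; substitute real `iptables-save -t nat` output on a node.
rules='-A quantum-l3-agent-PREROUTING -d 169.254.169.254/32 -p tcp -m tcp --dport 80 -j DNAT --to-destination 172.16.0.1:8775'
if printf '%s\n' "$rules" | grep -q -e '-d 169\.254\.169\.254.* -j DNAT --to-destination .*:8775'; then
  result=present
else
  result=missing
fi
echo "metadata DNAT rule: $result"
```

If the rule is missing, VMs will hang on the metadata fetch exactly as in the console.log above.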


 Anyway, the first step is to :
 1. grab the console.log
 2. access thru VNC to the desired instance

 Troubleshooting will be easier once that done.

 -Sylvain



 On 15/02/2013 14:24, Chathura M. Sarathchandra Magurawalage wrote:

 Hello Guys,

 Not sure if this is the right port but these are the results:

 *Compute node:*


 root@computenode:~# netstat -an | grep 8775
 tcp        0      0 0.0.0.0:8775            0.0.0.0:*               LISTEN

 *Controller: *


 root@controller:~# netstat -an | grep 8775
 tcp        0      0 0.0.0.0:8775            0.0.0.0:*               LISTEN
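A listening socket in netstat only proves the bind, not that the service answers. A quick reachability probe — a sketch; 127.0.0.1 below is a stand-in for the controller's management IP, and nothing listens on 8775 in this illustration:

```shell
# Probe a TCP port with a timeout; bash's /dev/tcp avoids needing nc/telnet.
host=127.0.0.1
port=8775   # nova-api metadata port; substitute the controller's real IP
if timeout 1 bash -c "exec 3<>/dev/tcp/$host/$port" 2>/dev/null; then
  state=open
else
  state=closed
fi
echo "port $port on $host: $state"
```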

 *Additionally I can't curl 169.254.169.254 from the compute node. I am not
 sure if this is related to not being able to ping the VM.*


 curl -v http://169.254.169.254
 * About to connect() to 169.254.169.254 port 80 (#0)
 *   Trying 169.254.169.254...

 Thanks for your help


 -------------------------------------------------------------
 Chathura Madhusanka Sarathchandra Magurawalage.
 1NW.2.1, Desk 2
 School of Computer Science and Electronic Engineering
 University Of Essex
 United Kingdom.

 Email: csar...@essex.ac.uk
        chathura.sarathchan...@gmail.com
        77.chath...@gmail.com



 On 15 February 2013 11:03, Anil Vishnoi vishnoia...@gmail.com wrote:

 If you are using ubuntu cloud image then the only way to log-in is
 to do ssh with the public key. For that you have to create ssh key
 pair and download the ssh key. You can create this ssh pair using
 horizon/cli.


 On Fri, Feb 15, 2013 at 4:27 PM, Sylvain Bauza
 sylvain.ba...@digimind.com wrote:


 On 15/02/2013 11:42, Chathura M. Sarathchandra Magurawalage
 wrote:


 How can I log into the VM from VNC? What are the credentials?


 You have multiple ways to get VNC access. The easiest one is
 thru Horizon. Other can be looking at the KVM command-line for
 the desired instance (on the compute node) and check the vnc
 port in use (assuming KVM as hypervisor).
 This is basic knowledge of Nova.
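Extracting that port can be scripted. A sketch — the command line below is an illustrative sample, not taken from this setup; on the compute node you would feed it the output of `ps -ef | grep qemu` instead:

```shell
# Pull the -vnc display out of a qemu/KVM command line and turn it into a
# TCP port (display N listens on 5900+N). The command line is a sample.
cmdline='qemu-system-x86_64 -name instance-00000001 -m 2048 -vnc 0.0.0.0:3'
display=$(printf '%s\n' "$cmdline" | sed -n 's/.*-vnc \([^ ]*\).*/\1/p')
port=$((5900 + ${display##*:}))
echo "VNC display $display -> TCP port $port"
```

With display `:3` you would connect a VNC client to the compute node on port 5903.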



 nova-api-metadata is running fine in the compute node.


 Make sure the metadata port is available (check with telnet or
 netstat); nova-api can be running without listening on the
 metadata port.




 ___
 Mailing list: https://launchpad.net/~openstack
 Post to : openstack@lists.launchpad.net
 Unsubscribe : https://launchpad.net/~openstack
 More help   : https://help.launchpad.net/ListHelp




 -- Thanks & Regards
 --Anil Kumar Vishnoi




___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Cant ping private or floating IP

2013-02-16 Thread Jean-Baptiste RANSY
Hello Chathura,

Are you using Folsom with Network Namespaces ?

If yes, have a look here :
http://docs.openstack.org/folsom/openstack-network/admin/content/ch_limitations.html


Regards,

Jean-Baptsite RANSY


On 02/16/2013 05:01 PM, Chathura M. Sarathchandra Magurawalage wrote:
 Hello guys,

 The problem still exists. Any ideas?

 Thanks 

 On 15 February 2013 14:37, Sylvain Bauza sylvain.ba...@digimind.com
 mailto:sylvain.ba...@digimind.com wrote:

 Metadata API allows to fetch SSH credentials when booting (pubkey
 I mean).
 If a VM is unable to reach metadata service, then it won't be able
 to get its public key, so you won't be able to connect, unless you
 specifically go thru a Password authentication (provided password
 auth is enabled in /etc/ssh/sshd_config, which is not the case
 with Ubuntu cloud archive).
 There is also a side effect, the boot process is longer as the
 instance is waiting for the curl timeout (60sec.) to finish
 booting up.

 Re: Quantum, the metadata API is actually DNAT'd from Network node
 to the Nova-api node (here 172.16.0.1 as internal management IP) :
 Chain quantum-l3-agent-PREROUTING (1 references)

 target prot opt source   destination
 DNAT   tcp  --  0.0.0.0/0    169.254.169.254  tcp dpt:80 to:172.16.0.1:8775


 Anyway, the first step is to :
 1. grab the console.log
 2. access thru VNC to the desired instance

 Troubleshooting will be easier once that done.

 -Sylvain



 On 15/02/2013 14:24, Chathura M. Sarathchandra Magurawalage wrote:

 Hello Guys,

 Not sure if this is the right port but these are the results:

 *Compute node:*


 root@computenode:~# netstat -an | grep 8775
 tcp        0      0 0.0.0.0:8775            0.0.0.0:*               LISTEN

 *Controller: *


 root@controller:~# netstat -an | grep 8775
 tcp        0      0 0.0.0.0:8775            0.0.0.0:*               LISTEN

 *Additionally I cant curl 169.254.169.254 from the compute
 node. I am not sure if this is related to not being able to
 PING the VM.*


 curl -v http://169.254.169.254
 * About to connect() to 169.254.169.254 port 80 (#0)
 *   Trying 169.254.169.254...

 Thanks for your help


 
 -------------------------------------------------------------
 Chathura Madhusanka Sarathchandra Magurawalage.
 1NW.2.1, Desk 2
 School of Computer Science and Electronic Engineering
 University Of Essex
 United Kingdom.

 Email: csar...@essex.ac.uk
        chathura.sarathchan...@gmail.com
        77.chath...@gmail.com



 On 15 February 2013 11:03, Anil Vishnoi vishnoia...@gmail.com wrote:

 If you are using ubuntu cloud image then the only way to
 log-in is
 to do ssh with the public key. For that you have to create
 ssh key
 pair and download the ssh key. You can create this ssh
 pair using
 horizon/cli.


 On Fri, Feb 15, 2013 at 4:27 PM, Sylvain Bauza
 sylvain.ba...@digimind.com wrote:


 On 15/02/2013 11:42, Chathura M. Sarathchandra Magurawalage
 wrote:


 How can I log into the VM from VNC? What are the
 credentials?


 You have multiple ways to get VNC access. The easiest
 one is
 thru Horizon. Other can be looking at the KVM
 command-line for
 the desired instance (on the compute node) and check
 the vnc
 port in use (assuming KVM as hypervisor).
 This is basic knowledge of Nova.



 nova-api-metadata is running fine in the compute node.


 Make sure the metadata port is available (check with telnet or
 netstat); nova-api can be running without listening on the
 metadata port.




 ___
 Mailing list: https://launchpad.net/~openstack
 

Re: [Openstack] Cant ping private or floating IP

2013-02-16 Thread Chathura M. Sarathchandra Magurawalage
Hello Jean,

Thanks for your reply.

I followed the instructions in
http://docs.openstack.org/folsom/basic-install/content/basic-install_network.html.
My controller and network node are installed on the same physical
node.

I am using Folsom but without Network namespaces.

But the page you linked states that "If you run both L3 +
DHCP services on the same node, you should enable namespaces to avoid
conflicts with routes".

But currently quantum-dhcp-agent and quantum-l3-agent are running on the
same node?

Additionally, the control node serves as a DHCP server for the local network
(I don't know if that makes any difference).

Any idea what the problem could be?


On 16 February 2013 16:21, Jean-Baptiste RANSY 
jean-baptiste.ra...@alyseo.com wrote:

  Hello Chathura,

 Are you using Folsom with Network Namespaces ?

 If yes, have a look here :
 http://docs.openstack.org/folsom/openstack-network/admin/content/ch_limitations.html


 Regards,

 Jean-Baptsite RANSY



 On 02/16/2013 05:01 PM, Chathura M. Sarathchandra Magurawalage wrote:

 Hello guys,

  The problem still exists. Any ideas?

  Thanks

   On 15 February 2013 14:37, Sylvain Bauza sylvain.ba...@digimind.com wrote:

 Metadata API allows to fetch SSH credentials when booting (pubkey I mean).
 If a VM is unable to reach metadata service, then it won't be able to get
 its public key, so you won't be able to connect, unless you specifically go
 thru a Password authentication (provided password auth is enabled in
 /etc/ssh/sshd_config, which is not the case with Ubuntu cloud archive).
 There is also a side effect, the boot process is longer as the instance
 is waiting for the curl timeout (60sec.) to finish booting up.

 Re: Quantum, the metadata API is actually DNAT'd from Network node to the
 Nova-api node (here 172.16.0.1 as internal management IP) :
 Chain quantum-l3-agent-PREROUTING (1 references)

 target prot opt source   destination
  DNAT   tcp  --  0.0.0.0/0169.254.169.254  tcp
 dpt:80 to:172.16.0.1:8775


 Anyway, the first step is to :
 1. grab the console.log
 2. access thru VNC to the desired instance

 Troubleshooting will be easier once that done.

 -Sylvain



 Le 15/02/2013 14:24, Chathura M. Sarathchandra Magurawalage a écrit :

  Hello Guys,

 Not sure if this is the right port but these are the results:

  *Compute node:*


 root@computenode:~# netstat -an | grep 8775
 tcp        0      0 0.0.0.0:8775            0.0.0.0:*               LISTEN

 *Controller: *


 root@controller:~# netstat -an | grep 8775
 tcp        0      0 0.0.0.0:8775            0.0.0.0:*               LISTEN

 *Additionally I cant curl 169.254.169.254 from the compute node. I am
 not sure if this is related to not being able to PING the VM.*


 curl -v http://169.254.169.254
 * About to connect() to 169.254.169.254 port 80 (#0)
 *   Trying 169.254.169.254...

 Thanks for your help



 -------------------------------------------------------------
 Chathura Madhusanka Sarathchandra Magurawalage.
 1NW.2.1, Desk 2
 School of Computer Science and Electronic Engineering
 University Of Essex
 United Kingdom.

 Email: csar...@essex.ac.uk
        chathura.sarathchan...@gmail.com
        77.chath...@gmail.com



 On 15 February 2013 11:03, Anil Vishnoi vishnoia...@gmail.com wrote:

 If you are using ubuntu cloud image then the only way to log-in is
 to do ssh with the public key. For that you have to create ssh key
 pair and download the ssh key. You can create this ssh pair using
 horizon/cli.


 On Fri, Feb 15, 2013 at 4:27 PM, Sylvain Bauza
 sylvain.ba...@digimind.com wrote:


 On 15/02/2013 11:42, Chathura M. Sarathchandra Magurawalage
 wrote:


 How can I log into the VM from VNC? What are the credentials?


 You have multiple ways to get VNC access. The easiest one is
 thru Horizon. Other can be looking at the KVM command-line for
 the desired instance (on the compute node) and check the vnc
 port in use (assuming KVM as hypervisor).
 This is basic knowledge of Nova.



 nova-api-metadata is running fine in the compute node.


 Make sure the metadata port is available (check with telnet or
 netstat); nova-api can be running without listening on the
 metadata port.




 ___
 Mailing list: https://launchpad.net/~openstack
 Post to : openstack@lists.launchpad.net
 Unsubscribe : https://launchpad.net/~openstack
 More help   : https://help.launchpad.net/ListHelp

Re: [Openstack] Cant ping private or floating IP

2013-02-16 Thread Jean-Baptiste RANSY
Please provide the files listed below:

Controller Node :
/etc/nova/nova.conf
/etc/nova/api-paste.ini
/etc/quantum/l3_agent.ini
/etc/quantum/quantum.conf
/etc/quantum/dhcp_agent.ini
/etc/quantum/plugins/openvswitch/ovs_quantum_plugin.ini
/etc/quantum/api-paste.ini
/var/log/nova/*.log
/var/log/quantum/*.log

Compute Node :
/etc/nova/nova.conf
/etc/nova/nova-compute.conf
/etc/nova/api-paste.ini
/etc/quantum/quantum.conf
/etc/quantum/plugins/openvswitch/ovs_quantum_plugin.ini
/var/log/nova/*.log
/var/log/quantum/*.log

Plus, complete output of the following commands :

Controller Node :
$ keystone endpoint-list
$ ip link show
$ ip route show
$ ip netns show
$ ovs-vsctl show

Compute Node :
$ ip link show
$ ip route show
$ ovs-vsctl show
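A sketch of collecting those command outputs into one log per node; tools that are missing on a given node (e.g. ovs-vsctl on a box without Open vSwitch) are skipped rather than aborting the run:

```shell
# Gather diagnostics into a single temp file; skip tools that aren't
# installed so the same script runs unchanged on controller and compute.
log=$(mktemp)
for cmd in 'ip link show' 'ip route show' 'ip netns show' 'ovs-vsctl show'; do
  tool=${cmd%% *}
  if command -v "$tool" >/dev/null 2>&1; then
    { echo "== $cmd =="; $cmd; } >>"$log" 2>&1
  else
    echo "== $cmd == (tool not installed)" >>"$log"
  fi
done
echo "diagnostics written to $log"
```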

Regards,

Jean-Baptiste RANSY


On 02/16/2013 05:32 PM, Chathura M. Sarathchandra Magurawalage wrote:
 Hello Jean,

 Thanks for your reply.

 I followed the instructions
 in 
 http://docs.openstack.org/folsom/basic-install/content/basic-install_network.html.
 And my Controller and the Network-node is installed in the same
 physical node.

 I am using Folsom but without Network namespaces. 

 But in the website you have provided it states that If you run both
 L3 + DHCP services on the same node, you should enable namespaces to
 avoid conflicts with routes :

 But currently quantum-dhcp-agent and quantum-l3-agent are running in
 the same node? 

 Additionally the control node serves as a DHCP server for the local
 network ( Don't know if that would make and difference)

 Any idea what the problem could be?


 On 16 February 2013 16:21, Jean-Baptiste RANSY
 jean-baptiste.ra...@alyseo.com
 mailto:jean-baptiste.ra...@alyseo.com wrote:

 Hello Chathura,

 Are you using Folsom with Network Namespaces ?

 If yes, have a look here :
 
 http://docs.openstack.org/folsom/openstack-network/admin/content/ch_limitations.html


 Regards,

 Jean-Baptsite RANSY



 On 02/16/2013 05:01 PM, Chathura M. Sarathchandra Magurawalage wrote:
 Hello guys,

 The problem still exists. Any ideas?

 Thanks 

 On 15 February 2013 14:37, Sylvain Bauza
 sylvain.ba...@digimind.com mailto:sylvain.ba...@digimind.com
 wrote:

 Metadata API allows to fetch SSH credentials when booting
 (pubkey I mean).
 If a VM is unable to reach metadata service, then it won't be
 able to get its public key, so you won't be able to connect,
 unless you specifically go thru a Password authentication
 (provided password auth is enabled in /etc/ssh/sshd_config,
 which is not the case with Ubuntu cloud archive).
 There is also a side effect, the boot process is longer as
 the instance is waiting for the curl timeout (60sec.) to
 finish booting up.

 Re: Quantum, the metadata API is actually DNAT'd from Network
 node to the Nova-api node (here 172.16.0.1 as internal
 management IP) :
 Chain quantum-l3-agent-PREROUTING (1 references)

 target prot opt source   destination
 DNAT   tcp  --  0.0.0.0/0    169.254.169.254  tcp dpt:80 to:172.16.0.1:8775


 Anyway, the first step is to :
 1. grab the console.log
 2. access thru VNC to the desired instance

 Troubleshooting will be easier once that done.

 -Sylvain



 On 15/02/2013 14:24, Chathura M. Sarathchandra Magurawalage
 wrote:

 Hello Guys,

 Not sure if this is the right port but these are the results:

 *Compute node:*


 root@computenode:~# netstat -an | grep 8775
 tcp        0      0 0.0.0.0:8775            0.0.0.0:*               LISTEN

 *Controller: *


 root@controller:~# netstat -an | grep 8775
 tcp        0      0 0.0.0.0:8775            0.0.0.0:*               LISTEN

 *Additionally I cant curl 169.254.169.254 from the
 compute node. I am not sure if this is related to not
 being able to PING the VM.*


 curl -v http://169.254.169.254
 * About to connect() to 169.254.169.254 port 80 (#0)
 *   Trying 169.254.169.254...

 Thanks for your help


 
 -------------------------------------------------------------
 Chathura Madhusanka Sarathchandra Magurawalage.
 1NW.2.1, Desk 2
 School of Computer Science and Electronic Engineering
 University Of Essex
 United Kingdom.

 Email: csar...@essex.ac.uk
        chathura.sarathchan...@gmail.com

Re: [Openstack] Cant ping private or floating IP

2013-02-16 Thread Chathura M. Sarathchandra Magurawalage
Thanks Ransy,

I have created a tar file with the configuration and log files in it.
Please download it using the following URL. I have pasted the output of the
commands below.

https://www.dropbox.com/s/qyfcsn50060y304/confilesnlogs.tar

*Controller node:*
*root@controller:~# keystone endpoint-list*
| id                               | region    | publicurl                                       | internalurl                                     | adminurl                                   |
| 2c9a1cb0fe8247d9b7716432cf459fe5 | RegionOne | http://192.168.2.225:8774/v2/$(tenant_id)s      | http://192.168.2.225:8774/v2/$(tenant_id)s      | http://192.168.2.225:8774/v2/$(tenant_id)s |
| 2d306903ed3342a8c7c5680c116f     | RegionOne | http://192.168.2.225:9696/                      | http://192.168.2.225:9696/                      | http://192.168.2.225:9696/                 |
| 3848114f120f42bf819bc2443b28ac9e | RegionOne | http://192.168.2.225:8080/v1/AUTH_$(tenant_id)s | http://192.168.2.225:8080/v1/AUTH_$(tenant_id)s | http://192.168.2.225:8080/v1               |
| 4955173b8d9e4d33ae4a5b29dc12c74d | RegionOne | http://192.168.2.225:8776/v1/$(tenant_id)s      | http://192.168.2.225:8776/v1/$(tenant_id)s      | http://192.168.2.225:8776/v1/$(tenant_id)s |
| d313aa76bf854dde94f33a49a9f0c8ac | RegionOne | http://192.168.2.225:9292/v2                    | http://192.168.2.225:9292/v2                    | http://192.168.2.225:9292/v2               |
| e5aa4ecf3cbe4dd5aba9b204c74fee6a | RegionOne | http://192.168.2.225:5000/v2.0                  | http://192.168.2.225:5000/v2.0                  | http://192.168.2.225:35357/v2.0            |
| fba6f790e3b444c890d114f13cd32b37 | RegionOne | http://192.168.2.225:8773/services/Cloud        | http://192.168.2.225:8773/services/Cloud        | http://192.168.2.225:8773/services/Admin   |

*root@controller:~# ip link show*
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000
    link/ether d4:ae:52:bb:aa:20 brd ff:ff:ff:ff:ff:ff
3: eth1: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN qlen 1000
    link/ether d4:ae:52:bb:aa:21 brd ff:ff:ff:ff:ff:ff
4: eth0.2@eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP
    link/ether d4:ae:52:bb:aa:20 brd ff:ff:ff:ff:ff:ff
5: br-int: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
    link/ether ba:7a:e9:dc:2b:41 brd ff:ff:ff:ff:ff:ff
7: br-ex: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
    link/ether 9a:41:c8:8a:9e:49 brd ff:ff:ff:ff:ff:ff
8: tapf71b5b86-5c: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
    link/ether 2a:44:a3:d1:7d:f3 brd ff:ff:ff:ff:ff:ff
9: qr-4d088f3a-78: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
    link/ether ca:5b:8d:4d:6d:fb brd ff:ff:ff:ff:ff:ff
10: qg-6f8374cb-cb: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
    link/ether 0e:7f:dd:3a:80:bc brd ff:ff:ff:ff:ff:ff
27: br-tun: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN
    link/ether 8a:cf:ec:7c:15:40 brd ff:ff:ff:ff:ff:ff

*root@controller:~# ip route show*
default via 192.168.2.253 dev eth0.2
default via 192.168.2.253 dev eth0.2  metric 100
10.5.5.0/24 dev tapf71b5b86-5c  proto kernel  scope link  src 10.5.5.2
10.5.5.0/24 dev qr-4d088f3a-78  proto kernel  scope link  src 10.5.5.1
10.10.10.0/24 dev eth0  proto kernel  scope link  src 10.10.10.1
192.168.2.0/24 dev eth0.2  proto kernel  scope link  src 192.168.2.225
192.168.2.0/24 dev qg-6f8374cb-cb  proto kernel  scope link  src
192.168.2.151
192.168.2.0/24 dev br-ex  proto kernel  scope link  src 192.168.2.225

*$ ip netns show (Did not return anything)*
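An empty `ip netns show` while L3 and DHCP agents share one host suggests namespaces are disabled. A sketch of the config check — the config text below is an inline sample; on the node you would grep the real /etc/quantum/l3_agent.ini and dhcp_agent.ini:

```shell
# Extract use_namespaces from an agent config. The config text is an inline
# sample standing in for /etc/quantum/l3_agent.ini.
cfg='[DEFAULT]
use_namespaces = False'
ns=$(printf '%s\n' "$cfg" | sed -n 's/^use_namespaces *= *//p')
echo "use_namespaces = $ns"
```

With both agents co-located, the Folsom limitations page linked earlier says this should be True.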

*root@controller:~# ovs-vsctl show*
a566afae-d7a8-42a9-aefe-8b0f2f7054a3
Bridge br-tun
Port gre-4
Interface gre-4
type: gre
options: {in_key=flow, out_key=flow,
remote_ip=10.10.10.12}
Port gre-3
Interface gre-3
type: gre
options: {in_key=flow, out_key=flow, remote_ip=127.0.0.1}
Port patch-int
Interface patch-int
type: patch
options: {peer=patch-tun}
Port br-tun
Interface br-tun
type: internal
Port gre-1
Interface gre-1
type: gre
options: {in_key=flow, out_key=flow, 

Re: [Openstack] Cant ping private or floating IP

2013-02-16 Thread Jean-Baptiste RANSY
Controller node :
# iptables -L -n -v
# iptables -L -n -v -t nat


On 02/17/2013 03:18 AM, Chathura M. Sarathchandra Magurawalage wrote:
 You should be able to curl 169.254.169.254 from the compute node, which I
 can't at the moment.

 I have got the bridge set up in the l3_agent.ini

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Cant ping private or floating IP

2013-02-16 Thread Chathura M. Sarathchandra Magurawalage
root@computernode:~# curl -v  http://169.254.169.254
* About to connect() to 169.254.169.254 port 80 (#0)
*   Trying 169.254.169.254...

root@controller:~# curl -v  http://169.254.169.254
* About to connect() to 169.254.169.254 port 80 (#0)
*   Trying 169.254.169.254...


root@athena:~# iptables -L -n -v
Chain INPUT (policy ACCEPT 59009 packets, 22M bytes)
 pkts bytes target prot opt in out source
destination
59493   22M quantum-l3-agent-INPUT  all  --  *  *   0.0.0.0/0
 0.0.0.0/0
59493   22M nova-api-INPUT  all  --  *  *   0.0.0.0/0
0.0.0.0/0
  484 73533 ACCEPT 47   --  *  *   0.0.0.0/0
0.0.0.0/0

Chain FORWARD (policy ACCEPT 707 packets, 47819 bytes)
 pkts bytes target prot opt in out source
destination
  707 47819 quantum-filter-top  all  --  *  *   0.0.0.0/0
 0.0.0.0/0
  707 47819 quantum-l3-agent-FORWARD  all  --  *  *   0.0.0.0/0
   0.0.0.0/0
  707 47819 nova-filter-top  all  --  *  *   0.0.0.0/0
0.0.0.0/0
  707 47819 nova-api-FORWARD  all  --  *  *   0.0.0.0/0
0.0.0.0/0

Chain OUTPUT (policy ACCEPT 56022 packets, 22M bytes)
 pkts bytes target prot opt in out source
destination
56022   22M quantum-filter-top  all  --  *  *   0.0.0.0/0
 0.0.0.0/0
56022   22M quantum-l3-agent-OUTPUT  all  --  *  *   0.0.0.0/0
   0.0.0.0/0
56022   22M nova-filter-top  all  --  *  *   0.0.0.0/0
0.0.0.0/0
56022   22M nova-api-OUTPUT  all  --  *  *   0.0.0.0/0
0.0.0.0/0

Chain nova-api-FORWARD (1 references)
 pkts bytes target prot opt in out source
destination

Chain nova-api-INPUT (1 references)
 pkts bytes target prot opt in out source
destination
0 0 ACCEPT tcp  --  *  *   0.0.0.0/0
 192.168.2.225tcp dpt:8775

Chain nova-api-OUTPUT (1 references)
 pkts bytes target prot opt in out source
destination

Chain nova-api-local (1 references)
 pkts bytes target prot opt in out source
destination

Chain nova-filter-top (2 references)
 pkts bytes target prot opt in out source
destination
56729   22M nova-api-local  all  --  *  *   0.0.0.0/0
0.0.0.0/0

Chain quantum-filter-top (2 references)
 pkts bytes target prot opt in out source
destination
56729   22M quantum-l3-agent-local  all  --  *  *   0.0.0.0/0
 0.0.0.0/0

Chain quantum-l3-agent-FORWARD (1 references)
 pkts bytes target prot opt in out source
destination

Chain quantum-l3-agent-INPUT (1 references)
 pkts bytes target prot opt in out source
destination
0 0 ACCEPT tcp  --  *  *   0.0.0.0/0
 192.168.2.225tcp dpt:8775

Chain quantum-l3-agent-OUTPUT (1 references)
 pkts bytes target prot opt in out source
destination

Chain quantum-l3-agent-local (1 references)
 pkts bytes target prot opt in out source
destination

root@athena:~# iptables -L -n -v -t nat
Chain PREROUTING (policy ACCEPT 3212 packets, 347K bytes)
 pkts bytes target prot opt in out source
destination
 3212  347K quantum-l3-agent-PREROUTING  all  --  *  *
0.0.0.0/0
0.0.0.0/0
 3212  347K nova-api-PREROUTING  all  --  *  *   0.0.0.0/0
   0.0.0.0/0

Chain INPUT (policy ACCEPT 639 packets, 84948 bytes)
 pkts bytes target prot opt in out source
destination

Chain OUTPUT (policy ACCEPT 3180 packets, 213K bytes)
 pkts bytes target prot opt in out source
destination
 3180  213K quantum-l3-agent-OUTPUT  all  --  *  *   0.0.0.0/0
   0.0.0.0/0
 3180  213K nova-api-OUTPUT  all  --  *  *   0.0.0.0/0
0.0.0.0/0

Chain POSTROUTING (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target prot opt in out source
destination
 3726  247K quantum-l3-agent-POSTROUTING  all  --  *  *
0.0.0.0/0
0.0.0.0/0
0 0 nova-api-POSTROUTING  all  --  *  *   0.0.0.0/0
   0.0.0.0/0
0 0 quantum-postrouting-bottom  all  --  *  *   0.0.0.0/0
 0.0.0.0/0
0 0 nova-postrouting-bottom  all  --  *  *   0.0.0.0/0
   0.0.0.0/0

Chain nova-api-OUTPUT (1 references)
 pkts bytes target prot opt in out source
destination

Chain nova-api-POSTROUTING (1 references)
 pkts bytes target prot opt in out source
destination

Chain nova-api-PREROUTING (1 references)
 pkts bytes target prot opt in out source
destination

Chain nova-api-float-snat (1 references)
 pkts bytes target prot opt in out source
destination

Chain nova-api-snat (1 references)
 pkts bytes target prot opt in out source
destination
0 0 nova-api-float-snat  all  --  *  *   0.0.0.0/0
   0.0.0.0/0

Chain nova-postrouting-bottom (1 references)
 pkts bytes target prot opt in out source
destination
0 0 nova-api-snat  all  --  *  *   0.0.0.0/0
0.0.0.0/0

Chain quantum-l3-agent-OUTPUT (1 references)
 pkts bytes target prot opt in 

Re: [Openstack] Cant ping private or floating IP

2013-02-16 Thread Jean-Baptiste RANSY
For me, it's normal that you are not able to curl 169.254.169.254 from
your compute and controller nodes: same thing on my side, but my VMs get
their metadata.

Try to launch an instance.

JB


On 02/17/2013 03:35 AM, Chathura M. Sarathchandra Magurawalage wrote:
 root@computernode:~# curl -v  http://169.254.169.254
 * About to connect() to 169.254.169.254 port 80 (#0)
 *   Trying 169.254.169.254... 

 root@controller:~# curl -v  http://169.254.169.254
 * About to connect() to 169.254.169.254 port 80 (#0)
 *   Trying 169.254.169.254... 


 root@athena:~# iptables -L -n -v
 Chain INPUT (policy ACCEPT 59009 packets, 22M bytes)
  pkts bytes target prot opt in out source destination
 59493   22M quantum-l3-agent-INPUT  all  --  *  *  0.0.0.0/0  0.0.0.0/0
 59493   22M nova-api-INPUT  all  --  *  *  0.0.0.0/0  0.0.0.0/0
   484 73533 ACCEPT  47   --  *  *  0.0.0.0/0  0.0.0.0/0

 Chain FORWARD (policy ACCEPT 707 packets, 47819 bytes)
  pkts bytes target prot opt in out source destination
   707 47819 quantum-filter-top  all  --  *  *  0.0.0.0/0  0.0.0.0/0
   707 47819 quantum-l3-agent-FORWARD  all  --  *  *  0.0.0.0/0  0.0.0.0/0
   707 47819 nova-filter-top  all  --  *  *  0.0.0.0/0  0.0.0.0/0
   707 47819 nova-api-FORWARD  all  --  *  *  0.0.0.0/0  0.0.0.0/0

 Chain OUTPUT (policy ACCEPT 56022 packets, 22M bytes)
  pkts bytes target prot opt in out source destination
 56022   22M quantum-filter-top  all  --  *  *  0.0.0.0/0  0.0.0.0/0
 56022   22M quantum-l3-agent-OUTPUT  all  --  *  *  0.0.0.0/0  0.0.0.0/0
 56022   22M nova-filter-top  all  --  *  *  0.0.0.0/0  0.0.0.0/0
 56022   22M nova-api-OUTPUT  all  --  *  *  0.0.0.0/0  0.0.0.0/0

 Chain nova-api-FORWARD (1 references)
  pkts bytes target prot opt in out source destination

 Chain nova-api-INPUT (1 references)
  pkts bytes target prot opt in out source destination
     0     0 ACCEPT  tcp  --  *  *  0.0.0.0/0  192.168.2.225  tcp dpt:8775

 Chain nova-api-OUTPUT (1 references)
  pkts bytes target prot opt in out source destination

 Chain nova-api-local (1 references)
  pkts bytes target prot opt in out source destination

 Chain nova-filter-top (2 references)
  pkts bytes target prot opt in out source destination
 56729   22M nova-api-local  all  --  *  *  0.0.0.0/0  0.0.0.0/0

 Chain quantum-filter-top (2 references)
  pkts bytes target prot opt in out source destination
 56729   22M quantum-l3-agent-local  all  --  *  *  0.0.0.0/0  0.0.0.0/0

 Chain quantum-l3-agent-FORWARD (1 references)
  pkts bytes target prot opt in out source destination

 Chain quantum-l3-agent-INPUT (1 references)
  pkts bytes target prot opt in out source destination
     0     0 ACCEPT  tcp  --  *  *  0.0.0.0/0  192.168.2.225  tcp dpt:8775

 Chain quantum-l3-agent-OUTPUT (1 references)
  pkts bytes target prot opt in out source destination

 Chain quantum-l3-agent-local (1 references)
  pkts bytes target prot opt in out source destination

 root@athena:~# iptables -L -n -v -t nat
 Chain PREROUTING (policy ACCEPT 3212 packets, 347K bytes)
  pkts bytes target prot opt in out source destination
  3212  347K quantum-l3-agent-PREROUTING  all  --  *  *  0.0.0.0/0  0.0.0.0/0
  3212  347K nova-api-PREROUTING  all  --  *  *  0.0.0.0/0  0.0.0.0/0

 Chain INPUT (policy ACCEPT 639 packets, 84948 bytes)
  pkts bytes target prot opt in out source destination

 Chain OUTPUT (policy ACCEPT 3180 packets, 213K bytes)
  pkts 

Re: [Openstack] Cant ping private or floating IP

2013-02-16 Thread Chathura M. Sarathchandra Magurawalage
Oh, that's weird.

I still get this error. Couldn't this be because I cannot ping the VM in the
first place? Because as far as I know, metadata takes care of SSH keys. But
what if you can't reach the VM in the first place?

no instance data found in start-local

ci-info: lo: 1 127.0.0.1   255.0.0.0   .

ci-info: eth0  : 1 10.5.5.3255.255.255.0   fa:16:3e:a7:28:25

ci-info: route-0: 0.0.0.0 10.5.5.10.0.0.0 eth0   UG

ci-info: route-1: 10.5.5.00.0.0.0 255.255.255.0   eth0   U

cloud-init start running: Sun, 17 Feb 2013 02:45:35 +. up 3.51 seconds

2013-02-17 02:48:25,840 - util.py[WARNING]:
'http://169.254.169.254/2009-04-04/meta-data/instance-id' failed
[50/120s]: url error [timed out]

2013-02-17 02:49:16,893 - util.py[WARNING]:
'http://169.254.169.254/2009-04-04/meta-data/instance-id' failed
[101/120s]: url error [timed out]

2013-02-17 02:49:34,912 - util.py[WARNING]:
'http://169.254.169.254/2009-04-04/meta-data/instance-id' failed
[119/120s]: url error [timed out]

2013-02-17 02:49:35,913 - DataSourceEc2.py[CRITICAL]: giving up on md
after 120 seconds



no instance data found in start

Skipping profile in /etc/apparmor.d/disable: usr.sbin.rsyslogd

 * Starting AppArmor profiles   [80G
[74G[ OK ]
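
The warnings above show cloud-init retrying the metadata URL until a 120-second deadline expires. A rough sketch of that bounded-retry pattern (not cloud-init's actual code; the deadline is shortened and counted in attempts so the sketch runs instantly):

```shell
# Keep trying a fetch until the deadline passes, then give up,
# mirroring "failed [50/120s] ... giving up on md after 120 seconds".
deadline=3
attempts=0
result="no-data"
while [ "$attempts" -lt "$deadline" ]; do
  attempts=$((attempts + 1))
  if fetched=$(false); then   # stand-in for: curl the metadata URL
    result="$fetched"
    break
  fi
done
echo "result=$result attempts=$attempts"
```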



On 17 February 2013 02:41, Jean-Baptiste RANSY 
jean-baptiste.ra...@alyseo.com wrote:

  For me, it's normal that you are not able to curl 169.254.169.254 from
 your compute and controller nodes: same thing on my side, but my VMs get
 their metadata.

 Try to launch an instance.

 JB



 On 02/17/2013 03:35 AM, Chathura M. Sarathchandra Magurawalage wrote:

  root@computernode:~# curl -v  http://169.254.169.254
 * About to connect() to 169.254.169.254 port 80 (#0)
 *   Trying 169.254.169.254...

  root@controller:~# curl -v  http://169.254.169.254
 * About to connect() to 169.254.169.254 port 80 (#0)
 *   Trying 169.254.169.254...


  root@athena:~# iptables -L -n -v
 Chain INPUT (policy ACCEPT 59009 packets, 22M bytes)
  pkts bytes target prot opt in out source
 destination
 59493   22M quantum-l3-agent-INPUT  all  --  *  *   0.0.0.0/0
0.0.0.0/0
 59493   22M nova-api-INPUT  all  --  *  *   0.0.0.0/0
 0.0.0.0/0
   484 73533 ACCEPT 47   --  *  *   0.0.0.0/0
 0.0.0.0/0

  Chain FORWARD (policy ACCEPT 707 packets, 47819 bytes)
  pkts bytes target prot opt in out source
 destination
   707 47819 quantum-filter-top  all  --  *  *   0.0.0.0/0
0.0.0.0/0
   707 47819 quantum-l3-agent-FORWARD  all  --  *  *   0.0.0.0/0
  0.0.0.0/0
   707 47819 nova-filter-top  all  --  *  *   0.0.0.0/0
 0.0.0.0/0
   707 47819 nova-api-FORWARD  all  --  *  *   0.0.0.0/0
  0.0.0.0/0

  Chain OUTPUT (policy ACCEPT 56022 packets, 22M bytes)
  pkts bytes target prot opt in out source
 destination
 56022   22M quantum-filter-top  all  --  *  *   0.0.0.0/0
0.0.0.0/0
 56022   22M quantum-l3-agent-OUTPUT  all  --  *  *   0.0.0.0/0
  0.0.0.0/0
 56022   22M nova-filter-top  all  --  *  *   0.0.0.0/0
 0.0.0.0/0
 56022   22M nova-api-OUTPUT  all  --  *  *   0.0.0.0/0
 0.0.0.0/0

  Chain nova-api-FORWARD (1 references)
  pkts bytes target prot opt in out source
 destination

  Chain nova-api-INPUT (1 references)
  pkts bytes target prot opt in out source
 destination
 0 0 ACCEPT tcp  --  *  *   0.0.0.0/0
  192.168.2.225tcp dpt:8775

  Chain nova-api-OUTPUT (1 references)
  pkts bytes target prot opt in out source
 destination

  Chain nova-api-local (1 references)
  pkts bytes target prot opt in out source
 destination

  Chain nova-filter-top (2 references)
  pkts bytes target prot opt in out source
 destination
 56729   22M nova-api-local  all  --  *  *   0.0.0.0/0
 0.0.0.0/0

  Chain quantum-filter-top (2 references)
  pkts bytes target prot opt in out source
 destination
 56729   22M quantum-l3-agent-local  all  --  *  *   0.0.0.0/0
0.0.0.0/0

  Chain quantum-l3-agent-FORWARD (1 references)
  pkts bytes target prot opt in out source
 destination

  Chain quantum-l3-agent-INPUT (1 references)
  pkts bytes target prot opt in out source
 destination
 0 0 ACCEPT tcp  --  *  *   0.0.0.0/0
  192.168.2.225tcp dpt:8775

  Chain quantum-l3-agent-OUTPUT (1 references)
  pkts bytes target prot opt in out source
 destination

  Chain quantum-l3-agent-local (1 references)
  pkts bytes target prot opt in out source
 destination

  root@athena:~# iptables -L -n -v -t nat
 Chain PREROUTING (policy ACCEPT 3212 packets, 347K bytes)
  pkts bytes target prot opt in out source
 destination
  3212  347K quantum-l3-agent-PREROUTING  all  --  *  *   0.0.0.0/0
 0.0.0.0/0
  3212  347K nova-api-PREROUTING  all  --  *  *   

Re: [Openstack] Cant ping private or floating IP

2013-02-16 Thread Jean-Baptiste RANSY
Add Cirros Image to Glance :)

Username: cirros
Password: cubswin:)

http://docs.openstack.org/trunk/openstack-compute/install/apt/content/uploading-to-glance.html
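
A sketch of the upload, assuming the Folsom glance client (the download URL and flag spellings may differ slightly between client versions):

```shell
# Download the CirrOS image and register it with Glance.
wget http://download.cirros-cloud.net/0.3.0/cirros-0.3.0-x86_64-disk.img
glance image-create --name cirros-0.3.0-x86_64 --disk-format qcow2 \
  --container-format bare < cirros-0.3.0-x86_64-disk.img
glance image-list          # verify the image shows up as "active"
```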

To reach your VM, it's a bit dirty but you can:
- put your computer in the same subnet as your controller (192.168.2.0/24)
- then add a static route to the subnet of your VMs (ip route add
10.5.5.0/24 gw 192.168.2.151)
(192.168.2.151 is the quantum gateway)
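
On a workstation already in 192.168.2.0/24, the workaround above amounts to (addresses taken from this thread):

```shell
# Route the tenant subnet via the quantum L3 gateway, then test.
sudo ip route add 10.5.5.0/24 via 192.168.2.151
ping -c 3 10.5.5.3                 # the VM's fixed IP from the console log
sudo ip route del 10.5.5.0/24      # remove the route when done
```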

I'm going to sleep, we will continue tomorrow.

JB

PS : You also should get some sleep :)


On 02/17/2013 03:53 AM, Chathura M. Sarathchandra Magurawalage wrote:
 Oh, that's weird.

 I still get this error. Couldn't this be because I cannot ping the VM
 in the first place? Because as far as I know, metadata takes care of
 SSH keys. But what if you can't reach the VM in the first place?

 no instance data found in start-local

 ci-info: lo: 1 127.0.0.1   255.0.0.0   .

 ci-info: eth0  : 1 10.5.5.3255.255.255.0   fa:16:3e:a7:28:25

 ci-info: route-0: 0.0.0.0 10.5.5.10.0.0.0 eth0   UG

 ci-info: route-1: 10.5.5.00.0.0.0 255.255.255.0   eth0   U

 cloud-init start running: Sun, 17 Feb 2013 02:45:35 +. up 3.51 seconds

 2013-02-17 02:48:25,840 - util.py[WARNING]: 
 'http://169.254.169.254/2009-04-04/meta-data/instance-id' failed [50/120s]: 
 url error [timed out]

 2013-02-17 02:49:16,893 - util.py[WARNING]: 
 'http://169.254.169.254/2009-04-04/meta-data/instance-id' failed [101/120s]: 
 url error [timed out]

 2013-02-17 02:49:34,912 - util.py[WARNING]: 
 'http://169.254.169.254/2009-04-04/meta-data/instance-id' failed [119/120s]: 
 url error [timed out]

 2013-02-17 02:49:35,913 - DataSourceEc2.py[CRITICAL]: giving up on md after 
 120 seconds



 no instance data found in start

 Skipping profile in /etc/apparmor.d/disable: usr.sbin.rsyslogd

  * Starting AppArmor profiles   [80G 
 [74G[ OK ]


 On 17 February 2013 02:41, Jean-Baptiste RANSY
 jean-baptiste.ra...@alyseo.com
 mailto:jean-baptiste.ra...@alyseo.com wrote:

 For me, it's normal that you are not able to curl 169.254.169.254
 from your compute and controller nodes: same thing on my side,
 but my VMs get their metadata.

 Try to launch an instance.

 JB



 On 02/17/2013 03:35 AM, Chathura M. Sarathchandra Magurawalage wrote:
 root@computernode:~# curl -v  http://169.254.169.254
 * About to connect() to 169.254.169.254 port 80 (#0)
 *   Trying 169.254.169.254... 

 root@controller:~# curl -v  http://169.254.169.254
 * About to connect() to 169.254.169.254 port 80 (#0)
 *   Trying 169.254.169.254... 


 root@athena:~# iptables -L -n -v
 Chain INPUT (policy ACCEPT 59009 packets, 22M bytes)
  pkts bytes target prot opt in out source destination
 59493   22M quantum-l3-agent-INPUT  all  --  *  *  0.0.0.0/0  0.0.0.0/0
 59493   22M nova-api-INPUT  all  --  *  *  0.0.0.0/0  0.0.0.0/0
   484 73533 ACCEPT  47   --  *  *  0.0.0.0/0  0.0.0.0/0

 Chain FORWARD (policy ACCEPT 707 packets, 47819 bytes)
  pkts bytes target prot opt in out source destination
   707 47819 quantum-filter-top  all  --  *  *  0.0.0.0/0  0.0.0.0/0
   707 47819 quantum-l3-agent-FORWARD  all  --  *  *  0.0.0.0/0  0.0.0.0/0
   707 47819 nova-filter-top  all  --  *  *  0.0.0.0/0  0.0.0.0/0
   707 47819 nova-api-FORWARD  all  --  *  *  0.0.0.0/0  0.0.0.0/0

 Chain OUTPUT (policy ACCEPT 56022 packets, 22M bytes)
  pkts bytes target prot opt in out source destination
 56022   22M quantum-filter-top  all  --  *  *  0.0.0.0/0  0.0.0.0/0
 56022   22M quantum-l3-agent-OUTPUT  all  --  *  *  0.0.0.0/0  0.0.0.0/0
 56022   22M nova-filter-top  all  --  *  *  0.0.0.0/0  0.0.0.0/0
 56022   22M nova-api-OUTPUT  all  --  *  *  0.0.0.0/0  0.0.0.0/0

 Chain nova-api-FORWARD (1 references)
  pkts bytes target prot opt in out source destination

 Chain nova-api-INPUT (1 references)
  pkts bytes target prot opt in out source destination

Re: [Openstack] Cant ping private or floating IP

2013-02-15 Thread Chathura M. Sarathchandra Magurawalage
Hello Anil,

I cannot SSH into the VM, so I can't run ifconfig from the VM.

I am using quantum with quantum-plugin-openvswitch-agent,
quantum-dhcp-agent and quantum-l3-agent, as described in the guide.

Thanks.

-
Chathura Madhusanka Sarathchandra Magurawalage.
1NW.2.1, Desk 2
School of Computer Science and Electronic Engineering
University Of Essex
United Kingdom.

Email: csar...@essex.ac.uk
  chathura.sarathchan...@gmail.com
  77.chath...@gmail.com


On 15 February 2013 07:34, Anil Vishnoi vishnoia...@gmail.com wrote:

 Did your VM get an IP address? Can you paste the output of ifconfig from
 your VM? Are you using nova-network or quantum? If quantum, which plugin
 are you using?


  On Fri, Feb 15, 2013 at 4:28 AM, Chathura M. Sarathchandra Magurawalage 
 77.chath...@gmail.com wrote:

  Hello,

 I followed the folsom basic install instructions in
 http://docs.openstack.org/folsom/basic-install/content/basic-install_operate.html

 But now I am not able to ping either the private or the floating ip of
 the instances.

 Can someone please help?

 Instance log:

 [0.00] Initializing cgroup subsys cpuset
 [0.00] Initializing cgroup subsys cpu
 [0.00] Linux version 3.2.0-37-virtual (buildd@allspice) (gcc version 
 4.6.3 (Ubuntu/Linaro 4.6.3-1ubuntu5) ) #58-Ubuntu SMP Thu Jan 24 15:48:03 
 UTC 2013 (Ubuntu 3.2.0-37.58-virtual 3.2.35)
 [0.00] Command line: BOOT_IMAGE=/boot/vmlinuz-3.2.0-37-virtual 
 root=LABEL=cloudimg-rootfs ro console=ttyS0
 [0.00] KERNEL supported cpus:
 [0.00]   Intel GenuineIntel
 [0.00]   AMD AuthenticAMD
 [0.00]   Centaur CentaurHauls
 [0.00] BIOS-provided physical RAM map:
 [0.00]  BIOS-e820:  - 0009bc00 (usable)
 [0.00]  BIOS-e820: 0009bc00 - 000a (reserved)
 [0.00]  BIOS-e820: 000f - 0010 (reserved)
 [0.00]  BIOS-e820: 0010 - 7fffd000 (usable)
 [0.00]  BIOS-e820: 7fffd000 - 8000 (reserved)
 [0.00]  BIOS-e820: feffc000 - ff00 (reserved)
 [0.00]  BIOS-e820: fffc - 0001 (reserved)
 [0.00] NX (Execute Disable) protection: active
 [0.00] DMI 2.4 present.
 [0.00] No AGP bridge found
 [0.00] last_pfn = 0x7fffd max_arch_pfn = 0x4
 [0.00] x86 PAT enabled: cpu 0, old 0x70406, new 0x7010600070106
 [0.00] found SMP MP-table at [880fdae0] fdae0
 [0.00] init_memory_mapping: -7fffd000
 [0.00] RAMDISK: 3776c000 - 37bae000
 [0.00] ACPI: RSDP 000fd980 00014 (v00 BOCHS )
 [0.00] ACPI: RSDT 7fffd7b0 00034 (v01 BOCHS  BXPCRSDT 
 0001 BXPC 0001)
 [0.00] ACPI: FACP 7f80 00074 (v01 BOCHS  BXPCFACP 
 0001 BXPC 0001)
 [0.00] ACPI: DSDT 7fffd9b0 02589 (v01   BXPC   BXDSDT 
 0001 INTL 20100528)
 [0.00] ACPI: FACS 7f40 00040
 [0.00] ACPI: SSDT 7fffd910 0009E (v01 BOCHS  BXPCSSDT 
 0001 BXPC 0001)
 [0.00] ACPI: APIC 7fffd830 00072 (v01 BOCHS  BXPCAPIC 
 0001 BXPC 0001)
 [0.00] ACPI: HPET 7fffd7f0 00038 (v01 BOCHS  BXPCHPET 
 0001 BXPC 0001)
 [0.00] No NUMA configuration found
 [0.00] Faking a node at -7fffd000
 [0.00] Initmem setup node 0 -7fffd000
 [0.00]   NODE_DATA [7fff8000 - 7fffcfff]
 [0.00] kvm-clock: Using msrs 4b564d01 and 4b564d00
 [0.00] kvm-clock: cpu 0, msr 0:1cf7681, boot clock
 [0.00] Zone PFN ranges:
 [0.00]   DMA  0x0010 - 0x1000
 [0.00]   DMA320x1000 - 0x0010
 [0.00]   Normal   empty
 [0.00] Movable zone start PFN for each node
 [0.00] early_node_map[2] active PFN ranges
 [0.00] 0: 0x0010 - 0x009b
 [0.00] 0: 0x0100 - 0x0007fffd
 [0.00] ACPI: PM-Timer IO Port: 0xb008
 [0.00] ACPI: LAPIC (acpi_id[0x00] lapic_id[0x00] enabled)
 [0.00] ACPI: IOAPIC (id[0x01] address[0xfec0] gsi_base[0])
 [0.00] IOAPIC[0]: apic_id 1, version 17, address 0xfec0, GSI 0-23
 [0.00] ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
 [0.00] ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
 [0.00] ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
 [0.00] ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
 [0.00] ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
 [0.00] Using ACPI (MADT) for SMP configuration information
 [0.00] ACPI: HPET id: 0x8086a201 base: 

Re: [Openstack] Cant ping private or floating IP

2013-02-15 Thread JuanFra Rodriguez Cardoso
Have you tried to ping the VM from your own host?
Have you enabled PING and SSH in 'Access and security policies'?
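
With the Folsom-era nova CLI, enabling ping and SSH in the tenant's default security group looks roughly like this:

```shell
# Allow ICMP (ping) and TCP/22 (SSH) from anywhere into the default group.
nova secgroup-add-rule default icmp -1 -1 0.0.0.0/0
nova secgroup-add-rule default tcp 22 22 0.0.0.0/0
nova secgroup-list-rules default     # verify both rules are present
```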

Regards!
JuanFra


2013/2/15 Chathura M. Sarathchandra Magurawalage 77.chath...@gmail.com

 Hello Anil,

  I cannot SSH into the VM, so I can't run ifconfig from the VM.

  I am using quantum with quantum-plugin-openvswitch-agent,
  quantum-dhcp-agent and quantum-l3-agent, as described in the guide.

 Thanks.


 -
 Chathura Madhusanka Sarathchandra Magurawalage.
 1NW.2.1, Desk 2
 School of Computer Science and Electronic Engineering
 University Of Essex
 United Kingdom.

 Email: csar...@essex.ac.uk
   chathura.sarathchan...@gmail.com 77.chath...@gmail.com
   77.chath...@gmail.com


 On 15 February 2013 07:34, Anil Vishnoi vishnoia...@gmail.com wrote:

 Did your VM get an IP address? Can you paste the output of ifconfig from
 your VM? Are you using nova-network or quantum? If quantum, which plugin
 are you using?


  On Fri, Feb 15, 2013 at 4:28 AM, Chathura M. Sarathchandra Magurawalage
 77.chath...@gmail.com wrote:

  Hello,

 I followed the folsom basic install instructions in
 http://docs.openstack.org/folsom/basic-install/content/basic-install_operate.html

 But now I am not able to ping either the private or the floating ip of
 the instances.

 Can someone please help?

 Instance log:

 [0.00] Initializing cgroup subsys cpuset
 [0.00] Initializing cgroup subsys cpu
 [0.00] Linux version 3.2.0-37-virtual (buildd@allspice) (gcc 
 version 4.6.3 (Ubuntu/Linaro 4.6.3-1ubuntu5) ) #58-Ubuntu SMP Thu Jan 24 
 15:48:03 UTC 2013 (Ubuntu 3.2.0-37.58-virtual 3.2.35)
 [0.00] Command line: BOOT_IMAGE=/boot/vmlinuz-3.2.0-37-virtual 
 root=LABEL=cloudimg-rootfs ro console=ttyS0
 [0.00] KERNEL supported cpus:
 [0.00]   Intel GenuineIntel
 [0.00]   AMD AuthenticAMD
 [0.00]   Centaur CentaurHauls
 [0.00] BIOS-provided physical RAM map:
 [0.00]  BIOS-e820:  - 0009bc00 (usable)
 [0.00]  BIOS-e820: 0009bc00 - 000a (reserved)
 [0.00]  BIOS-e820: 000f - 0010 (reserved)
 [0.00]  BIOS-e820: 0010 - 7fffd000 (usable)
 [0.00]  BIOS-e820: 7fffd000 - 8000 (reserved)
 [0.00]  BIOS-e820: feffc000 - ff00 (reserved)
 [0.00]  BIOS-e820: fffc - 0001 (reserved)
 [0.00] NX (Execute Disable) protection: active
 [0.00] DMI 2.4 present.
 [0.00] No AGP bridge found
 [0.00] last_pfn = 0x7fffd max_arch_pfn = 0x4
 [0.00] x86 PAT enabled: cpu 0, old 0x70406, new 0x7010600070106
 [0.00] found SMP MP-table at [880fdae0] fdae0
 [0.00] init_memory_mapping: -7fffd000
 [0.00] RAMDISK: 3776c000 - 37bae000
 [0.00] ACPI: RSDP 000fd980 00014 (v00 BOCHS )
 [0.00] ACPI: RSDT 7fffd7b0 00034 (v01 BOCHS  BXPCRSDT 
 0001 BXPC 0001)
 [0.00] ACPI: FACP 7f80 00074 (v01 BOCHS  BXPCFACP 
 0001 BXPC 0001)
 [0.00] ACPI: DSDT 7fffd9b0 02589 (v01   BXPC   BXDSDT 
 0001 INTL 20100528)
 [0.00] ACPI: FACS 7f40 00040
 [0.00] ACPI: SSDT 7fffd910 0009E (v01 BOCHS  BXPCSSDT 
 0001 BXPC 0001)
 [0.00] ACPI: APIC 7fffd830 00072 (v01 BOCHS  BXPCAPIC 
 0001 BXPC 0001)
 [0.00] ACPI: HPET 7fffd7f0 00038 (v01 BOCHS  BXPCHPET 
 0001 BXPC 0001)
 [0.00] No NUMA configuration found
 [0.00] Faking a node at -7fffd000
 [0.00] Initmem setup node 0 -7fffd000
 [0.00]   NODE_DATA [7fff8000 - 7fffcfff]
 [0.00] kvm-clock: Using msrs 4b564d01 and 4b564d00
 [0.00] kvm-clock: cpu 0, msr 0:1cf7681, boot clock
 [0.00] Zone PFN ranges:
 [0.00]   DMA  0x0010 - 0x1000
 [0.00]   DMA320x1000 - 0x0010
 [0.00]   Normal   empty
 [0.00] Movable zone start PFN for each node
 [0.00] early_node_map[2] active PFN ranges
 [0.00] 0: 0x0010 - 0x009b
 [0.00] 0: 0x0100 - 0x0007fffd
 [0.00] ACPI: PM-Timer IO Port: 0xb008
 [0.00] ACPI: LAPIC (acpi_id[0x00] lapic_id[0x00] enabled)
 [0.00] ACPI: IOAPIC (id[0x01] address[0xfec0] gsi_base[0])
 [0.00] IOAPIC[0]: apic_id 1, version 17, address 0xfec0, GSI 
 0-23
 [0.00] ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
 [0.00] ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
 [0.00] ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
 [0.00] ACPI: INT_SRC_OVR (bus 0 bus_irq 10 

Re: [Openstack] Cant ping private or floating IP

2013-02-15 Thread Jean-Baptiste RANSY
Hello Chathura;

It's normal that your compute node has no route to the tenant network.
Quantum and Open vSwitch provide the Layer 2 link and, as I can see, the VM
obtains an IP address.
So we can assume that Quantum and Open vSwitch are set up correctly.

Same question as JuanFra: have you enabled PING and SSH in 'Access and
security policies'?

Other things:

Cloud-init (in the VM) is unable to retrieve metadata. Is nova-api-metadata
running on your compute node?
If yes, check your nova.conf.
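
The metadata-related settings to double-check in nova.conf typically look like this on Folsom (the IP is the controller address seen elsewhere in this thread; option names can vary between releases, so treat this as a sketch):

```ini
[DEFAULT]
# The metadata API must be among the enabled APIs ...
enabled_apis = ec2,osapi_compute,metadata
# ... and must listen where 169.254.169.254 gets NATed to (port 8775).
metadata_host = 192.168.2.225
metadata_listen = 0.0.0.0
metadata_listen_port = 8775
```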

Regards,

Jean-Baptiste RANSY


On 02/14/2013 11:58 PM, Chathura M. Sarathchandra Magurawalage wrote:
 Hello,

 I followed the folsom basic install instructions
 in 
 http://docs.openstack.org/folsom/basic-install/content/basic-install_operate.html

 But now I am not able to ping either the private or the floating ip of
 the instances.

 Can someone please help?

 Instance log:

 [0.00] Initializing cgroup subsys cpuset
 [0.00] Initializing cgroup subsys cpu
 [0.00] Linux version 3.2.0-37-virtual (buildd@allspice) (gcc version 
 4.6.3 (Ubuntu/Linaro 4.6.3-1ubuntu5) ) #58-Ubuntu SMP Thu Jan 24 15:48:03 UTC 
 2013 (Ubuntu 3.2.0-37.58-virtual 3.2.35)
 [0.00] Command line: BOOT_IMAGE=/boot/vmlinuz-3.2.0-37-virtual 
 root=LABEL=cloudimg-rootfs ro console=ttyS0
 [0.00] KERNEL supported cpus:
 [0.00]   Intel GenuineIntel
 [0.00]   AMD AuthenticAMD
 [0.00]   Centaur CentaurHauls
 [0.00] BIOS-provided physical RAM map:
 [0.00]  BIOS-e820:  - 0009bc00 (usable)
 [0.00]  BIOS-e820: 0009bc00 - 000a (reserved)
 [0.00]  BIOS-e820: 000f - 0010 (reserved)
 [0.00]  BIOS-e820: 0010 - 7fffd000 (usable)
 [0.00]  BIOS-e820: 7fffd000 - 8000 (reserved)
 [0.00]  BIOS-e820: feffc000 - ff00 (reserved)
 [0.00]  BIOS-e820: fffc - 0001 (reserved)
 [0.00] NX (Execute Disable) protection: active
 [0.00] DMI 2.4 present.
 [0.00] No AGP bridge found
 [0.00] last_pfn = 0x7fffd max_arch_pfn = 0x4
 [0.00] x86 PAT enabled: cpu 0, old 0x70406, new 0x7010600070106
 [0.00] found SMP MP-table at [880fdae0] fdae0
 [0.00] init_memory_mapping: -7fffd000
 [0.00] RAMDISK: 3776c000 - 37bae000
 [0.00] ACPI: RSDP 000fd980 00014 (v00 BOCHS )
 [0.00] ACPI: RSDT 7fffd7b0 00034 (v01 BOCHS  BXPCRSDT 
 0001 BXPC 0001)
 [0.00] ACPI: FACP 7f80 00074 (v01 BOCHS  BXPCFACP 
 0001 BXPC 0001)
 [0.00] ACPI: DSDT 7fffd9b0 02589 (v01   BXPC   BXDSDT 
 0001 INTL 20100528)
 [0.00] ACPI: FACS 7f40 00040
 [0.00] ACPI: SSDT 7fffd910 0009E (v01 BOCHS  BXPCSSDT 
 0001 BXPC 0001)
 [0.00] ACPI: APIC 7fffd830 00072 (v01 BOCHS  BXPCAPIC 
 0001 BXPC 0001)
 [0.00] ACPI: HPET 7fffd7f0 00038 (v01 BOCHS  BXPCHPET 
 0001 BXPC 0001)
 [0.00] No NUMA configuration found
 [0.00] Faking a node at -7fffd000
 [0.00] Initmem setup node 0 -7fffd000
 [0.00]   NODE_DATA [7fff8000 - 7fffcfff]
 [0.00] kvm-clock: Using msrs 4b564d01 and 4b564d00
 [0.00] kvm-clock: cpu 0, msr 0:1cf7681, boot clock
 [0.00] Zone PFN ranges:
 [0.00]   DMA  0x0010 - 0x1000
 [0.00]   DMA320x1000 - 0x0010
 [0.00]   Normal   empty
 [0.00] Movable zone start PFN for each node
 [0.00] early_node_map[2] active PFN ranges
 [0.00] 0: 0x0010 - 0x009b
 [0.00] 0: 0x0100 - 0x0007fffd
 [0.00] ACPI: PM-Timer IO Port: 0xb008
 [0.00] ACPI: LAPIC (acpi_id[0x00] lapic_id[0x00] enabled)
 [0.00] ACPI: IOAPIC (id[0x01] address[0xfec0] gsi_base[0])
 [0.00] IOAPIC[0]: apic_id 1, version 17, address 0xfec0, GSI 0-23
 [0.00] ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
 [0.00] ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
 [0.00] ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
 [0.00] ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
 [0.00] ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
 [0.00] Using ACPI (MADT) for SMP configuration information
 [0.00] ACPI: HPET id: 0x8086a201 base: 0xfed0
 [0.00] SMP: Allowing 1 CPUs, 0 hotplug CPUs
 [0.00] PM: Registered nosave memory: 0009b000 - 
 0009c000
 [0.00] PM: Registered nosave memory: 0009c000 - 
 000a
 [0.00] PM: Registered nosave memory: 000a - 
 000f
 [0.00] PM: Registered nosave 

Re: [Openstack] Cant ping private or floating IP

2013-02-15 Thread Guilherme Russi
Hello guys,

 I've got the same problem. I have enabled the SSH and ping policies, but
when I type sudo ifconfig -a inside my VM (through VNC), the only IP shown
is the lo IP.
 What am I missing?

Regards.

Guilherme.


2013/2/15 Jean-Baptiste RANSY jean-baptiste.ra...@alyseo.com

  Hello Chathura;

 It's normal that your compute node has no route to the tenant network.
 Quantum and Open vSwitch provide the Layer 2 link and, as I can see, the VM
 obtains an IP address.
 So we can assume that Quantum and Open vSwitch are set up correctly.

 Same question as JuanFra: have you enabled PING and SSH in 'Access and
 security policies'?

 Other things:

 Cloud-init (in the VM) is unable to retrieve metadata. Is nova-api-metadata
 running on your compute node?
 If yes, check your nova.conf.

 Regards,

 Jean-Baptiste RANSY



 On 02/14/2013 11:58 PM, Chathura M. Sarathchandra Magurawalage wrote:

  Hello,

  I followed the folsom basic install instructions in
 http://docs.openstack.org/folsom/basic-install/content/basic-install_operate.html

  But now I am not able to ping either the private or the floating ip of
 the instances.

  Can someone please help?

  Instance log:

  [0.00] Initializing cgroup subsys cpuset
 [0.00] Initializing cgroup subsys cpu
 [0.00] Linux version 3.2.0-37-virtual (buildd@allspice) (gcc version 
 4.6.3 (Ubuntu/Linaro 4.6.3-1ubuntu5) ) #58-Ubuntu SMP Thu Jan 24 15:48:03 UTC 
 2013 (Ubuntu 3.2.0-37.58-virtual 3.2.35)
 [0.00] Command line: BOOT_IMAGE=/boot/vmlinuz-3.2.0-37-virtual 
 root=LABEL=cloudimg-rootfs ro console=ttyS0
 [0.00] KERNEL supported cpus:
 [0.00]   Intel GenuineIntel
 [0.00]   AMD AuthenticAMD
 [0.00]   Centaur CentaurHauls
 [0.00] BIOS-provided physical RAM map:
 [0.00]  BIOS-e820:  - 0009bc00 (usable)
 [0.00]  BIOS-e820: 0009bc00 - 000a (reserved)
 [0.00]  BIOS-e820: 000f - 0010 (reserved)
 [0.00]  BIOS-e820: 0010 - 7fffd000 (usable)
 [0.00]  BIOS-e820: 7fffd000 - 8000 (reserved)
 [0.00]  BIOS-e820: feffc000 - ff00 (reserved)
 [0.00]  BIOS-e820: fffc - 0001 (reserved)
 [0.00] NX (Execute Disable) protection: active
 [0.00] DMI 2.4 present.
 [0.00] No AGP bridge found
 [0.00] last_pfn = 0x7fffd max_arch_pfn = 0x4
 [0.00] x86 PAT enabled: cpu 0, old 0x70406, new 0x7010600070106
 [0.00] found SMP MP-table at [880fdae0] fdae0
 [0.00] init_memory_mapping: -7fffd000
 [0.00] RAMDISK: 3776c000 - 37bae000
 [0.00] ACPI: RSDP 000fd980 00014 (v00 BOCHS )
 [0.00] ACPI: RSDT 7fffd7b0 00034 (v01 BOCHS  BXPCRSDT 
 0001 BXPC 0001)
 [0.00] ACPI: FACP 7f80 00074 (v01 BOCHS  BXPCFACP 
 0001 BXPC 0001)
 [0.00] ACPI: DSDT 7fffd9b0 02589 (v01   BXPC   BXDSDT 
 0001 INTL 20100528)
 [0.00] ACPI: FACS 7f40 00040
 [0.00] ACPI: SSDT 7fffd910 0009E (v01 BOCHS  BXPCSSDT 
 0001 BXPC 0001)
 [0.00] ACPI: APIC 7fffd830 00072 (v01 BOCHS  BXPCAPIC 
 0001 BXPC 0001)
 [0.00] ACPI: HPET 7fffd7f0 00038 (v01 BOCHS  BXPCHPET 
 0001 BXPC 0001)
 [0.00] No NUMA configuration found
 [0.00] Faking a node at -7fffd000
 [0.00] Initmem setup node 0 -7fffd000
 [0.00]   NODE_DATA [7fff8000 - 7fffcfff]
 [0.00] kvm-clock: Using msrs 4b564d01 and 4b564d00
 [0.00] kvm-clock: cpu 0, msr 0:1cf7681, boot clock
 [0.00] Zone PFN ranges:
 [0.00]   DMA  0x0010 - 0x1000
 [0.00]   DMA320x1000 - 0x0010
 [0.00]   Normal   empty
 [0.00] Movable zone start PFN for each node
 [0.00] early_node_map[2] active PFN ranges
 [0.00] 0: 0x0010 - 0x009b
 [0.00] 0: 0x0100 - 0x0007fffd
 [0.00] ACPI: PM-Timer IO Port: 0xb008
 [0.00] ACPI: LAPIC (acpi_id[0x00] lapic_id[0x00] enabled)
 [0.00] ACPI: IOAPIC (id[0x01] address[0xfec0] gsi_base[0])
 [0.00] IOAPIC[0]: apic_id 1, version 17, address 0xfec0, GSI 0-23
 [0.00] ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
 [0.00] ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
 [0.00] ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
 [0.00] ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
 [0.00] ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
 [0.00] Using ACPI (MADT) for SMP configuration information
 [0.00] ACPI: HPET id: 0x8086a201 base: 0xfed0
 [0.00] SMP: Allowing 1 CPUs, 0 hotplug 

Re: [Openstack] Cant ping private or floating IP

2013-02-15 Thread JuanFra Rodriguez Cardoso
Hi Guilherme:

Try to issue 'dhclient eth1' in your VM (from the VNC console). It could be
a problem with the net rules in udev.
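
Inside the VM (via the VNC console), that check could look like the following sketch; the interface names depend on the image:

```shell
ip link show                                  # which NICs does the guest see?
sudo dhclient -v eth1                         # or eth0, whichever exists
# udev may have pinned an old MAC to eth0 and renamed the new NIC to eth1:
cat /etc/udev/rules.d/70-persistent-net.rules
sudo rm /etc/udev/rules.d/70-persistent-net.rules   # then reboot the VM
```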


Regards,
JuanFra


2013/2/15 Guilherme Russi luisguilherme...@gmail.com

 Hello guys,

  I got the same problem, I have enabled the SSH and Ping policies, when I
 type sudo ifconfig -a inside my VM (Through VNC) the only IP showed is
 the lo IP.
  What am I missing?

 Regards.

 Guilherme.



 2013/2/15 Jean-Baptiste RANSY jean-baptiste.ra...@alyseo.com

  Hello Chathura;

 It's normal that your compute node has no route to the tenant network.
 Quantum and Open vSwitch provide the Layer 2 link and, as I can see, the VM
 obtains an IP address.
 So we can assume that Quantum and Open vSwitch are set up correctly.

 Same question as JuanFra: have you enabled PING and SSH in 'Access and
 security policies'?

 Other things:

 Cloud-init (in the VM) is unable to retrieve metadata. Is nova-api-metadata
 running on your compute node?
 If yes, check your nova.conf.

 Regards,

 Jean-Baptiste RANSY



 On 02/14/2013 11:58 PM, Chathura M. Sarathchandra Magurawalage wrote:

  Hello,

  I followed the folsom basic install instructions in
 http://docs.openstack.org/folsom/basic-install/content/basic-install_operate.html

  But now I am not able to ping either the private or the floating ip of
 the instances.

  Can someone please help?

  Instance log:

  [0.00] Initializing cgroup subsys cpuset
 [0.00] Initializing cgroup subsys cpu
 [0.00] Linux version 3.2.0-37-virtual (buildd@allspice) (gcc version 
 4.6.3 (Ubuntu/Linaro 4.6.3-1ubuntu5) ) #58-Ubuntu SMP Thu Jan 24 15:48:03 
 UTC 2013 (Ubuntu 3.2.0-37.58-virtual 3.2.35)
 [0.00] Command line: BOOT_IMAGE=/boot/vmlinuz-3.2.0-37-virtual 
 root=LABEL=cloudimg-rootfs ro console=ttyS0
 [0.00] KERNEL supported cpus:
 [0.00]   Intel GenuineIntel
 [0.00]   AMD AuthenticAMD
 [0.00]   Centaur CentaurHauls
 [0.00] BIOS-provided physical RAM map:
 [0.00]  BIOS-e820:  - 0009bc00 (usable)
 [0.00]  BIOS-e820: 0009bc00 - 000a (reserved)
 [0.00]  BIOS-e820: 000f - 0010 (reserved)
 [0.00]  BIOS-e820: 0010 - 7fffd000 (usable)
 [0.00]  BIOS-e820: 7fffd000 - 8000 (reserved)
 [0.00]  BIOS-e820: feffc000 - ff00 (reserved)
 [0.00]  BIOS-e820: fffc - 0001 (reserved)
 [0.00] NX (Execute Disable) protection: active
 [0.00] DMI 2.4 present.
 [0.00] No AGP bridge found
 [0.00] last_pfn = 0x7fffd max_arch_pfn = 0x4
 [0.00] x86 PAT enabled: cpu 0, old 0x70406, new 0x7010600070106
 [0.00] found SMP MP-table at [880fdae0] fdae0
 [0.00] init_memory_mapping: -7fffd000
 [0.00] RAMDISK: 3776c000 - 37bae000
 [0.00] ACPI: RSDP 000fd980 00014 (v00 BOCHS )
 [0.00] ACPI: RSDT 7fffd7b0 00034 (v01 BOCHS  BXPCRSDT 
 0001 BXPC 0001)
 [0.00] ACPI: FACP 7f80 00074 (v01 BOCHS  BXPCFACP 
 0001 BXPC 0001)
 [0.00] ACPI: DSDT 7fffd9b0 02589 (v01   BXPC   BXDSDT 
 0001 INTL 20100528)
 [0.00] ACPI: FACS 7f40 00040
 [0.00] ACPI: SSDT 7fffd910 0009E (v01 BOCHS  BXPCSSDT 
 0001 BXPC 0001)
 [0.00] ACPI: APIC 7fffd830 00072 (v01 BOCHS  BXPCAPIC 
 0001 BXPC 0001)
 [0.00] ACPI: HPET 7fffd7f0 00038 (v01 BOCHS  BXPCHPET 
 0001 BXPC 0001)
 [0.00] No NUMA configuration found
 [0.00] Faking a node at -7fffd000
 [0.00] Initmem setup node 0 -7fffd000
 [0.00]   NODE_DATA [7fff8000 - 7fffcfff]
 [0.00] kvm-clock: Using msrs 4b564d01 and 4b564d00
 [0.00] kvm-clock: cpu 0, msr 0:1cf7681, boot clock
 [0.00] Zone PFN ranges:
 [0.00]   DMA  0x0010 - 0x1000
 [0.00]   DMA320x1000 - 0x0010
 [0.00]   Normal   empty
 [0.00] Movable zone start PFN for each node
 [0.00] early_node_map[2] active PFN ranges
 [0.00] 0: 0x0010 - 0x009b
 [0.00] 0: 0x0100 - 0x0007fffd
 [0.00] ACPI: PM-Timer IO Port: 0xb008
 [0.00] ACPI: LAPIC (acpi_id[0x00] lapic_id[0x00] enabled)
 [0.00] ACPI: IOAPIC (id[0x01] address[0xfec0] gsi_base[0])
 [0.00] IOAPIC[0]: apic_id 1, version 17, address 0xfec0, GSI 0-23
 [0.00] ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
 [0.00] ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
 [0.00] ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
 [0.00] ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
 [0.00] ACPI: INT_SRC_OVR (bus 0 bus_irq 11 

Re: [Openstack] Cant ping private or floating IP

2013-02-15 Thread Sylvain Bauza


Le 15/02/2013 11:42, Chathura M. Sarathchandra Magurawalage a écrit :


How can I log into the VM from VNC? What are the credentials?



You have multiple ways to get VNC access. The easiest is through 
Horizon. Another is to look at the KVM command line for the desired 
instance (on the compute node) and check the VNC port in use (assuming 
KVM as the hypervisor).
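As an aside, a minimal sketch of mapping a libvirt VNC display to its TCP 
port (the instance name "instance-00000001" and the display ":1" are 
placeholders, assuming virsh is available on the compute node):

```shell
# Ask libvirt for the VNC display of a guest, e.g.:
#   virsh list                          # shows running instance names
#   virsh vncdisplay instance-00000001  # prints something like ":1"
# VNC display :N listens on TCP port 5900+N, so convert it:
display=":1"                  # substitute the real vncdisplay output
port=$((5900 + ${display#:}))
echo "$port"                  # -> 5901 for display :1
```

You can then point any VNC client at compute-node-ip:$port.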

This is basic knowledge of Nova.



nova-api-metadata is running fine in the compute node.



Make sure the metadata port is available (check with telnet or netstat); 
nova-api can be running without listening on the metadata port.




___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Cant ping private or floating IP

2013-02-15 Thread Anil Vishnoi
If you are using an Ubuntu cloud image, the only way to log in is via SSH
with the public key. For that you have to create an SSH key pair and
download the private key. You can create this key pair using Horizon or the CLI.
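For reference, a hedged sketch of that CLI flow (the key, image, flavor and
instance names are placeholders; it assumes a working nova client with
credentials sourced):

```shell
# Create a keypair; nova prints the private key, which we save locally.
nova keypair-add mykey > mykey.pem
chmod 600 mykey.pem

# Boot an instance with that keypair (image/flavor are placeholders).
nova boot --image <image-id> --flavor m1.tiny --key-name mykey myvm

# Once the VM has fetched the public key from the metadata service:
ssh -i mykey.pem ubuntu@<floating-ip>
```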


On Fri, Feb 15, 2013 at 4:27 PM, Sylvain Bauza
sylvain.ba...@digimind.com wrote:


 Le 15/02/2013 11:42, Chathura M. Sarathchandra Magurawalage a écrit :


 How can I log into the VM from VNC? What are the credentials?


 You have multiple ways to get VNC access. The easiest is through Horizon.
 Another is to look at the KVM command line for the desired instance (on
 the compute node) and check the VNC port in use (assuming KVM as the
 hypervisor).
 This is basic knowledge of Nova.



  nova-api-metadata is running fine in the compute node.


 Make sure the metadata port is available (check with telnet or netstat);
 nova-api can be running without listening on the metadata port.




 ___
 Mailing list: https://launchpad.net/~openstack
 Post to : openstack@lists.launchpad.net
 Unsubscribe : https://launchpad.net/~openstack
 More help   : https://help.launchpad.net/ListHelp




-- 
Thanks & Regards
--Anil Kumar Vishnoi
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Cant ping private or floating IP

2013-02-15 Thread Sylvain Bauza

The metadata API allows an instance to fetch its SSH credentials (the public
key, I mean) when booting.
If a VM is unable to reach the metadata service, it won't be able to get
its public key, so you won't be able to connect, unless you specifically
go through password authentication (provided password auth is enabled in
/etc/ssh/sshd_config, which is not the case with Ubuntu cloud images).
There is also a side effect: the boot process takes longer, as the instance
waits for the curl timeout (60 sec.) before finishing booting up.


Re: Quantum, the metadata API is actually DNAT'd from the network node to 
the nova-api node (here 172.16.0.1 as the internal management IP):

Chain quantum-l3-agent-PREROUTING (1 references)
target prot opt source     destination
DNAT   tcp  --  0.0.0.0/0  169.254.169.254  tcp dpt:80 
to:172.16.0.1:8775
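A quick way to check for that rule on the network node might be the
following sketch (run as root; the chain name assumes the quantum l3-agent
is managing NAT):

```shell
# List the NAT rules installed by the quantum l3-agent and look for the
# metadata redirect (169.254.169.254:80 -> nova-api metadata port 8775).
iptables -t nat -S quantum-l3-agent-PREROUTING | grep 169.254.169.254
```

If nothing matches, the l3-agent is the usual suspect, since it is the
component expected to install that redirect.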



Anyway, the first steps are to:
1. grab the console.log
2. access the desired instance through VNC

Troubleshooting will be easier once that done.

-Sylvain



Le 15/02/2013 14:24, Chathura M. Sarathchandra Magurawalage a écrit :

Hello Guys,

Not sure if this is the right port but these are the results:

*Compute node:*

root@computenode:~# netstat -an | grep 8775
tcp        0      0 0.0.0.0:8775            0.0.0.0:*               LISTEN


*Controller: *

root@controller:~# netstat -an | grep 8775
tcp        0      0 0.0.0.0:8775            0.0.0.0:*               LISTEN


*Additionally, I can't curl 169.254.169.254 from the compute node. I am 
not sure if this is related to not being able to ping the VM.*


curl -v http://169.254.169.254
* About to connect() to 169.254.169.254 port 80 (#0)
*   Trying 169.254.169.254...
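When repeating that probe from inside the guest (where it is actually
supposed to work), a short timeout avoids the 60-second hang mentioned
earlier in the thread; a sketch:

```shell
# Probe the metadata service with a 5 s cap instead of curl's long default.
# -s: quiet, -f: fail on HTTP errors, -m 5: overall timeout in seconds.
curl -sf -m 5 http://169.254.169.254/latest/meta-data/ \
  && echo "metadata reachable" \
  || echo "metadata unreachable"
```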

Thanks for your help


-
Chathura Madhusanka Sarathchandra Magurawalage.
1NW.2.1, Desk 2
School of Computer Science and Electronic Engineering
University Of Essex
United Kingdom.

Email: csar...@essex.ac.uk
       chathura.sarathchan...@gmail.com
       77.chath...@gmail.com


On 15 February 2013 11:03, Anil Vishnoi vishnoia...@gmail.com wrote:


If you are using ubuntu cloud image then the only way to log-in is
to do ssh with the public key. For that you have to create ssh key
pair and download the ssh key. You can create this ssh pair using
horizon/cli.


On Fri, Feb 15, 2013 at 4:27 PM, Sylvain Bauza
sylvain.ba...@digimind.com wrote:


Le 15/02/2013 11:42, Chathura M. Sarathchandra Magurawalage a
écrit :


How can I log into the VM from VNC? What are the credentials?


You have multiple ways to get VNC access. The easiest is
through Horizon. Another is to look at the KVM command line for
the desired instance (on the compute node) and check the VNC
port in use (assuming KVM as the hypervisor).
This is basic knowledge of Nova.



nova-api-metadata is running fine in the compute node.


Make sure the metadata port is available (check with telnet or
netstat); nova-api can be running without listening on the metadata
port.




___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp




-- 
Thanks & Regards

--Anil Kumar Vishnoi





___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Cant ping private or floating IP

2013-02-15 Thread Chathura M. Sarathchandra Magurawalage
Thanks for your reply.

First of all, I do not see the following rule in my iptables:

target prot opt source     destination
DNAT   tcp  --  0.0.0.0/0  169.254.169.254  tcp dpt:80 to:x.x.x.x:8775

Please find the console log at the beginning of the post.

Since I am using an Ubuntu cloud image I am not able to log in to it through
the VNC console. I can't even ping it. This is the main problem.

Any help will greatly appreciated.



On 15 February 2013 14:37, Sylvain Bauza sylvain.ba...@digimind.com wrote:

 The metadata API allows an instance to fetch its SSH credentials (the public
 key, I mean) when booting. If a VM is unable to reach the metadata service,
 it won't be able to get its public key, so you won't be able to connect,
 unless you specifically go through password authentication (provided
 password auth is enabled in /etc/ssh/sshd_config, which is not the case
 with Ubuntu cloud images). There is also a side effect: the boot process
 takes longer, as the instance waits for the curl timeout (60 sec.) before
 finishing booting up.

 Re: Quantum, the metadata API is actually DNAT'd from the network node to
 the nova-api node (here 172.16.0.1 as the internal management IP):
 Chain quantum-l3-agent-PREROUTING (1 references)

 target prot opt source     destination
 DNAT   tcp  --  0.0.0.0/0  169.254.169.254  tcp dpt:80
 to:172.16.0.1:8775


 Anyway, the first steps are to:
 1. grab the console.log
 2. access the desired instance through VNC

 Troubleshooting will be easier once that done.

 -Sylvain



 Le 15/02/2013 14:24, Chathura M. Sarathchandra Magurawalage a écrit :

 Hello Guys,

 Not sure if this is the right port but these are the results:

 *Compute node:*


 root@computenode:~# netstat -an | grep 8775
 tcp        0      0 0.0.0.0:8775            0.0.0.0:*               LISTEN

 *Controller: *


 root@controller:~# netstat -an | grep 8775
 tcp        0      0 0.0.0.0:8775            0.0.0.0:*               LISTEN

 *Additionally, I can't curl 169.254.169.254 from the compute node. I am not
 sure if this is related to not being able to ping the VM.*


 curl -v http://169.254.169.254
 * About to connect() to 169.254.169.254 port 80 (#0)
 *   Trying 169.254.169.254...

 Thanks for your help



  On 15 February 2013 11:03, Anil Vishnoi vishnoia...@gmail.com wrote:

 If you are using ubuntu cloud image then the only way to log-in is
 to do ssh with the public key. For that you have to create ssh key
 pair and download the ssh key. You can create this ssh pair using
 horizon/cli.


  On Fri, Feb 15, 2013 at 4:27 PM, Sylvain Bauza
  sylvain.ba...@digimind.com wrote:


 Le 15/02/2013 11:42, Chathura M. Sarathchandra Magurawalage a
 écrit :


 How can I log into the VM from VNC? What are the credentials?


  You have multiple ways to get VNC access. The easiest is
  through Horizon. Another is to look at the KVM command line for
  the desired instance (on the compute node) and check the VNC
  port in use (assuming KVM as the hypervisor).
  This is basic knowledge of Nova.



 nova-api-metadata is running fine in the compute node.


  Make sure the metadata port is available (check with telnet or
  netstat); nova-api can be running without listening on the metadata
  port.




  ___
  Mailing list: https://launchpad.net/~openstack
  Post to : openstack@lists.launchpad.net
  Unsubscribe : https://launchpad.net/~openstack
  More help   : https://help.launchpad.net/ListHelp




  --
  Thanks & Regards
  --Anil Kumar Vishnoi




___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Cant ping private or floating IP

2013-02-14 Thread Anil Vishnoi
Did your VM get an IP address? Can you paste the output of ifconfig from your
VM? Are you using nova-network or Quantum? If Quantum, which plugin are
you using?


On Fri, Feb 15, 2013 at 4:28 AM, Chathura M. Sarathchandra Magurawalage 
77.chath...@gmail.com wrote:

 Hello,

 I followed the folsom basic install instructions in
 http://docs.openstack.org/folsom/basic-install/content/basic-install_operate.html

 But now I am not able to ping either the private or the floating ip of the
 instances.

 Can someone please help?

 Instance log:

 [0.00] Initializing cgroup subsys cpuset
 [0.00] Initializing cgroup subsys cpu
 [0.00] Linux version 3.2.0-37-virtual (buildd@allspice) (gcc version 
 4.6.3 (Ubuntu/Linaro 4.6.3-1ubuntu5) ) #58-Ubuntu SMP Thu Jan 24 15:48:03 UTC 
 2013 (Ubuntu 3.2.0-37.58-virtual 3.2.35)
 [0.00] Command line: BOOT_IMAGE=/boot/vmlinuz-3.2.0-37-virtual 
 root=LABEL=cloudimg-rootfs ro console=ttyS0
 [0.00] KERNEL supported cpus:
 [0.00]   Intel GenuineIntel
 [0.00]   AMD AuthenticAMD
 [0.00]   Centaur CentaurHauls
 [0.00] BIOS-provided physical RAM map:
 [0.00]  BIOS-e820:  - 0009bc00 (usable)
 [0.00]  BIOS-e820: 0009bc00 - 000a (reserved)
 [0.00]  BIOS-e820: 000f - 0010 (reserved)
 [0.00]  BIOS-e820: 0010 - 7fffd000 (usable)
 [0.00]  BIOS-e820: 7fffd000 - 8000 (reserved)
 [0.00]  BIOS-e820: feffc000 - ff00 (reserved)
 [0.00]  BIOS-e820: fffc - 0001 (reserved)
 [0.00] NX (Execute Disable) protection: active
 [0.00] DMI 2.4 present.
 [0.00] No AGP bridge found
 [0.00] last_pfn = 0x7fffd max_arch_pfn = 0x4
 [0.00] x86 PAT enabled: cpu 0, old 0x70406, new 0x7010600070106
 [0.00] found SMP MP-table at [880fdae0] fdae0
 [0.00] init_memory_mapping: -7fffd000
 [0.00] RAMDISK: 3776c000 - 37bae000
 [0.00] ACPI: RSDP 000fd980 00014 (v00 BOCHS )
 [0.00] ACPI: RSDT 7fffd7b0 00034 (v01 BOCHS  BXPCRSDT 
 0001 BXPC 0001)
 [0.00] ACPI: FACP 7f80 00074 (v01 BOCHS  BXPCFACP 
 0001 BXPC 0001)
 [0.00] ACPI: DSDT 7fffd9b0 02589 (v01   BXPC   BXDSDT 
 0001 INTL 20100528)
 [0.00] ACPI: FACS 7f40 00040
 [0.00] ACPI: SSDT 7fffd910 0009E (v01 BOCHS  BXPCSSDT 
 0001 BXPC 0001)
 [0.00] ACPI: APIC 7fffd830 00072 (v01 BOCHS  BXPCAPIC 
 0001 BXPC 0001)
 [0.00] ACPI: HPET 7fffd7f0 00038 (v01 BOCHS  BXPCHPET 
 0001 BXPC 0001)
 [0.00] No NUMA configuration found
 [0.00] Faking a node at -7fffd000
 [0.00] Initmem setup node 0 -7fffd000
 [0.00]   NODE_DATA [7fff8000 - 7fffcfff]
 [0.00] kvm-clock: Using msrs 4b564d01 and 4b564d00
 [0.00] kvm-clock: cpu 0, msr 0:1cf7681, boot clock
 [0.00] Zone PFN ranges:
 [0.00]   DMA  0x0010 - 0x1000
 [0.00]   DMA320x1000 - 0x0010
 [0.00]   Normal   empty
 [0.00] Movable zone start PFN for each node
 [0.00] early_node_map[2] active PFN ranges
 [0.00] 0: 0x0010 - 0x009b
 [0.00] 0: 0x0100 - 0x0007fffd
 [0.00] ACPI: PM-Timer IO Port: 0xb008
 [0.00] ACPI: LAPIC (acpi_id[0x00] lapic_id[0x00] enabled)
 [0.00] ACPI: IOAPIC (id[0x01] address[0xfec0] gsi_base[0])
 [0.00] IOAPIC[0]: apic_id 1, version 17, address 0xfec0, GSI 0-23
 [0.00] ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
 [0.00] ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
 [0.00] ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
 [0.00] ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
 [0.00] ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
 [0.00] Using ACPI (MADT) for SMP configuration information
 [0.00] ACPI: HPET id: 0x8086a201 base: 0xfed0
 [0.00] SMP: Allowing 1 CPUs, 0 hotplug CPUs
 [0.00] PM: Registered nosave memory: 0009b000 - 
 0009c000
 [0.00] PM: Registered nosave memory: 0009c000 - 
 000a
 [0.00] PM: Registered nosave memory: 000a - 
 000f
 [0.00] PM: Registered nosave memory: 000f - 
 0010
 [0.00] Allocating PCI resources starting at 8000 (gap: 
 8000:7effc000)
 [0.00] Booting paravirtualized kernel on KVM
 [0.00] setup_percpu: NR_CPUS:64 nr_cpumask_bits:64 nr_cpu_ids:1 
 nr_node_ids:1
 [0.00] PERCPU: Embedded 28 pages/cpu