Re: [Openstack] ANNOUNCE: Ultimate OpenStack Grizzly Guide, with super easy Quantum!

2013-03-21 Thread Jean-Baptiste RANSY
Hello Thiago,

I think it's better to restrict sudoers to rootwrap:

nova ALL = (root) NOPASSWD: /usr/bin/nova-rootwrap /etc/nova/rootwrap.conf *
cinder ALL = (root) NOPASSWD: /usr/bin/cinder-rootwrap /etc/cinder/rootwrap.conf *
quantum ALL = (root) NOPASSWD: /usr/bin/quantum-rootwrap /etc/quantum/rootwrap.conf *

NOTE : with quantum (l3, dhcp, etc.) you can run into issues with
rootwrap, especially with namespaces (I don't know if this is still the
case).
To fix that, just add 'root_helper = sudo /usr/bin/quantum-rootwrap
/etc/quantum/rootwrap.conf' to the .ini file of each quantum service.
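
For reference, here is roughly what that looks like in each agent's .ini file (a sketch only; the exact file layout can differ between releases, so verify against your packaged sample files):

```ini
# /etc/quantum/dhcp_agent.ini (same idea for l3_agent.ini, etc.)
[DEFAULT]
# Run privileged commands through rootwrap instead of plain sudo
root_helper = sudo /usr/bin/quantum-rootwrap /etc/quantum/rootwrap.conf
```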

I don't know why root_helper isn't in each quantum service's sample
file if it must be configured... is that normal or not?
If adding root_helper to each .ini file should not be necessary, I
think I have identified the root cause: in the dhcp_agent, for example,
each occurrence of 'self.conf.root_helper' just needs to be replaced
with 'self.root_helper'.

If someone has the answer, let me know whether I should open a bug.

Regards,


jbr_


On 03/21/2013 01:19 AM, Martinx - ジェームズ wrote:
 1 problem fixed with:

 visudo

 ---
 quantum ALL=NOPASSWD: ALL
 cinder ALL=NOPASSWD: ALL
 nova ALL=NOPASSWD: ALL
 ---

 Guide updated...


 On 20 March 2013 19:51, Martinx - ジェームズ
 thiagocmarti...@gmail.com wrote:

 Hi!

  I'm working with Grizzly G3+RC1 on top of Ubuntu 12.04.2 and here
 is the guide I wrote:

  Ultimate OpenStack Grizzly Guide
 https://gist.github.com/tmartinx/d36536b7b62a48f859c2

  It covers:

  * Ubuntu 12.04.2
  * Basic Ubuntu setup
  * KVM
  * OpenvSwitch
  * Name Resolution for OpenStack components
  * LVM for Instances
  * Keystone
  * Glance
  * Quantum - Single Flat, Super Green!!
  * Nova
  * Cinder / tgt
  * Dashboard

  It is still a draft, but every time I deploy Ubuntu and Grizzly
 I follow this little guide...

  I would like some help to improve this guide... If I'm doing
 something wrong, tell me! Please!

  Probably I'm doing something wrong, I don't know yet, but I'm
 seeing some errors in the logs, already reported here on this
 list. For example: nova-novncproxy conflicts with novnc (no
 VNC console for now), and dhcp-agent.log / auth.log point to some
 problems with `sudo' or the `rootwrap' subsystem when dealing with
 metadata (so it isn't working)...

  But in general, it works great!!

 Best!
 Thiago




 ___
 Mailing list: https://launchpad.net/~openstack
 Post to : openstack@lists.launchpad.net
 Unsubscribe : https://launchpad.net/~openstack
 More help   : https://help.launchpad.net/ListHelp



Re: [Openstack] Create Sheepdog Volume by Openstack Dashboard error

2013-02-20 Thread Jean-Baptiste RANSY
Hi,

$ screen -r

then navigate to the c-vol window with Ctrl+A then N

To detach from screen : Ctrl+A then D

Regards,

Jean-Baptiste RANSY


Sent from my ASUS Pad

harryxiyou harryxi...@gmail.com wrote:

Hi all,

I have tested OpenStack with Sheepdog as follows:

1, Install Ubuntu 12.04 (Precise) or Fedora 16
2, Install Sheepdog to the appropriate location on your system
3, Start Sheepdog and format it (See Getting Started)
4, Download DevStack

$ git clone git://github.com/openstack-dev/devstack.git

5, Start the install

$ cd devstack; CINDER_DRIVER=sheepdog ./stack.sh

After the steps above, I create a volume via the OpenStack Dashboard,
but the volume *Status* is Error (see the attached picture for details).

How could I get the logs of creating a volume via the OpenStack
Dashboard?

I also want to create a volume with the Cinder command. Has anyone
tried this way?

Could anyone give me some suggestions? Thanks in advance.
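
Not the original poster's commands, but a sketch of how I'd do both from the CLI (assumes a sourced openrc and a default devstack screen session named "stack"; not runnable outside a live deployment):

```shell
# Create a 1 GB volume named test-vol via the Cinder CLI
cinder create --display-name test-vol 1
# Watch its status move from "creating" to "available" (or "error")
cinder list
# The c-vol log (where sheepdog driver errors usually land) is in the
# devstack screen session, window c-vol
screen -r stack -p c-vol
```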


-- 
Thanks
Harry Wei



Re: [Openstack] Cant ping private or floating IP

2013-02-17 Thread Jean-Baptiste RANSY
ping

Are you on IRC ?

JB


On 02/17/2013 04:07 AM, Jean-Baptiste RANSY wrote:
 Add Cirros Image to Glance :)

 Username: cirros
 Password: cubswin:)

 http://docs.openstack.org/trunk/openstack-compute/install/apt/content/uploading-to-glance.html

 to reach your VM, it's a bit dirty but you can:
 - put your computer on the same subnet as your controller (192.168.2.0/24)
 - then add a static route to the subnet of your VM (ip route add
 10.5.5.0/24 via 192.168.2.151)
 (192.168.2.151 is the quantum gateway)
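
 Spelled out as commands (run as root on your workstation; the addresses are the ones from this thread, so adjust to your own subnets; needs a live setup to actually run):

```shell
# Route the tenant subnet through the quantum gateway on the controller
ip route add 10.5.5.0/24 via 192.168.2.151
# Verify the route, then try the VM's private IP directly
ip route show | grep 10.5.5.0
ping -c 3 10.5.5.3
```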

 I'm going to sleep, we will continue tomorrow.

 JB

 PS : You also should get some sleep :)


 On 02/17/2013 03:53 AM, Chathura M. Sarathchandra Magurawalage wrote:
 Oh, that's weird.

 I still get this error. Couldn't this be because I cannot ping the VM
 in the first place? Because as far as I know metadata takes care of
 SSH keys. But what if you can't reach the VM in the first place?

 no instance data found in start-local

 ci-info: lo: 1 127.0.0.1   255.0.0.0   .

 ci-info: eth0  : 1 10.5.5.3255.255.255.0   fa:16:3e:a7:28:25

 ci-info: route-0: 0.0.0.0 10.5.5.10.0.0.0 eth0   UG

 ci-info: route-1: 10.5.5.00.0.0.0 255.255.255.0   eth0   U

 cloud-init start running: Sun, 17 Feb 2013 02:45:35 +. up 3.51 seconds

 2013-02-17 02:48:25,840 - util.py[WARNING]: 
 'http://169.254.169.254/2009-04-04/meta-data/instance-id' failed [50/120s]: 
 url error [timed out]

 2013-02-17 02:49:16,893 - util.py[WARNING]: 
 'http://169.254.169.254/2009-04-04/meta-data/instance-id' failed [101/120s]: 
 url error [timed out]

 2013-02-17 02:49:34,912 - util.py[WARNING]: 
 'http://169.254.169.254/2009-04-04/meta-data/instance-id' failed [119/120s]: 
 url error [timed out]

 2013-02-17 02:49:35,913 - DataSourceEc2.py[CRITICAL]: giving up on md after 
 120 seconds



 no instance data found in start

 Skipping profile in /etc/apparmor.d/disable: usr.sbin.rsyslogd

  * Starting AppArmor profiles   [80G 
 [74G[ OK ]


 On 17 February 2013 02:41, Jean-Baptiste RANSY
 jean-baptiste.ra...@alyseo.com wrote:

 For me, it's normal that you are not able to curl 169.254.169.254
 from your compute and controller nodes: same thing on my side,
 but my VMs get their metadata.

 Try to launch an instance.

 JB



 On 02/17/2013 03:35 AM, Chathura M. Sarathchandra Magurawalage wrote:
 root@computernode:~# curl -v http://169.254.169.254
 * About to connect() to 169.254.169.254 port 80 (#0)
 *   Trying 169.254.169.254...

 root@controller:~# curl -v http://169.254.169.254
 * About to connect() to 169.254.169.254 port 80 (#0)
 *   Trying 169.254.169.254...


 root@athena:~# iptables -L -n -v
 Chain INPUT (policy ACCEPT 59009 packets, 22M bytes)
  pkts bytes target                    prot opt in  out  source     destination
 59493   22M quantum-l3-agent-INPUT    all  --  *   *    0.0.0.0/0  0.0.0.0/0
 59493   22M nova-api-INPUT            all  --  *   *    0.0.0.0/0  0.0.0.0/0
   484 73533 ACCEPT                    47   --  *   *    0.0.0.0/0  0.0.0.0/0

 Chain FORWARD (policy ACCEPT 707 packets, 47819 bytes)
  pkts bytes target                    prot opt in  out  source     destination
   707 47819 quantum-filter-top        all  --  *   *    0.0.0.0/0  0.0.0.0/0
   707 47819 quantum-l3-agent-FORWARD  all  --  *   *    0.0.0.0/0  0.0.0.0/0
   707 47819 nova-filter-top           all  --  *   *    0.0.0.0/0  0.0.0.0/0
   707 47819 nova-api-FORWARD          all  --  *   *    0.0.0.0/0  0.0.0.0/0

 Chain OUTPUT (policy ACCEPT 56022 packets, 22M bytes)
  pkts bytes target                    prot opt in  out  source     destination
 56022   22M quantum-filter-top        all  --  *   *    0.0.0.0/0  0.0.0.0/0
 56022   22M quantum-l3-agent-OUTPUT   all  --  *   *    0.0.0.0/0  0.0.0.0/0
 56022   22M nova-filter-top           all  --  *   *    0.0.0.0/0  0.0.0.0/0
 56022   22M nova-api-OUTPUT           all  --  *   *    0.0.0.0/0  0.0.0.0/0

 Chain nova-api-FORWARD (1 references)
  pkts bytes target                    prot opt in  out  source     destination

Re: [Openstack] Cant ping private or floating IP

2013-02-16 Thread Jean-Baptiste RANSY
Hello Chathura,

Are you using Folsom with Network Namespaces ?

If yes, have a look here :
http://docs.openstack.org/folsom/openstack-network/admin/content/ch_limitations.html


Regards,

Jean-Baptiste RANSY


On 02/16/2013 05:01 PM, Chathura M. Sarathchandra Magurawalage wrote:
 Hello guys,

 The problem still exists. Any ideas?

 Thanks 

 On 15 February 2013 14:37, Sylvain Bauza sylvain.ba...@digimind.com
 wrote:

 The metadata API allows fetching SSH credentials when booting (the
 pubkey, I mean).
 If a VM is unable to reach the metadata service, then it won't be able
 to get its public key, so you won't be able to connect, unless you
 specifically go through password authentication (provided password
 auth is enabled in /etc/ssh/sshd_config, which is not the case
 with Ubuntu cloud archive).
 There is also a side effect: the boot process is longer, as the
 instance waits for the curl timeout (60 sec.) before finishing
 booting up.

 Re: Quantum, the metadata API is actually DNAT'd from the network node
 to the nova-api node (here 172.16.0.1 as internal management IP):

 Chain quantum-l3-agent-PREROUTING (1 references)
 target prot opt source     destination
 DNAT   tcp  --  0.0.0.0/0  169.254.169.254  tcp dpt:80 to:172.16.0.1:8775


 Anyway, the first step is to:
 1. grab the console.log
 2. access the desired instance through VNC

 Troubleshooting will be easier once that's done.

 -Sylvain
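
 As a concrete sketch of step 1 (the instance name is a placeholder, and this needs a running cloud):

```shell
# Dump the instance's console log and look for metadata / cloud-init errors
nova console-log my-instance | grep -iE 'metadata|cloud-init|warn'
```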



 On 15/02/2013 14:24, Chathura M. Sarathchandra Magurawalage wrote:

 Hello Guys,

 Not sure if this is the right port but these are the results:

 *Compute node:*


 root@computenode:~# netstat -an | grep 8775
 tcp        0      0 0.0.0.0:8775      0.0.0.0:*       LISTEN

 *Controller: *


 root@controller:~# netstat -an | grep 8775
 tcp        0      0 0.0.0.0:8775      0.0.0.0:*       LISTEN

 *Additionally, I can't curl 169.254.169.254 from the compute
 node. I am not sure if this is related to not being able to
 ping the VM.*


 curl -v http://169.254.169.254
 * About to connect() to 169.254.169.254 port 80 (#0)
 *   Trying 169.254.169.254...

 Thanks for your help


 
 -
 Chathura Madhusanka Sarathchandra Magurawalage.
 1NW.2.1, Desk 2
 School of Computer Science and Electronic Engineering
 University Of Essex
 United Kingdom.

 Email: csar...@essex.ac.uk
   chathura.sarathchan...@gmail.com
   77.chath...@gmail.com



 On 15 February 2013 11:03, Anil Vishnoi vishnoia...@gmail.com
 wrote:

 If you are using an Ubuntu cloud image, then the only way to
 log in is to ssh with the public key. For that you have to
 create an ssh key pair and download the ssh key. You can create
 this ssh pair using horizon/cli.


 On Fri, Feb 15, 2013 at 4:27 PM, Sylvain Bauza
 sylvain.ba...@digimind.com
 wrote:


 On 15/02/2013 11:42, Chathura M. Sarathchandra
 Magurawalage wrote:


 How can I log into the VM from VNC? What are the
 credentials?


 You have multiple ways to get VNC access. The easiest
 one is through Horizon. Another is to look at the KVM
 command line for the desired instance (on the compute
 node) and check the VNC port in use (assuming KVM as
 hypervisor).
 This is basic knowledge of Nova.



 nova-api-metadata is running fine in the compute node.


 Make sure the metadata port is available, using telnet or
 netstat; nova-api can be running without listening on the
 metadata port.





Re: [Openstack] Cant ping private or floating IP

2013-02-16 Thread Jean-Baptiste RANSY
Please provide the files listed below:

Controller Node :
/etc/nova/nova.conf
/etc/nova/api-paste.ini
/etc/quantum/l3_agent.ini
/etc/quantum/quantum.conf
/etc/quantum/dhcp_agent.ini
/etc/quantum/plugins/openvswitch/ovs_quantum_plugin.ini
/etc/quantum/api-paste.ini
/var/log/nova/*.log
/var/log/quantum/*.log

Compute Node :
/etc/nova/nova.conf
/etc/nova/nova-compute.conf
/etc/nova/api-paste.ini
/etc/quantum/quantum.conf
/etc/quantum/plugins/openvswitch/ovs_quantum_plugin.ini
/var/log/nova/*.log
/var/log/quantum/*.log

Plus, the complete output of the following commands:

Controller Node :
$ keystone endpoint-list
$ ip link show
$ ip route show
$ ip netns show
$ ovs-vsctl show

Compute Node :
$ ip link show
$ ip route show
$ ovs-vsctl show

Regards,

Jean-Baptiste RANSY


On 02/16/2013 05:32 PM, Chathura M. Sarathchandra Magurawalage wrote:
 Hello Jean,

 Thanks for your reply.

 I followed the instructions in
 http://docs.openstack.org/folsom/basic-install/content/basic-install_network.html.
 My controller and the network node are installed on the same
 physical node.

 I am using Folsom, but without network namespaces.

 But the page you provided states that if you run both
 L3 + DHCP services on the same node, you should enable namespaces
 to avoid conflicts with routes.

 Yet currently quantum-dhcp-agent and quantum-l3-agent are running
 on the same node.

 Additionally, the control node serves as a DHCP server for the local
 network (I don't know if that makes any difference).

 Any idea what the problem could be?
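
 For what it's worth, enabling namespaces is a one-line switch in each agent's config (as I read the Folsom docs; verify against your release's sample files):

```ini
# /etc/quantum/l3_agent.ini and /etc/quantum/dhcp_agent.ini
[DEFAULT]
# Give each router/DHCP server its own network namespace, so the
# L3 and DHCP agents can coexist on one node without route conflicts
use_namespaces = True
```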


 On 16 February 2013 16:21, Jean-Baptiste RANSY
 jean-baptiste.ra...@alyseo.com wrote:

 Hello Chathura,

 Are you using Folsom with Network Namespaces ?

 If yes, have a look here :
 
 http://docs.openstack.org/folsom/openstack-network/admin/content/ch_limitations.html


 Regards,

 Jean-Baptiste RANSY



 On 02/16/2013 05:01 PM, Chathura M. Sarathchandra Magurawalage wrote:
 Hello guys,

 The problem still exists. Any ideas?

 Thanks 

 On 15 February 2013 14:37, Sylvain Bauza
 sylvain.ba...@digimind.com
 wrote:

 The metadata API allows fetching SSH credentials when booting
 (the pubkey, I mean).
 If a VM is unable to reach the metadata service, then it won't be
 able to get its public key, so you won't be able to connect,
 unless you specifically go through password authentication
 (provided password auth is enabled in /etc/ssh/sshd_config,
 which is not the case with Ubuntu cloud archive).
 There is also a side effect: the boot process is longer, as
 the instance waits for the curl timeout (60 sec.) before
 finishing booting up.

 Re: Quantum, the metadata API is actually DNAT'd from the network
 node to the nova-api node (here 172.16.0.1 as internal
 management IP):

 Chain quantum-l3-agent-PREROUTING (1 references)
 target prot opt source     destination
 DNAT   tcp  --  0.0.0.0/0  169.254.169.254  tcp dpt:80 to:172.16.0.1:8775


 Anyway, the first step is to:
 1. grab the console.log
 2. access the desired instance through VNC

 Troubleshooting will be easier once that's done.

 -Sylvain



 On 15/02/2013 14:24, Chathura M. Sarathchandra Magurawalage
 wrote:

 Hello Guys,

 Not sure if this is the right port but these are the results:

 *Compute node:*


 root@computenode:~# netstat -an | grep 8775
 tcp        0      0 0.0.0.0:8775      0.0.0.0:*       LISTEN

 *Controller: *


 root@controller:~# netstat -an | grep 8775
 tcp        0      0 0.0.0.0:8775      0.0.0.0:*       LISTEN

 *Additionally, I can't curl 169.254.169.254 from the
 compute node. I am not sure if this is related to not
 being able to ping the VM.*


 curl -v http://169.254.169.254
 * About to connect() to 169.254.169.254 port 80 (#0)
 *   Trying 169.254.169.254...

 Thanks for your help


 
 -
 Chathura Madhusanka Sarathchandra Magurawalage.
 1NW.2.1, Desk 2
 School of Computer Science and Electronic Engineering
 University Of Essex
 United Kingdom.

 Email: csar...@essex.ac.uk
   chathura.sarathchan...@gmail.com
Re: [Openstack] Cant ping private or floating IP

2013-02-16 Thread Jean-Baptiste RANSY
Controller node :
# iptables -L -n -v
# iptables -L -n -v -t nat


On 02/17/2013 03:18 AM, Chathura M. Sarathchandra Magurawalage wrote:
 You should be able to curl 169.254.169.254 from the compute node,
 which I can't at the moment.

 I have got the bridge set up in the l3_agent.ini



Re: [Openstack] Cant ping private or floating IP

2013-02-16 Thread Jean-Baptiste RANSY
  pkts bytes target                        prot opt in  out  source       destination
  3180  213K quantum-l3-agent-OUTPUT       all  --  *   *    0.0.0.0/0    0.0.0.0/0
  3180  213K nova-api-OUTPUT               all  --  *   *    0.0.0.0/0    0.0.0.0/0

 Chain POSTROUTING (policy ACCEPT 0 packets, 0 bytes)
  pkts bytes target                        prot opt in  out  source       destination
  3726  247K quantum-l3-agent-POSTROUTING  all  --  *   *    0.0.0.0/0    0.0.0.0/0
     0     0 nova-api-POSTROUTING          all  --  *   *    0.0.0.0/0    0.0.0.0/0
     0     0 quantum-postrouting-bottom    all  --  *   *    0.0.0.0/0    0.0.0.0/0
     0     0 nova-postrouting-bottom       all  --  *   *    0.0.0.0/0    0.0.0.0/0

 Chain nova-api-OUTPUT (1 references)
  pkts bytes target                        prot opt in  out  source       destination

 Chain nova-api-POSTROUTING (1 references)
  pkts bytes target                        prot opt in  out  source       destination

 Chain nova-api-PREROUTING (1 references)
  pkts bytes target                        prot opt in  out  source       destination

 Chain nova-api-float-snat (1 references)
  pkts bytes target                        prot opt in  out  source       destination

 Chain nova-api-snat (1 references)
  pkts bytes target                        prot opt in  out  source       destination
     0     0 nova-api-float-snat           all  --  *   *    0.0.0.0/0    0.0.0.0/0

 Chain nova-postrouting-bottom (1 references)
  pkts bytes target                        prot opt in  out  source       destination
     0     0 nova-api-snat                 all  --  *   *    0.0.0.0/0    0.0.0.0/0

 Chain quantum-l3-agent-OUTPUT (1 references)
  pkts bytes target                        prot opt in  out  source       destination

 Chain quantum-l3-agent-POSTROUTING (1 references)
  pkts bytes target                        prot opt in  out  source       destination
  3726  247K ACCEPT  all  --  !qg-6f8374cb-cb !qg-6f8374cb-cb  0.0.0.0/0  0.0.0.0/0  ! ctstate DNAT
     0     0 ACCEPT  all  --  *  *  10.5.5.0/24  192.168.2.225

 Chain quantum-l3-agent-PREROUTING (1 references)
  pkts bytes target                        prot opt in  out  source       destination
     0     0 DNAT  tcp  --  *  *  0.0.0.0/0  169.254.169.254  tcp dpt:80 to:192.168.2.225:8775

 Chain quantum-l3-agent-float-snat (1 references)
  pkts bytes target                        prot opt in  out  source       destination

 Chain quantum-l3-agent-snat (1 references)
  pkts bytes target                        prot opt in  out  source       destination
     0     0 quantum-l3-agent-float-snat   all  --  *   *    0.0.0.0/0    0.0.0.0/0
     0     0 SNAT  all  --  *  *  10.5.5.0/24  0.0.0.0/0  to:192.168.2.151

 Chain quantum-postrouting-bottom (1 references)
  pkts bytes target                        prot opt in  out  source       destination
     0     0 quantum-l3-agent-snat         all  --  *   *    0.0.0.0/0    0.0.0.0/0

 thanks.


 On 17 February 2013 02:25, Jean-Baptiste RANSY
 jean-baptiste.ra...@alyseo.com wrote:

 Controller node :
 # iptables -L -n -v
 # iptables -L -n -v -t nat



 On 02/17/2013 03:18 AM, Chathura M. Sarathchandra Magurawalage wrote:
 You should be able to curl 169.254.169.254 from compute node,
 which I can't at the moment.

 I have got the bridge set up in the l3_agent.ini





Re: [Openstack] Cant ping private or floating IP

2013-02-16 Thread Jean-Baptiste RANSY
Add Cirros Image to Glance :)

Username: cirros
Password: cubswin:)

http://docs.openstack.org/trunk/openstack-compute/install/apt/content/uploading-to-glance.html

To reach your VM, it's a bit dirty but you can:
- put your computer on the same subnet as your controller (192.168.2.0/24)
- then add a static route to the subnet of your VM (ip route add
10.5.5.0/24 via 192.168.2.151)
(192.168.2.151 is the quantum gateway)

I'm going to sleep, we will continue tomorrow.

JB

PS : You also should get some sleep :)


On 02/17/2013 03:53 AM, Chathura M. Sarathchandra Magurawalage wrote:
 Oh, that's weird.

 I still get this error. Couldn't this be because I cannot ping the VM
 in the first place? Because as far as I know metadata takes care of
 SSH keys. But what if you can't reach the VM in the first place?

 no instance data found in start-local

 ci-info: lo: 1 127.0.0.1   255.0.0.0   .

 ci-info: eth0  : 1 10.5.5.3255.255.255.0   fa:16:3e:a7:28:25

 ci-info: route-0: 0.0.0.0 10.5.5.10.0.0.0 eth0   UG

 ci-info: route-1: 10.5.5.00.0.0.0 255.255.255.0   eth0   U

 cloud-init start running: Sun, 17 Feb 2013 02:45:35 +. up 3.51 seconds

 2013-02-17 02:48:25,840 - util.py[WARNING]: 
 'http://169.254.169.254/2009-04-04/meta-data/instance-id' failed [50/120s]: 
 url error [timed out]

 2013-02-17 02:49:16,893 - util.py[WARNING]: 
 'http://169.254.169.254/2009-04-04/meta-data/instance-id' failed [101/120s]: 
 url error [timed out]

 2013-02-17 02:49:34,912 - util.py[WARNING]: 
 'http://169.254.169.254/2009-04-04/meta-data/instance-id' failed [119/120s]: 
 url error [timed out]

 2013-02-17 02:49:35,913 - DataSourceEc2.py[CRITICAL]: giving up on md after 
 120 seconds



 no instance data found in start

 Skipping profile in /etc/apparmor.d/disable: usr.sbin.rsyslogd

  * Starting AppArmor profiles   [80G 
 [74G[ OK ]


 On 17 February 2013 02:41, Jean-Baptiste RANSY
 jean-baptiste.ra...@alyseo.com wrote:

 For me, it's normal that you are not able to curl 169.254.169.254
 from your compute and controller nodes: same thing on my side,
 but my VMs get their metadata.

 Try to launch an instance.

 JB



 On 02/17/2013 03:35 AM, Chathura M. Sarathchandra Magurawalage wrote:
 root@computernode:~# curl -v http://169.254.169.254
 * About to connect() to 169.254.169.254 port 80 (#0)
 *   Trying 169.254.169.254...

 root@controller:~# curl -v http://169.254.169.254
 * About to connect() to 169.254.169.254 port 80 (#0)
 *   Trying 169.254.169.254...


 root@athena:~# iptables -L -n -v
 Chain INPUT (policy ACCEPT 59009 packets, 22M bytes)
  pkts bytes target                    prot opt in  out  source     destination
 59493   22M quantum-l3-agent-INPUT    all  --  *   *    0.0.0.0/0  0.0.0.0/0
 59493   22M nova-api-INPUT            all  --  *   *    0.0.0.0/0  0.0.0.0/0
   484 73533 ACCEPT                    47   --  *   *    0.0.0.0/0  0.0.0.0/0

 Chain FORWARD (policy ACCEPT 707 packets, 47819 bytes)
  pkts bytes target                    prot opt in  out  source     destination
   707 47819 quantum-filter-top        all  --  *   *    0.0.0.0/0  0.0.0.0/0
   707 47819 quantum-l3-agent-FORWARD  all  --  *   *    0.0.0.0/0  0.0.0.0/0
   707 47819 nova-filter-top           all  --  *   *    0.0.0.0/0  0.0.0.0/0
   707 47819 nova-api-FORWARD          all  --  *   *    0.0.0.0/0  0.0.0.0/0

 Chain OUTPUT (policy ACCEPT 56022 packets, 22M bytes)
  pkts bytes target                    prot opt in  out  source     destination
 56022   22M quantum-filter-top        all  --  *   *    0.0.0.0/0  0.0.0.0/0
 56022   22M quantum-l3-agent-OUTPUT   all  --  *   *    0.0.0.0/0  0.0.0.0/0
 56022   22M nova-filter-top           all  --  *   *    0.0.0.0/0  0.0.0.0/0
 56022   22M nova-api-OUTPUT           all  --  *   *    0.0.0.0/0  0.0.0.0/0

 Chain nova-api-FORWARD (1 references)
  pkts bytes target                    prot opt in  out  source     destination

 Chain nova-api-INPUT (1 references)
  pkts bytes target                    prot opt in  out  source     destination

Re: [Openstack] Cant ping private or floating IP

2013-02-15 Thread Jean-Baptiste RANSY
Hello Chathura,

It's normal that your compute node has no route to the tenant network.
Quantum and openvswitch provide the Layer 2 link and, as I can see, the
VM obtains an IP address.
So we can assume that quantum and openvswitch are set up correctly.

Same question as JuanFra: have you enabled PING and SSH in 'Access and
security policies'?

Other things:

Cloud-init (in the VM) is unable to retrieve metadata; is
nova-api-metadata running on your Compute Node?
If yes, check your nova.conf.
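
The nova.conf bits I would double-check for the metadata path are roughly these (the IP is a placeholder for your controller's management address; treat the fragment as an assumption, not a verified Folsom template):

```ini
# /etc/nova/nova.conf (fragment)
# Make sure the metadata API is among the enabled APIs
enabled_apis = ec2,osapi_compute,metadata
# Where the 169.254.169.254 DNAT rule should land
metadata_host = 192.168.2.1
metadata_port = 8775
```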

Regards,

Jean-Baptiste RANSY


On 02/14/2013 11:58 PM, Chathura M. Sarathchandra Magurawalage wrote:
 Hello,

 I followed the folsom basic install instructions
 in 
 http://docs.openstack.org/folsom/basic-install/content/basic-install_operate.html

 But now I am not able to ping either the private or the floating ip of
 the instances.

 Can someone please help?

 Instance log:

 [0.00] Initializing cgroup subsys cpuset
 [0.00] Initializing cgroup subsys cpu
 [0.00] Linux version 3.2.0-37-virtual (buildd@allspice) (gcc version 
 4.6.3 (Ubuntu/Linaro 4.6.3-1ubuntu5) ) #58-Ubuntu SMP Thu Jan 24 15:48:03 UTC 
 2013 (Ubuntu 3.2.0-37.58-virtual 3.2.35)
 [0.00] Command line: BOOT_IMAGE=/boot/vmlinuz-3.2.0-37-virtual 
 root=LABEL=cloudimg-rootfs ro console=ttyS0
 [0.00] KERNEL supported cpus:
 [0.00]   Intel GenuineIntel
 [0.00]   AMD AuthenticAMD
 [0.00]   Centaur CentaurHauls
 [0.00] BIOS-provided physical RAM map:
 [0.00]  BIOS-e820:  - 0009bc00 (usable)
 [0.00]  BIOS-e820: 0009bc00 - 000a (reserved)
 [0.00]  BIOS-e820: 000f - 0010 (reserved)
 [0.00]  BIOS-e820: 0010 - 7fffd000 (usable)
 [0.00]  BIOS-e820: 7fffd000 - 8000 (reserved)
 [0.00]  BIOS-e820: feffc000 - ff00 (reserved)
 [0.00]  BIOS-e820: fffc - 0001 (reserved)
 [0.00] NX (Execute Disable) protection: active
 [0.00] DMI 2.4 present.
 [0.00] No AGP bridge found
 [0.00] last_pfn = 0x7fffd max_arch_pfn = 0x4
 [0.00] x86 PAT enabled: cpu 0, old 0x70406, new 0x7010600070106
 [0.00] found SMP MP-table at [880fdae0] fdae0
 [0.00] init_memory_mapping: -7fffd000
 [0.00] RAMDISK: 3776c000 - 37bae000
 [0.00] ACPI: RSDP 000fd980 00014 (v00 BOCHS )
 [0.00] ACPI: RSDT 7fffd7b0 00034 (v01 BOCHS  BXPCRSDT 
 0001 BXPC 0001)
 [0.00] ACPI: FACP 7f80 00074 (v01 BOCHS  BXPCFACP 
 0001 BXPC 0001)
 [0.00] ACPI: DSDT 7fffd9b0 02589 (v01   BXPC   BXDSDT 
 0001 INTL 20100528)
 [0.00] ACPI: FACS 7f40 00040
 [0.00] ACPI: SSDT 7fffd910 0009E (v01 BOCHS  BXPCSSDT 
 0001 BXPC 0001)
 [0.00] ACPI: APIC 7fffd830 00072 (v01 BOCHS  BXPCAPIC 
 0001 BXPC 0001)
 [0.00] ACPI: HPET 7fffd7f0 00038 (v01 BOCHS  BXPCHPET 
 0001 BXPC 0001)
 [0.00] No NUMA configuration found
 [0.00] Faking a node at -7fffd000
 [0.00] Initmem setup node 0 -7fffd000
 [0.00]   NODE_DATA [7fff8000 - 7fffcfff]
 [0.00] kvm-clock: Using msrs 4b564d01 and 4b564d00
 [0.00] kvm-clock: cpu 0, msr 0:1cf7681, boot clock
 [0.00] Zone PFN ranges:
 [0.00]   DMA  0x0010 - 0x1000
 [0.00]   DMA320x1000 - 0x0010
 [0.00]   Normal   empty
 [0.00] Movable zone start PFN for each node
 [0.00] early_node_map[2] active PFN ranges
 [0.00] 0: 0x0010 - 0x009b
 [0.00] 0: 0x0100 - 0x0007fffd
 [0.00] ACPI: PM-Timer IO Port: 0xb008
 [0.00] ACPI: LAPIC (acpi_id[0x00] lapic_id[0x00] enabled)
 [0.00] ACPI: IOAPIC (id[0x01] address[0xfec0] gsi_base[0])
 [0.00] IOAPIC[0]: apic_id 1, version 17, address 0xfec0, GSI 0-23
 [0.00] ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
 [0.00] ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
 [0.00] ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
 [0.00] ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
 [0.00] ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
 [0.00] Using ACPI (MADT) for SMP configuration information
 [0.00] ACPI: HPET id: 0x8086a201 base: 0xfed0
 [0.00] SMP: Allowing 1 CPUs, 0 hotplug CPUs
 [0.00] PM: Registered nosave memory: 0009b000 - 
 0009c000
 [0.00] PM: Registered nosave memory: 0009c000 - 
 000a
 [0.00] PM: Registered nosave memory: 000a - 
 000f
 [0.00] PM: Registered nosave

Re: [Openstack] Install devstack in Ubuntu 10.04

2013-02-14 Thread Jean-Baptiste RANSY
Hi Harry,

I use Ubuntu 12.04 LTS, it works well with devstack.

Jean-Baptiste RANSY


On 02/14/2013 01:55 PM, harryxiyou wrote:
 Hi all,

 Has anyone installed devstack on Ubuntu 10.04? The details of
 my distro follow.

 $ lsb_release -a
 No LSB modules are available.
 Distributor ID:   Ubuntu
 Description:  Ubuntu 10.04.4 LTS
 Release:  10.04
 Codename: lucid

 Has anyone ever configured devstack for this Ubuntu distro?
 Do I have to install devstack on Ubuntu 12.04 (Precise) or Fedora 16?
 Could anyone give me some suggestions? Thanks in advance ;-)






Re: [Openstack] n-api installation problem with devstack (on Ubuntu )

2013-02-13 Thread Jean-Baptiste RANSY
Hi Swapnil,

Your problem is: Address already in use (socket already in use).

I think you have another process listening on the same port as
nova-api.

Try to find the PID of this process (netstat -tanpe).
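
To illustrate the technique with something self-contained (the port and helper process are stand-ins, not nova itself; `ss -tlnp` gives the same PID information as `netstat -tanpe` on systems without net-tools):

```shell
# Occupy a throwaway port with a background process (stand-in for the
# conflicting service; nova-api itself defaults to 8774)
python3 -m http.server 18774 >/dev/null 2>&1 &
HOLDER=$!
sleep 1
# List listening TCP sockets with owning PIDs and extract the holder
LINE=$(ss -tlnp 2>/dev/null | grep ':18774 ')
PID=$(echo "$LINE" | grep -o 'pid=[0-9]*' | head -n1 | cut -d= -f2)
echo "port 18774 is held by PID $PID"
kill "$HOLDER"
```

Once you have the PID, `ps -fp $PID` tells you what to stop before re-running stack.sh.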


If it's not a fresh install, did you run unstack.sh before stack.sh ?

Regards,

Jean-Baptiste RANSY


On 02/13/2013 11:36 AM, swapnil khanapurkar wrote:
 Hi All,

 I posted the bug below on Launchpad, but I didn't get any response
 from the team; maybe it's not as active as the openstack mailing list.

 I am facing an issue detailed here [
 https://bugs.launchpad.net/devstack/+bug/1122764 ].


 Thanks
 Swapnil
