Hi,

Problem is solved: there was a wrong IP address in the keystone.conf file.
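In case it helps anyone hitting the same symptom: a quick way to spot a stale IP in a config file is to list every URL-style value and the host it points at. This is only an illustrative sketch; the sample option and address below are made up, not taken from my actual keystone.conf.

```shell
# Illustrative only: extract the host portion of every URL-style option
# in a config file. The sample value is hypothetical.
cfg=$(mktemp)
printf 'connection = mysql://keystone:pw@10.128.10.99/keystone\n' > "$cfg"

# Pull out scheme://[user:pass@]host[:port] tokens, then strip the
# credentials and the scheme so only the host/IP remains.
grep -oE '[a-z+]+://[^/ ]+' "$cfg" | sed -E 's#.*@##; s#.*://##'
# prints: 10.128.10.99
```

Comparing that output against the controller's real address makes a copy-paste mistake easy to catch.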
Regards,
J.P.

-----Original Message-----
From: [email protected] [mailto:[email protected]]
Sent: Friday, 27 May 2016 14:00
To: [email protected]
Subject: Openstack Digest, Vol 35, Issue 25

Send Openstack mailing list submissions to
	[email protected]

To subscribe or unsubscribe via the World Wide Web, visit
	http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
or, via email, send a message with subject or body 'help' to
	[email protected]

You can reach the person managing the list at
	[email protected]

When replying, please edit your Subject line so it is more specific than "Re: Contents of Openstack digest..."

Today's Topics:

   1. Re: ceilometer missing meters?? (Florian Rommel)
   2. Re: How to expand /boot size in Fuel deployment? (Eddie Yen)
   3. Openstack console Failed to connect to server (code: 1006) (Jean-Pierre Ribeauville)
   4. [openstack] [manila] manila-share reporting HDFS share as NOT HEALTHY (Jeff Markley)
   5. Unable to log in to the VM instance's console using openstack-mitaka release (Chinmaya Dwibedy)

----------------------------------------------------------------------

Message: 1
Date: Thu, 26 May 2016 15:14:54 +0300
From: Florian Rommel <[email protected]>
To: [email protected]
Subject: Re: [Openstack] ceilometer missing meters??
Message-ID: <[email protected]>
Content-Type: text/plain; charset=us-ascii

Hi,

I came to the same conclusion. Once I installed the polling package and restarted the compute agent, everything worked smoothly: all meters now show up and things are as they should. Thanks everyone for helping out.

//florian

------------------------------

Message: 2
Date: Thu, 26 May 2016 20:17:32 +0800
From: Eddie Yen <[email protected]>
To: Vladimir Kozhukalov <[email protected]>
Cc: [email protected]
Subject: Re: [Openstack] How to expand /boot size in Fuel deployment?
Message-ID: <cahzfsbpv9mzxzpqr075oonjuxkkctapge+eg3sfgfu5x5sk...@mail.gmail.com>
Content-Type: text/plain; charset="utf-8"

Hi Vladimir,

Thanks for the reply and the answer! I'll try it!
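For anyone following along: the substitution at the heart of Vladimir's procedure can be rehearsed on a scratch copy before touching the nailgun container. Note that `sed` needs `-i` to edit the file in place; without it the change only goes to stdout. The one-line file below is a stand-in for the real manager.py, so this is a sketch, not the actual Fuel step.

```shell
# Rehearsal of the calc_boot_size patch on a throwaway file; the real
# procedure runs inside 'dockerctl shell nailgun' against
# /usr/lib/python2.7/site-packages/nailgun/extensions/volume_manager/manager.py.
f=$(mktemp)
echo "'calc_boot_size': lambda: 200," > "$f"

# -i is required for an in-place edit; ';' is used as the s-command
# delimiter because the pattern itself contains ':' and ','.
sed -i "s;'calc_boot_size': lambda: 200,;'calc_boot_size': lambda: 300,;" "$f"

cat "$f"
# prints: 'calc_boot_size': lambda: 300,
```

Once the substitution looks right here, the same `sed -i` line can be applied to the real file, followed by deleting the stale `.pyc` files and restarting nailgun as described in the quoted procedure.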
2016-05-26 16:43 GMT+08:00 Vladimir Kozhukalov <[email protected]>:

> Eddie,
>
> Currently the boot size is hard coded [1] (for most cases 200M is enough),
> but it is definitely possible to change it. The following procedure
> could help:
>
> dockerctl shell nailgun
> cd /usr/lib/python2.7/site-packages/nailgun
> sed -i "s;'calc_boot_size': lambda: 200,;'calc_boot_size': lambda: 300,;" extensions/volume_manager/manager.py
> find -name "*.pyc" -delete
> supervisorctl restart nailgun
>
> Then add another node to the cluster; it should have a 300M /boot partition.
>
> [1] https://github.com/openstack/fuel-web/blob/stable/7.0/nailgun/nailgun/extensions/volume_manager/manager.py#L810
>
> Vladimir Kozhukalov
>
> On Wed, May 25, 2016 at 1:03 PM, Eddie Yen <[email protected]> wrote:
>
>> Hi everyone, this is my first time asking a question on this mailing list.
>> Apologies in advance that I'm not good at English; I'll try to state the
>> question clearly. Feel free to ask about any detail that I haven't made clear.
>>
>> I'm using Fuel 7.0 to deploy my OpenStack environment (Kilo, of course).
>>
>> Now I'm trying to add a node to my existing environment.
>> The problem is that this node will be used to test kernels after the
>> deployment, but the default /boot size is too small for me (only 200MB).
>>
>> I searched the Fuel documents and found that it can be set in the Fuel
>> provisioning settings (by editing the YAML file), but I'm not sure how
>> to edit it, because this node has 2 hard drives (without RAID),
>> and I can see two /boot size options in the YAML file.
>>
>> Does anyone know how to expand the kernel partition?
>>
>> Many thanks,
>> Eddie.
>>
>> _______________________________________________
>> Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>> Post to : [email protected]
>> Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>>
>

-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack/attachments/20160526/153bae78/attachment-0001.html>

------------------------------

Message: 3
Date: Thu, 26 May 2016 16:08:47 +0000
From: Jean-Pierre Ribeauville <[email protected]>
To: "[email protected]" <[email protected]>
Subject: [Openstack] Openstack console Failed to connect to server (code: 1006)
Message-ID: <[email protected]>
Content-Type: text/plain; charset="us-ascii"

Hi,

When trying to open the console I got the following error message:

    Openstack console Failed to connect to server (code: 1006)

As found in /var/log/nova/nova-novncproxy.log:

2016-05-26 17:42:51.282 6544 INFO oslo.messaging._drivers.impl_rabbit [req-362aa2c2-3860-494b-b065-e33272841d2a - - - - -] Connecting to AMQP server on 10.128.10.0:5672
2016-05-26 17:42:51.369 6544 INFO oslo.messaging._drivers.impl_rabbit [req-362aa2c2-3860-494b-b065-e33272841d2a - - - - -] Connected to AMQP server on 10.128.10.0:5672
2016-05-26 17:42:51.376 6544 INFO oslo.messaging._drivers.impl_rabbit [req-362aa2c2-3860-494b-b065-e33272841d2a - - - - -] Connecting to AMQP server on 10.128.10.0:5672
2016-05-26 17:42:51.389 6544 INFO oslo.messaging._drivers.impl_rabbit [req-362aa2c2-3860-494b-b065-e33272841d2a - - - - -] Connected to AMQP server on 10.128.10.0:5672
2016-05-26 17:42:51.469 6544 INFO nova.console.websocketproxy [req-362aa2c2-3860-494b-b065-e33272841d2a - - - - -] 9: connect info: {u'instance_uuid': u'a3000ec4-cb44-476c-aa42-68fd4da8b46d', u'internal_access_path': None, u'last_activity_at': 1464277370.736584, u'console_type': u'novnc', u'host': u'dhcp-10-128-10-0.wks.ptx.axway.int', u'token': u'4573583a-cd03-4c82-92e2-fb8d4b05d9b8', u'access_url': u'http://10.128.10.0:6080/vnc_auto.html?token=4573583a-cd03-4c82-92e2-fb8d4b05d9b8', u'port': u'5901'}
2016-05-26 17:42:51.470 6544 INFO nova.console.websocketproxy [req-362aa2c2-3860-494b-b065-e33272841d2a - - - - -] 9: connecting to: dhcp-10-128-10-0.wks.ptx.axway.int:5901
2016-05-26 17:42:51.480 6544 INFO nova.console.websocketproxy [req-362aa2c2-3860-494b-b065-e33272841d2a - - - - -] handler exception: [Errno -2] Name or service not known

It seems that there is a misunderstanding somewhere between port 6080 and port 5901. Am I right? I didn't find where the 5901 port is defined. Or did I miss something?

Thanks for the help.

Regards,
Jean-Pierre RIBEAUVILLE
+33 1 4717 2049

-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack/attachments/20160526/90074189/attachment-0001.html>
-------------- next part --------------
A non-text attachment was scrubbed...
Name: image001.png
Type: image/png
Size: 11720 bytes
Desc: image001.png
URL: <http://lists.openstack.org/pipermail/openstack/attachments/20160526/90074189/attachment-0001.png>

------------------------------

Message: 4
Date: Thu, 26 May 2016 13:49:23 -0600
From: Jeff Markley <[email protected]>
To: [email protected]
Subject: [Openstack] [openstack] [manila] manila-share reporting HDFS share as NOT HEALTHY
Message-ID: <cak7219a7kyzc2rhq4zg+dqnmpnxwk4bvv0mt99o7s-rraqe...@mail.gmail.com>
Content-Type: text/plain; charset=UTF-8

Would anyone be able to tell me why manila-share is reporting an HDFS share as not healthy when 'hdfs fsck /' reports HEALTHY? This is a 2-node Mitaka install.
Log entry below:

2016-05-25 22:54:04.913 21330 DEBUG oslo_concurrency.processutils [req-949a467e-245d-4421-b980-ab8dd1e2ebce - - - - -] Running cmd (SSH): hdfs fsck / ssh_execute /usr/lib/python2.7/dist-packages/oslo_concurrency/processutils.py:458
2016-05-25 22:54:04.914 21330 DEBUG paramiko.transport [req-949a467e-245d-4421-b980-ab8dd1e2ebce - - - - -] [chan 0] Max packet in: 32768 bytes _log /usr/lib/python2.7/dist-packages/paramiko/channel.py:1072
2016-05-25 22:54:05.341 21330 DEBUG paramiko.transport [-] [chan 0] Max packet out: 32768 bytes _log /usr/lib/python2.7/dist-packages/paramiko/channel.py:1072
2016-05-25 22:54:05.342 21330 DEBUG paramiko.transport [-] Secsh channel 0 opened. _log /usr/lib/python2.7/dist-packages/paramiko/transport.py:1545
2016-05-25 22:54:05.347 21330 DEBUG paramiko.transport [-] [chan 0] Sesch channel 0 request ok _log /usr/lib/python2.7/dist-packages/paramiko/channel.py:1072
2016-05-25 22:54:05.349 21330 DEBUG paramiko.transport [-] [chan 0] EOF received (0) _log /usr/lib/python2.7/dist-packages/paramiko/channel.py:1072
2016-05-25 22:54:05.349 21330 DEBUG paramiko.transport [-] [chan 0] EOF sent (0) _log /usr/lib/python2.7/dist-packages/paramiko/channel.py:1072
2016-05-25 22:54:05.350 21330 DEBUG oslo_concurrency.processutils [req-949a467e-245d-4421-b980-ab8dd1e2ebce - - - - -] Result was 127 ssh_execute /usr/lib/python2.7/dist-packages/oslo_concurrency/processutils.py:489
2016-05-25 22:54:05.351 21330 ERROR manila.share.drivers.hdfs.hdfs_native [req-949a467e-245d-4421-b980-ab8dd1e2ebce - - - - -] HDFS is not in healthy state.
2016-05-25 22:54:05.352 21330 ERROR manila.share.manager [req-949a467e-245d-4421-b980-ab8dd1e2ebce - - - - -] Error encountered during initialization of driver 'HDFSNativeShareDriver' on 'controller@hdfs' host. HDFS is not in healthy state.
2016-05-25 22:54:05.352 21330 ERROR manila.share.manager Traceback (most recent call last):
2016-05-25 22:54:05.352 21330 ERROR manila.share.manager   File "/usr/lib/python2.7/dist-packages/manila/share/manager.py", line 249, in init_host
2016-05-25 22:54:05.352 21330 ERROR manila.share.manager     self.driver.check_for_setup_error()
2016-05-25 22:54:05.352 21330 ERROR manila.share.manager   File "/usr/lib/python2.7/dist-packages/manila/share/drivers/hdfs/hdfs_native.py", line 398, in check_for_setup_error
2016-05-25 22:54:05.352 21330 ERROR manila.share.manager     raise exception.HDFSException(msg)
2016-05-25 22:54:05.352 21330 ERROR manila.share.manager HDFSException: HDFS is not in healthy state.

manila.conf:

[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = manila
password = *****

[DEFAULT]
rpc_backend = rabbit
#manila_service_keypair_name = manila-service
enabled_share_backends = hdfs,generic
replica_state_update_interval = 300
#lvm_share_volume_group = lvm-shares
wsgi_keep_alive = False
enabled_share_protocols = HDFS
default_share_type = hdfs_share
state_path = /var/lock/manila
osapi_share_extension = manila.api.contrib.standard_extensions
rootwrap_config = /etc/manila/rootwrap.conf
api_paste_config = /etc/manila/api-paste.ini
share_name_template = share-%s
scheduler_driver = manila.scheduler.drivers.filter.FilterScheduler
debug = True
verbose = True
auth_strategy = keystone

[DATABASE]
max_pool_size = 40
connection = mysql+pymysql://manila:*****@controller/manila

[oslo_concurrency]
#lock_path = /var/lock/manila

[neutron]
memcached_servers = localhost:11211
#signing_dir = /var/lib/manila
#cafile = /etc/keystone/ssl/certs/ca.pem
auth_uri = http://controller:5000
#project_domain_name = Default
#project_name = service
#user_domain_name = Default
password = *****
username = neutron
auth_url = http://controller:35357
auth_type = password

[nova]
memcached_servers = localhost:11211
#signing_dir = /var/lib/manila
#cafile = /etc/keystone/ssl/certs/ca.pem
auth_uri = http://controller:5000
#project_domain_name = Default
#project_name = service
#user_domain_name = Default
password = *****
username = nova
auth_url = http://controller:35357
auth_type = password

[cinder]
memcached_servers = localhost:11211
#signing_dir = /var/lib/manila
#cafile = /etc/keystone/ssl/certs/ca.pem
auth_uri = http://controller:5000
#project_domain_name = Default
#project_name = service
#user_domain_name = Default
password = *****
username = cinder
auth_url = http://controller:35357
auth_type = password

[generic]
share_backend_name = GENERIC
share_driver = manila.share.drivers.generic.GenericShareDriver
driver_handles_share_servers = True
service_instance_user = manila
service_image_name = manila-service-image
password = *****
username = manila
#path_to_private_key = /root/.ssh/id_rsa
#path_to_public_key = /root.ssh/id_rsa.pub

[oslo_messaging_rabbit]
rabbit_userid = openstack
rabbit_password = *****
rabbit_hosts = controller

[hdfs]
share_backend_name = HDFS
share_driver = manila.share.drivers.hdfs.hdfs_native.HDFSNativeShareDriver
hdfs_namenode_port = 9000
hdfs_namenode_ip = controller
driver_handles_share_servers = False
hdfs_ssh_name = hadoop
hdfs_ssh_port = 22
#hdfs_ssh_private_key = /home/hadoop/.ssh/id_rsa
hdfs_ssh_pw = *****

------------------------------

Message: 5
Date: Fri, 27 May 2016 11:55:32 +0530
From: Chinmaya Dwibedy <[email protected]>
To: [email protected]
Subject: [Openstack] Unable to log in to the VM instance's console using openstack-mitaka release
Message-ID: <camhs8rgwoykcrx4+qaf+fdjjmtvhet4txxulfnv6ygs2ka2...@mail.gmail.com>
Content-Type: text/plain; charset="utf-8"

Hi All,

I have installed OpenStack (the openstack-mitaka release) on CentOS 7.2 and used a Fedora 20 qcow2 cloud image to create a VM via the Dashboard.

1) Installed libguestfs on the Nova compute node.
2) Updated these lines in /etc/nova/nova.conf:

inject_password=true
inject_key=true
inject_partition=-1

3) Restarted nova-compute:

# service openstack-nova-compute restart

4) Enabled setting the root password in /usr/share/openstack-dashboard/openstack_dashboard/local/local_settings.py:

OPENSTACK_HYPERVISOR_FEATURES = {
    ...
    'can_set_password': True,
}

5) Placed the code below in the "Customization Script" section of the Launch Instance dialog box in OpenStack:

#cloud-config
ssh_pwauth: True
chpasswd:
  list: |
    root: root
  expire: False
runcmd:
  - [ sh, -c, echo "=========hello world'=========" ]

It appears that when the instance was launched, cloud-init did not change the password for the root user, and I was not able to log in to the instance's console (Dashboard) using username (root) and password (root); it says "Log in incorrect". Upon checking the boot log I found that cloud-init executed /var/lib/cloud/instance/scripts/runcmd and printed "hello world".

Can anyone please let me know where I went wrong? Thanks in advance for your support and time.

Regards,
Chinmaya

-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack/attachments/20160527/4e28e69f/attachment-0001.html>

------------------------------

_______________________________________________
Openstack mailing list
[email protected]
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

End of Openstack Digest, Vol 35, Issue 25
*****************************************

_______________________________________________
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : [email protected]
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
