On 01/15/2015 06:22 AM, Geo Varghese wrote:
Hi Jay/Abel,

Thanks for your help.

Just fixed the issue by changing the following line in nova.conf:

cinder_catalog_info=volumev2:cinderv2:publicURL

to

cinder_catalog_info=volume:cinder:publicURL

Now the attachment completes successfully.

Do you guys know how this fixed the issue?

Cool, good to hear you fixed your issue.

The cause of your issue was that your Keystone services table has "regionOne" for the cinderv2 service instead of "RegionOne", and the catalog was being generated for both the volume and volumev2 endpoints with "RegionOne".
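
For the curious, the lookup that was failing is easy to sketch: the cinder client filters the service catalog by the service type, service name, and endpoint type from nova's cinder_catalog_info, plus the region from os_region_name, using an exact case-sensitive string comparison. The snippet below is a simplified illustration of that filtering, not the real cinderclient code (the real url_for() lives in cinderclient/service_catalog.py):

```python
# Simplified sketch of endpoint resolution against a Keystone service
# catalog. Illustrative only; not the actual cinderclient implementation.
def url_for(catalog, service_type, service_name, endpoint_type, region):
    for service in catalog:
        if service["type"] != service_type or service["name"] != service_name:
            continue
        for ep in service["endpoints"]:
            # The region comparison is an exact, case-sensitive string
            # match, so "regionOne" does NOT match "RegionOne".
            if region and ep.get("region") != region:
                continue
            return ep[endpoint_type]
    raise LookupError("EndpointNotFound")

# Toy catalog mirroring the situation in this thread: the v2 volume
# endpoint registered under "regionOne", the v1 one under "RegionOne".
catalog = [{
    "type": "volumev2", "name": "cinderv2",
    "endpoints": [{"region": "regionOne",
                   "publicURL": "http://controller:8776/v2/tenant"}],
}, {
    "type": "volume", "name": "cinder",
    "endpoints": [{"region": "RegionOne",
                   "publicURL": "http://controller:8776/v1/tenant"}],
}]

# volumev2 lookup fails: its endpoint region is "regionOne"
try:
    url_for(catalog, "volumev2", "cinderv2", "publicURL", "RegionOne")
except LookupError as e:
    print(e)  # EndpointNotFound

# volume lookup succeeds: the region strings match exactly
print(url_for(catalog, "volume", "cinder", "publicURL", "RegionOne"))
```

With mismatched capitalization, no catalog entry survives the filter, which surfaces as the EndpointNotFound traceback quoted earlier in this thread.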

Best,
-jay

On Thu, Jan 15, 2015 at 12:01 PM, Geo Varghese <[email protected]
<mailto:[email protected]>> wrote:

    Hi Abel,

    Oh okay. Yes, sure, the compute node can access the controller. I
    have added it in /etc/hosts.

    The current error is that it couldn't find an endpoint. Is this
    related to anything you mentioned above?

    On Thu, Jan 15, 2015 at 11:56 AM, Abel Lopez <[email protected]
    <mailto:[email protected]>> wrote:

        I know it's "Available", but that doesn't imply it can be
        attached. Cinder uses iSCSI or NFS to attach the volume to a
        running instance on a compute node. If you're missing the
        required protocol packages, the attachment will fail. You can
        have "Available" volumes and still lack tgtadm (or nfs-utils if
        that's your protocol).

        Secondly, is your compute node able to resolve "controller"?
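
        Both checks can be scripted. Here is a small sketch; the
        "controller" hostname and the tgtadm/iscsiadm binary names are
        taken from this thread, so adjust the list to whichever
        protocol you actually use:

```python
# Quick pre-flight check for volume attachment on a compute node:
# 1) is the iSCSI tooling installed, 2) does "controller" resolve?
import shutil
import socket

def missing_tools(binaries):
    """Return the subset of binaries not found on $PATH."""
    return [b for b in binaries if shutil.which(b) is None]

def resolves(hostname):
    """True if the hostname resolves (via /etc/hosts or DNS)."""
    try:
        socket.gethostbyname(hostname)
        return True
    except socket.gaierror:
        return False

if __name__ == "__main__":
    print("missing tools:", missing_tools(["tgtadm", "iscsiadm"]))
    print("controller resolves:", resolves("controller"))
```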


        On Wednesday, January 14, 2015, Geo Varghese
        <[email protected] <mailto:[email protected]>> wrote:

            Hi Abel,

            Thanks for the reply.

            I have created a volume and it's in the available state.
            Please check the attached screenshot.



            On Thu, Jan 15, 2015 at 11:34 AM, Abel Lopez
            <[email protected]> wrote:

                Do your compute nodes have the required iSCSI packages
                installed?


                On Wednesday, January 14, 2015, Geo Varghese
                <[email protected]> wrote:

                    Hi Jay,

                    Thanks for the reply. Just pasting the details below:

                    keystone catalog
                    ================================================
                    Service: compute
                    +-------------+------------------------------------------------------------+
                    |   Property  |                           Value                            |
                    +-------------+------------------------------------------------------------+
                    |   adminURL  | http://controller:8774/v2/e600ba9727924a3b97ede34aea8279c1 |
                    |      id     |              02028b1f4c0849c68eb79f5887516299              |
                    | internalURL | http://controller:8774/v2/e600ba9727924a3b97ede34aea8279c1 |
                    |  publicURL  | http://controller:8774/v2/e600ba9727924a3b97ede34aea8279c1 |
                    |    region   |                         RegionOne                          |
                    +-------------+------------------------------------------------------------+
                    Service: network
                    +-------------+----------------------------------+
                    |   Property  |              Value               |
                    +-------------+----------------------------------+
                    |   adminURL  |      http://controller:9696      |
                    |      id     | 32f687d4f7474769852d88932288b893 |
                    | internalURL |      http://controller:9696      |
                    |  publicURL  |      http://controller:9696      |
                    |    region   |            RegionOne             |
                    +-------------+----------------------------------+
                    Service: volumev2
                    +-------------+------------------------------------------------------------+
                    |   Property  |                           Value                            |
                    +-------------+------------------------------------------------------------+
                    |   adminURL  | http://controller:8776/v2/e600ba9727924a3b97ede34aea8279c1 |
                    |      id     |              5bca493cdde2439887d54fb805c4d2d4              |
                    | internalURL | http://controller:8776/v2/e600ba9727924a3b97ede34aea8279c1 |
                    |  publicURL  | http://controller:8776/v2/e600ba9727924a3b97ede34aea8279c1 |
                    |    region   |                         RegionOne                          |
                    +-------------+------------------------------------------------------------+
                    Service: image
                    +-------------+----------------------------------+
                    |   Property  |              Value               |
                    +-------------+----------------------------------+
                    |   adminURL  |      http://controller:9292      |
                    |      id     | 2e2294b9151e4fb9b6efccf33c62181b |
                    | internalURL |      http://controller:9292      |
                    |  publicURL  |      http://controller:9292      |
                    |    region   |            RegionOne             |
                    +-------------+----------------------------------+
                    Service: volume
                    +-------------+------------------------------------------------------------+
                    |   Property  |                           Value                            |
                    +-------------+------------------------------------------------------------+
                    |   adminURL  | http://controller:8776/v1/e600ba9727924a3b97ede34aea8279c1 |
                    |      id     |              0e29cfaa785e4e148c57601b182a5e26              |
                    | internalURL | http://controller:8776/v1/e600ba9727924a3b97ede34aea8279c1 |
                    |  publicURL  | http://controller:8776/v1/e600ba9727924a3b97ede34aea8279c1 |
                    |    region   |                         RegionOne                          |
                    +-------------+------------------------------------------------------------+
                    Service: ec2
                    +-------------+---------------------------------------+
                    |   Property  |                 Value                 |
                    +-------------+---------------------------------------+
                    |   adminURL  | http://controller:8773/services/Admin |
                    |      id     |    8f4957d98cd04130b055b8b80b051833   |
                    | internalURL | http://controller:8773/services/Cloud |
                    |  publicURL  | http://controller:8773/services/Cloud |
                    |    region   |               RegionOne               |
                    +-------------+---------------------------------------+
                    Service: identity
                    +-------------+----------------------------------+
                    |   Property  |              Value               |
                    +-------------+----------------------------------+
                    |   adminURL  |   http://controller:35357/v2.0   |
                    |      id     | 146f7bbb0ad54740b95f8499f04b2ee2 |
                    | internalURL |   http://controller:5000/v2.0    |
                    |  publicURL  |   http://controller:5000/v2.0    |
                    |    region   |            RegionOne             |
                    +-------------+----------------------------------+
                    ==============================================

                    Nova.conf

                    ================================================
                    # This file autogenerated by Chef
                    # Do not edit, changes will be overwritten


                    [DEFAULT]

                    # LOGS/STATE
                    debug=False
                    verbose=False
                    auth_strategy=keystone
                    dhcpbridge_flagfile=/etc/nova/nova.conf
                    dhcpbridge=/usr/bin/nova-dhcpbridge
                    log_dir=/var/log/nova
                    state_path=/var/lib/nova
                    instances_path=/var/lib/nova/instances
                    instance_name_template=instance-%08x
                    network_allocate_retries=0
                    lock_path=/var/lib/nova/lock

                    ssl_only=false
                    cert=self.pem
                    key=

                    # Command prefix to use for running commands as root
                    # (default: sudo)
                    rootwrap_config=/etc/nova/rootwrap.conf

                    # Should unused base images be removed? (default: false)
                    remove_unused_base_images=true

                    # Unused unresized base images younger than this will not
                    # be removed (default: 86400)
                    remove_unused_original_minimum_age_seconds=3600

                    # Options defined in nova.openstack.common.rpc
                    rpc_thread_pool_size=64
                    rpc_conn_pool_size=30
                    rpc_response_timeout=60
                    rpc_backend=nova.openstack.common.rpc.impl_kombu
                    amqp_durable_queues=false
                    amqp_auto_delete=false

                    ##### RABBITMQ #####
                    rabbit_userid=guest
                    rabbit_password=guest
                    rabbit_virtual_host=/
                    rabbit_hosts=rabbit1:5672,rabbit2:5672
                    rabbit_retry_interval=1
                    rabbit_retry_backoff=2
                    rabbit_max_retries=0
                    rabbit_durable_queues=false
                    rabbit_ha_queues=True



                    ##### SCHEDULER #####
                    scheduler_manager=nova.scheduler.manager.SchedulerManager
                    
                    scheduler_driver=nova.scheduler.filter_scheduler.FilterScheduler
                    scheduler_available_filters=nova.scheduler.filters.all_filters
                    # which filter class names to use for filtering hosts when
                    # not specified in the request.
                    scheduler_default_filters=AvailabilityZoneFilter,RamFilter,ComputeFilter,CoreFilter,SameHostFilter,DifferentHostFilter
                    default_availability_zone=nova
                    default_schedule_zone=nova

                    ##### NETWORK #####



                    # N.B. due to https://bugs.launchpad.net/nova/+bug/1206330
                    # we override the endpoint scheme below, ignore the port
                    # and essentially force http
                    neutron_url=http://controller:9696
                    neutron_api_insecure=false
                    network_api_class=nova.network.neutronv2.api.API
                    neutron_auth_strategy=keystone
                    neutron_admin_tenant_name=service
                    neutron_admin_username=neutron
                    neutron_admin_password=openstack-network
                    neutron_admin_auth_url=http://controller:5000/v2.0
                    neutron_url_timeout=30
                    neutron_region_name=
                    neutron_ovs_bridge=br-int
                    neutron_extension_sync_interval=600
                    neutron_ca_certificates_file=
                    
                    linuxnet_interface_driver=nova.network.linux_net.LinuxOVSInterfaceDriver
                    firewall_driver = nova.virt.firewall.NoopFirewallDriver
                    security_group_api=neutron
                    service_neutron_metadata_proxy=true
                    neutron_metadata_proxy_shared_secret=secret123
                    default_floating_pool=public
                    dns_server=8.8.8.8

                    use_ipv6=false

                    ##### GLANCE #####
                    image_service=nova.image.glance.GlanceImageService
                    glance_api_servers=http://controller:9292
                    glance_api_insecure=false

                    ##### Cinder #####
                    # Location of ca certificates file to use for cinder
                    # client requests
                    cinder_ca_certificates_file=

                    # Allow to perform insecure SSL requests to cinder
                    cinder_api_insecure=false

                    # Info to match when looking for cinder in the service
                    # catalog
                    cinder_catalog_info=volumev2:cinderv2:publicURL

                    ##### COMPUTE #####
                    compute_driver=libvirt.LibvirtDriver
                    preallocate_images=none
                    use_cow_images=true
                    vif_plugging_is_fatal=false
                    vif_plugging_timeout=0
                    compute_manager=nova.compute.manager.ComputeManager
                    
                    sql_connection=mysql://nova:nova@loadbalancer:3306/nova?charset=utf8
                    connection_type=libvirt

                    ##### NOTIFICATIONS #####
                    # Driver or drivers to handle sending notifications
                    # (multi valued)

                    # AMQP topic used for OpenStack notifications. (list value)
                    # Deprecated group/name - [rpc_notifier2]/topics
                    notification_topics=notifications

                    # Generate periodic compute.instance.exists notifications
                    instance_usage_audit=False

                    # Time period to generate instance usages for.  Time period
                    # must be hour, day, month or year (string value)
                    instance_usage_audit_period=month


                    # The IP address on which the OpenStack API will listen.
                    # (string value)
                    osapi_compute_listen=0.0.0.0
                    # The port on which the OpenStack API will listen.
                    # (integer value)
                    osapi_compute_listen_port=8774

                    # The IP address on which the metadata will listen.
                    # (string value)
                    metadata_listen=0.0.0.0
                    # The port on which the metadata will listen.
                    # (integer value)
                    metadata_listen_port=8775

                    ##### VNCPROXY #####
                    novncproxy_base_url=http://controller:6080/vnc_auto.html
                    xvpvncproxy_base_url=http://controller:6081/console

                    # This is only required on the server running xvpvncproxy
                    xvpvncproxy_host=0.0.0.0
                    xvpvncproxy_port=6081

                    # This is only required on the server running novncproxy
                    novncproxy_host=0.0.0.0
                    novncproxy_port=6080

                    vncserver_listen=0.0.0.0
                    vncserver_proxyclient_address=0.0.0.0

                    vnc_keymap=en-us

                    # store consoleauth tokens in memcached

                    ##### MISC #####
                    # force backing images to raw format
                    force_raw_images=false
                    allow_same_net_traffic=true
                    osapi_max_limit=1000
                    # If you terminate SSL with a load balancer, the HTTP_HOST
                    # environ variable that generates the request_uri in
                    # webob.Request will lack the HTTPS scheme. Setting this
                    # overrides the default and allows URIs returned in the
                    # various links collections to contain the proper HTTPS
                    # endpoint.
                    osapi_compute_link_prefix = http://controller:8774/v2/%(tenant_id)s
                    start_guests_on_host_boot=false
                    resume_guests_state_on_host_boot=true
                    allow_resize_to_same_host=false
                    resize_confirm_window=0
                    live_migration_retry_count=30

                    ##### QUOTAS #####
                    # (StrOpt) default driver to use for quota checks
                    # (default: nova.quota.DbQuotaDriver)
                    quota_driver=nova.quota.DbQuotaDriver
                    # number of security groups per project (default: 10)
                    quota_security_groups=50
                    # number of security rules per security group (default: 20)
                    quota_security_group_rules=20
                    # number of instance cores allowed per project (default: 20)
                    quota_cores=20
                    # number of fixed ips allowed per project (this should be
                    # at least the number of instances allowed) (default: -1)
                    quota_fixed_ips=-1
                    # number of floating ips allowed per project (default: 10)
                    quota_floating_ips=10
                    # number of bytes allowed per injected file (default: 10240)
                    quota_injected_file_content_bytes=10240
                    # number of bytes allowed per injected file path
                    # (default: 255)
                    quota_injected_file_path_length=255
                    # number of injected files allowed (default: 5)
                    quota_injected_files=5
                    # number of instances allowed per project (default: 10)
                    quota_instances=10
                    # number of key pairs per user (default: 100)
                    quota_key_pairs=100
                    # number of metadata items allowed per instance
                    # (default: 128)
                    quota_metadata_items=128
                    # megabytes of instance ram allowed per project
                    # (default: 51200)
                    quota_ram=51200

                    # virtual CPU to Physical CPU allocation ratio
                    # (default: 16.0)
                    cpu_allocation_ratio=16.0
                    # virtual ram to physical ram allocation ratio
                    # (default: 1.5)
                    ram_allocation_ratio=1.5

                    mkisofs_cmd=genisoimage
                    
                    injected_network_template=$pybasedir/nova/virt/interfaces.template
                    flat_injected=false

                    # The IP address on which the EC2 API will listen.
                    # (string value)
                    ec2_listen=0.0.0.0
                    # The port on which the EC2 API will listen.
                    # (integer value)
                    ec2_listen_port=8773


                    ##### WORKERS ######

                    ##### KEYSTONE #####
                    keystone_ec2_url=http://controller:5000/v2.0/ec2tokens

                    # a list of APIs to enable by default (list value)
                    enabled_apis=ec2,osapi_compute,metadata

                    ##### WORKERS ######

                    ##### MONITORS ######
                    # Monitor classes available to the compute which may be
                    # specified more than once. (multi valued)
                    
                    compute_available_monitors=nova.compute.monitors.all_monitors

                    # A list of monitors that can be used for getting compute
                    # metrics. (list value)
                    compute_monitors=

                    ##### VOLUMES #####
                    # iscsi target user-land tool to use
                    iscsi_helper=tgtadm
                    volume_api_class=nova.volume.cinder.API
                    # Region name of this node (string value)
                    os_region_name=RegionOne

                    # Override the default dnsmasq settings with this file
                    # (String value)
                    dnsmasq_config_file=

                    ##### THIRD PARTY ADDITIONS #####


                    [ssl]

                    # CA certificate file to use to verify connecting clients
                    ca_file=

                    # Certificate file to use when starting the server securely
                    cert_file=

                    # Private key file to use when starting the server securely
                    key_file=

                    [conductor]

                    use_local=False


                    [libvirt]

                    #
                    # Options defined in nova.virt.libvirt.driver
                    #

                    # Rescue ami image (string value)
                    #rescue_image_id=<None>

                    # Rescue aki image (string value)
                    #rescue_kernel_id=<None>

                    # Rescue ari image (string value)
                    #rescue_ramdisk_id=<None>

                    # Libvirt domain type (valid options are: kvm, lxc, qemu,
                    # uml, xen) (string value)
                    # Deprecated group/name - [DEFAULT]/libvirt_type
                    virt_type=kvm

                    # Override the default libvirt URI (which is dependent on
                    # virt_type) (string value)
                    # Deprecated group/name - [DEFAULT]/libvirt_uri
                    #connection_uri=

                    # Inject the admin password at boot time, without an agent.
                    # (boolean value)
                    # Deprecated group/name - [DEFAULT]/libvirt_inject_password
                    inject_password=false

                    # Inject the ssh public key at boot time (boolean value)
                    # Deprecated group/name - [DEFAULT]/libvirt_inject_key
                    inject_key=true

                    # The partition to inject to : -2 => disable, -1 => inspect
                    # (libguestfs only), 0 => not partitioned, >0 => partition
                    # number (integer value)
                    # Deprecated group/name - [DEFAULT]/libvirt_inject_partition
                    inject_partition=-2

                    # Sync virtual and real mouse cursors in Windows VMs
                    # (boolean value)
                    #use_usb_tablet=true

                    # Migration target URI (any included "%s" is replaced with
                    # the migration target hostname) (string value)
                    live_migration_uri=qemu+tcp://%s/system

                    # Migration flags to be set for live migration
                    # (string value)
                    live_migration_flag=VIR_MIGRATE_UNDEFINE_SOURCE,VIR_MIGRATE_PEER2PEER

                    # Migration flags to be set for block migration
                    # (string value)
                    block_migration_flag=VIR_MIGRATE_UNDEFINE_SOURCE,VIR_MIGRATE_PEER2PEER,VIR_MIGRATE_NON_SHARED_INC

                    # Maximum bandwidth to be used during migration, in Mbps
                    # (integer value)
                    live_migration_bandwidth=0

                    # Snapshot image format (valid options are : raw, qcow2,
                    # vmdk, vdi). Defaults to same as source image
                    # (string value)
                    snapshot_image_format=qcow2

                    # The libvirt VIF driver to configure the VIFs.
                    # (string value)
                    # Deprecated group/name - [DEFAULT]/libvirt_vif_driver
                    vif_driver=nova.virt.libvirt.vif.LibvirtGenericVIFDriver

                    # Libvirt handlers for remote volumes. (list value)
                    # Deprecated group/name - [DEFAULT]/libvirt_volume_drivers
                    #volume_drivers=iscsi=nova.virt.libvirt.volume.LibvirtISCSIVolumeDriver,iser=nova.virt.libvirt.volume.LibvirtISERVolumeDriver,local=nova.virt.libvirt.volume.LibvirtVolumeDriver,fake=nova.virt.libvirt.volume.LibvirtFakeVolumeDriver,rbd=nova.virt.libvirt.volume.LibvirtNetVolumeDriver,sheepdog=nova.virt.libvirt.volume.LibvirtNetVolumeDriver,nfs=nova.virt.libvirt.volume.LibvirtNFSVolumeDriver,aoe=nova.virt.libvirt.volume.LibvirtAOEVolumeDriver,glusterfs=nova.virt.libvirt.volume.LibvirtGlusterfsVolumeDriver,fibre_channel=nova.virt.libvirt.volume.LibvirtFibreChannelVolumeDriver,scality=nova.virt.libvirt.volume.LibvirtScalityVolumeDriver

                    # Override the default disk prefix for the devices attached
                    # to a server, which is dependent on virt_type. (valid
                    # options are: sd, xvd, uvd, vd) (string value)
                    # Deprecated group/name - [DEFAULT]/libvirt_disk_prefix
                    #disk_prefix=<None>

                    # Number of seconds to wait for instance to shut down after
                    # soft reboot request is made. We fall back to hard reboot
                    # if instance does not shutdown within this window.
                    # (integer value)
                    # Deprecated group/name - [DEFAULT]/libvirt_wait_soft_reboot_seconds
                    #wait_soft_reboot_seconds=120

                    # Set to "host-model" to clone the host CPU feature flags;
                    # to "host-passthrough" to use the host CPU model exactly;
                    # to "custom" to use a named CPU model; to "none" to not
                    # set any CPU model. If virt_type="kvm|qemu", it will
                    # default to "host-model", otherwise it will default to
                    # "none" (string value)
                    # Deprecated group/name - [DEFAULT]/libvirt_cpu_mode

                    # Set to a named libvirt CPU model (see names listed in
                    # /usr/share/libvirt/cpu_map.xml). Only has effect if
                    # cpu_mode="custom" and virt_type="kvm|qemu" (string value)
                    # Deprecated group/name - [DEFAULT]/libvirt_cpu_model
                    #cpu_model=<none>

                    # Location where libvirt driver will store snapshots before
                    # uploading them to image service (string value)
                    # Deprecated group/name - [DEFAULT]/libvirt_snapshots_directory
                    #snapshots_directory=$instances_path/snapshots

                    # Location where the Xen hvmloader is kept (string value)
                    #xen_hvmloader_path=/usr/lib/xen/boot/hvmloader

                    # Specific cachemodes to use for different disk types e.g:
                    # file=directsync,block=none (list value)

                    # A path to a device that will be used as source of entropy
                    # on the host. Permitted options are: /dev/random or
                    # /dev/hwrng (string value)

                    #
                    # Options defined in nova.virt.libvirt.imagecache
                    #

                    # Unused resized base images younger than this will not be
                    # removed (default: 3600)
                    remove_unused_resized_minimum_age_seconds=3600

                    # Write a checksum for files in _base to disk
                    # (default: false)
                    checksum_base_images=false
                    checksum_base_images=false

                    #
                    # Options defined in nova.virt.libvirt.vif
                    #

                    use_virtio_for_bridges=true

                    #
                    # Options defined in nova.virt.libvirt.imagebackend
                    #

                    # VM Images format. Acceptable values are: raw, qcow2, lvm,
                    # rbd, default. If default is specified, then use_cow_images
                    # flag is used instead of this one.
                    images_type=default


                    [keystone_authtoken]
                    auth_uri = http://controller:5000/v2.0
                    auth_host = controller
                    auth_port = 35357
                    auth_protocol = http
                    auth_version = v2.0
                    admin_tenant_name = service
                    admin_user = nova
                    admin_password = openstack-compute
                    signing_dir = /var/cache/nova/api
                    hash_algorithms = md5
                    insecure = false
                    ========================================


                    Please check it.


                    On Wed, Jan 14, 2015 at 8:23 PM, Jay Pipes
                    <[email protected]> wrote:

                        Could you pastebin the output of:

                          keystone catalog

                        and also pastebin your nova.conf for the node
                        running the Nova API service?

                        Thanks!
                        -jay


                        On 01/14/2015 02:25 AM, Geo Varghese wrote:

                            Hi Team,

                            I need help with attaching a cinder volume to an instance.

                            I have successfully created a cinder volume and it
                            is in the available state. Please check the attached
                            screenshot.

                            Later I tried to attach the volume to an instance,
                            but the attachment failed. While checking the logs
                            in /var/log/nova/nova-api-os-compute.log:

                            ========
                            2015-01-13 19:14:46.563 1736 TRACE nova.api.openstack     res = method(self, ctx, volume_id, *args, **kwargs)
                            2015-01-13 19:14:46.563 1736 TRACE nova.api.openstack   File "/usr/lib/python2.7/dist-packages/nova/volume/cinder.py", line 206, in get
                            2015-01-13 19:14:46.563 1736 TRACE nova.api.openstack     item = cinderclient(context).volumes.get(volume_id)
                            2015-01-13 19:14:46.563 1736 TRACE nova.api.openstack   File "/usr/lib/python2.7/dist-packages/nova/volume/cinder.py", line 91, in cinderclient
                            2015-01-13 19:14:46.563 1736 TRACE nova.api.openstack     endpoint_type=endpoint_type)
                            2015-01-13 19:14:46.563 1736 TRACE nova.api.openstack   File "/usr/lib/python2.7/dist-packages/cinderclient/service_catalog.py", line 80, in url_for
                            2015-01-13 19:14:46.563 1736 TRACE nova.api.openstack     raise cinderclient.exceptions.EndpointNotFound()
                            2015-01-13 19:14:46.563 1736 TRACE nova.api.openstack EndpointNotFound
                            2015-01-13 19:14:46.563 1736 TRACE nova.api.openstack
                            =========================================


                            I have already created endpoints in v1 and v2.
                            Please check the endpoints I have created for
                            cinder below.

                            
                            ===========================================================================
                            root@controller:/home/geo# keystone endpoint-list | grep 8776
                            | 5c7bcc79daa74532ac9ca19949e0d872 | regionOne | http://controller:8776/v1/%(tenant_id)s | http://controller:8776/v1/%(tenant_id)s | http://controller:8776/v1/%(tenant_id)s | 8ce0898aa7c84fec9b011823d34b55cb |
                            | 5d71e0a1237c483990b84c36346602b4 | RegionOne | http://controller:8776/v2/%(tenant_id)s | http://controller:8776/v2/%(tenant_id)s | http://controller:8776/v2/%(tenant_id)s | 251eca5fdb6b4550a9f521c10fa9f2ca |
                            ===========================================================================
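
                            The mixed-case regions in output like this are easy
                            to miss by eye. Below is a quick sketch that flags
                            region names differing only by letter case in
                            "keystone endpoint-list"-style tables (a hypothetical
                            helper; it assumes the region is the second column):

```python
# Flag regions that differ only by letter case in a
# "keystone endpoint-list"-style table (region = 2nd column).
def mixed_case_regions(table_text):
    regions = set()
    for line in table_text.splitlines():
        cells = [c.strip() for c in line.split("|")]
        if len(cells) > 2 and cells[2]:
            regions.add(cells[2])
    # Group the region names case-insensitively; any group with more
    # than one spelling indicates an inconsistency.
    by_fold = {}
    for r in regions:
        by_fold.setdefault(r.lower(), set()).add(r)
    return {k: v for k, v in by_fold.items() if len(v) > 1}

sample = """
| 5c7bcc79daa74532ac9ca19949e0d872 | regionOne | http://controller:8776/v1/%(tenant_id)s |
| 5d71e0a1237c483990b84c36346602b4 | RegionOne | http://controller:8776/v2/%(tenant_id)s |
"""
print(sorted(mixed_case_regions(sample)["regionone"]))  # ['RegionOne', 'regionOne']
```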

                            Anyone please help me. Thanks for the
                            support guys.

                            --
                            Regards,
                            Geo Varghese






                    --
                    --
                    Regards,
                    Geo Varghese




            --
            --
            Regards,
            Geo Varghese




    --
    --
    Regards,
    Geo Varghese




--
--
Regards,
Geo Varghese

_______________________________________________
OpenStack-operators mailing list
[email protected]
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
