Re: [openstack-dev] [neutron][sfc][release] stable/ocata version

2017-03-05 Thread Ihar Hrachyshka
With https://review.openstack.org/#/c/437699/ in, stadium projects
will no longer have any option but to follow the common schedule.
That change is new for Pike onward, so we may still see some issues
with the Ocata release process.

Ihar

On Sun, Mar 5, 2017 at 8:03 PM, Jeffrey Zhang  wrote:
> Add [release] tag in subject.
>
> Not following the OpenStack release schedule causes lots of issues, like
> requirements changes, or patches being merged into stable that should not
> be.
>
> It may also break the release schedule of deployment projects (for
> example, Kolla will not be released until sfc cuts its ocata branch and
> tag).
>
> I hope the sfc team can follow the OpenStack release schedule; it may
> require some cherry-picking, but it is safer and is how OpenStack
> projects work.
>
>
> On Sun, Mar 5, 2017 at 3:44 PM, Gary Kotton  wrote:
>>
>> Please note that things are going to start to get messy now – there are
>> changes in neutron that break master and these will affect the cutting of
>> the stable version. One example is https://review.openstack.org/441654
>>
>>
>>
>> So I suggest cutting a stable ASAP and then cherrypicking patches
>>
>>
>>
>> From: Gary Kotton 
>> Reply-To: OpenStack List 
>> Date: Sunday, March 5, 2017 at 9:36 AM
>>
>>
>> To: OpenStack List 
>> Subject: Re: [openstack-dev] [neutron][sfc] stable/ocata version
>>
>>
>>
>> Thanks!
>>
>>
>>
>> From: Jeffrey Zhang 
>> Reply-To: OpenStack List 
>> Date: Sunday, March 5, 2017 at 9:12 AM
>> To: OpenStack List 
>> Subject: Re: [openstack-dev] [neutron][sfc] stable/ocata version
>>
>>
>>
>> This was discussed in [0]. The sfc team said
>>
>>
>>
>> > we will pull a stable/ocata branch around end of Feb or early March the
>> > latest.
>>
>>
>>
>> [0]
>> http://lists.openstack.org/pipermail/openstack-dev/2017-February/112580.html
>>
>>
>>
>> On Sun, Mar 5, 2017 at 3:01 PM, Gary Kotton  wrote:
>>
>> Hi,
>>
>> We are pretty blocked at the moment with our gating on stable/ocata. This
>> is due to the fact that there is no networking-sfc version tagged for ocata.
>>
>> Is there any ETA for this?
>>
>> Thanks
>>
>> Gary
>>
>>
>>
>>
>>
>>
>>
>>
>>
>> --
>>
>> Regards,
>>
>> Jeffrey Zhang
>>
>> Blog: http://xcodest.me
>>
>>
>>
>
>
>
> --
> Regards,
> Jeffrey Zhang
> Blog: http://xcodest.me
>
>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla][keystone] better way to rotate and distribution keystone fernet keys in container env

2017-03-05 Thread Jeffrey Zhang
fix subject typo

On Mon, Mar 6, 2017 at 12:28 PM, Jeffrey Zhang 
wrote:

> Kolla has support for keystone fernet keys, but there are still some
> topics worth discussing.
>
> The key issue is key distribution. Kolla's solution works like this:
>
> * A task runs frequently via cron job to check whether the keys
>   should be rotated. This is controlled by the
>   `fernet_token_expiry` variable.
> * When rotation is required, the cron job task generates a new key
>   using `keystone-manage fernet_rotate` and distributes all keys in
>   the /etc/keystone/fernet-keys folder to the other controllers
>   using `rsync --delete`.
>
> One issue: there is no global lock around the rotate and distribute
> steps. The above commands run on all controllers, which may cause
> problems if all controllers run them at the same time.
>
> Since we are using Ansible as the deployment tool, there is no daemon
> agent at all to keep rotation and distribution atomic. Is there an
> easier way to implement a global lock?
>
> Possible solutions:
> 1. Configure the cron job with a different time on each controller.
> 2. Implement a global lock (no idea how).
>
> [0] https://docs.openstack.org/admin-guide/identity-fernet-token-faq.html
>
> --
> Regards,
> Jeffrey Zhang
> Blog: http://xcodest.me
>
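A host-local advisory lock at least keeps overlapping cron runs on the same controller from rotating concurrently; the cross-controller race still needs staggered cron times or an external coordination service. A minimal sketch (the lock path is illustrative, and the rotation/rsync commands are left commented out — this is not Kolla's actual implementation):

```python
import fcntl
import os

LOCK_PATH = "/tmp/fernet-rotate.lock"  # illustrative path


def rotate_if_unlocked():
    """Rotate keys only if no other run holds the host-local lock.

    Only the locking pattern is shown; the real rotation and
    distribution commands stay commented out.
    """
    fd = os.open(LOCK_PATH, os.O_CREAT | os.O_RDWR)
    try:
        fcntl.flock(fd, fcntl.LOCK_EX | fcntl.LOCK_NB)
    except BlockingIOError:
        os.close(fd)
        return "skipped"  # another rotation is running on this host
    try:
        # subprocess.check_call(["keystone-manage", "fernet_rotate", ...])
        # subprocess.check_call(["rsync", "-a", "--delete", ...])
        return "rotated"
    finally:
        fcntl.flock(fd, fcntl.LOCK_UN)
        os.close(fd)
```

Note this only serializes runs within one host, which addresses the overlapping-cron case but not the all-controllers-at-once case raised above.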



-- 
Regards,
Jeffrey Zhang
Blog: http://xcodest.me
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo][diskimage-builder] Status of diskimage-builder

2017-03-05 Thread Ligong LG1 Duan
I agree that the idea of DIB becoming a component of Glance is a little crazy, 
and there is a big difference between creating images and storing them.
My initial thought was to create an ecosystem for images, where users can do 
anything related to images. Since Glance is a well-known project for storing 
images, it might be a good place to implement that.
I would prefer DIB to be an independent project if it cannot become part of 
Glance.

Regards,
Ligong Duan

-Original Message-
From: Ben Nemec [mailto:openst...@nemebean.com] 
Sent: Saturday, March 04, 2017 3:02 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [tripleo][diskimage-builder] Status of 
diskimage-builder



On 03/03/2017 03:25 AM, Ligong LG1 Duan wrote:
> I am wondering whether DIB can become a component of Glance, as DIB is used 
> to create OS images and Glance to upload OS images.

I see a big difference between creating images and storing them.  I can't 
imagine Glance would have any interest in owning dib, nor do I think they 
should.


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [kolla][keyston] better way to rotate and distribution keystone fernet keys in container env

2017-03-05 Thread Jeffrey Zhang
Kolla has support for keystone fernet keys, but there are still some
topics worth discussing.

The key issue is key distribution. Kolla's solution works like this:

* A task runs frequently via cron job to check whether the keys
  should be rotated. This is controlled by the
  `fernet_token_expiry` variable.
* When rotation is required, the cron job task generates a new key
  using `keystone-manage fernet_rotate` and distributes all keys in
  the /etc/keystone/fernet-keys folder to the other controllers
  using `rsync --delete`.

One issue: there is no global lock around the rotate and distribute
steps. The above commands run on all controllers, which may cause
problems if all controllers run them at the same time.

Since we are using Ansible as the deployment tool, there is no daemon
agent at all to keep rotation and distribution atomic. Is there an
easier way to implement a global lock?

Possible solutions:
1. Configure the cron job with a different time on each controller.
2. Implement a global lock (no idea how).

[0] https://docs.openstack.org/admin-guide/identity-fernet-token-faq.html
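Option 1 can be done without any shared state: derive a deterministic per-host offset so each controller's cron entry fires at a different minute. A hypothetical sketch (hostnames and the script path are made up for illustration):

```python
import hashlib


def cron_minute(hostname: str, interval: int = 60) -> int:
    """Map a hostname to a stable minute offset in [0, interval).

    Each controller hashes its own name, so cron entries spread out
    deterministically with no coordination service.
    """
    digest = hashlib.sha256(hostname.encode("utf-8")).hexdigest()
    return int(digest, 16) % interval


# e.g. emit one crontab line per controller
for host in ["control01", "control02", "control03"]:
    print(f"{cron_minute(host)} * * * * /usr/bin/fernet-rotate.sh  # {host}")
```

This staggers runs but does not make rotate-plus-rsync atomic, so a slow run could still overlap the next host's slot; a true global lock would need something like a database row or a coordination service.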

-- 
Regards,
Jeffrey Zhang
Blog: http://xcodest.me
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][sfc][release] stable/ocata version

2017-03-05 Thread Jeffrey Zhang
Add [release] tag in subject.

Not following the OpenStack release schedule causes lots of issues, like
requirements changes, or patches being merged into stable that should not
be.

It may also break the release schedule of deployment projects (for
example, Kolla will not be released until sfc cuts its ocata branch and
tag).

I hope the sfc team can follow the OpenStack release schedule; it may
require some cherry-picking, but it is safer and is how OpenStack
projects work.


On Sun, Mar 5, 2017 at 3:44 PM, Gary Kotton  wrote:

> Please note that things are going to start to get messy now – there are
> changes in neutron that break master and these will affect the cutting of
> the stable version. One example is https://review.openstack.org/441654
>
>
>
> So I suggest cutting a stable ASAP and then cherrypicking patches
>
>
>
> *From: *Gary Kotton 
> *Reply-To: *OpenStack List 
> *Date: *Sunday, March 5, 2017 at 9:36 AM
>
> *To: *OpenStack List 
> *Subject: *Re: [openstack-dev] [neutron][sfc] stable/ocata version
>
>
>
> Thanks!
>
>
>
> *From: *Jeffrey Zhang 
> *Reply-To: *OpenStack List 
> *Date: *Sunday, March 5, 2017 at 9:12 AM
> *To: *OpenStack List 
> *Subject: *Re: [openstack-dev] [neutron][sfc] stable/ocata version
>
>
>
> This was discussed in [0]. The sfc team said
>
>
>
> > we will pull a stable/ocata branch around end of Feb or early March the
> latest.
>
>
>
> [0] http://lists.openstack.org/pipermail/openstack-dev/2017-February/112580.html
>
>
>
> On Sun, Mar 5, 2017 at 3:01 PM, Gary Kotton  wrote:
>
> Hi,
>
> We are pretty blocked at the moment with our gating on stable/ocata. This
> is due to the fact that there is no networking-sfc version tagged for ocata.
>
> Is there any ETA for this?
>
> Thanks
>
> Gary
>
>
>
>
>
>
>
>
>
> --
>
> Regards,
>
> Jeffrey Zhang
>
> Blog: http://xcodest.me
> 
>
>
>


-- 
Regards,
Jeffrey Zhang
Blog: http://xcodest.me
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] Create backup with snapshots

2017-03-05 Thread 王玺源
Thanks, Xing.

I understand the historical reason now. So is it possible to divide the
create API into two APIs, one for creating a backup from a volume and
another from a snapshot? Then we could control the volumes' and snapshots'
statuses individually and easily.

When creating a backup from a large snapshot, such as one larger than 1 TB,
it generally takes a few hours. It's a real problem that the volume is
unavailable for such a long time.
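For reference, a hedged sketch of the two request shapes under discussion, with dicts standing in for the API request bodies (field names follow the backup-create API as described above; the UUID placeholders are illustrative):

```python
# Today's single API: volume_id is required, snapshot_id is the
# optional field Xing describes for backup-from-snapshot.
backup_from_snapshot = {
    "backup": {
        "name": "db-backup",
        "volume_id": "VOLUME_UUID",   # required today
        "snapshot_id": "SNAP_UUID",   # optional; triggers snapshot path
    }
}

# A hypothetical split API could drop volume_id from the snapshot
# path entirely, so only the snapshot's status needs to change.
backup_from_snapshot_only = {
    "backup": {
        "name": "db-backup",
        "snapshot_id": "SNAP_UUID",   # volume could stay 'available'
    }
}
```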

2017-03-03 22:43 GMT+08:00 yang, xing :

> In the original backup API design, volume_id is a required field.  In the
> CLI, volume_id is positional and required as well.  So when I added support
> to backup from a snapshot, I added snapshot_id as an optional field in the
> request body of the backup API.  While backup is in process, you cannot
> delete the volume.  Backup from snapshot and backup from volume are using
> the same API.  So I think volume status should be changed to “backing-up”
> to be consistent.  Now I’m thinking the status of the snapshot should be
> changed to “backing-up” too if snapshot_id is provided.
>
> Thanks,
> Xing
>
>
> 
> From: 王玺源 [wangxiyuan1...@gmail.com]
> Sent: Thursday, March 2, 2017 10:21 PM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: [openstack-dev] [cinder] Create backup with snapshots
>
> Hi cinder team:
> We ran into a problem with backup creation recently.
>
> A backup can be created from a volume or a snapshot. In both cases,
> the volume's status is set to 'backing-up'.
>
> But as far as I know, when users create a backup from a snapshot, the
> volume is not used (correct me if I'm wrong). So why is the volume's
> status changed? Should it stay available? It's a little strange that
> the volume is "backing-up" when actually only the snapshot is used for
> the backup creation. A volume in "backing-up" cannot be used for some
> other actions, such as attach, delete, export to image, extend, create
> volume from volume, and create backup from volume.
>
> So is there any reason we change the volume's status here? Or does any
> third-party driver require the volume's status to be "backing-up" when
> creating a backup from a snapshot?
>
> Thanks!
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][searchlight] When do instances get removed from Searchlight?

2017-03-05 Thread Zhenyu Zheng
Hi, Matt

AFAIK, searchlight does delete the record; it catches the instance.delete
notification and performs the action:
http://git.openstack.org/cgit/openstack/searchlight/tree/searchlight/elasticsearch/plugins/nova/notification_handler.py#n100
->
http://git.openstack.org/cgit/openstack/searchlight/tree/searchlight/elasticsearch/plugins/nova/notification_handler.py#n307

I will double-check with others on the SL team, and if that is the case, we
will try to find a way to solve this ASAP.

Thanks,

Kevin Zheng

On Mon, Mar 6, 2017 at 1:21 AM, Matt Riedemann  wrote:

> I've posted a spec [1] for nova's integration with searchlight for listing
> instances across multiple cells. One of the open questions I have on that is
> when/how do instances get removed from searchlight?
>
> When an instance gets deleted via the compute API today, it's not really
> deleted from the database. It's considered "soft" deleted and you can still
> list (soft) deleted instances from the database via the compute API if
> you're an admin.
>
> Nova will be sending instance.destroy notifications to searchlight but we
> don't really want the ES entry removed because we still have to support the
> compute API contract to list deleted instances. Granted, this is a pretty
> limp contract because there is no guarantee that you'll be able to list
> those deleted instances forever because once they get archived (moved to
> shadow tables in the nova database) or purged (hard delete), then they are
> gone from that API query path.
>
> So I'm wondering at what point instances stored in searchlight will be
> removed. Maybe there is already an answer to this and the searchlight team
> can just inform me. Otherwise we might need to think about data retention
> policies and how long a deleted instance will be stored in searchlight
> before it's removed. Again, I'm not sure if nova would control this or if
> it's something searchlight supports already.
>
> [1] https://review.openstack.org/#/c/441692/
>
> --
>
> Thanks,
>
> Matt Riedemann
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tricircle]reschedule the weekly meeting

2017-03-05 Thread joehuang
Hello,

According to the poll [1], the weekly meeting will be rescheduled to UTC 1400 ~ 
UTC 1500 every Wednesday. Once patch [2] is merged, we can start holding the 
weekly meeting at the new time slot.

[1]poll of weekly meeting time: http://doodle.com/poll/hz436r6wm99h4eka#table
[2]new weekly meeting time patch: https://review.openstack.org/#/c/441727/

Best Regards
Chaoyi Huang (joehuang)

From: joehuang
Sent: 01 March 2017 11:59
To: openstack-dev; victor.mora...@intel.com; sindhu.dev...@intel.com; 
prince.a.owusu.boat...@intel.com
Subject: RE: [openstack-dev][tricircle]reschedule the weekly meeting

Hello, team,

According to the discussion in IRC, we'd like to move the weekly meeting from 
UTC 1300 to UTC 1400. There are several options for this time slot; please vote 
before next Monday, and then I'll submit a patch to occupy the new time slot 
and release the old one.

Poll link: http://doodle.com/poll/hz436r6wm99h4eka

Note: until the new weekly meeting time is settled, we will keep holding the 
weekly meeting at the old time slot.

Best Regards
Chaoyi Huang (joehuang)

From: joehuang
Sent: 27 February 2017 21:48
To: openstack-dev; victor.mora...@intel.com; sindhu.dev...@intel.com; 
prince.a.owusu.boat...@intel.com
Subject: [openstack-dev][tricircle]reschedule the weekly meeting

Hello,

Currently the Tricircle weekly meeting is held at UTC 13:00~14:00 every 
Wednesday, which is not convenient for contributors from the USA.

The available time slots could be found at
https://docs.google.com/spreadsheets/d/1lQHKCQa4wQmnWpTMB3DLltY81kIChZumHLHzoMNF07c/edit#gid=0

Most other contributors are from East Asia, where the time zone is around 
UTC+8/UTC+9.

Please propose your preferred time slots, and then we'll hold a poll for the 
new weekly meeting time.

Best Regards
Chaoyi Huang (joehuang)
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [horizon] Weekly wrap-up

2017-03-05 Thread Rob Cresswell
Hey folks,

Great work since the PTG. In Pike so far we've already closed over 30 bugs (28 
tracked, with a few minor untracked fixes) and backported several fixes to our 
stable releases, which we'll tag this week. Several blueprints are well on the 
way too, but really need reviews. Please check out 
https://launchpad.net/horizon/+milestone/pike-1, and get some +/-1's on those 
patches!

I've been doing a lot of patch cleanup this week; if you believe something has 
been abandoned or blocked erroneously, please let me know via email or IRC 
(robcresswell).

Rob
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova][searchlight] When do instances get removed from Searchlight?

2017-03-05 Thread Matt Riedemann
I've posted a spec [1] for nova's integration with searchlight for 
listing instances across multiple cells. One of the open questions I have 
on that is when/how do instances get removed from searchlight?


When an instance gets deleted via the compute API today, it's not really 
deleted from the database. It's considered "soft" deleted and you can 
still list (soft) deleted instances from the database via the compute 
API if you're an admin.


Nova will be sending instance.destroy notifications to searchlight but 
we don't really want the ES entry removed because we still have to 
support the compute API contract to list deleted instances. Granted, 
this is a pretty limp contract because there is no guarantee that you'll 
be able to list those deleted instances forever because once they get 
archived (moved to shadow tables in the nova database) or purged (hard 
delete), then they are gone from that API query path.


So I'm wondering at what point instances stored in searchlight will be 
removed. Maybe there is already an answer to this and the searchlight 
team can just inform me. Otherwise we might need to think about data 
retention policies and how long a deleted instance will be stored in 
searchlight before it's removed. Again, I'm not sure if nova would 
control this or if it's something searchlight supports already.


[1] https://review.openstack.org/#/c/441692/
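One way to reconcile the instance.destroy notification with the API contract is for the handler to flag the document rather than remove it, leaving a separate retention job to purge later. A toy sketch with a plain dict standing in for the ES index (the `deleted` field and handler name are made up for illustration, not searchlight's actual schema):

```python
index = {}  # stand-in for the Elasticsearch index: uuid -> document


def handle_instance_destroy(uuid, hard=False):
    """Process an instance.destroy notification.

    hard=False mirrors nova's soft delete: keep the document but mark
    it, so admins can still list deleted instances. hard=True mirrors
    archive/purge, where the record disappears for good.
    """
    if hard:
        index.pop(uuid, None)
    elif uuid in index:
        index[uuid]["deleted"] = True


index["abc"] = {"name": "vm1", "deleted": False}
handle_instance_destroy("abc")              # soft delete: still listable
handle_instance_destroy("abc", hard=True)   # purge: gone for good
```

Whether the hard-delete step is driven by nova (e.g. on archive/purge notifications, if any existed) or by a searchlight-side retention policy is exactly the open question above.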

--

Thanks,

Matt Riedemann

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack] Newton: not able to login via public key

2017-03-05 Thread Amit Uniyal
Hi Kevin,


Thanks for the response.

Can you tell me which service or configuration file is responsible for
adding metadata to an instance, such as injecting keys into a new instance?


Thanks and Regards
Amit Uniyal

On Sun, Mar 5, 2017 at 8:18 PM, Kevin Benton  wrote:

> The metadata agent in Neutron is just a proxy that relays metadata
> requests to Nova after adding in HTTP headers that identify the
> instance.
>
> On Sun, Mar 5, 2017 at 5:44 AM, Amit Uniyal  wrote:
> > Hi all,
> >
> > I have reconfigured everything and it is working fine, but I am not sure
> > what went wrong last time.
> >
> > Can anyone explain how this works? The metadata agent is a neutron
> > service; is it responsible for adding the key inside a new instance? It
> > seems like it should be a job of the nova service.
> >
> >
> > Thanks and Regards
> > Amit Uniyal
> >
> > On Wed, Mar 1, 2017 at 11:03 PM, Amit Uniyal 
> wrote:
> >>
> >> Hi all,
> >>
> >> I have installed Newton OpenStack and am not able to log into machines
> >> via their private keys.
> >>
> >> I followed this guide
> >> https://docs.openstack.org/newton/install-guide-ubuntu/
> >>
> >> Configure the metadata agent
> >>
> >> The metadata agent provides configuration information such as
> credentials
> >> to instances.
> >>
> >> Edit the /etc/neutron/metadata_agent.ini file and complete the
> following
> >> actions:
> >>
> >> In the [DEFAULT] section, configure the metadata host and shared secret:
> >>
> >> [DEFAULT]
> >> ...
> >> nova_metadata_ip = controller
> >> metadata_proxy_shared_secret = METADATA_SECRET
> >>
> >> Replace METADATA_SECRET with a suitable secret for the metadata proxy.
> >>
> >>
> >>
> >>
> >> I think the region name should also be included here. I tried
> >>
> >> RegionName = RegionOne
> >>
> >> and then even restarted the whole controller node (as it doesn't work
> >> by only restarting the neutron metadata agent service).
> >>
> >>
> >> Another thing: when checking neutron agent-list status, I am not getting
> >> any availability zone for the metadata agent. Is that fine?
> >>
> >>
> >> Regards
> >> Amit
> >>
> >>
> >>
> >>
> >>
> >
> >
> >
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack] Newton: not able to login via public key

2017-03-05 Thread Kevin Benton
The metadata agent in Neutron is just a proxy that relays metadata
requests to Nova after adding in HTTP headers that identify the
instance.
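Concretely, the proxy identifies the instance to nova by adding an HMAC of the instance ID computed with metadata_proxy_shared_secret, which nova verifies with its own copy of the secret. A sketch of the signature step (my recollection of the convention — check the neutron source for the exact header names):

```python
import hashlib
import hmac


def sign_instance_id(shared_secret: str, instance_id: str) -> str:
    """HMAC-SHA256 of the instance ID, as the metadata proxy computes it.

    Nova recomputes this value with the same shared secret and rejects
    the request if it differs, so spoofing the instance-ID header alone
    is not enough.
    """
    return hmac.new(shared_secret.encode("utf-8"),
                    instance_id.encode("utf-8"),
                    hashlib.sha256).hexdigest()


# e.g. the header pair the proxy would attach (values illustrative)
sig = sign_instance_id("METADATA_SECRET", "INSTANCE_UUID")
```

This is also why the shared secret in metadata_agent.ini must match nova's configuration exactly — a mismatch makes every metadata request fail verification.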

On Sun, Mar 5, 2017 at 5:44 AM, Amit Uniyal  wrote:
> Hi all,
>
> I have reconfigured everything and it is working fine, but I am not sure
> what went wrong last time.
>
> Can anyone explain how this works? The metadata agent is a neutron service;
> is it responsible for adding the key inside a new instance? It seems like it
> should be a job of the nova service.
>
>
> Thanks and Regards
> Amit Uniyal
>
> On Wed, Mar 1, 2017 at 11:03 PM, Amit Uniyal  wrote:
>>
>> Hi all,
>>
>> I have installed Newton OpenStack and am not able to log into machines via
>> their private keys.
>>
>> I followed this guide
>> https://docs.openstack.org/newton/install-guide-ubuntu/
>>
>> Configure the metadata agent
>>
>> The metadata agent provides configuration information such as credentials
>> to instances.
>>
>> Edit the /etc/neutron/metadata_agent.ini file and complete the following
>> actions:
>>
>> In the [DEFAULT] section, configure the metadata host and shared secret:
>>
>> [DEFAULT]
>> ...
>> nova_metadata_ip = controller
>> metadata_proxy_shared_secret = METADATA_SECRET
>>
>> Replace METADATA_SECRET with a suitable secret for the metadata proxy.
>>
>>
>>
>>
>> I think the region name should also be included here. I tried
>>
>> RegionName = RegionOne
>>
>> and then even restarted the whole controller node (as it doesn't work by
>> only restarting the neutron metadata agent service).
>>
>>
>> Another thing: when checking neutron agent-list status, I am not getting
>> any availability zone for the metadata agent. Is that fine?
>>
>>
>> Regards
>> Amit
>>
>>
>>
>>
>>
>
>
>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Newton: not able to login via public key

2017-03-05 Thread Amit Uniyal
Hi all,

I have reconfigured everything and it is working fine, but I am not sure
what went wrong last time.

Can anyone explain how this works? The metadata agent is a *neutron* service;
is it responsible for adding the key inside a new instance? It seems like it
should be a job of the *nova* service.


Thanks and Regards
Amit Uniyal

On Wed, Mar 1, 2017 at 11:03 PM, Amit Uniyal  wrote:

> Hi all,
>
> I have installed Newton OpenStack and am not able to log into machines via
> their private keys.
>
> I followed this guide: https://docs.openstack.org/newton/install-guide-ubuntu/
>
> Configure the metadata agent
>
> The metadata agent provides configuration information such as credentials
> to instances.
>
> Edit the /etc/neutron/metadata_agent.ini file and complete the following
> actions:
>
> In the [DEFAULT] section, configure the metadata host and shared secret:
>
>   [DEFAULT]
>   ...
>   nova_metadata_ip = controller
>   metadata_proxy_shared_secret = METADATA_SECRET
>
> Replace METADATA_SECRET with a suitable secret for the metadata proxy.
>
> I think the region name should also be included here. I tried
>
> RegionName = RegionOne
>
> and then even restarted the whole controller node (as it doesn't work by
> only restarting the neutron metadata agent service).
>
> Another thing: when checking neutron agent-list status, I am not getting
> any availability zone for the metadata agent. Is that fine?
>
>
> Regards
> Amit
>
>
>
>
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev